Policy on the ethical use of Artificial Intelligence (AI) tools

Región Científica aligns itself with the initiatives, declarations, and guidelines promoted by the international and regional scholarly publishing community to guide the responsible, transparent, and ethically supervised use of artificial intelligence tools in scholarly communication. In particular, the journal adopts as key references COPE (Authorship and Artificial Intelligence Tools), WAME (Chatbots, Generative Artificial Intelligence, and Scholarly Manuscripts), ICMJE (Artificial Intelligence–enabled Technologies), and the Heredia Declaration of GEDIA (Principles on the Use of Artificial Intelligence in Scholarly Publishing), acknowledging the potential of these technologies to support research and editorial processes without replacing human responsibility. Within this framework, the journal reaffirms its commitment to academic integrity, information traceability, the protection of confidentiality during peer review, and the quality of the knowledge it publishes.

Policy on the ethical use of Artificial Intelligence (AI) tools in Región Científica

1) Purpose

This policy establishes clear, proportionate, and operational rules for the use of Artificial Intelligence (AI) tools in order to protect scientific integrity, transparency, editorial confidentiality, and the reproducibility of results.

2) Operational definitions

2.1. Generative AI (GenAI): Systems that generate new content (text, code, images, audio, etc.) from instructions/inputs, typically using language or multimodal models (e.g., chatbots or image generators). This type of AI can introduce inaccuracies, biases, non-existent references, or unattributable content, as well as confidentiality and intellectual property risks.

2.2. Non-generative AI (Non-GenAI) or assistive AI: Tools that do not produce substantially new content but instead support tasks such as detection (plagiarism/similarity, anomalies), correction (spelling/grammar), classification, structured extraction, tagging/metadata, or analytics on data provided by the authors.

2.3. Hybrid tools: Many solutions combine generative and non-generative functions. In Región Científica, the classification is based on actual use:

  • If the author uses the tool to rewrite, paraphrase, summarize, draft, or generate text/code/figures → it is considered GenAI use and requires disclosure (see section 7).
  • If used only for spelling/grammar/punctuation without generating substantive content → no disclosure required.

3) Guiding Principles

  1. Human Responsibility: The responsibility for the manuscript and its results rests with the authors. AI does not replace critical thinking.
  2. Proportional Transparency: Relevant use of AI is disclosed without creating an undue bureaucratic burden.
  3. Confidentiality and Intellectual Property: Unpublished manuscripts and sensitive data should not be exposed on third-party platforms without appropriate safeguards.
  4. Verifiability: All AI-assisted output must be verifiable (data, method, traceability).

4) Authorship rule (non-negotiable)

AI tools cannot be listed as authors/co-authors or assume authorship responsibilities.

5) Permitted and Prohibited Uses (by Role)

5.1 Authors

Permitted (with human supervision):

  • Style improvement, clarity enhancement, translation, or advanced proofreading when the author reviews and validates the final content.
  • Support in manuscript organization, brainstorming, or programming assistance, provided the author reviews, tests, and validates the output.
  • Use of AI in the research methodology (e.g., analytics/ML) if described in a reproducible manner in the Methods section (tool/model, version, relevant parameters, data, validation).
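
  As an illustration, a reproducible Methods description of this kind could read as follows (the tool name, version, and validation details are placeholders, not requirements): "Topic classification was performed with [tool name], version [X.Y], using the parameters reported in Table [n]; the output was manually validated by two authors on a 10% random sample of the records."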

Not allowed / grounds for editorial investigation:

  • Using GenAI to replace core responsibilities (substantive writing without verifiable human contribution; generation of unsupported conclusions; creation of unverified references).
  • Presenting invented or "filler" content as "empirical findings" (includes nonexistent citations or synthetic data without a robust methodology).
  • Deliberately omitting the declaration of GenAI use (lack of transparency will be considered an ethical violation).

5.2 Reviewers

  • It is prohibited to upload unpublished manuscripts, tables, figures, or data from the review process, in whole or in part, to third-party GenAI tools due to confidentiality/IP risks.
  • AI may only be used to improve the wording of the review (spelling/style), with the reviewer retaining full responsibility for the content of their evaluation.
  • If the reviewer used AI for language improvement, they must state this in their review (e.g., "AI was used solely for language improvement").

5.3 Editors and Editorial Team

  • The journal may use AI tools (preferably non-generative) for editorial tasks such as: metadata support, similarity detection, linguistic quality control, or administrative support, without replacing human editorial judgment.
  • It is prohibited to upload unpublished manuscripts to third-party GenAI tools when this could disclose content or identity.
  • The use of AI by the editorial team must be recorded in the editorial file (internal log).

5.4 Communication and dissemination (web/social media)

  • The use of AI for summaries or communication pieces is permitted only if:
    1. the final text is reviewed by a human editor, and
    2. no claims not present in the original article are added (to avoid misinformation).

6) Figures, Graphs, Images, and Data Visualization

Instead of a complete ban, Región Científica adopts an integrity + traceability approach:

6.1 Permitted (with requirements)

The use of AI (including GenAI) to create visualizations (graphs, diagrams, schematics) is permitted provided that:

  1. the figure is derived directly from data/results provided by the authors (dataset, tables, code, or statistical output);
  2. the author can verify the "figure ↔ data" correspondence;
  3. a statement of provenance is included (see section 7) and, if requested by the editor, the minimum verifiable input (dataset/code/source table) is provided.

This approach follows the principle of requiring reproducible descriptions whenever AI is part of the method, applied here to the production of data-based figures.
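
As a minimal sketch of what the "figure ↔ data" correspondence can look like in practice, a data-based figure should be reproducible directly from the material the authors can supply on request; the file names, column names, and libraries below are illustrative assumptions, not journal requirements:

  # Illustrative only: regenerates a bar chart directly from the authors' dataset,
  # so the "figure <-> data" correspondence can be verified if the editor requests it.
  import platform

  import matplotlib
  import matplotlib.pyplot as plt
  import pandas as pd

  data = pd.read_csv("results.csv")          # minimum verifiable input (authors' source table)
  fig, ax = plt.subplots()
  ax.bar(data["group"], data["mean_score"])  # drawn only from the provided values
  ax.set_xlabel("Group")
  ax.set_ylabel("Mean score")
  fig.savefig("figure1.png", dpi=300)

  # Record the environment so the figure can be regenerated later.
  print("Python", platform.python_version(),
        "| pandas", pd.__version__,
        "| matplotlib", matplotlib.__version__)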

6.2 Not allowed

  • Using AI to alter evidence or introduce/remove features in images that could change their interpretation (especially clinical images, micrographs, gels, field photographs, etc.). These manipulations are unacceptable.
  • Generating "realistic" images that appear to be empirical data (e.g., photographs) without clearly indicating that they are illustrations.
  • Producing "synthetic data" to replace missing data without a robust and declared method.

6.3 Acceptable Adjustments

Adjustments to brightness/contrast/color are acceptable only if they do not obscure or remove information from the original.

7) AI Use Statement (minimal, clear, and non-bureaucratic)

When is it mandatory?
When GenAI or advanced AI is used beyond basic proofreading; in particular, when AI is involved in methods, analysis, substantive writing, advanced translation, or the production of data-driven figures.

Where is it placed?

  • Articles: in the Methods section (if it affected research/analysis/figures) and/or in a section after the acknowledgments (if it was used for writing/translation support).

Minimum content (mandatory):

  1. Tool(s) used and type (GenAI / Non-GenAI / Hybrid).
  2. System/model name (if applicable) and version (if known).
  3. Purpose (e.g., translation, clarity improvement, code assistance, visualization generation).
  4. Scope (which part of the manuscript or process was assisted).
  5. Human oversight measures (data validation, reference verification, final review).
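
An illustrative declaration covering these points could read (the tool name and version are placeholders, not an endorsement of any particular system): "During the preparation of this manuscript, the authors used [tool name, version] (GenAI) to improve the clarity and English translation of the Introduction and Discussion sections. All output was reviewed and edited by the authors, all references were verified against their original sources, and the authors take full responsibility for the content of the publication."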

Extended content (only if requested by the editor):

  • Prompts, session logs, intermediate files, dataset/source code for figures, etc. (this avoids bureaucracy while maintaining auditability). A very detailed statement (including, for example, prompt, date, and model) may serve as a reference in these cases, but is not a general requirement.

8) Detection and Management of Undisclosed Use

Automatic AI detection is not infallible. Therefore, Región Científica combines automated tools with qualified editorial review (a model based on editor training and the prevention of disinformation).

8.1 Warning Signs (not conclusive on their own)

  • Similarity/plagiarism indicators and/or automatic signals of AI-assisted writing (e.g., reports from the anti-plagiarism system).
  • Non-existent references, methodological inconsistencies, inconsistent style, unsupported claims.
  • Visualizations with numerical inconsistencies or lack of traceability to the data.

8.2 Procedure

  1. Internal marking by the editor (record in the file).
  2. Preliminary human verification (review of the text/figures/references).
  3. Request to the author (suggested timeframe: 5–10 business days) for:
    • Declaration of AI use (if missing) and necessary corrections;
    • Minimum evidence where applicable (e.g., source table or dataset/code for a figure).
  4. Proportional editorial decision:
    • Continue with the process (if the clarification is satisfactory),
    • Request corrections,
    • Reject (if there is deliberate concealment, serious lack of integrity, or repeated non-compliance).
  5. If the case involves plagiarism, the journal's current anti-plagiarism policy will be applied.
  6. If the case involves an already published article and results in a correction, retraction, or expression of concern, it will be processed according to the journal's specific post-publication policy.

9) Commitment to Continuous Improvement

This policy will be reviewed and, when necessary, updated to maintain its relevance and effectiveness, in accordance with:

  1. Available evidence: new findings and consensus on risks and best practices in the use of AI (e.g., accuracy, biases, traceability, confidentiality, data integrity, generation of unverifiable references or content).
  2. Editorial and scientific integrity standards: updates to recommendations, guidelines, and criteria applicable to scientific journals (e.g., guidance from recognized committees and organizations, indexing guidelines, and international editorial practices).
  3. The journal's technical and operational capabilities: changes in the tools and controls available to authors and the editorial team (e.g., detection/verification systems, OJS workflows, preservation of minimum evidence, or resources for auditing), always ensuring that the measures are proportionate and do not create unnecessary bureaucratic burdens.

The journal will conduct these reviews periodically or when relevant changes are identified and will publish the current version with its update date.