Statement on the Use of Generative Artificial Intelligence (AI)
Purpose and Scope
This policy sets out expectations for the ethical, transparent, and responsible use of generative artificial intelligence (AI) and AI-assisted technologies at all stages of the publication process, including manuscript preparation, peer review, and editorial decision-making. It is consistent with internationally accepted publishing-ethics principles, informed by guidance from COPE (Committee on Publication Ethics) and ICMJE (International Committee of Medical Journal Editors) (see https://www.icck.org/policies/ethical-publishing).
Definitions
For the purposes of this policy, generative AI tools include any automated software or systems capable of producing content (text, images, data, or code) in response to user inputs. Examples include large language models and text-generation systems such as ChatGPT, Claude, Gemini, and similar tools.
Authorship and Accountability
AI tools cannot be listed as authors or co-authors because they cannot meet authorship criteria such as accountability, responsibility for content, and disclosure of conflicts of interest. All listed authors must be human and must take full responsibility for the submitted work, including any parts created or influenced by AI tools (see https://www.icck.org/policies/ethical-publishing).
Authorship criteria that only human authors can satisfy (per ICMJE):
- Substantial contributions to research design, data interpretation, or drafting
- Final approval of the submitted version
- Accountability for all aspects of the work
Acceptable Uses of AI Tools
Authors may use generative AI tools for language refinement, grammar checking, and formatting assistance, provided that:
- The content remains the author’s original intellectual work
- Outputs are critically reviewed and edited by the authors
- Use is fully disclosed as described below
Many journals clarify that tools used only for grammar and spell-checking or format standardization (e.g., Microsoft Word Editor or basic reference managers) usually do not require disclosure, whereas use of advanced generative tools does (see https://mail.ijain.org/index.php/IJAIN/about/editorialPolicies).
Mandatory Disclosure of AI Use
When generative AI tools are used in preparing a manuscript (e.g., summarization, translation, content refinement), authors must include a clear disclosure statement in the submitted manuscript. The disclosure should:
- Name the AI tool used (e.g., ChatGPT version 4.1)
- State the purpose of its use (e.g., language editing, summarization support)
- Clarify that authors retain responsibility for all content
- Indicate that human authors critically reviewed and edited the output
Placement options for disclosure:
- A dedicated section titled “Declaration of Generative AI Use” before the References
- Within the Acknowledgements
- Within the Methods section, for methods-related AI assistance (see https://mail.ijain.org/index.php/IJAIN/about/editorialPolicies)
Example disclosure:
“Generative AI tool [Tool Name, Version] was used for [language editing / clarity improvement]. All conceptual content, analysis, and final text decisions are the sole responsibility of the authors.”
Use of AI in Data and Methodological Contexts
If AI tools are used as part of data analysis, simulation, modeling, or other research steps (not just language generation), authors must describe this use explicitly in the Methods section of the manuscript, including:
- Tool name and version
- Purpose and extent of use
- Parameters or settings used
This ensures reproducibility and transparency for reviewers and readers alike.
AI and Figures/Visual Content
Generative AI should not be used to create or alter scientific images, figures, or artwork unless the AI application is part of the research methodology itself (e.g., AI-assisted image analysis in biomedical imaging). In such cases, authors must provide:
- Full machine settings and tool information
- Clear labeling of AI-assisted visuals
- Unadjusted or raw source files upon request
Otherwise, AI-generated visuals are not permitted (see https://www.icck.org/policies/ethical-publishing).
Reviewers and Editors
Peer reviewers must not upload unpublished manuscripts or reviewer comments into third-party AI systems that might compromise confidentiality. AI may support minor language refinement, but the reviewer retains full accountability for the review content. Editors may use AI for routine administrative tasks (e.g., formatting checks), but critical editorial decisions must remain human-driven.
Unacceptable Uses of AI
The following uses are prohibited:
- Using AI tools to generate original conceptual content, theoretical framing, or scientific conclusions without author oversight
- Using AI to produce fabricated data, misleading interpretations, or falsified results
- Listing AI tools as authors or attributing intellectual agency to non-human tools
Undisclosed or inappropriate use of AI tools may be treated as ethical misconduct and may lead to rejection, correction, or retraction (see https://www.elsevier.com/products/scopus/content/content-policy-and-selection).
Plagiarism and Hallucinations
Authors must ensure that AI-generated text does not introduce unattributed material, fabricated references, or “hallucinated” content. Manuscripts may be subjected to plagiarism checks, and any violations will be adjudicated per COPE’s ethical procedures.
Policy Review and Updates
This policy will be reviewed and updated periodically to align with evolving best practices in scholarly publishing. Clear version history and effective dates will be maintained on the journal website.
Relevant External Policies and Guidelines
- COPE — Committee on Publication Ethics: Principles of Transparency and Best Practice in Scholarly Publishing https://publicationethics.org/principles-transparency
- ICMJE Recommendations: Authorship and Disclosure Requirements http://www.icmje.org/recommendations/
- Elsevier Generative AI Policies for Journals: Author and Publisher Guidance https://www.elsevier.com/en-au/about/policies-and-standards/generative-ai-policies-for-journals
- Elsevier Policy on Use of Generative AI and AI-assisted Technologies: Ethics and Disclosure Guidance https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier
- ICCK Ethical Publishing including Generative AI: Example Policy Framework https://www.icck.org/policies/ethical-publishing
