Policy on the Use of Artificial Intelligence (AI)

Post date: 2026/02/10
The NFHD recognizes the emerging role of Artificial Intelligence (AI) tools in scientific research and communication. However, the use of these tools, particularly generative AI, introduces significant concerns regarding accountability, transparency, bias, and copyright. This policy outlines the acceptable use of AI in manuscripts submitted to our journal and in the peer review process.
AI and Authorship
Large Language Models (LLMs) and other generative AI tools cannot satisfy the criteria for authorship as defined by the ICMJE.
- Accountability: Authorship entails responsibility for the intellectual content of a work and accountability for its integrity. An AI tool cannot be held accountable in this manner.
- Documentation: The use of any LLM (e.g., ChatGPT, Claude, Gemini) or similar generative AI tool in the preparation, analysis, or writing of the manuscript must be explicitly declared and described in the Methods section (or in a dedicated statement if a Methods section is not standard for the article type).
- Declaration must include: The name and version of the AI tool, the date(s) of use, and a clear description of how it was used (e.g., "for initial drafting of the introduction," "for language polishing," "for generating R code for statistical analysis").
Exception: AI-Assisted Copy Editing
The use of AI tools solely for copy-editing purposes does not need to be declared. This is defined as:
- Correcting grammar, spelling, and punctuation, and adjusting tone.
- Improving readability and formatting.
- Ensuring consistency in citation style.
Crucially, this exception does not extend to generative writing, data interpretation, or the creation of new intellectual content. The final text must always represent the authors' own intellectual work, and authors remain solely responsible for any errors or omissions introduced by an AI tool.
Generative AI in Image and Figure Creation
The use of generative AI tools (e.g., DALL-E, Midjourney, Stable Diffusion) to create, enhance, or alter images, figures, or graphical abstracts is strictly prohibited.
This prohibition is due to unresolved and serious concerns regarding:
- Copyright and Intellectual Property: The legal status of AI-generated images is unclear, and they may infringe upon the copyright of the data on which the model was trained.
- Scientific Integrity: AI-generated images can introduce non-existent or altered features, fabricate data, and create a misleading representation of scientific findings. This undermines the foundational principle of data integrity in scientific publishing.
Permitted Uses and Exceptions:
- Non-Generative AI Tools: The use of standard, non-generative AI/machine learning tools for image analysis (e.g., cell counting, particle analysis, image segmentation) is permitted and should be described in the Methods section.
- Images as the Object of Study: If the manuscript is specifically about AI and includes AI-generated images for analysis or critique, this will be considered on a case-by-case basis. Such images must be clearly labeled as "AI-generated" within the figure legend.
- Legally Sourced AI Art: In rare cases for graphical abstracts or schematics, if an AI tool is used that is explicitly trained on copyright-free or licensed data and provides verifiable attribution, it may be considered. Prior written permission from the Editorial Office is required, and the image must be clearly labeled as "Created with [AI Tool Name]" in the caption.
Use of AI by Peer Reviewers
Peer reviewers are strictly prohibited from uploading any part of a manuscript, including the abstract, into generative AI tools.
This is mandated to protect:
- Confidentiality: Manuscripts are confidential scholarly work. Uploading them to a third-party AI platform constitutes a breach of trust and confidentiality.
- Data Security: AI services may use submitted data to train their models, potentially making unpublished research publicly accessible.
- Reviewer Accountability: The peer review process relies on the expert judgment of the reviewer. Using an AI to generate a review report is unacceptable.
Declaration of AI Use in Evaluation:
If a reviewer uses an AI tool in a minimal way to support their evaluation—for instance, to check a statistical reference or to improve the clarity of their own review report—this use must be transparently declared to the editor in the confidential comments section. The core intellectual judgment of the review must remain the reviewer's own.