Ethics Policy on the Use of Artificial Intelligence (AI)

Purpose

This policy establishes ethical and operational guidelines for using Artificial Intelligence (AI) tools in the preparation, review, and editing of manuscripts submitted to Lilloa. The objective is to ensure transparency, scientific integrity, and adherence to the highest editorial standards at all stages of the publication process, while also leveraging the capabilities offered by these systems.

General principles

- The use of AI must not result in data manipulation, plagiarism, falsification of results, or violations of intellectual property rights.
- Any misuse, whether intentional or accidental, will be addressed according to the guidelines of the Committee on Publication Ethics (COPE) and may result in editorial sanctions.
- The responsibility for the content rests solely with the authors and reviewers. Authorship and reviewership are strictly reserved for human researchers.
- AI must be considered as an auxiliary tool, never a substitute for scientific responsibility or human critical judgement.
- For further information on ethical recommendations, please refer to the UNESCO document Guidance for Generative AI in Education and Research.

For authors

Declaration of the use of generative Artificial Intelligence

Authors are required to explicitly declare the use of any generative AI tools and Large Language Models (LLMs) in the preparation of their manuscript. This declaration must be included in a section titled “Declaration of AI Use”, positioned at the end of the manuscript immediately before the references. This section must state the name and version of each AI tool used and briefly describe its application in the manuscript.

It is not necessary to declare the use of an LLM or other AI tools for “AI-assisted text editing”. We define “AI-assisted text editing” as AI-assisted enhancement of human-generated text to improve readability and style, ensuring the text is free of grammatical, spelling, punctuation, and tone errors. Such enhancements may include changes to phrasing and document formatting, but do not include generative editorial work or the autonomous creation of content. In all cases, human responsibility for the final version of the text must be maintained, and the authors must confirm that the edits accurately reflect their original intellectual contribution.

Citations

AI tools employed must be appropriately cited according to the journal’s citation style. Example: OpenAI. (2025). ChatGPT GPT-4o [Large Language Model]. https://chat.openai.com

Usage limitations

- The use of AI does not absolve authors of their responsibility for the content of the manuscript.
- Generative AI must not be used for research data fabrication or manipulation.
- Authors must carefully review and edit any AI-generated text to ensure its accuracy and relevance, especially given the risk of generating meaningless, biased, or false information.

Regarding the use of AI-generated images

The generation of images using AI tools has introduced new legal issues concerning integrity in research. As publishers, we strictly adhere to the current copyright legislation and best practices in publishing ethics.

Exceptions (such images will be reviewed accordingly)

- Images from agencies with valid licenses.
- AI-generated images based on scientific datasets that can be attributed, checked, and verified to ensure their accuracy are acceptable, provided that ethical, copyright, and terms-of-use restrictions are respected.
Any AI-generated image that falls under an exception must be clearly identified as such in the corresponding figure caption.

It is important to note that not all AI tools are generative. The use of non-generative machine learning tools for the manipulation, combination, or enhancement of existing images or figures must be disclosed in the corresponding caption upon submission for individualized review.

For reviewers

The use of AI tools cannot, and must not, replace the reviewer’s responsibilities in the process of scientific evaluation. The peer review process must be conducted by individuals with expertise in the field, who are responsible for the opinions expressed. Peer reviewers play a vital role in scientific publishing. Their expert assessments and recommendations guide editors in their decisions and ensure that published research is valid, rigorous, and credible.

Despite rapid technological advancements, generative AI tools have significant limitations: they may be unable to access up-to-date knowledge and can produce meaningless, biased, or false information.

Given that manuscripts may contain sensitive or unpublished data, their content must not be uploaded to open or insecure generative AI platforms.

Evaluation guidelines

The evaluation process must reflect sound human judgement: AI may only assist in drafting or analyzing the review report produced by the reviewer. The peer review process must be driven by humans, with AI playing only a supporting role. If any part of the manuscript evaluation is supported by AI tools, peer reviewers are required to transparently declare their use in the peer review report.

Editorial Team’s Use of AI

The editorial team may use AI tools exclusively for administrative and editorial tasks, such as manuscript formatting or plagiarism detection. However, all AI-assisted processes must be carefully overseen by humans to ensure editorial quality and consistency.

Consequences of non-compliance

The misuse or undeclared use of AI tools may result in:
- Manuscript rejection
- Formal retraction of published articles
- Temporary or permanent ban from publishing in the journal

Review and update: This policy will be reviewed and updated periodically by the Lilloa editorial board to keep pace with ongoing technological advancements and with evolving regulatory frameworks and ethical standards concerning the use of AI in scientific research.