POLICY ON THE USE OF ARTIFICIAL INTELLIGENCE (AI) IN THE WRITING OF SCIENTIFIC DOCUMENTS
Revised in February 2025.
Introduction
This Policy establishes guidelines for the use of Artificial Intelligence (AI) tools in the production of scientific documents, aiming to ensure academic integrity, transparency, and author accountability. It is aligned with the recommendations presented in "Guidelines for the Ethical and Responsible Use of Generative Artificial Intelligence: A Practical Guide for Researchers" (Sampaio; Sabbatini; Limongi, 2024) and takes into account the specifics of scientific document production, including articles, theses, dissertations, and reports.
Definitions
AI Tools: Automated systems that assist in writing, reviewing, translating, data analysis, content generation, or other tasks related to the production of scientific documents. These include, but are not limited to, large language models (LLMs), such as ChatGPT or DeepSeek, and specialized tools, such as image generators or statistical analysis software.
Generative AI: AI systems capable of generating new content, such as text, images, or code.
Authorship: Significant intellectual contribution to the conception, execution, analysis, and writing of the work. AI tools cannot be listed as co-authors; responsibility for the content of the document rests solely with the human authors.
General Principles
Complementarity, not substitution: The use of AI tools should complement, not replace, the intellectual contribution of the authors. Human intellectual contribution must be predominant (at least 70% of the content).
Author responsibility: Authors are fully responsible for the accuracy, originality, and integrity of the work. The AI tool is an auxiliary instrument, not the primary source of ideas or results.
Transparency and disclosure: Authors must explicitly declare, in the AI Use Declaration, the use of AI systems and/or tools, specifying:
- the name and version of the tool or system used;
- the specific purpose of the use (e.g., summary generation, grammar review, text structure assistance);
- the extent of use, including the estimated percentage of content generated or modified by the AI.
A detailed record of AI use must be kept and submitted as an annex to the AI Use Declaration signed by the author(s).
Scientific Integrity: Authors must rigorously verify the accuracy and truthfulness of any AI-generated content. The inherent limitations and biases of AI systems and tools must be explicitly considered.
Training and Competence
Institutions must provide mandatory training on the ethical and responsible use of AI in the production of scientific documents. Authors must demonstrate competence in the critical use of the AI tools employed.
Privacy and Data Security
All data used in AI systems and/or tools must be handled in accordance with current data protection regulations (e.g., the LGPD in Brazil and the GDPR in Europe). Sensitive or confidential information must not be entered into public or insecure AI systems.
Copyright
Authors must ensure that the use of AI does not violate the copyrights of third parties. Content substantially generated by AI must be clearly identified, and its originality verified.
Review and Appeal Process
Reviewers must receive specific guidance to evaluate documents that have used AI tools, considering the principles of transparency and integrity established here. The evaluation must take into account the level of AI contribution and the clarity of the author's description of its use. An AI ethics committee may be established to adjudicate complex cases or appeals.
Sanctions and Compliance
Failure to comply with this policy may result in document rejection, retraction, or other sanctions. Cases of misconduct will be investigated following pre-established institutional procedures.
Updates and Review
This policy will be reviewed annually to incorporate technological advancements and feedback from the academic community, maintaining alignment with best practices and ethical recommendations.