Principles and values of the policy
The journal supports the responsible integration of generative artificial intelligence technologies into scholarly activity, recognising their potential to enhance the quality of communication and improve research processes. At the same time, the use of such tools must comply with the ethical norms established by international standards – in particular, the recommendations of the Committee on Publication Ethics (COPE) and the principles outlined in the editorial policies of Elsevier and Springer. The primary aim of this policy is to safeguard academic integrity and prevent situations in which these technologies could distort research content or mislead readers.
Authors’ role and responsibility in using generative artificial intelligence
Authors may employ generative artificial intelligence where it improves the technical or communicative quality of a manuscript. This may include text editing, enhancing readability, preparing illustrative materials or carrying out automated data analysis, provided that all outputs are thoroughly verified by a human. Such tools may serve as a supplementary resource, but not as a source of scientific content.
Generative artificial intelligence cannot perform the functions of an author. It must not be used to produce primary scientific claims, generate conclusions or synthesise data presented as research results. The use of artificial intelligence to fabricate references, falsify data or create material that could be misinterpreted as an authentic scholarly contribution is strictly prohibited. Entering confidential or unpublished information into such systems is likewise unacceptable.
Disclosure of generative artificial intelligence use
Authors are required to provide transparent disclosure regarding the use of generative artificial intelligence in the preparation of their manuscript. This information must be included in the “Acknowledgements” section or in the “Materials and Methods” section. Authors should specify which tools were used, for which tasks, and the extent to which they influenced the handling of text or data.
Generative artificial intelligence cannot be credited as a co-author and bears no responsibility for the article’s content – this responsibility lies solely with the authors.
Use of generative artificial intelligence in peer review
Reviewers must adhere to the principle of confidentiality as defined by COPE’s international standards. For this reason, transmitting manuscript content to external systems, including generative artificial intelligence tools, is strictly prohibited. Such tools may be used only for technical support in preparing the review text – for example, for language refinement – and only if no content of the manuscript is disclosed to the artificial intelligence system.
Use of generative artificial intelligence by the editorial team
The editorial team may, where necessary, employ generative artificial intelligence technologies to automate certain internal processes, such as handling correspondence, checking the technical quality of submitted materials or preparing administrative communications. Such tools are not used to make editorial decisions or to alter the scientific content of manuscripts, and they are not applied to confidential materials without the oversight of the responsible editor.
Responsibilities of the parties
Authors bear full responsibility for the accuracy of the data they present, the soundness of their interpretations and the compliance of their work with scholarly standards. The use of generative artificial intelligence does not diminish this responsibility. If errors or breaches are identified that stem from inappropriate use of such technologies, the editorial office reserves the right to request corrections, reject the submission or initiate ethical procedures in accordance with COPE’s international guidelines.