AI policy

The policy of the Editorial Board of the electronic journal «Nauchno-tekhnicheskiy vestnik Bryanskogo gosudarstvennogo universiteta» on the use of AI in the preparation and publication of scientific works is based on the principles, norms, and recommendations adopted by the international scientific community, as reflected in the documents of the Committee on Publication Ethics (COPE), the CANGARU initiative, and the practices of leading Russian and international journals and publishers.

1. Policy on artificial intelligence for authors of scientific publications

When submitting materials for publication in the electronic journal «Nauchno-tekhnicheskiy vestnik Bryanskogo gosudarstvennogo universiteta», the authors (or author groups) must adhere to the following principles:
1.1 The electronic journal «Nauchno-tekhnicheskiy vestnik Bryanskogo gosudarstvennogo universiteta» recognizes that authors may use AI technologies both in conducting research and in writing articles. Authors should understand that AI is only a tool; human oversight, responsibility, and transparency remain essential.
1.2 Authors may use AI and AI-based products in preparing a manuscript only to improve its readability and to check grammar and language quality. This restriction applies only to the writing stage and does not limit the use of AI-based tools in the research itself, for example for data analysis, modeling, or extraction of results. Authors who use these technologies must carefully supervise the process and review and edit all AI-generated content at every stage.
1.3 The authors bear full responsibility for the accuracy and completeness of the final manuscript and its compliance with the journal's requirements. AI systems can generate plausible-looking content that is nonetheless erroneous, incomplete, or biased. If AI was used while working on the manuscript, this must be disclosed in the published article by a corresponding statement. Such transparency strengthens trust among all stakeholders and ensures that the terms of technology use are respected.
1.4 AI and AI-based systems cannot be considered authors, co-authors, or sources of copyrighted content. Authorship contributions and written consent to publish the final version of the manuscript must come from the people who participated in the research and are responsible for its reliability. These people approve the final manuscript and agree to be accountable for all aspects of the work. Each author must be able to defend the article if questions arise after publication, as well as provide new information or correct errors after publication.
1.5 The Journal strictly prohibits the use of generative AI, or tools based on it, to create, modify, or enhance any images, figures, drawings, photographs, or other visual elements in submitted manuscripts. This restriction also applies to altering, translating, or obscuring objects within images. Routine image adjustments, such as changes to brightness, contrast, or color balance, are permitted as long as they do not conceal or eliminate any content visible in the original image.
1.6 AI and AI-based tools may be used in the research process when they are an integral part of its methodology, for example for automated image analysis in computer vision systems or for AI-assisted measurements. All uses of AI tools must be fully described in the «Methods» section, including the exact names and versions of the models used, the types of equipment and manufacturers' names, the processing history and verification of the AI systems, and all processing parameters and adjustments applied in specific procedures.
1.7 The Editorial Board reserves the right to use image analysis software and other tools to determine whether an image was created or modified with the help of AI. Authors may be asked to provide original photographs, unaltered source images, raw data files, or other documentation confirming compliance with this policy. To include graphical materials containing AI-generated components, authors must obtain prior permission from the Editorial Board and provide proof of copyright compliance and proper attribution.
1.8 Failure to comply with these principles on the use of AI may lead to rejection of the manuscript, official retraction of a published article, or other editorial measures.

2. Policy on artificial intelligence for reviewers of scientific publications

When reviewing materials submitted to the electronic journal «Nauchno-tekhnicheskiy vestnik Bryanskogo gosudarstvennogo universiteta» for publication, the reviewers should follow these principles:
2.1 The journal supports the growing use of AI in scientific research and pays special attention to ensuring the quality of the review process. Reviewers must uphold the quality of the research assessment and maintain the confidentiality, impartiality, and academic objectivity that are standard for the publishing system.
2.2 Reviewers must not upload any part of a submitted manuscript into generative AI tools or AI-enabled systems. This prohibition applies to all components of the manuscript, including the abstract, methods, results, figures, tables, and supplementary materials, as well as to any information identifying the authors or the data on which the research is based. Uploading such material is a clear violation of confidentiality and may infringe the authors' copyright and data protection laws. Confidentiality obligations persist after the review is completed, regardless of whether the manuscript is published.
2.3 Reviews contain confidential comments, as well as information that could identify both the manuscript and the reviewers, and are therefore subject to the same confidentiality requirements. Reviewers should not use AI tools to prepare or edit their review reports for any purpose, such as improving language, checking grammar, or optimizing style. AI systems cannot and should not replace human judgment and original critical analysis. Each reviewer bears full personal responsibility for the content, clarity of presentation, and professionalism of the review report.
2.4 Scientific evaluation of manuscripts requires expert knowledge, a deep understanding of the subject, familiarity with the methods, and critical thinking, which exceed the capabilities of modern AI. Reviewers must evaluate the quality of the work, the correctness of the experiments, the interpretation of the data, and the scientific validity of the research. AI systems lack the knowledge required to evaluate research papers and may produce biased, inaccurate, or unjustified assessments that can degrade the quality of publications. The journal relies on reviewers' personal expertise to maintain high standards of scientific excellence.
2.5 Authors may use AI to improve the language and readability of their manuscripts; where applicable, they must disclose this before the «List of References» section. Reviewers should pay attention to any such disclosures, but their assessment should not depend on whether AI was used in the study; instead, they should evaluate the scientific quality and the reliability of the research methods. If reviewers believe that a study used AI tools that were not disclosed, and that this casts doubt on its reliability or originality, they should report this in confidential comments to the editor rather than attempting independent verification with AI tools.
2.6 Intentional violations of the rules regarding AI may result in the disqualification of a reviewer. If reviewers are unsure whether the above-mentioned principles apply to their situation or any unique circumstances, they should contact the editorial board prior to taking any further action.

3. Policy on artificial intelligence for editors of scientific publications

When editing materials sent for publication to the electronic journal «Nauchno-tekhnicheskiy vestnik Bryanskogo gosudarstvennogo universiteta», editors should follow these principles:
3.1 The journal adheres to high standards of publication ethics and takes measures to prevent unfair publishing practices. Editors act as guardians of the scientific record, guided by professional judgment that cannot be replaced or supplanted by AI systems.
3.2 Editors must not upload, enter, or otherwise process submitted manuscripts, or any part of them, in generative AI tools. Compliance with this rule protects the authors' intellectual property and confidentiality and guards against possible violations of personal data protection laws; it also reduces the risk of unauthorized access to confidential information contained in manuscripts. Confidentiality extends to editorial information such as reviews, decision letters, internal correspondence, and discussions, as well as other internal documents; this information must likewise not be processed with external AI tools, even to improve language or to obtain administrative assistance.
3.3 The review process requires a high level of subject-matter expertise, an understanding of context, and critical thinking skills that only humans possess. Editors should not use generative AI tools or AI-based technologies to receive, evaluate, or make decisions on manuscripts at any stage of the publication process. These technologies lack the comprehensive understanding of scientific methods, research significance, and disciplinary context required for high-quality editorial work, and they carry a significant risk of superficial, incorrect, or biased assessments that can harm publication quality and scientific accuracy. Each editor bears full responsibility for, and retains the right to make, editorial decisions, including sending manuscripts for review before publication.
3.4 Editors should be aware that the use of AI to improve the language of a manuscript may be acceptable, provided the authors include the mandatory statement on the use of AI in the appropriate section before the list of references. Editors should take this information into account when making editorial decisions, paying special attention to scientific quality, methodological reliability, and the value of the research. If editors discover a possible violation of the journal's AI policy by authors or reviewers, or receive a report of one from readers, they will report it to the editorial board by email at ntv-brgu@yandex.ru, together with documentation supporting the claim. The board will then investigate to determine whether a violation of journal policy has occurred.
3.5 The editorial board strives to keep abreast of new AI technologies and their impact on scientific publishing, without ever neglecting confidentiality and editorial integrity. The journal's AI policy will be reviewed and updated as AI capabilities evolve, in order to follow best editorial practice and make use of current technologies in support of scientific publishing without compromising editorial independence or the confidentiality of manuscripts.