AI (not) against AI

Authors

  • Oleksandr Nesterenko, Doctor of Sciences in Engineering, Professor, Head of the Department of Information Technologies, International European University, Kyiv, Ukraine. https://orcid.org/0000-0001-5329-889X
  • Petro Yatsuk, Candidate of Sciences in Engineering, Associate Professor of the Department of Information Technologies, International European University, Kyiv, Ukraine. https://orcid.org/0009-0002-7124-4849

DOI:

https://doi.org/10.32347/2411-4049.2025.4.134-153

Keywords:

information technologies, digital transformation, future, security, threats

Abstract

The aim of the research is to identify informational, technological and methodological approaches to the development of artificial intelligence under modern conditions of digital transformation and in the long term. The objectives of the research are: a) to conduct a systematic analysis of the main aspects of AI development, in particular through the category of AI security; b) to identify the main temporal trends in the improvement of AI and in ensuring AI security; c) to assess the current state of AI, the directions of development in this area, and approaches to solving the problems of ensuring AI security. Given the specifics of the research topic, modern AI tools such as ChatGPT, Claude, Copilot and Gemini were used to conduct the research. As criteria for assessing the validity of the conclusions, it is proposed to rely on the relevance and pertinence of these AI tools' responses to various prompts. The main line of inquiry concerns the key trends that characterize the development of artificial intelligence, how AI will develop in the near future and up to 2100, and what consequences this will have for humanity. The main stages of AI development have been identified in accordance with these predictions. The main results show that the above-mentioned AI tools primarily point to growing attention to the ethical and legal aspects of using AI, as well as to the increasing integration of AI into various areas of human activity. At the same time, it was noted that it is important to maintain a balance between using AI to enhance human capabilities and ensuring that people do not lose their intellectual skills. Particular attention is paid to the possible moment of singularity, when AI begins to develop at an exponential rate, leading to radical changes in society and technology whose consequences are difficult to predict with certainty.
Such systematic reviews are suitable for forming information meta-resources that can be used by decision-makers, as well as directly by specialists in various fields, to support decision-making on the formation of AI security tools.

References

McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Retrieved 2006, from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

Atienza-Barba, M., del Rio-Rama, M. D., Meseguer-Martínez, A., & Barba-Sánchez, V. (2024). Artificial intelligence and organizational agility: An analysis of scientific production and future trends. European research on management and business economics, 30(2), 100253. https://doi.org/10.1016/j.iedeen.2024.100253

Buchanan, B. G., & Headrick, T. E. (1970). Some speculation about artificial intelligence and legal reasoning. Stanford Law Review, 23(1), 40-62. https://doi.org/10.2307/122775

Bell, A. J. (1999). Levels and loops: the future of artificial intelligence and neuroscience. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 354(1392), 2013-2020. https://doi.org/10.1098/rstb.1999.0540

Yampolskiy, R. V. (2012). Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 19(1-2), 194-214.

Yampolskiy, R., & Fox, J. (2013). Safety Engineering for Artificial General Intelligence. Topoi: An International Review of Philosophy, 32(2), 217-226. https://doi.org/10.1007/s11245-012-9128-9

Barrat, J. (2013). Our final invention: artificial intelligence and the end of the human era. New York: Thomas Dunne Books.

Busol, O. Yu. (2015). Potentsiina nebezpeka shtuchnoho intelektu. Informatsiia i pravo, 2(14), 121-128 (in Ukrainian). [Бусол, О. Ю. (2015). Потенційна небезпека штучного інтелекту. Інформація і право, 2(14), 121-128]. https://doi.org/10.37750/2616-6798.2015.2(14).272708

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Tinnirello, M. (2022). The global politics of artificial intelligence. Boca Raton, London, New York: CRC Press, Taylor & Francis Group.

Omrani, N., Rejeb, N., Maalaoui, A., et al. (2022). Drivers of Digital Transformation in SMEs. IEEE Transactions on Engineering Management. https://doi.org/10.1109/TEM.2022.3215727

Trofymchuk, O., Nesterenko, O., & Netesin, I. (2022). Methodology for Designing Analytical Information Systems for Administrative Management. Science and Innovation, 18(4), 25–40 (in Ukrainian). [Трофимчук, O., Нестеренко, O., & Нетесін, I. (2022). Методологія проєктування інформаційно-аналітичних систем адміністративного управління. Наука і інновації, 18(4), 25–40]. https://doi.org/10.15407/scine18.04.025

Shahrzadi, L., Mansouri, A., Alavi, M., & Shabani, A. (2024). Causes, consequences, and strategies to deal with information overload: A scoping review. International Journal of Information Management Data Insights, 4(2), 100261. https://doi.org/10.1016/j.jjimei.2024.100261

Dashko, I., Cherep, O., & Mykhailichenko, L. (2024). Rozvytok shtuchnoho intelektu: perevahy ta nedoliky. Ekonomika ta suspilstvo, (67) (in Ukrainian). [Дашко, І., Череп, О., & Михайліченко, Л. (2024). Розвиток штучного інтелекту: переваги та недоліки. Економіка та суспільство, (67)]. https://doi.org/10.32782/2524-0072/2024-67-31

Yarovoi, T. S. (2023). Mozhlyvosti ta ryzyky vykorystannia shtuchnoho intelektu v publichnomu upravlinni. Economic Synergy, (2), 36–47 (in Ukrainian). [Яровой, Т. С. (2023). Можливості та ризики використання штучного інтелекту в публічному управлінні. Economic Synergy, (2), 36–47]. https://doi.org/10.53920/ES-2023-2-3

Grenci, S. B. (2024). Artificial intelligence applications to support the automation of the administrative procedure. Rivista Italiana di Informatica e Diritto, 6(1). https://doi.org/10.32091/RIID0139

Seckelmann, M. (2023). Artificial intelligence in administration: The draft of a European AI Regulation and the handling of information technology risks. Verwaltung, 56(1), 1–29. https://doi.org/10.3790/verw.56.1.1.

Labrecque, L. I., Peña, P. Y., Leonard, H., & Leger, R. (2024). Not all sunshine and rainbows: exploring the dark side of AI in interactive marketing. Journal of Research in Interactive Marketing, 18(5), 970-999. https://doi.org/10.1108/JRIM-02-2024-0073

Ravšelj, D., Keržič, D., Tomaževič, N., Umek, L., Brezovar, N., A. Iahad, N., et al. (2025). Higher education students' perceptions of ChatGPT: A global study of early reactions. PLoS ONE, 20(2), e0315011. https://doi.org/10.1371/journal.pone.0315011

Kozyrenko, V. P., & Kozyrenko, S. I. (2024). Ryzyky zastosuvannia shtuchnoho intelektu v osviti. Vcheni zapysky Kharkivskoho humanitarnoho universytetu «Narodna ukrainska akademiia», 30, 31-36 (in Ukrainian). [Козиренко, В. П., Козиренко, С. І. (2024). Ризики застосування штучного інтелекту в освіті. Вчені записки Харківського гуманітарного університету «Народна українська академія», 30, 31-36]. https://doi.org/10.5281/zenodo.11200073

Khatoon, Z. B., Chaudhary, M., Wasim, J., et al. (2024). Mindful Horizons: Navigating the Future Challenges and Potential Threats of Brain-Computer Interfaces (BCIs). Journal of Intelligent Systems and Internet of Things, 14(1), 31–44. https://doi.org/10.54216/JISIoT.140103

Sun, P., Wan, Y., Wu, Z., et al. (2024). A survey on privacy and security issues in IoT-based environments: Technologies, protection measures and future directions. Computers and Security, 148, 104097. https://doi.org/10.1016/j.cose.2024.104097

Skitsko, O., Skladannyi, P., Shyrshov, R., Humeniuk, M., & Vorokhob, M. (2023). Zahrozy ta ryzyky vykorystannia shtuchnoho intelektu. Elektronne fakhove naukove vydannia «Kiberbezpeka: osvita, nauka, tekhnika», 2(22), 6–18 (in Ukrainian). [Скіцько, О., Складанний, П., Ширшов, Р., Гуменюк, М., & Ворохоб, М. (2023). Загрози та ризики використання штучного інтелекту. Електронне фахове наукове видання «Кібербезпека: освіта, наука, техніка», 2(22), 6–18]. https://doi.org/10.28925/2663-4023.2023.22.618

Harrison, M., Ruzzo, W., & Ullman, J. (1976). Protection in Operating Systems. Communications of the ACM, 19, 461-471. https://doi.org/10.1145/360303.360333

Nesterenko, O. V. (2009). Bezpeka informatsiinoho prostoru derzhavnoi vlady. Tekhnolohichni osnovy. Kyiv, Naukova dumka (in Ukrainian). [Нестеренко О.В. (2009). Безпека інформаційного простору державної влади. Технологічні основи. Київ: Наукова думка].

Nesterenko, A. V., & Netesin, I. E. (2020). Cybersecurity graph model of information resources. Journal of Automation and Information Sciences, 52(8), 14-31. https://doi.org/10.1615/JAUTOMATINFSCIEN.V52.I8.20

Russell, S. (2018). Provably Beneficial Artificial Intelligence. https://people.eecs.berkeley.edu/~russell/papers/russell-bbvabook17-pbai.pdf

Kitchenham, B., Budgen, D., & Brereton, O. P. (2010). The value of mapping studies – A participant-observer case study. Proceedings of the 14th International Conference on Evaluation and Assessment in Software Engineering, 1–9.

Published

2025-12-22

How to Cite

Nesterenko, O., & Yatsuk, P. (2025). AI (not) against AI. Environmental Safety and Natural Resources, 56(4), 134–153. https://doi.org/10.32347/2411-4049.2025.4.134-153

Issue

Section

Information technology and mathematical modeling