The new post-COVID era has divided humanity into three camps. The first camp enthusiastically embraces every breakthrough: its members have mastered Python, know how to use ChatGPT, DeepSeek, Claude, or Gemini for academic or professional purposes, create AI-generated songs and images, write all kinds of content with AI's help, and generally take a positive view of artificial intelligence and its development.

The second camp accepts the technology but receives the news warily: artificial intelligence, with its great predictive power, can help engineers anticipate problems in aircraft engine manufacturing, diagnose a rare disease that has puzzled panels of experienced physicians, or even assist in deciphering ancient Tibetan manuscripts that would take a "human" researcher decades. Nevertheless, doubt can be read in their eyes: won't this powerful tool become something more than just an assistant? Won't it turn into a shadow that one day eclipses its creator?

The third camp consists of those who openly resist the new reality. Translators, teachers, journalists, writers, graphic designers, editors, social media managers, and members of many other professions fear that AI will leave them jobless, so they have no desire to study its algorithms even as it breathes down their necks. They also have solid arguments about malicious uses of AI: cyberattacks, deepfakes, information warfare, plagiarism, the appropriation of scientific work, and much more. And they hold an excellent trump card: the famous film "I, Robot," based on Asimov's work, in which AI puts humanity in danger on an unprecedented scale.

Once, while browsing the Internet, I came across an apt comparison of the human-AI partnership with the tandem of Asterix and Obelix. For all his incredible physical strength, Obelix is hardly the most intellectual resident of the Gaulish village: he prefers hearty, abundant meals to clever, cunning puzzles and spends his spare time hauling menhirs, huge heavy stones. Yet when guided by the short but smart Asterix, the pair can defeat Caesar himself and the entire Roman army, traveling through distant cities and regions and helping the Egyptians, the Chinese, and other peoples of the world. Similarly, AI under human control has enormous potential to reach heights we could previously only dream of and write myths and legends about, like the ancient Greeks.

But AI's incredible capabilities are much like the magic potion brewed by Panoramix: in dishonest "hands," they can indeed lead to the direst results.

A striking example comes from Almendralejo, a small Spanish town of only 33,000 inhabitants, which unexpectedly became the epicenter of national media attention because of a serious scandal involving a new kind of cyber abuse: a group of schoolchildren distributed AI-generated nude images of their female classmates and blackmailed them for money, threatening to publish the fabricated erotic photos on social networks and even on OnlyFans, a platform known mainly for pornographic content. In 2024, a court sentenced fifteen teenagers to one year of probation for creating and distributing such images. The Almendralejo case was the third such incident in Spain in recent years: in Ayamonte and Alcalá de Henares, teenagers had likewise managed to "undress" their peers by means of AI.

A year earlier, Rafael Luque, a prolific researcher in "green" chemistry from Córdoba, had been suspended from the University of Córdoba for 13 years. Affiliated with two other universities, in Saudi Arabia and the Russian Federation, he struggled to convince the Spanish university's administration that he could combine full-time academic duties at the local institution with his collaborations abroad while publishing almost around the clock: in a single year he produced more than 100 scientific papers, roughly one every three to four days. In the first quarter of 2023 alone, at a pace nearly worthy of the Guinness Book of Records, he put out 58 papers, about one every 37 hours. The scientist later admitted to "polishing" his works with ChatGPT.

The story of Rafael Luque not only caused a wide stir in Spanish society but also became the first case in which a scientist of such caliber was suspected of unethical use of chatbots. Luque himself, however, dismisses all the accusations as his colleagues' jealousy.

Fake profiles and AI-generated music have also stirred the most creative circles of Spanish society. In the eventful year of 2024, fans of the group Las Nenas began to suspect that its three vocalists, Viviana, Claudia, and Naira, were nothing but the products of a neural network. Las Nenas tracks were subsequently removed from all streaming platforms, and the creators of the digital project admitted they had gone too far, writing in their official statement:

We, Las Nenas, never intended to deceive anyone. We just wanted to share songs that we started creating as a joke, but they turned out so good that it would have been a sin to keep them to ourselves.

For us, it was always something fun, a kind of curiosity. We don't think artificial intelligence can replace real artists.

Questions about the ethical use of AI are now more acute than ever in Spanish society. The Spanish Ministry of Science, Innovation and Universities has created an AI Research Ethics Committee, which holds that ethics should become the cornerstone of responsible technological development. Research from the University of Valencia, published by Prof. Joshua Beneite Martí, states:

Ethical problems associated with the use of artificial intelligence (AI) in education highlight the need to implement robust regulatory frameworks and educational policies that take data protection, fairness, and transparency into account. Moreover, it is crucial to foster ethical education and critical thinking among teachers and students so that they can understand, analyze, and interact with AI responsibly.

Beneite Martí concludes that "it is necessary to ensure an ethical approach to the development, implementation, and use of AI in educational institutions. Only in this way can we guarantee inclusive, fair, and responsible education that harnesses the advantages of technology without sacrificing society's fundamental values."

In 2024, UNESCO published its AI Competency Framework for Students, one of whose central theses is that humans must keep the leading role in interactions with AI, that students' critical thinking must be developed, and that AI must be used safely and responsibly. The organization has also issued its Recommendation on the Ethics of Artificial Intelligence, which stresses that "at no stage of an artificial intelligence system's lifecycle should a person be subjected to physical, economic, social, political, or moral harm."

No one questions that the recommendations of this world-renowned organization for education, science, and culture, with its solid reputation, deserve to be carefully weighed and applied by states around the world, for ethical AI use is not a passing fashion but a vital necessity on which our future depends.

But while large committees of world experts go on developing and coordinating mechanisms for monitoring the ethical use of AI, students continue to use AI technologies actively in their daily lives, both in small towns at the foot of the Pyrenees that tourists rarely visit and in big cities where life boils like a geyser.

These young, digitally savvy minds, born in the era of digitalization and 5G and unacquainted with the screech of a dial-up modem, are already mastering tools that are rapidly changing the world. They use AI for study, creativity, and everyday tasks, and sometimes, unfortunately, they cross the line, as the cases described above show, often without even thinking about the harm AI can cause when used maliciously.

The challenge facing Generations Z and Alpha is not only to master AI's potential but also to use it for the benefit of society, with respect for and understanding of ethical norms.

Only then can we direct AI's powerful force toward building a better future for all.