Artificial intelligence has taken off triumphantly. Since 2022, new “generative” technologies, foremost among them ChatGPT, have spread at an unprecedented rate[1]. The entire IT ecosystem has mobilized to support the innovation. Like the speculative bubbles of the 2000s and 2012, this technological leap has set off a frantic race of narratives, financial bets, and geo-economic ambitions. For some, it heralds a Promethean “super-intelligence” and a major advance[2]. For others, it raises the specter of a stock market crash[3] and the extinction of humanity[4]. For still others, it spells disillusionment[5] and a missed opportunity for effective intelligence[6] and economic productivity[7].

Amid the free-for-all, China's breakthrough with DeepSeek in early 2025 reignited a competition begun three years earlier by the American unicorn OpenAI. This hubbub, and the perceptual confusion it creates, masks a much quieter and more structural process unfolding at the intersection of artificial intelligence and governance. The current takeoff of AI is accelerating the establishment of a global technostructure whose pieces are being strategically positioned.

A technostructure based on six methodological components

This process has become more visible in recent years in the scientific world, as well as in numerous civil and multilateral initiatives that are stepping up their work in the field of AI and governance. The topics covered explicitly address four major areas, all studied from the perspective of the emancipatory transformations provided by AI: ethics, the formulation and design of public policy, decision-making through digital modeling of real systems, and the optimization of resource allocation. These developments are already feeding into institutional projects and concrete practices. However, they are treated in a compartmentalized manner, which limits the possibility of forming an overall view. While each individual contribution may at first glance appear to be well-intentioned, when taken together they reveal a very different orientation.

The common thread linking these various works and initiatives leads to the development of a technostructure of governance, based on the automation of human deliberation and its exclusion from traditional mechanisms of regulation and political control. Six major methodological components can be identified[8], echoing the areas of investigation mentioned above.

  1. Modeling natural and human systems through digital twinning.

  2. Automating moral and ethical arbitration.

  3. Designing rules and laws.

  4. Implementing regulation through computerized infrastructure.

  5. Adaptive feedback from the technostructure.

  6. Behavioral and cognitive modeling of society.

1. Representing reality through “digital twins”

This first methodological component is based on representing how society functions in a digital model. “Digital twinning” of reality consists of establishing a virtual mirror of the real world, or of a subsystem of society, on a computer. Among published scientific investigations, five major areas of modeling stand out: agriculture, health, urban infrastructure, energy, and ecosystems. The United Nations has had this approach on its agenda since 2022. The “Action Plan for a Sustainable Planet in the Digital Age[9]” proposes planetary digital twins[10] that can measure, monitor, and model the health of the planet's biosphere and its interactions with social and economic systems[11].

That same year, 2022, the UN documented institutional AI models capable of expanding digital twinning to a global scale[12]. In precision agriculture[13], the models developed range in scale from the geographical region down to the individual plant sown in a cultivated field. In healthcare, digital twinning simulates the metabolism of an entire population and of the individuals within it[14]. For urban environments, models simulate energy consumption, traffic, and social behaviors[15].

The official goal of these digital twinning initiatives is to make public management more efficient and effective. AI becomes a new decision-making tool: it makes it possible to answer “causal queries through intervention analysis[16]” and to “improve the design of evidence-based public policies[17].” However, modeling as proposed in these initiatives introduces a new layer of representation that comes between society and reality. Modeling for better decision-making transfers discernment and decision-making, usually supplied by collective deliberation and expert opinion, to this layer and its computerized simulators. In the process, human intervention is relegated to the background, behind the computing power displayed by the digital model.
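To make the mechanism concrete, here is a minimal sketch of what an “intervention analysis” on a digital twin amounts to: simulate the modeled world with and without a policy lever, then compare outcomes. The district, the retrofit lever, and every parameter below are invented for illustration; real planetary twins are vastly larger, but the decision-making pattern is the same.

```python
import random

# Toy "digital twin" of a district's daily energy demand (illustrative only:
# the model, parameters, and policy lever are invented for this sketch).
def simulate_demand(days: int, retrofit_rate: float, seed: int = 0) -> float:
    """Return mean daily demand (MWh) under a given building-retrofit policy."""
    rng = random.Random(seed)
    baseline = 120.0  # MWh/day for the district
    total = 0.0
    for _ in range(days):
        weather = rng.gauss(0.0, 8.0)    # stochastic weather effect
        savings = 35.0 * retrofit_rate   # demand avoided by retrofits
        total += max(baseline + weather - savings, 0.0)
    return total / days

# "Causal query through intervention analysis": compare the simulated world
# with and without the intervention, holding everything else fixed.
status_quo   = simulate_demand(days=365, retrofit_rate=0.0)
intervention = simulate_demand(days=365, retrofit_rate=0.4)
print(f"Estimated effect of the policy: {status_quo - intervention:.1f} MWh/day")
```

The decision, in this pattern, is read off the simulator's output; whoever sets the model's structure and parameters has already framed what the “evidence” can say.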

These modeling approaches are currently being promoted at the global and European levels[18]. They involve different forms of technocratic coordination[19], ranging from institutional linkages and digital twin ecosystems to the search for a common methodological framework[20].

2. Automating moral judgment

This second field of research concerns the use of computers in moral deliberation: programming ethics into machines and subjecting every potential human action to an arbitration defined by a specific set of standards. Any action taken by a computerized system can then be gated by a simulated “moral conscience,” transcribed within the computer and capable of weighing the ethical dilemmas inherent in any choice made in a social system.

For researchers, the ethics incorporated into the computer must be capable of “initiating a structural turn[21]” and now requires an approach that goes beyond specific cases to consider the socio-technical system as a whole[22]. Instead of adapting an algorithm to make it “fairer,” or coupling it with a stage of human supervision, it is recommended that ethics be inserted as a distinct layer of the computerized infrastructure. In other words, ethics is encoded into the system as if it were a separate software layer of its operating system[23]. Such an approach shifts the application of moral references, usually located upstream and downstream of any collective choice, into the stages of data collection, model building, integration, and system monitoring. Here we find a mechanism of exclusion similar to the previous point.

This approach is already being tested in practice. In particular, an “ethical reasoner” has been designed to produce ethical trade-offs in real time, based on variables characterizing a given social context[24]. The prototype combines formal rules derived from philosophy or law with computer-processed statistical reasoning. A second approach[25] is being developed on the methodological basis of “multi-objective reinforcement learning[26],” which allows explicit ethical constraints to be incorporated into AI. These processes ultimately build computer systems capable of acting as “automated ethical arbiters” and of comparatively evaluating the options considered by other AI systems. An initial normative effort has been made to frame this agenda: the conferences on AI, ethics, and society (AAAI/ACM)[27] produce references for implementing ethics technically, while the Sustainable AI Coalition[28] has developed benchmarks for acting ethically “by design.”
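The architecture such prototypes describe, hard deontic rules filtering candidate actions followed by a weighted trade-off between objectives, can be sketched in a few lines. Everything here (the actions, weights, thresholds, and rule) is invented for illustration and is not the cited prototype's actual design:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float       # task objective (e.g., efficiency)
    harm_risk: float     # estimated probability of harm, in [0, 1]
    violates_rule: bool  # hard deontic constraint (e.g., "never deceive")

# Minimal "ethical arbiter": hard rules first, then a weighted trade-off.
# Weights and thresholds are invented for this sketch.
def arbitrate(actions, harm_weight=2.0, harm_cap=0.3):
    permitted = [a for a in actions
                 if not a.violates_rule and a.harm_risk <= harm_cap]
    if not permitted:
        return None  # defer to a human when nothing passes the constraints
    return max(permitted, key=lambda a: a.utility - harm_weight * a.harm_risk)

choice = arbitrate([
    Action("optimize traffic flow", utility=0.9, harm_risk=0.10, violates_rule=False),
    Action("reroute through school zone", utility=1.0, harm_risk=0.45, violates_rule=False),
    Action("suppress incident report", utility=1.2, harm_risk=0.05, violates_rule=True),
])
print(choice.name if choice else "escalate to human review")
```

Note where the moral content now lives: in the rule flags, the weights, and the cap, all set by whoever configures the system.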

It should be noted that the breakdown in the relationship between ethics and society is not explicitly mentioned in this work. The moral dimension is implicitly converted into a software layer of the computerized system, thereby reducing the substance of human ethics.

3. Designing law and regulation by computer

Intelligent action driven by AI, therefore, positions itself in the realm of representations of the world that societies need in order to govern themselves (first component) and in that of the moral trade-offs inherent in collective action (second component). A third methodological dimension is added to these first two: that of the design of legislation and regulation through language models.

Significant progress has been made in recent years within the United Nations and environmental governance. One such initiative, the UNBench[29] project, comparatively evaluates the performance of language models on UN agenda activities. Using the documentary archives of the UN Security Council, chatbots were asked to draft a diplomatic resolution, simulate the accession of member countries, and generate diplomatic statements. The test shows that language models produce texts whose quality approaches that of UN resolutions. In the field of marine conservation, a conversational agent was developed to provide official positions and recommendations on the High Seas Treaty[30]. Finally, in a third case, language models were tested in the United States to generate texts related to the new federal Executive Order 14110[31] on AI governance[32]. Some models produced interpretations of the executive order almost as convincing as those of experts. Chatbots were also used to verify the relevance of other texts on the same subject, showing that AI is capable not only of designing the content of new policies, but also of interpreting and critiquing existing ones.
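As a rough illustration of this kind of exercise, the sketch below asks a language model first to draft a resolution-style text and then to critique it. It assumes an OpenAI-compatible Python client and an API key in the environment; the model name, prompts, and topic are placeholders of mine, not UNBench's own tasks:

```python
# Assumes the `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Drafting task: produce a text in the register of a Security Council resolution.
draft = ask("Draft a short UN Security Council resolution, in standard UN "
            "style, on the protection of submarine communication cables.")

# Critique task: evaluate an existing text, as in the Executive Order 14110 study.
critique = ask("Critique the following draft resolution: flag vague operative "
               "clauses and missing preambular references.\n\n" + draft)
print(critique)
```

Both the drafting and the critique run through the same model, which is precisely the dual capability the cited studies test.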

These new methodologies are now spreading throughout the institutional and professional world. Economic and political analysts use chatbots to draft reports, synthesize ideas, and convert technical data into accessible language. At the multilateral level, the United Nations University now encourages international agencies to adopt a language model of their own to facilitate the drafting of texts[33]. In these various scenarios, the stated objective is to help policymakers, particularly in developing countries, build expertise clearly and quickly. The heavy work of reconstructing facts, standards, and scenarios tends to be transferred to the computer, which is thus in a position to set out the policies humans are supposed to follow or implement. This does not mean that legislators or regulators will disappear, but their role evolves toward that of supervisors who arbitrate between orientations or resolve specific uncertainties. The intervention of computers, however, introduces various algorithmic and conceptual biases: in the context of UN policy documents, commentators have pointed out that AI expertise reflects an ethnocentric viewpoint centered on Western positions[34].

Beyond design assistance, the ground computers are entering here is highly sensitive: the formulation of symbols, language, rules, and the political agenda. AI-driven writing is liable to frame issues from a preferential angle, minimize certain concerns, or induce compromises that ultimately change the direction of regulation. Arguments of speed and efficiency are put forward to legitimize this substitution for direct human action. The fact remains that this redistribution of power to computers reduces the role of experts and regulators: their activity shifts from designing and approving policies to supervising or validating AI-driven processes.

4. Encoding policy implementation in the technostructure

The fourth methodological component underpinning this technostructure is the implementation of policies within a computerized infrastructure. This structure can then directly take charge of executing the actions set out in a given policy, or at least some of them.

Digital currencies are one area of application. Unlike cash, central bank digital currencies can be programmed to influence how money is spent. Financial assistance can be activated upon the purchase of a particular product on the market; a carbon tax can be applied automatically to a transaction involving the consumption of fossil fuels. The German Bundesbank and the European Central Bank highlight the advantages of this solution, emphasizing the benefits of automated payments (road tolls, money-laundering controls, tax collection, consumer assistance in emergencies)[35]. In urban areas, cities are equipping themselves with computerized infrastructure based on telemetric networks. Sensors[36] distributed throughout the urban area and its traffic flows can be linked to individual or collective means of transport, in particular with the aim of restricting travel beyond a local perimeter to reduce the carbon footprint of transport.
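Programmability of this kind is straightforward to express in code, which is precisely the point. The sketch below shows what “enforcement by design” can look like for a programmable currency; the category codes, surcharge rates, and perimeter rule are invented for illustration, not drawn from any central bank's design:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    payer: str
    merchant_category: str  # e.g., "fuel", "transport", "groceries"
    amount: float
    distance_km: float      # distance from the payer's registered perimeter

# Illustrative policy rules executed at settlement time.
CARBON_SURCHARGE = {"fuel": 0.12, "air_travel": 0.20}  # fraction of amount
LOCAL_PERIMETER_KM = 15.0

def settle(tx: Transaction) -> tuple[bool, float]:
    """Return (approved, total_debited) after programmed policy checks."""
    if tx.merchant_category == "transport" and tx.distance_km > LOCAL_PERIMETER_KM:
        return False, 0.0  # travel beyond the perimeter is refused outright
    surcharge = tx.amount * CARBON_SURCHARGE.get(tx.merchant_category, 0.0)
    return True, tx.amount + surcharge  # carbon tax applied at settlement

print(settle(Transaction("alice", "fuel", 50.0, 2.0)))       # (True, 56.0)
print(settle(Transaction("alice", "transport", 8.0, 40.0)))  # (False, 0.0)
```

The rule is not announced, argued, or adjudicated; it simply executes, or refuses to, at the moment of payment.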

A second area of application can be seen in the political field of regulating “safe, secure, and trustworthy[37]” AI in the United States[38]. This regulatory framework requires federal agencies to ensure that public and private AI systems are subject to bias detection and aligned with common principles (compliance with values and threat prevention). The administration thus sets the standard, then requires that any AI deployed in finance, health, or critical infrastructure incorporate control or enforcement within the technostructure itself (enforcement by design). The directive even mentions the concept of “global AI governance” with international partners, suggesting a quest for compliance at that level.

In this way, the automated enforcement of policy by an IT infrastructure materializes a new register of governance transferred to the technostructure. Moreover, automating the compliance of a citizen's or a subsystem's behavior nullifies the possibility of free will, to say nothing of what it implies for the erasure of intermediary bodies. Far from being a minor issue, this reconfiguration calls the social contract into question. In a republican state, the application of laws rests on social bodies (justice, police, government agencies) and retains a certain flexibility, depending on the nature of the political regime in which we live. Once enforcement is automated, with rules executed by algorithms, possibly subject to ethical clauses and certain transparency criteria, no room for maneuver remains.

5. Feedback and adaptation of the technostructure

The components we have just seen are the main drivers of the technostructure. Their operation generates enormous amounts of data. Each telemetric sensor, each authorized or prohibited transaction, and each decision supervised by AI in turn generates more data, which can be fed back into digital twins and other automated actions[39]. Managing this feedback is therefore emerging as a methodological component in its own right.

However, this learning process is not just a loop of digitized data flows; it also concerns standards, norms, and structure. The approach taken by the Global Conference on AI, Ethics and Society[40], like that of the Sustainable AI Coalition[41], embodies this search for an evolving technostructure. Beyond producing knowledge, these initiatives reshape general frameworks and normative criteria. In the same vein, some researchers see the continuous recalibration of the technostructure as the answer to the “default” deficiency of socio-technical systems[42]: when a problem arises, structural causes are likely at play, so the general structure of the system should be modified rather than specific, isolated elements merely corrected.
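At the level of data flows, the recursive loop is simple to picture: each telemetry reading nudges the twin's internal estimate, and the updated estimate drives the next automated action. A minimal sketch, with an update rule, threshold, and readings invented for illustration:

```python
# Minimal feedback loop: sensor readings recalibrate the digital twin's
# estimate, which in turn triggers (or withholds) an automated action.

def recalibrate(estimate: float, observation: float, rate: float = 0.5) -> float:
    """Exponential moving average: the twin drifts toward what sensors report."""
    return (1 - rate) * estimate + rate * observation

twin_load = 100.0  # the twin's current estimate of network load
for reading in [103.0, 108.0, 121.0, 119.0]:  # telemetry stream
    twin_load = recalibrate(twin_load, reading)
    if twin_load > 110.0:                     # automated action from the twin
        print(f"estimate {twin_load:.1f}: throttle demand")
    else:
        print(f"estimate {twin_load:.1f}: no action")
```

The loop needs no operator once the rate and threshold are set; human judgment survives only in the choice of those constants.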

In such a context, the human mind intervenes a priori to evaluate higher-level objectives and respond to anomalies identified in the system. Human action is indeed required, at least that of experts and qualified individuals. But operational anomalies are likely to decrease over time, further reducing this involvement. The corollary of centralizing functions within the technostructure is an opacity that is difficult, if not impossible, to penetrate for any social actor affected by its decisions or in a legitimate position to voice political demands. Consider electronic voting, authorized in various countries, and the proliferation of fraud schemes downstream, with citizens having no real leverage to audit the mechanism or make it transparent.

In terms of effectiveness, this principle of recursively feeding the technostructure supplies a new sleight of hand to launder its legitimacy: because it can learn from experience, it can be presented as the remedy for the flaws of human governance (corruption, bias, inertia, lack of information).

6. Modeling behavior and knowledge

A final component closes the methodological circuit formed by the previous elements. The technostructure aims to reduce human error and unpredictability to a minimum: in its logic, the weak link in governance is the imperfection of human intelligence. The aim, still experimental but openly stated in the scientific agenda, is therefore to connect the human brain to the technical system via brain-computer interfaces[43]. Governments[44] and companies are already investing in neuroscience and in this type of interface[45]; in 2025, Neuralink technologies were tested on humans[46]. Tellingly, the institutions working on programming ethics into computers are simultaneously exploring neuroethics[47], namely the design and regulation of brain-computer interfaces according to ethical standards. The automation of moral and ethical arbitration, the second component described above, reappears here: just as moral principles have been encoded for automated action, it is conceivable that the thought patterns or emotional reactions of connected individuals could be encoded so as to keep them within a zone of acceptance or equilibrium.

In parallel with mind-computer interfacing, the creation of a cognitive environment favorable to the technostructure project and its underlying technologies is proving an even more decisive strategy. In terms of narratives and language, the vocabulary of official communications is most often benevolent, friendly, irenic, even humanistic. Such language is rooted in emancipatory goals and defuses in advance any mistrust of intentions as laudable as governing more effectively, preventing famine, optimizing the use of resources, or promoting an inclusive digital society.

However, this façade is deceptive. On the one hand, it compels adherence to the technopolitical project and tends to portray any resistance as backward. On the other, it masks a reversal of ethics against the individual. Each ethical criterion rests on the underlying rules formulated by the technostructure. Instead of starting from existing moral frameworks to design the law, ethics is stated in the opposite direction, starting from the rules to be applied[48]. In concrete terms, the current terms of AI ethics are deduced not from philosophical reflection but from the regulatory constraints seeking to be applied. Similarly, the United Nations Sustainable Development Goals and the 2030 Agenda are filled with statements that take on an ethical and moral tone to induce rules of compliance.

This offensive use of ethics is well illustrated by the Globethics conferences on good AI governance[49]. AI is consistently positioned on the side of good and progress, while technological development is portrayed as a remedy for flawed and imperfect governance. The AI for Good movement[50] reinforces this approach: reluctance to accept automated governance is deemed morally reprehensible. This incitement to consent through ethics runs through every initiative that associates AI with health, education, climate and ecosystems, governance, and so on. Against the corruption, slowness, and imperfection of traditional political systems, the technostructure exhibits attributes of efficiency, transparency, ethics, order, and quality.

Finally, certain antagonisms are being exploited to shape public opinion. While extolling the virtues of AI and advocating rights for machines, leading entrepreneurs in the IT industry simultaneously affirm its dangerous nature by comparing it to nuclear risk or the risk of species extinction[51].

The threats posed by AI are already being highlighted by certain think tanks: bioterrorism, disinformation, crime, autonomous weapons, social destabilization. In effect, these statements frame the debate around access to AI, given that it is identified as dangerous. Such a veil allows the greater threat posed by the technostructure[52] to be overshadowed. Technological competition between major powers completes the picture. In January 2025, the disruptive release of DeepSeek predictably led the United States to respond with the Stargate project and its domestic AI deployment agenda. Yet the Chinese flagship is not really a competitor to its US rival: there is intense scientific cooperation between the two countries[53], and the servers used by DeepSeek run on Nvidia technologies imported into China (and subsequently banned) by the United States[54]. This significantly distorted competition has, above all, served to justify the technological race in the field of AI.

A coup d'état 3.0...

This path allows us to glimpse the contours of a vast project of encroachment, even capture, of governance by a technostructure currently being put in place. Although this architecture is still in its infancy, the qualitative leap made by artificial intelligence since 2022 has drastically accelerated its agenda. Its observable modalities allow it to be compared to other conflictual maneuvers carried out at the global level in the field of geoeconomics. There is indeed a parallel in form with the strategies that have been in place for just under a century around the environmental agenda, the United Nations, and sustainability.

Little by little, the landscape that is unfolding points directly to the cybernetic and transhumanist culture formulated by Julian Huxley, Norbert Wiener, and Ray Kurzweil. The stated objectives constantly revolve around double-talk. Since the consent of the masses is an existential condition for such a technostructure, it must legitimize the blurring of the boundary between man and machine and create a break in the relationship between ethics and human action.

At this point, it is important to clarify that AI itself is not the main troublemaker, the driver of a technical system whose natural inclination would inevitably be a discretionary control system. Fundamentally, AI is a form of deferred intelligence[55], transferred from humans to computers through programming. This deferred intelligence produces emergent intelligence when it comes into contact with the immediate intelligence of the human mind. Effective intelligence, that is, the most virtuous and creative intelligence for human activities, arises fundamentally from the synergy between the two. This synergy is not malicious or negative in itself. Nor is it neutral, insofar as it has profoundly changed the cultural and societal environment over several decades. But it is above all the use of this synergy in the service of a Machiavellian project that is at the heart of the dynamic described here.

The purpose of this technostructure fundamentally refers to a project of systemic control over individuals and society. In order to gain the support and control of the masses, it conceals its purpose behind a conceptual and ethical veneer, taking advantage of the contemporary computer system. This project closely combines the search for consent and the subversion of traditional systems of governance. Artificial intelligence, located at the strategic frontier between information, automation, and intelligence, is a preferred mode of action. The conceptual origin of this design, which goes far beyond the scope of this note, remains a key element in understanding it as a whole. A vast conglomerate of organizations and individuals, including the major unicorns of the digital economy, are participating in it. Given its nature, scale, and modus operandi, this project appears to be the most sophisticated control engineering project in modern history.

The third industrial revolution, which began with microcomputing in the 1970s, is now embroiled in an unprecedented conflict. It has become irreversibly entangled with ambitions of global supremacy, geo-economic control, and the domination of societies and individuals. At a time when this agenda is taking a turn, isn't it time to shift gears and respond to such obscurantism?

Notes

1 One hundred million users in just a few weeks for ChatGPT.
2 The Gentle Singularity, Sam Altman.
3 Brace for a crash before the golden age of AI at Financial Times.
4 Artificial intelligence could be as dangerous as “pandemics or nuclear war,” according to industry leaders at Le Monde.
5 The Delusion at the Center of the A.I. Boom at Slate.
6 Welcome to the AI trough of disillusionment at The Economist.
7 MIT report: 95% of generative AI pilots at companies are failing at Fortune.
8 AI for Good, Escape Key.
9 Action Plan for a Sustainable Planet in the Digital Age, UNEP.
10 Action Plan for a Sustainable Planet in the Digital Age, UNEP.
11 Towards a UN Role in Governing Foundation Artificial Intelligence Models, UNU.
12 Precision Agriculture Revolution: Integrating Digital Twins and Advanced Crop Recommendation for Optimal Yield, Banerjee, S., Mukherjee, A., Kamboj, S. (2025).
13 Digital twins for health: a scoping review at NPJ.
14 Digitalization of urban multi-energy systems – Advances in digital twin applications across life-cycle phases at ScienceDirect.
15 Intervention analysis is a statistical technique used to assess the impact of a specific intervention or event on a set of time series data.
16 Digital twins of the Earth with and for humans at Nature.
17 Destination Earth.
18 Towards ecosystems of connected digital twins to address global challenges at Zenodo.
19 Digital Twin Hub.
20 Sustainable AI and the third wave of AI ethics: a structural turn at Springer Nature.
21 Artificial Intelligence, Power and Sustainability on Data Ethics.
22 Embedding Ethical Oversight in AI Governance through Independent Review at Responsible AI Institute.
23 Towards Developing Ethical Reasoners: Integrating Probabilistic Reasoning and Decision-Making for Complex AI Systems at arXiv.
24 Ethical Decision-making in AI systems: a reinforcement learning framework for moral at ResearchGate.
25 Reinforcement learning.
26 Artificial Intelligence, Ethics, and Society at AAAI.
27 Coalition for Sustainable Artificial Intelligence.
28 Benchmarking LLMs for Political Science: A United Nations Perspective at arXiv.
29 AI Language Models Could Both Help and Harm Equity in Marine Policymaking: The Case Study of the BBNJ Question-Answering Bot at arXiv.
30 Executive Order 14110.
31 Harnessing AI for efficient analysis of complex policy documents: a case study of Executive Order 14110 at arXiv.
32 Towards a UN Role in Governing Foundation Artificial Intelligence at UNU.
33 Ziegler, M., Lothian, S., O'Neill, B., Anderson, R., Ota, Y. (2024). Op. cit.
34 CBDC – How Dangerous is Programmability? at Duke.
35 Unpacking the ‘15-Minute City’ via 6G, IoT, and Digital Twins: Towards a New Narrative for Increasing Urban Efficiency, Resilience, and Sustainability. Sensors at MDPI.
36 Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Executive Order 14110 of October 30, 2023, at Federal Register.
37 White House AI Memo Addresses National Security and AI at Hunton.
38 IoT-Based Framework for Digital Twins in the Industry 5.0 Era at MDPI.
39 Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society at AAAI.
40 Sustainable AI Coalition.
41 Sustainable AI and the third wave of AI ethics: a structural turn at Springer Nature.
42 Neuralink.
43 Brain-Computer Interfaces, UK Parliament Post.
44 Brain-Computer Interfaces, United States Government Accountability Office.
45 Neuralink Technology.
46 International Conference on the Ethics of Neurotechnology at UNESCO.
47 The Great Inversion, Escape Key.
48 AI for Good Governance at Globethics.
49 AI for Good.
50 Statement on AI Risk at Center for AI Safety.
51 An Overview of Catastrophic AI Risks at Center for AI Safety.
52 AI for Good.
53 How US and China collaborate on AI despite rivalry at The National.
54 Patrick Patterson's post on X.
55 Blanc, P., Chevalier, H., Corniou, J-P., Lorphelin, V., Volle, M. (2018). Élucider l’intelligence artificielle. Institut de l’iconomie.