As generative AI becomes increasingly embedded in our lives, from automated news summaries to AI-generated academic writing, we face a profound challenge to long-standing notions of truth and objectivity. Traditionally, objectivity has been understood as the capacity to represent reality accurately, free from bias or distortion. Scientific facts, journalistic reporting, and legal judgments all rely on the belief that it is possible—and necessary—to separate subjective opinions from objective truths. Yet in a world where generative AI systems can produce plausible, coherent, and even persuasive outputs without any connection to reality, we must ask: What does objectivity mean when truth can be simulated?

This essay explores how generative AI creates what we can call a crisis of objectivity. The crisis does not stem from a simple failure of AI to tell the truth, nor from the risk that AI might "lie" or mislead us. Rather, it arises because AI produces outputs that resemble objective knowledge but are, in fact, detached from the processes that traditionally grounded knowledge claims—observation, verification, and accountability. In doing so, generative AI challenges us to rethink how objectivity functions in technologically mediated societies and how meaning is stabilized when simulation blurs the line between the real and the artificial.

The historical roots of objectivity

To understand the nature of this crisis, we need to briefly revisit the historical roots of objectivity. In modern science, objectivity emerged as a guiding ideal in response to the problem of bias. Enlightenment thinkers sought methods that would allow individuals to transcend their personal perspectives and arrive at shared, verifiable truths about the world. This led to the development of systematic observation, experimentation, and peer review—mechanisms designed to ensure that knowledge was not merely subjective opinion but had some claim to universality.

Journalism followed a similar path, especially in the 19th and 20th centuries, with the rise of professional standards that emphasized factual reporting, source verification, and editorial independence. Objectivity became a social value, linked to trust in institutions, expert knowledge, and public reason.

But these systems of objectivity were always mediated—by language, by institutions, and by technologies of communication. The printing press, photography, radio, and television each played a role in shaping how objectivity was practiced and perceived. And with each new medium came new tensions: who controls the means of representation? What counts as evidence? How do we verify what we see or hear?

Generative AI and the simulation of knowledge

Generative AI introduces a new form of mediation—one that produces simulated knowledge. Unlike earlier media, which transmitted representations of observed reality (a photograph of an event, a report of a study), generative AI creates expressions that appear to be grounded in knowledge but are in fact assembled from statistical patterns learned from training data. These outputs are not lies in the traditional sense, because they are not deliberate deceptions. Nor are they errors in need of correction, because they often cannot be traced back to a factual claim that was misrepresented. They are hallucinations—convincing simulations that lack a direct referent in the world.
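To make "statistical patterns learned from training data" concrete, consider a deliberately tiny sketch: a bigram model that samples each next word from transition probabilities. The vocabulary and weights below are invented for illustration and stand in for the vastly larger statistics of a real language model; the point is that nothing in the sampling loop consults the world, only the frequencies of prior text.

```python
import random

# Toy bigram "language model": each word maps to candidate next words
# with probabilities. In a real system these would be learned from a
# corpus; here they are hand-set purely for illustration.
BIGRAMS = {
    "<s>":    {"the": 1.0},
    "the":    {"study": 0.6, "report": 0.4},
    "study":  {"shows": 0.7, "found": 0.3},
    "report": {"shows": 0.5, "found": 0.5},
    "shows":  {"that": 1.0},
    "found":  {"that": 1.0},
    "that":   {"the": 0.4, "coffee": 0.6},
    "coffee": {"cures": 1.0},
    "cures":  {"insomnia": 1.0},
}

def generate(seed=0, max_len=10):
    """Sample a sentence token by token from bigram statistics.

    The output reads fluently because each transition is statistically
    plausible, not because the resulting claim refers to any observed
    fact. No step here verifies anything.
    """
    rng = random.Random(seed)
    token, out = "<s>", []
    while token in BIGRAMS and len(out) < max_len:
        nxt = BIGRAMS[token]
        token = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(token)
    return " ".join(out)
```

A sampled sentence such as "the study shows that coffee cures insomnia" has the grammatical shape of a knowledge claim while being, in the essay's terms, a simulation without a referent.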

This creates a problem for objectivity as traditionally conceived. If AI-generated content can mimic the form of objective discourse—scientific papers, news articles, legal documents—without engaging in the epistemic practices that justify truth claims, then the authority of objectivity itself is undermined. We are left with content that looks objective but lacks the grounding that makes objectivity meaningful.

Systems theory and the relativity of truth

From the perspective of systems theory, particularly as developed by Niklas Luhmann, this situation invites a different approach. Luhmann argued that truth is not a static property of statements but a communicative operation—a way that social systems stabilize meaning by distinguishing between true and false within particular contexts. Truth, in this sense, is not absolute but relational; it depends on the procedures and expectations of the system in which it is used.

Generative AI disrupts these procedures because it operates outside the traditional systems of knowledge production. It does not participate in the scientific method, journalistic standards, or legal reasoning. Yet its outputs enter these systems, forcing them to adapt. The crisis of objectivity, then, is not a collapse of truth, but a transformation in how truth is produced and recognized. It challenges us to develop new ways of distinguishing between simulated and grounded meaning, between plausible appearance and justified belief.

Toward a dynamic objectivity

Rather than clinging to a static notion of objectivity that no longer fits our mediated reality, we can begin to envision a dynamic objectivity—one that acknowledges the role of mediation and simulation in contemporary knowledge but seeks to re-anchor truth in processes of critical engagement. This means cultivating a reflexive awareness of how AI generates its outputs, demanding transparency in algorithmic design, and fostering human capacities for interpretation and judgment.

Dynamic objectivity is not about rejecting AI but about integrating it responsibly into our meaning-making systems. It requires us to see objectivity not as a property of texts or data, but as an ongoing social practice—one that now must include an understanding of how AI simulations influence what we take to be true.

Preparing for the third essay: knowledge as interaction

In the next essay, Retrieval-Augmented Generation and the Future of Knowledge, we will explore how new AI architectures combine generation with retrieval to create hybrid forms of knowledge. These systems retrieve material from large external knowledge sources and condition their generated expressions on it, blending the simulation of meaning with connections to established knowledge sources. We will examine how this changes our understanding of knowledge from something stored and transmitted to something co-constructed in real-time interactions between humans and machines.
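The retrieve-then-generate shape of such systems can be sketched minimally. Everything below is an illustrative stand-in: the toy corpus, the bag-of-words retrieval scoring, and the templated "generation" step substitute for a real retriever and language model, but the pipeline's structure is the point.

```python
# A minimal retrieve-then-generate pipeline (illustrative only).
DOCS = [
    "the national survey measured reading habits in four countries",
    "photosynthesis converts light energy into chemical energy",
    "the survey found reading time declined among young adults",
]

def retrieve(query, docs, k=2):
    """Rank documents by bag-of-words overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def answer(query, docs):
    """Condition the 'generated' answer on retrieved sources.

    Unlike pure generation, the output carries the passages it drew
    on, re-anchoring the expression to established sources.
    """
    sources = retrieve(query, docs)
    return {
        "answer": "According to the retrieved sources: " + "; ".join(sources),
        "sources": sources,
    }
```

The design choice worth noticing is that the generation step receives retrieved passages as input and can cite them, which is what distinguishes this hybrid form from the free-floating sampling of a pure generative model.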

Together, these essays form a conceptual foundation for understanding how generative AI reshapes the symbolic structures of society. They offer a way to navigate the complex interplay between simulation, truth, and meaning in an era where the boundaries between human and machine expression are increasingly porous.