In the 1980s, Margaret Thatcher popularized the slogan TINA: There Is No Alternative. With it, the Prime Minister of the United Kingdom argued that liberal capitalism was the only viable path toward the future, a conviction that seemed vindicated when the Soviet Union officially dissolved in December 1991.

Decades later, that same rhetoric seems to have found new life in the world of technology: “There’s no alternative to A.I.” From business to education, from art to healthcare, the message is clear: resist, and you’ll be left behind. The speed of innovation no longer feels like progress—it feels like a mandate.

Yet, just as Thatcher’s statement once disguised ideology as inevitability, today’s AI narrative hides a similar logic. We are told that automation, algorithmic governance, and machine learning are simply the way things are going to be. Questioning them is perceived as naïve, as if the only responsible response is to adapt, not to doubt.

This mindset echoes what Mark Fisher calls capitalist realism in his book Capitalist Realism: Is There No Alternative? Fisher argues that slogans like Thatcher’s TINA have created a cultural atmosphere where it is easier to imagine the end of the world than the end of capitalism—or, today, the unquestioned rise of A.I. The notion that there is no alternative shapes not only our economic and political thinking but also our imagination, conditioning us to accept systems as inevitable rather than choices we can critique or change.

But convenience has always been a powerful seducer. A.I. promises efficiency, precision, and even creativity—while quietly redefining what it means to be human. As we outsource more decisions, tasks, and even feelings to machines, the question is no longer whether AI can replace us, but what parts of ourselves we’re willing to give away in exchange for comfort.

Still, what’s wrong with having a little extra time and enjoying the benefits A.I. brings? The answer requires a moment of reflection: what has happened to genuine critical thinking?

In 1970, Paulo Freire published Pedagogy of the Oppressed, a work that examines the relationship between oppressor and oppressed. In it, he emphasizes conscientization, the process by which the oppressed become aware of their situation, and advocates a dialogical, liberating education that challenges oppression rather than reproducing it. In simple terms, Freire argued that memorizing the capital of a country, for example, does little good if you don’t understand why that city was chosen as the capital. By the way, do you know why your country’s capital was chosen?

We can apply that same logic to A.I. and to the way it has been marketed to make us believe it will bring only benefits. One claim I would like to examine here echoes a promise made to us before: that A.I. will democratize film, education, and more. But we should pay attention to the fact that the verb “democratize” emerged in the 19th century and means to make something accessible, participatory, or controlled by the people rather than by a small elite—precisely the group now leading A.I. development. In theory, this sounds empowering.

But history—particularly our experience with the Internet—shows that mere access does not guarantee actual learning; the real democratization comes from human connection and a sense of community—something that capitalism has taken away from us.

While you can access articles, magazines, and online courses, the Internet has done very little to ensure meaningful engagement or understanding. Why? Because the information, precisely by being so easily accessible, offers little incentive to focus deeply. Algorithms study you and, based on your preferences, serve up content tailored to your interests, keeping you stuck in a cycle of passive consumption. So what is the advantage of having access to all information if an invisible hand dictates what you see, read, or watch? Democratization in name does not always translate into democratization in practice—power still shapes what we consume and how we think.

A.I. is only a problem if we fail to become critical thinkers, constantly questioning whether we need it and deliberating about when it actually serves us, instead of letting convenience dominate our choices.

The seduction of A.I. is undeniable, but convenience should never replace critical thought. Just as Freire urged the oppressed to understand the structures shaping their reality, and as Fisher warned about the ideological dominance of capitalist realism, we must understand the systems shaping ours. A.I. is a tool—not a mandate—and the true challenge lies not in mastering technology, but in mastering the capacity to think, question, and choose. The promise to democratize knowledge and culture is alluring, but only critical, reflective thinking will ensure that this democratization actually empowers rather than pacifies.

Always remember: the future may offer convenience, but true power lies in thinking critically, especially in the age of A.I. If everyone uses the same tool to produce content, art, and education, everyone will fit into the same box. I dare say the future will belong to those who craft their knowledge and own it. Don’t fall for the promise that A.I. will democratize things—at least not this time.