While AI, or more specifically large language models (LLMs), is becoming an everyday reality for most people, the truth is that LLMs have already been an integral part of many social sectors for a while now. Used as a tool, LLMs can increase productivity and reduce complex, repetitive, and dull tasks to a few minutes of your time. Moreover, once you’ve tailored them to your needs and style, they will continuously lighten your workload. In that respect, LLMs are truly wonderful.
And these changes are happening everywhere, even if we’re not in direct contact with them. You might have noticed that programmers are in a strange spot right now, with layoffs announced or already carried out across the industry. Academia is struggling not only because students are using LLMs for their assignments, but also because of equally serious concerns about AI-written reviews of scientific articles. The customer service sector is experiencing the same change: why hire people for mail and chat when a bot can, for the most part, do the job equally well?
But I want to focus on something most of us are in contact with every day, and that is the media, like news outlets, social networks, or everyone’s favorite—the comments section below the article, post, or video.
Dead internet practice
Dead Internet Theory claims that most of the internet is no longer produced by real humans but by bots, automated content, and AI, creating an illusion of active online life.
This illusion is made by “content farming” companies, troll farms, or advertisement campaigns that artificially boost the metrics of a page, post, or channel. The basic idea is always this: based on artificial activity (likes, posts, or articles), companies or political agendas will create an illusion of relevancy. The fact that this is uncritically accepted or that it’s a built-in feature of all the major platforms like Google and Facebook is a whole different issue.
But what we want to focus on is the actual practice of creating the Dead Internet. Below are a few symptoms of this ailment, as we discuss how to recognize AI-generated content.
The hook
Have you ever wondered why so much faceless YouTube content starts with a “Have you ever wondered” sentence, followed by a promise of a surprising revelation just about halfway through the video or a text? What if I told you that this wasn’t a mere coincidence, but that the follow-up sentences, like “What if I told you that this wasn’t a mere coincidence,” are meant to intrigue you enough to stay engaged with the content?
This is usually narrated by a monotonous, recognizable voice that fits the character of the given topic. There are plenty of text-to-speech options, some made by Google, that can produce perfectly dictated videos. Some more examples of these hooks include:
“Imagine for a moment that…”
“Before you scroll away, consider this.”
“Have you ever…?”
These question-based hooks trigger curiosity, create a sense of missing knowledge, and make viewers feel personally involved. This structure lowers resistance by inviting the audience into a guided thought process rather than presenting a claim outright. Ultimately, these patterns persist because they reliably capture attention and because AI models learned them from the most viral, engagement-driven content online.
This structure of intriguing you and promising a big revelation is used because it works. This does not have to be a bad thing, but when paired with the following, it can be.
Promise of grandeur
AI content is not just about providing engaging information. It’s about enslaving humankind to AI overlords, and you’re the one who understands this.
(I am joking; that is clear enough, hopefully.)
This is the sentence structure of “not just about x” (with x being the discussed topic, for example, AI usage) plus “it’s about y” (with y being a wider context that justifies the importance of x). It’s usually followed by an implication that the consumer is the one who understands this, as if consuming that specific content were already a conscious, important act.
Of course, this structure can at times be silly, but at other times, following its logic is what feeds AI-induced psychosis, a newly documented phenomenon.
The rise of the em dash
Hyphens, en dashes, and em dashes are three punctuation marks that serve different purposes. A hyphen (-) connects words or parts of words, as in "well-being" or "state-of-the-art." An en dash (–) usually indicates a range or relationship between two values, like 2020–2025 or a New York–London flight. An em dash (—) creates a strong break in a sentence, adding emphasis or an explanatory pause.
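Since each of these marks is a distinct Unicode code point, they are easy to tell apart programmatically; a minimal Python sketch:

```python
# Each dash is its own Unicode code point, which is what makes the
# em dash easy to search for in a text.
marks = {"hyphen": "-", "en dash": "\u2013", "em dash": "\u2014"}

for name, ch in marks.items():
    print(f"{name}: U+{ord(ch):04X}")
```

A plain text search for `"\u2014"` (or simply pasting the — character into a search box) is enough to count em dashes in any document.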
And I think everyone will recognize the em dash now, as its use has been popularized by AI tools and grammar checkers. I do think we can all agree that AI making texts more readable and grammatically correct is, in itself, a positive.
So the em dash is a solid indicator that a text is AI-generated or AI-processed—especially if the text is not written in English.
Pew Research Center
In recent years, studies from the Pew Research Center have increasingly appeared across digital content, from news articles to blogs and social media posts. This rise is partly due to the growing demand for credible, fact-based information in an era of rapid information flow and AI-generated content. Automated writing tools and AI-assisted articles often cite Pew as a trusted source, amplifying its visibility even further. As a result, Pew’s research has become a go-to reference for writers and content creators seeking authoritative data on topics like social media use, demographics, and global trends.
Again, if used correctly, there’s nothing wrong with this; it’s just an indicator that the content might be AI-generated. If used wrongly, it’s merely an authoritative-sounding reference meant to make a text more credible, especially if the cited Pew Research Center study is something the AI hallucinated.
Lists with emoticons
Finally, my least favorite, and a favorite of LinkedIn, Facebook, and Instagram entrepreneurs. When summarizing texts, in order to make them more captivating, readable, or approachable, content creators use emoticons instead of bullet points: the bullseye emoticon for the final point, the magnifying glass when “researching” something, the plant emoticon if the focus is on growth, and a checkmark to confirm our understanding of the text.
And to paraphrase a great guru’s opinion on PowerPoint—there is no emotion behind those emoticons, and there is no point to those points. For LinkedIn, especially, it’s AI-generated content replying to AI-generated content.
Conclusion
Understanding these patterns of AI-generated content is not just about becoming better consumers of online information—it's about preserving our ability to distinguish authentic human communication from manufactured engagement.
Here are the key indicators to watch for:
The hook: formulaic openings like "Have you ever wondered," "What if I told you," or "Before you scroll away, consider this" that create artificial curiosity and promise revelations.
Promise of grandeur: the "not just about X, it's about Y" structure that elevates simple topics into grand narratives, making readers feel they possess a special understanding.
The rise of the em dash: overuse of em dashes (—) for emphasis and breaks, especially noticeable in non-English texts that have been AI-processed.
Pew Research Center: frequent citations of this authoritative source, sometimes even hallucinated references, are used to boost credibility.
Lists with emoticons: bullet points replaced with emojis (a bullseye for final points, a magnifying glass for research, a plant for growth, a checkmark for confirmation) that create an illusion of engagement without genuine emotion.
While AI tools can genuinely enhance productivity and readability, understanding these patterns helps us distinguish between content created with human intent and that generated to merely fill space or manipulate engagement metrics.
The Dead Internet Theory may sound dystopian, but its practice is already here, quietly reshaping how we consume information. By staying aware of these indicators, we can make more informed choices about what content deserves our attention and preserve spaces for authentic human expression in an increasingly automated online world.
A real conclusion
This list is not final, but I think it offers some good indicators. With some experience, you will probably become aware of the fluff patterns AI tools use, as well as slips by the authors, like when they forget to fill in an AI-generated template’s [reference] placeholder. Bots in the comment section will show language inconsistencies and a lack of nuance (like writing a literal translation of an idiom). Maybe you have already noticed these changes but attributed them to general internet style rather than to AI tools.
I’ll once again emphasize that AI writing tools can be wonderful, useful, etc. I also think it’s great that so much knowledge can be systematized, reproduced, and arranged in an understandable, approachable way. The tools themselves aren't inherently problematic. AI can help us write more clearly, research more efficiently, and communicate more effectively.
The concern arises when these patterns become so pervasive that they replace genuine human thought and expression, when entire comment sections, news feeds, and social platforms become echo chambers of automated responses talking to automated content. At that point, we exist for the interest of companies owning the AI, not the other way around.
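For the curious, the indicators discussed above can even be sketched as a toy heuristic. The phrase lists, patterns, and weights below are my own illustrative choices, not a vetted detector; treat the counts as rough signals, nothing more:

```python
import re

# Toy scorer for the indicators discussed in this article.
# Phrase lists and patterns are illustrative assumptions, not a real model.
HOOKS = [
    "have you ever wondered",
    "what if i told you",
    "before you scroll away",
    "imagine for a moment",
]
# "not just about X ... it's about Y" structure
GRANDEUR = re.compile(r"not just about .+?it'?s about", re.IGNORECASE | re.DOTALL)
# lines that open with an emoji or checkmark instead of a bullet point
EMOJI_BULLETS = re.compile(r"^[\U0001F300-\U0001FAFF\u2705\u2714]", re.MULTILINE)

def ai_style_signals(text: str) -> dict:
    """Count rough matches for each indicator; higher counts mean more suspicion."""
    lowered = text.lower()
    return {
        "hooks": sum(phrase in lowered for phrase in HOOKS),
        "grandeur": len(GRANDEUR.findall(text)),
        "em_dashes": text.count("\u2014"),  # the em dash (U+2014)
        "pew_citations": lowered.count("pew research center"),
        "emoji_bullets": len(EMOJI_BULLETS.findall(text)),
    }
```

Running it on a paragraph that opens with a hook, pivots with “not just about x, it’s about y,” and leans on an em dash and a Pew citation would flag every signal at once, which is exactly the clustering this article describes.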