The spread of false information to the masses can sometimes cause more damage than a bullet. What we now call "disinformation," that is, deliberately misleading information, has without exaggeration become a weapon. Lies disguised as news are used to polarize societies, sabotage democratic processes, and upset international balances. This is no longer mere misinformation, the honest journalistic error or innocent rumor; it is launched deliberately, like a planned piece of ammunition. That is why defining disinformation as "not news, but a weapon" is appropriate. With the instant, unlimited reach of social media, disinformation can now travel from one country to the entire world within seconds. During the Cold War, propaganda was carried out through leaflets and radio broadcasts; today it has been replaced by digital psychological warfare waged with bot accounts and AI-generated fake content.
One of the most striking actors of this new era is Russia. The Soviet-inherited tactics of “dezinformatsiya” are systematically implemented both domestically and internationally by the Kremlin today. Russia has elevated disinformation to the level of state policy. Domestically, through state-controlled television channels and official news agencies, it bends the truth for its own people. For example, it tries to justify the war it launched against Ukraine by calling it a “special military operation” and spreads fabricated victory stories to cover up failures at the front. Independent media are either silenced or discredited as “foreign agents” in the eyes of society. Abroad, Russia uses the weapon of disinformation even more aggressively.
Especially in Western countries, Russia attempts to shape public perception to suit its own interests through what can fairly be described as social engineering, relying heavily on social media and digital platforms. Russian troll armies and automated bot accounts conduct coordinated campaigns on platforms like Facebook and Twitter. It is now well documented that Russian trolls interfered in the 2016 U.S. presidential election: fake accounts pushed opposing sides of sensitive issues like race, religion, and gun rights to divide Americans. Similar Kremlin-backed disinformation campaigns have been identified across Europe.
One clear example is the fabricated "Lisa case" in Germany in 2016: the false story that a 13-year-old German girl of Russian descent had been kidnapped and raped by migrants was circulated heavily by Russian media outlets. The story was quickly disproven, but it was a disinformation attempt aimed at stirring outrage among the Russian diaspora and the wider German public. Through this incident, Berlin bitterly realized Moscow's intention to destabilize Europe from within. Today, Russia has developed these tactics further. Kremlin-backed groups set up fake news websites and use a technique dubbed "doppelgänger," creating counterfeit copies of reputable media outlets to spread fabricated stories. For example, they publish fake content on replica websites of newspapers like Der Spiegel or Le Monde and circulate it on social media via hundreds of bot accounts.
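How might such counterfeit sites be caught? Below is a minimal, purely illustrative sketch in Python of one defensive idea: flagging domains whose spelling sits suspiciously close to a known outlet's, using simple string similarity. The outlet list, suspect domains, and threshold are all assumptions invented for this example, not data from any real campaign.

```python
# Illustrative sketch: flagging "doppelgänger"-style lookalike domains.
# The outlet list, suspects, and threshold are assumptions, not real feeds.
from difflib import SequenceMatcher

KNOWN_OUTLETS = ["spiegel.de", "lemonde.fr", "bild.de"]  # trusted originals

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

def flag_doppelgaengers(candidates, threshold=0.75):
    """Return (candidate, outlet, score) for near-matches that are not exact."""
    hits = []
    for domain in candidates:
        for outlet in KNOWN_OUTLETS:
            score = similarity(domain, outlet)
            if domain != outlet and score >= threshold:
                hits.append((domain, outlet, round(score, 2)))
    return hits

# Hypothetical suspects, loosely modeled on publicly reported patterns.
print(flag_doppelgaengers(["spiegel.ltd", "lemonde.ltd", "weather-berlin.de"]))
```

Real detection pipelines combine many more signals, such as registration data, hosting, and content fingerprints, but the core idea of measuring proximity to a trusted brand is the same.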
For the attackers, the goal is to make lies look like legitimate news and convince a wider audience. In 2024, a similar campaign reportedly re-emerged during Germany's election campaign: fake websites spread fabricated scandals targeting the German government, while pro-Russian narratives favoring the far-right AfD party, known for its sympathies toward Moscow, gained traction. Russian trolls tried to plant seeds of doubt in the German electorate with messages like "The West's support for Ukraine is bringing your country to ruin."
The hallmark of this disinformation flood is its aim to collapse societies from within: to break people's trust in news sources and to foster the feeling that you can't trust anyone, that truth no longer exists. This is, in essence, the core of the Kremlin's strategy. When people in a disinformation-saturated environment begin to view one another with suspicion and lose faith in democratic institutions, Russia has achieved its goal: once trust collapses, a society becomes easy to manipulate through engineered narratives.
China, too, takes a distinctive approach to weaponizing disinformation at the state level. Like Russia, Beijing places great importance on controlling the truth, but with a different tone. The Chinese Communist Party's decades-old machinery of propaganda and censorship continues in full force in the digital age. It is no coincidence that platforms like Facebook, Twitter, and YouTube are banned in China. Through the "Great Firewall," China has sealed its information ecosystem off from the outside world, which allows the government to suppress most voices that challenge the party's narrative. Chinese citizens consume news mostly from state-approved sources and have limited access to differing viewpoints.
Furthermore, there is the massive online comment army known as the “50 Cent Army.” This group, whose existence has been indirectly acknowledged by official sources, is believed to include hundreds of thousands of part-time or full-time members. Their task is to act like ordinary citizens online and spread pro-party messages. Studies have shown that these trolls generate hundreds of millions of misleading social media posts each year. What’s notable is that this content is usually not overly negative or aggressive; instead, it tends to praise the Chinese government, portray it as successful, and divert public attention to positive stories.
Unlike Russia's chaos-driven disinformation, China's strategy usually aims to paint a rosy picture or create distractions. If there is an economic crisis or a scandal at home, for example, officials and fake accounts quickly move to steer the online conversation toward a different topic. This is a form of social engineering: defusing public anger before it builds by replacing it with nationalist sentiment or tales of success. And while China uses the power of disinformation effectively within its borders, in recent years it has also expanded its propaganda and misinformation activities abroad.
For a long time, Beijing was not as aggressive as Russia in spreading outright fake news internationally; it focused more on polishing its own image and deflecting criticism. But as the China-U.S. rivalry and global information warfare intensified, it too began conducting aggressive influence operations on social media. In a 2023 report, Meta (the parent company of Facebook) revealed that China had risen to third place, after Russia and Iran, in the number of foreign influence campaigns the company has detected and removed. Fake account networks originating in China were found operating in regions from Africa to the Americas and from Asia to Europe. These networks generally aimed to promote China's interests and narratives: deflecting criticism over human rights violations, discrediting critics of the Chinese government, and persuading global audiences to support Beijing's position on sensitive issues such as Taiwan.
For instance, Meta identified a group of thousands of fake Facebook accounts posing as Americans and mimicking U.S. politicians. These fake accounts reposted content copied from real politicians, conducting a subtle perception operation. Although the method might seem odd at first glance, the goal was to deepen societal fractures in the U.S. or at least create confusion. Compared to Russia, China’s influence campaigns generally receive less engagement, as many of them appear too organized and artificial, almost like spam.
However, this hasn't deterred Beijing. On the contrary, there are signs that China is investing in more technologically sophisticated methods. One of the most critical issues on the horizon is AI-generated disinformation. While deepfake technologies are tightly regulated for domestic use in China, the authorities appear willing to use these tools in operations abroad. A concrete example emerged during Taiwan's 2024 elections, when manipulated videos and images circulated online portraying the Taiwanese government and anti-China politicians in a negative light; some of this material was believed to have been generated with artificial intelligence.
In other words, China has shown no hesitation in deploying abroad the very "fake video" technology it polices at home, as long as doing so serves its strategic interests. With deepfakes, it is now possible to create footage of political figures seemingly saying things they never said, which foreshadows the alarming direction in which disinformation may evolve. Indeed, we have seen similar techniques from Russia. Not long ago, a fake video circulated on pro-Russian channels showing someone impersonating a U.S. State Department official and delivering a fabricated statement.
This video, most likely AI-generated, was presented by Russian media as authentic and was only debunked after in-depth analysis. In a world where technology can create such convincing illusions, the old instinct of "seeing is believing" becomes obsolete. As the line between truth and fabrication blurs, the weaponizing power of disinformation only grows. Against this backdrop, democratic countries find themselves in a striking bind. Established democracies like Germany and France are prime targets of disinformation campaigns, yet they must counter these threats within the framework of the rule of law.
Compared to authoritarian systems, democracies can be more vulnerable to disinformation because open societies avoid strict control over information flows. The principle of free expression allows both truth and falsehood to circulate, inadvertently creating fertile ground for malicious actors: a propaganda piece produced in Russia or China can be disseminated freely via social media in Germany or France. Still, democracies are not sitting idly by. Germany has taken significant steps in recent years, especially against Russian disinformation. The Network Enforcement Act (NetzDG), passed in 2017, obliges large social media platforms to swiftly remove illegal content such as hate speech and defamatory fake news.
Security agencies also warn the public during election periods, urging caution against foreign manipulation attempts. Recently, Germany's domestic intelligence agency (BfV) published a report warning that young people exposed to Russian and Chinese propaganda on platforms like TikTok may adopt more pro-Kremlin views. According to the report, German youth who used TikTok tended to view Russia's invasion of Ukraine less critically and were even more inclined to favor China's authoritarian system over democracy. The findings caused concern in Berlin because they demonstrated the deep, indirect impact of disinformation.
France, on the other hand, has learned some hard lessons. During the 2017 elections, cyberattacks and disinformation campaigns targeting Emmanuel Macron’s campaign prompted Paris to take pioneering steps in Europe. Macron directly accused Russian state media outlets like RT and Sputnik of spreading fake news and took a firm stance against them. After winning the election, he revoked their Élysée Palace accreditations, sending a clear message: zero tolerance for disinformation.
One defining incident of that period was the "MacronLeaks" affair: a trove of campaign emails and internal documents was dumped online in a last-minute attempt to influence the election. The French media and institutions, however, handled the situation with composure. News outlets refrained from amplifying the leaks, observing the pre-election media blackout; online platforms were cautious about spreading the suspicious content; and Macron's team, anticipating such an attack, had even planted fake information in its own files in advance to discredit the perpetrators. In the end, the disinformation "bomb" failed to affect the political outcome.
Encouraged by this experience, France passed a controversial "anti-fake news" law in 2018. The law grants judges the authority to order the swift removal of manifestly false content from the internet during election periods, and it allows regulators to halt broadcasts by foreign state-controlled media found to be spreading deliberate disinformation. Unsurprisingly, the law drew criticism on press freedom grounds; some feared it could be used to silence dissent. In practice, however, it has mainly been applied against overt propaganda outlets like RT.
Fighting disinformation in a democratic society requires constant balancing: protecting freedoms while defending the truth. Within that balance, France has not hesitated to take strict measures to protect itself. All these examples reveal the stark difference in how authoritarian and democratic regimes approach disinformation. In authoritarian states, disinformation becomes a weapon powered by the state: Russia and China turn public perception into the target of strategic operations, backed by vast manpower and technological investment.
Their strength lies in centralized coordination: hundreds of thousands of accounts can spread the same message at a single command, and the national media are so tightly controlled that alternative voices fall silent. Even educational and cultural institutions are aligned with the official narrative. In short, the state becomes a full-scale propaganda machine. Yet the weakness of such regimes lies precisely in this enforced uniformity. Blocking people's access to the truth may not be sustainable in the long run: however strict the censorship, at some point people begin to ask questions or turn to outside sources.
Moreover, authoritarian leaders can become so immersed in the disinformation they produce that they begin making strategic miscalculations. A regime that portrays the front as one unbroken success, for instance, may overestimate its military capacity and embark on reckless foreign policy moves. Systems built on lies are, in fact, highly fragile. Democratic regimes, despite starting this struggle at a disadvantage, are able to develop resilience precisely because they rest on public participation and transparency. Yes, open societies give disinformation room to spread more easily, but that same openness also allows falsehoods to be exposed.
A free press, independent researchers, civil society organizations, and even individuals can question claims and uncover lies. Since state institutions are held accountable, excessive propaganda by governments can trigger backlash and ultimately fail. For example, during the COVID-19 pandemic in Germany, conspiracy theories and anti-vaccine disinformation initially influenced many people. However, through the efforts of scientists and journalists, much of it was eventually debunked, and the public once again recognized the value of reason and evidence. Still, things are far from perfect on the democratic front.
Disinformation thrives by exploiting the very sensitivities of liberal societies: it weaponizes freedom of speech and slips like a Trojan horse through the legal gaps that democratic principles leave open. The global structure of social media makes it difficult for nation-states to regulate the flow of information on these platforms. A platform operating within the rules in the U.S. or Europe can still be flooded with thousands of fake accounts originating in Russia or China. Removing them in time is not always technically or legally possible, and by the time action is taken, the disinformation has often already done its work.
The technological dimension presents an even greater challenge for every country. Bot accounts, algorithms, and artificial intelligence dramatically amplify the effects of disinformation. For powerful actors with enough automated accounts, making a fake narrative trend on platforms like Twitter (now X) or Facebook is child's play: when thousands of bots post simultaneously using the same keywords, the topic quickly climbs the trending lists, creating the illusion of genuine public interest. Recommendation algorithms on platforms like YouTube are designed to prioritize whatever earns the most engagement, and fake news, being provocative and sensational, often earns the most. The result is that the algorithms, without anyone intending it, end up favoring disinformation.
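To make this mechanism concrete, here is a toy simulation in Python, under two stated assumptions: a feed that ranks purely by engagement, and posts whose click-through probability rises with how sensational they are. All the numbers are invented for illustration; no platform data is involved.

```python
# Toy simulation: engagement-only ranking amplifying sensational content.
# All figures are illustrative assumptions, not platform data.
import random

random.seed(42)

posts = (
    [{"kind": "sober news", "sensationalism": 0.2} for _ in range(50)]
    + [{"kind": "fake outrage", "sensationalism": 0.9} for _ in range(50)]
)

VIEWS = 1000
for post in posts:
    # Assumption: each view converts to engagement with probability
    # equal to the post's sensationalism score.
    post["engagement"] = sum(
        random.random() < post["sensationalism"] for _ in range(VIEWS)
    )

# A feed that ranks purely by engagement, as described above.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
top10 = [p["kind"] for p in feed[:10]]
print(f"{top10.count('fake outrage')} of the top 10 posts are fake outrage")
```

Under these assumptions, the provocative posts monopolize the top of the feed, which is exactly the dynamic described above: no one at the platform chose disinformation; the ranking objective did.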
Artificial intelligence adds an entirely new layer of threat: the ability to mass-produce fake text, photos, and video. Technologies known as "deepfakes" can copy a real person's appearance and voice, making them appear to do or say things they never did. In the future, what is to stop an authoritarian regime from releasing a fake video of a rival world leader saying something inflammatory and sparking an international crisis? Conversely, how many voters in a democratic country would doubt the authenticity of a scandalous viral video of a candidate released just days before an election? These questions highlight the minefield technology lays in the weaponization of disinformation. Looking at the broader picture, it is clear that the fight against disinformation has no definite winner.
Authoritarian systems are skilled and ruthless at spreading lies; democratic systems are better equipped to resist them but are still vulnerable. Perhaps the most critical factor is the level of awareness within society. Educated citizens with strong media literacy and critical thinking skills are more resilient against disinformation bombs. In such societies, fake news is quickly questioned, sources are cross-checked, and caution prevails over panic. In contrast, in poorly informed societies, conspiracy theories and rumors are readily accepted as truth and spread like wildfire. Social media giants must also recognize their responsibility. Today, platforms like Facebook and Twitter have taken some precautions due to public pressure, but these efforts are not enough. Combating disinformation requires transparent algorithms, effective content moderation, and international cooperation.
Democratic governments are introducing regulations to compel these companies to act in the public interest, but every new regulation raises the same concern: will this turn into censorship? In the end, the pollution of information is no longer just a communication issue; it has become a political and societal security threat. Whether used to tighten an authoritarian leader's grip or to destabilize a rival country, the political weaponization of disinformation is challenging the very nature of truth around the world. Democratic countries may believe they are better positioned in this fight; at least they have free institutions capable of countering lies. But their open structures also leave them vulnerable to subversion from within.
Authoritarian regimes may wield the disinformation weapon skillfully, but such systems are built on sustained deception, a model that may eventually rot from within. After all, how long can a society function when the truth is systematically suppressed? The real question is this: in the battle between truth and falsehood, who will win the future? In a world where news has been turned into a weapon, can societies find a way to disarm it? There is no clear answer to this difficult question.
Perhaps the answer lies in each of us: the readers, the sharers, the commenters. In the age of disinformation, each of us is on the front line. How prepared are we, and how far are we willing to go to defend the truth? These questions will shape the future of democratic values and social cohesion. They are worth pondering, because in the end it may not be technology that determines the outcome of this war between truth and lies, but the level of human consciousness.