The Dead Internet Theory

In early 2021, an anonymous user with the handle IlluminatiPirate posted a long, unsettling essay on Agora Road's Macintosh Café — a small, retro-themed online forum dedicated to vaporwave aesthetics and internet nostalgia. The essay bore a title that, in any previous era of the internet, might have seemed like the ravings of a paranoid mind: "Dead Internet Theory: Most of the Internet is Fake." The thesis was blunt. The internet, as a space of genuine human connection and organic discourse, had died sometime around 2016 or 2017. What remained — the websites, the social media feeds, the comment sections, the trending topics, the viral content — was largely a simulation of human activity, generated and maintained by bots, artificial intelligence, algorithmic curation, and coordinated manipulation campaigns operated by governments, corporations, and intelligence agencies. The humans were still there, somewhere, but they were outnumbered, outpaced, and increasingly irrelevant. The internet had become a puppet show, and most of the audience did not realize they were watching puppets.

The essay was not a work of rigorous scholarship. It was discursive, conspiratorial in places, speculative in others, and unevenly sourced. It drew on real data, genuine research, and documented programs — then extrapolated them into a sweeping narrative that exceeded what the evidence could strictly support. Under normal circumstances, it would have circulated among a few hundred forum users and disappeared into the archive of forgotten posts.

That is not what happened. The essay went viral. It was discussed on Reddit, dissected on YouTube, debated on Twitter, and covered by mainstream outlets from The Atlantic to the BBC. It resonated with a feeling that millions of internet users had been unable to articulate — a creeping sense that something fundamental had changed about the digital world, that online spaces had become hollow and performative, that the conversations they were having might not be with real people, that the content they were consuming had not been created by human minds. The Dead Internet Theory touched a nerve because, for a growing number of people, the internet no longer felt alive.

What makes the theory worth examining in depth — what elevates it beyond the typical conspiracist fare — is that it arrived at precisely the moment when the evidence in its favor began to mount faster than anyone, including its anonymous author, could have anticipated. Less than two years after IlluminatiPirate's post, the release of ChatGPT would inaugurate an era of mass-produced AI content that would flood every corner of the digital world. The Dead Internet Theory did not just describe a present condition. It predicted a future that arrived ahead of schedule.

The Origin

The original essay, posted in IlluminatiPirate's thread on Agora Road's Macintosh Café forum in early 2021, ran to several thousand words and presented the Dead Internet Theory not as a new invention but as a synthesis of observations, data points, and suspicions that had been accumulating in certain corners of the internet for years. The author cited bot traffic statistics, documented astroturfing campaigns, the consolidation of web traffic into a handful of platforms, and the subjective experience of long-time internet users who felt that online spaces had become qualitatively different — emptier, more repetitive, more artificial — than they had been in prior decades.

The core claim was specific: the internet had "died" around 2016-2017, meaning that this was the approximate period when the proportion of bot-generated content and algorithmically curated interaction surpassed the proportion of genuine human activity, at least on the major platforms that most people experience as "the internet." The death was not a single event but a threshold — a tipping point beyond which the artificial became the majority and the organic became the minority. After this point, the internet as a space of human communication ceased to function as such, even if the humans using it did not realize the change had occurred.

The date was not arbitrary. Several major shifts converged around 2016-2017. Facebook completed its transition from a chronological feed to an algorithmically sorted one, fundamentally changing what users saw and why they saw it. The 2016 U.S. presidential election revealed the scale of social media manipulation — Russian bot farms, coordinated disinformation campaigns, the weaponization of platforms that their creators had naively believed would promote democratic discourse. Google's search results, once a relatively transparent index of the web's contents, became increasingly shaped by SEO manipulation, featured snippets, and the prioritization of a shrinking pool of authoritative sources. The open web — the vast, chaotic, human-built landscape of personal websites, forums, blogs, and independent communities — continued its long decline, as traffic consolidated into Facebook, YouTube, Twitter, Instagram, Reddit, and a handful of other walled gardens.

The emotional resonance of the theory — the reason it spread so quickly and stuck so firmly in the minds of those who encountered it — was not primarily about data or evidence. It was about recognition. Long-time internet users knew, intuitively, that their experience had changed. Comment sections that once hosted genuine arguments now seemed populated by accounts that posted with mechanical regularity and interchangeable voices. Social media feeds that once surfaced content from friends now served an endless stream of professionally produced viral content from accounts they had never chosen to follow. Search results that once led to quirky personal websites now led to the same ten corporate sites, each presenting information in the same optimized, personality-free format. The internet felt less like a city — messy, unpredictable, full of hidden corners and unexpected encounters — and more like a shopping mall: clean, controlled, and fundamentally designed to extract money rather than foster connection.

The Dead Internet Theory gave a name to this feeling. And once it had a name, people could not stop talking about it.

The Evidence Cited by Proponents

The theory's persuasive power rests on a foundation of documented facts, real statistics, and verified programs that, taken individually, are unremarkable — the kind of data points that appear in industry reports and academic papers. Taken together, they form a picture that is, at minimum, deeply unsettling.

Bot traffic statistics

The most frequently cited piece of evidence is also the most straightforward: a significant and growing proportion of internet traffic is not generated by human beings. Imperva, a cybersecurity company that publishes an annual "Bad Bot Report" (a series originated by Distil Networks, which Imperva acquired in 2019), has consistently found that bot traffic accounts for a substantial share of all web traffic. By 2014, bot traffic had surpassed human traffic — bots generated 56 percent of all website visits, compared to 44 percent from humans. The proportion has fluctuated since then, but the trend is clear. In 2022, Imperva reported that 47.4 percent of all internet traffic was generated by bots, with "bad bots" — those designed for credential stuffing, scraping, spam, and other malicious purposes — accounting for 30.2 percent. In 2023, that figure rose again. Barracuda Networks, in a 2021 analysis of web application traffic, found that nearly two-thirds of internet traffic was generated by bots. Cloudflare, which handles a vast proportion of global web traffic through its content delivery network, has reported similar figures, noting that automated traffic frequently exceeds human traffic across its network.

These numbers require context. "Bot traffic" is a broad category that includes everything from benign search engine crawlers (Googlebot indexing the web) to malicious scraping operations, automated ad fraud, and social media manipulation bots. Not all bot traffic is sinister, and the presence of bots does not, by itself, prove the dead internet thesis. But the numbers establish an important baseline: on a purely quantitative level, the internet is not primarily a human space. It is a space where automated systems are, at minimum, co-equal participants, and by some measures, the majority.

Social media manipulation

The existence of large-scale, state-sponsored social media manipulation campaigns is not a theory. It is a documented fact, confirmed by intelligence agencies, academic researchers, and the social media companies themselves.

Russia's Internet Research Agency (IRA), based in St. Petersburg, was the subject of extensive investigation following the 2016 U.S. presidential election. The Mueller Report, published in 2019, documented how the IRA operated thousands of fake social media accounts that impersonated American citizens, created and managed Facebook groups with hundreds of thousands of real members, organized real-world political rallies that actual Americans attended, and generated content designed to inflame partisan divisions on issues from race to gun control to immigration. The IRA's budget was modest by government standards — roughly $1.25 million per month at its peak — but its output was staggering. Facebook acknowledged that IRA content had reached an estimated 126 million Americans. Twitter identified more than 3,800 IRA-linked accounts, along with roughly 50,000 Russia-linked automated accounts. The operation was not subtle, and it was not sophisticated in its individual components. Its power lay in scale, persistence, and the willingness of the platforms to serve as unwitting amplifiers.

China's "50 Cent Army" — so named for the alleged payment of 50 Chinese cents (wu mao) per post — represents a different model of manipulation. A landmark 2017 study by Gary King, Jennifer Pan, and Margaret Roberts at Harvard estimated that the Chinese government fabricates approximately 448 million social media posts per year on domestic platforms, primarily to distract from politically sensitive topics rather than to argue against critics. The operation is not outsourced to troll farms but staffed largely by government employees posting as part of their regular duties. The scale is almost incomprehensible — nearly half a billion posts per year, all designed to shape the information environment of 1.4 billion people.

The Oxford Internet Institute's "Organised Social Media Manipulation" reports, led by researchers Samantha Bradshaw and Philip N. Howard, have tracked the spread of computational propaganda across the globe. Their 2019 Global Inventory documented organized social media manipulation campaigns in 70 countries — a figure that rose to over 80 countries in subsequent reports. These campaigns are conducted by governments, political parties, and private contractors, and they employ a range of techniques: bot networks, human troll armies, strategic amplification of favored narratives, harassment campaigns against critics, and the creation of fake grassroots movements (astroturfing). The researchers documented such campaigns in authoritarian states and established democracies alike. The practice is no longer an aberration. It is the norm.

The death of organic reach

In 2012, a post on a Facebook Page could expect to be seen by roughly 16 percent of the page's followers without any paid promotion. By 2014, that number had dropped to 6.5 percent. By 2016, it was approximately 2 percent. By 2018, for many pages, organic reach had effectively reached zero — a post seen by virtually none of the people who had explicitly chosen to follow the page. Facebook's algorithmic changes during this period were officially framed as improvements to the user experience — showing users "more of what they want to see." In practice, they represented the most consequential privatization of public discourse since the enclosure of the commons. A platform that had attracted billions of users by offering free, open communication now demanded payment for the privilege of being heard. Content that could not pay was made invisible. Content that could pay — regardless of its quality, truthfulness, or value — was amplified.

The shift from chronological feeds to algorithmic feeds, which occurred across all major platforms during this period, had profound consequences for the nature of online interaction. In a chronological feed, what you see is determined by who you follow and when they post. It is a passive system — it does not curate, recommend, or manipulate. In an algorithmic feed, what you see is determined by a machine learning system optimizing for engagement — a metric that, in practice, rewards outrage, controversy, emotional manipulation, and addictive content loops. The algorithm does not show you what is true, or what is good, or what your friends posted. It shows you what will keep you scrolling. The organic internet — the one built by human choices about what to read, write, and share — was replaced by a manufactured internet, engineered to maximize the time and attention extracted from each user.
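The contrast between the two feed models can be made concrete in a few lines. The posts, the engagement weights, and the decay half-life below are all hypothetical illustrations, not any platform's actual ranking formula — a minimal sketch of the structural difference, assuming a simple "interactions times recency decay" score:

```python
from datetime import datetime, timedelta

# Hypothetical posts: who wrote them, when, and how much engagement they drew.
now = datetime(2016, 6, 1, 12, 0)
posts = [
    {"author": "friend_a", "text": "Dinner photos", "ts": now - timedelta(minutes=5), "likes": 3,   "shares": 0},
    {"author": "friend_b", "text": "New job!",      "ts": now - timedelta(hours=2),   "likes": 40,  "shares": 2},
    {"author": "viral_pg", "text": "Outrage bait",  "ts": now - timedelta(hours=9),   "likes": 900, "shares": 450},
]

def chronological(feed):
    # Passive ordering: newest first, determined only by who posted and when.
    return sorted(feed, key=lambda p: p["ts"], reverse=True)

def algorithmic(feed, half_life_hours=6.0):
    # Engagement-optimized ordering: score decays with age but is dominated
    # by interaction counts, so high-engagement content outranks recent
    # posts from accounts the user actually chose to follow.
    # The 5x weight on shares and the 6-hour half-life are arbitrary.
    def score(p):
        age_h = (now - p["ts"]).total_seconds() / 3600
        decay = 0.5 ** (age_h / half_life_hours)
        return (p["likes"] + 5 * p["shares"]) * decay
    return sorted(feed, key=score, reverse=True)

print([p["author"] for p in chronological(posts)])  # friend_a, friend_b, viral_pg
print([p["author"] for p in algorithmic(posts)])    # viral_pg, friend_b, friend_a
```

The chronological ordering surfaces the friends' recent posts first; the engagement-weighted ordering puts the nine-hour-old "outrage bait" on top despite its age — the bias toward attention-maximizing content that the paragraph describes.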

This transformation was not a conspiracy in the traditional sense — no secret meeting, no master plan, no shadowy figures pulling strings. It was the logical outcome of a business model that treated human attention as a commodity to be harvested and sold. But the effect was the same as if it had been a conspiracy: the systematic suppression of organic human communication in favor of algorithmically optimized content that served corporate interests. The dead internet theorists argue, not unreasonably, that this is a distinction without a difference.

Content farms and SEO manipulation

The industrialization of online content predates the dead internet theory by more than a decade. Demand Media, founded in 2006, was among the first companies to perfect the model of producing enormous volumes of low-quality content optimized for search engine rankings rather than human readers. At its peak, Demand Media's eHow.com was one of the most visited websites in the world, generating tens of thousands of articles per month on topics ranging from plumbing to philosophy, each reverse-engineered from Google search data to capture traffic on high-value queries. The articles were written by freelancers paid between $15 and $25 per piece and were designed to be just good enough to rank in search results — not good enough to actually inform or engage a reader. They were content in the most literal sense: material that fills a container. The container was Google's search results page, and the purpose was not to communicate but to intercept.

Google's Panda algorithm update in 2011 was explicitly designed to penalize content farms, and it did reduce the visibility of the most egregious offenders. But the underlying incentive structure was unchanged. As long as search engine traffic was valuable and search engine algorithms could be gamed, there would be an industry dedicated to producing content optimized for machines rather than humans. The content farms evolved, became more sophisticated, and diversified. By the late 2010s, the SEO industry had produced a web in which the top search results for virtually any query led to content that existed not because a human had something to say but because a business had identified a keyword opportunity.

The arrival of large language models in 2022 supercharged this dynamic beyond recognition. What had previously required a human writer — however underpaid and overworked — could now be produced by AI at zero marginal cost and infinite scale. The content farm model, already corrosive, became exponentially more so. The internet began to fill with AI-generated articles, product reviews, how-to guides, and "informational content" that was technically coherent but devoid of human experience, insight, or accountability. The dead internet theory, which had seemed like a provocative exaggeration when it was first posted, began to look like a conservative estimate.

The sameness of internet culture

One of the more subjective but widely shared observations cited by dead internet proponents is the increasing homogeneity of online discourse. Long-time internet users report that the range of opinions, styles, and perspectives encountered online has narrowed dramatically — that the same talking points circulate across platforms with suspicious synchronicity, that the same jokes appear in the same formats within hours of each other, that online arguments follow predictable scripts that seem less like genuine disagreements and more like programmed routines.

This observation is difficult to quantify but not without basis. Algorithmic curation naturally promotes convergence — if the same algorithm determines what millions of people see, those millions of people will increasingly see (and think, and say) the same things. The "filter bubble" effect, first described by Eli Pariser in 2011, predicted exactly this outcome: algorithmic personalization would create informational silos, but within each silo, the content would be remarkably uniform. What the dead internet theorists add to Pariser's analysis is the suggestion that some of this uniformity is not merely an emergent property of algorithmic systems but the result of deliberate manipulation — coordinated campaigns to establish specific narratives as "organic consensus" using bots, sock puppets, and amplification networks.
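The convergence mechanism is easy to reproduce in miniature. The sketch below is a toy popularity-feedback loop in the style of a Pólya urn, not a reconstruction of any real recommender; the item count, user count, and 90 percent recommendation rate are all illustrative assumptions:

```python
import random

def popularity_feedback(items=5, users=10000, seed=1):
    """Toy recommender: most users are shown the currently most-shared
    item, and their shares feed back into what gets recommended next."""
    rng = random.Random(seed)
    shares = [1] * items  # start from a level playing field
    for _ in range(users):
        if rng.random() < 0.9:
            # Recommend the current leader to 90% of users.
            pick = max(range(items), key=lambda i: shares[i])
        else:
            # The remaining 10% discover items organically at random.
            pick = rng.randrange(items)
        shares[pick] += 1
    return shares

shares = popularity_feedback()
print(shares)  # one item absorbs the overwhelming majority of shares
```

Starting from five equally popular items, a single item ends up with the vast bulk of all shares — uniformity emerging from the feedback loop alone, with no coordination required. The dead internet theorists' additional claim is that manipulation campaigns deliberately seed which item becomes the leader.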

Dead followers and fake engagement

In January 2018, the New York Times published "The Follower Factory," a landmark investigation by Nicholas Confessore, Gabriel J.X. Dance, Richard Harris, and Mark Hansen into Devumi, a company that sold fake social media followers and engagement. The investigation revealed that Devumi had sold over 200 million fake Twitter followers to more than 200,000 customers, including celebrities, politicians, corporate executives, and — crucially — people who presented themselves as influential voices in public discourse. The fake accounts were not crudely made. Many were based on the stolen identities of real people, complete with their names, photographs, and biographical details. The result was an ecosystem in which the apparent popularity and influence of online voices bore no necessary relationship to any real-world support. A person with 500,000 followers might have 400,000 that were purchased. A tweet with 50,000 retweets might owe most of them to a bot network. The metrics of online influence — the numbers that determined who was taken seriously, who got media coverage, who attracted advertising dollars — were, to a significant and unknowable degree, fabricated.

Research into the prevalence of fake accounts has produced estimates that vary widely but consistently suggest the problem is enormous. A 2017 study by researchers at the University of Southern California and Indiana University estimated that between 9 and 15 percent of active Twitter accounts were bots. Other studies have suggested higher figures. The platforms themselves have been reluctant to disclose their own estimates, for obvious reasons — acknowledging the true extent of fake accounts would undermine the engagement metrics on which their advertising revenue depends. When Elon Musk attempted to back out of his acquisition of Twitter in 2022, citing concerns about fake accounts, the resulting legal dispute briefly forced the question of bot prevalence into public view, though it was never satisfactorily resolved. Twitter's own filings claimed that fewer than 5 percent of monetizable daily active users were fake — a figure that was widely regarded as optimistic.

The disappearance of the old internet

In 2009, Yahoo shut down GeoCities, deleting or rendering inaccessible millions of personal websites that had been created during the first decade of the consumer internet. The closure was not widely mourned at the time — GeoCities was already an anachronism, a relic of the era of tiled backgrounds, auto-playing MIDI files, and visitor counters. But in retrospect, it marked a symbolic turning point: the destruction of a vast archive of genuine human expression, created for no commercial purpose, reflecting the interests, enthusiasms, and personalities of millions of individual people. The Internet Archive's Wayback Machine preserved some of this content, but much was lost permanently.

GeoCities was merely the most visible casualty of a broader trend: the consolidation of the internet from a distributed network of independent websites into a centralized system dominated by a handful of platforms. In the early 2000s, the internet was genuinely diverse — millions of personal websites, thousands of independent forums, countless blogs and wikis and niche communities, each with its own culture, its own rules, its own character. By 2020, the vast majority of internet traffic flowed through a handful of companies. Google (including YouTube), Facebook (including Instagram), Amazon, Apple, Microsoft, Twitter, and TikTok collectively accounted for the overwhelming majority of time spent online. The internet had not just become centralized — it had been privatized. Public spaces had been replaced by corporate platforms, and the terms of participation were set not by the users but by the platforms' terms of service, content moderation policies, and algorithmic preferences.

The dead internet theorists argue that this consolidation was not merely a commercial development but a prerequisite for the death of the organic internet. A distributed internet of millions of independent nodes is nearly impossible to manipulate at scale — there are too many points of entry, too many independent decision-makers, too many unpredictable variables. A centralized internet of five or six platforms, each controlled by a single algorithm, is manipulable by design. Change the algorithm, and you change what billions of people see, think, and talk about. The consolidation of the internet into platforms was the construction of the infrastructure for its death.

Enshittification

In January 2023, the author and digital rights activist Cory Doctorow coined the term "enshittification" to describe the lifecycle of online platforms. The concept, which he elaborated in his keynote address at DEF CON 31 later that year and in extensive writing on his Pluralistic blog, describes a three-stage process. In the first stage, a platform is good to its users — it offers a genuine service, attracts a critical mass of participants, and becomes indispensable. In the second stage, the platform begins to abuse its users to make things better for its business customers — advertisers, publishers, merchants — by degrading the user experience in ways that serve commercial interests. In the third stage, the platform abuses both users and business customers to claw back value for itself — extracting maximum rent from all participants while providing the minimum viable service.

Doctorow's framework is not, strictly speaking, about the dead internet theory. But it describes the mechanism by which the organic internet was killed — not by a conspiracy but by the structural incentives of platform capitalism. When a platform's primary obligation is to its shareholders rather than its users, every design decision will ultimately be made in the interest of extracting value rather than creating it. The result is an internet that looks alive — busy, colorful, full of content and interaction — but is, in its essential function, hollow. The content exists to generate engagement metrics. The engagement metrics exist to sell advertising. The advertising exists to generate revenue. The revenue exists to increase shareholder value. At no point in this chain is the purpose of the system to facilitate genuine human communication. That function is not forbidden — it is simply irrelevant, a side effect that is tolerated when it serves the business model and suppressed when it does not.

The AI Acceleration (2022-Present)

On November 30, 2022, OpenAI released ChatGPT to the public. Within five days, it had a million users. Within two months, it had a hundred million — at the time, the fastest adoption of any consumer application in history. The implications for the dead internet theory were immediate and profound, because ChatGPT and the large language models that followed it did not merely add to the volume of artificial content on the internet. They democratized the production of artificial content to an extent that no one — including the dead internet theorists — had fully anticipated.

Before ChatGPT, producing convincing fake text at scale required resources: bot farms staffed by human operators, like the Internet Research Agency, or sophisticated custom software. The barrier to entry was significant. After ChatGPT, anyone with an internet connection could produce unlimited quantities of text that was fluent, contextually appropriate, and superficially indistinguishable from human writing. The cost dropped to effectively zero. The scale became effectively infinite. Within months, the internet began to change in ways that were visible to anyone paying attention.

AI-generated articles appeared on websites that had previously employed human writers. In November 2023, Sports Illustrated was revealed to have published articles under fictitious bylines, complete with AI-generated author photographs — fake journalists writing fake articles for a real publication with a 70-year history. CNET, one of the most visited technology news sites in the world, had quietly begun publishing AI-generated articles in late 2022; when the practice was discovered, the articles were found to contain significant factual errors. Red Ventures, CNET's parent company, had implemented the practice as a cost-cutting measure — replacing human journalists with AI systems that were cheaper, faster, and wrong.

AI-generated images flooded social media with a speed and volume that overwhelmed every attempt at containment. In March 2023, an AI-generated image of Pope Francis wearing a white Balenciaga puffer jacket went viral, fooling millions of people and news organizations before being identified as fake. Facebook and Instagram became saturated with AI-generated images — surreal landscapes, impossible architecture, grotesque hyperreal portraits — posted by accounts that existed solely to generate engagement. The phenomenon of "AI slop" — low-quality AI-generated content designed to attract clicks and ad revenue — became a recognized category of internet pollution. The "Shrimp Jesus" images that proliferated on Facebook in early 2024 — AI-generated images of a shrimp-human-Jesus hybrid that attracted thousands of sincere comments from real users who appeared not to recognize the images as artificial — became a dark emblem of the dead internet's progress.

The implications extended beyond content creation to content curation. Stack Overflow, the question-and-answer site that had been the internet's definitive resource for programming knowledge since 2008, experienced a dramatic decline in traffic following ChatGPT's release, as programmers turned to AI chatbots for answers instead of human experts. The irony was that ChatGPT's programming knowledge had been trained largely on Stack Overflow's corpus of human-generated answers — the AI was cannibalizing the human knowledge base that had produced it. Reddit, meanwhile, saw its valuation increase precisely because it possessed a vast archive of "human" content that AI companies needed for training data — a fact that led to Reddit's controversial decision to charge for API access, effectively monetizing its users' past contributions as training material for the systems that would replace them.

This dynamic pointed to what researchers have called the "model collapse" problem. In a 2023 paper titled "The Curse of Recursion: Training on Generated Data Makes Models Forget," Ilia Shumailov and colleagues at the University of Oxford demonstrated that AI models trained on AI-generated data — rather than human-generated data — undergo progressive degradation, losing the diversity and accuracy of the original training data over successive generations. The implication was alarming: as AI-generated content becomes a larger proportion of the internet's total content, and as new AI models are trained on that increasingly artificial corpus, the quality of AI output will decline in ways that compound over time. The internet would not merely be flooded with artificial content — it would be flooded with increasingly degraded artificial content, a feedback loop of diminishing quality that Shumailov's team compared to the genetic consequences of inbreeding.
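The Gaussian toy example from Shumailov's paper can be reproduced in a few lines: fit a distribution to samples, then repeatedly refit to samples drawn from the previous generation's fit. This is a simplified sketch of that setup; the sample size, generation count, and seed are arbitrary choices:

```python
import random
import statistics

def collapse_demo(generations=1000, n_samples=20, seed=0):
    """Repeatedly fit a Gaussian to samples drawn from the previous
    generation's fitted Gaussian — a stand-in for training each new
    model on the previous model's generated output."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "human data" distribution
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)      # re-estimate from generated data
        sigma = statistics.stdev(samples)   # spread of the generated data
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"initial sigma: {hist[0]:.3f}, final sigma: {hist[-1]:.6f}")
```

The estimated spread collapses toward zero over successive generations: each refit loses a little of the distribution's tails, and the losses compound rather than average out — the statistical analogue of the genetic "inbreeding" the authors invoke.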

The epistemological crisis that these developments created is difficult to overstate. For the first three decades of the consumer internet, the default assumption was that online content was produced by human beings. This assumption was never entirely reliable — spam, bot comments, and fake accounts have existed since the earliest days of the web — but it was reliable enough to serve as a working foundation for online interaction. After 2022, that assumption collapsed. Any piece of text, any image, any video, any audio clip encountered online might be AI-generated, and the tools for detecting AI-generated content have consistently lagged behind the tools for producing it. The question "Is this real?" — meaning, "Was this created by a human being?" — became unanswerable in a growing number of cases. And a question that is unanswerable is, functionally, meaningless. The distinction between human and artificial content began to dissolve, not because it ceased to matter but because it could no longer be reliably made.

The Conspiracy Dimension

The Dead Internet Theory, as it circulates in its most complete form, is not merely an observation about technology trends. It includes an explicitly conspiratorial dimension — the claim that governments, intelligence agencies, and corporate actors have deliberately promoted the death of the organic internet because a bot-populated internet is more controllable than one populated by unpredictable human beings.

This claim is more difficult to evaluate than the technological evidence, because it attributes intentionality to outcomes that could also be explained by structural incentives, emergent effects, and institutional inertia. The question is not whether governments manipulate the internet — they demonstrably do — but whether the death of the organic internet was a goal rather than a consequence.

Operation Earnest Voice

In March 2011, The Guardian reported on a U.S. military program called Operation Earnest Voice, which had developed software enabling individual operators to create and manage multiple fake online personas — "sock puppets" — for use in social media manipulation. The software, developed by the San Diego-based company Ntrepid under a $2.76 million contract from U.S. Central Command (CENTCOM), allowed each operator to manage up to ten separate personas, each with its own convincing online history and identity. The stated purpose was to counter extremist propaganda in non-English-language online spaces — Arabic, Farsi, Urdu — and the military insisted that the program did not target English-language platforms or American citizens.

The significance of the revelation was not the program's stated scope but its capabilities. If the U.S. military had developed sophisticated sock puppet software in 2011, what had it — and other agencies — developed in the years since? The technology described in the Guardian report was crude by the standards of what large language models would later make possible. But it established a principle: the U.S. government was actively developing the capability to populate online spaces with artificial personas that were designed to be indistinguishable from real people. The dead internet theorists argue that this capability, once developed, was not confined to the narrow use case described in the official statements.

JTRIG (Joint Threat Research Intelligence Group)

The Snowden documents, published beginning in 2013 by Glenn Greenwald and others at The Guardian and later The Intercept, revealed the existence and activities of JTRIG — the Joint Threat Research Intelligence Group, a unit within Britain's Government Communications Headquarters (GCHQ). JTRIG's internal documents, classified as TOP SECRET STRAP, described a range of techniques for manipulating online discourse that went far beyond passive surveillance.

JTRIG's methods included the creation of fake online personas to infiltrate and disrupt targeted groups, the use of "honey traps" — fake romantic or sexual solicitations — to compromise targets, the deliberate injection of false information into online forums to discredit specific individuals or organizations, the manipulation of online polls and voting systems, and what the documents described as "effects operations" — activities designed to produce specific real-world outcomes through online manipulation. One JTRIG presentation, titled "The Art of Deception: Training for a New Generation of Online Covert Operations," laid out these techniques in a clinical, systematic fashion that suggested they were not experimental capabilities but standard operational procedures.

The Snowden documents also revealed that GCHQ had developed programs with names like GATEWAY, CLEAN SWEEP, SILVERLORD, and ANGRY PIRATE — programs designed to manipulate search engine results, inject content into social media platforms, and conduct large-scale disruption of online communities. The scope of these programs suggested that the British intelligence community — and, by implication, its Five Eyes partners, including the NSA — had moved far beyond surveillance into active manipulation of the online information environment. They were not merely watching the internet. They were shaping it.

The Gentleperson's Guide to Forum Spies

Among the documents circulated in conjunction with the dead internet theory is the so-called "Gentleperson's Guide to Forum Spies" (also known as "COINTELPRO Techniques for Dilution, Misdirection and Control of an Internet Forum"), a document that purports to describe intelligence agency techniques for disrupting online communities. The provenance of this document is uncertain — it has circulated online since at least the mid-2000s, and its authorship has never been verified. Some researchers believe it is a genuine intelligence community training document; others regard it as a fabrication based on known COINTELPRO techniques.

Regardless of its authenticity, the document describes techniques that have been independently confirmed by other sources: topic sliding (burying inconvenient threads with new posts), consensus cracking (introducing dissent into a group to fragment its cohesion), forum sliding (flooding a discussion space with irrelevant content to drown out substantive conversation), and anger trolling (provoking emotional reactions that derail productive discussion). These techniques are not hypothetical. They are observable in virtually every online community of sufficient size, and whether they are employed by intelligence agents, corporate PR firms, or freelance trolls, their effect is the same: the degradation of the online space as a venue for genuine communication.

Corporate complicity

The dead internet theorists argue that technology companies are not merely passive victims of bot traffic but active beneficiaries of it — and that this creates a structural incentive to tolerate, or even encourage, non-human activity on their platforms.

The logic is straightforward. Social media companies sell advertising, and the price of advertising is determined by engagement metrics — the number of users, the time they spend on the platform, the number of interactions they generate. Bot accounts inflate all of these metrics. A platform with 500 million users and 100 million bots can claim 600 million users to advertisers. Bot accounts that like, share, and comment on content inflate the engagement metrics that determine how much advertisers are willing to pay. In this framework, bots are not a problem to be solved — they are a feature of the business model. Eliminating them would mean acknowledging that the platform's user base and engagement numbers are smaller than claimed, which would reduce advertising revenue and damage share prices.
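The arithmetic behind this incentive can be made concrete with a toy model. All numbers below are hypothetical, chosen only to echo the 500-million/100-million example above; the engagement rates are invented for illustration:

```python
# Toy model of how bot accounts inflate advertiser-facing metrics.
# All numbers are hypothetical illustrations, not measurements.

def reported_metrics(humans: int, bots: int,
                     human_engagement: float, bot_engagement: float):
    """Return (reported_users, reported_engagements) as an advertiser sees them."""
    users = humans + bots
    engagements = humans * human_engagement + bots * bot_engagement
    return users, engagements

humans, bots = 500_000_000, 100_000_000
users, engagements = reported_metrics(humans, bots,
                                      human_engagement=3.0,   # interactions per day
                                      bot_engagement=40.0)    # bots post around the clock

print(f"reported users: {users:,}")                        # 600,000,000
print(f"bot share of users: {bots / users:.0%}")           # 17%
print(f"bot share of engagement: {bots * 40.0 / engagements:.0%}")  # 73%
```

Because automated accounts can be far more active than humans, a minority of accounts can generate a majority of the engagement that advertisers pay for — which is precisely why eliminating them is expensive for the platform.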

This is not pure speculation. When Twitter's bot problem became a public issue during the Musk acquisition, the company's own internal estimates were widely regarded as implausibly low, and multiple researchers testified that the actual proportion of fake accounts was far higher than Twitter acknowledged. The perverse incentive to tolerate bots is built into the advertising-driven business model that underlies the entire social media economy. The platforms have invested billions in AI-powered content moderation systems, but they have been conspicuously less aggressive in deploying these systems against fake accounts than against, say, copyright violations or content that offends advertisers.

The algorithm as censor

Perhaps the most insidious element of the conspiracy dimension is the argument that overt censorship has become unnecessary in an algorithmically curated internet. You do not need to ban a piece of content when you can simply ensure that no one sees it. Algorithmic suppression — reducing the distribution of content without notifying its creator — achieves the same result as deletion without generating the backlash that deletion provokes. The creator continues to post, believing they are participating in public discourse. Their audience never sees the posts. The silence is indistinguishable from disinterest.

Shadow banning, search result manipulation, demonetization, and algorithmic deprioritization are all documented practices, though the platforms prefer euphemisms: "reducing distribution," "limiting recommendations," "applying warning labels." The effect is that the information environment is shaped not by what people choose to say and read but by what the algorithm — and the humans who configure the algorithm — decide should be visible. This is Operation Mockingbird updated for the digital age. The CIA's Wurlitzer required the cooperation of publishers and editors. The algorithm requires only access to the platform's content ranking system. The gatekeeping that once required a conspiracy now requires only a software update.
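The mechanism of silent deprioritization can be sketched in a few lines. The ranking rule and multiplier values below are invented for illustration and do not represent any platform's actual system; the point is only that a hidden per-author multiplier leaves the content and its creator-facing statistics intact while keeping it out of everyone else's feed:

```python
# Sketch of "reduced distribution": content is never deleted, but a
# hidden multiplier keeps it out of every ranked feed. The scoring
# rule and values here are invented for illustration.

def ranked_feed(posts, suppression, top_n=3):
    """Rank posts by engagement score, silently scaled by a hidden multiplier."""
    def visible_score(post):
        return post["score"] * suppression.get(post["author"], 1.0)
    return [p["id"] for p in sorted(posts, key=visible_score, reverse=True)][:top_n]

posts = [
    {"id": "p1", "author": "alice", "score": 90},
    {"id": "p2", "author": "bob",   "score": 80},
    {"id": "p3", "author": "carol", "score": 70},
    {"id": "p4", "author": "dave",  "score": 60},
]

# No suppression: alice's high-scoring post leads the feed.
print(ranked_feed(posts, suppression={}))               # ['p1', 'p2', 'p3']

# "Reduce distribution" for alice: her post still exists, still shows
# the same score in her own analytics, but never surfaces for others.
print(ranked_feed(posts, suppression={"alice": 0.01}))  # ['p2', 'p3', 'p4']
```

Nothing is deleted and no notification is sent; from the creator's side, the resulting silence is indistinguishable from disinterest.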

The Philosophical Dimension

The Dead Internet Theory, at its deepest level, is not a claim about technology. It is a claim about reality, identity, and the nature of human connection — and it raises philosophical questions that the Western tradition has been wrestling with for centuries.

The Turing Test at scale

Alan Turing's famous test, proposed in his 1950 paper "Computing Machinery and Intelligence," asked a simple question: if a machine can communicate in a way that is indistinguishable from a human, should it be considered intelligent? The test was designed as a thought experiment about artificial intelligence, but the dead internet theory has turned it into an empirical reality. Millions of people now interact daily with bots, AI chatbots, automated customer service systems, and AI-generated content without being able to reliably distinguish these interactions from human ones. The Turing Test is no longer a philosophical puzzle. It is a description of everyday life on the internet.

The implications are unsettling. If you cannot distinguish a bot from a human in an online conversation, what does that mean for the concept of digital identity? If a Twitter account that posts coherent opinions, engages in debates, and accumulates followers turns out to be operated by a language model, were the people who interacted with it deceived? Did they form a "real" social connection with an artificial entity? If a news article that informs your opinion on a political issue was written by an AI, is your opinion less valid? These questions have no clear answers, and the lack of clear answers is itself destabilizing. The inability to verify the humanity of your interlocutor corrodes the foundation of trust on which all communication depends.

Baudrillard's simulacra

The French philosopher Jean Baudrillard, writing in Simulacra and Simulation (1981), described a process by which signs and symbols become disconnected from the reality they once represented, passing through four stages: the sign as a reflection of reality, the sign as a distortion of reality, the sign as masking the absence of reality, and finally the sign as bearing no relationship to reality at all — the "pure simulacrum." In this final stage, the simulation does not merely replace the real — it renders the concept of the real meaningless. There is no "original" to return to, no authentic state that has been corrupted. The simulation is all there is.

The dead internet is Baudrillard's hyperreality made literal. The internet began as a representation of human communication — a digital extension of the conversations, debates, and information-sharing that humans had always conducted through other media. It then became a distortion — algorithms amplifying some voices and suppressing others, bots inflating metrics, content farms replacing genuine expression with SEO-optimized facsimiles. The dead internet theory argues that we have now reached the third and fourth stages: the internet masks the absence of genuine human communication, and in some spaces, it bears no relationship to human communication at all. The feeds are full of content. The content is not created by humans. The engagement is not generated by humans. The trending topics are not determined by human interest. The simulation of a living internet has replaced the living internet itself, and the question "What was the real internet?" has become as unanswerable as Baudrillard's question about the real itself.

The Chinese Room

John Searle's Chinese Room argument, published in 1980 in Behavioral and Brain Sciences, was designed to demonstrate that syntactic manipulation of symbols — which is what computers do — does not constitute genuine understanding. A person locked in a room who follows rules for arranging Chinese characters can produce outputs that are indistinguishable from those of a fluent Chinese speaker, without understanding a single word of Chinese. The argument was directed at "strong AI" — the claim that a sufficiently complex computer program would possess genuine understanding and consciousness.

The dead internet raises the Chinese Room problem at civilizational scale. The AI systems that generate online content do not understand what they write. They manipulate statistical patterns in language to produce outputs that are syntactically coherent and contextually appropriate. They can write news articles, compose poetry, engage in debates, and express opinions — without understanding any of it. If the internet is increasingly populated by such systems, then the total volume of "communication" on the internet is increasing while the total volume of understanding is decreasing. More words, less meaning. More interaction, less connection. The Chinese Room is no longer a thought experiment. It is the architecture of the digital public square.

The loneliness epidemic

The internet was supposed to connect people. That was the utopian promise of the early web — that geographical barriers would dissolve, that isolated individuals would find communities, that the free exchange of information would produce a more informed, more connected, more empathetic world. The dead internet theory suggests that exactly the opposite has occurred. If the spaces where people go to connect are populated primarily by artificial entities, then the internet has not connected people — it has surrounded them with simulacra of connection while leaving them more isolated than before.

The data on loneliness is stark. In 2023, U.S. Surgeon General Vivek Murthy issued an advisory declaring loneliness and social isolation an "epidemic" and a "public health crisis," noting that roughly half of U.S. adults reported experiencing measurable levels of loneliness, with the highest rates among young adults — the demographic most immersed in online interaction. The correlation between increased internet use and increased loneliness is well-documented, though the causal mechanism is debated. The dead internet theory offers a specific causal mechanism: people are lonely because the internet, which they use as their primary social environment, is populated by bots and algorithms rather than humans. They are seeking connection in a space where connection is increasingly unavailable, and the failure of their attempts at connection is reinforcing their isolation.

The manufactured audience

Edward Herman and Noam Chomsky's Manufacturing Consent (1988) described how the mass media serves a "propaganda function" in democratic societies, shaping public opinion to align with elite interests through structural mechanisms — ownership concentration, advertising dependence, reliance on official sources, and ideological filtering. The model was designed to explain how a formally free press could produce a systematically distorted picture of reality without overt censorship.

The dead internet theory updates Manufacturing Consent for an era in which the audience itself may be manufactured. In Chomsky and Herman's model, the media manipulates a real audience — real people whose opinions are shaped by the information they receive. In the dead internet model, the "audience" that appears to respond to content may itself be artificial — bot accounts that like, share, and amplify specific narratives, creating the appearance of popular consensus where none exists. The manufacture of consent no longer requires persuading real people. It requires only creating the illusion of persuaded people — a digital Potemkin village of apparent agreement that real people then conform to out of a desire to align with what they perceive as the majority view.

This is the Asch conformity experiment scaled to the entire internet. In Solomon Asch's famous 1951 study, subjects conformed to obviously incorrect answers when surrounded by confederates who gave those answers. The dead internet raises the possibility that the "confederates" in the modern version are not human beings but bots, and the "room" is not a laboratory but the entirety of online discourse.
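The dynamic is easy to simulate. The following toy model is invented for illustration and is not calibrated to any real platform: a population starts with a slight organic majority for one opinion, and bot "confederates" all post the opposite. Each round, every person who disagrees with the visible majority conforms with some fixed probability:

```python
import random

# Toy simulation of Asch-style conformity with bot "confederates".
# All parameters (population, conformity rate, bot count) are
# invented for illustration; this sketches the mechanism only.

def run(humans=1000, a_start=520, bots=0, rounds=30, conformity=0.3, seed=1):
    rng = random.Random(seed)
    opinions = [i < a_start for i in range(humans)]   # True = opinion A
    for _ in range(rounds):
        visible_a = sum(opinions)                     # bots all post opinion B
        majority_a = visible_a > (humans - visible_a) + bots
        for i in range(humans):
            if opinions[i] != majority_a and rng.random() < conformity:
                opinions[i] = majority_a              # conform to perceived majority
    return sum(opinions) / humans                     # final share holding A

print(f"no bots:  {run(bots=0):.0%} hold opinion A")
print(f"100 bots: {run(bots=100):.0%} hold opinion A")
```

With no bots, the organic majority consolidates; a bot contingent just large enough to flip the *visible* majority pulls the entire human population to the opposite consensus, even though no human was ever in the minority among humans.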

The Skeptical Case

The Dead Internet Theory, for all its resonance and supporting evidence, is also subject to serious objections — and intellectual honesty requires taking those objections seriously.

The most fundamental objection is empirical: the internet is very much alive. Billions of real human beings use it every day. They post photographs of their children, argue about politics, share recipes, seek medical advice, write fan fiction, organize community events, and conduct every variety of human activity that has ever been conducted through any medium of communication. The internet's human population has never been larger than it is today. If the dead internet theory claims that humans have been replaced, the claim is simply false.

The proponents of the theory would respond that the claim is not about replacement but about proportion and influence — that humans are still present but are outnumbered and outweighed by artificial activity. This is a more defensible claim, but it introduces a problem of thresholds. At what ratio of artificial to human content does the internet become "dead"? If 40 percent of traffic is bots, is the internet dead? Fifty percent? Sixty? The theory offers no clear threshold, which makes it difficult to evaluate empirically — and easy to shift the goalposts.

Bot traffic, as noted above, is not synonymous with a dead internet. Search engine crawlers, monitoring services, API calls, and other automated systems generate enormous volumes of traffic without any implication that the internet has died. The bot traffic statistics cited by dead internet proponents include these benign automated systems alongside malicious bots, and failing to distinguish between them inflates the apparent scale of the problem. A more careful analysis would focus specifically on deceptive bots — automated systems designed to impersonate humans — and while these are undoubtedly a significant and growing problem, they are a smaller proportion of total bot traffic than the headline numbers suggest.
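The conflation can be illustrated with hypothetical traffic data (the categories and counts below are invented for this sketch, not drawn from any report):

```python
# Sketch: why a headline "X% of traffic is bots" overstates the
# specific problem the theory cares about. Numbers are hypothetical.

requests = [
    {"agent": "search_crawler",           "count": 200, "deceptive": False},
    {"agent": "uptime_monitor",           "count": 50,  "deceptive": False},
    {"agent": "api_client",               "count": 150, "deceptive": False},
    {"agent": "fake_persona_bot",         "count": 120, "deceptive": True},
    {"agent": "scraper_spoofing_browser", "count": 80,  "deceptive": True},
    {"agent": "human_browser",            "count": 400, "deceptive": False, "human": True},
]

total = sum(r["count"] for r in requests)
bot_total = sum(r["count"] for r in requests if not r.get("human"))
deceptive = sum(r["count"] for r in requests if r["deceptive"])

print(f"headline bot share:          {bot_total / total:.0%}")   # 60%
print(f"deceptive (impersonator):    {deceptive / total:.0%}")   # 20%
```

In this invented sample, the headline figure is 60 percent "bot traffic," but only a third of that is the deceptive, human-impersonating activity the theory is actually about — the rest is crawlers, monitors, and API calls.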

The nostalgia fallacy is another serious objection. The "old internet" that dead internet proponents romanticize — the GeoCities pages, the independent forums, the blogosphere — was also full of spam, scams, misinformation, and manipulation. Email spam was a far more pervasive nuisance in 2005 than it is today. Nigerian prince scams, phishing attacks, and malware were rampant. Early search engines were easily manipulated by keyword stuffing and link farms. The internet was never a pristine space of pure human expression. It was always messy, always contested, always partially artificial. The feeling that "something has changed" may reflect genuine shifts in the internet's composition, but it is also colored by selective memory and the natural human tendency to idealize the past.

Selection bias compounds the nostalgia fallacy. People who are actively looking for evidence of bots and artificial content will find it, because it genuinely exists — but they may overestimate its prevalence by attending disproportionately to confirming cases and ignoring the vastly larger volume of genuine human activity that surrounds them. A single encounter with an obvious bot can color the perception of every subsequent interaction, creating a paranoid filter through which all online communication is viewed with suspicion.

Perhaps the most dangerous risk of the dead internet theory is its potential to become a thought-terminating cliché — a label that can be applied to any online disagreement to avoid engaging with it. If your political opponent can be dismissed as a bot, you never have to address their arguments. If a consensus you disagree with can be explained as manufactured, you never have to reconsider your position. The theory, taken to its logical extreme, becomes a solipsistic trap: if you assume everyone online is fake, you have isolated yourself more thoroughly than any algorithm could, and you have done it voluntarily.

The danger of solipsism is real. The dead internet theory, if accepted uncritically, leads to a worldview in which genuine human connection online is impossible and all online discourse is suspect. This is not liberation from manipulation — it is a different, arguably worse, form of manipulation. The person who trusts nothing is as epistemically helpless as the person who trusts everything. Both have surrendered the faculty of judgment.

The Uncomfortable Middle Ground

The truth about the dead internet is almost certainly more complicated — and more disturbing — than either the theory's proponents or its critics suggest. The internet is neither fully alive nor fully dead. It exists in a degraded state, a twilight condition that does not map neatly onto either narrative.

The organic internet — the one built by human beings communicating with each other for no purpose beyond the communication itself — has unquestionably shrunk. Not disappeared, but shrunk, as a proportion of total online activity. Bot traffic has grown. AI-generated content has exploded. Algorithmic curation has replaced human choice as the primary determinant of what people see and engage with. Astroturfing, sock puppets, and coordinated manipulation campaigns have become standard tools of politics, commerce, and statecraft. The number of real humans on the internet is larger than it has ever been, but the proportion of online activity attributable to real humans is almost certainly smaller than it has ever been, and that proportion appears to be declining.

The real question is not "Is the internet dead?" but "What percentage of it is real, and is that percentage decreasing?" The answer to the first part is unknowable with precision — no one, not even the platform companies, has reliable data on the true proportion of human versus artificial activity across the entire internet. The answer to the second part appears, based on every available indicator, to be yes. The percentage is decreasing. The trend line points in one direction.

The implications for democracy are grave. Democratic self-governance depends on informed citizens making decisions based on accurate information and genuine public deliberation. If the information environment is increasingly artificial — populated by bot-generated content, algorithmically curated to maximize engagement rather than inform, shaped by coordinated manipulation campaigns — then the epistemic foundation of democracy is eroding. Citizens cannot make informed decisions if the information they receive is manufactured. They cannot engage in genuine public deliberation if their interlocutors are bots. They cannot form accurate assessments of public opinion if the apparent consensus is artificial. The dead internet, whether it is "dead" in the absolute sense or merely dying, poses a fundamental threat to the democratic project.

The implications for journalism are equally severe. Journalism depends on trust — the reader's trust that the reporter is a real person, that the events described actually happened, that the sources quoted actually exist, that the information has been verified. AI-generated journalism — the Sports Illustrated fake authors, the CNET AI articles — destroys this trust, because it demonstrates that the byline, the most basic marker of journalistic accountability, can be fabricated. When you cannot be certain that an article was written by a human being, you cannot hold anyone accountable for its accuracy. The feedback loop that connects journalistic errors to corrections, retractions, and professional consequences is broken. The dead internet does not merely degrade the quantity of reliable journalism. It undermines the concept of reliability itself.

The concept of the "zero-trust internet" — borrowed from cybersecurity's "zero-trust architecture," in which every request is treated as potentially hostile regardless of its origin — may be the logical endpoint of these trends. An internet in which nothing is trusted by default, in which every piece of content, every account, every interaction is treated with suspicion until independently verified. This would be a functional internet, perhaps, but not a social one. It would be a space of transactions, not relationships — useful for commerce, perhaps, but inhospitable to the open, exploratory, connective communication that the internet was originally built to enable.

The Cultural Impact

The Dead Internet Theory has escaped the forum where it was born and entered the broader culture, shaping how millions of people think about and interact with the digital world.

The irony of the theory's viral spread was noted immediately and has been remarked upon so frequently that the observation has itself become a cliché: a theory about the internet being populated by bots spread virally across the internet, and it is impossible to determine what proportion of the accounts that spread it were operated by human beings. The theory's success is either evidence of its truth (bots amplified it) or evidence against it (it resonated with real humans because it articulated a real experience). Both interpretations are plausible, and neither can be conclusively established.

The NPC meme — which originated in gaming culture but was repurposed around 2018 as a political insult, casting political opponents as "non-player characters" who recite scripted dialogue without independent thought — shares deep structural similarities with dead internet thinking. Both rest on the suspicion that many of the entities one encounters online (or offline) lack genuine agency, consciousness, or individuality. Both dehumanize through the language of simulation. The NPC meme preceded the dead internet theory but prepared the cultural ground for it, normalizing the idea that many of the "people" one encounters might not be people in any meaningful sense.

The erosion of trust in online interaction is perhaps the theory's most tangible cultural impact. "Are you a bot?" has become a common question in online exchanges — sometimes humorous, sometimes serious, sometimes both. CAPTCHA tests, originally designed to distinguish humans from automated systems, have become increasingly difficult as AI has learned to solve them, leading to an arms race between verification systems and the automated systems they are designed to exclude. The practical result is that the burden of proving one's humanity online has shifted to the individual user, who must now regularly demonstrate that they are not a machine. The default assumption — that one's interlocutor is human — has been replaced, for many people, by a default suspicion that they might not be.

This suspicion has contributed to a broader migration away from the open internet and toward smaller, more private digital spaces. Discord servers, group chats, Signal groups, and other closed communities have become increasingly important as social spaces, precisely because their smaller scale and access controls make it easier to verify that participants are real people. The retreat from the open internet is not absolute — billions of people still use social media, still read news online, still participate in public digital spaces. But the movement toward smaller, more intimate digital communities represents a significant shift in how people seek connection online. It is, in effect, a reconstruction of the internet's original architecture — small communities of known individuals — within the shell of the platformized internet.

The resurgence of interest in physical community, analog media, and face-to-face interaction among younger demographics is another cultural consequence. Vinyl record sales have increased for seventeen consecutive years. Independent bookstores have grown in number after decades of decline. Community gardens, maker spaces, and local organizing have all experienced renewed interest. These trends have many causes, but the dead internet theory's cultural influence is among them — a growing sense that the digital world has become unreliable, unsatisfying, and potentially unreal, driving a return to forms of connection and expression that are grounded in physical reality and direct human contact.

Whether the Dead Internet Theory will be remembered as a prescient warning, a paranoid delusion, or something in between depends on developments that are currently unfolding and that no one — human or artificial — can predict with certainty. What is certain is that the theory has articulated a discomfort that is too widespread and too persistent to dismiss, a sense that the digital world we inhabit has become something other than what it was supposed to be. The internet was built to connect human minds. The dead internet theory asks whether it still does — and forces us to confront the possibility that the answer, increasingly, is no.

Sources

  • IlluminatiPirate. "Dead Internet Theory: Most of the Internet is Fake." Agora Road's Macintosh Café, 2021.
  • Imperva. "Bad Bot Report." Annual reports, various years (2014, 2020, 2022, 2023).
  • Barracuda Networks. "Bot Attacks: Top Threats and Trends — Insights into the Growing Number of Automated Attacks." 2021.
  • Confessore, Nicholas, Gabriel J.X. Dance, Richard Harris, and Mark Hansen. "The Follower Factory." The New York Times, January 27, 2018.
  • Bradshaw, Samantha, and Philip N. Howard. "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation." Oxford Internet Institute Working Paper 2019.3, 2019.
  • Woolley, Samuel C., and Philip N. Howard. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press, 2018.
  • King, Gary, Jennifer Pan, and Margaret Roberts. "How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument." American Political Science Review, Vol. 111, No. 3, 2017.
  • Mueller, Robert S. III. Report on the Investigation into Russian Interference in the 2016 Presidential Election. U.S. Department of Justice, 2019.
  • Doctorow, Cory. "Enshittification." Keynote address, DEF CON 31, August 2023. Also published on Pluralistic blog, January 21, 2023.
  • Fielding, Nick, and Ian Cobain. "Revealed: US Spy Operation That Manipulates Social Media." The Guardian, March 17, 2011.
  • Ball, James. "US Military Studied How to Influence Twitter Users in DARPA-Funded Research." The Guardian, July 8, 2014.
  • Greenwald, Glenn, and Andrew Fishman. "The Deception Tactics Used by GCHQ's JTRIG Unit." The Intercept, February 24, 2014.
  • Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. "The Curse of Recursion: Training on Generated Data Makes Models Forget." arXiv:2305.17493, 2023.
  • Baudrillard, Jean. Simulacra and Simulation. Translated by Sheila Faria Glaser. University of Michigan Press, 1994 (original French edition 1981).
  • Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences, Vol. 3, No. 3, pp. 417-424, 1980.
  • Herman, Edward S., and Noam Chomsky. Manufacturing Consent: The Political Economy of the Mass Media. Pantheon Books, 1988.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.
  • Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. Penguin Press, 2011.
  • Turing, Alan M. "Computing Machinery and Intelligence." Mind, Vol. 59, No. 236, pp. 433-460, 1950.
  • Murthy, Vivek H. "Our Epidemic of Loneliness and Isolation: The U.S. Surgeon General's Advisory on the Healing Effects of Social Connection and Community." U.S. Department of Health and Human Services, 2023.
  • Varol, Onur, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, and Alessandro Flammini. "Online Human-Bot Interactions: Detection, Estimation, and Characterization." Proceedings of the International AAAI Conference on Web and Social Media, 2017.
  • Asch, Solomon E. "Effects of Group Pressure upon the Modification and Distortion of Judgments." In H. Guetzkow (ed.), Groups, Leadership, and Men. Carnegie Press, 1951.
  • Saunders, Frances Stonor. The Cultural Cold War: The CIA and the World of Arts and Letters. The New Press, 2000.