In 1975, Senator Frank Church — the Idaho Democrat who chaired the Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities — appeared on NBC's Meet the Press and issued a warning so precise that it reads today less like prophecy than like a technical description of the present. Speaking of the National Security Agency, Church said: "That capability at any time could be turned around on the American people, and no American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn't matter. There would be no place to hide." He paused, and then added the line that would become his most quoted: "I don't want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is the abyss from which there is no return."
The bridge has been crossed. The abyss has been entered. And the story of how it happened — incrementally, legally, and in plain sight — is the story of how a democratic society built the most comprehensive surveillance apparatus in human history and persuaded itself that it had no choice.
The NSA's surveillance of American citizens did not begin with the internet or with the War on Terror. It began with telegrams.
Project SHAMROCK was a secret arrangement initiated in 1945, at the end of the Second World War, under which the major American telegraph companies — Western Union, RCA Global, and ITT World Communications — provided the NSA and its predecessors with copies of all international telegrams entering or leaving the United States. Every day, for nearly thirty years, the telegraph companies handed over physical copies of cables to NSA couriers. At its peak, SHAMROCK processed approximately 150,000 messages per month. The program had no statutory authorization, no judicial oversight, and no warrant requirement. It operated on the basis of informal agreements between intelligence officials and corporate executives — gentlemen's agreements between men who believed that national security justified any intrusion, and that secrecy ensured there would be no consequences.
SHAMROCK ran continuously from 1945 to 1975 — through the Korean War, the McCarthy era, the Vietnam War, the civil rights movement, and the Watergate scandal. It was the longest-running warrantless surveillance program in American history, predating the NSA itself (the agency was established in 1952 by a classified presidential directive from Harry Truman). The program's existence was so closely guarded that even senior government officials outside the intelligence community were unaware of it. When Lieutenant General Lew Allen Jr., the NSA director, testified before the Church Committee in 1975, it was the first public acknowledgment that the program had ever existed.
Running alongside SHAMROCK was Project MINARET, established in 1967 as a watch-list program designed to monitor the international communications of specific American citizens. The watch lists were compiled at the request of other government agencies — the FBI, the CIA, the Secret Service, the Bureau of Narcotics and Dangerous Drugs, and the Department of Defense. The targets included civil rights leaders, antiwar activists, journalists, and members of Congress. Martin Luther King Jr., Muhammad Ali, Jane Fonda, Dr. Benjamin Spock, Senator Frank Church himself, and Senator Howard Baker were among the approximately 1,650 American citizens whose communications were monitored under MINARET. The NSA produced over 3,900 intelligence reports on these individuals between 1967 and 1973, distributing them to the FBI, the Secret Service, and other requesting agencies. The reports were hand-delivered by couriers and bore no NSA markings — a deliberate measure to conceal the program's existence even from those who received its intelligence product. Each report carried the warning "not releasable to foreign nationals" and was classified at a level that ensured only a handful of officials would ever see it.
The Church Committee's exposure of SHAMROCK and MINARET led directly to the passage of the Foreign Intelligence Surveillance Act (FISA) in 1978, which established the Foreign Intelligence Surveillance Court — a secret court that was supposed to provide judicial oversight of government surveillance and prevent the abuses the Church Committee had uncovered. The FISA court was designed as a safeguard. It would become, as history demonstrated, something closer to a rubber stamp. But in 1978, the lesson appeared to have been learned: the intelligence community had overreached, the democratic system had corrected the overreach, and legal protections were now in place. The bridge had been approached and pulled back from.
That is the story the country told itself. It was not the full story even then.
On February 13, 2003, the New York Times reported on a new program at the Defense Advanced Research Projects Agency (DARPA) that seemed to belong in a dystopian novel. The program was called Total Information Awareness — TIA — and it was run by Vice Admiral John Poindexter, a figure who carried with him the wreckage of an earlier scandal. Poindexter had served as Ronald Reagan's National Security Advisor and had been convicted on five felony counts of conspiracy, obstruction of Congress, and making false statements in connection with the Iran-Contra affair. The convictions were overturned on appeal in 1990 on the grounds that his immunized testimony before Congress may have influenced the prosecution. He had never been exonerated. He had been freed on a technicality.
Now Poindexter was back, and his ambition had grown. TIA's goal was nothing less than the creation of a comprehensive database that would integrate and cross-reference the digital traces of every person in the United States — and ultimately the world. Financial transactions, travel records, communications metadata, medical records, educational records, biometric data — everything that left a digital footprint would be vacuumed into a single analytical framework. The program's stated purpose was counterterrorism: by correlating vast datasets, TIA's architects believed they could identify terrorist plots in their planning stages, detecting the "signatures" of terrorist activity buried in the noise of ordinary life. The program's logo, which was later changed after public outcry, featured the all-seeing Eye of Providence atop a pyramid, casting its gaze over the entire globe, with the Latin motto Scientia Est Potentia — Knowledge Is Power. It was either an astonishing failure of public relations or a moment of rare honesty.
The public reaction was fierce. Civil liberties organizations, journalists, and members of Congress attacked TIA as an Orwellian nightmare. Senator Ron Wyden of Oregon called it "the biggest surveillance program in the history of the United States." In September 2003, Congress formally defunded TIA and closed Poindexter's office. The program was dead.
Except it was not dead. It had simply been renamed and distributed.
Investigative reporting by Shane Harris, later published in his 2010 book The Watchers: The Rise of America's Surveillance State, and separate reporting by the National Journal in 2006, revealed that the core components of TIA had been transferred to other agencies — primarily the NSA — under classified programs with new names. The data-mining research continued under the code name "Basketball." The communications analysis tools were absorbed into existing NSA programs. The collaboration network analysis — the program's tools for mapping relationships between individuals based on their communications patterns — was continued by the NSA's Advanced Research and Development Activity. The Congressional defunding of TIA was, in practice, a renaming exercise. The capabilities Poindexter had envisioned did not disappear. They migrated into the classified world where congressional oversight was minimal and public scrutiny was impossible. The architecture of total information awareness was built. It was simply built in the dark.
On October 26, 2001 — forty-five days after the September 11 attacks — President George W. Bush signed the USA PATRIOT Act into law. The acronym stood for "Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism." The bill was 342 pages long. It passed the House 357-66 and the Senate 98-1, with Senator Russ Feingold of Wisconsin casting the sole dissenting vote. Most members of Congress later acknowledged that they had not read the bill before voting on it. Representative John Conyers Jr. of Michigan said, "We don't really read most of the bills. Do you know what that would entail if we read every bill that we passed?" The irony of not reading a bill that redefined the relationship between the citizen and the state apparently escaped him.
The Patriot Act expanded the government's surveillance authorities in ways that would take years to be fully understood. It broadened the definition of "domestic terrorism" in ways that civil liberties organizations warned could be applied to political protest. It expanded the use of National Security Letters — administrative subpoenas issued by the FBI without judicial approval — allowing the Bureau to compel the production of records from telecommunications companies, financial institutions, and internet service providers, along with a gag order preventing the recipient from disclosing that they had received the request. The FBI issued over 300,000 National Security Letters between 2003 and 2006 alone, according to a 2007 Inspector General report. A subsequent audit found widespread violations of the Bureau's own internal guidelines, including cases in which NSLs were issued without proper authorization or in circumstances that did not meet even the reduced legal standards the Patriot Act established.
But it was Section 215 that would become the most consequential — and most controversial — provision. Section 215 amended the Foreign Intelligence Surveillance Act to allow the FISA court to issue orders compelling the production of "any tangible things" relevant to an authorized investigation to protect against international terrorism or clandestine intelligence activities. The language was deliberately expansive. "Any tangible things" could mean business records, library records, medical records, financial records — anything. And the word "relevant" was interpreted by the government in a way that no ordinary reading of the English language would support.
In 2006, the NSA began using Section 215 as the legal basis for the bulk collection of telephone metadata — the records of every phone call made within the United States. Not the content of the calls, the government emphasized, but the metadata: the originating number, the receiving number, the time of the call, the duration of the call, and the location of the callers. The FISA court approved this interpretation in a series of classified rulings that remained secret until Edward Snowden's disclosures in 2013. The court's reasoning was that if individual phone records could be relevant to a terrorism investigation, then the entire database of all phone records was relevant, because the government could not know in advance which records would prove relevant. The logic was circular, self-justifying, and totalizing: everything is relevant because anything might be relevant, and the only way to find what is relevant is to collect everything.
The FISA court, established in 1978 to prevent the abuses the Church Committee had uncovered, had become the legal mechanism that authorized abuses of a scale the Church Committee never imagined. Between 1979 and 2012, the FISA court approved 33,942 surveillance orders and denied 11. The approval rate was 99.97 percent. The court held its proceedings in secret. Its rulings were classified. The targets of its orders were not represented and, in most cases, never learned they had been surveilled. Civil liberties advocates called it a rubber stamp. The court's defenders argued that the high approval rate reflected the quality of the applications, not the absence of scrutiny. The structural reality was that the FISA court operated as a secret judiciary interpreting secret law to authorize secret surveillance — an arrangement that is, by definition, incompatible with democratic accountability.
In 2006, a retired AT&T technician named Mark Klein stepped into the offices of the Electronic Frontier Foundation in San Francisco and provided documents that would expose one of the most significant components of the NSA's post-9/11 surveillance infrastructure.
Klein had worked at AT&T's Folsom Street facility in San Francisco for over twenty years. In 2002 and 2003, he witnessed the construction of a secret room — Room 641A — inside the AT&T building. The room was built by the NSA and accessible only to personnel with NSA security clearances. Klein was not cleared for the room, but his work on the facility's fiber-optic infrastructure allowed him to understand what was happening. The NSA had installed a beam splitter — a device that creates a copy of the light signal passing through a fiber-optic cable — on AT&T's main fiber trunk line, which carried the internet traffic of millions of AT&T customers, including email, web browsing, and voice-over-internet-protocol calls. The splitter copied the entire data stream and routed the duplicate into Room 641A, where it was processed by a Narus STA 6400, a sophisticated traffic analysis system capable of inspecting data in real time at speeds up to 10 gigabits per second.
The implications were unambiguous. The NSA was copying the entirety of AT&T's internet traffic — the communications of millions of Americans who were not suspected of any crime, who were not the targets of any investigation, and who had no idea that their emails, phone calls, and web browsing were being duplicated and fed into a government surveillance system. Klein's documents showed that similar installations existed at other AT&T facilities across the country — in Seattle, San Jose, Los Angeles, and San Diego — suggesting a nationwide program of internet surveillance conducted with the active cooperation of one of America's largest telecommunications companies.
The Electronic Frontier Foundation filed a class-action lawsuit, Hepting v. AT&T, on behalf of AT&T's customers. The case was eventually consolidated with other challenges and effectively neutralized by the FISA Amendments Act of 2008, which granted retroactive legal immunity to telecommunications companies that had cooperated with the NSA's warrantless surveillance programs. The law was supported by both parties and signed by President Barack Obama, who had initially opposed telecom immunity as a candidate but reversed his position before the general election. The companies that had enabled mass surveillance were shielded from accountability. Klein's revelations, though widely reported, produced no legal consequences for the government or the corporations involved. The infrastructure remained in place.
On June 5, 2013, the Guardian published a classified FISA court order compelling Verizon to hand over to the NSA, "on an ongoing daily basis," the call detail records of all telephone calls — both domestic and international — handled by the company. The next day, the Guardian and the Washington Post published articles describing PRISM, a program that provided the NSA with direct access to data stored on the servers of nine major American technology companies: Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple. The articles were based on classified documents provided by Edward Joseph Snowden, a twenty-nine-year-old contractor employed by Booz Allen Hamilton at an NSA facility in Hawaii.
Over the following weeks and months, Snowden's disclosures — provided to journalists Glenn Greenwald, Laura Poitras, and Barton Gellman — revealed the contours of a surveillance apparatus so vast and so technically sophisticated that even informed observers were stunned by its scope. The documents described not a single program but an ecosystem of interlocking surveillance capabilities that, taken together, gave the NSA the ability to monitor the communications of virtually anyone on the planet.
PRISM was the program that attracted the most initial attention. It allowed the NSA to collect stored communications — emails, chats, video calls, file transfers, photos, and social networking data — from the servers of participating technology companies. The NSA accessed this data through a system in which the companies were served with directives under Section 702 of the FISA Amendments Act. The tech companies initially denied that the NSA had "direct access" to their servers, and the precise nature of the technical interface became a subject of debate. What was not in dispute was the outcome: the NSA was routinely obtaining the private communications of hundreds of millions of people from the world's largest technology platforms.
XKeyscore was the NSA's most comprehensive search tool — a system that allowed analysts to search through vast databases of intercepted communications using search terms such as email addresses, phone numbers, names, or keywords. Snowden described it as a tool that allowed him to "wiretap anyone, from you or your accountant to a federal judge, to even the President, if I had a personal email." The system collected and indexed approximately 20 terabytes of data per day from over 700 servers at 150 sites around the world. Training slides leaked by Snowden showed that XKeyscore let analysts search the full content of emails, chats, and browsing history — not just metadata but the actual substance of communications. The system retained full content data for three to five days and metadata for thirty days at each collection site, with longer-term storage available through other systems.
Boundless Informant was the NSA's internal data visualization tool, which mapped the scope of global surveillance operations. One of the leaked Boundless Informant maps showed that in a single thirty-day period in March 2013, the NSA collected 97 billion pieces of intelligence from computer networks worldwide — 3 billion from the United States alone. The tool contradicted repeated assurances from NSA officials that the agency did not track the volume of communications collected from American sources. When Senator Wyden asked Director of National Intelligence James Clapper in a public hearing on March 12, 2013, whether the NSA collected "any type of data at all on millions or hundreds of millions of Americans," Clapper replied, "No, sir." He later characterized this answer as the "least untruthful" statement he could make in an unclassified setting. He was never prosecuted for lying to Congress.
MUSCULAR was a joint NSA-GCHQ program that intercepted data flowing between the private data centers of Google and Yahoo. Unlike PRISM, which operated through front-door legal processes, MUSCULAR tapped into the unencrypted fiber-optic links connecting these companies' data centers — links that the companies had believed were secure internal infrastructure. When the Washington Post published a hand-drawn NSA slide showing the point at which the agency intercepted data from Google's internal cloud, with a smiley face drawn next to the notation "SSL added and removed here," Google's engineers were reported to have reacted with fury. The program collected millions of records per day, including both metadata and content, from users who had no connection to any intelligence target.
Tempora was the British counterpart — a GCHQ program that tapped into over two hundred fiber-optic cables carrying internet traffic into and out of the United Kingdom, storing the content of communications for three days and metadata for thirty days. Because a large proportion of the world's internet traffic passes through undersea cables landing in the UK, Tempora gave GCHQ — and, through intelligence sharing, the NSA — access to a staggering volume of global communications. At its peak, Tempora was processing 600 million "telephone events" per day and had the capacity to tap 46 fiber-optic cables simultaneously.
Upstream collection referred to the NSA's practice of intercepting communications as they transited the internet's backbone infrastructure — the high-capacity fiber-optic cables, switches, and routers that carry the bulk of the world's internet traffic. The names of the specific programs involved — FAIRVIEW, STORMBREW, BLARNEY, and OAKSTAR — referred to corporate partnerships with different telecommunications providers that allowed the NSA to install collection equipment at key points in the network. Upstream collection captured both the content and metadata of communications in transit, including those of Americans who were not targets of any investigation but whose data happened to traverse the same cables as the communications of foreign intelligence targets.
The cumulative picture was one of total surveillance capability. The NSA could monitor the phone calls, emails, internet activity, file transfers, and social media interactions of virtually anyone, anywhere. It could track the physical movements of individuals through their cell phones. It could map social networks based on communications patterns. It could store this data for years. And it was doing all of this in secret, under the authority of a secret court, with minimal oversight from a Congress that was itself often kept in the dark about the programs' full scope.
The Snowden documents revealed not merely an American surveillance apparatus but a transnational intelligence-sharing network of extraordinary scope. The Five Eyes alliance — comprising the intelligence agencies of the United States, the United Kingdom, Canada, Australia, and New Zealand — operates under the UKUSA Agreement, a signals intelligence treaty first signed in 1946 between the United States and the United Kingdom and progressively expanded to include the three other Anglophone nations.
The arrangement is simple in principle and devastating in its implications. Each member nation conducts surveillance that may be restricted by its own domestic laws, and then shares the results with its partners. The practical effect is the circumvention of domestic legal protections. If British law prohibits GCHQ from surveilling British citizens without a warrant, the NSA can surveil those citizens and share the results. If American law restricts the NSA's collection of Americans' communications, GCHQ — through programs like Tempora — can collect them and pass them along. The intelligence agencies of all five nations have access to a shared pool of surveillance data that is, in aggregate, vastly more comprehensive than anything any single nation could legally collect on its own.
The Five Eyes framework extends beyond signals intelligence. The alliance includes agreements on human intelligence, defense intelligence, geospatial intelligence, and the sharing of analysis and assessments. The partnership is so deep that in practice, the intelligence agencies of the Five Eyes nations operate less as separate organizations than as nodes in a single, distributed system. NSA personnel are embedded in GCHQ facilities and vice versa. Australian Signals Directorate operators work alongside NSA analysts. The integration is seamless and, for all practical purposes, invisible to the democratic institutions notionally responsible for overseeing each nation's intelligence activities.
The Snowden documents showed that the Five Eyes alliance had been used to target not only terrorist suspects but foreign governments, international organizations, and private companies. The NSA had tapped the personal cell phone of German Chancellor Angela Merkel — an ally. GCHQ had conducted surveillance on delegates at the G20 summit in London in 2009, monitoring their emails and phone calls to give British negotiators an advantage. The Australian Signals Directorate had spied on the Indonesian president and his wife. The Five Eyes had surveilled the United Nations, the International Atomic Energy Agency, the European Union, the African Union, and UNICEF. The scope of the surveillance bore no relationship to counterterrorism. It was intelligence collection in the service of geopolitical power — precisely the kind of activity that the Five Eyes' member nations routinely condemn when conducted by adversaries.
The alliance operates in a space beyond any single nation's democratic accountability. No parliament or congress oversees the Five Eyes as a whole. No court has jurisdiction over the alliance's collective operations. No treaty has been publicly ratified by any of the five nations' legislatures — the UKUSA Agreement was classified for its first sixty years, and its full text was not publicly released until 2010. The Five Eyes is, in structural terms, a shadow intelligence apparatus that operates by, for, and among the permanent security bureaucracies of five nations, sharing the intimate data of billions of human beings without meaningful democratic oversight.
The relationship between the surveillance state and Silicon Valley is a story of complicity, resistance, and the uncomfortable space between the two.
The Snowden documents made clear that the major technology companies had cooperated with the NSA's collection programs, whether willingly or under legal compulsion. Microsoft was the first company to join PRISM, in September 2007. Yahoo followed in 2008, Google and Facebook in 2009, Apple in 2012. The companies were served with directives under Section 702 of the FISA Amendments Act, and compliance was mandatory — refusal could result in contempt-of-court charges. Yahoo had, in fact, challenged a predecessor program in the FISA court in 2007 and lost. The court ruled that Yahoo must comply, and the ruling was classified. The company was legally prohibited from disclosing that it had fought the order, or that the order existed.
The tech companies' public reactions to the Snowden disclosures followed a pattern. First came denial — carefully worded statements asserting that the companies did not provide "direct access" to their servers and did not participate in any program that would allow "unfettered access" to user data. Then came outrage — particularly over MUSCULAR, the program that intercepted data between Google's and Yahoo's internal data centers without any legal process at all. Then came reform — or at least the appearance of reform. In the months following the Snowden revelations, the major tech companies invested heavily in encrypting data in transit between their data centers, closing the vulnerability that MUSCULAR had exploited. Google, Apple, and others expanded the use of end-to-end encryption, making it technically more difficult for government agencies to intercept communications even with a warrant.
The encryption debate that followed has been one of the most significant and unresolved conflicts of the digital age. Law enforcement and intelligence agencies have argued that strong encryption — particularly end-to-end encryption on messaging platforms like Signal, WhatsApp, and iMessage — creates "going dark" zones where criminals and terrorists can communicate beyond the reach of lawful surveillance. FBI Director James Comey made this argument repeatedly in 2014 and 2015, calling encryption an existential threat to public safety. Attorney General William Barr renewed the push in 2019, pressuring Facebook to abandon its plans to extend end-to-end encryption across its messaging platforms.
The technology companies and civil liberties organizations have countered that weakening encryption for law enforcement necessarily weakens it for everyone — that there is no mathematical backdoor that only the good guys can use. Any vulnerability built into an encryption system will be found and exploited by hostile intelligence agencies, criminal hackers, and authoritarian governments. The argument is not ideological but mathematical: you cannot build a lock that opens only for authorized keys. You can only build a lock that is strong or a lock that is weak.
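The structural point can be made concrete with a deliberately simplified sketch. The toy scheme below is a hypothetical illustration, not any real protocol: it uses a throwaway XOR stream derived from SHA-256 (never acceptable as real cryptography) to show that once every session key is also wrapped under a single "lawful access" escrow key, whoever holds that escrow key — an authorized agency, an insider, or a thief — can decrypt every conversation. The mathematics makes no distinction.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    Illustration only -- do not use for real cryptography."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Each conversation gets a fresh session key...
session_key = secrets.token_bytes(32)
ciphertext = keystream_xor(session_key, b"meet at the usual place")

# ...but a "lawful access" design also wraps every session key
# under one escrow key held by the authorities.
escrow_key = secrets.token_bytes(32)
wrapped = keystream_xor(escrow_key, session_key)

# Whoever holds -- or steals -- the escrow key recovers the session key,
# and with it the plaintext. The lock cannot tell good guys from bad.
stolen_session_key = keystream_xor(escrow_key, wrapped)
print(keystream_xor(stolen_session_key, ciphertext))
```

The escrow key is a single point of catastrophic failure: compromising one secret compromises every message ever protected under the scheme, which is the substance of the civil-liberties objection.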
The deeper question is one of structural power. The technology companies that dominate digital life — Google, Apple, Meta, Microsoft, Amazon — possess surveillance capabilities that dwarf those of most nation-states. They track the location, communications, purchases, browsing habits, social relationships, health data, and biometric information of billions of people. They do so not under the authority of secret court orders but under the terms of service that users click "agree" to without reading. Shoshana Zuboff's The Age of Surveillance Capitalism (2019) argues that these companies have created a new economic order based on the extraction of "behavioral surplus" — the data generated by human activity that exceeds what is needed to improve a service, harvested instead to predict and modify human behavior for profit. The surveillance state and surveillance capitalism are not the same thing. But they are symbiotic. They share infrastructure, share data, and share an underlying premise: that the comprehensive monitoring of human behavior is both technically feasible and, for those who control the infrastructure, irresistibly useful.
The government's defense of its bulk surveillance programs consistently relied on a distinction between content and metadata — between what you said and the record of who you called, when, for how long, and from where. Metadata, officials argued, was not constitutionally protected in the same way as content. The legal basis for this claim was Smith v. Maryland (1979), a Supreme Court case in which the court ruled that a person has no reasonable expectation of privacy in the phone numbers they dial, because they voluntarily convey that information to the telephone company. The ruling, issued when a pen register was a physical device attached to a single phone line, was used to justify the collection of the communications metadata of every person in the United States.
The metadata defense was always disingenuous, and those who understood the intelligence value of metadata knew it. General Michael Hayden, former director of both the NSA and the CIA, said in a public debate at Johns Hopkins University in 2014: "We kill people based on metadata." He was not being flippant. He was being precise. The American drone program — which carried out targeted killings in Pakistan, Yemen, Somalia, and elsewhere — relied heavily on metadata analysis to identify and locate targets. The NSA's analysis of communications metadata — call patterns, geolocation data, network mapping — was the primary intelligence product used to generate the "kill lists" that determined who would live and who would be struck by a Hellfire missile fired from a Predator drone. In some cases, the targets were identified entirely through metadata analysis, without ever intercepting the content of a single communication. The person was killed because of who they called, when they called, where they were when they called, and the pattern those calls formed when mapped against known networks.
Stewart Baker, the NSA's former general counsel, put it even more bluntly: "Metadata absolutely tells you everything about somebody's life. If you have enough metadata, you don't really need content." A person's metadata reveals who their doctor is, who their lawyer is, who their romantic partners are, what political organizations they belong to, whether they are having an affair, whether they are seeking treatment for addiction or mental illness, whether they attend a mosque or a church, and who their closest associates are. A comprehensive metadata record is not less intimate than the content of a conversation. In many cases, it is more intimate, because it reveals patterns that the person themselves may not be conscious of.
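The kind of inference Baker describes requires no sophisticated tooling. The sketch below is a hypothetical illustration (the subscribers, records, and thresholds are invented, and it resembles no actual agency system): given only call-detail records — caller, callee, timestamp, duration — it surfaces a person's frequent contacts, how much time they spend with each, and which relationships involve late-night calls, all without a single word of content.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical call-detail records: (caller, callee, ISO timestamp, seconds).
# No content is present -- only metadata.
records = [
    ("alice", "dr_miller", "2013-06-03T09:15:00", 420),
    ("alice", "dr_miller", "2013-06-10T09:20:00", 380),
    ("alice", "bob",       "2013-06-03T22:40:00", 1800),
    ("alice", "bob",       "2013-06-04T23:05:00", 2100),
    ("alice", "helpline",  "2013-06-07T02:12:00", 900),
]

def profile(subscriber, records):
    """Infer contacts and calling patterns from metadata alone."""
    contacts = Counter()            # how often each number is called
    total_time = defaultdict(int)   # cumulative seconds per contact
    late_night = Counter()          # calls placed between 22:00 and 05:00
    for caller, callee, ts, secs in records:
        if caller != subscriber:
            continue
        contacts[callee] += 1
        total_time[callee] += secs
        hour = datetime.fromisoformat(ts).hour
        if hour >= 22 or hour < 5:
            late_night[callee] += 1
    return contacts, total_time, late_night

contacts, total_time, late_night = profile("alice", records)
print("call counts:", dict(contacts))
print("talk time (s):", dict(total_time))
print("late-night calls:", dict(late_night))
```

Even this trivial pass over five records distinguishes a weekday-morning medical relationship from a recurring late-night personal one and flags a 2 a.m. call to a helpline — exactly the categories of intimate fact the metadata defense claimed were not at stake.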
Professor Edward Felten of Princeton University submitted a declaration to the ACLU's legal challenge against the NSA's metadata program explaining that modern metadata analysis can reveal "political affiliation, religious practices, and other sensitive information" and that the government's distinction between metadata and content was a "false dichotomy" that did not survive technical scrutiny. The Second Circuit Court of Appeals agreed, ruling in ACLU v. Clapper (2015) that the bulk collection of telephone metadata was not authorized by Section 215 of the Patriot Act. But by then, the program had been running for nearly a decade.
The most insidious consequence of mass surveillance is not what it discovers but what it prevents. The knowledge that communications are being monitored — even the suspicion that they might be — changes behavior. This is the chilling effect, and it is not theoretical. It has been measured.
A 2016 study by Jon Penney, published in the Berkeley Technology Law Journal, analyzed Wikipedia traffic data and found that after the Snowden revelations, there was a statistically significant decline in page views for Wikipedia articles related to terrorism — not because people had become less curious, but because they were afraid that reading about certain topics would attract government attention. The study documented a 20 percent drop in traffic to articles on topics the Department of Homeland Security had identified as terms that could trigger surveillance. People were self-censoring their reading habits based on the perceived risk of being watched.
A 2013 survey by PEN American Center found that one in six writers — and one in four writers who had written on topics related to the Middle East, military affairs, or the War on Terror — had avoided writing or speaking on a topic they thought would subject them to surveillance. Twenty-four percent had deliberately avoided certain topics in phone or email conversations. Sixteen percent had refrained from conducting internet searches or visiting websites on topics that might be considered controversial. The writers were not being censored. No one had told them what they could or could not write. But the awareness of being watched had produced the same result as censorship — a narrowing of expression, a retreat from the dangerous, the controversial, the unpopular.
The impact on journalism has been particularly severe. In the years following the Snowden revelations, multiple investigations documented the ways in which mass surveillance had degraded the ability of journalists to protect their sources. A 2014 report by Human Rights Watch and the ACLU, titled With Liberty to Monitor All, found that government surveillance was undermining press freedom in the United States. Journalists described sources who refused to communicate, who demanded elaborate security measures, or who simply went silent after the Snowden disclosures. National security reporters at major news organizations adopted encryption tools, burner phones, and in-person meetings as standard practice — measures that were once associated with Cold War spycraft but had become necessary for the basic practice of journalism in a democracy. James Risen of the New York Times, who was himself subjected to years of legal pressure to reveal a confidential source, wrote that the Obama administration's war on leaks had been "the most aggressive since the Nixon administration" and that mass surveillance was "the infrastructure that makes that war possible."
The chilling effect extends beyond journalism to the basic civic activities of a democratic society. When people know they are watched, they conform. They avoid associations that might attract scrutiny. They self-censor their political speech. They do not attend protests if they believe their presence will be recorded. They do not search for information about controversial topics. They do not contact their elected representatives about sensitive issues. The aggregate effect is not the silencing of dissent but its slow erosion — a gradual narrowing of the space in which free thought and free expression can operate. The panopticon does not need to watch everyone all the time. It needs only to create the belief that anyone might be watched at any time. The uncertainty is the mechanism. The conformity is the product.
If the Western surveillance model is built on secrecy — collect everything, process it in the dark, and deny its existence — China's model represents the opposite approach: surveillance as an explicit instrument of social control, announced, documented, and integrated into daily life.
China's Social Credit System, first outlined in a 2014 State Council document titled "Planning Outline for the Construction of a Social Credit System," is an evolving network of programs that use data collection and algorithmic scoring to rate the "trustworthiness" of individuals and businesses. The system integrates data from financial records, court records, social media activity, purchasing behavior, and — critically — the vast network of surveillance cameras that blanket Chinese cities. China has installed an estimated 700 million surveillance cameras nationwide, many equipped with facial recognition technology capable of identifying individuals in real time. The stated purpose is the promotion of "sincerity" and the punishment of "dishonesty" — categories so broad that they can encompass anything from financial fraud to jaywalking to posting politically sensitive content on Weibo.
The consequences of a low social credit score are tangible. Individuals on the system's blacklists have been blocked from purchasing airline tickets (over 23 million times by 2018, according to China's National Public Credit Information Center), from buying train tickets (over 6 million times), from enrolling their children in private schools, from staying in certain hotels, and from obtaining loans. Their names and photographs are displayed on public screens. Their phone ringtone, when others call them, is replaced with a message identifying them as "untrustworthy." The system is not fully unified — it operates as a patchwork of local and corporate scoring programs — but the trajectory is clear: the creation of a comprehensive behavioral management system in which every action is observed, recorded, scored, and consequenced.
The Western response to China's social credit system has been one of horror — and hypocrisy. The surveillance infrastructure that enables China's system is, in technical terms, not fundamentally different from the infrastructure that Snowden revealed in the West. The distinction is not one of capability but of application. Western governments collect the same data, from the same sources, using the same technologies. They simply do not — yet — use it for explicit social scoring. The question that China's system poses to Western democracies is not whether such a system is possible but whether the infrastructure that makes it possible can exist without being used that way. History suggests that capabilities, once built, are eventually used to their full extent.
The story of mass surveillance cannot be told without accounting for what happened to the people who told the truth about it. The pattern is consistent, and it is damning.
William Binney was a senior NSA cryptanalyst who spent thirty-two years at the agency and helped design its intelligence-gathering systems. After 9/11, he watched as the surveillance tools he had built were turned on the American public without the privacy protections he had designed into them. He resigned in October 2001 and, along with colleagues J. Kirk Wiebe and Edward Loomis, filed a complaint with the Department of Defense Inspector General. In 2007, the FBI raided his home, pointing guns at him as he stepped out of the shower. He was never charged with a crime. The message was clear.
Thomas Drake was a senior executive at the NSA who reported waste, fraud, and warrantless surveillance to his superiors, to the NSA Inspector General, and to the congressional intelligence committees — every channel the system said he was supposed to use. When nothing happened, he provided unclassified information to Baltimore Sun reporter Siobhan Gorman. In 2010, he was indicted under the Espionage Act — the same law used to prosecute spies. The case collapsed when it became clear that the classified information the government claimed Drake had leaked was in fact unclassified, and much of it had been previously published. Drake pleaded guilty to a misdemeanor. He was sentenced to community service. But his career was destroyed, his finances were ruined, and his security clearance — his livelihood — was permanently revoked. He now works at an Apple Store.
Edward Snowden fled the United States before his disclosures were published, knowing that the legal system offered no protection for a whistleblower who had exposed classified programs. He traveled to Hong Kong, where he met with Greenwald and Poitras, and then to Moscow, where he was stranded when the State Department revoked his passport while he was in transit. He has lived in Russia since June 2013 and was granted Russian citizenship in 2022. He was charged with theft of government property and two counts of violating the Espionage Act. The Espionage Act does not allow a defendant to argue that the information they disclosed was in the public interest. Under the law, the content and consequences of the disclosure are irrelevant. The act of disclosure is the crime.
Reality Winner, a twenty-five-year-old NSA contractor, was arrested in June 2017 for leaking a classified NSA report describing Russian interference in the 2016 election to The Intercept. She was sentenced to five years and three months in federal prison — the longest sentence ever imposed for an unauthorized disclosure to the media. She served more than four years before being released to a halfway house.
The message delivered to potential whistleblowers is unambiguous: the system will destroy you. Go through channels, and you will be ignored or retaliated against. Go to the press, and you will be prosecuted as a spy. The machinery of classification ensures that the public cannot know what it needs to know, and the machinery of prosecution ensures that those who try to tell them are punished with a severity ordinarily reserved for those who sell secrets to hostile powers. The surveillance state does not merely monitor. It protects itself.
In July 2021, a consortium of seventeen media organizations led by the French nonprofit Forbidden Stories, with technical analysis by Amnesty International's Security Lab, published the Pegasus Project — an investigation revealing that Pegasus spyware, developed by the Israeli surveillance company NSO Group, had been used to target the phones of journalists, human rights activists, lawyers, and political leaders in at least forty-five countries.
Pegasus is a "zero-click" exploit — it can compromise a smartphone without the target clicking a link, opening an attachment, or taking any action at all. Once installed, it gives the operator complete access to the device: messages, emails, photos, contacts, microphone, camera, location, and encrypted communications on apps like Signal and WhatsApp. The phone becomes, in effect, a surveillance device carried voluntarily by the target.
The Pegasus Project identified over 50,000 phone numbers selected for potential targeting by NSO Group's government clients. Among the confirmed targets was Hatice Cengiz, the fiancée of murdered Washington Post journalist Jamal Khashoggi, whose phone was compromised by Saudi Arabia. Multiple journalists at Al Jazeera, Le Monde, the Financial Times, CNN, the New York Times, and the Associated Press were targeted. Mexican journalist Cecilio Pineda Birto was selected as a target weeks before he was murdered in 2017. Indian journalists, Hungarian journalists, Azerbaijani journalists — the pattern repeated across authoritarian and nominally democratic states alike. French President Emmanuel Macron, South African President Cyril Ramaphosa, Pakistani Prime Minister Imran Khan, and Iraqi President Barham Salih were among the heads of state whose numbers appeared in the leaked data.
NSO Group maintained that Pegasus was sold only to vetted government agencies for the purpose of combating terrorism and serious crime. The evidence overwhelmingly contradicted this claim. The tool was being used by governments to monitor the people who held those governments accountable — journalists, opposition politicians, human rights defenders. It was the privatization of mass surveillance, available for purchase by any government willing to pay, deployed against the very people democratic societies depend on to function.
The history of mass surveillance traces a single arc: from Project SHAMROCK's telegraph intercepts to the NSA's collection of the world's internet traffic, from watch lists compiled by hand to algorithmic systems that process billions of communications per day, from rooms full of filing cabinets to cloud infrastructure that stores the digital lives of entire populations. At every stage, the expansion has been justified by threat — the Cold War, the War on Terror, cybercrime, child exploitation — and at every stage, the capabilities built to address the threat have been applied far beyond their stated purpose. SHAMROCK was built to monitor Soviet communications and was used to surveil Martin Luther King. TIA was built to find terrorists and became the blueprint for total domestic surveillance. PRISM was authorized to target foreign intelligence threats and collected the communications of millions of Americans who had done nothing wrong.
The question that mass surveillance poses is not a technical question or a legal question. It is a question about the nature of freedom. Jeremy Bentham designed the panopticon in 1787 as the perfect prison — a circular structure in which a single watchman could observe all inmates without the inmates knowing whether they were being watched at any given moment. The genius of the design was that it did not require constant observation. It required only the possibility of observation. The uncertainty itself was the disciplinary mechanism. Michel Foucault, in Discipline and Punish (1975), recognized that the panopticon was not merely a building but a principle — a model of power that operates not through force but through visibility, not through punishment but through the internalization of the watcher's gaze.
The digital panopticon that exists today is Bentham's design scaled to encompass the entire world. The watchtower is the NSA's data centers. The cells are our phones, our laptops, our smart speakers, our cars. The inmates are everyone. And the most remarkable feature of this panopticon is that the inmates built their own cells, carried them willingly, and paid for the privilege.
Whether a society that is watched in its entirety can meaningfully be called free is a question that admits no comfortable answer. The formal structures of democracy remain — elections, legislatures, courts, a constitution that guarantees the right to be secure in one's person, papers, and effects against unreasonable searches and seizures. But the substance of that right — the expectation that your thoughts, your associations, your reading habits, your movements, your intimate conversations are your own — has been hollowed out. The Fourth Amendment has not been repealed. It has been rendered moot by technology, by secret legal interpretations, by a FISA court that approved 99.97 percent of the government's requests, and by a public that traded its privacy for convenience and a vague assurance that the watchers were keeping it safe.
Frank Church saw the abyss in 1975 and warned against crossing the bridge. The bridge was crossed. The question now is not whether we can go back — the infrastructure is built, the data is collected, the precedents are set — but whether we will acknowledge where we are. The first step in any reckoning is to see the cell clearly. That is what the whistleblowers — Binney, Drake, Klein, Snowden, Winner — tried to make possible. They paid for it with their careers, their freedom, and their lives as they knew them. The least the rest of us can do is look at what they showed us and understand what it means.
A society that is watched in its entirety is not a society that has sacrificed some freedom for some security. It is a society that has built the infrastructure of totalitarianism and called it something else. Whether the switch is ever fully flipped — whether the data collected in the name of counterterrorism is ever used for the comprehensive social control that the technology makes possible — is a question that will be answered not by the technology but by the people who control it. And the first thing to understand about those people is that you do not know who they are, you did not elect them, and they are watching.