In 2003, Oxford philosopher Nick Bostrom published a paper that reframed an ancient question in the language of probability theory. The argument was simple; the conclusion was not.
If any civilization, anywhere, ever develops the computational power to run detailed simulations of conscious beings — ancestor simulations, historical recreations, experimental universes — then the number of simulated minds in existence would vastly outnumber the "real" ones. By pure statistics, any given conscious being is overwhelmingly more likely to be simulated than biological. Including you. Including right now.
This is not science fiction. It is a logical argument, and it has not been refuted.
Bostrom's argument takes the form of a trilemma. At least one of the following must be true:
1. Almost all civilizations go extinct before reaching the computational power required to run simulations.
2. Almost all civilizations that reach that level choose not to run simulations.
3. We are almost certainly living in a simulation right now.
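The statistical core of the argument can be made concrete with a toy calculation. All of the numbers below are invented for illustration; nothing here is empirical, and the function name is ours, not Bostrom's.

```python
# Toy illustration of the statistical core of Bostrom's argument.
# Every number below is an assumption chosen purely for illustration.

def p_simulated(real_minds: float, sims_per_civilization: float,
                minds_per_sim: float) -> float:
    """Probability that a randomly chosen mind is simulated, assuming
    indifference between simulated and biological observers."""
    simulated_minds = sims_per_civilization * minds_per_sim
    return simulated_minds / (simulated_minds + real_minds)

# Even modest assumptions swamp the biological population:
p = p_simulated(real_minds=1e11,           # minds in one "base" civilization
                sims_per_civilization=1e6,  # ancestor simulations ever run
                minds_per_sim=1e11)         # minds inside each simulation
print(p)  # extremely close to 1
```

The point of the sketch is that the conclusion is insensitive to the exact inputs: as long as simulations are run at all, the simulated population dominates by orders of magnitude.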
The first option is the Great Filter from The Fermi Paradox — civilizations destroy themselves before reaching this capability. The second requires a universal, species-wide decision not to simulate, which seems improbable given the diversity of motivations any advanced civilization would contain. The third is the simulation hypothesis itself.
The uncomfortable part is that options one and two are difficult to defend. Computing power has grown exponentially for decades. Video games already simulate worlds with increasing fidelity. The trajectory from Pong to photorealistic virtual reality to full neural simulation is a matter of engineering, not physics. If it can be done, someone will do it. And if they do it once, they will do it trillions of times.
The simulation hypothesis is not directly testable, which places it in an uncomfortable epistemological position. But its proponents point to features of reality that are at least consistent with a simulated universe.
Quantum mechanics behaves suspiciously like optimization. Particles exist in superposition — a combination of possible states rather than a single definite value — until they are measured, at which point they "collapse" into one outcome. This is strikingly similar to how a video game engine works: it does not render what no one is looking at. Why would a physical universe behave this way? A simulated one has an obvious reason — computational efficiency.
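The rendering analogy is just lazy evaluation: compute a value only when it is first observed, then keep it consistent afterward. A minimal sketch, assuming a toy world where each region's state is derived deterministically from a seed on first observation — this illustrates the programming pattern, not a claim about physics:

```python
# A minimal sketch of the "render on observation" analogy.
# LazyWorld and its methods are invented names for this illustration.

import random

class LazyWorld:
    def __init__(self, seed: int):
        self.seed = seed
        self._rendered = {}  # regions computed so far

    def observe(self, region: tuple) -> float:
        """Compute a region's state on first observation, then cache it
        so later observations always agree."""
        if region not in self._rendered:
            rng = random.Random(hash((self.seed, region)))
            self._rendered[region] = rng.random()  # "collapse" to a value
        return self._rendered[region]

world = LazyWorld(seed=42)
value = world.observe((3, 7))           # rendered now, on demand
assert world.observe((3, 7)) == value   # consistent on re-observation
assert len(world._rendered) == 1        # unobserved regions cost nothing
```

The design choice worth noticing is the cache: laziness saves computation only on what is never observed, while the cache guarantees that repeated observations never contradict each other — the same pair of properties the analogy attributes to quantum measurement.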
The Planck length — approximately 1.6 × 10⁻³⁵ meters — appears to be the minimum meaningful distance in the universe. Below this scale, our current theories of space break down. This looks remarkably like a resolution limit. A pixel size.
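The figure quoted above is not arbitrary: the Planck length is derived from three fundamental constants as l_P = √(ħG/c³), and the arithmetic can be checked directly:

```python
# Deriving the Planck length from its defining constants:
# l_P = sqrt(hbar * G / c^3), using CODATA values.

from math import sqrt

hbar = 1.054571817e-34  # reduced Planck constant, J·s
G = 6.67430e-11         # gravitational constant, m³·kg⁻¹·s⁻²
c = 2.99792458e8        # speed of light, m/s

planck_length = sqrt(hbar * G / c**3)
print(f"{planck_length:.2e} m")  # → 1.62e-35 m
```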
The speed of light functions as a maximum processing speed — an upper bound on how fast information can propagate through the system. In a simulation, such a limit would prevent distant regions from interacting in ways that would overload the computation.
None of this is proof. All of it is suggestive.
The deepest challenge to the simulation hypothesis is consciousness itself — specifically, The Hard Problem. If consciousness is purely computational, then a sufficiently detailed simulation would produce genuine conscious experience, and the beings inside it would have no way to distinguish their reality from "base" reality. The simulation would be, for all practical and moral purposes, real.
But if consciousness is not computational — if subjective experience requires something beyond information processing — then a simulation might produce beings that behave as if they are conscious without actually being conscious. Philosophical zombies running on cosmic hardware.
This question — whether simulated consciousness is real consciousness — may be the most important question the simulation hypothesis raises. And it is, at bottom, the same question that The Hard Problem has been asking all along: what is consciousness, and can it be reproduced?
If we are in a simulation, someone built it. The question of who, and why, is the deepest rabbit hole of all.