There is something it is like to be you. Right now, reading these words, there is an experience happening — light hitting your eyes, meaning forming in your mind, a sense of being here. This is consciousness. And after centuries of philosophy and decades of neuroscience, nobody can explain why it exists.
We can map every neuron in the brain. We can trace electrical signals from your retina to your visual cortex. We can describe, in exquisite mechanical detail, how information flows through neural tissue. None of this explains why any of it feels like something. A sufficiently advanced robot could process the same information without any inner experience at all. So why do we have one?
This is not a gap in our knowledge — it is a gap in our framework. Physics describes matter, energy, space, and time. Nowhere in those equations is there a variable for experience. Consciousness is not predicted by any law of nature. It simply shows up, unexplained, riding on top of biology.
In 1974, philosopher Thomas Nagel published a paper that crystallized the problem. He asked a deceptively simple question: what is it like to be a bat? A bat perceives the world through echolocation — a sensory modality humans do not possess. We can study bat neurology exhaustively, map every neural pathway involved, and still never know what it feels like from the inside to navigate by sonar. "Fundamentally an organism has conscious mental states," Nagel wrote, "if and only if there is something that it is like to be that organism — something it is like for the organism." The subjective character of experience, he argued, is irreducible. No amount of objective, third-person description can capture a first-person fact.
Consciousness is not one topic among many in Apeirron — it is the substrate on which every other question rests. Whether we ask about The Simulation Hypothesis, Altered States, the existence of God, or The Nature of Time, we are always asking from inside a conscious experience. We cannot step outside it. Every observation, every theory, every doubt occurs within the field of awareness.
This makes consciousness uniquely strange as a subject of inquiry. It is both the thing doing the investigating and the thing being investigated. The telescope is pointed at itself.
Neuroscience has established strong correlations between brain activity and conscious experience. In 1990, Francis Crick — co-discoverer of the DNA double helix — and neuroscientist Christof Koch published a landmark paper proposing a research program to identify the "neural correlates of consciousness" (NCC): the minimal neural mechanisms sufficient for any specific conscious experience. This launched a generation of research mapping which brain regions light up during which experiences.
The results are striking. Damage to specific brain regions predictably alters perception, memory, and personality. Psychedelics can radically reshape experience. The default mode network — a set of brain regions active during rest and self-reflection — appears to be central to maintaining the sense of a stable self.
But correlation is not explanation. Knowing that neural activity in the visual cortex correlates with the experience of seeing blue does not tell us why that particular pattern of firing produces that particular shade of felt experience. Joseph Levine called this the "explanatory gap" in 1983 — and it remains completely open. David Chalmers later formalized this as The Hard Problem, and it has dominated the field ever since.
Three lines of experimental work have been especially revealing — not because they solve the mystery of consciousness, but because they make it stranger.
Binocular rivalry is deceptively simple. Present one image to the left eye and a completely different image to the right eye. The brain cannot fuse them. Instead of seeing a blend, you experience spontaneous alternation — one image dominates for a few seconds, then the other takes over, back and forth, endlessly. The sensory input stays constant, yet conscious experience switches. Researchers can watch neurons in the visual cortex flip their allegiance in real time, tracking which image the subject reports seeing. The physical stimulus has not changed. Something deeper has. Binocular rivalry isolates the moment where neural activity and conscious experience come apart — and nobody can explain why one particular pattern of firing gets to be "the one you see."
Split-brain patients reveal something even more unsettling. In the 1960s, Roger Sperry and Michael Gazzaniga studied patients whose corpus callosum — the thick bundle of nerve fibers connecting the brain's two hemispheres — had been surgically severed to treat severe epilepsy. The results earned Sperry the 1981 Nobel Prize in Physiology or Medicine. When an image was shown only to the left visual field (processed by the right hemisphere), the patient could not verbally report what they saw — because language lives in the left hemisphere. But the left hand (controlled by the right hemisphere) could point to the correct object. In Sperry's own words from his 1968 paper: "Each hemisphere... has its own... private sensations, perceptions, thoughts, and ideas all of which are cut off from the corresponding experiences in the opposite hemisphere." Two separate streams of consciousness in one skull. If consciousness is unified, what happens when you physically divide the organ that supposedly produces it?
Blindsight is perhaps the strangest of all. Patients with damage to the primary visual cortex (area V1) report being completely blind in part of their visual field. They insist they see nothing. But when forced to guess — "Is the object on the left or the right?" — they guess correctly at rates far above chance. They can navigate around obstacles they cannot see. They respond to facial expressions they report not perceiving. Information is reaching the brain and guiding behavior, but it is not reaching consciousness. Blindsight demonstrates that there are two separable things: processing visual information, and experiencing visual information. Materialism must explain why one occurs without the other. Dualism must explain how a non-physical mind could be selectively disconnected from certain inputs but not others.
You are having a single, unified experience right now. You see the words on this screen, hear the ambient sounds around you, feel the pressure of your body against a surface, and weave all of it into one seamless moment of being. But your brain is not a unified thing. It is roughly 86 billion neurons, each firing independently, scattered across dozens of specialized regions. Color is processed in one area, shape in another, motion in a third, sound in yet another. There is no single neuron, no master region, no "Cartesian theater" where it all converges.
So how does a unified experience arise from billions of separate processes?
This is the binding problem, and it is distinct from The Hard Problem — though the two are deeply entangled. Even if you could explain why neural activity feels like something, you would still need to explain how the brain binds the separate threads of processing into a single tapestry of experience. Some researchers have proposed that synchronized neural oscillations — particularly gamma-wave activity around 40 Hz — serve as a binding mechanism, with neurons that fire in synchrony being "bound" into one conscious percept. But knowing that synchronized firing correlates with binding tells us nothing about why synchrony produces unity. The binding problem reproduces the explanatory gap at a different level.
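To make the hypothesis concrete, here is a deliberately cartoonish Python sketch in which perceptual features get "bound" simply because their firing phases nearly coincide. The feature names and phase values are invented for illustration; real gamma-band binding, if it exists at all, is nothing this tidy.

```python
def bind_by_synchrony(phases, tolerance=0.1):
    """Group features whose firing phases (radians) lie within `tolerance`
    of their nearest neighbor. A caricature of binding-by-synchrony,
    not a neural model."""
    groups = []
    for name, phase in sorted(phases.items(), key=lambda item: item[1]):
        if groups and phase - groups[-1][-1][1] < tolerance:
            groups[-1].append((name, phase))   # in phase: same percept
        else:
            groups.append([(name, phase)])     # out of phase: new percept
    return [[name for name, _ in group] for group in groups]

# Invented features and phases: the "red" and "circle" neurons fire together,
# the "blue" and "square" neurons fire together, the two groups fire apart.
features = {"red": 0.00, "circle": 0.02, "blue": 1.50, "square": 1.52}

print(bind_by_synchrony(features))  # [['red', 'circle'], ['blue', 'square']]
```

The toy makes the gap vivid: the grouping is trivially mechanical, and nothing in it says why synchronized firing should feel like one unified percept.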
Two major scientific theories of consciousness have emerged in the last three decades, and they disagree about almost everything.
Global Workspace Theory (GWT), proposed by Bernard Baars in his 1988 book A Cognitive Theory of Consciousness, uses the metaphor of a theater — but not the Cartesian kind that Daniel Dennett & Modern Materialism dismantled. In Baars' model, the brain contains a vast number of unconscious specialist processors (vision, memory, language, planning) that work in parallel and compete for access to a limited-capacity "global workspace." When information wins this competition and enters the workspace, it is broadcast widely to all other processors — and that broadcast is what we experience as consciousness. Consciousness, in GWT, is not a special substance or a mysterious glow. It is a specific functional architecture: the moment when information becomes globally available.
GWT has a clean elegance. It explains why we can only consciously attend to one thing at a time (the workspace has limited capacity). It explains why unconscious processing is so much faster than conscious thought (most work happens outside the workspace). It predicts specific neural signatures — a "global ignition" pattern in which activity in prefrontal and parietal cortex suddenly spikes when a stimulus becomes conscious. These predictions have found substantial, though not unequivocal, experimental support.
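Baars' architecture is easiest to grasp as a toy program. The sketch below is a caricature under loose assumptions, not Baars' model: a few invented specialist "processors" bid with a salience score, the single winner occupies the limited-capacity workspace, and its content is what gets broadcast.

```python
class Workspace:
    """Toy Global Workspace (a caricature of GWT, not Baars' model).
    Specialist processors compete; the single winner's content is
    broadcast to all the others."""
    def __init__(self, processors):
        # processors: name -> callable(stimulus) -> (salience, content)
        self.processors = processors

    def step(self, stimulus):
        # Unconscious specialists all process the stimulus in parallel.
        bids = {name: proc(stimulus) for name, proc in self.processors.items()}
        # Limited capacity: only the most salient bid enters the workspace.
        winner = max(bids, key=lambda name: bids[name][0])
        # "Broadcast": the winning content is what the system as a whole,
        # in GWT's terms, is now conscious of.
        return winner, bids[winner][1]

# Hypothetical specialists; the salience numbers are invented.
processors = {
    "vision":  lambda s: (0.9 if "looming object" in s else 0.2, "object ahead"),
    "hearing": lambda s: (0.8 if "loud noise" in s else 0.1, "sudden sound"),
    "memory":  lambda s: (0.3, "background recollection"),
}

ws = Workspace(processors)
print(ws.step({"looming object"}))  # ('vision', 'object ahead')
```

Note what the toy captures and what it omits: the limited capacity and the broadcast fall out of the architecture, but nothing in the code explains why winning the competition should feel like anything.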
Integrated Information Theory (IIT), developed by Giulio Tononi and first published in BMC Neuroscience in 2004, takes a radically different approach. IIT starts not with the brain but with the mathematics of experience itself. Tononi argues that consciousness has specific axiomatic properties — it is intrinsic, structured, specific, unified, and definite — and then derives a mathematical framework that any physical system must satisfy to be conscious. The key quantity is phi (Φ), a measure of how much a system's information is integrated — meaning how much the whole exceeds the sum of its parts. High phi means rich consciousness. Zero phi means no consciousness.
The implications diverge wildly from GWT. In IIT, consciousness is not about function or information broadcasting. It is about the intrinsic causal structure of a system. A digital computer, no matter how sophisticated, might have low phi — because its transistors are largely modular and independent — while a simple biological network might have high phi. This means IIT could declare your laptop unconscious and a honeybee conscious, which is exactly what Tononi has suggested. And because even very simple systems can have nonzero phi, IIT leads, by its own logic, toward Panpsychism.
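Real phi is notoriously hard to compute, requiring a search over all partitions of a system's cause-effect structure. As a toy stand-in, the sketch below computes total correlation (multi-information), a much simpler quantity that, like phi, is zero when a system's parts are statistically independent and grows as they become integrated. It is not IIT's phi, only a gesture at the intuition that the whole can exceed the sum of its parts.

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as a mapping
    from outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def multi_information(joint):
    """Total correlation: sum of the parts' entropies minus the whole's.
    Zero iff the variables are independent. A crude proxy for
    'integration', emphatically NOT IIT's phi."""
    n = len(next(iter(joint)))            # number of variables per state
    marginals = [{} for _ in range(n)]
    for state, p in joint.items():
        for i, value in enumerate(state):
            marginals[i][value] = marginals[i].get(value, 0.0) + p
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two independent fair coins: the whole adds nothing to the parts.
independent = {state: 0.25 for state in product((0, 1), repeat=2)}

# Two perfectly coupled nodes (always in the same state): one full bit
# of integration, the maximum for a pair of binary variables.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(multi_information(independent))  # 0.0
print(multi_information(coupled))      # 1.0
```

In IIT's spirit, the coupled pair is "more than its parts" because neither node's statistics alone predict the joint behavior; the independent pair decomposes without remainder.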
In 1998, neuroscientist Christof Koch made a wager with philosopher David Chalmers: that within 25 years, science would discover a clear neural signature of consciousness. Koch went on to become IIT's most prominent scientific champion. In June 2023, at the annual meeting of the Association for the Scientific Study of Consciousness in New York, the results of a major adversarial collaboration — designed specifically to test the predictions of both IIT and GWT — were announced. Neither theory was fully confirmed. IIT's specific predictions about the posterior cortex as the seat of consciousness received partial support, but its core mathematical claims were not validated. GWT's predictions about prefrontal involvement likewise received only partial support.
Koch lost the bet. He conceded publicly and sent Chalmers a case of fine Portuguese wine. But the deeper lesson was more sobering than a lost wager: after 25 years of unprecedented neuroscientific progress — brain imaging, optogenetics, massive computational models — we are not appreciably closer to explaining why any of this processing feels like something. The neural correlates have been refined. The mystery has not budged.
In 1989, the physicist Roger Penrose published The Emperor's New Mind, arguing that consciousness cannot be the product of computation. His argument was mathematical: Penrose invoked Gödel's incompleteness theorems to claim that human mathematical understanding involves non-computable processes — things no algorithm could ever do. If consciousness is non-computable, then it cannot arise from the computational activity of neurons (which are, at the classical level, essentially biological computers). Something else must be going on.
Penrose proposed that this "something else" is quantum mechanics. Together with anesthesiologist Stuart Hameroff, he developed Orchestrated Objective Reduction (Orch-OR), a theory claiming that consciousness arises from quantum computations in microtubules — protein structures inside neurons that are far smaller than synapses. In Orch-OR, quantum superpositions in microtubules undergo a special kind of collapse — "objective reduction" — governed by quantum gravity, and these collapse events are the elementary moments of conscious experience.
The theory is wildly controversial. Most neuroscientists consider the brain too warm and wet for quantum coherence to survive long enough to matter. But experiments on photosynthetic bacteria have demonstrated quantum coherence at biological temperatures for longer than physicists had thought possible, and some researchers have reported suggestive evidence of quantum effects in microtubules. Orch-OR remains a minority position, but it refuses to die — because the mainstream alternatives have not solved the problem either.
Perhaps nothing exposes our ignorance more starkly than general anesthesia. Every day, millions of people are rendered unconscious by anesthetic drugs. The patient goes under. Surgery is performed. The patient wakes up. It works reliably. But here is the discomfiting truth: we do not know what anesthesia does to consciousness. We know which receptors the drugs bind to. We know that they disrupt certain patterns of neural communication — particularly the long-range feedback connections between cortical regions that both GWT and IIT consider important. We know that the loss of consciousness under anesthesia correlates with a breakdown of integrated information processing.
What we do not know is whether anesthesia eliminates consciousness or merely disconnects it from memory and behavior. The patient cannot report, cannot respond, cannot form memories. But is the lights-are-on-but-nobody's-home metaphor accurate? Or is it more like a phone that is receiving calls but has no speaker? Some patients under anesthesia show neural responses to meaningful stimuli — their brains react to their own names, to emotional words — despite being completely unresponsive. Are they having experiences that are simply not being recorded? We do not know. We can switch consciousness off — or at least switch off everything we can measure about it — and we have no idea what we are switching off.
This is the scandal at the heart of Materialism. We have a technology that manipulates consciousness directly, and we cannot explain what it manipulates.
In 2018, Chalmers introduced a further twist: the "meta-problem" of consciousness. Set aside The Hard Problem for a moment — set aside the question of why experience exists. Ask instead: why do we think there is a hard problem? Why do physical creatures made of neurons report that consciousness is mysterious? Why do we have the intuition that subjective experience cannot be explained in physical terms?
This is a question that Materialism should be able to answer. Even if the hard problem is genuine, the meta-problem is a problem about behavior and cognition — about why brains generate reports of mystery. And if a materialist explanation of the meta-problem turns out to be fully satisfying — if we can explain why we think consciousness is mysterious without invoking anything beyond physics — then perhaps the hard problem dissolves after all. Perhaps the mystery is not in reality but in us.
Or perhaps not. Perhaps the meta-problem has a simple answer (our introspective mechanisms cannot fully model their own operation) and the hard problem remains untouched. Chalmers left the question open. It sits there, a problem about a problem, turtles all the way down.
If consciousness is just a byproduct of computation, then sufficiently advanced AI will be conscious — with all the moral weight that implies. If consciousness is fundamental to reality, as Panpsychism and Idealism propose, then the universe is not the dead mechanical place physics describes — it is, in some sense, alive. If consciousness can exist independent of the brain — as experiences reported during Altered States sometimes suggest — then death may not be what we think it is. If Penrose is right that consciousness is non-computable, then no AI will ever be truly conscious, no matter how intelligent it appears.
Every answer rewrites the world. And the honest truth is: we do not have the answer yet. We barely have the right question.