
I am Not Living in a Computer Simulation, and Neither Are You

The notion that we’re all just computer simulations living in a simulated universe—once the stuff of late-night college dormitory bull sessions—has now resurfaced, having been espoused by (among other eminences) a world-famous astrophysicist and an Internet entrepreneur billionaire.

The notion is the latest manifestation of what was perhaps first contemplated (and then, at least to his own satisfaction, disproved) by Descartes, whose cogito, ergo sum—I think, therefore I am—was the first step in an attempt to figure out what can be reliably known, although he considered not a computer program creating the illusion of his body and his world, but an evil demon. The idea is a close cousin of the philosophers’ thought experiment (or parlor trick) known as “brain-in-a-vat,” which is said to have inspired the Matrix movies. Notice, though, that brain-in-a-vat requires a real brain in a real vat and the Matrix movies had real brains in real people plugged into their simulated world. The proposition that we’re all just computer simulations in a simulated universe eliminates the vat, the brain, the people, and the world. It can also be seen as the nerd’s version of the notion that we’re all simply dreams in the mind of God—perhaps the central creed of a Church of Computer Science.

The computer simulation argument proceeds along these lines:

  • The universe contains a vast number of stars.
  • Some of these stars have planets.
  • Some of these planets must be like Earth.
  • Since intelligent life arose and eventually invented computers on Earth, intelligent life must have arisen and invented computers on some of these planets.
  • It is (or inevitably will be) possible to simulate intelligent life inhabiting a simulated reality on a computer.
  • Since it’s possible, it must have been done.
  • There must be a vast number of such simulations on a vast number of computers on a vast number of planets.
  • Since there’s only one real universe but there’s a vast number of simulations, the probability that you’re living in a simulation approaches one, while the probability that you’re living in the real universe approaches zero.
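The final step of the argument can be made concrete with a toy calculation. If there were N simulated universes and one real one, and you were equally likely to inhabit any of them, the chance that yours is the real one would be 1/(N + 1). This is only a sketch of the argument's arithmetic, not an endorsement of its premises:

```python
# Toy illustration of the simulation argument's final step:
# with N simulations and 1 real universe, and assuming (as the
# argument does) that you are equally likely to be in any of the
# N + 1 worlds, the probability you are in the real one is 1/(N + 1).
def p_real(n_simulations: int) -> float:
    return 1.0 / (n_simulations + 1)

for n in (1, 1_000, 1_000_000_000):
    print(f"{n} simulations -> P(real) = {p_real(n)}")
```

As N grows without bound, P(real) goes to zero, which is all the "approaches one" step amounts to; everything interesting is hidden in the premises that get you to a large N in the first place.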

Having no empirical evidence or testable implications, this argument is not science or even scientific speculation. In the language of pragmatism and logical positivism, if the notion can’t possibly make any testable difference from the belief that we’re … really living in a real universe, the notion is worse than wrong—it’s meaningless.

It’s also remarkably anthropocentric. Although the rise of our kind of technological intelligence seems inevitable in retrospect, every step leading to Homo sapiens and our computers was both extremely rare and entirely accidental. All the crucial steps in this chain appear to have occurred only once on Earth: multicellular life, intelligent life, Homo sapiens, and computers. In particular, there’s little evidence that there’s any selection pressure toward greater intelligence, and there’s even less evidence that, except for humans, organisms with greater intelligence inevitably invent anything, including computers. And, even if another form of life on another planet developed computers, on what basis do we assume that such a project would even occur to them? Maybe they’d find a better use—such as solving the problem of the heat death of the universe.

The notion that you’re a simulation is, essentially, cybernetic solipsism: there’s little reason to argue that anyone else in your simulated universe is conscious—to achieve verisimilitude, there’d be no need to actually program anyone else’s consciousness but yours—just as there’d be no need to simulate the entire universe in full detail, only the part of the universe you happened to be encountering. Believing that everyone else in your simulated world is conscious is like thinking that the people in your dreams are also actually conscious. (Although perhaps when you say you think I’m conscious, too, you’re just being polite.)

Speaking of dreams, it must be conceded that we’re not all that hard to fool—after all, we believe in the reality of our dreams as we’re dreaming them—it’s only upon awakening that we realize how insubstantial and unconvincing our dreaming reality was. But we do realize it, by contrast with the richness of our waking experience: its persistence, its detail, its continuity, its logic and consistency.

In our thought experiment of simulated consciousnesses living in a simulated world, our imagined computer is capable of producing detail equivalent to our waking reality at unlimited speed, and would be flawlessly convincing. Having to deal with petabytes of data nearly instantaneously, this imagined computer would have to be vastly more powerful than even the most powerful supercomputer now in existence, and would have to run a program vastly more complicated (and less error-ridden) than any ever yet written—call it a superdupercomputer.

There’s also the issue of whether, in the simulation, you’ve actually lived as long as you think you have, or whether your memories and experiences are also simply simulated. That might simplify the programming—you could be started at, say, age 21, and the program wouldn’t have to deal with all the complexity of you maturing from an infant to an adult and learning all the things you learned over that time. (It would be similar to the argument that when God created the world 6,000 years ago He included all the evidence that scientists now interpret to conclude that the world is billions of years old.)

But even a superdupercomputer wouldn’t produce even a single conscious being. The crucial move in the argument is that the simulation of a human mind would actually be conscious in the same sense that you and I are. Your computer simulation wouldn’t simply behave exactly like a real person, it would actually feel pain, pleasure, lust, fear, anger, love, nausea, angst, ennui, and everything else you can feel. It would actually experience the same optical (and other sensory) illusions you do. It would feel what you feel when you get sick, or when you drink or take drugs. It would fall asleep and dream, and then wake up to realize that it was only dreaming. Presumably, it would even die.

The argument that a sufficiently complex computer program would be conscious in the same way you and I are goes something like this:

  • The brain is an information processor.
  • A computer is an information processor. (Computers used to be called data processors, but they’ve been promoted.)
  • A computer can be programmed to process the same sort of information the brain processes in the same way that the brain processes information.
  • The conscious mind arises from information processing in the brain.
  • Therefore, a conscious mind will arise from equivalent information processing on a computer.

The argument depends crucially on the concept of information, which isn’t as straightforward as it perhaps appears. In some contexts, we mean physical properties that, when we look at them, are traces of a prior (or perhaps current) state of affairs. This sort of information includes, for example, tree rings, ice cores, geological layers, fossils, forensic crime scene evidence, photographs, movies, and audio recordings. In these cases, the physical properties properly interpreted give us evidence of such things as the age of a tree or a glacier, the age of a geological formation, the presence of certain animals during certain periods of time, what happened during the commission of a crime, and so on.

In other contexts, physical properties have been intentionally set, directly or indirectly, by conscious agents (on our world that presumably means people) to be a signal—that is, they are intentional representations or transmissions. For example, the words I’m writing and you’re reading here are subject to your interpretation. To someone who’s never encountered written English, these words are just meaningless marks on paper (or screen). In fact, until the signs or signals are interpreted, they can’t be said to contain information at all—only data, despite the fact that the relevant field of study has been dubbed information theory (rather than, more accurately, data transmission theory or signal theory). In some cases, the question of whether or not a piece of data is a signal is a problem. For example, in the search for extraterrestrial intelligence (SETI), it can be an open question whether a particular stream of electromagnetic radiation from a particular source actually carries any signals from an alien intelligence or is simply a natural phenomenon.

The distinction between data and signals is crucial to our understanding of the brain. To simplify matters, we’ll begin with the simple reflex arc. In a reflex arc an afferent nerve responds to a stimulus (for example, a doctor’s hammer hitting your knee just below the kneecap) in a way that makes an efferent nerve discharge (causing, for example, your knee to jerk). In medical and physiological textbooks of the 19th century, the reflex arc was described in terms of discharge, conveyance, conversion, and the like. (Concepts of information, messages, signals, and so on were all available in the 19th century—but the authors of those texts didn’t liken the reflex arc to a telegraph.) In short: the reflex arc isn’t processing information. As it is for the nerves in a reflex arc, so it is for the rest of the nervous system, including the brain.

Although we find the analogy or metaphor compelling, the brain isn’t actually a computer. What the neurons in the brain are doing isn’t dependent on their interpretation—although neuroscientists try to interpret the data from, for example, a particular neuron in the visual cortex. Our recordings of brain activity (for example, fMRIs) provide us data—but it’s not data being processed by the brain itself. Our neurons are not interpreting signals, they’re simply behaving as they evolved to behave.

Contrast this with a computer. A computer contains, processes, and displays data like a highway road sign consisting of a rectangular array of light bulbs. As we drive by, we can interpret the pattern of light as letters and words, but the message we read is actually nowhere contained in the display. Imagine a space alien interpreting the display as a binary code, with each column of eight light bulbs conveying one byte. How would they interpret a sign that to us read DANGER—CONSTRUCTION AHEAD? A computer is processing data (now, information) only because we interpret it as doing so; a brain behaves as it does without interpretation. Thus, we have no foundation to assert that a computer running a program that simulates the brain would actually be conscious—what, in the days of GOFAI (good old fashioned artificial intelligence) used to be called Strong AI. (Ironically, Strong AI is often espoused by the very same people who assert that we have no basis for assuming that anyone other than ourselves is really conscious, or that consciousness itself—even our own—is an illusion.)
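The point that the same physical pattern carries no intrinsic message can be shown in a few lines of code: the identical sequence of bytes reads as English to one interpreter and as a list of numbers to another. (The sign text and the two "readings" here are my own illustration, not from the article.)

```python
# The same physical pattern (a sequence of bytes) under two different
# interpretations: as ASCII text, and as raw one-byte numbers.
# Nothing in the bytes themselves determines which reading is "the" message.
pattern = b"DANGER"

as_text = pattern.decode("ascii")   # a driver's reading of the sign
as_numbers = list(pattern)          # an "alien's" reading: one byte per column

print(as_text)      # DANGER
print(as_numbers)   # [68, 65, 78, 71, 69, 82]
```

The bytes do not change between the two readings; only the interpreter does—which is exactly the sense in which a computer "processes information" only relative to an interpretation.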


This article appeared in Skeptic magazine 21.4 (2016).

There’s another irony concerning the notion that we’re all just computer simulations. If you believe you’re living in a computer simulation, then everything you think you know about the world—including its vastness, the probability of intelligent life elsewhere in the universe, and even the very existence of computers—is part of that simulation, and so is completely worthless. The evidence on which the entire chain of reasoning depends, in short, is illusory—and so nothing at all can be argued from it.

Finally, if we believe we’re just simulations, how should we behave? Should we treat everyone around us as if they’re just a figment of someone else’s imagination, shamelessly manipulating them for our own pleasure or gain? Should we take careless risks, knowing we’ll live again in another simulation or after a reboot? Should we even bother to get out of bed, knowing that it is all unreal? I think not. Applying a variation on Pascal’s Wager, I’ll live my life on the assumption that I’m real—and so are you. END

About the Author

Peter Kassan, over the course of his long career in software, was a programmer, a software technical writer, a manager of technical writers and programmers, and an executive at a software products company. He’s the author or co-author of several software patents. He’s been a skeptical observer of the pursuit of artificial intelligence for some time. His last piece for Skeptic was “AI Gone Awry: The Futile Quest for Artificial Intelligence,” in Vol. 12, No. 2.
