This is a revised version of an article that originally appeared in the Spring 2014 edition of The Gadfly, Columbia University’s undergraduate philosophy magazine.
In his 2003 paper, “Are You Living in a Computer Simulation?”, Nick Bostrom argues that at least one of the following propositions is true:
Proposition 1 (P1): The human species is very likely to go extinct before reaching a “posthuman” stage;
Proposition 2 (P2): Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
Proposition 3 (P3): We are almost certainly living in a computer simulation.
We’ll get more into the ideas attached to the term “posthumanity” as we go along, but for now we’ll take this term to refer to beings whose intelligence has evolved to a degree sufficient for their being able to produce computer simulations of minds (which we may also refer to as “consciousness” or “first-person experience”) like ours. Before considering the argument more closely (with special emphasis on P3), here are some important notes on how we should evaluate it.
Bostrom recommends that we assign roughly equal subjective probability to each proposition. It’s “subjective” because of how little information we have about the possible outcomes. Contrast this with a fair coin flip, where you can be nearly certain that the result will be either heads or tails. Bostrom’s argument isn’t like this; that is, it’s not like a three-sided die, where each proposition gets a 33.3% probability. In a level-headed FAQ on his webpage, Bostrom points out that each of us will have our own intuitive response to the argument. He himself assigns the probability of P3 to be “roughly in the 20% region, perhaps, maybe,” because we don’t have any evidence that these propositions are true or false. Those are better odds, by the way, than you have of throwing a seven (16.67%) with two fair dice! And far better than rolling a two (2.78%).
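The dice odds quoted here fall out of simple enumeration of the 36 equally likely outcomes of two fair dice, which is exactly the kind of statistical grounding a subjective probability lacks. A minimal sketch (the function name is mine, purely illustrative):

```python
# Enumerate the 36 equally likely (die1, die2) outcomes of two fair dice
# and count how many sum to a given total.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 pairs

def odds(total):
    """Probability of rolling a given total with two fair dice."""
    hits = sum(1 for a, b in outcomes if a + b == total)
    return hits / len(outcomes)

print(f"P(seven) = {odds(7):.2%}")  # 16.67% (6 of 36 outcomes)
print(f"P(two)   = {odds(2):.2%}")  # 2.78%  (1 of 36 outcomes)
```

Nothing comparable exists for Bostrom’s three propositions: there is no outcome space to enumerate, which is what makes the probabilities “subjective.”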
As for me, I’m hesitant to assign any probability at all to P3, given our lack of information about the possible outcomes. With a coin toss, we have statistical reasons (some of them well correlated with physical facts about the coin, borne out across repeated events) to assign heads 50%, and near-zero, if anything, to, say, the coin spontaneously combusting or getting carried off by a passing hummingbird. But let’s set these concerns aside and evaluate the argument on its own terms. I then assign near-zero probability to P3, because there are assumptions that must be worked through before even getting to the simulation argument, and to one of the most crucial of these ― that human consciousness is substrate-independent ― I assign very low subjective probability.
Substrate independence (Sub-I, for short) is simply a fancy way of saying that minds like ours can exist on some substance other than living neural tissue; for example, in computer hardware. I’m going to follow Bostrom’s lead and include another assumption within the Sub-I concept, which is that consciousness can somehow be coded into a computer program. Bostrom assumes that Sub-I is true, and points out that many thinkers these days agree with him. It’s true that many do, but it’s also true that many don’t; nor is it the case that opinions about Sub-I’s truth are founded on direct experimental evidence. Instead, arguments in favor usually go something like this: We are currently making things that exhibit the sorts of behaviors that we, as outside observers, associate with conscious entities; as technology achieves the appropriate level of complexity, these things will begin to not just simulate conscious behavior, but to be genuinely conscious. Let’s consider this more closely.
(Side Note: To be clear, Bostrom’s idea isn’t simply that P3 is either true or false. Indeed, it’s true that P3 is either true or false. He’s arguing something much more interesting than this, the underlying driving assumption of which ― that Sub-I is true ― is what I’m challenging. I am willing to concede, of course, that conditional on Sub-I being true, P3’s truth obtains probabilistic significance, as does Bostrom’s three-proposition disjunct as a whole.)
P3 refers to the possibility that you, I, and the world we live in are quite literally the products of a computer program written by a computer programmer from an advanced civilization. This isn’t the Matrix, where we have a living physical body that’s hanging out somewhere. And it’s not a Brain in the Vat type thought experiment designed to help us test our theories about knowledge against skepticism. No, what P3 states is that our minds literally just are computer programs in run mode, likely housed in planet-sized computers (note that Bostrom spends a fair amount of his paper arguing for another assumption: the computing power required to simulate our universe is physically possible). We have no reason to believe, however, that the complexity of the vivid inner mental world that you and I experience every day could be produced by a computer program.
This notion of complexity is important. There are those who argue that the difference between the consciousness of, say, a thermometer and an intelligent human is merely a matter of complexity. Somewhere in between would be something like the children’s toy Furby. Indeed, the toy’s creator, Caleb Chung, has vehemently argued that the Furby has conscious experience “at its level,” and can “feel and show emotion” (Radiolab, Season 10, Episode 1, “Talking to Machines”). I’ll grant the showing ― via engineered mechanical imitation ― but I emphatically reject that a Furby literally has an internal, first-person experience of anything at all, at any level at all; exactly no more or less than does a thermometer or a rock. I think Chung is in the minority on this view that the Furby feels emotion, but, for those who accept Sub-I, the idea is generally that, if a computer’s circuitry can be mapped out with the right level of complexity, consciousness will arise, though it need not initially be as sophisticated as human consciousness.
This makes sense, because, unless you grant that the behavior of, if not a Furby, some highly sophisticated AI technology counts as a simple form of consciousness, what’s getting more complex as such machines develop is merely programming and circuitry, and not consciousness. This makes consciousness, whose boundary lines are unclear as it is, all the more mysterious. Perhaps this is why some bite the bullet and call Furby-like machines conscious (i.e., subjects with a low level of first-person experience). There are far more questions about the relationship between complexity and consciousness (not to mention consciousness in general) than I have space to get into here, so I’ll move on.
The point is that we are very far from having reached the degree of technological sophistication required to fabricate anything close to human-like consciousness, nor have our creations reached a low bar ― for example, passing the Turing Test ― for even consistently appearing to be intelligent (note that one of the big questions surrounding the Turing Test is whether appearing intelligent is the same as being intelligent; this question has nothing to do with whether the machine, intelligent or not, is self-aware: few, if any, artificial intelligence researchers consider themselves to be chasing down machine consciousness). But, the idea goes, given how quickly technology is developing, it just seems to make sense ― to feel right ― that conscious machines will eventually happen. You might respond that, given that most of us currently accept materialism/physicalism to be the case, i.e., that mental states just are physical brain states, there is no reason to think that consciousness couldn’t arise from a well-designed computer simulation of our neural machinery.
I’m not so sure. What’s the difference between introducing information ― e.g., instructions on how to bake a cake ― into a living human brain, inputting it into a computer, or writing it on a piece of paper? It seems to me that inputting information into a computer is much closer to writing it onto a piece of paper than it is to introducing it to a human brain.
A key difference from paper, of course, is that the computer is capable of autonomous algorithmic processing (i.e., “following” our instructions), and has the capacity to change its behavior over time in response to new information. It can “learn.” For example, you can tell a computer to add another “1” indefinitely, or to keep track of the words most employed by particular users, so that it responds differently to different users. A stable enough computer could engage in a kind of expansive autonomous self-replication over thousands of years, becoming exponentially more complex, and, if properly programmed to begin with, could eventually achieve a conscious state. Perhaps this could happen. Perhaps not. Unfortunately, there’s currently no way to test this. Compound this with the fact that most thinkers (though not all) agree that we really don’t have a clear account of what consciousness even is, much less where it is or how it works. Still, there are those who believe, as a matter of faith, that something like the above process will eventually take place. Perhaps it will, perhaps it won’t.
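The kind of “learning” described here ― say, tracking the words most employed by particular users so the machine responds differently to each ― amounts to straightforward bookkeeping over counters. A minimal sketch (all names are mine, purely illustrative):

```python
# A program that "learns" per-user word habits is just bookkeeping:
# autonomous algorithmic processing, but purely mechanical.
from collections import Counter, defaultdict

usage = defaultdict(Counter)  # user -> word-frequency table

def observe(user, sentence):
    """Record the words a user employs."""
    usage[user].update(sentence.lower().split())

def favorite_word(user):
    """The word this user has employed most often so far."""
    (word, _count), = usage[user].most_common(1)
    return word

observe("alice", "simulate the simulation of a simulation")
observe("bob", "dice and more dice")
print(favorite_word("alice"))  # simulation
print(favorite_word("bob"))    # dice
```

The sketch “learns” in the sense the paragraph describes ― its behavior changes with new input ― while remaining, at bottom, instructions followed over a table of counts, which is precisely why its status as a step toward consciousness is the open question.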
I have similar gripes about foundational assumptions in math and physics, and am glad to know that there are reputable scientists whose intuitions agree with mine. For example, I’m skeptical about many notions surrounding infinity, and so is MIT-based physicist Max Tegmark. In his book, Our Mathematical Universe, he writes, “I remember distrusting infinity already as a teenager, and the more I’ve learned, the more suspicious I’ve become.” He points out that the notion is rarely questioned because “we haven’t discovered good alternatives.” Indeed, Tegmark suspects that infinity is the “fundamentally flawed assumption at the very foundation of physics.” Of course, we all know that current physics doesn’t add up; what’s interesting here is that Tegmark is pointing to an uncontroversial idea as the culprit. Consider also the physicist Sir Roger Penrose, who has been working for the past ten years on a book called Fashion, Faith, and Fantasy in the New Physics, and who has referred to string theory as a matter of “fashion,” quantum mechanics “faith,” and cosmic inflation “fantasy” (Science Friday, April 4, 2014).
To be clear, Tegmark does believe ― or, I would say, has faith ― in math’s ability to describe everything, including conscious experience; and Penrose is by no means suggesting that we should abandon physics. It just happens that their intuitions guide them in different directions than do the intuitions of many other equally intelligent and informed thinkers.
I note all this in order to reflect on what it means for subjective probability. Perhaps, you might argue, the scientist’s intuitions about simulation are more educated than ours. But consider that, when deciding upon which expert to follow, we are adding yet another layer of subjective probability: To which expert do I assign the greatest chance of being right? (Probably the one that confirms my pre-existing opinions.)
Of course, none of this is to say that compelling evidence for simulation couldn’t be discovered. For one thing, as Bostrom points out in his FAQ, our programmers could make our true situation known to us. We could also develop indirect evidence by creating a successful simulation ourselves. I’ll comment more on this below, but first, let’s consider a team of physicists who claim to have found evidence that simulation might be true.
Their paper, “Constraints on the Universe as a Numerical Simulation,” cites Bostrom’s hypothesis as a motivation for their research. It motivated me to write this article, because of the attention it, and in turn Bostrom’s simulation argument, has been receiving from mainstream news sources. The idea is that we already run simulations, which exhibit certain anomalies. Over time, these simulations will likely get sophisticated enough to include minds like ours. Still, because any such simulation’s structure would be finite, it would exhibit anomalies similar to those exhibited by current simulations; the authors claim to have found candidates for such anomalies in cosmic rays. As you can guess at this point, I’m suspicious of these assumptions about future simulations. And, interestingly, Bostrom, in his FAQ, rejects the idea that evidence for simulation would come from glitches or anomalies in the programming. The sort of evidence Bostrom would like to see is the successful running of substrate-independent conscious experience. Current simulations are far from that, and such a feat won’t be possible until humans have evolved to a posthuman stage. Perhaps, however, we can nudge humanity’s evolutionary process along. It turns out that this is something Bostrom is pushing for. More on that in a moment.
You might be wondering at this point, “Who cares if we’re a simulation? What difference would it make?” Well, living in a simulation would come with important implications. Bostrom, on his FAQ page, insists that simulation has no relation to religion. This is true in the sense that simulation would neither confirm nor deny the existence of a god or gods, though perhaps untrue in another sense, in which simulation certainly would have implications both for believers and nonbelievers. If you are a simulation, you have no immortal soul, but you do have a consciousness not bound to what we take to be the physical world. You also have a creator, a purpose for having been created, and, most significantly, you now have the potential to live on indefinitely: if your mind exists in a computer program, it’s up to the programmer whether to terminate the program (and you along with it), or, even better, to transfer you to a posher program.
Note, too, that there are moral implications here, which function at two levels. First, there’s the moral character of our programmers. Why would such an advanced civilization ― one that undoubtedly has access to many times the destructive power we do, yet has avoided extinguishing itself ― allow for the creation of a world like ours, a world with so much suffering? How we respond to this question, by the way, might lead us to give more weight to P2.
The other moral level involves the question: If it’s just a simulation, what does it matter what we do? This has been addressed, for example, by Robin Hanson in a paper called “How to Live in a Simulation.” The gist is that, if we determine that we might be simulated, we might then concern ourselves with figuring out how to impress our programmers so that our situation may be selfishly optimized. Add to this the likelihood that, due to limited computing resources, many simulated animals and humans wouldn’t actually have conscious minds, though we can’t know which. I’ll leave that discussion to those who believe they might live in a simulation.
We see, then, that simulation being true could be good news for those of us (atheist or otherwise) who yearn for immortality. And the surest way we have to prove it’s true is to create our own successful simulations (into which, by the way, it’s assumed we could upload our minds as a means of extending life beyond bodily death). But, it seems, we can’t do that as mere humans. Thus enter transhumanism, a proactive path to posthumanity and immortality.
Back in 1998, Bostrom co-founded (with philosopher David Pearce) the nonprofit World Transhumanist Association (WTA), which changed its name to Humanity+ in 2008. Their mission is to advocate the elevation and expansion of human capacities through the ethical application of technology. In other words, they wish to influence the goals of scientists, technicians, and public policymakers towards efforts that facilitate the transition of humanity into a posthuman stage. Solving death is a fundamental goal here. See, for example, the article in the organization’s magazine, H+, “Our ‘GooglePlex Action’ for Radical Life Extension,” by Alexey Turchin, which features images of Transhumanists picketing Google offices with signs that read “immortality now,” and in which he writes, “I can easily envision a moment a decade from now when 10 or 20 percent of representative seats in a large country will be held by transhumanists.”
Of course, radical (or revolutionary) thinking of this sort, as far-fetched as it might seem, does not count against Bostrom’s simulation hypothesis or Sub-I. Also, there are some very deep and important issues that transhumanism addresses that go beyond the usual public discourse about certain social concerns (e.g., the role that genetic intervention could or should play in pruning humanity of violent behaviors). Still, I can’t help but wonder if some beliefs in favor of Sub-I and the like aren’t motivated by an oversized desire to escape death, which is why I mention transhumanism within the context of simulation, at the risk of conflating the two. Such thinking also brings to mind an important point about subjective probability: Once you accept a proposition as true, out of that proposition there will grow a web of ideas and theories each of whose probability is conditioned by the ideas and theories that precede it. The less empirically qualified that first proposition, the more fragile the web as a whole will be.
So, before considering the subjective probability that you’d assign to the three propositions of Bostrom’s argument, consider closely the other layers of opinion and assumption involved. As for myself, given my discomfort with the assumptions that underlie it, I assign very low, practically zero, subjective probability to the proposition that I am a computer simulation. In fact, I would bet serious money on its being false.
Consciousness Can’t Be Simulated
When discussing whether consciousness can be computer generated, we use words like ‘simulation,’ ‘imitation,’ ‘synthetic,’ and ‘artificial.’ These terms are misleading, however, because consciousness can’t be simulated. A thing is either conscious or it isn’t. That is, if a thing is conscious of anything at all — has any internal experience at all — then that consciousness is real, authentic, genuine. Not simulated.
Consciousness Not Explained
Consciousness resulting from a computer program poses puzzles similar to those of human consciousness. For example…
…questions about representation;
…the distinction, if any, between the hardware and/or program and the consciousness they generate (this is the computer version of the mind-body problem);
…the distinction between (a) the external object of consciousness, (b) the internal content of consciousness, and (c) consciousness in itself. Most unclear (to me, at least), is the line between (b) and (c), particularly in light of the observation that (a) and (b) could be considered ‘fabricated,’ while (c) occurs as a ‘natural’ (i.e., non-artificial) phenomenon. For example, the conscious subject takes itself to see a (a) table, which results in a (b) mental picture of a table, of which the subject is (c) conscious. Do you first create the program that generates (c) consciousness, then program the code for the (a) table, then (b) happens automatically once (a) is somehow introduced to (c)? Note that this needs to be parsed out in a way that avoids a homunculus regression.
…more fundamentally, even if we could program a computer to be conscious, we probably still wouldn’t know why that program results in consciousness. In other words, we would have figured out how to assemble a set of conditions sufficient for consciousness to arise, though why those conditions are sufficient would likely remain utterly elusive.
P1 Is False
It might be suggested that what I’m really arguing for here is the truth of P1, given that, if simulating minds like ours is impossible, then it’s trivially true that no beings will ever evolve to the point of being able to do so. This can’t be the case, however, as Bostrom’s argument as a whole requires that simulation be possible, regardless of P1’s truth value. I’ll explain.
Bostrom defines “posthuman” civilization as a stage “where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.” So, for P1 to be true, humankind must go extinct before reaching a point at which it’s able to do pretty much anything that can be done technologically. Given how Bostrom’s argument is set up, one of those technologies must be the simulation of minds and worlds like ours. This is what makes it the case that, if P1 is false, then either P2 or P3 must be true.
This also means that, if P1 is true, it’s still assumed that simulation is a real technological possibility. So, P1 can’t be made true simply by simulation being impossible. Otherwise, it would be possible for P1 to be false, while P2 and P3 are also false. This is clearly at odds with Bostrom’s argument, which states that at least one of the three propositions is true.
Here’s what I mean. If P1 is false and simulation is possible, then either P2 or P3 must be true. However, if P1 is false and simulation is impossible, then P3 must also be false. Bostrom’s claim is that one of the propositions must be true, yet we can’t say this of P2, which could be either true or false, even when simulation is impossible (e.g., in the same sense that I could say, “Any human who can sprout wings and fly is unlikely to do so during a blizzard” — this could be true or false, regardless of whether humans can sprout wings and fly). So, Bostrom’s argument that at least one of the propositions is true doesn’t hold.
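The logical point can be checked mechanically. Suppose simulation is impossible: then P3 is forced false while P1 and P2 remain open, and we can ask whether some assignment leaves all three propositions false at once. A minimal sketch (the encoding is mine, purely illustrative):

```python
# Sanity check: if simulation is impossible, P3 ("we are almost
# certainly in a simulation") is forced false, while P1 and P2
# remain free. Does the disjunction P1-or-P2-or-P3 then ever fail?
from itertools import product

p3 = False  # forced by the impossibility of simulation

counterexamples = [(p1, p2, p3)
                   for p1, p2 in product([True, False], repeat=2)
                   if not (p1 or p2 or p3)]

print(counterexamples)  # [(False, False, False)]
```

The single counterexample ― all three propositions false ― is exactly the assignment the paragraph above describes, which is why the disjunction holds only if simulation is possible.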
This would be clearer were P1 expressed as a conjunction, in which both claims must be true for P1 to be true:
(a) The human species is very likely to go extinct before reaching a stage at which it is intellectually, morally, physically, or otherwise in possession of the endowments that would allow it to exploit the technological possibility of creating computer simulations of worlds and minds like ours; AND (b) creating such computer simulations is indeed physically possible.