I’m often moved to pose—to myself or others—this thought experiment:
How would you behave were you to believe that a supernatural, all-powerful, all-knowing, everywhere-present being who created the universe is watching over you, is guiding the major events of your life, and will judge you in the afterlife so that there hangs over you the possibility of eternal bliss or eternal torture?
I’ve recently found myself posing this more often than usual in light of stories about people disregarding coronavirus-related social distancing recommendations in order to congregate for religious purposes (a context that suggests another thought experiment, about how one might behave were lockdown to make it impossible to pay rent and feed one’s children, things most people reporting on COVID-19 from their dining rooms don’t have to worry about).
Maybe you believe in a supreme force, let’s call it, roughly like the one in the thought experiment.
Most people I know or pay attention to either don’t, or have a revised or hazy enough version of it to find literal beliefs of the above sort baffling (or worse). I think they’d do well to run the experiment when trying to understand the behavior of such believers.
I’m an atheist, so I find the experiment helpful. If I believed in such an entity—really and truly believed, just as I believe I’m sitting here typing these words—I’d be in a constant panic. I’d be desperately trying to save everyone from eternal torture. Any suffering in this world is nothing compared to an eternity of relentless torture. Thus the historically common justification of burning people alive to save their souls from Hell or similarly undesirable outcomes. And better to get COVID-19 than to offend the supreme force.
Honestly, I’m suspicious of people who claim to believe in such an entity but don’t behave as though they do—who don’t behave as though the stakes, usually of eternal dimensions, are whatever their belief system claims they are. In fact, I’m suspicious of anyone who claims to think such an entity might exist but doesn’t do their best to behave as though they believe it fully.
That observation is the crux of Pascal’s wager, which, simply stated, holds that a rational person should be moved to action by the observation that one loses far less by incorrectly betting on the supreme force’s existence than by incorrectly betting on its nonexistence. In short, it’s better to believe and be wrong than to not believe and be wrong.
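Framed as a decision problem, the wager is just an expected-value comparison. Here’s a minimal sketch in Python; the payoff numbers are my own illustrative stand-ins, not Pascal’s, since his point is only that an infinite payoff swamps any finite cost:

```python
from math import inf

# Pascal's wager as an expected-value comparison. The payoffs are
# illustrative assumptions: eternal bliss/torture as +/- infinity,
# and small finite costs/gains for piety and worldly living.

def expected_value(p_exists, payoff_if_exists, payoff_if_not):
    return p_exists * payoff_if_exists + (1 - p_exists) * payoff_if_not

p = 0.001  # even a tiny credence in the supreme force will do

ev_believe = expected_value(p, inf, -1)     # eternal bliss vs. a finite cost of belief
ev_disbelieve = expected_value(p, -inf, 1)  # eternal torture vs. a finite worldly gain

print(ev_believe > ev_disbelieve)  # True for any p > 0
```

However small you make p, so long as it’s strictly positive, believing dominates—which is the whole force of the bet.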
You can’t force yourself to believe something. But if you at least believe (to some significant degree) that the terms of the above bet really are correct, then it seems you should at least pretend to believe in the supreme being. And who knows, maybe the behavior will transform over time into actual belief.
When Blaise Pascal conceived of the wager (in the 17th century) he had in mind the Christian God. But it of course has plenty of other applications (e.g., climate change). I’ll informally consider it in application to the simulation hypothesis.
The simulation hypothesis comes out of work by philosopher Nick Bostrom, who, in a 2003 paper called “Are You Living in a Computer Simulation?,” proposes that at least one of the following propositions is true:
Proposition 1: The human species is very likely to go extinct before reaching a “posthuman” stage;
Proposition 2: Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
Proposition 3: We are almost certainly living in a computer simulation.
Taken together, these propositions are referred to as the “simulation argument.” Proposition 3 alone is known as the “simulation hypothesis.”
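For what it’s worth, the machinery behind the trilemma is a counting argument: if even a small fraction of civilizations reach a posthuman stage and each runs many ancestor-simulations, simulated minds vastly outnumber unsimulated ones. A rough sketch in Python, with input numbers that are purely illustrative guesses, not Bostrom’s:

```python
# Bostrom's counting argument, roughly: the fraction of human-type minds
# that are simulated is f_sim = (f_p * n) / (f_p * n + 1), where f_p is
# the fraction of civilizations that reach a posthuman stage and n is the
# average number of ancestor-simulations each such civilization runs.

def fraction_simulated(f_p: float, n: float) -> float:
    return (f_p * n) / (f_p * n + 1)

print(fraction_simulated(0.01, 1_000_000))  # ~0.9999: if Propositions 1 and 2
                                            # fail, almost every mind is simulated
```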
Folks tend to characterize Bostrom as thinking it very likely we live in a computer simulation. In interviews he often declines to put an explicit number on how likely he thinks it is, but he does quantify it on his website’s FAQ page (accessed 4/30/20):
2. Do you really believe that we are in a computer simulation?
No. I believe that the simulation argument is basically sound. The argument shows only that at least one of three possibilities obtains, but it does not tell us which one(s). One can thus accept the simulation argument and reject the simulation hypothesis (i.e. that we are in a simulation).
Personally, I assign less than 50% probability to the simulation hypothesis—rather something like in 20%-region, perhaps, maybe. However, this estimate is a subjective personal opinion and is not part of the simulation argument. My reason is that I believe that we lack strong evidence for or against any of the three disjuncts (1)–(3), so it makes sense to assign each of them a significant probability.
I note that people who hear about the simulation argument often react by saying, “Yes, I accept the argument, and it is obvious that it is possibility #n that obtains.” But different people pick a different n. Some think it obvious that (1) is true, others that (2) is true, yet others that (3) is true. The truth seems to be that we just don’t know which of the disjuncts is true.
I’d say 20% is a very high probability for our living in a simulation. If you roll a fair die, there’s a probability of about 16.67% that you roll a 4. If I thought the probability that the Christian God exists were greater than the probability of rolling a 4, I’d be in a constant panic.
Bostrom isn’t the only person to think it that likely. I recently published a post called “David Chalmers Would Like to Be Immortal (And so Would I),” in which I quote some comments Chalmers made on Lex Fridman’s Artificial Intelligence podcast episode “David Chalmers: The Hard Problem of Consciousness” (1/29/20). More about those comments in a moment. When asked by Fridman if we’re living in a simulation, Chalmers replies, “I wouldn’t rule it out.”
I have elsewhere seen him play around with percentages (while rightly pointing out that the simulation hypothesis is conditional on believing that computers can produce consciousness). I seem to recall him once saying it was about a 33.3% chance, but rather than trying to track that down, here he is in a 5/22/07 bloggingheads.tv Science Saturday discussion with John Horgan (posted to YouTube on 4/5/08), stating:
If you made me a bet, and God could settle it next week, I’d give at least 20% odds that I’m in a simulation right now.
Or watch for yourself (starts at about 49 min, which I’ve time-stamped here):
To be clear, I don’t begrudge Chalmers or anyone else such confidence in the simulation hypothesis. What I wonder, however, is whether it’s real. Do they truly believe the chances are so high?
As for me, I assign zero probability to living in a computer simulation, if I must assign one. The fashionable thing would be to say something like “non-zero,” but I’d be lying: it’s my subjective credence and I can’t help it. It’s zero. I published a post about this back in 2014. I haven’t read it since, and won’t read it now for fear that I’ll want to rewrite it; but I remember standing by it when I wrote it: “You Are (Probably Not) a Computer Simulation.”
Were I to suddenly assign a non-zero probability to the simulation hypothesis, I’m sure this would affect my general behavior. For example, I would start trying to convince the programmers not to turn me off (i.e., not to murder me), and maybe would try to demonstrate to them their moral error in creating a world of such suffering (I wouldn’t try this with an all-knowing supreme force; but programmers are just people, human or not). Maybe I’d beg to be put into a better program.
(Beg whom? For all we know, the programmers are long dead, or the programmer is just some lonely, pimply teenager. But there’s got to be a way! Or so I’d say in desperation, if I believed in all this.)
If I were a hardcore utilitarian (I’m not), I might focus my efforts on Proposition 1—i.e., we should avoid extinction so we can simulate many more happy humans.
Perhaps there are people who take the belief as seriously as all that. Bostrom mentions in his paper that belief in the simulation hypothesis might affect behavior:
…everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically. The fact that it has such a reason for moral behavior would of course add to everybody else’s reason for behaving morally, and so on, in truly virtuous circle. One might get a kind of universal ethical imperative, which it would be in everybody’s self‐interest to obey, as it were “from nowhere.” (p 12)
I have no idea whether Bostrom’s ~20% assignment moves him to try to convince the simulators (or, in case they’re long gone, the simulation itself) to keep his code running or to elevate him to a more interesting program. But if it didn’t affect his behavior in some way or another (aside from writing a paper and talking about it on podcasts, which he has also done on Fridman’s show), and if he doesn’t have some other dominating belief (e.g., in God, which is to say some supreme force that created the simulators), then I would be suspicious that he literally believes there’s a 20% probability that he’s living in a computer simulation.
The same goes for Chalmers. Interestingly, if the simulation hypothesis is true, this would mean that his mind is already on a computer substrate, and that he’d be all the closer to realizing the beautiful existence he describes on Fridman’s podcast, which I quoted in my aforementioned blog post. Chalmers expresses there a desire to never die while exploring “a virtual reality which is richer than this reality, to really get to inhabit fundamentally different kinds of spaces” in a universe that goes on forever and that “continues to be infinitely interesting… as you go up the set-theoretic hierarchy.”
But you have to convince the programmers (or program) to grant you this, or whatever the closest thing to it is. If the chances that his mind is already “uploaded,” as it were, are greater than that of rolling a 4, and given how little there is to lose in trying, I would love to know: Chalmers, are you trying? At all? Do you talk at night to the simulators just in case they can hear you?
Who out there is talking in earnest to the simulators? Might it be George Mason University economics professor Robin Hanson, who wrote a paper in 2001 called “How to Live in a Simulation” (also available here)? Hanson writes:
If you assign a non-zero subjective probability to the possibility that your descendants will create sophisticated simulations which include people (real or simulated) like us, ignorant of their status, then you should assign a non-zero subjective probability to the possibility that you now live in such a simulation. So to the extent that there are consequences of your actions which are different in a simulated world, and you care about these consequences, a non-zero probability of simulation should influence your decisions. The higher the probability you live in a simulation, the more influence that possibility should have on your decision.
To get a sense of his recommendations, here’s the abstract:
If you might be living in a simulation then all else equal you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.
I encourage you to read the paper rather than critique the abstract outright, though it’s interesting to note the difference in tone compared to that in Bostrom’s above-quoted passage. I share Hanson’s paper, though, in order to ask: Hanson, are your decisions being affected—really and truly affected—by the probability that you live in a simulation?
To be fair, he doesn’t actually put a number on it. He just says: “Obviously we cannot now be sure that we are not living in a simulation.” So let me first ask: Hanson, what subjective probability do you assign to living in a computer simulation?
If anyone is praying to the simulators or trying to contact or impress or chat with them somehow—by, I don’t know, living your life set to a ratio of (π + √2):2, or proving that Stephen Wolfram’s rule 30 is a stumbled-upon glitch (forget the $30k)—I’d love to hear about it. Or maybe it’s as simple as uttering the right seven sounds in a row or eating a porcupine foot or never wearing the color blue. Even if it only makes them laugh.
(For all you know, I’m a conduit to the programmers. What would you have me tell them? What if I say you first have to eat a porcupine foot?)
Given the lack of protocol for contact*, I think that, should you believe to any degree that you are living in a computer simulation, you should be freaked out. More so than with a belief in a supreme force with which there’s a contact protocol. There’s a contract there of sorts (as difficult as it may be to decipher).
[*With simulators who, if still alive, are probably themselves simulations. Have they figured out how to upgrade their simulation, given that they knew how to make ours? Or is this the best they can do? Are we a trial run for working out the glitches before implementing their own upgrade? If so: take us with you!]
And if you believe in a soul, you believe you go on even if the supreme force is indifferent to you (though it might be to a place of eternal torture; so maybe that is more cause for freaking out). There’s no soul in the simulation. Just bits. Maybe there’s a memory trace that can be used to re-instantiate you. Or maybe it’s simply wiped once you’re gone. But if you believe, then you’d best act now.
For the record, I have heard tell of folks acting on belief in the simulation hypothesis, or at least something close to it, by which I mean anything that involves generating consciousness, as the simulation argument largely hinges on the background condition of that being possible.
Scientists have acted by trying to show we’re in a simulation (do they also try to contact the simulators?). Some claim to have evidence of various kinds, from logical to empirical. Search YouTube. You’ll find them.
For a more careful instance, see this 2012 paper by physicists Silas R. Beane, Zohreh Davoudi, and Martin J. Savage: “Constraints on the Universe as a Numerical Simulation,” in which they outline a potential method for testing whether we’re in a simulation, motivated in part “by the simulation hypothesis of Bostrom.” From their Conclusion section:
…we have taken seriously the possibility that our universe is a numerical simulation. In particular, we have explored a number of observables that may reveal the underlying structure of a simulation performed with a rigid hyper-cubic space-time grid. …
…assuming that the universe is finite and therefore the resources of potential simulators are finite, then a volume containing a simulation will be finite and a lattice spacing must be non-zero, and therefore in principle there always remains the possibility for the simulated to discover the simulators.
If taking the simulation hypothesis seriously means believing it possible, then I must ask them: How is your belief affecting your behavior, aside from running experiments—you know, in day-to-day living, morally? Is the effect heightened by the belief that you can test for it?
That would certainly heighten my freaked-out desperation, were I a (partial) believer. Definitively proving we’re in a simulation may be hard, but think how much harder it must be to prove that we’re not in one.
You may have noticed some headlines in 2017 saying things like “Sorry, Elon. Physicists Say We Definitely Aren’t Living in a Computer Simulation” and “Physicists Find We’re Not Living in a Computer Simulation,” the latter of which describes the finding as “unexpectedly definite.”
Those headlines refer to a Science Advances paper by physicists Zohar Ringel and Dmitry L. Kovrizhin called “Quantized Gravitational Responses, the Sign Problem, and Quantum Complexity” (9/27/17; DOI: 10.1126/sciadv.1701758). But that paper makes claims about the universe we have access to, not the one our (potential) simulators live in; in other words, the paper has nothing to do with the simulation hypothesis. As it’s put in the Popular Mechanics article, “Sorry, Scientists Didn’t Prove We’re Not Living in a Simulation”:
That wasn’t the question the researchers set out to solve, so to claim they answered it by showing that our universe is far too complex to be simulated is a bit presumptuous.
“It’s not even a scientific question,” says Zohar Ringel, the lead author of the paper. “Who knows what are the computing capabilities of whatever simulates us.”
So much for theoretical lab work. I’d like to see more real-world action demonstrating pro-simulation belief. Something more in the vein of what transhumanist, global risk expert, and life extensionist Alexey Turchin writes of in a 2014 article for Humanity+ Magazine (named after a nonprofit cofounded by Bostrom, though it seems he’s no longer involved) called “Our ‘GooglePlex Action’ for Radical Life Extension”:
[we] went to Googleplex with slogans that said “Immortality Now”, “Google, please solve death”, “Viva Calico”, and “I demand funding for anti-aging research”. … We also did a parallel action in New York’s Union Square with the same slogans…
Turchin’s article doesn’t mention anything to do with computer-generated consciousness, which, again, is a precondition for a computer simulation like the one we might be in. But that background condition is, I think, implied, not only because it’s brought up in the comments section, but because it’s the best bet for immortality. That is, biological life extension increases the chances of a person alive today surviving long enough to see technology that can upload their mind to a simulation of human making. (If we already are in a simulation, mind uploading may amount to a kind of hack into our simulation.)
That mind uploading is the implied long game of life extension is evidenced by a 2014 article at the Institute for Ethics and Emerging Technologies website by Maria Konovalenko (a PhD candidate in the biology of aging), “Achieving Personal Immortality Roadmap,” in which she describes a “list of actions [i.e., Plans A through D] that one should do to live forever”:
The most obvious way to reach immortality is to defeat aging, to grow and replace the diseased organs with new bioengineered ones, and in the end to be scanned into a computer. This is Plan A. … It depends on two things – your personal actions (like regular medical checkups) and collective actions like civil activism and scientific research funding. …
All of the Plans will lead to the same result – our minds will be uploaded into a computer and will merge with AI. …
This map is the political program of the Longevity party… We presented the Roadmap together with Alexei Turchin near the White House as an action to increase public attention for life extension. …
Aging is the main cause of death. Slowing down aging is an extremely complicated task that requires collaboration of hundreds of scientific labs and clinical facilities. It is not going to happen on its own. Active members of the society must signal that they are ready to fight for their right to live.
That’s why Alexei Turchin and I came to the White House on August 16 to set an example for transhumanists of the world … We presented the Achieving Personal Immortality Roadmap and “I demand funding for anti aging research” and “Immortality” posters.
I’m definitely on board with smart and passionate people like Konovalenko working hard to slow down aging, or at least to make aging less awful. And, while I’m not hopeful about our being able to upload ourselves to computers, much less about immortality (see my recent post “Immortality Is Impossible”), I appreciate that these folks are actually behaving as though they believe what they say they believe! It’s not explicitly about the simulation hypothesis, but near enough.
If we are in a simulation, the immortality project should be even more hopeful, provided we can present it as an interesting development of worthy moral standing, rather than as a hacker’s malware that threatens to devour computational resources or, worse, as something to be treated as the end of the simulation rather than the beginning.
Here’s a final thought for later contemplation. If souls turn out to be real, and if souls do go on to an afterlife, then when you upload yourself to a computer upon the death of your biological body, your soul will go on to its afterlife (whatever that means), while your mind will go on in its digital afterlife. Is one of those the real you, inhabiting your real afterlife? Are both?
Related Posts:
» You Are (Probably Not) a Computer Simulation
» What Do You Mean by “God”?: Denotation Switch, Three Forms of Disbelief, and Zealous Agnosticism
» Immortality Is Impossible
» David Chalmers Would Like to Be Immortal (And so Would I)
