
Can we know if our universe is a simulation?

David Kipping, Assistant Professor of Astronomy at Columbia University

The question of whether we might be living inside a computer simulation has inspired many a work of science fiction. But is it possible to calculate the odds that we are the virtual creations of a superior intelligence? A new study1 aims to put to rest certain misconceptions in popular culture.

Long before The Matrix2 and the novel Simulacron3, which greatly contributed to opening up the concept of simulated realities in the collective consciousness, Plato, with his "allegory of the cave", likened human beings to chained prisoners unable to see reality. There is no doubt that the idea that we live in a simulation is enticing. More recently, the 2020 Goncourt Prize winner The Anomaly4 questions how we, as a society, would deal with learning that we might be living in such a reality.

A serious theory

In 2003, Nick Bostrom, a philosopher at the University of Oxford, published an article in which he imagined a technologically advanced civilisation that possesses the immense computing power required to simulate new realities with conscious beings in them5. His hypothesis implies that if we lived in a simulation, it would be because there is a life form more intelligent than us, capable of creating such a universe. Is this possible? And how would we know that our daily lives – and more broadly the universe – were not the avatars of some gigantic computer programme?

The argument can be likened to predicting whether or not life exists elsewhere in the universe. Until we find such extra-terrestrial life, however, the only information we have is that it was able to start here, on Earth. Such arguments are particularly amenable to Bayesian inference – a type of analysis that calculates the degree of confidence given to a hypothetical cause. This technique uses Bayes' theorem, which calculates the probability of a hypothesis by updating a prior probability in light of the evidence observed. In Bayesian statistics, you can lay out everything you know and everything you don't know.
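Bayes' theorem can be sketched in a few lines of Python; the probabilities below are invented purely for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# H = a hypothesis (e.g. "we live in a simulation"), E = observed evidence.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Degree of confidence in H after seeing evidence E."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# With an indifferent prior (0.5) and evidence equally likely under both
# hypotheses, the posterior stays at 0.5: such evidence tells us nothing.
print(posterior(0.5, 0.2, 0.2))  # 0.5
```

This is the sense in which a "glitch" that is equally explicable under both hypotheses cannot shift the odds.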

No hard evidence

Setting up the simulation argument in a Bayesian framework reveals that many of today's hypotheses – be they in favour of or against the existence of a simulated reality – often have too many assumptions built into them. For example, if the universe is a simulation, one might assume that it is a "giant computer". And even if it were, this is not evidence in itself that the universe was created by a higher intelligence.

Nor can it be said that the presence of "flaws" (like the black cat that walks past twice in The Matrix) is evidence of a simulated world6. This hypothesis is particularly inconclusive because even if someone did notice such glitches, the "creator" could always rewind the simulation and delete the evidence (wiping the memory). Since there are also fundamental computational limits that inevitably make the simulation "grainy", the creator might choose not to make a detailed physical simulation of the entire universe, but only our perception of it.

Simplifying the argument

Humanity as we know it may one day disappear or be supplanted by one (or more) post-human species that might want to create simulations of their ancestors – that is to say human beings with consciousness, like us. But how do we know whether we are the original human beings or already the simulations of a future society's ancestors? Bostrom proposed a conceptual framework in which to address this question and his simulation argument contains three propositions, one of which he reasoned must be true. Either:

  1. human societies invariably go extinct before reaching a stage where they are able to simulate new realities;
  2. even if they do reach this stage, they are unlikely to be interested in simulating a reality much simpler than their own;
  3. the probability that we are living in a simulation is close to one.

Since the final outcome of Bostrom's first two propositions is that simulations do not exist, they can be collapsed into a single proposition [1]. The trilemma thus becomes a dilemma in which there are now two possibilities:

1. a natural non-simulated universe (ours);
2. an "original" natural universe that spawns one or more simulations that may themselves spawn further simulations, one of which would contain our universe.

In the absence of any other information, both scenarios should be considered on an equal footing according to a basic tenet of statistics – Laplace's "Principle of Indifference", which states that in the absence of evidence, all hypotheses should be considered equally likely. But the simulated hypothesis necessarily contains one natural universe amongst its many simulated universes, so even if that hypothesis is true there remains a small chance that we occupy the natural original. The result is a slightly less than 50% chance that we live in a computer simulation. Even if we were virtual beings, there would be no real evidence to prove it.
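The counting behind "slightly less than 50%" can be sketched directly, assuming a 50/50 prior between the two scenarios and, hypothetically, N simulated universes alongside the one natural universe:

```python
# Prior 1/2 for each scenario (Principle of Indifference); under the
# simulated scenario, N simulated universes coexist with 1 natural one.

def p_we_are_simulated(n):
    prior_simulated_scenario = 0.5
    # Even in the simulated scenario, one universe (the original) is natural.
    p_sim_given_scenario = n / (n + 1)
    return prior_simulated_scenario * p_sim_given_scenario

for n in (1, 100, 10**6):
    print(n, p_we_are_simulated(n))
# The probability approaches 0.5 from below as N grows, but never reaches it.
```

However large N becomes, the lone natural universe in the simulated scenario keeps the odds strictly below one half.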

Another important precept of Bayesian statistics, Occam's razor (which states that the simplest explanation, all things being equal, is usually the right one) is also difficult to formally build into the simulation hypothesis. This is because we don't know how many simulations are plausible and because we don't really know how to mathematically describe the complexity associated with each reality. So, while there is a less than 50% chance that we live in a simulation, this figure should be treated as an absolute upper limit. Indeed, even when we generously ignore the inherently overly-complex nature of the simulation hypothesis, there is no way to make the simulation odds better than 50%.

Heads or tails?

The main challenge in such studies is, simply, the lack of information. The only real fact we have to go on is that we exist. Even adding the extra condition that we ourselves haven't launched a simulation barely affects the final outcome.

Say we did start simulating realities, however, and there were conscious entities in them that were unaware that they were living inside those simulations, then that would flip all the odds. This is because we would be changing the initial condition from a (nulliparous) reality that cannot give birth to new realities to a (parous) reality that can generate other realities. It would then become highly probable that we live in a simulated universe.
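The flip can be sketched by extending the same counting argument, under the assumption that creating a conscious simulation ourselves would rule out the "no simulations exist" scenario entirely:

```python
# If we ourselves created a conscious simulation, the purely natural
# scenario is ruled out, leaving only the branch with N simulations
# plus 1 base reality.

def p_simulated_after_we_simulate(n):
    # All probability mass now sits on the simulated scenario.
    return n / (n + 1)

print(p_simulated_after_we_simulate(10**6))  # ~0.999999: highly probable
```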

Nevertheless, according to Bayesian statistics, the most likely outcome of this scenario is that we live in a universe where it is not possible to simulate new realities.

A hierarchy of realities

This apparent paradox is well described by the American theoretical physicist Sean Carroll7. He argues that if you have a hierarchy of (Inception-like) realities in which each simulation launches its own simulation, then there would be a reduction in computational ability at each subsequent level. This means that each simulated universe would be simpler than the universe in which it was created.

So, while you would still be able to produce simulations, and presumably even very impressive simulations, the lowest levels in these wouldn't have the sophistication to host truly conscious entities. There is the possibility, nonetheless, according to Carroll, that we live in one of these "low levels of reality".
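Carroll's point about shrinking resources can be illustrated with a toy model; the fraction below is invented for the example:

```python
# Toy model: suppose each nested simulation can only use a fixed fraction
# of its host's computing resources, so capacity decays geometrically.
FRACTION = 0.1  # hypothetical share of the host's compute per level

def compute_at_level(depth, base=1.0):
    """Computing capacity available at a given simulation depth."""
    return base * FRACTION ** depth

levels = [compute_at_level(d) for d in range(6)]
# Each level is strictly simpler than its host; below some threshold a
# level would be too coarse to host conscious entities.
```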

Working on such ideas is important, especially when we hear about the incredibly high "billion-to-one odds" that we live in a non-simulated universe often cited in popular culture8. This figure implies an incredible certainty and was obtained by extrapolating current computer trends and capabilities. This premise, however, contains an inherent uncertainty if treated in a Bayesian framework: we cannot simply exclude the possibility of a natural universe outright.

Perhaps we need to go from thinking that living in a simulation is an inevitability to thinking that it is a somewhat unlikely situation?9

Isabelle Dumé
1. https://www.mdpi.com/2218-1997/6/8/109
2. https://www.imdb.com/title/tt0133093/
3. https://www.goodreads.com/book/show/807801.Simulacron_3
4. http://www.gallimard.fr/Catalogue/GALLIMARD/Blanche/L-anomalie
5. https://academic.oup.com/pq/article-abstract/53/211/243/1610975
6. https://arxiv.org/abs/1210.1847
7. https://www.preposterousuniverse.com/blog/2016/08/22/maybe-we-do-not-live-in-a-simulation-the-resolution-conundrum/
8. https://www.space.com/41749-elon-musk-living-in-simulation-rogan-podcast.html
9. https://www.youtube.com/watch?v=HA5YuwvJkpQ

Contributors

David Kipping
Assistant Professor of Astronomy at Columbia University

David Kipping's research focuses on extrasolar planets and moons, and he leads the project The Hunt for Exomoons with Kepler (HEK). He also studies the characterisation of transiting exoplanets, the development of new detection and characterisation techniques, exoplanet atmospheres, Bayesian inference, population statistics, and understanding stellar hosts. Passionate about science communication, he manages a YouTube channel where he talks about his research and related topics.