Yes, We Live in a Virtual Reality. Yes, We are Supposed to Figure That Out.
Once again, another thinker has fired the dire warning: beware testing the validity of the simulation hypothesis! This time the alarm was sounded by Dr. Preston Greene, a professor of philosophy at Nanyang Technological University in Singapore, in a New York Times opinion piece. In the article, “Are We Living in a Computer Simulation? Let’s Not Find Out,” Greene writes:
“So far, none of these [simulation hypothesis] experiments has been conducted, and I hope they never will be. Indeed, I am writing to warn that conducting these experiments could be a catastrophically bad idea — one that could cause the annihilation of our universe.”
Greene also cites Bostrom’s own “simulation shutdown” idea:
“This is my point: The results of the proposed experiments will be interesting only when they are dangerous. While there would be considerable value in learning that we live in a computer simulation, the cost involved — incurring the risk of terminating our universe — would be many times greater.”
“[I]f our universe has been created by an advanced civilization for research purposes, then it is reasonable to assume that it is crucial to the researchers that we don’t find out that we’re in a simulation. If we were to prove that we live inside a simulation, this could cause our creators to terminate the simulation — to destroy our world.”
These ideas are far from new in the simulism debates.
Many of them rest on the key assumption stated above: that finding hard proof that we are inside a simulation would corrupt us as viable sample lifeforms in our VR universe. Once we realize we are little sims in a video game, the jig is up. Kaboom universe.
I don’t think so.
In 2015 I published an essay against this idea for the Institute of Ethics and Emerging Technologies called Why it Matters that You Realize You’re in a Computer Simulation (later republished in my first book 3 Essays on Virtual Reality: Overlords, Civilization, and Escape). The essay itself runs about 2,600 words — complete with handy diagrams — but I will attempt a shorthand version here.
Here’s the problem: if you create a simulated universe complete with evolving intelligent lifeforms that are given, say, an unlimited amount of time to develop their intellect and discover the nature of the universe they are embedded in, then you should assume that eventually they will figure out that they are in a simulated universe. I call this the Savvy Inevitability. In essence: you cannot evolve intelligent lifeforms in a simulated universe and at the same time occlude them from the simulated nature of that universe indefinitely. It will never work. Eventually, they will always figure it out.
One argument goes that we should “play dumb.” Here I will directly quote my previously mentioned essay:
“[T]here are other problems with the previously mentioned ‘playing dumb’ suggestion. The notion that we should (or even could) occlude our ‘outside’ observers, the simulator(s), or ourselves from whatever knowledge we may have about our environment is not only probably impossible, it is also metaphysically unreasonable. ‘We better not know,’ even if it is the correct recourse, is impossible to maintain. Ethically, this notion is odious in that it is not only ultimately anti-science, anti-intellect, and indeed anti-evolution, but it goes on to actually assume punishment for such evolutionary developments, which are, in part, outside of the evolving intellect’s hands. We can’t be held responsible for natural discoveries, just as we can’t help but see the sun. They are the very fingerprints of the gods, so to speak. We can only truly be made responsible for what we do with natural discoveries; we cannot be made responsible for the fact that we can actually make these natural discoveries. Arguably, nearly all conscious life is defined by its ability to sense its environment. Discovering that the environment is a computer simulation, if that is the case, is a natural consequence of the environment itself.”
I have always found that the “forbidden knowledge” angle some simulist thinkers throw out there sounds almost exactly like a MacGuffin from an H.P. Lovecraft short story: if we learn too much, the universe will be destroyed. That is essentially what’s being pitched.
Here’s something simulists don’t always conjure with: what if we are supposed to find out that we are in a simulation? What if that’s part of the game; part of the process of our evolution? I call entities that realize they are in a simulation “Savvy.” Imagine we are the simulation-runners and one of our simulated lifeforms becomes Savvy:
“[…] Savvy lifeforms would probably be extremely likely to produce fascinating forms of expression, technology, novelty, social organization, and so on. They would also likely begin to create their own life-producing simulated universes themselves. They may even attempt to signal their outside simulation-designing hosts somehow. Therefore I, as part of the original hypothetical simulation-running team, would be extremely hesitant, if not downright protective, of that Savvy sample’s survival and evolution — that is if I were to interfere at all. What could possibly give me more insight into what I, the original simulation creator and maintainer, have done than this Savvy sim living in my ever-growing mock universe? Would I really throw out the sim that realized they were in The Sims? Indeed, evolving a sim that realizes they are in The Sims might feel like I’m actually getting my computational weight’s worth — that goes especially if I was putting in all this effort to power and evolve a simulated universe in the first place. If our simulated universe is inadvertently an intelligence test for the evolving lifeforms inside it, then I’d hope we grow a winner. A sample so intelligent that it can actually see the code at the edge of matter is likely a sample we’d benefit from studying. It’s not too far removed from teaching great apes to sign.”
Although it is always good to see the simulation hypothesis in the news, Dr. Greene has merely reiterated an arguably anti-scientific point that has been stated time and time again: we shouldn’t tinker with this idea because it could be our undoing.
I couldn’t disagree more.
I think it is actually a key to our evolution as an intelligent species. I think we are supposed to go there. If we are sims in a simulated universe, then figuring that out represents not an end to our species or our universe, but a valid and worthwhile ontological awakening.
Furthermore, if the simulation hypothesis is true, then in the words of Agent Smith, when it comes to our discovery of that truth: “It is inevitable.”