We are not living inside a simulation – we are building one

An idea popularized by the Matrix trilogy of films suggests that humanity could be living in a giant computer simulation, blissfully unaware of some other, external reality. Of course, the idea did not begin with the Wachowskis – it has puzzled Western philosophers at least as far back as Descartes. The problem is that the proposition is impossible to falsify: armed only with the instruments of perception granted by this universe, we cannot objectively measure the “realness” of our reality. Nevertheless, it tickles our brains to imagine that what we believe is real could in fact be an artificial conceit of some other technological civilization.

Perhaps the idea has gained in popularity because we are now creating virtual worlds that rival the real one, at least in terms of the amount of time we spend in them. Indeed, some thinkers argue that social networks like Twitter, Facebook and Foursquare are just the beginning of a revolutionary move toward the creation of artificial intelligence at a global scale. This school of thought, sometimes referred to as transhumanism, posits that the exponential growth in the speed of computation (Moore’s law) will continue, with fundamental consequences for humankind. According to futurist Ray Kurzweil, by the year 2029 computational power equivalent to a human brain will be available in computers that cost less than a cheap laptop does today. By 2045, he projects that the total machine intelligence on Earth will outstrip biological intelligence by a factor of a billion to one.
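
Because everything in this piece rides on an exponential, it is worth seeing how sensitive such dates are to the assumed doubling period. A minimal Python sketch; the billion-fold target and the candidate periods are illustrative inputs, not figures taken from Kurzweil’s own models:

```python
import math

def years_to_multiply(factor, doubling_period_years):
    """Years for an exponentially doubling quantity to grow by `factor`."""
    return math.log2(factor) * doubling_period_years

# A billion-to-one advantage is ~30 doublings (2**30 is about 1.07e9),
# so the projected date hinges almost entirely on the doubling period:
for period in (0.5, 1.0, 1.5, 2.0):
    print(f"doubling every {period} yr -> "
          f"{years_to_multiply(1e9, period):.0f} years to a billion-fold gap")
```

At the classic eighteen-month doubling, a billion-fold gap takes about 45 years to open; at six months, about 15. Small changes to the assumed curve move these dates by decades.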

That’s all very far-out stuff. But where things start to get really interesting is when we continue up the exponential curve a bit further – say, two hundred years or so. Once we have built computers at the nano and quantum scales and embedded them in our own bodies and in most of the environment around us, we actually start to run out of space, or ‘substrate’, on which to compute. A physical feature of the universe – the Planck scale, set by Planck’s constant – places a hard lower limit on the size of any computing element, and with it a limit on how fast each element can operate. So, once computers are engraved on the tiniest building blocks of matter and cannot possibly get any smaller, the only remaining alternative is to find more matter and energy with which to make our computer bigger.
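
The scale in question can be computed directly from fundamental constants. A minimal sketch, using CODATA values for the constants:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # smallest meaningful distance
planck_time   = planck_length / c            # shortest meaningful interval

print(f"Planck length: {planck_length:.3e} m")   # ~1.616e-35 m
print(f"Planck time:   {planck_time:.3e} s")     # ~5.391e-44 s
```

On this view, no physical register can be smaller than about 10^-35 metres and no operation can complete faster than about 10^-44 seconds, which is what makes the substrate finite in the first place.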

Fortunately for our future civilization, there is a lot of matter relatively close by with which to fashion more quantum-scale computers. The planet Jupiter, for example, contains enough mass – 317 times that of Earth – to increase our total computational power by a corresponding 31,700%. Despite its seeming emptiness, even the space between planets and stars contains matter and energy that can be added to our growing computational mesh. Before too long, however, the relentless exponential growth of Moore’s law will mean that all of the matter and energy in our solar system has been harnessed toward computation. We will have truly become what Nikolai Kardashev terms a Type II civilization. On a computational scale, our solar-system-spanning supercomputer would contain ten million trillion trillion trillion times more processing power than our entire human civilization (living and artificial) commands today. Kurzweil believes we might achieve this as early as the twenty-second century.
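
The 31,700% figure follows directly from the mass ratio, on the assumption that computational capacity scales linearly with the mass converted to substrate. A quick check, with two other nearby bodies added for comparison; the masses are standard astronomical values:

```python
# Masses in kilograms (standard astronomical values)
earth = 5.972e24
bodies = {
    "Jupiter": 1.898e27,   # ~317.8 Earth masses
    "Saturn":  5.683e26,
    "Sun":     1.989e30,
}

# Assuming compute scales linearly with converted mass, the percentage
# gain relative to an Earth-only substrate:
for name, mass in bodies.items():
    print(f"{name:8s} adds {mass / earth * 100:13,.0f}% more substrate")
```

Jupiter lands at roughly 31,800%, in agreement with the figure above; the Sun, at about 333,000 Earth masses, dwarfs everything else in the system.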

In order for this exponential curve to hold, we would then need to venture out of our solar system and begin converting distant stars, and the interstellar dust between them, into computational bits. Were it not for the constraining limit of the speed of light, the Milky Way galaxy would be colonized by our runaway mega-computer within a relatively short period of time. If we have not found a way to surpass the speed of light by then, the galactic CPU will take hundreds of thousands of years to build, definitively ending the exponential progress of Moore’s law. But if we do surpass the speed of light, the entire mass of the Milky Way galaxy could be harnessed as computational substrate within another century.
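
The “hundreds of thousands of years” estimate is essentially just light-travel time across the galaxy. A minimal sketch, assuming a ~100,000 light-year diameter and a colonization wavefront moving at some fraction of c:

```python
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way

def crossing_time_years(fraction_of_c):
    """Years for a wavefront to cross the galaxy at the given speed."""
    return GALAXY_DIAMETER_LY / fraction_of_c

for v in (1.0, 0.5, 0.1):      # wavefront speed as a fraction of lightspeed
    print(f"at {v:.0%} of c: {crossing_time_years(v):9,.0f} years")
```

Even at lightspeed the build takes 100,000 years; at a more plausible fraction of c it stretches toward a million – either way flattening the exponential.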

Physicists Lawrence Krauss and Glenn Starkman have calculated that the entire observable universe contains enough energy to permit 1.35 × 10^120 bits of computation. By their reckoning, continual exponential growth in computing power would reach that limit within 600 years of the start of Moore’s law. And that’s it. After that point we would have to rewrite the physical laws of this universe, or discover a new universe, in order to continue the exponential growth of computation.
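
The 600-year horizon can be recovered with one line of arithmetic: count the doublings needed to reach 1.35 × 10^120 bits and multiply by a Moore’s-law doubling period. The eighteen-month period below is the conventional figure, assumed here rather than taken from Krauss and Starkman’s paper:

```python
import math

LIMIT_BITS   = 1.35e120   # Krauss & Starkman's bound on total computation
DOUBLING_YRS = 1.5        # conventional Moore's-law doubling period (assumed)

doublings = math.log2(LIMIT_BITS)   # ~399 doublings, counted from one bit
print(f"{doublings:.0f} doublings x {DOUBLING_YRS} yr = "
      f"{doublings * DOUBLING_YRS:.0f} years")
```

That comes out to roughly 599 years, within rounding distance of the 600-year figure.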

The upper limit on computation for the universe is thus a function of three things: Planck’s constant, which fixes the minimum size of a computing element; the amount of available matter and energy, which can grow only at a rate limited by the speed of light; and the event horizon of the observable universe, which ultimately halts that growth.
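
One way to make that sentence precise (a sketch using a standard result, not a formula from this article’s sources) is the Margolus–Levitin theorem, which bounds how many elementary operations per second a system with average energy E can perform:

```latex
% Margolus–Levitin bound: operations per second available to energy E
\frac{dN_{\mathrm{ops}}}{dt} \;\le\; \frac{2E}{\pi\hbar}

% The energy reachable by time t is capped by a lightspeed-limited
% sphere, and ultimately by the observable universe as a whole:
E(t) \;\le\; E_{\text{sphere of radius } ct} \;\le\; E_{\text{observable universe}}
```

Substituting the total energy of the observable universe into the left-hand bound is exactly the kind of calculation that yields limits on the order of 10^120.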

A number of questions logically follow from this calculation. The first is: why haven’t we yet encountered the advancing wavefront of an alien computer-colonization project? The speed with which transhumanists imagine this taking place means that they, more than anyone, need to contend seriously with the Fermi paradox. Perhaps such a computer has already gobbled us up and we simply aren’t aware of it. In this scenario we wouldn’t exist inside a simulation, but the atoms in our bodies would be carrying out computation without our awareness or consent. Another question to consider: what will our civilization do with 10^100 calculations per second? Will we simulate our own universes? For what purpose? Nick Bostrom suggests that we’ll do it because we are curious about our predecessors and want to enable an effective form of time travel.

Importantly, if the transhumanists are right, then it is almost certain that first contact with an extraterrestrial civilization will be a meeting of machine intelligences rather than biological organisms.

 

Further Reading:

The Intelligent Universe: AI, ET, and the Emerging Mind of the Cosmos — James Gardner

The Singularity Is Near: When Humans Transcend Biology — Ray Kurzweil

 

