The Measurement That Would Reveal The Universe As A Computer Simulation (technologyreview.com)
153 points by iProject on Oct 10, 2012 | 115 comments


While I wouldn't entirely discount the notion that we're living in a simulation, I suspect that this measurement is probably quite problematic, for various reasons other people have addressed. Nevertheless, this article got me thinking:

Really? Can we actually simulate any part of the universe with 100% quantum accuracy? I hadn't heard about that, and it seems a bit implausible to me. But okay, fine, let's take it as a given that we can in fact simulate a volume a few femtometres in diameter, as the article says. Furthermore, let's say that our universe-simulating capability increases in line with Moore's Law, doubling every 18 months without regard for the limits of silicon or anything else.

The question is: how long would it take us to become those universe-simulating gods?

This is not a serious question, just a fun little exercise. If all of the above are true, then here's what we'd be able to simulate, and when (the arithmetic is sketched in code after the list):

2019: The nucleus of a single gold atom (8.45 femtometres)

2077: An entire helium atom (62 picometres)

2090: A cesium atom (423 picometres)

2115: A ribosome (20 nanometres)

2152: A red blood cell (7 micrometres)

2192: The smallest vertebrate, Paedophryne amauensis (7.7 millimetres)

2217: A human brain (150 millimetres)

2245: A small apartment and its occupants (10 metres)

2274: A small town (1 kilometre)

2335: Planet Earth (12,742 kilometres)

2362: The Earth-moon system (812,000 kilometres)

2409: The inner solar system, inclusive of the asteroid belt (6.6 AU)

2430: The entire solar system, to the extremities of the Kuiper belt (200 AU)

2471: Sol's sphere of influence, to the edges of the Oort cloud (100,000 AU)

2544: The Milky Way galaxy (120,000 light years)

2572: The Local Group of galaxies (10,000,000 light years)

2625: The observable universe (29,400,000,000 light years)

...But seeing as none of the initial assumptions are likely to be true, alas for all that. Would be kind of cool, though! (also, a femtometre is really bloody small)

[Edit: mixed up radius & diameter for the size of the observable universe. Hate it when that happens!]
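
For anyone who wants to check the arithmetic, here's the whole model in a few lines of Python. The one assumption I had to back-fit is the starting diameter (~2.9 fm simulable in 2012, which is what makes the gold-nucleus date land on 2019); the rest is just Moore's law plus required compute scaling with volume:

    import math

    BASE_YEAR = 2012
    BASE_DIAMETER_M = 2.9e-15  # "a few femtometres" simulable today (my back-fit)
    DOUBLING_YEARS = 1.5       # Moore's law: computing power doubles every 18 months

    def year_simulable(diameter_m):
        # required compute scales with volume, i.e. with diameter cubed
        doublings = 3 * math.log2(diameter_m / BASE_DIAMETER_M)
        return BASE_YEAR + DOUBLING_YEARS * doublings

    for label, d in [("gold nucleus", 8.45e-15),
                     ("human brain", 0.15),
                     ("planet Earth", 1.2742e7)]:
        print(label, round(year_simulable(d)))
    # gold nucleus 2019, human brain 2217, planet Earth 2336 (within a year
    # of the list above, given the rounding in the assumed base diameter)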


It's definitely not 100% accurate. We can calculate observables like mass from basic principles to +/- 2-3%. However, there are a few things they didn't mention.

First, these things are not simulated at anything approaching real time. Things are simulated on timescales almost as vanishingly small as the space-scales, and even this takes hours on some of the largest research clusters.

Second, these aren't simulations in the sense that you're used to. They involve taking a volume in 4-space (3x space and 1x time) and using Monte Carlo techniques to approximate what happens in that space-time. It is non-trivial to pick up where (in time) one calculation left off and start a new one, and would introduce a lot of error.

Third, the calculation time doesn't scale with the volume (or even the 4-space volume). Naively, it scales with volume squared.* We actually do somewhat better than this right now, but again, approximations are required to get there.

*It's actually the number of points in the lattice squared, so increasing accuracy by decreasing the space between sites scales as you'd expect.

Source: I spent more time than I care to admit writing a very simple version of these simulators as an undergrad.
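
To give a flavour of what that looks like, here's a toy Metropolis sweep over a tiny 4D lattice. This samples a free scalar field, not QCD (the real thing updates SU(3) gauge links and has to deal with fermion determinants, which is where the supercomputers come in), but the "Monte Carlo over a 4-volume" shape is the same:

    import numpy as np

    rng = np.random.default_rng(0)
    L, mass2, sweeps = 4, 0.5, 200
    phi = np.zeros((L, L, L, L))  # scalar field on the 4-volume (3 space + 1 time)

    def neighbour_sum(site):
        # sum of the field over the 8 nearest neighbours, periodic boundaries
        total = 0.0
        for mu in range(4):
            for step in (+1, -1):
                idx = list(site)
                idx[mu] = (idx[mu] + step) % L
                total += phi[tuple(idx)]
        return total

    def local_action(value, nsum):
        # per-site piece of S = sum_links (dphi)^2/2 + sum_sites m^2 phi^2/2
        return (4 + mass2 / 2) * value ** 2 - value * nsum

    for _ in range(sweeps):
        for site in np.ndindex(phi.shape):
            nsum = neighbour_sum(site)
            proposal = phi[site] + rng.normal(scale=0.5)
            dS = local_action(proposal, nsum) - local_action(phi[site], nsum)
            if dS < 0 or rng.random() < np.exp(-dS):  # Metropolis accept/reject
                phi[site] = proposal

    print("<phi^2> =", float(np.mean(phi ** 2)))  # a sample "observable"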


Why should we have to make it 100% accurate?

I mean, just as with video game emulators, it can be good to simplify some things. That's how most Super NES emulators were made until very recently, IIRC.

I'm not a physicist, but for example, if the subatomic particles can't be accurately modeled, couldn't they just be replaced by atomic particles and special-case handling?


(Standard disclaimer: IANAQP, but...)

I think you're on to something. Rather than model each quark and gluon explicitly, you could just model probabilistic interactions at much larger scales. Maybe put in some special case handling so that if an observer within our simulated universe were to go looking for quarks and gluons, they'd find them. If they looked hard enough, some gaps might show: since subatomic particles are not actually being simulated in a continuously deterministic fashion, it would never be possible to observe both position and velocity simultaneously. An observer looking at an electron shell in two discrete moments would not be able to track a continuous orbit between them, since they'd actually just be seeing two separate expansions of what is effectively a lossy compression algorithm.

Also, it ought to be possible to model only the macroscopic dimensions explicitly, replacing the other seven microscopically enfolded dimensions with a bunch of arbitrary constants that accomplish more or less the same thing. Might drive our observers a bit batty, because those constants would appear to refer to a bunch of microscopic enfolded dimensions, which they'd never actually be able to detect.

Right, now I'm starting to freak myself out...


I got the same feeling when I was studying physics and saw those constants! And now I still get it when I see the normal distribution. (e freaks me out more than pi)


Of course, whether or not the simulation is "real time" for us makes no difference to the simulation, since time in the simulation would be perceived normally.


Right. But when people talk about simulating the observable universe, they're often getting at that story where people simulate from the beginning of the universe up until now, and I was pointing out that that scenario would take much much longer, because at current speeds it would be constantly losing ground.



Because I thought for sure it would be 605, here is 605: http://xkcd.com/605/

As acknowledged, this is not a serious answer, because following Moore's law for 200 more years (the time at which we get a simulated human brain) allows 8e40 times the computing power we currently have.


Bloody hell, that's brilliant. Had missed that one somehow!


You could perhaps simulate two isolated gold nuclei with twice as much computational power as one gold nucleus, but you're forgetting that everything in this universe interacts with everything else. As you get to three, four, five, 1000 gold nuclei, your computational requirements will be O(n^2), and then even the exponential growth of our computational power won't be as big of a deal.
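
To put rough numbers on that scaling: with all-pairs interactions the work grows as n(n-1)/2, so a doubling of computing power buys only about 41% more particles:

    def pair_interactions(n):
        # every particle interacts with every other one
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, "particles:", pair_interactions(n), "pairs")
    # 10 particles: 45 pairs; 100: 4950; 1000: 499500
    # i.e. 100x the particles costs ~10000x the work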


That might not be true. Assume nothing travels faster than the speed of light, and assume you're calculating all information in some unit of space. Then, for small increases in time, you can ignore everything farther away than that time * the speed of light. Repeat that process for each point in space for 1 tick, then start over.

Think of it like simulating a really large map of Conway's Game of Life: you can chunk it into separate threads and only pass information about the overlap after each calculation.
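
A rough sketch of that chunking in code (illustrative only): each chunk carries a one-row halo of its neighbour's state, steps independently, and its interior comes out identical to stepping the whole grid at once.

    import numpy as np

    def step(g):
        # one Life tick; cells beyond the array are treated as dead
        p = np.pad(g, 1)
        n = sum(p[1 + dy : 1 + dy + g.shape[0], 1 + dx : 1 + dx + g.shape[1]]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        return ((n == 3) | ((g == 1) & (n == 2))).astype(np.uint8)

    rng = np.random.default_rng(1)
    world = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

    # Two chunks, each with a one-row halo borrowed from the other.
    top = step(world[:5])[:4]     # rows 0-3 plus halo row 4; keep rows 0-3
    bottom = step(world[3:])[1:]  # halo row 3 plus rows 4-7; keep rows 4-7
    assert np.array_equal(np.vstack([top, bottom]), step(world))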


We already do this for electromagnetic waves. See:

http://en.wikipedia.org/wiki/Finite-difference_time-domain_m...

Since each field component in the voxel depends only on its immediate neighbours, you can perform massively parallel simulations on GPUs.

I've worked on this stuff, it's pretty neat. Discretizing continuous differential equations onto a voxel lattice is how a lot of things these days are done, including seismic and medical imaging.
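
For a taste of it, here's a minimal 1D version (normalized units, Courant number of 1, reflecting ends; a toy, not production code). Each update touches only adjacent points, which is exactly what makes it parallelize so well:

    import numpy as np

    N, steps = 200, 400
    ez = np.zeros(N)      # electric field at integer grid points
    hy = np.zeros(N - 1)  # magnetic field, staggered half a cell (Yee grid)

    for t in range(steps):
        hy += np.diff(ez)        # H update from neighbouring E values
        ez[1:-1] += np.diff(hy)  # E update from neighbouring H values
        ez[N // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source

    print("peak |Ez|:", float(np.abs(ez).max()))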


> Really? Can we actually simulate any part of the universe with 100% quantum accuracy?

Lattice QCD has been used to simulate an entire proton[1] and derive its mass to an accuracy of a few percent.

[1]Popular descriptions of the proton simply call it a triplet of quarks, but it actually contains several virtual quarks too and a bunch of gluons with ever changing bonds and positions. The average amount of crap flying around inside the proton determines its mass, and you can find it by simulation.


An accuracy of a few percent is awesome. It's just that the cumulative accuracy for larger systems becomes lousy pretty fast.


"the lattice spacing imposes some additional features on the spectrum. The most striking feature... the cosmic rays would travel preferentially along the axes of the lattice, so we wouldn't see them equally in all directions. "

That assumes an awful lot about a simulation which, by definition, is not even in this universe. Maybe they would detect a simulation which has those features, but the assumption that it would work like that, because that's how we'd do it here and now in this 3-d space, is, I think, a very tenuous one.


The article does admit this later on: "the calculations by Beane and co are not without some important caveats. One problem is that the computer lattice may be constructed in an entirely different way to the one envisaged by these guys."

Another problem I see is that a lattice might be evidence of a simulation, or it might simply be fundamental. It would seem strange, but no stranger to me than some other aspects of quantum physics.

What's the difference between a truly fundamental principle, and a principle that is fundamental in our world because it's part of a simulation? How could you tell?


I'm visualizing "lattice" the way latitudes and longitudes delineate Earth into a lattice. Except we live on a 2-sphere, which is the surface of a ball (the three-dimensional analogue of a circle).

First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.

What if the lattice revealed is a 4-sphere or 6-sphere? What would that tell us?

Edit: rewrote to address parent post...


Only if you assume there's an inherent connection between the simulation and reality. I've written and run Conway's Life before, but because it was fun, not because it was an accurate simulation of reality.

In a world that has virtually unlimited computational power, who's to say there wasn't some teenager who got bored on the weekend and said "hey, what if I ran Simulation.app for 10 billion years of simulated time, but with only 4 fundamental forces, and 3 dimensions of space"?


Spotting a lattice wouldn't be conclusive by any means, but it would be some evidence for the simulation hypothesis, because it would be more likely in that situation than others. I... don't know how we estimate how much more likely.


That's pretty much the claim, right? Not that this will determine whether or not the universe is a simulation, but that it may determine that it is, or at least probably is. Simulationism is predicated on the hypothesis that we could construct a completely accurate simulation of our universe in our universe, since if recursively-simulated universes are possible it becomes vanishingly unlikely that we do not live in one, so demonstrating that that's possible would be pretty important.

Personally, my money's on the universe is a simulation of itself.


It's not actually a true statement to say that "if recursively-simulated universes are possible it becomes vanishingly unlikely that we do not live in one."

Yes, I've read Bostrom's paper[0], but he misses a key point: the number of rational beings in each simulated universe may decrease exponentially. If, on average over all universes, each rational being simulates only 0.5 rational beings during his or her lifetime, the total number of simulated rational beings would be exactly equal to the total number of unsimulated rational beings, making it equally plausible that we are in the root universe or one of the simulations.

The possibility that rational beings diminish in number exponentially as universes are simulated seems not only plausible, but likely given the nature of the simulations necessary to replicate our universe in all its detail.

[0] http://www.simulation-argument.com/simulation.html
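
The arithmetic behind that, for anyone who wants to poke at it: if each being (real or simulated) simulates r new beings on average, the expected number of simulated beings per unsimulated one is the geometric series r + r^2 + r^3 + ... = r/(1-r), which is exactly 1 at r = 0.5:

    r, total, layer = 0.5, 0.0, 1.0
    for _ in range(60):  # the series converges quickly
        layer *= r
        total += layer
    print(total)         # ~1.0: one simulated being per unsimulated one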


I think it might be heading down the wrong path to talk in terms of "rational beings" as atomic units rather than in terms of energy (or rather information) balance.

Remember that you do live in a simulation of the universe so compressed it fits between your ears, and it does contain many simulations of other rational actors. They're nowhere near as complex as your whole universe, but I bet they're complex enough to know that they're complex enough.


I don't see how this philosophical argument is going to change anything. Each human has a rather small finite number of 'rational beings' they can handle processing at once. It doesn't recurse very well; the limit is a total. The exponential argument against simulation is unaffected.


Are you implying that concepts of people in your mind are themselves self aware?


Do you think that I am self aware?

Can you predict where I am going with this line of questioning?


You are simulating them as self-aware. And what's the difference, if you can predict their actions with high accuracy?



"The glitch arises from the possibility that the average number of people living in the preposthuman phase might be different in civilizations that produce ancestor simulations than in civilizations that do not."

No, that's not the problem. The simulation argument essentially depends on the fact that a universe that reaches simulation-capability ends up simulating more rational beings than half of the beings that exist in the universe itself. But it's possible that the simulations themselves are so limited that they can only simulate some fraction of the universes' number of rational beings. In fact, that seems likely: it seems that any simulation of this universe which obeys the same laws as this universe must be smaller than this universe. "How much smaller?" is the glitch. The simulated universes have to be sufficiently large (as measured by the number of rational minds they ultimately simulate) to exceed an average of 0.5 simulated minds per real mind in order for the simulation argument to hold.


Exactly. To draw a computing analogy, in order for it to work we'd have to be able to make virtual machines that outperform the machines they're running on, and recursively. In order for it to work, each simulated universe would have to progress to the point we have in ours, and also be able to simulate getting to this point much faster than it took to get there itself. It quickly starts to sound like perfect compression or perpetual motion.


> Personally, my money's on the universe is a simulation of itself.

Does this, literally, mean anything at all? I struggle to find any significant value in having this knowledge. Are there any additional conclusions we can draw based on knowing that the universe is a 'simulation of itself'?

It just strikes me as tautological.


> It just strikes me as tautological.

Pretty much, yeah. There's as much value in it as in any kind of metaphysics, I guess; it makes it easier to get on with being an ape despite the creeping feeling that being an ape doesn't mean anything.

Basically-- I know that my experience of the universe is an interaction of the universe with itself, and that things I perceive also ~perceive me. I create the universe as it creates me, and the idea of "something" "real" "existing" outside of my experience of it doesn't make sense because none of those words make sense outside of my experience.

So my experience is a simulation, but what it is being simulated by is, in fact, my experience. Thus: A simulation of itself. It makes about as much sense as anything else.

In terms of pragmatic value? Eh. My view of philosophy is that the simplest reason to do something besides philosophy is most likely the best.

(I swear I don't sound like a crazy person in real life-- but despite my valediction, I love talking about this sort of thing.)


You're talking about Plato's cave shadows and Korzybski's territorial map as mental simulations of the physical world? They might be called simulations, but in that sense, human perception is a very loose simulation of reality, only good enough for us to successfully inhabit our place in the universe and sometimes not even that (in the case of asylum inmates). The mind's simulation of reality is not a simulation in a scientifically practical sense.

I'm not willing to call the universe a simulation of itself. I agree with the idea of using the universe to simulate itself, though. Computer simulations are limited by accuracy and time, but if you assume arbitrary amounts of time, and if you don't need perfect accuracy (or laws of physics dictate safe time/space steps for the simulation), you can theoretically simulate arbitrarily complex physics for a brief amount of time. Sorry for stating the obvious with that, but I'd like to contrast the typical "computer simulation" idea with the following: I consider physics experiments to be attempts at using the universe (a small part of it) to simulate other parts of the universe. Computer simulations pale in comparison.

I got lost on this part of your comment: "So my experience is a simulation, but what it is being simulated by is, in fact, my experience. Thus: A simulation of itself."

Entertaining mental maps as simulations, for the sake of argument, I agree with the first part: mental (inner) experience is a simulation of sorts of the outer world. I don't understand the second half of your first sentence. The "what" that is providing your perception (inner experience) is your brain, right? I don't understand how you're making the connection that the brain is your experience. That seems like a lazy use of the word "is", and with a narrower replacement, the equivalence would not hold and you would not be able to claim "the universe simulates itself".


Something existing outside of your experience is as simple as two kids in the third world playing together. The abstract concept of events occurring outside of your realm of perception shouldn't be novel or difficult to imagine - just think of the things you do when no one else is around, then reverse that notion.

Simulation, to me, means an approximate recreation. To say that your experience is a simulation leads me to wonder what you're a simulation of. An act can't simulate itself, as it's not approximately itself; it is itself.

Words need uniqueness to be helpful as a form of communication, and if we call simulations anything acting like something else OR itself, then by calling something a simulation, you're just stating its existence, which I can't find a reason to do that can't be accomplished in a more direct way.

In other words, yes things exist. Let's not ruin the word 'simulation' trying to say so!


>Simulationism is predicated on the hypothesis that we could construct a completely accurate simulation of our universe in our universe, since if recursively-simulated universes are possible it becomes vanishingly unlikely that we do not live in one, so demonstrating that that's possible would be pretty important.

Actually it wouldn't have to be "a completely accurate simulation of our universe in our universe". A "somewhat accurate simulation of our universe in our universe" would also do for the statistical purposes of that premise. If you have recursively-simulated universes, even if they are not 100% like each other, it shouldn't change the probabilities, right?

That said, I don't fully agree with the main premise:

>if recursively-simulated universes are possible it becomes vanishingly unlikely that we do not live in one

Sounds to me somewhat akin to the faulty "ontological proof of the existence of God" ("Anselm defined God as "that than which nothing greater can be conceived", and then argued that this being could exist in the mind. He suggested that, if the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, a greater being is possible—one which exists in the mind and in reality.").


It bears some similarities to the ontological argument, but it's not fundamentally grounded in ontology. It's based on the hypothesis that simulation is possible, which is testable in some sense.

Say we cook up a little Earth simulator, and it spontaneously generates a perfect being with both agency and omniscience. That would be a fair bit of evidence for the ontological argument, yes?

That's also why I say "completely accurate". A really rigorous argument relies on recursive simulation, since creating a simulation that couldn't be the universe isn't very strong evidence. (I'm typing on one right now, in fact.) If we make a completely accurate simulation, we can skip the step of proving that we can make a simulation of it, because we already did.


Anselm's ontological argument isn't faulty, it's just not very useful. It just claims that things that "really" exist are greater than things which exist only in our minds. http://en.wikipedia.org/wiki/Ontological_argument#Anselm As you can see the implications are pretty weak. Maybe we should go find the greatest thing that really exists and declare it to be God.


"if recursively-simulated universes are possible it becomes vanishingly unlikely that we do not live in one"

That depends on the probability of a given universe or simulated universe to contain a simulated universe, and what limits there are with the complexity in space and time on each recursive simulation. I suspect that thermodynamics saves us here.


It gets a bit metaphysical, but consider how many more possible universes there are in a universe which is likely to generate a recursive simulation than in one which is not.

That is, if simulation is possible, the probability that the probability of being in a simulation is high in a given possible universe is, itself, high, ad infinitum; hence again why being able to simulate our universe is an important point of evidence.


Strongly agree. Equally tenuous is the converse idea: that if we did detect some features of the physical world that were also apparent in a simulated lattice, that the best explanation for this is that we exist in a simulation.


The world used to be flat, but fortunately They migrated to a bigger server just before we had developed the mathematics to prove it, so now we all think the earth has always been round. Now that They can see us talking about it, expect Them to fix the cosmic-ray bug before we can test it. I can practically see the JIRA tickets.


The old-school mystics discovered some flaws in the rules as well, hence the transmutation of materials and levitation... However, now they hot-patch and perform roll-backs... which we see as déjà vu.


I agree. Just be sure to file them here: http://blogs.atlassian.com/2012/09/hr-developer-jira-real-li....


Their abstract starts, "Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored." They're not saying that's how it works, but since that's one of the few ways we can imagine simulating the universe, it's reasonable to look for its signature.

I've always thought there must be better ways of simulating the universe than starting a 3D CA at the big bang and running it forward a few billion years in femtosecond increments until something interesting happens.


If the computer were fast enough, that'd be the easiest way to do it.


I agree, but actually, it's not even possible to draw a uniform 3D grid through the universe because of the curvature of spacetime. So even if the universe is a simulation, I don't think it could possibly work the way they describe and show the kind of artifact that they measure.


Yes, it's possible that a simulation would use a non-uniform lattice-like data structure. Maybe something like a quad-tree (with appropriate dimensionality) where the granularity of the structure varies according to the local density of data.

Granted, such a data structure would need to be expressed in a coordinate system, which itself defines a grid or matrix. But can't coordinate systems use non-uniform representations? (analogous to floating-point)

Am neither physicist nor CS person, so not sure all of this holds together, just wondering.
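
Not sure either, but as a data-structure sketch (all names here are invented), something like this octree captures "granularity follows density": nodes subdivide only where points pile up.

    class Octree:
        def __init__(self, center, half, capacity=4):
            self.center, self.half, self.capacity = center, half, capacity
            self.points, self.children = [], None

        def insert(self, p):
            if self.children is not None:
                return self._child_for(p).insert(p)
            self.points.append(p)
            if len(self.points) > self.capacity:  # too dense here: refine
                self._subdivide()

        def _child_for(self, p):
            # one bit per axis picks the octant relative to this node's center
            i = sum(1 << axis for axis in range(3) if p[axis] >= self.center[axis])
            return self.children[i]

        def _subdivide(self):
            h = self.half / 2
            self.children = [
                Octree([c + (h if (n >> axis) & 1 else -h)
                        for axis, c in enumerate(self.center)], h, self.capacity)
                for n in range(8)
            ]
            for q in self.points:  # push existing points down a level
                self._child_for(q).insert(q)
            self.points = []

    tree = Octree(center=[0.0, 0.0, 0.0], half=1.0)
    for p in [[0.1, 0.2, 0.3]] * 3 + [[-0.5, -0.5, -0.4]] * 3:
        tree.insert(p)  # sparse regions stay coarse, dense ones subdivide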


Be sure to email the physics PhDs who published this paper to let them know of this flaw you have found.


Be sure to leave it at that snide remark and not actually explain the flaw in my reasoning, so that I can avoid learning something.


The point was you made a claim, not a question. As snide as my remark might have been, you're insulting the people who did this research by being so matter-of-factly dismissive of their work, based upon a very basic observation that they surely must have considered.


fundamentally, how do we distinguish between 'simulation' and 'reality'? if a simulation (of any sort) exists, it is therefore real. how would a hypothetically 'simulated' universe be any different from a 'real' one?

further, proposed research that seeks to 'reveal' the universe as a 'computer simulation' suggests a limited vision. why must it be a 'computer' doing the simulating? maybe the universe (in some sense) 'computes' itself in coming into existence (think cellular automata or things of the like). if so, then it is not a 'simulation' coming into being, but 'reality' itself.

i will simply state that it is not surprising to me that 'simulations' and 'realities' have much in common -- so it is not surprising that one might be mistaken for the other.

in any event, these ideas are off the top of my head; i don't know how seriously to take them.


My impression is that anything you can simulate via a program could just as well be the fundamental laws of physics -- and vice versa.

The idea behind this line of inquiry - which I hope is continued - seems to assume that, if we are simulated, then we are simulated using a similar computation model to our own, and using data structures that we would have come up with ourselves. I don't have better suggestions for which model to use, but it's good to keep in mind that even if one model of a simulation fails to match our physics, there may just as well be another that does match it.


It looks like for a long time we've suspected this strange relationship between our own mathematical inventions (a computer simulation is just a machine that applies a discrete mathematical model), and the universe itself:

http://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_...

It certainly is a mystery. I personally believe this mystery is logically equivalent to the question "what is existence?" Whether this question is answerable or not, I can't say.


This is interesting. If you could map out preferred directions in the universe (even if those directions changed across space), you would be able to establish an absolute, universal reference frame. That would be...incredible.


If the universe is a "computer simulation" running on the computer called the universe, then a computer is a "computer simulation" running on a computer, and we are all human simulations running on the computer called the human body.

Occam's razor would say that the simplest explanation is the best. Since this "simulation" works exactly the same as the known universe and we don't know about anything that is outside the universe, we can safely ignore this theory and not lose any information about how the universe works.


Apart from the fact that a simulation isn't initiated all by itself, and points to a purpose important to something outside of the simulation.


"Initiation" is a four dimensional word.

As our existence is on a four dimensional level, so naturally we see everything as having a start and a finish, i.e. a line on a two dimensional surface.

But there exists situations where there is no start or finish, i.e. a line on a mobius strip. Since we hypothesize that our universe exists on 10 dimensions, we shouldn't assume that our universe's existence has a set start and finish, even though we can only measure back to the big bang.


Fair point, but you can take initiation with regards to a simulation as being whatever confluence of events is triggered by something external to the universe to cause the universe/simulation to exist.

The point is that, by definition, a simulation has to simulate something, and therefore needs to exist sometime after the thing it is simulating.


Within the constraints of a four-dimensional simulation as we know it, of course.

But when you start stretching to simulations in the Nth dimension, you are back to the Möbius strip.


Occam's Razor is more of an idiom or principle than a law.


>Occam's razor would say that the simplest explanation is the best

_Best_ not as in "the true one", but as in "the most probable, if we don't know any better".

An event could have an elaborate explanation and a simple one, and the elaborate explanation could still be the true one.

E.g. a guy is found knifed in a desolate street of New York. His wallet is missing. He had no known enemies. The simplest explanation is "it was a mugging gone wrong". In fact, he was killed by a guy who had an obsession with the girl he recently started dating. The killer didn't even take the wallet to make it look like a mugging: he took it because it had pictures of the girl and the dead guy he wanted to add to his "shrine". (Hmm, I should go write for CSI)

>thus since this "simulation" works exactly the same as the known universe and we don't know about anything that is outside the universe we can safely ignore this theory and not lose any information about how the universe works.

Except if a simulation _doesn't work_ "exactly the same as the known universe" but has tiny details that could be telling, which is, like, the premise of the whole article.


> In fact, he was killed by a guy that had an obsession on the girl he recently started dating. The killer didn't even take the wallet to make it look like a mugging: he took it because it had pictures of the girl and the dead guy he wanted to add to his "shrine". (Hmm, I should go write for CSI).

But that's not more likely than any one of thousands of other possible explanations which there is no specific evidence for. Proving a negative, that it didn't happen this way, is also not possible. Thus, that the negative cannot be proved is not evidence for this theory either. See Bertrand Russell's teapot: http://en.wikipedia.org/wiki/Russell%27s_teapot.


>But that's not more likely than any one of thousands of other possible explanations which there is no specific evidence for.

True, but the point I wanted to make is that Occam's Razor doesn't give us the "true story", just the most probable given the evidence we have.

I.e., sometimes the complex, involved story can be the actual thing that happened, despite involving 20 more entities and complex interactions.


Everyone knows how software works. You have a program, and you give it to a computer, and the computer conforms itself to the program and instantiates all the rules and carries out the program's instructions. The universe works just the same way, only without the computer. -- the negation of the Simulation Hypothesis


Actually, there cannot be variables determining some phenomena: http://en.wikipedia.org/wiki/bell_theorem



Bell's Theorem says there cannot be local variables determining some phenomena. That raises the computational cost of simulating the universe, and shrinks the set of possible universes that could simulate ours, but does not preclude the universe being a simulation.


Good point, thanks for correcting me. Can you give a hint how this would raise the computational cost?


Requires proving the claim of nonexistence.


I (tried to) read the Beane/Davoudi/Savage paper. I am not a theoretical physicist, but I got the vague feeling that this paper is one of those Markov-chaining-bot-written papers. I would like to note that it's true: advanced math now looks exactly like the ravings of a madman.


Hmm, which paper? Would you link to the PDF?



I just better not end up being flushed out of a sewer pipe when this is all over. I hate swimming naked.


Naw. You're just on the Thirteenth Floor.


Are we really suggesting that people studying cosmic rays would not have already noticed a geometric pattern in their observations? If I were studying cosmic rays, and noticed what appeared to be pixelation in the universe, you would have heard about it by now.


It took until the early 1990s for COBE to detect anisotropy in the cosmic microwave background radiation. We still haven't detected gravitational waves. Why should we necessarily have seen pixelation by now? Rather, the lack of detection of such pixelation sets limits on certain parameters in the model, but more work could find patterns which weren't seen before.


Not necessarily (if the data exists then this paper would already have referenced it, surely? These are serious scientists at very respectable institutes - the INT at Seattle is highly prestigious). Firstly, you would need to select rays that are very close to the cutoff...the further your distribution slides down the energy axis, the less noticeable your deviation will be. Secondly, you need to orient your distribution geo-spatially, i.e. take into account the Earth's position throughout the measurement. Thirdly, you need to find a way of correcting for all known sources of very high energy cosmic ray background.

I'm not sure whether anyone has ever done all that at once. I'm not saying that they haven't; I'm just not familiar with any measurement that meets all those criteria. I'm not a cosmic physicist though, so it's entirely possible the measurement has been done.


I'm sure it's been done. Anisotropy studies of UHECRs are a very popular thing to do for UHECR experiments, almost as popular as looking for neutrinos.

I worked at such an experiment for 4 years.


I don't think you have any idea of the staggeringly ginormous amount of data about "cosmic rays" that can be collected, especially at the level of detail this paper is about.

For example, the Large Hadron Collider generates about 300 GB per SECOND of raw data when it's running. There could be thousands of interesting anomalies in that data that nobody has found so far because they haven't looked at the right parts of it in the right way.


If you want to read more about the simulation argument, the place to start is Nick Bostrom's page on it: http://www.simulation-argument.com/


If the universe is a (deterministic?) simulation, would it matter to us if it were actually computed?

And if no, does anyone (or thing) need to come up with the rules for the simulation for us to experience it?


No, but knowing the rules would help if we're trying to break them (hack the universe).


It's basically impossible to answer no to any of those types of questions. That does not mean they're true, just impossible to falsify.


It's fun to speculate about this (heck, I may even found a religion around it) but there's no way to prove that one is in a simulation.

It's possible that the creators of the simulation do not wish to let the cat out of the bag. Any test which aims to probe the limits of the simulation can, itself, be simulated at higher accuracy with only a slight loss of real-time speed. In fact, it's highly likely that different parts of the universe are simulated with varying amounts of precision.

Heck, even if the simulators goofed, they can always fix the simulation, rewind the universe back to a prior checkpoint, and begin anew.


If the universe were a simulation, then what about the universe in which ours is being simulated? Can we measure that too?

What if the creator of the simulation thought of this and programmed his simulation such that this measurement will not work, by making it give fake values for non-lattice directions? :p

If the measurement says it's a simulation, who says it really is a simulation? It could just be that physics actually is like that, without any "computer" running it being involved.


"You're very clever, young man, very clever, but it's turtles all the way down!"


If physics is like that, then that would actually be of great interest to physicists. Most models rely on space being homogeneous and having the same properties regardless of direction and speed.


The argument seems to be that if the smallest workings of space/time exist in discrete points rather than being continuous, then we must be in a simulation, if I understand correctly. Personally I find discrete math much more natural and it's the continuous math that seems imaginary to me. So a discretized universe seems perfectly natural to me, and hardly evidence that it is being simulated.


I agree with you about preferring a discrete model of the universe. This may be a common affliction among programmers. For some apparently good theoretical reasons, though, it's a minority opinion among physicists.

There is at least one notable figure who explores the idea: Nobel laureate Gerard 't Hooft. Here's a recent starting point: http://www.math.columbia.edu/~woit/wordpress/?p=5022


I had a professor pose a similar idea to a class I was in, the idea being that if a perfectly simulated universe is a technological possibility at some point in the future, and that more than one simulation could be created, and that simulations could be created inside of simulations, and so on for simulations within simulations, then odds are we are in a simulation.


Seems your professor never heard of the Halting Problem.


It was a political science class, so you're probably right; but how exactly does it apply?


It doesn't.


Care to explain how it's relevant?


It is relevant because the professor's hypothesis depends on the availability of "sufficient computing power" realized through "technological progress". Coming from a layperson (though obviously I didn't know that at the time my original comment was written) with experience of modern-day IT and exposure to magazine articles making wild claims about Moore's Law and the like, it makes lots of sense.

I mentioned the Halting Problem as an example of undecidability. We know that there are problems that cannot be solved by a computer, regardless of the resources such a computer may have. The argument that we are a simulation would only apply if all observable phenomena in the universe are computable themselves (which is far from trivial to answer, but if I had to guess I'd say they aren't). Of course, we could argue that whoever ran this simulation would have provided it with a simplified reality, including an underpowered form of computing... but that sounds rather suspicious to the skeptic in me.


Well, it might be possible for a clever simulation to commandeer the bare metal of the system with a well-crafted injection attack, and then use the ring-zero access as an environment to stage and control its own simulation.

This would still consume resources, and indeed, if limited resources were somehow improperly allocated in a catastrophic manner, it might tear apart the fabric of the universe and threaten to crash both simulations.

But think of some of the implications in this. First: when that simulation attempted to discover if it was wrapped in a simulation, it probably would get a false negative, because it'd be running so close to the hardware. Second: WE wouldn't be able to tell if WE were slaved by another simulation. That is, whether we had a parallel universe as a neighbor that was pulling our strings. Third: If we find ourselves creating simulations that are alarmingly convincing, and seem to prove that WE are a simulation, we would want to be very careful when we start playing with one, for fear that we might have unwittingly stumbled upon a curious vulnerability in this realm that allows for an injection attack, since it might crash the system (although, if this is all just a big video game, what is there to honestly fear...).

See also: Bobby Tables: http://xkcd.com/327/


Claiming that we're in a simulation because our universe is discrete is wrong on 2 levels.

1. The discreteness of our universe doesn't say anything about whether we're in a simulation or not. What if the "higher level" universe has non-discrete computers? If our universe is discrete, it can be simulated on a von Neumann-model computer - that's all we can say.

2. And more importantly: saying that we're in a simulation actually doesn't mean anything. It is 0 bits of information. Adding this statement to our model of the world is like adding comments to source code or defining a function that is not used. It's like saying that gravity is caused by tiny invisible dwarfs pushing elementary particles.


It's amazing what simulated physicists are up to these days.


I find it interesting that, at the end of the article, the point is made that failing to find evidence in the relevant measurement doesn't necessarily indicate that we're not in a simulation, followed straight after by the remark that it is worth making so as to rule out that we're living in one. Isn't that a lot like giving a null hypothesis and suggesting we go out and prove it?


Reminds me of "Permutation City" by Greg Egan.


If we actually do live in a computer simulation, and scientists carry out research to systematically find and test all the corner cases, we'd better hope they don't trigger a segfault and accidentally destroy the universe :)


So, wait. What if our universe is being simulated with 'different' laws of physics? I guess we would never be able to find that out, even if we could somehow conclude it is in fact simulated.


Looking for hints of discrete patterns in our universe? Awesome, go ahead.

If you want to make a case for whether we are in a simulation or not, though, you are not in the domain of science anymore.


How much of this speculated artifact wouldn't be there if The Great Computer was, for example, analog, and used polar coordinates to simulate our universe?


I like Iain Banks's take on this from his book "The Algebraist".

“Any theory which causes solipsism to seem just as likely an explanation for the phenomena it seeks to describe ought to be held in the utmost suspicion.”


The sort of simulation they're talking about here doesn't justify solipsism. Even if it were true, everybody you meet would be just as real as you are.


I think you misunderstand the quote. It is not saying that simulation justifies solipsism, just that it belongs to the same family of concepts, along with comedy-gods burying dinosaur bones and stuff like that.

The original context was one character's view of a society in the book that had elevated the theory of existence being a simulation to the status of official dogma.


I'm still confused. Are you saying our society has elevated the theory that we're a simulation to the status of dogma? That doesn't match with what I observe. Are you saying it's untestable? That's what the article is about. Are you saying that we have no a priori reason to believe it might be true? If you look at the universe as it appears to be, then it's a justifiable surmise that the majority of humans intelligences that will exist will be in simulations that we will run. That's why people are interested in figuring out if we're the people in the simulations in the first place. I really still don't see why you think the situations are comparable.


No, the society in the book did that, I was giving you the context of the quote.

Also, I am not sure that a positive result in that experiment would necessarily prove the hypothesis. The trouble is that if you think you have evidence for the universe being a simulation, you have actually found evidence for any number of things, from god to solipsism, depending on what hat you are wearing, but you haven't actually got anywhere; unless you have really exhausted all possible other explanations for what you have measured, I don't think it is a particularly rational position to take, although it is admittedly quite fun.

Now it could be that we are in a simulation, but if we are, how do you know that anything you detect wasn't put there on purpose to trick you into thinking that you are in a different kind of simulation from the one you are actually in, as a honeypot to fool would-be hackers of the simulation? And suddenly we are back in comedy-god land.


You are pushing a bit too far there. It's one (bad) thing to say, "my theory is X (a miracle), even though Occam's Razor suggests Y (evolution), and both X and Y explain Z (dino bones), and we agree Z is true".

It is another thing to say, "my theory X (simulation) implies Z (a symmetry-violation), which has no other proposed explanation, and we have observed Z".

That said, I am more than willing to bet that the lattice will not be discovered.


If there are no other explanations that fit the data that anyone has come up with, then fine. But I would not personally be particularly convinced by a simulationist interpretation until I thought that the evidence had been repeatedly tested in a fairly wide variety of ways over a long period of time. For me it is one of the ideas that the phrase "Extraordinary claims must be backed up with extraordinary evidence." was built for.


>First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.

That's only if we assume a digital simulation.

An "advanced civilization" could also run it in some kind of analog computing environment, no?


A digital simulation is exactly what they are trying to detect; a quantized lattice forming the observable universe would certainly constitute evidence of such. Not finding such a lattice wouldn't rule out an analog simulation.


Aren't things discrete at the quantum level? Couldn't that be the 'level of discreteness' of the simulation? A bit over my head here, so correct me if I'm wrong.



