Almost as bad as the bug where they made light the same speed in all reference frames. I heard they didn't even fix the bug, they just put in some wonky fixes that mess things up when you go really fast or get close to the world boundary, since they figured nobody would ever do that!
I think that's a feature rather than a bug -- or, rather, a requirement for preventing buffer overflows. A hard wall at the limits of addressable memory would be too obviously artificial, so that was a no-go. But time dilation allows them to push faster particles into lower and lower priority threads. A lot of handy optimisations there.
No, where I think they really screwed up was in level of detail -- you know, where you get to smaller scales and start generating details procedurally rather than pulling geometry out of memory. I mean, I understand why it's necessary to do that -- who wants to store the position and momentum of every particle in the observable universe? -- but they could have at least faked some kind of continuity between observations, rather than calling Math.random() every time!
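Roughly what I mean, as a toy TypeScript sketch (the hash, the seed, and all the function names here are invented purely for illustration): detail derived from a deterministic hash of the coordinates is reproducible every time you look at the same spot, while a bare Math.random() call gives you a fresh roll on every observation.

```typescript
// Deterministic "procedural detail": same coordinates, same answer.
// The integer-mixing hash is a made-up stand-in; any decent hash would do.
function hashCoords(x: number, y: number, z: number, seed: number): number {
  let h = seed ^ (x * 374761393) ^ (y * 668265263) ^ (z * 2147483647);
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return ((h ^ (h >>> 16)) >>> 0) / 4294967296; // map to [0, 1)
}

// The kind of continuity between observations I'd have liked.
function detailAt(x: number, y: number, z: number): number {
  return hashCoords(x, y, z, 42);
}

// ...versus what they apparently shipped: a fresh roll on every read.
function detailAtSloppy(_x: number, _y: number, _z: number): number {
  return Math.random();
}

console.log(detailAt(1, 2, 3) === detailAt(1, 2, 3));             // true
console.log(detailAtSloppy(1, 2, 3) === detailAtSloppy(1, 2, 3)); // almost surely false
```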
It's not really shared memory. It's more of a compression hack that coalesces the state matrix of a whole set of particles into a single vector. When you read out the state of a particular particle, it unfolds this coalesced state by partial orthogonalization, starting from a random state vector in the spanned vector space.
You're guaranteed that for any other particle of the remaining set, the state vectors are orthogonal to the state you just read out. If you do the experiment with two entangled particles, by reading one, you'll immediately know the state you're going to read for the other one.
If you do it for more than one particle, then each state you read reduces the size of the remaining set of vectors that may come out.
From a compression/encoding point of view it's kind of neat. If you do it for lots and lots of systems of many particles in a certain microstate, on average you're going to end up with nearly identical results for each total readout process, although the precise values and the order in which they appear will vary wildly.
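A toy TypeScript sketch of that readout-by-partial-orthogonalization idea, in the spirit of the comment rather than of real quantum mechanics (the vectors, the dimension and the helper names are all made up): each readout starts from a random vector and projects out everything already read, so it is guaranteed orthogonal to every previous readout. With two "entangled" particles in a 2-dimensional state space, reading the first fully determines the second (up to sign).

```typescript
type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const scale = (a: Vec, k: number) => a.map(x => x * k);
const sub = (a: Vec, b: Vec) => a.map((x, i) => x - b[i]);
const norm = (a: Vec) => Math.sqrt(dot(a, a));
const normalize = (a: Vec) => scale(a, 1 / norm(a));

// Read out one "particle state": start from a random vector, then remove the
// components along everything already read (Gram-Schmidt). The result is
// orthogonal to all previous readouts by construction.
function readNextState(alreadyRead: Vec[], dim: number): Vec {
  let v: Vec = Array.from({ length: dim }, () => Math.random() - 0.5);
  for (const u of alreadyRead) {
    v = sub(v, scale(u, dot(v, u)));
  }
  return normalize(v);
}

const first = readNextState([], 2);
const second = readNextState([first], 2);
console.log("overlap:", dot(first, second).toFixed(6)); // ~0.000000
```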
Now, because all this iterative state unfolding more or less comes down to a kind of hash function, you want to make sure that users of the middleware don't rely on hidden internal state or assume some kind of hidden seed. The downside is that this particular implementation detail destroys locality, which kind of goes against the whole idea of the fixed-event-propagation-differential system, which aims to isolate high-energy processes from neighbouring parts of the simulation by easing their spacetime metric.
There are a few corner cases (which actually occur in a lot of instances in the simulation) where out-of-bounds stress-energy densities are (successfully) isolated from the rest of the simulation, leaving visible to the rest of the simulation only a meta-description of the region's contents. That description boils down to mass, charge and spin (where, due to some interesting interaction, charge and spin happen to have the same kind of visible effect on the outside spacetime metric) plus the surface area of the boundary region.

However, right at the boundary, the cursor iterating over the aforementioned state-vector unfolding may cross into the isolated region. At first it looked as if this could break the simulation, but it allowed for a wonderful hack: incremental garbage collection inside the isolated regions. You treat the whole isolated region as a single meta-particle holding N instances of the state vector, where N is proportional to the surface area of the boundary region. By randomly selecting one of the quasi-frozen states from inside the isolated area, you can call its destructor by unfolding its complement, an entangled particle that happens to be just outside the boundary region.
This goes nicely with another hack introduced early in development: the on-demand spawning of entangled particle/antiparticle pairs, which can be used to transmit forces between the actual particles you want to simulate.
By applying these on-demand spawns to the isolated regions, it turns out these regions can be garbage collected by kind of "evaporating" their contents through an entropy-maximizing process, thereby avoiding the need to faithfully reconstruct the original information; instead, the remaining hash value is uniformly distributed over the simulation and used to seed the entropy pool from which random numbers for the unfolding process are taken.
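As a cartoon of that incremental GC, here is a toy TypeScript sketch (class names, state values and the xor "hash" are all invented; this illustrates the comment, not actual black-hole physics): the isolated region is just a bag of frozen states whose count tracks its boundary area, and each GC tick destroys one state at random and feeds a hash of it back into a global entropy pool.

```typescript
class IsolatedRegion {
  constructor(public frozenStates: number[]) {}

  get surfaceArea(): number {
    // N is "proportional to the surface area", so just use the count.
    return this.frozenStates.length;
  }

  // One incremental GC step: pick a random frozen state, drop it, and return
  // the bit of entropy it radiates back into the rest of the simulation.
  evaporateOne(): number | undefined {
    if (this.frozenStates.length === 0) return undefined;
    const i = Math.floor(Math.random() * this.frozenStates.length);
    const [state] = this.frozenStates.splice(i, 1);
    return state ^ 0x9e3779b9; // stand-in for hashing the destroyed state
  }
}

// Drive the GC until the region has fully evaporated, seeding an entropy pool.
const region = new IsolatedRegion([11, 22, 33, 44, 55]);
const entropyPool: number[] = [];
while (region.surfaceArea > 0) {
  const bits = region.evaporateOne();
  if (bits !== undefined) entropyPool.push(bits);
}
console.log("region evaporated, entropy pool size:", entropyPool.length);
```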
In a simulation, the beings running it have so much power that there is probably nothing we could do to stop that. (Although if whatever the simulation is running on has connections to the outside, it is theoretically possible we could exploit some bug from within the simulation to make changes outside that make their memory wipe not work... but this is probably extremely unlikely. It would require us to discover a bug, understand its implications, and exploit it before the operators figure out they need to do a reset and wipe.)
But this reminds me of another memory wipe situation I've wondered about. One of the common things in many tales of alien encounters is the aliens messing with the memories of those who interact with them.
Unlike with simulation operators, who are essentially gods to us with omniscience and omnipotence and completely unconstrained by any laws of science or logic that we know, aliens would presumably be constrained. They would not be omniscient and omnipotent--just more advanced than us.
So with aliens we would have hope of combating their memory wiping, or at least detecting it.
For example, if you often drive at night in isolated areas where you might be particularly vulnerable to alien abduction, you could keep some innocuous physical item in your car, such as a book with a bookmark in it, or a Rubik's cube, or a cassette tape of a band you hate. Pick an item that you can put in a certain state that is alterable, and have a standard state you keep it in. The bookmark is always on page 100, the Rubik's cube is always solved, the tape is always rewound.
If you ever see anything strange that even suggests "UFO", you alter the item. Move the bookmark to page 110. Add a couple twists to the cube. Start playing the cassette.
If you remain aware of the possible UFO until it goes away or you figure out what mundane thing it is you are seeing, you fix the item.
If, however, you either remember that you saw something but don't remember what happened, or you don't remember anything weird but find you have lost time, you can check the item and if it is out of its normal state you know that you thought you saw a UFO and now you don't remember it.
The idea here is that the stories of aliens usually include something making much of our technology fail so that we can't record them, and they also probably know enough about us to recognize when someone who sees them writes a note on paper, and so deal with that. But unless they have a way to read minds or extract and interpret memories, they probably won't recognize that, say, starting to listen to music is actually a form of note-taking, and so won't know they need to rewind the cassette before they let you go with your memories wiped.
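The whole protocol is basically a tiny tamper-evident state machine. A toy TypeScript sketch, with invented names, just to make the check-after-lost-time logic explicit:

```typescript
type CanaryState = "baseline" | "altered";

class AbductionCanary {
  private state: CanaryState = "baseline"; // e.g. bookmark on page 100

  sightingSuspected(): void {
    this.state = "altered"; // move the bookmark, twist the cube, play the tape
  }

  sightingExplained(): void {
    this.state = "baseline"; // it was just Venus; put everything back
  }

  // Later, possibly with missing time and no memory of why:
  checkAfterLostTime(): string {
    return this.state === "altered"
      ? "You altered the canary and never restored it: you saw something."
      : "Canary is in its baseline state: nothing flagged.";
  }
}

const canary = new AbductionCanary();
canary.sightingSuspected();  // strange lights over the highway
// ...memory wipe happens here, so sightingExplained() is never called...
console.log(canary.checkAfterLostTime());
```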
So...anyone of you actually do anything like that?
This is the theme of a few episodes of Doctor Who. Characters marked their skin with a pen/Sharpie every time they saw an alien. The aliens automagically wiped the memory of anyone who saw them.
Your link only provides a way to falsify a theory, not provide evidence for simulation.... which is impossible by definition.
Not to mention the idea of a simulated universe is, you know, philosophically boring and implies mostly false things in most people's minds... like an anthropomorphic scientist god. In reality, it would change virtually nothing about how we view our universe.
> not provide evidence for simulation.... which is impossible by definition.
I disagree.
Science cannot prove that a theory is 100% true; it can only show that a theory has not yet been proven wrong, by repeatedly testing the predictions the theory makes. The best we can do is say: this theory (e.g. the general theory of relativity) is the best explanation for the data we have so far, and every experimental prediction it has made has come true.
The paper (https://arxiv.org/pdf/1703.00058.pdf) linked on the wiki page proposes four experiments that seek to test the simulation theory. None of these experiments will single-handedly prove the simulation theory, but if they all pass, we can only keep testing and seeking alternate explanations.
If the experiments keep passing and no alternative theories can be found, then we're either in a simulation, or the universe just happens to behave exactly in the way that a simulated universe would, but it isn't. I agree that we can never truly know which is true, but at that point the difference is reduced to semantics. It's like saying electrons don't really exist, instead they're just wave/particles that are exactly like electrons in every way except for some immeasurable quality.
This assumes the entity running the sim will never directly influence it; of course, if that ever happens (requiring a _very_ high burden of proof), it'll be proof of the sim.
Dunno about you but if I were pentesting a machine and found out it was a VM, then maybe I'd try to get a foothold into the hypervisor, host, or sibling VMs. Not boring.