## Reviews and Relativity

In 2002, Stephen Wolfram published his book, A New Kind of Science. About a month later, Scott Aaronson published a review of it which included a proof intended to demonstrate that the kind of discrete, deterministic universe Wolfram described was a scientific impossibility. I only just read this review, which makes me rather late to the party.

I like Aaronson’s review a lot, not because of what it has to say about NKS, but because of the proof it contains. This proof, in my opinion, is one of those rare, wonderful moments in which a scientist with relatively mainstream views takes the time to refute a position in digital physics in a precise fashion. Out of such moments, stronger theories are made.

For those who’re interested, the review can be found here. I encourage anyone curious about this topic to take a look–particularly at Section 3.2.

For those who aren’t inspired to take a look, the gist of the proof is this: Any model that incorporates both quantum entanglement and special relativity is going to run into situations in which a measurement B in one reference frame precedes the event A that appears to precipitate it. The same situation must be viewable in other reference frames in which the events appear the other way around. The proof points out that a completely discrete model like the one Wolfram proposes lacks the quantum mechanical tools that usually help us resolve such scenarios. In the discrete case, either event A causes event B, or vice versa.
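The frame-dependence at the heart of this argument is easy to check numerically. Here’s a small sketch (my own illustration, with made-up event coordinates, in units where c = 1) showing that for two spacelike-separated events, a boost in one direction puts B before A, while a boost in the other direction puts A before B:

```python
import math

def boosted_time(t, x, v):
    """Time coordinate of event (t, x) as seen from a frame moving at
    velocity v along x (units where c = 1): t' = gamma * (t - v*x)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Two spacelike-separated events (illustrative coordinates):
# |dx| = 2 > |dt| = 1, so no signal can connect them.
t_A, x_A = 0.0, 0.0
t_B, x_B = 1.0, 2.0

# In a frame boosted one way, B precedes A; boosted the other way, A precedes B.
print(boosted_time(t_B, x_B, +0.8) < boosted_time(t_A, x_A, +0.8))  # True
print(boosted_time(t_B, x_B, -0.8) > boosted_time(t_A, x_A, -0.8))  # True
```

Quantum mechanics is comfortable with this ambiguity for spacelike-separated events; the proof’s point is that a naive discrete causal network is not.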

The proof is important because it’s not specifically directed at Wolfram’s ideas, but rather *all* fully discrete models of physics. What the proof proposes, in essence, is that complete discrete models are fundamentally incompatible with what we see in experimental physics.

I think I know what’s wrong with this proof and I’ll try to make my thinking on the topic clear here. If anyone out there disagrees with what I have to say, I’d be delighted to hear about it. To be honest, my idea of what’s wrong is so simple that I can’t quite believe that nobody else has said it. Quite possibly, there’s something massively obvious that I’m missing. If that’s the case, I can’t wait to learn what it is.

I believe that Aaronson’s proof fails because of the literal requirement of Assertion 2, which states:

> R satisfies the relativity postulate. That is, assuming the causal network approximates a flat Minkowski spacetime at a large enough scale, there are no preferred inertial frames.

I would argue that while the proof may work so long as Assertion 2 is true, there’s no requirement that it hold. This is because we don’t know that spacetime actually conforms to Minkowski space. We only know that whenever we observe objects traveling through space at less than the speed of light, their behavior is consistent with that model.

It’s true that every observation we’ve ever made has been *rigorously, perfectly consistent* with the Minkowski-space model, but we also know from basic philosophy of science–notably the work of Karl Popper–that we can never actually prove that spacetime conforms to it.

To quote the mighty Wikipedia on Popper’s work:

> Logically, no number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample is logically decisive: it shows the theory, from which the implication is derived, to be false.

In other words, we can never prove that something is true–only that it’s false. This concept is important here because spacetime is a bit like dark matter–we can never measure it directly. We can only ever measure the motion of particles traveling through it. I would argue that this changes the requirements for a working model of digital physics. Namely, the requirement becomes that *particles within our model must always travel in a Lorentz-invariant fashion*.

This distinction is key because if we can create other models of spacetime for which Lorentz-invariant motion always holds, but for which discretization works properly, then Aaronson’s proof fails for that case.

Are there such models? Doesn’t Special Relativity *require* Minkowski space? So far as I understand the topic, yes there are such models, and no, Relativity doesn’t need it. For an alternative model that I can’t find a problem with, all we need to do is a little algebra.

Here is the expression that defines the properties of Minkowski space, in units where the speed of light is 1:

s^2 = t^2 - x^2 - y^2 - z^2

To get something a little nicer, let’s just get rid of those pesky minus signs by moving our spatial axes to the other side of the equation. Then we get this:

t^2 = s^2 + x^2 + y^2 + z^2

Suddenly we have something that’s flat and local. But what does it mean in practice? It means that we need a simulation with an extra compact dimension, in addition to the three we’re used to looking at, that codes for the spacetime interval *s*. Motion in this compact dimension operates as a measure of the ‘subjective time’ that a particle experiences. With each iteration, particles travel at fixed velocity in some direction that combines motion in s, x, y and z. Simulation steps are then ordered along the axis *t*, which we might think of as ‘objective time’. I have a video of particles traveling this way on the web, and which I’ve mentioned in a previous post. You can find it here.
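To make the stepping rule concrete, here’s a minimal sketch of the kind of motion described above (my own toy version, not code from the actual simulation): every particle moves at total speed 1, and whatever speed is left over from its spatial velocity goes into the compact dimension *s*:

```python
import math

def step(particle, dt=1.0):
    """Advance one tick of objective time t. Every particle moves at total
    speed 1 (the speed of light); its spatial velocity (vx, vy, vz) is
    fixed, and the leftover speed goes into the compact dimension s, which
    accumulates the particle's subjective time."""
    x, y, z, s = particle["pos"]
    vx, vy, vz = particle["vel"]
    v2 = vx * vx + vy * vy + vz * vz       # squared spatial speed
    vs = math.sqrt(1.0 - v2)               # remaining speed goes into s
    particle["pos"] = (x + vx * dt, y + vy * dt, z + vz * dt, s + vs * dt)

# A particle at rest spends all its speed on s: subjective time == objective time.
at_rest = {"pos": (0.0, 0.0, 0.0, 0.0), "vel": (0.0, 0.0, 0.0)}
# A particle moving at 0.6c accumulates s more slowly: ordinary time dilation.
moving = {"pos": (0.0, 0.0, 0.0, 0.0), "vel": (0.6, 0.0, 0.0)}

for _ in range(10):
    step(at_rest)
    step(moving)

print(at_rest["pos"][3])  # 10.0
print(moving["pos"][3])   # ~8.0, i.e. sqrt(1 - 0.6**2) = 1/gamma per tick
```

Note that the familiar time-dilation factor 1/γ isn’t put in by hand; it falls out of the requirement that total speed is fixed at 1.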

“But,” I hear you say, “that doesn’t look like Special Relativity, for a start, there’s a preferred frame of reference–namely the one through which we’re viewing the simulation”. Yes, it’s true that from outside the simulation, there’s a preferred frame, but *there isn’t one when viewed from inside*. Different reference frames are manifested as different angles with respect to the compact dimension, and motion in each direction is exactly the same. From within the simulation, measurements are completely consistent with the Minkowski-space model because the math governing them is identical.

“But what about Lorentz boosts?” you may ask. “What about Lorentz contraction? How come just one extra dimension is necessary? Don’t you need three?” Only one dimension is necessary because we know that, to all intents and purposes, particles are point-like. Particles without extent don’t experience Lorentz contraction. All of the physical properties that we observe of them emerge from their subjective experience of time.

Using this model starts making a difference when we get to the line in Aaronson’s proof at the bottom of page 10.

> Then for all Z we require the following, based on what observers in different inertial frames could perceive:

This line and those that follow presuppose that in our discrete model, what an observer perceives as simultaneous *is* actually simultaneous. In other words, there is some discrete link directly connecting cause and effect. This is true in the Minkowski-space approximation, but in our compact-dimension model, it’s not. An observer perceives two events as simultaneous simply because the light from those events reaches him at the same time with respect to the objective time axis *t*.
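That claim is easy to check with a little arithmetic. In this sketch (again my own illustration, in units where c = 1), two flashes go off at the same objective time, and whether an observer receives them simultaneously depends only on his motion:

```python
# Two flashes go off at objective time t = 0, at x = -1 and x = +1.
# When does light from each reach an observer who starts at the origin
# and moves along x at velocity v?

def arrival_time(flash_x, v):
    """Objective time t at which light from a flash at (t=0, flash_x)
    meets an observer whose worldline is x(t) = v * t."""
    if flash_x < 0:
        # light front moves right: flash_x + t = v * t
        return -flash_x / (1.0 - v)
    # light front moves left: flash_x - t = v * t
    return flash_x / (1.0 + v)

# A stationary observer receives both flashes at t = 1: "simultaneous".
print(arrival_time(-1.0, 0.0), arrival_time(1.0, 0.0))  # 1.0 1.0
# An observer moving at v = 0.5 receives the right-hand flash first.
print(arrival_time(-1.0, 0.5), arrival_time(1.0, 0.5))  # 2.0 0.666...
```

Both observers pass through the same point at t = 0, yet only one of them judges the flashes simultaneous–and nothing in the underlying simulation had to change to produce that disagreement.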

What this means for examples such as the one Aaronson raises is that from outside our discrete simulation, we always know exactly when a particle interaction occurs, even if observers within the simulation may never be able to agree. It doesn’t matter that in some reference frames effect B appears to precede cause A, because the perceived ordering of events no longer constrains the order in which the controlling simulation actually processes them.

One of my current projects is a simulation that will hopefully make this point absolutely clear. I intend to track the subjective experiences of a large number of pseudo-particles traveling across a discrete space approximation that uses an extra compact dimension of the sort I describe. It is my belief that by constructing a secondary graph from the set of their subjective-time paths, it should be possible to obtain a causal set graph that approximates Minkowski space. Tools to measure the properties of such graphs have been developed by theorists working in Causal Set theory. By applying those tools, it should be possible to confirm that the experience of Special Relativity in a discrete simulation doesn’t require that the supporting graph mimic Minkowski-space directly.
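As a sketch of what such a construction might look like (a toy 1+1-dimensional version of my own, not the project code), the causal relation can be read straight off the light-cone condition:

```python
def causal_edges(events):
    """The causal relation on a set of events (t, x) in 1+1 dimensions
    (units where c = 1): event i precedes event j when j lies inside or
    on i's future light cone."""
    edges = set()
    for i, (t1, x1) in enumerate(events):
        for j, (t2, x2) in enumerate(events):
            if t2 > t1 and abs(x2 - x1) <= t2 - t1:
                edges.add((i, j))
    return edges

# A few hand-picked events to relate.
events = [(0.0, 0.0), (1.0, 0.5), (1.0, 2.0)]
edges = causal_edges(events)
print((0, 1) in edges)  # True: event 1 is inside event 0's light cone
print((0, 2) in edges)  # False: spacelike-separated from event 0
```

The real project would record events along the simulated particles’ subjective-time paths rather than hand-picking them, and then apply the Causal Set theorists’ measurement tools to the resulting graph.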

This still leaves us with the topic of how exactly to encode quantum entanglement in a fully discrete system, as Aaronson’s proof relates as much to this topic as it does to relativity. This topic, though, is perhaps one for another post. However, it is worth stating that modeling entanglement in its most basic form appears to be extremely straightforward. The models I’ve built so far use something rather like the ‘long-range thread’ approach that Wolfram describes in his book, and it appears to work fine. Encouraging a particle to collapse into one of two spatially disjoint positions is easy in discrete models–the Jellyfish algorithm I’ve described in previous posts revealed this behavior on its own without any coaxing from me.

Ironically, the trickiest problem I’ve encountered in this area isn’t entanglement, but the encoding of information in geometric form. In order to create a working Bell Inequality simulation, we have to be able to simulate particle orientation and have two particles that retain their orientation in a coordinated way that is linked to the shared particle state we wish to collapse. This turns out to be tricky–particularly at the tiny scales at which my simulations run. It may be that there are better ways to manage Bell’s Inequality than the tools I’m currently using. Dan Miller, who also posts on this blog, has some interesting ideas in this arena which he will hopefully share in a later post.

To conclude, let me say that there is one old saying with which I ferociously disagree, and it is this: *better to keep your mouth shut and have others think you’re a fool than to open it and remove all doubt*. This sentiment negates learning. If you think I’ve illustrated ignorance or folly in this posting, call me on it. If you believe in science, this is your opportunity to share what you know with a willing audience. From me, you will hear only thanks.

Scott Aaronson’s lecture series on quantum computing has a section where he outlines his feelings (as of 2006, a few years after his NKS review) about so-called ‘hidden variable’ theories, of which I believe our discrete models are necessarily a subset:

http://www.scottaaronson.com/democritus/lec11.html

Spoiler: he comes out in favor of them! Or at least, he seems to feel they are useful abstractions, consistent with ‘real physics’, and he doesn’t quite get why ‘real physicists’ hold them in such low regard.