## A Little Background

In this blog, we’ve talked a lot about particles, relativity, quantum mechanics, and even the reason for the universe itself. One important topic that I haven’t yet covered is *spacetime*. Where does it come from, and why does it take the form that it does? Any Grand Unified Theory that we’d like to propose can’t just satisfy itself with describing the matter and energy that make up the things we see. It also has to explain how the gaps between those things come to be there. In other words, it needs to be ‘background independent’.

This feature has also been conspicuously absent from all of the research I’ve shared so far. In each case I’ve outlined, I’ve simulated space by sprinkling dots onto a preexisting smooth surface and hooking them up to those nearby. This isn’t good enough. In fact, it avoids one of the hardest problems of the lot, and the physics community knows this. If you look at any of the most promising research on discrete approaches, the main focus is on the structure of spacetime itself and how it changes. From that, it’s felt, everything else can spring.

People have had mixed success in this regard. There’s loop quantum gravity, which has been a relatively successful physical theory. However, at least as I understand it, it presupposes structures that have the four dimensions of spacetime we expect.

There’s the theory of causal sets, which starts with nothing but the idea of a partial order, and which can derive something roughly spacetime-like from it. However, reconciling it with quantum mechanics has proven tricky.

Then there’s causal dynamical triangulation, which has successfully assembled spacetime-like structures out of very simple raw ingredients. However, those ingredients once again have an implicit four-dimensionality built in at the smallest scales.

Do I have a model of spacetime to share with you that’s better than any of these? No. Categorically not. As with all of my research, I’m deliberately not trying to do physics directly. Instead, my goal is simply to illustrate ways that discrete techniques might make solving thorny physics problems easier, and to add to the theoretical toolkit with tricks from computer science.

What I *do* have is a way of building large, irregular networks from scratch that behave like smooth spatial surfaces, while using no geometrical information whatsoever. I’m going to share it with you over a sequence of posts. You’ll have to assess for yourselves whether you think it’s a good fit for nature.

As a starting point, let’s look at a simplified version of the problem. We’ll forget about time, and concentrate on only a single dimension of space.

Imagine you’re playing a party game with fifty friends. The aim is to use your cellphones to form an invisible circle. When the circle is finished, you’ll be able to call Alice. Alice will then be able to call Bob. Bob can call Cindy, and so on. At the end of the chain, Zachary can call you and tell you what message he received. The message will have gone through everyone in turn.

Each person is allowed to store the numbers of up to two friends on their phone. They can swap those numbers for others by calling one of their contacts, asking ‘who do you know?’, and picking which numbers to keep or discard. They can also say to someone, ‘you’re my friend now’. They can’t say anything else, or rank the contacts they receive by name or number.

At the start of the game, the numbers in everyone’s phones are random. *How do they organize themselves into a chain?*

As party organizer, you have one extra perk you can use if you want to. You can add people to the party one at a time if you like. If you decide to do that, people will receive their random phone numbers when they join, and the numbers they receive will always be for people who are already at the party.
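To make the rules concrete, here’s a minimal sketch of the setup in Python. It’s my own illustration: the names `Phone`, `who_do_you_know`, `youre_my_friend_now`, and `build_party` are invented for this post. It models the phones, the two permitted moves, and the organizer’s one-at-a-time joining perk, but deliberately stops short of solving the puzzle:

```python
import random

class Phone:
    """A partygoer's phone: it stores the numbers of at most two contacts."""
    MAX_CONTACTS = 2

    def __init__(self, number):
        self.number = number
        self.contacts = set()

    def who_do_you_know(self):
        # The only question a caller may ask: hand over your contact list.
        return set(self.contacts)

    def youre_my_friend_now(self, caller_number):
        # The only statement a caller may make; keep the number if there's room.
        if caller_number != self.number and len(self.contacts) < self.MAX_CONTACTS:
            self.contacts.add(caller_number)

def build_party(n, seed=None):
    """Add people one at a time. Each newcomer receives up to two random
    numbers, always belonging to people already at the party."""
    rng = random.Random(seed)
    phones = []
    for i in range(n):
        phone = Phone(i)
        if phones:
            for earlier in rng.sample(phones, min(2, len(phones))):
                phone.contacts.add(earlier.number)
        phones.append(phone)
    return phones

party = build_party(50, seed=1)
```

Forming the circle then means rearranging those contact sets, using nothing but `who_do_you_know` and `youre_my_friend_now`, until following the links visits all fifty phones and returns to the start.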

Any ideas?

## Reflections on Waves

In my recent post series, Making Waves (starting here), I outlined a very simple system for duplicating the kinds of effects seen in the Double Slit experiment, which Richard Feynman famously described as *“the only real mystery in quantum mechanics”*. The approach I used was completely discrete, and one for which pseudo-random numbers will happily suffice instead of the ‘pure randomness’ that’s often stated as a prerequisite for any QM model.

In the wake of these posts, I decided that it was only appropriate to talk a little about the limitations of the approach I outlined, and also to address some of the questions or yes-buts that I imagine some readers may have.

First, the limitations.

**Relativity:** It’s not that hard to come up with different interpretations of QM, so long as you don’t have to worry about reconciling them with relativity. Any causal set enthusiasts looking over my work might well point out that my spatial model isn’t Lorentz invariant, and is therefore hard to take seriously. As it stands, this observation is absolutely right. And we can go further. In Scott Aaronson’s review of *A New Kind of Science*, which I have mentioned in previous posts, he points out that a network-based approach to QM simply won’t work with a discrete model of spacetime, if we respect the Minkowski metric in that model. Fortunately, as I’ve outlined in previous posts, we simply don’t have to use that metric. Using causal sets to describe spacetime is a nice approach with lots of potential, but by no means a necessity. So while the model I’ve mentioned here is limited, future posts will show at least one way it can be extended.

**Bell inequality violation:** The particle I use here doesn’t have any properties as sophisticated as spin. It’s pretty clear, then, that as it stands, we wouldn’t be able to extrapolate it to that most marvelous demonstration of quantum effects at work: Bell’s experiment. However, the reason for that is a little different from the one that makes most models fall at this hurdle. Usually, the problem lies in getting around the limits imposed by locality. With a network-based approach, non-locality doesn’t present a problem. However, making particles with persistent orientation is harder. While I’ve been able to produce such particles, they still have limitations and don’t yet follow all paths.

**Scale:** The algorithm I described in the last post isn’t among the world’s most efficient, and it’s hard to imagine it replacing lattice QCD any time soon as the simulation engine of choice. So while the implications for QM may be interesting, it’s hard to scale the approach up enough to show what it’s really capable of. This means that the results I get are going to be noisy and incompletely convincing unless someone happens to have a whole bunch of supercomputer time that they’re giving away. This is something I’m prepared to live with.

And now, some yes-buts.

**Randomness:** People are fond of saying that QM is random, and therefore that exploring an algorithmic approach such as the one I’ve shown doesn’t make sense at some fundamental level. However, this statement is just wrong. You can know that a variable is unpredictable, but you can *never* know that it’s random, unless you have an infinite amount of computing power with which to prove it. So long as you have finite computing power, the variable you’re considering may simply be the output of a computing machine that has one bit more reasoning power than yours does. Thus you can say that it’s *effectively* random from your perspective, but no more. And when considering a universal algorithm, it’s completely acceptable to propose algorithms that use the entire state of the universe at any one iteration step to calculate the next. Thus, unless you’re outside the universe, you’d have no way to predict the behavior of even a single atom.
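To make the distinction concrete, here’s a toy illustration in Python (my own, and not part of any physical model): a generator that is completely deterministic, yet whose output an observer without the seed has no practical means of distinguishing from ‘true’ randomness:

```python
import hashlib

def deterministic_bits(seed, n):
    """A fully deterministic bit stream. To anyone who lacks the seed,
    the bits look unpredictable -- effectively random -- yet no finite
    amount of observation can prove they aren't the output of a machine."""
    bits = []
    for i in range(n):
        # Hash the seed together with a counter and keep one bit per step.
        digest = hashlib.sha256(f"{seed}:{i}".encode()).digest()
        bits.append(digest[0] & 1)
    return bits

# The same seed always reproduces the same 'random-looking' stream.
stream = deterministic_bits("the-state-of-the-universe", 16)
```

An observer armed with the seed can reproduce the stream exactly; one without it can only call the bits effectively random, which is precisely the asymmetry the argument above turns on.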

What a theoretical model can do is *assert* that quantum events are random, even when no proof can ever be supplied, which is what we currently do. I confess that I’m not a big fan of faith-based approaches, when it comes to randomness or anything else.

**Efficiency:** In Seth Lloyd’s eminently readable pop-science book, *Programming the Universe*, he suggests that the universe is a quantum computer computing itself. Why not an ordinary computer, given that the set of problems that can be solved by both types of machine is exactly the same? Because quantum computers are massively more efficient. To his mind, it doesn’t make sense to consider nature as an ordinary computation, because achieving what nature does takes ordinary computational models huge swathes of time.

However, when considering algorithms that potentially run the universe, and through which *all* reference frames are determined, I would propose that efficiency is irrelevant. To care about the efficiency of such an algorithm is to tacitly propose that someone is making a design choice about the universe, which seems like a ridiculous assertion to me. The reason to pursue a computational model of nature is that it presents a more concrete, more reductionist, and more scientific view of how the universe operates. Not less. We don’t need someone to have designed the universe to justify digital physics any more than a continuum theory requires that someone be running the universe on a large array of valves and rheostats.

**Usefulness:** The reaction to the digital physics approach to QM that I have the most respect for is the experimentalist shrug. It’s completely fair to say at this point that the kind of algorithm I’ve outlined is far less useful as a scientific tool than what is currently being used. It’s also fair to say that experimental evidence for discrete spacetime is scant and elusive. So long as these things remain true, I see no reason for most physicists to alter their approach in any way.

However, I have two caveats. First, those theorists considering Theories of Everything have no excuse not to consider discrete models. The set of physical systems that can be described by them is very much larger than the set that is conveniently differentiable. To assume that the universe lies in the differentiable set is rather like the man who looks for his car keys in the study rather than the street, because the light is better indoors. Such attitudes are particularly indefensible when, rather than considering systems of minimal complexity, we are instead expected to suspend disbelief about parallel universes, hidden dimensions, and tiny vibrating strings with no width.

The second caveat is that I suspect the game is about to change. The coming era of quantum computation will test our understanding of QM more thoroughly than anything that has come before, and I will be heartily surprised if nothing unexpected comes with it. While digital physics represents a philosophical distraction now, I very much doubt that the same will be true in a hundred years.