## Hello again, cubic symmetry, and simulations

Hello all. It’s been a while since I’ve posted anything on this blog. My life has been in flux of late, as I’ve been moving to Princeton, NJ, changing jobs, and having a baby all at the same time. Now that things are starting to settle, it should be a lot easier for me to find time to write.

With that in mind, here’s my take on an article that several people forwarded to me during my break–the result from Silas Beane at the University of Bonn that claims to have something to say on the subject of the simulated universe. The arXiv blog, as usual, has a good write-up.

The gist of the research is this: if the universe is running in a simulation on a cubic lattice, in much the way that current quantum chromodynamics simulations are calculated, then there should be experimentally observable consequences. Beane and his team identify two: the anisotropic distribution of cosmic rays (different amounts of rays in different directions), and a cut-off in the energy of cosmic ray particles. This article generated some excitement because the cut-off matches a phenomenon that’s already been observed.

A great moment for digital physics, right? I’m not convinced. I have a few concerns about this work. For starters, as I have discussed on this blog, there are a huge number of ways of building discrete universe models, of which a 3D lattice is only one. That simulation style has significant limitations, which, while not insurmountable, certainly make it a tough fit for many observed physical effects, such as relativity and spatial expansion.

Furthermore, in order to make their predictions, Beane and his associates simulated at a tiny scale. This is convenient because you only have to consider a single reference frame, and can treat space as a static backdrop for events. In other words, it’s pretty clear that the main problems with regular lattice simulations are things that their research didn’t touch.

I would find it *astonishing*, therefore, if we discovered the predicted cosmic ray anisotropy. And this brings me on to my second major concern. People, upon finding no irregularity in the cosmic ray distribution, are then likely to think, “gosh, well the universe was isotropic after all, I guess we’re not in a simulation.”

Except, let’s recall, experiments have *already seen* the expected energetic cut-off. In other words, the cosmic ray observations we see are perfectly consistent with a universe that’s discrete, but also isotropic–irregular, like a network. This, perhaps, shouldn’t come as a surprise.

Then, there is my third concern, and this reflects the interpretation imposed on this result. Namely, that a universe that turns out to run on an algorithm must somehow be a simulation running on a computer elsewhere. This, as I’ve also mentioned in previous posts, is just plain wrong.

Algorithms, like equations, are tools we use to build models. One does not have primacy over the other. One is not more natural than the other. A universe that turns out to be algorithmic no more requires a computer to run on than a universe based on differential equations needs a system of valves. The main difference between algorithms and equations is that you can describe a *vastly larger* set of systems with algorithms. Equations are nice because, once you’ve figured them out, you can do lots of nifty reasoning. However, the number of possible systems that are amenable to this treatment is vanishingly small, compared to the systems that are not.

Most physicists want the universe to turn out to be completely describable with equations, because it would make life a lot easier for everyone. It’s a nice thing to hope for. It’s just that given the set of options available, it’s not terribly likely.

## The trouble with symmetry

One of the greatest advances in theoretical particle physics in the 20th century is Noether’s theorem. If you’ve never heard of it, you’re not alone. It’s an achievement that seldom makes it into popular titles, despite the fact that it’s arguably the greatest single achievement of mathematical physics that’s ever been made. It was conceived of by one of the unsung heroes of the field–Emmy Noether, probably the greatest woman mathematician who ever lived.

What Noether’s theorem tells us is that for every symmetry of a physical system, there is a conserved quantity, and vice-versa. The conservation of energy, for instance, corresponds to symmetry in time. Conservation of momentum corresponds to the symmetry due to translation through space. What Noether’s theorem essentially tells us is that when you’re trying to build a working theory of physics, what really counts are the symmetries. Nail the symmetries and you’ve essentially nailed the problem.
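To make the momentum case concrete, here’s a toy sketch of my own (an illustration, not Noether’s proof): two particles whose interaction depends only on their separation, so nothing changes if you shift both positions together–a translation symmetry. Integrate the motion and the corresponding conserved quantity, total momentum, stays put.

```python
# Two particles joined by a spring. The force depends only on the
# separation x1 - x2, so the system is translation-symmetric, and
# Noether's theorem says total momentum should be conserved.
def simulate(steps=10_000, dt=1e-3):
    m1, m2 = 1.0, 2.0
    x1, x2 = 0.0, 1.0
    v1, v2 = 0.3, -0.1
    for _ in range(steps):
        f = -(x1 - x2)          # force on particle 1; particle 2 feels -f
        v1 += f / m1 * dt
        v2 += -f / m2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return m1 * v1 + m2 * v2    # total momentum after the run

p0 = 1.0 * 0.3 + 2.0 * -0.1     # initial total momentum = 0.1
assert abs(simulate() - p0) < 1e-9
```

Break the symmetry–say, by adding a force that depends on absolute position–and the conservation law breaks with it, exactly as the theorem warns.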

The problem for digital physics is that Noether’s theorem specifically relates to *differentiable* symmetries. In other words, ones that change smoothly. For symmetries that don’t change smoothly, all bets are off. This means that anyone trying to use a discrete, computational system to model physics is hamstrung right out of the gate.

In order to bridge this gulf, it seems to me that you need some way of describing computation in terms of symmetries, or symmetries in terms of computation. Either way, you need some nice formal way of putting the two notions on the same footing so that a meaningful, discretized version of Noether’s theorem can be derived. In other words, you need some kind of super-math that slides right in there between calculus and the theory of computation.

Though the link may not yet be obvious, this was where I was going with my recent post on Simplicity and Turing machines. But what does simplicity have to do with symmetry? Plenty, I suspect. I propose that we try to bridge the gulf between symmetry and computation with an idea that has elements of both: the idea of a *partial symmetry*.

But what is a *partial symmetry*? This terminology doesn’t exist anywhere in math or physics. And what does it even mean? Either something is symmetric or it’s not. In truth, partial symmetry is something I made up, inspired by the reading I was doing on partial orders. And it’s a notion I’m still ironing the bugs out of. It works like this:

Any time you have a system that displays a symmetry, there is informational redundancy in it. Because there is redundancy in it, you can look at that system as the outcome of some sequence of copying operations applied to an initial seed from which redundancy is missing. Consider a clock face. We can treat the clock as a shape that happens to have twelve-fold symmetry, or we can think of it as a segment for describing a single hour that’s been replicated twelve times. This isn’t how we normally think about symmetry, but in spirit it’s not that far from a more familiar idea that mathematicians use called a group action.
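Here’s a minimal sketch of that view of the clock face (my own illustration, using complex numbers for points): build the face by replicating a one-hour seed segment twelve times, then check that rotating the whole face by one step maps it onto itself–the usual statement of the symmetry.

```python
import cmath
import math

# A "seed" segment: a few points covering a single hour of the clock face.
seed = [cmath.rect(1.0, a) for a in (0.0, 0.1, 0.2)]

# The copying operation: rotate by k twelfths of a full turn.
def rotate(points, k):
    w = cmath.exp(2j * math.pi * k / 12)
    return [w * p for p in points]

# The full clock face is the seed replicated twelve times...
face = [p for k in range(12) for p in rotate(seed, k)]

# ...and rotating the whole face by one step maps it onto itself:
# every point lands (to rounding error) on an existing point.
def same_set(a, b, tol=1e-9):
    return all(any(abs(p - q) < tol for q in b) for p in a)

assert same_set(rotate(face, 1), face)
```

This is the group-action picture in miniature: the symmetry of the finished object and the copying operation that generates it are two descriptions of the same thing.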

However, if your copying operation doesn’t preserve all the information in the initial seed, you don’t have a full symmetry. Consider what happens if, instead of taking those clock segments and lining them up in a circle, you copy and move with each step in such a way that a part of each segment is hidden. You still end up with something that’s got a lot of the properties of a symmetric object, but it’s not fully symmetric. Furthermore, as soon as you do this, the ordering of the sequence of copying operations suddenly matters.
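The difference is easy to show in code (the operations here are my own toy examples): a lossless operation like a cyclic shift can be applied in any order, but the moment the copy hides part of the pattern, the order of operations starts to matter.

```python
# Lossless operation: a cyclic shift of the pattern. Shifts commute.
def shift(seg, k):
    k %= len(seg)
    return seg[k:] + seg[:k]

# Lossy copy operations: duplicate the pattern, but part of each
# copy is hidden, like the overlapping clock segments.
def copy_drop_last(seg):
    return seg + seg[:-1]

def copy_drop_first(seg):
    return seg + seg[1:]

seed = ["a", "b", "c"]

# Full symmetry: the order of lossless operations is irrelevant.
assert shift(shift(seed, 1), 2) == shift(shift(seed, 2), 1)

# Partial symmetry: swapping the order of lossy copies changes the result.
ab = copy_drop_last(copy_drop_first(seed))
ba = copy_drop_first(copy_drop_last(seed))
assert ab != ba
```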

My proposal is that partial symmetry is equivalent to computation. And that armed with this idea, we can start to look at the symmetries that appear in nature in a new light. That might sound like a bit of a stretch, but in later posts I’m going to try to show you how it works.

## Simplicity and Turing Machines

I have an exciting result that I want to share with you. However, in order to get there, I’m going to have to take this blog in a slightly more abstract direction than it’s been going recently.

‘More abstract?’ I hear you ask. What could be more abstract than discussing whether the fundamental nature of reality is based on simple algorithms? The answer is: discussing the nature of simplicity itself. Is such an abstruse subject still relevant to physics? Undoubtedly. I believe that it holds the key to resolving the long-standing difference in perspective between computer scientists and physicists.

I have been to several painful conference discussions in which physicists at one end, and computer scientists at the other, debate about the nature of reality. The computer scientists proclaim that nature must be discrete, and that Ockham’s razor supports their reasoning. The physicists look at them blankly and tell them that Ockham’s razor is a tool for building models from experimental data, and represents a heuristic to guide reasoning–nothing more. Neither side can apparently fathom the other. Filling the panels with luminaries from the highest levels of science seems to only make the problem worse.

It’s my belief that the study of simplicity can potentially provide a language that can unify everything from group theory up to quantum mechanics, and put this battle to bed forever. I will endeavor to show you how.

Discussions in digital physics often revolve around the notion of programs that are ‘simple’, but not much is said about what simplicity actually entails. Computer scientists are very familiar with the notion of complexity, as measured by the way in which the solution to a given problem scales with the size of the problem, but simplicity is something else.

For instance, consider Turing machines. These are idealized models of computation that computer scientists use to model what computers can do. A few years ago, Stephen Wolfram held a competition to prove that a given Turing machine model was capable of universal computation. Why was this model considered interesting? Because it contained fewer components than any other Turing machine for which the same proof had been made.

A Turing machine is a pretty good place to start exploring the idea of simplicity. You have a tape with symbols on it and a machine that can read and write those symbols while sliding the tape forward or backward, based on what it reads. You can build one out of Lego.
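To make this concrete, here’s a toy Turing machine in a few lines of Python. The rule table is my own example: it increments a binary number by walking the head to the right-hand end of the input and then carrying ones leftward.

```python
# A minimal Turing machine: a tape of symbols, a head position, a state,
# and a rule table mapping (state, symbol) -> (write, move, next_state).
def run_turing(tape, rules, state="start", halt="halt", max_steps=10_000):
    cells = {i: s for i, s in enumerate(tape)}
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        sym = cells.get(head, "_")              # "_" marks blank tape
        sym, move, state = rules[(state, sym)]
        cells[head] = sym
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Rules for binary increment: scan right to the end, then carry left.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),    # absorb the carry
    ("carry", "_"): ("1", "L", "done"),    # overflow: grow the number
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_turing("1011", rules))  # 1011 + 1 = 1100
```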

There’s not much to it, but this incredibly simple machine, given enough tape and enough time, can do anything that the most sophisticated computers on Earth can do. And if we ever succeed in building quantum computers, the humble Turing machine will be able to do everything they can do too.

However, when it comes to providing a truly simple model of computation, I propose that the Turing machine doesn’t go far enough. This is because there is hidden information in the Turing machine model that isn’t written in the symbols, or stored in the state of the machine. In fact, for a classic description of a Turing machine, I’m going to propose that there is an infinite amount of information lurking in the machine, even when there are no symbols on the tape and the machine isn’t even running.

The hidden information is hiding in the *structure of the tape*. In order for a Turing machine to operate, the machine has to be able to slide the tape left or right. Unless we know which piece of tape is connected to which other piece, we have no program to run. This problem, I’d propose, infects the theory of information down to its roots. When we discuss the amount of information in a string of binary bits, we consider the number of bits, but not the fact that the bits need to come in a sequence. A bag of marbles colored white or black which can be drawn in any sequence doesn’t hold much information at all.
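The marble example is easy to count. A sequence of n two-colored marbles has 2^n distinguishable states–n bits–while a bag of the same marbles has only n + 1, since all that survives is the count of black marbles. A quick check:

```python
import math

n = 16
# Marbles in a fixed sequence: every ordering is distinguishable.
sequence_states = 2 ** n
# Marbles in a bag, drawn in any order: only the colour count survives.
bag_states = n + 1

print(math.log2(sequence_states))  # 16.0 bits
print(math.log2(bag_states))       # about 4.09 bits
```

The gap between the two is exactly the information carried by the *structure*–the knowledge of which marble comes after which.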

Any truly *simple* model of computation, therefore, needs to contain an explicit description of what’s connected to what. Hence, I’d propose that the simplest unit of machine structure isn’t the *bit*, but the *reference*. In other words, a pointer from one thing to another. You can build bits out of references, but you can’t build references out of bits, unless you presume some mechanism for associating bits that’s essentially identical to references.
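Here’s a small sketch of that asymmetry (the names are mine, for illustration): a ‘bit’ can be nothing more than which of two shared nodes a reference points at, and a sequence of bits is then just a chain of references. Going the other way–gluing bare bits into a sequence–always ends up smuggling references back in.

```python
class Cell:
    """A unit of pure structure: exactly two references,
    one to a payload and one to the next cell."""
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value, self.next = value, next

# A bit is just which of two shared sentinel objects you point at.
ZERO, ONE = object(), object()

def encode(bits):
    """Build a chain of cells from a bit string: bits out of references."""
    head = None
    for ch in reversed(bits):
        head = Cell(ONE if ch == "1" else ZERO, head)
    return head

def decode(head):
    """Walk the references to recover the bit string."""
    out = []
    while head is not None:
        out.append("1" if head.value is ONE else "0")
        head = head.next
    return "".join(out)

assert decode(encode("100101")) == "100101"
```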

Once you start representing computation using references, the structures you come up with suddenly start looking a lot more like the programs for replicating physical experiments that I’ve outlined in previous posts. From a digital physics perspective, this is already useful. However, we can go deeper than that. When we compute using references, something strange and wonderful can happen that I’m still figuring out the implications of. In the next post, I’ll show you what I mean.

## How long is a very fast piece of string?

In his work on special relativity, Einstein outlined the relation between time and distance, and in doing so, changed physics as we know it. In recent posts I’ve outlined a way to rebuild that effect using a discrete network-based approach. However, those posts have avoided addressing one of the most astounding experimental consequences of that theory: Lorentz contraction.

Lorentz contraction, simply put, describes the fact that when an object is travelling fast, it appears to squash up along its direction of motion. This gives rise to the well-known barn paradox, in which a ladder too long for a barn will seemingly fit inside that barn so long as it’s moving quickly enough. (I don’t recommend trying this at home.)

With the kind of discrete system that I described, objects have fixed length, regardless of how fast they’re going. So how can I possibly claim that the essence of special relativity has been captured?

The answer is simple: There is no actual, physical Lorentz contraction.

Am I denying that Lorentz contraction is an observed phenomenon? No. Do I contest the fact that it can be experimentally demonstrated to exist? Absolutely not. It happens, without a doubt, but what I’m proposing is that, in reality, Lorentz contraction has everything to do with time, and nothing at all to do with length.

Far from being a wild and implausible conjecture, this idea is actually a necessary consequence of other things we know about nature. For starters, that physical particles are observed to be point-like. At the scales that experiments have been able to probe, particles don’t behave as if they have width. And if particles don’t have width, at least of a kind that we recognize, how can they possibly be compressing? The answer is, they can’t.

So where does the Lorentz contraction we observe in experiment come from? It comes from synchronization. Or, to state the case more exactly, from the relationship between objects where their relative velocity is mediated by messages being passed between those objects.

Consider a fleet of starships readying to take part in a display of fancy close-formation flying. They all start at rest somewhere near the moon, each at a carefully judged distance from each other. Then, the lead pilot of the formation begins to accelerate and the others pull away with him to keep the formation in step. Because the formation is tricky to maintain near the speed of light, the ships use lasers to assess their relative distances. They measure how long it takes for each laser ping to return from a neighbor and use that to adjust their velocity.

Should we expect the fixed formation of starships to exhibit Lorentz contraction just like every other fast moving object? Of course we should, whether the ships are inches apart, or separated by distances wider than the solar system. Should it make a difference if the starships are tiny, and piloted by intelligent bacteria? Or even of zero length? Not at all.

So, in other words, the size of the starships themselves is irrelevant. It’s the testing of the distances between them using lasers that makes the difference. And this, of course, is what particles do. They exchange force-carrying particles to determine how far away they want to be from each other.
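The starship picture can be checked numerically with nothing but standard special-relativity bookkeeping (units with c = 1; the specific numbers are illustrative). If the ships hold whatever spacing makes their own, time-dilated clocks read the same laser round-trip delay as they did at rest, the lab-frame spacing that satisfies them is exactly the Lorentz-contracted one:

```python
import math

c = 1.0
v = 0.6                      # fleet speed in the lab frame
d0 = 1.0                     # spacing the ships held at rest
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

def lab_round_trip(d, v):
    """Lab-frame time for a ping: rear ship to front ship and back.
    Outbound, the light closes at c - v; returning, at c + v."""
    return d / (c - v) + d / (c + v)

target = 2 * d0 / c                      # round-trip reading at rest
d = d0 / gamma                           # the contracted spacing
measured = lab_round_trip(d, v) / gamma  # ship clocks run slow by gamma
assert abs(measured - target) < 1e-12
```

In other words, no ruler ever gets squashed: holding the *timing* fixed forces the spacing to come out shorter by exactly 1/gamma.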

What *does* change, depending on how fast you’re going relative to someone else, is the wavelength of light that you see coming from other people. Things moving toward you look bluer. Things moving away turn red. And the good news is that wavelength isn’t the same thing as length.

To illustrate this, try to build yourself a mental cartoon of a particle emitting a photon. The particle looks like a bowling ball. The photon looks like a length of rucked carpet being spat out the side of it. The length of the carpet sample determines the wavelength of the light. Now take your cartoon, wind it back to the start, and run it forward very slowly. At first, the carpet will be just sticking out of the bowling ball. A few frames onward you can see how rucked and wiggled it is. A few frames after that and the carpet is all the way out and flying on its way, maybe with a tiny Aladdin on it.

We can play this little sequence out because the carpet sample has *physical extent*. This means that carpet-emission isn’t a single event–it’s a sequence. And this will be true for any model that we build for photon emission that gives wavelength physical meaning.

This leaves us with the realization that one of the following two statements must be true:

- Photon emission is instantaneous. Therefore particle wavelength doesn’t involve physical length. Therefore we need an extra mechanism to explain why wavelength should be affected by Lorentz contraction.
- Photon emission requires time. Therefore particle wavelength is real. Therefore it’s possible (perhaps preferable) to model it as a *pair* of events: a start and an end.

By treating the emission of a photon, or any other messenger particle, as a pair of events, our problems with Lorentz contraction evaporate. The timing of the start-of-photon and end-of-photon events is determined by how fast the emitting particle is travelling. Similarly, the perceived wavelength of a photon by a receiving particle is determined by the subjectively experienced delay between the start-of-photon and end-of-photon events. And, voila, temporal effects substitute for spatial ones.
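As a sanity check (again using textbook special-relativity bookkeeping, with c = 1 and illustrative numbers), treating emission as a pair of events really does reproduce the relativistic Doppler factor. Take two emission events a proper time dtau apart, from an emitter approaching a distant receiver:

```python
import math

c, v = 1.0, 0.6          # receiver at rest; emitter approaching at speed v
gamma = 1 / math.sqrt(1 - (v / c) ** 2)
dtau = 1.0               # proper time between start- and end-of-photon events

# In the receiver's frame the events are dt apart in time, and the
# emitter has moved v*dt closer, so the second signal has less
# distance to cover before it arrives.
dt = gamma * dtau
arrival_gap = dt - v * dt / c

# The observed wavelength scales with the gap between arrivals,
# which is the standard Doppler blueshift factor sqrt((1-v)/(1+v)).
ratio = arrival_gap / dtau
assert abs(ratio - math.sqrt((1 - v) / (1 + v))) < 1e-12
```

All the work is being done by event timing; at no point does anything need a length.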

This puts constraints on our choice of simulation model, of course. If we’re going to model photons with start and end events, that’s going to have implications for the kinds of wave implementation we can reasonably use. Fortunately though, the implementation I outlined in my posts on the double-slit experiment will work just fine.

I won’t lie to you and say that everything about this approach is solved. How the timing of photon events of this sort translates into energy is something I still don’t have an answer for. And it’s debatable how useful this way of treating relativity will ever turn out to be. However, I think what this model demonstrates is that when it comes to physics as weird as relativity, it’s worth looking for workable implementations that don’t rely on the mathematical tools we usually use. Their requirements can shed light on assumptions in the theory that we’re often not even aware that we’re making.

## What does it all mean?

In the last few posts, I’ve talked a fair bit about relativity and have struggled to make my thinking on the subject clear enough to read. What that process has revealed to me is that some topics in science are just hard to talk about. In part, that’s because they’re counter-intuitive, but there’s a lot more to it than that. A lot of what’s going on is, I’d propose, social, and says something deeply concerning about how we engage with science.

Open any number of pop-science books that attempt to give you a grand overview of the universe and somewhere near the start there are usually the same two chapters. One of these is on relativity and the other is on quantum mechanics. These chapters are the author’s attempt to explain the ‘wacky’ things that happen in physics. In most cases, the author ends by saying something like, “this might sound incredible, but it’s what we see in experiments, so suck it up”.

And this is usually where real scientific dialog with the public stops. Subsequent chapters in these books are usually short on specifics and relatively thick on prose like “Geoff and I were sitting eating a sandwich, feeling sad, and suddenly it occurred to me that if we ran the same simulation backwards, it would give us the eigenvectors we were looking for, only with the parameters inverted! We raced back to the lab without even finishing our lunch!”

Different books make the break in different places but the effect is usually the same. The physicist in question gives up on trying for an intuitive explanation of what they were doing and resorts to personal drama to try to retain reader interest.

Underpinning this switch is the belief that the only way to really understand the ideas being discussed is to do the math. Without the math, you just can’t get there. At some level, the math *is* the understanding. I take issue with this notion pretty strongly. Not only is it dead wrong. It’s counter-productive. In fact, it’s an angry badger driving a double-decker bus into the side of the temple of science.

Let’s go over some of the problems that this ‘math equals understanding’ approach creates.

First, it causes the public to disengage. People feel that if they aren’t good at math, they’ll never get it. Yet life goes on, so science can’t possibly be relevant to them. And, at the end of the day, this creates funding problems.

Second, and far worse, is that the people who do the math and get the answer right feel like they *have* understood it, even though deep down, it still doesn’t make any sense. They sweep that feeling under the rug and press on but become increasingly defensive when pressed on topics that make them feel uncertain. This just makes the gulf between scientists and everyone else all the wider.

On top of this, attempts to communicate the math, rather than the meaning, to the public end up creating a folk-notion of how physics ‘has to be’. This creates a whole stew of junk reasoning when people try to extend that folk-notion. For instance, in relativity, people are told that you can’t go faster than light because if you did, you’d be travelling backward in time in someone else’s reference frame. This is incredibly, insanely wrong. And it’s just one step from there to “if I go faster than light I go backwards in time”.

Perhaps most horribly of all, this process creates physicists who can’t uncouple the tools they’re used to using from the problems they’re trying to solve. This creates massive blind-spots in the reasoning of some of our brightest and finest researchers, because these people are never tested to see whether they have understood the principles in the absence of the math.

Here’s an example from relativity: “spacetime exhibits Lorentz-invariance”. This might sound fine, until you think about the fact that we can only ever examine spacetime by passing things through it. We have *no idea* what properties spacetime exhibits, because we can never directly test it. All we can know about is the things we can observe. Saying that tests on moving objects yield a pattern of Lorentz invariance is fine, but often, that’s not what’s said.

Here’s another relativity example from my own life. I sat down in a cafe a few years ago with a grad-student in particle physics to talk over some things I wanted to understand. We got on to the subject of using a compact dimension for spacetime interval in the way I outlined in the last post. He pulled a face.

“I don’t think you can do that with just one dimension,” he said. “I think you need three.”

We debated the point for some time, even breaking out some equations on a napkin. In the end, he still wasn’t convinced, though he *couldn’t say why*, or point out a hole in my reasoning. All this despite the fact that his math skills were far in advance of my own.

Why did he make the assertion that he did, even though fifteen minutes of basic logic crunching could have demonstrated otherwise? Because the way relativity is taught makes use of the idea of Lorentz boosts. People use six dimensions to model what’s going on because it makes the math easier. They never just use one dimension for *s*. This fellow, extremely bright and talented though he was, was wedded to his tools.

So where do we go from here? What do we do? If science has a problem, how do we solve it?

I’d propose that all math can ever do is supply a relation between things. “If this is true, then that is true”. Math gives you a way to explore what the implications of an idea are, without ever saying anything about the idea itself, other than whether it’s self-consistent. In essence, math in physics tries to describe how things behave solely in terms of constraints, and without ever trying to provide an implementation. In other words, it deliberately avoids saying what something *means*, and says only what it *does*. This is because meaning, I’d propose, is a property that comes with a choice of specific model.

This is why physics tends to become fuzzy and unsatisfying when it diverges from physical experience. We can describe relativity or quantum mechanics easily using math by defining the constraints on the behavior we see. However, we are used to having a specific model to back our reasoning up–the one provided by intuitive experience of the world. When that model goes away, we lose touch with the implications of our own logic.

Does this mean that we are forced to rely on math for insight at that point, as is commonly proposed? No. In fact, I’d suggest that the reverse is true. This is the point at which we should trust math less than ever. This is because self-consistency is only as good as the conjectures you apply it to. I think it was Bertrand Russell who said that from a false premise you can prove anything. The only way to determine whether our physical premises are correct is to have more than one way of arriving at confidence in their validity. That’s why physical intuition is a vital tool for preventing self-consistent nonsense from creeping into theory.

Hence, instead of just leaning on our analytical crutch, we should strive harder than ever to find metaphors for physical systems that *do* work, and which bring phenomena such as relativity within easy mental reach.

And this, to my mind, is exactly where digital physics can help. Digital physics asserts that we should only consider a physical theory reasonable if we can construct a viable implementation for it. If a system is self-consistent, but non-implementable, then we shouldn’t expect it to match nature, as nature clearly is implemented, by virtue of the fact that we are witnessing it. By requiring concrete implementations, we force ourselves to create metaphors with which to test our understanding.

In other words, if the math leaves us asking the question, ‘what does it all mean?’, then we haven’t done enough digital physics yet.

*Does this mean that any one of the implementations we pick is correct?* No. In fact, the more workable implementations, the better. Digital models are not theories.

*Does it mean that digital physics represent a substitute for mathematical reasoning?* No, of course not. Math lies at the heart of physics. It just can’t exist in a vacuum of understanding.

Digital physics, then, is a *different tool*, through which the set of theoretical models of nature can be tested and understood. It’s a way of ruling out theories that don’t add up even if the math works out. It is, I would propose, the best antidote to Geoff and his half-eaten sandwich that physics has going for it.

## The Ant and the Pipe-Elf

In my last post, I talked about Lorentz invariance. I got some great feedback. (Thank you Keir.) And from that, it seems pretty clear that relativity is not something I can pass over lightly. I’m going to go over the rest of how to capture special relativity in networks as carefully as I can.

Last time, I suggested that you could duplicate relativistic effects by creating a hidden, rolled-up dimension to capture the notion of subjective time. One of the comments I got was that this seemed to imply that time was going round in a tiny loop, which isn’t what we experience. Fair point. What I was aiming to say was that the *act* of traversing the hidden dimension produces the *sensation* of subjective time, not that the hidden direction was actually a compact time axis. A fine-grained distinction, I grant you.

In fact, whichever way you cut it, having to have this little extra dimension isn’t very satisfactory. We’d like to have a way of capturing the experience of subjective time that’s not dependent on it. Not least because creating networks that contain extra compact dimensions is complicated. So how can we do better?

We can do better by making the extra direction *s* be a feature of *particles*, rather than a feature of spacetime itself. In other words, if a particle’s not there, the extra direction isn’t there. And only particles that have mass can create this extra direction.

For those of you familiar with the idea of the Higgs boson, this might sound familiar. For the Higgs field, we imply that there’s a special field everywhere in space, except where a particle happens to be. The gap in that field creates wiggle-room that the particle can use to create the phenomenon of mass. The way we currently understand physics, the mass that’s endowed by the Higgs field has nothing to do with the mass endowed by relativistic effects. But wouldn’t it be nice if we could achieve both kinds of mass with a single mechanism? Maybe we can.

If we’re implying, though, that particles carry the extra direction around with them, how can that possibly work? How can a particle have a dimension inside it? What would that even mean?

It turns out we don’t *need* an extra dimension. We just need the particle to create some wiggle-room, the same as for the Higgs field. We can imagine this by creating a particle *inside* another particle. The way we do this is by creating a relation between the inner particle and the outer one that people don’t usually use in physics, but which is very easy to do with networks.

Let’s call the inner particle the *ant*. The ant is always racing about at fixed speed. The outer particle, we’re going to call the *pipe-elf*. The job of the pipe-elf is to make sure that the ant has something to walk on (some wiggle-room). Whenever the ant reaches the front of the pipe, the pipe-elf builds a new piece of pipe and sticks it on the front so that the ant has somewhere to go.

At each time-step in our simulation, the ant either reaches the front of the pipe, or it does not. If it doesn’t reach the front, the elf has some time on his hands. He can do things like receive phone-calls or clear up the old bits of pipe he’s left lying around. However, while the ant is keeping him busy, doing these things is impossible.

Now, let’s think about the different possible paths the ant can take. If it’s travelling straight down the pipe, the elf will never have any free time. He’s going to be building new pipe-segments as fast as he can. However, if the ant is just racing around and around near the front of the pipe like a hamster on a wheel, the elf can do whatever he likes. He has all the time in the world. In other words, so far as the elf is concerned, he’s either enjoying lots of free time, or moving very fast with none, or something in between.

Let’s call the phone-calls that the elf gets photons, or messenger particles. Let’s call the amount of old pipe left hanging about the relativistic mass of the particle. And let’s say that the ant is the one who’s really in charge. Stopping this particle means you have to find and bump into the ant. When you do that, and only then, you collapse all the elf’s pipe-segments down on top of you. Unless you meet the ant, the pipe sections are like so much smoke. You can walk through them without knowing that they’re there.

This pretty much covers the bases of what we need for special relativity. The set of angles that the ant can walk at *exactly* corresponds to the set of possible directions we might need to cover to model special relativity. The ant is a particle *constrained by its context*, just as for the Higgs field, and so travels on a helical path. The only wacky thing here is the notion that the elf can only interact with the rest of the universe when it’s not building pipe segments. But that nicely covers the relation between velocity and time. And we don’t need a special network for the ant-elf pair to travel around on. A perfectly ordinary spatial network will do.
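The velocity-time relation in the ant-and-elf picture is just the geometry of a helix, and it’s easy to check (my numbers; the ant’s fixed speed plays the role of c). The ant’s motion splits into a forward part and a circling part, and the circling fraction–the elf’s free time–comes out as exactly the time-dilation factor 1/gamma:

```python
import math

c = 1.0          # the ant's fixed walking speed on the pipe surface
v = 0.6          # forward (axial) speed of the pipe through space

# The ant's speed splits into a forward part and a circling part.
circling = math.sqrt(c**2 - v**2)

# The elf is free (building no pipe) in proportion to how much of the
# ant's motion is circling rather than advancing. That fraction is the
# rate at which subjective time passes.
free_fraction = circling / c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
assert abs(free_fraction - 1 / gamma) < 1e-12   # time dilation recovered
```

Straight down the pipe (v = c) gives no free time at all; pure circling (v = 0) gives all of it, which is just a particle at rest experiencing time at full rate.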

Hence, we can imagine a universe filled with lengths of invisible, untouchable pipe arcing through the void, each filled with whizzing ants. Do I think that the universe actually looks this way? No. This isn’t a theory, it’s a model. But what it does give us is the behavior described by special relativity happening against a discrete background, without a hair of Minkowski space in sight.

Not everyone may be cheering just yet, I admit. Anyone familiar with special relativity may in fact be writhing in their chair by now because I haven’t mentioned Lorentz-contraction–the effect that special relativity has on distance. The way that we’re used to thinking about relativity, the length of objects in their direction of travel is affected just as much as the time they experience.

But this omission is on purpose. In this model, you don’t *need* Lorentz-contraction. It’s not there. That may sound counter-intuitive, but I assure you, the math works out. The observed contraction is the same. And the quantization of the background doesn’t even give you any problems when you change reference frame. Next time, I’ll try to explain why. I may even get round to telling you how quantum mechanics might fit in this picture.

## Lorentz Invariance

In my last post, I showed off an algorithm that could create nicely irregular networks with integer dimension two, without using geometric information. In other words, I made lumpy spheres.

While I’m proud of this result, it doesn’t look much like the kind of spacetime networks that are used in almost all branches of discrete physics. That’s because the dimension of *time* is missing. And, as anyone who’s read a little Einstein will tell you, time and space are part of the same thing. They can’t really be uncoupled.

Except, of course, they can be uncoupled. It’s dead easy. It’s just that for most of the math that physicists do, it makes more sense to wedge them together.

I’ve briefly outlined in previous posts the way in which space and time can be unpacked from each other. However, a fun conversation with a *very* math-literate friend the other day revealed to me that I haven’t really done a good enough job of explaining. In this post, I’m going to try to set that straight.

Perhaps the easiest way of describing the relation between space and time that Einstein uncovered is using Minkowski space. In other words, space and time are connected by the following equation:

*s^2 = t^2 – x^2 – y^2 – z^2*

where *t* is time, *x*, *y*, and *z* are the familiar dimensions of space, and *s* is the ‘spacetime interval’ (I’m working in units where the speed of light is 1). What this relation says is that if someone is moving toward or away from you, how fast they’re doing it is going to affect how you both perceive time to be passing.

To illustrate this, we can invoke a classic example from science fiction. If you get on a spaceship and make a very fast trip to a distant star (let’s call it Distantia) and back at almost light speed, when you return almost no time will have passed for you. However, for us, it will seem as if decades have gone by. What matters here is who does the accelerating. If you go to Distantia, and a week later I join you there, then we will both perceive the same amount of elapsed time.
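To put rough numbers on a trip like this, here’s a quick sketch (my own illustration, with made-up figures for Distantia, not numbers from the post) using the interval formula above:

```python
import math

def subjective_time(t, x, y=0.0, z=0.0):
    """Spacetime interval s for a trip that takes coordinate time t and
    covers distance (x, y, z) in my frame, with c = 1. This is the time
    the traveler's own clock records."""
    return math.sqrt(t * t - x * x - y * y - z * z)

# Suppose Distantia is 3 light-years away and you travel at 60% of light
# speed: the trip takes 5 years on my clock, but only 4 on yours.
print(subjective_time(5.0, 3.0))  # 4.0
```

The closer your speed gets to 1, the closer *x* gets to *t*, and the smaller the interval *s* becomes: at almost light speed, almost no subjective time passes at all.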

What’s chewy about this is that we know from experiment that we have to treat all reference frames as the same. Consider the following scenario: we discover a very fast moving planet in the heavens–called Speedia. Aliens from Speedia decide to visit the Earth. They show up, chat for a while, and then head home. From the perspective of the people of Speedia, the *same relation* should hold as did for our trips to Distantia. In other words, their travelers should stay young while the homebodies get creaky. (They should also see themselves as still, and everyone else as travelling fast.)

The way to fix this is to invoke some high-school algebra and move the terms in our equation about. In other words, we reframe the relation between space and time as follows:

*t^2 = s^2 + x^2 + y^2 + z^2*

At first, it’s not clear that we’ve done anything here. It’s the same formula as before, just the other way around. But let’s ask ourselves what the terms in this expression actually mean. What *is* a spacetime interval? What is the Minkowski relation *actually* saying?

It’s saying that for a pair of travelers approaching me from the same place, the subjective experience of time needed to reach me will depend on how fast they’re going. In other words, what the spacetime interval defines is the experience of *subjective time* for those travelers, as taken from my perspective.

For relativity specialists out there, this may seem obvious. It may seem like I haven’t said anything yet. But here’s the kicker–once you’ve framed things this way round, you can *pick a frame of reference for t* and describe all the others in terms of it. In other words, so long as we have a way of encoding distance travelled in the *s* direction, and if we maintain a *fixed relation* between the distance travelled in the *s* direction and the distance travelled in *x*, *y*, or *z*, we can describe everything in ordinary Euclidean space. (Note that the fixed relation here is key!)

Another way to think about this is that by turning the normal formula for spacetime around, we’ve created an external reference frame. Let’s call it Father Christmas’s reference frame. Nobody in the universe has access to FC’s frame. As far as they’re concerned, Minkowski space works as usual. All frames of reference are still equal *and the math is exactly the same*. Only FC can see this special view of the universe, which is handy as he needs to visit a very large number of chimneys very quickly and surprise everyone at Christmas.

The awesome thing for Father Christmas is that the universe has an unambiguous, objective geometry that encompasses everything that’s going on, and has this natty extra dimension *s*. For FC, creating a discrete model of spacetime is a breeze. He just divides everything up into a locally connected network. End of problem.

But hang on, we can’t *see* a direction *s*. And we haven’t detected it in any experiments! So how can I claim that this is a solution to the problem of relativity? It turns out that the physicists solved this for us years ago when they came up with something called Kaluza-Klein theory. The trick is that we roll the *s* direction up very tight into a little circle so that it’s invisible to us, but important at small scales. Sound familiar? It should: this is exactly the trick that’s used to make String Theory work. In fact, there’s exactly nothing new here. What I’m describing is old physics. If you can’t believe in a compact direction for *s*, you have to throw String Theory away too!

From a discrete physics perspective, this trick is super-useful. This is because it means that so long as an object can only travel a fixed distance with each objective time step, special relativity will hold as long as we add a hidden Euclidean dimension. I’ve tested this and it works. For those of you who like videos, here’s a small demonstration. The flashing of each blob represents the time it’s experiencing. Note how slow blobs flash quickly, and fast blobs hardly flash at all. If you make the blobs send messages to each other at the speed of light, everything pans out just as Einstein would have predicted. (The video is a special superluminal Christmas treat, because you’re viewing it from FC’s reference frame.)
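In case the video ever goes missing, here’s a tiny sketch of the idea in code (my own reconstruction with invented names, not the code behind the video): every blob travels exactly one unit of distance per objective tick, and whatever motion isn’t spent in ordinary space is spent along the hidden *s* direction. Only the *s* component advances the blob’s internal clock.

```python
import math

def blob_clock(v, ticks):
    """Proper time accumulated by a blob with spatial speed v (c = 1).

    Each objective tick the blob moves a fixed unit distance, split
    between ordinary space and the hidden compact s direction; only the
    s component advances the blob's internal clock (its 'flash').
    """
    tau = 0.0
    for _ in range(ticks):
        tau += math.sqrt(1.0 - v * v)  # leftover motion goes into s
    return tau

# A stationary blob ages the full 100 ticks; one at 80% of light speed
# ages only about 60, matching Einstein's factor sqrt(1 - v^2).
print(blob_clock(0.0, 100), blob_clock(0.8, 100))
```

Notice that nothing here mentions Minkowski space: the dilation falls straight out of the fixed step length in a Euclidean space with one extra dimension.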

Note that this only works for simulations that are isotropic (the same in all directions). This means that, unless you’re being super clever, the same trick can’t work for cellular automata.

So where does this leave us? With a really nice tool for making thorny spacetime problems go away. However, we still need to build networks that have the extra magic direction *s*, and it still needs to magically relate to subjectively experienced time. The network we started off with doesn’t have that direction, and we don’t have a way to encode the experience of particles, so lots more work is needed. But in the next post in this series, I’ll show you how to pull these tricks off too.

(By the way, if this post still doesn’t make a shred of sense, somebody please let me know and I’ll try again.)

## A Little More Background

In my recent post, A Little Background, I started to try to explain how one might go about building networks that looked like smooth structures with integer dimension at large scales. In other words, networks that might look something like the empty space our planet sails through.

At the end of that post, I set a puzzle in which a group of party attendees who could only communicate via cellphone tried to organize themselves into a ring. I did a horrible job of making the details clear, but one of the fine commenters on this blog (Keir) solved it anyway. Hoorah for commenters!

Along the way, Keir remarked on the key point that I was hoping people would find. And that is this: when you’re trying to build a network that has some global property like the friendly geometry of space, it’s fairly easy to do if you add nodes to the network one at a time. It’s usually *impossible* if you try to acquire smoothness some other way, like, for instance, starting with a random network and telling it how to untangle itself.

This is true whenever you insist that the information available to each node in the network be local. In other words, so long as a node can only gather information about its neighbors, or even its neighbors’ neighbors’ neighbors, it won’t ever know enough to untangle its position relative to all the others. Bits of your network will always be breaking off. And even if you put in tricks to prevent the network from coming apart, it will retain twists and knots that stop it from ever sorting itself out (except perhaps in 1D).

This is interesting, and, to my mind, suggestive, because it says that *if* the universe is discrete, and *if* it started from simple initial conditions, then there should be ample evidence that it used to be tiny and got a lot bigger. And this, of course, is what we see. This doesn’t prove anything about the discreteness of spacetime, but it’s nice to know that reality and our simple simulation tools line up.

However, while it’s one thing to get drunk partygoers with cellphones to form an electronic conga-line, it’s trickier to get them to form a network that looks like a flat surface (unless, perhaps, it’s a party full of topologists). But it *is* doable, and the information that you need to make available to any node in the network doesn’t need to be that large. The result is the simplest closed surface that’s locally two-dimensional everywhere. In other words, a sphere.

Without getting too deep into the grimy details, here’s an example of a rule that does the job:

1. Start with a graph of four nodes, all connected to each other.
2. Make a new node that you’d like to add to the graph. Call this X.
3. Pick a random node on the graph to add to. Call this A.
4. Pick a second node that’s a neighbor of A. Call this B.
5. Pick a node that’s a neighbor of both A and B, with the lowest neighbor-count possible. Call this C. Call the set {A,B,C} the New Neighbors.
6. Link X to each of the New Neighbors.
7. Look at the triangles that are formed when you take X and any pair of New Neighbors. We call each of those triangles a Face.
8. Each Face should be adjacent to another triangle that contains the same two New Neighbors and a different node. We call that node the Opposite (O).
9. If the sum of the neighbor counts of the two New Neighbors is greater than the sum of the neighbor counts of X and O plus two, then remove the link that runs between the New Neighbors, and add a link between X and O.
10. Return to step 8 and keep iterating through your set of Faces until you’re not swapping any more links.
11. Return to step 2 to add the next node.
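As a rough sketch, here’s one way the rule above might be implemented (this is my own reconstruction; tie-breaking and edge cases are guesses where the description leaves details open):

```python
import random
from itertools import combinations

def build_sphere_graph(n_nodes, seed=0):
    """Grow a sphere-like triangulated network one node at a time.

    Nodes are dictionary keys; each maps to its set of neighbors.
    """
    rng = random.Random(seed)
    # Start with four nodes, all connected to each other (a tetrahedron).
    adj = {i: {j for j in range(4) if j != i} for i in range(4)}

    def rebalance(x):
        # For each face (x, p, q), find the Opposite node o and swap the
        # p-q link for an x-o link when that evens out neighbor counts.
        changed = True
        while changed:
            changed = False
            for p, q in combinations(sorted(adj[x]), 2):
                # Skip pairs that aren't faces, and never strand a node
                # with fewer than three links (my own extra guard).
                if q not in adj[p] or len(adj[p]) <= 3 or len(adj[q]) <= 3:
                    continue
                candidates = [o for o in adj[p] & adj[q]
                              if o != x and o not in adj[x]]
                if not candidates:
                    continue
                o = min(candidates, key=lambda v: len(adj[v]))
                if len(adj[p]) + len(adj[q]) > len(adj[x]) + len(adj[o]) + 2:
                    adj[p].remove(q); adj[q].remove(p)
                    adj[x].add(o); adj[o].add(x)
                    changed = True
                    break  # neighbor sets changed: restart the sweep

    for x in range(4, n_nodes):
        a = rng.choice(sorted(adj))                   # random node A
        b = rng.choice(sorted(adj[a]))                # a neighbor B of A
        c = min(adj[a] & adj[b], key=lambda v: len(adj[v]))  # quiet C
        adj[x] = {a, b, c}                            # link X in
        for n in (a, b, c):
            adj[n].add(x)
        rebalance(x)                                  # swap links
    return adj
```

A handy property of this construction: every insertion adds one node and three links, and every swap conserves the link count, so the network always satisfies the triangulated-sphere relation E = 3V - 6.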

In practice, you have to be slightly careful how you implement this, but not *that* careful. In fact, I’ve found several algorithms that will work for 2D. Here’s the output from one that’s completely deterministic.

And here’s another that’s adding at random.

And here’s what it looks like inside a randomized surface, because it’s pretty.

While I’ve had great success in 2D, I haven’t found an algorithm yet that will work for 3D. I don’t think that’s because it’s impossible, it’s just that it makes my head hurt. The bubble you form in the 3D network is wrapped in 4D, which takes a little getting used to. And when you increase the number of dimensions, the number of edge cases you need to consider goes up accordingly.

Of course, there are plenty of things wrong with these networks. For a start, they’re way too regular compared to the kind we’d like to run particles across. But these obstacles strike me as eminently surmountable. So while we’re still a ways away from being able to build a plausible Big Bang from scratch, there are at least signs that it could be done.

By the way, for those who like a challenge, I vigorously encourage anyone keen to take a crack at the 3D case. If you want inspiration, here’s a sample Java file I wrote to build a 2D surface. Also, if you’re looking for a fresh perspective on this topic, I can highly recommend Ray Hinde’s excellent new digital physics blog over at Finitism Forever. He seems to be tackling the same problem.

## On Consciousness and Free Will

On this blog, we’ve recently tackled religion and the nature of existence, but we’ve left out one huge chewy topic that people tend to lump into this philosophical category, and that’s *consciousness*. You need only look at a site like Closer to Truth in order to see just how tightly coupled these ideas are in the public imagination. It’s also a topic of significance to me, as it was through writing on this subject that I first found myself exploring digital physics many years ago.

One of the great defenders of the specialness of human consciousness in the physical realm has been Roger Penrose, the man who proposed that consciousness was non-computable because it was founded on non-computable processes in nature. A lot of this blog has been about demolishing that idea. So on Penrose’s hypothesis, at least, digital physics delivers a pretty clear verdict.

However, there are plenty of other ways you might integrate consciousness with discrete reality. For instance, take the essays submitted for the 2011 FQXi prize, on the subject ‘Is Reality Digital or Analog?’ (probably the highest-profile public discussion forum on this subject in the last five years), and you’ll see the word consciousness showing up within the first four titles. What about these other models? Do they have anything to add? Here’s my answer:

*Digital physics has nothing to do with consciousness, because consciousness has nothing to do with physics.*

The notion that consciousness has any bearing on quantum mechanics, and therefore physics at large, is, to my mind, a lamentable side-effect of the times in which QM was first formulated. Poor old Niels Bohr had the unfortunate fate of hanging out with a bunch of logical positivists, who were sort of trendy at the time, and he tracked some of that muck back into physics along with him.

Enthusiasts on the topic of quantum consciousness point to the fact that observation of a QM event affects how it will play out. However, as we’ve seen in previous posts, we can generate identical effects in a simulation by simply asking the question: is information leaving the system or not? If it is, then an observation has taken place; if it isn’t, then an observation hasn’t happened yet.

In other words, particles are non-committal. They’ll hedge their bets and be everywhere until you force them to make up their mind. And forcing them to decide often has the side effect of forcing a bunch of their friends to decide too. In this regard, particles are rather like teenagers trying to decide where to go on a Friday night. They’re no more strange and magical than sixteen-year-olds. (Though, admittedly, sixteen-year-olds are pretty strange.) Ask any self-respecting working particle physicist about the role of consciousness in QM, and they will struggle not to roll their eyes at you. This is why.

So if we can rule out consciousness having an impact on quantum mechanical events, and we can rule out its dependence on smooth symmetries of nature, is there anywhere left for the specialness of consciousness to hide? At this point, we invoke the principle of minimal complexity which we used to unpack the idea of god, and we ask ourselves if the universe is more or less complex if we have to carve out some special extra room for sentience in physical law. The answer, I’d argue, is that it’s more complex, and therefore massively unlikely. Nice though it might be to cogitate on, then, consciousness arises naturally out of mechanistic physical processes, just like everything else.

But what about free will? Given that quantum mechanical events are completely unpredictable, isn’t there at least enough room left for that? Not in this model of reality, there ain’t.

To describe the universe completely, we need to treat the rules that run nature *and* the data that they run on as a single closed system. Otherwise we haven’t finished describing them yet. Thus, if we find that a huge pile of random numbers *is* necessary for the universe to work, then it belongs as part of our model–as a giant list of lottery tickets printed at the beginning of time and slowly spent.

Of course, a huge pile of random numbers that lasts for the length of the universe is a really, really awful implementation. The principle of minimal complexity strongly suggests that reality is better than that.

Free will, does it exist, then (at least in the sense of something special outside of logic)?

Sorry, no luck, as it were, so to speak, if you’ll pardon the pun, etc.

## The Big Bang

One of the theoretical digital physics endeavors that has received the most attention in recent years is the program by Stephen Wolfram to find an ultimately simple universal algorithm–the rule that defines all of nature. Despite the fact that he’s a brilliant man with extraordinary resources at his disposal, and though I’d love for him to succeed, I don’t think he’s going to. In fact, I’m pretty certain of it. In this post I’m going to tell you why.

But first, let me tell you my understanding of what Wolfram is doing, as I may be missing something. In his book, *A New Kind of Science*, Wolfram makes the point that the only way to tell what kind of output algorithms are going to produce is by running them. Given that simple algorithms produce lots of interesting patterns that crop up in nature, he recommends exploring as many as we can, and looking for ones that produce useful effects. And for various reasons similar to those I’ve outlined on this blog, he suggests that we might be able to find the rule for the universe somewhere in that stack of programs, and that we’re fools if we don’t at least try to look. His plan, then, is to sift through the possibilities looking for those that produce a network structure that shows the properties of an expanding spacetime. In other words, a Big Bang.
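The ‘run them and see’ idea is easy to make concrete. Here’s a minimal sketch (mine, not Wolfram’s code) of the elementary cellular automata his book catalogs, which can be pointed at any of the 256 rules:

```python
def elementary_ca(rule, steps):
    """Run one of the 256 elementary cellular automata from a single
    live cell, returning each generation as a list of 0s and 1s."""
    # The rule number's bits are the outputs for neighborhoods 0-7.
    table = [(rule >> i) & 1 for i in range(8)]
    row = [1]
    rows = [row]
    for _ in range(steps):
        padded = [0, 0] + row + [0, 0]  # widen so the pattern can grow
        row = [table[(padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]]
               for i in range(len(padded) - 2)]
        rows.append(row)
    return rows

# Rule 30, one of Wolfram's favorites: a trivially simple rule whose
# output never settles into an obvious pattern.
print(elementary_ca(30, 2))  # [[1], [1, 1, 1], [1, 1, 0, 0, 1]]
```

The only way to learn what rule 30 does after a thousand steps is, as Wolfram says, to run it for a thousand steps.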

My main concern with this springs from the importance of conservation laws in physics. In other words, though it has changed size, the universe contains a certain amount of *stuff*. So far as we can tell, there is essentially the same amount of stuff now as there was at the beginning of the universe, because properties like charge and lepton number are conserved. Certainly you can do things like create charged particle pairs spontaneously from photons, and so forth, but this doesn’t get around the fact that everything we know about cosmology suggests that the early universe was *dense*.

If Wolfram finds an algorithm that produces spacetime from scratch, where does all the stuff come from? The only solution, it would seem, is to have particles spontaneously generated as the network increases in size. But this isn’t what we see. If this were true, there’d be a lot more going on in the gaps between the galaxies than we witness. So, while finding an algorithm that produces spacetime geometry would certainly be an interesting result, in my opinion, it’d be highly unlikely to be physically relevant. Hence, so long as he’s looking for spacetime, my guess is that he’ll be out of luck.

So is Wolfram’s approach doomed? Far from it, I would propose, so long as we change the kind of network that we’re looking for. After all, just because we need an algorithm that eventually features conservation laws doesn’t mean we can’t have one that builds up a large amount of stuff *before* it builds the space to scatter it in. In other words, just because the Big Bang is where we measure the start of the universe from, there’s nothing to say that there wasn’t a prior era in which it was the stuff that was expanding instead of the space. If this is true, we should look for an algorithm that experiences a phase transition.

We already know of some algorithms that do this. Langton’s Ant experiences something of this sort. So does the Ice Nine cellular automaton as studied by the inspiring Else Nygren. Sadly, neither of these algorithms operates on networks, but they make it clear that this kind of behavior is not hard to find.
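For anyone who wants to watch a phase transition happen on their own machine, here’s a minimal Langton’s Ant (my own sketch of the standard rules, not code from either study): the ant wanders chaotically for roughly ten thousand steps, then abruptly locks into a periodic ‘highway’ that repeats every 104 steps.

```python
def langtons_ant(steps):
    """Run Langton's Ant from an all-white grid, returning the ant's
    final position. On a white cell: turn right and paint it black. On
    a black cell: turn left and paint it white. Then step forward."""
    black = set()          # coordinates of the black cells
    x = y = 0
    dx, dy = 0, 1          # start facing 'north'
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx       # turn left
            black.discard((x, y))  # repaint white
        else:
            dx, dy = dy, -dx       # turn right
            black.add((x, y))      # repaint black
        x, y = x + dx, y + dy
    return x, y
```

One way to see the transition: during the chaotic phase the ant’s displacement over any fixed window looks random, but once the highway starts, its displacement over every 104-step window is identical.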

My personal guess is that if Wolfram’s explorations pay off, he will find a class of algorithms that produce a knotted tangle of nodes which, after some large number of iterations, suddenly start unfolding like a flower. We have to hope that there is an entire family of algorithms that do this. Otherwise, if we need to accumulate a full universe’s worth of stuff prior to seeing any spatial expansion, we could be waiting a very long time indeed for the algorithm to do anything recognizable.