The Big Bang
One of the theoretical digital physics endeavors that has received the most attention in recent years is Stephen Wolfram’s program to find an ultimately simple universal algorithm: the rule that defines all of nature. He’s a brilliant man with extraordinary resources at his disposal, and I’d love for him to succeed, but I don’t think he’s going to. In fact, I’m pretty certain of it. In this post I’m going to tell you why.
But first, let me tell you my understanding of what Wolfram is doing, as I may be missing something. In his book, A New Kind of Science, Wolfram makes the point that the only way to tell what kind of output an algorithm will produce is to run it. Given that simple algorithms produce lots of interesting patterns of the sort that crop up in nature, he recommends exploring as many as we can and looking for ones that produce useful effects. And for reasons similar to those I’ve outlined on this blog, he suggests that we might be able to find the rule for the universe somewhere in that stack of programs, and that we’re fools if we don’t at least try to look. His plan, then, is to sift through the possibilities looking for those that produce a network structure with the properties of an expanding spacetime. In other words, a Big Bang.
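To make the “run it and see” idea concrete, here’s a minimal Python sketch of the kind of survey Wolfram advocates: enumerate the 256 elementary cellular automaton rules and watch what each one does. This is just an illustration of the method, not his actual search, which operates on networks rather than rows of cells.

```python
# Each of the 256 elementary CA rules maps a cell's three-cell
# neighborhood to a new value via one bit of the rule number.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, rows=16):
    cells = [0] * width
    cells[width // 2] = 1                    # a single seed cell
    for _ in range(rows):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

# Rule 30, one of Wolfram's favorite examples of chaos from a simple rule:
run(30)
```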
My main concern with this springs from the importance of conservation laws in physics. Though the universe has changed size, it contains a fixed amount of stuff. So far as we can tell, there is essentially the same amount of stuff now as there was at the beginning of the universe, because properties like charge and lepton number are conserved. Certainly you can do things like spontaneously create charged particle pairs from photons, and so forth, but that doesn’t get around the fact that everything we know about cosmology suggests the early universe was dense.
If Wolfram finds an algorithm that produces spacetime from scratch, where does all the stuff come from? The only solution, it would seem, is to have particles spontaneously generated as the network increases in size. But this isn’t what we see. If it were, there’d be a lot more going on in the gaps between the galaxies than we witness. So while finding an algorithm that produces spacetime geometry would certainly be an interesting result, in my opinion it’d be highly unlikely to be physically relevant. Hence, so long as he’s looking for spacetime, my guess is that he’ll be out of luck.
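One could even screen candidate rules for conserved quantities before worrying about their geometry. Here’s a toy sketch that spot-checks whether an elementary cellular automaton conserves its count of live cells; rule 184, the so-called traffic rule, is known to do so, while rule 30 is not. The test only samples one random trajectory, so a pass is evidence rather than proof, and the whole thing is an analogy for the network case rather than anything Wolfram actually runs.

```python
import random

# Evolve one step of an elementary cellular automaton on a ring.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Spot-check whether the rule preserves the number of live cells.
def conserves_count(rule, width=64, steps=200, seed=1):
    rng = random.Random(seed)
    cells = [rng.randint(0, 1) for _ in range(width)]
    total = sum(cells)
    for _ in range(steps):
        cells = step(cells, rule)
        if sum(cells) != total:
            return False
    return True

print(conserves_count(184))  # True: rule 184 ("traffic") conserves its particles
print(conserves_count(30))   # False: rule 30 creates and destroys stuff freely
```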
So is Wolfram’s approach doomed? Far from it, I would propose, so long as we change the kind of network that we’re looking for. After all, just because we need an algorithm that eventually features conservation laws doesn’t mean we can’t have one that builds up a large amount of stuff before it builds the space to scatter it in. In other words, just because the Big Bang is where we measure the start of the universe from, there’s nothing to say there wasn’t a prior era in which it was the stuff, rather than the space, that was expanding. If that’s true, we should look for an algorithm that experiences a phase transition.
We already know of some algorithms that do this. Langton’s Ant experiences something of this sort: the ant wanders chaotically for roughly ten thousand steps, then abruptly settles into building a regular, endlessly repeating “highway”. So does the Ice Nine cellular automaton studied by the inspiring Else Nygren. Sadly, neither of these algorithms operates on networks, but they make it clear that this kind of behavior is not hard to find.
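For the curious, here’s a bare-bones Python version of Langton’s Ant, enough to watch the transition happen for yourself; the step counts at the end are just convenient sample points.

```python
from collections import defaultdict

# Langton's Ant: turn right on a white cell, left on a black one,
# flip the cell, step forward. (With y increasing downward,
# (dx, dy) -> (-dy, dx) is a right turn.)
def langtons_ant(steps):
    grid = defaultdict(int)        # 0 = white, 1 = black
    x, y = 0, 0
    dx, dy = 0, -1                 # start facing "up"
    for _ in range(steps):
        if grid[(x, y)] == 0:
            dx, dy = -dy, dx       # right turn on white
            grid[(x, y)] = 1
        else:
            dx, dy = dy, -dx       # left turn on black
            grid[(x, y)] = 0
        x, y = x + dx, y + dy
    return x, y

# The chaotic phase lasts roughly 10,000 steps; after that the ant
# builds its highway and marches steadily away from the origin.
for n in (1_000, 10_000, 50_000):
    print(n, "steps -> ant at", langtons_ant(n))
```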
My personal guess is that if Wolfram’s explorations pay off, he will find a class of algorithms that produce a knotted tangle of nodes which, after some large number of iterations, suddenly start unfolding like a flower. We have to hope that there is an entire family of algorithms that do this. Otherwise, if we need to accumulate a full universe’s worth of stuff prior to seeing any spatial expansion, we could be waiting a very long time indeed for the algorithm to do anything recognizable.
Hi Alex,
just a quick note to say I fully subscribe to the idea of a change of phase in the ‘universal algorithm’. The distinction between a preliminary phase in which ‘things’ boil violently without much self-organization and a later one in which they unfold into a more regular geometry of spacetime is found in various theories. Some authors (from Russia, if I remember correctly) even distinguish between an initial, purely mathematical phase and a ‘physical’ one in which spacetime begins to appear.
Another interesting two-phase theory is discussed in an old paper by David Finkelstein, entitled ‘Superconducting Causal Nets’. The net is first in a chaotic phase called Tohu (a term from Genesis); a change of phase turns it into a sort of relativistic quantum ether that exhibits some of the features of superconductivity. A remarkable property of this second phase is that it satisfies local Lorentz invariance, something we had trouble understanding in the context of algorithmic/deterministic causal sets…
Hi Tommaso!
Great to hear from you. These papers sound both useful and promising! Please do email them to me if you get a chance, or post links to them here if that’s easier. What I’m curious about in the Finkelstein paper is how they determine that local Lorentz invariance is satisfied. Do they have some kind of measure for it, or is Lorentz invariance a guaranteed property of the result?
I’ve been trying to wrap my head around what a pre-spatial structure would look like, and how it might undergo a phase change. My thought is that by trying to visualize the necessary conditions, one has a better chance of effectively screening candidate algorithms.
So let’s say we start off with a kind of proto-spatial nugget: a tiny network of nodes all hooked directly together. Our algorithm might then decorate that nugget with structure. In other words, it creates particle after particle, each pointing at the central nugget. We end up with something like a many-pointed star. Then, at some point, we reach a threshold. Instead of creating more spikes on the star, the algorithm starts increasing the size of the central nugget. Along the way, we reach a state where not all the nodes in the nugget are connected to each other. Once that happens, we have room to start creating spatial structure. The limbs of the star then spread themselves out as wandering particles.
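To make this easier to play with, here’s a toy Python sketch of the idea. Everything in it, the spike threshold, the attach-to-three rule, the node counts, is an arbitrary choice made purely for illustration, not a candidate for the real algorithm.

```python
# Grow spikes (particles) around a fully connected nugget until a
# threshold, then start enlarging the nugget itself.
def grow(steps, spike_threshold=20):
    nugget = {0, 1, 2}                           # fully connected core
    edges = {(0, 1), (0, 2), (1, 2)}
    spikes = []
    next_id = 3
    for _ in range(steps):
        if len(spikes) < spike_threshold:
            # Phase 1: add a spike node connected to the whole core.
            edges |= {(n, next_id) for n in nugget}
            spikes.append(next_id)
        else:
            # Phase 2: enlarge the core. New nodes attach to just a few
            # others, so the core stops being fully connected and room
            # for spatial structure opens up.
            edges |= {(n, next_id) for n in sorted(nugget)[:3]}
            nugget.add(next_id)
        next_id += 1
    return nugget, spikes, edges

nugget, spikes, edges = grow(40)
print(len(nugget), "core nodes,", len(spikes), "spikes,", len(edges), "edges")
```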
There are plenty of other ways one might achieve a similar result, I think, but this is the easiest for me to visualize. Any thoughts? It’ll be interesting to see whether the papers you mention contain anything that bears a resemblance to this idea.
I dare you to actually prove the universe is finite. On a related note, conservation of infinite values.
Is the burden of proof really with the idea of finiteness? An infinite universe is, IMO, infinitely unlikely. However, it’s interesting to contemplate how one might do this. If we can demonstrate that the idea of smooth continuous mathematics is, at some level, logically inconsistent, would that suffice?
Of course, if you’re going to make any assertion, there’s a burden of proof. Anything else is madness. I’m glad you have an emotion about infinity; enjoy yourself. I find the idea of a finite universe almost incomprehensible, especially since the hyper-spherical universe is directly repudiated by special relativity. If the universe is finite, it has edges. If it has edges, it has a center. And of course it has an outside.
Can you point me at a paper where the idea of a hyperspherical universe is repudiated? (I know that measurements of spacetime suggest that it’s very flat, but that’s different. That’s a scale issue.)
And I’m assuming that even if you can provide a link, you can’t point me at one that repudiates a hypertorus. While it’s harder to do, a hypertorus can be grown from a seed, and it stays geometrically flat throughout the process without ever having boundaries. Still, my money’s on a hypersphere.
As for your statement that every assertion requires a burden of proof, can you prove that?
And I’m glad you’re having emotions about me having emotions about infinity. I often have emotions about things that don’t make logical sense. It’s a habit of mine. 🙂