
Simplicity and Turing Machines

July 9, 2012

I have an exciting result that I want to share with you. However, in order to get there, I’m going to have to take this blog in a slightly more abstract direction than it’s been going recently.

‘More abstract?’ I hear you ask. What could be more abstract than discussing whether the fundamental nature of reality is based on simple algorithms? The answer is: discussing the nature of simplicity itself. Is such an abstruse subject still relevant to physics? Undoubtedly. I believe that it holds the key to resolving the long-standing difference in perspective between computer scientists and physicists.

I have been to several painful conference discussions in which physicists at one end, and computer scientists at the other, debate about the nature of reality. The computer scientists proclaim that nature must be discrete, and that Ockham’s razor supports their reasoning. The physicists look at them blankly and tell them that Ockham’s razor is a tool for building models from experimental data, and represents a heuristic to guide reasoning–nothing more. Neither side can apparently fathom the other. Filling the panels with luminaries from the highest levels of science seems to only make the problem worse.

It’s my belief that the study of simplicity can potentially provide a language that can unify everything from group theory up to quantum mechanics, and put this battle to bed forever. I will endeavor to show you how.

Discussions in digital physics often revolve around the notion of programs that are ‘simple’, but not much is said about what simplicity actually entails. Computer scientists are very familiar with the notion of complexity, as measured by the way in which the solution to a given problem scales with the size of the problem, but simplicity is something else.

For instance, consider Turing machines. These are idealized models of computation that computer scientists use to model what computers can do. A few years ago, Stephen Wolfram held a competition to prove that a given Turing machine model was capable of universal computation. Why was this model considered interesting? Because it contained fewer components than any other Turing machine for which the same proof had been made.

A Turing machine is a pretty good place to start exploring the idea of simplicity. You have a tape with symbols on it and a machine that can read and write those symbols while sliding the tape forward or backward, based on what it reads. You can even build one out of Lego.
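To make that picture concrete, here is a minimal sketch of such a machine in Python. The rule table is a toy of my own invention (flip bits and halt at the first blank), not any particular machine from the literature, and the dictionary-based tape is just one convenient way to fake an unbounded tape of blanks.

```python
# A minimal Turing machine sketch (illustrative only; the rule table below
# is a toy example, not any specific machine from the literature).

def run(rules, tape, state="A", head=0, blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    for _ in range(max_steps):
        symbol = tape.get(head, blank)      # unwritten cells read as blank
        if (state, symbol) not in rules:    # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write                  # write the new symbol
        head += 1 if move == "R" else -1    # slide the tape one cell
    return tape

# Toy rule table: flip every bit while moving right, halt at the first blank.
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
}

tape = {i: s for i, s in enumerate("1011")}
final = run(rules, tape)
print("".join(final[i] for i in sorted(final)))  # prints 0100
```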

There’s not much to it, but this incredibly simple machine, given enough tape and enough time, can do anything that the most sophisticated computers on Earth can do. And if we ever succeed in building quantum computers, the humble Turing machine will be able to do everything they can do too.

However, when it comes to providing a truly simple model of computation, I propose that the Turing machine doesn’t go far enough. This is because there is hidden information in the Turing machine model that isn’t written in the symbols, or stored in the state of the machine. In fact, in the classic description of a Turing machine, I’m going to propose that there is an infinite amount of information lurking in the machine, even when there are no symbols on the tape and the machine isn’t even running.

The hidden information is hiding in the structure of the tape. In order for a Turing machine to operate, the machine has to be able to slide the tape left or right. Unless we know which piece of tape is connected to which other piece, we have no program to run. This problem, I’d propose, infects the theory of information down to its roots. When we discuss the amount of information in a string of binary bits, we consider the number of bits, but not the fact that the bits need to come in a sequence. A bag of marbles colored white or black which can be drawn in any sequence doesn’t hold much information at all.
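A quick count makes the marble point quantitative: a sequence of n black-or-white bits can encode 2^n distinct messages, while an unordered bag of the same n marbles only distinguishes n+1 possibilities (how many are black), because the ordering has been thrown away. The comparison below is my own illustration of that arithmetic.

```python
import math

# Compare the information in n ordered bits versus n unordered
# black-or-white marbles (an illustration of the point above).
for n in (8, 64, 1024):
    ordered = n                        # log2(2**n) = n bits in a sequence
    unordered = math.log2(n + 1)       # only the count of black marbles survives
    print(f"n={n:5d}  sequence: {ordered:6.1f} bits   bag: {unordered:5.2f} bits")
```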

Any truly simple model of computation, therefore, needs to contain an explicit description of what’s connected to what. Hence, I’d propose that the simplest unit of machine structure isn’t the bit, but the reference. In other words, a pointer from one thing to another. You can build bits out of references, but you can’t build references out of bits, unless you presume some mechanism for associating bits that’s essentially identical to references.
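One way to picture this, as a sketch of my own rather than a formal model, is to replace the tape’s implicit left/right ordering with explicit references: each cell is a node that knows nothing except which nodes it points to, so the “which piece of tape is connected to which” information becomes ordinary data.

```python
# Sketch: tape cells defined by explicit references (my own illustration).
# The symbol could itself be built from references, but a plain field
# keeps the example short.

class Cell:
    def __init__(self, symbol="_"):
        self.symbol = symbol
        self.left = None      # reference to the neighbouring cell, or None
        self.right = None

def link(a, b):
    """Make b the right-hand neighbour of a; the connection is explicit data."""
    a.right = b
    b.left = a

# Build a four-cell tape; the structure lives entirely in the references.
cells = [Cell(s) for s in "1011"]
for a, b in zip(cells, cells[1:]):
    link(a, b)

# Walking the tape is now just following references.
node = cells[0]
while node is not None:
    print(node.symbol, end="")
    node = node.right
print()
```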

Once you start representing computation using references, the structures you come up with suddenly start looking a lot more like the programs for replicating physical experiments that I’ve outlined in previous posts. From a digital physics perspective, this is already useful. However, we can go deeper than that. When we compute using references, something strange and wonderful can happen that I’m still figuring out the implications of. In the next post, I’ll show you what I mean.


The Big Bang

May 30, 2012

One of the theoretical digital physics endeavors that has received the most attention in recent years is the program by Stephen Wolfram to find an ultimately simple universal algorithm–the rule that defines all of nature. Despite the fact that he’s a brilliant man with extraordinary resources at his disposal, and though I’d love for him to succeed, I don’t think he’s going to. In fact, I’m pretty certain of it. In this post I’m going to tell you why.

But first, let me tell you my understanding of what Wolfram is doing, as I may be missing something. In his book, A New Kind of Science, Wolfram makes the point that the only way to tell what kind of output algorithms are going to produce is by running them. Given that simple algorithms produce lots of interesting patterns that crop up in nature, he recommends exploring as many as we can, and looking for ones that produce useful effects. And for various reasons similar to those I’ve outlined on this blog, he suggests that we might be able to find the rule for the universe somewhere in that stack of programs, and that we’re fools if we don’t at least try to look. His plan, then, is to sift through the possibilities looking for those that produce a network structure that shows the properties of an expanding spacetime. In other words, a Big Bang.
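The enumerate-and-run methodology is easy to demonstrate in miniature. The sketch below runs a few elementary cellular automata, which are the familiar one-dimensional rules from A New Kind of Science; it is only a stand-in for the methodology, not the network-rewriting systems Wolfram actually searches for a candidate universe.

```python
# Enumerate-and-run in miniature: elementary cellular automata as a
# stand-in for Wolfram's much larger search (illustration of the
# methodology only, not his network-based search).

def step(cells, rule):
    """Apply one step of elementary CA `rule` (0-255) with wraparound."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 31, 15
for rule in (30, 90, 110):              # three of the classic rules
    row = [0] * width
    row[width // 2] = 1                 # single seed cell in the middle
    print(f"rule {rule}")
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)
```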

My main concern with this springs from the importance of conservation laws in physics. In other words, though it has changed size, the universe contains a certain amount of stuff. So far as we can tell, there is essentially the same amount of stuff now as there was at the beginning of the universe, because properties like charge and lepton number are conserved. Certainly you can do things like create charged particle pairs spontaneously from photons, and so forth, but this doesn’t get around the fact that everything we know about cosmology suggests that the early universe was dense.

If Wolfram finds an algorithm that produces spacetime from scratch, where does all the stuff come from? The only solution, it would seem, is to have particles spontaneously generated as the network increases in size. But this isn’t what we see. If this were true, there’d be a lot more going on in the gaps between the galaxies than we witness. So, while finding an algorithm that produces spacetime geometry would certainly be an interesting result, in my opinion, it’d be highly unlikely to be physically relevant. Hence, so long as he’s looking for spacetime, my guess is that he’ll be out of luck.

So is Wolfram’s approach doomed? Far from it, I would propose, so long as we change the kind of network that we’re looking for. After all, just because we need an algorithm that eventually features conservation laws doesn’t mean we can’t have one that builds up a large amount of stuff before it builds the space to scatter it in. In other words, just because the Big Bang is where we measure the start of the universe from, there’s nothing to say that there wasn’t a prior era in which it was the stuff that was expanding instead of the space. If this is true, we should look for an algorithm that experiences a phase transition.

We already know of some algorithms that do this. Langton’s Ant experiences something of this sort. So does the Ice Nine cellular automaton as studied by the inspiring Else Nygren. Sadly, neither of these algorithms operates on networks, but they make it clear that this kind of behavior is not hard to find.
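For anyone who wants to see this kind of regime change first-hand, here is a bare-bones Langton’s Ant. After roughly ten thousand chaotic steps the ant locks into its famous repeating diagonal “highway”, which is the sort of abrupt change of behavior I have in mind; the step counts below are just convenient sample points.

```python
# Bare-bones Langton's Ant. After roughly 10,000 chaotic steps the ant
# settles into a repeating diagonal "highway" -- an abrupt change of regime.

def ant(steps):
    black = set()                  # coordinates of black cells
    x = y = 0
    dx, dy = 0, -1                 # facing "up" (y grows downward)
    for _ in range(steps):
        if (x, y) in black:        # on black: turn left, flip cell to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                      # on white: turn right, flip cell to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy      # move forward one cell
    return black, (x, y)

for n in (5_000, 15_000):
    cells, pos = ant(n)
    print(f"after {n:6d} steps: {len(cells):5d} black cells, ant at {pos}")
```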

My personal guess is that if Wolfram’s explorations pay off, he will find a class of algorithms that produce a knotted tangle of nodes which, after some large number of iterations, suddenly start unfolding like a flower. We have to hope that there is an entire family of algorithms that do this. Otherwise, if we need to accumulate a full universe’s worth of stuff prior to seeing any spatial expansion, we could be waiting a very long time indeed for the algorithm to do anything recognizable.