## Simplicity and Turing Machines

I have an exciting result that I want to share with you. However, in order to get there, I’m going to have to take this blog in a slightly more abstract direction than it’s been going recently.

‘More abstract?’ I hear you ask. What could be more abstract than discussing whether the fundamental nature of reality is based on simple algorithms? The answer is: discussing the nature of simplicity itself. Is such an abstruse subject still relevant to physics? Undoubtedly. I believe that it holds the key to resolving the long-standing difference in perspective between computer scientists and physicists.

I have been to several painful conference discussions in which physicists at one end, and computer scientists at the other, debate about the nature of reality. The computer scientists proclaim that nature must be discrete, and that Ockham’s razor supports their reasoning. The physicists look at them blankly and tell them that Ockham’s razor is a tool for building models from experimental data, and represents a heuristic to guide reasoning–nothing more. Neither side can apparently fathom the other. Filling the panels with luminaries from the highest levels of science seems to only make the problem worse.

It’s my belief that the study of simplicity can potentially provide a language that can unify everything from group theory up to quantum mechanics, and put this battle to bed forever. I will endeavor to show you how.

Discussions in digital physics often revolve around the notion of programs that are ‘simple’, but not much is said about what simplicity actually entails. Computer scientists are very familiar with the notion of complexity, as measured by the way in which the solution to a given problem scales with the size of the problem, but simplicity is something else.

For instance, consider Turing machines. These are idealized models of computation that computer scientists use to model what computers can do. A few years ago, Stephen Wolfram held a competition to prove that a given Turing machine model was capable of universal computation. Why was this model considered interesting? Because it contained fewer components than any other Turing machine for which the same proof had been made.

A Turing machine is a pretty good place to start exploring the idea of simplicity. You have a tape with symbols on it and a machine that can read and write those symbols while sliding the tape forward or backward, based on what it reads. You can build one out of Lego.

There’s not much to it, but this incredibly simple machine, given enough tape and enough time, can do anything that the most sophisticated computers on Earth can do. And if we ever succeed in building quantum computers, the humble Turing machine will be able to do everything they can do too.
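To make the machine’s simplicity concrete, here is a minimal sketch of one in Python. The rule table shown is the classic two-state “busy beaver” (not the machine from Wolfram’s prize); everything else is just a tape, a head, and a lookup.

```python
# Minimal Turing machine sketch. A rule maps (state, symbol) to
# (symbol to write, head movement, next state).
from collections import defaultdict

def run(rules, state, halt_state, max_steps):
    tape = defaultdict(int)   # blank tape: every unvisited cell reads 0
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move          # +1 slides the tape one way, -1 the other
    return tape, state

# Two-state "busy beaver": writes four 1s in six steps, then halts.
rules = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}
tape, final = run(rules, 'A', 'H', 100)
print(sum(tape.values()), final)   # -> 4 H
```

Note that the code quietly assumes what the post goes on to question: the `head + move` arithmetic is exactly the hidden tape structure discussed below.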

However, when it comes to providing a truly simple model of computation, I propose that the Turing machine doesn’t go far enough. This is because there is hidden information in the Turing machine model that isn’t written in the symbols, or stored in the state of the machine. In fact, for a classic description of a Turing machine, I’m going to propose that there is an infinite amount of information lurking in the machine, even when there are no symbols on the tape and the machine isn’t even running.

The hidden information is hiding in the *structure of the tape*. In order for a Turing machine to operate, the machine has to be able to slide the tape left or right. Unless we know which piece of tape is connected to which other piece, we have no program to run. This problem, I’d propose, infects the theory of information down to its roots. When we discuss the amount of information in a string of binary bits, we consider the number of bits, but not the fact that the bits need to come in a sequence. A bag of marbles colored white or black which can be drawn in any sequence doesn’t hold much information at all.
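The marble-bag point can be made quantitative by counting distinguishable states. A quick sketch (the specific count of eight is an arbitrary choice for illustration):

```python
from math import log2

n = 8
sequence_states = 2 ** n   # an ordered string of 8 bits: every arrangement is distinct
bag_states = n + 1         # an unordered bag of 8 marbles: only the black/white count matters

print(log2(sequence_states))   # -> 8.0 bits
print(log2(bag_states))        # roughly 3.17 bits
```

The gap between the two, nearly five bits here, is carried entirely by the sequencing, i.e. by knowing which bit is connected to which.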

Any truly *simple* model of computation, therefore, needs to contain an explicit description of what’s connected to what. Hence, I’d propose that the simplest unit of machine structure isn’t the *bit*, but the *reference*. In other words, a pointer from one thing to another. You can build bits out of references, but you can’t build references out of bits, unless you presume some mechanism for associating bits that’s essentially identical to references.
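As a toy illustration of building bits out of references (the class and names here are my own invention, purely for the sketch): let a cell hold nothing but two references, one to a shared sentinel standing for 0 or 1, and one to the next cell. The sequence then lives explicitly in the structure, rather than being implied by an array index.

```python
# A reference-only encoding of a bit string: no indices, only pointers.
class Node:
    def __init__(self, value=None, next=None):
        self.value = value   # reference to a shared sentinel, not a stored symbol
        self.next = next     # reference to the following cell, or None at the end

ZERO, ONE = Node(), Node()   # two bare sentinels; which is which is pure identity

def encode(bits):
    head = None
    for b in reversed(bits):
        head = Node(ONE if b else ZERO, head)
    return head

def decode(head):
    out = []
    while head is not None:
        out.append(1 if head.value is ONE else 0)
        head = head.next
    return out

print(decode(encode([1, 0, 1, 1])))   # -> [1, 0, 1, 1]
```

Notice that the decoder never compares stored data, only reference identity (`is`), which is the sense in which the reference, not the bit, is doing the work.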

Once you start representing computation using references, the structures you come up with suddenly start looking a lot more like the programs for replicating physical experiments that I’ve outlined in previous posts. From a digital physics perspective, this is already useful. However, we can go deeper than that. When we compute using references, something strange and wonderful can happen that I’m still figuring out the implications of. In the next post, I’ll show you what I mean.

I’ve read that before but I can’t remember

about the importance of references/pointers

even with digital physics you can’t get rid of dimensions

for example Ed Fredkin’s SALT model simulates inertia, circular motion & undulating strings

but it necessitates dimensions

Hey.

If I’m understanding you correctly, what you’re essentially saying is that you always need to have dimensions in the background in order for a computational model to work out. If that’s what you mean, I have to disagree with you. You don’t need dimensions, only references, and references don’t require coordinates, or any background medium. In order for dimensions to be a requirement for references, they’d also have to be a requirement for set theory, and therefore for logic, arithmetic, and just about everything else in math, most of which tends to have its roots in set theory.

On a more pragmatic note: I have built simulations that create topologically flat 2D networks using exactly zero dimensional information. I would imagine that doing the same for 3D is equally possible.

sorry for the ambiguous post

but I was speaking about the SALT model and not all computation (for instance arithmetic)

if your are interested take a look at

http://www.i-programmer.info/news/112-theory/4445-a-new-computational-universe-fredkins-salt-ca.html

and read about SALT and the interesting ruleset called “Busy Box”

maybe in the end Einstein was right:

“God doesn’t play dice, but he plays chess instead!!!”

I think I missed this; it’s just now that I found this interesting FAQ

dunno why there is complete disinterest in this CA

but I think it’s a much bigger find than the “Game of Life”

http://busyboxes.org/faq.html

One of the authors of the SALT work is Dan Miller, who is an occasional co-author on this blog. I agree with you that the work on SALT is great, and really interesting. I remain unconvinced by the CA approach for physics, but am consistently impressed by what people have been able to do with them. The circle-approximation feature on its own deserves some critical attention. After all, it’s a new way to approximate Pi if nothing else!

“One of the authors of the SALT work is Dan Miller”

wow!! didn’t know that!