We’re Not Local

November 6, 2012 Leave a comment

Ars Technica has a nice article on a piece of theoretical work done by J. D. Bancal et al. The upshot of it is that if your explanation for how quantum mechanics works is anything other than non-local, it leaves open the possibility of faster-than-light communication. (Thanks to Dan Miller for pointing me at it.)

I have mixed feelings about this idea, as I'd love for faster-than-light communication to be a possibility, and am delighted that someone has come up with a way of determining whether it can be done. However, the flip side of this is that I'm pretty certain that QM is fundamentally non-local, as I outlined in my post on replicating particle self-interference. The notion here being that non-locality doesn't rule out discrete models. If anything, it supports them, as it encourages us to think of wave-functions as sets of non-locally distributed points, either finite or otherwise.

What this result doesn’t say, unless I’m missing something, is that the currently fashionable, complex-number-based model of QM is literally true. You can still take exactly the same result and reframe it in terms of another equivalent model, such as Bohmian mechanics, for instance, and get something that looks completely deterministic.

Hence, while the result is nifty, the goal posts for viable theories of physics remain doggedly where they were.

Hello again, cubic symmetry, and simulations

November 5, 2012 8 comments

Hello all. It’s been a while since I’ve posted anything on this blog. My life has been in flux of late, as I’ve been moving to Princeton, NJ, changing jobs, and having a baby all at the same time. Now that things are starting to settle, it should be a lot easier for me to find time to write.

With that in mind, here's my take on a recent article that people forwarded to me a while back during my break: the result from Silas Beane at the University of Bonn that claims to have something to say on the subject of the simulated universe. The arXiv blog, as usual, has a good write-up.

The gist of the research is this: if the universe is running in a simulation on a cubic lattice, in much the way that current quantum chromodynamics simulations are calculated, then there should be experimentally observable consequences. Beane and his team identify two: the anisotropic distribution of cosmic rays (different amounts of rays in different directions), and a cut-off in the energy of cosmic ray particles. This article generated some excitement because the cut-off matches a phenomenon that’s already been observed.

A great moment for digital physics, right? I'm not convinced. I have a few concerns about this work. For starters, as I have discussed on this blog, there are a huge number of ways of building discrete universe models, of which a 3D lattice is only one. That simulation style has significant limitations, which, while not insurmountable, certainly make it a tough fit for many observed physical effects, such as relativity and spatial expansion.

Furthermore, in order to make their predictions, Beane and his associates simulated at a tiny scale. This is convenient because you only have to consider a single reference frame, and can treat space as a static backdrop for events. In other words, it’s pretty clear that the main problems with regular lattice simulations are things that their research didn’t touch.

I would find it astonishing, therefore, if we discovered the predicted cosmic ray anisotropy. And this brings me on to my second major concern. People, upon finding no irregularity in the cosmic ray distribution, are then likely to think, “gosh, well the universe was isotropic after all, I guess we’re not in a simulation.”

Except, let's recall, experiments have already seen the expected energetic cut-off. In other words, the cosmic ray observations we see are perfectly consistent with a universe that's discrete but also isotropic: irregular, like a network. This, perhaps, shouldn't come as a surprise.

Then, there is my third concern, and this reflects the interpretation imposed on this result. Namely, that a universe that turns out to run on an algorithm must somehow be a simulation running on a computer elsewhere. This, as I’ve also mentioned in previous posts, is just plain wrong.

Algorithms, like equations, are tools we use to build models. One does not have primacy over the other. One is not more natural than the other. A universe that turns out to be algorithmic no more requires a computer to run on than a universe based on differential equations needs a system of valves. The one main difference between algorithms and equations is that you can describe a vastly larger set of systems with algorithms. Equations are nice because, once you’ve figured them out, you can do lots of nifty reasoning. However, the number of possible systems that are amenable to this treatment is vanishingly small, compared to the systems that are not.
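
To make that distinction concrete, here's a small illustration of my own (not from any of the work discussed above): elementary cellular automaton Rule 110, a system that is trivial to state as an algorithm and is known to be computationally universal, yet doesn't come packaged as a tidy closed-form equation you can reason about analytically.

```python
# Rule 110: a one-line update rule with provably universal behaviour.
RULE = 110

def step(cells):
    """One synchronous update of the row, treating cells off the ends as 0."""
    padded = [0] + cells + [0]
    return [(RULE >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]          # a single live cell at the right-hand edge
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```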

Most physicists want the universe to turn out to be completely describable with equations, because it would make life a lot easier for everyone. It’s a nice thing to hope for. It’s just that given the set of options available, it’s not terribly likely.

Higgsistential Angst

July 18, 2012 5 comments

We've seen a lot about the Higgs boson in the news over the last couple of weeks. One might be tempted to suppose that my bet about the outcome of this adventure was wrong.

And so it might be. Refutability is a fine thing. However, I don't think this matter is over yet. Ray, from Finitism Forever, supplied me with this link to an article on the magnificent arXiv blog.

There are two reasons why I’m not ready to call the discovery of the Higgs a done deal. The rational part of me is reluctant because the best the physics community can say at this point is a ‘Higgs-like particle’. That’s far from conclusive.

Then there's the intuitive part of me, and it doesn't want to think they've found the Higgs, because that would, IMO, be terrible news for physics. Yes, terrible. If all we've done is tidy up the loose ends of a theory that looked complete before anyone discovered dark matter or dark energy, then we're in horrible shape to understand those deeper questions about how the universe works. In that scenario, there are no particle interactions we can generate that would help us even start to understand.

Also, bear in mind that the LHC has been punching large holes in lots of supersymmetry theories. That’s one result out of CERN that we can feel confident about. Hence, the idea of supersymmetric particle pairs as candidates for dark matter looks a lot shakier than it did a few years ago.

I would rather that physics have something chewy and hard to understand in front of it. Something tantalizing but offering the promise of deeper knowledge. The alternative is an opportunity for a lot of retired professors to bust out the champagne and feel smug, followed by a long, dark period of complete confusion.

So come on, Universe. Don’t give us a Higgs. You’re better than that.


The trouble with symmetry

July 10, 2012 2 comments

One of the greatest advances in theoretical particle physics in the 20th century is Noether’s theorem. If you’ve never heard of it, you’re not alone. It’s an achievement that seldom makes it into popular titles, despite the fact that it’s arguably the greatest single achievement of mathematical physics that’s ever been made. It was conceived of by one of the unsung heroes of the field–Emmy Noether, probably the greatest woman mathematician who ever lived.

What Noether's theorem tells us is that for every symmetry of a physical system, there is a conserved quantity, and vice versa. The conservation of energy, for instance, corresponds to symmetry in time. Conservation of momentum corresponds to symmetry under translation through space. The upshot is that when you're trying to build a working theory of physics, what really counts are the symmetries. Nail the symmetries and you've essentially nailed the problem.
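
As a standard worked instance (my own illustration, using the usual Lagrangian formalism rather than anything specific to this blog): if a system's Lagrangian has no explicit time dependence, the quantity we call energy is conserved.

```latex
% Time-translation symmetry => energy conservation.
% Assume L = L(q, \dot{q}) with no explicit t, and the Euler-Lagrange
% equation \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q}.
\frac{d}{dt}\!\left(\dot{q}\,\frac{\partial L}{\partial \dot{q}} - L\right)
  = \dot{q}\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q}\right) = 0
\quad\Longrightarrow\quad
E \equiv \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L \;\text{ is constant.}
```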

The problem for digital physics is that Noether’s theorem specifically relates to differentiable symmetries. In other words, ones that change smoothly. For symmetries that don’t change smoothly, all bets are off. This means that anyone trying to use a discrete, computational system to model physics is hamstrung right out of the gate.

In order to bridge this gulf, it seems to me that you need some way of describing computation in terms of symmetries, or symmetries in terms of computation. Either way, you need some nice formal way of putting the two notions on the same footing so that a meaningful, discretized version of Noether’s theorem can be derived. In other words, you need some kind of super-math that slides right in there between calculus and the theory of computation.

Though the link may not yet be obvious, this was where I was going with my recent post on Simplicity and Turing machines. But what does simplicity have to do with symmetry? Plenty, I suspect. I propose that we try to bridge the gulf between symmetry and computation with an idea that has elements of both: the idea of a partial symmetry.

But what is a partial symmetry? This terminology doesn't exist anywhere in math or physics. And what does it even mean? Either something is symmetric or it's not. In truth, partial symmetry is something I made up, inspired by the reading I was doing on partial orders. And it's a notion I'm still ironing the bugs out of. It works like this:

Any time you have a system that displays a symmetry, there is informational redundancy in it. Because there is redundancy in it, you can look at that system as the outcome of some sequence of copying operations applied to an initial seed from which redundancy is missing. Consider a clock face. We can treat the clock as a shape that happens to have twelve-fold symmetry, or we can think of it as a segment for describing a single hour that’s been replicated twelve times. This isn’t how we normally think about symmetry, but in spirit it’s not that far from a more familiar idea that mathematicians use called a group action.

A clock made up of twelve segments

However, if your copying operation doesn’t preserve all the information in the initial seed, you don’t have a full symmetry. Consider what happens if, instead of taking those clock segments and lining them up in a circle, you copy and move with each step in such a way that a part of each segment is hidden. You still end up with something that’s got a lot of the properties of a symmetric object, but it’s not fully symmetric. Furthermore, as soon as you do this, the ordering of the sequence of copying operations suddenly matters.

A partially symmetric not-clock with six segments
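
Here's a rough sketch of the copying idea in code (entirely my own construction, under the informal definitions above): a segment is just a tuple of marks, the clock is twelve information-preserving copies of a seed, and the not-clock is built with a copy step that hides part of each new segment.

```python
# Symmetry as repeated copying of a seed, versus a lossy copy step.

SEED = tuple("ABCDEFG")              # one segment's worth of marks

def full_copy(segment):
    return tuple(segment)            # preserves all the information

def lossy_copy(segment):
    return tuple(segment[:-1])       # hides one mark of each new segment

def build(seed, copy, count):
    pieces, current = [seed], seed
    for _ in range(count - 1):
        current = copy(current)
        pieces.append(current)
    return pieces

clock = build(SEED, full_copy, 12)       # full twelve-fold symmetry
not_clock = build(SEED, lossy_copy, 6)   # only a partial symmetry

# Every segment of the clock is identical, so you can't tell which copy
# came first; in the not-clock each segment is different, so the order
# of the copying operations is visible in the result.
print(len(set(clock)))                   # 1
print([len(p) for p in not_clock])       # [7, 6, 5, 4, 3, 2]
```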

My proposal is that partial symmetry is equivalent to computation. And that armed with this idea, we can start to look at the symmetries that appear in nature in a new light. That might sound like a bit of a stretch, but in later posts I’m going to try to show you how it works.


Simplicity and Turing Machines

July 9, 2012 6 comments

I have an exciting result that I want to share with you. However, in order to get there, I’m going to have to take this blog in a slightly more abstract direction than it’s been going recently.

‘More abstract?’ I hear you ask. What could be more abstract than discussing whether the fundamental nature of reality is based on simple algorithms? The answer is: discussing the nature of simplicity itself. Is such an abstruse subject still relevant to physics? Undoubtedly. I believe that it holds the key to resolving the long-standing difference in perspective between computer scientists and physicists.

I have been to several painful conference discussions in which physicists at one end, and computer scientists at the other, debate about the nature of reality. The computer scientists proclaim that nature must be discrete, and that Ockham’s razor supports their reasoning. The physicists look at them blankly and tell them that Ockham’s razor is a tool for building models from experimental data, and represents a heuristic to guide reasoning–nothing more. Neither side can apparently fathom the other. Filling the panels with luminaries from the highest levels of science seems to only make the problem worse.

It's my belief that the study of simplicity can potentially provide a language that unifies everything from group theory up to quantum mechanics, and put this battle to bed forever. I will endeavor to show you how.

Discussions in digital physics often revolve around the notion of programs that are ‘simple’, but not much is said about what simplicity actually entails. Computer scientists are very familiar with the notion of complexity, as measured by the way in which the solution to a given problem scales with the size of the problem, but simplicity is something else.

For instance, consider Turing machines. These are idealized models of computation that computer scientists use to model what computers can do. A few years ago, Stephen Wolfram held a competition to prove that a given Turing machine model was capable of universal computation. Why was this model considered interesting? Because it contained fewer components than any other Turing machine for which the same proof had been made.

A Turing machine is a pretty good place to start exploring the idea of simplicity. You have a tape with symbols on it and a machine that can read and write those symbols while sliding the tape forward or backward, based on what it reads. You can build one out of lego.

Though there's not much to it, this incredibly simple machine, given enough tape and enough time, can do anything that the most sophisticated computers on Earth can do. And if we ever succeed in building quantum computers, the humble Turing machine will be able to do everything they can do too.
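
For readers who haven't met one in code, here's a minimal sketch of my own of the standard textbook model (nothing to do with Wolfram's prize machine in particular): a transition table, an unbounded tape, and a head that reads, writes, and moves.

```python
from collections import defaultdict

def run(transitions, tape, state, head=0, halt_state="halt", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay) or +1 (right). Unwritten cells read
    as the blank symbol '_'."""
    cells = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == halt_state:
            break
        new_symbol, move, state = transitions[(state, cells[head])]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip("_")

# Example machine: increment a binary number, head starting on its last digit.
inc = {
    ("inc", "1"): ("0", -1, "inc"),   # carrying: flip 1 -> 0 and move left
    ("inc", "0"): ("1", 0, "halt"),   # absorb the carry and stop
    ("inc", "_"): ("1", 0, "halt"),   # ran off the left edge: new leading digit
}
print(run(inc, "1011", "inc", head=3))   # -> 1100  (11 + 1 = 12)
```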

However, when it comes to providing a truly simple model of computation, I propose that the Turing machine doesn’t go far enough. This is because there is hidden information in the Turing machine model that isn’t written in the symbols, or stored in the state of the machine. In fact, for a classic description of a Turing machine, I’m going to propose that there is an infinite amount of information lurking in the machine, even when there are no symbols on the tape and the machine isn’t even running.

The hidden information is hiding in the structure of the tape. In order for a Turing machine to operate, the machine has to be able to slide the tape left or right. Unless we know which piece of tape is connected to which other piece, we have no program to run. This problem, I’d propose, infects the theory of information down to its roots. When we discuss the amount of information in a string of binary bits, we consider the number of bits, but not the fact that the bits need to come in a sequence. A bag of marbles colored white or black which can be drawn in any sequence doesn’t hold much information at all.
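
To put a rough number on that last point (my own back-of-the-envelope count): almost all of the capacity lives in the sequencing.

```latex
% n ordered bits distinguish 2^n messages; a bag of n black-or-white
% marbles, drawn in any order, is characterised only by how many are
% black, so it distinguishes just n+1 states.
I_{\text{sequence}} = \log_2 2^{n} = n \ \text{bits},
\qquad
I_{\text{bag}} = \log_2 (n+1) \ \text{bits}.
```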

Any truly simple model of computation, therefore, needs to contain an explicit description of what’s connected to what. Hence, I’d propose that the simplest unit of machine structure isn’t the bit, but the reference. In other words, a pointer from one thing to another. You can build bits out of references, but you can’t build references out of bits, unless you presume some mechanism for associating bits that’s essentially identical to references.
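
Here's what I mean, as a sketch of my own: if each tape cell carries explicit references to its neighbours, then "which piece of tape is connected to which" becomes part of the data, rather than an assumption smuggled in by array indexing.

```python
class Cell:
    """A tape cell whose connectivity is explicit data."""
    def __init__(self, symbol="_"):
        self.symbol = symbol
        self.left = None     # reference to the neighbouring cell, or None
        self.right = None

def make_tape(symbols):
    cells = [Cell(s) for s in symbols]
    for a, b in zip(cells, cells[1:]):
        a.right, b.left = b, a
    return cells[0]          # hand back only a reference to the first cell

head = make_tape("1011")
head = head.right.right      # the machine can only move by following references
print(head.symbol)           # -> 1
```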

Once you start representing computation using references, the structures you come up with suddenly start looking a lot more like the programs for replicating physical experiments that I’ve outlined in previous posts. From a digital physics perspective, this is already useful. However, we can go deeper than that. When we compute using references, something strange and wonderful can happen that I’m still figuring out the implications of. In the next post, I’ll show you what I mean.

How long is a very fast piece of string?

June 20, 2012 3 comments

In his work on special relativity, Einstein outlined the relation between time and distance, and in doing so, changed physics as we know it. In recent posts I've outlined a way to rebuild that effect using a discrete network-based approach. However, those posts have avoided addressing one of the most astounding experimental consequences of that theory: Lorentz contraction.

Lorentz contraction, simply put, describes the fact that when an object is travelling fast, it appears to squash up along its direction of motion. This gives rise to the well-known barn paradox, in which a ladder too long for a barn will seemingly fit inside that barn so long as it’s moving quickly enough. (I don’t recommend trying this at home.)
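
For reference, the standard textbook statement of the effect (nothing specific to the model discussed here):

```latex
L = \frac{L_0}{\gamma}, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
% e.g. a ladder of rest length 12 m moving at v = 0.8c (so \gamma = 5/3)
% measures only 7.2 m in the barn's frame.
```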

With the kind of discrete system that I described, objects have fixed length, regardless of how fast they’re going. So how can I possibly claim that the essence of special relativity has been captured?

The answer is simple: There is no actual, physical Lorentz contraction.

Am I denying that Lorentz contraction is an observed phenomenon?  No. Do I contest the fact that it can be experimentally demonstrated to exist? Absolutely not. It happens, without a doubt, but what I’m proposing is that, in reality, Lorentz contraction has everything to do with time, and nothing at all to do with length.

Far from being a wild and implausible conjecture, this idea is actually a necessary consequence of other things we know about nature. For starters, that physical particles are observed to be point-like. At the scales that experiments have been able to probe, particles don’t behave as if they have width. And if particles don’t have width, at least of a kind that we recognize, how can they possibly be compressing? The answer is, they can’t.

So where does the Lorentz contraction we observe in experiment come from? It comes from synchronization. Or, to state the case more exactly, from the relationship between objects whose relative velocity is mediated by messages passed between them.

Consider a fleet of starships readying to take part in a display of fancy close-formation flying. They all start at rest somewhere near the moon, each at a carefully judged distance from each other. Then, the lead pilot of the formation begins to accelerate and the others pull away with him to keep the formation in step. Because the formation is tricky to maintain near the speed of light, the ships use lasers to assess their relative distances. They measure how long it takes for each laser ping to return from a neighbor and use that to adjust their velocity.

Should we expect the fixed formation of starships to exhibit Lorentz contraction just like every other fast-moving object? Of course we should, whether the ships are inches apart or separated by distances wider than the solar system. Should it make a difference if the starships are tiny, and piloted by intelligent bacteria? Or even of zero length? Not at all.

So, in other words, the size of the starships themselves is irrelevant. It’s the testing of the distances between them using lasers that makes the difference. And this, of course, is what particles do. They exchange force-carrying particles to determine how far away they want to be from each other.
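
Here's a quick numerical sketch of my own of the laser-ping argument (it uses nothing beyond textbook time dilation, so it's an illustration rather than a derivation from the network model): if the ships hold the round-trip ping time on their own clocks equal to its at-rest value, their lab-frame separation comes out contracted by exactly the Lorentz factor.

```python
import math

c = 1.0  # work in units where the speed of light is 1

def onboard_round_trip(d, v):
    """Ping time the ships measure (rear -> front -> rear) on their own,
    time-dilated clocks, for lab-frame separation d and speed v."""
    t_lab = d / (c - v) + d / (c + v)            # chase the front ship, then return
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    return t_lab / gamma

def separation_holding_ping_constant(d_rest, v):
    """Lab-frame separation the ships settle at if they keep the onboard
    ping time equal to its at-rest value 2*d_rest/c."""
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    return d_rest / gamma                        # solves (2*d/c)*gamma = 2*d_rest/c

d_rest = 1.0
for v in (0.0, 0.5, 0.8, 0.95):
    d = separation_holding_ping_constant(d_rest, v)
    print(f"v = {v:.2f}c  ->  lab-frame separation {d:.3f} "
          f"(onboard ping time {onboard_round_trip(d, v):.3f})")
```

The onboard ping time stays pinned at its rest value while the lab-frame spacing shrinks by 1/gamma, which is Lorentz contraction without any squashing of the ships themselves.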

What does change, depending on how fast you’re going relative to someone else, is the wavelength of light that you see coming from other people. Things moving toward you look bluer. Things moving away turn red. And the good news is that wavelength isn’t the same thing as length.

To illustrate this, try to build yourself a mental cartoon of a particle emitting a photon. The particle looks like a bowling ball. The photon looks like a length of rucked carpet being spat out the side of it. The length of the carpet sample determines the wavelength of the light. Now take your cartoon, wind it back to the start, and run it forward very slowly. At first, the carpet will be just sticking out of the bowling ball. A few frames onward you can see how rucked and wiggled it is. A few frames after that and the carpet is all the way out and flying on its way, maybe with a tiny Aladdin on it.

We can play this little sequence out because the carpet sample has physical extent. This means that carpet-emission isn’t a single event–it’s a sequence. And this will be true for any model that we build for photon emission that gives wavelength physical meaning.

This leaves us with the realization that one of the following two statements must be true:

  • Photon emission is instantaneous. Therefore particle wavelength doesn’t involve physical length. Therefore we need an extra mechanism to explain why wavelength should be affected by Lorentz contraction.
  • Photon emission requires time. Therefore particle wavelength is real. Therefore it’s possible (perhaps preferable) to model it as a pair of events: a start and an end.

By treating the emission of a photon, or any other messenger particle, as a pair of events, our problems with Lorentz contraction evaporate. The timing of the start-of-photon and end-of-photon events is determined by how fast the emitting particle is travelling. Similarly, the perceived wavelength of a photon by a receiving particle is determined by the subjectively experienced delay between the start-of-photon and end-of-photon events. And, voila, temporal effects substitute for spatial ones.
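
To check that the two-event picture really does reproduce the usual wavelength shift, here's a toy calculation of my own (ordinary special relativity, not the network model): an emitter moving along x fires a start-of-photon event and, one tick of its own clock later, an end-of-photon event; a stationary receiver far down the axis reads off the delay between the two arrivals.

```python
import math

c = 1.0  # units where the speed of light is 1

def received_delay(v, tau0=1.0, receiver_x=100.0):
    """Gap between the arrivals of the start and end events at a stationary
    receiver on the +x axis. tau0 is the emission's duration on the
    emitter's own clock; v > 0 means the emitter approaches the receiver."""
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    t_start, x_start = 0.0, 0.0          # start-of-photon event
    t_end = gamma * tau0                 # the emitter's clock runs slow
    x_end = v * t_end                    # and the emitter has moved meanwhile
    arrive_start = t_start + (receiver_x - x_start) / c
    arrive_end = t_end + (receiver_x - x_end) / c
    return arrive_end - arrive_start

for v in (0.5, 0.0, -0.5):               # approaching, at rest, receding
    beta = v / c
    textbook = math.sqrt((1 - beta) / (1 + beta))   # relativistic Doppler factor
    print(f"v = {v:+.1f}c  delay = {received_delay(v):.4f}  "
          f"textbook factor = {textbook:.4f}")
```

The printed delay matches the textbook Doppler factor in each case, which is the point: the received wavelength can be read entirely off the timing of two events.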

This puts constraints on our choice of simulation model, of course. If we're going to model photons with start and end events, that's going to have implications for the kinds of wave implementation we can reasonably use. Fortunately, though, the implementation I outlined in my posts on the double-slit experiment will work just fine.

I won’t lie to you and say that everything about this approach is solved. How the timing of photon events of this sort translates into energy is something I still don’t have an answer for. And it’s debatable how useful this way of treating relativity will ever turn out to be. However, I think what this model demonstrates is that when it comes to physics as weird as relativity, it’s worth looking for workable implementations that don’t rely on the mathematical tools we usually use. Their requirements can shed light on assumptions in the theory that we’re often not even aware that we’re making.

What does it all mean?

June 12, 2012 7 comments

In the last few posts, I've talked a fair bit about relativity and have struggled to make my thinking on the subject clear enough to read. What that process has revealed to me is that some topics in science are just hard to talk about. In part, that's because they're counter-intuitive, but there's a lot more to it than that. A lot of what's going on is, I'd propose, social, and it says something deeply concerning about how we engage with science.

Open any number of pop-science books that attempt to give you a grand overview of the universe and somewhere near the start there are usually the same two chapters. One of these is on relativity and the other is on quantum mechanics. These chapters are the author’s attempt to explain the ‘wacky’ things that happen in physics. In most cases, the author ends by saying something like, “this might sound incredible, but it’s what we see in experiments, so suck it up”.

And this is usually where real scientific dialog with the public stops. Subsequent chapters in these books are usually short on specifics and relatively thick on prose like “Geoff and I were sitting eating a sandwich, feeling sad, and suddenly it occurred to me that if we ran the same simulation backwards, it would give us the eigenvectors we were looking for, only with the parameters inverted! We raced back to the lab without even finishing our lunch!”

Different books make the break in different places but the effect is usually the same. The physicist in question gives up on trying for an intuitive explanation of what they were doing and resorts to personal drama to try to retain reader interest.

Underpinning this switch is the belief that the only way to really understand the ideas being discussed is to do the math. Without the math, you just can't get there. At some level, the math is the understanding. I take issue with this notion pretty strongly. Not only is it dead wrong. It's counter-productive. In fact, it's an angry badger driving a double-decker bus into the side of the temple of science.

Let’s go over some of the problems that this ‘math equals understanding’ approach creates.

First, it causes the public to disengage. People feel that if they aren’t good at math, they’ll never get it. Yet life goes on, so science can’t possibly be relevant to them. And, at the end of the day, this creates funding problems.

Second, and far worse, is that the people who do the math and get the answer right feel like they have understood it, even though deep down, it still doesn’t make any sense. They sweep that feeling under the rug and press on but become increasingly defensive when pressed on topics that make them feel uncertain. This just makes the gulf between scientists and everyone else all the wider.

On top of this, attempts to communicate the math, rather than the meaning, to the public end up creating a folk-notion of how physics ‘has to be’. This creates a whole stew of junk reasoning when people try to extend that folk-notion. For instance, in relativity, people are told that you can’t go faster than light because if you did, you’d be travelling backward in time in someone else’s reference frame. This is incredibly, insanely wrong. And it’s just one step from there to “if I go faster than light I go backwards in time”.

Perhaps most horribly of all, this process creates physicists who can’t uncouple the tools they’re used to using from the problems they’re trying to solve. This creates massive blind-spots in the reasoning of some of our brightest and finest researchers, because these people are never tested to see whether they have understood the principles in the absence of the math.

Here's an example from relativity: "spacetime exhibits Lorentz-invariance". This might sound fine, until you think about the fact that we can only ever examine spacetime by passing things through it. We have no idea what properties spacetime exhibits, because we can never directly test it. All we can know about is the things we can observe. Saying that tests on moving objects yield a pattern of Lorentz invariance is fine, but often, that's not what's said.

Here’s another relativity example from my own life. I sat down in a cafe a few years ago with a grad-student in particle physics to talk over some things I wanted to understand. We got on to the subject of using a compact dimension for spacetime interval in the way I outlined in the last post. He pulled a face.

“I don’t think you can do that with just one dimension,” he said. “I think you need three.”

We debated the point for some time, even breaking out some equations on a napkin. In the end, he still wasn’t convinced, though he couldn’t say why, or point out a hole in my reasoning. All this despite the fact that his math skills were far in advance of my own.

Why did he make the assertion that he did, even though fifteen minutes of basic logic crunching could have demonstrated otherwise? Because the way relativity is taught makes use of the idea of Lorentz boosts. People use six dimensions to model what’s going on because it makes the math easier. They never just use one dimension for s. This fellow, extremely bright and talented though he was, was wedded to his tools.
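
For anyone who hasn't read the earlier post, the s in question is the usual invariant spacetime interval (treating it as a single compact dimension is that post's idea, not standard practice):

```latex
s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2 .
```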

So where do we go from here? What do we do? If science has a problem, how do we solve it?

I’d propose that all math can ever do is supply a relation between things. “If this is true, then that is true”. Math gives you a way to explore what the implications of an idea are, without ever saying anything about the idea itself, other than whether it’s self-consistent. In essence, math in physics tries to describe how things behave solely in terms of constraints, and without ever trying to provide an implementation. In other words, it deliberately avoids saying what something means, and says only what it does. This is because meaning, I’d propose, is a property that comes with a choice of specific model.

This is why physics tends to become fuzzy and unsatisfying when it diverges from physical experience. We can describe relativity or quantum mechanics easily using math by defining the constraints on the behavior we see. However, we are used to having a specific model to back our reasoning up–the one provided by intuitive experience of the world. When that model goes away, we lose touch with the implications of our own logic.

Does this mean that we are forced to rely on math for insight at that point, as is commonly proposed? No. In fact, I'd suggest that the reverse is true. This is the point at which we should trust math less than ever. This is because self-consistency is only as good as the conjectures you apply it to. I think it was Bertrand Russell who said that from a false premise you can prove anything. The only way to determine whether our physical premises are correct is to have more than one way of arriving at confidence in their validity. That's why physical intuition is a vital tool for preventing self-consistent nonsense from creeping into theory.

Hence, instead of just leaning on our analytical crutch, we should strive harder than ever to find metaphors for physical systems that do work, and which bring phenomena such as relativity within easy mental reach.

And this, to my mind, is exactly where digital physics can help. Digital physics asserts that we should only consider a physical theory reasonable if we can construct a viable implementation for it. If a system is self-consistent, but non-implementable, then we shouldn’t expect it to match nature, as nature clearly is implemented, by virtue of the fact that we are witnessing it. By requiring concrete implementations, we force ourselves to create metaphors with which to test our understanding.

In other words, if the math leaves us asking the question, ‘what does it all mean?’, then we haven’t done enough digital physics yet.

Does this mean that any one of the implementations we pick is correct? No. In fact, the more workable implementations, the better. Digital models are not theories.

Does it mean that digital physics represent a substitute for mathematical reasoning? No, of course not. Math lies at the heart of physics. It just can’t exist in a vacuum of understanding.

Digital physics, then, is a different tool, through which the set of theoretical models of nature can be tested and understood. It’s a way of ruling out theories that don’t add up even if the math works out. It is, I would propose, the best antidote to Geoff and his half-eaten sandwich that physics has going for it.