
Reflections on Waves

May 16, 2012

In my recent post series, Making Waves (starting here), I outlined a very simple system for duplicating the kinds of effects seen in the Double Slit experiment, which Richard Feynman famously described as “the only real mystery in quantum mechanics”. The approach I used was completely discrete, and one for which pseudo-random numbers will happily suffice instead of the ‘pure randomness’ that’s often stated as a prerequisite for any QM model.

In the wake of these posts, I decided that it was only appropriate to talk a little about the limitations of the approach I outlined, and also to address some of the questions or yes-buts that I imagine some readers may have.

First, the limitations.

Relativity: It’s not that hard to come up with different interpretations of QM, so long as you don’t have to worry about reconciling them with relativity. Any Causal Set enthusiasts looking over my work might well point out that my spatial model isn’t Lorentz invariant, and is therefore hard to take seriously. As it stands, this observation is absolutely right. And we can go further. In Scott Aaronson’s review of A New Kind of Science, which I have mentioned in previous posts, he points out that a network-based approach to QM simply won’t work with a discrete model of spacetime, if we respect the Minkowski metric in that model. Fortunately, as I’ve outlined in previous posts, we simply don’t have to use that metric. Using causal sets to describe spacetime is a nice approach with lots of potential, but by no means a necessity. So while the model I’ve mentioned here is limited, future posts will show at least one way it can be extended.

Bell inequality violation: The particle I use here doesn’t have any properties as sophisticated as spin. It’s pretty clear, then, that as it stands, we wouldn’t be able to extrapolate it to that most marvelous demonstration of quantum effects at work: Bell’s experiment. However, the reason for that is a little different from the one that makes most models fall at this hurdle. Usually, the problem lies in getting around the limits imposed by locality. With a network-based approach, non-locality doesn’t present a problem. However, making particles with persistent orientation is harder. While I’ve been able to produce such particles, they still have limitations, and don’t yet follow all paths.

Scale: The algorithm I described in the last post isn’t among the world’s most efficient, and it’s hard to imagine it replacing lattice QCD any time soon as the simulation engine of choice. So while the implications for QM may be interesting, it’s hard to scale the approach up enough to show what it’s really capable of. This means that the results I get are going to be noisy and incompletely convincing unless someone happens to have a whole bunch of supercomputer time that they’re giving away. This is something I’m prepared to live with.

And now, some yes-buts.

Randomness: People are fond of saying that QM is random, and therefore that exploring an algorithmic approach such as the one I’ve shown doesn’t make sense at some fundamental level. However, this statement is just wrong. You can know that a variable is unpredictable, but you can never know that it’s random, unless you have an infinite amount of computing power with which to prove it. So long as you have finite computing power, the variable you’re considering may simply be the output of a computing machine that has one bit more reasoning power than yours does. Thus you can say that it’s effectively random from your perspective, but no more. And when considering a universal algorithm, it’s completely acceptable to propose algorithms that use the entire state of the universe at any one iteration step to calculate the next. Thus, unless you’re outside the universe, you’d have no way to predict the behavior of even a single atom.
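To make that point concrete, here’s a toy illustration of my own (not part of the model itself): a linear congruential generator, using the classic Numerical Recipes constants. It’s entirely deterministic, yet to anyone who doesn’t know the rule and the seed, its output is effectively random.

```python
def lcg(seed, n):
    """Generate n pseudo-random values in [0, 1) from a seed.

    Completely deterministic: the same seed always yields the same
    sequence, but without the constants and the seed, an observer
    can't distinguish the output from 'real' randomness.
    """
    x, out = seed, []
    for _ in range(n):
        x = (1664525 * x + 1013904223) % 2**32  # Numerical Recipes LCG
        out.append(x / 2**32)                   # scale into [0, 1)
    return out
```

Run it twice with the same seed and you get identical sequences; change the seed by one bit and the output is unrecognizably different, which is all a simulation like mine requires.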

The most a theoretical model can do is assert that quantum events are random, even though no proof can ever be supplied, and that is what we currently do. I confess that I’m not a big fan of faith-based approaches, when it comes to randomness or anything else.

Efficiency: In Seth Lloyd’s eminently readable pop-science book, Programming the Universe, he suggests that the universe is a quantum computer computing itself. Why not an ordinary computer, given that the set of problems that can be solved by both types of machine is exactly the same? Because quantum computers are massively more efficient. To his mind, it doesn’t make sense to consider nature as an ordinary computation because achieving what nature does takes ordinary computational models huge swathes of time.

However, when considering algorithms that potentially run the universe, and through which all reference frames are determined, I would propose that efficiency is irrelevant. In order for us to care about the efficiency of the algorithm, we’re also tacitly proposing that someone is making a design choice about the universe, which seems like a ridiculous assertion to me. The reason to pursue a computational model of nature is because it presents a more concrete, more reductionist, and more scientific view of how the universe operates. Not less. We don’t need someone to have designed the universe to justify digital physics any more than a continuum theory requires that someone be running the universe on a large array of valves and rheostats.

Usefulness: The reaction to the digital physics approach to QM that I have the most respect for is the experimentalist shrug. It’s completely fair to say at this point that the kind of algorithm I’ve outlined is far less useful as a scientific tool than what is currently being used. It’s also fair to say that experimental evidence for discrete spacetime is scant and elusive. And because these things are true, I see no reason for most physicists to alter their approach in any way.

However, I have two caveats. First, those theorists considering Theories Of Everything have no excuse not to consider discrete models. The set of physical systems that can be described by them is very much larger than the set that is conveniently differentiable. To assume that the universe lies in the differentiable set is rather like the man who looks for his car keys in the study rather than the street, because the light is better indoors. Such attitudes are particularly indefensible when, rather than considering systems of minimal complexity, we are instead expected to suspend disbelief about parallel universes, hidden dimensions, and tiny vibrating strings with no width.

The second caveat is that I suspect the game is about to change. The coming era of quantum computation will test our understanding of QM more thoroughly than anything that has come before, and I will be heartily surprised if there are not surprises that come with it. While digital physics represents a philosophical distraction now, I very much doubt that the same will be true in a hundred years.

Making Waves 4

May 8, 2012

This is my fourth and final post about how to duplicate the effects of the famous double-slit experiment of quantum mechanics with just a few pages of code. At the end of last time, we’d built a self-interfering excitation wave, and applied it to a network with three lines cut into it to make it behave like a cardboard screen. The results looked something like this.

Now, in order to explain how we get from this pink blob to one of the greatest mysteries of modern science, let’s first recap a little about what makes QM so odd.

Last time, I talked about Young’s fringes, and the fact that light behaves like a wave. This model persisted for a long time until Albert Einstein and his chums came along at the turn of the Twentieth Century and pointed out that light had to be both a wave and a particle. This is because every time you measure light, it always comes in chunks, despite the fact that it makes diffraction patterns that prove that it has to be traveling like a wave.

What this means is that when we run the same experiment that Young did, but with a lamp so dim that it emits just one photon at a time, we still get the same stripes, but they’re clearly made up of dots from individual impacts. In fact, it looks something like this.

Furthermore, if we put detectors at the slits instead of on the screen, to test which way each particle is going, we only ever get a detection at one slit or the other, never half a detection at both. If the particle only goes one way or the other, how can it be interfering with itself?

It gets worse. If we put a detector at just one of the slits, we get a detection half the time, and the pattern of stripes goes away. It’s as if the particle knows that we’re looking at it. How can the particles possibly tell if we’ve put a detector at one of the slits?

The answer to these problems is easier than it looks. The way to resolve them is to think of a QM particle as something less like a brick, and more like an Oscar. Bricks get thrown. Oscars get awarded. The whole time that a brick is flying towards someone, it’s clear where it’s going. However, before an Oscar is awarded, it might land with anyone on that year’s shortlist. Before the Oscar is awarded, it doesn’t make sense to say who’s won it, but that doesn’t mean it’s not real. It’s just not awarded yet.

We can think of the Oscar as having a probability of landing with each of the contenders, and that’s how most physicists think about QM. But what’s important here is just that there’s a shortlist and the rules of the competition say that only one person can receive the prize.

We can imagine a crazy actor, who can’t stand the suspense, cornering one of the Oscar judges and demanding to know whether he has the prize before the judges have even finished deliberating. The judge has to keep to his word, regardless of what he tells the actor, so he’s forced to make a choice to say yes or no. Either way, he buggers up the deliberation process that would have happened with his colleagues afterwards. That actor is just like the detector we put at one of the slits.

So we can achieve the same effect with a particle by saying that it simply maintains a list of places where it might be detected. In other words, by using exactly the approach we’ve described in the last few posts. The only thing we need to make our particle quantum-mechanical, then, is a rule that tells the particle that any time we demand to know where it is, it has to tell us a single location.

We do this by making some of the nodes in our network into ‘detectors’. Whenever our particle passes over a detector, we add it to a list. Then we make a decision whether or not to give up information about the particle at that time. If we decide that we’re done, we pick one of our detectors at random from our list and announce that the particle has arrived. If we decide to keep going, we don’t pick any of them.
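As a sketch, here’s how that detection rule might look in Python, using the simple two-set wave from an earlier post for brevity. The network representation (a dict mapping each node to its set of neighbors), the function names, and the random stopping decision are my own assumptions about details left open above.

```python
import random

def detect(neighbors, start, detectors, steps, stop_prob=0.05):
    """Spread a wave over the network, recording detector nodes it
    passes over.  At each step, randomly decide whether to 'give up
    the information'; if so, announce one recorded detector at random
    as the particle's arrival point.  Returns None if the particle
    is never pinned down within the step budget."""
    hits = []                  # detectors the particle has passed over
    a, b = {start}, set()      # leading and trailing edges of the wave
    for _ in range(steps):
        frontier = set()
        for p in a | b:
            frontier |= neighbors[p]
        frontier -= a | b      # only unvisited points join the edge
        a, b = frontier, a
        hits.extend(a & detectors)          # record any detectors touched
        if hits and random.random() < stop_prob:
            return random.choice(hits)      # announce a single location
    return None
```

Note that putting a detector close to the source (say, at a slit) fills the list early, so the announcement tends to come sooner, which is exactly the asymmetry described above.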

Thus, if we put a detector at one of the slits, we’re likely to get an answer much sooner than if we just put them on the screen, and the answer will be different than if we had left the wave alone. That’s it. No mystery. No consciousness. No crazy, brain-bending math.

In fact, we can go further. We can say that, in this interpretation, quantum particles behave like Oscars rather than bricks because they have to. Everything in our model is in more than one place at a time, because the only way to assert that anything in the network has a location is by connecting it to nearby points with links. In other words, something like QM naturally falls out of the system because it’s discrete and irregular.

So does this approach work? If we run the simulation, do we get the stripes we’d like to see? Here’s what happens if we make the bottom of our simulation into a sort of detector and collect detection events (green dots).

Lo and behold, we get stripes. They’re somewhat crappy, for sure, but the results are consistent, regardless of how I set the experimental parameters. Here’s another example.

The results look poor because I’m taking shortcuts. The code takes a long time to run, and I’m collecting more than one detection for the same particle, which isn’t what happens in nature. To make the result cleaner, I’d have to collect each detection independently, and tweak the spatial graph between each event. While that’s possible, it would take days to run with my current implementation, and nobody’s paying me to do this stuff.

I haven’t tried to match the results of what you’d see in a real QM experiment because that’s not the point here. I also haven’t tried to capture all of the features of QM in my demonstration, because that’s not the point either. The aim is just to demonstrate that the kind of ‘spooky’ results that people talk about in QM aren’t really that spooky at all. I can get them on my Mac, even though it might take me a while.

The reason why QM results might seem spooky at first glance is because our notion of locality hasn’t been updated since the 19th Century. Even though we’re now spending most of our waking hours embedded in a complex, dynamic network of information, people are generally reluctant to imagine that the universe they inhabit works in exactly the same way. That’s because we have a really great set of mathematical tools for dealing with 19th Century-style smooth large-scale systems, and not many yet for understanding networks.

So if this solution still isn’t quite clear to you yet, ask yourself this: Where is the Google homepage?

My answer: Anywhere that points to it.


Making Waves 2

April 23, 2012

Last time, I promised I’d show you how to simulate quantum effects in the comfort of your own home. However, I didn’t get that far. I showed you how to make simple waves that would travel across an irregular network, and that was about it. While the waves I showed you did allow us to think about the ways in which elements of discretized space were similar to waves or particles, they didn’t look much like the kind of waves we’re used to seeing in, say, water, let alone electromagnetism. So, this time, we’re going to start making the waves a little more realistic.

First, let’s remind ourselves what our waves from last time looked like.

Pretty, perhaps, and with many useful properties, such as the fact that the same algorithm will work in any number of dimensions. However, these waves have one obvious glaring flaw: they’re not round. So let’s fix that by using the following rule.

  1. Make two sets, A and B, and put a single point in A.
  2. Find all the points that are neighbors to the elements of set A or B, but not already in A or B.
  3. Make those elements the new set A, and make the elements that were in A the new set B.
  4. Go back to step 2 and iterate.

As you can see, this rule is even simpler than the one we used last time, and this time the waves are always round.

Next, let’s give the wave a wavelength. To do this, let’s have several sets instead of just two. How about five? Let’s update the rule as follows.

  1. Make a list of five sets, numbered One to Five, and put a single point in set One.
  2. Find all the points that are neighbors to any of our sets, but which aren’t already in the sets.
  3. Make those elements the new set One, make the new set Two equal the contents of the old set One, and so on until we reach the end of the list of sets.
  4. Go back to step 2 and iterate.
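In code, under the same assumed network representation (a dict of neighbor sets), the five-set version just shifts a list of sets along by one each step:

```python
def wavelength_wave(neighbors, start, steps, n_sets=5):
    """Run the 'wave with wavelength' rule: sets[0] is set One (the
    leading edge); older edges shuffle down the list and the oldest
    drops off, giving the wave a wavelength of n_sets."""
    sets = [{start}] + [set() for _ in range(n_sets - 1)]  # step 1
    for _ in range(steps):
        occupied = set().union(*sets)
        frontier = set()
        for p in occupied:
            frontier |= neighbors[p]       # step 2: neighbors of any set...
        frontier -= occupied               # ...not already in the sets
        sets = [frontier] + sets[:-1]      # step 3: shift everything down
    return sets
```

Because the oldest set falls off the end of the list, a point can be revisited once the wave has moved n_sets steps past it, which is what gives the pattern its periodicity.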

When we run this rule, it looks like this.

However, we still have a problem, and it’s crucial. Our waves don’t interfere with each other. In this respect, they’re completely unlike the kind of waves we want for quantum effects. To solve this, we have to do something a bit clever. Rather than using a single wave, we’re going to use a collection of waves, or a ‘meta-wave’, if you like. Furthermore, we’re going to say that some of the points in our network are ‘special’. We’ll pick these special points at random and make the number of them just a small fraction of the total set of points in the network.

What we’re going to do at each turn is advance all the waves in our collection. But if one of the waves bumps into one of these special points, we’re going to add a new wave to our collection, starting at that point. To make things a little clearer, here’s the new rule.

  1. Make a collection of waves that each use the ‘wave with wavelength’ rule listed above. Start it off with just one wave, containing a single point.
  2. Advance all the waves in our collection by a single step.
  3. If one of the waves has just advanced onto a special point (that is, its set One now contains a special point), start a new wave at that point, and add it to our collection.
  4. Go back to step 2 and iterate.
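Here is a sketch of the meta-wave rule, again under my own assumed representation: each wave is a list of sets, as in the ‘wave with wavelength’ code, and the collection is just a Python list of them.

```python
def step_wave(neighbors, sets):
    """One step of the 'wave with wavelength' rule described earlier."""
    occupied = set().union(*sets)
    frontier = set()
    for p in occupied:
        frontier |= neighbors[p]
    frontier -= occupied                   # only unvisited points join
    return [frontier] + sets[:-1]          # shift every set down by one

def meta_wave(neighbors, start, special, steps, n_sets=5):
    """Advance every wave in the collection each turn; whenever a
    wave's leading edge (set One) reaches a special point, seed a
    fresh wave at that point and add it to the collection."""
    waves = [[{start}] + [set() for _ in range(n_sets - 1)]]  # step 1
    for _ in range(steps):
        advanced = []
        for wave in waves:
            wave = step_wave(neighbors, wave)   # step 2
            advanced.append(wave)
            for p in wave[0] & special:         # step 3: spawn a new wave
                advanced.append([{p}] + [set() for _ in range(n_sets - 1)])
        waves = advanced
    return waves
```

Since each wave keeps its own sets and they never consult one another, two waves can happily pass through the same region of the network, which is the self-interference property we’re after.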

Now if we run our rule, we get something that looks rather different. In order to see what’s going on, I’m going to use a different coloring system, to let the effects of the different waves combine together.

Now we’re getting somewhere interesting. It might not look like much yet, but unlike a normal excitation wave, our meta-wave can pass through itself. This is because the individual waves it’s composed of don’t share any information, and each individual wave only contributes a small amount to the overall result. Furthermore, the meta-wave never loses strength. Though it becomes ever more costly to compute, it can go on growing forever. Next time we talk about waves, we’ll put this idea of interference to the test and show you some of the exciting things that it gives us.