My last post was a little ranty, perhaps. So let's be fair to the physicists. What physicists mean by randomness is that when they run an experiment, unpredictable results are seen. Furthermore, when viewed in aggregate, these unpredictable results match certain probability distributions exactly. And given that there are no parameters one can control in these experiments to predict what the answers will be, the reasoning goes that we might as well treat the outcomes as random, and build our theory accordingly.
This is fine, IMO, so long as you’re not trying to build an ultimate theory of physics. It’s a good idea, even, in the same way that spherical cows are a good idea. However, if you’re trying to get the answer right, and describe the smallest levels of physical existence, then, by definition, mere approximations won’t cut it.
However, this assertion, on its own, probably doesn't say or explain enough. For instance, what about Bell's Inequality? Bell's Inequality experiments absolutely rule out local realism: local hidden variable theories simply can't work. Isn't that a strong indicator that there is inherent randomness in the universe?
In short, no. This is because I can simulate Bell’s Inequality results in the comfort of my own home without resorting to quantum randomness once. This is doable because Bell’s Inequality says nothing about non-local hidden variable theories.
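To make that concrete, here is a minimal sketch of what such a home simulation can look like. This is my own toy construction, not a full physical model: the "hidden variable" comes from a seeded pseudo-random generator, so every run is fully deterministic, and Bob's outcome rule is allowed to depend on both detector settings, which is precisely the non-local loophole that Bell's theorem leaves open. In aggregate it reproduces the singlet correlation E(a,b) = -cos(a-b) and so violates the CHSH bound of 2 that constrains local theories.

```python
import math
import random

def singlet_trial(a, b, rng):
    """One run of a deterministic-but-non-local singlet simulation.
    a, b: Alice's and Bob's detector angles (radians)."""
    lam = rng.random()             # shared hidden variable, deterministic given the seed
    A = 1 if lam < 0.5 else -1     # Alice's outcome depends on lam alone
    # Non-local step: Bob's rule uses BOTH settings a and b.
    B = -A if rng.random() < math.cos((a - b) / 2) ** 2 else A
    return A, B

def correlation(a, b, n, rng):
    """Average of A*B over n trials; converges to -cos(a - b)."""
    return sum(A * B for A, B in (singlet_trial(a, b, rng) for _ in range(n))) / n

rng = random.Random(42)            # no quantum randomness anywhere
n = 100_000
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(a, b, n, rng) - correlation(a, bp, n, rng)
     + correlation(ap, b, n, rng) + correlation(ap, bp, n, rng))
print(f"CHSH S = {S:.3f}")         # |S| near 2*sqrt(2) ~ 2.83, beyond the local bound of 2
```

The only "spooky" ingredient is that the function computing Bob's result gets to see Alice's setting; everything else is ordinary deterministic arithmetic.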
The most well known of these is Bohmian mechanics, an approach first presented (in de Broglie's pilot-wave form) in 1927 and later developed by David Bohm. This method has been thoroughly explored by physicists, but most of them walk away from it fairly unsatisfied, because it requires that every point in the universe be able to interact instantaneously with any other. The math of Bohmian mechanics is set up to ensure that its predictions come out exactly as they do for standard QM, while keeping the system deterministic. But, given that this doesn't add any expressive power, and makes the model non-local, it feels like a fairly poor compromise.
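For reference, the determinism in Bohmian mechanics lives in its guidance equation: the particle's actual position Q(t) is steered by the wave function ψ, which itself evolves under the ordinary Schrödinger equation. For a single particle of mass m it reads:

```latex
\frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{x = Q(t)}
```

All of the apparent randomness is then pushed into our ignorance of the initial position, which is assumed to be distributed according to |ψ|². For many particles, each coordinate's velocity depends on the instantaneous positions of all the others, which is where the non-locality comes in.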
Fair enough. But Bohmian mechanics isn't the only way to build a non-local theory. As we've pointed out on this blog, if you're looking for a background-independent model of physics, you have to start thinking carefully about how spatial points are associated with each other. And if you follow this reasoning in a discretist direction, you generally end up building networks, whether you're into causal set theory, loop quantum gravity, quantum graphity, or any of the other variants currently being explored.
And, as soon as you start looking at networks, it’s clear that there are perfectly decent ways of non-locally connecting bits of the universe that are not only self-consistent, but provide you with tools that you can use to examine other difficult problems in physics.
If I seemed to be disparaging physicists for not considering hidden determinism in the universe in my last post, that was not my intention. I certainly don't mean to point the finger at any specific individuals, but I do believe that pointing the finger at the culture of physics in this regard is important.
We have experimental evidence of the non-locality of physical systems. However, we have no evidence that the universe runs on a kind of non-computable, non-definable randomness that flies in the face of what we know about information and the mathematics of the real numbers. Doesn’t that mean that we should be working a little harder to put together some modern deterministic non-local theories? Is it really better to hide under the blankets of the Copenhagen interpretation because this problem is hard?
After all, while issues of interpretation are broadly irrelevant to most of the day-to-day business of doing physics research, there is the small matter of quantum mechanics and relativity remaining unreconciled for the last hundred years. I would venture to propose that if we ever want to close that gap, having the right interpretation of quantum mechanics is going to be an important part of the solution.
Ars Technica has a nice article on a piece of theoretical work by J.-D. Bancal et al. The upshot is that if your explanation for how quantum mechanics works is anything other than non-local, it leaves open the possibility of faster-than-light communication. (Thanks to Dan Miller for pointing me at it.)
I have mixed feelings about this idea, as I'd love for faster-than-light communication to be a possibility, and I'm delighted that someone has come up with a way of determining whether it can be done. However, the flip side is that I'm pretty certain that QM is fundamentally non-local, as I outlined in my post on replicating particle self-interference. The point here is that non-locality doesn't rule out discrete models. If anything, it supports them, as it encourages us to think of wave functions as sets of non-locally distributed points, finite or otherwise.
What this result doesn't say, unless I'm missing something, is that the currently fashionable, complex-number-based model of QM is literally true. You can still take exactly the same result and reframe it in terms of another equivalent model, such as Bohmian mechanics, and get something that looks completely deterministic.
Hence, while the result is nifty, the goal posts for viable theories of physics remain doggedly where they were.