Archive
Citizen Scientists
This week, I had a very interesting discussion with someone whom I had never met, who had a digital physics idea that they wanted to share. I found myself in the position of giving feedback on work I was not familiar with, and it occurred to me that I should say something about the experience on this blog. I want to make it clear how I feel about citizen scientists, the concept of ‘crackpots’, and digital physics in general.
First, let us be completely honest. Digital physics is considered a crank topic by many mainstream physicists. You need look no further than the blog of Lubos Motl to see just how fervently this is felt, or the level of anger that can be directed towards the notion of a discrete universe.
For the most part, the reasons for this contempt come down to laziness. Those who don’t care to engage assume that a digital universe must involve a Cartesian spatial grid on which some number of cells are turning on and off: the classic cellular automaton (CA) approach. This picture looks utterly incompatible with both quantum mechanics and general relativity, so they consider the entire notion to be stupid.
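For concreteness, here is the sort of thing being dismissed: a minimal sketch of Conway’s Game of Life, the canonical cells-on-a-grid CA. The grid size and the glider seed are arbitrary illustrative choices of mine.

```python
def life_step(grid):
    """One synchronous update of Life on a small wrapping grid."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Count the eight neighbours, wrapping at the edges.
            n = sum(grid[(r + dr) % h][(c + dc) % w]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A dead cell is born with 3 neighbours; a live one
            # survives with 2 or 3. Everything else switches off.
            new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
    return new

glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
for _ in range(4):  # after four steps the glider has drifted one cell diagonally
    glider = life_step(glider)
```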
Those who do engage attempt to transfer the familiar tools they are used to (the Minkowski metric, quantum fields, and so on) into the discrete domain without modification. When they discover that this approach fails, they conclude that they have given the idea a fair try and that it is clearly inadequate. Usually, they do not look any deeper.
However, there is another reason why people have contempt for discrete approaches: they are both intuitively easy to grasp and easy for amateurs to explore with computers. This means that a great many people who are fascinated by science can perform basic simulations and become excited by the suggestive patterns that they find. Feeling that they have something to contribute, these people suddenly become the most vocal of physics’ amateur would-be contributors.
For professionals who have worked hard to carve out a place in an extremely competitive field, suddenly being vigorously courted by people claiming to have new physical theories can be galling. This is particularly true when those doing the courting have no notion of what has been tried already, incomplete grounding in physical mathematics, and an apparently unshakeable conviction that they have discovered something immense.
The simple fear of an encounter with someone who might be like that is enough to send some physicists running. I know because I have seen them run.
What is utterly stupid about this state of affairs is that those members of the public who are most interested in physics and most willing to engage often end up feeling the most shut out. Physics, particularly particle physics, is a field struggling for funding at a time when the cost of running groundbreaking experiments has skyrocketed. To throw away contact with those members of the public most likely to act as cheerleaders for the field doesn’t help anyone. Furthermore, disengagement from a public who want to exercise skepticism means that confidence in abstruse domains of physical theory, such as string theory, becomes ever harder to attain. How are the public to differentiate between M-brane theory and their own concoctions when dialog is expected to be one-way, as if from priests to the masses? The answer is, usually they do not, and frankly, should not.
How do we fix this? Work is needed on both sides. The physics community needs a better attitude towards so-called ‘crackpots’. Such people are usually not crazy or stupid, just untrained and enthusiastic. Physics needs to find more things for interested members of the public to do, and more explicit ways for amateurs to help out. It needs to swallow its fear of strange people (an irony in itself). Guidelines for public engagement need to be written, to ensure that there is more of it, not less. If there are common misconceptions about physical theory that amateur theorists fall foul of, they need to be pooled and collated as a series of challenges.
Having said this, the bulk of the work lies on the shoulders of aspiring citizen scientists. Professional physicists hold themselves and each other to incredibly high standards. They have little patience for would-be contributors who seemingly do not. This means that anyone from outside the field who wants to join in needs to do their level best to hold themselves to levels of rigor at least as high. Their work needs to be transparent, fully logical, and expressed in terms that make it as easy as possible for physicists to read. Anything less than that is simply not good enough.
I have been incredibly lucky. My wife is a prize-winning astronomer. My housemate is a cosmologist. I have many dear friends who are physicists and mathematicians. Without exception, they have called me out when I have made statements I cannot substantiate. They have forced me to examine my own work with a critical eye. They have been unrelenting in making me describe what I have actually achieved, not what I would like to imagine I have done.
I believe that all citizen scientists can do this. Furthermore, we can do it for each other. We can, and must, exercise the highest degree of skepticism in our own work that we possibly can. Otherwise, we will never be heard, and the science we love will pay the price.
Me versus Multiverses
I don’t like the idea of a multiverse. I think it’s bad science. This might sound odd coming from someone who has just recently blogged about how all discrete universes simpler than our own are real. But I see a difference. In fact, the term ‘multiverse’ makes me groan each time I hear it.
Why? Because in order for the idea of a ‘multiverse’ of the sort that’s commonly envisaged to be correct, it requires that we buy into a very large amount of stuff whose existence we can never prove or disprove. (Just to make it clear exactly what kind of multiverse I don’t like, it’s the kind that invokes the ‘string-theory landscape‘ and asserts that a very large number of independent universes share some kind of physical reality with our own.) This notion strikes me as unscientific.
One might argue that I have done exactly the same thing with my assertions about mathematical reality. However, the fact that I can count demonstrates that the integers exist, at least up to the value at which I’ve counted. Because the act of counting provides a complete implicit description of each integer, I have duplicated that pattern within my own universe. Hence, the pattern is ‘real’. The same cannot be said for vast tranches of hypothetical spacetime, each requiring eleven smooth dimensions for their description.
The most eloquent defender of the multiverse notion is, in my opinion, Max Tegmark, the same man who proposed the Mathematical Universe Hypothesis. I quote (via Wikipedia):
A skeptic worries about all the information necessary to specify all those unseen worlds. But an entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler. Similarly, the set of all solutions to Einstein’s field equations is simpler than a specific solution. The former is described by a few equations, whereas the latter requires the specification of vast amounts of initial data on some hypersurface.
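His point about algorithmic information content is easy to make concrete. Here is a minimal sketch; the particular ‘hugely long’ number below is an arbitrary illustration of mine.

```python
from itertools import count, islice

def all_integers():
    """The whole ensemble: a trivial program that, given unlimited
    time, enumerates every integer: 0, 1, -1, 2, -2, ..."""
    yield 0
    for n in count(1):
        yield n
        yield -n

print(list(islice(all_integers(), 7)))  # [0, 1, -1, 2, -2, 3, -3]

# Versus one particular member of the set, which must carry all of its
# own digits: its description is far longer than the generator above.
lonely = 73498571938475693847201934857203948571203985
```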
He’s dead right that a rule can be simpler than a result. However, as we’ve seen in previous posts, exactly the logic he invokes here to justify the multiverse rules out the existence of universes any more complex than the minimum needed to describe our own. That same logic also makes it clear that those universes are mathematically disjoint from each other. So while they may share a mathematical reality, sharing a physical reality they almost certainly do not.
So if it’s easy to reach the conclusion that a multiverse is an ugly idea, why is it so frequently invoked? Because, I would propose, it makes it easier to justify the usage of models that are otherwise hard to support.
This is the second reason I don’t like the notion of a multiverse. Not only does it require an unscientific abundance, but it smells of a kind of theoretical cosmology that is slowly bankrupting itself. Requiring that we believe in a very large number of things we can never witness is fine, so long as it’s the only viable explanation. (Sherlock Holmes springs to mind.) However, when it comes to theoretical cosmology, there are plenty of options out there that have been barely explored.
This is not to say that I have something better to replace the current favorite models, because I do not. I’m not a theoretical cosmologist. However, as an engaged citizen scientist, it’s my job to exercise skepticism about any explanation I’m presented with that I either don’t understand or that appears to break in the face of simple logic.
If I become better informed and change my mind, that’s okay, because exercising doubt is the best way to know what questions to ask.
The eeriest result I’ve seen in years
Here’s a recent result posted online which, if it’s true, is incredibly eerie.
The upshot of it is that these guys claim to have an experimental quantum entanglement setup that can affect events in the past. Not anywhere in the past, mind you, just specific, isolated events within the same experiment. Still, it’s an amazing result if someone manages to successfully duplicate it.
I rather suspect that they won’t succeed, and that the whole thing will turn out to be dodgy, but here’s the thing: if the result is correct, it probably invalidates the approach that I outlined in the last few weeks of posts. This is because, while the approach I outlined resolves problems with non-locality, it maintains a strict ordering of cause and effect.
I wanted to share this with you because in digital physics, we like refutability! There are other discrete approaches in which this kind of result is probably fine, but my favorite model is probably toast.
I’d like to share with you what I think is wrong with this result, but first I should probably summarize the result, for those who don’t want to click around. It works like this:
* Alice and Bob both create pairs of entangled photons.
* One photon from each pair is sent to Victor.
* Alice and Bob make a measurement on their photons.
* Victor makes a decision as to whether to entangle the photons he received or not.
* When we check later, we find that whether Victor decides to entangle or not affects the correlation that Alice and Bob previously saw. Crazy! (A toy simulation of this appears below.)
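To make the trick concrete, here is a minimal numpy sketch of the entanglement swapping at the heart of the experiment, using standard textbook quantum mechanics. The qubit encoding, the heralded outcomes, and the measurement angles are all illustrative choices of mine, not details taken from the paper.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Bell state |Phi+> = (|00> + |11>) / sqrt(2).
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Qubit order (a, v1, v2, b): Alice keeps a, Bob keeps b, and Victor
# receives v1 and v2. Pairs (a, v1) and (v2, b) start out entangled.
psi = np.kron(phi_plus, phi_plus)

def victor_projector(vec):
    """Projector for the outcome |vec> on Victor's two qubits."""
    p = np.outer(vec, vec.conj())
    return np.kron(I2, np.kron(p, I2))

def conditional_rho_ab(psi, proj):
    """Alice and Bob's joint state, given Victor's outcome."""
    out = proj @ psi
    out = out / np.linalg.norm(out)
    t = out.reshape(2, 2, 2, 2)                    # axes (a, v1, v2, b)
    rho = np.einsum('avwb,cvwd->abcd', t, t.conj())  # trace out v1, v2
    return rho.reshape(4, 4)

def correlation(rho, ta, tb):
    """E(ta, tb) = <spin(ta) x spin(tb)> between Alice and Bob."""
    return np.real(np.trace(rho @ np.kron(spin(ta), spin(tb))))

# Victor entangles: a Bell-state measurement, keeping runs where he saw
# |Phi+>. Alice and Bob's photons become entangled despite never meeting.
rho_swap = conditional_rho_ab(psi, victor_projector(phi_plus))

# Victor doesn't entangle: separate measurements, keeping runs where he
# saw |00>. Alice and Bob end up merely classically correlated.
ket00 = np.array([1, 0, 0, 0], dtype=complex)
rho_sep = conditional_rho_ab(psi, victor_projector(ket00))

# Only the swapped case reproduces the full cos(ta - tb) correlation at
# every angle pair, which is what a Bell test detects.
for ta, tb in [(0.0, 0.0), (0.0, np.pi / 4), (np.pi / 2, np.pi / 4)]:
    print(f"ta={ta:.2f} tb={tb:.2f}  "
          f"swap: {correlation(rho_swap, ta, tb):+.3f}  "
          f"no swap: {correlation(rho_sep, ta, tb):+.3f}")
```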
So there are multiple reasons to suspect that this research requires more investigation before we can take the result at face value. One is that each experimental pass seems to take place in 14 billionths of a second. That seems to me like a small enough window for experimental error to creep in. Another is that very few particles make it through the whole experimental setup, so the entire result hinges on statistical patterns in the collected data.
However, the thing that I wonder most about relates to the ordering of events. I haven’t gone through the paper yet but I suspect that the catch here is that Alice and Bob’s measurements are compared with Victor’s after Victor makes his decision.
Consider the case where Alice and Bob get to compare results before Victor makes up his mind. In that case, we have information with no quantum ambiguity traveling from the future into the past. My guess is they can’t do that. (If they can, financial trading will never look the same.)
And if Victor has to decide first, the whole Alice/Bob/Victor setup only works if we treat it as an entangled system that we can’t touch until the whole trick is over. And that means we have to wonder whether Alice and Bob made a true measurement at all, if its outcome depends on whether we add Victor into the system or not.
In any case, it’s an awesome idea for an experiment. With luck, someone will be out there looking to repeat the result already.
Reflections on Waves
In my recent post series, Making Waves (starting here), I outlined a very simple system for duplicating the kinds of effects seen in the Double Slit experiment, which Richard Feynman famously described as containing “the only mystery” of quantum mechanics. The approach I used was completely discrete, and one for which pseudo-random numbers happily suffice instead of the ‘pure randomness’ that’s often stated as a prerequisite for any QM model.
In the wake of these posts, I decided that it was only appropriate to talk a little about the limitations of the approach I outlined, and also to address some of the questions or yes-buts that I imagine some readers may have.
First, the limitations.
Relativity: It’s not that hard to come up with different interpretations of QM, so long as you don’t have to worry about reconciling them with relativity. Any Causal Set enthusiasts looking over my work might well point out that my spatial model isn’t Lorentz invariant, and is therefore hard to take seriously. As it stands, this observation is absolutely right. And we can go further. In Scott Aaronson’s review of A New Kind of Science, which I have mentioned in previous posts, he points out that a network-based approach to QM simply won’t work with a discrete model of spacetime if we respect the Minkowski metric in that model. Fortunately, as I’ve outlined in previous posts, we simply don’t have to use that metric. Using causal sets to describe spacetime is a nice approach with lots of potential, but by no means a necessity. So while the model I’ve mentioned here is limited, future posts will show at least one way it can be extended.
Bell inequality violation: The particle I use here doesn’t have any properties as sophisticated as spin. It’s pretty clear, then, that as it stands, we wouldn’t be able to extrapolate it to that most marvelous demonstration of quantum effects at work: Bell’s experiment. However, the reason for that is a little different from the one that makes most models fall at this hurdle. Usually, the problem lies in getting around the limits imposed by locality. With a network-based approach, non-locality doesn’t present a problem. However, making particles with persistent orientation is harder. While I’ve been able to produce such particles, they still have limitations and currently don’t follow all paths.
Scale: The algorithm I described in the last post isn’t among the world’s most efficient, and it’s hard to imagine it replacing lattice QCD any time soon as the simulation engine of choice. So while the implications for QM may be interesting, it’s hard to scale the approach up enough to show what it’s really capable of. This means that the results I get are going to be noisy and incompletely convincing unless someone happens to have a whole bunch of supercomputer time that they’re giving away. This is something I’m prepared to live with.
And now, some yes-buts.
Randomness: People are fond of saying that QM is random, and therefore that exploring an algorithmic approach such as the one I’ve shown doesn’t make sense at some fundamental level. However, this statement is just wrong. You can know that a variable is unpredictable, but you can never know that it’s random, unless you have an infinite amount of computing power with which to prove it. So long as you have finite computing power, the variable you’re considering may simply be the output of a computing machine that has one bit more reasoning power than yours does. Thus you can say that it’s effectively random from your perspective, but no more. And when considering a universal algorithm, it’s completely acceptable to propose algorithms that use the entire state of the universe at any one iteration step to calculate the next. Thus, unless you’re outside the universe, you’d have no way to predict the behavior of even a single atom.
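To illustrate what ‘effectively random’ means, here is a tiny sketch using xorshift64, a well-known deterministic generator (the seed is Marsaglia’s traditional test value). Unless you hold the rule and the seed, its output might as well be coin flips.

```python
MASK = 0xFFFFFFFFFFFFFFFF  # keep everything to 64 bits

def xorshift64(state):
    """One step of Marsaglia's xorshift64: pure bit-twiddling, no dice."""
    state ^= (state << 13) & MASK
    state ^= state >> 7
    state ^= (state << 17) & MASK
    return state

state = 88172645463325252  # the traditional seed
bits = []
for _ in range(64):
    state = xorshift64(state)
    bits.append(state & 1)
print(''.join(map(str, bits)))  # looks like 64 coin flips; is anything but
```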
What a theoretical model can do is simply assert that quantum events are random, even though no proof can ever be supplied, and this is what we currently do. I confess that I’m not a big fan of faith-based approaches, when it comes to randomness or anything else.
Efficiency: In Seth Lloyd’s eminently readable pop-science book, Programming the Universe, he suggests that the universe is a quantum computer computing itself. Why not an ordinary computer, given that the set of problems that can be solved by both types of machine is exactly the same? Because quantum computers are massively more efficient. To his mind, it doesn’t make sense to consider nature as an ordinary computation because achieving what nature does would take ordinary computational models huge swathes of time.
However, when considering algorithms that potentially run the universe, and through which all reference frames are determined, I would propose that efficiency is irrelevant. To care about the efficiency of the algorithm, we must tacitly propose that someone is making a design choice about the universe, which seems like a ridiculous assertion to me. The reason to pursue a computational model of nature is because it presents a more concrete, more reductionist, and more scientific view of how the universe operates. Not less. We don’t need someone to have designed the universe to justify digital physics any more than a continuum theory requires that someone be running the universe on a large array of valves and rheostats.
Usefulness: The reaction to the digital physics approach to QM that I have the most respect for is the experimentalist shrug. It’s completely fair to say at this point that the kind of algorithm I’ve outlined is far less useful as a scientific tool than what is currently being used. It’s also fair to say that experimental evidence for discrete spacetime is scant and elusive. And while these things are true, I see no reason for most physicists to alter their approach in any way.
However, I have two caveats. First, those theorists considering Theories Of Everything have no excuse not to consider discrete models. The set of physical systems that can be described by them is very much larger than the set that is conveniently differentiable. To assume that the universe lies in the differentiable set is rather like the man who looks for his car keys in the study rather than the street, because the light is better indoors. Such attitudes are particularly indefensible when, rather than considering systems of minimal complexity, we are instead expected to suspend disbelief about parallel universes, hidden dimensions, and tiny vibrating strings with no width.
The second caveat is that I suspect the game is about to change. The coming era of quantum computation will test our understanding of QM more thoroughly than anything that has come before, and I will be amazed if there are no surprises in store. While digital physics represents a philosophical distraction now, I very much doubt that the same will be true in a hundred years.
Why the universe is discrete rather than continuous
I greatly enjoyed the feedback I got on my last post. So here’s a little more digital philosophy for you.
Last time, we proposed that all logically-derived sequences less complex than the universe could be said to be real. However, given that we still don’t know how complex our own universe actually is, the total set of things that are ‘real’ is still ambiguous.
You can go one of three ways here:
1: You can take the same line that Max Tegmark took in his Mathematical Universe Hypothesis, and propose that all mathematics is real. (Roger Penrose likes this idea too.)
2: You can take the line that Jürgen Schmidhuber does, and say that only computable mathematics is real. In other words, only those theories of nature that can be represented as computer programs need be considered. (Tegmark appears to have also considered this option.)
3: You can be even more restrictive, and say that we don’t even have proof that all computable systems exist. In this case, we’re essentially saying that the universe can be represented as a really big finite state machine.
Despite the really excellent reasoning done by the chaps promoting options 1 and 2, I’m going to propose that we go with option 3.
To show why, let’s first put all the rules for making logical sequences in order, starting with the simplest, using a definition like this one. Computer scientists in the audience may complain at this point, because ordering this way requires that we pick an arbitrary machine definition as our measuring stick to define ‘simple’. However, we’re not going to be fussy. We’re just going to pick one. Then, we’re going to list out the total set of rules that our language can make and consider the output they produce.
Regardless of what language for making rules we use, we’re going to see certain things happen. For starters, there are going to be output patterns that show up more than once because some rules are going to be equivalent to each other. And some patterns will show up more often than others, because some of the rules that make them are going to contain redundant steps. Thus, as we build out our series of machines, certain kinds of output are going to dominate. And as the series gets larger, they’re going to dominate a lot.
In fact, the simpler the output is, the more frequently you’ll end up with a rule that produces it. Indeed, if you explore the set of simple algorithms from the ground up in the way that Stephen Wolfram and his buddies have done, this is exactly what you see.
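You can run a toy version of this census yourself. The sketch below uses elementary cellular automata as the (deliberately arbitrary) machine definition, runs all 256 rules from the same seed, and counts how often each output pattern recurs. The width and run length are choices of mine.

```python
from collections import Counter

WIDTH, STEPS = 31, 16  # arbitrary, but enough to separate behaviors

def run_eca(rule, width=WIDTH, steps=STEPS):
    """Run elementary CA `rule` from a single live cell on a ring."""
    row = [0] * width
    row[width // 2] = 1
    history = [tuple(row)]
    for _ in range(steps):
        # Wolfram numbering: the rule bit at index 4*left + 2*center + right.
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
        history.append(tuple(row))
    return tuple(history)

census = Counter(run_eca(rule) for rule in range(256))

# The most duplicated outputs are the simplest: patterns that die out,
# sit still, or merely drift. Complex outputs are nearly unique.
for pattern, n_rules in census.most_common(5):
    live = sum(sum(row) for row in pattern)
    print(f"{n_rules:3d} rules share an output with {live} live cells")
```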
Now consider that our universe appears somewhere in this list. The most likely rule for making output that looks like the universe is going to be the simplest one, by far, because its output will be duplicated so many times that it overwhelms all the other possibilities. Furthermore, given that we don’t have proof that things more complex than the universe even exist, the only rule for the universe that we need to bother considering is the simplest one that will do the job properly.
This result is helpful, because it tells us that the scientific habit of preferring minimally complex solutions over more ornate ones exists for a good reason. Of all the physical models that fit the output to a given experiment, the one that’s simplest really is more likely to be true, so long as it doesn’t conflict with other results. (And if it does conflict with other results, it’s wrong!)
This is all nice and useful, though so far we haven’t said anything about continuous models versus discrete ones. But consider this:
Any variable that varies smoothly cannot be captured in a finite number of discrete bits, whereas a single continuous number can encode any number of discrete values at the same time.
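The second half of that claim is easy to demonstrate. Here is one illustrative encoding of mine: interleaving the decimal digits of several integers into the expansion of a single number.

```python
def pack(values, digits=8):
    """Interleave the decimal digits of several naturals (each assumed
    to fit in `digits` digits) into one decimal expansion."""
    cols = [str(v).zfill(digits)[::-1] for v in values]  # low digits first
    mixed = ''.join(''.join(col[i] for col in cols) for i in range(digits))
    return '0.' + mixed  # stands for a single real in [0, 1)

print(pack([42, 7, 12345]))  # three integers hiding in one 'continuous' value
```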
In other words, continuous systems aren’t just more complex than discrete ones. By every measure people have come up with that covers both kinds of system, continuous systems are infinitely more complex. (If anyone has reason to believe the contrary, I’d love to hear about it!)
This is not to say that you can’t find continuous functions that produce simple output. However, these functions are isolated instances in a sea of more complex possibilities. Furthermore, all the continuous models that produce this kind of result turn out to equate in complexity to systems that aren’t continuous. In other words, you can always rewrite the functions they generate in terms of machines that don’t need smooth numbers. Thus, for any reasonable definition of simple that covers both cases, discrete models will always precede continuous ones.
I would go so far as to propose that there is no reasonable way to say that continuous systems are simpler than discrete ones. And what this means is that if there’s even the slightest reason to believe that we can represent physical reality without smooth numbers, we should pursue it.
The burden of proof, therefore, properly lies with the continuum enthusiasts. In order for their models to be plausible, they need to prove that a discrete approximation cannot suffice. However, science doesn’t work that way. Unless people have decent analytical tools that don’t rely on calculus, discrete models simply aren’t going to be explored. This means that anyone hoping to actually get the Theory Of Everything right has about three hundred years of mathematical catching up to do in order to provide a framework that can compete with current expectations. Some actual experimental evidence for quantized spacetime would help too, though maybe we won’t have to wait too long.
Now, so far I haven’t said anything about the difference between Option 2 (computable reality) and Option 3 (even stupider reality), but I think that will have to wait for another post. In the meantime, happy reasoning, and may the force of Ockham’s Razor be with you.