I don’t like the idea of a multiverse. I think it’s bad science. This might sound odd coming from someone who just recently blogged about how all discrete universes simpler than our own are real. But I see a difference. In fact, the term ‘multiverse’ makes me groan each time I hear it.
Why? Because for the idea of a ‘multiverse’ of the sort that’s commonly envisaged to be correct, we have to buy into the existence of a very large amount of stuff whose existence we can never prove or disprove. (Just to make it clear exactly what kind of multiverse I don’t like, it’s the kind that invokes the ‘string-theory landscape’ and asserts that a very large number of independent universes share some kind of physical reality with our own.) This notion strikes me as unscientific.
One might argue that I have done exactly the same thing with my assertions about mathematical reality. However, the fact that I can count demonstrates that the integers exist, at least up to the value at which I’ve counted. Because the act of counting provides a complete implicit description of each integer, I have duplicated that pattern within my own universe. Hence, the pattern is ‘real’. The same cannot be said for vast tranches of hypothetical spacetime, each requiring eleven smooth dimensions for their description.
The most eloquent defender of the multiverse notion is, in my opinion, Max Tegmark, the same man who proposed the Mathematical Universe Hypothesis. I quote (via Wikipedia):
A skeptic worries about all the information necessary to specify all those unseen worlds. But an entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler. Similarly, the set of all solutions to Einstein’s field equations is simpler than a specific solution. The former is described by a few equations, whereas the latter requires the specification of vast amounts of initial data on some hypersurface.
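Tegmark’s point about the integers can be made concrete with a toy comparison. In the sketch below, ‘program length’ is just the character count of some Python source text, which is only a crude stand-in for algorithmic information content, and the particular 100-digit number is an arbitrary choice:

```python
import random

# A "program" here is just Python source text; its length is a rough
# stand-in for algorithmic information content.

# Generates every non-negative integer, one after another (we never
# run it -- we only measure how short it is).
all_integers = "n = 0\nwhile True:\n    print(n)\n    n += 1\n"

# Prints one particular, arbitrarily chosen 100-digit number.
random.seed(0)
one_integer = "print(" + "".join(random.choice("123456789") for _ in range(100)) + ")"

print(len(all_integers))  # short: the rule for *all* integers
print(len(one_integer))   # long: one specific member of the set
```

The generator for the whole set stays the same size no matter how far it runs, while the program for a single number grows with the number of digits it must spell out.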
He’s dead right that a rule can be simpler than a result. However, as we’ve seen in previous posts, exactly the logic he invokes here to justify the multiverse rules out the existence of universes any more complex than the minimum needed to describe our own. That same logic also makes it clear that those universes are mathematically disjoint from each other. So while they may share a mathematical reality, sharing a physical reality they almost certainly do not.
So if it’s easy to reach the conclusion that a multiverse is an ugly idea, why is it so frequently invoked? Because, I would propose, it makes it easier to justify the usage of models that are otherwise hard to support.
This is the second reason I don’t like the notion of a multiverse. Not only does it require an unscientific abundance, but it smells of a kind of theoretical cosmology that is slowly bankrupting itself. Requiring that we believe in a very large number of things we can never witness is fine, so long as it’s the only viable explanation. (Sherlock Holmes springs to mind.) However, when it comes to theoretical cosmology, there are plenty of options out there that have been barely explored.
This is not to say that I have something better with which to replace the current favorite models, because I do not. I’m not a theoretical cosmologist. However, as an engaged citizen scientist, it’s my job to exercise skepticism about any explanation I’m presented with that I either don’t understand or that appears to break in the face of simple logic.
If I become better informed and change my mind, that’s okay, because exercising doubt is the best way to know what questions to ask.
I greatly enjoyed the feedback I got on my last post. So here’s a little more digital philosophy for you.
Last time, we proposed that all logically derived sequences less complex than the universe could be said to be real. However, given that we still don’t know how complex our own universe actually is, the total set of things that are ‘real’ is still ambiguous.
You can go one of three ways here:
1: You can take the line that Max Tegmark does with his Mathematical Universe Hypothesis, and say that all self-consistent mathematical structures are real.
2: You can take the line that Jürgen Schmidhuber does, and say that only computable mathematics is real. In other words, only those theories of nature that can be represented as computer programs need be considered. (Tegmark appears to have also considered this option.)
3: You can be even more restrictive, and say that we don’t even have proof that all computable systems exist. In this case, we’re essentially saying that the universe can be represented as a really big finite state machine.
Despite the really excellent reasoning done by the chaps promoting options 1 and 2, I’m going to propose that we go with option 3.
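To give option 3 some flesh: a ‘really big finite state machine’ can be as humble as an elementary cellular automaton on a finite, wrapped-around grid. The sketch below uses Wolfram’s Rule 110 purely as an illustration of the idea; nothing here claims the universe actually runs this particular rule.

```python
# A toy "universe" as a finite state machine: an elementary cellular
# automaton (Wolfram's Rule 110) on a fixed-size grid with wraparound.
# With N cells there are exactly 2**N possible states, so the whole
# system is a (large but strictly finite) state machine.

RULE = 110

def step(cells):
    """Advance the automaton one tick; this update rule is the
    machine's entire transition table."""
    n = len(cells)
    return tuple(
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

state = (0,) * 15 + (1,) + (0,) * 15   # start with a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in state))
    state = step(state)
```

Every possible history of this system is a path through a finite graph of states, which is exactly the claim option 3 makes about physical reality.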
To show why, let’s first put all the rules for making logical sequences in order, starting with the simplest, using a definition like this one. Computer scientists in the audience may complain at this point, because ordering this way requires that we pick an arbitrary machine definition as our measuring stick to define ‘simple’. However, we’re not going to be fussy. We’re just going to pick one. Then, we’re going to list out the total set of rules that our language can make and consider the output they produce.
Regardless of what language for making rules we use, we’re going to see certain things happen. For starters, there are going to be output patterns that show up more than once because some rules are going to be equivalent to each other. And some patterns will show up more often than others, because some of the rules that make them are going to contain redundant steps. Thus, as we build out our series of machines, certain kinds of output are going to dominate. And as the series gets larger, they’re going to dominate a lot.
In fact, the simpler the output is, the more frequently you’ll end up with a rule that produces it. Indeed, if you explore the set of simple algorithms from the ground up in the way that Stephen Wolfram and his buddies have done, this is exactly what you see.
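Here is a minimal sketch of that enumeration, using a deliberately trivial made-up rule language (the arbitrary ‘measuring stick’ conceded above). To compare programs of different lengths fairly, each program of length L is weighted by 3^-L, the standard algorithmic-probability discount; the alphabet and cutoff are arbitrary choices:

```python
from itertools import product
from collections import defaultdict

# Toy machine: a program is a string over the instructions
# {"0", "1", "-"}; "0" and "1" each print a bit, "-" does nothing.
# Redundant "-" steps mean many distinct programs share one output.
def run(program):
    return "".join(op for op in program if op != "-")

# Tally the weight of every program up to length 10, with each
# program of length L contributing 3**-L to its output's total.
weight = defaultdict(float)
for length in range(11):
    for program in product("01-", repeat=length):
        weight[run(program)] += 3.0 ** -length

# Simpler (shorter) outputs accumulate far more weight.
for s in ("", "1", "10", "1011"):
    print(repr(s), round(weight[s], 4))
```

Even in this tiny language, each extra bit of output roughly halves the accumulated weight, which is the dominance effect described above.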
Now consider that our universe appears somewhere in this list. The most likely rule for making output that looks like the universe is going to be the simplest one, by far, because its output will be duplicated so many times that it overwhelms all the other possibilities. Furthermore, given that we don’t have proof that things more complex than the universe even exist, the only rule for the universe that we need to bother considering is the simplest one that will do the job properly.
This result is helpful, because it tells us that the scientific habit of preferring minimally complex solutions over more ornate ones exists for a good reason. Of all the physical models that fit the output of a given experiment, the simplest really is the most likely to be true, so long as it doesn’t conflict with other results. (And if it does conflict with other results, it’s wrong!)
This is all nice and useful, though so far we haven’t said anything about continuous models versus discrete ones. But consider this:
Any quantity that varies smoothly cannot be captured in any finite number of discrete bits, whereas a single continuous number can encode arbitrarily many discrete values at the same time.
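The second half of that claim is easy to illustrate. The sketch below packs two integers into the fractional digits of one number by interleaving their decimal digits (a toy encoding, fixed-width for simplicity; nothing stops the same trick from packing in as many values as you like):

```python
# One continuous value carrying several discrete values at once,
# via decimal digit interleaving.

def pack(a, b, width=6):
    """Interleave the decimal digits of two non-negative integers
    (zero-padded to a fixed width) into one number's fraction."""
    da, db = f"{a:0{width}d}", f"{b:0{width}d}"
    digits = "".join(x + y for x, y in zip(da, db))
    return float("0." + digits)

def unpack(r, width=6):
    """Recover both integers from the interleaved digit string."""
    digits = f"{r:.{2 * width}f}".split(".")[1]
    return int(digits[0::2]), int(digits[1::2])

r = pack(1234, 99)
print(r)           # one continuous value...
print(unpack(r))   # ...decoding back to two discrete values
```

With genuinely real (infinite-precision) numbers the width restriction disappears entirely, which is exactly why a single smooth variable carries infinitely more information than any finite collection of bits.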
In other words, continuous systems aren’t just more complex than discrete ones. By every measure anyone has been able to come up with that covers both kinds of system, continuous systems are infinitely more complex. (If anyone has reason to believe the contrary, I’d love to hear about it!)
This is not to say that you can’t write continuous functions that produce simple output. However, such functions are isolated instances in a sea of more complex possibilities. Furthermore, all the continuous models that produce this kind of result turn out to equate in complexity to systems that aren’t continuous: you can always rewrite the functions they generate in terms of machines that don’t need smooth numbers. Thus, for any reasonable definition of ‘simple’ that covers both cases, discrete models will always precede continuous ones.
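As a small illustration of that rewriting, here is sin computed to any requested accuracy by a machine that manipulates only integers (exact rationals), with no real numbers anywhere in the computation. It’s a sketch of the general point, not a claim about how physics should be formulated:

```python
from fractions import Fraction

# A discrete machine standing in for a "smooth" function: the Taylor
# series of sin evaluated at a rational point, using only exact
# integer arithmetic under the hood.
def sin_rational(x, terms=12):
    """Approximate sin(x) for rational x via its Taylor series."""
    x = Fraction(x)
    term = total = x
    for n in range(1, terms):
        # Each step multiplies by -x**2 / ((2n)(2n+1)), turning the
        # previous term x**(2n-1)/(2n-1)! into x**(2n+1)/(2n+1)!.
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total

approx = sin_rational(Fraction(1, 2))
print(float(approx))   # agrees with sin(0.5) to double precision
```

The continuous function is recovered here as the limit of a purely discrete process, which is all the argument above requires.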
I would go so far as to propose that there is no reasonable way to say that continuous systems are simpler than discrete ones. And what this means is that if there’s even the slightest reason to believe that we can represent physical reality without smooth numbers, we should pursue it.
The burden of proof, therefore, properly lies with the continuum enthusiasts. In order for their models to be plausible, they need to prove that a discrete approximation cannot suffice. However, science doesn’t work that way. Unless people have decent analytical tools that don’t rely on calculus, discrete models simply aren’t going to be explored. This means that anyone hoping to actually get the Theory Of Everything right has about three hundred years of mathematical catching up to do in order to provide a framework that can compete with current expectations. Some actual experimental evidence for quantized spacetime would help too, though maybe we won’t have to wait too long.
Now, so far I haven’t said anything about the difference between Option 2 (computable reality) and Option 3 (even stupider reality), but I think that will have to wait for another post. In the meantime, happy reasoning, and may the force of Ockham’s Razor be with you.