Lorentz Invariance: Fact or Fiction

May 18, 2014 7 comments

Early this year, I won a book contract for a series of science fiction novels. I’m having a huge amount of fun with it, but I’m plagued by a peculiar anxiety: that people will take issue with my books over Lorentz invariance.

My books are intended to be a thinking person’s space opera. There are all those things that people enjoy about science fiction: starships, robots, alien worlds, etc. However, they’re also intended to be at least slightly realistic in the way that they deal with social and scientific themes. And one of the themes that’s used heavily in the books is warp drive.

No matter that SF writers have used warp drive for years, and no matter that the kind of warp drive I use is very similar to the sort that NASA is investigating right now. Still I am plagued with the notion that someone will call me out for apparent causality violations and thus consider the work implausible.  Eyebrows will be raised. Readers will flee. Scorn will descend. Etc.

Is this the kind of neurotic thought process that happens when someone who has spent years doing scientific research, where you have to justify your every choice, segues back into fiction? Absolutely. But here’s the thing: the way many people think about Lorentz invariance is just wrong, and the way my books cover it is right and proper. I am filled with shining righteous glee on this subject because Lorentz invariance is a topic that I care about and have researched well beyond the limits of common sense.

The standard argument against faster-than-light travel goes something like this: travel faster than light in your reference frame and you’re going backward in time in someone else’s. Thus, if you travel faster than light, you’ve broken causality. No. This is what drives me crazy. Wrong. Bad. A conclusion based on false assumptions. The person who believes this gets ten minutes in the naughty corner. With a novelty hat on.
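For the curious, here’s a minimal sketch of that standard argument in code. The numbers are purely illustrative (a signal sent at 2c in one frame, viewed from a frame moving at 0.8c); it just shows the arithmetic the argument rests on.

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz-boost the event (t, x) into a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

c = 1.0
u = 2.0 * c        # hypothetical FTL signal speed in frame S
v = 0.8 * c        # relative speed of the observer's frame S'

departure = (0.0, 0.0)       # (t, x): signal leaves the origin
arrival = (1.0, u * 1.0)     # one time unit later, two light-units away

t1, x1 = boost(*departure, v)
t2, x2 = boost(*arrival, v)

print(f"Frame S : depart t={departure[0]}, arrive t={arrival[0]}")
print(f"Frame S': depart t'={t1:.3f}, arrive t'={t2:.3f}")
# With u = 2c and v = 0.8c, the arrival gets t' = -1.0: in S' the signal
# arrives before it departs, which is the "backwards in time" of the argument.
```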

This belief gets such a strong reaction from me because there are many science fiction writers who believe it is true. Including some very notable ones who have worked in physics. They pat themselves on the back for being science-savvy and diligently write books that preclude FTL. Gah!

It is true that if you travel faster than light, something about your experience of the universe breaks, but it doesn’t have to be causality. There is another, perfectly natural way that our experience of spacetime might change, one that is in perfect keeping with the math. It is this: travel faster than light, and you break Lorentz invariance. In other words, not all reference frames look the same any more.

This is my preferred model, not only because it works, but because I think there’s evidence that this is what would actually happen. Why? For starters, there is one reference frame that Nature has pulled out and made screamingly special for us already: the one defined by the cosmic microwave background (CMB). While this fact doesn’t interfere with how we do physics, it reveals that the observable universe started with a specific frame. Furthermore, there is no evidence that bits of the universe far away from us are traveling wildly, randomly fast compared to us, which suggests that the entire universe shares that same frame.
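For a sense of scale, here’s the usual back-of-envelope estimate of how fast we’re moving relative to that special frame, using the CMB dipole. The figures are approximate and quoted from memory.

```python
# Rough estimate of our speed relative to the CMB rest frame from the dipole
# anisotropy: to first order, v/c ~ dT/T. (Numbers are approximate.)
c_km_s = 299_792.458      # speed of light, km/s
T0 = 2.725                # mean CMB temperature, K
dT = 3.36e-3              # observed dipole amplitude, K (approximate)

v = c_km_s * dT / T0
print(f"Peculiar velocity relative to the CMB frame: ~{v:.0f} km/s")
# ~370 km/s, about 0.1% of c: the frames we actually occupy all sit very
# close to the frame the CMB picks out.
```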

Given this, in order for Lorentz invariance to be strictly true, the vast majority of possible reference frames would have to be ones in which the universe hasn’t started yet and is totally flat, i.e., two-dimensional. This is because no matter how close you get to the speed of light, you can always go closer. This means that for almost all possible frames, nothing can have possibly happened, as the duration of the universe to date is less than the Planck time.  Can we honestly say that those frames exist if the universe hasn’t started in them yet?
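Here’s the back-of-envelope arithmetic behind that claim, taken entirely on the post’s own framing: how large a Lorentz factor would shrink the universe’s history to below the Planck time?

```python
# Illustrative arithmetic only: the gamma factor needed to shrink ~13.8 billion
# years below the Planck time, and how close to c that boost sits.
age_universe_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17 s
planck_time_s = 5.39e-44

gamma_needed = age_universe_s / planck_time_s
print(f"gamma needed: ~{gamma_needed:.2e}")    # ~8e60

# beta = v/c corresponding to that gamma: 1 - beta ~ 1/(2 * gamma^2)
one_minus_beta = 1.0 / (2.0 * gamma_needed**2)
print(f"1 - v/c: ~{one_minus_beta:.1e}")       # ~8e-123; yet infinitely many
# frames lie even closer to c than this one.
```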

Most of the available frames are ones that we could never even reach, because even if you totaled up all the energy in the universe and used it to push a single particle to some absurdly high speed, there would still be an endless spread of reference frames beyond it, all exactly equivalent and immaculately unreachable. Thus, even if you go for an infinite universe model such as eternal inflation, almost all possible frames will never be used. The local energy density at any point will never be high enough to make things pan out otherwise.
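Some rough arithmetic to make that concrete. The ~1e53 kg figure below is a commonly quoted ballpark for the ordinary matter in the observable universe, nothing precise; the point is just that the resulting rapidity is finite, and frames exist at every rapidity beyond it.

```python
import math

# Give one proton a kinetic energy equal to the mass-energy of all ordinary
# matter in the observable universe (very rough ballpark figures).
c = 2.998e8                      # m/s
m_universe = 1e53                # kg (rough)
m_proton = 1.67e-27              # kg

gamma = m_universe / m_proton    # E_total / (m_p c^2)
rapidity = math.acosh(gamma)

print(f"gamma    ~ {gamma:.1e}")      # ~6e79
print(f"rapidity ~ {rapidity:.0f}")   # ~184, still finite: the spread of
# frames beyond this one never ends.
```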

So the simple fact that the universe has a starting frame means that Lorentz invariance can only ever be measured to be locally true. It is also true that finite, discrete universe models (my favorites) only work if Lorentz invariance does not strictly hold. That’s true even if you build your discrete universe out of some nice Minkowski-metric compatible structure such as causal sets. Something can only be truly Lorentz invariant if it has infinite size, and exists for infinite time.

So, given that strict Lorentz invariance is outlandish enough that we could never even prove it held were it true, all possible models that can encompass local Lorentz invariance must be considered equally valid. Thus, holding physical reality to the absurd requirement of resembling Minkowski space, simply because that’s where the math people are used to gets done, seems ludicrous to me.

There is a lovely upside to all this. While we have no evidence that anything in nature can go faster than light, there is also nothing in relativity that rules it out. Which means that NASA’s experiments with an Alcubierre drive may yet bear fruit. And that’s something worth being truly optimistic about.

 


Consensus Quantum Reality Revisited

January 16, 2013 2 comments

Okay.

My last post was a little ranty, perhaps. So let’s be fair to the physicists. What physicists mean by randomness is that when they run an experiment, unpredictable results are seen. Furthermore, when viewed in aggregate, these unpredictable results perfectly match probability distributions of a certain sort. And given that there are no parameters one can control in these experiments to predict what the answers will be, the reasoning goes that we might as well consider them as random, and build our theory accordingly.

This is fine, IMO, so long as you’re not trying to build an ultimate theory of physics. It’s a good idea, even, in the same way that spherical cows are a good idea. However, if you’re trying to get the answer right, and describe the smallest levels of physical existence, then, by definition, mere approximations won’t cut it.

However, this assertion, on its own, probably doesn’t say or explain enough. For instance, what about Bell’s Inequality? Bell’s Inequality experiments absolutely rule out local realism. Local hidden variable theories simply can’t work. Isn’t that a strong indicator that there is inherent randomness in the universe?

In short, no. This is because I can simulate Bell’s Inequality results in the comfort of my own home without resorting to quantum randomness once. This is doable because Bell’s Inequality says nothing about non-local hidden variable theories.
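Here’s a toy version of that home experiment, sketched in Python. This is my own illustrative reconstruction, not anyone’s production code: every outcome comes from a deterministic, seeded pseudo-random stream, but Bob’s outcome is allowed to depend on Alice’s setting. Deterministic, explicitly non-local, and it still breaks the CHSH bound of 2.

```python
import math
import random

def run(n_trials=200_000, seed=1):
    """Deterministic (seeded), explicitly non-local toy model that reproduces
    the singlet correlation E(a, b) = -cos(a - b) and so violates CHSH."""
    rng = random.Random(seed)            # deterministic pseudo-random stream
    settings_a = [0.0, math.pi / 2]
    settings_b = [math.pi / 4, 3 * math.pi / 4]
    counts = {}

    for _ in range(n_trials):
        a = rng.choice(settings_a)
        b = rng.choice(settings_b)
        # Alice's outcome depends only on the shared (deterministic) stream.
        A = 1 if rng.random() < 0.5 else -1
        # Bob's outcome depends on BOTH settings: this is the non-local step.
        p_anti = math.cos((a - b) / 2) ** 2    # P(B = -A) for a singlet pair
        B = -A if rng.random() < p_anti else A
        s, n = counts.get((a, b), (0, 0))
        counts[(a, b)] = (s + A * B, n + 1)

    E = {k: s / n for k, (s, n) in counts.items()}
    a0, a1 = settings_a
    b0, b1 = settings_b
    return E[(a0, b0)] - E[(a0, b1)] + E[(a1, b0)] + E[(a1, b1)]

S = run()
print(f"CHSH S = {S:.3f}  (|S| > 2 violates the inequality; QM gives 2*sqrt(2) ~ 2.83)")
```

The pseudo-random generator is, of course, a completely deterministic algorithm; the only “quantum” ingredient left is the non-local dependence of Bob’s outcome on Alice’s setting.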

The best known of these is Bohmian mechanics, an approach whose pilot-wave form was first presented by de Broglie in 1927. This method has been thoroughly explored by physicists, but most of them walk away from it fairly unsatisfied, because it requires that every point in the universe can have instantaneous interactions with any other. The math of Bohmian mechanics is set up to ensure that the answer comes out exactly as it does for standard QM, while keeping the system deterministic. But, given that this doesn’t add any expressive power, and makes the model non-local, that feels like a fairly poor compromise.

Fair enough. But Bohmian mechanics isn’t the only way to build a non-local theory. As we’ve pointed out on this blog, if you’re looking for a background independent model of physics, you have to start thinking carefully about how spatial points are associated with each other. And if you follow this reasoning in a discretist direction, you generally end up building networks, whether you’re into causal set theory, loop quantum gravity, quantum graphity, or any of the other variants currently being explored.

And, as soon as you start looking at networks, it’s clear that there are perfectly decent ways of non-locally connecting bits of the universe that are not only self-consistent, but provide you with tools that you can use to examine other difficult problems in physics.

If I seemed to be disparaging physicists for not considering hidden determinism in the universe in my last post, that was not my intention. I certainly don’t mean to point the finger at any specific individuals, but I do believe that pointing the finger at the culture of physics in this regard is important.

We have experimental evidence of the non-locality of physical systems. However, we have no evidence that the universe runs on a kind of non-computable, non-definable randomness that flies in the face of what we know about information and the mathematics of the real numbers. Doesn’t that mean that we should be working a little harder to put together some modern deterministic non-local theories? Is it really better to hide under the blankets of the Copenhagen interpretation because this problem is hard?

After all, while issues of interpretation are broadly irrelevant to most of the day-to-day business of doing physics research, there is the small matter of quantum mechanics and relativity remaining unreconciled for the last hundred years. I would venture to propose that if we ever want to close that gap, having the right interpretation of quantum mechanics is going to be an important part of the solution.

Consensus Quantum Reality

January 15, 2013 1 comment

A paper was recently reported on the Arxiv blog that I feel compelled to comment on. It showed the results of a poll of physicists, philosophers and mathematicians about the nature of quantum reality. It makes for a fascinating read. One message comes through loud and clear, which anyone can pick up on regardless of their level of scientific knowledge, and that’s that scientists are massively undecided.

I can’t say I’m hugely surprised by this, but I find the results of the poll somewhat disappointing. I don’t often resort to rolling my eyes at mainstream physics, because I believe that digital physics researchers, and computer scientists in general, have a huge amount to learn from the physics community. Furthermore, if digital physicists don’t create tools that can pass muster in the eyes of physics professionals, then, at some level, we haven’t done our job. However, on this occasion, I think eye rolling is in order.

Let’s take question one on the list: What is your opinion about the randomness of individual quantum events? A great question. Kudos to Anton Zeilinger and his team for asking it. However, the results are as follows:

  • The randomness is only apparent: 9%
  • There is a hidden determinism: 0%
  • The randomness is irreducible: 48%
  • Randomness is a fundamental concept in nature: 64%

Good lord. Really? Let’s take a moment to ask the question: what is randomness? We can start with a pretty basic definition on Wikipedia. It states that randomness commonly means a lack of pattern or predictability in events.

In other words, we define randomness through a negation. This is true up to and including the most formal definitions of randomness that I’m aware of. We say that something is random when we don’t know what’s going to happen next.

But there’s a deep problem here. Randomness, as we’ve defined it, isn’t a thing. It’s the opposite of a thing. And we have defined it based on the notion of predictability. Except that, as I’ve pointed out in previous posts, prediction is always done with some limited amount of computing power. There is no such thing as an infinitely powerful prediction machine. It’s hard to know what this would even mean.

And any system with limited ability to compute can only pick out and identify a finite number of patterns. For instance, I have the ability to surprise my four-month-old son on a regular basis. However, that does not make my behavior quantum mechanical.

The same limitation is true at any scale you want to pick. The combined computing power of the human race is still limited. There are problems that we can’t solve. Which means that there must be patterns in the universe that we cannot predict, but which can be derived from some underlying deterministic process. In fact, we know that this is true. We’ve been studying problems of this sort since the 1880s, before quantum mechanics was even invented.
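For a concrete example of that kind of pattern (a standard textbook one, not anything specific to quantum mechanics), take the logistic map: one line of deterministic arithmetic whose output is, for all practical purposes, unpredictable.

```python
# The logistic map x -> 4x(1 - x): a one-line deterministic rule whose output
# looks statistically random, and which amplifies any uncertainty in the
# starting value exponentially fast.
def orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.1234567, 60)
b = orbit(0.1234568, 60)   # starting value differs in the 7th decimal place

for i in (0, 20, 40, 60):
    print(f"step {i:2d}: {a[i]:.6f} vs {b[i]:.6f}")
# By step ~40 the two orbits have lost all resemblance: perfectly
# deterministic, practically unpredictable.
```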

So, in order for someone to propose that randomness is a fundamental concept in nature, they have to assert that even though we know unpredictable deterministic patterns exist, quantum mechanics is not like them. And given that there can be no proof either way, this choice is always made in the absence of information. In other words, it is faith.

I do not like faith in my science. It has no place, IMO. And the only approach that is not faith is to continually doubt. In this case, doubting means assuming that the minimal information model is correct until you have evidence otherwise. In other words, proposing that an underlying mechanism exists and trying to look for such a model until you have a concrete, inviolable reason to believe that one could never be found. In other words, doubting equals determinism–the basis on which we founded science in the first place. (The same answer that received a zero percent vote.)

My concern about this mystic belief in randomness is that it suggests that a great number of physicists, while no doubt highly adept in their respective subfields, have not thought independently about the tools that they are using. They have accepted the notion of implicit randomness because it’s baked into the culture of physics and so it seems foolish to disregard. They believe in implicit randomness because ‘of course there is randomness’, or some mathematically dressed up version of the same. Whichever way you cut it, this is a bad reason.

For further evidence that this lamentable state of affairs does indeed exist, one need look no further than question 12: What is your favorite interpretation of quantum mechanics? Forty-two percent of the respondents picked the Copenhagen Interpretation, making it by far the most popular response.

What, you mean the version dreamt up by a bunch of coffee-swilling logical positivists in the 1920s, while Alan Turing was still in middle-school?

Sheesh. Don’t get me started.

Hatching a conviction

January 8, 2013 2 comments

For the past few months, I have been hatching a conviction. It’s too early to call this reasoning scientific, and it may never get there, but I’m going to share it anyway, because I think it’s interesting.

My conviction is this: everything in our universe can ultimately be explained by the logic of information copying, from conservation laws, to universal expansion, to the fact that we inhabit three spatial dimensions. 

What I mean by the logic of information copying is that algorithms that produce stable, complex output tend towards patterns in which information self-copies. Take Langton’s Ant, for example. At first, it produces a huge amount of apparently random behavior. Then, abruptly, it settles onto a pattern that copies itself ad infinitum. One can view this process as a system exploring a search space of binary patterns while it seeks out one that can be stably reproduced. The highway pattern is stable in a way that the preceding randomness is not, and so the highway is maintained. The system has fallen into a more stable state.
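If you want to watch that transition happen yourself, Langton’s Ant fits in a few lines. This is a minimal sketch; the roughly 10,000-step onset of the highway is the well-known figure for the standard empty starting grid.

```python
def langtons_ant(steps):
    """Langton's Ant on an unbounded grid (x right, y up); report pattern extent."""
    black = set()                       # cells currently black
    x = y = 0
    dx, dy = 0, 1                       # facing up
    extent = []
    for step in range(1, steps + 1):
        if (x, y) in black:
            dx, dy = -dy, dx            # black cell: turn left...
            black.discard((x, y))       # ...and flip it to white
        else:
            dx, dy = dy, -dx            # white cell: turn right...
            black.add((x, y))           # ...and flip it to black
        x, y = x + dx, y + dy           # step forward
        if step % 2000 == 0:
            xs = [c[0] for c in black]
            ys = [c[1] for c in black]
            extent.append((step, max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return extent

for step, w, h in langtons_ant(14_000):
    print(f"step {step:6d}: bounding box {w} x {h}")
# Up to roughly 10,000 steps the box grows slowly and erratically; after that
# the highway locks in and the pattern extends steadily in one direction,
# 2 cells every 104 steps.
```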

Other examples are less obvious, but effectively equivalent. Consider cellular automata that produce gliders or other moving forms. These repeating motifs are effectively patterns that are copied forward, at the cost of the original. The only things more stable than gliders in automata like Conway’s Life are inert patterns that cannot reproduce at all.

These examples might seem somewhat detached from physics, and, in truth, they are. However, it increasingly seems to me that the fact that self-copying creates both ubiquity and stability can serve as a bridge of understanding between very simple and abstract patterns like Langton’s Ant, and those we see around us in nature.

Take the fact that we inhabit a three-dimensional universe. Why is this? Why, even, do we inhabit a universe with dimensions at all, instead of some other kind of associative structure? Mainstream physics usually requires that we bake the number of physical dimensions in as a prerequisite for any model. It’s a bold quantum gravity researcher who proposes that our set of dimensions is an emergent property. Similarly, cellular automaton enthusiasts tend to propose a lattice with fixed properties from the outset.

However, closer examination of dimensions as entities in their own right reveals them to be hugely specific in nature. The fact that sets of points, whether you consider them as infinitesimal or otherwise, should associate themselves in such a way is massively unlikely, given the vast space of alternatives. Mathematics is replete with different kinds of sets of elements, compact and otherwise, which don’t look remotely like spacetime. We’re forced to impose a global symmetry as if from outside physics itself, and to refuse to look closely at why it’s there.

However, arranging things using dimensions ensures that the resulting system has certain properties. Putting things in a one dimensional loop, for instance, maximizes the distance between connected elements while ensuring a homogeneous structure. Arranging things in a 3D space retains these nice, homogeneous properties to some extent, but adds the property that paths between any two points are almost never crossed by paths between two others.

Why would this feature of non-intersection be important? The best reason I can think of so far is that non-intersection implies non-isolation. In other words, while it’s easy to isolate a given sequence of elements on a 1D ring by blocking either end, blocking a set of elements in a 3D space is, by comparison, almost impossible. You have to define an enclosing surface, which is a complex structure in its own right.
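Here’s a crude way to put numbers on that intuition. This is just toy counting on lattices, nothing deeper: the cost of sealing off a region of N cells stays constant on a 1D ring but grows with N in 3D.

```python
# Toy counting: how many cells must be blocked to seal off a region of N cells?
# 1D ring: a contiguous segment is isolated by blocking its 2 end neighbours.
# 3D cubic lattice (6-neighbour adjacency): isolating an n x n x n block means
# blocking every cell that touches one of its faces, 6*n^2 of them, so the
# cost grows like 6 * N^(2/3) instead of staying constant.

for n in (2, 5, 10, 50):
    N = n ** 3
    wall = 6 * n ** 2
    print(f"region of {N:7d} cells: 2 blockers on a 1D ring, {wall:6d} in a 3D lattice")
```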

That which cannot isolate itself cannot define its own boundary. And that which cannot define its own boundary is much less likely to be able to copy itself. This makes sense in the context of the following, highly conjectural scenario:

  1. The universe starts from a very small initial condition
  2. The initial state of the universe appears highly random and not remotely physical in structure
  3. Stable information patterns emerge from the noise which reproduce, just as in Langton’s ant
  4. Imperfections in the copying process caused by competition between patterns result in the creation of a new, even more stable pattern
  5. This pattern obstructs the reproduction of other variants by copying itself in an arrangement that prevents other patterns from self-isolating
  6. The reproduction of all other patterns stops, and the uninterrupted production of the ultimate pattern continues unchecked

In this story, the final dominant pattern is the fabric of space. It interacts with almost nothing, except via the tenuous medium of gravity–a process of interference that moderates the creation and placement of new spatial instances. The preceding, almost perfect, patterns constitute dark matter, which is almost as tenuous. The earlier self-reproducing patterns are baryonic matter–now trapped into fixed quantities and hence subject to conservation laws. Dark energy is, unsurprisingly, the process by which new spatial instances are continually created.

In this story, the patterns themselves aren’t made of some ‘stuff’. Rather, they are the information that defines what stuff is.

While I can’t prove this story, and can’t even join most of the dots yet, it smells right to me. This is because it tackles a lot of surprising aspects of the physical universe using a single explanation, and does so by proceeding from what we know to be true about the behavior of information.

I’m going to try to test this story by building some models that explore the emergent property of pattern reproduction in algorithms, but sadly, we may simply never know whether this picture is true or not. Nevertheless, it gives you something to think about next time you find yourself copying a piece of text, music, or video content. Copyright or not, you’re enforcing arguably the most fundamental law in Nature: that successful patterns want to be everywhere.


Simulation hypothesis? No thanks.

December 18, 2012 1 comment

I recently received a comment on this blog proposing that believing in a creator is a logical choice, because the likelihood that we’re living in a universe that isn’t being simulated by someone else is vanishingly small. I thought it was an interesting remark at the time, though I didn’t agree with it.

Then, just a few days later, I heard the same argument again, this time from my friend Dan Miller, who’s an occasional contributor to this blog. It turns out that this meme apparently originates with a fellow called Nick Bostrom, who framed the argument in a paper in 2003.

However, when Bostrom framed it, the proposal was presented as one of a set of fairly reasonable options. It seems to me that he was quoted somewhat out of context. The public have grabbed a nice-looking Matrix-y idea and run with it. I’d like to take a look at this argument in its popular form and point out why, IMO, it’s badly flawed.

First, though, for all those people who don’t follow every single comment on my blog, which I assume is everyone except me, let’s at least do justice to the original proposal that’s out there in meme-land. We might frame it as follows:

  1. We can already simulate universes much simpler than our own.
  2. We aspire to simulate entities such as ourselves through natural curiosity.
  3. It’s reasonable to presume that other intelligent entities would behave similarly.
  4. Given this, the probability that we ourselves are being simulated is high.

It sounds nice, so long as you don’t look too closely, and provides a cozy rationalization for believing in something vaguely god-like. However, the main problem I see goes like this:

  1. If we’re probably being simulated, so, by extrapolation, are those above us doing the simulating.
  2. Given that we have no idea what the maximum possible computational capacity of a universe larger than ours is, we are forced to conceive of an infinite stack of simulations.
  3. Given such a stack, we must also presume that intelligent beings become less likely to tinker with their creations once they’ve built them. In other words, the smarter you are and the more creative power you have, the less likely you are to exercise an act of will. Otherwise, we should expect someone up there in the infinite stack to be constantly fiddling with the rules, or turning the stack below them off and rebooting at every moment.
  4. If we propose decreasing proactive behavior for each level, we have to ask how come the simulation stack was created in the first place. However, if we don’t, we have to throw away the notion that the universe follows orderly laws. In other words, we have to throw away science.

Note that in this argument, we haven’t even started to consider the fact that more and more complex universes entail more and more complex rules on which they run. And this, of course, falls foul of the principle of descriptive minimalism that I’ve discussed in previous posts. More complex universes are far less likely to be the one that we’re in. The more a universe is simulatable, the less likely it is that it exists as a nested subset.

Hence, to my mind, the whole idea of digital physics is a refutation of the simulation hypothesis. We argue that the universe is finite precisely because it does away with excess, non-provable twaddle, like many of the properties of real numbers.

I’m not really familiar with Bostrom’s work, so I’m going to have to go back and have a closer look at it. However, his original remarks appear to be very carefully worded. Of the options he outlines, it seems to me that there is ample evidence to believe in number one:

The fraction of human-level civilizations that reach a posthuman stage is very close to zero

And on that cheery note, Merry Christmas, everybody!!!!

Reversibility

November 20, 2012 Leave a comment

Here’s an Ars Technica article about a recent measurement at SLAC. It describes a kind of reaction that happens at the subatomic level which isn’t reversible in time. I like this, because it underlines a key point that discrete modelers sometimes forget: not everything in the universe is trivially symmetric. In this case, in order to maintain the larger picture of CPT symmetry, we expect particle interactions that aren’t symmetric in terms of charge or reflection not to be symmetric in time either.

There are some digital models that start from the position of baking reversibility into the system in the hope that this will yield more consistently physics-like results. While these models have a lot to offer, and can yield some amazing effects, I remain unconvinced that they offer a deep parallel with nature. This is because such models don’t leave room for the kind of result that SLAC has revealed.

So is reversibility unimportant, then? Should we not be trying to build it into our simulations? Absolutely it’s important, but it’s also relatively easy to get reversibility to appear as an emergent property of a non-reversible algorithm. For an example, take the Jellyfish algorithm that I’ve covered in previous posts. By reproducing rotational and translational symmetries from bulk properties of a network, we also get time-reversal symmetry as a bulk property, even though the algorithm running the pseudo-particle only runs one way. This enables us to build models that are temporally symmetric most of the time.

As a rule of thumb, I hold to a principle that Tommaso Bolognesi once stated to me: where possible, aim for emergence. It’s nice to have physical effects appear in a model, but the fewer of them we insert by fiat, the more likely our models are to surprise us with their results.


RIP Supersymmetry?

November 13, 2012 Leave a comment

The BBC have a nice article about new LHC results that exclude yet more supersymmetry models. What this article doesn’t point out is that, so far as I’m aware, just about all of string theory requires supersymmetry to exist. If I were a string theorist, I’d be worried at this point.

I’m not the first person to point the finger at string theory and suggest that it’s an edifice on shaky ground. I’m also far from being one of the best informed on the topic. However, I have had far more than the average person’s interaction with quantum gravity theorists. And I’ve also had a lot more training in body-language and communication skills than most people who attend those conferences. And what I can say with confidence is that string theory casts a long, fear-inducing shadow over much of the rest of the field, regardless of whether the physicists involved want to parse it as such. People working on other theories seem to have to fight awfully hard for their credibility, while those babbling about multiverses and branes don’t seem to have much to worry about.

Maybe this result heralds an adjustment in the physics community. I hope so. There are a lot of great theories out there that could use some attention right now. And some of them are even discretist. 🙂

We’re Not Local

November 6, 2012 Leave a comment

Ars Technica has a nice article on a piece of theoretical work done by J. D. Bancal et al. The upshot of it is that if your explanation for how quantum mechanics works is anything other than non-local, it leaves open the possibility of faster-than-light communication. (Thanks to Dan Miller for pointing me at it.)

I have mixed feelings about this idea, as I’d love for faster-than-light communication to be a possibility, and am delighted that someone has come up with a way of determining whether it can be done. However, the flip side of this is that I’m pretty certain that QM is fundamentally non-local, as I outlined in my post on replicating particle self-interference. The notion here being that non-locality doesn’t rule out discrete models. If anything, it supports them, as it encourages us to think of wave functions as sets of non-locally distributed points, either finite or otherwise.

What this result doesn’t say, unless I’m missing something, is that the currently fashionable, complex-number-based model of QM is literally true. You can still take exactly the same result and reframe it in terms of another equivalent model, such as Bohmian mechanics, for instance, and get something that looks completely deterministic.

Hence, while the result is nifty, the goal posts for viable theories of physics remain doggedly where they were.

Hello again, cubic symmetry, and simulations

November 5, 2012 8 comments

Hello all. It’s been a while since I’ve posted anything on this blog. My life has been in flux of late, as I’ve been moving to Princeton, NJ, changing jobs, and having a baby all at the same time. Now that things are starting to settle, it should be a lot easier for me to find time to write.

With that in mind, here’s my take on a recent article that people forwarded to me a while back during my break: the result from Silas Beane at the University of Bonn that claims to have something to say on the subject of the simulated universe. The arxiv blog, as usual, has a good write-up.

The gist of the research is this: if the universe is running in a simulation on a cubic lattice, in much the way that current quantum chromodynamics simulations are calculated, then there should be experimentally observable consequences. Beane and his team identify two: the anisotropic distribution of cosmic rays (different amounts of rays in different directions), and a cut-off in the energy of cosmic ray particles. This article generated some excitement because the cut-off matches a phenomenon that’s already been observed.
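For what it’s worth, here’s my reading of the arithmetic that links the observed cut-off to a lattice spacing, assuming the cut-off sits at the lattice’s zone edge, i.e. E_max ≈ πħc/b. Treat both the relation and the numbers as approximate.

```python
import math

# Rough arithmetic behind the claimed link between the cosmic-ray cut-off and
# a lattice spacing, assuming E_max ~ pi * hbar * c / b. (My reading of the
# argument; all numbers approximate.)
hbar_c_eV_m = 197.327e6 * 1e-15   # hbar*c ~ 197.3 MeV*fm, expressed in eV*m
E_max_eV = 1e20                   # observed GZK-scale cut-off, ~1e20 eV

b_m = math.pi * hbar_c_eV_m / E_max_eV
print(f"implied lattice spacing: ~{b_m:.1e} m (~{b_m / 1e-15:.1e} fm)")
# ~6e-27 m, of order 1e-12 fm: roughly ten orders of magnitude finer than the
# lattice spacings used in today's QCD simulations.
```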

A great moment for digital physics, right? I’m not convinced. I have a few concerns about this work. For starters, as I have discussed on this blog, there are a huge number of ways of building discrete universe models, of which a 3D lattice is only one. That simulation style has significant limitations, which, while not insurmountable, certainly make it a tough fit for a huge number of observed physical effects, such as relativity and spatial expansion.  

Furthermore, in order to make their predictions, Beane and his associates simulated at a tiny scale. This is convenient because you only have to consider a single reference frame, and can treat space as a static backdrop for events. In other words, it’s pretty clear that the main problems with regular lattice simulations are things that their research didn’t touch.

I would find it astonishing, therefore, if we discovered the predicted cosmic ray anisotropy. And this brings me on to my second major concern. People, upon finding no irregularity in the cosmic ray distribution, are then likely to think, “gosh, well the universe was isotropic after all, I guess we’re not in a simulation.”

Except, let’s recall, experiments have already seen the expected energetic cut-off. In other words, the cosmic ray observations we see are perfectly consistent with a universe that’s discrete, but also isotropic. That is, irregular, like a network. This, perhaps, shouldn’t come as a surprise.

Then, there is my third concern, and this reflects the interpretation imposed on this result. Namely, that a universe that turns out to run on an algorithm must somehow be a simulation running on a computer elsewhere. This, as I’ve also mentioned in previous posts, is just plain wrong.

Algorithms, like equations, are tools we use to build models. One does not have primacy over the other. One is not more natural than the other. A universe that turns out to be algorithmic no more requires a computer to run on than a universe based on differential equations needs a system of valves. The one main difference between algorithms and equations is that you can describe a vastly larger set of systems with algorithms. Equations are nice because, once you’ve figured them out, you can do lots of nifty reasoning. However, the number of possible systems that are amenable to this treatment is vanishingly small, compared to the systems that are not.

Most physicists want the universe to turn out to be completely describable with equations, because it would make life a lot easier for everyone. It’s a nice thing to hope for. It’s just that given the set of options available, it’s not terribly likely.

Higgsistential Angst

July 18, 2012 5 comments

We’ve seen a lot about the Higgs boson in the news over the last couple of weeks. One might be tempted to suppose that my bet about the outcome of this adventure was wrong.

And so it might be. Refutability is a fine thing. However, I don’t think this matter is over yet. Ray, from Finitism Forever, supplied me with this link to an article on the magnificent Arxiv blog.

There are two reasons why I’m not ready to call the discovery of the Higgs a done deal. The rational part of me is reluctant because the best the physics community can say at this point is that it’s a ‘Higgs-like particle’. That’s far from conclusive.

Then there’s the intuitive part of me, and it doesn’t want to think they’ve found the Higgs, because it would, IMO, be terrible news for physics. Yes, terrible. To tidy up the loose ends of a theory that looked complete before anyone discovered dark matter or dark energy means we’re in horrible shape to understand these deeper questions about how the universe works. In that scenario, there are no particle interactions we can generate that would help us even start to understand.

Also, bear in mind that the LHC has been punching large holes in lots of supersymmetry theories. That’s one result out of CERN that we can feel confident about. Hence, the idea of supersymmetric particle pairs as candidates for dark matter looks a lot shakier than it did a few years ago.

I would rather that physics have something chewy and hard to understand in front of it. Something tantalizing but offering the promise of deeper knowledge. The alternative is an opportunity for a lot of retired professors to bust out the champagne and feel smug, followed by a long, dark period of complete confusion.

So come on, Universe. Don’t give us a Higgs. You’re better than that.