It was billed as being one of the biggest scientific breakthroughs of all time, ranking alongside the discovery of the DNA double helix in the 1950s.
According to rumours circulating last week, scientists at the Large Hadron Collider (LHC) in Geneva were about to announce the discovery of the long-sought Higgs boson, a subatomic particle seen as crucial to theories explaining all the forces at work in the universe.
After such a huge build-up, there was always the chance that the reality would seem a bit of a damp squib. Sure enough, the LHC scientists said merely that they had some fairly impressive evidence for the existence of the Higgs, and that more data was needed before they could be sure.
This came as no surprise to anyone who has followed the quest for the Higgs particle, first invoked by theorists (including the eponymous British physicist) almost 50 years ago as a means of imbuing other particles with mass. Pinning down any new particle is never easy, and the Higgs has been playing cat and mouse with physicists for decades.
Exactly five years ago, what seemed like impressive evidence for the existence of the Higgs was unveiled by scientists at the Fermilab particle accelerator laboratory near Chicago. But, mindful of the tricks that particles can play, the team held back from claiming a firm discovery. Sure enough, as more data came in, the impressive evidence vanished like the morning dew. Unfortunately, by then the media had got hold of the story, and the researchers found themselves heavily criticised by their colleagues for careless talk.
It was a similar story in July last year, when rumours circulated of another sighting of the Higgs at Fermilab. Again it proved a mirage. Not that their rivals at the LHC were in a hurry to criticise. Some still recall how in 1984 their predecessors thought they'd discovered the top quark - only to see the evidence vanish.
All of which raises the question: when can scientists be confident they have made a discovery? It's a question at the heart of research, and one that's beginning to worry ever more researchers. Earlier this month, the leading journal Science ran a special issue on how to tackle the problem of the "disappearing breakthrough", which in some fields is reaching epidemic proportions. According to a recent study in Significance, published by Britain's Royal Statistical Society, as many as 80 per cent of advances in some areas of medicine vanish as more evidence comes in.
So high a failure rate is all the more puzzling given that scientists take specific measures to reduce the risk of fluke results. The usual method is to employ so-called significance tests, in which the size of the effect found in an experiment is compared to that which could be produced by fluke alone. And the rule of thumb is that it's OK to claim to have discovered something if the chance of getting at least as impressive evidence by fluke is less than 5 per cent.
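To see the rule of thumb in action, here is a minimal sketch in Python of the kind of calculation involved (the coin-flip scenario and all the numbers are invented for illustration): it computes the chance that fluke alone would produce a result at least as impressive as the one observed.

```python
from math import comb

def one_sided_p(heads: int, flips: int) -> float:
    """Chance of getting at least `heads` heads from `flips`
    tosses of a fair coin - the 'fluke alone' probability."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# Suppose an experiment sees 60 heads in 100 tosses of a coin
# suspected of bias. Could fluke alone explain it?
p = one_sided_p(60, 100)
print(f"p = {p:.3f}")  # roughly 0.028 - under the 5 per cent threshold
```

By the rule of thumb, a researcher seeing this would feel entitled to claim the coin is biased - which is precisely the practice the rest of the article calls into question.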
At last week's announcement, the LHC scientists revealed evidence somewhat more impressive than this, and from two independent experiments. So why the lack of confidence? After all, aren't the chances of their findings being a fluke therefore much less than 5 per cent?
If only that were true. Bitter experience has taught physicists that the time-honoured rule of thumb just doesn't give this level of security. If it did, just one in 20 findings would fall by the wayside. In reality, a far higher proportion of "breakthroughs" vanish, never to be seen again. That has led particle physicists to refuse to claim a discovery until the evidential standard is more than a hundred thousand times stricter than that used in other fields.
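The gulf between the two standards can be put in numbers. Particle physicists conventionally demand evidence at the "five sigma" level - five standard deviations beyond a chance fluctuation - before claiming a discovery. A short sketch (assuming that standard convention) shows what this implies:

```python
from math import erfc, sqrt

def tail_p(sigma: float) -> float:
    """One-sided chance of a standard normal fluctuation beyond `sigma`."""
    return 0.5 * erfc(sigma / sqrt(2))

p5 = tail_p(5.0)
print(f"5-sigma p-value: {p5:.1e}")               # about 2.9e-07
print(f"stricter than 5% by: {0.05 / p5:,.0f}x")  # about 170,000x
```

In other words, where most fields accept a one-in-20 fluke risk, particle physicists hold out for roughly one in three million.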
That seems to work for physicists, but it leaves a disturbing further question: why are scientists working in other fields still so blasé about the known failings of the rule of thumb?
It's not as if they've not been warned: statisticians have been pointing out its deficiencies for decades. One of the most basic is that the rule doesn't mean what it seems to.
Many if not most scientists believe that applying the rule to their findings means there is only a 5 per cent chance that their result is a fluke. In fact, it means no such thing - not least because the 5 per cent figure is calculated assuming that fluke is the explanation of the result. As such, it cannot also give the chances that fluke really is the explanation. That's akin to assuming a ruler is accurate, then using that same ruler to check its own accuracy. Yet this illogic is routinely applied in the analysis of countless research studies.
Nor does the rule of thumb take account of the inherent plausibility of the discovery. Many scientists like to trot out the mantra that "extraordinary claims demand extraordinary evidence". Yet when it comes to the results of their own work, they cheerfully apply a rule that treats everything the same - from claims for the Higgs boson to the curative value of homeopathy.
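The cost of ignoring plausibility can be made concrete with a back-of-envelope Bayes'-theorem sketch (the prior and statistical-power figures below are invented for illustration). It asks: of all the results that pass the 5 per cent test, what fraction are flukes?

```python
def fluke_fraction(prior_real: float, power: float, alpha: float = 0.05) -> float:
    """Of results passing the 5 per cent test, the fraction that are
    flukes, via Bayes' theorem: false positives / all positives."""
    false_pos = alpha * (1 - prior_real)   # flukes that pass the test
    true_pos = power * prior_real          # real effects that pass
    return false_pos / (false_pos + true_pos)

# Mundane claim: half the hypotheses tested are real, studies well-powered
print(f"{fluke_fraction(prior_real=0.5, power=0.8):.0%}")  # about 6%
# Extraordinary claim: only 1 in 10 hypotheses real, modest power
print(f"{fluke_fraction(prior_real=0.1, power=0.5):.0%}")  # about 47%
```

The same 5 per cent threshold thus carries very different fluke risks depending on how plausible the claim was to begin with - which goes some way to explaining why so many "breakthroughs" in long-shot fields later vanish.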
Techniques for getting round these deficiencies do exist. Perhaps the simplest is the one adopted by particle physicists - use a far more demanding version of the standard rule of thumb, and hope this compensates for its failings.
But most scientific research has no hope of meeting so stringent a criterion. Instead, it must be analysed using the more sophisticated ways of evaluating discoveries developed by statisticians.
Worryingly, most scientists show little interest in giving up their hopelessly flawed rule of thumb. But until they do, they should stop complaining about their vanishing breakthroughs or wondering why they don't have the kudos of particle physicists.
Robert Matthews is visiting reader in science at Aston University, Birmingham, England.