Machine learning has returned with a vengeance. I still remember the dark days of the late ’80s and ’90s, when it was pretty clear that the machine-learning algorithms of the day didn’t actually learn much of anything. Then big data arrived, computers became vastly more powerful, and machine learning came roaring back.
Quantum machine learning might provide an advantage here, as a recent paper on searching for Higgs bosons in particle-physics data seems to hint.
Learning from big data:
In the case of chess, and the first edition of the Go-conquering algorithm, the computer wasn’t just presented with the rules of the game. I’ll annoy every expert in the field by saying that the computer essentially correlated board arrangements and moves with future success.
I’m not saying this to disrespect machine learning but to point out that computers use their ability to gather and search for correlations in truly vast amounts of data to become experts—the machine played 5 million games against itself before it was unleashed on an unsuspecting digital opponent. A human player would have to complete a game every 18 seconds for 70 years to gather a similar data set.
Sometimes, however, you have a situation that would be perfect for this sort of big-data machine learning, except that the data is actually pretty small. That makes it quite difficult to apply machine learning, since there may not be enough data to train the algorithm in the first place.
This is the case for evaluating Higgs boson observations. The LHC generates data at inconceivable rates, even after lots of pre-processing to remove most of the uninteresting stuff. But even in the filtered data set, collisions that generate a Higgs boson are pretty rare.
To test whether quantum machine learning might be good at sorting through these combinations, the researchers programmed a quantum computer to optimize the 36 parameters to fit the given data and then classify collisions as either containing a Higgs or not. The parameters reach their optimum when the sum of all the energies of the magnets is at a minimum, and that minimum energy is used to make the Higgs/no-Higgs call.
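To get a feel for this style of optimization, here is a purely classical sketch: simulated annealing over an Ising-style energy of 36 coupled "magnets" (spins). The couplings here are random stand-ins, not the ones the researchers derived from collision data, and the real work ran on quantum annealing hardware rather than this loop.

```python
import random
import math

random.seed(0)

N = 36  # one spin per kinematic parameter, matching the 36 in the study

# Hypothetical couplings: in the real study these would encode how the
# kinematic variables relate to Higgs/no-Higgs in the training collisions.
h = [random.uniform(-1, 1) for _ in range(N)]
J = {(i, j): random.uniform(-0.5, 0.5)
     for i in range(N) for j in range(i + 1, N)}

def energy(s):
    """Ising-style energy of a spin configuration: lower is better."""
    e = -sum(h[i] * s[i] for i in range(N))
    e -= sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    return e

def anneal(steps=20000, t_start=2.0, t_end=0.01):
    """Flip one spin at a time, accepting uphill moves with Boltzmann
    probability while the temperature slowly cools."""
    s = [random.choice([-1, 1]) for _ in range(N)]
    e = energy(s)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)
        i = random.randrange(N)
        s[i] = -s[i]                 # propose a single-spin flip
        e_new = energy(s)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                # accept the flip
        else:
            s[i] = -s[i]             # reject: flip back
    return s, e

spins, e_min = anneal()
# The minimum energy (and the spin configuration that reaches it)
# would then feed the Higgs/no-Higgs decision.
```

A quantum annealer explores this same energy landscape physically, which is where any quantum advantage would have to come from.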
In the end, they identified a set of three parameters that were most sensitive and several that were completely insensitive, including the transverse mass of one of the emitted photons.
So far, so good. But there are a bunch of non-quantum machine-learning algorithms that should be able to do the same thing.
Searching for needles having never seen a needle:
The important difference between the classical and quantum algorithms showed up in the size of the training data set. When trained on around 200 collisions, the quantum algorithm significantly outperformed the classical algorithms.
After training on large data sets, on the other hand, the quantum algorithm was significantly worse than the classical ones. This, however, is probably a product of the performance of the current hardware rather than of the algorithm itself.
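To see why training-set size matters so much on the classical side, here is a toy learning curve: a from-scratch logistic regression trained on progressively more synthetic "collisions." Everything here is illustrative (made-up features, made-up data), not taken from the paper.

```python
import random
import math

random.seed(1)

def make_data(n):
    """Synthetic two-class events: 'Higgs' (1) vs 'background' (0),
    described by two made-up kinematic features."""
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        x = [random.gauss(1.0 if label else 0.0, 1.0) for _ in range(2)]
        data.append((x, label))
    return data

def train(data, epochs=200, lr=0.1):
    """Plain logistic regression fitted by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y                    # gradient of the log loss
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    correct = sum(1 for x, y in data
                  if (w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1))
    return correct / len(data)

test_set = make_data(2000)
for n in (50, 200, 2000):
    acc = accuracy(train(make_data(n)), test_set)
    print(f"trained on {n:5d} events: test accuracy {acc:.2f}")
```

With only a couple hundred events, the fitted boundary is noisy; with thousands, it converges toward the best the model can do. The paper's claim is that the quantum approach gets closer to that ceiling from the small end of the curve.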
I must admit to being faced with the unenviable task of changing my mind. In the past, I have been highly skeptical of machine learning and artificial intelligence in general. I was astonished at a recent conference when a speaker claimed that current AI was about the equivalent of a cat, a claim I find hard to credit.
It is inevitable that AI systems will do more and more tasks, even if they are limited to the role of assistants. At this point, there is not a single profession that I would say is safe from artificial intelligence, except those jobs that are too boring for an AI to be interested in learning.