Anyone who picks mushrooms knows that it is better to separate the poisonous ones from the edible ones, not least because of what would happen if someone ate the poisonous ones. In such “classification problems”, which require us to distinguish objects from one another and to assign them to classes on the basis of their characteristics, computers can already provide useful support to humans.
Intelligent machine learning methods can recognize patterns or objects and automatically pick them out of datasets. For example, they could select from a photo database those images that show non-poisonous mushrooms. Especially with very large and complex datasets, machine learning can deliver valuable results that humans would not be able to discover at all, or only after far more time. However, for some computational tasks, even the fastest computers available today reach their limits. This is where the great promise of quantum computers comes in: that one day they will perform super-fast calculations that classical computers cannot complete in a useful amount of time.
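To make the idea of a classification problem concrete, here is a minimal sketch of a binary classifier in Python. The feature names, weights, and threshold are all invented for illustration; a real system would learn them from labeled data.

```python
# A toy linear classifier for the mushroom example. The two features
# (say, cap diameter in cm and an odor score) and the fitted weights
# are hypothetical -- this only illustrates what "classification" means.

def classify(features, weights, bias):
    """Return 'poisonous' if the weighted score exceeds zero, else 'edible'."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "poisonous" if score > 0 else "edible"

# Weights a learning algorithm might have fitted from example mushrooms.
weights, bias = [0.4, 1.2], -3.0

print(classify([2.0, 0.5], weights, bias))  # small, faint odor -> "edible"
print(classify([5.0, 2.5], weights, bias))  # large, strong odor -> "poisonous"
```

Machine learning, in essence, is the process of choosing such weights automatically so that the classifier agrees with as many known examples as possible.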
The reason for this “quantum supremacy” lies in physics: quantum computers calculate and process information by exploiting certain states and interactions that occur within atoms or molecules or between elementary particles.
The fact that quantum states can superpose and intermingle creates a foundation that gives quantum computers access to a fundamentally richer set of processing logic. Unlike classical computers, quantum computers do not calculate with binary codes or bits, which represent information only as 0 or 1, but with quantum bits, or qubits, which correspond to the quantum states of particles. The crucial difference is that a qubit can realize not only a single state – 0 or 1 – per computational step, but also a state in which the two are superposed. These more general ways of processing information in turn allow a drastic acceleration of computation for certain problems.
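The idea of a superposed qubit can be sketched in a few lines of ordinary Python. This is a simplified single-qubit picture, not a quantum program: a qubit state is described by two amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import math

# A (pure) qubit state is a pair of complex amplitudes (a, b) with
# |a|^2 + |b|^2 = 1. Measurement gives outcome 0 with probability |a|^2
# and outcome 1 with probability |b|^2.

def measurement_probs(a, b):
    """Probabilities of measuring 0 and 1, normalizing for safety."""
    norm = abs(a) ** 2 + abs(b) ** 2
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# The classical bit values correspond to the states |0> and |1>:
print(measurement_probs(1, 0))  # certain outcome 0

# An equal superposition of |0> and |1>: both outcomes equally likely.
h = 1 / math.sqrt(2)
print(measurement_probs(h, h))  # roughly (0.5, 0.5)
```

The superposed state in the last line has no classical counterpart: before measurement, the qubit is neither 0 nor 1, and quantum algorithms exploit exactly this.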
Translating classical wisdom into the quantum realm
These speed advantages of quantum computing are also an opportunity for machine learning applications. After all, quantum computers could process the enormous amounts of data that machine learning methods need to improve the accuracy of their results much faster than classical computers can.
However, to truly harness the potential of quantum computing, classical methods of machine learning must be adapted to the peculiarities of quantum computers. For example, algorithms, that is, the rules of mathematical computation that describe how a classical computer solves a certain problem, must be formulated differently for quantum computers. Developing well-functioning “quantum algorithms” for machine learning is not entirely trivial, as there are still a few hurdles to overcome along the way.
On the one hand, this is due to quantum hardware. At ETH Zurich, researchers currently have quantum computers that operate with up to 17 qubits (see “ETH Zurich and PSI found the Quantum Computing Hub” from May 3, 2021). However, if quantum computers are ever to realize their full potential, they might need thousands or even hundreds of thousands of qubits.
Quantum noise and the inevitability of errors
One challenge quantum computers face is their vulnerability to error. Today’s quantum computers operate with a very high level of “noise”, as errors and disturbances are called in technical jargon. For the American Physical Society, this noise is “the major obstacle to the rise of quantum computers”. There is as yet no comprehensive solution for correcting and mitigating errors: no way has been found to produce error-free quantum hardware, and quantum computers with 50 to 100 qubits are too small to run error-correction software or algorithms.
To some extent, one has to live with the fact that errors in quantum computing are inevitable in principle, since the quantum states on which concrete computational steps are based can only be distinguished and quantified probabilistically. What can be achieved, however, are procedures that limit noise and disturbances enough that the calculations nevertheless deliver reliable results. Computer scientists call a reliably working computational method “robust” and, in this context, also speak of the necessary “error tolerance”.
This is exactly what the research group led by Ce Zhang, professor of computer science at ETH and member of the ETH AI Center, recently explored, somewhat “accidentally”, during an effort to reason about the robustness of classical distributions in order to build better machine learning systems and platforms. Together with Professor Nana Liu from Shanghai Jiao Tong University and Professor Bo Li from the University of Illinois at Urbana-Champaign, they developed a new approach. It allows them to prove the robustness conditions of certain quantum-based machine learning models, under which quantum computation is guaranteed to be reliable and the result correct. The researchers published their approach, one of the first of its kind, in the scientific journal npj Quantum Information.
Protection against errors and hackers
“When we realized that quantum algorithms, like classical algorithms, are prone to errors and disturbances, we asked ourselves how we can estimate these sources of errors and disturbances for certain machine learning tasks, and how we can guarantee the robustness and reliability of the chosen method,” explains Zhikuan Zhao, a postdoctoral researcher in Ce Zhang’s group. “If we know this, we can trust the computational results, even if they are noisy.”
The researchers investigated this question using quantum classification algorithms as an example. After all, errors in classification tasks are tricky because they can have real-world consequences, for instance if poisonous mushrooms were classified as edible. Perhaps most importantly, using the theory of quantum hypothesis testing – inspired by other researchers’ recent work on applying hypothesis testing in the classical setting – which serves to distinguish quantum states, the ETH researchers determined a threshold above which the assignments of the quantum classification algorithm are guaranteed to be correct and its predictions robust.
Using their robustness method, the researchers can even verify whether classifying an erroneous, noisy input yields the same result as classifying a clean, noiseless input. From their findings, they also developed a protection scheme that can be used to specify the error tolerance of a computation, regardless of whether an error has a natural cause or is the result of manipulation by a hacking attack. Their robustness concept works for both natural errors and hacking attacks.
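The paper’s actual certificate is derived via quantum hypothesis testing and is not reproduced here. As a rough illustration of what “certified robustness” means, the sketch below uses a simpler, standard argument: for any measurement, the output probabilities of two quantum states can differ by at most their trace distance, so if the top class leads the runner-up by more than twice a bound on that distance, no perturbation within the bound can change the predicted class.

```python
# Toy robustness check (NOT the paper's hypothesis-testing bound).
# If the winning class probability exceeds the runner-up by more than
# 2 * D, where D bounds the trace distance between clean and perturbed
# input states, the classifier's decision provably cannot flip.

def is_certifiably_robust(class_probs, trace_distance_bound):
    """True if the classification is robust to any perturbation within the bound."""
    ranked = sorted(class_probs, reverse=True)
    margin = ranked[0] - ranked[1]
    return margin > 2 * trace_distance_bound

# Confident prediction: margin 0.55 exceeds 2 * 0.1, decision cannot flip.
print(is_certifiably_robust([0.75, 0.20, 0.05], 0.1))  # True

# Narrow prediction: margin 0.10 does not exceed 0.2, no guarantee.
print(is_certifiably_robust([0.55, 0.45], 0.1))  # False
```

The researchers’ hypothesis-testing approach yields a sharper threshold than this generic margin argument, but the structure of the guarantee is the same: a sufficiently confident output is provably immune to bounded noise or tampering.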
“The method can also be applied to a broader class of quantum algorithms,” explains Maurice Weber, doctoral student with Ce Zhang and first author of the publication. Since the impact of errors in quantum computing grows with the size of the system, he and Zhao are currently investigating this issue. “We are optimistic that our robustness conditions will prove useful, for example, in conjunction with quantum algorithms designed to better understand the electronic structure of molecules.”