Google has announced its new 105-qubit superconducting chip, code-named Willow, which completed a quantum supremacy experiment that would take at least 300 million years to simulate on a classical computer. More importantly, the chip shows how quantum hardware may achieve fault tolerance in a way that appears to unlock its scalability.
To make a long story short, Willow, introduced in a Nature paper, shows that it is possible to combine physical qubits into a logical qubit so that the error rate at the logical-qubit level decreases as the number of physical qubits grows:
We tested ever-larger arrays of physical qubits, scaling up from a grid of 3x3 encoded qubits, to a grid of 5x5, to a grid of 7x7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half.
A requirement for this to work is operating "below threshold", meaning that the error rate at the physical-qubit level is below a critical value set by the error-correcting code. This is what makes it possible for the logical error rate to decrease exponentially as more physical qubits are added.
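To make the scaling concrete, here is a minimal numerical sketch, assuming the textbook surface-code relation in which the logical error rate falls off as (p/p_th) raised to the power (d+1)/2 once the physical error rate p is below the threshold p_th. The constants below are illustrative assumptions, not Willow's measured values; distances 3, 5, and 7 correspond to the 3x3, 5x5, and 7x7 grids mentioned above.

```python
# Toy illustration of "below threshold" scaling in a surface code.
# All constants are illustrative assumptions, not Willow's measured values.

def logical_error_rate(p_physical, p_threshold, distance, prefactor=0.02):
    """Textbook surface-code scaling: error shrinks as (p/p_th)^((d+1)/2)."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

p_physical = 0.005   # assumed physical error rate, below the threshold
p_threshold = 0.01   # assumed threshold of the code (~1% for the surface code)

for d in (3, 5, 7):  # code distances of the 3x3, 5x5, and 7x7 grids
    print(f"distance {d}: logical error ≈ {logical_error_rate(p_physical, p_threshold, d):.2e}")

# Since p/p_th = 0.5 here, each step from d to d+2 halves the logical error,
# mirroring the factor-of-two suppression Google reports; the decrease is
# exponential in the code distance as long as p stays below the threshold.
```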
Commenting on the announcement, Scott Aaronson wrote that, while not revolutionary, this evolutionary step crowns a 30-year effort toward fault tolerance in quantum computing and crosses an important threshold, allowing one to foresee a moment when "logical qubits [will] be preserved and acted on for basically arbitrary amounts of time, allowing scalable quantum computation".
It is important to understand that Google's result is limited to a single logical qubit. Furthermore, it only shows that logical qubits can be scaled up while reducing the error rate, not that a low-enough error rate has been achieved. Indeed, Willow's logical error rate is of the order of 10⁻³, whereas, according to Aaronson, Google aims to reach a 10⁻⁶ error rate before claiming to have created a truly fault-tolerant qubit.
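To get a feel for the gap between those two figures, a back-of-the-envelope extrapolation (an assumption of mine, not a Google roadmap number) helps: if each increase of the code distance by two keeps halving the logical error rate, moving from roughly 10⁻³ to 10⁻⁶ takes about ten further halvings.

```python
import math

# How many further "error-halving" steps separate 10^-3 from 10^-6?
# Assumes the factor-of-two suppression per code-distance step keeps holding.
current_error, target_error, suppression = 1e-3, 1e-6, 2.0

steps = math.ceil(math.log(current_error / target_error, suppression))
print(steps)          # ~10 steps of distance d -> d+2
print(7 + 2 * steps)  # e.g. distance 7 would have to grow to about 27
```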
To further put today's result into perspective, it is important to keep in mind where quantum computing is headed and where it stands now, as Aaronson explains:
To run Shor’s algorithm at any significant scale, we’ve known for decades that you’ll need error-correction, which (as currently understood) induces a massive blowup, on the order of at least hundreds of physical qubits per logical qubit. That’s exactly why Google and others are now racing to demonstrate the building blocks of error-correction (like the surface code), as well as whatever are the most impressive demos they can do without error-correction (but these tend to look more like RCS and quantum simulation than factoring).
As a key side note, it is currently thought that running Shor's algorithm at a cryptographically relevant scale will require at least 1730 logical qubits. While this may sound discouraging, it will surely reassure anyone fearing that classical cryptography is close to being broken.
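For a sense of scale, the following back-of-the-envelope multiplication shows why such machines are still far off; the per-logical-qubit figures are assumptions spanning the "hundreds of physical qubits per logical qubit" Aaronson mentions, not published estimates.

```python
# Rough physical-qubit budget for running Shor's algorithm with error correction.
# The per-logical-qubit overheads are illustrative assumptions, not Google figures.
logical_qubits = 1730

for physical_per_logical in (200, 500, 1500):
    total = logical_qubits * physical_per_logical
    print(f"{physical_per_logical} physical per logical -> {total:,} physical qubits")

# Even the most optimistic line is three to four orders of magnitude beyond
# Willow's 105 physical qubits.
```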
Another major achievement of Willow is completing an experiment based on random circuit sampling (RCS) in under five minutes, thus pushing the quantum supremacy frontier further out.
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10²⁵ or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe.
RCS can be seen as a very basic check that a quantum computer is doing something that cannot be done on a classical computer, says Google Quantum AI lead Hartmut Neven. Yet it merely samples from a random distribution with no practical value, one that happens to be very hard for a classical computer to simulate and for which a more efficient classical algorithm might still be found, objects physicist and science communicator Sabine Hossenfelder.
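For readers who want to see what RCS and its verification look like in miniature, here is a toy sketch (my own illustration in plain NumPy, not Google's benchmark code): it draws a random unitary standing in for a deep random circuit, samples bitstrings from it, and scores them with the linear cross-entropy benchmark (XEB) used in these experiments.

```python
import numpy as np

# Toy random circuit sampling (RCS) on a handful of qubits, pure NumPy.
# Real experiments use ~70 qubits and specific gate sets; this only shows the idea.
rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

# Stand-in for a deep random circuit: a Haar-random unitary on all qubits.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
unitary = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# Ideal output distribution starting from |0...0>.
probs = np.abs(unitary[:, 0]) ** 2
probs /= probs.sum()

# The "quantum computer": sample bitstrings from the ideal distribution.
samples = rng.choice(dim, size=2000, p=probs)

# Linear XEB fidelity: close to 1 for a faithful sampler, close to 0 for noise.
print(f"XEB, ideal sampler:  {dim * probs[samples].mean() - 1:.2f}")
print(f"XEB, random guesses: {dim * probs[rng.integers(dim, size=2000)].mean() - 1:.2f}")
```

Note that scoring the samples requires the ideal output probabilities, which is exactly the computation that becomes classically infeasible at Willow's scale; hence the reliance on extrapolation discussed next.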
Furthermore, since the results take so long to validate on classical hardware, Google's validation necessarily relies on extrapolation, so skeptics could argue that the proclaimed error-rate reduction is only partially verified. As Aaronson remarks, this underscores the importance of designing efficiently verifiable near-term quantum experiments.
If you are interested in a deeper critique of Google's claims about Willow, you will not want to miss Israeli mathematician Gil Kalai's analysis.
Without doubt, the road ahead for quantum computing is still long. As Neven explains, the next challenge for Google is creating a chip integrating thousands of physical qubits with a 10⁻⁶ error rate, then demonstrating the first logical gate acting on two logical qubits, and finally scaling up the hardware so that useful computations become possible.