Why Google’s Quantum Supremacy Milestone Matters

John Clark

Nov 1, 2019, 5:25:21 PM
to everyth...@googlegroups.com
Quantum computer expert Scott Aaronson wrote an editorial in the October 30, 2019, New York Times:


   Why Google’s Quantum Supremacy Milestone Matters
     By Scott Aaronson

Google officially announced last week in the journal Nature that it achieved the milestone of “quantum supremacy.” This phrase, coined by the physicist John Preskill in 2012, refers to the first use of a quantum computer to make a calculation much faster than we know how to do it with even the fastest supercomputers available. The calculation doesn’t need to be useful: much like the Wright Flyer in 1903, or Enrico Fermi’s nuclear chain reaction in 1942, it only needs to prove a point.

Over the last decade, together with students and colleagues, I helped develop much of the theoretical underpinning for quantum supremacy experiments like Google’s. I reviewed Google’s paper before it was published. So the least I can do is to try to explain what it means.

Until recently, every computer on the planet — from a 1960s mainframe to your iPhone, and even inventions as superficially exotic as “neuromorphic computers” and DNA computers — has operated on the same rules. These were rules that Charles Babbage understood in the 1830s and that Alan Turing codified in the 1930s. Through the course of the computer revolution, all that has changed at the lowest level are the numbers: speed, amount of RAM and hard disk, number of parallel processors.

But quantum computing is different. It’s the first computing paradigm since Turing that’s expected to change the fundamental scaling behavior of algorithms, making certain tasks feasible that had previously been exponentially hard. Of these, the most famous examples are simulating quantum physics and chemistry, and breaking much of the encryption that currently secures the internet.

In my view, the Google demonstration was a critical milestone on the way to this vision. At a lab in Santa Barbara, Calif., a Google team led by John Martinis built a microchip called “Sycamore,” which uses 53 loops of wire around which current can flow at two different energies, representing a 0 or a 1. The chip is placed into a dilution refrigerator the size of a closet, which cools the wires to a hundredth of a degree above absolute zero, causing them to superconduct. For a moment — a few tens of millionths of a second — this makes the energy levels behave as quantum bits or “qubits,” entities that can be in so-called superpositions of the 0 and 1 states.

This is the part that’s famously hard to explain. Many writers fall back on boilerplate that makes physicists howl in agony: “imagine a qubit as just a bit that can be both 0 and 1 at the same time, exploring both possibilities simultaneously.” If I had room for the honest version, I’d tell you all about amplitudes, the central concept of quantum mechanics since Werner Heisenberg, Erwin Schrödinger and others discovered it in the 1920s.

Here’s a short version: In everyday life, the probability of an event can range only from 0 percent to 100 percent (there’s a reason you never hear about a negative 30 percent chance of rain). But the building blocks of the world, like electrons and photons, obey different, alien rules of probability, involving numbers — the amplitudes — that can be positive, negative, or even complex (involving the square root of -1). Furthermore, if an event — say, a photon hitting a certain spot on a screen — could happen one way with positive amplitude and another way with negative amplitude, the two possibilities can cancel, so that the total amplitude is zero and the event never happens at all. This is “quantum interference,” and is behind everything else you’ve ever heard about the weirdness of the quantum world.
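To make the cancellation concrete, here is a toy calculation (an illustrative sketch, not anything from the experiment): two paths with opposite-sign amplitudes annihilate each other, whereas ordinary probabilities could only add.

```python
# Toy illustration of amplitudes vs. ordinary probabilities.
# A photon can reach a detector via two paths; each path contributes
# a complex amplitude, and the detection probability is the squared
# magnitude of their SUM.

path_a = 0.5 + 0j   # amplitude for path A
path_b = -0.5 + 0j  # amplitude for path B (note the negative sign)

# Quantum rule: add the amplitudes first, then square.
p_quantum = abs(path_a + path_b) ** 2
print(p_quantum)    # 0.0 -- the paths interfere destructively

# Classical rule: square each, then add (no cancellation possible).
p_classical = abs(path_a) ** 2 + abs(path_b) ** 2
print(p_classical)  # 0.5
```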

Now, a qubit is just a bit that has some amplitude for being 0 and some other amplitude for being 1. If you look at the qubit, you force it to decide, randomly, whether to “collapse” to 0 or 1. But if you don’t look, the two amplitudes can undergo interference — producing effects that depend on both amplitudes, and that you can’t explain by the qubit’s having been 0 or by its having been 1.
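A one-qubit sketch of that look-vs.-don't-look distinction (illustrative Python, representing the qubit as a two-amplitude vector and using the standard Hadamard gate):

```python
import numpy as np

# A qubit as a 2-vector of amplitudes [amp_for_0, amp_for_1].
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

one_H = H @ ket0           # amplitudes [0.707..., 0.707...]
print(np.abs(one_H) ** 2)  # [0.5, 0.5] -- measuring now is a coin flip

# Apply H again WITHOUT measuring in between: the two routes to |1>
# carry opposite-sign amplitudes and cancel, so the qubit returns
# to 0 with certainty. Measuring in between would destroy this.
two_H = H @ one_H
print(np.abs(two_H) ** 2)  # ~[1.0, 0.0] -- no randomness left
```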

Crucially, if you have, say, a thousand qubits, and they can interact (to form so-called “entangled” states), the rules of quantum mechanics are unequivocal that you need an amplitude for every possible configuration of all thousand bits. That’s 2 to the 1,000 amplitudes, much more than the number of atoms in the observable universe. If you have a mere 53 qubits, as in Google’s Sycamore chip, that’s still 2 to the 53 amplitudes, or about 9 quadrillion.
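The arithmetic is easy to check directly:

```python
# The numbers behind the article's claims:
print(2 ** 53)             # 9007199254740992 -- about 9 quadrillion
print(len(str(2 ** 1000))) # 302 -- a 302-digit count of amplitudes,
                           # dwarfing the ~10^80 atoms in the
                           # observable universe
```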

The goal, with Google’s quantum supremacy experiment, was to perform a contrived calculation involving 53 qubits that computer scientists could be as confident as possible really would take something like 9 quadrillion steps to simulate with a conventional computer. The qubits in Sycamore are laid out in a roughly rectangular grid, with each qubit able to interact with its neighbors. Control signals, sent by wire from classical computers outside the dilution refrigerator, tell each qubit how to behave, including which of its neighbors to interact with and when.

In other words, the device is fully programmable — that’s why it’s called a “computer.” At the end, the qubits are all measured, yielding a random string of 53 bits. Whatever sequence of interactions was used to produce that string — in the case of Google’s experiment, the interactions were simply picked at random — you can then rerun the exact same sequence again, to sample another random 53-bit string in exactly the same way, and so on as often as desired.
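A toy version of that rerun-and-sample loop (a sketch on 4 qubits with a single stand-in random unitary; Sycamore instead composes random one- and two-qubit gates on its grid):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                         # toy size; Sycamore used 53 qubits
dim = 2 ** n

# Stand-in for a randomly chosen circuit: a random unitary obtained
# by QR-factoring a random complex matrix.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                # all qubits start at 0
state = U @ state             # run the (fixed) random circuit once

# Rerun-and-measure as often as desired: each shot draws an n-bit
# string with probability |amplitude|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()          # guard against floating-point round-off
shots = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in shots])
```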

In its Nature paper, Google estimated that its sampling calculation — the one that takes 3 minutes and 20 seconds on Sycamore — would take 10,000 years for 100,000 conventional computers, running the fastest algorithms currently known. Indeed the task was so hard, Google said, that even directly verifying the full range of the results on classical computers was out of reach for its team. Thus, to check the quantum computer’s work in the hardest cases, Google relied on plausible extrapolations from easier cases.

IBM, which has built its own 53-qubit processor, posted a rebuttal. The company estimated that it could simulate Google’s device in a mere 2.5 days, a millionfold improvement over Google’s 10,000 years. To do so, it said, it would only need to commandeer the Oak Ridge Summit, the largest supercomputer that currently exists on earth — which IBM installed last year at Oak Ridge National Laboratory, filling an area the size of two basketball courts. (And which Google used for some of its simulations in verifying the Sycamore results.) Using this supercomputer’s eye-popping 250 petabytes of hard disk space, IBM says it could explicitly write down all 9 quadrillion of the amplitudes. Tellingly, not even IBM thinks the simulation would be especially easy — nor, as of this writing, has IBM actually carried it out. (The Oak Ridge supercomputer isn’t just sitting around waiting for jobs.)
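A back-of-envelope check of the storage claim (my arithmetic, not IBM's, assuming 8 bytes per single-precision complex amplitude):

```python
# Can 2^53 amplitudes fit in 250 petabytes of disk?
amplitudes = 2 ** 53
bytes_per_amplitude = 8          # single-precision complex: 4 + 4 bytes
total = amplitudes * bytes_per_amplitude
print(total / 1e15, "PB")        # ~72 PB -- and even double precision
                                 # (16 bytes, ~144 PB) squeezes in
```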

We’re now in an era where, with heroic effort, the biggest supercomputers on earth can still maybe, almost simulate quantum computers doing their thing. But the very fact that the race is close today suggests that it won’t remain close for long. If Google’s chip had used 60 qubits rather than 53, then simulating its results with IBM’s approach would require 30 Oak Ridge supercomputers. With 70 qubits, it would require enough supercomputers to fill a city. And so on.

Is there real science behind the spectacle of these two tech titans locking antlers? Is “quantum supremacy,” divorced from practical applications, an important milestone at all? When should we expect those practical applications, anyway? Assuming Google has achieved quantum supremacy, what exactly has it proved — and is it something anyone doubted in the first place?

Let’s start with applications. A protocol that I came up with a couple years ago uses a sampling process, just like in Google’s quantum supremacy experiment, to generate random bits. While by itself that’s unimpressive, the key is that these bits can be demonstrated to be random even to a faraway skeptic, by using the telltale biases that come from quantum interference. Trusted random bits are needed for various cryptographic applications, such as proof-of-stake cryptocurrencies (environmentally friendlier alternatives to Bitcoin). Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.

Other applications will almost certainly require more qubits, and of a higher quality — things that Google, IBM and the other players are racing to build. One major milestone to watch for next will be the first use of small quantum computers to simulate the quantum physics of chemicals and materials in a way that’s actually useful to chemists and materials scientists. Simulating quantum mechanics — that is, overcoming the exponential explosion of amplitudes in nature via a computer equipped with the same power — was the original application that the physicist Richard Feynman envisioned when he proposed the idea of a quantum computer in the early 1980s. It’s still the most important application we know — one that could aid in the design of everything from batteries and solar cells to fertilizers and lifesaving drugs.

An even bigger milestone will be the first practical demonstration of quantum error correction — a technology that, in theory, should be able to keep qubits alive for vastly longer amounts of time by cleverly encoding them across many physical qubits. Quantum computing researchers think that quantum error correction is what will ultimately let quantum computers scale beyond a couple hundred qubits, to the million- or billion-qubit machines that would fully realize Feynman’s dream. But this hasn’t been demonstrated yet, and no one knows when it will be.
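The textbook warm-up for that encoding idea is the 3-qubit bit-flip repetition code (a toy Python sketch, far short of real quantum error correction; it ignores phase errors entirely):

```python
import numpy as np

a, b = 0.6, 0.8                      # one logical qubit: a|0> + b|1>

# Encode across three physical qubits: a|000> + b|111>.
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

def flip(state, qubit):
    """Apply a bit-flip error (or correction) to one physical qubit."""
    out = np.empty_like(state)
    for i in range(len(state)):
        out[i ^ (1 << qubit)] = state[i]
    return out

state = flip(state, 1)               # noise flips the middle qubit:
                                     # now a|010> + b|101>

# Parity checks (the "syndrome") locate the flip without ever
# measuring the logical amplitudes a and b themselves.
def parity(i, q1, q2):
    return ((i >> q1) ^ (i >> q2)) & 1

support = [i for i in range(8) if abs(state[i]) > 0]
s01 = parity(support[0], 0, 1)       # do qubits 0 and 1 agree?
s12 = parity(support[0], 1, 2)       # do qubits 1 and 2 agree?
assert (s01, s12) == (1, 1)          # both checks fail -> qubit 1 flipped

state = flip(state, 1)               # apply the correction
print(state[0b000], state[0b111])    # (0.6, 0.8) -- logical qubit intact
```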

In the meantime, Google’s demonstration is a crucial proof of concept. Building a quantum computer is so hard that, ever since serious efforts began in the mid-1990s, some distinguished scientists have argued that the task would be impossible. Qubits, they said, will always prove too fragile to control. If quantum mechanics seems to predict that you can harness an exponential number of amplitudes for computation, then so much the worse for our present understanding of quantum mechanics.

Google’s demonstration should give these skeptics pause. To all appearances, a 53-qubit device really was able to harness 9 quadrillion amplitudes for computation, surpassing (albeit for a special, useless task) all the supercomputers on earth. Quantum mechanics worked: an outcome that’s at once expected and mind-boggling, conservative and radical.

The computer revolution was enabled, in large part, by a single invention: the transistor. Before transistors, we were stuck with failure-prone vacuum tubes. Yet vacuum tubes kind of, sort of worked: they translated abstract Boolean logic into electrical signals reliably enough to be useful. We don’t yet have the quantum computing version of the transistor — that would be quantum error correction. Getting there will surely require immense engineering, and probably further insights as well. In the meantime, though, the significance of Google’s quantum supremacy demonstration is this: after a quarter century of effort, we are now, finally, in the early vacuum tube era of quantum computing.

Scott Aaronson is David J. Bruton Centennial Professor of Computer Science at the University of Texas at Austin, and the founding director of UT’s Quantum Information Center. He’s the author of “Quantum Computing Since Democritus,” and blogs about quantum computing and other topics at Shtetl-Optimized.

Alan Grayson

Nov 2, 2019, 1:32:26 AM
to Everything List
The thing I don't get is the role of superposition. If a qubit is neither zero nor one, but a superposition of both, how can it be useful to calculate anything? AG

Philip Thrift

Nov 2, 2019, 5:42:55 AM
to Everything List


This was a pretty pointless article.

Aaronson says nothing about what is potentially useful for QCs (if they ever work as the "theory" suggests) --

How parallel computing can be achieved with qubits:

John Clark

Nov 2, 2019, 7:33:44 AM
to everyth...@googlegroups.com
On Sat, Nov 2, 2019 at 5:42 AM Philip Thrift <cloud...@gmail.com> wrote:

> This was a pretty pointless article. Aaronson says nothing about what is potentially useful for QCs

 Aaronson: "One major milestone to watch for next will be the first use of small quantum computers to simulate the quantum physics of chemicals and materials in a way that’s actually useful to chemists and materials scientists. Simulating quantum mechanics — that is, overcoming the exponential explosion of amplitudes in nature via a computer equipped with the same power" 

> How parallel computing can be achieved with qubits:

All quantum computers are inherently parallel; the key attribute to note is their scalability. Aaronson says:
"We’re now in an era where, with heroic effort, the biggest supercomputers on earth can still maybe, almost simulate quantum computers doing their thing. But the very fact that the race is close today suggests that it won’t remain close for long. If Google’s chip had used 60 qubits rather than 53, then simulating its results with IBM’s approach would require 30 Oak Ridge supercomputers [the largest conventional computer on Earth]. With 70 qubits, it would require enough supercomputers to fill a city. And so on."

John K Clark

Philip Thrift

Nov 2, 2019, 8:45:29 AM
to Everything List


On Saturday, November 2, 2019 at 12:32:26 AM UTC-5, Alan Grayson wrote:

> On Friday, November 1, 2019 at 3:25:21 PM UTC-6, John Clark wrote:
>
>> Quantum computer expert Scott Aaronson wrote an editorial in the October 30, 2019, New York Times:

> The thing I don't get is the role of superposition. If a qubit is neither zero nor one, but a superposition of both, how can it be useful to calculate anything? AG

That's what he doesn't explain:


Quantum parallelism arises from the ability of a quantum memory register to exist in a superposition of base states. Each component of this superposition may be thought of as a single argument to a function. A function performed once on the register in a superposition of states is performed on each of the components of the superposition. Since the number of possible states is 2^n, where n is the number of qubits in the quantum register, you can perform in one operation on a quantum computer what would take an exponential number of operations on a classical computer. This is fantastic, but the more superposed states that exist in your register, the smaller the probability of measuring any particular one becomes.


As an example, suppose that you are using a quantum computer to calculate the function F(x) = 2*x mod 7, where x is a superposition of the integers between 0 and 7 inclusive. You could prepare a quantum register in an equally weighted superposition of the states 0-7. Then you could perform the 2*x mod 7 operation once, and the register would contain the equally weighted superposition of the 0, 2, 4, 6, 1, 3, 5, 0 states, these being the outputs of the function 2*x mod 7 for inputs 0-7. When measuring the quantum register you would have a 2/8 chance of measuring 0, and a 1/8 chance of measuring any of the other outputs. It would seem that this sort of parallelism is not useful, as the more we benefit from parallelism the less likely we are to measure the value of the function for a particular input. Some clever algorithms have been devised, most notably by Peter Shor, which succeed in using quantum parallelism on a function where there is interest in some property of all the inputs, not just a particular one.
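That example is easy to simulate directly (an illustrative sketch, not from the quoted source). Note that the input register keeps the branches distinct, so it is the probabilities, not the amplitudes, that add for the repeated output:

```python
import numpy as np
from collections import Counter

# Apply f(x) = 2*x mod 7 to every branch of an equal superposition
# over the inputs 0..7 "at once".
inputs = range(8)
outputs = [(2 * x) % 7 for x in inputs]
print(outputs)                 # [0, 2, 4, 6, 1, 3, 5, 0]

# Each branch carries amplitude 1/sqrt(8). Output 0 arises from two
# distinct branches (x=0 and x=7), so its measurement probability is
# 2/8; every other output has probability 1/8.
amp = 1 / np.sqrt(8)
prob = Counter()
for y in outputs:
    prob[y] += amp ** 2
print(dict(prob))              # {0: 0.25, 2: 0.125, 4: 0.125, ...}
```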


This kind of parallelism is very appealing for simulation on a parallel computer. An n-qubit quantum register contains a superposition of its 2^n possible base states, and we represent this by an array of 2^n complex amplitudes, whose squared magnitudes are the probabilities of measuring the register in the corresponding base state. To perform an operation on the quantum register, we simply modify each of the 2^n array locations. By splitting the updates of the array locations into even ranges, one per processing element, we get nearly linear speedup.
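A minimal sketch of that array-of-amplitudes simulation (illustrative Python; a parallel version would hand disjoint ranges of the update loop to different workers, since the iterations are independent):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, qubit):
    """Apply a 2x2 gate to `qubit` of an n-qubit state vector."""
    step = 1 << qubit
    for i in range(len(state)):
        if i & step == 0:        # visit each (0, 1) pair of entries once
            a0, a1 = state[i], state[i | step]
            state[i] = gate[0, 0] * a0 + gate[0, 1] * a1
            state[i | step] = gate[1, 0] * a0 + gate[1, 1] * a1

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                   # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):               # Hadamard every qubit
    apply_single_qubit_gate(state, H, q)
print(np.round(np.abs(state) ** 2, 3))  # uniform 0.125 over all 8 states
```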



@philipthrift 

Philip Thrift

Nov 2, 2019, 9:01:14 AM
to Everything List
It's the how (how parallel computing can be achieved with qubits) that's missing.

Someone reading his article without knowing how multiple computational trajectories happen in parallel would be lost, I think.

@philipthrift

