Scientists say quantum computers could be built to operate up to a million times faster than conventional computers. But how do they work? And how close are we to putting them in homes and offices around the world?
By Andrew Bean, David Holmes, Sharon Shattuck, and Zach Thompson

For many years, futurists, physicists and computer scientists have looked to quantum computing as one of the next big frontiers in technology. Technology Review’s Charles Choi explains that quantum computers “could run more calculations in an instant than there are atoms in the universe.” And over the last few weeks the excitement has only grown, as recent developments in the field have observers like the New Yorker’s Gary Marcus slyly calling it a “quantum leap” in computing.

So what’s all the fuss about? If you haven’t been following the story, here’s a quick refresher: In March, the multinational corporation Lockheed Martin announced it would become the first company to use a quantum computer for commercial purposes. Then, a couple of weeks ago, the makers of Lockheed’s device, Vancouver’s D-Wave, landed another big customer: Google (observers put the purchase price at around $15 million). Finally, we saw the release of a scientific study conducted by Catherine McGeoch, professor of computer science at Amherst College, that showed D-Wave’s machines significantly outperforming classical computers on certain problems. These developments led to some pretty breathless claims in the press, like this Daily Mail gem dug up by Marcus calling Google’s purchase “a superfast quantum computer that could cure diseases, stop global warming and even learn to drive a car.”

Of course, the reality of quantum computers is much messier, and we still don’t know for sure how useful these machines will be in the near future, or for what tasks. Nevertheless, the mere whisper of the words “quantum computer” is like catnip to futurists, who are at once thrilled and frightened by a computer that’s up to one million times more powerful than a conventional machine, made possible by transcending mere 0s and 1s and embracing the infinite superpositions of quantum particles. The potential impact of quantum computers is huge for finance, cryptography, and extracting information from enormous sets of data like the human genome. One even solved a sudoku. And as our unquenchable thirst for “Big Data” becomes a potentially big problem for the environment, the exponentially increased efficiency of quantum computing (and its sidekick, the quantum Internet) looks even more attractive. It even came up during the Bitcoin debate, as observers feared that machines that don’t even exist yet could be used to easily hack a currency that barely exists (see our bitcoin explainer).

But despite the excitement and D-Wave’s claims, observers like MIT theoretical computer scientist Scott Aaronson are skeptical that we’re truly on the cusp of quantum computers that consistently outperform classical computers over a wide spectrum of tasks (Aaronson even refers to himself as “Chief D-Wave Skeptic”). And it’s possible that due to fundamental challenges of harnessing information from quantum particles, particularly the challenge of error correction, we may never achieve it. It’s the biggest of next big things, if only we knew it was possible.

But even if a million-times-faster quantum computer in every home is a long way off, the basic technology is already helping scientists in more subtle ways. The theoretical study of quantum computing has helped all kinds of scientists, from chemists to cryptographers, better understand their fields. In this explainer, we’ll get into the details of how quantum computers work, how they differ from conventional computers, and the techniques used by researchers at IBM and D-Wave to identify and extract information from quantum bits, or “qubits.” But before we can understand the future of computing, we need to look at its past, and the work done by the patron saint of modern computers, Intel co-founder Gordon Moore.

In 1965, after years spent building microchips at Fairchild Semiconductor (he would go on to co-found Intel three years later), Gordon Moore made a prediction that would prove eerily prescient. He predicted that every two years, the number of transistors chipmakers could fit on an integrated circuit would double, increasing a processor’s computing power exponentially over time. He expected the trend to last 10 years, but this growth rate held firm all the way into the 21st century.

[Chart: Moore's Law: Transistor Size vs. Time (1971-2008)]

As circuits get smaller and smaller, however, we’ll eventually hit a point where transistors approach the size of individual atoms. Even if manufacturers overcome the hardware challenges of building subatomic chips, weird things start to happen when you mess with particles that small. As physicists like Werner Heisenberg and Erwin Schrödinger postulated, merely observing a quantum particle can alter its properties.

But failing to harness these quantum effects could spell disaster for a computing industry that has relied on consistent, exponential growth for decades. So how can computer scientists and physicists use the maddening uncertainty of quantum mechanics to their advantage?

The easiest way to think of how conventional computers work is as a series of transistors that are either on or off. Or, in binary code, 1 or 0. The combination of on/off bits stores information or reflects the outcomes of computations. So, for example, if you have three bits, there are eight (2 x 2 x 2) possible states these bits can represent:

[Interactive graphic: Classical Bits]
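
To see all eight for yourself, here’s a quick Python snippet (our illustration, nothing the chipmakers actually run) that enumerates them:

```python
from itertools import product

# Three classical bits: each is 0 or 1, and the register holds exactly
# one of these combinations at any given moment.
states = list(product([0, 1], repeat=3))
for state in states:
    print(state)
print(len(states), "possible states (2 x 2 x 2 = 8)")
```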

But in a quantum computer, the bits are replaced by what are called qubits. A qubit is commonly an atom, an electron, or a photon. Instead of having a discrete value like the “on” or “off” switch of a transistor, a qubit’s value is a measurement of the direction of its spin, which can be “up,” “down,” or anything in between:

Unlike bits, a quantum bit can be both 1 and 0 at the same time. So while a three-bit classical computer can represent one and only one out of eight possibilities at a time, a three-qubit quantum computer like the one above can represent these eight possibilities at the same time, allowing for multiple computations to be completed in parallel.

[Interactive graphic: Qubits]
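
Mathematically, “representing all eight possibilities at once” means the register is described by eight amplitudes rather than a single value. Here’s a toy numpy sketch of that idea, using an equal superposition as the example (a simplified picture, not real quantum hardware):

```python
import numpy as np

# A 3-qubit register is described by 2**3 = 8 complex amplitudes, one per
# classical outcome 000..111. Squared magnitudes give the probability of
# observing each outcome when the register is measured.
n_qubits = 3
dim = 2 ** n_qubits

# Equal superposition: all eight outcomes carried along at once.
state = np.ones(dim, dtype=complex) / np.sqrt(dim)

for i, amp in enumerate(state):
    print(f"|{i:03b}>  amplitude {amp.real:+.3f}  probability {abs(amp) ** 2:.3f}")
```
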
And that makes for faster computers?

That’s the idea. In a conventional computer, an algorithm is fed into a set of transistors, and by cycling through a series of 0s and 1s, an answer is determined one step at a time. In a quantum computer, the machine can explore every possible configuration of the qubits very quickly, and researchers can then extract the most probable answer. Here’s a video mashup we made of different physicists and futurists explaining these processes:

Wait, what? Sounds like some theoretical science fiction-y nonsense like string theory or infinite alternate realities. Has anyone actually completed a quantum computation?

Yes! In 1994, mathematician Peter Shor wrote a quantum algorithm to find the prime factors of numbers. What’s the big deal about that, you ask? The toughest encryption and security codes depend on prime factorization being hard. So Shor devises this algorithm, and in 2001 IBM actually used it to find the prime factors of... <drum roll> 15.
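
One detail worth knowing: only one step of Shor’s algorithm, finding the “period” of repeated multiplication mod N, actually needs a quantum computer. The rest is ordinary number theory. Here’s a rough Python sketch of that classical scaffolding for 15, with the period brute-forced where a quantum computer would use interference:

```python
from math import gcd

def find_order(a, N):
    """Smallest r > 0 with a**r % N == 1. The quantum computer's only job
    is finding r; here we brute-force it classically."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_scaffolding(N, a):
    assert gcd(a, N) == 1, "pick a coprime to N"
    r = find_order(a, N)
    if r % 2 == 1:
        return None  # odd period: retry with a different a
    return gcd(a ** (r // 2) - 1, N), gcd(a ** (r // 2) + 1, N)

print(shor_classical_scaffolding(15, 7))  # -> (3, 5)
```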

That may not sound very special, but it was a huge step that, if built upon, could be used to factor much larger numbers, like the ones used to protect encrypted financial transactions. To give you an idea of how quantum computing could disrupt security and data encryption, a 193-digit number takes months to factor using classical computers, according to John Preskill, a theoretical physicist at Caltech. What about a 500-digit number? That would take a classical computer, oh, the approximate age of the universe to factor. But a quantum computer, Preskill says, could factor the 193-digit number in just 0.1 seconds. The 500-digit number would naturally take a bit longer: a whopping 2 seconds.

But what about the whole “if you measure a quantum particle it can change its value” thing?

If you’ve ever taken a physics class, you might remember things like the Heisenberg uncertainty principle, the double-slit experiment, and Schrödinger’s Cat. The takeaway from all three concepts is that when you try to measure something as small and erratic as an electron or photon, you risk altering its values. And even if you don’t alter it, you can’t always get all the information you want: You can know a particle’s position, but not its momentum, or vice versa. The same challenges arise when trying to extract information from qubits.

Take Schrödinger’s Cat, for instance: In the experiment, a cat’s life or death depends on a subatomic particle that has equal chances of having decayed or not. If the subatomic particle decays, the cat dies. If not, the cat lives. Mathematically speaking, the cat is in a superposition of both alive and dead, just like the qubits in our quantum computer are in a superposition of both 0 and 1. It is not until we open the box that we can know for sure. This raises some troubling questions. If the cat dies, does it die once the subatomic particle decays? Or does it only die when we open the box? Does the cat count as an “observer”?

[Animation: Schrödinger’s Cat]

Schrödinger’s Cat isn’t supposed to be taken literally. It’s a thought experiment designed to show that quantum systems stop doing the interesting things that make them quantum once they’re measured or observed classically. A cat can’t really be alive and dead at the same time, just like a classical transistor can’t be both on and off. So when dealing with quantum particles, you need a way to shield them from all classical interference, or “noise,” which otherwise destroys their quantum behavior through a process called “decoherence.” As Preskill puts it, “To prevent decoherence, we must prevent the environment from ‘learning’ about the state of the quantum computer during computation.” In other words, the cat can’t know if it’s dead or alive, and neither can we (until we open the box).

That’s where “entanglement” comes in. Entanglement occurs when the states of two quantum particles are so strongly correlated that neither can be described independently of the other. Preskill writes, “Suppose you have a 100-page book with print on every page. If you read 10 pages, you'll know 10 percent of the contents. And if you read another 10 pages, you'll learn another 10 percent. But in a highly entangled quantum book, if you read the pages one at a time—or even 10 at a time—you'll learn almost nothing. The information isn't written on the pages. It's stored in the correlations among the pages, so you have to somehow read all of them at once.” So unless noise has infiltrated the entire system, the doings of a quantum computer can stay secret. In this video, starting at 33:46, Preskill explains how entanglement helps reduce errors caused by noise in quantum computers:
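
To make Preskill’s book analogy concrete, here’s a toy simulation (ours, purely for illustration) of the simplest entangled system, a two-qubit “Bell pair.” Measuring either qubit alone gives a perfectly random bit, but the two results always agree; the information lives in the correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell pair (|00> + |11>) / sqrt(2): amplitudes over outcomes 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = bell ** 2  # probability of each joint outcome: [0.5, 0, 0, 0.5]

for outcome in rng.choice(4, size=10, p=probs):
    a, b = outcome >> 1, outcome & 1  # first and second qubit's measured bit
    print(f"qubit A = {a}, qubit B = {b}")

# Each bit on its own is a fair coin flip, yet A and B always match:
# the information lives in the correlation, not in either qubit alone.
```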

So how do we entangle the qubits so they’re able to process computations, free of noise? One method involves using ions (charged atoms, often cadmium or calcium) as qubits: trapping them, then firing lasers at the ions in a very careful, methodical way to induce coupling between the qubits’ states.

So if we can’t know anything about the qubits, how do we read the answer?

The states of the qubits need only remain secret during the calculation, in order to maintain the quantum superpositions needed for parallel processing. When the most probable solution is reached, the system can be collapsed back into a classical state and the output can be read.

We can measure the output in a couple of different ways, but here’s one popular method: After each qubit collapses into one of its two states (providing a classical solution to our problem), lasers are fired at the qubits so that only one of the two states lights up. This provides a binary read-out, but instead of “0”s and “1”s, we get “light” or “dark.”

Note: Not all quantum computers use entanglement, at least not on purpose. In fact, the quantum device Google purchased from D-Wave has yet to show entanglement in a way that demonstrably increases performance, which has caused some controversy over whether it’s truly a quantum computer at all. D-Wave’s device uses a process called “annealing”: its particles are cooled to temperatures near absolute zero so they settle into their lowest-energy state, and the energy levels are then altered in very gradual, complex ways until an optimal solution to a calculation is reached. This is one of the major sources of skepticism for observers like Scott Aaronson, who aren’t convinced that D-Wave’s technique can scale, at least not until the company proves that entanglement results in performance gains.
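
For a flavor of what “annealing” means, here’s a classical simulated-annealing toy in Python. It’s only a conceptual cousin of what D-Wave does (the quantum version exploits tunneling on superconducting hardware, not code like this): start the system “hot,” let it hop around a landscape of candidate solutions, and cool it slowly so it settles into a low-energy, near-optimal state.

```python
import math
import random

random.seed(1)

# Toy Ising-style problem: choose spins s_i in {-1, +1} to minimize the
# energy E = -sum over pairs of J_ij * s_i * s_j.
n = 8
J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    return -sum(J[i, j] * s[i] * s[j] for (i, j) in J)

spins = [random.choice([-1, 1]) for _ in range(n)]
T = 5.0
while T > 0.01:
    i = random.randrange(n)
    before = energy(spins)
    spins[i] *= -1  # propose flipping one spin
    worse_by = energy(spins) - before
    if worse_by > 0 and random.random() > math.exp(-worse_by / T):
        spins[i] *= -1  # usually reject moves that raise the energy
    T *= 0.99  # cool gradually so the system settles into a low-energy state

print("spins:", spins)
print("energy:", energy(spins))
```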

Futurist Christopher Barnatt calls quantum computing, “the hydrogen bomb of cyber warfare.” That’s because the massive computing potential allowed by quantum mechanics can be used to crack the most complex codes utilized by governments and financial institutions to encrypt data. It could make Stuxnet look like a high schooler’s prank.

In fact, last month, when Bitcoin’s valuations were way up, Andreessen Horowitz partner (and PandoDaily investor) Chris Dixon tweeted, “What could possibly bring down Bitcoin? Quantum computing, naturally!”

Many observers, however, like Business Insider’s Dylan Love, pointed out that as computing technology advances, so will encryption methods. There’s even an entire field devoted to post-quantum cryptography.

Nevertheless, as hacking and other forms of cyber warfare continue to pose a very real threat to the United States, security experts should be watching these developments very closely.

Between D-Wave’s quasi-quantum device and various ion-trapping machines, there are plenty of exciting developments on the quantum computing front. Last week, a D-Wave machine consisting of 439 qubits solved a set of optimization problems 3,600 times faster than a conventional computer.

That said, the largest number yet factored using Shor’s Algorithm is still only... 21 (the answer’s 3 and 7. Easy to come up with on paper, not so easy using atoms).

But quantum computing is about more than cracking codes. The D-Wave device Google announced last week will be used to improve machine learning. That means better robots and maybe dystopia, if you believe our editor Adam Penenberg. But it also means producing better models for understanding the world around us (and, since this is Google we’re talking about, better models for organizing and searching through all that data).

“Machine learning is all about building better models of the world to make more accurate predictions,” writes Google’s Director of Engineering Hartmut Neven in a blog post. “If we want to cure diseases, we need better models of how they develop. If we want to create effective environmental policies, we need better models of what’s happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer.”

How will it do this? In theory, by using a quantum computation known as Grover’s Algorithm. Neven explains in a 2009 post about Google’s research into quantum computing: “Let’s take unstructured search as an example. Assume I hide a ball in a cabinet with a million drawers. How many drawers do you have to open to find the ball? Sometimes you may get lucky and find the ball in the first few drawers but at other times you have to inspect almost all of them. So on average it will take you 500,000 peeks to find the ball. Now a quantum computer can perform such a search looking only into 1000 drawers.”
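
You can watch Neven’s numbers fall out of the math. Here’s a toy state-vector simulation of Grover’s algorithm (our sketch, run on a classical machine, so it’s simulating the quantum math rather than beating it) searching a million “drawers”: after roughly a thousand iterations, instead of 500,000 peeks, the marked drawer is almost certain to turn up.

```python
import math
import numpy as np

N = 2 ** 20            # about a million "drawers"
marked = 123_456       # the drawer hiding the ball

# Start in a uniform superposition: every drawer equally likely.
state = np.full(N, 1 / math.sqrt(N))

iters = int(math.pi / 4 * math.sqrt(N))  # ~804 iterations, vs. ~500,000 peeks
for _ in range(iters):
    state[marked] *= -1                  # "oracle": flip the marked amplitude
    state = 2 * state.mean() - state     # "inversion about the mean"

print(f"after {iters} iterations, P(marked drawer) = {state[marked] ** 2:.4f}")
# -> close to 1.0: measuring now almost certainly reveals the right drawer
```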

But isn’t a lot of it still purely theoretical?

Yes! But as Aaronson notes, even the theoretical frameworks of quantum computing are already having an impact on a number of fields, from chemistry to physics to plant biology. “Several research groups have used quantum-computing analogies to explain the remarkable light-harvesting efficiency of photosynthetic molecules in plants, and to suggest how solar panels might be designed with similar efficiencies,” writes Aaronson in a 2011 New York Times article. For more on quantum biology, check out this excellent overview in Nature.

And finally, there’s the “quantum Internet.” Using a quantum Internet, participants can send super-secure messages by taking advantage of the whole “observing a subatomic particle changes it” principle. After all, if anyone eavesdrops on a message encoded on a quantum particle, it will leave a very clear mark for the sender and receiver to see. Earlier this month, researchers at Los Alamos National Laboratory claimed they’ve been operating a quantum Internet for over two years. But right now these messages can only be sent between two nodes, making it more like a quantum telegraph than a quantum Internet.
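
The canonical recipe here is a protocol called BB84, in which the sender encodes random bits in randomly chosen measurement “bases.” Here’s a toy Python simulation of the idea (a simplified illustration; we’re not claiming this is how the Los Alamos system works): without an eavesdropper, sender and receiver agree perfectly wherever their bases happened to match, but an eavesdropper measuring in the wrong basis scrambles about a quarter of those bits, giving the game away.

```python
import random

random.seed(42)
n = 2000

# Toy BB84: Alice encodes random bits in randomly chosen bases ('+' or 'x').
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    """Measuring in the wrong basis yields a random bit."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(eavesdrop):
    bob_bits, bob_bases = [], []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            # Eve measures in a random basis, re-preparing the qubit in it.
            eve_basis = random.choice("+x")
            bit, basis = measure(bit, basis, eve_basis), eve_basis
        bob_basis = random.choice("+x")
        bob_bases.append(bob_basis)
        bob_bits.append(measure(bit, basis, bob_basis))
    # Keep only positions where Alice and Bob happened to pick the same
    # basis; eavesdropping shows up as disagreements among these bits.
    kept = [(a, b) for a, ab, b, bb in
            zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    return sum(a != b for a, b in kept) / len(kept)

print(f"error rate without Eve: {error_rate(False):.1%}")  # ~0%
print(f"error rate with Eve:    {error_rate(True):.1%}")   # ~25%
```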

So basically we have a long way to go, right?

Yes. For each of these technological applications of quantum physics, there are as many caveats as there are potential uses. Theoretically, we can factor any number the universe throws at us. But keeping a system of trapped-ion qubits noise-free is so challenging that we’re still stuck at factoring 21. And while a quantum computer might dominate a classical computer when it comes to a very particular algorithm or computation, classical computers are still far more versatile, able to be applied to a huge variety of different problems.

It’s possible instead that the future will bring about classical systems that are merely optimized by quantum devices. But even then, it could still lead to some pretty spectacular results. Imagine artificial intelligence powered in part by subatomic particles, just like human brains. Or think about how much more efficient those giant data centers will be if they’re able to harness the data-searching powers of Grover’s Algorithm, just like Google hopes. Quantum computing is simultaneously scary and exciting, outlandish yet seemingly within our reach. It could follow the pattern of so many world-shattering technologies that came before it, from automobiles to the Internet. Or it could end up in the dustbin of history alongside other scientific and technological follies.

The only question that remains then is: Is quantum computing the next automobile or Internet? Or merely the next cold fusion?