So… this happened a few days ago.
"While most experts agree that realworld applications of quantum are still at least several years or longer away"
— Sabine Hossenfelder (@skdh) February 7, 2020
Make that several decades or show me one "expert" who says quantum computers will be commercially viable in a few years. @BullshitQuantum https://t.co/NEDqDIMKkv
I would like to chronicle a few interesting tweets that resulted from the above tweet. As you probably know, I am a quantum optimist, so the selection will be biased, but my intent is not to argue in favour of commercial quantum computing, but rather to point out interesting arguments.
Heuristics vs. Algorithms
Hi. I'm an expert. I'm a Professor in the field.
— Michael J. Biercuk (@MJBiercuk) February 8, 2020
I believe the first heuristic approaches may well achieve quantum advantage for approximate algorithms in less than 10y. I say that being an active practitioner and having funded this research via DARPA.
Shor = decades.
There’s a lot to unpack here; let’s start with the definition of a heuristic.
An algorithm is a program that solves a problem on all inputs; that is, it outputs the correct answer for every instance of the problem. In contrast, a heuristic is a program that solves a problem on most inputs; that is, it outputs the correct answer for most instances of the problem (typically, the instances that are most likely to arise in practice). To make this concrete, consider the addition problem, which takes a pair of numbers and outputs a number. An algorithm for addition always outputs the correct answer, while a heuristic outputs the correct answer for most pairs of numbers, but for some pairs it might output the wrong answer.^{1} The definition of a heuristic might seem similar to the definition of a bounded-error algorithm, but there is a subtle difference: a bounded-error algorithm outputs the correct answer with high probability for every input. One could combine this property with heuristics to get bounded-error heuristics, and indeed this is what people typically mean when they say heuristics. A closer-to-real-life example of a heuristic is facial-recognition software.^{2}
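To make the algorithm/heuristic distinction concrete in code, here is a toy sketch (not from the thread; the function names are mine). The "heuristic" adder routes through floating point, so it is correct on the vast majority of inputs but fails on some very large ones:

```python
def add_algorithm(a: int, b: int) -> int:
    """An algorithm: outputs the correct answer on every input."""
    return a + b

def add_heuristic(a: int, b: int) -> int:
    """A toy heuristic: goes through floating point, so it is correct
    on most pairs but can round off for very large numbers
    (floats only carry about 53 bits of precision)."""
    return int(float(a) + float(b))

# Correct on typical inputs...
assert add_heuristic(2, 3) == add_algorithm(2, 3)

# ...but wrong on some: adding 1 to 2**60 is lost to rounding.
big = 2**60
assert add_heuristic(big, 1) != add_algorithm(big, 1)
```

The point of the sketch is only the shape of the definition: the failing inputs form a small, structured subset, while the program is right "most of the time."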
If you have been following the field for some time, you know that we are slowly shifting from algorithms to heuristics. Some people believe that heuristics are flat-out bad—they don’t know what they are talking about. Heuristics are amazing; most of the world runs on heuristics (it is hard to come up with algorithms for most real-world problems, and in some cases we can prove that the general problem is NP-hard.^{3}) That being said, heuristics are really hard to analyze: deep learning almost died because of this. The best way we have to work with heuristics is to code them up and run them. This means that it is hard for us to know if a quantum heuristic actually works; we can do simulations, but that might not be enough to convince skeptics, or even to fine-tune our program to perform better. Going back to the deep learning analogy, the field was revived by cheap GPUs, which allowed researchers to test their heuristics and fine-tune them.^{4} Until we have large quantum computers, writing heuristics is like walking in the dark—we don’t know if we have made any progress.
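"Code it up and run it" is literally what empirical analysis of a heuristic looks like. A minimal classical sketch (my own example, not from the post): compare a greedy Max-Cut heuristic against brute force on small random graphs and count how often it hits the optimum—exactly the kind of experiment that is hard to run for quantum heuristics without real hardware.

```python
import itertools
import random

def exact_max_cut(adj):
    """Brute-force maximum cut of a small graph (adjacency matrix)."""
    n = len(adj)
    best = 0
    for bits in itertools.product([0, 1], repeat=n):
        cut = sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
                  if bits[i] != bits[j])
        best = max(best, cut)
    return best

def greedy_max_cut(adj):
    """Heuristic: place each vertex on the side that cuts more edges so far."""
    n = len(adj)
    side = []
    for v in range(n):
        gain0 = sum(adj[v][u] for u in range(v) if side[u] == 1)
        gain1 = sum(adj[v][u] for u in range(v) if side[u] == 0)
        side.append(0 if gain0 >= gain1 else 1)
    return sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])

random.seed(0)
trials, optimal_hits = 200, 0
for _ in range(trials):
    n = 8
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = random.randint(0, 1)
    if greedy_max_cut(adj) == exact_max_cut(adj):
        optimal_hits += 1
print(f"greedy matched the optimum on {optimal_hits}/{trials} instances")
```

For a quantum heuristic, the analogous loop needs either a classical simulation (which caps out quickly) or an actual device—hence the walking-in-the-dark problem.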
Algorithms are nicer in this respect: we can prove things about their behavior without actual devices. Shor did this with his amazing factoring algorithm. This is actually not that hard; for instance, I have two papers analyzing quantum algorithms, and no quantum computers were harmed in the process.
In principle, Shor’s algorithm is fast; that is, it runs in polynomial time. In practice, we care about the number of qubits and the number of gates more than asymptotic scaling. Actually, we don’t care about asymptotic scaling (sorry, man.) Last year, building on the work of many people, Craig Gidney and Martin Ekerå constructed an optimized circuit for Shor’s algorithm. Unfortunately (or fortunately), this circuit still needs on the order of a million qubits and a gate error (in trace distance) of $10^{-3}$ (which is not too crazy.) This seems out of reach of current quantum computer chip designs and architectures. Stop whatever you are doing and go read this awesome thread by @rdviii. Back to topic: so, yeah, millions of qubits might take a long time to get to.
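For context on what those million qubits would be doing: the only quantum part of Shor’s algorithm is finding the multiplicative order $r$ of a base $a$ modulo $N$; the rest is classical post-processing. For a tiny number like 15 we can brute-force the order classically—a sketch of the classical skeleton (my own toy code, not Gidney–Ekerå’s construction):

```python
from math import gcd

def classical_order(a, N):
    """Find the multiplicative order r of a mod N by brute force.
    This is the step a quantum computer would speed up."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Classical skeleton of Shor's algorithm for a chosen base a."""
    g = gcd(a, N)
    if g > 1:               # lucky: a already shares a factor with N
        return g
    r = classical_order(a, N)
    if r % 2 == 1:
        return None         # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None         # trivial square root: retry with another a
    return gcd(y - 1, N)

print(shor_factor(15, 7))   # order of 7 mod 15 is 4 → prints 3
```

The brute-force loop takes exponential time in the bit-length of $N$; replacing it with quantum period finding is the whole point—and is what demands those millions of error-corrected qubits for cryptographically sized $N$.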
Speed isn’t everything
Complexity theory cannot explain why people still use Microsoft Excel to do business analytics in large companies with algorithms that have a guaranteed slowdown compared to the best available classical algorithms. But they do. And they pay Microsoft a lot of $$$ to do so.
— Christopher Savoie 佐保井 久理須 (@cjsavoie) February 10, 2020
I really like this tweet—commercial computer science is not about stats, it is about products.
Interesting point: QCs could be commercially viable without outperforming their classical counterparts if they lead to a better product (usability, price, forward compatibility?)
— sanketh (@__c1own) February 10, 2020
Or energy consumption. A dil fridge runs on only 20-30 kWh with capacity to double compute power with each qubit added without adding any significant power consumption. The IBM Summit by contrast requires 15 MWh. Enough to power 7000 homes. Double the compute, double the burn.
— Christopher Savoie 佐保井 久理須 (@cjsavoie) February 10, 2020
Quantum can Quantum
I think demonstrating a quantum advantage for an application before 2030 is a reasonable prediction. I would say the initial practical impact will mostly be in industries that need to understand quantum mechanical systems better (e.g. materials/molecules), not across the board
— Guillaume Verdon (@quantumVerd) February 7, 2020
This needs no commentary; it has now become widely accepted within the community.

For a more formal treatment of heuristics see: Bogdanov, Andrej, and Luca Trevisan. “Average-case complexity.” Foundations and Trends® in Theoretical Computer Science 2, no. 1 (2006): 1–106. DOI: 10.1561/0400000004. ECCC: 2006/073. ↩

Another good example is antivirus, see the Wikipedia article on heuristic analysis. ↩

A lot of people look at NP-hardness as a bad thing—I don’t. If someone is able to prove that a real-world problem $\Pi$ is NP-hard, it means that they have formally written down $\Pi$ and then managed to show that it is at least as hard as an NP-hard problem. First, let us assume that $\Pi$ is in NP; this is true for almost all real-world problems. Now, there are two ways to solve this problem: (1) solve the NP-hard problem; that is, reduce your problem to SAT and use a SAT solver; (2) attack the formal description of $\Pi$—maybe the description is too general and we only care about some instances. So, showing NP-hardness is a step in the right direction. ↩
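To make option (1) concrete, here is the smallest possible "SAT solver" (my own illustration; real solvers like MiniSat are vastly smarter but accept the same clause format). Clauses use the DIMACS convention: literal $k$ means variable $|k|$, negated when $k < 0$.

```python
import itertools

def brute_force_sat(clauses, n_vars):
    """Tiny SAT solver: try all 2^n assignments (fine for small n).
    Returns a satisfying assignment as a tuple of booleans, or None."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(k) - 1] == (k > 0) for k in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))  # → (False, True, False)
```

Once a problem is reduced to this clause format, decades of SAT-solver engineering apply to it for free—which is why the reduction in (1) is a practical strategy and not just a complexity-theoretic statement.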

See Wikipedia for more on the history of deep learning. Another parallel with deep learning is the availability of data. Aside from hardware, deep learning suffered from a lack of data; this is true in quantum computing as well: we don’t have enough quantum data (data that can be efficiently loaded into a quantum state.) You might ask, why not just turn classical data into quantum data? Well, it is not that simple: naïve algorithms for this remove any quantum speedup for some tasks. An infamous example is the HHL algorithm; see, also, Ewin Tang’s classical analogue of HHL and Joe Fitzsimons’s post on QRAM. ↩