> The obvious solution is to write down the initial vector of size N=2^50 and start applying the quantum gates to the vector. […] The size is “just” a petabyte—or actually 1/8 of a petabyte.
Um, NO. A quantum system is described by a complex amplitude for each of those states. You need more like 2^57 bits to represent that. 16 petabytes.
Source: I've written a 24-qubit quantum simulator.
IIRC the models of computation with pure states and mixed states are equivalent in power and efficiency (perhaps up to a polynomial blowup). In fact, you don't even need complex numbers. This is probably Lipton's mindset, seeing as he's a theorist.
No, I'm referring to pure states (a vector of complex amplitudes), not mixed states (a density matrix of complex amplitudes). The latter would need 2^107 bits.
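To make the arithmetic concrete, a rough back-of-envelope check (assuming each amplitude is stored as two 64-bit floats, i.e. 128 bits, as stated later in the thread):

```python
# Storage for a 50-qubit system, assuming complex128 amplitudes
# (two 64-bit floats = 128 = 2^7 bits per amplitude).
n = 50
bits_per_amplitude = 128

pure_bits = (2 ** n) * bits_per_amplitude        # state vector: 2^50 * 2^7 = 2^57 bits
mixed_bits = (2 ** n) ** 2 * bits_per_amplitude  # density matrix: 2^100 * 2^7 = 2^107 bits

print(pure_bits == 2 ** 57)    # True; 2^54 bytes = 16 PiB
print(mixed_bits == 2 ** 107)  # True
```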
Every useful quantum algorithm manipulates the complex amplitudes of the system. You cannot observe them directly in a true quantum system, but you must still track them in a classical simulation.
So you're saying it's because each amplitude is a 7-bit number? I guess that's a valid issue, but it's still very "well, actually." So to counter with my own "well actually": you don't need complex amplitudes, and the computation only matters with probability bounded away from 1/2. Surely you could do enough engineering tricks to exploit that and keep it at about a petabyte.
No, it's because each amplitude is a 128-bit number (2^7 = 128 bits; really two 64-bit floats).
Yes, you very much DO need complex amplitudes. The ability to phase-shift qubits is key to most basic quantum algorithms.
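A minimal sketch of why phases matter (plain Python; the `hadamard` and `phase` helpers are illustrative, not from any library): run a qubit through H, an optional phase shift, then H again. The phase is never directly observable, yet it alone decides which outcome you measure.

```python
import math
import cmath

def hadamard(state):
    # H gate on a single-qubit state (a, b) = a|0> + b|1>
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def phase(state, theta):
    # Phase-shift gate: |1> picks up a factor e^(i*theta)
    a, b = state
    return (a, cmath.exp(1j * theta) * b)

start = (1 + 0j, 0 + 0j)  # |0>
no_shift = hadamard(phase(hadamard(start), 0.0))
pi_shift = hadamard(phase(hadamard(start), math.pi))

# No phase: the amplitudes interfere back to |0>.
# A pi phase: the same interference sends the qubit to |1>.
print(abs(no_shift[0]) ** 2)  # ~1.0 (always measure |0>)
print(abs(pi_shift[1]) ** 2)  # ~1.0 (always measure |1>)
```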
I'm not sure what you mean by "the computation only matters with probability bounded away from 1/2".
Almost by definition, the most "interesting" quantum algorithms are those which are most difficult to simulate classically. E.g., you can use "tricks" to greatly speed up simulation if your states are separable, but then you're not really harnessing the full power of the quantum model. The most powerful quantum algorithms entail maximum entanglement and worst-case simulation performance.
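A sketch of the separable-state shortcut (names are illustrative): while qubits stay unentangled, an n-qubit product state is just n independent 2-vectors (2n complex numbers), not one 2^n vector, and single-qubit gates cost O(n) total. It's an entangling gate that forces the full joint vector.

```python
import math

# Product state of 50 unentangled qubits: a list of 50 2-vectors,
# i.e. 100 complex numbers instead of 2^50.
n = 50
qubits = [(1 + 0j, 0 + 0j)] * n  # |00...0>

def apply_1q(gate, q):
    # Apply a 2x2 gate to a single qubit's local state.
    (g00, g01), (g10, g11) = gate
    a, b = q
    return (g00 * a + g01 * b, g10 * a + g11 * b)

s = 1 / math.sqrt(2)
H = ((s, s), (s, -s))
qubits = [apply_1q(H, q) for q in qubits]  # still only O(n) work

# A two-qubit entangling gate (e.g. CNOT with a superposed control)
# produces a state with no product-form representation, so a faithful
# simulation must fall back to the full 2^k joint vector at that point.
print(len(qubits))  # 50 local states, not a 2^50 vector
```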
Now I think you don't understand quantum computing as well as you claim. The standard model of quantum computing uses complex numbers, but it is well known that you can do everything with just real numbers instead. See exercise 10.4-10.5 of Arora-Barak [1]. The key insight in quantum computing is not that complex numbers are required, but that the invariant across computation is the 2-norm of a vector (as opposed to the 1-norm, as in the classic stochastic model of randomized computation).
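One way to see the real-numbers-only claim (a sketch of the standard encoding, not the book's exact construction): split each amplitude a+bi into the real pair (a, b). A complex gate U then becomes a real matrix of 2x2 rotation blocks [[re, -im], [im, re]], which is orthogonal whenever U is unitary, so the 2-norm invariant is preserved.

```python
def realify(matrix):
    # Complex m x m matrix -> real 2m x 2m matrix acting on
    # interleaved (re, im) pairs; unitary input gives orthogonal output.
    m = len(matrix)
    out = [[0.0] * (2 * m) for _ in range(2 * m)]
    for i in range(m):
        for j in range(m):
            re, im = matrix[i][j].real, matrix[i][j].imag
            out[2 * i][2 * j] = re
            out[2 * i][2 * j + 1] = -im
            out[2 * i + 1][2 * j] = im
            out[2 * i + 1][2 * j + 1] = re
    return out

# The phase gate S = diag(1, i) becomes a real 4x4 matrix whose
# lower-right block is a 90-degree rotation.
S = [[1 + 0j, 0 + 0j], [0 + 0j, 0 + 1j]]
S_real = realify(S)
```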
Further, the standard quantum model of computation is probabilistic in the sense that you "compute" something if your program outputs the right answer with probability at least 2/3. But 2/3 is not special, you just need it to be some probability bounded away from 1/2, in the sense that it can't get closer and closer to 1/2 as the input size grows.
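The 2/3-vs-1/2 point in numbers (a sketch; `runs_needed` is a made-up helper applying the standard Chernoff/Hoeffding bound): majority vote over k independent runs drives the error down exponentially, so any constant gap eps above 1/2 suffices, at the cost of O(log(1/delta) / eps^2) repetitions.

```python
import math

def runs_needed(eps, delta):
    # If one run is correct with probability 1/2 + eps, majority vote
    # over k runs fails with probability <= exp(-2 * eps**2 * k)
    # (Hoeffding bound), so k = ln(1/delta) / (2 * eps**2) suffices.
    return math.ceil(math.log(1 / delta) / (2 * eps ** 2))

# The standard 2/3 success probability is eps = 1/6; driving the
# failure rate below one in a billion needs only a few hundred runs.
print(runs_needed(1 / 6, 1e-9))
```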
So it's certainly plausible that you could take advantage of this to reduce the precision enough to get to the size bound Lipton mentioned, especially if, as implied, you had the might of a hundred Google engineers working on it. And the guy is so freaking smart and experienced in theoretical computer science that chances are he thought of all this and considered it not interesting enough to spell out for the people who will say "well actually."