I recently attended a public discussion on quantum computing, with George Musser (author of Spooky Action at a Distance and The Complete Idiot’s Guide to String Theory) as resident expert in conversation with Caleb Scharf (Director of Astrobiology at Columbia University and author of Our Zoomable Universe and The Copernicus Complex).1 While the conversation of course praised the virtues of quantum computing, there ran through it a sobering thread that I found refreshing, given all the hype.
It was pointed out, for example, that:

- There are only a handful of algorithms, as far as we can so far tell, at which quantum computers will significantly outperform classical ones (e.g., factoring).
- Getting information in and out of these computers poses problems at least as much for physics as for engineering. A given qubit (the quantum species of a bit) may be in a superposition of multiple states at once, say of 0 and 1 or somewhere in between, and in order to extract digital information from that analog state, the superposition must be resolved; if this is done too soon, you’ll get a random, and thus unreliable, output.
- Even though quantum computers will be better at factoring, this won’t end cryptography and computer security. Maybe we’ll just make bigger keys so the factoring is even harder, and there will still be intractable counting problems, which is to say the P vs. NP problem won’t go away.
- Whatever superior capacity for handling complexity they do have won’t lead them to suddenly gain consciousness (e.g., by modeling a human brain).
What I’d like to focus on here is another idea that was mentioned, which goes something like: a machine with as few as 200 or 300 qubits would be able to store information about every atom in the observable universe (that’s about 10⁸⁰ atoms). What’s meant by “information about” may vary; I’ve encountered versions of this idea ranging from merely mapping the universe down to every atom, to running a fine-grained simulation of the universe.* Musser expressed heavy skepticism about running a simulation, seemingly dismissing that prospect right away, while giving a little more credibility to mere information storage. Even this less ambitious option, however, would of course involve an incredible amount of information, including, for starters, the relative positions in space of any two atoms (ponder the number of relations involved in something as simple as a paperclip!). Musser pointed out that, even if you could store all that information in theory, you still have to get it into the computer. Entering it manually isn’t an option… so how?
(*As an example, here’s a Seth Lloyd quote that I think originates in the Nature writeup, “Quantum Boost for Artificial Intelligence,” though it seems to be from an interview, as it’s not cited in any of the linked papers, and I can’t find it in any of Lloyd’s writings: “We could map the whole Universe—all of the information that has existed since the Big Bang—onto 300 qubits.” And at Wikipedia: “Lloyd states that we could have the whole universe simulated in a computer in 600 years provided that computational power increases according to Moore’s Law. However, Lloyd shows that there are limits to rapid exponential growth in a finite universe, and that it is very unlikely that Moore’s Law will be maintained indefinitely.” Here again, though, I can’t find an original source. Neither quote seems to come from his book Programming the Universe, though I haven’t read it in its entirety.
UPDATE: A 2013 paper, “Quantum Algorithms for Supervised and Unsupervised Machine Learning,”2 gives a reference for Lloyd’s estimate [on page 7]: about 300 qubits suffice to encode “the entire information content of the universe within the particle horizon,” since log₂(10⁹⁰) ≈ 300. [Thanks to George Musser for sharing this paper in the comments.] A clarifying question I won’t address now: Is Lloyd (and others) saying that the universe could be literally replicated by such a computer, or simply that it wouldn’t take a particularly large computer to store that quantity of information? These are very different claims.)
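The arithmetic behind these estimates is easy to check: n qubits span 2ⁿ basis states, so indexing N distinct items takes about log₂(N) qubits. Here’s a quick sketch in Python (the 10⁸⁰ and 10⁹⁰ figures are from the quotes above; the function name is just my own label):

```python
import math

def qubits_to_index(n_items: int) -> int:
    """Minimum number of qubits whose basis states could label
    n_items distinct things: n qubits give 2**n basis states,
    so we need ceil(log2(n_items))."""
    return math.ceil(math.log2(n_items))

print(qubits_to_index(10**80))  # atoms in the observable universe -> 266
print(qubits_to_index(10**90))  # Lloyd's information-content figure -> 299
```

Note what this does and doesn’t say: it’s a claim about the size of the state space, not about actually loading or reading out that information, which is exactly the measurement difficulty mentioned earlier.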
Given this impracticality, maybe the question of fine-grained universe modeling is purely academic. But I do think there’s some value in exploring it, even if just to keep that sober thread going. So, here I’ll describe what strikes me as a recursive problem with storing all the information in the universe. (It strikes me, by the way, that it’s akin to, or may even be a generalization of, the Halting Problem noted by Alan Turing. Here are two nice videos that, watched in succession, give a good sense of what that problem is about: “Turing & The Halting Problem” and “Proof That Computers Can’t Do Everything (The Halting Problem)“.)
Suppose the qubits of your quantum computer have been configured so that they now hold an informational replica of the entire universe. Call this replica the “Q-Model.” The recursive problem arises because the Q-Model must contain a replica of itself. That is, the computer’s parts, and how those parts are configured, are contained in the universe. So those things must also be contained in the Q-Model. But if there’s a second Q-Model, there must also be a third, fourth, fifth, ad infinitum. This would pose a resource problem insurmountable by any computer made of finite parts (in other words, any sort of imaginable real computer).
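The regress can be caricatured in a few lines of code. This is only a toy (a dictionary stands in for a “universe,” plain recursion stands in for model-building, and every name here is my own invention), but it shows why a model that must contain itself never finishes:

```python
def build_q_model(universe: dict, depth: int = 0, limit: int = 50) -> dict:
    """Attempt a complete snapshot of a universe that contains the
    modeling computer. The snapshot copies every 'atom', but since the
    finished model will itself sit inside the universe, a complete model
    must also contain a model of the model, and so on without end."""
    if depth >= limit:
        # A real computer has finite parts; here we simply give up.
        raise RuntimeError("the regress never bottoms out")
    model = dict(universe)  # copy the universe's contents
    # The model is part of the universe, so it must also appear in itself:
    model["q_model"] = build_q_model(universe, depth + 1, limit)
    return model

try:
    build_q_model({"paperclip": 1})
except RuntimeError as e:
    print(e)  # the snapshot is never completed
```

The `limit` parameter plays the role of finite resources: however large you make it, the only way the function ever returns a “complete” model is by hitting the cap and failing.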
Maybe this problem wouldn’t apply to just any sort of informational replica. Most vulnerable are those programs attempting to assemble something like a lossless map or snapshot accounting for every atom (and subatomic part?) in the universe.
A solution might be to tell the computer to include in the Q-Model everything in the universe but its own qubits. While this would amount to a relatively small loss per se, it could sabotage the project in other, more extensive, ways. Namely, the Q-Model will have a causal relation to the world around it. To put it crudely, were the Q-Model to exert some influence on the world outside itself, you’d have in the replication an effect, but not its cause. And that effect could have complex, widespread ramifications. Indeed, maybe you’d end up with a gappy Q-Model, and eventually an empty one, due to all causal lines tracing back to the Q-Model lacking representation in the Q-Model. Perhaps this is only a problem if you don’t try to update the Q-Model beyond its first snapshot of the universe, though the more basic recursive problem will still exist (we might call the basic recursive problem “synchronic recursion,” and the problems that emerge from updating the model over time “diachronic recursion”).
Similarly, the computer would need to avoid including other computers’ Q-Models in its own Q-Model, as otherwise there would arise the informational analog of a washroom lined with facing mirrors.
Finally, and most theoretically of all, in order to be fully lossless, you could try putting the computer into a different universe, so that its parts aren’t contained in the universe it replicates. The problem here is that one fact about that other universe is that there is a Q-Model of it in this universe. So, at the very least, that fact will need to be in the Q-Model (for it to be fully lossless); this also means that there will be at least a causal line leading from that universe into the Q-Model, and that would need to be included as well. So the other-universe solution seems to fail. (This may also suggest that problems of diachronic recursion seep even into the more basic synchronic recursion problem.)
So, the options (as far as I can tell) would be to either exclude all Q-Models, and certain causally oriented facts about Q-Models, from the replica, in which case the entire universe would not be (informationally) represented; or attempt to include everything and quickly overtax even the most powerful computer imaginable, thus also failing to assemble a complete Q-Model.
While this problem may have no practical implications, it’s at least a thought-provoking entry into the catalog of quantum computing’s limitations. Such considerations should constrain serious talk of what imaginable computers are capable of not just in practice, but in theory.
(1) After writing this, I came upon the following footnote on page 91 of Daniel Dennett’s 2003 book on free will, Freedom Evolves. The recursive problem I note above seems to boil down essentially to the problem noted here, but with a possible addition I’ll mention in a moment:
Laplace’s demon instantiates an interesting problem first pointed out by Turing3, and discussed by Ryle (1949)4, Popper (1951)5, and MacKay (1960)6. No information-processing system can have a complete description of itself—it’s Tristram Shandy’s problem of how to represent the representing of the representing of… the last little bits. So even Laplace’s demon has an epistemic horizon and, as a result, cannot predict its own actions the way it can predict the next state of the universe (which it must be outside).
What I might be adding here is that the universe outside of the machine includes the machine, so there’s not really any such space “outside of the machine.” I wouldn’t be surprised if others have noted this, but I’ve not yet run into it. Finally, as noted by Dennett, the recursion problem also faces any predictor aiming to be complete and perfect, à la Laplace’s demon.
(2) Here’s a similar discussion that came up at Less Wrong a few weeks after my post: Can Our Universe Contain a Perfect Simulation of Itself?
(3) Here’s a relevant 2008 Scientific American article shared by George Musser in the comments: Go Forth and Replicate.
(4) If (3) is of interest, this ought to be as well (see the comments section for thoughts about this and (3)): John von Neumann’s 1948 Institute for Advanced Study lecture The General and Logical Theory of Automata (especially page 316).
- The event was presented by YHouse. Disclosure: I do some freelance work for the organization. I’m not being paid to write this.
- Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost, “Quantum Algorithms for Supervised and Unsupervised Machine Learning,” arXiv:1307.0411v2 [quant-ph].
- I’m not sure where Turing discusses this explicitly, but it seems to be at least implied by the Halting Problem, which I noted in my writing above.
- Gilbert Ryle, The Concept of Mind (1949).
- Karl Popper, “Indeterminism in Quantum Physics and in Classical Physics,” published in two parts (1950 and 1951) in the British Journal for the Philosophy of Science; here are JSTOR links: Part 1 and Part 2.
- D. M. MacKay, “On the Logical Indeterminacy of a Free Choice,” Mind, New Series, Vol. 69, No. 273 (Jan., 1960), pp. 31-40.