(Quantum) Computing: A Recursive Problem in Universe Modeling?

Estimated read time (minus contemplative pauses): 8 min.

The Spirits of the Pumpkins Descended into the Heavens, a 2015 infinity mirror room by Yayoi Kusama (image: artlink.com)

I recently attended a public discussion on quantum computing, with George Musser (author of Spooky Action at a Distance and The Complete Idiot’s Guide to String Theory) as resident expert in conversation with Caleb Scharf (Director of Astrobiology at Columbia University and author of Our Zoomable Universe and The Copernicus Complex).1 While the conversation of course praised the virtues of quantum computing, there ran through it a sobering thread that I found refreshing, given all the hype.

It was pointed out, for example, that:

- There are, as far as we can so far tell, only a handful of algorithms at which quantum computers will significantly outperform classical computers (e.g., factoring).
- Getting information into and out of these computers poses problems at least as much for physics as for engineering. A given qubit, the quantum species of a bit, may be in a superposition of multiple states at once (e.g., of 0 and 1, or somewhere in between); to extract digital information from that analog state, the superposition must be resolved, but if that’s done too soon, you’ll get a random, and thus unreliable, output.
- Even though quantum computers will be better at factoring, this won’t end cryptography and computer security; e.g., maybe we’ll just make bigger keys so the factoring is even harder, plus there will still be intractable counting problems, which is to say the P vs. NP problem won’t go away.
- Whatever superior capacity they do have for handling complexity won’t lead them to suddenly gain consciousness (e.g., by modeling a human brain).

What I’d like to focus on here is another idea that was mentioned, which goes something like: a machine with as few as 200 or 300 qubits would be able to store information about every atom in the observable universe (that’s about 10⁸⁰ atoms). What’s meant by “information about” may vary—I’ve encountered versions of this idea that range from merely mapping the universe down to every atom, to running a fine-grained simulation of the universe.* Musser expressed heavy skepticism about running a simulation, seemingly dismissing that prospect right away, while giving a little more credibility to mere information storage. Even this less intense option, however, would of course include an incredible amount of information, including, for starters, the relative positions in space between any two atoms (ponder the number of relations involved in something as simple as a paperclip!). Musser pointed out that, even if you could store all that information in theory, you still have to get it into the computer. Entering it manually isn’t an option… so how?

(*As an example, here’s a Seth Lloyd quote that I think originates in the Nature writeup, “Quantum Boost for Artificial Intelligence,” though it seems to be from an interview, as it’s not cited in any of the linked papers, and I can’t find it in any of Lloyd’s writings: “We could map the whole Universe—all of the information that has existed since the Big Bang—onto 300 qubits.” And at Wikipedia: “Lloyd states that we could have the whole universe simulated in a computer in 600 years provided that computational power increases according to Moore’s Law. However, Lloyd shows that there are limits to rapid exponential growth in a finite universe, and that it is very unlikely that Moore’s Law will be maintained indefinitely.” Here again, though, I can’t find an original source. Neither quote seems to come from his book Programming the Universe, though I haven’t read it in its entirety.

UPDATE: Here’s a 2013 reference in which Lloyd’s estimate [on page 7] of about 300 qubits to encode “the entire information content of the universe within the particle horizon” is given as log₂(10⁹⁰) ≈ 300: “Quantum Algorithms for Supervised and Unsupervised Machine Learning.”2 [Thanks to George Musser for sharing this paper in the comments.] A clarifying question I won’t address now: Is Lloyd—and others—saying that the universe could be literally replicated by such a computer, or simply that it wouldn’t take a particularly large computer to store that quantity of information? These are very different claims.)
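The arithmetic behind these figures is easy to check. Here’s a back-of-the-envelope sketch in Python (my own illustration of the counting, not anything from the talk or the paper): n qubits span 2ⁿ basis states, so labeling ~10⁹⁰ distinct possibilities requires n ≥ log₂(10⁹⁰).

```python
import math

# n qubits span 2**n basis states, so labeling ~10**90 distinct
# possibilities requires n >= log2(10**90) = 90 * log2(10).
print(math.ceil(90 * math.log2(10)))  # 299 -> Lloyd's "about 300" qubits
print(math.ceil(80 * math.log2(10)))  # 266 -> enough to label ~10**80 atoms
```

Of course, being able to label that many basis states is not the same as usefully storing, let alone retrieving, that much information, which is part of what the clarifying question above is getting at.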

Given this impracticality, maybe the question of fine-grained universe modeling is purely academic. But I do think there’s some value in exploring it, even if just to keep that sober thread going. So, here I’ll describe what strikes me as a recursive problem with storing all the information in the universe. (It strikes me, by the way, that this is akin to, or may even be a generalization of, the Halting Problem noted by Alan Turing. Here are two nice videos that, watched in succession, give a good sense of what the problem is about: “Turing & The Halting Problem” and “Proof That Computers Can’t Do Everything (The Halting Problem)”.)
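For readers who’d rather see the argument than watch it, here’s a minimal sketch of Turing’s diagonal construction in Python (the halts() oracle is hypothetical; the whole point is that no correct version of it can exist):

```python
def halts(prog, inp):
    """Hypothetical oracle: True iff prog(inp) halts.
    Turing showed no correct implementation can exist; this is a stub."""
    raise NotImplementedError

def paradox(prog):
    if halts(prog, prog):  # if prog would halt when run on itself...
        while True:        # ...then loop forever
            pass
    return                 # ...otherwise, halt immediately

# Consider paradox(paradox): if halts(paradox, paradox) returned True,
# then paradox(paradox) loops forever; if it returned False, it halts.
# Either answer is wrong, so no such halts() can be written.
```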

Suppose the qubits of your quantum computer have been configured so that they now hold an informational replica of the entire universe. Call this replica the “Q-Model.” The recursive problem arises because the Q-Model must contain a replica of itself. That is, the computer’s parts, and how those parts are configured, are contained in the universe. So those things must also be contained in the Q-Model. But if there’s a second Q-Model, there must also be a third, fourth, fifth, ad infinitum. This would pose a resource problem insurmountable by any computer made of finite parts (in other words, any sort of imaginable real computer).
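To make the regress concrete, here’s a toy sketch in ordinary Python (my own illustration; nothing quantum about it): a replica that must contain everything in the universe, itself included, never bottoms out.

```python
import copy

def build_q_model(universe):
    """Toy "Q-Model": a complete informational replica of the universe."""
    model = copy.deepcopy(universe)             # replicate everything else...
    model["q_model"] = build_q_model(universe)  # ...plus the model itself,
    return model                                # so this line is never reached:
                                                # the regress has no bottom

# build_q_model({"atoms": 10**80})  # would raise RecursionError
```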

Maybe this problem wouldn’t apply to just any sort of informational replica. Most vulnerable are those programs attempting to assemble something like a lossless map or snapshot accounting for every atom (and subatomic part?) in the universe.

A solution might be to tell the computer to include in the Q-Model everything in the universe but its own qubits. While this would amount to a relatively small loss per se, it could sabotage the project in other, more extensive, ways. Namely, the Q-Model will have a causal relation to the world around it. To put it crudely, were the Q-Model to exert some influence on the world outside itself, you’d have in the replication an effect, but not its cause. And that effect could have complex, widespread ramifications. Indeed, maybe you’d end up with a gappy Q-Model, and eventually an empty one, due to all causal lines tracing back to the Q-Model lacking representation in the Q-Model. Perhaps this is only a problem if you don’t try to update the Q-Model beyond its first snapshot of the universe, though the more basic recursive problem will still exist (we might call the basic recursive problem “synchronic recursion,” and the problems that emerge from updating the model over time “diachronic recursion”).
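As a crude sketch of this self-exclusion option (again a toy illustration of my own, with hypothetical names), notice what goes missing: the replica records the model’s effects on the world but not their cause.

```python
def build_q_model_excluding_self(universe, self_key="q_model_qubits"):
    """Replicate everything in the universe except the model's own qubits."""
    return {k: v for k, v in universe.items() if k != self_key}

universe = {
    "some_atoms": "...",                           # everything else
    "q_model_qubits": "the computer's own state",  # excluded to stop the regress
    "lab_heat": "warmed by q_model_qubits",        # an effect of the model
}
model = build_q_model_excluding_self(universe)
# model records "lab_heat" but not its cause: the gappiness begins here.
```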

Similarly, the computer would need to avoid including other computers’ Q-Models in its own Q-Model, as otherwise there would arise the informational analog of a washroom lined with facing mirrors.

Finally, and most theoretically of all, in order to be fully lossless, you could try putting the computer into a different universe, so that its parts aren’t contained in the universe it replicates. The problem here is that one fact about that other universe is that there is a Q-Model of it in this universe. So, at the very least, that fact will need to be in the Q-Model (for it to be fully lossless); this also means that there will be at least a causal line leading from that universe into the Q-Model, and that would need to be included as well. So the other-universe solution seems to fail. (This may also suggest that problems of diachronic recursion seep even into the more basic synchronic recursion problem.)

So, the options (as far as I can tell) would be to either exclude all Q-Models, and certain causally oriented facts about Q-Models, from the replica, in which case the entire universe would not be (informationally) represented; or attempt to include everything and quickly overtax even the most powerful computer imaginable, thus also failing to assemble a complete Q-Model.

While this problem may have no practical implications, it’s at least a thought-provoking entry into the catalog of quantum computing’s limitations. Such considerations should constrain serious talk of what imaginable computers are capable of not just in practice, but in theory.

====

ADDENDA:

(1) After writing this, I came upon the following footnote on page 91 of Daniel Dennett’s 2003 book on free will, Freedom Evolves. The recursive problem I note above seems to boil down essentially to the problem noted here, but with a possible addition I’ll mention in a moment:

Laplace’s demon instantiates an interesting problem first pointed out by Turing,3 and discussed by Ryle (1949),4 Popper (1951),5 and MacKay (1960).6 No information-processing system can have a complete description of itself—it’s Tristram Shandy’s problem of how to represent the representing of the representing of… the last little bits. So even Laplace’s demon has an epistemic horizon and, as a result, cannot predict its own actions the way it can predict the next state of the universe (which it must be outside).

What I might be adding here is that the machine is itself included in the universe it models, so there’s not really any such space “outside of the machine.” I wouldn’t be surprised if others have noted this, but I’ve not yet run into it. Finally, as noted by Dennett, the recursion problem also faces any predictor aiming to be complete and perfect, à la Laplace’s demon.

(2) Here’s a similar discussion that came up at Less Wrong a few weeks after my post: Can Our Universe Contain a Perfect Simulation of Itself?

(3) Here’s a relevant 2008 Scientific American article shared by George Musser in the comments: Go Forth and Replicate.

(4) If (3) is of interest, this ought to be as well (see the comments section for thoughts about this and [3]): John von Neumann’s 1948 Institute for Advanced Study lecture The General and Logical Theory of Automata (especially page 316).





Footnotes:

  1. The event was presented by YHouse. Disclosure: I do some freelance work for the organization. I’m not being paid to write this.
  2. Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost, “Quantum Algorithms for Supervised and Unsupervised Machine Learning,” arXiv:1307.0411v2 [quant-ph].
  3. I’m not sure where Turing discusses this explicitly, but it seems to be at least implied by the Halting Problem, which I noted in my writing above.
  4. Gilbert Ryle, The Concept of Mind (1949).
  5. Karl Popper, “Indeterminism in Quantum Physics and in Classical Physics,” published in two parts (1950 and 1951) in the British Journal for the Philosophy of Science; here are JSTOR links: Part 1 and Part 2.
  6. D. M. MacKay, “On the Logical Indeterminacy of a Free Choice,” Mind, New Series, Vol. 69, No. 273 (Jan., 1960), pp. 31-40.

2 Replies to “(Quantum) Computing: A Recursive Problem in Universe Modeling?”

    1. Hi George,

      Nice to hear from you. Thanks for the references! I’ve shared them in the post, along with the related 1948 von Neumann lecture (“The General and Logical Theory of Automata”) and some other stuff.

      While writing the post, some candidate ideas occurred to me for avoiding the recursive problem, but they didn’t seem to work. Some of them were, I think at least superficially, similar to what’s being discussed in the Sci Am article. It seems to me those solutions fail for similar reasons that essentially come down to the fact that the computer being duplicated is meant to contain all the info in the universe. I’ll briefly think out loud about this concern while admitting that, as someone who doesn’t know much about computer science, information theory, or physics, I’ll at several points wander out of my depth.

      One thought I had was to somehow generate a universe-snapshot-replica that contains exactly one complete sub-copy of the replica, along with instructions for giving that sub-copy its own sub-copy in the event that the original replica were ever used to rebuild the universe. (I considered various ways of doing this that I won’t get into here. Also, I take “rebuilding the universe” to be just barely conceivable enough a notion to inform theoretical discussion.) But I don’t see a distinction between “instructions for rebuilding the universe” and “complete sub-copy of the replica” and the replica itself. So, I figure there’s no need for the complete sub-copy. We just need the replica/rebuilding instructions.
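      Incidentally, the collapse of the distinction between “instructions for rebuilding” and “complete sub-copy” is reminiscent of a quine: a program whose output is its own source code, so the instructions and the copy coincide. A minimal Python example:

      ```python
      # A quine: running this prints exactly its own source, so the program's
      # "rebuilding instructions" and its "complete copy" are one and the same.
      s = 's = %r\nprint(s %% s)'
      print(s % s)
      ```

      If I’m reading the von Neumann lecture linked above correctly, his self-reproducing automata pull a similar trick: the blueprint gets copied verbatim rather than re-described, which is how self-reproduction escapes the regress of representing the representing.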

      It seems to me that, if intelligible, something like this approach could at least allow a computer to make a kind of generic—i.e., a functionally identical—replica of itself, along with instructions for duplicating the instructions, etc. This seems quite distinct, though, from creating an informationally identical replica of itself, particularly if the machine is meant to store all the info in the universe (which includes the information required such that the reconstructed universe may pick back up where the time-slice left off; i.e., may continue expanding, etc.).

      At any rate, perhaps the functional replica, taken to some limit close to perfect replication, could avoid recursion, but it may still be quite gappy, perhaps as gappy as you like or as you care to notice. This has to do, for one thing, with the causal lines going back to the instructions (or replica) that get eliminated in order to avoid recursion (these can start simply: there is light bouncing off of the machine, heat inside of it, space being taken up by it in a room, parts being used for it that weren’t used for other things—myriad externalities from there onward, emanating from the tiniest gap).

      But avoiding that problem then brings us back to the non-gappy replica containing “all the information,” a notion perhaps too daunting to really make sense of (even down only to the atoms; below that, I can’t fathom… for example, supplying instructions so that all quantum superposition states are identically reproducible from the snapshot).

      For instance, following the example of the architect’s blueprint in the Sci Am article: you can give instructions for reconstructing a chair that just relies ontologically on a general notion of “chair-ness,” or that follows an IKEA-like design with a certain kind of wood specified, or that amounts to reconstructing a chair that’s identical down to the atomic relations of that particular chair at time t; or, taken further, that somehow contains all the information related to any given part of that chair (e.g., where did the wood for each leg come from? whose DNA has its parts been in contact with?). The former examples are on the functional side, and seem more like the set of instructions the Sci Am article describes; while the latter strike me as something quite different, far more complex—far less “model”-like. Even more complexly, if the studio is in a boat during a storm, then we’d also need to represent the chair’s response to turbulence at time t (a set of relations that would somehow need to be tracked with every other atom in the universe as well; and those relations should resume whatever they were up to once the rebuilt universe is set into motion).

      (Interestingly, in the architect’s blueprint, there are instructions for building a copy machine, something that would be included in the reconstructed universe that isn’t in the original, or perhaps it would be outside of both universes; but this stretches the limits of conceivability here even further. At any rate, the copy machine instructions are treated as separate from the original studio, as is the architect and the light bouncing around in the room, etc. This grants that there are things proximate to, yet outside of, the thing being replicated/modeled, unlike with a replica that contains all relevant/proximate info in the universe.)

      Maybe this demands too much of what is technically meant by “information.” If so, maybe that’s my point.

      Finally, another fact about the chair is that it has phenomenological representation, as a singular object (rather than as a collection of particle relations), in the mind of the architect and whoever else has ever seen it or its constituent parts. The instructions/replica/“blueprint”—which may themselves similarly have phenomenological representation—would need to account for this as well, but perhaps that’s as simple as perfectly duplicating the chair’s constitutive relations, light in the room, architect’s brain, etc.: get all the physical stuff right, and you get back the original phenomenology for free—as well as the original metaphysical and emergent properties, etc. Just as a perfect reproduction of all our universe’s H₂O molecules would bring along things we call wetness, and maybe also temperature, dryness, hardness, etc.; and perhaps simply having any quantity of things at all would bring along all the integers, real and complex numbers by implication: all the abstract objects there are in this world (known and unknown by humans), though each could not possibly have an explicit representation in the model. (I don’t think you have to be a math Platonist to want/accept this.)

      Perhaps the solution lies somewhere in this notion of entailment, such that any non-explicitly represented instructions needed would be contained in whatever region of space (is that the right word?) the universe rebuilding takes place—i.e., some region of space whose (various?) locally instantiated physical laws match identically to those spread throughout our universe. (I wouldn’t expect the replica to contain instructions for instantiating those physical laws, much less the meta-laws—I assume it’s not too controversial to suggest such a thing—that determine which local laws obtain in which regions of space, even though those are also facts about our universe! Otherwise, it seems there’d be no starting point for universe rebuilding; i.e., impossibly, it’d have to be done ex nihilo.)

      Ok, I think I’ve officially confused myself enough to stop typing.
