Earlier this week, Natalie Wolchover published the clearest layperson-directed explanation I’ve seen yet of the math underlying Gödel’s incompleteness theorems.
I’m excited to share it: “How Gödel’s Proof Works” (Quanta Magazine).
To celebrate, here are some thoughts, as they spring to mind, about Gödel’s results in the context of discussions about Gödel’s results, rather than about the results themselves. Eventually, as usual, I’ll start quoting podcast discussions. Seems appropriately meta. Consciousness will emerge (that is, it turns out, nothing but a pun, as you’ll see).
Wolchover’s explanation is adapted from Ernest Nagel & James Newman’s classic 1958 book Gödel’s Proof (be sure to get the 2001 edition edited by Douglas Hofstadter), which itself is a somewhat simplified version of the proof. I read it recently and liked it a lot!
Still, the article really does a wonderful job of getting across the mathematical and conceptual gist of Gödel’s results, as well as imparting what all the hubbub is about.
Before contemplating the article, you might first watch this video, a fantastic visual presentation of a related (or analogous) result from 1936, Alan Turing’s proof that the halting problem is undecidable:
It’s about as elegant and clear as it gets, but if you have any lingering doubts or questions, see the video’s creator’s FAQ.
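The core of Turing’s argument is short enough to sketch in a few lines of code. The sketch below is my own illustration, not the video’s: `halts` stands in for the hypothetical universal decider Turing proved cannot exist, and `troublemaker` is the program built to defy it. Whatever concrete implementation of `halts` you substitute (here, one that always answers False), it is guaranteed to be wrong about `troublemaker`:

```python
# Sketch of Turing's diagonal argument, assuming a hypothetical decider
# halts(program) that claims to report whether program() ever halts.

def halts(program):
    # A stand-in decider. Turing's point: NO implementation can be
    # correct for every program; this one just always answers False.
    return False

def troublemaker():
    # Do the opposite of whatever the decider predicts about us.
    if halts(troublemaker):
        while True:      # decider said we halt, so loop forever
            pass
    return "halted"      # decider said we loop, so halt immediately

prediction = halts(troublemaker)   # the decider predicts: never halts...
result = troublemaker()            # ...but running it, it halts.
print(prediction, result)          # the prediction was wrong
```

Swap in any cleverer `halts` you like; `troublemaker` inverts its verdict by construction, which is the whole trick.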
If you’d like to go deeper in a hands-on way, you might check out this YouTube playlist, which takes its lead from Hofstadter’s own classic tome, the virtuosic Gödel, Escher, Bach (1979/1999; which I will one day finish reading, I promise!).
If you’re going any deeper than that, then you should probably be the one making recommendations to me. The most involved, or at least impenetrable (for me), expression of the proof I’ve grappled with—am still grappling with—is found in Paul Cohen’s 1966/2008 Set Theory and the Continuum Hypothesis.
One day, I’m sure, I’ll try my hand at Gödel’s original 1931 paper, which has been translated into English by one B. Meltzer (I believe the “B” is for “Bernard”): On Formally Undecidable Propositions of Principia Mathematica and Related Systems (or get it as a PDF here or here).
Something not mentioned in Wolchover’s article is that Gödel saw his proof as supporting a metaphysical thesis known as mathematical platonism. That is, because:
(A) incompleteness amounts to there being obviously true statements—emphasis on true—that cannot be proved by the axioms of the consistent arithmetical system from which those statements were formed, then:
(B) there must be something else that makes those statements true.
Perhaps what makes them true is that mathematical objects exist in a realm of their own, independent of human thought or behavior. I recently explored that topic more broadly in a post called “Agustín Rayo Argues for Zero (mathematical platonism vs. nominalism).”
I’m not a mathematical platonist, by the way.
For more on how platonism relates to Gödel in particular, check out Rebecca Goldstein’s 2005 book Incompleteness: The Proof and Paradox of Kurt Gödel, which also lays out the proof, while putting it into historical context. I haven’t read the book, but it’s on my wish list! For a teaser, see this 2005 Edge interview with Goldstein, in which she notes that “Gödel made it harder not to be a Platonist. He proved that there are true but unprovable propositions of arithmetic. That sounds at least close to Platonism.”: “Gödel and the Nature of Mathematical Truth.”
Gödel’s incompleteness results make frequent appearances in discussions about the fascinating and mind-bending places math and related fields have gone in the last 100 years or so. These appearances often exude an air of, “here’s what people are getting wrong.” I could sit here for the next month noting such instances I’ve encountered in my podcast obsession. I’ll mention a couple.
In a recent discussion on Lex Fridman’s Artificial Intelligence podcast, physics superstar Sir Roger Penrose said something that stopped me in my tracks. But before I get to that, here are some things he said about Gödel’s results (from episode “#85 – Roger Penrose: Physics of Consciousness and the Infinite Universe [3/31/20]”; timestamps are from the YouTube video). I’ll skip over a fair amount, so listen/watch for yourself for the full story:
Lex Fridman: Could you say what is Gödel’s incompleteness theorem and maybe also say is it heartbreaking to you, and how does it interfere with this notion of computation and consciousness?
Roger Penrose: Sure. The idea is basically ideas which I formulated in my first year as a graduate student in Cambridge. … I’d heard about Gödel’s theorem. I was a bit worried by the idea that it seemed to say that there were things in mathematics that you could never prove. …
[In graduate school] I got particularly interested in three lecture courses … [one of which was] on mathematical logic. … [in which we covered] the Gödel theorem. And it wasn’t what I was afraid it was, to tell you there were things you couldn’t prove. It was, basically—and [instructor S.W.P. Steen] phrased it in a way, which, often people didn’t, and if you read Douglas Hofstadter’s book [Gödel, Escher, Bach], he doesn’t, you see.
But Steen made it very clear, and also in a sort of public lecture that he gave to, I think maybe, The Adams Society, one of the undergraduate mathematical societies, and he made this point again very clearly: that if you’ve got a formal system of proofs, so, suppose what you mean by “proof” is something which you could check with a computer. So, to say whether you got it right or not you’ve got a lot of steps—have you carried this computational procedure, well, following the steps of the proof correctly? That can be checked by an algorithm, by a computer. So, that’s the key thing.
Now, is this any good? If you’ve got an algorithmic system which claims to say “yes, this is right,” that “you’ve proved it correctly, this is true.” If you’ve made a mistake it doesn’t say it’s true or false, but if you’ve done it right, then “the conclusion you’ve come to is correct.” Now you say, “why do you believe it’s correct?” Because you’ve looked at the rules and you said, “well, ok, that one’s alright, yeah, that one’s alright, what about that-, ah I’m not sure, yeah, I see, I see why it’s alright, ok.” You go through all the rules, you say, “yes, following those rules, if it says ‘yes it’s true,’ it is true.
So you’ve got to make sure that these rules are ones that you trust. If you follow the rules and it says it’s a proof, is the result actually true? Your belief that it’s true depends upon looking at the rules and understanding them. Now what Gödel shows is that if you have such a system, then you can construct a statement of the very kind that it’s supposed to look at, a mathematical statement, and you can see by the way it’s constructed and what it means that it’s true but not provable by the rules that you’ve been given.
It depends on your trust in the rules. Do you believe that the rules only give you truths? If you believe the rules only give you truths, then you believe this other statement is also true.
I found this absolutely mind-blowing. When I saw this it blew my mind. I thought, “my God, you can see that this statement is true, it’s as good as any proof,” because it only depends on your belief in the reliability of the proof procedure, that’s all it is, and understanding that the coding is done correctly, and it enables you to transcend that system. So whatever system you have, as long as you can understand what it’s doing and why you believe it only gives you truths, then you can see beyond that system.
Now how do you see beyond it? What is it that enables you to transcend that system? Well, it’s your understanding of what the system is actually saying. And what the statement that you’ve constructed is actually saying. So it’s this quality of understanding, whatever it is, which is not governed by rules. It’s not a computational procedure. …
That’s what blew my mind—that, somehow, understanding why the rules give you truths enables you to transcend the rules.
LF: So that’s where, even at that time, that’s already where the thought entered your mind that the idea of understanding—or we can start calling it things like “intelligence” or even “consciousness”—is outside the rules.
The discussion turns to understanding and intelligence and consciousness and the like. Penrose goes on to relate such things to (@29:38) “standing back and thinking about your own thought processes… there is something like that in the Gödel thing, because you’re not following the rules, you’re standing back and thinking about the rules.”
A lot of ground is covered in the discussion, including his journey from learning about Gödel’s incompleteness results—which led him to ask the insightful question, (@32:30) “if consciousness, or understanding, is something which is not a computational process, what can it be?”—to the work he would eventually collaborate on with Stuart Hameroff.
Note that discussion of Gödel’s results leading to discussions of consciousness is not a non sequitur. The basic idea is that if one draws from Gödel that human thinking isn’t computational, what accounts for the discrepancy—what especially characterizes human thinking, etc.—is consciousness, or however you’d like to term that which allows a human understanding or intelligence to see (or intuit) the truth of the statement that can’t be proved. This is noted in Nagel & Newman’s aforementioned 1958 book as well, such as when they write:
The human brain may, to be sure, have built-in limitations of its own … [but] the brain appears to embody a structure of rules of operation which is far more powerful than the structure of currently conceived artificial machines. There is no immediate prospect of replacing the human mind by robots. (pp 111–112)
… The theorem does indicate that the structure and power of the human mind are far more complex and subtle than any non-living machine yet envisaged. Gödel’s own work is a remarkable example of such complexity and subtlety. It is an occasion, not for dejection, but for a renewed appreciation of the powers of creative reason. (p 113)
[Note: Steen, whom Penrose references above, published this logic text: Mathematical Logic with Special Reference to the Natural Numbers (1972).]
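One aside on Penrose’s premise that a “proof” is “something which you could check with a computer”: that part, at least, is easy to make concrete. Below is a toy verifier, my own sketch rather than anything from the sources above, for the MIU system from Hofstadter’s Gödel, Escher, Bach. It mechanically checks whether each line of a purported derivation follows from the previous line by a legal rule:

```python
# Toy proof-checker for Hofstadter's MIU system: a "proof" is a list of
# strings, and the checker verifies each step follows by one MIU rule.

def miu_successors(s):
    """All strings derivable from s in one step of the MIU rules."""
    out = set()
    if s.endswith("I"):                  # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):          # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):          # Rule 4: UU -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def check_derivation(steps):
    """True iff steps start from the axiom MI and each step is legal."""
    if not steps or steps[0] != "MI":
        return False
    return all(b in miu_successors(a) for a, b in zip(steps, steps[1:]))

print(check_derivation(["MI", "MII", "MIIII", "MIIIIU"]))  # a valid proof
print(check_derivation(["MI", "MU"]))                      # not a legal step
```

This is exactly the kind of algorithmic rule-following Penrose describes; what Gödel adds is that trusting such rules lets you see a truth the rules themselves can’t reach.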
The Fridman–Penrose conversation then goes on to other things, like physics and cosmology. But I’d like to back up to earlier in the conversation, to the aforementioned part that stopped me in my tracks:
RP: I had this curious conversation with … Douglas Hofstadter, and he’d written this book…
LF: Gödel, Escher, Bach?
RP: Which I liked, I thought it was a fantastic book. But I didn’t agree with his conclusion from Gödel’s theorem, I think he got it wrong, you see. Well … I’d never met [Hofstadter], and then I knew I was going to meet him… he was coming and he wanted to talk to me. I said, “that’s fine” [Note: or does Penrose say “that’s fun”?].
And I thought in my mind, “well, I’m going to paint him into a corner,” you see, “because I’ll use his arguments to convince him that certain numbers are conscious.” You know, some large enough integers are actually conscious. And this was going to be my reductio ad absurdum. And so I started having this argument with him and he simply leapt into the corner. He didn’t even need to be painted into it. He took the view that certain numbers were conscious. I thought that was a reductio ad absurdum, but he seemed to think it was a perfectly reasonable point of view.
So Hofstadter went there: some numbers are conscious. I wish I could have heard that conversation. What I would be on the lookout for is the extent to which they made sure they meant the same thing by “conscious.” It strikes me as possible they didn’t, given that, from the little I’ve read of Hofstadter on consciousness (which I read after hearing Penrose tell this story), Hofstadter seems to view consciousness idiosyncratically.
From Hofstadter’s essay collection, Metamagical Themas (1985):
…one thing refers to another whenever, to a conscious being, there is a sufficiently compelling mapping between the roles the two things are perceived to play in some larger structures or systems. … Caution is needed here. By “conscious being”, I mean an analogy-hungry perceiving machine that gets along in the world thanks to its perceptions; it need not be human or even organic. (p 59)
Whether this implies a definition of “consciousness” that maps to anything like how I’d use the word turns on whether “to perceive” here has a phenomenological, or experiential, dimension. That is, is there something that it is like to be an analogy-hungry perceiving machine, even if only in some very basic sense?
Maybe if I keep reading Hofstadter, I’ll figure this out, along with the question of whether this means Hofstadter takes certain large enough integers to be such machines, or means something else—i.e., whatever Penrose meant by “consciousness” in their chat.
Or perhaps, for Hofstadter, it is possible to be a conscious abstractum (integers are generally considered by platonists to be “abstract” objects) without being a conscious being (which, again, is Hofstadter’s shorthand for the analogy-hungry perceiving machine described in the above passage).
At any rate, I wouldn’t put it past any smart person to consider numbers to be conscious—i.e., to have literal experience of some kind. I’ve heard about all sorts of things being conscious.
Basketball teams, for instance. Not the individuals: the team itself. Same goes for all sorts of other social groups as well.
Or how about Douglas firs? I find the notion of conscious plants preposterous (‘no brain no pain,’ they say), but I do agree that mycelium networks are astonishing and beautiful.
I once heard Caleb Chung, one of the Furby inventors, vehemently argue on Radiolab (“Furbidden Knowledge” [5/30/11]) that Furby is “at his level” literally alive, feels pain, and so on. Show-host Jad Abumrad (@14:40) challenges this notion: “I think I’m saying that life is driven by the need to be alive and by these base, primal, animal feelings like pain and suffering.” To which Chung responds, “I can code that.” Chung says in fact that “anyone who writes software” can code that. And when Abumrad says this is just miming, and not actually “feeling scared,” Chung says “it is.” I have no idea what Chung really means here.
I personally once had a similar conversation with a computer scientist. We got stuck at my trying to clarify that what I and he meant by “representation” were very different. I viewed the usage as metaphorical; he said it wasn’t. He was wrong, provided he meant that a conscious human’s phenomenological or mental or there’s-something-that-it’s-like representation is precisely the same sort of representation a computer has; he told me, “yes, they are the same.” And so on. After a good while (maybe an hour), he finally said, “ohhhh, I see what you mean. No, computers don’t have that. Not yet.”
I have often run into this sort of impasse with people who don’t spend a lot of time thinking about philosophy of mind, but the impasse was particularly tenacious in the above conversation. My impression is that if I interacted with a lot of computer scientists, I would find myself in such a situation often.
[Note: I recall encountering a paper by someone trained in philosophy and computer science making what struck me as the same point I’m making here, and in fact this person had made it a project—giving talks and so on—to bridge this gap of apparent incommensurability (to use a fraught term, I admit) between computer scientist types on the one hand, and brain scientist and philosopher types on the other.
I’m pretty sure the person I’m thinking of is Brian Cantwell Smith, and the “paper” in question, which I cannot locate, was a handout accompanying this brief presentation (on YouTube), which does indeed touch on technological metaphors for the human mind, and concepts such as representation.
I might be misremembering Smith’s actual claim in the paper (which, honestly, might not have been by Smith after all). If I can find it, I’ll update this note.
Anyway, an instructive article to look at in this context is mentioned by Smith at the start of the above-linked talk: “Minds, Machines and Metaphors” by cognitive neuropsychologist John C. Marshall, in which the following propositions are noted, and are of no surprise to anyone familiar with the relevant history:
1: Psychological theory has always been dominated by metaphors drawn from the high technology of the day (and continues to be so).
2: These mechanical metaphors have frequently become orthodox ‘models of the mind’ through the work of doctors – students of the oldest science of man. (p 476)
(Source: Marshall, John C., Social Studies of Science, Vol. 7, No. 4 (Nov., 1977), pp. 475-488)]
One impediment in all this might be that philosophically inclined computer scientists tend to take Ludwig Wittgenstein’s lead. But I’m not prepared to discuss this today. And, honestly, I have no friends or colleagues who are computer scientists, so I really have no idea what I’m talking about. I’m merely sharing my impression as an outsider (while admitting, by the way, that one of the best lectures I ever attended in college was a guest lecture by a computational neurobiologist, a PhD candidate or perhaps post-doc, in a course on the evolution of cognition!).
If you’re a computer scientist type who’d like to tell me what’s going on here, please do. Listening to Fridman’s podcast has helped, but sometimes I listen in frustration, because I’m often not at all sure of whether Fridman and his guests are using their shared terminology in the same way (some guests are more careful in this respect—David Chalmers, Christof Koch, and Nick Bostrom come to mind).
Panpsychism is often invoked in conversations centered on consciousness. Its most recent popular version amounts, in perhaps its least controversial form, to the thesis that the fundamental building blocks of nature possess some sort of very low-level consciousness.
But this turns out not to be as strange or outlandish as it first sounds. Indeed, though panpsychism comes with its own problems, it’s meant to be less strange than—simpler than, less problematic than—the status quo view, in which consciousness springs forth from special interactions of a vaguely delineated set of wholly mindless globs of matter (e.g., a brain).
(A common way to put the status quo view: consciousness supervenes on such-and-such activities of some number of neurons configured in such-and-such a way. But what does that mean? What on Earth is a “brain-state” anyway? It’s a metaphor, or at least a drastically abbreviated shorthand, for something or other: for some theorists, “brain-state” summarizes a roiling substrate from which a correlated mental state emerges; for others, the brain-state just is identical to that correlated mental state. But what is a brain state, or a mental state??)
To say more about panpsychism’s not being so strange would require getting into mereology and whatnot—e.g., to clarify that panpsychism need not imply that “chairs are conscious” and so on—but, really, I’d rather bring up the strangest seriously posed thesis I’ve encountered: fictional characters are conscious.
Back in 2018, I heard news of a discussion taking place at the Institute for Advanced Study in Princeton (I was working for the organizers at the time). Here is the info provided by the speaker (sadly, the event was cancelled). Note, by the way, that the panpsychism referred to is not the “less controversial” sort of panpsychism I nodded towards above:
Title: Is Poe’s Detective Dupin Conscious? And If So, to What Degree? (Answers: Yes; a Very High Degree.)
Presenter: Selmer Bringsjord (Director of the Rensselaer AI & Reasoning Laboratory)
Abstract: Panpsychism, let’s grant, is roughly the view that all the physical stuff in our world is conscious. This view seems to imply that a lot of things are conscious. After all, a lot of physical things exist!—people, pebbles, electrons, meteorites, politicians, umbrellas, llamas… ad indefinitum. On some versions of panpsychism, even some non-physical but existing things are conscious (e.g., you, if you’re a non-physical thing). Yet our view on and theory of consciousness casts an even wider net, for it counts Detective C. Auguste Dupin as not only conscious, but very conscious—and the great sleuth doesn’t even exist: he’s the creation of Edgar Allen Poe, and purely fictional. We explain why Dupin is indeed conscious, and why—on the Λ (as opposed to Tononi’s Φ) measuring system of the degree of consciousness enjoyed by a being—he’s highly so. We also explain (with demos) that some of the artificial agents and robots in our lab are conscious as well, as are some of the fictional machines we’ve conceived, but not yet built.
I can’t imagine how Detective Dupin could possibly be conscious. Especially not “to a very high degree.” I found a paper by Bringsjord & Naveen Sundar Govindarajulu that is at least as recent as 2018 explaining the above-referenced Λ (i.e., “lambda”) measuring system in more technical detail: “Introducing Λ for Measuring Cognitive Consciousness.”
I’ve only skimmed the paper, but something significant jumps out at me right away. It begins by distinguishing between “cognitive” consciousness, which is what the authors aim to measure, and “phenomenal” consciousness, what Tononi’s Φ (i.e., “phi,” of integrated information theory) measures.
Now wait a minute. Does “conscious to a very high degree” not involve experience (i.e., phenomenal consciousness)? If not, and there’s nothing at all that it’s like to be Detective Dupin, then the above abstract becomes less radical.
Alas, this is precisely what the authors mean by “cognitive” consciousness:
Cognitive consciousness, [in stark contrast to phenomenal, or “what it’s like,” consciousness], are those states of an agent that involve its knowing, believing, intending, desiring, perceiving, fearing, communicating, . . . only structurally and computationally speaking, with not even the slightest nod in the direction of “raw feels” or “qualia.” (p 1)
Notice the mention of “perceiving.” This is just what I was worried about in the Penrose–Hofstadter showdown. To be clear, I don’t have a problem with people using terms like “belief” and “perceive” with a behaviorist slant, so long as it’s clear that this is what is meant (as it is in Bringsjord & Govindarajulu’s paper). I happen to think believing, for example, is an activity that is best associated with some sort of felt or experienced dimension, but am sympathetic to behaviorist accounts (the IEP has a nice introductory entry on behaviorism).
I wish I knew what Hofstadter meant above by “perceiving.” Penrose, based on all I’ve heard from him (including on the above Fridman conversation), clearly means the “what it’s like” variety.
At any rate, maybe Bringsjord will visit Fridman’s podcast one of these days and they can chat about it!
[And while I’m thinking about potential guests for Fridman’s podcast, allow me to nominate philosopher Carol Cleland, author of the 2019 book The Quest for a Universal Theory of Life, which was on my wish list before it was even published, after I heard her riveting discussion on the Sci Phi podcast: “Episode 2 – Carol Cleland” (1/3/17); also on YouTube. I have a hunch that a conversation between her and Fridman would yield some nice surprises!]
I’ll close here by returning to thinking about thinking about Gödel’s theorem, this time from another podcast: Sean Carroll’s Mindscape, on which Penrose has also appeared, and spoke similarly of Gödel’s results: “Episode 28: Roger Penrose on Spacetime, Consciousness, and the Universe” (1/7/19).
But the Mindscape episode I’ll reference here featured philosopher Alex Rosenberg: “Episode 21: Alex Rosenberg on Naturalism, History, and Theory of Mind” (11/5/18).
I’ve enjoyed every discussion I’ve heard Rosenberg in. Last year, I read his 2011 book, The Atheist’s Guide to Reality. Liked it a lot. He’s a smart and interesting philosopher whom I admire, and whose views are often portrayed—straw-manned, even—as more outlandish than they really are. Though they certainly are bold (let’s call them) expressions of Rosenberg’s “eliminative materialism”; I’d summarize his view as “the physical facts fix all the facts.”
For Rosenberg, this means your beliefs and desires…
…aren’t thoughts about yourself, about your actions, or about anything else. The brain can’t have thoughts about stuff. It’s got lots of beliefs and desires, but they are not thoughts about things. They are large packages of input/output circuits in your brain that are ready to deliver appropriate or inappropriate behavior when stimulated. (pp 284–285)
Another way to put this, I think, is that some globs of material stuff (e.g., “brain-states,” there’s that word again!) can’t literally be about globs of material stuff.
There you have a small taste of his book. I hesitate to say much more, except that I particularly like what his view says about theory of mind (i.e., the practice of ascribing mental states to others—whether they are people standing in front of us or people read about in a history book). Playing fast and loose with such ascriptions is, I believe, one of the greatest dangers to humanity. I’m not sold on why Rosenberg’s eliminativism rejects theory of mind. But I like that it does, and in fact find that his reasoning overlaps with my own in some interesting ways (including with respect to ascribing mental states to earlier versions of yourself).
[For more on my views on that, see my post “Attention: Mind the Mind Gap.”]
My hope is that, even if someone rejects Rosenberg’s views, it will give them reasonable pause about theory of mind.
For a no-doubt more accurate glimpse of his views, listen to the podcast.
Speaking of which, to be clear, I’m not here to be critical of anything Rosenberg says in this discussion. Rather, I was struck by the context in which Gödel was brought up, and thought I’d share (in the spirit of people talking about people talking about Gödel in a corrective mode).
Carroll is generous enough to supply us with episode transcripts, so I’ll just copy and paste:
0:42:10 AR: Not long ago, Gödel showed us that there was a fundamental incoherence about mathematics, about any axiomatic system strong enough to contain all the truths of arithmetic. I am not in a position to make the same kind of claim about our appeals to intentionality and how they work, but I sort of hope that at some point or other, we, us eliminative materialists, could show that there’s an essentially… That there’s a fundamental incoherence here.
0:42:49 SC: Right. Yeah, that would be wonderful to show. I would put what Gödel showed in slightly less grandiose terms. I think that he showed that there are true statements within formal systems that can’t be proven, and it doesn’t really bother me that much. It’s a profound fact, but I wouldn’t say that the systems themselves are incoherent. They can be completely consistent, but they don’t have purchase on every true statement.
0:43:14 AR: Yeah.
Note, by the way, that Carroll says “they can be completely consistent.” Just to be clear, Gödel’s incompleteness theorems require that the system in question be consistent. (Carroll doesn’t imply otherwise, but I worry his statement might be misread.)
Actually, I’ll close by attempting my own summary of what Gödel showed. I look forward to being corrected:
For any consistent axiomatized arithmetical system, it is possible to produce a well-formed formula that is obviously (i.e., intuitively) true, but that is not provable within the system—that is, cannot be derived from the system’s axioms by its rules of inference. If you add that internally unprovable formula to the system’s axioms in order to compensate, it will always be possible to produce another unprovable well-formed formula. And so on ad infinitum. Thus any such system is necessarily incomplete.
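And for a taste of the machinery that makes the proof go, here is a toy version of Gödel numbering in the Nagel–Newman spirit: each symbol gets a code number, and a formula is encoded as a product of prime powers, so that (by unique factorization) the formula can be recovered from its number. The symbol codes below are my own illustrative assignments, not Gödel’s actual 1931 scheme:

```python
# Toy Gödel numbering: a formula s1 s2 ... sn becomes
# 2^c1 * 3^c2 * 5^c3 * ... (the n-th prime raised to the n-th symbol code).

SYMBOL_CODES = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def primes(n):
    """Return the first n primes by trial division (fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula (a string of known symbols) as a single integer."""
    codes = [SYMBOL_CODES[ch] for ch in formula]
    result = 1
    for p, c in zip(primes(len(codes)), codes):
        result *= p ** c
    return result

# "0=0" has symbol codes 1, 3, 1, so its number is 2^1 * 3^3 * 5^1 = 270.
print(godel_number("0=0"))  # 270
```

Once formulas (and whole derivations) are numbers, statements *about* provability become statements *about* numbers—which is what lets a system of arithmetic talk about itself.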