Laplace’s Demon Defeated by Human Consciousness


In his A Philosophical Essay on Probabilities (1814), Pierre-Simon Laplace describes a perfectly deterministic universe:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes.1

That all-seeing intelligence has since been nicknamed “Laplace’s demon.” In recent decades, developments in areas such as chaos, complexity, and quantum mechanics suggest that ours is not Laplace’s universe, at least not from top to bottom—our universe, according to some, has randomness built in and is often messy and unpredictable.

But suppose you’re not convinced by those developments and still think the world a place of strictly regulated cause-and-effect. Perhaps I can convince you that at least one other feature of the universe would be unfriendly to the demon: human consciousness.2

Imagine Laplace’s demon—“LD” for short—is a recently built supercomputer engaged in the analysis Laplace describes. One day, an LD project researcher—let’s call her Shelly—has the idea that LD can predict her future. Using LD for such a thing would be frowned upon as unethical, but she figures she’ll dabble just to see how her next few hours will go.


Uploading Minds to a Computer Need Not Imply Substance Dualism

I’ve recently stumbled across several dismissals of mind uploading (i.e., the installation of a person’s consciousness onto a computer) on the grounds that it implies substance dualism (i.e., the existence of a soul or disembodied mind or some such, as distinct from a physical body).

Some take this criticism so far as to suggest that transhumanists, Singularitarians, futurists, ideologically neutral AI and/or consciousness researchers, and even just curious-minded folks deeply engaged in contemplation of the prospect’s plausibility and philosophical implications amount to the techno equivalent of those who peddle religion and healing crystals.

I’m mighty skeptical about mind uploading being possible, but I find accusations of dualism misguided. A glance at the Wikipedia entry on mind uploading shows that its proponents seem committed to, and in fact reliant upon, materialism. Computer parts are just as material as brains are, after all. Still, some critics seem to think this commitment at best a matter of philosophical confusion: these techno-frauds just don’t realize they’re dualists (a common accusation these days); and at worst a ruse, perpetrated for whatever reason (again, such characterizations sometimes come with comparisons to snake oil salesmen and other charlatans).

Either way, so claim many critics, those contemplating mind uploading are fundamentally driven by one thing: a deep fear of an oblivion that amounts to the annihilation of their metaphysical essence, or “true self”—which, critics often point out, any educated person knows doesn’t really exist since it can’t be found by an fMRI.1

I won’t address all these criticisms here. Instead, I’ll focus on why I think mind uploading doesn’t require dualism.


An Easier Counterintuitive Conditional Probability Problem (with and Without Bayes’ Theorem)


Given my recent posts about difficult counterintuitive probability problems (a topic from which I’ll now take a break for a while1), I thought it’d be fun to briefly look at a problem that ceases to be counterintuitive once explained. This is a variation on a question commonly given when teaching Bayes’ theorem. I’ll apply the theorem at the end of the post, but will mostly rely on more intuitive methods. Here’s the question:

One percent of 40-year-old women have breast cancer. The chance that a mammography machine correctly diagnoses breast cancer is 80%. That same machine has a 9.6% chance of giving a false positive. Suppose a 40-year-old woman goes in for a regular mammography screening and is diagnosed with breast cancer. What is the probability she has breast cancer?

Most people answer something like: “The machine detects the cancer 80% of the time, so the probability must be 80%!… Or maybe 70% given the false positive rate.” The correct answer is around 7.8%.
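I’ll get to the intuition in a moment, but for the record, here’s the arithmetic behind that 7.8%, sketched in Python (the variable names are mine):

```python
# Posterior probability of cancer given a positive mammogram,
# using the numbers from the problem statement.
prior = 0.01        # P(cancer): 1% of 40-year-old women
sensitivity = 0.80  # P(positive | cancer): correct diagnosis rate
false_pos = 0.096   # P(positive | no cancer): false positive rate

# Total probability of a positive result, then Bayes' theorem.
p_positive = prior * sensitivity + (1 - prior) * false_pos
posterior = prior * sensitivity / p_positive
print(round(posterior, 3))  # 0.078, i.e., about 7.8%
```

The key is the denominator: positives from the cancer-free 99% vastly outnumber positives from the 1% who have cancer.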

Apparently, even a great majority of medical doctors guess too high on these sorts of questions. According to the video Explaining Bayesian Problems Using Visualizations (whose visualizations, by Luana Micallef, are useful for developing an intuition for how the numbers are derived in these problems), 95% of doctors surveyed about the above question guessed 70% to 80%.

I don’t know where that study came from, but here’s a reference, at a Cornell course blog, to another study involving a similar question: Doctors don’t understand Bayes’ Theorem. In that one, the correct answer was 10%. Only 21% of the doctors surveyed (out of 1,000 gynecologists) got the correct answer, and nearly half gave an answer of 90%.

There’s also an excellent discussion of doctors’ lack of explicit Bayesian probability skills in Daniel Levitin’s 2014 book The Organized Mind. Levitin also notes, however, that some doctors intuitively “apply Bayesian inferencing without really knowing they’re doing it” (page 248). This points to an important feature of Bayesian probability: it really does have intuitive underpinnings. Indeed, the solution to the above problem is intuitive—perhaps even obvious—once you see it.


Three Strange Results in Probability: Cognitive States and the Principle of Indifference (Monty Hall, Flipping Coins, and Factory Boxes)


Box Factory, Edward Hopper, 1928

Probability is known for its power to embarrass our intuitions. In most cases, math and careful observation bear out counterintuitive results. After many such experiences, one’s intuition improves (sometimes perhaps crossing into a kind of overcorrection—see the Optional Endnote for some inchoate thoughts on that). But some results stay strange, and it’s not always clear whether our rebelling intuitions signal a problem with formal probability, or simply confirm that human cognition has evolved to concoct tidy stories amounting to illusory—if sophisticated—representations of the world rather than to deal head on with complexity, chance, and uncertainty.

Here, briefly, are three results I find particularly interesting because, despite being strange (if not problematic), their solutions are simple within their given models.



Two-Child Problem (when one is a girl named Florida born on a Tuesday)


Two Children, Vincent van Gogh, 1890

A classic probability riddle goes:

A couple has two children, one of whom is a girl. What is the probability both children are girls?

It’s usually credited to Martin Gardner who, in a 1959 issue of Scientific American, posed essentially this question but involving two boys. It’s commonly solved by working through a sample space in which gender outcome is conveniently assumed to be 50-50: GG, GB, BG, BB

We know the couple has a daughter, so we can eliminate BB. Each sibling-pair outcome has the same probability—i.e., 1/4—of occurring. One of the three remaining outcomes satisfies the GG condition. We thus conclude: Given that one sibling is a girl, there’s a 1/3 chance both siblings are girls.
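If the sample-space argument feels too quick, a Monte Carlo check lands on the same 1/3. A sketch in Python, with the same convenient 50-50 gender assumption:

```python
import random

# Simulate two-child families; among those with at least one girl,
# count how often both children are girls.
random.seed(0)
trials = 100_000
at_least_one_girl = both_girls = 0
for _ in range(trials):
    kids = [random.choice("GB") for _ in range(2)]
    if "G" in kids:
        at_least_one_girl += 1
        if kids == ["G", "G"]:
            both_girls += 1
print(both_girls / at_least_one_girl)  # ~0.333
```

Note that the conditioning happens by filtering: we only count families that contain a girl, which is exactly what crossing out BB does in the sample space.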

This result contrasts with the intuitive answer of 1/2, which many give on the faulty grounds that the unmentioned child’s gender must be equally likely for boy or girl.1 It turns out, though, that there’s an alternative interpretation that leads to a valid 1/2 result, more about which shortly. My goal in writing this post is to help myself and others develop an intuitive understanding of this problem and its variations.

In his 2008 book, The Drunkard’s Walk: How Randomness Rules Our Lives, Leonard Mlodinow offers a variation that goes: What if you learn one of the children is a girl named Florida? He claims this information changes the probability of both children being girls from 1/3 to practically 1/2.

At first glance, this result struck me as so counterintuitive that I rejected it outright. But then it occurred to me that the information could make a difference under the interpretation that gets 1/3 in the standard Two-Child problem—which is indeed the interpretation Mlodinow is applying. I still couldn’t accept, however, that it would make any difference under the aforementioned valid alternative interpretation; more precisely: when randomly observing that one daughter in a two-child family is named Florida. In that case, the chance of two girls should remain 1/2, though I could only confirm this by playing with the math (which does in fact confirm 1/2).

What strikes me as especially interesting here is that learning about the name Florida updates the 1/3 answer to practically 1/2, which is closer to both the intuitive (though ill-founded) 1/2 answer and the valid 1/2 answer.

Before getting into the nuts and bolts of what distinguishes the 1/3 and 1/2 interpretations, I’m going to introduce a problem that’s similar to, but conceptually easier than, the “Florida” example. At the 2010 Gathering 4 Gardner convention, puzzle maker Gary Foshee presented this problem:

I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
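Under the same “at least one” reading that yields 1/3 above, Foshee’s question can be settled by brute enumeration. A sketch in Python (the weekday encoding is mine):

```python
from fractions import Fraction
from itertools import product

# Each child is a (sex, weekday) pair; all 14 types equally likely.
kids = list(product("BG", range(7)))
families = list(product(kids, kids))  # 196 equally likely families

def tuesday_boy(child):
    sex, day = child
    return sex == "B" and day == 2  # let day 2 stand for Tuesday

# Condition on "at least one boy born on a Tuesday."
qualifying = [f for f in families if any(tuesday_boy(c) for c in f)]
two_boys = [f for f in qualifying if all(c[0] == "B" for c in f)]
print(Fraction(len(two_boys), len(qualifying)))  # 13/27
```

13/27 is noticeably closer to 1/2 than 1/3 is, which previews the effect the name Florida has in Mlodinow’s variation.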


Counterintuitive Dice Probability: How many rolls expected to get a 6, given only even outcomes?

1. A Counterintuitive Probability Problem

Mathematician Gil Kalai recently posted the following intuition-bending probability problem at his blog:

You throw a dice until you get 6. What is the expected number of throws (including the throw giving 6) conditioned on the event that all throws gave even numbers.*

(*The problem is originally due to Elchanan Mossel. The term “a dice” is used here to denote a single die. Some people dislike this usage, though it does not obscure the question.)

Most people give the initially intuitive answer: three. But that’s wrong. I’ll get to the right answer below. Solutions, along with commenters’ expressions of bewilderment and contention, have been posted at Kalai’s blog, at Math with Bad Drawings, and at Mind Your Decisions’ blog and YouTube channel. Interestingly, one YouTube commenter accepted the correct solution, even explaining it in their own words, then later reverted to the incorrect answer of three.

Methods for getting the correct solution are often hard to follow because they involve complicated-looking math or rely on background knowledge. I’ll share a solution that I think makes the problem more intuitive, and that requires only a basic understanding of probability, along with a little precalculus. Still, I’ll be sure to review the most relevant concepts as I go along, just in case it’s helpful.
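Before any derivation, it’s worth seeing the result empirically. Here’s a rough Monte Carlo sketch in Python: roll until a 6, keep only the runs in which every roll was even, and average the kept run lengths. The conditional average comes out near 1.5, not 3:

```python
import random

random.seed(1)
lengths = []
while len(lengths) < 100_000:
    rolls = 0
    while True:
        rolls += 1
        r = random.randint(1, 6)
        if r == 6:            # run ended in a 6 with all rolls even: keep it
            lengths.append(rolls)
            break
        if r % 2 == 1:        # an odd roll disqualifies the run: discard it
            break
print(sum(lengths) / len(lengths))  # ~1.5
```

The crucial detail is the discard step: conditioning on “all throws even” throws away most long runs, because long runs have more chances to hit an odd number.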


Aesthetic Experience as Response to Stimuli (and Meaning) Rather Than to Mental Representations

I would like to briefly explore the question of whether our aesthetic experience is in response to mental representations per se or, rather, to the physical stimuli correlated with those mental representations (in which case, the stimuli are responsible—or are the external-to-body beginnings, to be a bit more precise—for both the aesthetic experience and the mental representation). Perhaps aesthetic experience isn’t possible without some cultural, historical, personal (e.g., nostalgia), or biological (e.g., when a guitar mimics human sobbing) association—in other words, without meaning. I’ll get to that in a moment. But for now I’ll think in terms of aesthetic experience without such associations. I’ll use music as a reference point.

The usual intuition is that our aesthetic responses to music are in response to the way music sounds; that is, are in response to the experience of music.1

An illustration will clarify what I’m getting at. When you strike the A4 key on a piano, two events take place. One event is a physical process: air molecules and other materials vibrate at 440 Hz, resulting in your neural machinery firing in sympathy with that frequency (I’ll use note to refer to a frequency in this sort of context; so the note A4 refers to the frequency 440 Hz in a musical or similar context); the other event is mental: you have an experience, the content of which is—or, perhaps better put, the identity of which is—the pitch A (notice that I use pitch to denote the mental event correlated with the note A; also notice that the pitch A is a term we use to refer to an experience possessing a certain quality: a quality present whether the correlated note’s source is an oboe, human voice, singing bird, and so on).

Our intuition is that our aesthetic experience results from the qualities residing in the mental events; that is, the qualities of those pitches and the structures they form—i.e., harmonies, melodies, and other musical mental events. In short: We have, it seems, an experience of beautiful music because the music-oriented mental events themselves are beautiful.

Intuitive, yes. But this strikes me as quite probably wrong.


Utilitarianism and Conscious Computers: An Unsettling Utopia?

Broadly speaking, utilitarianism is the view that right action is that which promotes the greater good. It has been revised, developed, and adapted into varying systems of thought over the last three hundred years or so. (For more on that, see the Stanford Encyclopedia of Philosophy entry The History of Utilitarianism and the Internet Encyclopedia of Philosophy entry Act and Rule Utilitarianism.) I’ll focus here on a common understanding of utilitarianism in which the greater good is evaluated as a function of aggregated happiness or suffering.

I generally rely on the terms happiness and suffering to refer to opposing ends of a spectrum that runs from experiences of the greatest possible positive valence to those of the greatest possible negative valence. On the view I explore here, right action is that which results, on balance, in the most happiness or, at a minimum, in the least suffering; suffering may be diluted or nulled by happiness, and vice versa. Call this the aggregate utilitarian (AU) view. (I take this approach to be in line with what’s sometimes called average utilitarianism, though I’m not committed to any strict aggregation calculus; I’m interested in any utilitarian system that aims to evaluate preferences by aggregating experience.)

An interesting problem arises at the intersection of this view and the idea held by many that conscious computers are possible. (For brevity’s sake, I’ll simply say that, by conscious, I mean having the capacity for experience, in particular complex experience—something along the lines of your capacity to experience the cold of an ice cube, the pain of a needle prick, the nagging thought that you should wash some clothes, and the longing for an absent loved one.) I’m not convinced that conscious computers are possible, but if I were an AU (I’m not), I might think it our duty to strive to create and mass produce such beings due to the following observation:

Given enough happy computers, an AU would be obliged to say that the amount of suffering in the world is now negligible. That is, as the number of happy computers increases, the percentage of suffering in the world tends to zero, making that world increasingly preferable to one—all else being roughly equal—without conscious computers.
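As a toy illustration of the averaging at work (the valence numbers here are entirely made up), watch a fixed pool of human suffering get swamped as hypothetical happy computers are added:

```python
# A fixed population of a thousand sufferers, each at valence -10.
human_valence = [-10] * 1_000

# Add ever more hypothetical happy computers, each at valence +8,
# and watch the world's average valence climb toward +8.
for n_computers in (0, 1_000, 1_000_000):
    world = human_valence + [8] * n_computers
    avg = sum(world) / len(world)
    print(n_computers, round(avg, 3))
```

With no computers the average valence is −10; with a million computers it is nearly +8, and the humans’ suffering has become statistical noise.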


The Magic of Meaning – (Words, Mental Causation, Experience, Mind-Brain, Behavior)


Aleister Crowley… what are the magic words?

Several years ago, on a winter weekend afternoon, I received a call telling me that my mother had just been hit by a car. The details are hazy. I think the voice was that of a stranger using my mom’s cellphone—perhaps it was the convenience store owner who witnessed the accident and waited with her in the sub-zero cold for the ambulance. Or was it my sister? At any rate, these details aren’t the point here (my mother was injured but eventually healed, by the way).

The point is: Upon hearing those words, my sympathetic nervous system kicked in, adrenaline started flowing, and, in short, I felt my heart sink into my stomach.

How can words do this? How can they perform this magic of altering one’s physiology in an instant, with no more force or matter than that involved with a few puffs of air and the fine machinery of one’s auditory faculties? This is the question I’d like to explore here, one whose difficulty lies especially in the phenomenon of mental causation. (Spoiler: I don’t have an answer; I think, instead, it’s a question best surveyed in order to map the depths and scope of its intractability, and the implications thereof.)

To be clear, words themselves play only a small role in initiating such bodily changes. More important is the meaning of a word (or a set of words), which involves not only what those words refer to, but how they’re delivered and whether the hearer believes them. There is of course a basic, perhaps even trivial, sense in which words have definitional meanings; this is what elevates the status of a sound—or written symbol or hand gesture—to that of being a word. But this is far from the whole story of what one works towards, what one means, when using words; indeed, words aren’t necessary for meaning. A sob, laugh, or scream often carries a great deal of meaning. And a slight vocal inflection can easily indicate not only that we are to assign to a word the opposite of its usual basic meaning (not surprisingly, given that in such cases the usual meaning still serves as a kind of semantic or conceptual anchor), but also that we should assign a meaning that has nothing to do with the word’s usual usage (which now offers no anchor).

With this in mind, consider again the phone call example.


Nassim Taleb’s Fat Tony Example / And: Is it possible to flip 100 Heads in a row?


In his book The Black Swan1, Nassim Nicholas Taleb, a fellow urban slow-walker, describes a scenario in which he poses the following question to two characters, the rational & educated Dr. John and the intuitive & streetwise Fat Tony:

Assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw? 2

Dr. John refers to the question as trivial and gives the mathematically correct answer of one half. Fat Tony calls Dr. John a sucker and says, “no more than 1 percent, of course … the coin gotta be loaded.”

This gets at a critical disconnect, noted often by Taleb in The Black Swan, that arises when we endeavor to generalize a real-world-applicable probability calculus from neatly devised games (application of which he aptly calls the ludic fallacy). This distinction between probability models and the real world is one I often struggle with in my ongoing attempts to understand the tense relations between formal probability, intuition (what I sometimes call informal probability)3, complexity, and epistemology (i.e., belief, opinion, knowledge). In short, I’m with Fat Tony: If I ever saw someone throw 99 Heads in a row, I’d think the game rigged.

To be clear, Taleb’s example urges us to go further than simply suspecting foul play should we encounter a real-world instance of 99 Heads in a row. I presume the rational Dr. John would also be skeptical in that situation. What’s questioned in the example, rather, is whether we should accept such a scenario even on conceptual or theoretical grounds. This is what Fat Tony refuses to do by rejecting the thought experiment itself.
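Fat Tony’s hunch can even be put in Bayesian dress. Suppose (my numbers, purely for illustration) a one-in-a-million prior that the coin is double-headed. After 99 straight heads, the rigged hypothesis all but certainly wins:

```python
from fractions import Fraction

prior_rigged = Fraction(1, 10**6)   # assumed prior: 1-in-a-million the coin is rigged
p_data_rigged = Fraction(1)         # a double-headed coin always shows heads
p_data_fair = Fraction(1, 2) ** 99  # fair coin: probability of 99 heads in a row

# Bayes' theorem: posterior probability the coin is rigged, given 99 heads.
posterior_rigged = (prior_rigged * p_data_rigged) / (
    prior_rigged * p_data_rigged + (1 - prior_rigged) * p_data_fair
)
print(float(posterior_rigged))  # effectively 1.0
```

On these assumptions, 99 heads favor the rigged hypothesis over the fair one by a factor of roughly 2^99, which swamps even a one-in-a-million prior—so the chance of tails on the next throw is, as Tony says, nowhere near one half.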

I’d like to explore this theme further, starting with a similar question: What is the probability of throwing 100 Heads in a row?

Some thoughts (I used Wolfram|Alpha for the math):

This is an easy question to answer. The probability of flipping a fair coin and getting 100 Heads in a row is 1 in 2^100. That’s 1 in 1,267,650,600,228,229,401,496,703,205,376.

Or, written out: 1 in 1 nonillion 267 octillion 650 septillion 600 sextillion 228 quintillion 229 quadrillion 401 trillion 496 billion 703 million 205 thousand 376

Or, in decimal form: .0000000000000000000000000000007888609052210118054117285652827862296732064351090230047702789306640625

In other words, the probability is very, very, very, very low. Not zero, but might as well be.4

And the probability of getting at least one Tails in 100 flips is 1 − (1/2)^100.
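The figures above are easy to confirm exactly with rational arithmetic; a quick check in Python rather than Wolfram|Alpha:

```python
from fractions import Fraction

p_all_heads = Fraction(1, 2) ** 100  # probability of 100 Heads in a row
print(p_all_heads.denominator)       # 1267650600228229401496703205376

p_some_tails = 1 - p_all_heads       # at least one Tails in 100 flips
print(float(p_some_tails))           # so close to 1 that a float rounds it to 1.0
```

That last line is its own small lesson: the difference between 1 − (1/2)^100 and certainty is too small for double-precision floats to represent.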
