Aesthetic Experience as Response to Stimuli (and Meaning) Rather Than to Mental Representations

Setting aside, for a moment, associative responses (e.g., cultural, nostalgic), I would like to briefly explore the question of whether our pure aesthetic experience is in response to mental representations per se or, rather, to the physical stimuli correlated with those mental representations (in which case, the stimuli are responsible—or are the external-to-the-body beginnings, to characterize things a bit more precisely—for both the aesthetic experience and the mental representation). Perhaps aesthetic experience isn’t possible without some cultural, historical, or biological associations (e.g., when a guitar mimics human sobbing)—in other words, without meaning. I’ll get to that in a moment. But for now I’ll think in terms of aesthetic experience without these associations. I’ll use music as a reference point.

Our usual intuition is that our aesthetic responses to music are in response to the way that music sounds; that is, are in response to the experience of music.1

An illustration will clarify what I’m getting at. When you strike the A4 key on a piano, two events take place. One event is a physical process: air molecules and other materials vibrate at 440 cycles per second, i.e., 440 Hz, resulting in your neural machinery firing in sympathy with that frequency (I’ll use note to refer to a frequency in this sort of context; so the note A4 refers to the frequency 440 Hz in a musical or similar context). The other event is mental: you have an experience, the content of which is—or, perhaps better put, the identity of which is—the pitch A. (Notice that I use pitch to denote the mental event correlated with the note A; notice also that the pitch A is a term we use to refer to an experience possessing a certain quality: a quality present whether the correlated note’s source is an oboe, a human voice, a singing bird, and so on.)

Our intuition is that our aesthetic experience results from the qualities residing in the mental events; that is, the qualities of those pitches and the structures they form—i.e., harmonies, melodies, and other musical mental events. In short: We have, it seems, an experience of beautiful music because the music-oriented mental events themselves are beautiful.

Intuitive, yes. But this strikes me as quite probably wrong.

Utilitarianism and Conscious Computers: An Unsettling Utopia?

Broadly speaking, utilitarianism is the view that right action is that which promotes the greater good. It has been revised, developed, and adapted into varying systems of thought over the last three hundred years or so. (For more on that, see the Stanford Encyclopedia of Philosophy entry The History of Utilitarianism and the Internet Encyclopedia of Philosophy entry Act and Rule Utilitarianism.) I’ll focus here on a common understanding of utilitarianism in which the greater good is evaluated as a function of aggregated happiness or suffering.

I generally rely on the terms happiness and suffering to refer to opposing ends of a spectrum that runs, respectively, from experiences of the greatest possible positive to the greatest possible negative valence. On the view I explore here, right action is that which results, on balance, in the most happiness; or, at a minimum, in the least suffering; suffering may be diluted or nulled by happiness, and vice versa. Call this the aggregate utilitarian (AU) view. (I take this approach to be in line with what’s sometimes called average utilitarianism, though I’m not committed to any strict aggregation calculus; I’m interested in any utilitarian system that aims to evaluate preferences by aggregating experience.)

An interesting problem arises at the intersection of this view and the idea held by many that conscious computers are possible. (For brevity’s sake, I’ll simply say that, by conscious, I mean having the capacity for experience, in particular complex experience—something along the lines of your capacity to experience the cold of an ice cube, the pain of a needle prick, the nagging thought that you should wash some clothes, and the longing for an absent loved one.) I’m not convinced that conscious computers are possible, but if I were an AU (I’m not), I might think it our duty to strive to create and mass produce such beings due to the following observation:

Given enough happy computers, an AU would be obliged to say that the amount of suffering in the world is now negligible. That is, as the number of happy computers increases, the proportion of suffering in the world tends to zero, making that world, all else being roughly equal, increasingly preferable to one without conscious computers.
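The arithmetic behind this observation can be sketched with a toy model (the function name, the quantities, and the units of "valence" are all my own illustrative inventions, not anything from the utilitarian literature): hold the world's suffering fixed, add happy computers, and suffering's share of total experience shrinks toward zero.

```python
# Toy model of the AU observation: hold (hypothetical) human suffering fixed,
# add N happy computers, and watch suffering's share of total valence shrink.
# All quantities are made-up units, chosen purely for illustration.

def suffering_share(human_suffering, happy_computers, happiness_per_computer=1.0):
    """Fraction of total valence (suffering + happiness) that is suffering."""
    total = human_suffering + happy_computers * happiness_per_computer
    return human_suffering / total

for n in (0, 100, 10_000, 1_000_000):
    print(n, suffering_share(1000.0, n))
```

With no computers the share is 1; with a million of them it falls below a tenth of a percent, which is just the limit-to-zero behavior described above.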

The Magic of Meaning – (Words, Mental Causation, Experience, Mind-Brain, Behavior)

Aleister Crowley… what are the magic words?

Several years ago, on a winter weekend afternoon, I received a call telling me that my mother had just been hit by a car. The details are hazy. I think the voice was that of a stranger using my mom’s cellphone—perhaps it was the convenience store owner who witnessed the accident and waited with her in the sub-zero cold for the ambulance. Or was it my sister? At any rate, these details aren’t the point here (my mother was injured but eventually healed, by the way).

The point is: Upon hearing those words, my sympathetic nervous system kicked in, adrenaline started flowing, and, in short, I felt my heart sink into my stomach.

How can words do this? How can they perform this magic of altering one’s physiology in an instant, with no more force or matter than that involved with a few puffs of air and the fine machinery of one’s auditory faculties? This is the question I’d like to explore here, one whose difficulty lies especially in the phenomenon of mental causation. (Spoiler: I don’t have an answer; I think, instead, it’s a question best surveyed in order to map the depths and scope of its intractability, and the implications thereof.)

To be clear, words themselves play only a small role in initiating such bodily changes. More important is the meaning of a word (or a set of words), which involves not only what those words refer to, but how they’re delivered and whether the hearer believes them. There is of course a basic, perhaps even trivial, sense in which words have definitional meanings; this is what elevates the status of a sound—or written symbol or hand gesture—to that of being a word. But this is far from the whole story of what one works towards, what one means, when using words; indeed, words aren’t necessary for meaning. A sob, laugh, or scream often carries a great deal of meaning. And a slight vocal inflection can easily indicate not only that we are to assign to a word the opposite of its usual basic meaning (not surprisingly, given that in such cases the usual meaning still serves as a kind of semantic or conceptual anchor), but also that we should assign a meaning that has nothing to do with the word’s usual usage (which now offers no anchor).

With this in mind, consider again the phone call example.

Nassim Taleb’s Fat Tony Example / And: Is it possible to flip 100 Heads in a row?

Flâneuse, book by Lauren Elkin

In his book The Black Swan1, Nassim Nicholas Taleb, a fellow urban slow-walker, describes a scenario in which he poses the following question to two characters, the rational & educated Dr. John and the intuitive & streetwise Fat Tony:

Assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw? 2

Dr. John refers to the question as trivial and gives the mathematically correct answer of one half. Fat Tony calls Dr. John a sucker and says, “No more than 1 percent, of course … the coin gotta be loaded.”

This gets at a critical disconnect, noted often by Taleb in The Black Swan, that arises when we endeavor to generalize a real-world-applicable probability calculus from neatly devised games (application of which he aptly calls the ludic fallacy). This distinction between probability models and the real world is one I often struggle with in my ongoing attempts to understand the tense relations between formal probability, intuition (what I sometimes call informal probability)3, complexity, and epistemology (i.e., belief, opinion, knowledge). In short, I’m with Fat Tony: If I ever saw someone throw 99 Heads in a row, I’d think the game rigged.

To be clear, Taleb’s example urges us to go further than simply suspecting foul play should we encounter a real-world instance of 99 Heads in a row. I presume the rational Dr. John would also be skeptical in that situation. What’s questioned in the example, rather, is whether we should accept such a scenario even on conceptual or theoretical grounds. This is what Fat Tony refuses to do by rejecting the thought experiment itself.

I’d like to explore this theme further, starting with a similar question: What is the probability of throwing 100 Heads in a row?

Some thoughts (I used Wolfram|Alpha for the math):

This is an easy question to answer. The probability of flipping a fair coin and getting 100 Heads in a row is 1 in 2^100. That’s 1 in 1,267,650,600,228,229,401,496,703,205,376.

Or, written out: 1 in 1 nonillion, 267 octillion, 650 septillion, 600 sextillion, 228 quintillion, 229 quadrillion, 401 trillion, 496 billion, 703 million, 205 thousand, 376.

Or, in decimal form: .0000000000000000000000000000007888609052210118054117285652827862296732064351090230047702789306640625

In other words, the probability is very, very, very, very low. Not zero, but might as well be.

And the probability of getting at least one Tails in 100 flips is 1 – (1/2)^100.
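For what it's worth, the figures above are easy to check with exact integer arithmetic rather than Wolfram|Alpha (a quick sketch using Python's `fractions` module):

```python
from fractions import Fraction

# Probability of 100 Heads in a row with a fair coin, computed exactly.
p_100_heads = Fraction(1, 2) ** 100
print(2 ** 100)            # the denominator: 1267650600228229401496703205376
print(float(p_100_heads))  # about 7.89e-31

# Probability of at least one Tails in 100 flips.
p_at_least_one_tails = 1 - p_100_heads
```

Working in exact fractions avoids any floating-point rounding until the final conversion for display.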

Free Will Is (Mostly) Irrelevant

The Folio Society‘s cover for their 2014 illustrated A Clockwork Orange.

In this essay, I argue that the question of whether humans have free will need not be viewed as important: it is largely irrelevant to whether one is living a good life, and is even mostly irrelevant to questions surrounding punishment and desert, except insofar as we cast it as—believe it to be, treat it as being, etc.—relevant.

Free will is understandably of concern for those who believe that an all-loving deity would only punish those who deserve it. But for the rest of us, there is no principled reason to view it as relevant, and so it should accordingly be struck from, for example, our core conception of justice, particularly as a vehicle for retribution. Indeed, free will seems to be one of many current questions that turn out to be potentially outmoded holdovers from a theological framework that has been gradually disassembled since the start of the scientific revolution. Where once stood a serious concern about deserving punishment for offending a deity, now stand questions about what counts as being responsible for one’s actions and thus deserving an F in calculus, low socio-economic status, or the death penalty; or, less significantly, about whether one really freely chose to push a button in a psychology experiment.

Depending on one’s definition of free will, what I describe here may imply that humans don’t have it. If so, it’s a trivial kind of lack, due to free will simply not existing any more than a square circle does. This is different from merely saying that humans don’t have free will, because that statement alone doesn’t imply anything about other sorts of beings (deities, alien lifeforms, post-humans, conscious computers…). In other words: If free will turns out to be such a confused or vague concept that the term doesn’t actually denote any phenomenon at all, then no entity of any sort has it or, really, can be conceived of as having it.

On other definitions of free will, what I describe here will seem to imply that people do have free will. That’s fine as well.

My point is that it doesn’t—or at least needn’t—matter whether one has free will according to this or that definition. What does matter is that we have an understanding of the phenomena that relate to our conceptions of free will—such as justice, praise, blame, and punishment—and how those phenomena relate to our lived lives as beings capable of suffering or thriving (I put more emphasis on the former).

Amnesiac’s Dilemma (Aka: Sleeping Beauty Problem)

There’s a probability problem that lacks an obvious solution, despite appearing simple at first glance. It’s usually called the Sleeping Beauty Problem, but I’m uncomfortable with that formulation, as it strikes me as needlessly sexist: it usually revolves around a young woman who is put to sleep by researchers, awoken and questioned about the result of a coin flip, then given a mild memory-erasing procedure, then put back to sleep, etc. Maybe it’s not (always) sexist. In some versions, Sleeping Beauty is said to have consented. But anyone can be made to consent to anything in fiction. At any rate, there’s no harm in changing it, and in some ways doing so makes it easier to think about (in other ways, not so much).

What you see here is my attempt at thinking through this difficult problem, which I reformulate as the Amnesiac’s Dilemma. (Though I’ll refer to it as the Sleeping Beauty Problem as well; I just won’t use that story… much.) The upshot of the dilemma is that 1/2 and 1/3 both seem to be viable solutions (proponents of which have been called halfers and thirders, respectively). Rather than rule the problem indeterminate, we take it that there must be some fact of the matter about which solution is correct given that the problem can be reasonably well-defined by a discrete, finite sample space in which the experiment may be repeated indefinitely.

To make sense of the problem, we need to be clear about its relevant features—for example, about what counts as a desired outcome. It may be that 1/2 is valid for one sort of outcome, while 1/3 is for another. In fact, I ultimately conclude here that, whenever asked “Heads or Tails?”, the Amnesiac (Henry, in this case) should have a credence of 1/2 that the coin landed Heads, even though he may (reasonably) simultaneously assign 1/3 credence to his situation being, say, {(First Question AND Heads)} (Henry’s analogue to Sleeping Beauty’s {(Monday AND Heads)}). I don’t take this result lightly (if pushed, I might lean towards 1/2 or agnosticism). This will make more sense (I hope!) by the end.
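The pull of both answers can be seen in a simple simulation (the simulation is my own hypothetical sketch, not part of the original problem statement): one fair flip per experimental run, with the Amnesiac questioned once on Heads and twice on Tails. Which number you get depends on what you count as an outcome: a run, or a questioning.

```python
import random

def simulate(runs, seed=0):
    """One fair flip per run; 1 questioning on Heads, 2 on Tails."""
    rng = random.Random(seed)
    heads_runs = 0
    total_questionings = 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        total_questionings += 1 if heads else 2
        heads_runs += heads
    per_run = heads_runs / runs                        # fraction of runs that were Heads
    per_questioning = heads_runs / total_questionings  # fraction of questionings under Heads
    return per_run, per_questioning

per_run, per_questioning = simulate(100_000)
print(per_run)          # close to 1/2
print(per_questioning)  # close to 1/3
```

The halfer's 1/2 and the thirder's 1/3 are both present; they answer different counting questions, which is exactly the ambiguity about "desired outcome" raised above.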

Memory and Consciousness (via Audition)

Mnemosyne (1881), Dante Gabriel Rossetti

I’d like to explore the following strong claim: Without memory, consciousness is not possible. Put another way: If you had no memory at all, you would not be conscious. I suspect this claim to be true, even if we take the minimum requirement for consciousness to be experience (of any sort).

I’ll explore this by thinking about the relationship between memory and audition, starting with a thought experiment:

Imagine having an extremely short memory while listening to a melody. So short that there would be no melody, only an unrecognized series of disparate, unrelated pitch experiences. Now shorten the memory even more. Now imagine having no memory at all. Suppose the note A above middle C is sounded on a piano. Strings now vibrate at 440 cycles per second (excluding overtones), and in turn excite the air molecules in the room to vibrate at that same frequency. These vibrations are picked up by your auditory faculties, which endeavor to produce in your mind the sensory impression of the pitch A. Without any memory at all, however, you would not experience the pitch. In fact, you wouldn’t be able to experience even one cycle of oscillation (several of which would be needed in order to experience a pitch). To register even one cycle, your mind must take in and store the entirety of an oscillation over the duration of 1/440th of a second (about 2.27 ms); without memory, the beginning of the cycle will have been forgotten by the time it reaches its end.
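The time scales in this thought experiment are easy to make concrete (a small sketch, assuming standard A440 tuning; the sample rate is just the CD-audio convention, not anything from the text):

```python
import math

FREQ_HZ = 440.0               # the note A above middle C
cycle_seconds = 1 / FREQ_HZ
print(cycle_seconds * 1000)   # about 2.27 ms per oscillation

# One full cycle of the pressure wave, sampled at the CD-audio rate:
SAMPLE_RATE = 44_100
samples_per_cycle = SAMPLE_RATE / FREQ_HZ  # about 100 samples
one_cycle = [math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
             for n in range(round(samples_per_cycle))]
```

Even one cycle, then, is a structured event spread over milliseconds, which is what the argument asks you to imagine holding together without any memory at all.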

My suggestion here is that, without memory, you cannot connect (i.e., by way of experience) the beginning of a single unit of oscillation, of which there are 440 per second, with its middle or end. Your mind thus becomes an experience sieve. It all just passes through.

Why We Get the Monty Hall Problem Wrong(?)

I’d rather have the goat.

Part I: The Monty Hall Problem

The Monty Hall Problem (explained below) is one of those math results that strikes most people as counterintuitive. The problem is often illuminated by restating it with 100 doors instead of 3 doors. This makes many people go, “Ah, now I get it,” and concede that their intuition must be wrong. Nevertheless, for many of them the 3-door scenario continues to be counterintuitive.

This leads many to ask, “Why don’t I understand the Monty Hall Problem?” Like this person at Quora: Why doesn’t the “Monty Hall problem” make sense to me? The usual response is to try to demonstrate to the person why the correct answer is correct—to try to get it to click. But, even when this works (sometimes it seems to), it doesn’t address why the problem’s solution feels so counterintuitive, nor why the standard wrong answer feels so right. I think I have an idea of what’s going on.

First, a summary of the problem.

Suppose you’re playing a game in which you’re faced with three closed doors, numbered 1, 2, and 3. You are told by the game-master (who does not lie and only speaks the truth) that one of the doors conceals a car, and the other two doors each conceal a goat. You’re not told which door conceals which item. (The game-master need not know which door conceals which item, by the way, though the game goes more smoothly if she does. To be clear, though, it must be understood that the game-master will reveal a goat in all instances of the game.* See the End Note, however, for how the game-master’s knowing could affect how a player should guess.) The arrangement of goats and car will not be changed throughout the course of the game.
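Before getting to the psychology, it's worth noting that the correct answer itself is easy to verify by simulation (a quick sketch of the game as just described, assuming the host always reveals a goat from the unchosen doors):

```python
import random

def play(switch, rng):
    """One round of the game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that conceals a goat and isn't the player's pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(1)
trials = 100_000
stick = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(stick)  # close to 1/3
print(swap)   # close to 2/3
```

Sticking wins exactly when the first pick was the car (probability 1/3); switching wins in the remaining 2/3 of cases, since the host's reveal leaves the car behind the one other closed door.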

Free Will Paradox?

Spike with a chip. (Do chip-less vampires have free will?)

There might be a paradox—or tension?—having to do with how we assess what counts as a negation of free will. Namely, we don’t generally consider that which is physically impossible to count as evidence against free will’s existence; yet to rule out free will is to say that it is physically impossible. Is there a paradox here, or would the matter of free will’s existence be straightforwardly settled once we’ve (correctly) noticed that free will is impossible? Some reflections:1

1. What is required in order to answer the (metaphysical) question: Does S have free will?

2. How is this different from asking: Is S (physically) able to exercise S’s free will?2

3. A standard definition of free will is: The ability to have done otherwise. I’ll refine this as: The ability to have chosen to do otherwise (for reasons I’ll discuss below; though I’m not sure there’s often a need to be a stickler about this wording).3

To elaborate:
(i) Five minutes ago, S chose to do, and then did, activity A.

Consciousness Explained in Three Billion Pages

An unconscious naked man (1912), Richard Tennant Cooper

Cognitive scientist Donald Hoffman has recently been getting popular attention for a theory that holds consciousness as fundamental—that is, “…not as something derivative or emergent from a prior physical world”1, which he supports with a novel account of the relationship between objects and perception (a relationship that he argues, on evolutionary grounds, is nonveridical, i.e., perception does not faithfully represent or reconstruct objective/external reality); Hoffman’s theory also requires a novel account of the relationship between the brain and consciousness. I’m not going to get into the theory here, though I find it thought-provoking.2 Instead, I’d like to pivot off a nice expression he often employs about our prospects for understanding consciousness (something we seem yet very far from accomplishing):

“Some experts think that we can’t solve this problem because we lack the necessary concepts and intelligence. We don’t expect monkeys to solve problems in quantum mechanics, and, as it happens, we can’t expect our species to solve this problem either.”3

Unlike Hoffman, I agree with the skeptical experts. Before explaining why, some conceptual grounding.
