1. I believe one of humanity’s greatest sources of conflict to be our inability to really know—directly, firsthand—what one another is thinking. I refer to this inability as the “phenomenological gap” or “phenomenal gap” or “mind gap.”
I don’t believe the gap can be bridged in any literal sense. But the gap itself isn’t the problem. It’s that we so often ignore the gap, believing ourselves to know what each other really believes, feels, desires. With that in mind, I’m convinced that our best bet for dealing with the gap—for thriving and being enriched both despite of and due to the gap—is active, mutual attention. More on that as I go along.1
2. By “phenomenology” (whose adjective is “phenomenological” or “phenomenal”) I mean experience. By “mind” I mean the total geography of one’s inner world for which there is a phenomenological account. In other words, the sum of your experiences at a given moment and across time, insomuch as you experience that sum (which intimately involves memory, insofar as you can access and fabricate and fit those experiences you presume to constitute “memory” or “memories” into an acceptably coherent picture of “my mind,” which includes experiences catalogued according to your sense of past, present, future). In this discussion of the gap, I’ll use the words “phenomenology” and “mind” interchangeably (though will likely favor “mind” as it’s shorter), but deep down I do mean “phenomenology” to be something smaller than “mind,” and I think a more rigorous account would reflect the distinction.
A briefer way to put my intended usage might be: phenomenology is experience; mind is a complex of experiences coherent enough to produce a sense of self. Beyond this, I’ll trust the reader’s intuition to know what I mean with the words.
3. What follows are largely unedited thoughts whose proper order I’m unsure of. Many of these points could (probably should) be little essays in their own right, or should simply be abandoned. So, I think of it as background work for potential future thought and reference.
That the mind gap can’t be directly observed doesn’t make the exploration easier. I’m not even sure whether to characterize it as a metaphor or relation (e.g., a relation such that minds are “distinct from” or “discontiguous-with” one another, in the way brains are—in the way even an individual brain might be [as when bifurcated; more on this, I’m sure]). And if it’s to rightly be thought of as a thing in itself, it may be the most severely negative existent I can think of (and thus a powerful source of absence causation as well).
I’ll attempt, then, a kind of cautious, abstracted, perhaps apophatic exploration of the negative space around the gap. I can only observe and reflect on this side of the gap—that is, in the realm of my own experience, some content of which has as its source the reports from others about the goings-on at other points across the gap (if I know you, have read your work, heard your songs, seen your films… then you constitute one of those points).
4. The mind gap can be a deep source of enrichment. I’ll no doubt touch on this below, but my focus here is on its perils.
5. Many of the mind gap’s dangers seem avoidable. Some are surprising. Does it underlie all human conflict? Even conflicts one has with oneself? Maybe. Maybe this begins in a trivial sense that is augmented the greater the distance between an instantaneous “self” and an “other,” including “within the same person,” which then expands to “other selves.” “Trivial,” I say, given that no self is identical to any other self, at any instant, other than itself. In other words, the fact of any two entities being mutually distinct and dynamically changing may be just the same fact as this: those entities are mutually dissonant (where “dissonance” is another way of characterizing the phenomenological gap). The reverse would be true as well, then: to recognize dissonance (or a phenomenological gap) just is a way to recognize that any self is identical only with itself. (This again suggests that even if dangerous conflict finds its roots in the mind gap, that is not to say the mind gap is in itself a bad thing.)
Rather than explore these far-reaching and metaphysically complicated speculations, I’ll rest on a weaker notion: the mind gap underlies much of inter-, and likely intra-, personal human conflict.
6. To better (or less?) understand the gap (as metaphor and/or relation): The gap enspheres each mind within an infinitely hollowed chasm, but it doesn’t enclose, as any mind is, in some important sense, infinite.*
(*There’s more to pick apart with the claim [or metaphor?] of the mind as infinite. The idea is that there is an infinite variety of things that can be perceived, imagined, felt. This doesn’t mean, however, that any [human] mind is able to experience just anything. No human perceives a difference between two notes vibrating at very close frequencies, say 440Hz and 440.000000000000000001Hz. It may still turn out that any mind is open to an infinite set of experiences; just as a set of numbers may be infinite, despite not a single element of that set being an even number [e.g., the set of odd integers]. There’s much more to say, but I’ll leave the metaphor here.)
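(A minimal formal gloss on that last analogy, just to make it precise: the odd integers, {2k + 1 : k ∈ ℤ}, form an infinite set, yet no even number belongs to it. Likewise, a mind might be open to infinitely many experiences while entire regions of possible experience remain closed to it.)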
So where does the mind end and the chasm start? And what to make of the fact that there do seem to be small channels of communication for teleporting experience—or at least for relaying information—through the chasm, such as with language?
But don’t try to jump across. Not that you’ll fall in—rather, you’ll land on the ever-expanding edges of your own experience. This needn’t be a bad thing, so long as you don’t mistake those edges for the outer shores of someone else’s mind (though the occasional message-in-a-bottle may wash up: your experience of a certain shade of red may well be exactly similar to mine, even if you get that experience from an apple and I get it from a dove’s feathers; how to know we are sharing such things, I have no idea); and so long as you can handle the solitude—and potential loneliness—stirred by that realization (if only in fleeting doses).
Somehow, it makes sense to say, metaphorically, that the most outward edges of your experience are the furthest inward and the deepest down. And as you explore that geography further inward and deeper down, the more any system of taxonomy you devise for what you discover there will be your own—a kind of personal natural history and language. The more developed that system gets, the harder it will be to share it with others. But try you may—with a word, gesture, touch, song, painting, film.
You may even try to share it with future versions of yourself—to bridge the phenomenological gap between You Today and You Tomorrow—through photographs, diaries, objects of sentimental value, children (natural self-clones), (lab-made) clones, computerized brain-maps.
I’ve been saying “you” here—as I think whatever applies to you applies to me and every entity with a mind.
7. Your world, what you take to be reality, is some subset—perhaps the sum total—of the whole of your mind. Each of us inhabits a world. Correspondence between the world as it is experienced—i.e., a mind at a particular instant—and the world in itself (the external-to-you, actual, objective reality you interact with, and the intermediary medium through which you and I interact with each other) is miles from perfect; nor is there any reason, evolutionary or otherwise, that it should be or that we’d want it to be (see again the 440Hz vs. 440.000000000000000001Hz example). No two such internal worlds (i.e., minds) are the same, and no single world (i.e., mind) stays that world for long.
What counts as a new world (i.e., mind) is unclear. If your memories change a little, you’re still you as far as you’re concerned, and presumably the world as you understand it is still the same world. Presumably these things (what you take to be you and what you take to be your world) can change independently, but usually mostly overlap.
8. Without delving much deeper into questions about the nature of the mind (or consciousness), I will note the loose, and familiar, distinction between the mind as a collection of (or maybe better: complex of) mental events (over time) and the mind as a capacity to generate mental events. (I suppose this elaborates on my comments in point 2, above.)
In the former sense, one’s mind—and thus world—is expanded, or at least its contents are increased (and perhaps decreased in some cases), by new experiences. In the latter sense, any new content of one’s world doesn’t count as an expansion, so much as a realization of potential capacities (particularly when those experiences are very new or rare). When I speak of exploring the edges of one’s experience, I’m especially interested here in the latter sense, and thus in questions like: What sorts of unique, highly personal forms might a mind take? In what ways, subtle or extreme, do the worlds of individuals differ?
9. A moment ago, I wrote here: In a certain sense, we can test the remote borders of one another’s mind by trading in novel or atypical stimuli, and observing differences in one another’s behavioral responses. Consider late-19th-century to mid-20th century art and music etc. in this context. Notice, however, how quickly the stimuli in question cease to be novel or atypical, resulting in new conventions, languages, codes, formulae, etc.: how quickly behavioral responses to these stimuli normalize.
I’ll revise the above as: There’s something here to be said about niche or fringe aesthetic tastes as an indication of idiosyncratic experiential capacities (via nature and/or nurture). Perhaps atypical aesthetic (or other) tastes are indications of someone surfing along the edges of their experiential borders, or perhaps the person always lives at those edges (for some reason of nurturing?), or perhaps that simply is the person’s center or norm despite behavioral expressions of such tastes constituting a social fringe. We need not survey fringes, however, to see differences. Just as interesting as someone’s genuine pleasure in cacophonous music (for example) are the conflicting experiences reported by two people of the same stimulus.
At any rate, the idea of “border-testing” is an interesting one, I think, though its surrounding concepts require much more working out. And depending on how it’s performed, it could harm or enrich. (Note that suffering is easier to create than is pleasure. I’m exploring this asymmetry elsewhere, and recently began a now-abandoned piece in which I considered the progression of art towards a kind of pure mode of “phenomenology bending.”)
10. We’re bad at knowing what others are really thinking. Worse, we tend to think ourselves good at knowing what others are really thinking. This misguided confidence permeates human society, both in our daily lives and our formal institutions. For one of countless examples: Apparently, many who work in law enforcement think themselves nearly infallible experts at detecting deception, when in fact even the most accomplished are only right about 60% of the time, which is by no means great, though apparently far better than the rest of us, despite our confidence as lie detectors. (For more on this, see the work of folks such as Paul Ekman and Maureen O’Sullivan: “The Truth About Lie Detection.” Also, I’ll at some point below consider a computer program designed by Buffalo researchers that uses an individual’s specific behavioral cues to detect lying 80% of the time. Better, but still not good enough to rely on.)
11. Even when knowingly faced with a severe lack of evidence, we’ll often concoct a story rather than submit to uncertainty about what another (or even our own self!) is thinking. It’s culturally encouraged, in fact, such as when students are asked to infer, for a grade, what Michelangelo was thinking when he sketched The Damned Soul or what Abraham Lincoln really thought about the Civil War.
Or when we’re asked to opine on the mental states of our friend’s object of infatuation, even when that person is a stranger: “Oh, he said what? He would only say that if he was thinking x, y, and z.” Of course, we don’t know what that person thinks and neither do the petals of a rose. I presume that, deep down, we often know these are games meant to temporarily soothe uncertainty, while waiting for the early stings of infatuation to fade (a suffering born of and sustained by uncertainty). Maybe we’re better off with the game, maybe not; perhaps those who play the game are more likely to end up in a healthy relationship, perhaps not. These are empirical questions. And we might be able to gather data about behavior that, averaged out, helps us measure likelihoods of some outcomes (certainly when the crush says, “I have zero interest, please stop calling me or I’ll report you to the police,” this is evidence of things not working out; evidence is usually subtler than this).
Trying to “mind read” (in the technical, psychological sense of the term) in these ways is something we at times must and should do—but these attempts don’t bridge the gap; rather, they help us make inferences and decisions this side of, or around, the gap. My plea is that we not ignore the gap, so that we might avoid mistakes such as (to name just one of many) the Fundamental Attribution Error (defined at Wikipedia as “the concept that, in contrast to interpretations of their own behavior, people tend to (unduly) emphasize the agent’s internal characteristics [character or intention], rather than external factors, in explaining other people’s behavior. This effect has been described as ‘the tendency to believe that what people do reflects who they are.’”).
12. In general, we try to bridge the mind gap—by means of what I’ll call behavioral representation—in two different directions: I try to impart to you what’s in my mind (call this exporting), and I try to surmise what’s in your mind (importing).
13. Means of exporting include direct language meant to convey simple ideas (“I have a bad headache”) or indirect language meant to convey something deeper (“My heart swells…”). Then there’s literature, painting, music, and so on. There are technical languages, math, musical notation. There’s body language and nonverbal communication such as laughter, yawning, sarcastic inflections. The idea is to get what’s in my head into yours, to make the content of my experience the content of your experience, and vice versa.
Exporting is hard work, especially when someone is resistant to importing, and importing is even harder work, maybe even impossible when someone refuses to export. Importing requires attention.
Of greatest significance here are those methods meant to trade the ineffable, or at least that which defies direct language representation (“No, I mean a really really bad headache… ‘headache’ isn’t even the right word!”).
We might imagine exporting as a kind of teleportation process, à la Star Trek, beaming information over to an able (and hopefully willing) importer—by way of media involving vibrating molecules that have been perturbed by air passing through vocal cords or by skin contacting skin in just the right way to stimulate your body’s nerve cells so that there results a rearrangement of your brain matter until you feel what I feel, or until you can at least tap into your own catalog of (remembered or stored) experiences in order to imagine what I might feel. The pains I’m taking right now, as I write these words, are an attempt to do just that—to inspire within you the awe that I experience when contemplating these mysteries. I doubt I’m succeeding.
(Perhaps this will help: Bafflingly to me, most people seem to be more impressed by the cosmos or the expansiveness of the Grand Canyon than by what I consider to be the awesomest known wonder of the universe: the human mind. I would rather inhabit the phenomenology of another animal, especially a human one, for ten seconds—while somehow still recognizing the experiencer as myself—than take a trip to the Moon).
14. Successful importing can be as simple as understanding when someone tells us “this soup is saltier than that one.” But we must go much deeper than this. Clinicians, for example, must devise ways to measure pain—physical, psychological—via reports from patients. There are more or less clever ways to do this (for example, correlating one’s pain with the brightness of a light or loudness of a sound), but these results, however useful, aren’t taken to be transliterations of what the person is really feeling.
We don’t know, for example, whether, when a person has a low tolerance for pain, a given stimulus actually hurts more for that person than for most other people, or whether it hurts the same amount but the person is less willing or able to endure the pain (for whatever reason). If there is actually more pain, let’s say X amount, then the tolerance isn’t low; that is, someone with a “high tolerance” for pain would not be able to handle X amount of experienced pain either. It just takes more to produce X amount of pain in the “high tolerance” experiencer.
This isn’t how we think of pain tolerance, however. For example, there are people who have no sense of pain at all—a condition known as congenital analgesia, or congenital insensitivity to pain. We don’t say that they feel pain but don’t mind it, but rather that there is simply no sensation they have that would count as pain. In other words, we don’t say that they have a high tolerance for pain.
In people who do feel pain, however, we’re perfectly happy to talk about higher and lower tolerance, and to sometimes even blame or praise someone for their personal degree of tolerance to pain.* This unconsidered overconfidence about understanding the mental content of others is quite distinct from the sorts of knowingly uncertain, but often useful, reports of pain in a medical context.
(*Is it fine, however, to casually speak of degrees of tolerance to a given stimulus? E.g., a person who doesn’t have an extreme experience of cold in low temperatures may be said to have a high tolerance for low temperatures. Would it make sense to still talk this way should the person turn out to have congenital analgesia? Or if the person is Superman? The former, I’m not so sure, but the latter seems fine, as Superman is capable of sensing pain.)
What I’m interested in here is our overconfidence as importers (as distinct from something like an operationalized medical context).
We try, often overconfidently, to bridge the mind gap with our inborn and developed “decoding” mechanisms—our theory of mind, theory theory, mind reading, or whatever you want to call it. This works well for giving us a basic understanding that others have their own points of view, thoughts, and feelings, and for generally saving us from a life of egocentrism. We probably get the general gist of what someone’s thinking often enough: if you see Sarah order and seem to enjoy eating vanilla ice cream with fresh cherries on several occasions, you’d probably be correct in thinking she enjoys those things, and, because you yourself have similarly enjoyed such things and behaved similarly in response to that enjoyment, you have a sense of the sort of enjoyment she’s experiencing. But we shouldn’t think of this skill as a kind of sixth sense for reading minds. We think we have built-in mind-gap bridges, but we don’t. We are in dire need of recognizing and acting in accordance with this fact.
We seem to recognize this intellectually, but we often behave as though we don’t. (By “we” here, I include myself, certainly.)
15. You might be prey to the gap when you: accuse a (depressed) person of laziness; resent someone for social anxiety (even if you do so unwillingly and regretfully); blame someone (rightly or wrongly) for involuntary sexual urges (deviant or otherwise); scold the sufferer of chronic pain who begs for death; condemn someone to life imprisonment due to an expert’s interpretation of a brain scan seeming more trustworthy than that of an opposing expert or, just as bad, for some folk-psychological reason: “Well, if she’s innocent then why is she acting so nervous? Only a liar would act nervous!”
There’s another layer of difficulty here. Consider, for example, when a socially anxious person resents themselves for their behavior and, more importantly, for the feelings that underlie that behavior. A host of questions arise, surrounding the psychological (clinical or folk) explanations for behavior; questions about a conception of personal identity as multifaceted (due, for example, to multiple consciousness entities being “hosted” by a given brain; these may indeed amount to different persons “inhabiting” the same body); free will (where it’s understood that you can have competing primary and secondary desires, but have no control over what those will be: you desire the cake, but also desire to lose weight, and you are the ultimate author of neither of those desires; which again may refer us back to there being multiple persons “in” a body, a notion we may contemplate as figurative or—in some alternate or future reality—conceive of as literal); and so on. The upshot is that how you feel about your predicament can depend on your concepts (e.g., in a society where a certain urge is not frowned upon, having that urge might not be cause for self-reproach and shame and so on).
16. The mind gap is at the root of ten smart experts in a room evaluating the same evidence through the lenses of their similar educations and backgrounds, and even similar intuitions, and arriving at ten different conclusions, while each thinks the others either mentally unstable or dishonest for not sharing their own obviously correct pet conclusion.
17. Empathy, as it’s usually understood—i.e., “feeling what someone else feels” without explicit regard for the epistemic impenetrability of private experience—may be a kind of evil, because it pretends to understand, to have bridged the gap. A healthier empathy, I think, would be less self-directed, would be one that recognizes the limitations of importation, recognizes that the best I can do is draw from my own experience, whose quality may or may not resemble yours, and pay attention to your behavior (e.g., if I hate X but love Y, and you act towards Y the way I act towards X, I should at least consider the possibility that you hate Y, even if we agree in many respects about the elementary experiences we associate with Y).
Note that “empathy” may mean different things at different times to different people, and is often distinguished into types—e.g., “affective empathy” in contrast to “cognitive empathy.” I’m using the term broadly, roughly in accord with its popular understanding, which I believe generally means to not only share, but to also understand, what another is feeling; my concern is that the relation between understanding and sharing is not clear—i.e., we may think we’ve achieved understanding by first intuitively sharing another’s feelings, or we may think we share another’s feelings because we’ve first come to understand them. This affords ample room for error and, most importantly, I doubt we often give much thought to that potential for error; that sort of thoughtfulness, in some robust form, is what I mean by “paying attention.”
18. Psychologist Paul Bloom makes an extended case against empathy in his 2016 book, Against Empathy: The Case for Rational Compassion. I’ve heard him discuss his ideas on various podcasts, where he’s made a compelling case for compassion over empathy as a guiding light for moral behavior. I haven’t read the book, though, and don’t know if he has explicitly made the point I make above.
19. My guiding thesis in this collection of thoughts is that attention is our best bet at working around the unbridgeable mind gap. In point 17 above, I mention that empathy, in the sense of “(the belief that you’re [capable of]) feeling what someone else feels,” may mislead us into thinking we’ve traversed the mind gap.
After writing that, I came across a set of studies on “understanding the mind of another” that support this idea. Researchers asked participants to take the perspective of another in order to predict that person’s thoughts and feelings and such. Conventional wisdom predicts that perspective taking would increase accuracy in this regard, but instead, it “decreased accuracy overall while occasionally increasing confidence in judgment” (to quote the abstract, which I share in full below). Participants ranged from intimates (e.g., spouses) to strangers. The effects were reliable across the board (e.g., the effect was seen among spouses, and there were no reliable gender differences).
The studies contrast perspective taking with perspective getting, which proved more reliable. Perspective getting relies less on assumption, less on imagining yourself in the other’s shoes, and more on conversation. To quote a recent Quartz article about the studies: “‘Understanding the mind of another person,’ as the researchers put it, is only possible when we actually probe them about what they think, rather than assuming we already know.”2 Or, as participating researcher Tal Eyal put it in a statement to Quartz: “We assume that another person thinks or feels about things as we do, when in fact they often do not. So we often use our own perspective to understand other people, but our perspective is often very different from the other person’s perspective.” This may seem so obvious that many of us would shrug and say, “yeah, of course.” But we tend not to act as though we really believe it. Not even close.
Here’s the original paper and abstract:
Taking another person’s perspective is widely presumed to increase interpersonal understanding. Very few experiments, however, have actually tested whether perspective taking increases accuracy when predicting another person’s thoughts, feelings, attitudes, or other mental states. Those that do yield inconsistent results, or they confound accuracy with egocentrism. Here we report 25 experiments testing whether being instructed to adopt another person’s perspective increases interpersonal insight. These experiments include a wide range of accuracy tests that disentangle egocentrism and accuracy, such as predicting another person’s emotions from facial expressions and body postures, predicting fake versus genuine smiles, predicting when a person is lying or telling the truth, and predicting a spouse’s activity preferences and consumer attitudes. Although a large majority of pretest participants believed that perspective taking would systematically increase accuracy on these tasks, we failed to find any consistent evidence that it actually did so. If anything, perspective taking decreased accuracy overall while occasionally increasing confidence in judgment. Perspective taking reduced egocentric biases, but the information used in its place was not systematically more accurate. A final experiment confirmed that getting another person’s perspective directly, through conversation, increased accuracy but that perspective taking did not. Increasing interpersonal accuracy seems to require gaining new information rather than utilizing existing knowledge about another person. Understanding the mind of another person is therefore enabled by getting perspective, not simply taking perspective.
I’ll stress again that, as far as I’m concerned, nothing bridges the gap. But there are ways to work around it, to thrive socially and interpersonally both despite it and because of it. Close and sustained attention—importing, probing conversation, perspective getting, whatever else you want to call it—is at least half the solution. The other half is in the communication, the exporting, the behavioral representation.
20. (Bias Disclosure: I’m deep left, politically. I don’t think this should have any bearing on how you interpret the comments that follow. But it might. Especially now that I’ve mentioned it.)
The below comments surround the following claims: There are no group minds. So, there is no gap between group minds, and there is no gap between individual minds and group minds (except trivially, as in: there’s a trivial gap between my mind and the mind of this napkin, as this napkin has no mind). However, we often behave as though group minds exist, so the notion of a “group-mind gap” is important.
It seems that in many parts of the world today (e.g., in the U.S.), acknowledgement of individual experience is increasingly seen as important. Though it’s sometimes unclear how such experience is meant to be understood—it’s often cast as “personal” or “private” while simultaneously, and increasingly, characterized as being subsumed by a social group (e.g., in order to emphasize—for [often] understandable socio-political reasons—the assumed shared experiences had by [certain] members of [certain] groups). In fact, I have, in recent times, seen members from a range of social groups express the worry that they are being told they have to think about and experience the world in a particular way, strictly due to their (voluntary or compelled) social group membership. This might not be so surprising, especially in cases when it is precisely one’s self-report of individual experience that gains one publicly acknowledged membership in a certain group—though what may be surprising, and in some cases oxymoronic, is the assumption that the basic experience that qualifies one for admittance into the group is meant to entail a whole other set of experiences. (Note that I take “experience” here to also include beliefs, or at least the phenomenological and occurrent dimensions of belief.)
I think it’s important, then, even when (rightly) encouraging this line of thinking, to not forget that literal experiences are “shared” only in the sense that you and I might share a heart rate: it’s the same rate, but a different heart (gimme a minute and I’ll come up with a better analogy); many important—I’d say the most important—details of any individual’s personal experience are utterly private.
To be clear, groups don’t have singular, first-person perspectives, nor, going further, do they have experience of any sort as a singular, unified entity—i.e., as a single entity above and beyond the individual entities that are the group’s constitutive members. Another way to put this is that I see no good reason to think that groups of individuals have anything like a single, shared brain that would yield to that group—as a singular entity in its own right—the subjective experience of some phenomenon exactly like or relevantly analogous to that of the human mind (or, for that matter, even of something simpler than the human mind).

And even if there were singular entities composed of human individuals (in the way a human is composed of cells and organs and so on), there’s no reason to think that contingent human social concepts determine, or at least just so happen to pick out, the sort of human collaboration that results in ontologically literal group minds except perhaps to the extent that those concepts result in the relevant sort of human behavior that amounts to the right sort of—i.e., effective group-mind-making—collaboration (a set of behaviors which, incidentally, itself would, by extension, be contingent, not universal, absolute, or ahistorical). Furthermore, a lot of luck would be required in order for us to happen to organize ourselves in such a way that leads to exactly the group minds (yet no other [morally significant] group minds) we tend to talk about, precisely in a way that matches—whether it precedes or follows from—how we talk about group experience, and, as a separate matter, in such a way that the group entity’s experience is a faithful magnification or amalgamated replica of its individual members’ distinctly personal experiences (I imagine here something like the Mandelbrot set).*
(*It could be that the group minds we talk about happen to exist due to all such possible minds existing; though, again, this would also mean that (1) many group minds would arise from what we’d conceive of as mixed-groups [i.e., groups we’d never think to talk about but may be just as, if not more, worthy of our talking about them]; and, again, it’s unlikely that (2) the experience of those minds would be what we might infer from the purported private experiences of the group’s individuals. To see what I mean by (1), consider what set theorists refer to as “power set,” which is to say the set of all subsets of a given set, where, in this context, the prevailing set is perhaps the entirety of living humans at a given moment. An example of (2) might be: Imagine a sports team that is being pushed so hard that the team’s individual players are miserable, and wish nothing more than for the season to be over; they’ve ceased to care about winning or losing. As it happens, the team is winning every game. Now suppose the team constitutes a group mind. The mind is thrilled that it’s winning every game, and this is often reflected in the language of spectators who say, “The team is doing amazingly well this season!” We can also imagine a team losing every game, but the individual players are enjoying themselves.)
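(A rough sense of the scale implied by (1), with numbers that are purely illustrative: a set with n members has 2^n subsets, so if the prevailing set is the roughly 8 billion living humans, there are on the order of 2^8,000,000,000 candidate groupings—vastly more “groups” than we would ever think to name, let alone attribute minds to.)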
Even those few who posit the literal possibility of something like group minds don’t go as far as suggesting that the group minds of the world are carved up precisely in accordance with our language (e.g., see Eric Schwitzgebel’s enjoyable 2015 essay: “If Materialism Is True, the United States Is Probably Conscious.”)
But we of course freely talk as though group experience is real. I think most folks would agree that it is metaphorical (or at least it only makes sense to me as metaphor; one person I brought this up to insisted that the existence of group minds seemed likely [though not necessarily in accordance with the groups we tend to pick out semantically in the U.S.], but it’s hard for me to know whether people are being sincere in such discussions, or if they’re putting on their Philosopher’s Hat; for more about that, see this footnote3). Such talk, however intended, can be powerful. It can, for instance, be a potent factor in imbuing individuals with a greater sense of power and/or, maybe ironically, with a more robust sense of personal identity, particularly in the face of [perceived or inferred] obstacles against the groups of which those individuals are members—individuals are aware of how their experience differs from the stereotypes of their group(s), yet they may join forces with fellow members as a force that collectivizes the reported shared experiences and powers of group members (i.e., the individuals’ shared features that are presumed to serve as a kind of natural gravitational field that brings the group together in the first place; social group ontology, however, is often closer to technology than it is to a natural phenomenon—I’ll reserve discussion of that for another day). The worry, however, is that we fail to keep in mind that these same sorts of dynamics (e.g., significant variation among group members) exist in groups of which we aren’t members.
In short, I reject that there could be a phenomenological gap between groups of individuals, no matter how robustly constituted; though of course there may be wider or narrower phenomenological gaps between members of distinctly constituted groups, by virtue of a given individual’s umwelt (to adopt a term from ethology) being at least to some degree shaped by their group affiliation.
It can also be powerful for holding individual group members accountable for deeds committed, attitudes expressed, etc. by other members of that group (whether it’s the sort of group in which membership is voluntary or compulsory [like it or not]). This feature provides further reason for worry. (Again, more on social group ontology some other time.)
Finally, I’ll reemphasize that such phenomena can amount to a social good, though there is a danger of it coming with a diminished recognition of the robustness of individual experience of members of other groups. In-group/out-group dynamics have been well-studied by social psychologists (e.g., “the members of that group are all the same; the members of my group are all different”). There’s a deep and complex conversation to be had about the social goods and ills of the metaphors that not only shape contemporary socio-political discourse, but also manifest in how our society is—continues to be, is soon to become, etc.—organized.
I won’t further engage this conversation here, but bring it up in order to earmark yet another channel of complications associated with the mind gap (i.e., at the group level or between groups and individuals).
(I’ve written elsewhere about some of this in, for example, the pieces “Do Groups Believe?” [the answer is “no”; it’s a longish piece of over 9,000 words] and “Fun with Statistics on Planet żMi” [about 825 words].)
21. Experience is the basis for morality (i.e., involving harm committed by rational agents against creatures capable of suffering). Private experience complicates attempts at a perfectly codified moral system based on explicit principles. “Do unto others as you would have them do unto you,” literally practiced, would get many of us arrested fast. To adjust for this, we would end up with vague principles such as, “Treat others as they wish to be treated so long as it doesn’t bring you or anyone else undue harm (assuming the treatment, when enacted on the person who wished to be treated in the way in question, cannot by definition count as harm).” Better, then, might be a morality without principles (sometimes called moral particularism). Jonathan Dancy describes such a system in his 2004 book Ethics Without Principles.
22. Impediments to crossing the gap, and the vagueness of language (e.g., knowing whether each of us means the same thing when we say “that hurts a lot”), have led us to rely too much on external cues, too much on behavioral representation. Behaviorism dominated psychology for much of the 20th century, and continues to have great influence; and there’s the exploitation of external cues such as can be found in the body’s external shapes, colors, and styles of mobility. This becomes yet vaguer, yet less tuned to the individual experience of those who own (to use a standard popular metaphor, if in what turns out to be a subtly creepy tone*) those bodies—and thus less human—as those external effects are aggregated into statistics and talk of group experience; useful at times, but at other times distancing us further from understanding the lived (private) experience of human beings. Experience doesn’t aggregate, only behavior and behavior’s effects; though our understanding of those aggregates can in turn influence experience (e.g., can help us institute anti-discrimination policies).
(*We could also say something along the lines of “inhabit” or “occupy” or “are” or “whose self” or “identity is correlated with”… And we can always expose the metaphorical nature of what we say without too much trouble. For example, by asking “Who or what is doing the owning or inhabiting?”)
23. In a 1980 study by Robert Kleck and Angelo Strenta, subjects were given a fake facial scar and then asked to participate in a face-to-face interaction with a stranger. Before the interaction, subjects were shown the scar in a mirror, and then told moisturizer would need to be applied in order to keep the scar from cracking and peeling. Unknown to the subjects, however, the scar was not being moisturized, but removed. They would then interact with the stranger, after which many subjects not only reported that their interactants responded uncomfortably to the scar, but also—while watching videotape recordings of their interactions—provided rankings of specific instances of tenseness and gaze aversion, which subjects presumed to be in response to their scars.
There seems to be at play a combination of behaviors on the subject’s part. For example, believing themselves to have a scar and expecting a certain kind of response to the scar may result in a kind of self-fulfilling prophecy, in which the interactant behaves nervously in response to the subject’s nervousness, etc. Additionally, the subject may in some instances misinterpret otherwise normal-appearing behavior as being a tense response to the scar. Whatever the case, the study nicely demonstrates the perils of thinking you know what someone else is thinking, even in subtle, everyday interactions.4
I don’t know if the study has been replicated.
24. The popular idea that you can infer the involved or subtle internal mental states of others by “reading” their body language constitutes a particularly annoying myth. Body language is at times like amateur karate: it’s only useful if all involved are following the same rules. And even then its value is limited. In-the-know practitioners may just find themselves self-consciously trading contrived gestures that all recognize to have nothing to do with what anyone is really thinking, but which may be used as a kind of game-face for attempting to out-bluff one another.
But this sort of game-play is expected and encouraged. I’ve several times been told, “you should hold your arms differently because people will read that as nervousness.” “But I’m not nervous,” I’d respond, “rather, I used a different soap today so my hand feels weird, and it makes me not want to touch my pants.” “That makes sense. But people will read it as nervousness.” In other words, body language won’t tell you what I’m thinking. But it might tell you I’m willing to use the tropes of that “language.” Is that a good thing? “I’m terrified but would like to convey confidence, so I’m going to sit with the posture that this book says naturally confident people are expected to use (in this culture).”
To be clear, I think there’s something to the idea that adopting a posture popularly correlated with confidence may help increase actual confidence; but this is a different discussion (perhaps it has something to do with beliefs about the posture, rather than anything biological; I don’t know). At any rate, in a moment of great fear, no matter how good I am at feigning some clichéd I’m-not-afraid posture, I will still be afraid. And if appearing less afraid somehow helps save my life, well, fair enough. But for better or worse, outside observers won’t know how afraid I really was. They’ll misread my internal state.
This isn’t to say that claims about reading body language are completely bogus. But the myths outweigh the truth. Read about this at Forbes, by someone claiming to be a body language expert: Busting 5 Body Language Myths (2012, by Carol Goman). In the article, Goman mentions that computer scientists at Buffalo developed a computer program that can catalog an individual’s particular behavior in order to detect lies 82.5% of the time, which is better than a human can do (see above, where I note that even the best of us can only hit about 60%; this article says more like 65%: http://www.buffalo.edu/news/releases/2012/03/13302.html).
A couple of thoughts on this. First, a hit rate of 82.5% is far too low to be trusted when the stakes are high. Second, I fear one could deliberately fool the system with relative ease, particularly with training. Consider, for example, the story broadcast on ABC Australia’s All in the Mind (the episode is “Mindreading, Ethics, and the Law,” July 17, 2016*), in which a subject (Nita Farahany) was shown a series of faces, after which she was put into an fMRI machine and shown another series of faces while being asked whether she’d seen each face before. The fMRI data afforded researchers 80% accuracy in determining whether Farahany’s assessments were correct. Impressive. However, she was then given tips on how to fool the machine:
When they ran that [first] experiment, the prediction of the machine, of their algorithms reading the fMRI, went from what I think was around 70% or 80% accuracy to 50%, which is no better than chance. Which means without any prior training I was able to fool the machine to being no better than flipping a coin as to whether or not I had seen an image before or not.
(*It’s a fascinating episode that touches on a tech I don’t much explore here, called “mind reading” or “thought identification” or “brain decoding”; see also this All in the Mind episode, from one week earlier: “Brain Decoding.”)
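(To put the first of those two thoughts in rougher, back-of-the-envelope terms—with a base rate I’m making up purely for illustration: suppose only 1 in 20 statements in some screening context is actually a lie, and suppose the detector is right 82.5% of the time on lies and truths alike. Out of 1,000 statements, it flags about 0.825 × 50 + 0.175 × 950 ≈ 208 as lies, of which only about 41 really are lies. Roughly four out of five flags would be false alarms—hardly something to stake a verdict on.)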
That said, it seems to me that tracking the behavioral patterns of a specific individual will be far sounder than trying to use averages across populations. I imagine that most of us operate with a fairly limited and consistent palette of behavioral representations with varying, but finite, degrees of subtlety and distinctiveness. There may be such representations even when individuals aren’t explicitly sure of what’s going on inside themselves (as alluded to in the above fMRI study, and in other observations made here about the difficulties of interpreting or even explicitly noticing one’s own inner states).
Still, I’d wager that, though finite, measurable signals still have to be averaged, and thus evaluated probabilistically, as they will still amount to a huge catalog of permutations, which could also be context dependent (for example, evoking the name of a childhood pet or raising the temperature in the room or delaying lunch or researchers wearing a white lab coat rather than no lab coat could change things significantly). The more probabilistic, the fuzzier the reading. Particularly for a complicated inner state, particularly if it’s one the person rarely experiences, particularly if it’s one that falls between the cracks of the broader categories of inner states the researchers have operationalized.
This brings to mind a theme in Lisa Feldman Barrett’s 2017 book How Emotions Are Made, of which the following passage (on page xii) is representative:
Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion. When scientists attach electrodes to a person’s face and measure how facial muscles actually move during the experience of an emotion, they find tremendous variety, not uniformity. They find the same variety—the same absence of fingerprints—when they study the body and the brain.
25. The role of all this in legal proceedings urgently demands our attention.
For one thing, we may have access to our own private, subjective experiences, but we don’t always know what their sources are; furthermore, not all subjective experience is cut and dried, black or white, this or that (for a simple example, I’ve at times not been sure whether to call something this or that color, or a blend; emotional and intuitive responses—i.e., “gut” feelings—to external evidence are even harder to parse out). A witness may or may not be saying something true when conveying the contents of his or her experience, even when being honest (“that is the face I observe when surveying what I take to be my accurately encoded memory of the event” etc.*).
(*There are many examples to draw from within the “false memory” literature. For a sustained discussion, see Julia Shaw’s 2017 book The Memory Illusion: Remembering, Forgetting, and the Science of False Memory.
For a 17.5-minute TED Talk by one of the premier researchers in the field, see Elizabeth Loftus: How Reliable Is Your Memory? (2013).
A recent example that comes to mind from the news is covered in this 2015 New York Times article: “Witness Accounts in Midtown Hammer Attack Show the Power of False Memory,” in which a witness states: “I saw a man who was handcuffed being shot. … And I am sorry, maybe I am crazy, but that is what I saw.” Clear video surveillance revealed what really happened. The article goes on to note, “…the man who was shot had not been trying to get away from the officers; he was actually chasing an officer from the sidewalk onto Eighth Avenue, swinging a hammer at her head. Behind both was the officer’s partner, who shot the man.”)
Unconscious bias in judges has also been investigated in a range of studies. One of the most interesting and infamous is the 2011 study in which the likelihood of judges granting parole started at around 65% at the beginning of a “decision session” (where a “session” is defined as the period between breaks, particularly breaks that involve eating a meal or snack), steadily declined to near zero by the end of the session, then went back up to around 65% after a break or snack. There’s still more work to be done on this sort of study, but the researchers confidently assert that the data “suggest that judicial decisions can be influenced by whether the judge took a break to eat.” (Read the paper here: “Extraneous Factors in Judicial Decisions.”5 Notice that they conclude by pointing out that this isn’t just a commentary about judges: “Indeed, the caricature that justice is what the judge ate for breakfast might be an appropriate caricature for human decision making in general.”)
A common interpretation of the study is that the judges behave this way due to ego depletion—i.e., the glucose in their brains is depleted with each decision they make and with the time since their last meal, lessening their powers of self-control and their capacity for making difficult decisions, which makes them more likely to stick to the status quo.6 For interpretations along these lines, see the above-linked paper on the original study, as well as Roy Baumeister and John Tierney’s 2011 book Willpower, and Daniel Kahneman’s 2011 book Thinking, Fast and Slow (in which he also notes the unconscious influence—or “anchoring effect”—on some German judges who rolled dice before determining sentencing duration for a shoplifter [page 125]; for more on that, see this footnote.7). For an alternative interpretation that emphasizes the affective and intuitive dimensions of each judge’s experience—e.g., in which hunger is read by the judge as the gut feeling that someone doesn’t merit being granted parole—see again Barrett’s 2017 book How Emotions Are Made.
I’ve read and recommend the four books so far mentioned in this entry. There’s another I’d like to mention that I haven’t read but that is on my wish list: Russell Hurlburt and Eric Schwitzgebel’s 2011 Describing Inner Experience?: Proponent Meets Skeptic, in which the authors—the skeptic being Schwitzgebel, whose blog The Splintered Mind I enjoy—track their collaborative exploration of the following questions: Can conscious experience be described accurately? Can we give reliable accounts of our sensory experiences and pains, our inner speech and imagery, our felt emotions?8
These questions again evoke the fascinating prospect of whether the mind gap passes through oneself—i.e., whether one can even really have access to one’s own internal states. There are, of course, experiences that are vague, inchoate, blurry, subtle, complicated, for which we don’t have names, and so on. But presumably these felt things often just don’t sufficiently conform to anything we have the language or conceptual context for dealing with in a meaningful way (or perhaps that’s why we don’t have the language; as for the conceptual contexts, that’s an even more complex question—consider that for complex emotions, there is likely required a conceptual context, which could be culturally or environmentally contingent, that may precede the emotions and/or, for example, dictate whether a certain felt thing is cause for worry or to be noticed at all).
This isn’t merely a matter of not being able to match an experience to a stimulus: If you show me a blurry image in good lighting, I’ll have a crisp and clear experience of a blurry image. The experience itself isn’t blurry. Rather, I’m interested here in when a meta-cognitive introspection yields, for example, a “blurry” experience. We might not realize it’s blurry or sparse until asked to describe it; e.g., until asked to count the number of petals on the rose we’re imagining, or to rank the intensity of dissonance when imagining—as compared to actually hearing—two slightly detuned pitches being sounded together; or to describe our feelings about our best friend getting the acting gig we wanted; or to say for certain whether this scarf is pink or red. In trying to describe such things, we might fill in—i.e., newly imagine—details that weren’t there to begin with, and perhaps still aren’t despite the attempts to imagine them, i.e., to will them in via imagination (or simply due to lying).
And there are probably always things contributing to our overall experience that meta-cognitive attention doesn’t pick out as being a distinct entity or event. I was walking in a wooded area on a recent windy day. Various sounds could occasionally be heard in the distance: cars, scurrying squirrels, laughing children, and so on. But it was mostly wind, soft and steady, punctuated by random gusts and rustles.
At some point, I looked down a hill and noticed a creek, whose sound I was immediately able to pick out from that of the wind. It felt as though this complicated, loud, and distinctly watery—sharp, quick, crashing over rocks—sonic texture was introduced into my experience at the moment I saw the creek. But that experiential content must have been there before. Is it that all those sounds were already explicitly there and my attention hadn’t carved them out? Or is it more like looking at a tree trunk with a moth on it, but without attending to the moth, so that the animal makes no contribution to my experience other than as just another overlapping patch in the consistently brown and gray texture of the tree’s bark?
I don’t know. But, at any given moment, it does seem clear that whatever I would have reported would have indeed been an account of my experience insomuch as I could characterize it purely on evidence of the experience itself. In other words, there may be more going on in your sensory experience than your immediate environment seems to reveal. This is significant for, at the very least, it says that there can be something in one’s environment that’s obvious to oneself, but may not be present to one’s closest neighbors.
For example, after years of mixing audio, I began to hear the world differently, and thus to automatically carve up the world differently conceptually. I began to hear—or at least notice—things that I wouldn’t have been able to pick out before unless someone, for example, were to have isolated one of them with extreme EQ, so that I could train my attention on that particular frequency band and then hold my attention there while they set the EQ back to normal (i.e., to a flatter setting). I now hear those things—e.g., the air of a human voice—as a distinct entity that contributes to a bigger thing we call “voice,” just as a table leg is a distinct entity that contributes to a bigger thing we call “table.”
These and many other observations lead me to believe that, in trying to understand the mental content of others, the best we can do is to attend closely, and with open mind, to others’ attempts at behavioral representation. All we have is report.
(While at least worth mentioning, I’ll suppress the temptation to further explore the question of self-access by considering blindsight, in which people don’t have visual experience of what’s in front of them, but behave as though they do; a similar phenomenon, though for different reasons, can be found in split-brain cases, something I’m sure I’ll bring up again before too long.)
26. The aforementioned (see the point preceding this one) case involving judges presents an interesting example of outsiders interpreting—or even correcting—how others (i.e., the judges) interpreted their own internal states; or declined to self-interpret (perhaps rightly so, on the grounds that they weren’t in a good state of mind for making a good decision, so they stuck with the status quo; certainly not ideal, but maybe better than making an ill-considered decision). (The interpretations I noted above are quite different. Did anyone ask the judges what they thought?)
The point I’d like to home in on here is the presumption that there is an internal state to interpret and that it is not only up to the individual to access and interpret it* and then move on accordingly with some decision, but that there is some sense in which it is fair game for outside observers to interpret how those individuals were, (in)correctly, interpreting their own internal states. That is, not just that “they felt hungry,” but that they either misinterpreted their hunger as a sense that someone didn’t deserve parole, or didn’t interpret it at all (i.e., were simply too depleted to “observe” their complex of intuitions, which must be held up to the facts of the case and so on, in order to arrive at a decision).
(*Notice this model’s invitation to homunculus regress: the interpretation of a phenomenological state is itself a phenomenological state, which must itself be interpreted.)
Concerns about peering deep into the minds of others do not, of course, apply only to judges. Of equal concern are attempts to get inside the minds of suspected wrongdoers. Polygraph reports are generally impermissible as legal evidence for good reason; namely, they’re unreliable: they measure biological changes that may not occur when a person lies, or that often will occur when a person doesn’t lie (e.g., due to the anxiety of being given a polygraph); there is also the difficulty of ensuring the clarity of questions and that the subject understands them, and so on. (For specifics, see the U.S. Dept of Justice website entry: www.justice.gov/usam/criminal-resource-manual-262-polygraphs-introduction-trial. Here’s an Oct 2018 Wired article profiling the [mis]use of polygraphs to screen government job applicants: “The Lie Generator: Inside the Black Mirror World of Polygraph Job Screenings.”9)
More recently, a more impressive, though still suspect, technology has been persuading judges and juries: brain scans. Indeed, studies suggest (as indirectly alluded to in Gazzaniga point (e) below) that, for most of us, a piece of brain-related writing is made more persuasive by the addition of a brain image, even when that image has nothing to do with the writing. And this phenomenon is only the start of our worries. In my favorite chapter (“We Are the Law”) of his 2011 book, Who’s in Charge?: Free Will and the Science of the Brain, Michael Gazzaniga summarizes the state of things:
Currently the case against using scans in the courtroom is quite evident for several reasons: (a) As I described, all brains are different from one another. It becomes impossible to determine if a pattern of activity in an individual is normal or abnormal. (b) The mind, emotions, and the way we think constantly change. What is measured in the brain at the time of scanning doesn’t reflect what was happening at the time of a crime. (c) Brains are sensitive to many factors that can alter scans: caffeine, tobacco, alcohol, drugs, fatigue, strategy, menstrual cycle, concomitant disease, nutritional state, and so forth. (d) Performance is not consistent. People do better or worse at any task from day to day. (e) Images of the brain are prejudicial. A picture creates a bias of clinical certainty, when no such certainty is actually present. There are many firm reasons why in 2010, while I write this, although the science is enormously promising, it currently is not good enough, and it would more likely be misused instead of used properly. What we must remember, however, is things are changing fast in neuroscience and new technology is constantly allowing us to learn more about our brains and behavior. We have to be prepared for what may be coming in the future.
What may be coming has its foundation in the central principle in American criminal and common law, which is Sir Edward Coke’s maxim of mens rea: The act does not make a person guilty unless the mind is also guilty. You need a guilty mind. Mens rea has four major parts that have to be demonstrated: (a) acting with the conscious purpose to engage in specific conduct or to cause a specific result (purposefulness); (b) awareness that one’s conduct is of a particular nature, for instance, good or bad, legal or illegal (knowledge); (c) conscious disregard for a substantial and unjustifiable risk (recklessness); (d) the creation of a substantial or known risk of which one ought to have been aware (negligence). Each of those parts has brain mechanisms that are well studied and are still being studied. Purposefulness involves the brain’s intentional systems; knowledge and awareness involves its emotional systems; recklessness involves the reward systems; and negligence involves joy-seeking systems. Much is already known about these areas, which will be causing problems for the principle of mens rea.10
(For another fascinating list of issues related to court, which touches a bit more on the mind gap [e.g., worries about using EEG in the courtroom to tell if someone is lying] but points out that bias generated by seeing a brain scan may turn out to be less of a worry than we thought, see neuroendocrinologist Robert Sapolsky’s 2017 book, Behave: The Biology of Humans at Our Best and Worst, Chapter 16: “Biology, the Criminal Justice System, and (Oh, Why Not?) Free Will.”)
27. Technologies other than polygraphs and brain imaging have been set to the task of reading internal states, often with terrible and horrifying results.
Quantitative or Cognitive Pupillometry, in which mental states are inferred from involuntary changes in pupil size, is fascinating (see the aforementioned Thinking, Fast and Slow for thought-provoking applications). But I wouldn’t put much stock in what it can tell us about the subtleties of an individual’s complex inner world, as was attempted with so-called “fruit machines,” which were used by the Canadian government in the 1950s and ’60s to (inaccurately) detect and remove gay men working in the civil service by showing men strategically chosen photos and measuring pupil response.
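(To make vivid just how thin the inferential chain is here, consider the following toy sketch. It is entirely my own invention: the baseline logic, threshold, and numbers are made up for illustration and describe no actual pupillometry system; even granting the measurement, the leap from a dilated pupil to any particular mental state is enormous.)

```python
# Toy illustration only: "infer" a mental state from pupil dilation relative
# to a baseline. The numbers and threshold are invented; real pupillometry
# must control for luminance, fatigue, drugs, and much else, and even then
# dilation underdetermines what, if anything, the person is feeling.

def percent_dilation(baseline_mm: float, measured_mm: float) -> float:
    """Percent change in pupil diameter relative to a resting baseline."""
    return 100.0 * (measured_mm - baseline_mm) / baseline_mm

def crude_arousal_inference(baseline_mm: float, measured_mm: float,
                            threshold_pct: float = 5.0) -> bool:
    """Declare 'interest/arousal' whenever dilation exceeds an arbitrary cutoff."""
    return percent_dilation(baseline_mm, measured_mm) > threshold_pct

# A 4.0 mm baseline and a 4.3 mm reading is ~7.5% dilation, so this returns True,
# though the true cause could be light, anxiety, caffeine, or nothing much at all.
print(crude_arousal_inference(4.0, 4.3))
```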
I think of the above machinery as glorified mood rings. But suppose technology gets sophisticated enough to reliably reveal our deepest mental states—our most hidden pain and fears, our quietest desires and most hidden turn-ons, including those we’re conflicted about and those that are never acted on, due to being consistently overpowered by louder, stronger desires. Should that tech be used? That’s an emphatic no.
I wrote several paragraphs supporting my objection, but have deleted them because the question requires more attention than I can give it here (I’m particularly interested in pondering what might happen were the technology to be equally applied to everyone with the results publicized). I will say only that I believe any projected or hoped-for benefits are too likely to be outweighed by, or indeed to backfire into, both foreseeable and unimaginable dangers. The long and short of it is that I simply wouldn’t trust humanity to apply it with moral sanity (see again the “fruit machine,” though it’s at least a little heartening that that particular campaign would be unthinkable today, only a handful of decades later).
On the other hand, I do think being able to “decode” thoughts has important uses. For example, for developing more sophisticated ways of communicating with patients who’d otherwise be considered vegetative (see point 45 below). And if it could be used to extract information from the mind of someone like, say, a thief who refuses to divulge to police where he dumped the stolen car with the baby inside, I suppose I’d have to reluctantly agree that the tech should be used (rather than beating the man, which is what they actually did to get him to talk; the infant, “dehydrated, too weak to cry,” was then saved from the hot car… But what if the beating hadn’t worked? And what if developing such tech ultimately causes more harm than good, despite such useful applications? And so on.)11
At any rate, my concern here is a recognition of the mind gap that results in our paying better attention to one another. No matter how good mind-reading tech gets, the mind gap will not be bridged, though many might think it is. Knowing that someone has a particular urge isn’t the same as knowing what it is or means to be the person who has that urge, something that could come with a whole host of other feelings, something that could contribute to a person’s accumulating experience and identity over a lifetime. I worry that such tech would give the false impression of having bridged the gap, thus actually discouraging attention and separating us even further.
That said, it turns out there may be a more reliable technology out there, at least when it comes to assessing certain erotic desires in anatomical males: the penile plethysmograph, which measures blood flow to the penis; though it must, for many obvious reasons, be used carefully.12 For a discussion of this, see Chapter 6 of Jesse Bering’s 2013 book, Perv: The Sexual Deviant in All of Us. I’m agnostic-but-leaning-towards-suspicious about its accuracy, even when carefully implemented (and the subtler our inferences in that regard, the more suspicious I become). All I know of it is what I’ve read in Bering’s book and at Wikipedia, where it’s noted that the U.S. has ruled it unreliable for use in court due to a lack of standardization; insufficiently accurate test results; susceptibility to faking and voluntary control by test subjects; a high incidence of false negatives and false positives; and results that are open to interpretation.
Canada has similarly ruled it unreliable.
Unsurprisingly, aside from concerns about its accuracy, I have ethical concerns about its usage even if we assume it’s accurate, similar to those I’ve already expressed above; concerns of one kind or another are also noted in Bering’s book and at Wikipedia.
Finally, there is another sort of tech I only mention in passing in this post, referred to as “mind reading” or “thought identification” or “brain decoding” (see point 24 above).
(In a few points further down, I’ll consider some outlandishly sci-fi level tech for sharing experiences; I don’t think those will bridge the gap either.)
Later Update: I’ve recently come across the use of fMRI and the like to attempt to detect lying in the courtroom. For starters: “There Are Some Big Problems with Brain-Scan Lie Detectors,” which mentions the company No Lie fMRI. There’s also an EEG tech called “brain fingerprinting,” a controversial technique featured in the second season of the Netflix docuseries Making a Murderer.
28. There are also enriching reasons to respect the gap’s integrity. I’ve discussed the resulting art, but there are deeper questions to explore: What are the evolutionary dimensions of inner privacy? Is it necessary for there being more than one subject-entity? What’s the “point” or “purpose” in there being more than one subject-entity? Could there have been or one day be, even on a materialist account, only one such entity, with a single mind, yet with the same number of bodies there currently are in the world?
Would the result of truly bridging the gap bring us so close together that humanity becomes a single subject-entity and thus, ironically, lonelier than ever?
29. There’s a temptation here to get into a discussion about whether beliefs come down to the stuff of behavior or of the mind. The degree and nature of the access we have to our beliefs may depend on our answers to that question. (I take it that beliefs are a mixture of mind and behavior, but would emphasize the mental dimensions; for example, a conscious human who cannot move at all can still have beliefs; more on that another day). Such a discussion would likely lead to mention of things like implicit bias tests, which are meant to access one’s split-second, unconscious attitudes—e.g., about race. See, for example, the Implicit Association Test (IAT). The test has been challenged (e.g., on the grounds that “the correlation between implicit bias and discriminatory behavior appears weaker than previously thought”13), but I’m inclined not to discuss the test for other reasons.
For one thing, it’s unclear whether beliefs are even what the IAT aims to home in on; from the Project Implicit FAQ: “The IAT shows biases that are not necessarily endorsed and that may even be contradictory to what one consciously believes.” I’m concerned with the phenomenological gap, which features conscious experience, however blurry that may be. Moreover, I’m concerned here with individual experience, not group behavior, which seems to be what the test especially aims to predict (though it also may affect individual behavior by making conscientious individuals aware of their potential biases); again from the FAQ (note the two appearances of the word “behavior”): “While a single IAT is unlikely to be a good predictor of a single person’s behavior at a single time point, across many people the IAT does predict behavior in areas such as discrimination in hiring and promotion, medical treatment, and decisions related to criminal justice.”
(This isn’t to say that such tests aren’t worth paying attention to. For one discussion that, at several points [e.g., Chapters 4 and 11], persuasively touches on the value of such work, see Robert Sapolsky’s 2017 book Behave: The Biology of Humans at Our Best and Worst.)
30. There seems to be a growing niche literature on the significance of the mind gap in the courtroom. I’ve set aside some articles to read when I have the time. I’m also interested in the 2016 book Experiencing Other Minds in the Courtroom by Neal Feigenson.
(This will go nicely with another set of articles I’ve set aside on the [mis]uses of probability in the courtroom.)
31. For a sustained assault by a computer scientist on the notion of there being any literal distinction between the private (i.e., first-person subjective experience) and the public (the physical brain) in this context, and indeed perhaps even against talking or thinking as though figurative distinctions are intelligible, see the first chapter of Murray Shanahan’s 2012 book Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Shanahan, who has recently influenced, at least indirectly, pop-cultural ideas about AI by functioning as a scientific advisor to the film Ex Machina, essentially argues that there is no private, only public, when it comes to the brain and experience. In other words, with sufficient tools, access will be had to whatever a person is feeling. Indeed, a move away from the use of private language will in itself contribute to what Shanahan describes as a Wittgensteinian “progress towards a kind of silence of metaphysical matters” (where metaphysics is particularly attended to as the conceptual space carved out for mind in traditional, dualistic ways of speaking).
I find the chapter’s treatment of the topic rough (in particular, it bizarrely threatens to characterize the contemporary field of analytic philosophy as tending far more towards dualism than it actually does) and unconvincing, for several reasons I won’t pursue here; but I bring it up for the curious reader to investigate.
32. My friend often tells me, “go with your gut; it’s easy.” I say, “I have three guts, equally strong, loud, persistent, certain.” My friend says, “I only have one.” My friend and I claim to have no idea what the other’s talking about. My intuitions, desires, reasons, and motivations are, as I sometimes see it put, in equipoise. (Related question: If free will exists, do indecisive and decisive people have more, less, or the same degree of freedom of will?)
33. I ask a friend how he knows his new coworkers don’t like him. “Did they tell you that?” “No, but I see their behavior.” But behavior is easily misinterpreted. I fear that some people sustain life-long fantasies in which they’re billed as outcasts, built on the shaky evidence of subtle behavioral cues (which, of course, can lead to self-fulfilling prophecies: treat someone like they don’t like you, and they’ll start acting more and more like they don’t like you). A strategy of Cognitive Behavioral Therapy is to get people to reevaluate their evidence.
I once heard a reputable social psychology professor, Tory Higgins, include in his lecture a radical take on this that went something like (I’m elaborating a bit): We don’t know what other people are really thinking. The more elaborate the story we tell about what others are thinking, the more wrong we probably are. But we must tell ourselves some story or another. Therapy is the project of replacing a fantasy that doesn’t work for us with one that does.
Higgins also specializes in shared reality. Here’s the abstract of a 2009 paper, “Shared Reality: Experiencing Commonality With Others’ Inner States About the World,” by Higgins et al.; emphases in bold are mine:
Humans have a fundamental need to experience a shared reality with others. We present a new conceptualization of shared reality based on four conditions. We posit (a) that shared reality involves a (subjectively perceived) commonality of individuals’ inner states (not just observable behaviors); (b) that shared reality is about some target referent; (c) that for a shared reality to occur, the commonality of inner states must be appropriately motivated; and (d) that shared reality involves the experience of a successful connection to other people’s inner states. In reviewing relevant evidence, we emphasize research on the saying-is-believing effect, which illustrates the creation of shared reality in interpersonal communication. We discuss why shared reality provides a better explanation of the findings from saying-is-believing studies than do other formulations. Finally, we examine relations between our conceptualization of shared reality and related constructs (including empathy, perspective taking, theory of mind, common ground, embodied synchrony, and socially distributed knowledge) and indicate how our approach may promote a comprehensive and differentiated understanding of social-sharing phenomena.
I won’t discuss the paper here, but will say that I think it an important project, and I share it now for the curious reader (I believe—and hope!—a book on the subject is underway).
34. What are some of the major (accumulative or direct) negative outcomes of the gap? Wars? Racism? Torture? Perhaps it’s the small interpersonal misreads where the real problem lies. The intensity of the misreads seems to increase as people are physically further apart from one another. In this sense, if there is anything like a bridge, perhaps it’s direct, sustained, focused attention. Personal attention, by which I mean attempting to understand what the other person is thinking not through some mysterious intuition, but by paying close attention to whatever means of communication the person attempts. Such attention breeds connection, empathy (whatever that does or should mean), compassion, love. Attention is the basis for shared reality, and the shared understanding that, though we are separated by this infinitely cavernous gap, the gap is somehow shrunken by attention, which seems to allow for a kind of representation, or model, of the internal self of another to be built within one’s own mind.
Easier said than done, but it’s worth trying.
35. Attention is a bridge not only to others, but to oneself. I return again to the corpus callosum, the nervous tissue connecting the left and right brain hemispheres. In those in whom this connection is severed, life continues mostly normally. But in controlled lab conditions, one observes that the left and right brains have, as it were (or perhaps literally!), minds of their own. This is done by making sure information—i.e., external stimuli—sent to the left brain is not sent to the right brain, and vice versa. For example, an image introduced only to the left visual field will only make its way to the right brain. In most of us, the right brain can’t give verbal reports, but it can report through the left hand (writing, drawing, etc.). The left brain reports verbally: it’s got the mouth, but I take “verbally” to also mean it has the “inner voice.” Discrepancies in right- and left-brain reports (about, for example, what has been seen) suggest distinct minds or consciousnesses. In day-to-day life, however, the left and right brains are more or less aware of what the other brain is thinking, thanks to cues external to the body: e.g., the right visual field sees what the left hand is writing.
There’s a lot more to ruminate over here, but I’ll get to the point. It seems plausible that the two hemispheres house separate minds even when the internal connection has not been severed. At least two. There may be more. Including some quiet protestors who have no means for behavioral expression. Who knows? It’s an open question, but the evidence strikes me as pointing towards multiple minds doing different jobs, often in conflict with one another. (This isn’t as scientifically controversial as it may sound; indeed, it seems to be where brain research is taking us. Philosophically speaking, it strikes me as at least as plausible a conception as that of the “unified self.”)
I imagine some fascinating results of this. For example, I recently heard an interview with a person with dissociative identity disorder (“DID,” more commonly known as multiple personality disorder); the person spoke of not having fully worked out consent from the other identities for speaking of them. Imagine a future (on which I’m not passing judgment so much as I’m contemplating it with great wonder) in which a combination of greater sensitivity to, and decreased stigma surrounding, mental health (and in particular atypical categories of, let’s call it, mental or cognitive style) results in not only the DID sorts of distinct entities having rights of their own, but the separate minds of any given individual, insomuch as we can conceive of them. The part of someone who wants chocolate cake at midnight might be called Edward, while the part of that person who wants to finish studying and go to sleep is Shelly. I can imagine students giving multiple names in school. And I wonder what all this means for ethical concerns should such entities be viewed as literal moral and legal persons.* For example, the identities that come with DID are often very young. Could this lead to issues surrounding sexual consent?
(*I stress “moral” and “legal” here because I take it that the project of convincing people that such entities are literal persons will be less extensive than that of establishing legal personhood status for those entities. For the record, I’m on board with the idea that multiple persons can literally occupy a single body, perhaps even simultaneously. What this might or should actually mean in practice—morally and legally—I have no idea.)
One could also share mutual body space with vague or spontaneous persons. I’ve noticed it’s easier to develop an intuitive understanding of difficult technical concepts if I explain them to myself as though a teacher guiding a student—it really does feel as if one part of myself is the student and the other the teacher. Perhaps this can be taken further. Perhaps we can attempt to pay close attention to the quieter mental corners of ourselves, the ones that cannot speak or write or draw.
36. With behavioral representation, we share rough approximations of our experience, and that’s usually enough for day-to-day living. But that roughness stokes loneliness, perhaps even a detachment from the objects around us. Our percepts are the brain’s best attempt, at this point in its evolution, at giving us a world, including the regions of that world that are other minds. The features of that world are limited only by a kind of phenomenological paint set. Though the set may be massive, it falls short of theoretical possibility: between any two shades of love, there are theoretically billions of possible states, and between any two of those states there are yet more billions. But feelings—perceptions of any sort—don’t come with that degree of precision. Our behavioral representations, even less so.
These are metaphors, of course. The idea that there is some intermediate shade between two given shades of love or hunger or pain is an idea that only works within the very sort of model you (presumably) and I are prepared to readily adopt. That sort of model helps make experiences intelligible and behaviorally representable (in some way that is mostly beneficial for people like us living in societies like ours), but it won’t hold up to close scrutiny*, any more than a feeling of hot due to being set on fire or cold due to sub-zero exposure could literally be said to be degrees—in fact opposing degrees—of the same sensation in response to similarly situated degrees of stimuli on a continuum of events we’ve collected into a category we call “temperature,” though these things are readily modeled and, for us, conceptually intelligible as such (there’s no problem if I say, “There’s a lot of hot air coming into this room through the vents, making me feel very warm”).
(*Consider an extreme model, where we propose a continuum of, say, jealousy, on a scale including all real numbers from 0 to 1 inclusive. It would be ridiculous to evaluate one’s jealousy at, say, 0.0000030004126888114048465000001 degrees, but it’s theoretically intelligible.)
37. The worst way to fail to recognize the gap is when that failure is implied by a failure to recognize a mind (where there are no minds, there is no gap). When this happens, the pain we might cause others is unthinkable. Mutually shared attention helps all sides.
38. Behaviorism has lost much of the immense sway it had during much of the 20th century, but it’s still influential. For example, in popular ideas about artificial intelligence: If a computer acts like it understands, then it understands; furthermore, for some: if it acts like it’s conscious, then it’s conscious. (I find the former plausible and the latter absurd.) We see it, too, with animals. If a dog acts like it loves, then the dog loves (or, for some: if the dog’s brain moves in ways similar to a human’s when loving, then the dog must be loving). We can’t really know what it’s like to be a dog, but maybe it’s best to presume that one’s dog loves—this is part of what makes that mutually rewarding relationship work. And maybe the dog does meet all the criteria that matter for what counts as “love,” as we usually mean the word.
I’m not so sure about more involved human-like emotions, however, such as guilt. Consider this abstract from a 2009 paper, “Disambiguating the ‘guilty look’: Salient prompts to a familiar dog behaviour,” by Alexandra Horowitz. I’ll embolden the part I find most concerning:
Anthropomorphisms are regularly used by owners in describing their dogs. Of interest is whether attributions of understanding and emotions to dogs are sound, or are unwarranted applications of human psychological terms to non-humans. One attribution commonly made to dogs is that the “guilty look” shows that dogs feel guilt at doing a disallowed action. In the current study, this anthropomorphism is empirically tested. The behaviours of 14 domestic dogs (Canis familiaris) were videotaped over a series of trials and analyzed for elements that correspond to an owner-identified “guilty look.” Trials varied the opportunity for dogs to disobey an owner’s command not to eat a desirable treat while the owner was out of the room, and varied the owners’ knowledge of what their dogs did in their absence. The results revealed no difference in behaviours associated with the guilty look. By contrast, more such behaviours were seen in trials when owners scolded their dogs. The effect of scolding was more pronounced when the dogs were obedient, not disobedient. These results indicate that a better description of the so-called guilty look is that it is a response to owner cues, rather than that it shows an appreciation of a misdeed.14
39. Music, language, poetry, pictures, and aestheticized or elaborate forms of behavioral representation—perhaps they not only don’t bridge the gap, but expand it. Especially when those bridges are from another culture (whatever is universal about these things need not be shared in the first place, at least not for closing the gap; whatever is cultural need not be shared locally; whatever is unique to individuals is hopelessly stranded; unfortunately, we can’t be sure of which is which, and in some cases two people who grew up in the same home could be said to live in different “cultures”). In the right circumstances, this constitutes an expansion of riches: the gap is bridged with a kind of hazy light that makes its borders glow. Music and other art forms may be some of our best attempts at bridging the gap, in ways language alone can’t; done right, a kind of incredible, but sweet, sadness and isolation may be evoked, accompanied by a promise of deeper connections, of rescue from the culture of self, delivering one to a world of collective embrace.
40. But the gap also makes possible: lying, being accused of lying, accusing someone (especially from a safe distance) of holding a bundle of beliefs because of one opinion the person expresses.
(I call the latter point “constructing the ideal nemesis” or “ideal enemy.” This happens in debates sometimes when someone really really seems to want their interlocutor to be their ideal nemesis; but when that person claims to hold beliefs from outside the nemesis bundle (“extra-bundular”?), the nemesis-constructor says, “oh, you may vehemently swear you feel positively about X, but I’m here to tell you that you in fact feel negatively about X. That established, I’ll now educate you [with a speech I’ve been dying to give an ideal nemesis such as yourself] as to why you’re wrong—evil, in fact—to feel negatively about X.”)
41. More now on technology. We easily assign theories of mind to insects and teddy bears. If we’re getting those wrong (and I’m convinced we are in both cases), how easy is it to get it wrong when ascribing mental states to humans engaged in complex behaviors?
Even with the most advanced technology conceivable, even with tech that could rearrange my neurons in order to produce the same internal sensations you’re feeling at a given moment—I find it difficult to see how I could feel what you feel, at the very least when that feeling carries meaning.
Suppose you drag a piece of ice across your arm. I want to know what that feels like. I take a piece of ice from the freezer and do the same. I declare it painful. You say it’s refreshing. More significantly, you say you’ve got a faint, 40-year-old scar in that spot from when you were once burned. Shall I burn myself and wait 40 years? Even if I did, other factors may hold me back. Suppose your mother had rubbed ice on the burn when it was fresh, while singing a lullaby, and she only did it during lunch-time while bread was baking. In order to really know what you’re feeling, I’ll need something more than a momentary rearranging of brain structure. I’ll need your history and your memories, in addition to a brain structure and nervous system etc. that, if not identical to yours, has at least been altered to match the stimuli-processing style of your own. I’ll need to know what it is to be you as an embodied subject who has in large part been formed by many years of interacting with the world. In short, I’ll need to be you, or practically you. But then I won’t be me, and I won’t know what you’re feeling: it will still be only you who knows. (In simpler terms: the amount of rearranging that would be required seems too extensive for my own sense of self to survive the process.)
I’ll grant that we can generally agree that the ice feels cold, and are probably referring to very similar—perhaps even identical—qualities of sensation. But this is very basic. It would be dangerous to count such a basic shared experience as an indication that we can get into one another’s minds.
42. Rearranging brain matter as described above, if at all possible, is a good ways off. More doable might be tech that addresses a question like: What does it feel like to be depleted of dopamine for a day? Finding out could inspire some empathy, perhaps, for a certain class of people we’re tempted to call “lazy” or worse, but this will not bridge the gap. It won’t show what it’s like to be depleted day after day for years, and to deal with the ongoing stresses such a life entails. I doubt a few moments of feeling what you feel right now could impart that (and certainly virtual reality won’t do it; though if that sort of technique shows results for empathy—or, perhaps better: compassion—training, I’m all for it! Results so far seem to be mixed, however. I’ll save that discussion for another time.)
43. What about something like this: Suppose it were possible to install connections from your brain to mine via commissures that function much as those that connect the left and right hemispheres of an individual’s brain (i.e., the corpus callosum; though this could be done via some sort of wireless airborne transmission as well). This wouldn’t “unify” our consciousnesses any more than there is unified consciousness between an individual person’s left and right hemispheres (assuming that it makes sense to speak of “unified” consciousness at all [I think it does]; and assuming a model in which there are distinct consciousnesses in the two hemispheres). It may result in a more direct sort of report of experience, but I don’t think this would show me what it’s like to be you.
For example, connecting the input of signals from your eyes into my brain wouldn’t show me what you see any more than an eye transplant would give one the experience of the eye’s original owner. That is, a direct transfer of the physical impulses from your brain to mine will still undergo processing in my brain, according to my brain and body’s structure and so on. And whatever is perceived is still filtered through the same meaning-assignment process I usually engage.
Perhaps technology can be developed to, as it were, translate that experience in conjunction with a kind of virtual reality that, for example, alters my internal sense of embodiment.
Such an idea takes us well beyond any foreseeable tech (unlike development of some kind of interpersonal corpus callosum, even a crude one!) and well into science fiction, not to mention that it would likely be extremely difficult—if not impossible—to know that the translation is working. Perhaps the best bet would be to send it to my brain, then back to yours so you can judge whether the signal survives the translation. That might work! Let’s do it!
44. Egocentrism is when you don’t realize that other people have experiences of the world that differ from your own. Young children—let’s say from about ages two to seven (to use Jean Piaget’s second, or preoperational, stage of cognitive development)—are often chronically egocentric. You might clearly be looking at the back of a book while the child sees the cover, but the child will think you’re also seeing the cover. It seems that we never completely recover from this condition, some of us less so than others. Maybe this is an example: a mother who, due to feeling a chill after eating ice cream on a 72-degree day, insists against protests that her physically active teenage child put on a sweater in order to warm up.
45. Detecting consciousness in humans is generally a matter of collecting evidence of behaviors that appropriately correlate with the self-observational experience of the conscious observer collecting that evidence. In other words, the principal evidence I have that you’re conscious is that I’m conscious and you look and act a lot like I do. Yet more refined is the evidence of your brain being similar to mine in terms of its structure and activity. This may lead us to conclude that animals that, at least in certain ways, behave quite differently than we do are conscious. It may also allow us to detect consciousness in humans who have control over their minds, but not their bodies (not even their eyes or respiration).
Researchers working with vegetative patients have devised ingenious ways to detect consciousness in such humans. Neuroscientist Adrian Owen, for example, has pioneered important work in detecting conscious awareness in patients previously considered vegetative and, even more importantly, has devised a method for communicating with them by asking them to imagine playing tennis in order to convey a “yes,” and to imagine wandering around their home to convey a “no.” These two cognitive activities increase the levels of oxygenated blood in different parts of the brain, which can be detected via fMRI. Whatever your skepticism about brain scanning (I myself am a careful skeptic), the results in this case are convincing and important.
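(As I understand the logic of the protocol, and this is only a schematic sketch of that logic rather than Owen’s actual analysis pipeline, the decoding comes down to asking which of two brain regions shows more task-related activity during the answer period. The region names follow published descriptions of the method; the numbers, margin, and preprocessing below are my own assumptions.)

```python
# Schematic sketch of the yes/no logic in Owen-style fMRI communication.
# Tennis imagery tends to drive motor-planning regions (supplementary motor
# area), while imagining walking through one's home tends to drive spatial-
# navigation regions (parahippocampal areas). Whichever dominates during the
# answer period is read as "yes" or "no". All values here are toy values.

from statistics import mean

def decode_answer(sma_signal: list[float], navigation_signal: list[float],
                  margin: float = 0.2) -> str:
    """Return 'yes', 'no', or 'inconclusive' from two region-of-interest time courses."""
    sma, nav = mean(sma_signal), mean(navigation_signal)
    if sma - nav > margin:
        return "yes"   # tennis imagery dominated
    if nav - sma > margin:
        return "no"    # spatial-navigation imagery dominated
    return "inconclusive"

# Toy answer period in which the navigation region is clearly more active: "no".
print(decode_answer([0.3, 0.4, 0.2], [0.9, 1.1, 1.0]))
```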
For more on this, see this Feb 2010 article: Think tennis for yes, home for no: how doctors helped man in vegetative state. Or, even better, listen to this Nov 2016 All in the Mind interview with Dr. Owen: Finding consciousness, which bears the following description:
Why would a neuroscientist tell jokes to a person in a seemingly complete vegetative state—and whilst scanning their brain? It’s one technique used to determine levels of consciousness. The latest research shows that around 1 in 5 people who appear to be comatose are actually fully conscious and aware of their surroundings—it’s just that no-one else knows. The implications are enormous.
Owen’s work is chronicled in his excellent 2017 book: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death. (Since reading it, I’ve been collecting ideas about the wondrous, and I think urgent, possibilities of developing more robust ways of communicating with people in this condition; perhaps even developing a kind of language… more on that later).
Methods for detecting consciousness continue to be refined. The general idea has to do with correlating observable activity in the brains of vegetative patients with that of the brains of walking and talking humans when, for example, engaged in the activity of processing meaning. The role of meaning here is critical, and is quite distinct from the sort of activity one might observe in a vegetative brain exposed to, say, photos of human faces, some of which may be of family members; such photos might trigger an automatic response originating in the brain stem of a vegetative (and thus unconscious) individual (see page 29 of Owen’s book for his account of showing photos to a patient named Kate); this is similar to when an unconscious individual’s hand flinches when pinched, or even when you, as a conscious agent, pull your hand reflexively from hot metal before being aware of the heat, and certainly before registering pain; as Owen puts it:
Quickly removing the hand that you placed on a hot stove is automatic and instantaneous and involves only the neurons in your spinal cord, not your brain. It would simply take too long if the message “Hot!” had to go up your arm to your spinal cord and then to your brain for you to decide to move your hand, only to then send that message back down to your arm. Painful stimuli such as pressure on a fingernail or the feeling of a hot stove elicit a hardwired, automatic response, which tells us little about patients in the grey zone: these responses occur whether or not the brain is irreparably damaged. (Page 52)
Meaning is a more complex matter, and requires the brain—it requires consciousness (we presume, I think rightly).
So, researchers need some way to know that meaning extraction is occurring. One method that’s been used is to show people we know are conscious a film (e.g., by Hitchcock), and track brain activity, particularly at moments that would require meaning; e.g., activity that amounts to, “it’s making me nervous to see that child waving around a loaded gun.” Such a scene would be meaningless to an observer who doesn’t have any associations with guns. These data are then aggregated and compared to the brain activity of vegetative patients as they are shown the same movie. When the activity significantly matches, it’s a solid bet that the patient is as aware of the film as you or I would be. (If it doesn’t match, this doesn’t rule out consciousness; see the extraordinary account of Juan that begins on page 207 of Owen’s book.)
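(A minimal sketch of that comparison logic, as I’ve just described it; this is not the analysis actually used in the studies, and the correlation measure, threshold, and data below are my own simplifications.)

```python
# Toy version of the film-watching comparison: correlate a patient's activity
# time course with the average time course of healthy viewers watching the
# same film. A strong correlation is read as evidence the patient is following
# the film; the threshold and data here are invented for illustration.

import numpy as np

def follows_film(healthy_group: np.ndarray, patient: np.ndarray,
                 threshold: float = 0.5) -> bool:
    """Compare a patient's time course to the healthy-group average."""
    group_average = healthy_group.mean(axis=0)       # average across viewers
    r = np.corrcoef(group_average, patient)[0, 1]    # Pearson correlation
    return r > threshold

healthy = np.array([[0.1, 0.9, 0.3, 0.8],            # viewer 1
                    [0.2, 1.0, 0.2, 0.7]])           # viewer 2
patient = np.array([0.15, 0.95, 0.25, 0.75])         # tracks the film closely
print(follows_film(healthy, patient))                # True
```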
I would like to ask Owen how they can be sure that meaning is happening here (and not something like the case of responding to a familiar face; perhaps patients also respond to subtle jokes?), but from what I’ve read, they seem to be onto something, given, I assume, that they are looking at degrees of response to cues with varying degrees of subtlety over the course of a significant chunk of time.
PS: I just ran into a new article at Nautilus about Owen’s work: “The Ethics of Consciousness Hunting,” by Mackenzie Graham. It’s pointed out that seemingly vegetative people who turn out to be conscious are now being classified as having “cognitive motor dissociation” (or CMD). It’s also argued that we have an ethical obligation to test for consciousness before classifying someone as vegetative. I agree. And we need to do more than just recognize that they’re conscious. As Graham puts it: “We have built a tool to reach patients who we thought had been lost forever. We’ve given them a voice; now we have to listen.”
46. Despite the cleverness and reliability of the examples involving vegetative patients, it seems impossible to use such techniques with entities without brains like ours, as we’d never be able to get that project off the ground for what strike me as obvious reasons.* I assume a walking talking creature with a brain whose structure and activity—as a living, biological organ—are nearly identical to my own is conscious, because I have a brain like that and I am conscious.
/BEGIN DIGRESSION/ *I’ll touch on this briefly. Consider a computer, which, as will become apparent shortly, is my primary target here. For one thing, we don’t know that a machine’s responses count as meaning processing rather than unconscious response to a stimulus. We don’t have our self-observations to rely on for setting a standard of what to expect, how to interpret the machine’s activity [e.g., spikes in activity of a certain bit of hardware], and so on. We lack a solid baseline.
I’m reminded here also that meaning is not something that has been at the forefront of information theory. Indeed, it’s been historically explicitly dismissed, as far as I can tell, and understandably, at least as far as developing a reliable logic system goes. This goes back to the roots of the field. Consider the following passage from James Gleick’s 2011 book, The Information: A History, a Theory, a Flood. It refers to a 1950 conference in the early years of information theory, at which big names were in attendance, including Claude Shannon and John von Neumann. Famous anthropologist Margaret Mead was also there; it’s her comments that I’m most interested in:
Mead, recording the proceedings in a shorthand no one else could read, said she broke a tooth in the excitement of the first meeting and did not realize it till afterward. … [Norbert] Wiener wondered whether anyone had tried a similar calculation for “compression for the eye,” for television. How much “real information” is necessary for intelligibility? … Margaret Mead had a different issue to raise. She did not want the group to forget that meaning can exist quite apart from phonemes and dictionary definitions. “If you talk about another kind of information,” she said, “if you are trying to communicate the fact that somebody is angry, what order of distortion might be introduced to take the anger out of a message that otherwise will carry exactly the same words?” … That evening Shannon took the floor. Never mind meaning, he said. He announced that, even though his topic was the redundancy of written English, he was not going to be interested in meaning at all. He was talking about information as something transmitted from one point to another: “It might, for example, be a random sequence of digits, or it might be information for a guided missile or a television signal.” What mattered was that he was going to represent the information source as a statistical process, generating messages with varying probabilities. (pp. 242–246). /END DIGRESSION/
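(To give a concrete sense of what Shannon meant by treating a source as “a statistical process, generating messages with varying probabilities,” here is a minimal sketch, with invented probabilities, of the entropy calculation at the heart of that framing. Whether the symbols mean anything plays no role in the number, which is exactly the dismissal of meaning Mead was pushing back against.)

```python
# Toy Shannon-style source: symbols emitted with fixed probabilities, meaning
# nowhere in sight. Entropy gives the average information per symbol in bits.

from math import log2

def entropy(probabilities: dict[str, float]) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over the source's symbols."""
    return -sum(p * log2(p) for p in probabilities.values() if p > 0)

# An invented four-symbol source; a uniform source over four symbols would
# carry 2.0 bits per symbol, so the skew amounts to redundancy.
source = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
print(round(entropy(source), 3))  # 1.75
```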
But how to detect consciousness in an entity with a very different sort of brain—for example, in a computer? One recent suggestion is to examine behavior of the sort that perhaps can only come from conscious beings. The 2017 article Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware (by Susan Schneider and Edwin Turner) describes an AI Consciousness Test (ACT). The idea is that conscious humans are very quick to grasp the notion of a mind-body dualism, even if they are hardcore materialists; if computers are conscious, they too would be able to grasp concepts invoking a separation between the experiential and the physical.
Our intuitions about the mind-body relation are what make dualism so appealing and lasting, and, for some of us, just obvious. (Not to mention that a shockingly large set of our socio-cultural assumptions rely on the idea; more on that another day.) I bet even hardcore materialist Patricia Churchland can readily imagine, for example, switching minds with another human (even if she’d say her years of training make it possible for her to avoid the fallacy on deeper reflection; see my quote of her below). I take it that this capacity is what Daniel Dennett is referring to when he speaks of the “zombic hunch”: the temptation to fall for David Chalmers’ notion that philosophical zombies are (logically) possible. A brief discussion of philosophical zombies (or just “zombies” for short) will be necessary for making a point I’ll get to in a moment.
Zombies are non-conscious entities that look and act human. They say “that hurts!” when pinched, etc. But they have no internal awareness, feelings, or experience—no more so than does a pile of laundry. Chalmers and others argue that because we can imagine such entities, this tells us that consciousness must be something extra, something more than just the body, something immaterial. On deeper reflection, however, many reject the conceptual intelligibility of philosophical zombies.
For example, it strikes me as implausible that the sorts of behaviors conscious entities exhibit would arise out of environment alone, without some phenomenological representation of that environment: the feeling of excessive heat seems like the most plausible explanation for backing away from fire. Now, there may be simple, non-conscious animals that respond to stimuli, so there is a conversation to be had there about the natural selection of behaviors that are accountable as stimulus responses without experience (e.g., a tiny photosensitive organism that turns towards light); but this is worlds away from the complex behavior exhibited even by, say, peacocks spreading their plumage, and even further from that of humans in response to music. In a more immediate sense, knowing the extent to which brain function is correlated with experience makes it more difficult to indulge the zombic hunch. Indeed, Churchland answered “Oh sure” not long ago to this question posed by Susan Blackmore: “So do you think that, brain research is going so fast, fairly soon people won’t fall for the zombic hunch?” (Page 59 of Blackmore’s 2006 book, Conversations on Consciousness.)
But, again, being a human whose experience is that of a (seemingly continuous, even when dreaming) embodied first-person perspective that lives right here, between the eyes, I bet Churchland can easily imagine switching minds with her neighbor, and doesn’t find movies like Freaky Friday to be unintelligible nonsense.
Herein lie my hopes and concerns about the ACT test. As for my hopes, it seems plausible that even a well-informed conscious creature, no matter how aware it is of its own physical makeup, will be able to conceive of a distinction between its experience and its physical makeup. It might even be able to tell us what the nature of that experience is and, if appropriate, where it actually is. And so on. This also suggests a concern: the ability to conceive of mind-body dualism may come easily to you and me, but maybe the machine would exhibit an extreme version of what Churchland describes; i.e., it would be so aware of its physical makeup that it would not see any distinction between talk of an experience and talk of the physical processes correlated with that experience.
A variation or extension of that concern is that a non-conscious computer might use the sorts of language we’d expect from a conscious entity—i.e., the sort of language we humans use—but that we and the computer would be speaking different languages. For a simple example, we might ask a computer if it has experience. It might interpret this as “do you have representation?” Well, computers do represent things in code and hardware configurations. So we might clarify by asking if it has representation outside of those things. If the computer says it does, it might be referring to some metaphysical system it has come to understand—for example, about its identity over time, as a machine whose code and hardware changes; and yet, there is some singular metaphysical entity (with a continuous locus of perception, in the sense of incoming information) that the computer understands to be its “self,” as we humans would call such a thing. If we ask the computer if it considers that self to be illusory (as many physicalist-minded humans take that experience to be), the computer might interpret this as a statement about the metaphysics of identity that need not have anything to do with being conscious (e.g., see the Ship of Theseus).
Again, I see this coming down to the concern that a conscious computer might be so well-informed about its own physical makeup, and view subjective experience as such an obvious material result of that makeup, that it would not be able to make sense of the idea that there is something extra. It would be 100% immune to the zombic hunch, unlike even the most well-informed humans. In other words, the computer doesn’t share the evolutionary baggage that humans carry around, making us vulnerable to things like superstition and highly reductive representational models and stereotypes and cognitive biases and meaningless taboos and sugar and so on. (There are good things, too, like our strange sensitivities to sound that make music possible and powerful.)
Given all this, there could be two computers side by side, one conscious and the other not, yet, as far as we can tell, behaviorally indistinguishable: precisely the sort of scenario Chalmers describes in his zombie example. Now, I presume that the deepest inner workings of these computers will be different. This could come down to a process that can be performed by identical hardware; i.e., due to slightly different code. But we can’t work out which is conscious and which isn’t strictly on that basis (even were the code to turn out to be relatively simple!). And asking simply, “which of these is the code for consciousness?” or “are you conscious?” won’t do it. And so the need for something cleverer, like the ACT (about which I’ve already expressed some basic concerns).
In the end, I worry that the baseline problem reemerges here. Even if we ask, “Is there a significant aspect to your existence, or to how you represent or model the world (including your physical and mechanistic self) to yourself, that no other entity can observe but you?”, a conscious computer might answer “no” because it might understand that any other computer can be connected to it and share in its experience, and might not realize that humans can’t do the same. Even if the machine conceives of “you” as a single entity, that entity could be formed by what we’d consider to be millions of distinct computers (this again gets at the idea that, if you actually do bridge the mind gap between two entities, you are no longer dealing with two entities, but with one).
Alternatively, though I think this is more of a stretch, a non-conscious computer might answer “yes” because of the aforementioned metaphysics (and perhaps it views any additional computer being connected into it as now creating a new computer, rather than constituting an ontological expansion of itself). The point is that we can’t know for sure what’s going on in the machine—at least, not with anywhere near the same level of certainty I think we’re thoroughly justified in enjoying about whether our friends and neighbors are conscious.
47. Relying too much on behavior leads us to assign internal, conscious states to things like Furbies and pea tendrils.
For an example of the former, check out this 2011 Radiolab interview with Furby creator Caleb Chung, in which Chung vehemently asserts that Furbies are conscious.
Questions about the latter are posed in this February 2, 2018 New York Times article, “Sedate a Plant, and It Seems to Lose Consciousness. Is It Conscious?”:
“Plants are not just robotic, stimulus-response devices,” said Frantisek Baluška, a plant cell biologist at the University of Bonn in Germany and co-author of the study. “They’re living organisms which have their own problems, maybe something like with humans feeling pain or joy. In order to navigate this complex life, they must have some compass.”
I don’t know how closely “like” pain or joy Baluška assumes plant experience to be, or the extent to which he means these words metaphorically, but I’m struck by the fact that Baluška seems to believe—or is at least apparently happy to give the impression of believing—that there’s anything at all that it’s like to be a plant. What grounds that belief? The fact that the plant exhibits some behavioral responses similar to those found in animal nervous systems? (Admittedly, I haven’t read any of the relevant scientific literature; but see below for critical commentary by an expert.) I’m sure there’s a complicated story to tell here, but I am incredibly skeptical about that story’s power to convince me—or anyone else interested in human experience—that a plant has anything like pain or joy, or anything like consciousness.
Perhaps some simple organisms have at least primary consciousness; that is, extremely basic, low-level representations of certain stimuli in their environment. Worms, for example. I doubt it, but how can we know? And even if they do have primary consciousness, I wouldn’t assume they have anything that registers as what we humans call “pain,” due to their comparatively simple nervous systems. And it’s not like they need pain to do what they do: pain isn’t required for a moving thing to respond positively or negatively to stimuli; single-celled organisms and robots can do this.
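(To make that last point vivid, here is a toy sketch, entirely my own invention, of an agent that responds “negatively” to a noxious stimulus with nothing resembling pain anywhere in the loop.)

```python
# Toy stimulus-response agent: it withdraws from readings above a threshold,
# purely as a mapping from input to output. There is no experience here, no
# pain, nothing it is like to be this loop, which is the point: aversive
# behavior alone doesn't establish that pain is present.

def act(sensor_reading: float, noxious_threshold: float = 0.7) -> str:
    """Map a sensor reading to a behavior, with no inner life required."""
    return "withdraw" if sensor_reading > noxious_threshold else "continue"

for reading in (0.2, 0.5, 0.9):
    print(reading, "->", act(reading))
```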
I wonder at what point something counting as pain begins to emerge in the evolution of life on Earth. Who knows? But I seriously doubt plants have anything like it. On the other hand, that dogs and rabbits feel no pain—that they have no conscious experience whatsoever, in fact—was obvious to folks like Descartes.* (REMINDER: I BELIEVE DOGS AND RABBITS EXPERIENCE PAIN. I’M TALKING HERE ABOUT DESCARTES’ BELIEFS, NOT MY BELIEFS.) Indeed, Descartes would apparently have thought me as misguided for thinking a dog could feel pain as I think those assigning pain to plants must be. For Descartes, dogs didn’t yelp in pain when cut (something he would do as a matter of research); rather, they were automatons, and their yelps were like the squeak emitted by a machine whose springs have been tugged at.
(*For a recent, highly regarded neuroscientist making this claim, though for reasons of function rather than religion [Descartes didn’t believe nonhuman animals had souls and he took it that soul equals mind], I direct you again to Susan Blackmore’s 2006 Conversations on Consciousness, in which appears an interview with V. Ramachandran. Blackmore pushes Ramachandran, a self-declared non-dualist/”neural monist,” on the question [starting on page 188]:
Rama: … I think what’s going on is—let me make some bold assertions—first of all I think animals don’t have consciousness or qualia.
Sue: None of them! Only humans, right?
Rama: Great apes come close. I think there is a quantum leap. There is something very unique and special about humans, not in any theological or mystical sense, but just in terms of functions.
[…skipping ahead…]
Rama: … I think that, for example, your withdrawal from a hot kettle is a different pain from the pain that you then contemplate. In the first case, the pain of withdrawal from a kettle, there is no qualia, no meta-representation. …
[…skipping ahead…]
Sue: So you’re saying something like this: “I’m a vegetarian, I don’t want to eat animals, I would rather they were killed in a nice way, but actually I don’t think they feel pain.”
Rama: That’s correct, I would say that, if pushed.
Again, these are not my views. I don’t even know if Ramachandran still, in 2018, has these views, and he seems hesitant in the interview—some 15 years ago—to mention them. Whatever beliefs he had at that time, and now, were arrived at, I’m sure, in good faith following years of deep contemplation and scientific inquiry into the workings of the mind and brain. So I’d recommend reading the whole discussion to get the nuances of what he has to say. I bring the conversation up here mainly to emphasize a number of things: the potential stigma of admitting to such a belief, even if one believes it scientifically warranted; the fact that smart and experienced people can look at the same evidence and arrive at different conclusions; the distinction between understanding something functionally, and indeed being a virtuosic instrumentalist with that thing, while holding contrarian views, even within one’s own field, about the fundamental nature of that thing; the difficulties of understanding consciousness; the intractability of the mind gap: it’s so intractable, we often can’t even be sure, including in the strictest evidentiary sense, of when there is or isn’t another mind on the other side [including, again, that of a computer].)
Maybe ascribing sophisticated experiences to plants is to hypercorrect for fear of repeating Descartes’ mistake (and for fear of exhibiting the worst sorts of stereotypes about the “rational West”?), and to conflate our anthropomorphic metaphors with the real thing. Or maybe it’s a consequence of the decades in which behaviorism reigned, equating descriptions of behavior with some correlated experiences; and maybe this is an attitude certain scientists would like to see sustained (e.g., those who would like to see the subjects of their research given a higher status in society, or even those who’d like to think that behavior is enough to call computers conscious, and so on).
For a contrary (and far more common) expert view, check out this February 14, 2018 Massive Science article, written by Devang Mehta in response to the above Times article: “Plants Are Not Conscious, Whether You Can ‘Sedate’ Them or Not.”
I must point out that Mehta seems to imply that the Times article misrepresents the research (while other popular publications did not), and that, while some researchers might use words like “intelligence” to talk about plants, this isn’t meant to be taken as anything like “consciousness”; many researchers prefer to avoid consciousness-related metaphors precisely to avoid such confusion. Further, Mehta states that “even Baluška would not go so far [as declaring plants conscious], telling the paper: ‘No one can answer this because you cannot ask them.’”
48. There’s another surprising way in which the gap may help, namely with forgiveness, or at least with maintaining a healthy level of humanitarian sympathy. At first blush, I’d expect the gap to impede forgiveness, because we don’t know what someone else was feeling and thinking when they committed their crime. But this also means we don’t know the pain of someone else who’s been wronged. If I could literally feel the pain of everyone being wronged at this moment, not to mention the pain of the person committing the harm… well, I suppose my head would explode.
And if we had such capacity—i.e., the capacity to experience one another’s harms—would such harm still be committed? I imagine it would, even if we had no choice but to experience one another’s harms. People routinely self-inflict harm, in a variety of ways and for a variety of reasons. If I think hard enough, I might be able to recall harms I’ve committed against myself that I might want to see someone else sent to prison for, and I do indeed experience the wrongs I commit against myself.
At any rate, many of us believe—and not just on consequentialist grounds—that prisoners should be treated fairly, not given a life of torture. What I wonder is whether we would still feel this way if we felt the pain of others. It occurs to me that feeling the pain of others would also entail feeling the pain of tortured prisoners. The genuine dissolution of the mind gap leads to unfathomable alternatives to our current reality.
Speaking of sympathy for prisoners, this 2018 article at The Conversation comes to mind: “Teaching Philosophy to Prisoners Can Help Transform ‘Macho’ Prison Culture.”
49. Psychophysiologist Stephen LaBerge studies lucid dreaming. A lucid dream is one in which your mind awakens while your body remains asleep. It’s often triggered by the awareness that you’re dreaming. Lucid dreams are generally more vivid than ordinary dreams. Some experienced oneironauts (a nice word) are good at controlling their lucid dreams. LaBerge needed to convince skeptics in the research community that lucid dreaming is real, so he devised ways of communicating while in the lucid dreaming state—for example, by moving his eyes in certain patterns (one’s body is mostly paralyzed while asleep, but the eyes are left alone; so is respiratory function, but signaling via breathing patterns is harder to do). I find it interesting that his waking self-report wasn’t enough to convince others, and that monitoring of his sleeping brain gave the same readings as that of someone in a non-lucid sleep state (I presume, though, that lucid dreaming could be detected with the right tools).
(Aside from Wikipedia, linked above, my source for this point is, again, Susan Blackmore’s 2006 book, Conversations on Consciousness, in which appears an interview with LaBerge.)
50. Does there exist the possibility of a universal intersubjective phenomenology calculus? Something like a quantified language that says, “I can’t know what you’re thinking exactly, but I’m .6 degrees confident that I’m in the ballpark of .7 degrees of accuracy.” Failing in attempts to codify such a scheme may itself be useful.
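For concreteness only, here’s a toy sketch, in Python, of what a single report in such a calculus might look like. Everything in it—the field names, the numbers, the multiplicative way of combining them—is a hypothetical illustration of the idea, not a proposal.

```python
# A toy, purely illustrative sketch of a quantified "mind gap" report.
# All names and numbers here are hypothetical; the point is only to show
# what separating "confidence" from "accuracy" might look like in practice.

from dataclasses import dataclass


@dataclass
class MindEstimate:
    content: str       # my guess about your state, e.g., "you're annoyed with me"
    confidence: float  # 0-1: how sure I am that I'm in the ballpark
    accuracy: float    # 0-1: how close to the ballpark I think I can get at best

    def joint_score(self) -> float:
        # One crude way to combine the two: the overall claim is only
        # as strong as the product of its parts.
        return self.confidence * self.accuracy


estimate = MindEstimate("you're annoyed about my late reply", confidence=0.6, accuracy=0.7)
print(round(estimate.joint_score(), 2))  # 0.42 -- a deliberately humble number
```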
51. There’s a remarkable probabilistic element to this that seems to carry an evolutionary advantage. We are usually going to be correct in interpreting a person who looks angry as being angry. Or, at least: better safe than sorry. In fact, maybe it’s evolutionarily advantageous, in extreme circumstances, to assume any stranger to be a threat. This is not so advantageous in the current world, however, particularly if one of our goals is for people of different groups to get along harmoniously. Our intuitions might err on the side of confrontation, or focus on data that seem to recommend the scariest conclusions (particularly when we don’t know how the data were collected, how they are parsed, or how they should be compared to an entire population). The gap is tricky to navigate, and I’m not sure whether the most helpful map for that lies in informal probabilistic models (such as those provided to our intuitions by evolution) or formal models (our attempts at bypassing our naturally probabilistically deficient intuitions about how to interpret one or more data points) or neither (e.g., instead, in some sort of politically motivated, but instrumentally useful, world model).
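Here’s a minimal sketch, with entirely made-up numbers, of the “better safe than sorry” arithmetic: when missing a real threat is vastly costlier than a false alarm, the expected-cost comparison favors vigilance even when threats are rare.

```python
# A minimal sketch of asymmetric error costs ("better safe than sorry").
# The probabilities and costs are invented for illustration only.

def expected_costs(p_threat: float, cost_false_alarm: float, cost_miss: float) -> tuple[float, float]:
    """Expected cost of each policy toward an ambiguous stranger."""
    assume_threat = (1 - p_threat) * cost_false_alarm  # wasted vigilance when there's no threat
    assume_harmless = p_threat * cost_miss             # catastrophe when the threat is real
    return assume_threat, assume_harmless


# Even if only 1 in 100 strangers is dangerous, a 1,000-to-1 cost asymmetry
# makes vigilance the cheaper policy on average: 0.99 vs. 10.0.
print(expected_costs(p_threat=0.01, cost_false_alarm=1.0, cost_miss=1000.0))
```

The same arithmetic also shows why the heuristic misfires today: shrink the cost asymmetry, as modern life mostly does, and the vigilant policy quickly stops paying for itself.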
52. Her face cracked in the middle, a strained and crooked smile, eyes trading squints and bulges. This physiognomic cacophony was, for those close to her, the external embodiment of an ecstatic warmth, an ember’s pulsating glow. For many who didn’t know her, it was a fright, at least fleetingly. Some, not all, young children would cry at first sight. And some emotionally immature teenagers—even those who’d seen her around since childhood—would let burst a feigned scream, quickly swallowed by nervous laughter. Maturer strangers would be taken off guard before quickly regaining their composure, loosening as they realized this was nothing more than a physical disability of sorts—an inability, due to disfigurement or loss of neurological control, to coordinate facial muscles. And some young children would smile big right back at her, and then her face, vibrating, would crack deeper, a jar of happy flies: “Ha,” she’d laugh.
The above is an imagined case of synkinesis, inspired by a longtime fascination of mine with how easily a stranger’s atypical appearance can distort our theory of mind. It doesn’t take much—injury, infection, a bit of nose lost to cancer. Maybe just plain aging can do it. The distorting effects on interpersonal dynamics travel in both directions, as demonstrated in the earlier-noted example (point 23) of the non-existent scar (maybe even the thought of an atypical appearance can have an effect). (What I don’t consider here is atypical appearance of the opposite kind, i.e., an extra-saturated dose of what tend to be considered attractive features.)
I’m reminded of a 2015 New Yorker piece, “Give Me a Smile,” in which theatre teacher Jonathan Kalb describes his experience with synkinesis (now that I look at it again, I feel I might have plagiarized it in my little story above; but I did a poor enough job of plagiarizing to leave it be):
For the past thirteen years, my smile has been an incoherent tug-of-war between a grin on one side and a frown on the other: an expression of joy spliced to an expression of horror. Smiles are our most important form of nonverbal communication. They express warmth and familiarity; they signal receptiveness, openness, alliance, approval, arousal, mirth, and pleasure. They’re also pleas for attention; tools of ingratiation, seduction, appeasement; flags of disapproval, contempt, embarrassment. Some people wield them parsimoniously; others dole them out willy-nilly. The spontaneously joyful smile is the facial expression most easily recognized from a distance—as far as a hundred metres, researchers say. If a stranger approaches me smiling and I try to return the greeting, I watch the person’s face fade into apprehension and wariness.
I feel strongly that we, as a culture and as best we can, must push to transcend such responses to physical appearance—particularly what I’ll here call “nontraditional modes of behavioral representation” (the most extreme example I can think of is that of the presumed vegetative; see point 45). Such conditions only debilitate insomuch as society—i.e., individuals en masse—responds to them in the way Kalb and I have described. I would certainly prefer to live in a world where someone with synkinesis could, for example, be a viable contender in a major political race. I’m (greatly!) tempted to begin what would surely be a long, complicated discussion about both the deeply embedded and the shallowly sustained obstacles we face in pursuing that aspiration. Instead, I’ll save those tens of thousands of words for another day and will just say that, as always, attention seems to be the best bet.
53. Suppose minds are both infinite and, in a way, contiguous—much in the way some have described universes (i.e., the multiverse) to be. Imagine figure-eights tightly packed together, without discernible borders; yet each experiences itself as isolated. Maybe there is a portal through.
54. There’s a lot more to think about (e.g., Empathy training with virtual reality?… Can we know what it’s like to be a bat?…). I’ll leave it at this.
55. To conclude, I’ll repeat the end of point 19:
Nothing bridges the gap. But there are ways to work around it, to thrive socially and interpersonally both despite of and due to it. Close and sustained attention—importing, probing conversation, perspective getting, whatever else you want to call it—is at least half the solution. The other half is in the communication, the exporting, the behavioral representation.

Footnotes:
- Another great problem is complexity, which we address with a wide range of formal and informal models (e.g., those given by sensory perception and intuition, and those found in statistics and probability). Something complexity and the phenomenological gap have in common is epistemic impenetrability—that is, deep uncertainty, perhaps even unknowability.
I mention complexity here because these problems are linked: human behavior, at the individual and group levels, is complex, and thus requires simpler models in order to be talked about, studied, understood (even if just a little). In other words, we use models—including informal probabilistic ones (more on this later)—to try to make sense of one another’s phenomenological existences and, indeed, even to understand the grounding and preconditions for that existence (e.g., the activities of the brain, itself an unfathomably complex system).
In an age increasingly defined by its expanding ability to store, parse, exchange, and communicate information, and in which information is even increasingly seen as fundamental to all existence, uncertainty may spawn a collective angst of a new, hyper-forward-looking and existential sort, particularly when that uncertainty is about something humans view as essential to who they are, i.e., consciousness: As the information age progresses and as technology permeates, what might we humans become as individuals, as a group? That’s a broad question about the role of technological permeation, post-information age, and it entails narrower questions, such as what will become of our understanding of the individuation between minds—of the mind gap.
- Livni, Ephrat, “There’s Only One Way to Truly Understand Another Person’s Mind,” Quartz, July 3, 2018.
- The “Philosopher’s Hat” is what I call it when someone faced with a philosophical discussion pursues arguments they don’t actually believe, mainly because they think the point of a philosophical discussion is to say the most outlandish things they can think up, rather than to simply say what they actually believe. They view philosophical conversation as a kind of game in which participants compete to come up with and defend the weirdest science-fiction stories or the most counterintuitive ideas imaginable. Or they view it as an absurdist game of “What if?”—just a bit of fun, then back to real life. I usually dislike those conversations, especially when the topic is one I think important; though I also understand that there may (appear to) be a fine line between sincere claims about the real world and the fantastical thought experiments philosophers come up with in order to support those sincere claims.
(I distinguish the Philosopher’s Hat from a phenomenon in which people, particularly in support of a political view, make an obviously false claim that they either don’t believe [but think, for example, expresses some kind of abstract truth]; don’t realize they don’t believe [at the moment of making the claim]; would like to ensure that the belief is never correct [and think that claiming it correct now decreases the likelihood of its coming true later], and so on.)
- Kleck, R. E., & Strenta, A. (1980). Perceptions of the impact of negatively valued physical characteristics on social interaction. Journal of Personality and Social Psychology, 39(5), 861–873.
- Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011a). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889–6892.
- Like many other influential ideas in contemporary psychology, ego depletion has been brought into question under the recent “replication crisis.” See, for example, this 2016 Neuroskeptic post: “The End of Ego-Depletion Theory?” I know there are other findings I’ve cited in recent years that have been brought into question, or it appears will soon be brought into question. I don’t have time to go back and flag all of them.
- “German judges with an average of more than fifteen years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%.” (pp 125-126)
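If I’m reading the 50% figure correctly, it appears to be the ratio of the difference between the average sentences to the difference between the two anchors: (8 − 5) ÷ (9 − 3) = 3/6 = 50%. On that reading, judges entirely unmoved by the dice would score 0%, and judges whose sentences tracked the dice one-for-one would score 100%.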
- Cited from the back cover.
- Sub-header: “Want to become a police officer, firefighter, or paramedic? A WIRED investigation finds government jobs are one of the last holdouts in using—and misusing—otherwise debunked polygraph technology.”
- Gazzaniga, Michael S. (2011-11-15). Who’s in Charge?: Free Will and the Science of the Brain (pp. 198-199). HarperCollins. Kindle Edition.
- From the Stanford Encyclopedia of Philosophy entry, “Torture.” Accessed 9/18/2018.
- The anatomical female version of such tests, in which vaginal vasocongestion is measured, is reportedly a less reliable window into erotic desire. I won’t touch on the theories about why, some of which are politically controversial. For that, see Jesse Bering’s book (linked below).
- “Can We Really Measure Implicit Bias? Maybe Not” by Tom Bartlett, The Chronicle of Higher Education (2017)
- Alexandra Horowitz, “Disambiguating the ‘guilty look’: Salient prompts to a familiar dog behaviour”, Behavioural Processes, Volume 81, Issue 3, July 2009, Pages 447-452