A while back, in a long post called “Attention: Mind the Mind Gap,” I doubted whether it would ever be possible to create a Reliable Lie Detector (RLD), no matter how advanced our technology and brain science become. The more I reflect on the idea, the more convinced I am that it is not possible.
I’ll give a quick rundown of why in a moment. First, let me clarify what I mean by “lying.” Defining exactly what a lie is may turn out in itself to pose insurmountable problems for developing an RLD. I will say more about this difficulty as I go along here, but won’t push the point too much, as I think even fairly uncomplicated conceptions of lying are enough to support serious doubts about the RLD’s possibility. Here’s the definition I’ll start with:
You are lying when you knowingly try to convince someone that a proposition you believe to be false is true.
A critical implication here is that to lie is not identical to communicating false information. If it’s 1:15pm but I believe it’s 12:15pm and I tell Albert it’s 12:15pm, I have told Albert something false, but I have not lied.
Also, I can say something true while lying. For example, if I tell Albert that it’s 1:15pm while believing it’s 12:15pm, I have lied, even though it really is 1:15pm—so, I only accidentally told the truth (you might think I had it coming, liar that I am, but I forgot to mention: Albert had plans to kidnap some puppies and I was pursuing sabotage).
That this is an incomplete conception of lying is the key to its rhetorical charm. If an RLD can’t handle an overly simplified notion of lying, a conceptually complicated—more realistic—one will be hopeless. (Unless you cheat by cooking up an arbitrarily operationalized definition of “lying,” which I’ll address below.)
Also, I should note that, by “reliable,” I don’t necessarily mean the RLD has to be able to catch 100% of lies with no false positives. I see no point in quantifying this here, but I obviously have in mind a very high standard.
Now for a cursory list of reasons why the RLD is impossible. Or, if the word “impossible” isn’t in your Futurist dictionary, think of these as challenges.
(1) There is no utterance that, in itself, bears some objective feature that makes it a lie.
This is obvious but important to mention. To illustrate: if you find a piece of paper on the sidewalk with the words, “Mandy stole my pajamas” on it, there’s nothing about that statement in itself that makes it a lie or not a lie.
To be clear, an utterance need not involve speaking out loud. It can be a nod, a shrug, a hand-signing, some writing, pointing (e.g., “point to the person you saw that day”), and any other behavior that amounts to (usually intentionally) communicating information. Also, for my purposes, “to utter” and “to say” and suchlike are synonymous.
Just like current lie detectors, comparatively super-advanced future lie detectors won’t look strictly at the content of utterances in order to evaluate whether a speaker is lying. The way current lie detectors work is to measure activity correlated with lying—e.g., sweating. I’ll say more about this shortly, but won’t waste time trying to convince you of the well-known fact that current lie detectors are far below the standards of the RLD, a machine that would rely on a complete understanding of both the mind-brain-behavior relationship and the human brain itself—thus going to the source of the utterance. At least, this is what I’m imagining, taking my cues from hyper-optimists about brain science.
That said, the RLD might work in conjunction with analysis of bodily activity (what I’ll broadly call behavior). In particular, refining our detection of personalized behaviors seems especially promising. This, too, I’ll touch on below. But I’ll start out here focusing on the brain proper, while acknowledging that the purview of brain science is quickly spreading outside of the skull, taking on notions like embodied cognition; the microbiome; the connection between environment, brain development, and neuroplasticity; and even the thought that tools like notebooks and smartphones are literal extensions of one’s mind.
(2) There is no arrangement of brain parts that corresponds to any human conception of “lying.”
There are brain parts that correspond to cognition and the formulation of thoughts and to moving mouths and moving hands and so on. But, like utterances, neither brain parts nor their corresponding cognitions bear any objective feature in themselves that we’d call “lying.” That this is so will become apparent over the course of this writing.
(3) It is not clear what the RLD is meant to measure.
Recall that the RLD’s aim is to determine whether you are trying to convince someone of the truth of a proposition you believe to be false.
Let’s momentarily set aside the “trying to convince someone” criterion and focus just on the “uttering a proposition you believe to be false” part of the definition. In other words, we’ll assume that the RLD would register it as a lie were you to say “I was born on Krypton,” even if everyone in the room—which might be you alone—knows you’re only saying that sentence in order to test the RLD.
What sort of work must the RLD do here? Again, its goal is not to determine whether the utterance is in itself false. If I say that I was born in Philadelphia, that’s a true statement. But it might be false if you say it, though this isn’t enough to make it a lie. You could be mistaken about where you were born. We misremember things constantly without realizing it. We are also often misinformed: you might, in fact, have grown up being told—and thus now believe—that you were born on Krypton, in which case the RLD would not register “I was born on Krypton” as a lie, if it’s you saying so.
So what the RLD must detect is that the speaker is knowingly asserting a statement they believe to be false. In other words, it must somehow detect brain states that correspond to the statement and to beliefs about the statement and so on, and it must detect a kind of conflict therein that amounts to the speaker knowingly asserting a statement they believe to be false.
Here’s a hugely oversimplified way of breaking that down. Suppose that brain-state-1 corresponds to the statement being uttered and brain-state-2 corresponds to the belief that the statement is false. The first problem to arise here is that the belief encoded in brain-state-2 may not be currently in the thoughts of the speaker (what is often called “occurrent”; I’ll call this “awake”).
Let’s consider some examples.
Suppose Beatrice says, “I lived in Chicago from 1985 to 1995.” In fact, she spent all of 1991 in Milwaukee, but has forgotten. Were her mother present, she’d remind Beatrice of that fact, and Beatrice would say, “Oh, right! How could I forget?” Somewhere in Beatrice’s brain, there was all along some physical stuff arranged such that the memory could be awoken. (I’ll call such physical stuff “engrams.”) But it must be awake when Beatrice says she lived in Chicago from 1985 to 1995 for it to count as a lie.
(We could further complicate things by pointing out that, even if that memory were awake, this would only matter in some contexts—that is, in most casual discussions, it would be perfectly fine for Beatrice to say, “I lived in Chicago for 10 years.” Not to mention that she might say, “Well, I always thought of ’91 as an extended vacation.” The possibilities are vast.)
Not only that, at the time of the utterance, there may be in Beatrice’s mind many other awake thoughts (call these “P” for “proposition”), all of which will need to be untangled by the RLD and compared and somehow categorized as, for example, “Beatrice believes P1, but not P2, and P1 is the proposition she’s uttering while thinking both P1 and P2.”
Not only must all this be untangled, but the RLD will need to be able to deal with cases where Beatrice is thinking P1 while P2 is asleep (i.e., what philosophers call “dispositional”), and yet Beatrice is in fact lying; she does not need to have P2 awake to do so! So, the RLD cannot dismiss engrams that correspond to relevant, but asleep, propositions. How in the world to do this?
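To make the failure mode concrete, here is a toy model of the naive detection rule just discussed. Everything in it—the function, the string-based “propositions,” the awake/asleep split—is an invented caricature for illustration, not a proposal for how an RLD would actually work:

```python
def naive_lie_check(utterance: str, awake_beliefs: set, asleep_engrams: set) -> bool:
    """Toy rule: count an utterance as a lie only if a contradicting
    belief is currently awake (occurrent) in the speaker's mind.
    Note that asleep_engrams is deliberately ignored by the rule."""
    return ("not " + utterance) in awake_beliefs

# Beatrice, having forgotten Milwaukee: the contradicting belief exists
# only as an asleep engram, so the toy rule says "not a lie" -- the
# right verdict here...
lie = naive_lie_check(
    "I lived in Chicago from 1985 to 1995",
    awake_beliefs={"I lived in Chicago for a while"},
    asleep_engrams={"not I lived in Chicago from 1985 to 1995"},
)
print(lie)  # False

# ...but the same rule wrongly clears a practiced liar whose
# contradicting belief is equally asleep at the moment of utterance,
# which is exactly why asleep engrams can't simply be dismissed.
```

The point of the sketch is that any rule keyed only to awake beliefs gets Beatrice right and the practiced liar wrong, while any rule that also counts asleep engrams gets the liar right and Beatrice wrong.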
Examples can easily get yet more complicated, given more complicated questions. Suppose Beatrice answers a question about her political beliefs, and she truthfully says P1. At the same time, she’s thinking P2, which conflicts with P1. But, she sincerely believes the conjunction of P1 and P2! And she does so rationally enough, as she doesn’t realize they are conflicting, because the conflict isn’t obvious and she hasn’t yet noticed it.
So far, except perhaps in some special cases, this shouldn’t be a problem for the RLD, as, just as it is not in the business of merely declaring statements false, it is not in the business of evaluating for rational inconsistency. The problem, though, is that there may be some engram whose corresponding propositional content, P3, is asleep. Were P3 to be awoken, Beatrice would immediately have a lightbulb moment and see the conflict between P1 and P2. So, while the RLD can’t dismiss P3, it has to somehow detect that this awakening has not happened before or, if it has happened, has been forgotten. It must detect that there is no asleep P4 hosted in an engram that has encoded the memory of that lightbulb moment in order to evaluate whether the conjunction of P1 and P2 amounts to a lie and, if so, whether P1 or P2 is the lie.
And what if Beatrice utters “P1 or P2,” a true statement so long as she believes one of them? (She might believe neither, but this is revealed, again, only by P4.)
Now suppose that Beatrice, for the first time in her life, thinks P1, P2, and P3 all at once (in other words, begins forming the P4 engram), and does so under RLD’s watch. This might still require a fair amount of reflection from Beatrice in order for her to see the conflict. And even if she detects the conflict, it’s not clear that this amounts to a complete or clear P4—for example, to a full disavowal of P1 or P2. Rather, Beatrice could just think, “something is wrong here, but I can’t put my finger on it, so I’ll hold onto P1 and P2 until I figure it out.”
This suggests yet more complicated scenarios. It is commonplace for us to have conflicting beliefs that are competing for dominance, either over long or short durations of time. Maybe the RLD would need to report that Beatrice, within a certain time range, believes P1 to a degree of .2 and P2 to a degree of .8, but has asserted P1, and therefore is lying to a degree of .25 (i.e., the ratio between P1 and P2’s degrees of belief; but this is arbitrary).
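To show just how arbitrary such a report would be, here is the calculation under the assumptions just stated. The .2/.8 credences and the ratio-based score are my own illustrative inventions, not a real metric anyone has proposed:

```python
def lying_degree(credence_in_asserted: float, credence_in_rival: float) -> float:
    """Toy 'degree of lying': the ratio of the speaker's credence in the
    asserted proposition to their credence in its rival. An arbitrary
    convention -- many other functions of the two credences would do."""
    return credence_in_asserted / credence_in_rival

# Beatrice believes P1 to degree .2 and P2 to degree .8, but asserts P1:
score = lying_degree(0.2, 0.8)
print(score)  # 0.25 -- "lying to a degree of .25"
```

Nothing privileges this ratio over, say, the difference of the credences or their ratio inverted; that arbitrariness is the point.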
And what if the P4 engram was formed some time ago, but is now so deteriorated that Beatrice no longer has access to it, but not so deteriorated that the RLD can’t piece it together? And so on and on and on.
One of the key worries here is that the RLD needs to avoid false positives: we humans can carry a lot of actual or potential conflict at a moment of utterance, yet not be lying. I for one relate very well to Beatrice here, as someone who self-consciously carries around a lot of conflicting beliefs. I have met people who claim to see the world as perfectly dichotomous—where everything is clearly right or wrong—for example, under the guidance of some religious or political doctrine. I’m definitely not one of those people, and I know I’m far from alone. The RLD must be able to deal with all of us.
We, at a given moment, may be agnostic, or believe different things depending on what we’ve heard in the news that morning, or be sensitive to good arguments on all three sides of an issue, and we might sincerely say we believe P1 true while otherwise consistently behaving as though we believe P1 false (which one is the real belief?), and on and on and on.
And don’t forget to put back in the “trying to convince someone” criterion (see point (5) for more on this). It’s not looking good for the RLD.
(4) What about using an RLD in conjunction with behavioral cues?
The sorts of behavioral cues currently relied on—like raised blood pressure and sweating (i.e., galvanic skin response) and pupil dilation—are too crude to contribute to an RLD. I’ve already talked about these difficulties and linked to articles and so on in the aforementioned post, “Attention: Mind the Mind Gap.” I won’t go into those details here, but will paint in broad strokes.
The reasons these are crude are obvious. Some truth-tellers get nervous just for being put on the spot, and some liars are pros. Though you don’t need to be a professional liar to fool today’s machines—including the most sophisticated machines using the best brain-decoding techniques to date—once you know how.
That said, tracking a specific individual’s behavioral cues has recently yielded statistically better results than do one-size-fits-all methods. The research I’m thinking of is from 2012, out of U Buffalo—specifically, they tracked eye patterns. That technique has the usual issues (e.g., some folks are great at lying), but a fair interpretation of the results is to imagine that, one day, a very smart AI could observe your behavior and collect and make predictive use of data that correlates with when you’re lying and when you’re telling the truth. It makes perfect sense that this would be better than a one-size-fits-all test. But this, too, is problematic.
For one thing, there is a concern about test settings being low-stress and low-stakes compared to real-world settings where lying matters most (I’m imagining the development of such tests relying on bored college students looking to get an extra half-credit for their Psych 101 course). That’s a practical worry, however. In principle, if we toss out ethical concerns about how to collect data—e.g., from genuinely suspected wrongdoers—results might be improved.
Still, it seems that, as such tech becomes better known, it wouldn’t be hard to introduce noise into the data. For example, intentionally moving one’s facial muscles in strange ways or thinking particularly stressful or erotic thoughts while telling the truth and particularly calming thoughts while lying. Or, if one is skilled enough in meditation, keeping one’s mind at rest in all cases. And, of course, if one is suspected of a severe crime, whether guilty or innocent, the very process of collecting data could be quite stressful, which in itself may introduce noise (as is currently the case with polygraphs).
RLD-designers could then respond with increased invasiveness. Maybe there’s some administration of a drug cocktail combined with having the person intentionally lie in both trivial (“I was born on Krypton”) and non-trivial (“I killed my entire family”) ways. All of this in order to lead up to the question, “Are you guilty of the crime at hand?”
These solutions aren’t very promising, but are more promising than just looking at a person’s brain.
One might wonder by now why we’d need to ask the person if they committed the murder at all, given that, if the RLD can read minds well enough to be an RLD, it should be able to tell whether the person committed the murder just by looking at the brain for the appropriate engrams. This poses even more problems than does trying to detect lying. The brain decoder will need to know which engrams encode the memory of what actually happened, rather than what might have been dreamt of or contemplated as an alternative set of actions. That is, the suspect might have robustly imagined several scenarios in which the murder was not committed, and those engrams might present as robustly as does the memory of what actually happened. Indeed, the suspect might create competing memories just for the sake of fooling the decoder. And, as always, it’s critical to keep in mind that humans have bad memories; even what “actually” happened may be inaccurately depicted in memory, and even if it is well depicted, it may be the memory of a misleading or informationally narrow point of view.
So, I’d put my money on an RLD before I’d put it on a broad memory decoder, even though the RLD is also greatly challenged by problems related to memory. I’d bet in particular on an RLD, then, that works in conjunction with personalized behavioral analysis, of which there is an extreme version that I’ve not yet considered.
At birth, we could install into all infants a chip that would collect data on the person’s behavioral cues, correlating them with brain activity and, somehow, with social outcomes, like being clearly caught in a lie (which may have to be somehow reported by others whose input would likewise be considered reliable). This could also be used to track whereabouts and other data, so that simple questions (“Where were you on the night of the crime?”) don’t even need to be asked. In other words, the RLD would only be needed for the more complicated questions, which, again, will pose the usual “complicated situation” problems. So, maybe this would make for far more reliable methods than we have now, but I’m not sure this would get us into RLD territory.
Not to mention the dystopian air of this scenario (it may be sold as a step towards utopia—but forced utopia is the shortest route to dystopia, right?). And, again, it’s easy to imagine folks learning to game this technology (e.g., by introducing noise) and to even hack it in order to frame people. This is only scratching the surface of the awfulness that accompanies such solutions.
It’s easy to imagine less invasive versions of this, in which, for instance, rather than a chip, one carries around an app and voluntarily feeds it information; to not do so might be stigmatized as “having something to hide” (who doesn’t, though?!?). I don’t see how this could work, and it seems it would be particularly easy to hack (something habitual liars would be most interested in doing), but it’s a fun idea—the smartphone era’s answer to the mood ring.
Another alternative here is, rather than detecting lying, increasing the likelihood of telling the truth. For example, researchers can now apparently induce out-of-body experiences and will probably get better at such things as long as they keep at their research. Maybe a combination of some neurological procedure, drugs, and some mind-blowing virtual reality could put someone into a truthful state—maybe by convincing them they’re standing before their Judge and Maker at the gates of eternity.
That’s not an RLD so much as a Reliable Truth Inducer—but how to know the RTI is working without an RLD?? So let’s return to the RLD.
(5) What if we remove the “trying to convince someone” criterion altogether?
Above, I set aside the “trying to convince someone” criterion. Must we put it back in? Surely one can utter a sentence one believes to be false without the goal of convincing anyone that the sentence is true; and this utterance may or may not amount to a lie. What’s the distinction? Suppose you say something sarcastically: you’ve said something you believe to be false, but with the aim of convincing someone of its opposite. In that case, you might have uttered or said a sentence that, at face value, is false, but you have communicated something true (or have at least intended to communicate something true).
I take it that “to intend to communicate information” is equal to “to try to convince someone that the information is true.” But maybe only roughly equal. Maybe they trade, at their edges, in distinct sorts of information. Maybe the former is a little broader and includes affective, emotional, attitudinal, implicit, and suchlike information, while the latter deals more strictly with explicit propositional information. Maybe. I’ll leave this open-ended.
That said, sarcasm itself does add something extra to the propositional information being communicated, but it’s often not quite clear what that is—it could amount to mood-lightening humor or some kind of threat.
And something said sarcastically may or may not amount to a lie, may be an effective way to vaguely leave one’s own beliefs about a proposition open-ended or open to interpretation. In other words, the speaker’s intentions may be buried or muddled, and as such barely accessible even to the speaker, much less a brain scanner. This obfuscation, then, could either arise from sarcasm (e.g., to hide, hedge on, or blunt the impact of one’s true beliefs) or it could be the cause of sarcasm (e.g., if the speaker is actually unsure of what they believe).
Whatever the case, it matters what is being communicated and why; in other words, intentions matter—so I think the RLD has to be able to make sense of such noise. Though I don’t know how it could.
Still, sarcasm presents a relatively easy case, as we can at least say that sarcastic speakers generally intend to communicate the opposite of their utterances’ propositional content. As usual, things get harder with more complicated examples. As I consider some, keep in mind that the question here, at its core, is about whether the intention to convince is something the RLD must detect in order to count an utterance as a lie.
If you say something nonsensical, like “Tolstoy giraffes beam bee seeds to Mars,” it’s not something you believe (as its content is trivially false), and it cannot be something of which you aim to convince a typical mature human, for the same reasons you couldn’t believe it. But are such statements lies? This depends on context. If you’re asked, “How old are you?,” and you answer “13.8 billion years,” you are lying but you aren’t trying to convince anyone of this obviously false proposition. Rather, you’re avoiding the question. And you are communicating something in doing so; it could be “I’m really old” (jokingly) or “I’m not telling you anything” (defiantly). Those things might be true and obvious from context. In which case you’ve communicated something true by obviously lying. How shall the RLD deal with this?
It’s also interesting to imagine that a supposed RLD could be fooled in this instance into rating “13.8 billion years” as a true utterance, if the person uttering the statement intentionally focuses hard on thoughts that appropriately frame the question so that this answer is in fact true—e.g., “I, as a good secular humanist, view myself as a constitutive part of the universe, which was born approximately 13.8 billion years ago; so I was born 13.8 billion years ago.”
This points to a problem for the RLD that current lie detection also faces: the questions asked must be clearly framed and unambiguously stated and understood by the questioner and the answerer.
But that’s not all. In other instances, one might say something everyone knows is false, but with something very close to an aim of convincing everyone in the room that the false statements are true. Stage actors might do this while performing. At the very least, actors routinely say false things as though they are true, but the actors are not lying.
I think you get the idea. So here’s where I’ll land on this. I think the RLD—to meet useful standards of reliability—absolutely must be able to distinguish, at a minimum (as regards convincing; this list doesn’t account for phenomena mentioned in other parts of this writing, e.g., when a false memory is believed to be true and is the basis for an utterance):
…when you are merely uttering something you believe to be false, for whatever reason (“I was born on Krypton”);
…when you actually hope to convince someone of a falsity;
…when you say something you believe false but really don’t care either way if the person you’re talking to believes or remembers what you’re saying.
The third point above merits a brief linger. It’s easy to imagine someone lying, but with no intention to convince anyone of, or to communicate, anything. In other words, I might lie, but with no care for whether you believe or take anything at all from what I’m actually saying. One could spend all day listing examples, from banal and inconsequential ones to, well, the opposite of banal and inconsequential.
The upshot of all this is twofold: I don’t know what to do about the “trying to convince someone” criterion; and even trying to state a clear and simple definition of “lying” poses a challenge for would-be RLDs.
(6) The easy way to make an RLD: Cheat.
I now return to the earlier noted possibility of arbitrarily operationalizing the concept of lying for the sake of easier lie detection. In other words, we could simply adjust our human-made conception of lying so that it’s something an RLD can detect.
This is a terrifying thought. It’s also the easiest way to proceed and may in fact be something to genuinely worry about. Here’s an absurdly exaggerated example of what I mean.
It would be trivially easy to make an RLD if our reliability standard is strictly to catch all instances of lying. All we have to do is define “lying” as “any human utterance.” This would guarantee that anything we now think of as a lie will indeed be counted as a lie. This is analogous, by the way, to having only a standard of sensitivity optimization in disease detection, and is as obviously useless as the doctor who, in order to catch every instance of a rare disease, diagnoses every human on Earth with the disease. That is, you’ll also get loads of false positives. So we need some way to accurately call some statements “not lies” (analogous to specificity in disease detection).
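The disease-detection analogy can be put in standard terms. A detector that flags every utterance as a lie scores perfect sensitivity and zero specificity, which is why sensitivity alone is a useless reliability standard. The function and the tiny ground-truth data below are purely illustrative:

```python
def flag_everything_as_lies(utterances):
    """The 'cheating' RLD: every utterance counts as a lie."""
    return [True for _ in utterances]

# Invented ground truth: which of five utterances are actually lies.
truth = [True, False, False, True, False]
pred = flag_everything_as_lies(truth)

tp = sum(p and t for p, t in zip(pred, truth))          # lies caught
fn = sum((not p) and t for p, t in zip(pred, truth))    # lies missed
tn = sum((not p) and (not t) for p, t in zip(pred, truth))  # truths cleared
fp = sum(p and (not t) for p, t in zip(pred, truth))    # truths accused

sensitivity = tp / (tp + fn)  # 1.0: every lie is caught
specificity = tn / (tn + fp)  # 0.0: every truth-teller is accused
print(sensitivity, specificity)
```

An RLD worth the name has to score near the top on both axes at once, which is precisely what the trivial definition dodges.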
This suggests a less absurdly exaggerated example of how to adjust the concept of lying. Namely, to operationalize “lying” in some way that can be measured with more nuance. For example, by putting electrodes on your face and detecting some amount (i.e., according to some threshold) of activity in such-and-such muscles, and so on.
We can do this with any human emotion we care about. Imagine, for instance, operationalizing “happiness” as smiling so many times per minute, where “smiling” is operationalized as such-and-such muscle activity, where “muscle activity” is operationalized according to some quantifiable threshold that can be detected by electrodes and turned into electrical impulses sent to a machine that outputs a reading that a human can look at and say, “This person is happy,” or, more precisely, “This person rates a .7 happiness ratio,” or some such.
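The chain of operationalizations just described can be sketched as a pipeline of thresholds. Every name, muscle, threshold, and number here is invented for illustration; the sketch only shows the structure of the move, not any real measurement protocol:

```python
def muscle_active(electrode_reading: float, threshold: float = 0.5) -> bool:
    # "Muscle activity" operationalized as an electrode reading
    # above an arbitrary threshold.
    return electrode_reading > threshold

def is_smile(cheek_reading: float, eye_reading: float) -> bool:
    # "Smiling" operationalized as such-and-such muscles firing together.
    return muscle_active(cheek_reading) and muscle_active(eye_reading)

def happiness_ratio(readings: list) -> float:
    # "Happiness" operationalized as the fraction of sampled moments
    # that count as smiles -- e.g., "this person rates a .7."
    smiles = sum(is_smile(cheek, eye) for cheek, eye in readings)
    return smiles / len(readings)

# Ten invented (cheek, eye) electrode samples for one minute:
samples = [(0.8, 0.9), (0.2, 0.9), (0.7, 0.6), (0.9, 0.8), (0.1, 0.1),
           (0.6, 0.7), (0.8, 0.6), (0.9, 0.9), (0.3, 0.2), (0.7, 0.8)]
print(happiness_ratio(samples))  # 0.7
```

Note how each layer quietly replaces a messy human concept with a measurable proxy; by the time the machine prints “.7,” the word “happiness” is doing no work at all.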
Operationalizing and controlling in this way may have excellent applications in a lab, and I’m probably doing an injustice in my portrayal of the approach, but I cannot help but be cynical about the prospect of taking the very messy stuff that we humans vaguely conceptualize as lying and reducing it to some set of physical outputs that can be easily measured by some machine, and then declaring, “We now have an RLD and can catch all the bad guys” (where “bad guys” is operationalized in accord with some moral system that is the momentary fashion of some particularly influential segment of society).
Unfortunately, we tend to be OK with folks redefining terms, so that six people in a room can argue about “X” when they have six entirely different things in mind. But that’s a topic for another writing (which I have in mind and will post soon-ish).
So I’ll just clarify here that the “L” in RLD stands for “lie,” and that by this word I mean the concept I defined at the start of this writing, which I have clearly shown to be a major oversimplification of a broader notion of “lying” that RLDs would have to account for—a notion that roughly matches the way most English speakers use the word “to lie” right now (i.e., April 18, 2019). I also understand this usage to more or less align with what most Spanish speakers mean by “mentir” and most Chinese speakers mean by “撒谎” and, well, you get the idea.
Yet another alternative here is to make RLDs more effective by altering human behavior—and even human thought—to be less complicated. We might, for instance, expect people to talk in short, propositionally explicit, monotone phrases so the RLD can do better. (This scary scenario is reminiscent of humans talking more and more like chat bots, making it easier for chat bots to pass Turing tests.)
All of these, and similar, alternatives amount to cheating as far as I’m concerned. But make them widespread enough for long enough and they might become taken for granted and even viewed as “progress.”
(7) Even if a computer has full access to our mental content, the RLD won’t work.
There are various ways a computer could gain full access to the parts and operations of your brain. I’ll consider one such option as a thought experiment for thinking more deeply about RLDs.
Some folks these days (not me) think there is a high probability that we live in a computer simulation. (See especially Nick Bostrom’s simulation argument.) Suppose we do live in a simulation. We might think, then, that someone with access to the simulation’s code would know when we are lying. I remain skeptical, however, about reliable lie detection in such a world.
Just because a mind has been coded into existence does not mean its contents are externally accessible—whether you are the coder, a hacker, or the software running the code. Even the relatively simple self-programming AIs we can make now are too complex for a human to make sense of. So we must instead make inferences about programming from behavior, rather than being able to make predictions about behavior from code. (I have heard of technicians working with sociologists, for example, in order to make sense of self-teaching, self-programming robots’ behavior.)
In other words, while the software does contain, in code form, everything that constitutes the mind of a speaker, that code is simply too complicated for a human (or even an imaginable superhuman) to understand. The alternative is to have some software that analyzes the code in search of a lie (i.e., a lie detector), but this just makes the original problem reemerge: there must be some way of reliably correlating various streams of code with what we call “lying.”
That is, any comprehensive report from the software about what a subject is thinking would just be the unintelligibly complex code corresponding to what the subject is thinking; even a simplified report might be millions of pages long. What we want, rather, is a short report that amounts to, “the subject is lying.” This, however, is the same problem we have now, but we’ve replaced, “brain parts” and “engrams” and so on with “code.”
That said, such a program would be better at catching a person in what we might call material lies. For example, if I say I only had one drink last night but I really had six, the computer will know I’m lying. This doesn’t fix the deeper and more intricate problems that arise in more complicated situations, but it would certainly allow the computer to know whether the defendant pulled the trigger of the gun that launched the bullet that killed the deceased. Of course, in such a case, the program already knows this, so there is no need to ask the defendant anything, and no need for an RLD (provided law enforcement can get ahold of that information, which seems unlikely). The harder problem here surrounds questions of motive and whether it was self-defense and whether the defendant was of sound mind or somehow misled and so on. That’s what the RLD is needed for, but also where it will fail.
The response here might be that there is at least an approximation of an RLD in the simulated world; we just don’t have access to it. Which is to say that RLDs are not impossible in principle, even if they are impossible in practice. I disagree, however, as we are actually in that situation now. Specifically, I generally know if I’m lying to you: I am my own RLD (even though I might try to lie to myself). This is the best sort of RLD imaginable, provided I turn it on (e.g., by sincerely reflecting on my thoughts and behavior to determine whether I’m lying; this includes opening myself up to observations by others about my behavior). Indeed, keeping my internal RLD turned off or malfunctioning is the most effective way to evade external lie detectors (a tactic I’ve described here, for example, as introducing noise). So, there are approximations of RLDs, but they aren’t useful to the world outside of my skull, especially if they’re not running well. This is, in a real sense, precisely the problem I’m attempting to describe throughout this writing.
Finally, and as usual, for more complicated situations, the simulation program will face the same challenges in constructing an RLD as the inhabitants of the simulation itself face (or, in other words, that we now face, simulated or not). And, like us, it might cheat to get the job done.
Why might the simulation software cheat? One reason could be that the simulation is programmed to transfer liars to a worse sector of the program once they die—a kind of Hell. The simulation will need to be able to parse lies and non-lies to do so.
(8) The RLD Can’t Detect Evil, Mind-Controlling Neuroscientists.
When I say I was born in Philadelphia, I am telling the truth. But suppose, at the moment of utterance, I’m being controlled by a team of evil neuroscientists, so that it is not really me saying this thing, but the neuroscientists, none of whom were born in Philadelphia. How would any RLD detect this as a lie?
The point of this thought experiment is to contemplate how to treat utterances made by an individual that, in some literal and robust sense, are in fact attributable not to the speaker, but to some other individual or group.
Why am I interested in this? It’s part of a bigger project of reminding myself (and others) to not assume I know what others are thinking and feeling, and that there’s no easy way to gain such knowledge. I also happen to think this gap, if not properly acknowledged, poses one of the greatest threats to humanity. In my aforementioned article, “Attention: Mind the Mind Gap,” I suggest that our best bet is to pay sustained and thoughtful attention to one another with this limitation compassionately in mind. It seems to me that, as faith in brain science and technology deepens and spreads, our acknowledgement of the mind gap lessens proportionally.
I'll close with an excerpt from the latest episode of Tyler Cowen's wonderful Conversations with Tyler podcast, which I happened to listen to while wrapping up this post. The interviewee is Ed Boyden, a neuroscientist and neurotechnology expert (among other things). Listen and read the transcript here: "Ed Boyden on Minding Your Brain" (Episode 64; 4/10/19). Keep in mind that Boyden's response, which I found thoughtful and reassuring, is given on the spot (starts at around 27:40):
COWEN: Would it be good if we had a fairly accurate lie detector? It could read your brain waves, maybe the tone of your voice, your micro expressions, but it would be able to tell if you’re telling the truth or not, and you wouldn’t even have to consent to be hooked up to it.
You’d go out on a date. You’d turn on your lie detector. It would give you feedback throughout the course of the date. Is this a net social good? If it’s bad, do we have a way of stopping it?
BOYDEN: That’s a good question. I think there’s a lot of questions about lies and memories that make the question very nuanced. For example, there’s a field in memory research where they look at what’s called reconsolidation. What happens is, basically, when you recall a memory, it becomes fragile, and there are studies about how even false memories can be induced.
They’ve done experiments where you show somebody something in a photograph or in a story, and then a week later you ask, “Oh, how was your experience?” It wasn’t their experience. It was in a photograph or a story. Some fraction of the people will remember it as their experience.
If we start thinking about these kinds of topics that connect to everyday human experience, like lies and memories and so forth, I think we also have to delve very deeply into the underlying neuroscience so we know what we’re talking about and what we’re doing.
For example, if somebody talks about something in a certain way, but it was not an intentional lie, it was a false memory — what are the subcategories of different things that people say? What about things that are coupled to external influence, like the example I just gave, where the memory was affected by an external photograph or story?
Anyway, I think that we have to make sure that the science accelerates as fast as the technology. Sometimes I say that the hard part of neurotechnology is the neuro part. Maybe we’ll have the ability to scan brains and read out information with unprecedented accuracy at some point in the future, but if we don’t understand the underlying processes and what the information means, then we might do the wrong thing with that information.