Utilitarianism and Conscious Computers: An Unsettling Utopia?

Estimated read time (minus contemplative pauses): 20 min.

Broadly speaking, utilitarianism is the view that right action is that which promotes the greater good. It has been revised, developed, and adapted into varying systems of thought over the last three hundred years or so. (For more on that, see the Stanford Encyclopedia of Philosophy entry The History of Utilitarianism and the Internet Encyclopedia of Philosophy entry Act and Rule Utilitarianism.) I’ll focus here on a common understanding of utilitarianism in which the greater good is evaluated as a function of aggregated happiness or suffering.

I generally rely on the terms happiness and suffering to refer to opposing ends of a spectrum that runs from experiences of the greatest possible positive valence to those of the greatest possible negative valence. On the view I explore here, right action is that which results, on balance, in the most happiness or, at a minimum, in the least suffering; suffering may be diluted or nulled by happiness, and vice versa. Call this the aggregate utilitarian (AU) view. (I take this approach to be in line with what’s sometimes called average utilitarianism, though I’m not committed to any strict aggregation calculus; I’m interested in any utilitarian system that aims to evaluate preferences by aggregating experience.)

An interesting problem arises at the intersection of this view and the idea held by many that conscious computers are possible. (For brevity’s sake, I’ll simply say that, by conscious, I mean having the capacity for experience, in particular complex experience—something along the lines of your capacity to experience the cold of an ice cube, the pain of a needle prick, the nagging thought that you should wash some clothes, and the longing for an absent loved one.) I’m not convinced that conscious computers are possible, but if I were an AU (I’m not), I might think it our duty to strive to create and mass produce such beings due to the following observation:

Given enough happy computers, an AU would be obliged to say that the suffering in the world is now, proportionally, negligible. That is, as the number of happy computers increases, suffering’s share of the world’s aggregate experience tends to zero, making that world increasingly preferable to one—all else being roughly equal—without conscious computers.
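
To make the arithmetic behind this observation explicit (a minimal sketch with symbols of my own choosing, not anything drawn from the utilitarian literature): let S and H be the fixed amounts of suffering and happiness among existing beings, and suppose each of n added computers contributes happiness h > 0 and no suffering. Then suffering’s share of the aggregate is

\[
\frac{S}{S + H + n\,h} \;\longrightarrow\; 0 \quad \text{as } n \to \infty,
\]

even though the absolute quantity S is untouched.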

This observation has a kinship with population-centered ethical dilemmas such as that found in Derek Parfit’s Repugnant Conclusion: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.”1 The entrance of conscious computers offers new theoretical problems given that they are not biological beings—i.e., neither human nor animal.

Aggregate utilitarianism offers no obvious solution for dealing with the experience of nonhuman animals, presumably in large part because we humans don’t know what it’s like (if anything, e.g., in the case of a worm) to be those animals.2 Does the suffering of a chicken, a dog, or a bonobo count as much as—i.e., have the same quality as, have the potential to be as intense and harmful as—that of a human who is: healthy and law-abiding; horrifically criminal; an infant; elderly, barely conscious, and terminally ill; a beloved celebrity? Computer experience poses similar difficulties.3

Some might argue that computer experience isn’t in the same moral class as that of animals and therefore shouldn’t be aggregated with animal (or in particular human) experience. I agree that conscious computers would likely demand special moral considerations, thus constituting one or even multiple moral classes, though I don’t think this would necessarily exclude them from the broader aggregate. With this in mind, let’s reflect on some of the features that distinguish computers from humans.

Computer memory performs quite differently from human memory. Indeed, it strikes me that the word memory might not be appropriate for how computers encode, store, and retrieve information, particularly for non-conscious computers, in which many terms associated with animal consciousness—e.g., representation—are used metaphorically. Introducing information into a computer’s storage, so that it may later be retrieved wholly intact, is quite different from what happens in a biological brain, which rarely affords its bearer unaltered memories.4 Computer memory is at its core a matter of storage and retrieval, while the processes underlying human memory are varied, complex, and little understood.5

Put simply, computers have perfect memories and humans do not. When I was 15, I burned my arm on a wood stove. It hurt a lot. I now barely remember what it felt like, though I have ever since been more careful when doing things around hot metal. Even people who remember too well, whether due to supercharged autobiographical memory (i.e., hyperthymesia) or not being able to shake a horrible experience (PTSD), still remember imperfectly. And many humans who live with outsized suffering have some moments that make life worth living—moments of joy and laughter. Moments of forgetting.
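
The contrast can be put in deliberately crude computational terms. What follows is a toy sketch of my own (the class names, the noise parameter, and the decay rule are invented for illustration, not claims about how brains or conscious machines would actually work): the computer’s recall returns the stored trace exactly, while the human-style recall rewrites the trace a little each time, loosely echoing the reconsolidation story mentioned in footnote 5.

```python
import random


class ComputerMemory:
    """Toy model: lossless storage and retrieval."""

    def __init__(self):
        self._store = {}

    def encode(self, key, experience):
        self._store[key] = experience  # stored exactly as given

    def recall(self, key):
        return self._store[key]  # returned exactly as stored, every time


class HumanMemory:
    """Toy model: each act of recall slightly rewrites the stored trace
    (a cartoon of the reconsolidation story in footnote 5)."""

    def __init__(self, noise=0.1):
        self._store = {}
        self._noise = noise

    def encode(self, key, intensity):
        self._store[key] = intensity

    def recall(self, key):
        # the remembered intensity fades and drifts a little on every recall
        drift = random.uniform(-self._noise, self._noise)
        self._store[key] = max(0.0, self._store[key] * (1 - self._noise) + drift)
        return self._store[key]


if __name__ == "__main__":
    computer, human = ComputerMemory(), HumanMemory()
    computer.encode("stove burn", 9.0)
    human.encode("stove burn", 9.0)
    for _ in range(20):
        computer.recall("stove burn")
        human.recall("stove burn")
    print(computer.recall("stove burn"))  # still exactly 9.0
    print(human.recall("stove burn"))     # has faded well below 9.0
```

The point is only the asymmetry: one trace is immune to the act of recalling it, the other is changed by it.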

Consider the moral implications of a computer’s capacity for perfect memory. By remembering, a computer is recreating the original experience whole, unaltered, and undiluted. In other words, a conscious computer could be made to feel intense happiness (or suffering) in its most robust and intense forms indefinitely, simply by introducing the source of such experience to the computer one time. This arguably provides a special moral category for computers.6

Notice that the human limitation on memory falls under a more general limitation on imagination: we cannot imagine the sensation of, say, eating chocolate ice cream as vividly as we experience the actual eating of the stuff. (It would be nice if we could, but this would have been an evolutionary hindrance—to simply imagine away hunger, lust, and other survival drivers would be to imagine away survival.7) A free computer—i.e., one not overly restricted by its programming—can live from moment to moment in whatever world it cares to represent to itself (within the strictures of logical possibility, its energy resources, etc.).

Given these differences, the metaphysics of their identity will likely differ: a computer could, for example, partition or wipe clean its painful memories in order to avoid re-experiencing them. But would this count as a kind of localized death/suicide and thus be something to be feared, or would it be insignificant to the computer? Similarly, to design a conscious computer with forgetting built in—that is, with a deliberately impaired memory capacity—might perturb the integrity of that computer’s personal sense of identity.

The mention of metaphysics shouldn’t be dismissed lightly. By computer, I don’t mean a physical machine—a software-running box with some chips and wires inside. I’m talking about something that perdures through time and space as an experiencing entity that recognizes itself as existing; something more like a person, or at least as robust as a person—maybe more so given the computer’s capacities. The computer that lives for a million years has changed physical parts again and again, has gone from silicon chips to who knows what materials, all the while maintaining its sense of self, its singular, first-person locus of perception as a self-aware, thinking being. In some ways this metaphysics is similar to that of human identity; but given the computer’s special capacities, its metaphysics will be different in ways about which we can only speculate. Indeed, this may play out differently for different computers, accidentally or by design.

A key distinction for the basis of a conscious computer’s identity metaphysics may be that it need not function within—or at least need not define its (moral) identity as part of—a social system. This could be true if a single computer operates as a closed system, or if all the computers within a network actually view themselves as being identical with a single entity, recognizing no network as such at all. This may strike some as a point against aggregation, as it is our involvement in a society that gives us the sense that our experience, and thus our moral self, is subsumed by the interests of the group(s) of which we are a member—in other words, the sense that our experience is aggregate-able. If computers aren’t part of human society, perhaps they shouldn’t be aggregated along with humans.

Similarly, we tend to count animal experience according to how we relate socially to those animals. We might, then, find ourselves intuitively wishing to include in our aggregates a computer whose form resembles that of a human or a dog, but not that of, say, feces or a rifle.8 What really should count, it seems to me, is the sophistication of a being’s faculties for rich and complex experience: how deep does the suffering go, and how high the happiness? And this experience happens at the level of the individual, not the group. Rather than appeal to that line of thinking (which I think ultimately substantiates a strong argument against AU altogether, by the way9), I’ll note that bringing computer experience into the aggregation fold would merely require that those computers be designed to be a significant part of human society; perhaps computer happiness would require some sort of regular attention from humans. (This doesn’t solve the problem of what the AU should do with experiencing beings isolated from human society, but it does avoid that problem in the case I’m outlining.)

To emphasize the significance of social interaction, notice how intuitively easy and natural it is to conceive of comparing the preferability of two distinct worlds, rather than intuiting those worlds as being part of some greater sphere of aggregation, such as the universe or all existence through time and space. In other words, on contemplating two worlds—A and B—side by side, the intuitive assessment is that “there’s now x amount of suffering in World A and y amount of suffering in World B” rather than “there’s now x+y amount of suffering in the universe.” This will be true even if those worlds exist simultaneously in the same solar system. It seems to me that these intuitions are in response to those worlds being socially distinct, rather than anything having to do with physical space. Notice how the intuition is affected if, for example, inhabitants of two planets are able to freely teleport back and forth between those planets, living on one and working on the other, etc.; I could go on developing this into a single society (e.g., imagine the planets have the same laws)—the planets begin to feel like a single world, a single sphere of aggregation.10

So, experience-aggregation is a matter of conceptual rather than physical partitioning. We can partition according to ordinary social group membership (race, gender, etc.) or arbitrarily, just as we can with any social group ontology—say, according to hobby, vocation, hair color, or random activities (bus riders). But aggregate utilitarianism is concerned with the worlds made up of all the experiences within a dynamic social system. Whether that’s a contained sphere of activity or the universe or the entire scene of existence across all possible universes past, present, and future is not obvious, but what does seem clearer is that when experiencing beings function within a system such that the behavior of one of those beings can influence the experience of other beings (call this a world or a society or whatever you like), the AU’s aggregate calculus applies.
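
For concreteness, here is one way to picture the claim that the aggregate calculus applies wherever beings can influence one another’s experience. This is a toy formalization of my own (the function and example names are invented for illustration): beings are nodes, “can influence the experience of” is an edge, and each connected component is one sphere of aggregation.

```python
from collections import defaultdict


def spheres_of_aggregation(beings, influences):
    """Toy model: each connected component of the influence graph
    is treated as one sphere of aggregation (one 'world')."""
    graph = defaultdict(set)
    for a, b in influences:
        graph[a].add(b)
        graph[b].add(a)

    seen, worlds = set(), []
    for being in beings:
        if being in seen:
            continue
        # flood-fill the component this being belongs to
        world, stack = set(), [being]
        while stack:
            x = stack.pop()
            if x in world:
                continue
            world.add(x)
            stack.extend(graph[x] - world)
        seen |= world
        worlds.append(world)
    return worlds


if __name__ == "__main__":
    beings = ["ann", "raj", "computer_1", "hermit_computer"]
    influences = [("ann", "raj"), ("raj", "computer_1")]  # the hermit computer influences no one
    print(spheres_of_aggregation(beings, influences))
    # e.g., [{'ann', 'raj', 'computer_1'}, {'hermit_computer'}]
```

On this picture, a computer worked into human society lands in the human component, while an isolated computer forms a sphere of its own, which is just the open question about isolated experiencing beings flagged above.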

Humans as such (i.e., by definition) are significantly bound to their bodies, which provides a locus for individual behavior, though that behavior’s sphere of influence is harder to track; solitary confinement is an extreme example of cutting off a human’s influence (execution being yet more extreme, and perhaps the most extreme would be damnatio memoriae, an ancient Roman punishment the aim of which was to erase all physical evidence of one’s having existed). It’s not clear whether a world in which all experiencing beings are in solitary confinement is possible (there’s no interaction, and so it’s not a world, or at least not a society) or would be exempt from aggregation (on any reasonable utilitarian view). Nor is it clear that computers can be contained in this way, especially not the ones we’d be most interested in containing. But it does seem clear that happy computers worked into the human social fold would not be exempt from aggregation.

Or at least being in the human social fold should make a computer a solid candidate for inclusion. With that in mind, I’ll reiterate that computer experience should be included for the same reason often cited for including the experience of apes: its experience is sophisticated. Indeed, such a computer may even count as a person, or at least as something person-like. Furthermore, to ignore those grounds for inclusion sets a bad precedent for utilitarianism in the human case (though perhaps that shouldn’t be the guiding principle here, given that I presume the utilitarian operates according to some deep moral truth, even if inconveniently), from which follows the designation of humans into distinct moral classes based on things like mental acuity and capacity for complex experiences. We do this already to a degree—e.g., with the brain-damaged and with young children—but there is an important distinction to be recognized between the assignation of moral responsibility or agency (low in the case of dogs, for instance, given their limited capacity for moral reasoning) and expected moral treatment (i.e., as moral subjects; high in the case of dogs, given their great capacity for suffering). These are finer distinctions than I’ll endeavor to parse out here, though I will note that a conscious computer’s capacity to suffer, at least in terms of physical pain, merits special consideration. It’s unclear, however, what a conscious computer’s capacity for complex emotions such as anguish or fear could be, as this is a question of the degree to which the world has meaning for a conscious computer: experience is one thing (a chicken likely has experience), and attaching meaning to experience is another (a chicken likely does not, certainly not to the degree more cognitively sophisticated animals do). Depending on just how sophisticated the computer’s experience is, we may be obliged to weight it more than that of humans, for the same reason we now give more weight to human experience than to that of chickens.

To summarize a few key points: It seems that a nonhuman entity is a prime candidate for inclusion in the AU aggregate calculus if that entity has some role to play in human society and, furthermore, has sophisticated experience, given that mere experience—or what’s often called primary experience, of the sort a wasp might have—seems insufficient (I’ve encountered no debates about including insects in the aggregate); in other words, merely having internal, phenomenological representation of external stimuli is not enough (even if a stimulus is a drug that results in a kind of pure euphoria): you also need an internal sense of things like desire and disappointment.11 Finally, such an entity may merit special moral consideration on metaphysical grounds related, among other things, to memory. Note that attempting to minimize the moral concerns about what any single computer experiences will likely fail. For example, attempting to create a supply of meaningless happiness, experienced as a steady, memory-less stream, won’t work: experience, even if had by a tremendous number of beings, requires at least some meaning in order to be aggregated with human experience (as noted above), and it seems unlikely that experience without memory is possible (see my piece on Memory and Consciousness [via Audition]).

There are other interesting questions to consider:

–Some, like Andy Clark and David Chalmers, persuasively argue that when a human uses a computer, that computer is a literal extension of the human’s mind.12 Need that only be a kind of unconscious mind-extension with which a human meta-cognitively interacts? Or, rather, when that computer is conscious and happy, could we say that some (or even the bulk) of that human’s happiness resides within that computer? (I have a similar question for Clark and Chalmers regarding when, say, human spouses extend each other’s minds. In short, my broad question is: What is the phenomenology of the extended mind when “Otto’s notebook” is conscious? [A more immediate example of which is a spouse.])

–We are unable to communicate with nonhuman animals, making it seem harder to imagine what it’s like to be them. But as Ludwig Wittgenstein once pointed out, if a lion could speak, we wouldn’t understand it. Conscious computers could, we imagine, fluently speak our natural (and not just programmed) languages. But is this correct? In the simplest instances there’d be no problem: “the book next to the cup of coffee” would designate precisely the same objects for a computer and its human companion. But anything much more complex than that—“I am hungry” or “that shade of orange makes me uncomfortable” or “your comments hurt my feelings” or “I miss my lover”—may become increasingly untranslatable, to the point of constituting a wholly different language. This could have far-reaching moral implications for human-computer relations. I’m reminded here also of George Bernard Shaw’s observation that “England and America are two countries separated by a common language.”

–There is an interesting phenomenon with humans wherein it is relatively easy to cause intense suffering (setting a person, or that person’s loved ones, on fire would do it), but inducing an analogous level of happiness is quite difficult. Furthermore, fire will pretty much always hurt a conscious human, but, say, an aesthetic response to a beautiful song or the euphoria from a certain drug can become quickly diluted: the song becomes too familiar and we are desensitized to the drug. Given the nature of computers—something teleologically easier to pin down than anything we could uncontroversially call “human nature”—perhaps there is for them no such dilution. Perhaps they can exist in a state of bliss without being harmed: they are not being developed within the vicious Darwinian world in which the sensibilities of us animals arose—in which constant bliss is maladaptive. What might be the moral implications of this phenomenological asymmetry in computers?13

–Health concerns will likely differ. For example, computers don’t require sleep; there’s a certain forgetting and rejuvenation and healing etc. that come with human sleep, not to mention its social aspects (e.g., intimacy; we see this among nonhuman animals as well, and not only because sleeping requires a kind of active collaboration in insecure environments). In general, mental and physical health may not take on the Cartesian dualistic sense in which we humans tend to conceive of those things, but rather may be seen as healthy software and hardware functioning (respectively), along with a kind of overall sense of satisfaction or wellbeing had by the computer. This requires a certain kind of internal organization, relation to the world, and energy access (to mention a few factors that come right away to mind).

–Computers have the potential to live (so to speak) indefinitely, which may lead them to fear death or permanent impairment more than humans do, and value their existence and some details thereof (e.g., the memory of a particularly poignant moment in a long and mostly repetitious existence) more than humans do, etc.

–The need for meaningful experience may pose a resource issue, as such meaning may require a rich world of complex experiences had over time, which would be costly to implement across millions or billions of computers.14 This may make it impossible to create a significantly large number of happy computers. A way to avoid this may be to capture just a few moments of meaningful experience, involving a minimum store of memories, and loop those moments over and over in several computers. (Imagine something along the lines of a Boltzmann Brain, which comes briefly in and out of existence with its memories and relevant experiences already built into its structure; now loop that.) Perhaps this won’t work, however, given that, in order to make a dent in the human aggregate, we will need a far greater number of those computers than we would of computers engaged in longer and more meaningful existences. Also, if two computers have the exact same set of experiences, I think it’s easily argued that this counts as just one set of experiences, and should not be double-counted in the aggregate; this will also be true of the same set of experiences had by a single computer over time (i.e., in a loop); see the sketch following this list of questions. What, I wonder, would count as the smallest change to an episode for it to result in the slightest change in lived experience?

–I ask in the title if the utopian result I explore here is unsettling. But unsettling for whom? For humans, certainly. But perhaps also for computers who understand themselves to be designed for this purpose. We could program them to be unaware of their teleology, but this doesn’t make the prospect more satisfying from the human perspective. And, as so often is the case, we can turn this existential observation about human-made machines back on ourselves as naturally evolved—and increasingly technologically enhanced (for some purpose or another)—machines.
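
Here is the sketch promised above, regarding the suggestion that identical experience-streams should count only once. It is a toy model of my own (experience streams are just tuples of labels, and the aggregation rule is simply “count distinct streams”), not a proposal for how an AU calculus would actually be implemented.

```python
from typing import Iterable, Tuple

# A toy stand-in for a stream of experiences: an ordered tuple of labeled moments.
Experience = Tuple[str, ...]


def aggregate_distinct(streams: Iterable[Experience]) -> int:
    """Count only distinct experience-streams toward the aggregate."""
    return len(set(streams))


loop = ("sunrise", "coffee", "a good conversation")            # one meaningful moment, looped
varied = ("sunrise", "coffee", "a slightly better conversation")

# Two computers running the same loop, one computer replaying it a second time,
# and one computer with a minutely different episode:
streams = [loop, loop, loop, varied]
print(aggregate_distinct(streams))  # 2, not 4: identical loops are counted once
```

The last line also gestures at the closing question of that item: how small a change to an episode suffices to make it a genuinely distinct lived experience rather than a repeat?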

I’ll reserve these and other questions (including those in the above footnotes) for later reflection.

I don’t know how an AU would respond to my observations. (Would she say I’ve fundamentally mischaracterized utilitarianism? Perhaps I am unduly fixed on the idea that utilitarianism is troubled by the fact that individuals, not groups, have experience, while the good of the group does not preclude the suffering of individuals. On this view, utilitarianism threatens to be, at bottom, directed at protecting one’s self: a practically deontological, ultimately non-consequentialist system, such that I endeavor—i.e., I urge the group—to create a society in which I’m least likely to suffer. Perhaps I’m also taking too literally the idea of aggregating experience, something that can’t really be done, and I think will always appear arbitrarily calculated.15) At any rate, I look forward to seeing how utilitarianism, and ethics in general, is informed by the ever firmer assurances of smart people that conscious computers are coming, even if such computers turn out to be impossible. Perhaps one of the richest benefits of such fantastical ideas is the resulting thought experiments about what it means—has meant and will ever mean—to be a living, feeling, embodied (organically so, for now), death-bound human.



Footnotes:

  1. For an explanation, see chapter 17 of Reasons and Persons (1984); or see the Stanford Encyclopedia of Philosophy: The Repugnant Conclusion. Notice that Parfit’s observation is more obviously relevant to total utilitarianism (i.e., tallying up absolute amounts of happiness) than to average utilitarianism: suffering outweighs happiness in the larger world, though more people have lives worth living in the larger world; there is also more absolute happiness in the larger world than in the smaller world, but also more absolute suffering. That said, the Repugnant Conclusion may also pose less obvious problems for average utilitarianism, which also has more obvious problems of its own; for more on that, see the Wikipedia entry: Average and Total Utilitarianism.
  2. We don’t really know what it’s like to be other humans, either. But that’s another topic, which I look at in an upcoming piece on the “phenomenological gap” that exists between all conscious beings.
  3. In a more careful survey, I’d be inclined to refer to computer phenomenology rather than experience, as I wouldn’t presume the basic components (or qualia, if you like) of experience to differ across types of experiencing beings. By phenomenology, I mean that which arises when the quality and nature of many experiences over time are taken in their totality as a kind of lived world; in simpler terms, it’s what experience—as a phenomenon in the moment and over time—means to the experiencing subject, particularly one with a complex mind (e.g., one capable of robust meta-cognition [or self-reflection/introspection]). All experiences may be made of the same basic stuff (as it were), while the phenomenology of beings may differ significantly, including across similar beings (e.g., from one human to the next). Mind, then, I’m inclined to say, is the collection (generally conceived of as a linear series) of one’s experiences taken as a thing in itself, without meaning (note also that mind is not a vessel for experience: it just is the tapestry [or series] of experiences). Consciousness is the viable minimal capacity for these things: if you are conscious, then you have experience (or at least have functioning faculties for experience, even though you may be, say, in a deep sleep), and vice versa. Finally, I might use umwelt to refer to the analog of phenomenology in less complex minds, such as that of a frog; I take this to be roughly in line with the ordinary understanding of umwelt.
  4. Introducing information into a computer is less like introducing information into a brain than it is like adding a book to a library. This is part of why I’m wary of the term memory here: I don’t say that my bookshelf now has War and Peace in its memory. Notice that such metaphors promote a functionalist picture of mental activity, which bolsters the plausibility of conscious computers; i.e., we talk as though computer computation and human cognition are essentially identical, or at least significantly similar.
  5. One story I find persuasive is that animal long-term memories are correlated with protein formations that are rebuilt whenever the memory is recalled. This is known as the reconsolidation theory of memory, which was getting a lot of attention a few years ago (e.g., it inspired the neuroscience of the 2004 film Eternal Sunshine of the Spotless Mind, and was covered on the podcast Radiolab in 2015). Popular interest seemed to hit its apex in 2015, when it was pointed out that, like so many amazing advancements in (neuro)science (mirror neurons, anyone?), we might need to temper our enthusiasm about reconsolidation: Time to Rethink the Reconsolidation Theory of Memory? (Neuroskeptic blog at Discover, 2015).
  6. Or for any entity with perfect memory. I’ve often wondered whether an alien life-form with perfect memory might think humans morally insignificant given our poor memories for experience.
  7. Imperfect memory may also allow for attention. An extreme example: imagine a disorder in which what you see now doesn’t fade from short-term (i.e., iconic) memory when you close your eyes. Everything you see simply stays there, stacking up and creating more and more chaos, making it impossible to attend to anything in particular.
  8. For an interesting discussion about human empathy in the context of robot and animal treatment, listen to the July 10, 2017 episode of Hidden Brain, “Could You Kill a Robot?”. It features Kate Darling, a robot ethicist and researcher at the MIT Media Lab, where she “investigates social robotics and conducts experimental studies on human-robot interaction” (according to her website’s bio).
  9. E.g., we might be tempted to say that if a group is doing well, then so are its members. But this isn’t so. A basketball team can be having an incredible winning streak while its members are suffering horribly. If we then say that our utilitarian system should be that which results in the most happiness for the most individuals, then we are back to looking to individual experience—i.e., the individual good—as a basis for moral grounding, not the greater good (i.e., not the good of the group).
  10. This line also leads to questions about whether aggregation should happen across time.
  11. To fill this out a bit more: The building blocks—or qualia—just stacked up don’t aggregate. Rather, it’s when they interlock in a certain way—i.e., when they come together in a certain way within an individual’s mind or experience—that a rich enough phenomenology emerges from—as some would say, supervenes on—those building blocks (or perhaps more precisely, on the relations now established between those blocks; those relations themselves may be emergent phenomena), such that it makes a dent in the aggregate. This again comes back to meaning: the bits of visual information that compose my brother’s face are individually meaningless; but recognized as a whole, they pick out “my brother,” which also comes along with a complex set of emotions and personal history, etc.

    (Notice that this characterization gives the sense that the aggregate happens on its own, and humans only attempt to describe it and to understand what it means for human action. If accurate, perhaps there are many such [overlapping] aggregates and the questions become: Which aggregate should we attend to? What underlying or meta principles guide us in determining which aggregates are relevant to human action? Is there one that, in the human context, counts as The aggregate? And so on.)

  12. See their 1998 paper “The Extended Mind”.
  13. I’m working on a piece exploring implications of this asymmetry for humans. Coming soon.
  14. To motivate questions about cost, notice that it would cost more to represent (or manifest) a high resolution experience than it would a low resolution one. Indeed, we might say that the high resolution experience actually amounts to a bigger collection of experiences, given that it is made up of a bigger number of building blocks (what we might call qualia; it’s difficult of course to talk about what experience is made of, as experience per se is taken to be immaterial and non-physical—i.e., made of nothing; so better, I think particularly in this context, to think of it in terms of energy, which always comes with a cost).
  15. One can imagine this taken to the extreme as a basis for theodicy. That is, God need not apologize for suffering on Earth and in Hell so long as that suffering is nulled by happiness on Earth and in Heaven. (What calculus does God use? Do ants and angels count? Does Satan?) I can’t fathom how this would answer the question of why anyone should have to suffer in Hell for eternity, when the same balance could have been reached by having no suffering to begin with. But then again, perhaps utilitarianism was conceived of as other morally relevant ideas were (e.g., John Locke’s take on identity): only God knows best; we do what we can in the absence of omniscience. In which case, again, utilitarianism is prone to arbitrariness.
