Broadly speaking, utilitarianism is the view that right action is that which promotes the greater good. It has been revised, developed, and adapted into varying systems of thought over the last three hundred years or so. (For more on that, see the Stanford Encyclopedia of Philosophy entry The History of Utilitarianism and the Internet Encyclopedia of Philosophy entry Act and Rule Utilitarianism.) I’ll focus here on a common understanding of utilitarianism in which the greater good is evaluated as a function of aggregated happiness or suffering.
I generally rely on the terms happiness and suffering to refer to opposing ends of a spectrum that runs from experiences of the greatest possible positive valence to those of the greatest possible negative valence. On the view I explore here, right action is that which results, on balance, in the most happiness, or, at a minimum, in the least suffering; suffering may be diluted or nulled by happiness, and vice versa. Call this the aggregate utilitarian (AU) view. (I take this approach to be in line with what’s sometimes called average utilitarianism, though I’m not committed to any strict aggregation calculus; I’m interested in any utilitarian system that evaluates preferences by aggregating experience.)
An interesting problem arises at the intersection of this view and the idea, held by many, that conscious computers are possible. (For brevity’s sake, I’ll simply say that, by conscious, I mean having the capacity for experience, in particular complex experience—something along the lines of your capacity to experience the cold of an ice cube, the pain of a needle prick, the nagging thought that you should wash some clothes, and the longing for an absent loved one.) I’m not convinced that conscious computers are possible, but if I were an AU (I’m not), I might think it our duty to strive to create and mass-produce such beings, due to the following observation:
Given enough happy computers, an AU would be obliged to say that the amount of suffering in the world is now negligible. That is, as the number of happy computers increases, suffering’s share of the world’s total experience tends to zero, making that world increasingly preferable to one without conscious computers, all else being roughly equal.
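The limit claim can be sketched numerically. The specific utility values below are illustrative assumptions, not anything the argument depends on; the point is only that, with human experience held fixed, suffering’s share of the aggregate shrinks toward zero as happy computers are added.

```python
# Illustrative sketch of the AU aggregation. All numbers are assumptions:
# hold human experience fixed at 40 units of suffering and 60 of happiness,
# then add computers each contributing 5 units of happiness.

def suffering_share(human_suffering, human_happiness,
                    n_computers, computer_happiness):
    """Fraction of total experiential magnitude that is suffering."""
    total = human_suffering + human_happiness + n_computers * computer_happiness
    return human_suffering / total

for n in (0, 10, 1_000, 1_000_000):
    share = suffering_share(40, 60, n, computer_happiness=5)
    print(f"{n:>9} happy computers -> suffering is {share:.2%} of all experience")
```

With zero computers, suffering is 40% of all experience; with a million, it is a vanishing fraction of a percent, even though its absolute amount is unchanged.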