Bentham and Utilitarianism: An Appreciation

Nick Geiser
12 min read · Jul 20, 2022


In a recent series of articles on this website, Russ Roberts develops a “critique of utilitarianism” as a theory of individual decision. The themes of Roberts’ argument are familiar to long-time listeners of his podcast, EconTalk — the complexity of the social world, the limits of human reasoning, the challenge of understanding data, and our desire for certainty. Utilitarianism’s merits and the contributions of the “utilitarian tradition” should be of interest even if Roberts and I agree that utilitarianism is incorrect as a theory of public or private decision. Hence the title: rather than a defense of utilitarianism (and specifically its Benthamite version), my goal is an appreciation of the doctrine and the tradition of progressive, rational reform in the Anglophone world associated with it, including its deficiencies and historical blind spots.

It’s helpful to start with what utilitarianism is. I will distinguish between “utilitarianism,” by which I mean a doctrine of either public or private decision-making, and what I above called the “utilitarian tradition,” which is an intellectual lineage. The best description of what utilitarianism comprises was given by Amartya Sen and Bernard Williams in their introduction to Utilitarianism and Beyond. Utilitarianism, in their analysis, involves three theses (pp. 3–4):

  1. Welfarism — the value of some state of affairs is the extent to which people get what they want or, more precisely, the extent to which it satisfies people’s preferences or desires. Welfarism, then, is a thesis about what sorts of things make a situation better or worse.
  2. Consequentialism — the rightness of some action, rule, policy, or individual disposition consists entirely in the consequences it produces. While welfarism is a thesis about what makes situations good, consequentialism is a thesis about what makes actions/rules/dispositions morally right.
  3. Sum-ranking — in assessing different states of affairs, the utilitarian ranks them according to the sum of individual utilities in each. Strictly speaking, this describes “total” utilitarianism; “average” utilitarianism ranks alternatives according to the average utility in each state of affairs. (A small numerical sketch of the difference follows this list.)
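The two come apart only when the states of affairs being compared contain different numbers of people. Here is that sketch, with illustrative numbers of my own:

```python
# Two hypothetical states of affairs with different population sizes.
state_x = [10, 10, 10, 10]  # four people, each at utility 10
state_y = [12, 12]          # two people, each at utility 12

for name, utilities in (("X", state_x), ("Y", state_y)):
    print(name, "| total =", sum(utilities), "| average =", sum(utilities) / len(utilities))

# Total utilitarianism prefers X (40 > 24); average utilitarianism prefers Y (12 > 10).
```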

Some further clarifications about theses 1–3. Utilitarianism is often misunderstood as the view that people, psychologically, maximize some separate quantity, “utility,” through their actions. This is not exactly what utilitarians claim. “Utility” is a way of talking about people’s desires/preferences/wants, and “utility maximization” is a way of representing their actions. “Utility” isn’t an entity separate from whatever someone’s desires or preferences are.

Consequentialism, thesis 2, gets its bite from the idea that the only thing that matters in assessing some action/rule/etc. is the consequences it produces. Just because you’re not a utilitarian doesn’t mean you shouldn’t care about consequences! What makes consequentialism distinctive isn’t the idea that consequences matter — it’s the idea that consequences are the only thing that matters. One way of putting this point is the idea that the ranking of outcomes alone determines the ranking of actions. This feature comes out clearly in different variations on the “trolley problem” thought experiment.

Finally, sum-ranking involves the (in)famous problem of making “interpersonal comparisons” of well-being between persons. People sometimes argue that interpersonal comparisons of well-being are too difficult to make, but this is a bad argument. “Too difficult” is rarely defined, but it might mean that the comparisons are too difficult either a) psychologically or b) formally. Psychologically, it’s just false — parents make interpersonal comparisons of well-being between their children all the time. Formally, it’s actually quite easy to aggregate utilities using a ratio scale. The problem is rather that it is too easy. Basically, you can construct too many aggregate utility functions from those of individual people, and there is no principled way to choose among them.

Here’s a simple example, borrowed from the late philosopher Gerald Gaus. Suppose there are two people, A and B, and three items of food (pizza, a taco, and yogurt), with utilities as follows:

A: pizza = 3, taco = 2, yogurt = 1

B: pizza = 2, taco = 1, yogurt = 3

If we were naive utilitarians, we might say the correct distribution gives the pizza and taco to A and the yogurt to B, since this maximizes the sum of utilities. If, however, A and B’s utility functions are only “ordinal” measures, then the differences between the utility numbers aren’t meaningful — only the ordering of the items. The same information would be given by the following:

A*: pizza = 4, taco = 3, yogurt = 0

B*: pizza = 7, taco = 6, yogurt = 9

The utilitarian now gives all the food to B.
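A minimal sketch of that arithmetic, using the numbers above (the brute-force search over allocations is my own framing, not anything in Gaus):

```python
from itertools import product

# Utilities from the example above. The rescaled versions (A*, B*) preserve
# each person's ordering of the items but change the arithmetic of the sums.
original = {"A": {"pizza": 3, "taco": 2, "yogurt": 1},
            "B": {"pizza": 2, "taco": 1, "yogurt": 3}}
rescaled = {"A": {"pizza": 4, "taco": 3, "yogurt": 0},
            "B": {"pizza": 7, "taco": 6, "yogurt": 9}}

def best_allocation(utilities):
    """Assign each item to A or B so as to maximize the sum of utilities."""
    items = ["pizza", "taco", "yogurt"]
    best = max(product("AB", repeat=len(items)),
               key=lambda assignment: sum(utilities[person][item]
                                          for person, item in zip(assignment, items)))
    return dict(zip(items, best))

print(best_allocation(original))  # {'pizza': 'A', 'taco': 'A', 'yogurt': 'B'}
print(best_allocation(rescaled))  # {'pizza': 'B', 'taco': 'B', 'yogurt': 'B'}
```

Same orderings, different sums, opposite verdicts: that is the point of the example.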

An ordinal utility function is the least informative utility measure — it contains the least information about the agent’s preferences. One step up from an ordinal measure is a “cardinal” utility function, which is an interval measure (temperature measures are interval scales). There is, famously, a theorem (the von Neumann–Morgenstern theorem) that if an agent’s preferences satisfy certain conditions, there exists a cardinal utility function that represents their preferences, and their choices can be represented as maximizing its expected value.

Unfortunately, you cannot aggregate across the cardinal utility functions of different agents. This is because the following cardinal utility functions contain the same information:

A*: pizza = 4, taco = 3, yogurt = 0

A**: pizza = 7, taco = 6, yogurt = 3

More formally, cardinal utility measures are invariant up to a positive affine transformation (multiplying all the utilities by a positive constant and/or adding a constant, such as adding 3 to all of them). This invariance also means that you can change the zero point arbitrarily. However, a choice of zero point is necessary for aggregation. This is the sense in which it is “too easy” to do interpersonal comparisons and aggregations of utility — there are too many aggregate functions consistent with the underlying individual utility functions, and the choice of how to rescale utilities before adding them together will produce arbitrarily different answers.
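A quick check of this point, reusing A* from above (A** is just A* with 3 added to every utility; the code framing is mine):

```python
# A* from the example above, and A** obtained by adding 3 to every utility
# (a positive affine rescaling). The ranking of items is unchanged, but the
# sums and the zero point are not, which is what makes naive aggregation arbitrary.
a_star     = {"pizza": 4, "taco": 3, "yogurt": 0}
a_two_star = {item: u + 3 for item, u in a_star.items()}

print(sorted(a_star, key=a_star.get) == sorted(a_two_star, key=a_two_star.get))  # True: same ordering
print(sum(a_star.values()), sum(a_two_star.values()))                            # 7 16: different sums
```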

Finally, it’s worth noting that the characterization of utilitarianism so far is neutral between the objects of moral evaluation: acts, rules, dispositions, institutions, etc. It is also neutral between choice under certainty and choice under uncertainty — cardinal utility measures are necessary for choice under uncertainty. These differences are significant, but less significant than the three theses above.

So classical utilitarianism is the combination of welfarism, consequentialism, and sum-ranking. Why would anyone find this view attractive either as a theory of public or private morality? I think there are three basic motivations.

Impartiality

Utilitarianism commits you not to take your own preferences, desires, or wants as special simply because they are yours. Each person’s preferences count equally in the moral calculus of utilitarianism. Why treat everyone’s preferences impartially? One reason is fairness — it is one way, at least, of treating people fairly for everyone’s preferences to count equally. Another is objectivity. Henry Sidgwick, the 19th-century British philosopher, described utilitarianism as taking the “point of view of the Universe” in assessing the consequences of different actions/rules/policies. Utilitarianism commits you to taking account of all desiring and preferring, not just the desiring and preferring of people who look like you. In this way it directs you to see the world of value as it really is rather than as it appears to us in virtue of who we are, where we are, or when we are.

Inclusion

Utilitarianism is sometimes criticized for its reductive understanding of well-being. As Sen and Williams put it, utilitarianism “sees persons as locations of their respective activities — as sites at which such activities as desiring and having pleasure and pain take place” (p. 4). Bentham thought the pleasures of “push-pin” (a simple 19th-century English game) counted just as much as those of poetry. However, reduction also has its advantages. For one thing, it can facilitate inclusion. Reducing the complex moral experience of human persons to some simpler analytical unit permits us to include more “locations” of value in our moral calculus. The most famous example of this is with non-human animals. A commitment to taking the pleasures of push-pin seriously is also a commitment to taking the suffering of animals seriously.

Unification

Utilitarianism is a “monistic” theory. By this I just mean that it denies there are different and irreducible normative factors that go into moral deliberation. Utilitarianism holds, rather, that these diverse factors can ultimately be compared and weighted on some objective, impartial basis. This is a powerful and attractive idea. A helpful comparison is with unification in scientific explanation. Newtonian mechanics showed how the motions of bodies as diverse as the tides, planets, and arrows could be explained by a common set of principles. Darwin argued that the diversity of life — his “endless forms most beautiful” — could be explained by a process of evolution by natural selection. Scientific explanation often involves identifying an underlying set of simpler principles or processes that unify disparate phenomena.

Roberts argues that the problem with utilitarianism is that it is a “scalar” theory. A scalar is just a single number, and the problem with scalar theories is that it’s difficult, if not impossible, to represent all the relevant normative factors in the form of a scalar. Worse still, scalar theories not only fail to deliver; they produce the illusion of understanding, and our cognitive biases make their failures easy to miss.

This argument, I think, is best understood as an objection to thesis 3, sum-ranking. Sum-ranking has no shortage of criticisms, of course. The most famous, at least against utilitarianism as a theory of public morality, is that it attends only to the aggregate quantity of utility and ignores information about its distribution.

(Side note: sum-ranking is not indifferent to the distribution of stuff, i.e., whatever causes utility. Some utilitarians think, for example, that the diminishing marginal utility of money is grounds for an egalitarian distribution of income.)

Still, sum-ranking’s insensitivity to the distribution of utility is the source of many objections to utilitarianism. For example, sum-ranking would endorse a state of affairs that imposed significant deprivations on one person, or a small group, to produce small benefits for a large number of others (e.g. feeding Christians to the lions, “The Ones Who Walk Away from Omelas”).

Many scalar theories, however, are non-utilitarian. The maximin principle, which ranks states of affairs in terms of the position of the worst-off individual, is also a scalar theory. The problems with scalar theories, then, are broader than the problems with utilitarianism.
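A small illustration with numbers of my own: both rules collapse a distribution of utilities into a single number, yet they can rank the same pair of states differently.

```python
# Two hypothetical states of affairs, each described by a list of individual utilities.
states = {
    "heavy burden on one person": [1, 9, 9, 9],
    "modest level for everyone":  [6, 6, 6, 6],
}

for name, utilities in states.items():
    # Sum-ranking and maximin are both scalar evaluations of the same distribution.
    print(name, "| sum =", sum(utilities), "| maximin =", min(utilities))

# Sum-ranking prefers the first state (28 > 24); maximin prefers the second (6 > 1).
```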

Scalar theories enable straightforward constrained-maximization, and maximization is commonly thought of as the basic idea behind rational choice. So scalar theories enable rational decision-making, on one common interpretation of what rationality means. A powerful reply to scalar theories, then, would be to undermine the connection between maximization of a single variable and rational choice.

Consider the famous case of Buridan’s ass. A hungry donkey stands equidistant between two identical bales of hay. The donkey is rational and aims to maximally satisfy his preferences: specifically, he wants to choose the alternative that satisfies his preferences more than any other available option. The donkey notices some quality of bale A, finds it desirable, but then finds the same quality in bale B. The donkey then surveys bale B, finds many desirable qualities, but then notices these qualities in A. The donkey cannot decide which bale to eat, and starves as a result despite the abundance of food.

It is easy enough to think that the donkey faces a pseudo-problem: the bales are the same, so just flip a coin! This is only a pseudo-problem, however, if we assume exactly what Buridan’s donkey cannot assume. The donkey’s problem is that he cannot compare the two bales of hay. More formally, his preferences over the bales are not complete. As a result, he is unable to choose the option that satisfies his preferences more than any other available option.

One way in which scalar theories seem to enable rational choice is that they enable completeness — they give us a way to compare any set of options according to a single evaluative metric.

However, completeness is arguably not necessary for rational choice. To see why, consider another way Buridan’s ass might have rationally deliberated. In this version, rationality means choosing an alternative that is “undominated” — there is no other alternative that is better than it. It’s easy enough to see that each bale is “undominated,” either by the other bale or the alternative of choosing neither and starving. Even though Buridan’s donkey cannot compare the two bales, he can cleave to rationality and sate his hunger by choosing an undominated option.
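A minimal sketch of this rule, with the donkey’s partial preferences encoded in an obvious way (my own framing):

```python
# Options, and the strict comparisons the donkey can actually make: each bale
# beats starving, but the two bales are incomparable, so neither (A, B) nor
# (B, A) appears below. The preference relation is incomplete.
options = ["bale A", "bale B", "starve"]
strictly_better = {("bale A", "starve"), ("bale B", "starve")}

# An option is undominated if no other option is strictly better than it.
undominated = [x for x in options
               if not any((y, x) in strictly_better for y in options)]
print(undominated)  # ['bale A', 'bale B']: either bale is a rational choice
```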

Incompleteness is a common feature of situations with plural, irreducible evaluative factors. But incompleteness doesn’t mean we have to give up on rational choice.

Writers and thinkers in the “utilitarian tradition” did more than simply develop utilitarianism as a moral theory. In particular, I want to focus on the contributions of Jeremy Bentham — a figure whose writings on political and social reform are far more detailed and subtle than commonly appreciated.

Many of Bentham’s most important contributions were in law. As John Stuart Mill wrote (in an otherwise critical assessment of Bentham), “[Bentham] found the philosophy of law a chaos, he left it a science; he found the practice of the law an Augean stable, he turned the river into it which is mining and sweeping away mound after mound of its rubbish.” One of Bentham’s fundamental achievements was to convince people that law should be thought of as a human institution that human beings implement to bring about various effects. Rather than a special mystery, it was a set of means that could be adapted for specific ends.

The idea that punishment and the criminal law influences behavior is now so deeply entrenched in our culture that it is difficult to appreciate that it was once a novel thesis. We owe this idea to Bentham and other reformers in the utilitarian tradition. Bentham was one of the first to argue, along with Beccaria and others, that one purpose of the criminal law was to deter and prevent crime, and not simply to enact retribution against the guilty.

Bentham’s theory of civil law is also more sophisticated than a simple application of some abstract utilitarian principle. Bentham argued that a utilitarian legislator will implement policies that tend to promote the happiness of the body politic. Such public happiness comprises four objects, according to Bentham:

Subsistence: possession of sufficient material goods to meet one’s bodily needs

Security: protection of one’s material possessions and person against interference, either by others or the community

Equality: prevention of arbitrary distinctions between persons — their possessions, effects, social status

Abundance: promotion of well-being and material possessions beyond those necessary for bodily subsistence

Bentham classified security and subsistence as more fundamental than equality and abundance, but never argued that the “art” of legislation could be replaced by an algorithmic or mechanical process. As he writes in the Theory of Legislation, “the whole difficulty of the legislative art consists in distinguishing, on each occasion, the particular object which is to be treated as of paramount importance” (p. 127).

Bentham’s distinction between subsistence and abundance also points to a distinction, in practice, between different classes of preferences — it singles out needs like hunger, thirst, shelter for special moral importance. Bentham sometimes explains this difference in terms of the special intensity of pain that comes from the deprivation of our needs.

The tension to which Bentham devotes the most time is that between security and equality. Here Bentham anticipates ideas such as Okun’s trade-off between “equality and efficiency” and critiques of progressive taxation. What is particularly interesting is how clearly Bentham thinks of property rights as a device for securing expectations. The question then becomes which expectations the law should uphold, and which it should frustrate.

At the same time, there is a tension between the detailed, specific investigation that characterizes some of Bentham’s proposals and the aims of utilitarianism. After all, utilitarianism is supposed to circumvent the need for specific, case-by-case forms of reasoning that invite all the usual bugbears of prejudice and bias. Case-by-case approaches forgo one of the apparent advantages of utilitarian theories.

What this shows, I think, is that no one — not even a utilitarian like Bentham — actually endorses a scalar theory as a decision procedure. Bentham’s own writings show that the principle of utility works as a background principle for weighing and comparing different middle-level principles like security and equality.

Utilitarians also have theoretical reasons for denying that utilitarianism should be understood as a decision procedure. This is because utilitarianism directs us to choose the decision procedure that makes things go best. If some other procedure produces better outcomes than the utilitarian one, utilitarianism directs us to choose that one.

This might sound like an inconsistency: utilitarianism directs us to choose whatever makes things go best, but also not to choose whatever makes things go best. However, it’s actually a perfectly coherent point that relies on a distinction between what we might call a decision procedure and a standard of assessment. A decision procedure is something like “choose the action/follow the rule/implement the policy that makes things go best.” However, the theory’s standard of assessment could tell us that following some decision procedure that directly aims at the theory’s goals would achieve those goals less well than a procedure that aims at them only indirectly.

As an example, consider how it might be rational to make yourself irrational. A group tries to rob a bank, but they can’t open the vault. The robbers order one of the staff to open the vault, or else they will start killing the customers they have taken as hostages. Now suppose that the staff have on hand a special compound that will, for a time, make you irrational. If you take it, you will become crazy for a brief period, saying things like, “I don’t want to see anyone die, so please kill the customers.” The advantage of taking this compound is that an irrational person cannot be coerced by threats into opening the bank vault. Of course, making yourself crazy for a period has its risks; you might accidentally open the vault or harm one of your colleagues. But it removes the robbers’ leverage over you entirely.

The point, then, is that there are coherent utilitarian reasons not to adopt what Roberts calls “narrow utilitarianism” as a decision procedure, if such a procedure would lead to worse outcomes from a utilitarian point of view. There is a catch, however. Indirect strategies are complex and opaque — it’s like trying to keep a nearby and a distant object in focus at the same time. They are psychologically and rationally demanding, which makes them difficult to implement. Indirect strategies mean that utilitarianism can’t deliver on all its promises.
