Consequentialism: A Defense/Manifesto


I. Consequentialism implies a better world, no matter what that means

Deontologists, by modern definitions, have a very significant advantage over consequentialists like me. The definition of consequentialism is the ethical belief that all that matters is making the world better. The definition of deontology is that something other than making the world better can matter to ethics too. This means that consequentialists can’t point out to deontologists “making the world better is really important!” Deontologists can agree, and say that, nevertheless, there are other things that matter. If you get an account of the rules deontologists say matter, you can’t even then go on to say “but look at what happens when this rule conflicts with making the world better in this case! Surely it doesn’t always hold up.” They can still agree, and say the rule has some limit, but that, nevertheless, it matters more than consequences at least sometimes. There are some advantages consequentialism has over deontology at the basic level. For instance, if you think ethics should be simple, then the simplest versions of each are making the world better versus only and no matter what following some rules, and arguably it is hard to favor the latter in this competition… but this may be a minor advantage for most people. I mean, why should ethics have to be either/or in this case?

This is not something worth getting particularly upset about. Deontology that cares about good outcomes is a pretty good compromise for those of us who care a great deal about outcomes, but it is worth noting that some of the disparity between the numbers of consequentialists and deontologists doesn’t come from rules winning some philosophical culture war with outcomes. It comes from us consequentialists being, inherently, radicals. So how can consequentialists like me escape this apparently huge burden of proof?

A first thing to notice is that, in ethics, answers that claim that something makes no difference at all do not always have the burden of proof their specificity might suggest. A dramatic example: if I were to ask you “how differently should we treat people based on their race, religion, gender, etc.?”, and you were to respond “not at all, clearly”, it would seem that I had missed an important point if I then replied that, out of all possible differences, no difference at all seems awfully specific, so surely you have a very strong burden of proof. Again, this example is meant to be dramatic; it says little on its own about the present case, but it points out that radical answers about moral reasons, such as “not at all”, do not always carry so heavy a burden of proof. But in this specific case, where this is less clear, why lean towards this answer?

I propose that at least some of deontology comes out of failing to recognize intuitions concerning “axiology”, or less technically, “what makes the world better”. I think that deontologists often absorb axiological intuitions into deontological rules, and wind up neglecting their views of what sorts of things they actually think make for a better or worse world. Some of the rules deontologists support look to me suspiciously like they are just meant to bring about a preferred world, and when these rules are matched up with the axiological intuitions behind them and then controlled for, the space left for deontology to occupy is comparatively smaller and less compelling.

I recently discussed partial aggregation, and in passing mentioned that most modern partial aggregationists don’t seem to be partial aggregationist about axiology. I suggested that this seems to be rooted in Alastair Norcross’ paper from the 90s, and also that it seems like something partial aggregationists should not be satisfied with. That is, partial aggregationists will often say that the world can be better or worse on the basis of pure aggregation, but nevertheless you should only choose who to save using partial aggregation. I agree that there is an intuition in favor of saving the person being tortured over saving the many people suffering dustspecks in their eyes, but it is very hard for me to believe that this does not have something very significant to do with a parallel intuition that someone being tortured is in fact, as an aspect of the world, worse than the many dustspecks.

It is easy, costless, for the non-consequentialist to deny that this is part of the intuition if they never have to act on it. This makes it frustratingly difficult to point out to someone that they probably don’t really intuitively believe that this torture world is better. Should the partial aggregationist hope that someone stumbles into the torture accidentally, sparing the many people their dustspecks? If they have some way to deny that they’re committed to this, then is there any place in their ethics at all where their purported axiological beliefs pay rent? If I take it for granted that partial aggregationists also have partial aggregationist intuitions about which states of the world are better or worse, then I am suspicious that this axiological intuition, and not some independently compelling side-constraint, is largely what motivates the principle that you should save the person being tortured.

I think something like this happens, usually more subtly, in many common anti-consequentialist thought experiments. The generalized problem is this: if you think that a thought experiment fundamentally undermines consequentialism, it seems as though you are committed to saying that the world the consequentialist in the thought experiment brings about is in fact better. This is a feature of many famous thought experiments that oppose, for instance, utilitarianism. Consider the organ-harvesting thought experiment. Deontologists will often point out that utilitarians are committed to harvesting the organs of one person to save the lives of multiple other people who need organ transplants. Utilitarians have a good retort to this: “do you think people would really be happier living in a world where this sort of thing happened?” This is more or less what the deontologist is committed to believing if they think this is what utilitarianism implies.

I think some newbies to the argument are tempted to say that this world would not actually be happier, but that utilitarians will nonetheless be so focused on maximizing the numbers saved that they will endorse this anyway. This is a bad response; defining naive utilitarians into existence hardly seems like fair play. The better retort is to control for the things that would make this world less happy, by stipulating that this all happens in secret, in a way that affects no one but the person killed and the people saved.

As a response to act utilitarianism, this is pretty effective. But it is possible to take at least two things away from it. One is that consequentialism is false and side-constraints are ethically important. Another is that the version of consequentialism that best captures our intuitions of what makes the world better or worse does not imply this outcome. Once again, the former tactic pays the cost of implying that the world in which the doctor kills the patient to transplant the organs is better than the one in which the doctor leaves the patient be. If, for instance, you are looking on as the only person aware of the doctor’s plan, it seems as though you ought to let the doctor do his thing if this will make the world better. The doctor is acting wrongly, but that is not your doing, and by interfering you are directly making the world worse.

Or perhaps you can imagine a deontological rule that says that, although you make the world worse through your interference, in this case it is nonetheless your duty to interfere. This strikes me as suspicious, but then instead imagine that you have no influence at all. Ought you still root for the doctor to succeed in carrying out this sacrifice and transplant, because this makes for the better world? And, once more, if there is some additional deontological principle that explains why you should not even do that, it seems to me that your purported axiological beliefs pay suspiciously little rent anywhere in your thinking.

I think that, as a matter of fact, many apparent deontologists just think that the world in which the doctor butchers the patient has something crucial about it that is worse than the world in which the doctor allows the other patients to die. Perhaps the deontologist likes a world in which others fulfill their duties as doctors who “do no harm” better than a world in which more lives are saved at the cost of this behavior. This can be generalized a great deal. A subtle distinction some people miss (and that I have seen philosophers like Robert Nozick and Derek Parfit highlight) is the difference between deontology that tells everyone to respect the rights of others, and consequentialism concerning rights getting respected. I contend that this is not just an interesting academic footnote, but a good account of what many people actually value about a world when they are drawn to deontology – not necessarily that they personally are respecting rights, but that everyone’s rights get respected as much as possible.

So if I look at the issue in this way, and then control for the types of rules I find most suspicious (like that you have a duty to interfere and thereby make the world worse, or that you ought not hope that the world gets better without your interference), where do I locate the deontology? To me, the “deontology zone” lies roughly in between the number of lives saved at which you ought to leave the surgeon alone and the number at which you ought to perform the surgery yourself.

Framed this way, deontology looks to me suspiciously like it will always come down to the value of keeping one’s own hands clean. The consequentialist, on the other hand, has a rather modest set of asks about this case:

  1. The difference between the number of lives saved such that you ought to leave the surgeon alone and the number such that you ought to do the surgery yourself in this position is zero.

  2. What justifies this is not some suspicious extra deontological rule that tells you to intervene even if it makes the world worse; it is that by intervening you are stopping an event that is, on the whole, bad from happening.

There are other thought experiments where this type of preference between worlds strikes me as even more obvious. For instance, I believe that the utility monster is not just repugnant to people because they would be violating side-constraints if they arranged for this monster to eat lesser-utiled mortals; I think that if something like a utility monster came about without anyone’s moral agency, most would view it as a natural disaster, which ought to be stopped because it is a natural disaster, and as such is making the world worse. Likewise, I think many people repulsed by utilitarianism when looking at Omelas would react that you should just save the wretched child, even that just walking away would be wrong.

Now I think more careful thinkers will concede that none of these thought experiments counts against consequentialism in general. I do, however, think that when people intuit that consequentialism is repugnant, they implicitly visualize thought experiments like these. If so, I think it would be interesting if these common anti-utilitarian thought experiments were largely counterintuitive for consequentialist reasons.

I think few people recognize how consequentialist their moral intuitions often are, because they often fail to notice when they dislike something because it seems to them to be a bad occurrence, a way in which the world is in fact worse. I believe that this misunderstanding persists because people can use elaborate additional layers of rules to make their views about what makes a better or worse world impotent in any decision or feeling they have. If you strip that away, it seems to me, all that non-consequentialist theories have left is the relationship an agent has to the specific choice at hand. In other words, something like “we ought not get our own hands dirty”. This, more than anything, is what compels me away from deontology, even though it is far broader, and accommodates far more values, than consequentialism. But this prompts the question of why so many people have this implicit distrust of consequentialism in the first place.

II. Consequentialism is not just a genre

Some time ago, I had a disagreement with a professor. To be clear, I like and respect this professor, and this interaction was brief and poorly developed on both sides, but I came away with some troubling impressions of how non-consequentialists view consequentialism. The class was having a discussion about deontology and autonomy. I noted that it seemed as though nothing we had discussed about what autonomy really was implied deontology; autonomy seemed to be a feature of the world, and consequentialists could care about it just as much. The professor was unimpressed, and seemed to characterize consequentialists as always naively saying that we could just stick other theories’ values in our utility functions and call it a day 1. The tone implied that this was obviously silly, and relied on doing something that was fundamentally contrary to the nature of autonomy, though I couldn’t get a satisfying argument about just why this was, and I suspect much of the tone was meant as a light-hearted jab more than intentional condescension.

Still, it kind of hurt, because, yes, if you have not gotten this from the rest of this post yet, I think that consequentialism can accommodate, and in some cases better explain, a huge range of values that aren’t conventionally associated with it. I reflected on some of the comments in that class and on what our actual disagreement was, and I think I have some idea. To the professor, consequentialism meant using force to maximize some quantity of a thing. Just as utilitarians are committed to tiling the universe in hedonium, autonomy consequentialists are committed to tiling the universe in autonium. Autonium simply is not autonomy, and therefore it is an oxymoron to be consequentialist with respect to autonomy. This type of reasoning is a bit caricatured, but the idea may come down to this: whatever you do to restrain consequentialism from just trying to produce autonium, it will always seek something like autonium with some arbitrary constraint, and therefore will always miss the mark on autonomy in some important way.

This is interesting, but I think it is more an argument from genre than an argument from axioms. Consequentialism, as a genre, looks like trying to put bits of the universe into bags, and piling the bags up into as tall a heap as you can make. Consequentialism, as an axiom, is just the idea that whatever thing makes the world better is what makes a choice better too. As a sanity check, consider the autonomy trolley problem.

Say that a slave-catcher’s trolley is headed down a track towards five escaped slaves. You can turn the trolley onto a track with only one escaped slave on it. Should you turn the trolley? I maintain that an autonomy consequentialist could say “yes, I ought to turn this trolley because the world in which the trolley gets turned has more autonomy in it than the world in which the trolley does not get turned.” This answer seems both perfectly intelligible, and as though it appeals to a concept of autonomy that most people could recognize, one that does not miss the mark in some way comparable to autonium tiling.

Many deontologists of course could also accept turning this trolley, so you may picture any of the variations of the autonomy trolley problem if you like. The escaped slave on the footbridge, or loop, or lazy susan, or whatever. In every case, I believe, there is a perfectly intelligible answer the consequentialist can give that more or less matches what non-consequentialists mean when they think of autonomy. But maybe these slaves only need to be left free, and it is a problem for the consequentialist that they cannot force a person to be more autonomous? This again seems simple to answer if you step away from the genre framing: if the consequentialist does something to intervene in someone’s life that does not actually promote their autonomy as we really mean it, then they are not acting as an autonomy consequentialist. And yes, if there is some point of sufficiency at which autonomy has been successfully promoted and anything else would be backtracking in some way, then the consequentialists can keep their hands to themselves. Sufficiency is hardly new to consequentialists; just look at negative utilitarianism.

So what’s the deal with the disagreement here? Why is consequentialism viewed as loading bags of utils into a heap, such that its attempts to absorb other value systems accomplish, to many, little but changing the labels on the sides of the bags? The glib answer is that, like many of the things non-consequentialists argue, this critique seems applicable at least to utilitarianism, the most famous and influential variety of consequentialism. The more generous answer is that any sufficiently specific consequentialist system looks like this to most people.

If you find something that makes the world good, and try to use it to coherently line worlds up, you wind up targeting some specific thing about the world that, it turns out, most people won’t value in a way that lines up with the results of this ordering. Basic rules seem safer, if not because they have fewer ambiguities and problems when pushed into various corners, then at least because they generally tell you to stop doing certain things rather than pushing you to find ever more creative ways of applying them in every novel case imaginable. They constrain decision-space rather than trying to occupy as much of it as possible.

Still, deontologists generally accept that the world can be better or worse as well, as mentioned in the beginning. It is a considerable advantage they have: they are simply less radical than people like me. They can value anything that matters to us, whereas we cannot value everything that matters to them. This suggests to me that they should be nearly as invested as us consequentialists in having a rich and satisfying way of lining better and worse worlds up. It is not believable to me that the deontologist would not have a very strong reason to turn the trolley away from the escaped slaves on the track if there was no one on the other track, or that they ought to do so because of some rule, rather than because the slaves’ freedom is just good.

This is why I reject the idea that consequentialists are the only ones on the hook for problems specifying a satisfying, coherent axiology. Deontologists have some reason to worry as well if it turns out the only way to value autonomy as an outcome is with autonium tiling. If this sort of reasoning can be generalized, I believe it will answer many of the implicit concerns people have with consequentialism.

III. What it feels like from the inside

But then, when reading all of this, a deontologist might be suspicious.

“Yes Devin, you have given plenty of reasons why an unconventional and rich conception of consequentialism should make non-consequentialists more sympathetic, but you have had your fingers crossed behind your back the whole time. You are, after all, a classical utilitarian! You actually do want to exchange the dustspecks for torture, cut up the patient for organs, feed the utility monster, build Omelas, and tile the universe in hedonium!”

I have mentioned multiple times at this point (admittedly without writing something more dedicated, which I probably should do at some point) that I reserve every right to just reject utilitarian conclusions I hate enough, while maintaining that these conclusions have the strongest principled case on their side 2. That limits some of this concern, but I don’t think it gets at the root of it. I think it will help non-consequentialists understand my thought process better if I explain what it feels like for me, from the inside, to be a classical utilitarian.

Perhaps a stereotypical assumption is something like this: where most people have a value system in their heads, I have a function that reads “more hedons please”. Someone more sympathetic might say that this function is not just cold and emotionless. One of my very favorite quotes in ethics, concerning the late Derek Parfit (which I originally saw here), goes:

“As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract. I found that very striking. Now, I don’t think any professional philosopher is going to make this mistake, but nonprofessionals might think that utilitarianism, for instance (Parfit is a utilitarian), or certain other philosophical ways of thinking about morality, are quite calculating, quite cold; and so because as I am writing mostly for nonphilosophers, it seemed like a good corrective to know that for someone like Parfit these issues are extremely emotional, even in the abstract.” 3

This is absolutely lovely; it makes me swell with the utilitarian version of patriotism, and while there is a part of me that desperately wants to someday cry at the abstract idea of suffering, this quote does not, I think, capture my utilitarianism either. Classical utilitarianism simply does not seem to live anywhere in my head.

What it feels like from the inside is more like this: normative ethics can be broken down into a number of high-level questions. On each of these types of questions, I lean in a certain direction, and when you combine all of my leanings, they overlap to produce classical utilitarianism. On the question of value, I lean hedonist. On the question of prioritization, I lean in favor of the theory of value itself defining moral importance, and reject special obligations, desert, speciesism, and egoism. On the question of moral tradeoffs, I lean in favor of pure aggregation. On the question of acts versus rules, I lean act. On the question of population axiology, I lean totalist. And yes, on the question of right and wrong, I lean consequentialist. When you believe all of these answers (and maybe a few more), they call that classical utilitarianism. I have not always been a classical utilitarian; I have changed my mind on many of these answers, in particular hedonism, population axiology, and aggregation, and there is no guarantee at all that I will die leaning towards the same answers I currently favor. In fact, the intersection is so specific that what my being a “classical utilitarian” amounts to is having a tiny plurality credence in classical utilitarianism as compared to other similarly specific theories.

I don’t know that all of these answers are totally orthogonal; I took a stab at coming up with a coherent common thread once, and the answer I favored was that utilitarianism is the ethics of “ideal empathy”. I think there is something to this, and it might explain utilitarians, or those in the ballpark of utilitarianism, who have reactions to ethics like Parfit’s.

Maybe it is hard to see what difference this makes at first; isn’t the belief in each of these answers kind of a restatement of the theory? But I think it helps explain some important things about how I reason ethically, and about my moral attachments. My approach can be explained by taking these answers to be prior to classical utilitarianism. I am a hedonist before I am a classical utilitarian. I favor pure equal consideration of equal interests before I favor classical utilitarianism. I am an aggregationist before I am a classical utilitarian. I favor the act version of consequentialism before I favor classical utilitarianism. I am a totalist before I am a classical utilitarian. And, yes, I am a consequentialist before I am a classical utilitarian.

This can be helpful for understanding some of my intuitions that are otherwise unexpected. For instance, I think that the logic of the larder works on the assumption of classical utilitarianism, given good enough farming conditions (with some lingering, perhaps fatal problems with the legal regime it requires), and yet I reject the logic of the larder 4. I have mentioned that I, perhaps controversially, reserve the right to just not endorse implications of utilitarianism that are sufficiently repugnant to me that even utilitarianism cannot convince me of them. But the logic of the larder is intuitively fine to most people. If we breed animals with good lives, we all win, right? And yet, when I consider this reasoning in human contexts, it clearly falls flat: it is not clear why parallel reasoning would not justify, at least under the right, stable conditions, human farms 5.

This is enough on its own to make the logic of the larder pretty much irredeemably repugnant for me in the non-human context as well. The best explanation of this I can give is that I am an anti-speciesist before I am a classical utilitarian. In a similar way, I am not scheming to first convince non-consequentialists to accept the plausibility of a very broad, vague version of consequentialism, with some eventual intention to pull them from there into classical utilitarianism. Consequentialism is more important to me, and more persuasive to me, than classical utilitarianism, and the reasons I have given are the real reasons why this is the case. I could be talked out of nearly all of the specifics of my ethics, and I think that something different would be needed to additionally talk me out of consequentialism, full stop.

Perhaps this post is still suspicious. Sure, I say that all I want is for people to be consequentialists, but given how broadly I have argued for this, surely this argument alone would not actually change anyone’s practical decisions or behavior? I admit it: I think that non-consequentialists could retain most of their most compelling judgements while being consequentialists, but I don’t think that means the conversion would change nothing. As I have said, it is my impression, from controlling for all other relevant factors, that deontology is largely a way of avoiding a personal sense of guilt. My practical appeal is not just that people should be more honest with themselves about what states of the world they actually think are better and worse, but that they should reject the practical aspect of deontology while they’re at it.

I’m saying that the number of people the surgeon could be saving such that you should not stop them should be the same as the number of people you could save such that you should personally perform the surgery. That if a utility monster rolls into town, you can just fight it because it’s there and you think that’s bad. That if you really like rights and autonomy, it is alright to locally reduce them if doing so globally increases them. Here is where I make the concession that consequentialism is more accommodating than often stereotyped, but not endlessly so. I still maintain that, on reflection, people should, and do, care about making the world better, to the exclusion of most other things. And that this is worth recognizing and doubling down on.


  1. Ed. Note: Moral Uncertainty discusses how to decide actions when you have credence in multiple utility functions and/or ethical theories. It is, indeed, more complicated than sticking other theories’ values in our utility functions.

  2. Ed. Note: In his defense, this really just means “the strongest principled case on their side that anyone’s come up with so far”.

  3. The idea that Derek Parfit was a utilitarian is not true, or at least not perfectly up to date. My somewhat second-hand understanding is that his views on normative ethics were not very specific by the end of his life, but that he leaned in the direction of some sort of prioritarianism.

  4. I recently read Abelard Podgorski’s “The Diner’s Defense”, which gave a much more convincing version of the logic of the larder in my opinion, though it leaves me unpersuaded. It did, however, cause me to reevaluate my reaction to Robin Hanson’s relevant piece, which I originally took to be making a classic logic of the larder argument, but which in retrospect might have been reaching for something closer to Podgorski’s argument, which I think is stronger.

  5. Ed. Note: This reminded me of the more galaxy-brained-in-a-bad-way pro-slavery arguments.

