The Atlantic’s Robert Wright has a thought-provoking review of Joshua Greene’s Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Greene used scans of people’s brains to examine their responses to the famous (famous by the standards of professional philosophy, anyway) “trolley problem” thought-experiment. In the thought-experiment, people are asked whether they would divert a runaway trolley, about to hit five people, onto a track where it would hit just one person. Most people think this would be the right thing to do. But when the conditions of the experiment are changed, people tend to respond differently. For instance, many people say they wouldn’t be willing to push someone onto the track to prevent the trolley from hitting the other five, even though the utilitarian moral calculus (one life for five) is the same.
Greene found that the brain scans showed that people who said it would be OK to push the one man onto the track were using the portions of their brains associated with logical thought, while those who said it wouldn’t were responding more emotionally. He concludes that emotional bias, inherited from our evolutionary past, clouds our judgment. Because our ancestors lived in small hunter-gatherer groups, we’re good at group solidarity but bad at inter-group harmony. Pushing someone to their death is the kind of thing you could be blamed and swiftly punished for in a small group, so the idea of doing it lights up some deep-seated moral aversions. Greene concludes that humanity needs a global moral philosophy that filters out these atavistic responses and can “resolve disagreements among competing moral tribes.” And the best candidate for this, he argues, is a form of utilitarianism.
Here’s Wright summarizing Greene:
One question you confront if you’re arguing for a single planetary moral philosophy: Which moral philosophy should we use? Greene humbly nominates his own. Actually, that’s a cheap shot. It’s true that Greene is a utilitarian—believing (to oversimplify a bit) that what’s moral is what maximizes overall human happiness. And it’s true that utilitarianism is his candidate for the global metamorality. But he didn’t make the choice impulsively, and there’s a pretty good case for it.
For starters, there are those trolley-problem brain scans. Recall that the people who opted for the utilitarian solution were less under the sway of the emotional parts of their brain than the people who resisted it. And isn’t emotion something we generally try to avoid when conflicting groups are hammering out an understanding they can live with?
The reason isn’t just that emotions can flare out of control. If groups are going to talk out their differences, they have to be able to, well, talk about them. And if the foundation of a moral intuition is just a feeling, there’s not much to talk about. This point was driven home by the psychologist Jonathan Haidt in an influential 2001 paper called “The Emotional Dog and Its Rational Tail” (which approvingly cited Greene’s then-new trolley-problem research). In arguing that our moral beliefs are grounded in feeling more than reason, Haidt documented “moral dumbfounding”—the difficulty people may have in explaining why exactly they believe that, say, homosexuality is wrong.
If everyone were a utilitarian, dumbfoundedness wouldn’t be a problem. No one would say things like “I don’t know, two guys having sex just seems … icky!” Rather, the different tribes would argue about which moral arrangements would create the most happiness. Sure, the arguments would get complicated, but at least they would rest ultimately on a single value everyone agrees is valuable: happiness.
Whenever I see someone arguing that “science” can tell us which moral framework to adopt, it sets my Spidey-sense tingling. Simply saying we should all be utilitarians dodges a bunch of important and contested philosophical questions, such as:
–What is “happiness” (or “utility”)? Is it just the net balance of pleasure over pain (as the founder of utilitarianism, Jeremy Bentham, thought)? Or does it include “higher,” more complex elements (as Bentham’s protégé and critic John Stuart Mill thought)?
–Assuming we can define happiness, can we quantify it in a way that allows us to determine which course of action in a given case will yield the most of it?
–Even if we can define and quantify happiness/utility, might there not be other things that are good and whose promotion should enter into our moral calculus? What about beauty? Truth? Should those always be subordinated to happiness when they conflict?
–Utilitarianism is a form of consequentialism. But can we know what the likely consequences of our actions are ahead of time? Can we even specify what counts as a consequence of a particular action with any precision?
Wright says that Greene studied philosophy, so presumably he knows this. And it’s not that utilitarians don’t have responses to these questions. But they don’t all agree among themselves on what the answers are. And these are properly philosophical questions, not questions that the natural sciences (including neuroscience) can answer in any straightforward way.
To Wright’s credit, he is skeptical of Greene’s advocacy of utilitarianism as a kind of “moral Esperanto.” And he notes that some of the most intractable conflicts in our world aren’t necessarily conflicts over ultimate values, but over facts. For instance, most Americans are, at best, dimly aware of our history of meddling in the internal politics of Iran, so they attribute Iranian mistrust of the U.S. to irrational animus or religious fanaticism. The problem is that we are all afflicted with a self-bias that inclines us to filter out facts that are inconvenient to our cause and that makes it difficult for us to view a situation from the perspective of our opponents. Christians would call this a manifestation of Original Sin.
UPDATE: At Siris, Brandon offers some thoughts on the Atlantic article and utilitarianism in general.
