Joshua Greene and Moral Tribes

March 2, 2022

We often discuss individual morality and ethics on the show–how people should or should not behave on an interpersonal level. But what about groups of people? How should they make sense of their competing value systems? On today’s episode, we’re rebroadcasting a show we did with the neuroscientist Joshua Greene, who has an idea about how groups–what he calls modern tribes–should get along. He thinks people should develop something he calls a metamorality. And for him, the best contender for this metamorality is utilitarianism. He also describes how our brains make moral decisions–and why this matters when we’re thinking about morality amongst groups of people.

Contact us at examiningethics@gmail.com.

Follow us on Twitter @ExaminingEthics. Follow us on Instagram @examiningethicspodcast. You can also find us on Facebook.

The full episode transcript appears below.

Show Notes:

Thanks to Evelyn Brosius for our logo. Featured image, “Village de Bourgogne” is by Jeanne Menjoulet and can be found here.

  1. “Thannoid” (1 minute variation) by Blue Dot Sessions
    From sessions.blue
    CC BY-NC 4.0
  2. “Inamorata” by Blue Dot Sessions
    From sessions.blue
    CC BY-NC 4.0


Transcription:

Joshua Greene and Moral Tribes


{music begins}

Christiane Wisehart, producer: I’m Christiane Wisehart. And this is Examining Ethics, brought to you by the Janet Prindle Institute for Ethics at DePauw University.

We often discuss individual morality and ethics on the show–how people should or should not behave on an interpersonal level. But what about groups of people? How should they make sense of their competing value systems? On today’s show, we’re talking to Joshua Greene, who has an idea about how groups–what he calls modern tribes–should get along. He thinks people should develop something he calls a metamorality. And for him, the best contender for this metamorality is utilitarianism.

Joshua Greene: Part of what is attractive about this philosophy, although I don’t think that’s what makes it ultimately the right way to go, is that it doesn’t require a lot of fancy philosophizing. That it’s basically saying we should be trying to eliminate suffering, as much of it as possible. And everyone’s suffering counts the same and we should be trying to create a world in which people are happier and everyone’s happiness counts the same.

Christiane: Stay tuned for my interview with Joshua Greene on today’s episode of Examining Ethics.

{music ends}

Christiane: Our guest today thinks that humans’ moral landscape is a lot more complicated than it was a few millennia ago. In the past, morality meant working out right and wrong within smaller groups. Now we have to face problems on a global scale, which means that entire value systems get pitted against one another. Whether we’re talking about climate change or health care, it can be difficult to even find common ground on the issues, let alone resolve them. On today’s show, we’re talking to Joshua Greene, a professor of psychology at Harvard University and a member of the Harvard Center for Brain Science faculty. We’re discussing his work on how our brains process morality, and how we might solve the problem of scaling up moral reasoning.

In his book Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Joshua argues that we can understand how our brains make moral decisions by thinking of them like a fancy camera. So on a good camera, you’re going to find two main settings: point-and-shoot, and manual mode.

Joshua Greene: The reason why the camera has these two different ways of taking photos is that it enables you to navigate this trade-off between efficiency or reliability on the one hand and flexibility on the other. So, the point-and-shoot settings, they’re, you know, very good, easy to use. You’ll get the result you want, uh, most of the time. But they’re not very flexible. Whereas the manual mode is totally flexible. You can do anything with it, including everything you could do with your point-and-shoot settings and a lot more. But, it’s a little trickier. You have to know what you’re doing, and you want to understand, you know, what is the situation that I’m in? What effect am I trying to achieve and how do I achieve that effect, given the situation? And this sort of point-and-shoot versus manual mode, I think, mirrors the basic design of human decision-making, where we have intuitions about what’s right or wrong or good or bad. And then we also have an ability to reason about things. And to say, okay, well, this is what I want to achieve, this is what’s most important to me and these are my options and this is what I believe will happen if I do this or if I do that. Um, and that’s manual mode thinking.

Christiane: There will be some situations where it might be hard for you to decide which mode to use. Joshua explained that there’s a similar tension in our brains between our gut feelings and intuitions, which are sort of like the point-and-shoot mode, and our rationalizing, thinking mode, which is sort of like a manual setting. You can see this tension come alive by thinking about a version of the trolley problem. So the trolley problem is a famous moral dilemma where you think through different scenarios of how to save people on some trolley tracks.

Joshua Greene: A trolley is headed towards five people and they’ll die if- if you don’t do anything. And you’re on this footbridge over the tracks, in between the oncoming trolley and the five. And next to you is this big person. Let’s say it’s a big person with a big backpack. The only way you can save those five people is to push the guy with the big backpack onto the tracks and he’ll get hit by the trolley and die but the five people will be saved. And no, you cannot jump yourself and yes, this will definitely work. You’ve been to the movies. You know how to suspend disbelief. Even if you assume that this is the only way to save them and it will definitely work, many people say that it would be wrong to push the guy off the footbridge in order to save those five people.

Contrast this case with a case which we sometimes call the switch case, where the trolley’s headed towards five people and to save them, all you have to do is turn the trolley onto a side track. But there’s one person on the side track, so that person will be killed. In both cases, it’s trading one life for five, but in the footbridge case, people have a very strong sense that it’s wrong to do this and people are very reluctant. Whereas, in the switch case people are relatively comfortable saying, okay, it’s better to hit the switch so that only one person dies instead of five people.

So, what’s going on there? Well, now after a decade and a half of behavioral and neuroscientific research, we have a pretty clear idea of what’s going on. That when you encounter the footbridge case, you have an emotional response to that act of pushing. And you can see that response in a part of your brain called the amygdala, which is a kind of emotional alarm center. Same kind of response that you would see if you saw a snake or some other threatening thing. So you have that amygdala response and that sends a signal forward to part of your brain that’s involved in weighing different signals and making decisions. In another part of your brain, called the dorsolateral prefrontal cortex, you’ll see a cognitive representation related to applying that cost-benefit reasoning. And this part of your brain is the part of your brain that’s involved in the kind of thinking that we think of as thinking.

So, you’ve got your manual mode part of the brain saying, “Five lives for one. That sounds reasonable.” And you have your amygdala saying, “Aha! Pushing people, terrible.” Right? And then those things both converge on a part of the brain called the ventromedial prefrontal cortex, which weighs those signals against each other and gives a kind of all-things-considered judgment. 

If people have brain damage that knocks out part of the ventromedial prefrontal cortex, the place where these signals converge, those people, if they can’t rely on those signals at all, then they just default to a kind of decision rule. And these patients with this kind of damage are much more likely to say that it’s okay to push the guy off the footbridge and things like that. By contrast, people who have damage to a part of the brain called the hippocampus, which is important not only for memory but for imagining and understanding situations and linking actions to consequences, people who have that kind of damage, their response is more emotional. They don’t really have the sort of full model, the full manual mode ability to think about the whole scenario, so they just kind of react to the action itself, and they’re more likely to say that it’s wrong. So we’ve sort of zeroed in on kind of different pieces of this puzzle and how these different types of signals weigh against each other when we respond to a moral dilemma.

Christiane: Our brains don’t just switch from the manual to the automatic settings, though.

Joshua Greene: When you encounter this dilemma, you’re in both modes. Your amygdala is going, “Ahh, don’t push the guy!” But your dorsolateral prefrontal cortex is saying, “Well, five lives for one. Isn’t it better to have more people alive and fewer people dead?” And then both of those signals are converging on your ventromedial prefrontal cortex and it’s kind of weighing them against each other. So, that is the normal, healthy state, is having both and weighing them.

Christiane: So we’ve been talking about how our brains process moral decisions. This way of processing morality evolved while we were trying to figure out how to cooperate in our small family groups, or tribes of people. Joshua explained that we can picture this with a parable called the “tragedy of the commons.”

Joshua Greene: The story of the tragedy of the commons comes from an ecologist named Garrett Hardin. It illustrates the fundamental problem of cooperation and really, I think, the fundamental problem of social life. So, in- in Hardin’s parable it goes like this. You have a bunch of herders who share a common pasture and they’re raising their flocks of sheep on the, on the pasture and we assume that these are rational, self-interested herders and they say to themselves every so often, “Hmm. Should I add more animals to my flock?” And they think, “Well, good to have more animals when I go to market. More money for me. What’s the downside? Well, there’s not much downside. They’re just grazing on this shared pasture, so sure, I’ll add more animals.” But all the herders have the same thought and they add more and more animals. And then at some point, there’s so many animals grazing on the commons, they eat up all the grass down to the roots and there’s no more food for anyone and all the animals die. And that’s the tragedy of the commons.

What makes it tragic, is that everybody is acting in their own self-interest and yet everybody ends up worse off. Right? And the problem is that while you may do what’s good for you, if it depletes the resources that other people need, and everybody’s doing the same thing, then everybody will end up being worse off. So, the fundamental problem, I think, of social existence, is managing this cooperation problem, that if you want the commons to survive, people can’t just be about “me.” They also have to think about “us.” So, the solution to the tragedy of the commons, in the most general sense, is morality. That is, morality is a set of external or internalized norms and expectations that impel us to not be completely selfish and to care about the interests of others.

Christiane: Over time, different tribes will develop their own systems of morality to solve the tragedy of the commons. Joshua’s work centers on what happens when these tribes try to solve moral problems between them. Let’s say the tribes all discover a new patch of land. Who gets to decide what is right and wrong in this new stretch of the commons? 

Joshua Greene: Is it going to be a big, collectivist meta-society or is it going to follow the individualist, free market principles of one of the tribes? Are they all going to pray to one of the tribe’s gods? Is that religion going to take over, or are there going to be many different religions that are all tolerated? Or is there gonna be an all-out war, where one tribe is just going to destroy the others and conquer whoever’s left? This is the modern moral problem, where the basic moral problem is life within a tribe. It’s me versus us and that basic tragedy of the commons of my self-interest versus the interests of others. The modern problem is about tribes living together. Right? And the tribes have different values and different interests and the question is, you know, what does it take to get a bunch of tribes with their own moral codes and ideals and expectations to get along with one another. And so the tragedy of commonsense morality, the idea is that every tribe has its own version of common sense. But it’s not as common as they think. It’s common within their tribe, but it’s not common to all of the tribes. And so, we have this natural tendency to say, “Okay, our tribe knows how humans ought to live” and wants to impose its system on everybody else. But of course, everybody else is not necessarily so happy about that.

Christiane: Joshua argues that in order to solve this problem of figuring out morality between huge groups of people, we need to figure out something he calls a “metamorality.”

Joshua Greene: So, the idea is that a moral system, a basic morality for a tribe, is a system that’s embedded in people’s psychology and their, and their feelings, that enables otherwise selfish individuals to not be completely selfish and- and therefore be cooperative and allows the group to survive and flourish. So, then, you know, it naturally raises the question, okay, if a morality is what a group of individuals need in order to live together, what do a group of groups need in order to live together? And metamorality is my term for the answer to that question. That is, a metamorality is a system for governing competing moralities, for enabling tribes to get along with each other in the modern world just as individuals get along with each other within a tribe.

Christiane: So remember that tension we have in our brains between the automatic and manual modes when it comes to morality? That’s going to factor into how we think about morality between tribes.

Joshua Greene: So, I think the idea is that our intuitions, our instincts are not bad. But in the modern world, where we have different tribes with different intuitions about healthcare or about abortion or about whether or not it’s okay to be gay or trans, whatever it is, we can’t rely on our intuitions, because we have competing, conflicting intuitions. Right? So, the solution there can’t be to just go with your gut if our tribal guts are telling us different things in different tribes. So, what’s the solution? What I would say is if you find yourself disagreeing with some other person, some other tribe member, that’s when you need to say, “Okay. Well, at least one of us has a faulty gut. Right? We can’t both be right.” So, we need to think about this in a more manual mode kind of way. And more specifically, I think we should be asking ourselves questions like, “You want this, I want this. Based on evidence, not just what we want to be true but actual evidence, what is more likely to lead to an overall happier outcome?”

The idea is not that our moral intuitions are always wrong. On the contrary, probably, they’re probably pretty good most of the time for most of the situations that we encounter. But when we disagree with each other, and disagree with each other in a gut-level way, then the only productive thing to do is to try to step away from our guts and say, “Okay, well, can- can we work this out in terms of a, you know, a more rational attempt to produce good consequences?”

Christiane (on tape): You know, I work at an ethics institute, which has the unfortunate side-effect of me seeing a lot of people coming through and just being, not here obviously, but…but in a sense sometimes an ethics education can give people the language they need to rationalize-

Joshua Greene: Mm-hmm (affirmative).

Christiane (on tape): … their, you know, their deep-seated beliefs.

Joshua Greene: Yes. Yeah.

Christiane (on tape): And so, couldn’t you say that somebody’s calling it manual mode or pragmatism, but they’re just, you know, gathering facts to bolster their case?

Joshua Greene: I think this is what, uh, we do a lot, if not most, of the time with our manual modes. Right? But the idea is not that any sort of manual mode thinking is good. It’s, what are you doing with it? Are you using your manual mode just to justify what you already believe intuitively? In that case, you might as well just go with your gut and just say, you know, pound your fist on the table and say, “Because I said so.” Right? That’s not helping. It only makes sense to use your manual mode if you’re going to make a case based on evidence that would have some chance of convincing a comparably reasonable person on the other side.

Christiane: Joshua argues that the moral philosophers John Stuart Mill and Jeremy Bentham provide a good example of how to use the manual mode.

Joshua Greene: I think they actually got it right, that they really are using their manual modes and you can tell because the positions that they took back in the 19th century turned out to be ones that only later our intuitions caught up with. So, Bentham was one of the first people to defend what we now call gay rights, which is remarkable. This was in the 18th century. He basically said, “Look. I feel like being gay feels wrong to me. But when I think about it, who are they harming? If they’re not harming themselves and they’re not harming other people, then what’s the problem?” And, you know, with that willingness to detach from his emotions and actually say, “What’s the bad consequence here,” in my view at least, he jumped ahead sort of two centuries in moral thought. And he and Mill were way ahead of their time on issues like slavery and women’s rights and animal rights and workers’ rights, and none of it framed in the language of rights. All of it framed in pragmatic terms, in terms of producing better consequences and reducing suffering and increasing human happiness. I think that that’s using your manual mode the right way, but just, you know, coming up with fancy arguments to justify what you already believe in your gut, that’s the opposite of useful.

Christiane: In fact, Mill and Bentham are more than just good examples of people using their manual modes properly. Joshua argues that their school of philosophy, utilitarianism, is the best candidate we have for that metamorality we were discussing earlier.

Joshua Greene: So, I actually think that part of what is attractive about this philosophy, although I don’t think that’s what makes it ultimately the right way to go, is that it doesn’t require a lot of fancy philosophizing. That it’s basically saying we should be trying to eliminate suffering, as much of it as possible. And everyone’s suffering counts the same and we should be trying to create a world in which people are happier and everyone’s happiness counts the same. That’s it. That’s the whole shebang. Right? And everything else is trying to get over the obstacles to doing that and to gather the information we need to make those judgment calls well. That’s it.

Part of why ethicists don’t like utilitarianism is that it’s so simple. If it were right, it would put them out of a job. That there’s not much more to say about it, that if you’re a utilitarian, most of the hard work is keeping yourself honest and gathering the facts. Right? Whereas, it makes moral philosophy a lot more exciting as a kind of armchair discipline if you think that there are these complicated moral truths out there that you have to discern with your big philosopher brain.

Christiane: Utilitarianism is the school of thought that says that our morality should be guided by outcomes. So what counts as moral is anything that promotes the greater good and eliminates as much suffering and harm as possible. While Joshua believes that utilitarianism is a great contender for our metamorality, he’d prefer to think of it as “deep pragmatism.”

Christiane (on tape): Why do you not like the term “utilitarianism”?

Joshua Greene: Where do I start, right? So, first of all, you know, things that are utilitarian, it makes you think of things like the laundry room and parking garages and stuff like that. Things that are mundanely functional. But that’s not, by any means, the only thing that matters, according to utilitarianism. It’s anything that affects the lived experience of our lives. Right? So, you know, all of the delightful, frivolous, fun things or not so frivolous fun things, I mean, that counts too. And then if you call it happiness, then that goes to the other extreme and then you get what I call the “my favorite things” understanding of utilitarianism. You know, “Raindrops on roses, whiskers on kittens. Uh, bright, copper kettles and warm, woolen mittens.”

Those are nice, uh, but that’s not everything. Right? And it’s kind of shallow to think that the most morally important thing is maximizing, uh, you know, raindrops on roses.

And then there’s the misunderstanding of utilitarianism as a decision procedure, that we should all be going around with our spreadsheets adding up costs and benefits all the time. But utilitarianism doesn’t tell you how to promote the greater good. It says that the greater good is what ultimately matters. Another common misunderstanding is that anyone who says it’s all for the greater good must be believed and followed. Chairman Mao coming along and saying, “Well, if you’re gonna make an omelet, you’ve gotta break some eggs. It’s all for the grand vision of our society and if that means lots of people suffer and die, then that’s fine because it’s all for the greater good.” Well, but was he right? (laughs) Was it all for the greater good? Would we have been wise to line up behind him? Just because you can attempt to justify something in the name of the greater good, doesn’t mean that it is actually justified in the name of the greater good.

So, the list of common misconceptions about utilitarianism … not just common, but natural misconceptions goes on and on. I prefer to think of it as deep pragmatism because I think that’s- that’s really what it amounts to. You are taking the world as it actually is and using those methods that are most likely to be effective in terms of your own thinking, in terms of the kinds of policies that you choose to promote. That’s what utilitarianism really looks like when it’s fully realized. But almost no one who, you know, reads the- the introductory textbook gets the right idea.

Christiane: I certainly didn’t have the right idea about utilitarianism going into my conversation with Joshua. It’s not my favorite school of moral philosophy. But even if you were to convince me that utilitarianism is the right metamorality as opposed to deontology or virtue ethics, there’s still a question about why society needs a metamorality in the first place. I mean after all, if something is moral on, say, a family level, shouldn’t it be right no matter how many people are involved?

Joshua Greene: If we were to really scale up family values to the nation, that would basically be going towards, you know, the Elizabeth Warren “we’re all in this together” view and beyond. Right? What do you do if your child needs healthcare and you can afford it? You provide healthcare for your child. You give your child everything you can. What do you do for your child’s education if you have the resources? You give everything you can. Right? If the United States government treated all US citizens the way parents with resources treat their children, uh, we’d be living in a very different political world when it comes to things like education and healthcare. So, in many, many ways, simply scaling up family values to the country as a whole would be a radical leftward shift in our politics. But it’s not simple. Right? So, whatever it is, it’s not simply a matter of, you know, just taking the morality that you apply within your family where people really care about each other and scaling it up.

Christiane (on tape): I kind of dig the world that you just laid out.

Joshua Greene: Okay.

Christiane (on tape): Right? I understand that it’s radically different. And I’m- I’m gonna admit to one of my biases, which is that I tend to approach things through a lens of care ethics. And so I’m wondering, why is that a bad thing to- to have that be the kind of … Why is care ethics not a great thing to scale up?

Joshua Greene: Oh, I’m not saying that it would be. Right?

Christiane (on tape): Okay.

Joshua Greene: I mean, it can’t literally be based on personal feelings, because we’re not capable of having personal feelings of commitment to 300 million people the way we are to a small number of people in our families. Psychologically it’s not possible to scale that up. But the idea that, you know, we try to take care of everybody, even if it’s not done with the same interpersonal feelings? That, I think, is certainly possible, that we can move much more in that direction. I mean, that’s essentially what the more, you know, socially-oriented social democracies of Europe are like. You know, most notably the Scandinavian countries like Denmark and Norway and Sweden, where incomes are high and education is good and people report being happier than anywhere else in the world.

Christiane (on tape): The metamorality that you advocate for is utilitarianism. I don’t know. To me that seems like it might be a little bit at odds with the ethics of care, or is it?

Joshua Greene: Well, one is about process and the other is about outcomes. So, what utilitarianism says is that the best world is the world in which people are as happy as possible and there’s as little suffering as possible, that it measures the quality of an outcome of a world in terms of the total sum of happiness over suffering, experienced well-being. Right? So, what a utilitarian would say is Norway and Sweden and Denmark are doing very well. There’s not a lot of suffering there and there are a lot of people who are very content with their lives. That’s a measure of the outcome. And then there’s the question of, “Okay. How do we get there and what should our psychological approach be?” Right? And that’s an open empirical question. That’s a question about what works.

Now, if it turns out that what works is fostering empathy for other people, you know, who are not just your friends and your family members, but people you don’t know, then okay. Maybe that’s the best way to do it. Um, or if it’s to sort of say, “What would I do if I really cared about this person the way I care about my child?” And then you say, “Well, let’s do that.” Even if you don’t have the same feelings for that person that you have for your child, you try to replicate the policy without necessarily the same psychology. Okay, well, that might be another approach. I mean, utilitarianism is a benchmark. It’s a standard of value. By what standard should we judge what’s good and what’s bad and what we should be aiming for? But it leaves open the question of what’s the best way to get there.

Christiane: Joshua makes a compelling argument for why we need a metamorality. There has to be a bigger moral code that can help us sort out issues like abortion and health care. But the question I want to end on is, even if we all agree we need a metamorality, who gets to decide what that looks like?

Joshua Greene: What I would hope … and, you know, I’m not very optimistic about this in the short term, maybe more in the long term … is that we would come to a kind of consensus in the same way that we’re slowly coming to consensuses about other things. Our philosophies have changed, they have evolved, but they haven’t evolved what I would consider all the way. Um, ten steps forward, nine steps back, but I think these things take a long time. Um, but that’s what I would have in mind, is that it would be just a natural evolution in the same way that we’ve evolved on particular issues, like scientific questions like where did human beings come from, and also moral questions like, is slavery morally acceptable.

{music begins}

Christiane: There’s a lot more to Joshua Greene’s book Moral Tribes that we didn’t get to on this show. If you want to know more about it, check out our show notes page at examiningethics.org.

Eleanor Price, producer: Let us know what you think of today’s show. Record a voice memo on your phone and email it to us at examiningethics@gmail.com. Be sure to include your first name and where you’re from. Or, if you’re shy about recording your voice, send us an email with your thoughts and we’ll share it with our listeners.

Remember to subscribe to get new episodes of the show wherever you get your podcasts. But regardless of where you subscribe, please be sure to rate us on Apple Podcasts–it helps us get new listeners, and it’s still the best way to get our show out there.

For updates about the podcast, interesting links and more follow us on Twitter: @examiningethics. We’re also on Instagram: @examiningethicspodcast and Facebook.

Credits: Examining Ethics is hosted by the Janet Prindle Institute for Ethics at DePauw University. Eleanor Price and Christiane Wisehart wrote and produced the show. Our logo was created by Evie Brosius. Our music is by Blue Dot Sessions and can be found online at www.sessions.blue. Examining Ethics is made possible by the generous support of DePauw Alumni, friends of the Prindle Institute, and you the listeners. Thank you for your support.

The views expressed here are the opinions of the individual speakers alone. They do not represent the position of DePauw University or the Prindle Institute for Ethics.

{music fades}