Morality and the Brain: A Tale about Science, and Ourselves
by Lorenzo Lazzerini-Ospri
An out-of-control trolley is heading toward a group of five people, who are going to be killed by it unless you hit a switch to divert the trolley to a side track. The problem is, an unsuspecting repairman is working on the side track, and he is the one who is going to be killed if you hit the switch.
So, would you kill one man in order to save five?
If you're like most people, you would go ahead and make a grievous sacrifice for the greater good. Philosophers call this a consequentialist moral choice, because you base your judgment solely on the consequences of your decision and you do a quick cost-benefit analysis to maximize utility: five lives saved are more than one, thus hitting the switch is the right thing to do.
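The cost-benefit arithmetic behind the consequentialist choice can be made explicit. A minimal sketch (the function and action names here are illustrative, not from any study) reduces the decision to picking the action that saves the most lives:

```python
def consequentialist_choice(outcomes):
    """Pick the action whose outcome saves the most lives.

    `outcomes` maps each available action to the number of
    lives saved if that action is taken.
    """
    return max(outcomes, key=outcomes.get)

# The trolley dilemma, reduced to its utility calculation:
# hitting the switch saves five; doing nothing saves one.
choice = consequentialist_choice({"hit switch": 5, "do nothing": 1})
print(choice)  # → hit switch
```

The point of the sketch is what it leaves out: the rule consults only outcomes, with no term for how the outcome is brought about, which is exactly where the footbridge variation below will pull intuitions in the opposite direction.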
Is the prototypical variety of moral intuition in humans consequentialist, since most people concur on the consequentialist solution to the trolley dilemma? You might be tempted to conclude so, but consider now this slight variation on the theme:
You are walking on a bridge, when suddenly you notice an out-of-control trolley hurtling by beneath you, headed toward a group of five people, who are surely going to be killed by the impact. Right next to you stands a very fat man. The only way you have to save the five is to shove this unsuspecting stranger off the bridge and onto the tracks, so that his large body would stop the trolley.
Would you kill one man in order to save five?
If you're like most people, this time you would conclude it is not morally permissible to kill the fat man. This is called a deontological moral choice: you value adherence to rules and abstract principles (e.g., “killing a human being is always wrong”) above any consideration for costs and consequences.
Humans generally give opposite answers to the trolley and footbridge dilemmas, despite the two scenarios being logically the same. Some people have taken the widely divergent variety of ethical opinions to signify the absence of objective moral truths or, at the very least, our inability to discover them the same way we discover objective facts about the natural world (i.e., through science). Indeed, the mainstream view even among scientists has been, ever since the times of Hume, that there are two basic, discrete entities in the world: facts and values. Science's extraordinary powers of discovery, they argue, are limited to the former.
Opinions might be about to undergo a momentous change, as neurobiology progresses past its inchoate status as a fledgling science. After all, morality exists only in the context of conscious minds interacting in society, and conscious minds are the product of very much science-accessible brains. Are values merely a special kind of fact, subject to scientific scrutiny?
Some researchers with an inquiring mind of their own wanted to find out what happens if you stick a human contemplating a moral dilemma in an fMRI scanner. What they found is that the brain regions lighting up during the task depend on the type of moral question, and on the type of moral opinion eventually ventured by the subject.
When pondering impersonal moral problems like the trolley dilemma (where no direct, personal interaction is necessary: you only have to hit a switch), the regions getting activated are mostly the same as those involved in non-moral decision making. Personal moral dilemmas like the footbridge scenario (where you need to physically push another person standing next to you) are significantly different: regions associated with emotional processing, like the amygdala, medial frontal gyrus, precuneus, and temporo-parietal junction, start glaring white hot, while areas associated with abstract cognitive processing, like the right dorsolateral prefrontal cortex (dlPFC) and inferior parietal lobe, are not quite as active.
The apparently contradictory answers given by most people to moral-personal and moral-impersonal dilemmas are explained by the differential activation of the emotional system of the brain: killing a man by shoving him off a bridge evokes a stronger emotional response than killing him by hitting a switch, and this emotional salience steers most people to the deontological choice.
This conclusion is further validated by a Stroop-like effect detected in people giving the counterintuitive, consequentialist answer to moral-personal dilemmas. The Stroop effect refers to the longer reaction time people need to categorize a stimulus along one dimension while an interfering cue is present in another (e.g., naming the ink color of the word "red" printed in green ink). People concluding that it is permissible to shove one man off the bridge to save five have significantly higher reaction times than people giving the deontological answer. This suggests the emotional system activated by the moral dilemma is involved in the decision making process, not merely collateral to it.
In the same way that cognitive load (e.g., having to retain a series of numbers in your working memory) interferes with decisions that require delaying immediate gratification to secure a greater gain later, it also increases reaction times in moral-personal dilemmas, but only for consequentialist judgments. In such cases an associated activation of the anterior cingulate cortex can be observed, just as in the classic Stroop effect.
A number of scientific findings so far point to the conclusion that human morality is the product of two brain processes: a domain-specific emotional system, and a domain-general abstract-reasoning system. Evolutionarily, this makes sense: primates are social animals, and primate societies must be fairly stable even in the absence of extensive abstract-reasoning abilities. Direct, physical interactions were the only domain relevant for group stability and survival in our primate ancestors, therefore evolution molded our brain so that harmful personal interactions (e.g., shoving a man off a bridge) would automatically evoke aversive emotional responses in the perpetrator.
This forms the basis of the deontological moral compass most people instinctually follow during personal social interactions, but in modern times the same inhibitory emotional component must be overcome, via the independently evolved abstract-reasoning faculty, by people who opt for the consequentialist course of action. Indeed, as predicted, people making consequentialist decisions have higher activity in the anterior dlPFC, posterior cingulate, and areas of the temporal cortex also associated with abstract reasoning and impersonal, economic decisions. Deontological decision-makers, on the other hand, have higher activity in the anterior insula, an area associated with reflection about internal states and emotions (especially disgust).
It is ironic, in a twisted sort of way, that the philosophy of Immanuel Kant, the most eminent theorist of deontological ethics, would turn out to be grounded in a vestigial emotional system of our primate brain, rather than being the noble figment of exclusively human practical reason.
But the general conclusion, the most inspiring and significant one, is that science for the first time appears to have direct bearing on our moral reflections, on questions about values and justice. In a still very much demon-haunted world, consumed by seemingly irreconcilable ideological divisions, the fact that our only Candle in the dark might be a guide not only to nature, but also to our own soul, is not something to be feared, but to be rejoiced at.