Are many of our moral judgments hard-wired by evolution? Moral cognition labs at top universities around the world are studying this question. Subjects are presented with moral dilemmas to solve, like the famous runaway trolley problem: a trolley breaks loose and is about to kill five people standing on the tracks. You can save them by pulling a switch to divert the trolley to a track where only one person would die. Should you pull the switch? About 90 percent of volunteers, regardless of ethnicity, social status, or religious or moral views, and including children over 8, say they would pull the switch.
But what if you can save the five only by pushing a very large man in front of the trolley to slow its progress? Only about 30 percent say that is moral, although the outcome is the same – one killed, five saved. Would it be more moral if you dropped the large person from a trap door rather than pushing him? About 60 percent say yes. What explains these differences?
To find an answer, neuroscientists used fMRI to image the areas of the brain that activate as subjects solved the moral dilemmas. The data show that brain activity in those who objected to killing one to save five was predominantly in areas associated with emotion; the more physical and direct the killing, the greater the activation. By contrast, those who found that saving five overrides killing one, or who reported weighing the consequences, tended to show more activity, of longer duration, in brain areas where reasoning occurs.
Another moral dilemma, with even greater emotional pull, was devised to elicit this interplay between emotion and reason in the brain. Subjects were asked to consider whether they would smother their own crying baby to save a group from detection by enemy soldiers who would execute them all if discovered. A surprising 53 percent found smothering the baby moral. Their brain scans lit up in a neural wrestling match between emotional and reasoning areas.
The fact that these brain patterns occur regardless of ethnicity, social status, age over eight, and moral or religious upbringing suggested to the scientists that the emotional responses are hard-wired from evolution but may be overridden by reasoning processes.
We are increasingly learning that some evolutionary predispositions, such as favoring kin and clan and maximizing the spread of one’s genes, are at times counterproductive to a civilized world and a happy life. Similarly, the moral dilemma studies suggest that automated judgments derived from our evolutionary past, like racism, sexism and homophobia, may not be reliable guides in the modern world, and that moral progress may critically depend on use of reason to weed out instinctive norms that do not advance the greatest good or are unjust.
Moral dilemmas also illustrate another important insight: how hard it is to justify the view that all moral problems have a uniquely correct answer.
Can we really say with confidence that someone who would choose to pull the switch or smother the baby would be wrong? Can we say it would be wrong to save five critically ill people by harvesting the organs of a medically and legally certified brain-dead man without his or his guardian’s consent? Or can we say so confidently that a person who lies to save his or her life from an unjust death has acted wrongly? (In the classic play “The Crucible,” the protagonist refuses to lie and is condemned to death.)
The recent findings in neuroscience suggest that it would be wise to apply a precautionary principle to difficult moral judgments and to restrain our impulse to impose those judgments on others.
Brian Faller, a local attorney, is a member of The Olympian’s Board of Contributors. He can be reached at brian email@example.com.