There have been two nice pieces recently on the helping instinct: “The Moral Instinct” in the NYT Magazine (by Harvard’s Steven Pinker) and a video talk by Daniel Goleman.
The Moral Instinct
Steven Pinker notes that three individuals have done a lot of good: Mother Teresa (through her work with the poor in India), Bill Gates (through his philanthropy around diseases and technology) and Norman Borlaug (whom most people have never heard of, but who ushered in a green revolution in agricultural productivity that has saved a billion lives in the developing world). Pinker observes that our moral views of these individuals are clouded by how much we know about them: Mother Teresa is imbued with a saintlike halo, while Bill Gates is tainted by his profit-making motive. Our judgments ignore the fact that Mother Teresa often advocated suffering and treated people in harsh conditions: her missions offered “their sick patrons…plenty of prayer but harsh conditions, few analgesics and dangerously primitive medical care.”
The article highlights how brain scanners, game theory, evolutionary biology and neuroscience are being used to understand how we make moral judgments.
Pinker notes that moral judgment requires a neural switch that forces us to classify actions as “immoral (‘killing is wrong’), rather than merely disagreeable (‘I hate brussels sprouts’), unfashionable (‘bell-bottoms are out’) or imprudent (‘don’t scratch mosquito bites’).” It requires us to have a universal sense of right and wrong. Pinker says it also requires thinking that the immoral actor deserves punishment, although, while I agree that is a common reaction, it is not essential to seeing an action as immoral.
“The psychologist Paul Rozin has studied the toggle switch by comparing two kinds of people who engage in the same behavior but with different switch settings. Health vegetarians avoid meat for practical reasons, like lowering cholesterol and avoiding toxins. Moral vegetarians avoid meat for ethical reasons: to avoid complicity in the suffering of animals. By investigating their feelings about meat-eating, Rozin showed that the moral motive sets off a cascade of opinions. Moral vegetarians are more likely to treat meat as a contaminant — they refuse, for example, to eat a bowl of soup into which a drop of beef broth has fallen. They are more likely to think that other people ought to be vegetarians, and are more likely to imbue their dietary habits with other virtues, like believing that meat avoidance makes people less aggressive and bestial.”

Actions can take on different moral tones over time: smoking has been moralized from being largely a personal decision, while many other behaviors, like divorce, being a working mother, using pot, or being gay, have been amoralized. And Pinker notes that the moralization of smoking is not just a question of the harm done to others, since other behaviors that increase the risk to others, like not changing the batteries in smoke alarms or going on a driving vacation, are not moralized.
Examples developed by the psychologist Jonathan Haidt show that we typically form a gut feeling about whether something is moral or immoral and then struggle to defend our position. For example, most people decide it is immoral for a woman cleaning out her closet to cut her old, now unwanted, American flag into pieces and use the rags to clean her bathroom. In this case, as in many others, it is difficult to argue that anyone has been hurt.
And scholars think that many of our standards of immorality developed for evolutionary reasons. We think it is okay to pull a lever to divert a speeding, conductorless trolley onto another track where it kills one innocent person rather than the five it would kill on its current course, but not okay to throw a fat man off a bridge in front of the trolley to save those five, even though both choices result in one innocent person being killed rather than five. Joshua Greene, a philosopher and cognitive neuroscientist, believes that we evolved norms against manhandling innocent people. He and his colleagues also tested this with brain scanners, observing a conflict between the brain’s emotional regions and its rational lobes. “When people pondered the dilemmas that required killing someone with their bare hands, several networks in their brains lighted up. One, which included the medial (inward-facing) parts of the frontal lobes, has been implicated in emotions about other people. A second, the dorsolateral (upper and outer-facing) surface of the frontal lobes, has been implicated in ongoing mental computation (including nonmoral reasoning, like deciding whether to get somewhere by plane or train). And a third region, the anterior cingulate cortex (an evolutionarily ancient strip lying at the base of the inner surface of each cerebral hemisphere), registers a conflict between an urge coming from one part of the brain and an advisory coming from another.
“But when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge. Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.”
This research points toward a universal moral sense that emerges early in childhood, before empathy is taught. “Four-year-olds say that it is not O.K. to wear pajamas to school (a convention) and also not O.K. to hit a little girl for no reason (a moral principle). But when asked whether these actions would be O.K. if the teacher allowed them, most of the children said that wearing pajamas would now be fine but that hitting a little girl would still not be.”

People almost universally across the globe value things that might seem akin to social capital: “a sense of fairness: that one should reciprocate favors, reward benefactors and punish cheaters. They value loyalty to a group, sharing and solidarity among its members and conformity to its norms.” They also believe in deference to legitimate power, respecting high-status people, and avoiding “defilement, contamination and carnality.” Researchers have lumped these into five elements of a “periodic table” of morality: harm, fairness, community (or group loyalty), authority and purity, the second and third of which are closely connected to social capital. Pinker notes that groups across the world can vary on morality depending on the relative weights they attach to these five elements.
The subjects of fairness and community/loyalty “match up with the classic examples of how altruism can evolve that were worked out by sociobiologists in the 1960s and 1970s and made famous by Richard Dawkins in his book The Selfish Gene. Fairness is very close to what scientists call reciprocal altruism, where a willingness to be nice to others can evolve as long as the favor helps the recipient more than it costs the giver and the recipient returns the favor when fortunes reverse. The analysis makes it sound as if reciprocal altruism comes out of a robotlike calculation, but in fact Robert Trivers, the biologist who devised the theory, argued that it is implemented in the brain as a suite of moral emotions. Sympathy prompts a person to offer the first favor, particularly to someone in need for whom it would go the furthest. Anger protects a person against cheaters who accept a favor without reciprocating, by impelling him to punish the ingrate or sever the relationship. Gratitude impels a beneficiary to reward those who helped him in the past. Guilt prompts a cheater in danger of being found out to repair the relationship by redressing the misdeed and advertising that he will behave better in the future (consistent with Mencken’s definition of conscience as ”the inner voice which warns us that someone might be looking”). Many experiments on who helps whom, who likes whom, who punishes whom and who feels guilty about what have confirmed these predictions.”

“Community, the very different emotion that prompts people to share and sacrifice without an expectation of payback, may be rooted in nepotistic altruism, the empathy and solidarity we feel toward our relatives (and which evolved because any gene that pushed an organism to aid a relative would have helped copies of itself sitting inside that relative). In humans, of course, communal feelings can be lavished on nonrelatives as well.
“Sometimes it pays people (in an evolutionary sense) to love their companions because their interests are yoked, like spouses with common children, in-laws with common relatives, friends with common tastes or allies with common enemies. And sometimes it doesn’t pay them at all, but their kinship-detectors have been tricked into treating their groupmates as if they were relatives by tactics like kinship metaphors (blood brothers, fraternities, the fatherland), origin myths, communal meals and other bonding rituals.
“Unfortunately, the meme of the selfish gene escaped from popular biology books and mutated into the idea that organisms (including people) are ruthlessly self-serving. And this doesn’t follow. Genes are not a reservoir of our dark unconscious wishes. ”Selfish” genes are perfectly compatible with selfless organisms, because a gene’s metaphorical goal of selfishly replicating itself can be implemented by wiring up the brain of the organism to do unselfish things, like being nice to relatives or doing good deeds for needy strangers. When a mother stays up all night comforting a sick child, the genes that endowed her with that tenderness were ‘selfish’ in a metaphorical sense, but by no stretch of the imagination is she being selfish.
“Nor does reciprocal altruism — the evolutionary rationale behind fairness — imply that people do good deeds in the cynical expectation of repayment down the line. We all know of unrequited good deeds, like tipping a waitress in a city you will never visit again and falling on a grenade to save platoonmates. These bursts of goodness are not as anomalous to a biologist as they might appear.”
Trivers showed that even individuals who wish to promote reciprocity must develop some system (he proposed tit-for-tat) for being reciprocal without being preyed upon or perennially cheated. He theorized that in a reciprocal society people would compete to be the most generous partners, since a reputation for fairness and generosity would spread and make others want to cooperate with them. Favor receivers, in turn, would have to sort out puffery (claims of having done huge favors) from reality. But Trivers also hypothesized an ecological niche for stingy reciprocators, who gain fewer trading partners but give less on each transaction, and for cheaters, who exploit the gains from one-off transactions where there is no expectation of repeat play.
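Tit-for-tat is simple enough to sketch in a few lines. Here is a minimal illustration in Python, using the standard iterated prisoner’s dilemma payoffs (the specific numbers are a conventional assumption, not from the article): a tit-for-tat player is cheated at most once by a pure cheater, then protects itself, while two reciprocators do better together than a cheater does against either.

```python
# Minimal sketch of tit-for-tat reciprocity in an iterated prisoner's
# dilemma. Payoff numbers are the conventional illustrative values.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's previous move."""
    return history[-1] if history else "C"

def always_defect(history):
    """A cheater who never reciprocates."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return total payoffs for each player over repeated rounds."""
    hist_a, hist_b = [], []   # each side's record of the OTHER side's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual reciprocity
print(play(tit_for_tat, always_defect))  # (9, 14): cheated once, then protected
```

Over ten rounds, two reciprocators each earn 30, while the cheater extracts only a one-round windfall before the reciprocator stops cooperating, which is the sense in which tit-for-tat lets reciprocity persist without being perennially exploited.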
Whether this morality is God-given is unknown, although Pinker notes that the rules of morality have to be symmetrical: we can’t advocate a system that constantly privileges me over you. That is why the notion of “interchangeability of perspectives” keeps reappearing in “the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance. It also underlies Peter Singer’s theory of the Expanding Circle — the optimistic proposal that our moral sense, though shaped by evolution to overvalue self, kin and clan, can propel us on a path of moral progress, as our reasoning forces us to generalize it to larger and larger circles of sentient beings.”
The Pinker article is available here.
Relatedly, Daniel Goleman asks what impels us to act morally. There is a nice talk by Goleman at TED (link below) on empathy and the good Samaritan, with a good summary here.
Goleman hypothesizes, based on an incident with a homeless man in NYC and the fact that we are wired neurologically to empathize, that all empathy requires is that we notice others’ needs and “tune in” rather than “tune out.” He is optimistic because of the outpouring of help that followed once he himself noticed this Hispanic homeless man.
But Goleman doesn’t discuss why we fail to notice. Some of it surely is that we train ourselves not to notice. It may also be a self-protection mechanism against the feeling that there are too many needs out there that we cannot satisfy, or a competition between our desire to meet the needs of family and friends and the needs of strangers, or the fact that humans don’t want to feel taken advantage of. We under-serve the homeless, for example, because the pain of being tricked into helping someone who feigns homelessness or injury outweighs the joy of helping another (even if 9 out of 10 beggars are legitimate).
Daniel Goleman TED talk available here.