Cultural moralities are diverse, contradictory, and even bizarre. Sometimes cultural moral norms forbid eating pigs, cutting one’s hair, or having homosexual sex; sometimes they do not. Women may be morally required to be submissive to men and, at least in the past, slaves have been morally required to be obedient to their masters. In modern western cultures, enforcing such ‘moral’ mandates about food, grooming, sex prohibitions, and exploitation of out-groups may be considered morally irrelevant or even viciously immoral.
In this apparent chaos, are any behaviors universally moral? From an evolutionist’s perspective, there are. “Do to others as you would have them do to you” summarizes morality for many people. It is a heuristic (a useful, though fallible, rule of thumb) for a cooperation strategy that we have good reasons to believe is universally moral.
This essay summarizes those good reasons.
To understand the evolutionary perspective on moral behavior, we must first understand what behaviors evolutionists study when studying morality. People commonly take moral behavior to mean behaviors motivated by their moral sense and advocated by their culture’s moral norms. They may also recognize that other people’s moral sense and cultural moralities can be very different. This suggests what evolutionists might usefully study: all behaviors motivated by our moral sense and advocated by past and present moral codes. Note that evolutionary science is applicable both to our moral sense, which has a biological basis with identifiable selection forces, and to cultural moral norms, which can also be understood as the product of identifiable selection forces.
An underlying principle for descriptively moral behaviors
Moral philosophers refer to cultural and individual moral codes as descriptively moral, meaning described as moral in one society but not necessarily considered moral in other societies. Following this convention, behaviors motivated by our moral sense and advocated by past and present cultural moral codes will be referred to as descriptively moral.
Collectively, these descriptively moral behaviors provide evolutionists with a wonderfully diverse, contradictory, and bizarre data set to be explained. Because of this diversity, evolutionists can be highly confident that 1) any hypothesis that explains the entire data set does not do so merely by chance and 2) it is unlikely there will be multiple hypotheses that explain the data equally well. This is just the sort of data set that enables coming to scientifically robust conclusions.
But some challenges immediately arise.
First, how do we identify cultural moral norms? How do we tell them apart from other cultural norms such as table manners or how to politely greet others? Intuitively, moral norms are cultural norms whose violation commonly incites the feeling that violators deserve punishment of some kind, though the violator may not actually be punished. Provisionally accepting this intuition both 1) enables a simple means of identifying cultural moral norms and 2) adds another aspect of morality to the large data set that must be explained. (We would have to re-examine how we identify moral norms if the leading hypothesis did not explain why our intuition is that violators deserve punishment. This turns out not to be a problem. The leading hypothesis shows that punishment of immoral behavior is central to maintaining the function of moral behavior and thereby central to morality as a natural phenomenon.)
A second challenge is that it is impossible to tabulate all past and present cultural moral norms. But we can at least test any hypothesis’ explanatory power for the most diverse, contradictory, and bizarre moral norms we know of from the past and present.
Third, the behaviors our moral sense motivates are personal experiences and generally not recorded. Fortunately, there is a stand-in data set about our moral sense. That data set is 1) the emotions cross-culturally produced by our moral sense (the emotions connected to the violation and upholding of moral norms) and 2) the categories of circumstances that cross-culturally trigger our moral sense to make moral judgments. For example, our moral emotions (Haidt 2003) are, cross-culturally, compassion, loyalty, gratitude, anger, disgust, contempt, shame, guilt, and ‘elevation’ (a mixture of pride and satisfaction). The circumstances that trigger these emotions (Graham 2012) are, again cross-culturally, perceived care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation.
A great deal of light could be shed on the murky subject of human morality if there were a scientific hypothesis that explained the whole of this superficially chaotic, combined data set.
There is such a hypothesis. One twelve-word hypothesis explains and makes sense of all the behaviors motivated by our moral sense and advocated by past and present moral codes. It explains what descriptively moral behavior ‘is’.
In the last forty years or so, there has been growing evidence (Gintis 2005, 2009; Nowak 2011, 2013; Curry 2015) that behaviors motivated by our moral sense and advocated by cultural moral codes were biologically and culturally selected for by the benefits of cooperation they produce in groups. We can express this as an underlying moral principle:
“Behaviors that increase the benefits of cooperation in groups are descriptively moral.”
Here, “descriptively moral” means morally admirable or morally acceptable in some society. Based on insights from game theory (see Gintis and Nowak as above), it follows that all behaviors that are descriptively moral are elements of cooperation strategies.
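The game-theoretic claim that moral behaviors are elements of cooperation strategies can be made concrete with a toy model. The sketch below is my illustration of a standard iterated prisoner’s dilemma result, not code from the cited texts; the payoff numbers and strategy names are the conventional textbook ones, chosen here as illustrative assumptions. It shows why conditional cooperation is a winning strategy: mutual cooperators far out-earn mutual defectors over repeated interactions, and a conditional cooperator limits its losses against a pure defector.

```python
# Iterated prisoner's dilemma with the standard payoffs:
# mutual cooperation earns 3 each, mutual defection 1 each, and a
# defector exploiting a cooperator earns 5 while the cooperator earns 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other for a fixed number of
    rounds; each strategy sees only the opponent's past moves.
    Returns the pair of total payoffs."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a = strategy_a(history_b)   # each strategy conditions only on
        b = strategy_b(history_a)   # what the opponent has done so far
        pay_a, pay_b = PAYOFF[(a, b)]
        total_a += pay_a
        total_b += pay_b
        history_a.append(a)
        history_b.append(b)
    return total_a, total_b

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # cooperate first, then mirror the opponent's last move
    return "C" if not opponent_moves or opponent_moves[-1] == "C" else "D"

# Over 10 rounds: two tit-for-tat players earn 30 each, two pure
# defectors only 10 each, and tit-for-tat loses just one round's
# worth against a pure defector (9 vs. 14).
```

The point of the sketch is only that, in repeated interactions, strategies that initiate and sustain cooperation dominate the alternatives, which is why selection can favor the motivations behind them.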
Summarizing, this principle explains all of the following as elements of cooperation strategies: 1) virtually all known past and present moral norms, no matter how diverse, contradictory, and bizarre, 2) our moral emotions (the emotions triggered by our moral sense), 3) the circumstances that trigger these moral emotions, and 4) our intuitive recognition of moral norms as norms whose violators deserve punishment.
Suggestions for past or present moral norms that are not explained as elements of cooperation strategies are always welcome. But before making suggestions, consider that there are three primary sources of the diversity, contradictions, and bizarreness of past and present moral codes. These are 1) diverse markers of membership in, or commitment to, an in-group, such as circumcision, food prohibitions (for example, pigs and shrimp), prohibitions against forms of sex (such as homosexuality), and clothing or hair style (such as not cutting hair), 2) different definitions of who is in favored in-groups, such as men, slave owners, or one racial group, and who is in less favored, or even exploited, out-groups, such as women, slaves, another race, homosexuals, or simply people who are not members of your family or tribe, and 3) emphasis on different cooperation strategies, such as kin altruism, direct reciprocity, indirect reciprocity, and cooperation in hierarchies. Also, as a check that the norm in question actually is a moral norm, ask: do violations commonly evoke the feeling that violators deserve punishment?
The above principle also explains the emotions cross-culturally triggered by our moral sense. Compassion, gratitude, and loyalty motivate initiating or maintaining cooperation within groups. Anger, disgust, and contempt motivate punishing – sometimes by exclusion and shunning – other people who do not reciprocate in response to cooperation or otherwise violate moral norms. Shame motivates concern for our reputations which game theory shows is central to the most powerful cooperation strategy known, indirect reciprocity. Guilt provides efficient internal punishment for our own violations of moral norms. (This internal punishment of moral norm violations was selected for in our ancestors because it avoids the adverse effects on cooperation possible with external punishment, such as risk of cycles of retribution.) And ‘elevation’, a mixture of satisfaction and pride, provides an innate psychological reward for cooperation within family, friends, and larger groups which can maintain group cohesion and contribute to our emotional experience of durable happiness, even when other rewards for cooperation are scarce.
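The role punishment plays in keeping cooperation profitable can be illustrated with a toy public-goods game, a standard model from experimental economics. The numbers below are illustrative assumptions of mine, not figures from the cited works: without punishment the free-rider out-earns every contributor, but once contributors can pay a small cost to fine the free-rider, free-riding no longer pays.

```python
def public_goods(contribs, endowment=10, multiplier=2.0,
                 fine=4.0, fine_cost=1.0, punish=False):
    """One round of a public-goods game: contributions are pooled,
    multiplied, and shared equally among all players. Optionally,
    every contributor pays fine_cost to levy a fine on each
    non-contributor."""
    share = sum(contribs) * multiplier / len(contribs)
    payoffs = [endowment - c + share for c in contribs]
    if punish:
        punishers = [i for i, c in enumerate(contribs) if c > 0]
        for i, c in enumerate(contribs):
            if c == 0:
                payoffs[i] -= fine * len(punishers)  # fined by each contributor
                for j in punishers:
                    payoffs[j] -= fine_cost          # punishing is itself costly
    return payoffs

# Three contributors and one free-rider:
print(public_goods([10, 10, 10, 0]))               # [15.0, 15.0, 15.0, 25.0]
print(public_goods([10, 10, 10, 0], punish=True))  # [14.0, 14.0, 14.0, 13.0]
```

Note that punishing is itself costly to the punishers, which is one reason innate emotions such as anger and contempt, rather than cold calculation, may be needed to motivate it.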
The specific circumstances that cross-culturally trigger moral judgments (Graham 2012) are also explained as elements of cooperation strategies. Our ability to trigger moral judgments when we detect circumstances of care/harm is just what is needed to maintain indirect reciprocity by motivating behaviors that either initiate ‘helping’ (cooperation) or punish harm. Game theory shows that fairness/cheating detection, which motivates either fairness or punishment of unfairness, is necessary for maintaining indirect reciprocity. Detection of loyalty/betrayal, authority/subversion, and sanctity/degradation motivates behaviors that maintain cooperation within in-groups (while generally decreasing cooperation with out-groups).
An underlying principle for universally moral behaviors
As described above, the apparent chaos of descriptively moral behaviors is made orderly by recognizing that its diversity, contradictions, and bizarreness are explained by “Behaviors that increase the benefits of cooperation in groups are descriptively moral”. Might there be a universally moral subset of these descriptively moral behaviors?
As also previously described, the vast majority of the diversity, contradiction, and bizarreness (the non-universality) of descriptively moral behaviors is due to the use of different markers of membership in and commitment to favored in-groups and different definitions of who is in favored in-groups and who is in disfavored and even exploited out-groups.
Since dividing people into in-groups and out-groups is the primary source of non-universality, one approach to a universal moral principle would be that all people must be treated equally. Everyone would be equally worthy of concern whether they were your children or someone on the other side of the earth you will never meet.
However, it is cross-culturally judged immoral not to show more concern for your own family, particularly immature offspring, than for others. Further, not showing more concern for friends and the larger communities you belong to than for others is disloyal and also commonly judged immoral. Prohibiting in-groups and out-groups does not produce a cross-culturally universal moral principle.
Further, a moral code that abolishes preferential treatment for in-groups would also be predictably disastrous for human welfare. If our obligations to everyone were equal, then the concentrated focus and effort needed to support family life and effective cooperation in groups would be impossible. Also, free-riders and exploiters, who both destroy the benefits of cooperation, would be much harder to suppress if there were no clear sub-group with moral responsibility for punishing bad behavior and no out-group to banish them to.
It is important to understand how effective dividing people into in-groups and out-groups is as a strategy for increasing the benefits of cooperation and, thereby, human well-being. It is effective in part because identifying people who are likely reliable cooperators, and who will be punished by the group if they exploit others’ cooperation, makes cooperation much less risky than randomly cooperating with whomever one comes in contact with. Dividing people into in-groups and out-groups is also effective at increasing the benefits of cooperation for the practical reason that framing moral behaviors as “in-group cooperation” can engage the biology underlying our moral sense. That biology was selected for by the benefits of in-group cooperation and can provide powerful biological motivation to behave morally. To maximize the benefits of cooperation, and thus human welfare, we want to search for cross-culturally universal moral principles that are consistent with the existence of in-groups and out-groups.
The most powerful cooperation strategy in game theory is arguably indirect reciprocity; see Chapter 2, Indirect Reciprocity—Power of Reputation, in (Nowak 2011) for a relevant description. Its core can be expressed as the heuristics “Do to others as you would have them do to you” and “Don’t do to others as you would not want them to do to you”. What if human society defined ‘moral’ levels of higher concern for in-groups to be consistent with versions of the Golden Rule? Could it be cross-culturally universally moral to treat people in out-groups (such as non-kin, other communities, and other countries) the way you want them to treat you, thereby defining a universal moral principle that allows favoring in-groups?
Adding a consistent-with-indirect-reciprocity criterion to the principle underlying what is merely descriptively moral, we have a trial universal moral principle:
“Behaviors that increase the benefits of cooperation in groups by fair means are universally moral.”
Here the more familiar term “fair means” refers to consistency with indirect reciprocity, which is commonly encoded in moral codes as its heuristics, versions of the Golden Rule.
So far as I know, this principle is cross-culturally universally moral.
But this moral principle is about more than what is universal in human cultures. It defines the subset of cooperation strategies in game theory for which between group interactions and kin altruism are consistent with indirect reciprocity. This moral principle is thus made fully internally consistent and as cross-species universal as the mathematics that define this unique subset of cooperation strategies.
To moral philosophers who think of morality as an intellectual construct, perhaps as answers to questions such as “What is good?”, “How should I live?”, and “What are my obligations?”, the above essay summarizing the derivation of a moral universal may be puzzling and even incoherent. They might respond “So what if you have identified a cross-culturally universal moral principle? You have not shown this is what morality ought to be. You have not shown that it is somehow binding on people regardless of their needs and preferences.”
That is all correct. Science cannot provide a source of innate moral bindingness or fully answer the above three questions. However, science can reveal what the function of our moral sense and moral codes ‘is’ – increasing the benefits of cooperation in groups. It also can reveal which behaviors that increase those benefits are universally moral. What is universally moral is also objectively moral, but it is a kind of moral objectivity that has no innate bindingness.
My follow-on essay, “Can science define morally right and wrong ‘means’ to unspecified ‘ends’?”, proposes how understanding what is universally moral can be culturally useful by providing an objective standard for judging moral and immoral behavior.
Curry, O. S. (in press). Morality as Cooperation: A problem-centred approach. In T. K. Shackelford & R. D. Hansen (Eds.), The Evolution of Morality. Springer.
Gintis, Herbert (2009). The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press.
Bowles, Samuel; Gintis, Herbert (2011). A Cooperative Species: Human Reciprocity and Its Evolution. Princeton University Press.
Graham, J., Haidt, J., et al. (2012). Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism. Available at http://ssrn.com/abstract=2184440
Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences. Oxford: Oxford University Press. (pp. 852-870).
Nowak, Martin; Highfield, Roger (2011). SuperCooperators. Simon & Schuster, Inc.
Nowak, Martin A.; Coakley, Sarah (Eds.) (2013). Evolution, Games, and God: The Principle of Cooperation. Harvard University Press.
Protagoras was the philosopher who patiently explained to Socrates that the function of our moral sense, and of morality, is to increase the benefits of cooperation in groups; see Plato’s dialogue of the same name. Socrates rejected Protagoras’ view, perhaps because it was too commonplace, because it was the common view among the people. That it was the common view did not seem to trouble Protagoras.