Is there any rational justification for a person to accept the burdens of acting according to enforced norms defined by “Altruistic acts that also increase the benefits of cooperation in groups are moral”?
First, we will likely agree on this: just because enforced social moral standards have, as a matter of science, a function that is the primary reason they exist, it does not follow that this function ought to be the basis of social moralities regardless of people’s needs and preferences. A descriptive fact cannot by itself imply imperative oughts, the kind David Hume warned against deriving from facts: oughts that are binding regardless of people’s needs and preferences. Perhaps someday a clever moral philosopher will conclusively justify the reality of imperative oughts, ‘magic’ oughts as I think of them, but no one has done so to date, and I expect no one ever will. So far as I know, reality just doesn’t work that way.
Fortunately, there is still ample rational justification for accepting the burdens of Evolutionary Morality as an instrumental ought in order to meet some overriding goal such as durable well-being over a lifetime.
In the big picture, accepting the burdens of Evolutionary Morality can be rationally justified based on 1) the expected increase in the synergistic benefits of cooperation of all kinds and 2) inherent psychological rewards, both immediate and as part of our sense of durable well-being, or happiness, a common overriding instrumental goal. These immediate psychological rewards and much of our sense of durable well-being originally evolved in our ancestors as the chief means of motivating people to cooperate in groups, but are now available to us mainly as important benefits of altruistic cooperation.
Social morality, properly understood, is a biological and cultural adaptation for increasing benefits. It is not just a set of obligations better avoided.
Perhaps, but how about when an individual expects that acting morally will not be in their best interests even after taking these benefits into account?
Is there a rational justification for, almost always, accepting the burdens of Evolutionary Morality even when an individual expects that acting morally will not be in their best interests?
This would be a highly useful characteristic of a social morality and, fortunately, it is a characteristic of Evolutionary Morality.
The fundamental reason it has this characteristic is that people’s ability to predict which action will best meet their overriding goals in the long term is poor. We cannot know all the relevant information, and even if we could, we would often be unable to predict the future accurately. This inability is partly due to our competing, and often self-defeating, selfish inclinations and partly due to our brain’s computational limitations.
Understanding all this makes it likely to be in our best interests to, almost always, accept the burdens of acting morally even when, in the heat of the moment of decision, we expect doing so will not be.
Which do you think is more likely to turn out well? Going with the wisdom of the ages (representing forces whose benefits shaped your inner being) or going with your personal, confused perceptions in the heat of the moment of decision?
Well sure, going with the “wisdom of the ages” sounds like it should almost always be the better choice. But what is that wisdom?
Is Evolutionary Morality really the wisest of the social wisdom of the ages?
I explain my claim of overriding cultural utility for Evolutionary Morality as follows. By “overriding cultural utility”, I mean it is the most useful available defining principle for culturally enforced moral standards. Here utility is judged by how well these enforced moral standards can be expected to meet people’s common needs and preferences.
First, let’s size up the available secular competition.
Compact moral principles that have been suggested for this role of defining enforced cultural norms include “Do unto others as you would have them do unto you”, versions of Utilitarianism (such as act to most increase the happiness of the most people), Kant’s categorical imperatives (such as act only according to rules you would advocate as universal), “maximize universal agency” (maximize the freedom and ability of all people to act and accomplish goals as they think best), all pro-social acts (voluntary behavior intended to benefit other people or society as a whole, such as helping, sharing, donating, cooperating, and volunteering), and egoism (such as act only in ways that you expect will increase your personal well-being).
Of course, these compact definitions of morality are not the only alternatives. Grab-bag collections of moral claims such as “Whatever one’s culture defines as moral” are obviously culturally useful, and “The sum of mainstream moral philosophy throughout the ages” might be advocated as the most culturally useful foundation for choosing which moral standards will be enforced. (Note that my definition of usefulness is utility in choosing which norms will be enforced in a culture.)
Many readers are likely to be as familiar as I am with the shortcomings of the above candidates.
But briefly, some, like the Golden Rule, are inadequate as ultimate arbiters. For example, is it really a moral requirement to always act according to the Golden Rule when dealing with criminals and in times of war? Some lack generally accepted rational justification for accepting their burdens (Utilitarianism and Kant’s categorical imperatives). Some, such as egoism, emphasize a point of view that is unlikely to meet the needs and preferences of mentally normal people because, in social animals like us, these needs and preferences evolved in large part to motivate altruistic cooperation in groups. And some, such as the semi-random collections of moral claims from different cultures or those generated by mainstream moral philosophy through the ages, are incapable of providing a rational basis for resolving disputes about what is moral or justifications for accepting morality’s burdens.
In contrast, “Altruistic acts that also increase the benefits of cooperation in groups are moral” captures, as empirical fact, the universal core of enforced moral standards (the topic most non-philosophers are interested in when discussing morality). It is not a fallible heuristic for morality as is the Golden Rule, “preserve life”, “act charitably toward all”, or the product of some philosopher’s ponderings about “What is good?” or “What are our obligations?” (which may have no objective answers).
So the competition is not impressive. But Evolutionary Morality is not just marginally better than these alternatives. None of them are even remotely competitive as instrumental choices for cultural moralities.
The principal attractive aspects of Altruistic Cooperation’s definition of morality include the following:
1) Acting morally is a means of increasing benefits, in particular psychological benefits, not just a source of burdensome obligations.
2) Accepting its burdens is uniquely able to elicit positive moral emotions and our experience of durable well-being (durable happiness), because the biology that produces these emotions was selected for in our ancestors as the chief means of motivating cooperation in groups.
3) It matches existing moral intuitions (except when those intuitions favor the Dark Side of morality: exploitation of other groups) better than any alternative.
4) It is objectively universal in the maximum possible sense, meaning it is not just independent of culture, but independent of species, and even independent of biology (choosing culturally enforced norms according to it would arguably be a reasonable choice even for hypothetical societies of intelligent computers).
5) It leaves undefined, for the most part, what benefits of cooperation people should seek, thus avoiding the troublesome problem of trying to define what is good. The only limit on what benefits can be morally sought is that, to be logically consistent, “It is immoral to seek benefits that decrease the future benefits of cooperation”.
Evolutionary Morality’s attractiveness is key to its cultural utility, but that utility can be rationally justified only as an instrumental ought. For example: “If you desire to increase your durable well-being, then, based on facts about human psychology and from game theory, you ought (instrumental) to accept, almost always, the burdens of Evolutionary Morality”.
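As an aside, the game-theoretic point can be illustrated with a minimal iterated Prisoner’s Dilemma simulation. This sketch is only an illustration, not anything specified above; the payoff values and the two strategies (tit-for-tat as a stand-in for reliable cooperation, always-defect as a stand-in for short-term egoism) are standard conventions from the game theory literature.

```python
# Payoff to a player given (my_move, their_move), using the standard
# Prisoner's Dilemma values: temptation 5 > reward 3 > punishment 1 > sucker 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds):
    """Play repeated rounds; each strategy sees only the other's last move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both treated as having opened cooperatively
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last  # cooperate first, then mirror
always_defect = lambda their_last: "D"       # pure short-term egoism

coop_score, _ = play(tit_for_tat, tit_for_tat, 100)
defect_score, _ = play(always_defect, always_defect, 100)
print(coop_score, defect_score)  # prints 300 100
```

Over 100 rounds, a pair of reliable cooperators each earns 300 points while a pair of defectors each earns 100, even though defection pays more in any single encounter. That asymmetry is the game-theoretic core of the instrumental ought above.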
Of course, some moral philosopher may someday conclusively show, contrary to my expectations, that imperative oughts actually exist that are somehow binding on people regardless of our needs and preferences. Or perhaps someone may suggest an alternative definition of morality that is a more attractive instrumental choice, perhaps to satisfy some more overriding desire than “durable well-being over a lifetime”. In either of these cases, Evolutionary Morality could become irrelevant.
However, until either of these events occurs (and they seem unlikely to me) Evolutionary Morality appears to be the most culturally useful definition of social morality available.
Even though I have repeatedly pointed out that Evolutionary Morality entails, on its own, no imperative oughts, it will, if adopted, inevitably acquire both imperative ought characteristics (binding regardless of personal needs and preferences) and emotional oughts (which can feel binding regardless of personal needs and preferences).
The sources of these two kinds of bindingness are shared with all cultural moralities. The group enforcing a norm usually puts it forward as binding regardless of needs and preferences, and is ready to enforce that as required. A group deciding which cultural norms to enforce gives those norms imperative force: “You ought to conform to them regardless of your needs and preferences”.
Further, “emotional oughts” are due to human biology’s remarkable ability to incorporate whatever morality we practice into our moral intuitions. Once incorporated into our moral intuitions, our intuition’s emotional motivating power can sometimes overpower rational thought and self-interest.
Due to the evolved match between Evolutionary Morality and human biology, it seems likely that its motivating power will be enhanced more by these imperative characteristics, from group enforcement and our own biology, than the motivating power of more randomly assembled moral standards. That is, people in cultures adopting Evolutionary Morality may feel, on average, a stronger sense of the bindingness of their moral standards than individuals in cultures with more randomly assembled moral standards.