Is morality just a matter of cultural convention, or are some acts right or wrong regardless of anyone’s opinion? And why should we act morally when we don’t want to?
Moral philosophers have recorded their attempts at answering these two questions for almost 2,500 years. To date, they have not arrived at definitive answers. Science from roughly the last forty years may now enable us to answer both largely, though not completely. This essay summarizes how science can answer these questions and describes how those answers can be culturally useful for resolving moral disputes and increasing human well-being.
But how can science tell us anything about what we ought to do if science can only tell us what ‘is’, not what ‘ought’ to be?
Given an ultimate goal for enforcing a moral code, for example increasing “flourishing” (however that might be defined), there is nothing preventing science from using its insights into human psychology, cooperation, and game theory to inform us what moral code will be most likely to actually do that. From a purely rational standpoint, using science to inform us what moral norms are most likely to achieve a given ultimate goal should be uncontroversial.
But what can science usefully tell us about morality if there is no agreement on morality’s ultimate ‘end’, which is the case we actually face? This is an important question because not only do we lack such agreement now, there is no evidence there will ever be one. Fortunately, science can tell us quite a lot about what is right and wrong even without any agreement on ultimate ‘ends’.
The companion post “Moral universals from an evolutionist’s perspective” argues science reveals there is a universal moral principle:
“Behaviors that increase the benefits of cooperation in groups by fair means are universally moral”.
Here, “by fair means” refers to consistency with indirect reciprocity, which can be substantially represented by the heuristics (usually reliable but fallible rules of thumb) “Do to others as you would have them do to you” and “Don’t do to others as you would not have them do to you”. The word “moral” refers to morally admirable or permissible behavior. Also, the principle defines a uniquely self-consistent subset of cooperation strategies from game theory. The principle is therefore not just cross-culturally universal but as cross-species universal as the mathematics that underlie its strategy set.
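The “subset of cooperation strategies from game theory” referred to above can be made concrete with a toy simulation. Below is a minimal, hypothetical Python sketch (not from the essay) of a “standing”-style indirect-reciprocity model: agents donate only to partners with a good public reputation, and reputation tracks past cooperation. The population split, payoff values, and scoring rule are all simplifying assumptions made for illustration.

```python
import random

random.seed(0)

N = 50                     # population size (assumed)
ROUNDS = 5000              # number of one-shot donation encounters
BENEFIT, COST = 3.0, 1.0   # payoff to recipient, cost to donor (assumed)

# Half the population are discriminators (indirect reciprocators),
# half are unconditional defectors.
strategy = ["discriminator" if i < N // 2 else "defector" for i in range(N)]
reputation = [0] * N       # public image score; >= 0 counts as good standing
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    if strategy[donor] == "discriminator" and reputation[recipient] >= 0:
        # Donating is costly to the donor, helps the recipient,
        # and raises the donor's public reputation.
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
        reputation[donor] += 1
    elif strategy[donor] == "defector" and reputation[recipient] >= 0:
        # Refusing to help a partner in good standing lowers reputation.
        reputation[donor] -= 1
    # A discriminator's refusal to help a bad-standing partner is
    # "justified" and leaves its reputation unchanged.

disc = [payoff[i] for i in range(N) if strategy[i] == "discriminator"]
defe = [payoff[i] for i in range(N) if strategy[i] == "defector"]
print(f"avg payoff: discriminators {sum(disc)/len(disc):.1f}, "
      f"defectors {sum(defe)/len(defe):.1f}")
```

In this sketch, defectors quickly fall into bad standing and are shut out of donations, so discriminators end up with higher average payoffs, illustrating in miniature how reputation-based cooperation can increase the benefits of cooperation in groups.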
Note that this moral principle defines moral ‘means’ to achieve goals. It is silent on what those goals ought to be except that they are benefits of cooperation. But simply knowing moral ‘means’ must be consistent with a universally moral principle can be surprisingly useful in resolving moral disputes as the following examples are meant to illustrate.
The following norms are heuristics for universally (and hence objectively) moral cooperation strategies:
The Golden Rule, “Do not kill”, and human rights claims – All are fallible heuristics for elements of indirect reciprocity, an objectively moral ‘means’. See the discussion below of circumstances when slavishly following The Golden Rule, “Do not kill”, and perhaps even human rights claims can become objectively immoral.
Favoritism for family (kin altruism), friends, and your community consistent with indirect reciprocity – These are objectively moral since, by definition, they are consistent with indirect reciprocity. Further, given our evolutionary history, preferential cooperation with family and friends can be particularly effective in increasing the benefits of cooperation, and thus human well-being, because we are biologically motivated to unselfishly cooperate in these groups.
Objectively immoral claims include:
Claims that women are morally obligated to be submissive to men and that homosexual behavior is immoral – Objectively immoral, since the between-group interactions they prescribe violate indirect reciprocity.
Claims that 1) individuals are morally obligated to accept a large burden so a large number of people can gain a small benefit (as Utilitarianism might require) and 2) one must always tell the truth even to a murderer looking for his victim (as Kantianism might require) – Objectively immoral because such behavior is either inconsistent with indirect reciprocity or can be expected to reduce the benefits of cooperation in groups.
Claims we ought to follow the Golden Rule and “Do not kill” and perhaps even human rights claims even when doing so decreases the benefits of cooperation (as sometimes is the case for the Golden Rule and “Do not kill” when dealing with criminals and in time of war) – Objectively immoral because the benefits of cooperation in groups would be reduced.
Category error claims that are neither moral nor immoral:
“Cultivate mental tranquility” – This norm might be important to human flourishing, but it is not a moral norm whose violators deserve punishment. Therefore, calling it a ‘moral’ norm (as virtue ethics might) commits a category error, because it is not in the same category as behaviors motivated by our moral sense and advocated by past and present moral codes.
Moral claims that may be neither objectively moral nor immoral cooperation strategies:
Prohibitions against abortion and euthanasia – These are descriptively moral if they are consistent with one of the culture’s preferred ultimate goals, such as “Preserve life at all costs”. Of course, if the culture’s preferred ultimate goals were more along the lines of “Respect individual autonomy”, then these norms would be descriptively immoral. The universal moral principle tells us nothing about the objective morality of such norms unless they increase or decrease the benefits of cooperation.
Enforcement of animal rights – Can be descriptively moral if it is consistent with one of the culture’s preferred ultimate goals such as “Respect all conscious animals”. Again, the universal moral principle tells us nothing about the objective morality of animal rights claims unless they increase or decrease the benefits of cooperation.
These examples illustrate that science can largely answer the first question, about which norms are objectively right and wrong. Moral norms consistent with this universal moral principle define objectively right means of achieving goals. ‘Moral’ norms inconsistent with it are either 1) objectively immoral means of achieving goals, 2) about a non-moral category of behavior, or 3) norms whose normativity (rightness) depends on a society’s ultimate goals, not on the means of achieving those goals.
“Why should we act morally?”
Acting morally (even when we don’t want to) can be expected to increase the benefits of cooperation and, by doing so, to increase, on average and in the long term, our emotional experience of durable well-being. If we desire that durable well-being, then we ought to act morally. Of course, we could say, “OK, I’ll act morally except when I think my long-term well-being will be improved by acting immorally.” But predicting the future is almost always difficult, and predicting what will increase our emotional well-being in the long term is particularly so. Which should we rely on: our often ill-informed opinion in a rushed, heat-of-the-moment decision, or the wisdom of the ages?
It is surprising to me how much this moral principle from science can tell us about objectively right and wrong behavior. And conveniently, it tells us so much even when people cannot agree on what the ultimate goal of moral behavior ought to be.
What can this science not tell us about morality?
This science cannot provide any source of innate bindingness, such as philosophers commonly seek, that could tell us a moral ‘means’ for achieving our goals is the means we ‘ought’ to use. (As a practical matter, the main sources of bindingness for a science-based morality will be 1) harmony with the biology underlying our moral sense, which already has elements of these cooperation strategies encoded in it, and 2) the society that decides to enforce it as part of its moral code.)
Also, this science cannot fully answer important philosophical questions like “What is good?”, “How ought I live?”, or “What are my obligations?” To fully answer these questions we must still look outside of science.