1. Didn’t evolutionary morality justify eugenics and Hitler’s racial superiority theories?
2. What are your premises?
3. How can “anything that increases reproductive fitness” be moral?
4. So morality is whatever meets my needs and preferences?
Counting responses given in other venues, the above questions have been asked multiple times!
1. Didn’t evolutionary morality justify eugenics and Hitler’s racial superiority theories?
In the past, some misguided people, relying on bad science and even worse moral philosophy, have made such claims. However, more careful science and even introductory moral philosophy show that such justifications are intellectual nonsense; they have no basis, and never had any basis, in reality.
In recent times, a comparable false claim is that understanding morality as an evolutionary adaptation means morality can have no objective basis, since science, and evolution, can only tell us what is, not what we must do regardless of our needs and preferences. It is true that science cannot tell us what we ought to do, but that is irrelevant to justifications for moral behavior based on instrumental oughts of the form “Based on facts X, you ought (instrumental) to act according to morality Y if you desire Z, perhaps increased durable well-being over your lifetime”.
It turns out that understanding morality as a biological and cultural evolutionary adaptation leads inevitably to the conclusion that there is an objective underlying basis for morality, not that there is none. Further, that basis is consistent with what modern moral intuitions generally regard as morally admirable.
An eye is an evolutionary adaptation, and there is certainly an underlying principle of optics common to all eyes, regardless of the many forms eyes take. Similarly, enforced moral standards in different cultures take many forms, but virtually all of them are heuristics for a single underlying objective principle, something like “Altruistic acts that increase the benefits of cooperation in groups are moral”.
The above moral principle underlies both our biological adaptations that motivate altruism, such as empathy and loyalty, and virtually all past and present enforced cultural norms (moral standards that advocate altruism), no matter how diverse, contradictory, or bizarre those norms may be.
Understanding morality as an evolutionary adaptation provides many wonderful insights. For example, “Do unto others as you would have them do unto you” is a compact, very useful admonition to act according to a winning game theory strategy called indirect reciprocity. Indirect reciprocity has been shown to spontaneously appear and be maintained in evolutionary game theory experiments (under certain conditions common for people) using evolving computer programs. No gods are required, just the power of evolution.
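As a rough illustration of the kind of experiment being referred to, here is a minimal Python sketch of an “image scoring” model of indirect reciprocity, loosely in the spirit of Nowak and Sigmund’s evolutionary game theory work. All the parameter values, names, and the reproduction scheme below are my own illustrative assumptions, not the specific evolving programs the post refers to.

```python
# Minimal sketch of an image-scoring (indirect reciprocity) simulation.
# Agents earn a reputation ("image") by helping others; strategies that
# condition helping on reputation can spread without any direct payback.
# Parameter values are illustrative assumptions only.

import random
from collections import Counter

POP = 100          # population size
GENERATIONS = 200  # evolutionary generations
ROUNDS = 300       # donor/recipient interactions per generation
BENEFIT, COST = 1.0, 0.1
MUTATION = 0.01    # chance a child gets a random strategy

def new_agent(strategy=None):
    # Strategy k: cooperate with a recipient whose image score >= k.
    # k = -5 means "always cooperate"; k = 6 means "always defect".
    k = strategy if strategy is not None else random.randint(-5, 6)
    return {"k": k, "image": 0, "payoff": 0.0}

def play_generation(pop):
    for _ in range(ROUNDS):
        donor, recipient = random.sample(pop, 2)
        if recipient["image"] >= donor["k"]:          # donor chooses to help
            donor["payoff"] -= COST
            recipient["payoff"] += BENEFIT
            donor["image"] = min(donor["image"] + 1, 5)
        else:                                         # donor refuses to help
            donor["image"] = max(donor["image"] - 1, -5)

def reproduce(pop):
    # Payoff-proportional selection (shifted so all weights are positive),
    # with occasional mutation to a random strategy.
    floor = min(a["payoff"] for a in pop)
    weights = [a["payoff"] - floor + 0.01 for a in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    return [new_agent(None if random.random() < MUTATION else p["k"])
            for p in parents]

population = [new_agent() for _ in range(POP)]
for _ in range(GENERATIONS):
    play_generation(population)
    population = reproduce(population)

# Show which strategies survived; discriminating thresholds that help
# agents with a decent reputation typically come to dominate.
print(Counter(a["k"] for a in population).most_common(5))
```

In toy models like this, strategies that help only agents with a good enough reputation tend to take over the population, which is the sense in which indirect reciprocity can “spontaneously appear and be maintained” by evolution alone.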
The utility of this evolutionary understanding of the Golden Rule is that it becomes trivial to define, for example, when it is immoral to follow the Golden Rule, as is sometimes the case when dealing with criminals or in time of war. As is almost universally accepted as a practical matter in all cultures, it is immoral to follow the Golden Rule when doing so is expected to decrease, rather than increase, the benefits of cooperation in groups.
2. What are your premises?
Altruistic Cooperation morality has no premises in the moral philosophy sense. Altruistic Cooperation morality (in the form of the function of morality, the primary reason it exists) is derived from the inductive arguments of science. The truth of the universal function of morality is the normal provisional truth of science, based mainly on 1) no contradiction with known descriptive facts, 2) explanatory power for known facts and puzzles about morality, and 3) unity with the rest of science.
3. How can “anything that increases reproductive fitness” be moral?
It is not. The universal principle is “Altruistic acts that increase the benefits of cooperation in groups are moral”. That, plus its corollary “Acts, such as exploitation of other groups, that reduce the benefits of cooperation between groups are immoral”, defines Altruistic Cooperation morality.
It is true that our biology-based moral emotions, such as empathy, loyalty, and guilt, were selected for by the increased reproductive fitness they provided our ancestors. However, that increased reproductive fitness did not come from “anything”. It came only from “Altruistic acts that increase the benefits of cooperation in groups”.
Further, enforced cultural norms (which define social morality) are the products of cultural evolution. Enforced cultural norms can be selected for based on whatever benefits of cooperation in groups people find attractive or beneficial, which may or may not include reproductive fitness.
4. So morality is whatever meets my needs and preferences?
No. Altruistic Cooperation morality is defined by “Altruistic acts that increase the benefits of cooperation in groups are moral” and its corollary “Acts, such as exploitation of other groups, that reduce the benefits of cooperation between groups are immoral”.
The rational justifications for groups enforcing such a morality, and for individuals accepting its burdens, are instrumental oughts justified by meeting overriding needs and preferences. First, the group’s overriding needs and preferences determine which norms will be enforced; second, the individual’s needs and preferences determine whether accepting the burdens of the group’s enforced norms is rationally justified.
I have always understood “altruism” to be acting for the benefit of another with no thought to personal values, almost like “sacrifice”. As I study your work I read the word “altruism” but see the concept of trade cooperation. Could you define “altruism”?
As you noted elsewhere that you saw in the glossary, I felt forced to define “altruism” as something like “Acting at a cost to one’s self, benefiting others, and without consideration of future benefits to one’s self”. I had to do that in order to make moral sense of discussions of altruism related to morality. “Altruism” is a word coined (with an unfortunate definition) around 1850 by the philosopher Auguste Comte.
As an example of the problem with Comte’s definition, imagine you expected that over the course of your lifetime you would benefit from consistently following “Do to others as you would have them do to you”. Then, by Comte’s definition, no action you took in following the Golden Rule could be considered altruistic, which is nuts. My definition solves that problem.
And in addition to making sense relative to morally admirable altruism, I think my definition also fits the word’s common cultural meaning.
I found your glossary; please ignore the prior message, and sorry to take up your time.
I think you have done a good job eliminating supernatural sources from morality, and coming up with an objective definition of the types of acts that do (or don’t) count as moral. I don’t see an objective outcome from these acts, though. The term “benefits of cooperation” is a subjective term until you tell us what is objectively a benefit or a harm. If you say it’s “whatever the group decides”, then you have slid back into relativism. What if every human on the planet agreed that there would be a benefit to living in a world with no…let’s say mosquitoes, or birds, or bats…and we all cooperated to meet that goal? Would that be moral? According to your definitions above, I think you think it would be. Perhaps you can elaborate on why not if you don’t think so.
Ed, good question.
It recently became clearer to me that we can treat this knowledge about the evolutionary function of moral behavior just like any other useful knowledge in science.
If our society has an ultimate goal (moral end) for enforcing a moral code, then this science can inform us as to how to revise our moral code (moral means) to be most likely to achieve that goal.
Revealing the most effective ‘means’ to a specified ‘end’ is science’s bread and butter. The science of morality is no different.
So what is moral? Science says behaviors that increase the benefits of cooperation are moral. Perhaps your society’s ultimate goals for enforcing moral codes are overall well-being achieved in a manner consistent with fairness. It is then not too complicated to define a moral code most likely to achieve those goals by more cleverly engaging the moral emotions that motivate pro-social behavior.
Do you want science to define your society’s ultimate goals? You are out of luck.
Do you want science to help define a moral code that is most likely to achieve those goals? That science can do.
Is anything absolutely immoral? Yes, exploitation of other people’s efforts at cooperation.
You didn’t answer my question about dealing with life forms outside of the human realm. Maybe it’s easier to deal with a single real question rather than a set of hypotheticals, so try this one: Was it moral when humans cooperated to wipe out the dodo? They got to eat more while doing so. They probably all agreed that was a benefit. Clearly we think the answer is ‘no’ today, but I don’t see how your rules for morality would justify that ‘no’ unless you think we were exploiting the dodo, in which case how do you justify eating anything? I’m poking holes where I think you are missing something vital. Literally…vitality.
I almost agree that science cannot define our goals; it can’t define many of our secondary goals, but there is one ultimate goal that must objectively be met, and that is the survival of life. Were life to go extinct, there would be no more questions of benefit and harm. This scientific fact of existence makes survival the ultimate benefit and extinction the ultimate harm. What we do with that existence is up to us, but all choices of morality do have to satisfy that one goal. The threat to that goal that comes from wiping out dodos is how I explain the ‘no’ to my question above. What is your answer?
Ed, the full answer about whether it is moral to cooperate to exterminate other species (or to do anything else) depends on the answers to two separate questions. (This insight from science, that all moral questions have two implied components, whether the ‘means’ are moral and whether the ‘ends’ are moral, perhaps deserves a blog post of its own.)
1) Were moral ‘means’ (cooperation strategies) used to exterminate the animals? I’ll assume yes. So the ‘means’ to the end were “evolutionarily moral”. They fulfill the evolutionary function of morality, the primary reason morality exists.
2) Was exterminating other species a morally acceptable consequence of some other ultimate goal such as survival of your society? Science is necessarily silent on this value judgment about secondary consequences of achieving a chosen ultimate goal of a society.
Judging whether the ‘means’ of an act are moral by whether or not they fulfill the evolutionary function of morality is an important and useful form of moral absolutism. For example, exploitation of others’ efforts to cooperate is immoral in an absolute sense.
But the ultimate goals of morality are, so far as science is concerned, relative only to the society’s goals. There may be no ‘truth’ regarding the morality of ultimate goals.
That said, this “ultimate goal relativism” (as far as science is concerned) is not nearly as big a problem in the real world as it might first appear.
Assume a culture defines its moral norms to be consciously aimed at advocating cooperation strategies that produce pro-social behavior. We could expect that it would always be immoral to exploit others’ cooperative efforts, and the culture should be highly cooperative, with lots of pro-social behavior. This kind of culture will probably click along fairly satisfactorily even with no ultimate goal ever agreed on (which is common in real societies). Further, when disputes arise about ultimate goals or about the morality of secondary consequences of pursuing ultimate goals, the disputes could focus directly on the real issues, not on which ‘means’ are moral.