It recently occurred to me that the science of morality offers a more informed understanding of moral responsibility, one that can greatly clarify the confusion surrounding the question “Are we still morally responsible if we don’t have free will?”
I’ll start with “There is probably no such thing as free will” and the emerging science of morality.
As I have described elsewhere on this site, the emerging science of morality supports the view that human morality is composed of sets of biological and cultural evolutionary adaptations selected for by the benefits of altruistic cooperation in groups. Altruistic cooperation strategies all motivate or advocate two necessary kinds of action: 1) ‘altruism’ of the kind needed to initiate cooperation, and 2) punishment of poor cooperators, such as those who exploit other people’s altruism.
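To make those two components concrete, here is a minimal toy simulation of repeated pairwise cooperation. It is entirely my own illustration rather than anything from the literature; the strategy names, payoff values, and population are arbitrary assumptions chosen only to show why ‘altruism’ plus punishment of poor cooperators can outcompete both unconditional altruism and exploitation:

```python
# Toy model: a small population plays repeated pairwise cooperation games.
# Payoffs per interaction use a standard prisoner's-dilemma ordering
# (the specific numbers are arbitrary illustrative choices):
#   both cooperate -> 3 each; both defect -> 1 each;
#   a defector exploiting a cooperator -> 5; the exploited cooperator -> 0.

import itertools
from collections import defaultdict

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def choose(strategy, partner, memory):
    """Return 'C' (cooperate) or 'D' (defect).
    A 'punisher' cooperates but withholds cooperation from anyone who has
    defected against it before - a cheap form of punishing poor cooperators."""
    if strategy == "altruist":
        return "C"
    if strategy == "exploiter":
        return "D"
    return "D" if memory[partner] else "C"  # punisher

def tournament(population, rounds=200):
    scores = defaultdict(int)
    # memory[x][y] is True once y has defected against x
    memory = {name: defaultdict(bool) for name in population}
    for _ in range(rounds):
        for a, b in itertools.combinations(population, 2):
            move_a = choose(population[a], b, memory[a])
            move_b = choose(population[b], a, memory[b])
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            scores[a] += pay_a
            scores[b] += pay_b
            if move_b == "D":
                memory[a][b] = True
            if move_a == "D":
                memory[b][a] = True
    return dict(scores)

population = {"A": "altruist", "B": "exploiter",
              "C1": "punisher", "C2": "punisher", "C3": "punisher"}
print(tournament(population))
# With these numbers the punishers finish with the highest totals, the
# unconditional altruist is exploited, and the exploiter's gains against the
# altruist do not make up for the cooperation it forfeits with everyone else.
```

Nothing hangs on the particular numbers; the point is only that sustained cooperation in this kind of model needs both the initial ‘altruism’ and some willingness to punish exploiters.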
This talk about the science of morality is not nearly so esoteric or removed from the common human experience of morality as it may first sound. The science of morality describes what made our ancient ancestors social animals. It also describes what created our common moral emotions and much of our emotional experience of well-being, and, by selection of our enforced cultural codes of behavior, shaped our sometimes diverse and contradictory moral intuitions about right and wrong. A correct understanding of the science of morality will necessarily fit people like a key in a well-oiled lock because this key (morality) is largely what shaped this lock (our social psychology). A bad fit would indicate we have gotten the science wrong.
OK. By the science of morality that is broadly accepted, at least by implication, in the literature, holding people morally responsible is a strategy for increasing the benefits of cooperation in groups. That moral responsibility strategy is to punish people, by means ranging from social disapproval to the rule of law, who act in ways that decrease the benefits of cooperation in groups. For example, it punishes people who decrease the benefits of cooperation in groups through insufficient ‘altruism’, exploitation of other people’s altruism, stealing, lying, killing, or other means.
Note that punishment is morally limited to whatever is likely to increase the benefits of cooperation in the group. Punishment that is likely to decrease the benefits of cooperation in groups in the long term is immoral. So, with this science-based definition of moral responsibility, revenge or “an eye for an eye” is immoral when it is likely to reduce the future benefits of cooperation in groups. We ought to focus on the things that matter about morality: assessing the risk of future bad behavior, protecting innocent people, and deterring crime.
In summary, holding people morally responsible is a useful strategy for increasing the benefits of cooperation in groups whether or not we have free will. Therefore, “People are morally responsible for their actions anyway” because, by the science of the matter, holding people morally responsible is ONLY a useful strategy for increasing the benefits of cooperation in groups. Other definitions of moral responsibility are the product of thinking that, however much effort and good intention went into it, is either ill-informed or about a subject other than what morality actually ‘is’ as a science-based proposition. Free will is irrelevant to moral responsibility according to the emerging science of morality because the function of moral responsibility in cultures has nothing to do with free will.
The above science-based definition of “morally responsible” seems to me to greatly clarify the real issues underlying moral responsibility if there is no free will. It avoids the unnecessary mental traps built into traditional definitions of moral responsibility in moral philosophy.
“…the emerging science of morality supports the view that human morality is composed of sets of biological and cultural evolutionary adaptations selected for by the benefits of altruistic cooperation in groups.”
I don’t understand this statement. Are you saying the biological and cultural adaptations are selected because they increase altruistic cooperation, or are they selected for the benefits that come from altruistic cooperation?
Our ‘moral’ biology (the biology underlying empathy, shame, indignation and so forth) was selected for in our ancestors by the reproductive fitness benefits that come from the cooperation strategies those adaptations motivated. This was a mindless process in the normal sense of biological evolution.
The case is a little of both for the evolution of enforced cultural norms (cultural moral standards). People can consciously choose to seek out and enforce norms that increase the benefits of cooperation in groups; those norms “are selected because they increase altruistic cooperation”. On the other hand, groups that enforce norms that increase the benefits of altruistic cooperation for whatever reason – say, because a god told them to do so – may prosper, and those norms may then spread by imitation. That is, “they (were) selected for the benefits that come from altruistic cooperation”.
Make sense?
And what are those benefits? Sounds to me like altruistic cooperation is just a means to those ends. But what are the ends?
The most common overriding ultimate goal of groups deciding what moral code to enforce is some form of well-being for the group. That is, the burden of group enforcement is rationally justified by expected increases in well-being. The benefits of cooperation to be pursued would then be the proximate goals that support that well-being, such as material goods and psychological goods.
The science of morality informs us only of means to ends, not what those ultimate ends must be. So what could make a science-based morality normative? A science-based morality would become universally normative (what one ought to do) only if, given specified conditions, it defines what would be put forward by all rational people. It is easy to argue that, given the specified condition that the ultimate goal is increased group well-being, all rational people would put forward altruistic cooperation strategies as the basis of their moral code.
“A science-based morality would become universally normative (what one ought to do) only if, given specified conditions, it defines what would be put forward by all rational people.”
Science shows us what we all have in common. We are all living animals, with a shared evolutionary history, trying to go on surviving. Isn’t that commonality the ultimate backstop for our morality? Aren’t morals really just rules for long-term survival? We have disagreed over what leads to “well-being”, but each disagreement that has been resolved has been resolved by discovering which norm actually leads to increased survival. This wasn’t always done consciously, but surely that is the end result. How could it be otherwise? If it were otherwise, the groups (and their norms) wouldn’t survive. If it were otherwise, then that would not be what we ought to do.
Ed,
My definition of what is universally moral (normative in philosophy speak) comes from the online Stanford Encyclopedia of Philosophy’s entry on morality. It is the best definition I know.
You can try to make the argument that all rational people will put forward a moral code aimed at long-term survival, but survival of whom? Their family, their tribe, all people, all conscious beings, all life on earth?
Moral behavior is a strategy for increasing the benefits of cooperation in groups. People are free to choose whatever benefits they like, which may, in the long term, directly cause extinction. For example, loyalty is both moral and a direct cause of war. Moral prohibitions against eugenics may cause extinction in the long term by increasing the incidence of mental illness, in particular the percentage of people who are rational psychopaths, a trait whose reproductive fitness is arguably higher in modern societies than in primitive ones.
Rational people will choose moral codes based on whatever benefits of cooperation they seek. People are not yet smart enough to know whether a benefit of moral behavior, loyalty for example, will actually increase the long-term survival of the species, or even of their own group.
Sorry, I don’t mean to hijack your blog. I make my case for a universal definition of good on my own site. If morality is the differentiation of actions, intentions, and decisions into those that are “good” and those that are “bad”, then I’m interested in using science to find an objective definition for those terms and hence an objective set of rules for morality.
In answer to your question, people will put forth a moral code aimed at the survival of all life as their timeframe of consideration becomes longer and longer. This is what morality is evolving towards because all life is intertwined and anything short of that brings the risk of extinction closer to hand.
You’re right that people are not perfectly capable of predicting which behaviors lead to that long-term survival. But neither are they perfectly capable of determining which benefits of group well-being are actually reached through altruistic cooperation. We know from studying evolution, though, that in the face of this blind ignorance there is real power in trial and error and in limited experiments that do not endanger the whole. Recognizing the end goal – survival of life – and recognizing our epistemic limitations in knowing how to reach that goal gives us some definite prescriptions on how to act and how not to act as we move toward our end goals. We ought not do anything else.
Defining a moral code based on “survival of life”, however that is defined, seems to me to seriously misunderstand what morality is. Exploitation of everyone who is not related to you could be the best strategy to ensure your family survives. Greed is a human emotion because it increased the reproductive fitness of our ancestors. Neither behavior is moral.
I think you misunderstand my claims. Exploitation and greed are competitive strategies focused on short-term wins. In a species with repeated interactions, memories, communication, reputation, and gossip, those cheating strategies are punished and lose in the long term to cooperative moral behavior. Just because a behavior is “natural”, i.e. seen in nature, doesn’t mean it is “good” in terms of a defined objective moral code.
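To illustrate why repeated interactions plus reputation and gossip tend to make cheating a losing strategy, here is a second toy sketch, again purely illustrative with made-up payoff values: players consult a shared reputation table (a stand-in for gossip) rather than only their personal memory, so a known cheater is refused cooperation even by players it has never met.

```python
# Toy model of reputation: "discriminators" cooperate only with partners in
# good standing; defecting against someone in good standing ruins your own
# standing, which then spreads through the shared reputation table (a
# stand-in for gossip). Payoff numbers are arbitrary illustrative choices.

import itertools
from collections import defaultdict

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(population, rounds=100):
    reputation = defaultdict(lambda: True)  # everyone starts in good standing
    scores = defaultdict(int)
    for _ in range(rounds):
        for a, b in itertools.combinations(population, 2):
            good_a, good_b = reputation[a], reputation[b]
            move_a = "C" if population[a] != "cheater" and good_b else "D"
            move_b = "C" if population[b] != "cheater" and good_a else "D"
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            scores[a] += pay_a
            scores[b] += pay_b
            # Only unprovoked defection (against a partner in good standing)
            # damages reputation; refusing to cooperate with a known cheater
            # does not.
            if move_a == "D" and good_b:
                reputation[a] = False
            if move_b == "D" and good_a:
                reputation[b] = False
    return dict(scores)

population = {"P1": "discriminator", "P2": "discriminator",
              "P3": "discriminator", "P4": "cheater"}
print(play(population))
# The cheater profits from exploitation exactly once; after that its bad
# reputation spreads and every discriminator withholds cooperation, so its
# long-run total falls far below the discriminators' totals.
```

The numbers are not the point; the point is the structural one made above, that with memory, communication, and reputation, short-term wins from cheating are swamped by the cooperation the cheater forfeits over the long term.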