It recently occurred to me that the science of morality offers a more informed understanding of moral responsibility, one that can greatly clarify the confusion surrounding the question “Are we still morally responsible if we don’t have free will?”
I’ll start with the premise “There is probably no such thing as free will” and with the emerging science of morality.
As I have described elsewhere on this site, the emerging science of morality supports the view that human morality is composed of sets of biological and cultural evolutionary adaptations selected for by the benefits of altruistic cooperation in groups. All altruistic cooperation strategies motivate or advocate two necessary actions: 1) ‘altruism’ of the kind needed to initiate cooperation, and 2) punishment of poor cooperators, such as those who exploit other people’s altruism.
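The logic of this two-part strategy (altruism plus punishment of exploiters) can be illustrated with a toy public-goods game. This is only an illustrative sketch of my own; the payoffs, multiplier, and fine size are hypothetical choices, not claims from the science itself:

```python
# Illustrative sketch (not from the essay): a one-round public-goods game
# showing why punishing free riders can sustain cooperation.
# All parameter values here are hypothetical choices for illustration.

def payoffs(contributions, multiplier=1.6, punishment_fine=0.0):
    """Each agent starts with 10 units and may contribute some to a pool.
    The pool is multiplied and shared equally among all agents. Optionally,
    each free rider (zero contributor) is fined by the group (the cost of
    administering punishment is ignored here for simplicity)."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    result = []
    for c in contributions:
        p = 10 - c + share
        if c == 0:
            p -= punishment_fine
        result.append(round(p, 2))
    return result

# Without punishment, the lone free rider out-earns the cooperators,
# so exploiting other agents' altruism is individually rational.
no_punish = payoffs([10, 10, 10, 0])
assert no_punish[3] > no_punish[0]

# With a large enough fine, the free rider does worse than the
# cooperators, so cooperating becomes the better strategy.
with_punish = payoffs([10, 10, 10, 0], punishment_fine=12.0)
assert with_punish[3] < with_punish[0]
```

The point of the sketch is only that ‘altruism’ alone is exploitable; some punishment of poor cooperators is needed before cooperation pays for everyone.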
This talk about the science of morality is not nearly so esoteric or removed from the common human experience of morality as it may first sound. The science of morality describes what made our ancient ancestors social animals. It also describes what created our common moral emotions and much of our emotional experience of well-being, and, by selection of our enforced cultural codes of behavior, shaped our sometimes diverse and contradictory moral intuitions about right and wrong. A correct understanding of the science of morality will necessarily fit people like a key in a well-oiled lock because this key (morality) is largely what shaped this lock (our social psychology). A bad fit would indicate we have gotten the science wrong.
By the science of morality that is broadly accepted, at least through implication in the literature, holding people morally responsible is a strategy for increasing the benefits of cooperation in groups. That moral responsibility strategy is to punish people who act in ways that decrease those benefits, by means ranging from social disapproval to the rule of law. For example, we punish people who decrease the benefits of cooperation in groups through insufficient ‘altruism’, exploitation of other people’s altruism, stealing, lying, killing, or other means.
Note that punishment is morally limited to whatever is likely to increase the benefits of cooperation in the group. Punishment that is likely to decrease those benefits in the long term is immoral. So, under this science-based definition of moral responsibility, revenge or “an eye for an eye” is immoral when it is likely to reduce the future benefits of cooperation in groups. We ought to focus on what matters about morality: assessing the risk of future bad behavior, protecting innocent people, and deterring crime.
In summary, holding people morally responsible is a useful strategy for increasing the benefits of cooperation in groups whether or not we have free will. Therefore, “people are morally responsible for their actions anyway” because, by the science of the matter, holding people morally responsible is ONLY a strategy for increasing the benefits of cooperation in groups. Other definitions of moral responsibility are the product of thought that, regardless of the effort and good intentions put into it, is either ill-informed or concerned with a subject other than what morality actually ‘is’ as a science-based proposition. Free will is irrelevant to moral responsibility according to the emerging science of morality because the function of moral responsibility in cultures has nothing to do with free will.
The above science-based definition of “morally responsible” seems to me to greatly clarify the real issues underlying moral responsibility in the absence of free will. It avoids the unnecessary mental traps built into traditional definitions of moral responsibility in moral philosophy.