Scientific hypotheses must be tested against data.
What data set is appropriate for testing scientific hypotheses about what morality ‘is’?
Specifically, what data set describes what people believe about morality?
(Note the subject here is what morality ‘is’, a subject which is accessible to science. The subject is not what morality ought to be, on which science appears silent. Investigating what morality ‘is’ is useful in two ways: first, it could be culturally useful to understand how morality has shaped our moral psychology, moral values, and goals; second, understanding what morality ‘is’ could even increase consensus about what morality ‘ought’ to be.)
The following are candidate biological, emotional, behavioral, cultural, and cognitive data about our moral lives. (edited 2-1-2014 based on suggestions from posters)
1) People have emotions such as empathy, loyalty, guilt, shame, and indignation that motivate behaviors commonly called moral: they motivate cooperation or the punishment of behavior that reduces cooperation
2) People almost always act more altruistically toward family than toward strangers
3) People have an ability to make near-instant moral judgments of what they think of as right and wrong without conscious thought
4) The neuroscience of emotions such as empathy, loyalty, guilt, shame, and indignation
5) Past and present enforced moral codes (diverse, contradictory, and bizarre as they are)
6) Empirical data about the bases people universally use for making moral judgments: harm, fairness, freedom, loyalty, respect for authority, and purity (as found by interviewing people around the world)
7) Empirical data about moral dilemmas (prisoner’s dilemma, dictator games, trolley problems, and so forth)
8) Moral philosophers advocate different moralities such as Utilitarianism, Kantianism, virtue ethics, and egoism
9) Other?
Such a data set of descriptive facts about morality can then be used to test the scientific truth of hypotheses about what morality ‘is’. (Here, “scientific truth” refers to the normal provisional kind in science.) Criteria for scientific truth would include explanatory power for the data set, non-contradiction with the data set, simplicity, integration with the rest of science, and so forth.
First, I am interested in whether I have missed important moral features of the world or whether the above descriptions could be improved.
But second, I want to talk about how we can check if the scope of the data set is in fact ‘correct’. Do we have the most useful data set that describes what morality ‘is’?
In science, the “What to include in the data set?” problem comes down to “Does the data set encompass just one phenomenon or multiple phenomena?” If multiple phenomena are present, we could be faced with the problem that the underlying principle for part of the data set is inconsistent with the phenomena and underlying principles for the rest of the data set. Since consistency with known facts is required for scientific truth, it could be impossible to extract the underlying principles of the phenomena until we separate the different phenomena’s data sets.
Consider 4) The neuroscience of moral emotions. The neuroscience of morality describes the chemistry and biology of how moral behaviors are motivated by our emotions. These are details of how moral behavior (perhaps one phenomenon) is implemented in humans by different biological phenomena. Fortunately, it is fairly obvious how to separate out the data sets for testing hypotheses in neuroscience. But note that neuroscience is part of science, so any hypothesis about what morality ‘is’ cannot contradict neuroscience’s facts.
Now consider 8) moral philosophy’s standard moral theories. Science is about what ‘is’; moral philosophy is about what ‘ought’ to be. First, there is no a priori reason to think the two are the same. Second, because moral philosophy’s theories contradict each other, there appears to be no fact of the matter about what morality ought to be, and in science, data about which there is no fact of the matter is useless. Of course, the science of morality will still be consistent with the logic of ethics and with the mere fact that some people advocate such moral theories, but the contradictions between the theories show they are not part of what people believe about morality that is necessarily accessible to science.
Interesting question. This way of looking at the problem provides a useful path for me to go down when I try to further explain my own hypothesis. With your analysis of point number eight:
“Now consider 8) moral philosophy’s standard moral theories about what the goals of morality ‘ought’ to be. For instance, ought the goals of morality to be individual well-being, universal well-being, fairness, or do no such oughts exist (nihilism)?”
you are really hitting at the crux of what has been wrong with much of traditional moral philosophy and its attempt to identify a single rule (or lack of rules) to obey for our morals. In fact, as your data sets point out, our moral emotions conflict with one another because there isn’t a single goal that works everywhere. Here’s my hypothesis on morality: moral emotions evolved to motivate behaviours towards the survival of life. Using E.O. Wilson’s consilient breakdown of biology (the study of all life), we see that life can act to survive at the level of:
biochemistry -> molecular biology -> cellular biology -> organismal biology -> sociology -> ecology -> evolutionary scales
Just as this breakdown was used to unify the fields of biology, it can help us unify the study of morality. I think any conflict that comes from analysing the data sets you’ve identified can be explained as a conflict between the rules for life to survive at each of these levels (though really only at levels above organismal biology, since, as you said in your point about neurobiology, we tend not to use the term morality for the rules of chemicals inside an organism, which makes sense because we have no “free will” at that level). As our scientific understanding of the world has enlarged and lengthened, so too has our morality, which explains some of the changes to morality that have taken place over the centuries. We’ve figured out what works and what doesn’t. On the other hand, the epistemic opacity involved in trying to work out what will actually lead to survival over very long periods of time leads to a great proliferation of trials (and errors) in the search for the right path.
What say you? Does this hypothesis hold up to explaining the data?
Ed, actually I think I have a different view. Obviously, not all behaviors that increase reproductive fitness are moral. Morality is a very special kind of thing. And yes, our moral emotions and innate sense of right and wrong were selected for because they increased the reproductive fitness of our ancestors. But what kind of behavior does this biology motivate that increased the reproductive fitness of our ancestors? Cutting and pasting from a piece I just published in the online magazine Evolution: This View of Life:
… consider the reason morality exists and the problem that the largest component of morality evolved to solve: The universal dilemma of how to obtain the benefits of cooperation – without being exploited.
In our universe, cooperation can produce many more benefits than individual effort. But cooperation exposes one to exploitation. Unfortunately, exploitation is almost always a winning short-term strategy, and sometimes is in the long term. This is bad news because exploitation discourages future cooperation, destroys those potential benefits, and eventually, everybody loses.
All life forms in the universe, from the beginning to the end of time, face this universal dilemma. This includes people and our ancestors.
In SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed (2011), Martin Nowak argues that our success as a species is due to people being astonishingly good at cooperation. How did our ancestors manage to cleverly prevent exploiters and free riders from destroying the benefits of cooperation?
In the last few decades, the answer has come from game theory. Herbert Gintis calls the strategies that solve this universal dilemma of cooperation/exploitation “altruistic cooperation” strategies. All such strategies have two necessary components: A part that motivates risking cooperation even when it may be exploited (the altruistic part), and a part that motivates punishment of exploiters. (Note the strategies are, strictly speaking, not purely altruistic.)
So morality, as a natural phenomenon, is embodied in our biology and cultural moral codes that evolution selected because it implements a useful set of strategies. Over vast stretches of time, this selection force shaped human social psychology and arguably, even shaped much of our experience of durable (not fleeting) well-being – thereby enabling us to become the incredibly successful species we are. Our moral norms, modeled after altruistic cooperation strategies, should fit people like a key in a well-oiled lock because this key is what largely shaped this lock (our social psychology).
It continues at:
http://www.thisviewoflife.com/index.php/magazine/articles/mainstream-science-of-morality-contradicts-sam-harris-central-claim
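For anyone who wants to see that short-term/long-term tension in miniature, here is a minimal sketch of an iterated prisoner’s dilemma. The payoff numbers, the round count, and the use of tit-for-tat as a stand-in for an “altruistic cooperation” strategy (risk cooperation first, punish defection) are my own illustrative assumptions, not something taken from Nowak or Gintis: exploitation beats an unconditional cooperator, but against a strategy that also punishes, sustained cooperation pays far better over repeated rounds.

```python
# Illustrative sketch only: a simple iterated prisoner's dilemma.
# Payoff values, strategies, and round count are assumptions for illustration.

# Payoffs as (my payoff, opponent's payoff) for (my move, opponent's move)
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, opponent exploits me
    ("D", "C"): (5, 0),   # I exploit a cooperator
    ("D", "D"): (1, 1),   # mutual defection
}

def always_defect(my_history, their_history):
    """Pure exploitation: defect every round."""
    return "D"

def always_cooperate(my_history, their_history):
    """Pure altruism with no punishment component."""
    return "C"

def tit_for_tat(my_history, their_history):
    """A punishing cooperator: cooperate first, then mirror the
    opponent's last move, so any defection gets punished."""
    if not their_history:
        return "C"
    return their_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    """Play repeated rounds and return each player's total payoff."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(always_defect, always_cooperate))  # exploiter wins: (500, 0)
    print(play(always_defect, tit_for_tat))       # punished: (104, 99)
    print(play(tit_for_tat, tit_for_tat))         # mutual gain: (300, 300)
```

With these assumed numbers, the exploiter only comes out ahead when no one punishes; against a punishing cooperator it barely beats mutual defection, while two punishing cooperators do far better together.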
I saw your response to Sam Harris on EvTVOL. Congrats on the post there as Associate Morality Editor! That’s quite a platform to have to discuss and debate your ideas with some really amazing people. Good for you.
I’m not sure what you mean by the claim that “morality is a very special kind of thing.” Are you suggesting that it lies outside the realm of evolutionary selection? I didn’t think there was anything that was able to avoid that screen.
I don’t disagree with your statement that “Obviously, not all behaviors that increase reproductive fitness are moral.” I’m just suggesting that those behaviours can be understood through my look at moral emotions across the different spheres of life. Like the boss at EvTVOL (David Sloan Wilson) and his proposal of multilevel selection theory to explain the different processes of evolution, I guess I’m proposing a multilevel morality theory to explain the different behaviours of individuals. The classic behaviour to discuss in this debate is rape, which we see all across the animal kingdom. If you are an ignorant sea lion who only cares about his individual organismal biology, then rape is a “moral” action in that it satisfies his urges for survival. It’s only when you look at the larger sphere of sociology that you see how profoundly immoral rape is, because it violates another individual and deadens the urges for altruistic cooperation that lead to progress and survival for groups. And since groups survive better than individuals do, the moral rules derived at the level of sociology trump those urges developed solely for individuals. The same damnation of moral emotions can occur with behaviours that benefit societies at the cost of ecologies, or that serve short-term ecological concerns over long-term evolutionary ones. This is how my hypothesis explains the varying data sets you pointed out above, and ultimately arbitrates between them as well.
As for your final claim that “Our moral norms, modeled after altruistic cooperation strategies, should fit people like a key in a well-oiled lock because this key is what largely shaped this lock (our social psychology)”: I don’t disagree that altruistic cooperation has developed in our species over the millennia of evolutionary history. I’m just suggesting that it is only a tool, or a means, we use to get to an actual outcome, or end. And what is the ultimate outcome? Survival of life over long-term evolutionary timescales. This was the theme of my Moral Landscape Challenge entry, which you can see at http://is.gd/IzLP3e.
Perhaps we could open this up for wider debate on EvTVOL sometime…
Ed, I am glad to hear you liked my essay.
By “morality is a very special kind of thing”, I meant that the science of morality shows morality has a universal function: to increase the benefits of cooperation in groups by altruistic cooperation strategies. But morality, as a natural phenomenon, has no ultimate goal. It is a strategy for achieving your group’s ultimate goals; in many cases people choose well-being. Morality as a universal, species-independent natural phenomenon is a means to your group’s goals; it is silent about what those group goals ought to be.
When you talk about ‘levels’ of morality, I am reminded of Peter Singer’s description of the progress of morality through history as “expanding the circle of moral concern”. He is exactly right. Morality is about cooperation and fairness within groups. For evolutionary reasons, people are inclined to act fairly and morally only toward those they believe to be part of one of their in-groups. Understanding that makes all the diverse, contradictory, and bizarre moral codes suddenly make perfect sense.
Right, morality is only a tool, a means to an end. Where we disagree is if science tells us what that end ‘is’. I think it does not, and that is a question for people, and perhaps philosophers, to answer.
Could you do me a favor? Could you go over and comment on the piece (if you have not already)? We are interested in generating conversations, and sometimes one comment stimulates others. Thanks.
I’m sure that page didn’t allow comments when I first read it; I surely would have left one. I’ll go do that now that I see I can, though.
“But morality, as a natural phenomenon, has no ultimate goal. … Right, morality is only a tool, a means to an end. Where we disagree is if science tells us what that end ‘is’.”
I guess we’ll have to continue to disagree there. Though not for lack of me trying different arguments to persuade you! 🙂 Just to be clear, I don’t think nature has objective goals within it in some anthropomorphised manner, but I do think that the ultimate goal of survival *emerges* naturally from a system where some things survive and some things go extinct. I also think it’s incumbent upon us to recognise that this is the rule of the game we find ourselves in and to cooperate towards that goal.
As you suggest, I expect we will just have to disagree about whether science reveals morality’s ultimate goal. But there is a lot of good we could do if everyone understood what the function of morality as a tool was.
Thanks for posting over at the ETVOL site. Let me know if you have trouble posting there.