Imagine an emotional, highly willful elephant (representing our biology-based moral intuitions, or, as David Hume might have called them, our moral passions) with a much less powerful rider, the rational part of human consciousness. This is Jonathan Haidt’s key metaphor in his new book, The Righteous Mind. Haidt’s science-based claims about how moral judgments are made and defended are consistent with David Hume’s claim that reason is “the slave of the passions”.
However, Haidt’s conclusions are of a different kind than Hume’s, because Haidt’s are based in science: you or I could, in principle, replicate the experiments Haidt describes. For example, it appears to be an empirical fact that moral judgments are usually made virtually instantly, without time for rational thought. Only if these judgments are questioned do our conscious minds swiftly construct justifications for whatever our instant moral intuitions were. People are highly skilled at constructing such after-the-fact justifications regardless of the counter-arguments they face. Reason does act as the servant of at least our moral intuitions; science confirms that claim.
Haidt proposes that moral disagreements, for example between conservatives and progressives (also called liberals in the USA), are due less to stupidity, ignorance, and greed (common progressive views of conservatives) or naiveté and degeneracy (common conservative views of progressives) than to the power of the intuitive portions of our minds. These disagreements are actually the vigorous and creative defenses mounted by our rational riders (the rational part of our consciousness) for whatever our different moral intuitions (our emotional elephants) happen to be. Haidt goes on to describe how such different moral intuitions can arise out of different cultures, individual experiences, and perhaps even biologically based emotional differences between people.
Haidt’s advice for making progress in discussions of what is moral is to focus rational arguments on the intuitions underlying moral positions, which are relatively constant and long-term, rather than on whatever specific rational defense of those intuitions is being offered at the moment, since that defense is highly adaptable to meet any challenge. That is, talk to the elephant, not the rider. Haidt is vague on how best to balance talking to the elephant versus talking to the rider, but I think he is still providing a useful, science-based insight.
Obviously, people’s moral intuitions can change over time due to experience and rational consideration. However, the biology underlying our moral intuitions was selected for stability and therefore is not under the direct control of our rational thought processes. These universal biological inclinations toward stable moral intuitions appear to be a chief source of the seemingly interminable arguments about what is moral.
Interestingly, Haidt claims experimental evidence shows that moral arguments are often even more interminable and unresolvable when engaged in by highly intelligent people. If a philosopher’s reasoning abilities are the servants of their moral passions, as Hume claimed and as Haidt argues science supports, then it would follow that highly skilled ‘reasoners’ such as philosophers could be incredibly good at defending whatever their moral passions happen to be. It is easy to imagine philosophers continually refining their rational arguments and defenses against reasoned attacks. This is fully consistent, so far as I know, with what is observed. That is, philosophers’ arguments about morality are even more interminable and unresolvable than ‘normal’ people’s, specifically because of, not in spite of, their superior reasoning ability.
Haidt’s “talking to the elephant” idea is only one element of his book. His main focus is on the evolutionary origins and present functions of the ‘moral modules’ that make up the biological basis of our moral intuitions and of our ability, over time, to incorporate cultural norms into those intuitions.
I see Haidt’s work as complementary to and integrating well with my approach to a universal definition of morality (Altruistic Cooperation morality), which is based in biology-independent aspects of physical reality. However, I did not see that Haidt proposed specific justifications for accepting normative burdens. Altruistic Cooperation complements his work by providing a specific definition of morality, along with instrumental justifications, grounded in meeting common overriding goals, for both 1) groups accepting the burdens of enforcing norms consistent with that definition, and 2) individuals accepting, almost always, the burdens of those norms even when, in the moment of decision, the individual expects that doing so will be against their best interests.
John, my focus here is on 1) whether what I am proposing is actually good science and 2) whether it is culturally useful for deciding what enforced cultural norms (enforced moral standards) ought to be in order to best meet the needs and preferences of groups.
Your references are too far from my core interests for me to comment on them usefully.
However, at bottom I am an engineer, and engineers are all about making useful things. I would be delighted if my work were useful to Alfie Kohn and to Buddhism(?) in general. Both seem to me to be pursuing worthwhile goals.