My background is in engineering and physics.
After retirement, I became interested in two questions that I expect many people care about, but lack satisfactory answers for.
1) “What should enforced moral standards be?” and
2) “Is there any rational justification for an individual to accept the burdens of conforming to such moral standards in cases where they expect doing so will be against their best interests?”
I started by reading what mainstream moral philosophy had to say on this subject and immediately became puzzled. What I saw as the main investigative tool, deductive arguments, seemed a strange way to seek new knowledge. Deductive reasoning is a process of deriving conclusions from premises. It has the advantage that if the premises are true, then the deductions are true (not just provisionally true as is common in science). The odd part is that while some deductive knowledge can appear to be new in a “That was not obvious” sense, all deductive knowledge is implicit in its premises. I could not see how I could hope to make any progress with this approach.
Further, based on the lack of consensus answers to my questions after a few thousand years of efforts by very bright people using all of the tools in moral philosophy’s toolkit, it was obviously pointless for me to even attempt to use that toolkit to answer my questions.
In frustration, I decided to try applying the data-based methods of science and engineering that I was familiar with. I assumed this effort might occupy the rest of my life and was unlikely to produce any culturally useful results. But I thought the chance of cultural utility made the effort worthwhile. Finding useful solutions to problems is what engineers are all about.
I immediately discovered a wealth of highly relevant science, mainly in the fields of evolutionary psychology, evolutionary biology, and particularly game theory, most of it accumulated over the last 40 years or so, and much of it over the last 20.
In that relevant science, I found another puzzle. Even when their work had obvious normative implications with considerable apparent social utility (such as when slavishly following the Golden Rule would be 'immoral', as in dealing with criminals or in war), the authors were almost uniformly silent on moral implications. They would bring you right up to a moral implication and then abruptly stop. Perhaps they did not consider it worth enduring the howls of indignant protest they expected from the philosophy department across campus if they suggested normative (moral) implications for their work.
I am in a position to not be so reticent. I have no academic career to protect, and I may have had a few insights that could help unify the existing wealth of relevant science into a more coherent whole. Justifying culturally useful implications of a science-based unifying understanding of what enforced moral standards 'are', in a descriptive sense, is a worthy goal.
Understanding the science of morality has turned out not to be the biggest challenge. Thanks to the large amount of existing science on the subject, getting in place all the main elements of that science, at least to my own satisfaction, was the work of only about two years.
The big challenge was, and remains, presenting my conclusions without being misunderstood. It is remarkable how many people, on hearing the words evolution and morality in the same sentence, immediately fly into an irrecoverable intellectual tailspin of self-generated misinformation that makes further useful communication nigh impossible. The radically different mindset and context of mainstream philosophy also makes communication very difficult. Even four years after getting the science to fit together into a coherent whole, figuring out how to present my conclusions so they will not be misunderstood is still a work in progress.
However, I caught a big break in November of 2011. The philosopher Philip Kitcher published what an Amazon review called, for mainstream moral philosophers at least, "a revolutionary approach to the problems of moral philosophy" based on the underlying function of morality in cultures. The big break for me was that Philip Kitcher is a respected philosopher (a past president of the American Philosophical Association) whose book may carry added credibility since he was the author of the influential, and widely perceived to be anti-sociobiology, book Vaulting Ambition in 1985 (Kitcher clarifies it was only against bad-science "pop-sociobiology").
Now, it is with great relief that, when addressing people who are knowledgeable about moral philosophy, I can say: "I am proposing a functionalist social morality consistent with Philip Kitcher's approach in his 2011 book The Ethical Project" and have some hope of providing a respectable context for my position.
I understand Kitcher’s general approach to be the same as mine, but we come to different conclusions. For example, Kitcher concludes the function of morality in societies is something like:
"… the (original overall) function of morality was to 'remedy those altruism failures provoking social conflict'" (p. 223), plus the idea that the function of moral behavior has changed over time.
My proposal: "The universal function of enforced cultural norms is to advocate altruism that increases the benefits of cooperation in groups", plus the idea that the function of moral behavior has been constant for all of human history (and far beyond it, stretching in both temporal directions). The appearance of changing functions of morality is due to radical shifts in the dominant benefits of moral behavior (first during the emergence of culture, and second with the invention of money economies under rule of law) and to slow drifts over time depending on the circumstances people found themselves in.
This function’s implied moral principle, “Altruistic acts that also increase the benefits of cooperation in groups are moral”, matches people’s needs and preferences like a key in a well-oiled lock because this key is what, in part, originally shaped this lock.
That is enough for now, have a read, ask questions or make comments if you wish.
I apologize in advance for extensive repetition that may become especially tedious for attentive or already well-informed readers. I duplicated material as seemed appropriate for a website where entries might be read in no particular order, readers might be unfamiliar with important concepts, and I thought the duplicated material was important for the topic at hand.
Mark Sloan February, 2012
Contact: cooperationmorality(insert the @ as normal)comcast.net
For convenience’s sake, here are short versions of the answers I found to my two questions.
1) “What should enforced moral standards be?”
Enforced cultural norms should be based on a universal moral principle: "Altruistic acts that also increase the benefits of cooperation in groups are moral". A critical heuristic for this principle (perhaps best described as a corollary) is "Acts that decrease the benefits of cooperation between groups are immoral".
2) “Is there any rational justification for an individual to accept the burdens of conforming to such enforced moral standards in cases where they expect doing so will be against their best interests?”
Yes, based on the expected increased synergistic benefits of cooperation and on inherent psychological rewards, both immediate and in our sense of durable well-being. These immediate psychological rewards, and much of our sense of durable well-being (happiness), originally evolved in our ancestors as the chief means of motivating people to cooperate in groups, but are now available to us mainly as important benefits of altruistic cooperation.
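For readers who want a concrete feel for what game theory means by the "synergistic benefits of cooperation", here is a toy sketch of the iterated Prisoner's Dilemma, the standard model in this literature. The payoff numbers and strategies below are the conventional illustrative ones, not data from my own work:

```python
# Toy iterated Prisoner's Dilemma illustrating the synergistic
# benefits of cooperation. Payoffs follow the standard convention
# T > R > P > S (temptation > reward > punishment > sucker).
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual defection
}

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Two textbook strategies: tit-for-tat cooperates first, then mirrors
# the opponent; always-defect never cooperates.
tit_for_tat = lambda last: "C" if last is None else last
always_defect = lambda last: "D"

print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained cooperation
print(play(always_defect, always_defect))  # (10, 10): mutual defection pays far less
```

The point of the sketch is the gap between the two outcomes: over repeated interactions, mutual cooperators accumulate benefits that defectors cannot, which is the "synergy" referred to above.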
Further, people's ability to predict what action will be in their long-term best interest is poor. We cannot know all the relevant information, and even if we could, we would often be unable to accurately predict the outcome, partly due to our competing, unenlightened selfish inclinations and partly due to our brain's computational limitations. Social morality, properly understood, is a biological and cultural adaptation for increasing the benefits of cooperation. Its norms act as quick moral heuristics (usually reliable, but fallible, rules of thumb) that provide an alternative to our often unreliable in-the-moment predictions about what will actually turn out to be in our long-term self-interest.
Understanding all this makes it rational, almost always, to accept the burdens of acting according to Altruistic Cooperation morality, even when, in the heat of the moment of decision, we expect doing so will not be in our best interests.
Which do you think is more likely to turn out well? Going with the wisdom of the ages (describing forces that shaped your inner being) or going with your personal, confused perceptions in the heat of the moment of decision?