The Golden Rule’s power points to a universal moral principle

Philosophical perspectives on the Golden Rule typically focus on the Golden Rule’s well-known flaws and may even have a dismissive tone.

The philosopher Dan Flores recently wrote:

“If ethics is the inquiry into the basic claims of morality, then upon philosophical scrutinization of the Golden Rule, we find that, in the words of Quine, ‘there is nothing to scrute’ after all. We should focus our attention on ordinary moral principles instead.”

As an admirer of the Golden Rule, I took offense on its behalf. In response, I will both defend its permanent cultural usefulness and argue that it points us to a universal moral principle.

Rather than there being “nothing to scrute”, the Golden Rule, particularly in the form “Do to others as you would have them do to you”, may be the most culturally useful heuristic (a usually reliable, but fallible, rule of thumb) for moral behavior in existence. We will see there are good reasons that Jesus is quoted in Matthew 7:12 as saying the Golden Rule summarizes morality and that even present-day secular people commonly quote it as their primary moral guide. Despite the Golden Rule’s flaws, it has remained a popular and useful moral principle since ancient times and in cultures around the world.

Building on insights into the origin and function of morality from the Greek philosopher Protagoras and Charles Darwin, I’ll argue we can understand why the Golden Rule’s specific flaws exist. Understanding when the Golden Rule will advocate immoral behavior is a useful result on its own. In addition, this knowledge plus a bit about cooperation strategies leads to a perhaps even more surprising result: we can identify the cross-species universal moral principle that the Golden Rule is a heuristic for. These are the potential payoffs for scrutinizing the Golden Rule.

In one of Plato’s dialogs, the philosopher Protagoras explained to Socrates that morality’s function, the primary reason it exists, is that it increases the benefits of cooperation. (Protagoras illustrated his argument with the Greek myth that Zeus gave all people a moral sense to enable them to cooperate in groups. The existence of this myth implies that “morality as cooperation” was a common understanding of morality among people in Protagoras’ time and likely well-known to Socrates.)

If the function of morality is to increase the benefits of cooperation, then how might we describe immoral behavior except as acting to decrease the benefits of cooperation? And when might the Golden Rule’s guidance be expected to decrease the overall benefits of cooperation? Such circumstances include the Golden Rule’s commonly recognized “failures”: 1) a judge does not punish a criminal because the judge would like not to be punished in the same circumstances, 2) a soldier acts generously toward an enemy soldier in wartime and is killed by that enemy soldier as a result, and 3) people’s “tastes differ”, as Bernard Shaw pointed out, regarding how they want to be treated. Protagoras’ 2,500-year-old perspective on morality as cooperation reveals the “why” of the Golden Rule’s standard failure examples. Those failures occur when following the Golden Rule would likely decrease the benefits of cooperation and thus be immoral.

If the function of moral behavior actually is increasing the benefits of cooperation, then we have an explanation for the flaw that produces the Golden Rule’s failures. But given this flaw, how has the Golden Rule remained such a useful moral norm?

“Do to others as you would have them do to you” advocates initiating cooperation based on the generally reliable assumption that both parties like to be treated similarly. For example, following the Golden Rule would advocate sharing food, coming to others’ aid when they need help, and treating other people fairly, even when one has the power to treat them unfairly. Such cooperation was critical for survival in pre-civilization societies, and the material and psychological benefits of cooperation remain, even now, the overwhelming reason we form and maintain societies and moral codes.

However, the Golden Rule does not advocate mere reciprocity – I help you and you help me. There is no hint in the Golden Rule that people helped will directly reciprocate. If the people helped also follow the Golden Rule, then they will help whoever in the group needs help. Radically more benefits of cooperation are made possible when “all help all” in a large group rather than when help is dependent on pairs of reciprocators (pairs of cooperating people).

The sophisticated form of cooperation initiated by the Golden Rule is called indirect reciprocity. It is perhaps the most powerful cooperation strategy known. The Golden Rule has remained a central moral principle since ancient times because the behaviors it advocates can so effectively increase the benefits of cooperation. (Note that the Golden Rule only initiates indirect reciprocity. Maintaining indirect reciprocity requires that exploiters and freeloaders be punished, perhaps just by ceasing to cooperate with them. Our evolved moral sense is generally eager to deliver such punishment: it motivates indignation at other people’s immorality, guilt and shame at our own, and cultural norms that punish immoral behavior.)
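
To make the role of punishment concrete, below is a minimal toy simulation of indirect reciprocity. It is only an illustrative sketch: the payoff numbers (helping costs the helper 1 and delivers 3 to the recipient), the population sizes, and the simple “standing” rule for reputation are assumptions chosen for this example, not part of the argument itself.

import random

COST, BENEFIT, ROUNDS = 1, 3, 20_000

def simulate(use_reputation, n_helpers=15, n_freeloaders=5):
    """Average payoff per strategy after ROUNDS random chances to help."""
    strategies = ["golden_rule"] * n_helpers + ["freeloader"] * n_freeloaders
    payoff = [0.0] * len(strategies)
    good_standing = [True] * len(strategies)

    for _ in range(ROUNDS):
        needy, helper = random.sample(range(len(strategies)), 2)
        if strategies[helper] == "freeloader":
            helps = False                 # freeloaders never pay the cost of helping
        elif use_reputation:
            helps = good_standing[needy]  # help only those not known to refuse others
        else:
            helps = True                  # unconditional Golden Rule helping
        if helps:
            payoff[helper] -= COST
            payoff[needy] += BENEFIT
        elif good_standing[needy]:
            good_standing[helper] = False  # unjustified refusals are remembered

    return {s: sum(p for p, st in zip(payoff, strategies) if st == s) / strategies.count(s)
            for s in ("golden_rule", "freeloader")}

print("without punishment:", simulate(use_reputation=False))
print("with punishment:   ", simulate(use_reputation=True))

Run as written, the unconditional variant typically leaves the freeloaders with the highest average payoff, while the variant that withholds help from known refusers leaves the Golden Rule followers far ahead. That is the sense in which the Golden Rule initiates indirect reciprocity while punishment, even punishment as mild as ceasing to cooperate, maintains it.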

But was Protagoras right? Is the function of morality merely to increase cooperation? If this is true, then Protagoras’ hypothesis faces a daunting task. It must explain, as elements of cooperation strategies, virtually everything we know about our moral sense and cultural moral codes.

After Protagoras, Charles Darwin in The Descent of Man (1871) proposed the next important insight into morality as cooperation. He speculated that biological evolution selects for altruism toward others (“altruism” here referring to helping without expectation of direct reciprocity) and moral behavior in general, because more cooperative groups can outcompete less cooperative groups. Darwin was right. By pointing out that evolution selects for increased cooperation within groups which are sometimes in deadly competition with other groups, Darwin explained two puzzling phenomena: altruism and, perhaps unknowingly, why moral norms in different cultures can be so diverse, contradictory, and even bizarre.

If groups are in competition, it can be a matter of life and death to know who is committed to your group and who might be in a competing group. This can be a problem in large groups such as tribes where individuals may not know everyone well. Markers of membership in and commitment to a cooperative group, such as hair and dress style (which are immediately obvious), circumcision, food and sex taboos, and allegiance to one god versus another (which are more hidden, but still important for distinguishing “us” from “others”) were readily adopted and enforced as “moral norms” because they were effective at increasing the benefits of cooperation within groups.

The diversity, contradictions, and bizarreness of cultural moral norms can be understood as due to two primary causes. First, as described above, groups will use different “markers” of membership and commitment to the group to clearly distinguish themselves from competing groups. Second, societies will use different definitions of who deserves full moral regard (perhaps only men or only one tribe) and who is worthy of less or even no moral regard (perhaps women, slaves, or an enemy group).

Thus, Darwin’s idea that morality is selected for because it enables groups to outcompete other groups largely explains why past and present moral codes can superficially appear to be such a chaotic mess. Indeed, Darwin’s evolutionary explanation, plus a little modern knowledge about cooperation strategies, explains virtually everything we know about our moral sense and past and present cultural moral codes as described here and here.

But a well-functioning moral system must include answers to questions such as “Who will be in favored in-groups? Just family, friends, countrymen, or everyone?” and “What interactions between groups are moral?”. Can an evolutionary perspective illuminate these questions?

There has been a lot of progress in the science of cooperation and moral behavior in recent decades. That science supports a claim about what is universally moral which may help answer these questions. This principle is “Increasing the benefits of cooperation without exploiting others is universally moral”.

The word “moral” here is used in the normal cultural and scientific sense to refer to right and wrong behavior, meaning behavior judged worthy of praise or condemnation by our moral sense and cultural moral codes. “Universally moral” refers to what is universal about all the diverse, contradictory, and bizarre behaviors our moral sense and cultural moral codes motivate and advocate. The claim is ‘true’ in a scientific sense because “cooperation without exploiting others” is a necessary (universal) subcomponent of all cooperation strategies (moral behaviors) relevant to human morality. For example, even cooperating in an in-group to exterminate out-groups relies on people following the above principle and thus maintaining this cooperation by not exploiting others in their in-group.

I know of no philosophical argument that this moral principle is what we somehow imperatively ‘ought’ to do regardless of our goals. But no such argument is generally accepted for any moral principle. Since none have been shown to exist, traditional philosophical “imperative oughts” cannot be the basis of a society’s rational choice for moral references for refining their moral codes. But moral references can be rationally chosen based on whichever moral principle is believed to be most likely to aid in meeting shared needs and preferences.

The cultural usefulness of this universal moral principle is due to its ability to directly and powerfully help us achieve shared goals, its innate harmony with our evolved moral sense, and our intellectual recognition of its mind-independent, uniquely “universally moral” status.

This essay is part of an exploration of different approaches to explaining this evolutionary perspective on morality. The approach I personally prefer begins with first principles about cooperation that are innate to our physical reality and independent of human existence. But tastes differ.

If adopting and practicing this principle can be rationally justified simply by a society’s desire to best meet their shared needs and preferences, would that leave moral philosophers out of a job? No, of course not.

First, moral philosophy’s tools and insights would be needed to build, on the bare foundation science provides, coherent, well-functioning moral systems. Such systems must address issues such as abortion, human rights, and relative moral obligations to family, friends, people you will never meet (including future generations), animals, and eco-systems. Second, philosophical answers to larger ethical questions such as “What is good?”, “How should I live?”, “Why should I act morally?”, and “What should our goals be?” range far beyond mere cooperation and science’s domain. Moral philosophy’s traditional methods and wisdom remain as relevant and critical as ever. Moral progress resulting from recognizing morality’s grounding in science might even give moral philosophy’s reputation a substantial boost.

Will all such cultural moral systems then be the same if they are consistent with the proposed single universal moral principle? No. Such cultural moral norms would be merely heuristics (usually reliable, but fallible rules of thumb) for the universal moral principle. Depending on a group’s history, environment, and sometimes different goals, the cultural moral norms that are most likely to achieve those goals by “increasing the benefits of cooperation without exploiting anyone” could still be diverse, contradictory, and perhaps even bizarre to other cultures.

Despite such potential diversity, one moral norm will almost certainly have a prominent place in every such morality. That moral norm will be some version of the Golden Rule. Despite its known flaws, the Golden Rule’s ability to initiate the powerful cooperation strategy of indirect reciprocity ensures it a permanent place in human morality.

 

Links:

The Not So Golden Rule, https://philosophynow.org/issues/125/The_Not_So_Golden_Rule

Found: a Universal Moral Principle, https://scienceandmorality.com/2017/10/27/found-a-universal-moral-principle/

Moral Universals from An Evolutionist’s Perspective, https://scienceandmorality.com/2015/08/13/moral-universals-from-an-evolutionists-perspective-3/

A Universal Principle Within Morality’s Ultimate Source, https://evolution-institute.org/a-universal-principle-within-moralitys-ultimate-source/

25 thoughts on “The Golden Rule’s power points to a universal moral principle”

  1. Thanks, Mark. Another great contribution to clarifying an important way that modern society might somehow emerge from the dangerous swamp of nihilism and self-interest that we find ourselves in. And good that you have again made clear that moral philosophers will not lose their jobs through implementation of this radical clarification and simplification of the field. Sighs of relief all around.

    • Ken,
      Pointing out the continued importance and arguably increased cultural utility of moral philosophy if it can be firmly grounded in science is important for at least two reasons. First, it is true and useful. Second, it is psychologically very difficult for any of us to understand a concept if understanding that concept threatens our livelihood and identity. Perhaps I ought to emphasize the increased cultural usefulness and resulting potentially increased cultural valuation of moral philosophy when firmly grounded in science. Thanks for commenting.

  2. So why do you not cooperate with me as someone who is also trying to explore the roles of cooperation and evolution in morality? Why can’t my peer-reviewed ideas (that refute your claim above that there is no bridge between is and ought) be shared with The Evolution Institute? If you did unto me as you would have me do unto you, you would not exclude me.

  3. Hi Ed,

    Regarding following the Golden Rule, it is a necessarily flawed heuristic, not a perfect moral principle. Following the Golden Rule does not always help meet group shared goals (the function of morality) even when doing so might help meet an individual’s goals.

    As a fellow non-academic, I sympathize with your frustration with the difficulty of getting high quality comments on one’s technical work regarding the origin and function of morality.

    I was the initial instigator and organizer of both This View of Life’s morality section and the Evolution Institute’s “Is there a universal morality?” project you requested to be a part of. However, my plan has always been to focus on essays from people with professional level peer-reviewed journal publications in relevant areas of either philosophy or biology. And all essays, except mine, were from professors of philosophy or biology. My essay was reviewed by people connected to the Evolution Institute and was deemed to be of adequate quality to be included even though I am not an academic.

    I’ve developed my perspective almost entirely with the aid of feedback from posting on open philosophy forums. I consider the responses I’ve gotten on the Reddit philosophy site, and even more so before on the sadly now defunct Philosophy Forums site, to be invaluable. If you have not already done so, you might try the Reddit site to find what people find objectionable about your approach and for clues how to fix it.

    From my perspective, your explanation of the function of morality needs more development. As I phrase it, “The science of morality is fairly easy, it is the presenting of it in ways that philosophy majors can understand that is devilishly difficult.” All I can suggest is just keep working on the presentation of it. That is what I am doing.

    • That’s the thing, Mark. I have done all that. And you didn’t even give me a chance to write something that shows you that I have. I got my biggest idea about a universal basis for morality published in a peer-reviewed academic journal. I then published a version for the lay person in Humanist magazine (read by thousands of subscribers). I’ve further developed these ideas over 6 years (!) on my blog during which time I’ve interacted quite a lot with professional philosophers (one of whom you recently wrote out of the blue causing him to ask me about you) and evolutionary psychologists (one of whom quoted me in her TVOL1000 profile). If I’m frustrated, it’s only with you for not engaging with me; I’ve had plenty of other great engagements. I can only guess that you just refuse to budge off your pet idea so you disagree with mine and won’t give it the light of day out of some excuse that I’m not a professional “philosopher” (all evidence to the contrary, including the recent twitter spat I had with Massimo Pigliucci (also featured in your series, also someone who disagrees violently with you) when I got him to admit that I am “a philosopher”). I absolutely applaud and admire the work that The Evolution Institute does and would love to champion its ideas further. I even appreciate the work you do there, but I am afraid you are hampering its progress. (If you haven’t read it yet, check out the recent collection of essays edited by David Livingstone Smith called “How Biology Shapes Philosophy: New Foundations for Naturalism.” There are some excellent academics in there who could also have been a part of your series on morality.)

      • Ed, this is certainly an appropriate place to discuss our different conclusions about what is universally moral. If you are interested, we could start with answers to basic questions. Following are some questions I see as revealing about claims for moral universals from an evolutionary perspective:

        1. What is the claimed moral universal?
        2. Is this a claim about what ‘is’ morally universal or what ‘ought’ to be morally universal?
        (“What ‘is’ morally universal” here refers to what objectively is morally universal independent of anyone’s opinion. Such claims are arguably in the domain of science, and, like the rest of science, innately come with only instrumental ‘oughts’ [to achieve goal Y, one ought to do X]. In contrast, “what ‘ought’ to be morally universal” refers to non-instrumental oughts – what people somehow ought to do regardless of their goals, which is beyond the domain of science.)
        3. If it is a claim about what ‘is’ morally universal as a matter of science, what criteria were used to show its scientific ‘truth’? For example, does this hypothesis explain the sum of factual knowledge about human morality markedly better than any other hypothesis?
        4. If it is a claim about what ‘ought’ to be universally moral, what philosophical argument supports that claim?

        Are these questions acceptable?

      • Are these questions acceptable? No. But I’ll get to that in a bit. First, I disagree with your very first statement that “this is certainly an appropriate place to discuss our different conclusions.” That is not certain at all. I am uninterested in trying to convince *you* on *your* blog to change *your* mind on this subject because you have shown time and time again to be inflexible in the face of new and better arguments. For example, you are still banging on about the Golden Rule for some reason, and if you can show me a translation of Plato where “Protagoras explained to Socrates that morality’s function, the primary reason it exists, is it increases the benefits of cooperation,” I will eat that page. You put the words of your hobby horse into his mouth without so much as blushing so how can I expect you to discuss morality honestly with me again?

        However, I will explain my position because it illustrates why you are wrong about “morality as cooperation”. Let’s take it as a hypothesis that I might have something new to say about morality. If that were correct, I would be offering something new for the benefit of future people who *cannot* be cooperating with me because they literally don’t exist yet. And I would be *competing* with you because I think your own ideas are detrimental to the future of life. Yet (!), my actions would be moral, because I am trying to change what we see about what we ought to do. To reiterate, this very comment would then be an observation of a moral act that does not contain any cooperation in it, so if you were a good scientist, you would be forced to change your hypothesis. I don’t expect you to, however, because I have given examples in the past where moral actions did not contain cooperation and you haven’t yet dropped this ridiculous notion. (Here’s another easier one as a reminder: we can’t “co-operate” with trees or dogs or chimps, they aren’t consciously acting along with us in some self-identified in-group, but it is moral to expand our circle of concern to them.)

        Now, to your questions.

        –> 1. What is the claimed moral universal?

        Right off the bat, this is an error of “essentialism”, phrasing the question as if a moral universal already existed. Read this recent article from Dan Dennett to see how moral universals could be seen to be *emerging* or *evolving* and will probably never finish doing so.

        (Link: Demise_of_Essentialism.pdf)

        –> 2. Is this a claim about what ‘is’ morally universal or what ‘ought’ to be morally universal?

        My own claim is that we can inductively reason what it looks like *is emerging* as a universal moral, what morality ought to become eventually, and is in fact showing evidence of doing so.

        –> In contrast, “what ‘ought’ to be morally universal” refers to non-instrumental oughts – what people somehow ought to do regardless of their goals, which is beyond the domain of science.

        This is wrong. What ought to be morally universal can be entirely predicated on natural desires of people. That’s not a naturalistic fallacy either. It would be a super-naturalistic fallacy to claim oughts from something outside of the natural world.

        –> 3. If it is a claim about what ‘is’ morally universal as a matter of science, what criteria were used to show its scientific ‘truth’?

        I don’t know. You tell me. I don’t claim there *is* universal morality anywhere yet, but you seem to think you have found it even though I’ve given you counterexamples (data) to disprove that. So I don’t understand the criteria you must be clinging to.

        –> 4. If it is a claim about what ‘ought’ to be universally moral, what philosophical argument supports that claim?

        In my own case, logical deductions. Premise 1: Life is. Premise 2: Life wants to remain an is. Conclusion: Life ought to act to survive. None of those elements stand outside the natural realm, but given that it encompasses the entire universe of life, that makes it universal to life, including us humans who don’t yet or always realise it.

        To finish up, I realise that I have not presented my own argument in full here so that is probably not clear or persuasive to you. I would want to write a full essay for The Evolution Institute to make my own point about morality clear. The whole point of these comments on your blog is to show that I think your criteria are wrong, so I have mostly just attacked your own arguments. (Again, I *competed* with them, out of a sense of morality.) I did so, hoping you would see that your role as the moral editor at The Evolution Institute is not to prove that you are right (because you aren’t), but your role should be to further the attempts at trial and error to try to move towards truth. I really wish you would give me the chance to share my trials (and errors) in the bigger and better forum that matters to me.

      • Sorry, but we have cross-posted. (Or I at least tried to. Here’s what I had to say.)
        —————————————–
        If you’ll allow me to continue, I’ve thought about why we continue to be at loggerheads and I might have arrived at some insight. I take the fact that you continue to suggest Reddit and defunct philosophy forums as places to turn to for help as evidence that you haven’t actually engaged in philosophy all that deeply. Your arguments make sense to me as coming from a *scientist* who sees only the methods of empirical observation as ways to arrive at knowledge. But there are other ways to arrive at philosophical conclusions.

        Last week, I was invited to speak at an event where Timothy Williamson (the current Wykeham professor of Logic at Oxford University, a position once held by AJ Ayer) came to talk about his new book “Doing Philosophy.” He writes in the book that it provides a view of philosophy that many philosophers would hate, but I think you would like it. In it, he makes the case (that I agree with) that science and philosophy can be reconciled by their similar methods. However, there are some places that science just cannot go with its empirical observations (due to danger, ethics, feasibility, possibility, etc.). That is where philosophical thought experiments come in, which is the chapter in the book that I was brought in to comment upon and write a response to. (All of which will be published soon.) I see this as relevant to the problem you and I have because I just don’t think you see how my philosophical forays can arrive at logical conclusions. (I still think your own empirical observations don’t fit the data, but you never engage with me about that and still insist that yours fits “well enough” or “better than anything else” or something like that.) Anyway, you don’t have to affirm or deny my hypotheses about you and our loggerheads, but I’ll just leave you with the suggestion that you might like Tim Williamson’s very short and very clear new book “Doing Philosophy.”
        ————————————————
        Okay, that’s what I said. But to quickly respond to your latest comment, I don’t repeatedly put that syllogism out there by itself, but you are supposed to have read my paper by now in which I fully explain why that desire for life to survive (NOT the species as you mischaracterised) is “the want that must.” I would have hoped I could speak about my ideas in shorthand by now with you, but apparently not since you can’t actually parrot them back to me. I don’t, therefore, know how you can possibly think my ideas haven’t been thought through when you don’t even know them. But alas, you have your position, and I will go on in the world without your cooperation.

        My comments above, and my proposal do contradict your own arguments, as I have said repeatedly, and means without ends are utterly useless, but I’m afraid this is the end of this conversation.

  4. Ed,

    “In Plato’s Protagoras there is an avowedly mythical account (told by the Greek philosopher Protagoras) of how Zeus took pity on the hapless humans, who, living in small groups and with inadequate teeth, weak claws, and lack of speed, were no match for the other beasts. To make up for these deficiencies, Zeus gave humans a moral sense and the capacity for law and justice, so that they could live in larger communities and cooperate with one another.” (Peter Singer, 2005. “Ethics and Intuitions”)

    “(This) gift, direct from Zeus, saved humankind from destruction: the gift of aidós and díkë: a sense of right and wrong. Importantly, Zeus instructed his messenger Hermes to give this not just to a few people but to everyone; and this universal sense of right and wrong – the foundation of civil society – was to be our salvation.” Plato, L Brown, A Beresford, Protagoras and Meno (Penguin Classics, Kindle Locations 215-220)

    Protagoras had asked Socrates if he wished his question about the nature of morality to be answered in “story form” or “in the manner of regular discourse”. Socrates chose story form – hence Protagoras explained morality using the Greek myth that Zeus gave each of us a moral sense to increase the benefits of cooperation.

    Also, your primary conclusion does not follow from your premises.
    Premise 1: Life is.
    Premise 2: Life wants to remain an is.
    Conclusion: Life ought to act to survive.

    Simply wanting something is not logical justification for acting to obtain it. You need another premise in there.

    Please post your proposed syllogism on the Reddit philosophy page. Philosophy majors there should be much more capable, as well as more credible, than I am in explaining your error.

    • The justifications for this particular moral desire are explained in the rest of the article, which, AGAIN, was published in a peer-reviewed academic journal. The only refutation to it is to advocate for life going extinct, which is what you must want then. That’s perfectly allowed in this universe, but such wants will go extinct. Your advice about Reddit shows once again that you refuse to read my paper (and related work) and think about it with an open mind.

      And you have to be kidding with that justification for your characterisation of the Protagoras story. Yes, we cooperate. Yes, we have morality. That doesn’t say “that morality’s function, the primary reason it exists, is it increases the benefits of cooperation.” You are making a giant leap there with no justification. There’s more to morality, but I’m not surprised that you again refuse to see it.

      • Ed,

        You can’t repeatedly present a syllogism with a missing premise and expect to be taken seriously.

        Also, you are confounding what morality ‘is’ (perhaps cooperation strategies as I propose) with what we might propose morality ‘ought’ to be, perhaps “preserving the species” (or something close to that) as I understand you propose.

        It can be a matter of science what morality objectively ‘is’. I, along with others, argue that the morality as cooperation hypothesis markedly better meets relevant criteria for scientific ‘truth’ than any competing hypothesis.

        Perhaps it would help to point out that my “morality as cooperation” describing what morality ‘is’ could be entirely consistent with “preserving the species” describing what the ultimate goal of moral behavior ‘ought’ to be. The two claims aren’t even about the same category of thing. “Morality as cooperation” describes what moral ‘means’ ‘are’. So far as I can interpret, your “preserving the species” describes what the ‘end’ (or goal) of moral behavior ‘ought to be’. These “means’ and “ends” are radically different categories of things.

        As I understand it, your proposal does not and could not contradict mine. So I did not oppose its publication because it contradicted mine as you seem to think. The reason I opposed it was that I thought it insufficiently thought through. For example, I would not want to publish an obviously flawed syllogism.

        Of course, it is possible I am completely wrong. But I have to go with what I can understand.

        I think it is time to end this discussion as non-productive.

  5. Ed,

    Thanks for the additional reading suggestions.

    Your first suggestion, How Biology Shapes Philosophy, expresses an old idea of mine – that philosophers’ well-considered armchair intuitions about what is and is not moral are products of our biological evolution. Hence, biology influences philosophy in ways that mainstream philosophy does not seem to be aware of. I look forward to a re-perusal.

    I also look forward to reading Doing Philosophy. Perhaps it will shed light on our communication difficulties.

    Regarding your claim that “means without ends are utterly useless”, one of the three main branches of moral philosophy is deontology. That is the idea that the rightness or wrongness of an action is determined by the nature of the action itself, not its consequences or the character and habit of the actor. You are thus claiming that deontology, prominently including Kantianism – which is solely about moral ‘means’ – is “utterly useless”. Again, such ideas are in serious need of being tested and refined in arenas where you will receive informed criticism of them.

    • Ha ha, I’ll stick my neck out and say Kantian ethics is utterly useless.

      Of course that’s a bit of hyperbole. There is use in analysing anything rigorously. But means alone just spin wheels without direction. That’s my definition of useless. I’ve fought for that elsewhere just fine thanks.

      Enjoy the reading.

  6. (Singer, The Expanding Circle, p. 67) What, though, of an ethical theory which emphasises not goals or consequences, but moral rules or the preservation of absolute rights, irrespective of consequences? Kant’s moral theory is often taken as an instance of this view. … Ordinary common-sense knowledge was enough to lead most philosophers to reject moral theories which pay no attention to consequences.

    • Hi Ed,

      I also reject moral theories which pay no attention to consequences.

      Consistent with the title, the Golden Rule is a heuristic (a usually reliable, but necessarily flawed, rule of thumb) that points to the moral theory that “Behaviors that increase the benefits of cooperation and don’t exploit others are universally moral”. (Variations of the Golden Rule are near universal as moral guides because they advocate initiating indirect reciprocity, arguably the most powerful cooperation strategy known.)

      Therefore, acting according to the Golden Rule would not be universally moral if doing so decreased the benefits of cooperation or exploited others. I do in fact advocate judging the morality of acting according to versions of the Golden Rule in part based on the consequences of doing so.

      What Singer’s utilitarianism gets wrong is that it judges the morality of actions based only on whether they maximize happiness or some such. Evolutionary science reveals that our moral sense’s and cultural moral codes’ selection forces have two components – moral ‘ends’, increasing the benefits of cooperation, and moral ‘means’, acting in a cooperative manner. That is, standard consequentialist ‘moral’ theories are about something, but not what “morality” ‘is’ as defined by the category of behaviors motivated by our moral sense and advocated by cultural moral codes.

      If you want to act in a universally “moral” way, you have to get both moral ‘ends’ (benefits of cooperation) and moral ‘means’ (elements of cooperation strategies) right at the same time.

      • So you reject deontology now too? That’s news to me, and seems to directly contradict your comment above on October 10, 2018 at 1:41 pm (as well as the consistent support you give to the Golden Rule), but I’m glad to hear it.

        I honestly don’t see how the Golden Rule (“Do to others as you would have them do to you”) points to what I’ll call the Sloan Rule (“Behaviors that increase the benefits of cooperation and don’t exploit others are universally moral”). The GR literally just says to try to cooperate (which can often be a great thing!); it is silent about what that cooperation ought to lead to. So, your caveat that one ought not to follow the GR if doing so leads to poor consequences is actually the exact and utter refutation of the GR that I and many philosophers in history have argued for. You’re literally saying, “follow the GR if it is good, don’t if it’s bad.” So good and bad are therefore judged independently of the GR, and we can dispense with it, other than to say it’s a strategy that works some of the time.

        I quoted Singer above to you because I was re-reading him in advance of seeing him speak last night. I also criticise his utilitarianism, but I honestly don’t see the difference between his “happiness or some such” and your “benefits of cooperation.” Both are wooly and subjective to me without a strong definition to back them up. Perhaps I’ve missed it though. What are your benefits of cooperation? If they lead to the ultimate consequence of the survival of the project of life, then that is exactly what my evolutionary consequentialism says we ought to act towards.

        On the other side of the Sloan Rule, I also don’t see how moral actions “don’t exploit others.” This sounds like a rehash of the harm principle, but the harm principle has collapsed because from *someone’s* subjective perspective, harm is ubiquitous. (Google “Harcourt Collapse of the Harm Principle” for his seminal paper on political philosophy about this.) I defy you to come up with an action that increases the benefits of cooperation for someone that doesn’t also cause harm to someone. Moral choices always require a choice between harms—that’s why they are moral choices.

        Finally, I agree that cooperative actions *can* be moral means, but they aren’t always, nor are they the only means! Cooperation is one of the evolutionary virtues that I list in my evolutionary virtue ethics, but competition is in there too. As well as wisdom at striking the balance between these two forces.

        Your final sentence advocating for moral ends (a consequentialism) and moral means (a virtue ethics) can best be united, in my opinion, not merely using cooperation (and….?benefits?), but by using a moral rule (a deontology) that advocates us to “Act so as to maximise the survival of life in general and avoid universal extinction.” That is the way I alter and combine the three traditional camps of ethics into a new and stronger position using evolutionary perspectives. I’ve submitted another paper for peer-reviewed publication about this too, which I’ll happily share once accepted.

  7. Hi Ed,

    Communication about morality is particularly difficult due to our biological and cultural evolutionary history giving us socially useful, but strongly misleading, intuitions about it.

    I’d like to make just a couple of points to, hopefully, clarify my view, then I have some questions for you.

    • First, the category of behaviors motivated by our moral sense and advocated by past and present moral codes defines what morality descriptively ‘is’: elements of strategies that solve a cooperation dilemma that is innate to our physical universe. That cooperation/exploitation dilemma is how to sustainably obtain the benefits of cooperation without those benefits being destroyed by exploitation (which is virtually always the winning short-term strategy).
    o All highly cooperative societies, from the beginning of time to the end of time and regardless of biology or lack of it (robot societies also!), must solve this dilemma.
    o Claims from traditional moral philosophy such as that morality depends solely on 1) consequences (such as maximizing happiness), 2) following a rule (like Kant’s), or 3) being “virtuous” are speculations about what people think or have thought ‘morality’ ‘ought’ to be. People are certainly free to make such claims about what morality ‘ought’ to be. But to date no one has come up with a convincing argument justifying what morality ‘ought’ to be. In sharp contrast, what morality ‘is’ is something we have a good chance to agree on as a simple matter of objective science.
    • There is a subset of descriptively moral behaviors (behaviors that solve the cooperation/exploitation dilemma) that are universally moral (regardless of time, place, or biology) because they are innate to all strategies that solve the cooperation/exploitation dilemma.
    • That subset of universally moral behaviors is defined by “Behaviors that increase the benefits of cooperation without exploiting others”. Note universally moral behaviors are defined by both a rule (increase the benefits of cooperation) and a consequence (that following the rule actually increases the “benefits”, however people define them, of that cooperation).
    • Versions of the Golden Rule are excellent, but fallible, heuristics for this moral universal because they advocate initiating indirect reciprocity, arguably the most powerful cooperation strategy known, without knowingly exploiting others.

    Questions to you:

    The Golden Rule is only a fallible heuristic, a usually reliable rule of thumb. So why do you say I refute it if I point out it is immoral to follow the Golden Rule if doing so decreases the benefits of cooperation (as when “tastes differ”, “in dealing with criminals”, and “in time of war”)? I can’t make any sense of that. Rather than refuting it, I celebrate (and explain as a matter of science) why the Golden Rule is normally such a useful moral guide as well as the special cases when it is not.

    The “benefits” of cooperation could be happiness, flourishing, or whatever people coherently desire. Wouldn’t you agree that people could decide to cooperate to prevent the extinction of “life in general”? Such behavior would solve the cooperation/exploitation dilemma and hence be descriptively moral. However, perhaps your in-group decided they should be in the part of “life in general” that survives and humans in out-groups should be killed to be able to achieve your ultimate goal. Such behavior would not be universally moral (“moral” in normal moral philosophy terms). Wouldn’t you necessarily be arguing that such genocide would be “moral”?

    Are money economies necessarily morally virtuous in your view? Money economies are powerful cooperation strategies. If virtues are all about achieving goals, wouldn’t you have to include money economies as moral virtues?

    We agree that “survival of the species” or “survival of all life”(?) are or could be desirable consequences (‘ends’) of human behaviors. Would you consider the possibility that, as a matter of science, there are moral and immoral ‘means’ of achieving that ‘end’?

    • 1. The Golden Rule fails as a deontological ethic. Kant didn’t write a “categorical suggestion.” These rules are supposed to be imperative. If you now want to backtrack and call it the Golden Guide That’s Generally Good, I would be fine with that. But you have to be clear that it does NOT stand on its own. (In that way, it is rather like the god of Euthyphro’s Dilemma who discovers he’s not actually the arbitrator of good and evil.)

      2. You’ve been clear before that you are trying to say what morality *is* rather than what it *ought* to be, but it’s only now become clear to me that I have to point out how impossible your task is. Science might be able to define what physics *is* without saying anything normative about that subject, but how can you define morality (the study of what is good) without making a normative statement about what is good?? You can’t!! That is why I’ve banged on (and on, and on, and on) about asking you what your “benefits of cooperation” are, and you have continued to duck the question and say it’s “whatever people coherently desire.” But that’s not a definition. Not even close. No word in the Oxford English Dictionary is defined as: “(n) you tell me.”

      3. I have written very clearly in the published paper I keep urging you to read that I am NOT talking about the survival of any one individual, social group, ecosystem, or species. Nor do all living things get to survive indefinitely. When I say the survival of life is the ultimate consequence that matters (because non-living things don’t sense the world and don’t replicate so they don’t have anything that matters to them), I am talking about “the project of life” or “life in general.” That’s the largest possible circle of moral concern that Peter Singer almost expanded to in his 1981 book, but E.O. Wilson defined it in total in his 1998 book Consilience. The word “life” is the simple and clear one that captures this. Don’t add adjectives to it unless you are purposefully trying to create straw men.

      4.
      –> Wouldn’t you agree that people could decide to cooperate to prevent the extinction of “life in general”?

      Yes.

      –> Such behavior would solve the cooperation/exploitation dilemma and hence be descriptively moral.

      That doesn’t follow. I still defy you to find a cooperative act that doesn’t “exploit” (definition please??) some group somewhere.

      –> However, perhaps your in-group decided they should be in the part of “life in general” that survives and humans in out-groups should be killed to be able to achieve your ultimate goal. Such behavior would not be universally moral (“moral” in normal moral philosophy terms). Wouldn’t you necessarily be arguing that such genocide would be “moral”?

      That’s actually an empirical question once the goal has been agreed upon. Overwhelmingly the answer would be no; of course genocide would not lead towards cooperation toward the survival of life. Theoretically, however, if one in-group of humans was undeterredly determined to drive Earth to extinction, then it’s conceivable that that group should at the very least be detained indefinitely, and perhaps wiped out if it was otherwise impossible to stop them from wiping the rest of us out. The war against the Nazis is a (generally approved) moral approximation of this.

      5.
      –> Are money economies necessarily morally virtuous in your view? Money economies are powerful cooperation strategies. If virtues are all about achieving goals, wouldn’t you have to include money economies as moral virtues?

      I disagree with the certainty expressed that they are “necessarily morally virtuous” or that I “have to include” them, but sure, in the current world, political economies are part of political philosophy that wrestles with morally-laden decisions for societies.

      6.
      –> We agree that “survival of the species” or “survival of all life”(?) are or could be desirable consequences (‘ends’) of human behaviors.

      Drop the adjectives. Just say life. Then…great!

      –> Would you consider the possibility that, as a matter of science, there are moral and immoral ‘means’ of achieving that ‘end’?

      No. That would require the existence of absolute moral goods irrespective of goals / ends. We see no evidence of that in the universe. As far as I can tell, actions are only good instrumentally. This is why even “Thou shalt not kill” fails as a deontological rule. It’s also why “cooperation to increase the benefits of cooperation” can sometimes be bad, depending on who is defining those benefits.

  8. Hi Ed,

    1.“The Golden Rule fails as a deontological ethic. Kant didn’t write a “categorical suggestion.” These rules are supposed to be imperative. If you now want to backtrack and call it the Golden Guide That’s Generally Good, I would be fine with that. But you have to be clear that it does NOT stand on its own.”

    ?????? Ed, for at least the last 12 years, I have consistently described how and why the Golden Rule is a centrally important moral heuristic. I hope you understand that a heuristic is a generally reliable, but fallible, rule of thumb. So for at least the last 12 years I have described the Golden Rule, over and over, as the equivalent of what you call a “moral guide”. There has been no “backtracking”.

    2. “Science might be able to define what physics *is* without saying anything normative about that subject, but how can you define morality (the study of what is good) without making a normative statement about what is good??”

    Science can tell us what the origin and function of descriptively moral behaviors ‘are’ with the same authority it tells us about the origin and function of other features of our physical universe such as biological structures.

    It is also fully within the domain of science to identify any necessarily universal subcomponents of these descriptively moral behaviors.

    Do you disagree? If so, why?

    As I have pointed out at every opportunity, science is now and always will be silent regarding what we imperatively ought to do and can provide no source of normativity. Specifically, what science tells us is universally moral (universal to all descriptively moral behaviors) necessarily has no innate normativity. But neither has moral philosophy been able to conclusively reveal any innate normativity, though not for lack of trying. Our moral sense and cultural enforcement are the only known sources of normativity. So where does that leave us?

    Cultures require moral norms to achieve common human goals for living in societies. I advocate choosing moral norms to be those most likely to enable achieving common ultimate goals related to living in social groups. Those norms are, I argue, consistent with the universal subcomponent of all descriptively moral strategies, “Increase the benefits of cooperation without exploiting others”.

    Can we understand each other on at least these two fundamental points?

    • Sadly no.

      Preface: Why do you not answer my simple and direct questions, whereas I endeavour to answer yours? One might be tempted to think it’s because I have answers and you do not. At the very least it stops me from gaining an understanding of you and your position that I am genuinely trying to gain through interrogation.

      1. As a reminder (for myself), here are the opening paragraphs of your post above:

      ———-
      Philosophical perspectives on the Golden Rule typically focus on the Golden Rule’s well-known flaws and may even have a dismissive tone.

      The philosopher Dan Flores recently wrote:

      “If ethics is the inquiry into the basic claims of morality, then upon philosophical scrutinization of the Golden Rule, we find that, in the words of Quine, ‘there is nothing to scrute’ after all. We should focus our attention on ordinary moral principles instead.”

      As an admirer of the Golden Rule, I took offense on its behalf. In response, I will both defend its permanent cultural usefulness and argue that it points us to a universal moral principle.
      ———–

      So, while you do say here that the GR has flaws, you still say it is permanently useful and points to a universal moral principle. That is too strong. I am much more of Quine’s opinion that there is nothing to scrute.

      Why is that? I’ve got you to admit that you need to take the consequences of following the golden rule into account before you can know if following it is moral or not. Thus, the GR, on its own, can lead to good, bad, or indifferent outcomes. That right there is the reason that the GR, on its own, is morally meaningless. On its own, it is empty of moral content. It may be a heuristic (defined as “any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, logical, or rational”), but it is not, by itself, a “moral heuristic.” I did not call it a “moral guide”. I called it, using tongue in cheek alliteration, a “golden guide that’s generally good”. But if pressed on that, I have to admit that it is only generally good out of situational luck. The GR would be an awful guide in dystopian cultures like those depicted in violent prisons in Hollywood movies, or some of the communist cultures I’ve encountered in Eastern Europe’s Terror museums. Those are clear examples of where the GR does not have “permanent cultural usefulness.”

      2.
      –> “Science can tell us what the origin and function of descriptively moral behaviors ‘are’”

      I repeat….no it cannot. Not without making a normative judgment about what is or is not in the moral realm. There are no label makers in the universe that tell you if the study of….hydrodynamics, cake baking, beetle colouration, meerkat cooperation, hypothalamus function, or any other topic you can think of….is moral or not. Scientists have to make a normative judgment, right from the start, if they want to say what is or is not included in the moral realm.

      In my published paper, I began with this in the second paragraph:

      “Morality, from the Latin moralitas, meaning manner, character, or proper behavior, is “the differentiation of intentions, decisions, and actions between those that are good and those that are bad.”(4) It’s “a conformity to the rules of right conduct.”(5) But who gets to define what good, bad, or right mean?”

      So yes, you may have said at every opportunity that “science is now and always will be silent regarding what we imperatively ought to do and can provide no source of normativity.” And that’s why you should therefore stop trying to tell us scientifically what morality is. Philosophy is needed to say what morality *is* before scientists can help to tell us what we *ought* to then do. You’ve got it the wrong way around.

  9. Hi Ed,

    I am happy to answer your simple and direct questions. However, there are so many areas of misunderstanding in your comments to respond to that I must focus on what I see as our most fundamental disagreements. And even then, my answers are longer than I would like.

    For example:
    You said “I’ve got you to admit that you need to take the consequences of following the golden rule into account before you can know if following it is moral or not.”

    This shows a fundamental misunderstanding. What led me to start my journey to understanding morality from an evolutionary perspective was the realization that, specifically, the Golden Rule, Christian virtues, and Greek virtues all had the function of increasing the benefits of cooperation in groups. That is, all were merely heuristics (fallible rules of thumb) for obtaining the benefits of cooperation. The function of human morality (behaviors motivated by our moral sense and advocated by past and present moral codes), as confirmed by the science of the last 40 years or so, is to increase the benefits of cooperation in groups. For you to read what you have of mine and conclude I did not understand the dependence of morality on consequences (as well as means) is to me inexplicable.

    Then I see:
    “The GR would be an awful guide in dystopian cultures like those depicted in violent prisons in Hollywood movies, or some of the communist cultures I’ve encountered in Eastern Europe’s Terror museums. Those are clear examples of where the GR does not have “permanent cultural usefulness.”

    First, the Golden Rule is permanently useful because it advocates initiating indirect reciprocity, arguably the most powerful cooperation strategy known. If cooperation is not possible then sure, it is everyone for themselves and life is nasty, brutish, and short. So what?

    Societies exist because of the benefits of cooperation they produce. Saying the Golden Rule is not permanently useful is like saying cooperative societies are not permanently useful. Or more directly, that moral cultural norms such as do not steal, lie, or murder are not permanently useful. All are applications of the Golden Rule.

    Finally,
    “”Science can tell us what the origin and function of descriptively moral behaviors ‘are’”
    I repeat….no it cannot. Not without making a normative judgment about what is or is not in the moral realm.”

    Please explain why science has to make a “normative judgement” in order to study the origin and function of behaviors motivated by our moral sense and advocated by past and present cultural moral codes (such as altruism as advocated by the Golden Rule, and moral norms such as do not steal, kill, or lie, and Christian and Greek moral virtues).

    • –> For you to read what you have of mine and conclude I did not understand the dependence of morality on consequences (as well as means) is to me inexplicable.

      ??????? If that’s truly the case….great. But why else would you ask me, “Would you consider the possibility that, as a matter of science, there are moral and immoral ‘means’ of achieving that ‘end’?” This question implies you think there are moral means independent of the ends. I said there were not. You never followed up. (Maybe you agree there are no moral means independent of moral ends??) You have also said (over and over and over) that “the benefits of cooperation” are whatever people define them as. This is a focus on means (cooperation) without any defined ends (Benefits?? What are they?? I ask you this again and again and again.) If you refuse to define your moral ends, yet insist you know a specific moral means, then I don’t understand how you link morality to consequences in a dependent manner. It sounds to me like they are independent to you.

      –> If cooperation is not possible then sure, life is everyone only for themselves and life is nasty, brutish, and short. So what?

      Then the GR is not PERMANENTLY useful. Sometimes we need to COMPETE to reach good moral ends.

      –> Or more directly, that moral cultural norms such as do not steal, lie, or murder are not permanently useful.

      Okay, I’m going to be charitable here and say that perhaps you are using permanently (defined as “lasting or intended to last or remain unchanged indefinitely”) when you actually mean continually (defined as “repeated frequently in the same way; regularly.”). None of these cultural norms are permanently applied, but they are continually applied (and continually not applied too). Is that what you are trying to say about the GR? (If yes, so what? It’s just an observation that cooperation has occurred.)

      –> Please explain why science has to make a “normative judgement” in order to study the origin and function of behaviors motivated by our moral sense and advocated by past and present cultural moral codes (such as altruism as advocated by the Golden Rule, and moral norms such as do not steal, kill, or lie, and Christian and Greek moral virtues).

      You still don’t see it?? As I said, there are no label makers in the universe that tell you if [something] is moral or not. You keep using “moral sense” and “moral codes” and “moral norms” as if that common adjective has an objective scientific definition somewhere to tell you which senses and which codes and which norms are moral ones. But some senses and codes and norms are surely not moral—they may deal with mere preferences. It takes a philosophical normative judgment to tell you which one is which. As it has been said, philosophy spawns science once it understands a subject well enough to make it empirical. But a science of morality hasn’t happened yet precisely because philosophers are still working on the definition of what morality *is*. Scientists cannot jump in without doing the normative philosophical evaluation first (or without at least accepting some assumption without realising it’s an assumption). This is literally one of the main criticisms Peter Singer (among others) had of E.O. Wilson’s claim in his 1975 book Sociobiology that morality could be “removed from the hands of philosophers and biologicized.” Wilson was wrong, and so are you. (Though that’s good company to keep.)

  10. I’m sorry to say that this back and forth between Mark and Ed is a little bit childish. Mark’s basic thesis, that the Golden Rule is a useful “heuristic” (as defined by him), seems pretty clear and uncontroversial. Ed’s analysis of this basic thesis seems born from his profound unhappiness about having a paper rejected previously by Mark (or associates) in some other venue. Mark should have recognized long ago that this argument with Ed is a useless enterprise.

    • Hi PL,

      It is always nice to hear someone thinks my argument about why the Golden Rule is such a useful heuristic is “pretty clear and uncontroversial”. That is my goal. Thanks.

      Largely stimulated by my conversation with Ed, I’m writing a new essay “The time has come to fulfill E. O. Wilson’s vision for grounding morality in science”. It has been fun 1) reading the old arguments as to why that is impossible and 2) trying to clarify my reasoning as to why such a grounding is both possible and would be highly beneficial. My conversation with Ed has been of some benefit to me, at least.

      • You’re welcome. Perhaps if I have been of help to you, it would make sense to accept that I can be of help to others as well. Please consider this in regards to my offer to write something for The Evolution Institute.

        I have already written 2 peer-reviewed papers (1 published, 1 recently submitted) that argue how a science of morality can now be formed using some of E. O. Wilson’s research. The reason I was able to point out a giant hole in your arguments (that you are now attempting to fill) is precisely because I’ve already filled it in. So, feel free to cite me in your new essay, or let me know if you’d like to collaborate on something.
