In virtually every society, someone who acts at a cost to themselves, benefits other people, and acts without consideration of any net benefit to themselves, either direct or indirect, is commonly judged to be acting in a morally admirable way.
Suppose someone performs the same action, but is motivated by expectations of future compensation or benefits. For example, someone donates to a charity because they think doing so will enhance their reputation. Such a person will commonly be judged not to be acting in a morally admirable way. Of course, they are not acting immorally (the act does not deserve punishment), but, because they are motivated by self-interest, the act is only morally neutral, not morally admirable.
As an empirical observation, the morality of an act is commonly judged to depend on the actor’s motivation. If an act is done without consideration of future benefits for one’s self, then it is not motivated by those future benefits; it is not motivated by self-interest.
Based on the above, we might propose defining altruism as “Acting without consideration for one’s future net benefits, with a cost to one’s self, and benefiting other people.” This is a potentially useful definition because 1) it is consistent with common judgments about morality and 2), as will be argued below, it is consistent with the existence of instrumental rational justifications for accepting the burdens of acting altruistically (by this definition).
Unfortunately, due to historical accident, the above definition is not a standard one in moral philosophy. In my view, the more standard definitions have caused needless confusion, in particular concerning instrumental justifications for accepting the burdens of altruism.
“The word ‘altruism’ … was coined about 1851 by the French philosopher Auguste Comte as the opposite of egoism.” (Wikipedia) That ancestry is evident in definitions of altruism used in ethics discussions, such as: “sacrificing something for someone other than the self (e.g. sacrificing time, energy or possessions) with no expectation of any compensation or benefits, either direct, or indirect (for instance from recognition of the giving)”. (again from Wikipedia) The critical difference is “no expectation” in the presently common definition, rather than my proposed “no consideration” of, and therefore no motivation from, future compensation or benefits for one’s self.
This distinction makes a critical difference in discussions of instrumental rational justifications for acting altruistically. Instrumental justifications are of the form “If you desire X, then based on facts Y, you ought (instrumental) to do Z”. Rational justification for accepting the burdens of acting altruistically is one of the central problems in normative moral philosophy.
Assume an instrumental justification for accepting the burdens of altruism is proposed, based on some facts Y and on an individual’s or a group’s overriding desire. For example, “You ought (instrumental) to act altruistically if your overriding desire is to increase your experience of durable well-being over your lifetime”.
Under the conventional definition, which makes altruism contingent on expectations, such an instrumental justification is logically impossible. As soon as you expect instrumental compensation or benefits for accepting the burdens of altruism, the definition says you cannot be acting altruistically.
In contrast, under my proposed definition, which makes altruism contingent on not considering instrumental justifications (even though one might be aware of them), there is no logical problem with the existence of instrumental justifications for accepting the burdens of altruism. At least in my personal experience, my altruism is motivated by my moral intuitions, not by any expectation that, on average, acting altruistically will produce a net benefit such as increasing my experience of durable well-being over my lifetime. I assume other people’s most common motivation for altruism is the same.
For example, I expect that consistently acting according to “Do unto others as you would have them do unto you”, even when I expect, in the moment of decision, that doing so will be against my best interests, will likely increase my happiness over my lifetime. If we use my proposed definition of altruism based on “consideration” rather than “expectation” of future benefits, I can act altruistically (morally) so long as my actions are not motivated by my expectations of benefits for myself. If we instead attempt to use the more common definition, we reach the nonsensical conclusion that it is logically impossible for me to altruistically follow the Golden Rule.
Definitions of words should be based on utility, which sometimes requires tailoring them to the subject matter; they should not be fixed by historical accident. I expect the most useful definition of altruism for moral philosophy will contain the critical elements of “Acting without consideration for one’s future net benefits, with a cost to one’s self, and benefiting other people.”
This definition of altruism is the one intended in my proposed definition of morality based on understanding morality as an evolutionary adaptation (similar to Philip Kitcher’s approach in his 2011 book, The Ethical Project): “Altruistic acts that also increase the benefits of cooperation in groups are evolutionarily moral”.
Above, I described “common” perceptions of which varieties of altruistic acts are judged to be morally admirable. There is at least one philosophical sub-culture, Ayn Randians, who I understand hold as a matter of principle that altruism is never morally admirable. I expect they would firmly reject my proposed definition of morality. However, a different philosophical sub-culture, egoists who act in their enlightened self-interest, might welcome both my proposed definition of altruism and the above Altruistic Cooperation definition of morality, provided they became convinced it was, in fact, their best instrumental choice for a moral reference.
The picture is of a mythical, magical unicorn. It corresponds to my view of the likelihood that “imperative moral oughts” (the kind that Hume warned about), which are somehow binding regardless of people’s needs and preferences, are real. In my understanding, the pursuit of a generally accepted justification for the burdens of imperative moral oughts is equivalent to a search for a magical unicorn. Hence my keen interest in moral justifications based on instrumental oughts (not imperative oughts), and in definitions of altruism that make such instrumental justifications logically possible.