In virtually every society, someone who acts at a cost to themselves, benefits other people, and acts without consideration of net benefits to themselves, either direct or indirect, is commonly judged to be acting in a morally admirable way.
Suppose someone does the same action, but is motivated by expectations of future compensation or benefits. For example, someone donates to a charity because they think doing so will enhance their reputation. They will commonly be judged not to be acting in a morally admirable way. Of course, they are not acting immorally (the act does not deserve punishment), but, because they are motivated by self-interest, the act is only morally neutral, not morally admirable.
As an empirical observation, the morality of an act is commonly judged to depend on the actor’s motivation. If an act is done without consideration of future benefits for one’s self, then it is not motivated by those future benefits; it is not motivated by self-interest.
Based on the above, we might propose defining altruism as “Acting without consideration for one’s future net benefits, with a cost to one’s self, and benefiting other people.” This is a potentially useful definition because 1) it is consistent with common judgments about morality and 2) it is, as will be argued below, consistent with the existence of instrumental rational justifications for accepting the burdens of acting altruistically (by this definition).
Unfortunately, due to historical accident, the above definition is not a standard one in moral philosophy. In my view, the more standard definitions have caused needless confusion, in particular concerning instrumental justifications for accepting the burdens of altruism.
“The word ‘altruism’ … was coined about 1851 by the French philosopher Auguste Comte as the opposite of egoism” (Wikipedia). That ancestry is evident in definitions of altruism used in ethics discussions such as: “sacrificing something for someone other than the self (e.g. sacrificing time, energy or possessions) with no expectation of any compensation or benefits, either direct, or indirect (for instance from recognition of the giving)” (again from Wikipedia). The critical difference is “no expectation” in the presently common definition rather than my proposed “no consideration” of, and therefore no motivation from, future compensation or benefits for one’s self.
This distinction makes a critical difference in discussions of instrumental rational justifications for acting altruistically. Instrumental justifications are of the form “If you desire X, then based on facts Y, you ought (instrumental) to do Z”. Rational justification for accepting the burdens of acting altruistically is one of the central problems in normative moral philosophy.
Assume an instrumental justification for accepting the burdens of altruism is proposed based on some facts Y and on an individual’s or a group’s overriding desire. For example, “You ought (instrumental) to act altruistically if your overriding desire is to increase your experience of durable well-being over your lifetime”.
Using the conventional definition, on which altruism is contingent on having no expectation of compensation or benefits, such an instrumental justification is logically impossible. As soon as you expect instrumental compensation or benefits for accepting the burdens of altruism, the definition says you cannot be acting altruistically.
In contrast, using my proposed definition, on which altruism is contingent on not considering instrumental justifications (even though you might be aware of them), there is no logical problem with the existence of instrumental justifications for accepting the burdens of altruism. At least in my personal experience, my altruism is motivated by my moral intuitions, not by any expectation that, on average, acting altruistically will produce a net benefit such as increasing my experience of durable well-being over my lifetime. I assume other people’s most common motivation for altruism is the same.
For example, I expect that consistently acting according to “Do unto others as you would have them do unto you”, even when I expect, in the moment of decision, that doing so will be against my best interests, will likely increase my happiness over my lifetime. If we use my proposed definition of altruism based on “consideration” rather than “expectations” of future benefits, I can act altruistically (morally) so long as my actions are not motivated by my expectations of benefits for myself. On the other hand, if we attempt to use the more common definition, we reach the nonsensical conclusion that it is logically impossible for me to follow the Golden Rule altruistically.
Definitions of words should be based on utility, which sometimes requires tailoring them to match the subject matter; they should not be fixed by historical accident. I expect the most useful definition of altruism for moral philosophy will contain the critical elements of “Acting without consideration for one’s future net benefits, with a cost to one’s self, and benefiting other people.”
This definition of altruism is the one intended in my proposed definition of morality based on understanding morality as an evolutionary adaptation (similar to Philip Kitcher’s approach in his 2011 book, The Ethical Project): “Altruistic acts that also increase the benefits of cooperation in groups are evolutionarily moral”.
Above, I described “common” perceptions of which varieties of altruistic acts are judged to be morally admirable. There is at least one philosophical sub-culture, Ayn Randians, who I understand hold as a matter of principle that altruism is never morally admirable. I expect they would firmly reject my above proposed definition of morality. However, a different philosophical sub-culture, egoists who act in their enlightened self-interest, might welcome both my proposed definition of altruism and the above Altruistic Cooperation morality definition, provided they became convinced it was, in fact, their best instrumental choice for a moral reference.
The picture is of a mythical, magical unicorn. It corresponds to my view of the likelihood of the reality of “imperative moral oughts” (the kind Hume warned about) that are somehow binding regardless of people’s needs and preferences. In my understanding, the pursuit of a generally accepted justification for the burdens of imperative moral oughts is equivalent to the search for a magical unicorn. Hence my keen interest in moral justifications based on instrumental oughts (not imperative oughts), and in definitions of altruism that make such instrumental justifications logically possible.
Pingback: Philosophers’ Carnival #139 « Nick Byrd's Blog
I was led to your site by your recent comment on Steve Davis’s post “Peter Singer, Group Selection, and the Evolution of Ethics”, where you state:
“Sure, the philosopher Comte’s original definition of altruism (when he coined the word) as the opposite of egoism makes nonsense of the above paragraphs. (“There can be no such thing as pure altruism!”). But Comte’s definition had nothing to do with the science of morality. Letting the science of morality define altruism we get a much more useful definition: “Acting at a cost to one’s self, benefiting others, and done without consideration of future benefits to one’s self”. This definition threads the needle between the common cultural understanding of altruism and altruism as an evolutionary adaptation, allowing sensible usage of one definition in both domains.”
In this post I find a similar statement presenting Comte’s definition of altruism as being at variance with yours.
I’d like to reassure you about how little Comte actually differs from you:
1. his view of altruism was in fact, as far as I can see, quite similar to yours
2. it had, indeed, much “to do with the science of morality”, since he assigned himself precisely the task of founding such a science, on the basis of his discovery of altruism.
All of this can be found in his major work, the (highly controversial) “System of Positive Polity” (1851-1854; see http://archive.org/stream/cu31924092570591). In this treatise, initially intended as a purely sociological work, Comte could not restrain himself from an incursion into biological territory (vol. 1, chapter 3). There he introduced his own theory of the brain (http://archive.org/stream/cu31924092570591#page/n603). This theory assigns to the brain three kinds of “functions”:
— affective
— intellectual
— active
(See http://www.archive.org/stream/catechismofposit00comt#page/n441)
Affective “instincts” are regarded as playing the major role in the dynamics of the brain: they govern the intellectual and the active functions. They are divided into personal/egoistic and social/altruistic instincts/feelings.
Following this, Comte set out to re-organize the whole of his thinking along the lines of the dual superiority of affectivity over intelligence and altruism over egoism. And in the second volume of his treatise, he added to the top of his celebrated hierarchy of the sciences, above sociology, the 7th science of “morals” or scientific ethics. (See, if you can read French, my paper at http://membres.multimania.fr/clotilde/articles/psychoac.xml)
(After that, Comte went so far as to imagine that a new medicine could emerge, based on the premise that health depends essentially on cerebral equilibrium, and the latter mostly on the predominance of altruism over egoism. His premature death in 1857 prevented him from writing the two volumes of a treatise on morals that he had planned.)