For some time I have been describing here how the function of social morality, the morality of interactions between people, can be reduced to a non-moral object by the normal means of science. Specifically, that function is to increase the benefits of cooperation by means of a set of cooperation strategies. This set of cooperation strategies is defined by four necessary aspects: motivation to risk cooperation, motivation to punish exploiters, being in-group strategies (not cooperation strategies that exploit out-groups), and being expected to be effective in actually increasing the benefits of cooperation.
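To make the game-theoretic claim concrete, here is a toy public-goods game with peer punishment. It is a minimal sketch, not part of the scientific argument: the strategy labels, the payoff function, and all parameter values are my own arbitrary, illustrative choices. It only shows the kind of effect at issue: a group of cooperators who pay a small cost to punish free riders ends up better off than the free riders, while a group with no cooperators forfeits the benefits of cooperation entirely.

[code]
# Toy public-goods game with peer punishment, meant only to illustrate
# the kind of cooperation strategy described above (risk cooperation,
# punish exploiters). All parameter values are arbitrary and illustrative;
# this sketch is not part of the scientific argument.

def payoffs(strategies, contribution=10, multiplier=1.6,
            punish_cost=1, punish_fine=3):
    """Return each agent's payoff for one round of a public-goods game.

    strategies: list of "cooperator" / "free_rider" labels.
    Cooperators contribute to a common pool and pay a small cost to fine
    each free rider; free riders contribute nothing.
    """
    n = len(strategies)
    result = [0.0] * n

    # Contribution stage: cooperators risk their contribution.
    pool = sum(contribution for s in strategies if s == "cooperator")
    for i, s in enumerate(strategies):
        if s == "cooperator":
            result[i] -= contribution

    # The pooled contributions are multiplied (the benefits of cooperation)
    # and shared equally, including with free riders.
    share = pool * multiplier / n
    result = [p + share for p in result]

    # Punishment stage: every cooperator fines every free rider.
    punishers = [i for i, s in enumerate(strategies) if s == "cooperator"]
    for i, s in enumerate(strategies):
        if s == "free_rider":
            for p in punishers:
                result[p] -= punish_cost
                result[i] -= punish_fine
    return result

def summarize(strategies):
    """Average payoff per strategy label for one round."""
    per_agent = payoffs(strategies)
    return {s: round(sum(p for p, t in zip(per_agent, strategies) if t == s)
                     / strategies.count(s), 2)
            for s in set(strategies)}

if __name__ == "__main__":
    # Mostly cooperators who punish: exploiting the group does not pay.
    print(summarize(["cooperator"] * 8 + ["free_rider"] * 2))
    # No cooperators: the benefits of cooperation disappear entirely.
    print(summarize(["free_rider"] * 10))
[/code]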
A little review may be useful before I describe the meta-ethical implications of this science.
My justification for claiming that my hypothesis about this reduction is true is as follows: its explanatory power for the social morality category of the data set of what people have believed to be moral (http://forums.philosophyforums.com/thread/65051/), together with its meeting the other relevant criteria for scientific truth while no competing hypothesis does, supports the conclusion that it is provisionally true, which is the normal sense of truth in science.
No philosophical argument for this implied form of moral naturalism is needed, and none may be possible, because 1) no such argument could change the science of the matter and 2) available philosophical tools and methods for such an argument appear inadequate for the task (based on the lack of any convincing conclusions about moral naturalism to date using these tools).
Note that social morality norms and judgments form the largest category of moral beliefs in the above data set. Violators of these norms and judgments are commonly judged to deserve punishment of, at minimum, social disapproval. This category includes past and present enforced moral codes and most intuitive judgments of right and wrong.
The second largest category concerns individualistic morality norms. Violators of these norms are not commonly thought to deserve punishment so long as they do not harm other people. This category includes egoism and the self-interested aspects of virtue ethics.
The smallest category of moral beliefs concerns what the ultimate goal of morality is. For example, utilitarianism claims the ultimate goal is overall well-being, egoism claims it is individual well-being, Kantianism and Rawlsian justice claim it is something like fairness, and nihilists claim there is no ultimate goal.
However, individualistic morality norms and beliefs about what the ultimate goal of morality is do not yet appear to be (and likely never will be) reducible to non-moral objects by science. So science’s implied form of moral naturalism is so far silent on four important issues: 1) the specific ultimate goal of social moral interactions between people, 2) the entire subject of individualistic ethics, 3) whether the property of rightness that is reducible from the function of past and present moral codes and other sources might, for unknown reasons, NOT be the actual property of rightness, and 4) whether there is any sound argument for innate bindingness (oughts) based on what science tells us about what ‘is’ (without such an argument, this science-based moral naturalism can only claim instrumental utility for achieving human goals).
What are the meta-ethical implications, beyond implying a strange form of moral naturalism, if my claim is scientifically true?
[quote][i]From Wikipedia
… there are three kinds of meta-ethical problems, or three general questions:
1. What is the meaning of moral terms or judgments?
2. What is the nature of moral judgments?
3. How may moral judgments be supported or defended?
A question of the first type might be, “What do the words ‘good’, ‘bad’, ‘right’ and ‘wrong’ mean?” (see value theory). The second category includes questions of whether moral judgments are universal or relative, of one kind or many kinds, etc. Questions of the third kind ask, for example, how we can know if something is right or wrong, if at all. Garner and Rosen say that answers to the three basic questions “are not unrelated, and sometimes an answer to one will strongly suggest, or perhaps even entail, an answer to another.” [/i][/quote]
Answers to the three meta-ethical questions posed above will obviously differ for each of the three categories of moral beliefs described above.
I will focus on the meta-ethical implications for the social morality category (which, again, includes past and present enforced moral codes and intuitive moral judgments about interactions between people).
[quote]1. What is the meaning of (these) moral terms or judgments?
Social morality’s moral terms and judgments refer to whether an act is consistent with a set of cooperation strategies. This set of cooperation strategies is defined by four necessary aspects: motivation to risk cooperation, motivation to punish exploiters, being in-group strategies (not cooperation strategies that exploit out-groups), and being expected to be effective in actually increasing the benefits of cooperation. (I have been calling them “altruistic cooperation” strategies, similar to how Herbert Gintis uses the phrase, but I have seen some authors use “altruistic cooperation” in a contradictory sense. So I need a new name for this category of cooperation strategies. Maybe “in-group cooperation with enforcement”?)
2. What is the nature of moral judgments?
As described above, there are three main categories of beliefs about morality: social morality, individualistic morality, and beliefs about morality’s ultimate goal. Only one, social morality, is so far reducible to a non-moral object, and that reduction covers only this morality’s function, not its goal. Social morality’s judgments are universal because they reduce to a species-independent natural phenomenon.
3. How may moral judgments be supported or defended?
Social morality’s moral judgments (based on whether or not an act is consistent with a set of cooperation strategies) are reducible to a natural phenomenon and therefore their descriptive truth can be confirmed by science. However, descriptive science entails no bindingness. Without a sound argument for such bindingness, social morality’s moral judgments can only be justified by being instrumentally useful. For example, a culture might justify advocating and enforcing norms, optimized as altruistic cooperation strategies, by the expectation that such norms are most likely to achieve whatever the society’s ultimate goals are, perhaps well-being. [/quote]
However, note that moral judgments in the ultimate-goal category and the individualistic category cannot yet be supported or defended by science. We must rely on moral philosophy for that.