Goal: in any universe where people are capable of forming their own normative beliefs about what they have reasons to do, it is impossible for there to exist objective reasons that necessarily apply as normative reasons for all people.
-In the real world, we have sufficient freedom of the will to form our own normative beliefs about what we have reasons to do. Moral realists argue that objective moral reasons override these normative beliefs (if they conflict, the beliefs we have formed about what we have reason to do are false).
The Argument from Authority:
1. A reason applies to me only if it is capable of at least partially explaining why I acted the way I did. (A -> P)
2. A reason is capable of at least partially explaining why I acted the way I did only if it is consistent with my nature to act for that reason. (P -> C)
3. A reason necessarily applies to all people only if it is necessarily consistent with all people’s nature to act for that reason. (*A -> *C)
4. A reason is objective only if it does not depend on people’s nature. (O -> -N)
5. A reason that does not depend on people’s nature is necessarily consistent with all people’s nature only if the normative beliefs that partially constitute people’s nature cannot be changed to be inconsistent with the reason. [(-N & *C) -> (-B)]
6. If the normative beliefs that partially constitute people’s nature can be changed to be inconsistent with any reason, reasons that do not depend on people’s nature cannot be necessarily consistent with all people’s nature. [(B) -> -(-N & *C)]
7. If the normative beliefs that partially constitute people’s nature can be changed to be inconsistent with any reason, reasons cannot be both objective and necessarily apply to all people. [(B) -> -(O & *A)]
(Side note: The argument’s meaning translates better into predicate logic, but I doubt most people who read this blog would understand it and I have no idea how to create universal quantifiers with my keyboard. As it is, I believe the argument is valid in sentential logic and the lost meaning does not significantly change the argument.)
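Since I claim the argument is valid in sentential logic, that claim can be checked mechanically. Here is a minimal brute-force truth-table sketch (the variable names are my own abbreviations for the sentence letters above; only premises 3, 4, and 6 are needed for the step to 7):

```python
from itertools import product

# Brute-force check that premises 3, 4, and 6 sententially entail conclusion 7.
# Abbreviations (mine):
#   nA = *A  (the reason necessarily applies to all people)
#   nC = *C  (the reason is necessarily consistent with all people's nature)
#   O        (the reason is objective)
#   N        (the reason depends on people's nature)
#   B        (normative beliefs can be changed to be inconsistent with the reason)

def implies(p, q):
    return (not p) or q

counterexamples = 0
for nA, nC, O, N, B in product([False, True], repeat=5):
    p3 = implies(nA, nC)                    # *A -> *C
    p4 = implies(O, not N)                  # O -> -N
    p6 = implies(B, not ((not N) and nC))   # B -> -(-N & *C)
    c7 = implies(B, not (O and nA))         # B -> -(O & *A)
    if p3 and p4 and p6 and not c7:
        counterexamples += 1

print(counterexamples)  # 0: no valuation makes the premises true and 7 false
```

Every one of the 32 valuations that satisfies premises 3, 4, and 6 also satisfies 7, so the sentential skeleton of the argument is valid.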
Since I am arguing that objective reasons of this type are impossible in universes with normative freedom, the argument fails if there is even a single possible universe where the initial conditions are met and such objective reasons exist.
I am willing to grant any strange, ridiculous ontology moral realists can conceive. I am not requiring physicalism (the view that only physical things can possibly exist), so talk of Platonic forms/abstract objects/fictional characters existing is fine.
A Moral Realist Position: Derek Parfit argues that moral reasons exist and constitute what we objectively have reasons to do.
-Standard error theorist arguments would claim that these reasons (or values) are metaphysically “queer”, or that it is hard or impossible to understand how they would exist. This reply doesn’t show that the reasons are logically impossible, however, so I want a stronger argument. Also, I am willing to grant any ontology, so I’ll grant that these reasons could exist.
Hypothetical Universe-M: Parfitian moral reasons exist. They are objectively part of Universe-M, and they constitute what people have objective reasons to do.
-As a moral realist, Parfit wants these moral reasons to be both objective and apply to all people. I argue that it is impossible for reasons to be both objective and apply to all people in the same universe where people can form their own normative beliefs.
-What Parfit needs to prove me wrong: a single universe, Universe-M, where the moral reasons that exist are both objective and apply to all people without causing people to be incapable of forming their own normative beliefs about what reasons they have.
1. Necessary condition for moral reasons to be objective: the existence of moral reasons does not depend on people (they would exist in Universe-M whether or not people existed).
-Initially, no necessary connection between moral reasons and our perceptions/beliefs/actions. Parfit has no problem with moral reasons existing and us having the wrong perceptions/beliefs/actions.
-Possible analytic, necessary connection between moral reasons and people: moral reasons constitute true reason statements for all people.
2. Explanation for how moral reasons apply to all people: moral reasons are necessarily action-guiding.
-But, what does it mean for a moral reason to be action-guiding?
-Cannot be that moral reasons necessarily guide actual actions. Parfit wants people to be capable of acting wrongly in Universe-M.
-Cannot be that moral reasons semantically contain commands that may or may not guide actual actions. If I write “Don’t scratch your arm” on the wall, the writing on the wall semantically contains a command that may or may not guide actual actions. Parfit does not want moral reasons to be action-guiding only in the sense that random commands written on the wall are action-guiding.
3. Parfit needs moral reasons to have greater action-guiding force than random commands written on the wall but not so much force that they necessarily guide actual actions.
3a. Possible condition: moral reasons are action-guiding because they are capable of having enough force to guide actual actions in a way that random commands written on the wall are not.
-This capability cannot be cashed out in terms of perception (moral reasons guide actual actions whenever they are perceived), because Parfit allows for people in Universe-M to act immorally despite perceiving moral reasons.
3b. Possible condition (revised): moral reasons are action-guiding because, when perceived, they form normative beliefs that may guide actual actions.
Key question: When we perceive moral reasons, do we necessarily form a corresponding normative belief about what we have reason to do?
Reason to say yes:
-Analogy to perception of objects: If I perceive a table, some think I necessarily have a corresponding belief that the table is there. Perhaps if I perceive a moral reason to X, I necessarily have a corresponding belief that I have a reason to X.
Problem with saying yes:
-If perception of moral reasons necessarily causes us to form certain beliefs about what we have reasons to do, then we are incapable of forming our own normative beliefs about what we have reasons to do. Moral reasons would, just by being perceived, have sufficient force to override any freedom the will has to form its own normative beliefs. This violates the initial condition of Universe-M that Parfit needs for the argument.
Reason to say no:
-We seem to be able to distinguish our perceptions from our beliefs. I can perceive a table but, due to extreme Cartesian-inspired skepticism, form no beliefs about what is really there.
Problem with saying no:
-There are going to be people in Universe-M who form normative beliefs corresponding to moral reasons and people who form normative beliefs that do not correspond to moral reasons despite both groups perceiving the same moral reasons.
The Comparative Problem: In order to be a moral realist, Parfit needs some way to show that those who form normative beliefs corresponding to moral reasons are more rational/correct than those whose normative beliefs conflict with what moral reasons they have (especially if both perceive the same moral reasons).
Analogy to Epistemology: if we both perceive the same table, but I form a belief that there is a table and you form a belief that there is an elephant, I seem more rational because my epistemic beliefs conform to the evidence.
-I seem more rational because, in epistemology, there is an implicit principle that is plausibly accepted: we should form our epistemic beliefs in accord with our perception of the evidence.
Analogy back to moral reasons: if we both perceive the same moral reason not to kill, but I form a belief that I have a reason not to kill and you form a belief that you have no reason not to kill, Parfit needs to say that I am more rational/correct because my normative beliefs conform to moral reasons.
-But, consider the analogous implicit principle for moral reasons: we should form our normative beliefs in accord with our perception of moral reasons.
Comparative Problem cannot be solved: In order to describe those who form normative beliefs in accord with moral reasons as more rational/correct than those who don’t, Parfit needs an implicit normative principle that tells people how they should form their normative beliefs. However, there will be people who form normative beliefs according to the principle and people who do not. In order to describe those who form normative beliefs according to the principle as more rational/correct than those who don’t, Parfit needs another implicit normative principle that tells people which principles they should use to form their normative beliefs. The same problem applies to this new implicit normative principle, resulting in an infinite regress. The infinite regress is detrimental to Parfit’s view because moral reasons are only action-guiding for those who already accept that their normative beliefs should conform to them. For those who do not conform their normative beliefs, moral reasons have no more action-guiding force than random commands written on the wall.
Implications of Universe-M case: Even if I grant any crazy ontology he wants, Parfit is incapable of explaining how moral reasons can be objective and apply to all people without making people incapable of forming their own normative beliefs. Although this does not prove the Argument from Authority’s conclusion, I believe the inconceivability of the reasons Parfit needs gives good evidence that those reasons are impossible.
Possible Objection from Parfit: The only type of connection needed between moral reasons and people is the analytic connection posited earlier: moral reasons constitute true reason statements for all people.
Short version of my reply: the only reason statements that can follow from the meaning of moral reasons are distinct from the reason statements that people use to judge what they should do. (see following argument)
The Argument from Authority applied to our universe:
1. Like Parfit, I believe the statement “I know I have a reason to X, but why should I X?” is untenable. Asking “why should I X?” just is asking “do I have a reason to X?”. The original question translates to “I know I have a reason to X, but do I have a reason to X?”.
2. The sentence “I know I have a reason to X, but do I have a reason to X?” is untenable only if there is no equivocation between the two uses of “reason”. It is perfectly defensible to say “I know I have a financial reason to X, but do I have a hedonistic reason to X?”.
3. Practical reasoning is the only faculty/system/algorithm that cannot be defensibly questioned. Questioning practical reasoning itself asks for a reason to use reasons, so the question must employ practical reasoning to question practical reasoning, which is self-undermining.
4. All other faculties/systems/algorithms that output candidates for reasons can be defensibly questioned. It is perfectly plausible to say “I know X increases aggregate utility, but why do I have a reason to X?” or “I know there is a moral reason to X, but why do I have a reason to X?”.
5. Since it is defensible to say “I know there is a moral reason to X, but why do I have a reason to X?”, “reason” and “moral reason” cannot have the same meaning.
*The “Authority” part of the argument*:
-Note that the claim in premise 3 means that only our own practical reasoning is capable of settling the “why” question. Basically, practical reasoning is the only faculty with sufficient authority to settle a normative question beyond defensible questioning. When someone in the real world (or Universe-M) asks “I know there is a moral reason to X, but why do I have a reason to X?”, she is asking for a practical justification for following moral reasons that can settle the “why” question.
-Parfit might object that moral reasons and practical reasons are different, but that moral reasons are fundamentally more important. This claim falls into the same Comparative Problem trap: it relies on another implicit normative principle (when deciding what to do, people should follow moral reasons over practical reasons).