Intransitive preferences contradict basic transitive logic. In general form, intransitive preferences can be understood as holding x > y, y > z, and z > x all at once.
Intuitively, it appears irrational to maintain all three positions at once: if x > y and y > z, it should follow that x > z, not z > x.
The Money Pump Argument is a case meant to prove the irrationality of intransitive preferences. Using the example my professor used:
Preference 1: Chocolate Ice Cream > Vanilla Ice Cream
Preference 2: Strawberry Ice Cream > Chocolate Ice Cream
Preference 3: Vanilla Ice Cream > Strawberry Ice Cream
The idea behind the Money Pump Argument is that any person X who holds all three of these preferences could be turned into a money pump. Imagine that X currently has Vanilla Ice Cream. Based on Preference 1, X would prefer Chocolate Ice Cream to Vanilla. If this is a real preference, X should be willing to pay the smallest unit of money (1 cent) to upgrade from Vanilla to Chocolate (if X isn’t willing to pay the smallest unit to upgrade, it’s not really a preference). Now X has Chocolate Ice Cream. Based on Preference 2, X would be willing to pay 1 cent to upgrade from Chocolate to Strawberry. Now X has Strawberry Ice Cream. Based on Preference 3, X would be willing to pay 1 cent to upgrade from Strawberry to Vanilla.
After paying 3 cents, X is back to the original spot: Vanilla. This process could, if X had no memory or reflection, go on indefinitely, thus making X a "money pump" that pays for no actual upgrade.
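The pump described above can be sketched in a few lines of code. This is a toy model: the flavors come from the example, and the 1-cent trade cost is the post's assumption.

```python
# A toy model of the money pump. The flavors come from the example;
# the 1-cent cost per trade is the post's assumption.
def money_pump(preferences, start, rounds):
    """Follow cyclic preferences, paying 1 cent per upgrade."""
    holding, paid = start, 0
    for _ in range(rounds):
        for better, worse in preferences:
            if worse == holding:  # X prefers `better`, so X pays to upgrade
                holding, paid = better, paid + 1
                break
    return holding, paid

prefs = [("Chocolate", "Vanilla"),
         ("Strawberry", "Chocolate"),
         ("Vanilla", "Strawberry")]

print(money_pump(prefs, "Vanilla", 3))  # ('Vanilla', 3)
```

After three trades X holds the flavor she started with, 3 cents poorer, and the cycle can repeat forever.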
The purpose of this post is to challenge the idea that intransitive preferences are necessarily irrational. Most intransitive preferences are irrational, but I want to argue that irrationality is not a necessary part of intransitive preferences (it is possible to hold intransitive preferences and still be rational).
The concept behind my objection is mathematical. Intransitive preferences are impossible to justify if the ">" relations are thought of in an entirely linear manner: one entity is greater than the other, and "greater" is defined as having a higher amount of goodness/utility/happiness/etc. Understood this way, intransitive statements are necessarily irrational: if X has a greater amount of goodness than Y, and Y has a greater amount of goodness than Z, then it is mathematically impossible for Z to have a greater amount of goodness than X.
My alternative way of understanding intransitive statements is a circular preference system, without necessary differences in goodness between the entities to justify the preference. Meaning, X > Y is not true because X has a greater amount of goodness/utility than Y. Rather, X > Y is true only if, for some reason, changing from Y to X increases goodness/utility. Although it is difficult to imagine, it is at least possible for "X > Y", "Y > Z", and "Z > X" to all be true simultaneously if all the transitions manage to increase goodness/utility. The key point is that the goodness that causes the preference (people prefer to increase goodness) does not exist in the entities "X", "Y", and "Z"; rather, the goodness that causes the preference exists in the change from one entity to the other. The system of preferences is coherent if they exist in a circular manner, and a circular manner is possible if a linear comparison of goodness in the entities is not used.
Here’s the example I thought of as an objection to the claim that intransitive preferences are necessarily irrational, which illustrates the math I was talking about:
Imagine a person, Poca, who collects Chinese Zodiac Animals. Poca can only afford to keep one animal at a time. Poca has a strange desire: to simulate travelling into the future by trading a zodiac animal for the animal that represents the following year. If Poca currently has a Rat, Poca would be willing to trade the Rat for an Ox, the next animal in the calendar. Here's Poca's intransitive preference system:
1. Ox > Rat
2. Tiger > Ox
3. Rabbit > Tiger
4. Dragon > Rabbit
5. Snake > Dragon
6. Horse > Snake
7. Goat > Horse
8. Monkey > Goat
9. Rooster > Monkey
10. Dog > Rooster
11. Pig > Dog
12. Rat > Pig
Poca's preference system, which values the transition rather than the entity itself, can coherently maintain all 12 intransitive preferences. The 12 preferences share the same structure as the Ice Cream example earlier: if someone could trade with Poca, Poca could be turned into a money pump. However, this does not make Poca irrational. For each transaction, Poca might lose 1 cent, but Poca gains the utility of feeling like she traveled into the future. If Poca did not prefer the feeling to 1 cent, Poca would not make the trade (a trade is evidence that Poca prefers the feeling to having the 1 cent). Even if Poca is continually traded with and becomes a money pump, Poca will continually gain more utility than she loses, due to her strange desire.
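Poca's cycle can be modeled the same way. The 1-cent cost comes from the example; the value of the "time travel" feeling (2 cents here) is an assumed number that only needs to exceed the cost. The sketch shows why the pump never puts Poca at a net loss:

```python
# Sketch of Poca's circular preference system. The 1-cent trade cost is
# the example's; the 2-cent value of the "time travel" feeling is an
# assumption (anything greater than the cost works).
ZODIAC = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake", "Horse",
          "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def trade_cycle(start, trades, cost=1, feeling_value=2):
    """Trade forward through the zodiac, tracking Poca's net utility."""
    i = ZODIAC.index(start)
    net = 0
    for _ in range(trades):
        i = (i + 1) % len(ZODIAC)    # the next animal in the calendar
        net += feeling_value - cost  # each trade is a net gain for Poca
    return ZODIAC[i], net

print(trade_cycle("Rat", 12))  # ('Rat', 12): full circle, yet net positive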
The Poca Zodiac case is incredibly weird, and I can't imagine anyone actually having these preferences (not to mention the idea of trading a dragon for a snake). But the point is that it is not necessarily irrational to maintain intransitive preferences. If Poca's preference system is rationally coherent (and it is), then it is possible to rationally maintain intransitive preferences.
This argument is largely drawn from David Hume's "Of Miracles", though taken in a different direction.
There are two types of possible miracles: ones you directly experience, and ones that other people directly experience. If you directly experience a miracle, then you have good reason to believe in it. If not, then you must rely on the testimony of others to believe it.
Here’s the problem: how can you know that testimony of miracles is reliable, and not false? One way is to reproduce the miracle, so you can directly experience it. If not, then you must rely on the validity of the testimony. So, should you expect the testimony of miracles to be true?
In logical terms: The truth of the proposition “X told me a miracle happened” is sufficient for the truth of “I should believe a miracle happened”.
Obviously, this approach will not work. It is clearly true that people have lied about miracles and their experiences. If you took "testimony" as sufficient for "I should believe it", then you would be stuck believing every story from every belief system imaginable, and some of those stories logically contradict one another.
So, it is clear that “X told me a miracle happened” is not, by itself, sufficient for “I should believe a miracle happened”. So, what other proposition’s truth could be sufficient, or perhaps jointly sufficient with the first proposition?
The first option would be “it can be proved through empirical evidence”. However, this condition would not be applicable to any case of miracles. You can’t use physical, non-magical evidence to infer that a magical event took place. The second option might be “more than one person told me”, but this would fail for the same reason as the original proposition: millions of people have claimed to be directly contacted by different deities. They cannot all be right, unless there are multiple gods who are completely unaware of each other.
Instead of looking at what potential propositions could help (as they are all dead ends), I want to look at what people actually use in practice to determine whether a testimony of a miracle is believable. This practical approach inevitably results in one answer: the testimony is consistent with their already-held religious beliefs. Meaning, a Christian will believe someone who said Jesus performed a miracle for them, but not someone who claimed it was Vishnu who helped them out.
Back to logical form, this means that in practice, the truth of the two propositions "X told me a miracle happened" and "the miracle is consistent with my religious beliefs" are jointly sufficient for the truth of "I should believe a miracle happened".
The problem: if this is the only way people can, in practice, believe miracles, then miracles cannot give any support whatsoever for religious beliefs. In order for the miracles to be supported, you already need the assumed truth of your religious beliefs. If your religious beliefs aren’t assumed to be true, then there is no reason to believe some miracle testimonies over others. However, if you assume your religious beliefs, nothing that is derivable from that assumption can be used to provide support for the truth of your religious beliefs. To claim it would be, would be begging the question-already assuming the truth of the conclusion that you are trying to prove.
In short, miracles cannot be believed to be true unless you already assume the truth of a belief that would make that miracle possible. Therefore, there are no miracles that are believable without the assumption of the respective religious belief. Because of this, miracles necessarily cannot provide support for religious beliefs, as to do so would be begging the question.
The Free Rider Problem is generally brought up in political philosophy as a problem for creating public goods (which often include governments). A public good is simply something that everyone can benefit from. Clean air, for example, is a public good: everyone benefits from having clean air to breathe. The particular feature of public goods that causes the problem is that everyone benefits from them regardless of their contribution. Meaning, once a public good exists, you cannot exclude non-contributors from its benefits without getting rid of the public good itself.
Here are some numbers to illustrate this:
Imagine a town of people considering whether or not to contribute to the public good of a public park. Each individual person is capable of contributing to the park, but it costs them 10 units of utility in order to contribute. For every contribution, the benefit of the public good increases by 5 units of utility. Each unit of utility the public park has is available to everyone. So, if the public park has 20 units of utility, everyone gains 20 units of utility.
It would be best, in an aggregate sense, if everyone contributed. If there are 100 people in the town and each contribute their 10 units of utility, then the public good would reach 500 units of utility. Each person sacrifices 10 units of utility, and gains 500 units of utility. Each individual gains +490 units of utility.
The free rider problem: for each individual, it is best for their own self-interest not to contribute, but to have everyone else contribute. If they don't sacrifice their 10 units to contribute, they only lose the 5 units their contribution would have added to the public good. So say 99 people sacrifice 10 units of utility and 1 person doesn't. Each person gets 495 units of utility from the public good: the 99 who contribute get a net of +485, and the 1 who did not contribute gets a net of +495.
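The park arithmetic above can be written out as a quick sketch (the 10-unit cost and 5-unit benefit are the example's numbers):

```python
# The park payoffs from the example: contributing costs 10 units, and each
# contribution adds 5 units of benefit that every townsperson receives.
def payoffs(n_contributors, cost=10, benefit_per=5):
    """Return (contributor net, free-rider net) utilities."""
    public_good = n_contributors * benefit_per  # everyone enjoys the full pool
    return public_good - cost, public_good

print(payoffs(100)[0])  # all 100 contribute: each nets +490
print(payoffs(99))      # one free rider: contributors net +485, the free rider +495
```

Whatever everyone else does, each individual's own payoff is 10 units higher if they withhold their contribution, which is exactly the free rider's incentive.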
The individual who gains the benefit of the public good, but does not contribute, is a free rider. This is obviously a problem, because each individual could reason in this way, to the point that no one will contribute.
In simplistic terms, the free rider problem is: I want others to contribute to something that everyone benefits from, but I don’t want to contribute myself.
(Side note: there are a lot of attempts in political philosophy to solve the free rider problem, like the Principle of Fairness. However, I have yet to be convinced by any of them, and would argue that none of them can get around this problem)
So how does this relate to human equality? To be analogous, I am thinking of “everyone is treated with equal respect” and “no discrimination” as public goods-everyone benefits when there is no discrimination and everyone is truly equal. Our contribution to this public good is to treat everyone equally, and not discriminate ourselves.
A free rider, with these analogies, would be someone who does not treat everyone equally but expects everyone else to. The free rider expects everyone to contribute to the public good of true equality, but does not contribute themselves.
The problem, as in the public park case, is that when people don’t make their own individual contributions, the group as a whole doesn’t get the benefit. The extent of this problem is correlated with the extent to which people don’t make their own individual contributions.
On the surface, it would seem that most people do contribute to the public good of equality (at least in the U.S.). Not many people discriminate against other people with their actions (at least I would hope).
But the problem I want to focus on is the way we view other people, especially those different than us. On the most basic level, we’ve come to realize and try to fix this problem in ourselves. Most intelligent people aren’t racist, and don’t think less of people of different races. This was once a huge problem (and it still is a big problem), but I am optimistic that it isn’t as bad as it used to be.
However, the intense focus on not being racist hasn't been carried over into other, similarly arbitrary forms of discrimination. A lot of conservatives, especially recently, look down on the poor. If you are poor, on welfare, etc., you are seen as less of a person than a rich, successful businessperson. This is true of a lot of people: whether actively or on a more subtle level, a lack of financial success is seen as something that makes a person, in one way or another, less than a person who is financially successful.
Liberals have this problem too, though often in different areas. What I see a lot from liberals (which I have to admit includes myself), is looking down on people who aren’t as intelligent. A person who is less intelligent than another person is seen, in one way or another, as less than a person who is more intelligent.
These are just two examples, but the list could go on: social standing, physical ability, physical fitness, interpersonal skills, etc. Each of these traits is an arbitrary reason to value another person less than other people. They are all comparable to race, in that none of them are justifiable grounds for thinking of another human being as somehow inferior.
In conclusion, the public good of human equality is threatened by individual people being free riders. Individual people can be free riders, in this case, if they expect others to treat people equally but find ways on their own to consider some people as inferior to others.
I would hope that the free rider problem doesn't apply to many people. However, I am extremely doubtful that it doesn't. From what I've noticed, it seems each person has traits that they look for in other people, and values them differently based on those traits. A racist who values white people more than other races, a Mormon presidential candidate who values rich people more than poor people, a liberal blogger who values liberals more than conservatives: all are cases of being a free rider when it comes to equality. If people don't genuinely think of others on equal terms, then the free rider problem for human equality will continue to make it so (as I believe is actually the case) that the system favors those whom more people like, rather than achieving genuine equality.
1. The Problem of Evil (discussed at length here: http://fensel.net/2011/10/17/why-the-common-conception-of-god-is-impossible/)
The point of the above post was that evil is incompatible with God’s existence in the standard sense, and further that no potential excuse could satisfy the incompatibility. Here’s the argument again, in logical form:
Definition of Omnipotence: the ability to bring about the truth of any conceivable, non-contradictory proposition. I.e., the ability to make any logically possible proposition "P" true.
Definition of Rational: the disposition, all else being equal, to choose the option that you value more.
The standard argument for the problem of evil is such:
1. “P” is true
2. God could make “not P” true (Definition of Omnipotence)
3. God is rational
4. If God values “not P” more than “P”, then God will bring about “not P” (Definition of Rational, combined with Premise 2)
5. God does not value “not P” more than “P”.
-The above argument is logically valid. Meaning, to reject the truth of Conclusion 5 for a specific proposition "P", you have to reject one of the first 4 premises. Premises 2 and 3 cannot be rejected without claiming that God is either not omnipotent or not rational (which would be contrary to the common conception). Premise 1 cannot be rejected in any case where the proposition is actually true. Meaning, if Premise 4 follows from 2-3, then the conclusion "God does not value not P more than P" is true of every proposition "P" that is actually true. This is effectively what I claimed in my previous post: if God exists as we conceive of him, everything that exists/is true is exactly what God would most want.
To reject the argument, a Christian would have to reject Premise 4. This means that Christians would have to claim that “not everything else is equal” so the demand of rationality does not apply. A common example of this type of argument is the free will argument: “God may not value pain/death/suffering, but he allows it to exist so we can have free will”. Here’s the basic idea:
1. If P, then Q
2. God values Q
3. God will make P true, to make Q true through Modus Ponens
Problem: the conditional claim “If P, then Q” is unnecessary for any being that is omnipotent. Here’s the correct argument:
1. God values Q
2. God can bring about the truth of Q (definition of omnipotent)
3. God is rational
4. God will bring about the truth of Q
-The key note of my previous post was to show that using any conditional claim "If P, then Q" to justify the existence of a negative value "P" with a positive value "Q" denies the definition of omnipotence. The only way the conditional "If P, then Q" is relevant is if the converse "If Q, then P" is necessarily true, meaning that it is not logically possible for Q to be true and P to be false. However, there is no such conditional "If Q, then P" that is necessarily true other than the tautology (If Q, then Q). Meaning, there are no possible conditions "P" that are not identical to Q that are necessary for Q to exist, and therefore there are no possible conditions "P" that God allows but does not value.
2. Attempts to Prove the Existence of God
In its simplest form, any argument to prove the existence of God takes Modus Ponens form:
1. P
2. If P, then Q
3. Therefore, Q
In the above argument, “Q” is “God exists”. “P” is whatever that specific person thinks proves the existence of God (whether it be nature, the stars, or the shape of bananas). The problem with these types of arguments is in the conditional claim “If P, then Q”. To be convincing, this conditional needs to argue that the proposition “P is true and Q is false” is necessarily false. Meaning, there is no logically possible universe where P is true but Q is false.
What this means for any possible "God exists" argument: none of them work. There isn't a single conceivable proposition, other than "God exists", that is necessarily incompatible with the proposition "God does not exist". It is logically possible both for the proposition "God does not exist" to be true and for nature/stars/bananas to exist. Meaning, none of the proposed evidence for God actually proves God's existence. (There is a lot that needs to be said on probability. Meaning, if "P" is true, what is the likelihood that "Q" is also true? If you want to support the existence of God, treat your arguments as providing probabilistic support for God, not proof. Further, this argument applies similarly to a lot of arguments against God as an entity. There are sound arguments against a God that is both omnipotent and all-good, but none against a morally neutral God. Keep in mind, however, that the burden of proof lies on those who make the assertive claim that God exists.)
I’ve talked a bit before about the nature of reasoning, and I think it’s best described as the ability to link truths to other truths. A really simple example would be reasoning from the truth that “it is raining” to “it is not not raining”.
I’ve briefly talked about how this provides an initial problem-how do we come to our first truths? Here we have to introduce the concept of “a priori knowledge”, that we can know without experience and without needing other truths. There might be only one piece of a priori knowledge (I exist), or more, but it at least provides a starting point to derive other truths from.
Value, however, is a different matter entirely. I’ve seen a lot of debates about the idea of “objective value”. This, almost by definition, means nothing. If value isn’t from a subjective viewpoint, what is doing the valuing? Is anything valuable to an atom?
The very concept of value entails that there is some entity that judges something to be a positive thing, whether intrinsically or instrumentally. Think of it-if no life in the universe existed, would anything be truly valuable?
Here’s the problem: how can we reason from objective truths to subjective value? The simple answer: we can’t. The long answer: seriously, we can’t.
The disconnect between objective truth and subjective value is similar to the "you cannot infer an ought from an is" discussion. That discussion is about morality, and it basically claims that you cannot infer a normative statement like "you should do x" from any objective statement such as "y is the case". I feel this boils down to one idea: we are each the ultimate decider of our own actions and viewpoints. It isn't possible for there to exist an objective value outside my own subjective value system that could override my own judgments.
The answer to both, I argue, lies in the possibility of subjective value systems that people do not actually choose for themselves. Basically, people already value certain things, and then we can infer what people are obligated to do from the things they already value. We can infer an ought from an is, because the is involves subjective value rather than objective facts.
To clarify, the reason we need certain values that we did not choose is that reason cannot provide us with a justification for value in and of itself. So, consider the idea that I should value humanitarian work. I cannot derive “I should value humanitarian work” from only “this is what humanitarian work is”. I need something along the lines of “I value x”, “humanitarian work promotes x”, therefore, “I should value humanitarian work”. You then can take the question back to “x”. Why should I value x? Either I have no reason to (and I just do value x), or there must be some value y where “I value y”, “x promotes y”, and therefore “I should value x”. If the latter option is correct, then do the process again, and you’ll eventually either reach a value that has no reasons for it, or start an infinite regress. Since an infinite regress is impossible, there must be some value we hold for no real reason if we are to have any values at all.
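The regress just described can be sketched as a chain lookup. The particular values in the chain are made-up placeholders for illustration, not claims about what anyone actually values:

```python
# A sketch of the justification chain above: each value either points to a
# further value it promotes, or is a base value held for no further reason.
# The chain's contents are made-up placeholders.
promotes = {
    "humanitarian work": "well-being of others",
    "well-being of others": "happiness",
    # "happiness" has no entry: it is simply valued, for no further reason
}

def justification_chain(value):
    """Follow 'I value x because it promotes y' links until a base value,
    flagging a circular chain (the infinite regress) if one appears."""
    chain, seen = [value], {value}
    while value in promotes:
        value = promotes[value]
        if value in seen:
            chain.append("...infinite regress")
            break
        chain.append(value)
        seen.add(value)
    return chain

print(justification_chain("humanitarian work"))
```

Every chain must either bottom out at a value with no entry (a value held for no reason) or loop back on itself, which is the regress the paragraph rules out.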
Christine Korsgaard goes into this idea, and argues that the fact that we act at all proves that we are obligated to value ourselves as an end. After all, if we didn’t value ourselves, why would we act to promote that value?
What I am ultimately hoping to prove is that we already, in fact, have the value that we should value others. As I mentioned in my last post, it is impossible to derive “value others/morality” from “I only value myself”. If morality is to exist at all, then there must be a “I value others” trait that we have and do not need nor have reason to choose.
Ethical egoism is the doctrine that in all cases, people should only do what benefits themselves. The effect on others, regardless of degree, is meaningless to you. In terms of this post, being an egoist means already accepting the premise that you are the only end worth pursuing. In logical terms I’ll call this a person having already accepted the premise “P”.
Morality, as I have argued for before, necessitates that people value others solely for their own sake. Meaning that to me, you have at least some level of value as an end in yourself (benefiting you is valuable for your sake, it is not dependent on how it affects me). Since this means that you cannot only value yourself as an end, it is the negative condition of P, or simply “not P”.
Now here's the logical problem in trying to establish moral obligations for an ethical egoist (this is the standard case you have to look at if you're trying to justify morality in general). Since egoists already accept the premise "P", you have to find a way to argue from "P" to "not P". This is logically impossible. It's exactly like trying to derive "it's not raining" from the premise that "it is raining". It's just not going to work.
So we’re left with three possible options: morality is impossible to justify, morality does not in fact require “not P”, or egoists do not in fact already have the premise “P”.
(Long side note: there is the possibility of the "should" route. Meaning, an egoist may already have the premise P, but we can say egoists have moral obligations because they should have "not P". I find this route has one of two undesirable outcomes: either morality means nothing, or it asks us to do something impossible. Egoists who already accept P have the rational value system of P, meaning that options and choices are considered using a value system that only values yourself. We can either say you have to change your value system, or that you don't have to change and should just acknowledge the superiority of another value system [i.e., morality]. However, both options appeal to a possibly "objective" third value system that can compare P and not P, which doesn't exist [and even if it did, it would have the same problem: how do you prefer the third system to P? Do you need a fourth to compare the third and P? And so on, into an infinite regress]. Since this is impossible, morality then becomes an optional set of rules that people can or cannot follow on a whim. The concept of "should" is entirely lost.)
I'll go over each of the possible options to assess its plausibility:
1. Morality is impossible to justify.
-This is a potential solution, though obviously we don't like it. Human societies are obviously better if morality is justified, so we want there to be a justification for morality. Further, none of the commonly brought up reasons for the lack of morality are convincing. It's obvious that if ethics do exist, humans have to be the ones to make them (ethics are not ingrained in physical laws, as there is no concept of "should" or obligation for physical objects). This makes a lot of people uncomfortable for similar reasons that people still hold onto the concept of an immaterial soul: factual, physical explanations never seem as convincing or "special" as immaterial, vague ones. This results in a lot of people thinking there has to be some objective, outside-of-humanity set of ethical rules in order for ethics to be justified. But how would such rules be justified in any way? Higher powers (why does power allow someone to make ethical rules)? Not being human (obviously not enough)? A higher ability to reason (then wouldn't the reasoning be there to justify morality)?
I’ve gotten into a couple long-winded side notes, but I felt they were necessary. Basically where we’re at: there isn’t a compelling argument for egoism yet, but there is yet to be a convincing one for morality as well.
2. Morality does not require “not P”.
-I find this option easy to entertain, but it reduces morality to nothing. If I am not required to value others at all, then I will always act in an egoistic way. I will only act along the rules of morality if they happen to coincide with my rational self-interest. Meaning, I will only refrain from killing you if it's in my best interest not to kill you. However, this means morality says nothing more than "do what you want". If this were all morality could be, then morality truly means nothing, and we fall back into option #1. In that case, humanity has to structure strict laws that punish harm, so that the "moral" action is the egoistic action in as many cases as possible.
3. Egoists do not, in fact, already have the premise “P”.
-This is the route I am trying to take, which was strongly proposed by Christine Korsgaard in "The Sources of Normativity". I won't get into how this could work entirely, because I haven't made a completely convincing case yet. In effect, the argument is ultimately aimed at proving that people, due to their nature, already value others to some degree. Human nature might heavily involve higher cognitive functions, which might be relevant since it seems that the more cognitive ability animals have, the more "morality" they seem to exhibit.
The next step in this route (if I can successfully show that people already have the value “not P”) is proving that people should keep the value “not P”. So an egoist could respond: I may already have the value “not P”, but why can’t I just reason that I should choose “P”? The only possible response is that the egoist cannot in fact choose “P”, when they already have the value “not P”.
Some lines of thought that could support this claim: as I will argue in my next post, we cannot reason our way to values. We only have values, and can use reason to determine what those values entail (this argument has to do with the idea that value is only subjective, and reasoning cannot determine subjective value; I'll argue more for it in my next post). If this route is true, then an egoist cannot actually reason from "not P" to "P", for the same reasons I cannot reason from "P" to "not P".
Finally, the egoist might respond: is morality really justified, then, if I have no choice whether or not to value "not P"? My response, at least so far, would be that there comes a point where "I" has to be defined. While I would have to do more work to prove it, I think any concept of "I" is going to necessarily include certain values, which would include "not P". Therefore, there is no way for "I" to be distinguished from "not P"; "not P" is simply one of the identities that makes up who I am as a person (Korsgaard's argument is along these lines). It is clear, however, that people have to have some free control over their actions in order for morality to be justified. So, that free will could lie in whether or not the individual chooses to listen to "not P", and since the individual does in fact hold "not P", we can justly hold him/her accountable for acting outside the confines of "not P", or in effect, outside morality.
Assumptions, simply defined, are things that a person “knows” that were not derived with reason. Reasoning, in my understanding, is the ability to logically deduce one fact from another.
The above two definitions, which I don’t think are too controversial, help define the difference between a priori and a posteriori knowledge. A priori knowledge (knowledge without experience) has to be an assumption of sorts. A posteriori knowledge, when correct, is knowledge learned from experience and supported by reason.
The initial problem, that a priori knowledge is meant to solve, is how we have any knowledge at all. If we can only reason from knowledge to other knowledge, then we either have no knowledge or there must be some knowledge we have through a process other than reason.
So the question is: what a priori knowledge do we have that is justifiable? If we start from complete skepticism, are we left with only a Cartesian sense of “I exist”?
My answer to this question involves an argument for materialism (which I’ll post later today). I wanted this post to introduce the topic beforehand.
Most arguments, especially in politics, have become shouting or regurgitation contests: whoever can yell the loudest or recall the most talking points is the supposed winner. I deal with this on a regular basis in the religion section of YA and the political section of CNN. The purpose of this post is to describe how arguments are supposed to be done: through the search for a sound argument (see: the Lincoln-Douglas Debates). If you want to get right to an example of how this works, scroll down to the bold part. The first part explains the theoretical components of logic, which might be boring to some, but it explains what I am doing in the example.
I am currently in a debate about the morality of food stamp programs, so I’m going to focus on that. This argument can be split into two categories: application arguments and ethical arguments (http://fensel.net/2011/08/24/174/). Application arguments ask whether the policy can reasonably be expected to succeed in the real world; ethical arguments ask whether the policy’s goal is morally justifiable.
There are two basic stances in any debate: advocating your own argument, and refuting your opponent’s argument. The focus in each is to find a sound argument: one where each of the premises (assertions) is true, and the truth of the premises logically necessitates the truth of the conclusion. In the words of my former professor W. Siewert, you have to accept the truth of a sound argument’s conclusion on “pain of being irrational”. So, if you are trying to prove your own conclusion, you need to justify each premise you make and show how those premises logically lead to the conclusion. If your opponent is a skilled debater, he or she will either show that your premises don’t lead to your conclusion, or single out a premise to dispute. The same holds when refuting your opponent’s argument: look for false premises, or look for faulty inferences (points where the premises do not lead to the conclusion).
When boiled down, I believe almost every valid inference in an argument is going to be either a Modus Ponens (MP) inference or a Modus Tollens (MT) inference. MP: if in every scenario where you have x you also have y, and you have x, then you can conclude y (real-world example: if you are using the internet, you are using electricity; you are using the internet; therefore you are using electricity). MT: if in every scenario where you have x you also have y, and you don’t have y, then you can conclude that you don’t have x (if you are using the internet, you are using electricity; you are not using electricity; therefore you are not using the internet).
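(As an illustrative aside, not part of the original post: the two rules can be sketched in a few lines of Python, modeling propositions as booleans. The function names `implies`, `modus_ponens`, and `modus_tollens` are my own labels for this sketch.)

```python
# A minimal sketch of MP and MT, modeling propositions as booleans.
# "If x then y" is false only when x is true and y is false.

def implies(x, y):
    """The conditional 'if x then y'."""
    return (not x) or y

def modus_ponens(if_x_then_y, x):
    """From 'if x then y' and 'x', conclude 'y'; otherwise no conclusion."""
    if if_x_then_y and x:
        return True   # y must be true
    return None       # the rule licenses no conclusion here

def modus_tollens(if_x_then_y, not_y):
    """From 'if x then y' and 'not y', conclude 'not x'; otherwise no conclusion."""
    if if_x_then_y and not_y:
        return True   # 'not x' must be true
    return None

# Validity means: in every case where the premises are true,
# the conclusion is true too. Check all four truth-table rows.
for x in (True, False):
    for y in (True, False):
        if implies(x, y) and x:
            assert y          # MP never fails
        if implies(x, y) and not y:
            assert not x      # MT never fails
```

The exhaustive loop at the end is the point: a valid inference rule has no counterexample anywhere in the truth table, which is exactly the “pain of being irrational” standard described above.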
By and large, if the inference made in an argument is not MP or MT, it is most often an invalid inference. The easiest way to judge these inferences is to look for the necessary and sufficient claims. A necessary condition: in order to have y, you need x (there is no y without x). A sufficient condition: if you have x, then you can conclude y. In MP, the claim is that internet is sufficient for the existence of electricity; equivalently, electricity is necessary for internet.
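(Again as a sketch of my own, not from the post: the “scenario” reading of necessary and sufficient conditions can be made concrete by quantifying over a toy set of cases. The scenarios below are invented purely for illustration.)

```python
# Necessary vs. sufficient conditions over a toy set of scenarios.
# Each scenario records whether a person is using the internet / electricity.

scenarios = [
    {"internet": True,  "electricity": True},   # browsing on a laptop
    {"internet": False, "electricity": True},   # watching broadcast TV
    {"internet": False, "electricity": False},  # reading by candlelight
]

def sufficient(scenarios, x, y):
    """x is sufficient for y: every scenario with x also has y."""
    return all(s[y] for s in scenarios if s[x])

def necessary(scenarios, x, y):
    """x is necessary for y: no scenario has y without x."""
    return all(s[x] for s in scenarios if s[y])

print(sufficient(scenarios, "internet", "electricity"))  # True
print(necessary(scenarios, "electricity", "internet"))   # True
# But electricity is not sufficient for internet (see the TV scenario):
print(sufficient(scenarios, "electricity", "internet"))  # False
```

Note that the first two checks are the same claim seen from two directions: “internet is sufficient for electricity” and “electricity is necessary for internet” quantify over exactly the same scenarios.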
Here I’m going to paraphrase a food stamps argument that I’ve been having with Mitchell Powell on various posts on his blog (fontwords.com).
His view is that all food stamp policies should be abolished. His argument was basically that food stamps lead to greater crime in some way, and that eliminating all food stamps would lead to less crime.
To counter, you have to look at what a statement like this is really saying: the crime that food stamps cause makes food stamps a morally unjust policy. In MP form, it looks like this:
1. If food stamps cause x amount of crime, then food stamps are unjust.
2. Food stamps cause x amount of crime.
Therefore, food stamps are unjust.
The above argument is logically valid, so if the two premises are correct the conclusion must be true.
I chose to reject premise #1. Premise #2 is questionable at best: establishing it requires several correlation-implies-causation leaps, among other problems. But, for the sake of simplicity, I usually grant all the premises other than the one I find most objectionable.
So, is #1 true: “If food stamps cause x amount of crime, then food stamps are unjust”?
This is a sufficiency claim. Basically, it says that in any case where something causes x amount of crime, it is unjust. This is obviously untrue: there could be a greater moral value that justifies the x amount of crime. Imagine that policy A creates x amount of crime but saves a billion lives. Is it justified? Unless x is an absurdly unreasonable amount of crime, policy A is justified. Therefore, premise #1 is false (remember, a sufficiency claim is false if you can find even one counterexample to it).
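(One more illustrative sketch of my own: refuting a sufficiency claim is just a search for a single counterexample. The policies and moral verdicts below are hypothetical, invented only to mirror the policy-A example above.)

```python
# Refuting a sufficient-condition claim with a single counterexample.
# The claim under test: "causing x amount of crime is sufficient for being unjust."
# All names and verdicts here are hypothetical, for illustration only.

policies = [
    {"name": "policy A", "causes_x_crime": True,  "unjust": False},  # saves a billion lives
    {"name": "policy B", "causes_x_crime": True,  "unjust": True},
    {"name": "policy C", "causes_x_crime": False, "unjust": False},
]

def counterexamples(policies):
    """Cases where the antecedent holds but the consequent fails."""
    return [p["name"] for p in policies if p["causes_x_crime"] and not p["unjust"]]

found = counterexamples(policies)
claim_holds = not found  # the claim survives only if the search comes up empty

print(found)        # ['policy A'] -- one counterexample is enough
print(claim_holds)  # False
```

The asymmetry matters: one counterexample falsifies a sufficiency claim, but no number of confirming cases proves it, which is also why the anecdote point at the end of this post cuts the way it does.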
I then argued: does the negative value of x crime outweigh the positive value of food stamps (namely, giving people food and potentially reducing poverty-induced crime, malnourishment, and starvation)? Mitchell responded by taking my argument to be an absolutist claim that the positive value of saving lives outweighs any negative value. I then responded that I was claiming that a positive moral value outweighing a negative moral value is sufficient for moral justification, not that positive moral values alone are sufficient for moral justification. (This is where our debate currently stands.)
Hopefully this is somewhat clear. The key points to take away: it is much harder to construct a sound argument than most people realize, but having one guarantees that your conclusion is correct. Also, look for hidden premises: if someone makes a claim to justify a conclusion, work out how that claim could possibly lead to the conclusion (through MP, for example). Use counterexamples to refute faulty inferences. If the inferences are all correct and you still disagree, it is most likely due to a differing opinion on a specific premise. Sometimes these premises can be debated (and their truth derived from further premises), but at some point, if no faulty inferences are ever made, you will reach a non-derivable premise that you disagree on (like whether or not people should morally value others). For example, in my previous debate with Mitchell on public education, we boiled our arguments down to one fundamental difference: I believe a significant number of children would be better off with an education but unable to get one in a completely private educational system; he did not believe these children exist in any significant number. There wasn’t really a way either of us could prove our premise, but we had traced out our differing ideas and figured out why we inferred different conclusions.
Last note: anecdotes in arguments are usually a sign of a weak argument. When you see one, keep in mind what an anecdote is: a personal example of a single case. Such a case can justly be used as a counterexample to a sufficient- or necessary-condition claim. However, as it is most often used, it cannot be used to establish a sufficient- or necessary-condition claim. That is, you can’t infer from “I know a person who was hurt by our welfare system” to “all people are hurt by our welfare system” or “if you are on our welfare system, you are worse off than you would be without it”.
(If interested, feel free to post any arguments in the comments for me to do this process on)