Let's do a brief review. We know that with deductive reasoning, a valid argument guarantees the truth of the conclusion if the premises are assumed to be true. We know that a sound argument is valid and its premises are, in fact, true. We know that truth isn't as simple as it seems, even in science, but it is still attainable. We know that inductive arguments are strong if their premises support the conclusion, and weak if they do not. We know that, even if we are likely not living in an illusion such as The Matrix, it is still difficult to find good sources. When we ask about the truth of a claim, we must ask both about the claim itself and about the source of the claim. We discussed some criteria for determining whether a source is good (for example, peer review). Now, for the remainder of the class, we will turn to mistakes in reasoning, or fallacies.
Fallacies in General
The first order of business here is to distinguish fallacies from the cognitive biases we've already discussed. According to the dual processing model of mind, cognitive biases are specific examples of fast thinking--mental shortcuts that simplify the world for us to make it easier to understand. Recall that we discussed specific cognitive biases such as the in-group bias.
Although fallacies and biases can both cause us to reason poorly, a fallacy is not just a mental shortcut but a full-fledged (but failed) attempt at creating an argument. Another way to say it is that a fallacy is an attempted argument in which the premise doesn't prove, doesn't support, or isn't really relevant to the conclusion. To take us back to the language from the first reading, fallacies make use of ethos (personal characteristics), pathos (emotion), and more, rather than logos (reason).
Fallacies are tricky because sometimes they seem like good arguments. If you have never been introduced to fallacies, in fact, people using them might even sound clever. For example, in debates about God you often hear someone say, "But you can't prove that God doesn't exist, so he must exist." If you didn't know any better, that might sound like good reasoning. The fallacy here is the misplaced burden of proof. If you make a claim (as is the case in a court of law), you had better have evidence to back it up. Another way to say it is that you can't prove a negative. If someone claims that God exists, that person had better be able to back it up with some reasons or evidence. After all, he or she made a positive claim about the existence of something.
This example also illustrates another important point: fallacies can sometimes be turned into arguments by slightly changing the premises or conclusion. Let's modify the above quote to say: "But you can't prove that God doesn't exist, so for all we know there may be some kind of higher power." In this case, the conclusion is not overdrawn. It's not necessarily unreasonable to discuss the possibility of a higher being given that we can't disprove such a being's existence (some might call it a weak argument, but I would still at least call it an argument).
In an argument, there is clear relevance between premises and conclusion. But in a fallacy, that relevance gets lost somehow. In a fallacy, premises may also be said to be inadequate. In the misplaced burden of proof, there is no relevance between not being able to prove something and its possible existence. There are all sorts of beings whose existence we can't disprove, but it doesn't mean that these beings exist (the Easter Bunny, Santa Claus, etc.). In the second quote, I realigned the argument to have relevance between premise and conclusion.
The bad news about this material is that there are far more fallacies out there than you could ever hope to learn in a single class. Compound that with the fact that different textbook authors categorize and name fallacies differently. In this and the other fallacy reading, I have chosen fallacies that are often listed as the most commonly used, as well as fallacies I most often see my students fall prey to.
The good news is that your benefiting from this material does not depend on memorizing all the fallacies in existence. The fallacies are just a tool, a metric, to help you recognize bad reasoning. That being said, at the beginning stage of learning to reason well (a stage I assume most of you are at since you're taking this class), it helps to have guides/frameworks, which is why I do ask you to know the names of some fallacies. On the part of the assessment that addresses this material, you will be asked to read passages and recognize which fallacy, if any, is being used, just like in the homework.
We will be discussing two categories of fallacies: 1) Relevance fallacies and 2) Induction fallacies. Relevance fallacies are what they sound like: attempted arguments in which there is no obvious relevance between premises and conclusion. The focus of this reading is relevance fallacies. Induction fallacies are attempted arguments in which the premise may be relevant to the conclusion but is inadequate. I mentioned the hasty generalization in the Inductive Reasoning reading, which happens when someone tries to make a generalization from a sample that is too small. If I take only one apple out of a barrel of 100 apples and it's rotten, it would be a hasty generalization for me to conclude that the other 99 apples are rotten. The premise is relevant to the conclusion, but not adequate to support it. I need a bigger sample to make it an argument rather than a fallacy.
Before we turn to additional specific relevance fallacies, a word of warning. If you haven't learned these fallacies before, you will start seeing them around you. You will recognize friends using them, parents, even teachers! You will see them on the news, in YouTube videos, and more. It's important to realize that the presence of a fallacy does not necessarily discredit the source. People sometimes use fallacies casually for fun. People also sometimes intersperse fallacies with arguments. Conspiracy theory films are great examples of the latter. These films sometimes contain legitimate points that are not being talked about by mainstream media or scholars, but they will also often use just about every fallacy in the book. As I've been arguing to you throughout this class, it's important to sift through the ethos and pathos to get to the logos--don't throw the baby out with the bathwater.
Ad Hominem
Ad hominem is a Latin phrase meaning “at the man.” This fallacy occurs when someone attacks the person stating a claim or making an argument rather than the claim or the argument itself. It might be true that someone is an idiot, but it doesn’t follow that his argument is idiotic. Likewise, just because you think someone is brilliant, it doesn’t follow that his arguments are always brilliant (as a matter of fact, brilliant people can come up with some ridiculous ideas).
Here is a very simple example: “Maria can’t even spell, so her claim about how much our country needs good healthcare must be false.” Do you see what’s happened? The truth of Maria’s claims about health care doesn’t depend on whether or not she can spell. It’s true that a person’s circumstances can make us skeptical of his or her claim—if, say, someone with no education is making complex claims about weather patterns—but they should not cause us to reject the claim right away.
Fallacies are mistakes in reasoning, or attempted arguments where the premise is either irrelevant to, or inadequate to support, the conclusion. The cognitive biases we studied earlier in the class are different in that they are mental shortcuts, examples of fast thinking. Both fallacies and biases prevent us from being as reasonable as we could be.
One important cognitive bias is the in-group bias. There's nothing wrong with feeling pride in a group you're a part of, but the example of Hitler should never cease to remind us of the consequences of such biases when they are not put in check--one consequence being the exclusion of human beings from the human community based on highly flawed beliefs and principles.
Ad hominem attacks are often obvious (like in the cartoon below), but in other cases they are much more subtle.
We should never completely discount a source due to the presence of a fallacy, though too many fallacies can make us skeptical of that source. People often use fallacies (consciously or unconsciously) along with legitimate arguments. As noted in the text to the left, many conspiracy theories are a combination of fallacies and arguments.
In the video below, Naomi Klein lays out her idea of the shock doctrine.
A good method for understanding fallacies is to change the premises and/or conclusion so that the fallacy is turned into an argument. This way you can see examples of relevance and irrelevance for direct comparison. See below.
Ad Hominem Fallacy:
1. He thinks pot should be legalized.
As an Argument:
1. He does not have expertise about drinking laws.
Straw Man Fallacy:
If we keep defense spending at the current level, we won't even be able to defend ourselves from Iceland (a particularly nonviolent country).
As an Argument:
1. We have clear intel that North Korea will soon invade the US.
So we should spend more on defense.
False Dilemma Fallacy:
1. The Environmental Protection Agency (EPA) is trying to ruin the oil industry, or it is just incompetent.
2. Either option is bad.
So the EPA has bad motivations.
As an Argument:
1. The EPA president said in a debate that the company is only motivated to get more publicity, and not to actually help the environment.
So the EPA has bad motivations.
Note that the premises in these last arguments are not true (as far as I know), but we assume their truth to see what it would take to make them arguments.
The Boulder Glacier: the same glacier at different times (before is above, after is below). Do these images alone persuade you that global warming is a serious concern? Although the images illustrate what scientific evidence and consensus are also telling us, some people have argued that the images were used as a scare tactic in the movie An Inconvenient Truth.
Ad hominem fallacies can be more subtle too. I might reject a person's argument due to her personal circumstances rather than some direct personal characteristic. If I rejected, say, a man's argument about marriage simply because he'd been through a divorce, I would be committing an ad hominem. I might reject a person's argument due to his inconsistencies. Just because a person changed his mind about a particular topic doesn't mean his argument should be rejected outright. Sometimes people poison the well against someone else, or they commit an ad hominem in advance. Before a speaker gets on the stage, a prior speaker might tell the audience not to listen to the next speaker, thus poisoning the image of the next speaker in the audience's mind before he even gets a chance to speak.
Some of you might be relating all this back to credibility. After all, when we discussed credibility we found that a person's circumstances, like lack of reputation or education, are relevant to credibility. But keep in mind that it's a fallacy only if we draw the wrong conclusion from the premises: we are entitled to be skeptical of someone's argument due to personal circumstances, but completely rejecting it on those grounds is a fallacy. If I am listening to a speaker who is, say, a physicist and he starts talking about theology, I can still listen to his argument while being skeptical. Rejecting his argument outright would be an ad hominem.
There is one last variation of the ad hominem, the genetic fallacy. The genetic fallacy is basically an ad hominem applied to an entire time period or organization. If I assume that the origin of a claim refutes that claim, I am committing this fallacy. In the Senate, bills are often rejected simply because they were created by the opposing political party--a perfect example of the genetic fallacy.
Straw Man
A man made of straw is easy to push to the ground; a real man is harder to push to the ground. What happens in this fallacy is that someone builds up someone else’s position like it’s made of straw, then easily knocks it down. In other words, someone attributes a false position to an opponent, then easily attacks that false position. If he had described the true position, it wouldn’t be so easy to knock down. Sometimes the straw man fallacy aims to make someone’s argument or claim look downright ridiculous. For example: “This is what communists actually believe…” “This is what Republicans actually believe…”
You'll notice that on the discussion boards I ask you to paraphrase each other's posts before replying to each other. While some of you may not enjoy this, being forced to reflect on what someone else actually says helps to prevent straw man fallacies from occurring. And if there are fewer straw men, there is more meaningful communication, a major goal of the discussion boards.
One way to avoid straw men is to do your best to understand everything you learn thoroughly. Sometimes people commit straw men--misinterpreting an idea or theory--simply because they never understood the idea or theory in the first place.
False Dilemma
A false dilemma fallacy is just what it sounds like: two options are presented when there are, in fact, other options. The problem with the false dilemma is that it’s an attempt to simplify something into a black or white, binary distinction when that something is more complicated. Sometimes politicians and other leaders say things like this: “Either you’re with us, or you’re against us.” False dilemma. Note that there are cases where there are only two choices about something; such cases are not false dilemma fallacies. For example: “Either you are Paris Hilton or you are not.” There are only two possibilities here.
There is a common variation of the false dilemma called the perfectionist fallacy. The perfectionist oversimplifies the world into two options, but based around perfection: either we do something perfectly, or we don't do it at all. Someone might suggest, for example, that there is no point to having a police force, since no matter what we do there will always be crime. But just because the police are not perfect (that is, they don't stop all crime), does not mean that we shouldn't have a police force at all.
Misplaced Burden of Proof
I gave an example of the misplaced burden of proof previously. If you make a claim, it's on you to prove it. If you say something is the case, you must show how it’s the case. A common variation of this argument is the appeal to ignorance. The appeal to ignorance basically says, “You can’t prove it’s false, so it must be true.” Or it says, “You can’t prove it’s true, so it must be false.” Again, if you make a claim that something is false, it’s on you to show how it’s false. The fact that someone else can’t prove your claim to be true doesn’t make it false. This appeal exploits the things about the world that we don’t know. You can't logically move from a state of ignorance (you can't prove it) to a state of positive knowledge (it exists). Notice that both of the following are fallacies: “You can’t prove God doesn’t exist, so he must exist.” “You can’t prove that God does exist, so he must not exist.”
Often, people who understand logic well—like lawyers and politicians—will deliberately misplace the burden of proof to make their opponent look bad. They know the burden of proof is on them, but they want to shift focus away from themselves so they purposefully commit a burden of proof fallacy to catch their opponent off guard. Consider yourself now intellectually armed against such people.
Begging the Question
The begging the question fallacy is sometimes referred to as reasoning in a circle, or circularity. This is because, in this fallacy, the arguer tries to get you to accept the very thing he’s trying to prove. The failed argument essentially looks like this: “You should accept X because of X.” However, the terms used are typically clothed in different language so that it doesn’t seem to be repeating the same point. In logical terms, begging the question says the same thing in the premises as it does in the conclusion.
Here’s a common example: “God exists because the bible says so. And we know that the bible is an authoritative source of information, because it was divinely inspired.” It is implied that divine inspiration is the result of the existence of some sort of God. So this is basically saying, “We know God exists because God exists.” This is begging the question, reasoning in a circle, assuming what is trying to be proved.
Appeals to Emotion
There is nothing wrong with feeling or expressing emotion, but emotion varies both between people and within a single person and, therefore, cannot by itself justify a conclusion. An appeal to emotion fallacy happens when emotion gets substituted for reason. There are a variety of appeals to emotion, and I've listed some of the most common below. However, for the purposes of the assessment, you only need to be able to recognize any appeal to emotion as an appeal to emotion. In other words, if one of the answers on the assessment is technically scare tactics (a type of appeal to emotion), the answer will nevertheless just be "appeal to emotion."
Naomi Klein, in her book The Shock Doctrine, argues that governments quickly push policies when citizens are in a state of shock as a result of a natural disaster or terrorism of some kind. The idea is that governments use citizens’ shock as a means of getting them to accept new legislation and reform. If Klein is right, what governments do is an example of the scare tactics fallacy.
Scare Tactics
This is a fallacy that tries to scare people into accepting something or doing something; it replaces reason with fear. If you make people afraid, they are more likely to look for solutions, and perhaps listen to your solution. Like anger, fear can prevent us from seeing the reasoning (if any) behind an issue. It can prevent us from seeing what’s really going on. Some people argue that Al Gore's movie, An Inconvenient Truth, used scare tactics in its before/after depiction of the Boulder Glacier (see box to the left).
It’s worthwhile to point out that sometimes there are legitimate reasons for being scared. Not everyone who presents scary information is committing a scare tactics fallacy. If you are, let’s say, a programmer for a software company and someone cites a statistic that programmers are losing their jobs left and right (and you have reason to believe this statistic is accurate), this is a legitimate reason for being scared that you might lose your job.
Appeal to Anger
Many of us get angry. Some more than others. Sometimes we have a reason for being angry. If we find, for example, that our significant other has been cheating on us with one of our friends, it seems safe to say that we are justified in our anger toward our significant other, and our friend. But sometimes we get angry without really having a reason. When anger alone, without reason, drives us to accept a claim, it's an appeal to emotion.
Wishful Thinking
This is a common fallacy, which perhaps merits its own category. The fallacy happens when we accept, or fail to accept, a claim simply because it would be pleasant or unpleasant if it were true. Many self-help books commit this fallacy. One common theme that runs through self-help is the idea that you can “be whatever you want to be.” This may prompt you to believe that you can be a basketball star, even if you are under 5 feet tall. Probably the most common example of wishful thinking is believing in an all-loving God because it is more pleasant to think that such a God is watching over you. This isn’t to say that people don’t believe in God for other legitimate reasons.
Argument from Pity
The argument from pity is a fallacy in which our pity for someone drives us to a conclusion on an unrelated matter. Professors, naturally, get this all the time. For example, at the end of the semester students will often email me to say that they need an A to get into some internship, or some program. The expectation is that I will feel pity for them, then give them the A. The problem is that there are clear requirements for earning an A, and a student must meet those requirements to get it. The fact that I might feel pity for a student does not change the fact that he or she didn’t meet the requirements.
Apple Polishing
This fallacy is exactly what it sounds like, and is derived from the teacher’s pet stereotype. When someone substitutes praise for evidence of a claim's truth, they are committing the apple polishing fallacy. A baser way to say it is “ass kissing.” If a student tells me how great a teacher I am, and I let this influence the grade I give him, then I have committed the apple-polishing fallacy. A student’s accomplishments in the class should speak for themselves.
Guilt Trip
This one’s pretty obvious. Most of us are familiar with the concept of “guilt-tripping” someone. This is a fallacy. If you try to make someone feel guilty to get them to do, or not do, something, then you are committing this fallacy. Let’s say you are out to eat and someone with you doesn’t finish her food. If you say, “You know there are starving children in Africa so you should finish your food,” you are trying to guilt-trip the person into eating the food. Guilt alone shouldn’t be responsible for making someone accept a claim or course of action.
Peer Pressure
People want to fit in. When we accept a claim simply to gain the approval of a group of people, we are committing the peer-pressure fallacy. Let’s say you’re a sophomore in high school. You’re hanging out with a new group of friends and they’re talking about how great the Yankees are. The clear leader of the group then looks at you and asks, “You like the Yankees, right?” Let’s say that you say “yes” because you are afraid of being kicked out of the group, or made fun of. In other words, you seek the approval of the group. You’ve committed the peer-pressure fallacy.
Red Herring/Smoke Screen/Irrelevant Conclusion
This fallacy occurs when someone introduces a distraction from the original point of a discussion, then goes on to conclude something irrelevant about that separate point. This fallacy is very common and often occurs in a conversation without either party being aware. In a heated debate, it usually arises because the parties involved don’t want to be wrong about something, so they introduce a separate point to lead the conversation in another direction.
Let’s say you’re having a discussion with a friend and she is discussing the injustices of the world, saying that there has been genocide in more countries than she can name. You are skeptical about how she’s using that term, so you ask her to define it so her point is clearer. She responds by asking you to define a complex term (say, imperialism) on the spot—presumably her point is that it’s difficult to define terms outright. But this is a red herring because the issue was her discussion of genocide, not your ability or inability to define imperialism.
A straw man is easier to knock down than a real man, just as a misinterpretation of someone's argument is easier to refute than the actual argument.
Some pessimists and others argue that wishful thinking is at the root of all beliefs about an afterlife, God, reincarnation, etc. They argue that, given our rough days as a species in the state of nature, we needed metaphysical beliefs like this because it was too hard to face the reality that people are gone forever when they die. As cognitive biases clearly illustrate, we want some things to be true, even if we don't know whether they are actually true. There have been some philosophers, however, who've argued that beliefs should be judged based on their usefulness rather than their truth. This view is called pragmatism and was held by William James. On pragmatism, wishful thinking doesn't seem to be a problem.
Copyright © Luke Cuddy 2009