Formal Logic

Now we are moving into more difficult material. Some of you are going to love it, and some of you probably are not. Those of you who like clear answers to questions are going to like it; some of the material is a bit like solving math problems.

The good thing is that you were introduced to deductive logic earlier in the class. Thus, you should have a basic understanding of how deductive logic works and this will help a lot. We are dealing now with what is sometimes called “formal logic” or “symbolic logic” or, as the text calls it, “truth-functional logic.” The point is that we’re ultimately going to be turning deductive arguments, like the ones from Chapter 2, into symbols and then manipulating those symbols. This will make more sense as you read on.

Symbolic Reasoning

We all know what symbols are, don’t we? To oversimplify the definition, symbols are things that stand for other things. In our culture, with traffic lights, the color red symbolizes “stop” and the color green symbolizes “go.” Symbols make things easier to understand. We don’t have to write out “stop” and “go” because we have collectively agreed to represent those actions with colors.

Something similar happens with symbolic logic. Symbolic reasoning, which we’re going to learn about, is used when other forms of reasoning would be too slow. Remember the terms “valid” and “invalid”? It’s easy to see whether simple arguments are valid or invalid. Here are a couple of examples:

This argument is obviously valid:

  • All men love hot dogs.
  • Steve is a man.
  • So Steve loves hot dogs.

If it’s true that all men love hot dogs (remember, with validity we only assume that the premises are true), then it has to be true that Steve loves hot dogs if Steve is a man. Clearly this is a valid argument that is easy to assess—if the premises are true, the conclusion must also be true.

This argument is obviously invalid:

  • Some women love hot dogs.
  • Joanne is a woman.
  • So Joanne loves hot dogs.

If only some women love hot dogs, it does not follow that Joanne loves hot dogs just because she is a woman. If only some women love hot dogs, then Joanne might be one of the women who does not love them. So the argument is obviously invalid.

Now, the above are simple arguments that are easy to assess. But what about more complicated arguments with numerous, more complex premises? Consider a religious debate between two people. One, call him Tim, argues that there is a God and that sinners will suffer eternal damnation. The other, call her Julie, argues that, if God is all good, forgiving, and compassionate, then there can be no hell of eternal suffering when we die. Here is the way Julie’s argument looks when put in premise-conclusion format.

  • 1) If God does not exist, then there will be neither a heaven nor a hell for us when we die.
  • 2) If he does exist, then there should be human suffering only if this suffering contributes to fulfilling God’s purpose.
  • 3) However, if there is to be human suffering and eternal suffering, then this cannot contribute to fulfilling God’s purpose (because God is supposed to be good, forgiving, and compassionate).
  • 4) There will be human suffering and eternal suffering, if there is a hell for us when we die.
  • 5) It follows that there will not be a hell for us when we die.

Now, given your skills in determining validity and invalidity, you could probably take some time to determine whether the above argument is valid or invalid. But it would take a while, and it might not be a very fun thing to do. Luckily for us, there is an easier way—this is where symbolic logic comes in. Like mathematics, symbolic logic was invented so we can follow long trails of reasoning that are not easy to assess. Sometimes logic is called “systematic common sense.” This description applies especially to symbolic logic, which puts arguments and common sense into a system, as we’ll see.

Think about math. When you are solving an equation, however simple, do you write out the numbers? Is it “75 × 6” or “seventy-five times six”? Clearly the first is how we do math. Equations would be incredibly difficult to work with if we didn’t have symbols for numbers and operations. In math, we systematize numbers and their relations; in symbolic logic, we systematize language. That being said, symbolic logic can get incredibly complicated, and full classes are devoted to it in upper-division philosophy. Since this is an introduction to logic class, we’re not going to go into detail; we’re just going to scratch the surface. But we’re going to go far enough that, I hope, you will see the importance of symbolic logic to our present age, particularly with respect to computers (we’ll get into this in a bit).

Symbolic Translation

But now, how do we turn language into symbols? The beginning of symbolic logic is just this: learning to translate ordinary language into symbols. In fact, learning symbolic logic thoroughly is like learning a new language (some philosophy graduate programs will accept proficiency in symbolic logic for the language requirement).

The beginning stage is often referred to as “propositional logic” because it is concerned with trying to understand the connections between propositions (or statements) in ordinary language. Consider the following:

  • “Either John passes the final or he will not pass the course.”

(Sidebar images and captions:)

  • We have collectively agreed that certain colors stand for certain actions with traffic lights.

  • We use agreed-upon symbols to represent different beliefs and ideas.

  • Language itself is a system of symbols, but with formal logic we will be creating a more fundamental symbolic system using language. According to many linguists, our brains have been "wired" for language for thousands of years.

  • Examples of "logic gates." In computers, logic gates are used to carry out various operations: they take information in and produce some output, depending on the gate. Some of the operations should be familiar: "and," "or." At the fundamental level, computers function according to the operations of symbolic logic. Logic gates are usually implemented in a circuit of some kind, such as a microchip.

  • An integrated circuit that makes use of logic gates. The image is enlarged; the chip is actually only about 2 mm across.
There are two statements here that are linked: “John passes the final” and “John passes the course.” They are linked by “or” and “not.” These linking terms are called “logical connectives.”

Logical Connectives: and, or, not, if/then, if and only if

Then there are variables. In symbolic logic, we use variables to stand for statements. For example, we use “F” to stand for “John passes the final exam.” Notice that we could have used “J” instead. There is no exact science to choosing a variable—the idea is to choose a term from the statement that seems to represent that statement best.

So, now we are seeing the components of the symbolic language we’ll be using: variables and logical connectives. There are symbols used to stand for the logical connectives; they are listed below.

Variables: A, B, C…

Logical Connectives:
And: &
Or: v
Not: ~
If/then: →
If and only if: ↔

Logicians give labels to statements with each of these connectives:

Conjunctions are “and” statements (&).
Disjunctions are “or” statements (v).
Negations are “not” statements (~).
Conditionals are “if/then” statements (→).
Although a conditional is a single symbol, it often corresponds to two terms in ordinary language: “if” and “then.” This can be confusing, but just remember that the nature of the claim is that one thing depends on another. “If we eat, then we’ll be full.” In conditionals, what comes first is called the “antecedent” and what follows is called the “consequent”: in the example, “we eat” is the antecedent and “we’ll be full” is the consequent.
Biconditionals are “if and only if” statements (↔).
These can get complicated, as noted in the book. For our purposes, I won’t be asking you to understand the deeper complications.

Given the symbols above, how would we translate the following into symbolic notation? “Juan went to the store and Mary stayed home.” The first step is to choose variables. Let’s choose “S” for “Juan went to the store” and “H” for “Mary stayed home.” Now, the final symbolic notation would be: “S & H.” I hope you’re able to see why this is the case. Here are a few more examples.

If I eat fish, then you’ll eat pork: F → P (this is a conditional)
We’ll go to the mall if and only if you get out of bed: M ↔ B (this is a biconditional)
Either we get a house or an apartment: H v A (this is a disjunction)
We’re not going to the beach: ~ B (this is a negation)
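If it helps to see the same idea from another angle, here is a minimal sketch in Python (a programming language; nothing you need for this class) that treats each variable as a truth value and each connective as a simple operation on truth values. The truth values are made up purely for illustration, and I have renamed the “B” of the beach example to “Beach” so it doesn’t clash with the “B” for “you get out of bed.”

  # A minimal sketch, assuming Python, of the translations above.
  # The truth values are invented purely for illustration.
  F, P = True, False        # "I eat fish", "you'll eat pork"
  M, B = True, True         # "we'll go to the mall", "you get out of bed"
  H, A = False, True        # "we get a house", "we get an apartment"
  Beach = True              # "we're going to the beach"

  conditional   = (not F) or P   # F -> P: false only when F is true and P is false
  biconditional = (M == B)       # M <-> B: true when both sides have the same truth value
  disjunction   = H or A         # H v A
  negation      = not Beach      # ~Beach

  print(conditional, biconditional, disjunction, negation)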

Are you starting to see it? The overall point is to see that language can be put into a system of symbols, and this is exactly what symbolic logic does. (I will be asking you to translate some statements into symbolic logic for the final exam.) To some of you this might seem simple. But I have to warn you that this stuff gets complex quickly. Here’s an example of what a full argument in symbolic logic looks like.

  1. (A & B) → [A → (D & E)]
  2. (A & B) & C          /.: D v E
  3. A & B
  4. A → (D & E)
  5. A
  6. D & E
  7. D
  8. D v E

But don’t panic! I won’t be asking you to do anything quite this complex in this class. I just wanted to show you how far this stuff can go. We’ll be going further than translation alone, but not this far.

Computers: Logic Machines

I do want to show you how this sort of symbolic language is relevant to our world. Here is a little experiment. Right now, open a new window in your web browser. Now go to “view” in the top menu and click on “page source.” Those of you who have built webpages might be familiar with what you see. What you see is not a full-blown programming language, but it is a language of symbols that allows people to manipulate the appearance of text, links, and more. In fact, what you see is typically called HTML, which stands for HyperText Markup Language. Like the language I am teaching you in this lecture, HTML is a language of symbols with different labels and functions.

It is languages like these that underlie the computer revolution and the general functioning of computers. Like most human achievements, the computer revolution has many different areas of human knowledge to thank, but the development of deductive, symbolic languages played a huge role. In this sense computers are logic machines, carrying out operations to their logical conclusions, dictated by the rules of the programming language.
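To make the “logic machine” point a little more concrete, here is a small sketch (again in Python, and again just for illustration) of the logic-gate idea mentioned in the sidebar above: a “half adder,” a classic little circuit that adds two one-bit numbers using nothing but an “and” gate and an “exclusive or” gate.

  # A small sketch, assuming Python, of building arithmetic out of logic gates.
  def AND(a, b):
      return a and b

  def XOR(a, b):            # "exclusive or": true when exactly one input is true
      return a != b

  def half_adder(a, b):
      # Adds two bits; returns (sum_bit, carry_bit).
      return XOR(a, b), AND(a, b)

  print(half_adder(True, True))    # (False, True): 1 + 1 = 10 in binary
  print(half_adder(True, False))   # (True, False): 1 + 0 = 01 in binary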


Formal Proofs of Validity

But now, back to the immediate concerns of this week’s lecture. To reiterate, symbolic logic was originally used to show the validity or invalidity of arguments that are long and complex. We’ve learned how to translate from ordinary language to symbolic logic, and the next step is to test whether arguments in this symbolic language are valid—these are usually called “proofs of validity” because you are proving that an argument is valid. There are different ways to prove validity, but we’ll be focusing on one.

It’s worth noting, too, that we could take translation deeper than we have. I could ask you to translate long arguments into symbolic language. But, since this is an intro class, we’re going to skip much of that. I basically want you to get the big picture: full arguments can be translated into symbolic language. We can skip some of the steps in between. Of course, you should know how to translate basic statements like the ones above.
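Just to give you a taste of what a full translation looks like (you won’t be asked to do this), here is one way Julie’s argument from earlier might be symbolized. The variable choices are mine: let “G” stand for “God exists,” “H” for “there is a heaven for us when we die,” “L” for “there is a hell for us when we die,” “S” for “there is human suffering,” “E” for “the suffering is eternal,” and “P” for “the suffering contributes to fulfilling God’s purpose.”

  1. ~G → (~H & ~L)
  2. G → (S → P)
  3. (S & E) → ~P
  4. L → (S & E)
  5. Therefore, ~L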

Now, on to proving validity. As I mentioned, there are other methods of proving validity and invalidity. I wish we had time to go over all of them, but it’s just unrealistic to even think that we can. As a result, I’ve chosen the method I have the most familiarity with. This will enable me to explain it to you better than I could likely explain other methods. It also makes it likely that I can most effectively answer your questions. The method we’re going to use is often simply called “deduction.” We’re going to be taking argument patterns or forms that are already valid and using them to derive a conclusion from a group of premises. It sounds more difficult than it is.

We’ll be working with 9 valid argument forms or rules. It is very important that you’re able to reference these: all 9 forms are listed in the textbook beginning at the bottom of page 322.

Modus Ponens

For clarity, I’m going to go over the first argument form: modus ponens. By showing the way this is translated from ordinary language, I think you’ll be able to see pretty clearly why it’s valid. Consider the following argument:

  1. If there is a gas station, then we’ll stop.
  2. There’s a gas station.
  3. Therefore, we’ll stop.

Now, let’s translate it. First, our variables. We’ll let “there’s a gas station” be represented by “G” and “we’ll stop” be represented by “S.” Now, how does the argument look in symbolic language given our variables? Something like this:

  1. G → S
  2. G
  3. Therefore, S.

This is one of the most basic valid argument forms: modus ponens. To remind you, we are going to be using this and the other argument forms (9 in total) from the text to show that a certain conclusion can be derived from a set of premises. It all begins with learning to recognize the 9 argument forms. For some examples of this, see the next section below.
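Before we get to those examples, here is one more way to see why modus ponens is valid: a tiny sketch in Python that checks every row of the truth table for G and S and confirms that no row makes both premises (G → S and G) true while making the conclusion S false.

  # A tiny sketch, assuming Python: a brute-force check that modus ponens is valid.
  from itertools import product

  def implies(p, q):
      return (not p) or q   # "p -> q" is false only when p is true and q is false

  counterexamples = [
      (G, S)
      for G, S in product([True, False], repeat=2)
      if implies(G, S) and G and not S   # premises true, conclusion false
  ]
  print(counterexamples)   # prints [] -- no counterexample, so the form is valid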

Recognizing Argument Forms (Rules) in Operation

The first step in being able to do formal proofs of validity is learning to recognize some of the 9 valid argument forms in operation. A couple of things to note. The little symbol "/.:" means "therefore." It is also good to understand that anything in parentheses can stand for a single term. For example, both of the following are instances of modus ponens:

1. P → Q
2. P
/.: Q

1. (X & ~Y) → Q
2. X & ~Y
/.: Q

Now, let's consider the following examples. Keep your list of the 9 rules nearby. In these examples I want you to consider this question: what form or rule was used to reach the conclusion given?

1. (A → ~B) & (~C → D)
/.: A → ~B

The conclusion, which is A → ~B, was reached by the 5th rule: simplification. If such a question were to come up on a quiz or exam, you would write "simplification" next to the line with the conclusion, to show what rule was used to derive it. Like this:

1. (A → ~B) & (~C → D)
/.: A → ~B simplification

Let's try another:

1. (V → W) v (X → Y)
2. ~(V → W)
/.: X → Y

What rule was used to get the conclusion, X → Y? It was rule 4, the disjunctive argument.

So, are you starting to get the idea? It should get clearer as you try the exercises for this week. The idea is to get comfortable with recognizing these rules and then move on to constructing proofs of validity yourself. With proofs, you won't just be recognizing the rules; you'll be deciding for yourself which ones to use to derive the conclusion.