Truth Functional Logic (or formal deductive reasoning)

There's no easy way to say this: the material you're about to learn in this "lecture" can be pretty hard for some students. Other students, on the other hand, absolutely love this stuff. Whichever camp you are in, I suggest taking it slowly. This material is really not all that difficult if you go step by step, trying to understand each idea fully before moving on to the next one. 

The good thing is that, from last lecture, you already know the basics of deductive reasoning. This prior knowledge will be useful, considering that this current material is basically formal deductive reasoning. Sometimes this sort of logic is called "symbolic logic" since we are basically reducing arguments to symbols. Ultimately you are going to be turning deductive arguments into symbols and then manipulating those symbols. 

Symbolic Reasoning

We all know what symbols are, don’t we? To oversimplify the definition, symbols are things that stand for other things. In our culture, with traffic lights, the color red symbolizes “stop” and the color green symbolizes “go.” Symbols make life easier. We don’t have to write out “stop” and “go” because we have collectively agreed to represent those actions with colors.

Something similar happens with symbolic logic. Symbolic reasoning is used when other forms of reasoning would be too slow. Let's take validity, for example. It’s easy to see that simple arguments are valid or invalid. Here are a couple of examples:

This argument is obviously valid:

1. All men love hot dogs.
2. Steve is a man.
So Steve loves hot dogs.

If it’s true that all men love hot dogs (remember, we only assume truth with validity) then it has to be true that Steve loves hot dogs, given that Steve is a man. Clearly this is a valid argument that is easy to assess: if the premises are assumed to be true, then the conclusion follows with certainty.

This argument is obviously invalid:

1. Some women love hot dogs.
2. Joanne is a woman.
So Joanne loves hot dogs.

If only some women love hot dogs, it does not follow that, just because she is a woman, Joanne loves hot dogs. If only some women love hot dogs, then Joanne might be one of the women who does not like them. So the argument is obviously invalid since the conclusion can be false, even when we assume the premises are true.
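Since this is an intro class, here is an optional illustration. The following is a minimal Python sketch (the names and sets are my own invention, purely hypothetical) of a world in which both premises are true while the conclusion is false, which is exactly what invalidity means:

```python
# A tiny hypothetical "world" showing why the argument is invalid:
# the premises can be true while the conclusion is false.
women = {"Joanne", "Alice", "Beth"}
loves_hot_dogs = {"Alice"}  # only SOME women love hot dogs

premise1 = any(w in loves_hot_dogs for w in women)  # "Some women love hot dogs"
premise2 = "Joanne" in women                        # "Joanne is a woman"
conclusion = "Joanne" in loves_hot_dogs             # "Joanne loves hot dogs"

print(premise1, premise2, conclusion)  # True True False: a counterexample
```

Because one such world exists, the conclusion does not follow with certainty from the premises.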

Now, the above are simple arguments that are easy to assess. But what about more complicated arguments with numerous, more complex premises? Consider a religious debate between two people. One, call him Tim, argues that there is a God and that sinners will suffer eternal damnation. The other, call her Julie, argues that, if God is all good, forgiving, and compassionate, then there can be no hell of eternal suffering when we die. Here is the way Julie’s argument looks when put in premise/conclusion format:

1. If God does not exist, then there will be neither a heaven nor a hell for us when we die.
2. If he does exist, then there should be human suffering only if this suffering contributes to fulfilling God’s purpose.

3. However, if there is to be human suffering and eternal suffering, then this cannot contribute to fulfilling God’s purpose (because God is supposed to be good, forgiving, and compassionate).

4. There will be human suffering and eternal suffering, if there is a hell for us when we die.

It follows that there will not be a hell for us when we die.

Now, given your skills in determining validity, you could probably take some time to determine whether the above argument is valid or invalid. But it would take a while, and it might not be a very fun thing to do. Luckily for us, there is an easier way: this is where symbolic logic comes in. Like mathematics, symbolic logic was invented so we can follow long trails of reasoning that are not easy to otherwise assess. Sometimes logic or reasoning is defined as “systematic common sense.” This definition applies especially to symbolic logic, which puts argument and common sense into a system, as we’ll see.

Think about math. When you are solving an equation, however simple, do you write out the numbers? Is it “75 x 6” or “seventy-five times six”? Clearly, the former is the way we do math. Equations would be incredibly difficult to figure out if we didn’t have symbols for numbers and functions. In math, we systematize numbers and their relations; in symbolic logic we systematize language. This being said, symbolic logic can get incredibly complicated, and full classes are devoted to it in upper-division philosophy. Since this is an introduction to logic class, we’re not going to go into detail; we’re just going to scratch the surface. But we’re going to go far enough that, I hope, you will see the importance of symbolic logic to our present age, particularly with respect to computers.

Symbolic Translation

How exactly do we turn language into symbols? The beginning of symbolic logic is just this: learning to translate ordinary language into symbols. Indeed, fully understanding symbolic logic is like learning a new language (some philosophy graduate programs will even accept proficiency in symbolic logic toward a foreign language requirement).

The beginning stage is often referred to as “propositional logic” because it is concerned with trying to understand the connections between propositions (or claims/statements) in ordinary language. Consider the following: “Either John passes the final or he will not pass the course.”

There are two statements here that are linked: “John passes the final” and “John passes the course.” They are linked by “or” and “not.” These linking terms are called “logical connectives."


We have collectively agreed that certain colors stand for certain actions with traffic lights.



We use agreed-upon symbols to represent different beliefs and ideas.


Language itself is a system of symbols, but with formal logic we will be creating a more fundamental symbolic system using language. According to many linguists, our brains have been "wired" for language for thousands of years.



Below are examples of "logic gates." In computers, logic gates are used to carry out various operations. Logic gates take information and use it to produce some output, depending on the gate. Some of the operations should be familiar: "and," "or." At the fundamental level, computers function according to the operations of symbolic logic. Logic gates are usually implemented via a circuit of some kind, such as microchips.
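As a rough sketch of the idea (the function names are my own; this is a truth-functional model, not a hardware description), basic gates can be written as Python functions and composed into more complex gates:

```python
# Minimal sketch of logic gates as truth functions (hypothetical names).
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# Gates compose into circuits; e.g., an XOR gate built from AND, OR, NOT:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

print(XOR(True, False))  # True
print(XOR(True, True))   # False
```

This composition of simple operations into more complex ones is, at bottom, what an integrated circuit does physically.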

Below is an example of an integrated circuit that makes use of logic gates. The image is enlarged; the chip is actually only about 2 mm across.



Logical Connectives: and, or, not, if/then, if and only if

Then there are variables. In symbolic logic, we use variables to stand for statements. For example, we use “F” to stand for “John passes the final exam.” Notice that we could have used “J” instead. There is no exact science to choosing a variable—the idea is to choose a term from the statement that seems to represent that statement best.

So, now we are seeing the components of the symbolic language we’ll be using: variables and logical connectives. There are symbols that stand for the logical connectives; they are listed below. No matter how complicated this stuff gets, just remember that everything we're dealing with (at least in this class) boils down to variables and logical connectives, that's it.

Variables: A, B, C...

Symbols for Logical Connectives:
And: &
Or: v
Not: ~
If/then: →
If and only if: ↔

Logicians give labels to statements with each of these connectives:

Conjunctions are “and” statements (&).
Disjunctions are “or” statements (v).
Negations are “not” statements (~).
Conditionals are “if/then” statements (→).
Biconditionals are “if and only if” statements (↔).

Although a conditional is represented by a single symbol, it often involves two terms in ordinary language: “if” and “then.” This can be confusing, but just remember that the nature of the claim is that one thing depends on the other: “If we eat, then we’ll be full.” In conditionals, what comes first is called the “antecedent” and what follows is called the “consequent.” Biconditionals can get complicated too. For our purposes, I won’t be asking you to understand the deeper complications.

Given the symbols above, how would we translate the following into symbolic notation? “Juan went to the store and Mary stayed home.” The first step is to choose variables. Let’s choose “S” for “Juan went to the store” and “H” for “Mary stayed home.” Now, the final symbolic notation would be: “S & H.” I hope you’re able to see why this is the case. Here are a few more examples.

If I eat fish, then you’ll eat pork: F → P (this is a conditional)
We’ll go to the mall if and only if you get out of bed: M ↔ B (this is a biconditional)
Either we get a house or an apartment: H v A (this is a disjunction)
We’re not going to the beach: ~ B (this is a negation)
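For readers who like to experiment, here is a hedged sketch of the five connectives as Python truth functions. The function names are my own invention, and the conditional is rendered as the standard material conditional (false only when the antecedent is true and the consequent false):

```python
# Sketch: the five logical connectives as truth functions (names are mine).
def conj(p, q):   return p and q        # &  (conjunction)
def disj(p, q):   return p or q         # v  (disjunction)
def neg(p):       return not p          # ~  (negation)
def cond(p, q):   return (not p) or q   # →  (material conditional)
def bicond(p, q): return p == q         # ↔  (biconditional)

# "F → P" with F true and P false comes out false:
print(cond(True, False))   # False
# "H v A" is true as long as at least one disjunct is true:
print(disj(False, True))   # True
```

Running statements through functions like these is exactly what a truth-functional analysis does by hand.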

Are you starting to see it? The overall point is to see that language can be put into a system of symbols, and this is exactly what symbolic logic does. As you'll see from the homework and from the related discussion board, this stuff can get complicated quickly. Here’s an example of what a full argument of symbolic logic looks like.

  1. (A & B) → [A → (D & E)]
  2. (A & B) & C          /.: D v E
  3. A & B
  4. A → (D & E)
  5. A
  6. D & E
  7. D
  8. D v E

It's important to note that we could replace the variables with ordinary statements and the logical connectives with their English equivalents, and we would have an argument in natural language.

Computers: Logic Machines

Try a little experiment. Right now, open a new window in your web browser. Now go to “view” in the top menu and click on “page source” (it may be different depending on the browser). If you've built webpages, or worked with more complicated programming languages, you'll be quite familiar with what you see. Now, what you see is not a full-fledged programming language, but it is a language of symbols that allows people to manipulate the appearance of text, links, and more. In fact, what you see is typically called HyperText Markup Language (HTML). Like the language I am teaching you in this lecture, HTML is a language of symbols with different labels and functions.

It is languages like these that underlie the computer revolution and the general functioning of computers. Like most human achievements, the computer revolution has many different areas of human knowledge to thank, but the development of deductive, symbolic languages played a huge role. In this sense computers are logic machines, carrying out operations to their logical conclusions, dictated by the rules of the programming language.


Formal Proofs of Validity

But now, back to the central concerns of the lecture. To reiterate, symbolic logic was originally used to show the validity or invalidity of arguments that are long and complex. We’ve learned how to translate from ordinary language to symbolic logic, and the next step is to test whether arguments in this symbolic language are valid—these are usually called “proofs of validity” because you are proving that an argument is valid. There are different ways to prove validity, but we’ll be focusing on one. On the deductive reasoning homework, you had to determine whether shorter arguments are valid. Again, keep in mind that all we're doing in this current lecture is proving validity with longer arguments that have been turned into symbols.

There are other methods of proving validity (as well as invalidity) of long arguments. One common method that we won't be covering in this class is the truth table. For this class, I’ve chosen the method I’m most familiar with, which means I can explain it more clearly and answer your questions more effectively. The method we’re going to use is often simply called “deduction.” We’re going to be taking argument patterns or forms that are already valid and using them to derive a conclusion from a group of premises. It sounds more difficult than it is.

We’ll be working with 9 valid argument forms or rules. It is very important that you’re able to reference these: all 9 forms are in the Truth Functional Logic Homework PDF on p. 318 under Group 1. You do not need to know the Group 2 rules.

Modus Ponens

For clarity, I’m going to go over the first argument form: modus ponens. By showing the way this is translated from ordinary language, I think you’ll be able to see pretty clearly why it’s valid. Consider the following argument:

  1. If there is a gas station, then we’ll stop.
  2. There’s a gas station.
  3. Therefore, we’ll stop.

Now, let’s translate it. First, our variables. We’ll let “there’s a gas station” be represented by “G” and “we’ll stop” be represented by “S.” Now, how does the argument look in symbolic language given our variables? Something like this:

  1. G → S
  2. G
  3. Therefore, S.

This is one of the most basic valid argument forms: modus ponens. To remind you, we are going to be using this and the other argument forms (for a total of 9) from the text to show that a certain conclusion can be derived/proved from a set of premises. It all begins with learning to recognize the 9 argument forms. For some examples of this, see the full box of white text to the right.
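We can confirm that modus ponens is valid the same brute-force way: check every row of the truth table for a row where both premises are true and the conclusion false. A quick Python sketch (the variable names follow the gas-station example; the helper function is my own):

```python
from itertools import product

# Modus ponens:  G → S,  G,  therefore S.
def implies(p, q): return (not p) or q  # material conditional

# Collect any rows where the premises are true but the conclusion is false.
counterexamples = [
    (G, S)
    for G, S in product([True, False], repeat=2)
    if implies(G, S) and G and not S
]
print(counterexamples)  # [] — no counterexample, so modus ponens is valid
```

An empty list means there is no way for the premises to be true while the conclusion is false, which is just the definition of validity.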

Recognizing Argument Forms (Rules) in Operation

The first step in being able to do formal proofs of validity is learning to recognize some of the 9 valid argument forms in operation. A couple of preliminaries. This little symbol "/.:" means "therefore." It is also good to understand that anything in parentheses can stand for a single term. For example, both of the following are modus ponens:

1. P → Q
2. P
/.: Q

1. (X & ~Y) → Q
2. X & ~Y
/.: Q

Now, let's consider the following examples. Keep your list of the 9 rules nearby. In these examples I want you to consider this question: what form or rule was used to reach the conclusion given?

1. (A → ~B) & (~C → D)
/.: A → ~B

The conclusion, which is A → ~B, was reached by the 5th rule: simplification. If such a question were to come up on a quiz or exam, you would write "simplification" next to the line with the conclusion, to show what rule was used to derive it (or choose the appropriate answer if it's a multiple choice question). Like this:

1. (A → ~B) & (~C → D)
/.: A → ~B simplification

Let's try another:

1. (V → W) v (X → Y)
2. ~(V → W)
/.: X → Y

What rule was used to get the conclusion, X → Y? It was rule 4, the disjunctive argument.
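As with modus ponens, a quick brute-force check shows why the disjunctive argument is valid. This Python sketch uses the simple form (P v Q, ~P, therefore Q); the compound disjuncts in the example above behave the same way, since anything in parentheses can stand for a single term:

```python
from itertools import product

# Disjunctive argument:  P v Q,  ~P,  therefore Q.
# Look for rows where both premises are true and the conclusion is false.
counterexamples = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if (P or Q) and (not P) and not Q
]
print(counterexamples)  # [] — no counterexample, so the form is valid
```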

So, are you starting to get the idea? Hopefully you are beginning to see it. It should get clearer, too, as you try the relevant homework. The idea is to get comfortable with these and then move on to the rest of the homework where you'll have to construct proofs of validity yourself. With proofs, you won't just be recognizing these rules, you'll be deciding for yourself which ones to use to derive the conclusion.