Logic is both the metalanguage of mathematics and an object of study within mathematics, so it is important to be able to distinguish its two roles and their respective notations.
When used as a metalanguage for mathematics, logic takes the form of crisp, standardized English. When logical statements are being studied as mathematical objects in their own right, symbolic forms of quantifiers, relations and connectives are appropriate.
Being able to translate between the forms is a critical part of logical literacy:
- $p \land q$ means ``$p$ and $q$.''
- $p \lor q$ means ``$p$ or $q$.''
- $\lnot p$ means ``it is not the case that $p$.''
- $p \rightarrow q$ means ``$p$ implies $q$.''
- $p \leftrightarrow q$ means ``$p$ if and only if $q$.''
- $\forall x : p$ means ``for each object $x$, it is the case that $p$.''
- $\exists x : p$ means ``there exists an object $x$ such that $p$.''
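As an illustration, each propositional connective can be modeled as an ordinary boolean function (a minimal sketch; the function names are my own, chosen to avoid Python's keywords):

```python
# Each propositional connective as a function on Python booleans.

def land(p, q):
    """Conjunction: true only when both operands are true."""
    return p and q

def lor(p, q):
    """Inclusive disjunction: true when at least one operand is true."""
    return p or q

def lnot(p):
    """Negation: flips the truth value."""
    return not p

def implies(p, q):
    """Implication: false only when p is true and q is false."""
    return (not p) or q

def iff(p, q):
    """Biconditional: true when both operands have the same truth value."""
    return p == q
```

These definitions give each symbol an executable meaning, which is handy for spot-checking the laws discussed below.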
Logic forms complex statements from simpler ones.
In mathematics, equations are the most common simple statements. But any relation (e.g., less than, greater than or equal, subset-inclusion) can form a mathematical statement.
More generally, any set can form the basis of a relation. When a set $S$ is used as a relation, the notation $S(x)$ is the same as $x \in S$.
The following are all simple statements:
- The sum of three and four is seven.
- $x$ is less than $y$.
- Bob is an accountant.
Consider the following two formulations of the same theorem:
Theorem $\forall n : n > 1 \rightarrow \exists a : \exists b : a \neq b \land a \cdot b = n$

Theorem For every integer $n$ greater than 1, there exist two distinct integers $a$ and $b$ such that the product of $a$ and $b$ is $n$.
Which formulation reads more clearly?
[Niggling detail: In the first formulation, how do we know that the numbers involved are integers? How do we even know they're numbers?
We don't, unless the ``universe of discourse'' is specified.
The universe of discourse determines what kinds of values variables and expressions may take.
In this case, the universe of discourse is implicitly assumed to be the integers. In some texts, it's explicitly specified; in others, it must be inferred from context. To be clear and context-insensitive, one could specify the range of every variable each time, as in this third formulation:

Theorem $\forall n \in \mathbb{Z} : n > 1 \rightarrow \exists a \in \mathbb{Z} : \exists b \in \mathbb{Z} : a \neq b \land a \cdot b = n$
The choice among specifying the universe of discourse, leaving it to be inferred, and specifying the range of each variable must be made in light of the tradeoffs between clarity and precision for the reader.]
Going in the other direction, however, is less awkward (even if it is uncommon). One could use English connectives in place of symbolic connectives, especially when the connectives are bold-faced, e.g., **and**, **or**. For ``$p$ **and** $q$'' to be true, both $p$ and $q$ must be true.
There are two ors in logic, and the default or is inclusive: ``$p$ or $q$'' is true when $p$ is true, when $q$ is true, and when $p$ and $q$ are both true. In other words, ``$p$ or $q$'' is false only when both $p$ and $q$ are false.
The other or is exclusive-or (or xor); ``$p$ xor $q$'' is true only when $p$ is true and $q$ is false, or when $p$ is false but $q$ is true. Exclusive-or is typically written ``either $p$ or $q$.'' In some texts, the symbol $\oplus$ denotes exclusive or, but this notation is not universal.
Logical negation (prefixing a statement with the symbol $\lnot$ or ``it is not the case that'') flips the truth value of a statement from true to false or false to true.
In most logics, double negation self-eliminates: $\lnot \lnot p \equiv p$.
DeMorgan's laws express the relationship between and, or and not:

$\lnot (p \land q) \equiv \lnot p \lor \lnot q$

$\lnot (p \lor q) \equiv \lnot p \land \lnot q$
And and or also distribute across each other:

$p \land (q \lor r) \equiv (p \land q) \lor (p \land r)$

$p \lor (q \land r) \equiv (p \lor q) \land (p \lor r)$
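Because these laws are claims about all truth assignments, they can be verified mechanically by brute force. A small sketch in Python:

```python
from itertools import product

# Check De Morgan's laws and the distributive laws over every
# assignment of truth values to p, q and r.
for p, q, r in product([True, False], repeat=3):
    # De Morgan's laws
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
    # Distributivity of and over or, and of or over and
    assert (p and (q or r)) == ((p and q) or (p and r))
    assert (p or (q and r)) == ((p or q) and (p or r))

print("all laws hold on every assignment")
```

Since each law involves at most three variables, checking all eight assignments settles the matter.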
A truth table enumerates all possible values of a logical expression based on the values of its operands. A truth table gives a precise, formal meaning to a logical connective. For example, the truth table for the or-connective is:

| $p$ | $q$ | $p \lor q$ |
| --- | --- | --- |
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |
Likewise, the truth table for the and-connective is:

| $p$ | $q$ | $p \land q$ |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |
The truth table for the xor-connective is:

| $p$ | $q$ | $p$ xor $q$ |
| --- | --- | --- |
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | F |
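Truth tables like these can also be generated programmatically, which is a useful sanity check when learning a new connective (a small sketch; the helper name is my own):

```python
from itertools import product

def truth_table(name, f):
    """Print the truth table of a binary connective f over {True, False}."""
    print(f"{'p':6}{'q':6}{name}")
    for p, q in product([True, False], repeat=2):
        print(f"{str(p):6}{str(q):6}{f(p, q)}")

truth_table("p or q", lambda p, q: p or q)
truth_table("p and q", lambda p, q: p and q)
# In Python, != on booleans behaves exactly like exclusive-or.
truth_table("p xor q", lambda p, q: p != q)
```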
Many understand implication intuitively, yet find its symbolic formulation puzzling. The statement ``$p$ implies $q$'' has several equivalent interpretations:
- if $p$, then $q$;
- $q$ follows from $p$;
- $p$ entails $q$;
- $p$ is sufficient for $q$.
Bidirectional implication, or logical equivalence, also has equivalent formulations. The statement ``$p$ if and only if $q$'' is the same as:
- $p$ iff $q$;
- $p$ is necessary and sufficient for $q$.
$p_1 \land p_2 \land \cdots \land p_n \rightarrow q$ means ``if $p_1$ and $p_2$ and ... and $p_n$, then $q$.'' Or, in the case of equivalences:

$p_1 \land p_2 \land \cdots \land p_n \leftrightarrow q$ means ``$p_1$ and $p_2$ and ... and $p_n$ if and only if $q$.''
It's easy to prove that $p \rightarrow q$ is true if and only if $\lnot q \rightarrow \lnot p$ is true: $\lnot q \rightarrow \lnot p$ is the contrapositive of $p \rightarrow q$.
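The equivalence of an implication and its contrapositive can be spot-checked over all truth assignments (a minimal Python sketch):

```python
from itertools import product

def implies(p, q):
    """Implication: false only when p is true and q is false."""
    return (not p) or q

# p -> q and its contrapositive (not q) -> (not p) agree on
# every one of the four possible assignments.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
```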
The truth table for implication is:

| $p$ | $q$ | $p \rightarrow q$ |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |
The truth table for bidirectional implication is:

| $p$ | $q$ | $p \leftrightarrow q$ |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |
The symbols $\forall$ and $\exists$ are quantifiers. The symbol $\forall$ is the universal quantifier because it claims something for every element in the universe, while the symbol $\exists$ is the existential quantifier because it claims the existence of an object that satisfies a claim.
DeMorgan's laws generalize to quantifiers:

$\lnot \forall x : p \equiv \exists x : \lnot p$

$\lnot \exists x : p \equiv \forall x : \lnot p$
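Over a finite universe, ``for all'' corresponds to Python's `all()` and ``there exists'' to `any()`, so the quantifier forms of DeMorgan's laws can be spot-checked directly (a sketch; the domain and predicate are arbitrary choices of mine):

```python
# A finite universe of discourse and an arbitrary predicate over it.
domain = range(10)
def pred(x):
    return x % 2 == 0  # "x is even" -- any predicate would do

# "not (for all x, p)" is the same as "there exists x such that not p".
assert (not all(pred(x) for x in domain)) == any(not pred(x) for x in domain)

# "not (there exists x such that p)" is the same as "for all x, not p".
assert (not any(pred(x) for x in domain)) == all(not pred(x) for x in domain)
```

This is only a check over one finite domain, of course, not a proof of the general laws.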
There are subtle notational variations on quantifier forms in symbolic logic:
- $\forall x \in S : p$ means ``for each $x$ in the set $S$, it is the case that $p$.''
- $\exists x \in S : p$ means ``there exists an $x$ in the set $S$ such that $p$.''
In some texts, the colon is left off quantifiers.
Consider the following statements:
- $f$ is the function that squares its argument and adds two.
- $p$ is prime and $q$ is prime implies that $p \cdot q$ is composite.
In logic, rules of inference allow true statements to be transformed into new true statements. The application of a sequence of rules of inference is what constitutes a formal proof. Familiarity with rules of inference, and with the most common rules in particular, is a nice supplement to logical literacy.
Rules of inference may be expressed in the following form:

$$\frac{p_1 \quad p_2 \quad \cdots \quad p_n}{q}$$

and such a rule means, ``if $p_1$, $p_2$, ..., $p_n$ are true, then $q$ must also be true.''
The most frequently applied rule of inference is modus ponens:

$$\frac{p \quad p \rightarrow q}{q}$$

In short, modus ponens says that if we know $p$ and we know that $p$ implies $q$, then we also know $q$.
Modus tollens, by contrast, works in reverse:

$$\frac{p \rightarrow q \quad \lnot q}{\lnot p}$$

If we know that $p$ implies $q$, but $q$ is false, then $p$ must be false.
For a rule of inference

$$\frac{p_1 \quad p_2 \quad \cdots \quad p_n}{q}$$

to be sound, $p_1 \land p_2 \land \cdots \land p_n \rightarrow q$ must be equivalent to true. Consider modus ponens: $p \land (p \rightarrow q) \rightarrow q$ is true under every assignment of truth values to $p$ and $q$.
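Soundness of a propositional rule amounts to a tautology check, which can be done by enumerating assignments. A sketch for modus ponens:

```python
from itertools import product

def implies(p, q):
    """Implication: false only when p is true and q is false."""
    return (not p) or q

# Modus ponens is sound: (p and (p -> q)) -> q is true on
# every assignment of truth values to p and q.
assert all(
    implies(p and implies(p, q), q)
    for p, q in product([True, False], repeat=2)
)
```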
A logical fallacy is a would-be rule of inference that is not always true. A common fallacy is affirming the consequent; that is, assuming:

$$\frac{p \rightarrow q \quad q}{p}$$
It's a good exercise to prove that this rule is a fallacy by showing that $(p \rightarrow q) \land q \rightarrow p$ does not always reduce to true.
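The same enumeration technique exposes the fallacy: a single counterexample assignment suffices.

```python
def implies(p, q):
    """Implication: false only when p is true and q is false."""
    return (not p) or q

# Affirming the consequent: ((p -> q) and q) -> p.
# The assignment p = False, q = True makes both premises true
# while the conclusion is false, so the rule is not sound.
p, q = False, True
assert implies(p, q) and q   # both premises hold...
assert p is False            # ...yet the conclusion fails
```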
With respect to logic's role as a metalanguage in mathematical prose, this post covers the essential vocabulary for technical reading.
I glossed over the nitty-gritty that should be common sense, like the idempotent laws (e.g., $p$ or $p$ is just $p$), the commutative laws and the identity laws. Any textbook on logic will cover these.
For most, the next step is to learn how to read and write proofs.
If you're interested in getting deeper into symbolic logic, the first rabbit hole starts with relations and formulae, continues through entailment, satisfaction, derivation, validity, consistency, soundness and completeness, and ends with models, theories and incompleteness.
- My recommended reading for grad students.
- Harry Gensler's Introduction to Logic is the best introduction to logic and proofs that I know of. This is probably because it was written for philosophers instead of mathematicians.
- Mathematical Logic, on the other hand, is a punishing read for non-mathematicians. That said, it is a complete, unified development of all of the foundational concepts in symbolic logic (including Gödel's incompleteness theorems). As a reference for the working formal mathematician, it is an indispensable resource.
- Gödel's Theorem: An Incomplete Guide to Its Use and Abuse is the best lay reader's overview of Gödel's incompleteness theorems that I know of.