Could someone give me an algebraic proof of De Morgan's Theorems?
I already know the proof by truth table, but I need to understand the algebraic way.
EDIT
I'll try to explain better. Imagine you have the two theorems:
NOT(a * b) = NOT(a) + NOT(b)
NOT(a + b) = NOT(a) * NOT(b)
What do you answer to someone who tells you: "Show me why these two theorems hold without using truth tables"?
Following, e.g. Wikipedia, let us define a Boolean algebra to be a set $A$, together with two binary operations $\land$ and $\lor$, a unary operation $'$, and two nullary operations $0$ and $1$, satisfying the following axioms: $$\begin{align*} a\lor(b\lor c) &= (a\lor b)\lor c, & a\land(b\land c) &= (a\land b)\land c, &&\text{(associativity)}\\ a\lor b &= b\lor a, & a\land b &= b\land a, &&\text{(commutativity)}\\ a\lor(a\land b) &= a, & a\land (a\lor b) &= a, &&\text{(absorption)}\\ a\lor(b\land c) &= (a\lor b)\land (a\lor c), & a\land(b\lor c) &= (a\land b)\lor (a\land c) &&\text{(distributivity)}\\ a\lor a' &= 1, & a\land a' &= 0. &&\text{(complements)} \end{align*}$$
You want to use these axioms to prove that $(a\land b)' = a'\lor b'$ and $(a\lor b)' = a'\land b'$.
Lemma 1. $a\land 1 = a$ and $a\lor 0 = a$ for all $a$.
Proof. $a \land 1 = a\land(a\lor a') = a$, by complements and absorption; likewise, $a\lor 0 = a\lor(a\land a') = a$ by complements and absorption. $\Box$
Lemma 2. $a\land 0 = 0$ and $a\lor 1 = 1$ for all $a$.
Proof. $a\land 0 = a\land (a\land a') = (a\land a)\land a' = a\land a' = 0$. And $a\lor 1 = a\lor(a\lor a') = (a\lor a)\lor a' = a\lor a' = 1$. $\Box$
Lemma 3. If $a\land b' = 0$ and $a\lor b'=1$, then $a=b$.
Proof.$$\begin{align*} b &= b\land 1 &&\text{(Lemma 1)}\\ &= b\land(a\lor b') &&\text{(hypothesis)}\\ &= (b\land a)\lor (b\land b') &&\text{(distributivity)}\\ &= (b\land a)\lor 0 &&\text{(complements)}\\ &= (b\land a)\lor (a\land b') &&\text{(hypothesis)}\\ &= (a\land b)\lor(a\land b') &&\text{(commutativity)}\\ &= a\land (b\lor b') &&\text{(distributivity)}\\ &= a\land 1 &&\text{(complements)}\\ &= a. &&\text{(Lemma 1)}\ \Box \end{align*}$$
Lemma 4. For all $a$, $(a')' = a$.
Proof. By Lemma 3, it suffices to show that $(a')'\land a' = 0$ and $(a')'\lor a' = 1$. But this follows directly by complementation. $\Box$
Theorem. $(a\land b)' = a'\lor b'$.
Proof. By Lemmas 3 and 4, it suffices to show that $(a\land b)\land (a'\lor b') = 0$ and $(a\land b)\lor (a'\lor b') = 1$: by Lemma 4, this is the same as proving $(a\land b)\land (a'\lor b')'' =0$ and $(a\land b)\lor (a'\lor b')'' = 1$; by Lemma 3, this gives $(a\land b) = (a'\lor b')'$, and taking complements and applying Lemma 4 again we get $(a\land b)' = (a'\lor b')'' = a'\lor b'$, which is what we want.
We have: $$\begin{align*} (a\land b)\land(a'\lor b') &= \bigl((a\land b)\land a'\bigr) \lor \bigl((a\land b)\land b'\bigr) &&\text{(by distributivity)}\\ &= \bigl( (a\land a')\land b\bigr) \lor \bigl( a\land (b\land b')\bigr) &&\text{(associativity and commutativity)}\\ &= ( 0\land b) \lor (a\land 0) &&\text{(by complements)}\\ &= 0 \lor 0 &&\text{(by Lemma 2)}\\ &= 0 &&\text{(by Lemma 1)}. \end{align*}$$ And $$\begin{align*} (a\land b)\lor(a'\lor b') &= \bigl( (a\land b)\lor a'\bigr) \lor b'&&\text{(by associativity)}\\ &= \bigl( (a\lor a') \land (b\lor a')\bigr) \lor b'&&\text{(by distributivity)}\\ &= \bigl( 1\land (b\lor a')\bigr) \lor b'&&\text{(by complements)}\\ &= (b\lor a')\lor b'&&\text{(by Lemma 1)}\\ &= (b\lor b')\lor a'&&\text{(by commutativity and associativity)}\\ &= 1\lor a'&&\text{(by complements)}\\ &= 1 &&\text{(by Lemma 2)}. \end{align*}$$ Since $(a\land b)\land (a'\lor b') = 0$ and $(a\land b)\lor (a'\lor b') = 1$, the conclusion follows. $\Box$
Theorem. $(a\lor b)' = a'\land b'$.
Proof. Left as an exercise for the interested reader. $\Box$
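As a sanity check (not a substitute for the axiomatic proof above), one can verify both laws in a concrete Boolean algebra: the power set of a three-element set under union, intersection, and set complement. The helper names below are ad hoc, purely for illustration.

```python
from itertools import combinations

# The power-set Boolean algebra on U = {0, 1, 2}:
# join = union, meet = intersection, complement = set difference from U.
U = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def comp(a):
    # a' = U \ a
    return U - a

# Check both De Morgan laws for every pair of elements of the algebra.
for a in powerset(U):
    for b in powerset(U):
        assert comp(a & b) == comp(a) | comp(b)   # (a AND b)' = a' OR b'
        assert comp(a | b) == comp(a) & comp(b)   # (a OR b)'  = a' AND b'
print("De Morgan's laws hold in the power-set algebra of", set(U))
```

Of course this only confirms the laws in one model; the axiomatic argument above is what establishes them for every Boolean algebra.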
Let ~ stand for the negation function "NOT", and -> for (material) implication. Then in a natural deduction system (with a rule of negation elimination reading "from an assumption which is a negation and which leads to a contradiction, we may infer the (sub) well-formed formula which the negation negates") the proofs might run as follows:
1 | ~(a*b) assumption
2 || ~(~a+~b) assumption
3 ||| ~a assumption
4 ||| (~a+~b) 3 + introduction
5 ||| ((~a+~b)*~(~a+~b)) 2, 4 * introduction
6 || a 3-5 ~ elimination
7 ||| ~b assumption
8 ||| (~a+~b) 7 + introduction
9 ||| ((~a+~b)*~(~a+~b)) 2, 8 * introduction
10 || b 7-9 ~ elimination
11 || (a*b) 6, 10 * introduction
12 || ((a*b)*~(a*b)) 1, 11 * introduction
13 | (~a+~b) 2-12 ~ elimination
14 (~(a*b)->(~a+~b)) 1-13 -> introduction
15 | (~a+~b) assumption
16 || (a*b) assumption
17 || a 16 * elimination
18 ||| ~a assumption
19 |||| b assumption
20 |||| (a*~a) 17, 18 * introduction
21 ||| ~b 19-20 ~ introduction
22 || (~a->~b) 18-21 -> introduction
23 ||| ~b assumption
24 || (~b->~b) 23-23 -> introduction
25 || ~b 15, 22, 24 + elimination
26 || b 16 * elimination
27 || (b*~b) 25, 26 * introduction
28 | ~(a*b) 16-27 ~ introduction
29 ((~a+~b)->~(a*b)) 15-28 -> introduction
30 (~(a*b)=(~a+~b)) 14, 29 = introduction
1 | ~(a+b) assumption
2 || a assumption
3 || (a+b) 2 + introduction
4 || ((a+b)*~(a+b)) 1, 3 * introduction
5 | ~a 2-4 ~ introduction
6 || b assumption
7 || (a+b) 6 + introduction
8 || ((a+b)*~(a+b)) 7, 1 * introduction
9 | ~b 6-8 ~ introduction
10 | (~a*~b) 5, 9 * introduction
11 (~(a+b)->(~a*~b)) 1-10 -> introduction
12 | (~a*~b) assumption
13 || (a+b) assumption
14 ||| a assumption
15 || (a->a) 14-14 -> introduction
16 ||| b assumption
17 |||| ~a assumption
18 |||| ~b 12 * elimination
19 |||| (b*~b) 16, 18 * introduction
20 ||| a 17-19 ~ elimination
21 || (b->a) 16-20 -> introduction
22 || a 13, 15, 21 + elimination
23 || ~a 12 * elimination
24 || (a*~a) 22, 23 * introduction
25 | ~(a+b) 13-24 ~ introduction
26 ((~a*~b)->~(a+b)) 12-25 -> introduction
27 (~(a+b)=(~a*~b)) 11, 26 = introduction

A basic technique used here consists of not inferring a contradiction as soon as one is able to, but rather introducing an assumption that one doesn't want to keep around, and then inferring a contradiction to get rid of it. Personally, I'd find this very tricky if I didn't keep track of scope as I did above.
Note that there exists a metatheorem which states that if a theorem holds in classical propositional calculus, there will also exist a corresponding theorem in all Boolean Algebras. Since the above do consist of proofs in classical propositional calculus, by that metatheorem, they also hold for all Boolean Algebras.
The truth tables are a summary of the underlying algebra; without further specification of what you mean by "the algebraic way" I don't know what you would want other than truth tables. You could split it into cases ("if $A=0$ and $B=0$ then $(A+B)' = 0' = 1 = 1 \cdot 1 = A' \cdot B'$", and so on), but this is exactly what the truth table tells you.
What do you answer to someone who says: "Show me why these two theorems hold without using truth tables"?
The simplest possible example that I can think of for the de Morgan laws is:
If you ask:
"When is an $AND$ (or $\land$) statement not $true$?",
then the answer is:
"If either of the two statements involved is $false$."
The above is described symbolically with: $$\lnot(a \land b) = \lnot a \lor \lnot b $$
Similarly, if you ask:
"When is an $OR$ (or $\lor$) statement not $true$?",
then the answer is:
"When both of the involved statements are $false$."
Which symbolically is:
$$\lnot(a \lor b) = \lnot a \land \lnot b$$
Thus, the De Morgan laws could be thought of as a criterion for when the logical operators $OR$ and $AND$ yield $false$.
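This criterion reading can be checked directly with Python's booleans; note that brute-forcing both inputs like this is essentially a truth table in disguise, so it is only an illustration:

```python
# An AND fails iff some input is false; an OR fails iff every input is false.
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
print("ok")
```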
More convoluted answer:
Considering:
The statements $\lnot \forall x \in U(p(x))$ and $\exists x \in U(\lnot p(x))$ are equivalent.
The De Morgan laws could be thought of as reducing the relationship that negation, $\neg$, gives between "for all", $\forall$, and "there exists", $\exists$, statements from potentially infinitely many statements about an infinite universe to a finite number of statements.
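The quantifier form can be made concrete with Python's `all` and `any` over a finite list (the list and predicate below are arbitrary choices for illustration):

```python
# Quantifier De Morgan: NOT(forall x, p(x))  <=>  exists x with NOT p(x).
xs = [2, 4, 7, 8]
p = lambda x: x % 2 == 0  # p(x): "x is even"

left = not all(p(x) for x in xs)    # not-forall: fails because of 7
right = any(not p(x) for x in xs)   # exists a counterexample: 7
assert left == right

# Dually: NOT(exists x, p(x))  <=>  forall x, NOT p(x).
assert (not any(p(x) for x in xs)) == all(not p(x) for x in xs)
```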
Transferring the problem from the Boolean algebra $(\mathbb Z_2,\neg,\vee,\wedge)$ to the Boolean ring $(\mathbb Z_2,1,+,\cdot)$, where $1$ means TRUE, $+$ means EXCLUSIVE OR and $\cdot$ means AND, and using the equivalences
$\neg a \iff (1+a)$
$a\vee b \iff (a+b+ab)$
$a\wedge b \iff (a\cdot b)$
makes the task very algebraic and rather straightforward.
$\neg(a\wedge b)\iff (1+ab)$
$(\neg a \vee \neg b)\iff ((1+a)+(1+b)+(1+a)(1+b))=a+b+1+a+b+ab=1+ab$
(remember that $x+x=0$)
and
$\neg(a\vee b)\iff (1+(a+b+ab))$
$(\neg a \wedge \neg b)\iff (1+a)(1+b)=1+a+b+ab,$
which equals $1+(a+b+ab)$, as required.
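A short script can confirm these ring computations over $\mathbb Z_2$; the function names `NOT`, `OR`, `AND` are ad hoc choices for this sketch:

```python
# Boolean ring Z_2: arithmetic mod 2, with
#   NOT a   = 1 + a
#   a OR b  = a + b + a*b
#   a AND b = a * b
def NOT(a):
    return (1 + a) % 2

def OR(a, b):
    return (a + b + a * b) % 2

def AND(a, b):
    return (a * b) % 2

for a in (0, 1):
    for b in (0, 1):
        # (a AND b)' = a' OR b'   i.e.  1 + ab = (1+a) + (1+b) + (1+a)(1+b)
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
        # (a OR b)' = a' AND b'   i.e.  1 + (a+b+ab) = (1+a)(1+b)
        assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))
print("Both laws verified in the Boolean ring Z_2")
```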