diff --git a/README.md b/README.md index a4be109..aa9ed79 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,3 @@ -# Logical Verification 2020: Installation Instructions +# Logical Verification 2021: Installation Instructions TODO. diff --git a/hitchhikers_guide.pdf b/hitchhikers_guide.pdf new file mode 100644 index 0000000..a2b6636 Binary files /dev/null and b/hitchhikers_guide.pdf differ diff --git a/hitchhikers_guide_tablet.pdf b/hitchhikers_guide_tablet.pdf new file mode 100644 index 0000000..c1b5ee2 Binary files /dev/null and b/hitchhikers_guide_tablet.pdf differ diff --git a/lean/love01_definitions_and_statements_demo.lean b/lean/love01_definitions_and_statements_demo.lean new file mode 100644 index 0000000..5fee46e --- /dev/null +++ b/lean/love01_definitions_and_statements_demo.lean @@ -0,0 +1,447 @@ +import .lovelib + + +/-! # LoVe Preface + +## Proof Assistants + +Proof assistants (also called interactive theorem provers) + +* check and help develop formal proofs; +* can be used to prove big theorems, not only logic puzzles; +* can be tedious to use; +* are highly addictive (think video games). + +A selection of proof assistants, classified by logical foundations: + +* set theory: Isabelle/ZF, Metamath, Mizar; +* simple type theory: HOL4, HOL Light, Isabelle/HOL; +* **dependent type theory**: Agda, Coq, **Lean**, Matita, PVS. + + +## Success Stories + +Mathematics: + +* the four-color theorem (in Coq); +* the odd-order theorem (in Coq); +* the Kepler conjecture (in HOL Light and Isabelle/HOL). + +Computer science: + +* hardware +* operating systems +* programming language theory +* compilers +* security + + +## Lean + +Lean is a proof assistant developed primarily by Leonardo de Moura (Microsoft +Research) since 2012. + +Its mathematical library, `mathlib`, is developed under the leadership of +Jeremy Avigad (Carnegie Mellon University). + +We use community version 3.20.0. We use its basic libraries, `mathlib`, and +`LoVelib`. Lean is a research project. 
+ +Strengths: + +* highly expressive logic based on a dependent type theory called the + **calculus of inductive constructions**; +* extended with classical axioms and quotient types; +* metaprogramming framework; +* modern user interface; +* documentation; +* open source; +* endless source of puns (Lean Forward, Lean Together, Boolean, …). + + +## This Course + +### Web Site + + https://lean-forward.github.io/logical-verification/2020/index.html + + +### Installation Instructions + + https://github.com/blanchette/logical_verification_2020/blob/master/README.md#logical-verification-2020---installation-instructions + + +### Repository (Demos, Exercises, Homework) + + https://github.com/blanchette/logical_verification_2020 + +The file you are currently looking at is a demo. There are + +* 13 demo files; +* 13 exercise sheets; +* 11 homework sheets (10 points each); +* 1 project (20 points). + +You may submit at most 10 homework sheets, or at most 8 homework sheets and the project. +Homework, including the project, must be done individually. The homework builds +on the exercises, which build on the demos. + + +### The Hitchhiker's Guide to Logical Verification + + https://github.com/blanchette/logical_verification_2020/blob/master/hitchhikers_guide.pdf + https://github.com/blanchette/logical_verification_2020/blob/master/hitchhikers_guide_tablet.pdf + +The lecture notes consist of a preface and 13 chapters. They cover the same +material as the corresponding lectures but in more detail. Sometimes there +will not be enough time to cover everything in class, so reading the lecture +notes will be necessary. + + +### Final Exam + +The course aims at teaching concepts, not syntax. Therefore, the final exam is +on paper.
+ + +## Our Goal + +We want you to + +* master fundamental theory and techniques in interactive theorem proving; +* familiarize yourselves with some application areas; +* develop some practical skills you can apply on a larger project (as a hobby, + for an MSc or PhD, or in industry); +* feel ready to move to another proof assistant and apply what you have learned; +* understand the domain well enough to start reading scientific papers. + +This course is neither a pure logical foundations course nor a Lean tutorial. +Lean is our vehicle, not an end in itself. + + +# LoVe Demo 1: Definitions and Statements + +We introduce the basics of Lean and proof assistants, without trying to carry +out actual proofs yet. We focus on specifying objects and statements of their +intended properties. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## A View of Lean + +In a first approximation: + + Lean = functional programming + logic + +In today's lecture, we cover inductive types, recursive functions, and lemma +statements. + +If you are not familiar with typed functional programming (e.g., Haskell, ML, +OCaml, Scala), we recommend that you study a tutorial, such as the first +chapters of the online tutorial __Learn You a Haskell for Great Good!__: + + http://learnyouahaskell.com/chapters + +Make sure to at least reach the section titled "Lambdas". + + +## Types and Terms + +Similar to simply typed λ-calculus or typed functional programming languages +(ML, OCaml, Haskell). + +Types `σ`, `τ`, `υ`: + +* type variables `α`; +* basic types `T`; +* complex types `T σ1 … σN`. + +Some type constructors `T` are written infix, e.g., `→` (function type). + +The function arrow is right-associative: +`σ₁ → σ₂ → σ₃ → τ` = `σ₁ → (σ₂ → (σ₃ → τ))`. + +Polymorphic types are also possible. In Lean, the type variables must be bound +using `∀`, e.g., `∀α, α → α`. 
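To make the polymorphic case concrete, here is a small illustrative sketch (these `#check` commands are not part of the original demo):

```lean
#check ∀α : Type, α → α        -- the type of polymorphic identity functions
#check λ(α : Type) (a : α), a  -- a term of that type
```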
+ +Terms `t`, `u`: + +* constants `c`; +* variables `x`; +* applications `t u`; +* λ-expressions `λx, t`. + +__Currying__: functions can be + +* fully applied (e.g., `f x y z` if `f` is ternary); +* partially applied (e.g., `f x y`, `f x`); +* left unapplied (e.g., `f`). + +Application is left-associative: `f x y z` = `((f x) y) z`. -/ + +#check ℕ +#check ℤ + +#check empty +#check unit +#check bool + +#check ℕ → ℤ +#check ℤ → ℕ +#check bool → ℕ → ℤ +#check (bool → ℕ) → ℤ +#check ℕ → (bool → ℕ) → ℤ + +#check λx : ℕ, x +#check λf : ℕ → ℕ, λg : ℕ → ℕ, λh : ℕ → ℕ, λx : ℕ, h (g (f x)) +#check λ(f g h : ℕ → ℕ) (x : ℕ), h (g (f x)) + +constants a b : ℤ +constant f : ℤ → ℤ +constant g : ℤ → ℤ → ℤ + +#check λx : ℤ, g (f (g a x)) (g x b) +#check λx, g (f (g a x)) (g x b) + +#check λx, x + +constant trool : Type +constants trool.true trool.false trool.maybe : trool + + +/-! ### Type Checking and Type Inference + +Type checking and type inference are decidable problems, but this property is +quickly lost if features such as overloading or subtyping are added. + +Type judgment: `C ⊢ t : σ`, meaning `t` has type `σ` in local context `C`. + +Typing rules: + + —————————— Cst if c is declared with type σ + C ⊢ c : σ + + —————————— Var if x : σ occurs in C + C ⊢ x : σ + + C ⊢ t : σ → τ C ⊢ u : σ + ——————————————————————————— App + C ⊢ t u : τ + + C, x : σ ⊢ t : τ + ———————————————————————— Lam + C ⊢ (λx : σ, t) : σ → τ + + +### Type Inhabitation + +Given a type `σ`, the __type inhabitation__ problem consists of finding a term +of that type. + +Recursive procedure: + +1. If `σ` is of the form `τ → υ`, a candidate inhabitant is an anonymous + function of the form `λx, _`. + +2. Alternatively, you can use any constant or variable `x : τ₁ → ⋯ → τN → σ` to + build the term `x _ … _`. -/ + +constants α β γ : Type + +def some_fun_of_type : (α → β → γ) → ((β → α) → β) → α → γ := +λf g a, f a (g (λb, a)) + + +/-! 
## Type Definitions + +An __inductive type__ (also called __inductive datatype__, +__algebraic datatype__, or just __datatype__) is a type that consists of all the +values that can be built using a finite number of applications of its +__constructors__, and only those. + + +### Natural Numbers -/ + +namespace my_nat + +/-! Definition of type `nat` (= `ℕ`) of natural numbers, using Peano-style unary +notation: -/ + +inductive nat : Type +| zero : nat +| succ : nat → nat + +#check nat +#check nat.zero +#check nat.succ + +end my_nat + +#print nat +#print ℕ + + +/-! ### Arithmetic Expressions -/ + +inductive aexp : Type +| num : ℤ → aexp +| var : string → aexp +| add : aexp → aexp → aexp +| sub : aexp → aexp → aexp +| mul : aexp → aexp → aexp +| div : aexp → aexp → aexp + + +/-! ### Lists -/ + +namespace my_list + +inductive list (α : Type) : Type +| nil : list +| cons : α → list → list + +#check list.nil +#check list.cons + +end my_list + +#print list + + +/-! ## Function Definitions + +The syntax for defining a function operating on an inductive type is very +compact: We define a single function and use __pattern matching__ to extract the +arguments to the constructors.
-/ + +def add : ℕ → ℕ → ℕ +| m nat.zero := m +| m (nat.succ n) := nat.succ (add m n) + +#eval add 2 7 +#reduce add 2 7 + +def mul : ℕ → ℕ → ℕ +| _ nat.zero := nat.zero +| m (nat.succ n) := add m (mul m n) + +#eval mul 2 7 + +#print mul +#print mul._main + +def power : ℕ → ℕ → ℕ +| _ nat.zero := 1 +| m (nat.succ n) := m * power m n + +#eval power 2 5 + +def power₂ (m : ℕ) : ℕ → ℕ +| nat.zero := 1 +| (nat.succ n) := m * power₂ n + +#eval power₂ 2 5 + +def iter (α : Type) (z : α) (f : α → α) : ℕ → α +| nat.zero := z +| (nat.succ n) := f (iter n) + +#check iter + +def power₃ (m n : ℕ) : ℕ := +iter ℕ 1 (λl, m * l) n + +#eval power₃ 2 5 + +def append (α : Type) : list α → list α → list α +| list.nil ys := ys +| (list.cons x xs) ys := list.cons x (append xs ys) + +#check append +#eval append _ [3, 1] [4, 1, 5] + +/-! Aliases: + + `[]` := `nil` + `x :: xs` := `cons x xs` + `[x₁, …, xN]` := `x₁ :: … :: xN :: []` -/ + +def append₂ {α : Type} : list α → list α → list α +| list.nil ys := ys +| (list.cons x xs) ys := list.cons x (append₂ xs ys) + +#check append₂ +#eval append₂ [3, 1] [4, 1, 5] + +#check @append₂ +#eval @append₂ _ [3, 1] [4, 1, 5] + +def append₃ {α : Type} : list α → list α → list α +| [] ys := ys +| (x :: xs) ys := x :: append₃ xs ys + +def reverse {α : Type} : list α → list α +| [] := [] +| (x :: xs) := reverse xs ++ [x] + +def eval (env : string → ℤ) : aexp → ℤ +| (aexp.num i) := i +| (aexp.var x) := env x +| (aexp.add e₁ e₂) := eval e₁ + eval e₂ +| (aexp.sub e₁ e₂) := eval e₁ - eval e₂ +| (aexp.mul e₁ e₂) := eval e₁ * eval e₂ +| (aexp.div e₁ e₂) := eval e₁ / eval e₂ + +#eval eval (λs, 7) (aexp.div (aexp.var "x") (aexp.num 0)) + +/-! Lean only accepts the function definitions for which it can prove +termination. In particular, it accepts __structurally recursive__ functions, +which peel off exactly one constructor at a time. + + +## Lemma Statements + +Notice the similarity with `def` commands. 
-/ + +namespace sorry_lemmas + +lemma add_comm (m n : ℕ) : + add m n = add n m := +sorry + +lemma add_assoc (l m n : ℕ) : + add (add l m) n = add l (add m n) := +sorry + +lemma mul_comm (m n : ℕ) : + mul m n = mul n m := +sorry + +lemma mul_assoc (l m n : ℕ) : + mul (mul l m) n = mul l (mul m n) := +sorry + +lemma mul_add (l m n : ℕ) : + mul l (add m n) = add (mul l m) (mul l n) := +sorry + +lemma reverse_reverse {α : Type} (xs : list α) : + reverse (reverse xs) = xs := +sorry + +/-! Axioms are like lemmas but without proofs (`:= …`). Constant declarations +are like definitions but without bodies (`:= …`). -/ + +constants a b : ℤ + +axiom a_less_b : + a < b + +end sorry_lemmas + +end LoVe diff --git a/lean/love01_definitions_and_statements_exercise_sheet.lean b/lean/love01_definitions_and_statements_exercise_sheet.lean new file mode 100644 index 0000000..73c49ee --- /dev/null +++ b/lean/love01_definitions_and_statements_exercise_sheet.lean @@ -0,0 +1,139 @@ +import .love01_definitions_and_statements_demo + + +/-! # LoVe Exercise 1: Definitions and Statements + +Replace the placeholders (e.g., `:= sorry`) with your solutions. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Truncated Subtraction + +1.1. Define the function `sub` that implements truncated subtraction on natural +numbers by recursion. "Truncated" means that results that mathematically would +be negative are represented by 0. For example: + + `sub 7 2 = 5` + `sub 2 7 = 0` -/ + +def sub : ℕ → ℕ → ℕ := +sorry + +/-! 1.2. Check that your function works as expected. -/ + +#eval sub 0 0 -- expected: 0 +#eval sub 0 1 -- expected: 0 +#eval sub 0 7 -- expected: 0 +#eval sub 1 0 -- expected: 1 +#eval sub 1 1 -- expected: 0 +#eval sub 3 0 -- expected: 3 +#eval sub 2 7 -- expected: 0 +#eval sub 3 1 -- expected: 2 +#eval sub 3 3 -- expected: 0 +#eval sub 3 7 -- expected: 0 +#eval sub 7 2 -- expected: 5 + + +/-! 
## Question 2: Arithmetic Expressions + +Consider the type `aexp` from the lecture and the function `eval` that +computes the value of an expression. You will find the definitions in the file +`love01_definitions_and_statements_demo.lean`. One way to find them quickly is +to + +1. hold the Control (on Linux and Windows) or Command (on macOS) key pressed; +2. move the cursor to the identifier `aexp` or `eval`; +3. click the identifier. -/ + +#check aexp +#check eval + +/-! 2.1. Test that `eval` behaves as expected. Make sure to exercise each +constructor at least once. You can use the following environment in your tests. +What happens if you divide by zero? + +Make sure to use `#eval`. For technical reasons, `#reduce` does not work well +here. Note that `#eval` (Lean's evaluation command) and `eval` (our evaluation +function on `aexp`) are unrelated. -/ + +def some_env : string → ℤ +| "x" := 3 +| "y" := 17 +| _ := 201 + +#eval eval some_env (aexp.var "x") -- expected: 3 +-- invoke `#eval` here + +/-! 2.2. The following function simplifies arithmetic expressions involving +addition. It simplifies `0 + e` and `e + 0` to `e`. Complete the definition so +that it also simplifies expressions involving the other three binary +operators. -/ + +def simplify : aexp → aexp +| (aexp.add (aexp.num 0) e₂) := simplify e₂ +| (aexp.add e₁ (aexp.num 0)) := simplify e₁ +-- insert the missing cases here +-- catch-all cases below +| (aexp.num i) := aexp.num i +| (aexp.var x) := aexp.var x +| (aexp.add e₁ e₂) := aexp.add (simplify e₁) (simplify e₂) +| (aexp.sub e₁ e₂) := aexp.sub (simplify e₁) (simplify e₂) +| (aexp.mul e₁ e₂) := aexp.mul (simplify e₁) (simplify e₂) +| (aexp.div e₁ e₂) := aexp.div (simplify e₁) (simplify e₂) + +/-! 2.3. Is the `simplify` function correct? In fact, what would it mean for it +to be correct or not? 
Intuitively, for `simplify` to be correct, it must +return an arithmetic expression that yields the same numeric value when +evaluated as the original expression. + +Given an environment `env` and an expression `e`, state (without proving it) +the property that the value of `e` after simplification is the same as the +value of `e` before. -/ + +lemma simplify_correct (env : string → ℤ) (e : aexp) : + true := -- replace `true` by your lemma statement +sorry + + +/-! ## Question 3: λ-Terms + +3.1. Complete the following definitions, by replacing the `sorry` markers by +terms of the expected type. + +Hint: A procedure for doing so systematically is described in Section 1.1.4 of +the Hitchhiker's Guide. As explained there, you can use `_` as a placeholder +while constructing a term. By hovering over `_`, you will see the current +logical context. -/ + +def I : α → α := +λa, a + +def K : α → β → α := +λa b, a + +def C : (α → β → γ) → β → α → γ := +sorry + +def proj_1st : α → α → α := +sorry + +/-! Please give a different answer than for `proj_1st`. -/ + +def proj_2nd : α → α → α := +sorry + +def some_nonsense : (α → β → γ) → α → (α → γ) → β → γ := +sorry + +/-! 3.2. Show the typing derivation for your definition of `C` above, on paper +or using ASCII or Unicode art. You might find the characters `–` (to draw +horizontal bars) and `⊢` useful. -/ + +-- write your solution in a comment here or on paper + +end LoVe diff --git a/lean/love02_backward_proofs_demo.lean b/lean/love02_backward_proofs_demo.lean new file mode 100644 index 0000000..7710759 --- /dev/null +++ b/lean/love02_backward_proofs_demo.lean @@ -0,0 +1,384 @@ +import .love01_definitions_and_statements_demo + + +/-! # LoVe Demo 2: Backward Proofs + +A __tactic__ operates on a proof goal and either proves it or creates new +subgoals. Tactics are a __backward__ proof mechanism: They start from the goal +and work towards the available hypotheses and lemmas. 
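As a first taste (a hedged sketch with a hypothetical lemma name; each tactic is explained below), a backward proof starts from the goal and reduces it until nothing remains to be shown:

```lean
lemma id_of_prop (a : Prop) :
  a → a :=
begin
  intro ha,   -- the goal `⊢ a → a` becomes `ha : a ⊢ a`
  exact ha    -- the hypothesis `ha` closes the goal
end
```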
-/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + +namespace backward_proofs + + +/-! ## Tactic Mode + +Syntax of tactical proofs: + + begin + _tactic₁_, + …, + _tacticN_ + end -/ + +lemma fst_of_two_props : + ∀a b : Prop, a → b → a := +begin + intros a b, + intros ha hb, + apply ha +end + + +/-! ## Basic Tactics + +`intro`(`s`) moves `∀`-quantified variables, or the assumptions of +implications `→`, from the goal's conclusion (after `⊢`) into the goal's +hypotheses (before `⊢`). + +`apply` matches the goal's conclusion with the conclusion of the specified lemma +and adds the lemma's hypotheses as new goals. -/ + +lemma fst_of_two_props₂ (a b : Prop) (ha : a) (hb : b) : + a := +begin + apply ha +end + +/-! Terminal tactic syntax: + + by _tactic_ + +abbreviates + + begin + _tactic_ + end -/ + +lemma fst_of_two_props₃ (a b : Prop) (ha : a) (hb : b) : + a := +by apply ha + +lemma prop_comp (a b c : Prop) (hab : a → b) (hbc : b → c) : + a → c := +begin + intro ha, + apply hbc, + apply hab, + apply ha +end + +/-! `exact` matches the goal's conclusion with the specified lemma, closing the +goal. We can often use `apply` in such situations, but `exact` communicates our +intentions better. -/ + +lemma fst_of_two_props₄ (a b : Prop) (ha : a) (hb : b) : + a := +by exact ha + +/-! `assumption` finds a hypothesis from the local context that matches the +goal's conclusion and applies it to prove the goal. -/ + +lemma fst_of_two_props₅ (a b : Prop) (ha : a) (hb : b) : + a := +by assumption + +/-! `refl` proves `l = r`, where the two sides are syntactically equal up to +computation. Computation means unfolding of definitions, β-reduction +(application of λ to an argument), `let`, and more. 
-/ + +lemma α_example {α β : Type} (f : α → β) : + (λx, f x) = (λy, f y) := +begin + refl +end + +lemma α_example₂ {α β : Type} (f : α → β) : + (λx, f x) = (λy, f y) := +by refl + +lemma β_example {α β : Type} (f : α → β) (a : α) : + (λx, f x) a = f a := +by refl + +def double (n : ℕ) : ℕ := +n + n + +lemma δ_example : + double 5 = 5 + 5 := +by refl + +lemma ζ_example : + (let n : ℕ := 2 in n + n) = 4 := +by refl + +lemma η_example {α β : Type} (f : α → β) : + (λx, f x) = f := +by refl + +lemma ι_example {α β : Type} (a : α) (b : β) : + prod.fst (a, b) = a := +by refl + + +/-! ## Reasoning about Logical Connectives and Quantifiers + +Introduction rules: -/ + +#check true.intro +#check not.intro +#check and.intro +#check or.intro_left +#check or.intro_right +#check iff.intro +#check exists.intro + +/-! Elimination rules: -/ + +#check false.elim +#check and.elim_left +#check and.elim_right +#check or.elim +#check iff.elim_left +#check iff.elim_right +#check exists.elim + +/-! Definition of `¬` and related lemmas: -/ + +#print not +#check not_def +#check classical.em +#check classical.by_contradiction + +lemma and_swap (a b : Prop) : + a ∧ b → b ∧ a := +begin + intro hab, + apply and.intro, + apply and.elim_right, + exact hab, + apply and.elim_left, + exact hab +end + +/-! The `{ … }` combinator focuses on the first subgoal. The tactic inside must +fully prove it. -/ + +lemma and_swap₂ : + ∀a b : Prop, a ∧ b → b ∧ a := +begin + intros a b hab, + apply and.intro, + { exact and.elim_right hab }, + { exact and.elim_left hab } +end + +/-! Notice above how we pass the hypothesis `hab` directly to the lemmas +`and.elim_right` and `and.elim_left`, instead of waiting for the lemmas' +assumptions to appear as new subgoals. This is a small forward step in an +otherwise backward proof.
-/ + +lemma or_swap (a b : Prop) : + a ∨ b → b ∨ a := +begin + intros hab, + apply or.elim hab, + { intro ha, + exact or.intro_right _ ha }, + { intro hb, + exact or.intro_left _ hb } +end + +lemma modus_ponens (a b : Prop) : + (a → b) → a → b := +begin + intros hab ha, + apply hab, + exact ha +end + +lemma not_not_intro (a : Prop) : + a → ¬¬ a := +begin + intro ha, + apply not.intro, + intro hna, + apply hna, + exact ha +end + +lemma not_not_intro₂ (a : Prop) : + a → ¬¬ a := +begin + intros ha hna, + apply hna, + exact ha +end + +lemma nat_exists_double_iden : + ∃n : ℕ, double n = n := +begin + apply exists.intro 0, + refl +end + + +/-! ## Reasoning about Equality -/ + +#check eq.refl +#check eq.symm +#check eq.trans +#check eq.subst + +/-! The above rules can be used directly: -/ + +lemma cong_fst_arg {α : Type} (a a' b : α) + (f : α → α → α) (ha : a = a') : + f a b = f a' b := +begin + apply eq.subst ha, + apply eq.refl +end + +lemma cong_two_args {α : Type} (a a' b b' : α) + (f : α → α → α) (ha : a = a') (hb : b = b') : + f a b = f a' b' := +begin + apply eq.subst ha, + apply eq.subst hb, + apply eq.refl +end + +/-! `rw` applies a single equation as a left-to-right rewrite rule, once. To +apply an equation right-to-left, prefix its name with `←`. -/ + +lemma cong_two_args₂ {α : Type} (a a' b b' : α) + (f : α → α → α) (ha : a = a') (hb : b = b') : + f a b = f a' b' := +begin + rw ha, + rw hb +end + +lemma a_proof_of_negation₃ (a : Prop) : + a → ¬¬ a := +begin + rw not_def, + rw not_def, + intro ha, + intro hna, + apply hna, + exact ha +end + +/-! `simp` applies a standard set of rewrite rules (the __simp set__) +exhaustively. The set can be extended using the `@[simp]` attribute. Lemmas can +be temporarily added to the simp set with the syntax +`simp [_lemma₁_, …, _lemmaN_]`. -/ + +lemma cong_two_args_etc {α : Type} (a a' b b' : α) + (g : α → α → ℕ → α) (ha : a = a') (hb : b = b') : + g a b (1 + 1) = g a' b' 2 := +by simp [ha, hb] + +/-! 
`cc` applies __congruence closure__ to derive new equalities. -/ + +lemma cong_two_args₃ {α : Type} (a a' b b' : α) + (f : α → α → α) (ha : a = a') (hb : b = b') : + f a b = f a' b' := +by cc + +/-! `cc` can also reason up to associativity and commutativity of `+`, `*`, +and other binary operators. -/ + +lemma cong_assoc_comm (a a' b c : ℝ) (f : ℝ → ℝ) + (ha : a = a') : + f (a + b + c) = f (c + b + a') := +by cc + + +/-! ## Proofs by Mathematical Induction + +`induction'` performs induction on the specified variable. It gives rise to one +subgoal per constructor. -/ + +lemma add_zero (n : ℕ) : + add 0 n = n := +begin + induction' n, + { refl }, + { simp [add, ih] } +end + +/-! We use `induction'`, a variant of Lean's built-in `induction` tactic. The +two tactics are similar, but `induction'` is more user-friendly. -/ + +lemma add_succ (m n : ℕ) : + add (nat.succ m) n = nat.succ (add m n) := +begin + induction' n, + { refl }, + { simp [add, ih] } +end + +lemma add_comm (m n : ℕ) : + add m n = add n m := +begin + induction' n, + { simp [add, add_zero] }, + { simp [add, add_succ, ih] } +end + +lemma add_assoc (l m n : ℕ) : + add (add l m) n = add l (add m n) := +begin + induction' n, + { refl }, + { simp [add, ih] } +end + +/-! `cc` is extensible. We can register `add` as a commutative and associative +operator using the type class instance mechanism (explained in lecture 4). This +is useful for the `cc` invocation below. -/ + +@[instance] def add.is_commutative : is_commutative ℕ add := +{ comm := add_comm } + +@[instance] def add.is_associative : is_associative ℕ add := +{ assoc := add_assoc } + +lemma mul_add (l m n : ℕ) : + mul l (add m n) = add (mul l m) (mul l n) := +begin + induction' n, + { refl }, + { simp [add, mul, ih], + cc } +end + + +/-! ## Cleanup Tactics + +`rename` changes the name of a variable or hypothesis. + +`clear` removes unused variables or hypotheses. 
-/ + +lemma cleanup_example (a b c : Prop) (ha : a) (hb : b) + (hab : a → b) (hbc : b → c) : + c := +begin + clear ha hab a, + apply hbc, + clear hbc c, + rename hb h, + exact h +end + +end backward_proofs + +end LoVe diff --git a/lean/love02_backward_proofs_exercise_sheet.lean b/lean/love02_backward_proofs_exercise_sheet.lean new file mode 100644 index 0000000..b55f02a --- /dev/null +++ b/lean/love02_backward_proofs_exercise_sheet.lean @@ -0,0 +1,146 @@ +import .love02_backward_proofs_demo + + +/-! # LoVe Exercise 2: Backward Proofs -/ + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + +namespace backward_proofs + + +/-! ## Question 1: Connectives and Quantifiers + +1.1. Carry out the following proofs using basic tactics. + +Hint: Some strategies for carrying out such proofs are described at the end of +Section 2.3 in the Hitchhiker's Guide. -/ + +lemma I (a : Prop) : + a → a := +sorry + +lemma K (a b : Prop) : + a → b → b := +sorry + +lemma C (a b c : Prop) : + (a → b → c) → b → a → c := +sorry + +lemma proj_1st (a : Prop) : + a → a → a := +sorry + +/-! Please give a different answer than for `proj_1st`: -/ + +lemma proj_2nd (a : Prop) : + a → a → a := +sorry + +lemma some_nonsense (a b c : Prop) : + (a → b → c) → a → (a → c) → b → c := +sorry + +/-! 1.2. Prove the contraposition rule using basic tactics. -/ + +lemma contrapositive (a b : Prop) : + (a → b) → ¬ b → ¬ a := +sorry + +/-! 1.3. Prove the distributivity of `∀` over `∧` using basic tactics. + +Hint: This exercise is tricky, especially the right-to-left direction. Some +forward reasoning, like in the proof of `and_swap₂` in the lecture, might be +necessary. -/ + +lemma forall_and {α : Type} (p q : α → Prop) : + (∀x, p x ∧ q x) ↔ (∀x, p x) ∧ (∀x, q x) := +sorry + + +/-! ## Question 2: Natural Numbers + +2.1. Prove the following recursive equations on the first argument of the +`mul` operator defined in lecture 1. 
-/ + +#check mul + +lemma mul_zero (n : ℕ) : + mul 0 n = 0 := +sorry + +lemma mul_succ (m n : ℕ) : + mul (nat.succ m) n = add (mul m n) n := +sorry + +/-! 2.2. Prove commutativity and associativity of multiplication using the +`induction'` tactic. Choose the induction variable carefully. -/ + +lemma mul_comm (m n : ℕ) : + mul m n = mul n m := +sorry + +lemma mul_assoc (l m n : ℕ) : + mul (mul l m) n = mul l (mul m n) := +sorry + +/-! 2.3. Prove the symmetric variant of `mul_add` using `rw`. To apply +commutativity at a specific position, instantiate the rule by passing some +arguments (e.g., `mul_comm _ l`). -/ + +lemma add_mul (l m n : ℕ) : + mul (add l m) n = add (mul n l) (mul n m) := +sorry + + +/-! ## Question 3 (**optional**): Intuitionistic Logic + +Intuitionistic logic is extended to classical logic by assuming a classical +axiom. There are several possibilities for the choice of axiom. In this +question, we are concerned with the logical equivalence of three different +axioms: -/ + +def excluded_middle : Prop := +∀a : Prop, a ∨ ¬ a + +def peirce : Prop := +∀a b : Prop, ((a → b) → a) → a + +def double_negation : Prop := +∀a : Prop, (¬¬ a) → a + +/-! For the proofs below, please avoid using lemmas from Lean's `classical` +namespace, as this would defeat the purpose of the exercise. + +3.1 (**optional**). Prove the following implication using tactics. + +Hint: You will need `or.elim` and `false.elim`. You can use +`rw excluded_middle` to unfold the definition of `excluded_middle`, +and similarly for `peirce`. -/ + +lemma peirce_of_em : + excluded_middle → peirce := +sorry + +/-! 3.2 (**optional**). Prove the following implication using tactics. -/ + +lemma dn_of_peirce : + peirce → double_negation := +sorry + +/-! 
We leave the remaining implication for the homework: -/ + +namespace sorry_lemmas + +lemma em_of_dn : + double_negation → excluded_middle := +sorry + +end sorry_lemmas + +end backward_proofs + +end LoVe diff --git a/lean/love03_forward_proofs_demo.lean b/lean/love03_forward_proofs_demo.lean new file mode 100644 index 0000000..2032c66 --- /dev/null +++ b/lean/love03_forward_proofs_demo.lean @@ -0,0 +1,407 @@ +import .love01_definitions_and_statements_demo + + +/-! # LoVe Demo 3: Forward Proofs + +When developing a proof, it often makes sense to work __forward__: to start with +what we already know and proceed step by step towards our goal. Lean's +structured proofs and raw proof terms are two styles that support forward +reasoning. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + +namespace forward_proofs + + +/-! ## Structured Constructs + +Structured proofs are syntactic sugar sprinkled on top of Lean's +__proof terms__. + +The simplest kind of structured proof is the name of a lemma, possibly with +arguments. -/ + +lemma add_comm (m n : ℕ) : + add m n = add n m := +sorry + +lemma add_comm_zero_left (n : ℕ) : + add 0 n = add n 0 := +add_comm 0 n + +lemma add_comm_zero_left₂ (n : ℕ) : + add 0 n = add n 0 := +by exact add_comm 0 n + +/-! `fix` and `assume` move `∀`-quantified variables and assumptions from the +goal into the local context. They can be seen as structured versions of the +`intros` tactic. + +`show` repeats the goal to prove. It is useful as documentation or to rephrase +the goal (up to computation). -/ + +lemma fst_of_two_props : + ∀a b : Prop, a → b → a := +fix a b : Prop, +assume ha : a, +assume hb : b, +show a, from + ha + +lemma fst_of_two_props₂ (a b : Prop) (ha : a) (hb : b) : + a := +show a, from + begin + exact ha + end + +lemma fst_of_two_props₃ (a b : Prop) (ha : a) (hb : b) : + a := +ha + +/-! `have` proves an intermediate lemma, which can refer to the local context.
-/ + +lemma prop_comp (a b c : Prop) (hab : a → b) (hbc : b → c) : + a → c := +assume ha : a, +have hb : b := + hab ha, +have hc : c := + hbc hb, +show c, from + hc + +lemma prop_comp₂ (a b c : Prop) (hab : a → b) (hbc : b → c) : + a → c := +assume ha : a, +show c, from + hbc (hab ha) + + +/-! ## Forward Reasoning about Connectives and Quantifiers -/ + +lemma and_swap (a b : Prop) : + a ∧ b → b ∧ a := +assume hab : a ∧ b, +have ha : a := + and.elim_left hab, +have hb : b := + and.elim_right hab, +show b ∧ a, from + and.intro hb ha + +lemma or_swap (a b : Prop) : + a ∨ b → b ∨ a := +assume hab : a ∨ b, +show b ∨ a, from + or.elim hab + (assume ha : a, + show b ∨ a, from + or.intro_right b ha) + (assume hb : b, + show b ∨ a, from + or.intro_left a hb) + +def double (n : ℕ) : ℕ := +n + n + +lemma nat_exists_double_iden : + ∃n : ℕ, double n = n := +exists.intro 0 + (show double 0 = 0, from + by refl) + +lemma nat_exists_double_iden₂ : + ∃n : ℕ, double n = n := +exists.intro 0 (by refl) + +lemma modus_ponens (a b : Prop) : + (a → b) → a → b := +assume hab : a → b, +assume ha : a, +show b, from + hab ha + +lemma not_not_intro (a : Prop) : + a → ¬¬ a := +assume ha : a, +assume hna : ¬ a, +show false, from + hna ha + +lemma forall.one_point {α : Type} (t : α) (p : α → Prop) : + (∀x, x = t → p x) ↔ p t := +iff.intro + (assume hall : ∀x, x = t → p x, + show p t, from + begin + apply hall t, + refl + end) + (assume hp : p t, + fix x, + assume heq : x = t, + show p x, from + begin + rw heq, + exact hp + end) + +lemma beast_666 (beast : ℕ) : + (∀n, n = 666 → beast ≥ n) ↔ beast ≥ 666 := +forall.one_point _ _ + +#print beast_666 + +lemma exists.one_point {α : Type} (t : α) (p : α → Prop) : + (∃x : α, x = t ∧ p x) ↔ p t := +iff.intro + (assume hex : ∃x, x = t ∧ p x, + show p t, from + exists.elim hex + (fix x, + assume hand : x = t ∧ p x, + show p t, from + by cc)) + (assume hp : p t, + show ∃x : α, x = t ∧ p x, from + exists.intro t + (show t = t ∧ p t, from + by cc)) + + +/-! 
## Calculational Proofs + +In informal mathematics, we often use transitive chains of equalities, +inequalities, or equivalences (e.g., `a ≥ b ≥ c`). In Lean, such calculational +proofs are supported by `calc`. + +Syntax: + + calc _term₀_ + _op₁_ _term₁_ : + _proof₁_ + ... _op₂_ _term₂_ : + _proof₂_ + ⋮ + ... _opN_ _termN_ : + _proofN_ -/ + +lemma two_mul_example (m n : ℕ) : + 2 * m + n = m + n + m := +calc 2 * m + n + = (m + m) + n : + by rw two_mul +... = m + n + m : + by cc + +/-! `calc` saves some repetition, some `have` labels, and some transitive +reasoning: -/ + +lemma two_mul_example₂ (m n : ℕ) : + 2 * m + n = m + n + m := +have h₁ : 2 * m + n = (m + m) + n := + by rw two_mul, +have h₂ : (m + m) + n = m + n + m := + by cc, +show _, from + eq.trans h₁ h₂ + + +/-! ## Forward Reasoning with Tactics + +The `have`, `let`, and `calc` structured proof commands are also available as +tactics. Even in tactic mode, it can be useful to state intermediate results and +definitions in a forward fashion. + +Observe that the syntax for the tactic `let` is slightly different from that for +the structured proof command `let`, with `,` instead of `in`. -/ + +lemma prop_comp₃ (a b c : Prop) (hab : a → b) (hbc : b → c) : + a → c := +begin + intro ha, + have hb : b := + hab ha, + let c' := c, + have hc : c' := + hbc hb, + exact hc +end + + +/-! ## Dependent Types + +Dependent types are the defining feature of the dependent type theory family of +logics. + +Consider a function `pick` that takes a number `n : ℕ` and that returns a number +between 0 and `n`. Conceptually, `pick` has a dependent type, namely + + `(n : ℕ) → {i : ℕ // i ≤ n}` + +We can think of this type as a `ℕ`-indexed family, where each member's type may +depend on the index: + + `pick n : {i : ℕ // i ≤ n}` + +But a type may also depend on another type, e.g., `list` (or `λα, list α`) and +`λα, α → α`. + +A term may depend on a type, e.g., `λα, λx : α, x` (a polymorphic identity +function).
+ +Of course, a term may also depend on a term. + +Unless otherwise specified, a __dependent type__ means a type depending on a +term. This is what we mean when we say that simple type theory does not support +dependent types. + +In summary, there are four cases for `λx, t` in the calculus of inductive +constructions (cf. Barendregt's `λ`-cube): + +Body (`t`) | | Argument (`x`) | Description +---------- | ------------ | -------------- | ------------------------------ +A term | depending on | a term | Simply typed `λ`-expression +A type | depending on | a term | Dependent type (strictly speaking) +A term | depending on | a type | Polymorphic term +A type | depending on | a type | Type constructor + +Revised typing rules: + + C ⊢ t : (x : σ) → τ[x] C ⊢ u : σ + ———————————————————————————————————— App' + C ⊢ t u : τ[u] + + C, x : σ ⊢ t : τ[x] + ———————————————————————————————— Lam' + C ⊢ (λx : σ, t) : (x : σ) → τ[x] + +These two rules degenerate to `App` and `Lam` if `x` does not occur in `τ[x]` + +Example of `App'`: + + ⊢ pick : (x : ℕ) → {y : ℕ // y ≤ x} ⊢ 5 : ℕ + ——————————————————————————————————————————————— App' + ⊢ pick 5 : {y : ℕ // y ≤ 5} + +Example of `Lam'`: + + α : Type, x : α ⊢ x : α + ——————————————————————————————— Lam or Lam' + α : Type ⊢ (λx : α, x) : α → α + ————————————————————————————————————————————— Lam' + ⊢ (λα : Type, λx : α, x) : (α : Type) → α → α + +Regrettably, the intuitive syntax `(x : σ) → τ` is not available in Lean. +Instead, we must write `∀x : σ, τ` to specify a dependent type. + +Aliases: + + `σ → τ` := `∀_ : σ, τ` + `Π` := `∀` + + +## The PAT Principle + +`→` is used both as the implication symbol and as the type constructor of +functions. Similarly, `∀` is used both as a quantifier and in dependent types. + +The two pairs of concepts not only look the same, they are the same, by the PAT +principle: + +* PAT = propositions as types; +* PAT = proofs as terms. 
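+
+As a concrete illustration of the dependent types above, here is one possible
+definition of `pick` (a sketch; this simplistic version always picks `n`
+itself): -/
+
+def pick (n : ℕ) : {i : ℕ // i ≤ n} :=
+⟨n, le_refl n⟩
+
+#check pick
+#check pick 5
+
+/-!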
+
+Types:
+
+* `σ → τ` is the type of total functions from `σ` to `τ`;
+* `∀x : σ, τ[x]` is the dependent function type from `x : σ` to `τ[x]`.
+
+Propositions:
+
+* `P → Q` can be read as "`P` implies `Q`", or as the type of functions mapping
+  proofs of `P` to proofs of `Q`.
+* `∀x : σ, Q[x]` can be read as "for all `x`, `Q[x]`", or as the type of
+  functions mapping values `x` of type `σ` to proofs of `Q[x]`.
+
+Terms:
+
+* A constant is a term.
+* A variable is a term.
+* `t u` is the application of function `t` to value `u`.
+* `λx, t[x]` is a function mapping `x` to `t[x]`.
+
+Proofs:
+
+* A lemma or hypothesis name is a proof.
+* `H t`, which instantiates the leading parameter or quantifier of proof `H`'s
+  statement with term `t`, is a proof.
+* `H G`, which discharges the leading assumption of `H`'s statement with
+  proof `G`, is a proof.
+* `λh : P, H[h]` is a proof of `P → Q`, assuming `H[h]` is a proof of `Q`
+  for `h : P`.
+* `λx : σ, H[x]` is a proof of `∀x : σ, Q[x]`, assuming `H[x]` is a proof of
+  `Q[x]` for `x : σ`. -/
+
+lemma and_swap₃ (a b : Prop) :
+  a ∧ b → b ∧ a :=
+λhab : a ∧ b, and.intro (and.elim_right hab) (and.elim_left hab)
+
+lemma and_swap₄ (a b : Prop) :
+  a ∧ b → b ∧ a :=
+begin
+  intro hab,
+  apply and.intro,
+  apply and.elim_right,
+  exact hab,
+  apply and.elim_left,
+  exact hab
+end
+
+/-! Tactical proofs are reduced to proof terms. -/
+
+#print and_swap₃
+#print and_swap₄
+
+end forward_proofs
+
+
+/-! ## Induction by Pattern Matching
+
+By the PAT principle, a proof by induction is the same as a recursively
+specified proof term. Thus, as an alternative to the `induction'` tactic,
+induction can also be done by pattern matching:
+
+* the induction hypothesis is then available under the name of the lemma we
+  are proving;
+
+* well-foundedness of the argument is often proved automatically. 
-/ + +#check reverse + +lemma reverse_append {α : Type} : + ∀xs ys : list α, + reverse (xs ++ ys) = reverse ys ++ reverse xs +| [] ys := by simp [reverse] +| (x :: xs) ys := by simp [reverse, reverse_append xs] + +lemma reverse_append₂ {α : Type} (xs ys : list α) : + reverse (xs ++ ys) = reverse ys ++ reverse xs := +begin + induction' xs, + { simp [reverse] }, + { simp [reverse, ih] } +end + +lemma reverse_reverse {α : Type} : + ∀xs : list α, reverse (reverse xs) = xs +| [] := by refl +| (x :: xs) := + by simp [reverse, reverse_append, reverse_reverse xs] + +end LoVe diff --git a/lean/love03_forward_proofs_exercise_sheet.lean b/lean/love03_forward_proofs_exercise_sheet.lean new file mode 100644 index 0000000..460ffe1 --- /dev/null +++ b/lean/love03_forward_proofs_exercise_sheet.lean @@ -0,0 +1,125 @@ +import .lovelib + + +/-! # LoVe Exercise 3: Forward Proofs -/ + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Connectives and Quantifiers + +1.1. Supply structured proofs of the following lemmas. -/ + +lemma I (a : Prop) : + a → a := +sorry + +lemma K (a b : Prop) : + a → b → b := +sorry + +lemma C (a b c : Prop) : + (a → b → c) → b → a → c := +sorry + +lemma proj_1st (a : Prop) : + a → a → a := +sorry + +/-! Please give a different answer than for `proj_1st`. -/ + +lemma proj_2nd (a : Prop) : + a → a → a := +sorry + +lemma some_nonsense (a b c : Prop) : + (a → b → c) → a → (a → c) → b → c := +sorry + +/-! 1.2. Supply a structured proof of the contraposition rule. -/ + +lemma contrapositive (a b : Prop) : + (a → b) → ¬ b → ¬ a := +sorry + +/-! 1.3. Supply a structured proof of the distributivity of `∀` over `∧`. -/ + +lemma forall_and {α : Type} (p q : α → Prop) : + (∀x, p x ∧ q x) ↔ (∀x, p x) ∧ (∀x, q x) := +sorry + +/-! 1.4 (**optional**). Reuse, if possible, the lemma `forall_and` from question +1.3 to prove the following instance of the lemma. 
-/
+
+lemma forall_and_inst {α : Type} (r s : α → α → Prop) :
+  (∀x, r x x ∧ s x x) ↔ (∀x, r x x) ∧ (∀x, s x x) :=
+sorry
+
+/-! 1.5. Supply a structured proof of the following property, which can be
+used to pull a `∀`-quantifier past an `∃`-quantifier. -/
+
+lemma forall_exists_of_exists_forall {α : Type} (p : α → α → Prop) :
+  (∃x, ∀y, p x y) → (∀y, ∃x, p x y) :=
+sorry
+
+
+/-! ## Question 2: Chain of Equalities
+
+2.1. Write the following proof using `calc`.
+
+      (a + b) * (a + b)
+    = a * (a + b) + b * (a + b)
+    = a * a + a * b + b * a + b * b
+    = a * a + a * b + a * b + b * b
+    = a * a + 2 * a * b + b * b
+
+Hint: You might need the tactics `simp` and `cc` and the lemmas `mul_add`,
+`add_mul`, and `two_mul`. -/
+
+lemma binomial_square (a b : ℕ) :
+  (a + b) * (a + b) = a * a + 2 * a * b + b * b :=
+sorry
+
+/-! 2.2 (**optional**). Prove the same lemma again, this time as a structured
+proof, with `have` steps corresponding to the `calc` equations. Try to reuse as
+much of the above proof idea as possible, proceeding mechanically. -/
+
+lemma binomial_square₂ (a b : ℕ) :
+  (a + b) * (a + b) = a * a + 2 * a * b + b * b :=
+sorry
+
+/-! 2.3. Prove the same lemma again, this time using tactics. -/
+
+lemma binomial_square₃ (a b : ℕ) :
+  (a + b) * (a + b) = a * a + 2 * a * b + b * b :=
+begin
+  sorry
+end
+
+
+/-! ## Question 3 (**optional**): One-Point Rules
+
+3.1 (**optional**). Prove that the following wrong formulation of the one-point
+rule for `∀` is inconsistent, using a structured proof. -/
+
+axiom forall.one_point_wrong {α : Type} {t : α} {p : α → Prop} :
+  (∀x : α, x = t ∧ p x) ↔ p t
+
+lemma proof_of_false :
+  false :=
+sorry
+
+/-! 3.2 (**optional**). Prove that the following wrong formulation of the
+one-point rule for `∃` is inconsistent, using a tactical or structured proof. 
-/
+
+axiom exists.one_point_wrong {α : Type} {t : α} {p : α → Prop} :
+  (∃x : α, x = t → p x) ↔ p t
+
+lemma proof_of_false₂ :
+  false :=
+sorry
+
+end LoVe
diff --git a/lean/love04_functional_programming_demo.lean b/lean/love04_functional_programming_demo.lean
new file mode 100644
index 0000000..ae84a5b
--- /dev/null
+++ b/lean/love04_functional_programming_demo.lean
@@ -0,0 +1,527 @@
+import .lovelib
+
+
+/-! # LoVe Demo 4: Functional Programming
+
+We take a closer look at the basics of typed functional programming: inductive
+types, proofs by induction, recursive functions, pattern matching, structures
+(records), and type classes. -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Inductive Types
+
+Recall the definition of type `nat` (= `ℕ`): -/
+
+#print nat
+
+/-! Mottos:
+
+* **No junk**: The type contains no values beyond those expressible using the
+  constructors.
+
+* **No confusion**: Values built in different ways are different.
+
+For `nat` (= `ℕ`):
+
+* "No junk" means that there are no special values, say, `–1` or `ε`, that
+  cannot be expressed using a finite combination of `zero` and `succ`.
+
+* "No confusion" is what ensures that `zero` ≠ `succ x`.
+
+In addition, values of inductive types are always finite. `succ (succ (succ …))`
+is not a value.
+
+
+## Structural Induction
+
+__Structural induction__ is a generalization of mathematical induction to
+inductive types. To prove a property `P[n]` for all natural numbers `n`, it
+suffices to prove the base case
+
+    `P[0]`
+
+and the induction step
+
+    `∀k, P[k] → P[k + 1]`
+
+For lists, the base case is
+
+    `P[[]]`
+
+and the induction step is
+
+    `∀y ys, P[ys] → P[y :: ys]`
+
+In general, there is one subgoal per constructor, and induction hypotheses are
+available for all constructor arguments of the type we are doing the induction
+on. 
-/ + +lemma nat.succ_neq_self (n : ℕ) : + nat.succ n ≠ n := +begin + induction' n, + { simp }, + { simp [ih] } +end + +/-! The `case` tactic can be used to supply custom names, and potentially +reorder the cases. -/ + +lemma nat.succ_neq_self₂ (n : ℕ) : + nat.succ n ≠ n := +begin + induction' n, + case succ : m IH { + simp [IH] }, + case zero { + simp } +end + + +/-! ## Structural Recursion + +__Structural recursion__ is a form of recursion that allows us to peel off +one or more constructors from the value on which we recurse. Such functions are +guaranteed to call themselves only finitely many times before the recursion +stops. This is a prerequisite for establishing that the function terminates. -/ + +def fact : ℕ → ℕ +| 0 := 1 +| (n + 1) := (n + 1) * fact n + +def fact₂ : ℕ → ℕ +| 0 := 1 +| 1 := 1 +| (n + 1) := (n + 1) * fact₂ n + +/-! For structurally recursive functions, Lean can automatically prove +termination. For more general recursive schemes, the termination check may fail. +Sometimes it does so for a good reason, as in the following example: -/ + +-- fails +def illegal : ℕ → ℕ +| n := illegal n + 1 + +constant immoral : ℕ → ℕ + +axiom immoral_eq (n : ℕ) : + immoral n = immoral n + 1 + +lemma proof_of_false : + false := +have immoral 0 = immoral 0 + 1 := + immoral_eq 0, +have immoral 0 - immoral 0 = immoral 0 + 1 - immoral 0 := + by cc, +have 0 = 1 := + by simp [*] at *, +show false, from + by cc + + +/-! ## Pattern Matching Expressions + + `match` _term₁_, …, _termM_ `with` + | _pattern₁₁_, …, _pattern₁M_ := _result₁_ + ⋮ + | _patternN₁_, …, _patternNM_ := _resultN_ + `end` + +`match` allows nonrecursive pattern matching within terms. + +In contrast to pattern matching after `lemma` or `def`, the patterns are +separated by commas, so parentheses are optional. 
-/ + +def bcount {α : Type} (p : α → bool) : list α → ℕ +| [] := 0 +| (x :: xs) := + match p x with + | tt := bcount xs + 1 + | ff := bcount xs + end + +def min (a b : ℕ) : ℕ := +if a ≤ b then a else b + + +/-! ## Structures + +Lean provides a convenient syntax for defining records, or structures. These are +essentially nonrecursive, single-constructor inductive types. -/ + +structure rgb := +(red green blue : ℕ) + +#check rgb.mk +#check rgb.red +#check rgb.green +#check rgb.blue + +namespace rgb_as_inductive + +inductive rgb : Type +| mk : ℕ → ℕ → ℕ → rgb + +def rgb.red : rgb → ℕ +| (rgb.mk r _ _) := r + +def rgb.green : rgb → ℕ +| (rgb.mk _ g _) := g + +def rgb.blue : rgb → ℕ +| (rgb.mk _ _ b) := b + +end rgb_as_inductive + +structure rgba extends rgb := +(alpha : ℕ) + +#print rgba + +def pure_red : rgb := +{ red := 0xff, + green := 0x00, + blue := 0x00 } + +def semitransparent_red : rgba := +{ alpha := 0x7f, + ..pure_red } + +#print pure_red +#print semitransparent_red + +def shuffle (c : rgb) : rgb := +{ red := rgb.green c, + green := rgb.blue c, + blue := rgb.red c } + +/-! `cases'` performs a case distinction on the specified term. This gives rise +to as many subgoals as there are constructors in the definition of the term's +type. The tactic behaves the same as `induction'` except that it does not +produce induction hypotheses. -/ + +lemma shuffle_shuffle_shuffle (c : rgb) : + shuffle (shuffle (shuffle c)) = c := +begin + cases' c, + refl +end + +lemma shuffle_shuffle_shuffle₂ (c : rgb) : + shuffle (shuffle (shuffle c)) = c := +match c with +| rgb.mk _ _ _ := eq.refl _ +end + + +/-! ## Type Classes + +A __type class__ is a structure type combining abstract constants and their +properties. A type can be declared an instance of a type class by providing +concrete definitions for the constants and proving that the properties hold. +Based on the type, Lean retrieves the relevant instance. 
-/ + +#print inhabited + +@[instance] def nat.inhabited : inhabited ℕ := +{ default := 0 } + +@[instance] def list.inhabited {α : Type} : + inhabited (list α) := +{ default := [] } + +#eval inhabited.default ℕ -- result: 0 +#eval inhabited.default (list ℤ) -- result: [] + +def head {α : Type} [inhabited α] : list α → α +| [] := inhabited.default α +| (x :: _) := x + +lemma head_head {α : Type} [inhabited α] (xs : list α) : + head [head xs] = head xs := +begin + cases' xs, + { refl }, + { refl } +end + +#eval head ([] : list ℕ) -- result: 0 + +#check list.head + +@[instance] def fun.inhabited {α β : Type} [inhabited β] : + inhabited (α → β) := +{ default := λa : α, inhabited.default β } + +inductive empty : Type + +@[instance] def fun_empty.inhabited {β : Type} : + inhabited (empty → β) := +{ default := λa : empty, match a with end } + +@[instance] def prod.inhabited {α β : Type} + [inhabited α] [inhabited β] : + inhabited (α × β) := +{ default := (inhabited.default α, inhabited.default β) } + +/-! Here are other type classes without properties: -/ + +#check has_zero +#check has_neg +#check has_add +#check has_one +#check has_inv +#check has_mul + +#check (1 : ℕ) +#check (1 : ℤ) +#check (1 : ℝ) + +/-! We encountered these type classes in lecture 2: -/ + +#print is_commutative +#print is_associative + + +/-! ## Lists + +`list` is an inductive polymorphic type constructed from `nil` and `cons`: -/ + +#print list + +/-! `cases'` can also be used on a hypothesis of the form `l = r`. It matches `r` +against `l` and replaces all occurrences of the variables occurring in `r` with +the corresponding terms in `l` everywhere in the goal. -/ + +lemma injection_example {α : Type} (x y : α) (xs ys : list α) + (h : list.cons x xs = list.cons y ys) : + x = y ∧ xs = ys := +begin + cases' h, + cc +end + +/-! If `r` fails to match `l`, no subgoals emerge; the proof is complete. 
-/ + +lemma distinctness_example {α : Type} (y : α) (ys : list α) + (h : [] = y :: ys) : + false := +by cases' h + +def map {α β : Type} (f : α → β) : list α → list β +| [] := [] +| (x :: xs) := f x :: map xs + +def map₂ {α β : Type} : (α → β) → list α → list β +| _ [] := [] +| f (x :: xs) := f x :: map₂ f xs + +#check list.map + +lemma map_ident {α : Type} (xs : list α) : + map (λx, x) xs = xs := +begin + induction' xs, + case nil { + refl }, + case cons : y ys { + simp [map, ih] } +end + +lemma map_comp {α β γ : Type} (f : α → β) (g : β → γ) + (xs : list α) : + map g (map f xs) = map (λx, g (f x)) xs := +begin + induction' xs, + case nil { + refl }, + case cons : y ys { + simp [map, ih] } +end + +lemma map_append {α β : Type} (f : α → β) (xs ys : list α) : + map f (xs ++ ys) = map f xs ++ map f ys := +begin + induction' xs, + case nil { + refl }, + case cons : y ys { + simp [map, ih] } +end + +def tail {α : Type} : list α → list α +| [] := [] +| (_ :: xs) := xs + +#check list.tail + +def head_opt {α : Type} : list α → option α +| [] := option.none +| (x :: _) := option.some x + +def head_pre {α : Type} : ∀xs : list α, xs ≠ [] → α +| [] hxs := by cc +| (x :: _) _ := x + +#eval head_opt [3, 1, 4] +#eval head_pre [3, 1, 4] (by simp) +-- fails +#eval head_pre ([] : list ℕ) sorry + +def zip {α β : Type} : list α → list β → list (α × β) +| (x :: xs) (y :: ys) := (x, y) :: zip xs ys +| [] _ := [] +| (_ :: _) [] := [] + +#check list.zip + +def length {α : Type} : list α → ℕ +| [] := 0 +| (x :: xs) := length xs + 1 + +#check list.length + +/-! `cases'` can also be used to perform a case distinction on a proposition, in +conjunction with `classical.em`. Two cases emerge: one in which the proposition +is true and one in which it is false. 
-/ + +#check classical.em + +lemma min_add_add (l m n : ℕ) : + min (m + l) (n + l) = min m n + l := +begin + cases' classical.em (m ≤ n), + case inl { + simp [min, h] }, + case inr { + simp [min, h] } +end + +lemma min_add_add₂ (l m n : ℕ) : + min (m + l) (n + l) = min m n + l := +match classical.em (m ≤ n) with +| or.inl h := by simp [min, h] +| or.inr h := by simp [min, h] +end + +lemma min_add_add₃ (l m n : ℕ) : + min (m + l) (n + l) = min m n + l := +if h : m ≤ n then + by simp [min, h] +else + by simp [min, h] + +lemma length_zip {α β : Type} (xs : list α) (ys : list β) : + length (zip xs ys) = min (length xs) (length ys) := +begin + induction' xs, + case nil { + refl }, + case cons : x xs' { + cases' ys, + case nil { + refl }, + case cons : y ys' { + simp [zip, length, ih ys', min_add_add] } } +end + +lemma map_zip {α α' β β' : Type} (f : α → α') (g : β → β') : + ∀xs ys, + map (λab : α × β, (f (prod.fst ab), g (prod.snd ab))) + (zip xs ys) = + zip (map f xs) (map g ys) +| (x :: xs) (y :: ys) := by simp [zip, map, map_zip xs ys] +| [] _ := by refl +| (_ :: _) [] := by refl + + +/-! ## Binary Trees + +Inductive types with constructors taking several recursive arguments define +tree-like objects. __Binary trees__ have nodes with at most two children. -/ + +inductive btree (α : Type) : Type +| empty {} : btree +| node : α → btree → btree → btree + +/-! The type `aexp` of arithmetic expressions was also an example of a tree data +structure. + +The nodes of a tree, whether inner nodes or leaf nodes, often carry labels or +other annotations. + +Inductive trees contain no infinite branches, not even cycles. This is less +expressive than pointer- or reference-based data structures (in imperative +languages) but easier to reason about. + +Recursive definitions (and proofs by induction) work roughly as for lists, but +we may need to recurse (or invoke the induction hypothesis) on several child +nodes. 
-/ + +def mirror {α : Type} : btree α → btree α +| btree.empty := btree.empty +| (btree.node a l r) := btree.node a (mirror r) (mirror l) + +lemma mirror_mirror {α : Type} (t : btree α) : + mirror (mirror t) = t := +begin + induction' t, + case empty { + refl }, + case node : a l r ih_l ih_r { + simp [mirror, ih_l, ih_r] } +end + +lemma mirror_mirror₂ {α : Type} : + ∀t : btree α, mirror (mirror t) = t +| btree.empty := by refl +| (btree.node a l r) := + calc mirror (mirror (btree.node a l r)) + = mirror (btree.node a (mirror r) (mirror l)) : + by refl + ... = btree.node a (mirror (mirror l)) (mirror (mirror r)) : + by refl + ... = btree.node a l (mirror (mirror r)) : + by rw mirror_mirror₂ l + ... = btree.node a l r : + by rw mirror_mirror₂ r + +lemma mirror_eq_empty_iff {α : Type} : + ∀t : btree α, mirror t = btree.empty ↔ t = btree.empty +| btree.empty := by refl +| (btree.node _ _ _) := by simp [mirror] + + +/-! ## Dependent Inductive Types (**optional**) -/ + +#check vector + +inductive vec (α : Type) : ℕ → Type +| nil {} : vec 0 +| cons (a : α) {n : ℕ} (v : vec n) : vec (n + 1) + +#check vec.nil +#check vec.cons + +def list_of_vec {α : Type} : ∀{n : ℕ}, vec α n → list α +| _ vec.nil := [] +| _ (vec.cons a v) := a :: list_of_vec v + +def vec_of_list {α : Type} : + ∀xs : list α, vec α (list.length xs) +| [] := vec.nil +| (x :: xs) := vec.cons x (vec_of_list xs) + +lemma length_list_of_vec {α : Type} : + ∀{n : ℕ} (v : vec α n), list.length (list_of_vec v) = n +| _ vec.nil := by refl +| _ (vec.cons a v) := + by simp [list_of_vec, length_list_of_vec v] + +end LoVe diff --git a/lean/love04_functional_programming_exercise_sheet.lean b/lean/love04_functional_programming_exercise_sheet.lean new file mode 100644 index 0000000..c752bdd --- /dev/null +++ b/lean/love04_functional_programming_exercise_sheet.lean @@ -0,0 +1,159 @@ +import .love03_forward_proofs_demo + + +/-! 
# LoVe Exercise 4: Functional Programming -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Reverse of a List + +We define a new accumulator-based version of `reverse`. The first argument, +`as`, serves as the accumulator. This definition is __tail-recursive__, meaning +that compilers and interpreters can easily optimize the recursion away, +resulting in more efficient code. -/ + +def accurev {α : Type} : list α → list α → list α +| as [] := as +| as (x :: xs) := accurev (x :: as) xs + +/-! 1.1. Our intention is that `accurev [] xs` should be equal to `reverse xs`. +But if we start an induction, we quickly see that the induction hypothesis is +not strong enough. Start by proving the following generalization (using the +`induction'` tactic or pattern matching): -/ + +lemma accurev_eq_reverse_append {α : Type} : + ∀as xs : list α, accurev as xs = reverse xs ++ as := +sorry + +/-! 1.2. Derive the desired equation. -/ + +lemma accurev_eq_reverse {α : Type} (xs : list α) : + accurev [] xs = reverse xs := +sorry + +/-! 1.3. Prove the following property. + +Hint: A one-line inductionless proof is possible. -/ + +lemma accurev_accurev {α : Type} (xs : list α) : + accurev [] (accurev [] xs) = xs := +sorry + +/-! 1.4. Prove the following lemma by structural induction, as a "paper" proof. +This is a good exercise to develop a deeper understanding of how structural +induction works (and is good practice for the final exam). + + lemma accurev_eq_reverse_append {α : Type} : + ∀as xs : list α, accurev as xs = reverse xs ++ as + +Guidelines for paper proofs: + +We expect detailed, rigorous, mathematical proofs. You are welcome to use +standard mathematical notation or Lean structured commands (e.g., `assume`, +`have`, `show`, `calc`). You can also use tactical proofs (e.g., `intro`, +`apply`), but then please indicate some of the intermediate goals, so that we +can follow the chain of reasoning. 
+
+Major proof steps, including applications of induction and invocation of the
+induction hypothesis, must be stated explicitly. For each case of a proof by
+induction, you must list the inductive hypotheses assumed (if any) and the goal
+to be proved. Minor proof steps corresponding to `refl`, `simp`, or `cc` need
+not be justified if you think they are obvious (to humans), but you should say
+which key lemmas they depend on. You should be explicit whenever you use a
+function definition or an introduction rule for an inductive predicate. -/
+
+-- enter your paper proof here
+
+
+/-! ## Question 2: Drop and Take
+
+The `drop` function removes the first `n` elements from the front of a list. -/
+
+def drop {α : Type} : ℕ → list α → list α
+| 0       xs        := xs
+| (_ + 1) []        := []
+| (m + 1) (x :: xs) := drop m xs
+
+/-! 2.1. Define the `take` function, which returns a list consisting of the
+first `n` elements at the front of a list.
+
+To avoid unpleasant surprises in the proofs, we recommend that you follow the
+same recursion pattern as for `drop` above. -/
+
+def take {α : Type} : ℕ → list α → list α :=
+sorry
+
+#eval take 0 [3, 7, 11]   -- expected: []
+#eval take 1 [3, 7, 11]   -- expected: [3]
+#eval take 2 [3, 7, 11]   -- expected: [3, 7]
+#eval take 3 [3, 7, 11]   -- expected: [3, 7, 11]
+#eval take 4 [3, 7, 11]   -- expected: [3, 7, 11]
+
+#eval take 2 ["a", "b", "c"]   -- expected: ["a", "b"]
+
+/-! 2.2. Prove the following lemmas, using `induction'` or pattern matching.
+Notice that they are registered as simplification rules thanks to the `@[simp]`
+attribute. -/
+
+@[simp] lemma drop_nil {α : Type} :
+  ∀n : ℕ, drop n ([] : list α) = [] :=
+sorry
+
+@[simp] lemma take_nil {α : Type} :
+  ∀n : ℕ, take n ([] : list α) = [] :=
+sorry
+
+/-! 2.3. Follow the recursion pattern of `drop` and `take` to prove the
+following lemmas. In other words, for each lemma, there should be three cases,
+and the third case will need to invoke the induction hypothesis. 
+ +Hint: Note that there are three variables in the `drop_drop` lemma (but only two +arguments to `drop`). For the third case, `←add_assoc` might be useful. -/ + +lemma drop_drop {α : Type} : + ∀(m n : ℕ) (xs : list α), drop n (drop m xs) = drop (n + m) xs +| 0 n xs := by refl +-- supply the two missing cases here + +lemma take_take {α : Type} : + ∀(m : ℕ) (xs : list α), take m (take m xs) = take m xs := +sorry + +lemma take_drop {α : Type} : + ∀(n : ℕ) (xs : list α), take n xs ++ drop n xs = xs := +sorry + + +/-! ## Question 3: A Type of λ-Terms + +3.1. Define an inductive type corresponding to the untyped λ-terms, as given +by the following context-free grammar: + + term ::= 'var' string -- variable (e.g., `x`) + | 'lam' string term -- λ-expression (e.g., `λx, t`) + | 'app' term term -- application (e.g., `t u`) -/ + +-- enter your definition here + +/-! 3.2. Register a textual representation of the type `term` as an instance of +the `has_repr` type class. Make sure to supply enough parentheses to guarantee +that the output is unambiguous. -/ + +def term.repr : term → string +-- enter your answer here + +@[instance] def term.has_repr : has_repr term := +{ repr := term.repr } + +/-! 3.3. Test your textual representation. The following command should print +something like `(λx, ((y x) x))`. -/ + +#eval (term.lam "x" (term.app (term.app (term.var "y") (term.var "x")) + (term.var "x"))) + +end LoVe diff --git a/lean/love05_inductive_predicates_demo.lean b/lean/love05_inductive_predicates_demo.lean new file mode 100644 index 0000000..ccab0c8 --- /dev/null +++ b/lean/love05_inductive_predicates_demo.lean @@ -0,0 +1,547 @@ +import .love03_forward_proofs_demo +import .love04_functional_programming_demo + + +/-! # LoVe Demo 5: Inductive Predicates + +__Inductive predicates__, or inductively defined propositions, are a convenient +way to specify functions of type `⋯ → Prop`. 
They are reminiscent of formal
+systems and of the Horn clauses of Prolog, the logic programming language par
+excellence.
+
+A possible view of Lean:
+
+    Lean = functional programming + logic programming + more logic -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Introductory Examples
+
+### Even Numbers
+
+Mathematicians often define sets as the smallest set that meets some criteria.
+For example:
+
+    The set `E` of even natural numbers is defined as the smallest set closed
+    under the following rules: (1) `0 ∈ E` and (2) for every `k ∈ ℕ`, if
+    `k ∈ E`, then `k + 2 ∈ E`.
+
+In Lean, we can define the corresponding "is even" predicate as follows: -/
+
+inductive even : ℕ → Prop
+| zero    : even 0
+| add_two : ∀k : ℕ, even k → even (k + 2)
+
+/-! This should look familiar. We have used the same syntax, except with `Type`
+instead of `Prop`, for inductive types.
+
+The above command introduces a new unary predicate `even` as well as two
+constructors, `even.zero` and `even.add_two`, which can be used to build proof
+terms. Thanks to the "no junk" guarantee of inductive definitions, `even.zero`
+and `even.add_two` are the only two ways to construct proofs of `even`.
+
+By the PAT principle, `even` can be seen as an inductive type, the values being
+the proof terms. -/
+
+lemma even_4 :
+  even 4 :=
+have even_0 : even 0 :=
+  even.zero,
+have even_2 : even 2 :=
+  even.add_two _ even_0,
+show even 4, from
+  even.add_two _ even_2
+
+/-! Why can we not simply define `even` recursively? Indeed, why not? -/
+
+def even₂ : ℕ → bool
+| 0       := tt
+| 1       := ff
+| (k + 2) := even₂ k
+
+/-! There are advantages and disadvantages to both styles.
+
+The recursive version requires us to specify a case for 1 (returning `ff`), and
+it requires us to worry about termination. On the other hand, because it is
+computational, it works well with `refl`, `simp`, `#reduce`, and `#eval`. 
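+For example: -/
+
+#eval even₂ 4   -- result: tt
+#eval even₂ 7   -- result: ff
+
+/-!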
+
+The inductive version is often considered more abstract and elegant. Each rule
+is stated independently of the others.
+
+Yet another way to define `even` is as a nonrecursive definition: -/
+
+def even₃ (k : ℕ) : bool :=
+k % 2 = 0
+
+/-! Mathematicians would probably find this the most satisfactory definition.
+But the inductive version is a convenient, intuitive example that is typical of
+many realistic inductive definitions.
+
+
+### Tennis Games
+
+Transition systems consist of transition rules, which together specify a
+binary predicate connecting a "before" and an "after" state. As a simple
+specimen of a transition system, we consider the possible transitions, in a
+game of tennis, starting from 0–0. -/
+
+inductive score : Type
+| vs       : ℕ → ℕ → score
+| adv_srv  : score
+| adv_rcv  : score
+| game_srv : score
+| game_rcv : score
+
+infixr ` – ` : 10 := score.vs
+
+inductive step : score → score → Prop
+| srv_0_15    : ∀n, step (0–n) (15–n)
+| srv_15_30   : ∀n, step (15–n) (30–n)
+| srv_30_40   : ∀n, step (30–n) (40–n)
+| srv_40_game : ∀n, n < 40 → step (40–n) score.game_srv
+| srv_40_adv  : step (40–40) score.adv_srv
+| srv_adv_40  : step score.adv_srv (40–40)
+| rcv_0_15    : ∀n, step (n–0) (n–15)
+| rcv_15_30   : ∀n, step (n–15) (n–30)
+| rcv_30_40   : ∀n, step (n–30) (n–40)
+| rcv_40_game : ∀n, n < 40 → step (n–40) score.game_rcv
+| rcv_40_adv  : step (40–40) score.adv_rcv
+| rcv_adv_40  : step score.adv_rcv (40–40)
+
+infixr ` ⇒ ` := step
+
+/-! We can ask, and formally answer, questions such as: Can the score ever
+return to 0–0? -/
+
+lemma no_step_to_0_0 (s : score) :
+  ¬ step s (0–0) :=
+begin
+  intro h,
+  cases h
+end
+
+
+/-! ### Reflexive Transitive Closure
+
+Our last introductory example is the reflexive transitive closure of a
+relation `r`, modeled as a binary predicate `star r`. 
-/ + +inductive star {α : Type} (r : α → α → Prop) : α → α → Prop +| base (a b : α) : r a b → star a b +| refl (a : α) : star a a +| trans (a b c : α) : star a b → star b c → star a c + +/-! The first rule embeds `r` into `star r`. The second rule achieves the +reflexive closure. The third rule achieves the transitive closure. + +The definition is truly elegant. If you doubt this, try implementing `star` as a +recursive function: -/ + +def star_rec {α : Type} (r : α → α → bool) : + α → α → bool := +sorry + + +/-! ### A Nonexample + +Not all inductive definitions admit a least solution. -/ + +-- fails +inductive illegal₂ : Prop +| intro : ¬ illegal₂ → illegal₂ + + +/-! ## Logical Symbols + +The truth values `false` and `true`, the connectives `∧` and `∨`, the +`∃`-quantifier, and the equality predicate `=` are all defined as inductive +propositions or predicates. In contrast, `∀` (= `Π`) and `→` are built into +the logic. + +Syntactic sugar: + + `∃x : α, p` := `Exists (λx : α, p)` + `x = y` := `eq x y` -/ + +namespace logical_symbols + +inductive and (a b : Prop) : Prop +| intro : a → b → and + +inductive or (a b : Prop) : Prop +| intro_left : a → or +| intro_right : b → or + +inductive iff (a b : Prop) : Prop +| intro : (a → b) → (b → a) → iff + +inductive Exists {α : Type} (p : α → Prop) : Prop +| intro : ∀a : α, p a → Exists + +inductive true : Prop +| intro : true + +inductive false : Prop + +inductive eq {α : Type} : α → α → Prop +| refl : ∀a : α, eq a a + +end logical_symbols + +#print and +#print or +#print iff +#print Exists +#print true +#print false +#print eq + + +/-! ## Rule Induction + +Just as we can perform induction on a term, we can perform induction on a proof +term. + +This is called __rule induction__, because the induction is on the introduction +rules (i.e., the constructors of the proof term). Thanks to the PAT principle, +this works as expected. 
-/ + +lemma mod_two_eq_zero_of_even (n : ℕ) (h : even n) : + n % 2 = 0 := +begin + induction' h, + case zero { + refl }, + case add_two : k hk ih { + simp [ih] } +end + +lemma not_even_two_mul_add_one (n : ℕ) : + ¬ even (2 * n + 1) := +begin + intro h, + induction' h, + apply ih (n - 1), + cases' n, + case zero { + linarith }, + case succ { + simp [nat.succ_eq_add_one] at *, + linarith } +end + +/-! `linarith` proves goals involving linear arithmetic equalities or +inequalities. "Linear" means it works only with `+` and `-`, not `*` and `/` +(but multiplication by a constant is supported). -/ + +lemma linarith_example (i : ℤ) (hi : i > 5) : + 2 * i + 3 > 11 := +by linarith + +lemma star_star_iff_star {α : Type} (r : α → α → Prop) + (a b : α) : + star (star r) a b ↔ star r a b := +begin + apply iff.intro, + { intro h, + induction' h, + case base : a b hab { + exact hab }, + case refl : a { + apply star.refl }, + case trans : a b c hab hbc ihab ihbc { + apply star.trans a b, + { exact ihab }, + { exact ihbc } } }, + { intro h, + apply star.base, + exact h } +end + +@[simp] lemma star_star_eq_star {α : Type} + (r : α → α → Prop) : + star (star r) = star r := +begin + apply funext, + intro a, + apply funext, + intro b, + apply propext, + apply star_star_iff_star +end + +#check funext +#check propext + + +/-! ## Elimination + +Given an inductive predicate `p`, its introduction rules typically have the form +`∀…, ⋯ → p …` and can be used to prove goals of the form `⊢ p …`. + +Elimination works the other way around: It extracts information from a lemma or +hypothesis of the form `p …`. Elimination takes various forms: pattern matching, +the `cases'` and `induction'` tactics, and custom elimination rules (e.g., +`and.elim_left`). + +* `cases'` works like `induction'` but without induction hypothesis. + +* `match` is available as well, but it corresponds to dependently typed pattern + matching (cf. `vector` in lecture 4). 
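For example, `cases'` can invert a hypothesis of the form `even (n + 2)`. The
`zero` case is discharged automatically, since `0` cannot be unified with
`n + 2` (a small sketch; `assumption` finds the emerging hypothesis regardless
of how `cases'` names it): -/

lemma even_of_even_add_two (n : ℕ) (hn : even (n + 2)) :
  even n :=
begin
  cases' hn,      -- only the `add_two` case survives unification
  assumption
end

/-!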
+ +Now we can finally analyze how `cases' h`, where `h : l = r`, and how +`cases' classical.em h` work. -/ + +#print eq + +lemma cases_eq_example {α : Type} (l r : α) (h : l = r) + (p : α → α → Prop) : + p l r := +begin + cases' h, + sorry +end + +#check classical.em +#print or + +lemma cases_classical_em_example {α : Type} (a : α) + (p q : α → Prop) : + q a := +begin + have h : p a ∨ ¬ p a := + classical.em (p a), + cases' h, + case inl { + sorry }, + case inr { + sorry } +end + +/-! Often it is convenient to rewrite concrete terms of the form `p (c …)`, +where `c` is typically a constructor. We can state and prove an +__inversion rule__ to support such eliminative reasoning. + +Typical inversion rule: + + `∀x y, p (c x y) → (∃…, ⋯ ∧ ⋯) ∨ ⋯ ∨ (∃…, ⋯ ∧ ⋯)` + +It can be useful to combine introduction and elimination into a single lemma, +which can be used for rewriting both the hypotheses and conclusions of goals: + + `∀x y, p (c x y) ↔ (∃…, ⋯ ∧ ⋯) ∨ ⋯ ∨ (∃…, ⋯ ∧ ⋯)` -/ + +lemma even_iff (n : ℕ) : + even n ↔ n = 0 ∨ (∃m : ℕ, n = m + 2 ∧ even m) := +begin + apply iff.intro, + { intro hn, + cases' hn, + case zero { + simp }, + case add_two : k hk { + apply or.intro_right, + apply exists.intro k, + simp [hk] } }, + { intro hor, + cases' hor, + case inl : heq { + simp [heq, even.zero] }, + case inr : hex { + cases' hex with k hand, + cases' hand with heq hk, + simp [heq, even.add_two _ hk] } } +end + +lemma even_iff₂ (n : ℕ) : + even n ↔ n = 0 ∨ (∃m : ℕ, n = m + 2 ∧ even m) := +iff.intro + (assume hn : even n, + match n, hn with + | _, even.zero := + show 0 = 0 ∨ _, from + by simp + | _, even.add_two k hk := + show _ ∨ (∃m, k + 2 = m + 2 ∧ even m), from + or.intro_right _ (exists.intro k (by simp [*])) + end) + (assume hor : n = 0 ∨ (∃m, n = m + 2 ∧ even m), + match hor with + | or.inl heq := + show even n, from + by simp [heq, even.zero] + | or.inr hex := + match hex with + | Exists.intro m hand := + match hand with + | and.intro heq hm := + show even n, from + by simp 
[heq, even.add_two _ hm] + end + end + end) + + +/-! ## Further Examples + +### Sorted Lists -/ + +inductive sorted : list ℕ → Prop +| nil : sorted [] +| single {x : ℕ} : sorted [x] +| two_or_more {x y : ℕ} {zs : list ℕ} (hle : x ≤ y) + (hsorted : sorted (y :: zs)) : + sorted (x :: y :: zs) + +lemma sorted_nil : + sorted [] := +sorted.nil + +lemma sorted_2 : + sorted [2] := +sorted.single + +lemma sorted_3_5 : + sorted [3, 5] := +begin + apply sorted.two_or_more, + { linarith }, + { exact sorted.single } +end + +lemma sorted_3_5₂ : + sorted [3, 5] := +sorted.two_or_more (by linarith) sorted.single + +lemma sorted_7_9_9_11 : + sorted [7, 9, 9, 11] := +sorted.two_or_more (by linarith) + (sorted.two_or_more (by linarith) + (sorted.two_or_more (by linarith) + sorted.single)) + +lemma not_sorted_17_13 : + ¬ sorted [17, 13] := +begin + intro h, + cases' h, + linarith +end + + +/-! ### Palindromes -/ + +inductive palindrome {α : Type} : list α → Prop +| nil : palindrome [] +| single (x : α) : palindrome [x] +| sandwich (x : α) (xs : list α) (hxs : palindrome xs) : + palindrome ([x] ++ xs ++ [x]) + +-- fails +def palindrome₂ {α : Type} : list α → Prop +| [] := true +| [_] := true +| ([x] ++ xs ++ [x]) := palindrome₂ xs +| _ := false + +lemma palindrome_aa {α : Type} (a : α) : + palindrome [a, a] := +palindrome.sandwich a _ palindrome.nil + +lemma palindrome_aba {α : Type} (a b : α) : + palindrome [a, b, a] := +palindrome.sandwich a _ (palindrome.single b) + +lemma reverse_palindrome {α : Type} (xs : list α) + (hxs : palindrome xs) : + palindrome (reverse xs) := +begin + induction' hxs, + case nil { + exact palindrome.nil }, + case single { + exact palindrome.single x }, + case sandwich { + simp [reverse, reverse_append], + exact palindrome.sandwich _ _ ih } +end + + +/-! 
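Longer palindromes are obtained by nesting `sandwich`; for instance (a small
sketch, with the hypothetical name `palindrome_abba`): -/

-- `[a] ++ [b, b] ++ [a]` computes to `[a, b, b, a]`
lemma palindrome_abba {α : Type} (a b : α) :
  palindrome [a, b, b, a] :=
palindrome.sandwich a _ (palindrome_aa b)

/-!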
### Full Binary Trees -/ + +#check btree + +inductive is_full {α : Type} : btree α → Prop +| empty : is_full btree.empty +| node (a : α) (l r : btree α) + (hl : is_full l) (hr : is_full r) + (hiff : l = btree.empty ↔ r = btree.empty) : + is_full (btree.node a l r) + +lemma is_full_singleton {α : Type} (a : α) : + is_full (btree.node a btree.empty btree.empty) := +begin + apply is_full.node, + { exact is_full.empty }, + { exact is_full.empty }, + { refl } +end + +lemma is_full_mirror {α : Type} (t : btree α) + (ht : is_full t) : + is_full (mirror t) := +begin + induction' ht, + case empty { + exact is_full.empty }, + case node : a l r hl hr hiff ih_l ih_r { + rw mirror, + apply is_full.node, + { exact ih_r }, + { exact ih_l }, + { simp [mirror_eq_empty_iff, *] } } +end + +lemma is_full_mirror₂ {α : Type} (t : btree α) : + is_full t → is_full (mirror t) := +begin + induction' t, + case empty { + intro ht, + exact ht }, + case node : a l r ih_l ih_r { + intro ht, + cases' ht with _ _ _ hl hr hiff, + rw mirror, + apply is_full.node, + { exact ih_r hr }, + { apply ih_l hl }, + { simp [mirror_eq_empty_iff, *] } } +end + + +/-! ### First-Order Terms -/ + +inductive term (α β : Type) : Type +| var {} : β → term +| fn : α → list term → term + +inductive well_formed {α β : Type} (arity : α → ℕ) : + term α β → Prop +| var (x : β) : well_formed (term.var x) +| fn (f : α) (ts : list (term α β)) + (hargs : ∀t ∈ ts, well_formed t) + (hlen : list.length ts = arity f) : + well_formed (term.fn f ts) + +inductive variable_free {α β : Type} : term α β → Prop +| fn (f : α) (ts : list (term α β)) + (hargs : ∀t ∈ ts, variable_free t) : + variable_free (term.fn f ts) + +end LoVe diff --git a/lean/love05_inductive_predicates_exercise_sheet.lean b/lean/love05_inductive_predicates_exercise_sheet.lean new file mode 100644 index 0000000..d5bd256 --- /dev/null +++ b/lean/love05_inductive_predicates_exercise_sheet.lean @@ -0,0 +1,145 @@ +import .love05_inductive_predicates_demo + + +/-! 
# LoVe Exercise 5: Inductive Predicates -/ + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Even and Odd + +The `even` predicate is true for even numbers and false for odd numbers. -/ + +#check even + +/-! We define `odd` as the negation of `even`: -/ + +def odd (n : ℕ) : Prop := + ¬ even n + +/-! 1.1. Prove that 1 is odd and register this fact as a simp rule. + +Hint: `cases'` is useful to reason about hypotheses of the form `even …`. -/ + +@[simp] lemma odd_1 : + odd 1 := +sorry + +/-! 1.2. Prove that 3 and 5 are odd. -/ + +-- enter your answer here + +/-! 1.3. Complete the following proof by structural induction. -/ + +lemma even_two_times : + ∀m : ℕ, even (2 * m) +| 0 := even.zero +| (m + 1) := + sorry + +/-! 1.4. Complete the following proof by rule induction. + +Hint: You can use the `cases'` tactic (or `match … with`) to destruct an +existential quantifier and extract the witness. -/ + +lemma even_imp_exists_two_times : + ∀n : ℕ, even n → ∃m, n = 2 * m := +begin + intros n hen, + induction' hen, + case zero { + apply exists.intro 0, + refl }, + case add_two : k hek ih { + sorry } +end + +/-! 1.5. Using `even_two_times` and `even_imp_exists_two_times`, prove the +following equivalence. -/ + +lemma even_iff_exists_two_times (n : ℕ) : + even n ↔ ∃m, n = 2 * m := +sorry + +/-! 1.6 (**optional**). Give a structurally recursive definition of `even` and +test it with `#eval`. + +Hint: The negation operator on `bool` is called `not`. -/ + +def even_rec : nat → bool := +sorry + + +/-! ## Question 2: Tennis Games + +Recall the inductive type of tennis scores from the demo: -/ + +#check score + +/-! 2.1. Define an inductive predicate that returns true if the server is ahead +of the receiver and that returns false otherwise. -/ + +inductive srv_ahead : score → Prop +-- enter the missing cases here + +/-! 2.2. Validate your predicate definition by proving the following lemmas. 
-/

lemma srv_ahead_vs {m n : ℕ} (hgt : m > n) :
  srv_ahead (score.vs m n) :=
sorry

lemma srv_ahead_adv_srv :
  srv_ahead score.adv_srv :=
sorry

lemma not_srv_ahead_adv_rcv :
  ¬ srv_ahead score.adv_rcv :=
sorry

lemma srv_ahead_game_srv :
  srv_ahead score.game_srv :=
sorry

lemma not_srv_ahead_game_rcv :
  ¬ srv_ahead score.game_rcv :=
sorry

/-! 2.3. Compare the above lemma statements with your definition. What do you
observe? -/

-- enter your answer here


/-! ## Question 3: Binary Trees

3.1. Prove the converse of `is_full_mirror`. You may exploit already proved
lemmas (e.g., `is_full_mirror`, `mirror_mirror`). -/

#check is_full_mirror
#check mirror_mirror

lemma mirror_is_full {α : Type} :
  ∀t : btree α, is_full (mirror t) → is_full t :=
sorry

/-! 3.2. Define a `map` function on binary trees, similar to `list.map`. -/

def map_btree {α β : Type} (f : α → β) : btree α → btree β
| _ := sorry   -- remove this dummy case and enter the missing cases

/-! 3.3. Prove the following lemma by case distinction. -/

lemma map_btree_eq_empty_iff {α β : Type} (f : α → β) :
  ∀t : btree α, map_btree f t = btree.empty ↔ t = btree.empty :=
sorry

/-! 3.4 (**optional**). Prove the following lemma by rule induction. -/

lemma map_btree_mirror {α β : Type} (f : α → β) :
  ∀t : btree α, is_full t → is_full (map_btree f t) :=
sorry

end LoVe
diff --git a/lean/love06_monads_demo.lean b/lean/love06_monads_demo.lean
new file mode 100644
index 0000000..5e94a79
--- /dev/null
+++ b/lean/love06_monads_demo.lean
@@ -0,0 +1,469 @@
import .lovelib


/-! # LoVe Demo 6: Monads

Monads are an important functional programming abstraction. They generalize
computation with side effects. Haskell has shown that they can be used very
successfully to write imperative programs. For us, they are interesting in
their own right and for two more reasons:

* They provide a nice example of axiomatic reasoning.
+ +* They are needed for programming Lean itself (metaprogramming, lecture 7). -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Introductory Example + +Consider the following programming task: + + Implement a function `sum_2_5_7 ns` that sums up the second, fifth, and + seventh items of a list `ns` of natural numbers. Use `option ℕ` for the + result so that if the list has fewer than seven elements, you can return + `option.none`. + +A straightforward solution follows: -/ + +def sum_2_5_7 (ns : list ℕ) : option ℕ := +match list.nth ns 1 with +| option.none := option.none +| option.some n2 := + match list.nth ns 4 with + | option.none := option.none + | option.some n5 := + match list.nth ns 6 with + | option.none := option.none + | option.some n7 := option.some (n2 + n5 + n7) + end + end +end + +/-! The code is ugly, because of all the pattern matching on options. + +We can put all the ugliness in one function, which we call `connect`: -/ + +def connect {α : Type} {β : Type} : + option α → (α → option β) → option β +| option.none f := option.none +| (option.some a) f := f a + +def sum_2_5_7₂ (ns : list ℕ) : option ℕ := +connect (list.nth ns 1) + (λn2, connect (list.nth ns 4) + (λn5, connect (list.nth ns 6) + (λn7, option.some (n2 + n5 + n7)))) + +/-! Instead of defining `connect` ourselves, we can use Lean's predefined +general `bind` operation. We can also use `pure` instead of `option.some`: -/ + +#check bind + +def sum_2_5_7₃ (ns : list ℕ) : option ℕ := +bind (list.nth ns 1) + (λn2, bind (list.nth ns 4) + (λn5, bind (list.nth ns 6) + (λn7, pure (n2 + n5 + n7)))) + +/-! Syntactic sugar: + + `ma >>= f` := `bind ma f` -/ + +#check (>>=) + +def sum_2_5_7₄ (ns : list ℕ) : option ℕ := +list.nth ns 1 >>= + λn2, list.nth ns 4 >>= + λn5, list.nth ns 6 >>= + λn7, pure (n2 + n5 + n7) + +/-! 
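A quick sanity check of the definitions above (the input lists are ours;
recall that `list.nth` is zero-indexed): -/

#eval sum_2_5_7₄ [1, 2, 3, 4, 5, 6, 7]   -- expected: `some 14`
#eval sum_2_5_7₄ [1, 2, 3]               -- expected: `none`

/-!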
Syntactic sugar:

    `do a ← ma, t` := `ma >>= (λa, t)`
    `do ma, t`     := `ma >>= (λ_, t)` -/

def sum_2_5_7₅ (ns : list ℕ) : option ℕ :=
do n2 ← list.nth ns 1,
   do n5 ← list.nth ns 4,
      do n7 ← list.nth ns 6,
         pure (n2 + n5 + n7)

/-! The `do`s can be combined: -/

def sum_2_5_7₆ (ns : list ℕ) : option ℕ :=
do
  n2 ← list.nth ns 1,
  n5 ← list.nth ns 4,
  n7 ← list.nth ns 6,
  pure (n2 + n5 + n7)

/-! Although the notation has an imperative flavor, the function is a pure
functional program.


## Two Operations and Three Laws

The `option` type constructor is an example of a monad.

In general, a __monad__ is a type constructor `m` that depends on some type
parameter `α` (i.e., `m α`) equipped with two distinguished operations:

    `pure {α : Type} : α → m α`
    `bind {α β : Type} : m α → (α → m β) → m β`

For `option`:

    `pure` := `option.some`
    `bind` := `connect`

Intuitively, we can think of a monad as a "box":

* `pure` puts the data into the box.

* `bind` allows us to access the data in the box and modify it (possibly even
  changing its type, since the result is an `m β` monad, not an `m α` monad).

There is no general way to extract the data from the monad, i.e., to obtain an
`α` from an `m α`.

To summarize, `pure a` has no side effect and simply yields a box containing
the value `a`, whereas `bind ma f` (also written `ma >>= f`) executes `ma` and
then executes `f` on the boxed result `a` of `ma`.

The option monad is only one instance among many.
Type         | Effect
------------ | --------------------------------------------------------------
`id α`       | no effects
`option α`   | simple exceptions
`σ → α × σ`  | threading through a state of type `σ`
`set α`      | nondeterministic computation returning `α` values
`t → α`      | reading elements of type `t` (e.g., a configuration)
`ℕ × α`      | adjoining running time (e.g., to model algorithmic complexity)
`string × α` | adjoining text output (e.g., for logging)
`prob α`     | probability (e.g., using random number generators)
`io α`       | interaction with the operating system
`tactic α`   | interaction with the proof assistant

All of the above are type constructors `m` parameterized by a type `α`.

Some effects can be combined (e.g., `option (t → α)`).

Some effects are not executable (e.g., `set α`, `prob α`). They are nonetheless
useful for modeling programs abstractly in the logic.

Specific monads may provide a way to extract the boxed value stored in the
monad without `bind`'s requirement of putting it back in a monad.

Monads have several benefits, including:

* They provide the convenient and highly readable `do` notation.

* They support generic operations, such as
  `mmap {α β : Type} : (α → m β) → list α → m (list β)`, which work uniformly
  across all monads.

The `bind` and `pure` operations are normally required to obey three laws,
called the monad laws. Pure data as the first program can be simplified away:

    do
      a' ← pure a,
      f a'
  =
    f a

Pure data as the second program can be simplified away:

    do
      a ← ma,
      pure a
  =
    ma

Nested programs `ma`, `f`, `g` can be flattened using this associativity rule:

    do
      b ← do {
        a ← ma,
        f a },
      g b
  =
    do
      a ← ma,
      b ← f a,
      g b


## A Type Class of Monads

Monads are a mathematical structure, so it is natural to capture them as a type
class. We can think of a type class as a structure that is parameterized by a
type, or, as here, by a type constructor `m : Type → Type`.
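As a sanity check, the first law already holds by computation for Lean's
predefined `option` monad (a small sketch): -/

-- `pure a >>= f` unfolds to `option.bind (option.some a) f`, i.e., `f a`
example (a : ℕ) (f : ℕ → option ℕ) :
  (pure a >>= f) = f a :=
rfl

/-!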
-/ + +@[class] structure lawful_monad (m : Type → Type) + extends has_bind m, has_pure m := +(pure_bind {α β : Type} (a : α) (f : α → m β) : + (pure a >>= f) = f a) +(bind_pure {α : Type} (ma : m α) : + (ma >>= pure) = ma) +(bind_assoc {α β γ : Type} (f : α → m β) (g : β → m γ) + (ma : m α) : + ((ma >>= f) >>= g) = (ma >>= (λa, f a >>= g))) + +#print monad +#print is_lawful_monad + +/-! Step by step: + +* We are creating a structure parameterized by a unary type constructor `m`. + +* The structure inherits the fields, and any syntactic sugar, from structures + called `has_bind` and `has_pure`, which provide the `bind` and `pure` + operations on `m` and some syntactic sugar. + +* The definition adds three fields to those already provided by `has_bind` and + `has_pure`, to store the proofs of the monad laws. + +To instantiate this definition with a concrete monad, we must supply the type +constructor `m` (e.g., `option`), `bind` and `pure` operators, and +proofs of the monad laws. + +(Lean's actual definition of monads is more complicated.) + + +## No Effects + +Our first monad is the trivial monad `m := id` (i.e., `m := λα. α`). -/ + +def id.pure {α : Type} : α → id α := +id + +def id.bind {α β : Type} : id α → (α → id β) → id β +| a f := f a + +@[instance] def id.lawful_monad : lawful_monad id := +{ pure := @id.pure, + bind := @id.bind, + pure_bind := + begin + intros α β a f, + refl + end, + bind_pure := + begin + intros α ma, + refl + end, + bind_assoc := + begin + intros α β γ f g ma, + refl + end } + + +/-! ## Basic Exceptions + +As we saw above, the option type provides a basic exception mechanism. 
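The idea in miniature, before we register `option` as a lawful monad (the
helper `safe_div` is ours, not part of the library): -/

-- division that "throws" `option.none` on a zero divisor
def safe_div (m n : ℕ) : option ℕ :=
if n = 0 then option.none else option.some (m / n)

#eval safe_div 10 2   -- expected: `some 5`
#eval safe_div 10 0   -- expected: `none`

/-!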
-/

def option.pure {α : Type} : α → option α :=
option.some

def option.bind {α β : Type} :
  option α → (α → option β) → option β
| option.none     f := option.none
| (option.some a) f := f a

@[instance] def option.lawful_monad : lawful_monad option :=
{ pure       := @option.pure,
  bind       := @option.bind,
  pure_bind  :=
    begin
      intros α β a f,
      refl
    end,
  bind_pure  :=
    begin
      intros α ma,
      cases' ma,
      { refl },
      { refl }
    end,
  bind_assoc :=
    begin
      intros α β γ f g ma,
      cases' ma,
      { refl },
      { refl }
    end }

def option.throw {α : Type} : option α :=
option.none

def option.catch {α : Type} :
  option α → option α → option α
| option.none     ma' := ma'
| (option.some a) _   := option.some a

@[instance] def option.has_orelse : has_orelse option :=
{ orelse := @option.catch }


/-! ## Mutable State

The state monad provides an abstraction corresponding to a mutable state. Some
compilers recognize the state monad to produce efficient imperative code. -/

def action (σ α : Type) : Type :=
σ → α × σ

def action.read {σ : Type} : action σ σ
| s := (s, s)

def action.write {σ : Type} (s : σ) : action σ unit
| _ := ((), s)

def action.pure {σ α : Type} (a : α) : action σ α
| s := (a, s)

def action.bind {σ : Type} {α β : Type} (ma : action σ α)
    (f : α → action σ β) :
  action σ β
| s :=
  match ma s with
  | (a, s') := f a s'
  end

@[instance] def action.lawful_monad {σ : Type} :
  lawful_monad (action σ) :=
{ pure       := @action.pure σ,
  bind       := @action.bind σ,
  pure_bind  :=
    begin
      intros α β a f,
      apply funext,
      intro s,
      refl
    end,
  bind_pure  :=
    begin
      intros α ma,
      apply funext,
      intro s,
      simp [action.bind],
      cases' ma s,
      refl
    end,
  bind_assoc :=
    begin
      intros α β γ f g ma,
      apply funext,
      intro s,
      simp [action.bind],
      cases' ma s,
      refl
    end }

def increasingly : list ℕ → action ℕ (list ℕ)
| []        := pure []
| (n :: ns) :=
  do
    prev ← action.read,
    if n < prev then
      increasingly
ns + else + do + action.write n, + ns' ← increasingly ns, + pure (n :: ns') + +#eval increasingly [1, 2, 3, 2] 0 +#eval increasingly [1, 2, 3, 2, 4, 5, 2] 0 + + +/-! ## Nondeterminism + +The set monad stores an arbitrary, possibly infinite number of `α` values. -/ + +#check set + +def set.pure {α : Type} : α → set α +| a := {a} + +def set.bind {α β : Type} : set α → (α → set β) → set β +| A f := {b | ∃a, a ∈ A ∧ b ∈ f a} + +@[instance] def set.lawful_monad : lawful_monad set := +{ pure := @set.pure, + bind := @set.bind, + pure_bind := + begin + intros α β a f, + simp [set.pure, set.bind] + end, + bind_pure := + begin + intros α ma, + simp [set.pure, set.bind] + end, + bind_assoc := + begin + intros α β γ f g ma, + simp [set.pure, set.bind], + apply set.ext, + simp, + tautology + end } + +/-! `tautology` performs elimination of the logical symbols `∧`, `∨`, `↔`, and +`∃` in hypotheses and introduction of `∧`, `↔`, and `∃` in the conclusion, until +all the emerging subgoals can be trivially proved (e.g., by `refl`). + + +## A Generic Algorithm: Iteration over a List + +We consider a generic effectful program `mmap` that iterates over a list and +applies a function `f` to each element. 
-/ + +def nths_fine {α : Type} (xss : list (list α)) (n : ℕ) : + list (option α) := +list.map (λxs, list.nth xs n) xss + +#eval nths_fine [[11, 12, 13, 14], [21, 22, 23]] 2 +#eval nths_fine [[11, 12, 13, 14], [21, 22, 23]] 3 + +def mmap {m : Type → Type} [lawful_monad m] {α β : Type} + (f : α → m β) : + list α → m (list β) +| [] := pure [] +| (a :: as) := + do + b ← f a, + bs ← mmap as, + pure (b :: bs) + +def nths_coarse {α : Type} (xss : list (list α)) (n : ℕ) : + option (list α) := +mmap (λxs, list.nth xs n) xss + +#eval nths_coarse [[11, 12, 13, 14], [21, 22, 23]] 2 +#eval nths_coarse [[11, 12, 13, 14], [21, 22, 23]] 3 + +lemma mmap_append {m : Type → Type} [lawful_monad m] + {α β : Type} (f : α → m β) : + ∀as as' : list α, mmap f (as ++ as') = + do + bs ← mmap f as, + bs' ← mmap f as', + pure (bs ++ bs') +| [] _ := + by simp [mmap, lawful_monad.bind_pure, lawful_monad.pure_bind] +| (a :: as) as' := + by simp [mmap, mmap_append as as', lawful_monad.pure_bind, + lawful_monad.bind_assoc] + +end LoVe diff --git a/lean/love06_monads_exercise_sheet.lean b/lean/love06_monads_exercise_sheet.lean new file mode 100644 index 0000000..292ce11 --- /dev/null +++ b/lean/love06_monads_exercise_sheet.lean @@ -0,0 +1,214 @@ +import .love06_monads_demo + + +/-! # LoVe Exercise 6: Monads -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: A State Monad with Failure + +We introduce a richer notion of lawful monad that provides an `orelse` +operator `<|>` satisfying some laws, given below. `emp` denotes failure. +`x <|> y` tries `x` first, falling back on `y` on failure. 
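Concretely, for the `option` monad, the expected behavior of `<|>` is as
follows (a small sketch; both equations hold by computation): -/

example : ((option.none <|> option.some 3) : option ℕ) = option.some 3 :=
rfl

example : ((option.some 1 <|> option.some 3) : option ℕ) = option.some 1 :=
rfl

/-!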
-/ + +@[class] structure lawful_monad_with_orelse (m : Type → Type) + extends lawful_monad m, has_orelse m : Type 1 := +(emp {} {α : Type} : m α) +(emp_orelse {α : Type} (a : m α) : + (emp <|> a) = a) +(orelse_emp {α : Type} (a : m α) : + (a <|> emp) = a) +(orelse_assoc {α : Type} (a b c : m α) : + ((a <|> b) <|> c) = (a <|> (b <|> c))) +(emp_bind {α β : Type} (f : α → m β) : + (emp >>= f) = emp) +(bind_emp {α β : Type} (f : m α) : + (f >>= (λa, emp : α → m β)) = emp) + +/-! 1.1. We set up the `option` type constructor to be a +`lawful_monad_with_orelse`. Complete the proofs. + +Hint: Use `simp [(>>=)]` if you want to unfold the definition of the bind +operator. -/ + +def option.orelse {α : Type} : option α → option α → option α +| option.none ma' := ma' +| (option.some a) _ := option.some a + +@[instance] def lawful_monad_with_orelse_option : + lawful_monad_with_orelse option := +{ emp := λα, option.none, + orelse := @option.orelse, + emp_orelse := + sorry, + orelse_emp := + begin + intros α a, + cases' a, + { refl }, + { refl } + end, + orelse_assoc := + sorry, + emp_bind := + begin + intros α β f, + refl + end, + bind_emp := + sorry, + .. option.lawful_monad } + +@[simp] lemma option.some_bind {α β : Type} (a : α) (g : α → option β) : + (option.some a >>= g) = g a := +sorry + +/-! Let us enable some convenient pattern matching syntax, by instantiating +Lean's `monad_fail` type class. (Do not worry if you do not understand what +we are referring to.) -/ + +@[instance] def lawful_monad_with_orelse.monad_fail {m : Type → Type} + [lawful_monad_with_orelse m] : monad_fail m := +{ fail := λα msg, lawful_monad_with_orelse.emp } + +/-! Now we can write definitions such as the following: -/ + +def first_of_three {m : Type → Type} [lawful_monad_with_orelse m] + (c : m (list ℕ)) : m ℕ := +do + [n, _, _] ← c, + pure n + +#eval first_of_three (option.some [1]) +#eval first_of_three (option.some [1, 2, 3]) +#eval first_of_three (option.some [1, 2, 3, 4]) + +/-! 
Using `lawful_monad_with_orelse` and the `monad_fail` syntax, we can give a +concise definition for the `sum_2_5_7` function seen in the lecture. -/ + +def sum_2_5_7₇ {m : Type → Type} [lawful_monad_with_orelse m] + (c : m (list ℕ)) : m ℕ := +do + (_ :: n2 :: _ :: _ :: n5 :: _ :: n7 :: _) ← c, + pure (n2 + n5 + n7) + +/-! 1.2. Now we are ready to define `faction σ` ("eff action"): a monad with an +internal state of type `σ` that can fail (unlike `action σ`). + +We start with defining `faction σ α`, where `σ` is the type of the internal +state, and `α` is the type of the value stored in the monad. We use `option` to +model failure. This means we can also use the monadic behavior of `option` when +defining the monadic operations on `faction`. + +Hints: + +* Remember that `faction σ α` is an alias for a function type, so you can use + pattern matching and `λs, …` to define values of type `faction σ α`. + +* `faction` is very similar to `action` from the lecture's demo. You can look + there for inspiration. -/ + +def faction (σ : Type) (α : Type) : Type := +σ → option (α × σ) + +/-! 1.3. Define the `get` and `set` function for `faction`, where `get` returns +the state passed along the state monad and `set s` changes the state to `s`. -/ + +def get {σ : Type} : faction σ σ := +sorry + +def set {σ : Type} (s : σ) : faction σ unit := +sorry + +/-! We set up the `>>=` syntax on `faction`: -/ + +def faction.bind {σ α β : Type} (f : faction σ α) (g : α → faction σ β) : + faction σ β +| s := f s >>= (λas, g (prod.fst as) (prod.snd as)) + +@[instance] def faction.has_bind {σ : Type} : has_bind (faction σ) := +{ bind := @faction.bind σ } + +lemma faction.bind_apply {σ α β : Type} (f : faction σ α) (g : α → faction σ β) + (s : σ) : + (f >>= g) s = (f s >>= (λas, g (prod.fst as) (prod.snd as))) := +by refl + +/-! 1.4. Define the monadic operator `pure` for `faction`, in such a way that it +will satisfy the monad laws. 
-/

def faction.pure {σ α : Type} (a : α) : faction σ α :=
sorry

/-! We set up the syntax for `pure` on `faction`: -/

@[instance] def faction.has_pure {σ : Type} : has_pure (faction σ) :=
{ pure := @faction.pure σ }

lemma faction.pure_apply {σ α : Type} (a : α) (s : σ) :
  (pure a : faction σ α) s = option.some (a, s) :=
by refl

/-! 1.5. Register `faction` as a monad.

Hints:

* The `funext` lemma is useful when you need to prove equality between two
  functions.

* `cases' f s` only works when `f s` appears in your goal, so you may need to
  unfold some constants before you can invoke `cases'`. -/

@[instance] def faction.lawful_monad {σ : Type} : lawful_monad (faction σ) :=
{ pure_bind  :=
    begin
      intros α β a f,
      apply funext,
      intro s,
      refl
    end,
  bind_pure  :=
    sorry,
  bind_assoc :=
    sorry,
  .. faction.has_bind,
  .. faction.has_pure }


/-! ## Question 2: Kleisli Operator

The Kleisli operator `>=>` (not to be confused with `>>=`) is useful for
pipelining monadic operations. Note that `λa, f a >>= g` is to be parsed as
`λa, (f a >>= g)`, not as `(λa, f a) >>= g`. -/

def kleisli {m : Type → Type} [lawful_monad m] {α β γ : Type} (f : α → m β)
    (g : β → m γ) : α → m γ :=
λa, f a >>= g

infixr ` >=> ` : 90 := kleisli

/-! 2.1. Prove that `pure` is a left and right unit for the Kleisli operator. -/

lemma pure_kleisli {m : Type → Type} [lawful_monad m] {α β : Type}
    (f : α → m β) :
  (pure >=> f) = f :=
sorry

lemma kleisli_pure {m : Type → Type} [lawful_monad m] {α β : Type}
    (f : α → m β) :
  (f >=> pure) = f :=
sorry

/-! 2.2. Prove that the Kleisli operator is associative.
-/ + +lemma kleisli_assoc {m : Type → Type} [lawful_monad m] {α β γ δ : Type} + (f : α → m β) (g : β → m γ) (h : γ → m δ) : + ((f >=> g) >=> h) = (f >=> (g >=> h)) := +sorry + +end LoVe diff --git a/lean/love07_metaprogramming_demo.lean b/lean/love07_metaprogramming_demo.lean new file mode 100644 index 0000000..925a310 --- /dev/null +++ b/lean/love07_metaprogramming_demo.lean @@ -0,0 +1,535 @@ +import .love05_inductive_predicates_demo + + +/-! # LoVe Demo 7: Metaprogramming + +Users can extend Lean with custom monadic tactics and tools. This kind of +programming—programming the prover—is called metaprogramming. + +Lean's metaprogramming framework uses mostly the same notions and syntax as +Lean's input language itself. Abstract syntax trees __reflect__ internal data +structures, e.g., for expressions (terms). The prover's C++ internals are +exposed through Lean interfaces, which we can use for + +* accessing the current context and goal; +* unifying expressions; +* querying and modifying the environment; +* setting attributes. + +Most of Lean's predefined tactics are implemented in Lean (and not in C++). + +Example applications: + +* proof goal transformations; +* heuristic proof search; +* decision procedures; +* definition generators; +* advisor tools; +* exporters; +* ad hoc automation. + +Advantages of Lean's metaprogramming framework: + +* Users do not need to learn another programming language to write + metaprograms; they can work with the same constructs and notation used to + define ordinary objects in the prover's library. + +* Everything in that library is available for metaprogramming purposes. + +* Metaprograms can be written and debugged in the same interactive environment, + encouraging a style where formal libraries and supporting automation are + developed at the same time. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! 
## Tactics and Tactic Combinators

When programming our own tactics, we often need to repeat some actions on
several goals, or to recover if a tactic fails. Tactic combinators help in such
cases.

`repeat` applies its argument repeatedly on all (sub…sub)goals until it cannot
be applied any further. -/

lemma repeat_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  repeat { apply even.add_two },
  repeat { sorry }
end

/-! The "orelse" combinator `<|>` tries its first argument and applies its
second argument in case of failure. -/

lemma repeat_orelse_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  repeat {
    apply even.add_two
    <|> apply even.zero },
  repeat { sorry }
end

/-! `iterate` works repeatedly on the first goal until it fails; then it
stops. -/

lemma iterate_orelse_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  iterate {
    apply even.add_two
    <|> apply even.zero },
  repeat { sorry }
end

/-! `all_goals` applies its argument exactly once to each goal. It succeeds only
if the argument succeeds on **all** goals. -/

lemma all_goals_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  all_goals { apply even.add_two },   -- fails
  repeat { sorry }
end

/-! `try` transforms its argument into a tactic that never fails. -/

lemma all_goals_try_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  all_goals { try { apply even.add_two } },
  repeat { sorry }
end

/-! `any_goals` applies its argument exactly once to each goal. It succeeds
if the argument succeeds on **any** goal. -/

lemma any_goals_example :
  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
begin
  repeat { apply and.intro },
  any_goals { apply even.add_two },
  repeat { sorry }
end

/-! `solve1` transforms its argument into an all-or-nothing tactic.
If the
+argument does not prove the goal, `solve1` fails. -/
+
+lemma any_goals_solve1_repeat_orelse_example :
+  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
+begin
+  repeat { apply and.intro },
+  any_goals { solve1 { repeat {
+    apply even.add_two
+    <|> apply even.zero } } },
+  repeat { sorry }
+end
+
+/-! The combinators `repeat`, `iterate`, `all_goals`, and `any_goals` can easily
+lead to infinite looping: -/
+
+/-
+lemma repeat_not_example :
+  ¬ even 1 :=
+begin
+  repeat { apply not.intro },
+  sorry
+end
+-/
+
+/-! Let us start with the actual metaprogramming, by coding a custom tactic. The
+tactic embodies the behavior we hardcoded in the `solve1` example above: -/
+
+meta def intro_and_even : tactic unit :=
+do
+  tactic.repeat (tactic.applyc ``and.intro),
+  tactic.any_goals (tactic.solve1 (tactic.repeat
+    (tactic.applyc ``even.add_two
+      <|> tactic.applyc ``even.zero))),
+  pure ()
+
+/-! The `meta` keyword makes it possible for the function to call other
+metafunctions. The `do` keyword enters a monad, and the `<|>` operator is the
+"orelse" operator of alternative monads. At the end, we return `()`, of type
+`unit`, to ensure the metaprogram has the desired type.
+
+Any executable Lean definition can be used as a metaprogram. In addition, we can
+put `meta` in front of a definition to indicate that it is a metadefinition.
+Such definitions need not terminate but cannot be used in non-`meta` contexts.
+
+Let us apply our custom tactic: -/
+
+lemma any_goals_solve1_repeat_orelse_example₂ :
+  even 4 ∧ even 7 ∧ even 3 ∧ even 0 :=
+begin
+  intro_and_even,
+  repeat { sorry }
+end
+
+
+/-! 
## The Metaprogramming Monad
+
+Tactics have access to
+
+* the list of **goals** as metavariables (each metavariable has a type and a
+  local context (hypotheses); metavariables can optionally be instantiated);
+
+* the **elaborator** (to elaborate expressions and compute their type);
+
+* the **environment**, containing all declarations and inductive types;
+
+* the **attributes** (e.g., the list of `@[simp]` rules).
+
+The tactic monad is an alternative monad, with `fail` and `<|>`. Tactics can
+also produce trace messages. -/
+
+lemma even_14 :
+  even 14 :=
+by do
+  tactic.trace "Proving evenness …",
+  intro_and_even
+
+meta def hello_then_intro_and_even : tactic unit :=
+do
+  tactic.trace "Proving evenness …",
+  intro_and_even
+
+lemma even_16 :
+  even 16 :=
+by hello_then_intro_and_even
+
+run_cmd tactic.trace "Hello, Metaworld!"
+
+meta def trace_goals : tactic unit :=
+do
+  tactic.trace "local context:",
+  ctx ← tactic.local_context,
+  tactic.trace ctx,
+  tactic.trace "target:",
+  P ← tactic.target,
+  tactic.trace P,
+  tactic.trace "all missing proofs:",
+  Hs ← tactic.get_goals,
+  tactic.trace Hs,
+  τs ← list.mmap tactic.infer_type Hs,
+  tactic.trace τs
+
+lemma even_18_and_even_20 (α : Type) (a : α) :
+  even 18 ∧ even 20 :=
+by do
+  tactic.applyc ``and.intro,
+  trace_goals,
+  intro_and_even
+
+lemma triv_imp (a : Prop) (h : a) :
+  a :=
+by do
+  h ← tactic.get_local `h,
+  tactic.trace "h:",
+  tactic.trace h,
+  tactic.trace "raw h:",
+  tactic.trace (expr.to_raw_fmt h),
+  tactic.trace "type of h:",
+  τ ← tactic.infer_type h,
+  tactic.trace τ,
+  tactic.trace "type of type of h:",
+  υ ← tactic.infer_type τ,
+  tactic.trace υ,
+  tactic.apply h
+
+meta def exact_list : list expr → tactic unit
+| []        := tactic.fail "no matching expression found"
+| (h :: hs) :=
+  do {
+    tactic.trace "trying",
+    tactic.trace h,
+    tactic.exact h }
+  <|> exact_list hs
+
+meta def hypothesis : tactic unit :=
+do
+  hs ← tactic.local_context,
+  exact_list hs
+
+lemma app_of_app {α : Type} {p : α 
→ Prop} {a : α} + (h : p a) : + p a := +by hypothesis + + +/-! ## Names, Expressions, Declarations, and Environments + +The metaprogramming framework is articulated around five main types: + +* `tactic` manages the proof state, the global context, and more; + +* `name` represents a structured name (e.g., `x`, `even.add_two`); + +* `expr` represents an expression (a term) as an abstract syntax tree; + +* `declaration` represents a constant declaration, a definition, an axiom, or a + lemma; + +* `environment` stores all the declarations and notations that make up the + global context. -/ + +#print expr + +#check expr tt -- elaborated expressions +#check expr ff -- unelaborated expressions (pre-expressions) + +#print name + +#check (expr.const `ℕ [] : expr) +#check expr.sort level.zero -- Sort 0, i.e., Prop +#check expr.sort (level.succ level.zero) + -- Sort 1, i.e., Type +#check expr.var 0 -- bound variable with De Bruijn index 0 +#check (expr.local_const `uniq_name `pp_name binder_info.default + `(ℕ) : expr) +#check (expr.mvar `uniq_name `pp_name `(ℕ) : expr) +#check (expr.pi `pp_name binder_info.default `(ℕ) + (expr.sort level.zero) : expr) +#check (expr.lam `pp_name binder_info.default `(ℕ) + (expr.var 0) : expr) +#check expr.elet +#check expr.macro + +/-! We can create literal expressions conveniently using backticks and +parentheses: + +* Expressions with a single backtick must be fully elaborated. + +* Expressions with two backticks are __pre-expressions__: They may contain some + holes to be filled in later, based on some context. + +* Expressions with three backticks are pre-expressions without name checking. 
-/ + +run_cmd do + let e : expr := `(list.map (λn : ℕ, n + 1) [1, 2, 3]), + tactic.trace e + +run_cmd do + let e : expr := `(list.map _ [1, 2, 3]), -- fails + tactic.trace e + +run_cmd do + let e₁ : pexpr := ``(list.map (λn, n + 1) [1, 2, 3]), + let e₂ : pexpr := ``(list.map _ [1, 2, 3]), + tactic.trace e₁, + tactic.trace e₂ + +run_cmd do + let e : pexpr := ```(seattle.washington), + tactic.trace e + +/-! We can also create literal names with backticks: + +* Names with a single backtick, `n, are not checked for existence. + +* Names with two backticks, ``n, are resolved and checked. -/ + +run_cmd tactic.trace `and.intro +run_cmd tactic.trace `intro_and_even +run_cmd tactic.trace `seattle.washington + +run_cmd tactic.trace ``and.intro +run_cmd tactic.trace ``intro_and_even +run_cmd tactic.trace ``seattle.washington -- fails + +/-! __Antiquotations__ embed an existing expression in a larger expression. They +are announced by the prefix `%%` followed by a name from the current context. +Antiquotations are available with one, two, and three backticks: -/ + +run_cmd do + let x : expr := `(2 : ℕ), + let e : expr := `(%%x + 1), + tactic.trace e + +run_cmd do + let x : expr := `(@id ℕ), + let e : pexpr := ``(list.map %%x), + tactic.trace e + +run_cmd do + let x : expr := `(@id ℕ), + let e : pexpr := ```(a _ %%x), + tactic.trace e + +lemma one_add_two_eq_three : + 1 + 2 = 3 := +by do + `(%%a + %%b = %%c) ← tactic.target, + tactic.trace a, + tactic.trace b, + tactic.trace c, + `(@eq %%α %%l %%r) ← tactic.target, + tactic.trace α, + tactic.trace l, + tactic.trace r, + tactic.exact `(refl _ : 3 = 3) + +#print declaration + +/-! The `environment` type is presented as an abstract type, equipped with some +operations to query and modify it. The `environment.fold` metafunction iterates +over all declarations making up the environment. -/ + +run_cmd do + env ← tactic.get_env, + tactic.trace (environment.fold env 0 (λdecl n, n + 1)) + + +/-! 
## First Example: A Conjunction-Destructing Tactic
+
+We define a `destruct_and` tactic that automates the elimination of `∧` in
+premises, as needed in proofs such as these: -/
+
+lemma abcd_a (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  a :=
+and.elim_left h
+
+lemma abcd_b (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b :=
+and.elim_left (and.elim_left (and.elim_right h))
+
+lemma abcd_bc (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ c :=
+and.elim_left (and.elim_right h)
+
+/-! Our tactic relies on a helper metafunction, which takes as argument the
+hypothesis `h` to use as an expression rather than as a name: -/
+
+meta def destruct_and_helper : expr → tactic unit
+| h :=
+  do
+    t ← tactic.infer_type h,
+    match t with
+    | `(%%a ∧ %%b) :=
+      tactic.exact h
+      <|>
+      do {
+        ha ← tactic.to_expr ``(and.elim_left %%h),
+        destruct_and_helper ha }
+      <|>
+      do {
+        hb ← tactic.to_expr ``(and.elim_right %%h),
+        destruct_and_helper hb }
+    | _ := tactic.exact h
+    end
+
+meta def destruct_and (nam : name) : tactic unit :=
+do
+  h ← tactic.get_local nam,
+  destruct_and_helper h
+
+/-! Let us check that our tactic works: -/
+
+lemma abc_a (a b c : Prop) (h : a ∧ b ∧ c) :
+  a :=
+by destruct_and `h
+
+lemma abc_b (a b c : Prop) (h : a ∧ b ∧ c) :
+  b :=
+by destruct_and `h
+
+lemma abc_bc (a b c : Prop) (h : a ∧ b ∧ c) :
+  b ∧ c :=
+by destruct_and `h
+
+lemma abc_ac (a b c : Prop) (h : a ∧ b ∧ c) :
+  a ∧ c :=
+by destruct_and `h   -- fails
+
+
+/-! ## Second Example: A Provability Advisor
+
+Next, we implement a `prove_direct` tool that traverses all lemmas in the
+database and checks whether one of them can be used to prove the current goal. A
+similar tactic is available in `mathlib` under the name `library_search`. 
-/ + +meta def is_theorem : declaration → bool +| (declaration.defn _ _ _ _ _ _) := ff +| (declaration.thm _ _ _ _) := tt +| (declaration.cnst _ _ _ _) := ff +| (declaration.ax _ _ _) := tt + +meta def get_all_theorems : tactic (list name) := +do + env ← tactic.get_env, + pure (environment.fold env [] (λdecl nams, + if is_theorem decl then declaration.to_name decl :: nams + else nams)) + +meta def prove_with_name (nam : name) : tactic unit := +do + tactic.applyc nam + ({ md := tactic.transparency.reducible, unify := ff } + : tactic.apply_cfg), + tactic.all_goals tactic.assumption, + pure () + +meta def prove_direct : tactic unit := +do + nams ← get_all_theorems, + list.mfirst (λnam, + do + prove_with_name nam, + tactic.trace ("directly proved by " ++ to_string nam)) + nams + +lemma nat.eq_symm (x y : ℕ) (h : x = y) : + y = x := +by prove_direct + +lemma nat.eq_symm₂ (x y : ℕ) (h : x = y) : + y = x := +by library_search + +lemma list.reverse_twice (xs : list ℕ) : + list.reverse (list.reverse xs) = xs := +by prove_direct + +lemma list.reverse_twice_symm (xs : list ℕ) : + xs = list.reverse (list.reverse xs) := +by prove_direct -- fails + +/-! As a small refinement, we propose a version of `prove_direct` that also +looks for equalities stated in symmetric form. -/ + +meta def prove_direct_symm : tactic unit := +prove_direct +<|> +do { + tactic.applyc `eq.symm, + prove_direct } + +lemma list.reverse_twice₂ (xs : list ℕ) : + list.reverse (list.reverse xs) = xs := +by prove_direct_symm + +lemma list.reverse_twice_symm₂ (xs : list ℕ) : + xs = list.reverse (list.reverse xs) := +by prove_direct_symm + + +/-! ## A Look at Two Predefined Tactics + +Quite a few of Lean's predefined tactics are implemented as metaprograms and +not in C++. We can find these definitions by clicking the name of a construct +in Visual Studio Code while holding the control or command key. 
-/ + +#check tactic.intro +#check tactic.assumption + +end LoVe diff --git a/lean/love07_metaprogramming_exercise_sheet.lean b/lean/love07_metaprogramming_exercise_sheet.lean new file mode 100644 index 0000000..2e6b47d --- /dev/null +++ b/lean/love07_metaprogramming_exercise_sheet.lean @@ -0,0 +1,250 @@ +import .love07_metaprogramming_demo + + +/-! # LoVe Exercise 7: Metaprogramming -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: A Term Exploder + +In this question, we develop a string format for the `expr` metatype. By +default, there is no `has_repr` instance to print a nice string. For example: -/ + +#eval (expr.app (expr.var 0) (expr.var 1) : expr) -- result: `[external]` +#eval (`(λx : ℕ, x + x) : expr) -- result: `[external]` + +/-! 1.1. Define a metafunction `expr.repr` that converts an `expr` into a +`string`. It is acceptable to leave out some fields from the `expr` +constructors, such as the level `l` of a sort, the binder information `bi` of +a `λ` or `∀` binder, and the arguments of the `macro` constructor. + +Hint: Use `name.to_string` to convert a name to a string, and `repr` for other +types that belong to the `has_repr` type class. -/ + +meta def expr.repr : expr → string +| (expr.var n) := "(var " ++ repr n ++ ")" +-- enter the missing cases here + +/-! We register `expr.repr` in the `has_repr` type class, so that we can use +`repr` without qualification in the future, and so that it is available to +`#eval`. We need the `meta` keyword in front of the command we enter. -/ + +meta instance expr.has_repr : has_repr expr := +{ repr := expr.repr } + +/-! 1.2. Test your setup. -/ + +#eval (expr.app (expr.var 0) (expr.var 1) : expr) +#eval (`(λx : ℕ, x + x) : expr) + + +/-! ## Question 2: `destruct_and` on Steroids + +Recall from the lecture that `destruct_and` fails on easy goals such as -/ + +lemma abc_ac₂ (a b c : Prop) (h : a ∧ b ∧ c) : + a ∧ c := +sorry + +/-! 
We will now address this by developing a new tactic called `destro_and`,
+which applies both **des**truction and in**tro**duction rules for conjunction.
+It will also go automatically through the hypotheses instead of taking an
+argument. We will develop it in three steps.
+
+2.1. Develop a tactic `intro_ands` that replaces all goals of the form
+`a ∧ b` with two new goals `a` and `b` systematically, until all top-level
+conjunctions are gone.
+
+For this, we can use tactics such as `tactic.repeat` (which repeatedly applies a
+tactic on all goals and subgoals until the tactic fails on each of the goals)
+and `tactic.applyc` (which can be used to apply a rule, in connection with
+backtick quoting). -/
+
+#check tactic.repeat
+#check tactic.applyc
+
+meta def intro_ands : tactic unit :=
+sorry
+
+lemma abcd_bd (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ d :=
+begin
+  intro_ands,
+  /- The proof state should be as follows:
+
+     2 goals
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ b
+
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ d -/
+  repeat { sorry }
+end
+
+lemma abcd_bacb (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ (a ∧ (c ∧ b)) :=
+begin
+  intro_ands,
+  /- The proof state should be as follows:
+
+     4 goals
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ b
+
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ a
+
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ c
+
+     a b c d : Prop,
+     h : a ∧ (b ∧ c) ∧ d
+     ⊢ b -/
+  repeat { sorry }
+end
+
+/-! 2.2. Develop a tactic `destruct_ands` that replaces hypotheses of the form
+`h : a ∧ b` by two new hypotheses `h_left : a` and `h_right : b` systematically,
+until all top-level conjunctions are gone.
+
+Here is some pseudocode that you can follow:
+
+1. Retrieve the list of hypotheses from the context. This is provided by the
+   metaconstant `tactic.local_context`.
+
+2. Find the first hypothesis (= term) with a type (= proposition) of the form
+   `_ ∧ _`. Here, you can use the `list.mfirst` function, in conjunction with
+   pattern matching. 
You can use `tactic.infer_type` to query the type of a
+   term.
+
+3. Perform a case split on the first found hypothesis. This can be achieved
+   using `tactic.cases`, a metaprogram that corresponds roughly to `cases'`.
+
+4. Repeat.
+
+The above procedure might fail if there exist no hypotheses of the required
+form. Make sure to handle this failure gracefully, for example using
+`tactic.try` or `<|> pure ()`. -/
+
+meta def destruct_ands : tactic unit :=
+sorry
+
+lemma abcd_bd₂ (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ d :=
+begin
+  destruct_ands,
+  /- The proof state should be as follows:
+
+     a b c d : Prop,
+     h_left : a,
+     h_right_right : d,
+     h_right_left_left : b,
+     h_right_left_right : c
+     ⊢ b ∧ d -/
+  sorry
+end
+
+/-! 2.3. Combine the two tactics developed above and `tactic.assumption` to
+implement the desired `destro_and` tactic. -/
+
+meta def destro_and : tactic unit :=
+sorry
+
+lemma abcd_bd₃ (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ d :=
+by destro_and
+
+lemma abcd_bacb₂ (a b c d : Prop) (h : a ∧ (b ∧ c) ∧ d) :
+  b ∧ (a ∧ (c ∧ b)) :=
+by destro_and
+
+lemma abd_bacb (a b c d : Prop) (h : a ∧ b ∧ d) :
+  b ∧ (a ∧ (c ∧ b)) :=
+begin
+  destro_and,
+  /- The proof state should be roughly as follows:
+
+     a b c d : Prop,
+     h_left : a,
+     h_right_left : b,
+     h_right_right : d
+     ⊢ c -/
+  sorry   -- unprovable
+end
+
+/-! 2.4. Provide some more examples for `destro_and` to convince yourself that
+it also works as expected on more complicated examples. -/
+
+-- enter your examples here
+
+
+/-! ## Question 3 (**optional**): A Theorem Finder
+
+We will implement a function that allows us to find theorems by constants
+appearing in their statements. Given a list of constant names, the function
+will list all theorems in which all these constants appear.
+
+You can use the following metaconstants:
+
+* `declaration` contains all data (name, type, value) associated with a
+  declaration understood broadly (e.g., axiom, lemma, constant, etc.). 
+ +* `tactic.get_env` gives us access to the `environment`, a metatype that lists + all `declaration`s (including all theorems). + +* `environment.fold` allows us to walk through the environment and collect data. + +* `expr.fold` allows us to walk through an expression and collect data. + +3.1 (**optional**). Write a metafunction that checks whether an expression +contains a specific constant. + +You can use `expr.fold` to walk through the expression, `||` and `ff` for +Booleans, and `expr.is_constant_of` to check whether an expression is a +constant. -/ + +meta def term_contains (e : expr) (nam : name) : bool := +sorry + +/-! 3.2 (**optional**). Write a metafunction that checks whether an expression +contains **all** constants in a list. + +You can use `list.band` (Boolean and). -/ + +meta def term_contains_all (nams : list name) (e : expr) : bool := +sorry + +/-! 3.3 (**optional**). Produce the list of all theorems that contain all +constants `nams` in their statement. + +`environment.fold` allows you to walk over the list of declarations. With +`declaration.type`, you get the type of a theorem, and with +`declaration.to_name` you get the name. -/ + +meta def list_constants (nams : list name) (e : environment) : list name := +sorry + +/-! Finally, we develop a tactic that uses the above metafunctions to log all +found theorems: -/ + +meta def find_constants (nams : list name) : tactic unit := +do + env ← tactic.get_env, + list.mmap' tactic.trace (list_constants nams env) + +/-! We test the solution. -/ + +run_cmd find_constants [] -- lists all theorems +run_cmd find_constants [`list.map, `function.comp] + +end LoVe diff --git a/lean/love08_operational_semantics_demo.lean b/lean/love08_operational_semantics_demo.lean new file mode 100644 index 0000000..0da711c --- /dev/null +++ b/lean/love08_operational_semantics_demo.lean @@ -0,0 +1,751 @@ +import .lovelib + + +/-! 
# LoVe Demo 8: Operational Semantics
+
+In this and the next two lectures, we will see how to use Lean to specify the
+syntax and semantics of programming languages and to reason about the
+semantics. -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Formal Semantics
+
+A formal semantics helps specify and reason about the programming language
+itself, and about individual programs.
+
+It can form the basis of verified compilers, interpreters, verifiers, static
+analyzers, type checkers, etc. Without formal proofs, these tools are
+**almost always wrong**.
+
+In this area, proof assistants are widely used. Every year, about 10-20% of POPL
+papers are partially or totally formalized. Reasons for this success:
+
+* Little machinery (background libraries, tactics) is needed to get started,
+  beyond inductive types and predicates and recursive functions.
+
+* The proofs tend to have lots of cases, which is a good match for computers.
+
+* Proof assistants keep track of what needs to be changed when we extend the
+  programming language with more features.
+
+Case in point: WebAssembly. To quote Conrad Watt (with some abbreviations):
+
+    We have produced a full Isabelle mechanisation of the core execution
+    semantics and type system of the WebAssembly language. To complete this
+    proof, **several deficiencies** in the official WebAssembly specification,
+    uncovered by our proof and modelling work, needed to be corrected. In some
+    cases, these meant that the type system was **originally unsound**.
+
+    We have maintained a constructive dialogue with the working group,
+    verifying new features as they are added. In particular, the mechanism by
+    which a WebAssembly implementation interfaces with its host environment was
+    not formally specified in the working group's original paper. 
Extending our + mechanisation to model this feature revealed a deficiency in the WebAssembly + specification that **sabotaged the soundness** of the type system. + + +## A Minimalistic Imperative Language + +A state `s` is a function from variable names to values (`string → ℕ`). + +__WHILE__ is a minimalistic imperative language with the following grammar: + + S ::= skip -- no-op + | x := a -- assignment + | S ; S -- sequential composition + | if b then S else S -- conditional statement + | while b do S -- while loop + +where `S` stands for a statement (also called command or program), `x` for a +variable, `a` for an arithmetic expression, and `b` for a Boolean expression. -/ + +#check state + +inductive stmt : Type +| skip : stmt +| assign : string → (state → ℕ) → stmt +| seq : stmt → stmt → stmt +| ite : (state → Prop) → stmt → stmt → stmt +| while : (state → Prop) → stmt → stmt + +infixr ` ;; ` : 90 := stmt.seq + +/-! In our grammar, we deliberately leave the syntax of arithmetic and Boolean +expressions unspecified. In Lean, we have the choice: + +* We could use a type such as `aexp` from lecture 1 and similarly for Boolean + expressions. + +* We could decide that an arithmetic expression is simply a function from + states to natural numbers (`state → ℕ`) and a Boolean expression is a + predicate (`state → Prop` or `state → bool`). + +This corresponds to the difference between deep and shallow embeddings: + +* A __deep embedding__ of some syntax (expression, formula, program, etc.) + consists of an abstract syntax tree specified in the proof assistant + (e.g., `aexp`) with a semantics (e.g., `eval`). + +* In contrast, a __shallow embedding__ simply reuses the corresponding + mechanisms from the logic (e.g., λ-terms, functions and predicate types). + +A deep embedding allows us to reason about the syntax (and its semantics). A +shallow embedding is more lightweight, because we can use it directly, without +having to define a semantics. 
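+
+As a small illustration, here are the two styles side by side for a tiny
+arithmetic language. (This is a sketch of ours, not part of the course library;
+the names `aexp₀`, `eval₀`, and `shallow_x_plus_one` are fresh.) -/
+
+-- a deep embedding: an abstract syntax tree of arithmetic expressions
+inductive aexp₀ : Type
+| num : ℕ → aexp₀
+| var : string → aexp₀
+| add : aexp₀ → aexp₀ → aexp₀
+
+-- its semantics, mapping syntax trees and states to numbers
+def eval₀ (s : state) : aexp₀ → ℕ
+| (aexp₀.num n)     := n
+| (aexp₀.var x)     := s x
+| (aexp₀.add e₁ e₂) := eval₀ e₁ + eval₀ e₂
+
+-- the expression `x + 1`, shallowly embedded: no syntax tree, no `eval₀`
+def shallow_x_plus_one : state → ℕ :=
+λs, s "x" + 1
+
+/-!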
+ +We will use a deep embedding of programs (which we find interesting), and +shallow embeddings of assignments and Boolean expressions (which we find +boring). -/ + +def silly_loop : stmt := +stmt.while (λs, s "x" > s "y") + (stmt.skip ;; stmt.assign "x" (λs, s "x" - 1)) + + +/-! ## Big-Step Semantics + +An __operational semantics__ corresponds to an idealized interpreter (specified +in a Prolog-like language). Two main variants: + +* big-step semantics; + +* small-step semantics. + +In a __big-step semantics__ (also called __natural semantics__), judgments have +the form `(S, s) ⟹ t`: + + Starting in a state `s`, executing `S` terminates in the state `t`. + +Example: + + `(x := x + y; y := 0, [x ↦ 3, y ↦ 5]) ⟹ [x ↦ 8, y ↦ 0]` + +Derivation rules: + + ——————————————— Skip + (skip, s) ⟹ s + + ——————————————————————————— Asn + (x := a, s) ⟹ s[x ↦ s(a)] + + (S, s) ⟹ t (T, t) ⟹ u + ——————————————————————————— Seq + (S; T, s) ⟹ u + + (S, s) ⟹ t + ————————————————————————————— If-True if s(b) is true + (if b then S else T, s) ⟹ t + + (T, s) ⟹ t + ————————————————————————————— If-False if s(b) is false + (if b then S else T, s) ⟹ t + + (S, s) ⟹ t (while b do S, t) ⟹ u + —————————————————————————————————————— While-True if s(b) is true + (while b do S, s) ⟹ u + + ————————————————————————— While-False if s(b) is false + (while b do S, s) ⟹ s + +Above, `s(e)` denotes the value of expression `e` in state `s`. + +In Lean, the judgment corresponds to an inductive predicate, and the derivation +rules correspond to the predicate's introduction rules. Using an inductive +predicate as opposed to a recursive function allows us to cope with +nontermination (e.g., a diverging `while`) and nondeterminism (e.g., +multithreading). 
-/ + +inductive big_step : stmt × state → state → Prop +| skip {s} : + big_step (stmt.skip, s) s +| assign {x a s} : + big_step (stmt.assign x a, s) (s{x ↦ a s}) +| seq {S T s t u} (hS : big_step (S, s) t) + (hT : big_step (T, t) u) : + big_step (S ;; T, s) u +| ite_true {b : state → Prop} {S T s t} (hcond : b s) + (hbody : big_step (S, s) t) : + big_step (stmt.ite b S T, s) t +| ite_false {b : state → Prop} {S T s t} (hcond : ¬ b s) + (hbody : big_step (T, s) t) : + big_step (stmt.ite b S T, s) t +| while_true {b : state → Prop} {S s t u} (hcond : b s) + (hbody : big_step (S, s) t) + (hrest : big_step (stmt.while b S, t) u) : + big_step (stmt.while b S, s) u +| while_false {b : state → Prop} {S s} (hcond : ¬ b s) : + big_step (stmt.while b S, s) s + +infix ` ⟹ ` : 110 := big_step + +lemma silly_loop_from_1_big_step : + (silly_loop, (λ_, 0){"x" ↦ 1}) ⟹ (λ_, 0) := +begin + rw silly_loop, + apply big_step.while_true, + { simp }, + { apply big_step.seq, + { apply big_step.skip }, + { apply big_step.assign } }, + { simp, + apply big_step.while_false, + linarith } +end + + +/-! ## Properties of the Big-Step Semantics + +Equipped with a big-step semantics, we can + +* prove properties of the programming language, such as **equivalence proofs** + between programs and **determinism**; + +* reason about **concrete programs**, proving theorems relating final states `t` + with initial states `s`. 
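+
+For example, the following theorem about a concrete one-statement program
+should be provable by the `assign` introduction rule alone (the lemma name
+`assign_five` is ours, for illustration): -/
+
+lemma assign_five (s : state) :
+  (stmt.assign "x" (λ_, 5), s) ⟹ s{"x" ↦ 5} :=
+big_step.assign
+
+/-! Determinism is a property of the first kind: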
-/ + +lemma big_step_deterministic {S s l r} (hl : (S, s) ⟹ l) + (hr : (S, s) ⟹ r) : + l = r := +begin + induction' hl, + case skip { + cases' hr, + refl }, + case assign { + cases' hr, + refl }, + case seq : S T s t l hS hT ihS ihT { + cases' hr with _ _ _ _ _ _ _ t' _ hS' hT', + cases' ihS hS', + cases' ihT hT', + refl }, + case ite_true : b S T s t hb hS ih { + cases' hr, + { apply ih hr }, + { cc } }, + case ite_false : b S T s t hb hT ih { + cases' hr, + { cc }, + { apply ih hr } }, + case while_true : b S s t u hb hS hw ihS ihw { + cases' hr, + { cases' ihS hr, + cases' ihw hr_1, + refl }, + { cc } }, + { cases' hr, + { cc }, + { refl } } +end + +lemma big_step_terminates {S s} : + ∃t, (S, s) ⟹ t := +sorry -- unprovable + +lemma big_step_doesnt_terminate {S s t} : + ¬ (stmt.while (λ_, true) S, s) ⟹ t := +begin + intro hw, + induction' hw, + case while_true { + assumption }, + case while_false { + cc } +end + +/-! We can define inversion rules about the big-step semantics: -/ + +@[simp] lemma big_step_skip_iff {s t} : + (stmt.skip, s) ⟹ t ↔ t = s := +begin + apply iff.intro, + { intro h, + cases' h, + refl }, + { intro h, + rw h, + exact big_step.skip } +end + +@[simp] lemma big_step_assign_iff {x a s t} : + (stmt.assign x a, s) ⟹ t ↔ t = s{x ↦ a s} := +begin + apply iff.intro, + { intro h, + cases' h, + refl }, + { intro h, + rw h, + exact big_step.assign } +end + +@[simp] lemma big_step_seq_iff {S T s t} : + (S ;; T, s) ⟹ t ↔ (∃u, (S, s) ⟹ u ∧ (T, u) ⟹ t) := +begin + apply iff.intro, + { intro h, + cases' h, + apply exists.intro, + apply and.intro; assumption }, + { intro h, + cases' h, + cases' h, + apply big_step.seq; assumption } +end + +@[simp] lemma big_step_ite_iff {b S T s t} : + (stmt.ite b S T, s) ⟹ t ↔ + (b s ∧ (S, s) ⟹ t) ∨ (¬ b s ∧ (T, s) ⟹ t) := +begin + apply iff.intro, + { intro h, + cases' h, + { apply or.intro_left, + cc }, + { apply or.intro_right, + cc } }, + { intro h, + cases' h; cases' h, + { apply big_step.ite_true; assumption }, + { 
apply big_step.ite_false; assumption } } +end + +lemma big_step_while_iff {b S s u} : + (stmt.while b S, s) ⟹ u ↔ + (∃t, b s ∧ (S, s) ⟹ t ∧ (stmt.while b S, t) ⟹ u) + ∨ (¬ b s ∧ u = s) := +begin + apply iff.intro, + { intro h, + cases' h, + { apply or.intro_left, + apply exists.intro t, + cc }, + { apply or.intro_right, + cc } }, + { intro h, + cases' h, + case inl { + cases' h with t h, + cases' h with hb h, + cases' h with hS hwhile, + exact big_step.while_true hb hS hwhile }, + case inr { + cases' h with hb hus, + rw hus, + exact big_step.while_false hb } } +end + +lemma big_step_while_true_iff {b : state → Prop} {S s u} + (hcond : b s) : + (stmt.while b S, s) ⟹ u ↔ + (∃t, (S, s) ⟹ t ∧ (stmt.while b S, t) ⟹ u) := +by rw big_step_while_iff; simp [hcond] + +@[simp] lemma big_step_while_false_iff {b : state → Prop} + {S s t} (hcond : ¬ b s) : + (stmt.while b S, s) ⟹ t ↔ t = s := +by rw big_step_while_iff; simp [hcond] + + +/-! ## Small-Step Semantics + +A big-step semantics + +* does not let us reason about intermediate states; + +* does not let us express nontermination or interleaving (for multithreading). + +__Small-step semantics__ (also called __structural operational semantics__) +solve the above issues. + +A judgment has the form `(S, s) ⇒ (T, t)`: + + Starting in a state `s`, executing one step of `S` leaves us in the + state `t`, with the program `T` remaining to be executed. + +An execution is a finite or infinite chain `(S₀, s₀) ⇒ (S₁, s₁) ⇒ …`. + +A pair `(S, s)` is called a __configuration__. It is __final__ if no transition +of the form `(S, s) ⇒ _` is possible. 
+ +Example: + + `(x := x + y; y := 0, [x ↦ 3, y ↦ 5])` + `⇒ (skip; y := 0, [x ↦ 8, y ↦ 5])` + `⇒ (y := 0, [x ↦ 8, y ↦ 5])` + `⇒ (skip, [x ↦ 8, y ↦ 0])` + +Derivation rules: + + ————————————————————————————————— Asn + (x := a, s) ⇒ (skip, s[x ↦ s(a)]) + + (S, s) ⇒ (S', s') + ———-————————————————————— Seq-Step + (S ; T, s) ⇒ (S' ; T, s') + + —————————————————————— Seq-Skip + (skip ; S, s) ⇒ (S, s) + + ———————————————————————————————— If-True if s(b) is true + (if b then S else T, s) ⇒ (S, s) + + ———————————————————————————————— If-False if s(b) is false + (if b then S else T, s) ⇒ (T, s) + + ——————————————————————————————————————————————————————————————— While + (while b do S, s) ⇒ (if b then (S ; while b do S) else skip, s) + +There is no rule for `skip` (why?). -/ + +inductive small_step : stmt × state → stmt × state → Prop +| assign {x a s} : + small_step (stmt.assign x a, s) (stmt.skip, s{x ↦ a s}) +| seq_step {S S' T s s'} (hS : small_step (S, s) (S', s')) : + small_step (S ;; T, s) (S' ;; T, s') +| seq_skip {T s} : + small_step (stmt.skip ;; T, s) (T, s) +| ite_true {b : state → Prop} {S T s} (hcond : b s) : + small_step (stmt.ite b S T, s) (S, s) +| ite_false {b : state → Prop} {S T s} (hcond : ¬ b s) : + small_step (stmt.ite b S T, s) (T, s) +| while {b : state → Prop} {S s} : + small_step (stmt.while b S, s) + (stmt.ite b (S ;; stmt.while b S) stmt.skip, s) + +infixr ` ⇒ ` := small_step +infixr ` ⇒* ` : 100 := star small_step + +lemma silly_loop_from_1_small_step : + (silly_loop, (λ_, 0){"x" ↦ 1}) ⇒* + (stmt.skip, ((λ_, 0) : state)) := +begin + rw silly_loop, + apply star.head, + { apply small_step.while }, + { apply star.head, + { apply small_step.ite_true, + simp }, + { apply star.head, + { apply small_step.seq_step, + apply small_step.seq_skip }, + { apply star.head, + { apply small_step.seq_step, + apply small_step.assign }, + { apply star.head, + { apply small_step.seq_skip }, + { apply star.head, + { apply small_step.while }, + { apply star.head, + { 
apply small_step.ite_false, + simp }, + { simp } } } } } } } +end + +/-! Equipped with a small-step semantics, we can **define** a big-step +semantics: + + `(S, s) ⟹ t` if and only if `(S, s) ⇒* (skip, t)` + +where `r*` denotes the reflexive transitive closure of a relation `r`. + +Alternatively, if we have already defined a big-step semantics, we can **prove** +the above equivalence theorem to validate our definitions. + +The main disadvantage of small-step semantics is that we now have two relations, +`⇒` and `⇒*`, and reasoning tends to be more complicated. + + +## Properties of the Small-Step Semantics + +We can prove that a configuration `(S, s)` is final if and only if `S = skip`. +This ensures that we have not forgotten a derivation rule. -/ + +lemma small_step_final (S s) : + (¬ ∃T t, (S, s) ⇒ (T, t)) ↔ S = stmt.skip := +begin + induction' S, + case skip { + simp, + intros T t hstep, + cases' hstep }, + case assign : x a { + simp, + apply exists.intro stmt.skip, + apply exists.intro (s{x ↦ a s}), + exact small_step.assign }, + case seq : S T ihS ihT { + simp, + cases' classical.em (S = stmt.skip), + case inl { + rw h, + apply exists.intro T, + apply exists.intro s, + exact small_step.seq_skip }, + case inr { + simp [h, auto.not_forall_eq, auto.not_not_eq] at ihS, + cases' ihS s with S' hS', + cases' hS' with s' hs', + apply exists.intro (S' ;; T), + apply exists.intro s', + exact small_step.seq_step hs' } }, + case ite : b S T ihS ihT { + simp, + cases' classical.em (b s), + case inl { + apply exists.intro S, + apply exists.intro s, + exact small_step.ite_true h }, + case inr { + apply exists.intro T, + apply exists.intro s, + exact small_step.ite_false h } }, + case while : b S ih { + simp, + apply exists.intro (stmt.ite b (S ;; stmt.while b S) stmt.skip), + apply exists.intro s, + exact small_step.while } +end + +lemma small_step_deterministic {S s Ll Rr} + (hl : (S, s) ⇒ Ll) (hr : (S, s) ⇒ Rr) : + Ll = Rr := +begin + induction' hl, + case assign : x a s 
{ + cases' hr, + refl }, + case seq_step : S S₁ T s s₁ hS₁ ih { + cases' hr, + case seq_step : S S₂ _ _ s₂ hS₂ { + have hSs₁₂ := ih hS₂, + cc }, + case seq_skip { + cases' hS₁ } }, + case seq_skip : T s { + cases' hr, + case seq_step { + cases' hr }, + case seq_skip { + refl } }, + case ite_true : b S T s hcond { + cases' hr, + case ite_true { + refl }, + case ite_false { + cc } }, + case ite_false : b S T s hcond { + cases' hr, + case ite_true { + cc }, + case ite_false { + refl } }, + case while : b S s { + cases' hr, + refl } +end + +/-! We can define inversion rules also about the small-step semantics. Here are +three examples: -/ + +lemma small_step_skip {S s t} : + ¬ ((stmt.skip, s) ⇒ (S, t)) := +by intro h; cases' h + +@[simp] lemma small_step_seq_iff {S T s Ut} : + (S ;; T, s) ⇒ Ut ↔ + (∃S' t, (S, s) ⇒ (S', t) ∧ Ut = (S' ;; T, t)) + ∨ (S = stmt.skip ∧ Ut = (T, s)) := +begin + apply iff.intro, + { intro h, + cases' h, + { apply or.intro_left, + apply exists.intro S', + apply exists.intro s', + cc }, + { apply or.intro_right, + cc } }, + { intro h, + cases' h, + { cases' h, + cases' h, + cases' h, + rw right, + apply small_step.seq_step, + assumption }, + { cases' h, + rw left, + rw right, + apply small_step.seq_skip } } +end + +@[simp] lemma small_step_ite_iff {b S T s Us} : + (stmt.ite b S T, s) ⇒ Us ↔ + (b s ∧ Us = (S, s)) ∨ (¬ b s ∧ Us = (T, s)) := +begin + apply iff.intro, + { intro h, + cases' h, + { apply or.intro_left, + cc }, + { apply or.intro_right, + cc } }, + { intro h, + cases' h, + { cases' h, + rw right, + apply small_step.ite_true, + assumption }, + { cases' h, + rw right, + apply small_step.ite_false, + assumption } } +end + + +/-! ### Equivalence of the Big-Step and the Small-Step Semantics (**optional**) + +A more important result is the connection between the big-step and the +small-step semantics: + + `(S, s) ⟹ t ↔ (S, s) ⇒* (stmt.skip, t)` + +Its proof, given below, is beyond the scope of this course. 
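
+In outline, the proof below goes as follows. The forward direction,
+`star_small_step_of_big_step`, proceeds by induction on the big-step
+derivation, assembling the individual small steps with the `star` combinators.
+The backward direction, `big_step_of_star_small_step`, peels off one small step
+at a time along the reduction chain, each time invoking the key lemma
+
+    `(S₀, s₀) ⇒ (S₁, s₁) → ((S₁, s₁) ⟹ s₂ → (S₀, s₀) ⟹ s₂)`
+
+(`big_step_of_small_step_of_big_step`) to prepend the step to the big-step
+derivation.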
-/ + +lemma star_small_step_seq {S T s u} + (h : (S, s) ⇒* (stmt.skip, u)) : + (S ;; T, s) ⇒* (stmt.skip ;; T, u) := +begin + apply star.lift (λSs, (prod.fst Ss ;; T, prod.snd Ss)) _ h, + intros Ss Ss' h, + cases' Ss, + cases' Ss', + apply small_step.seq_step, + assumption +end + +lemma star_small_step_of_big_step {S s t} (h : (S, s) ⟹ t) : + (S, s) ⇒* (stmt.skip, t) := +begin + induction' h, + case skip { + refl }, + case assign { + exact star.single small_step.assign }, + case seq : S T s t u hS hT ihS ihT { + transitivity, + exact star_small_step_seq ihS, + apply star.head small_step.seq_skip ihT }, + case ite_true : b S T s t hs hst ih { + exact star.head (small_step.ite_true hs) ih }, + case ite_false : b S T s t hs hst ih { + exact star.head (small_step.ite_false hs) ih }, + case while_true : b S s t u hb hS hw ihS ihw { + exact (star.head small_step.while + (star.head (small_step.ite_true hb) + (star.trans (star_small_step_seq ihS) + (star.head small_step.seq_skip ihw)))) }, + case while_false : b S s hb { + exact star.tail (star.single small_step.while) + (small_step.ite_false hb) } +end + +lemma big_step_of_small_step_of_big_step {S₀ S₁ s₀ s₁ s₂} + (h₁ : (S₀, s₀) ⇒ (S₁, s₁)) : + (S₁, s₁) ⟹ s₂ → (S₀, s₀) ⟹ s₂ := +begin + induction' h₁; + simp [*, big_step_while_true_iff] {contextual := tt}, + case seq_step { + intros u hS' hT, + apply exists.intro u, + exact and.intro (ih hS') hT } +end + +lemma big_step_of_star_small_step {S s t} : + (S, s) ⇒* (stmt.skip, t) → (S, s) ⟹ t := +begin + generalize hSs : (S, s) = Ss, + intro h, + induction h + using LoVe.rtc.star.head_induction_on + with _ S's' h h' ih + generalizing S s; + cases' hSs, + { exact big_step.skip }, + { cases' S's' with S' s', + apply big_step_of_small_step_of_big_step h, + apply ih, + refl } +end + +lemma big_step_iff_star_small_step {S s t} : + (S, s) ⟹ t ↔ (S, s) ⇒* (stmt.skip, t) := +iff.intro star_small_step_of_big_step + big_step_of_star_small_step + + +/-! 
## Parallelism (**optional**) -/ + +inductive par_step : + nat → list stmt × state → list stmt × state → Prop +| intro {Ss Ss' S S' s s' i} + (hi : i < list.length Ss) + (hS : S = list.nth_le Ss i hi) + (hs : (S, s) ⇒ (S', s')) + (hS' : Ss' = list.update_nth Ss i S') : + par_step i (Ss, s) (Ss', s') + +lemma par_step_diamond {i j Ss Ts Ts' s t t'} + (hi : i < list.length Ss) + (hj : j < list.length Ss) + (hij : i ≠ j) + (hT : par_step i (Ss, s) (Ts, t)) + (hT' : par_step j (Ss, s) (Ts', t')) : + ∃u Us, par_step j (Ts, t) (Us, u) ∧ + par_step i (Ts', t') (Us, u) := +sorry -- unprovable + +def stmt.W : stmt → set string +| stmt.skip := ∅ +| (stmt.assign x _) := {x} +| (stmt.seq S T) := stmt.W S ∪ stmt.W T +| (stmt.ite _ S T) := stmt.W S ∪ stmt.W T +| (stmt.while _ S) := stmt.W S + +def exp.R {α : Type} : (state → α) → set string +| f := {x | ∀s n, f (s{x ↦ n}) ≠ f s} + +def stmt.R : stmt → set string +| stmt.skip := ∅ +| (stmt.assign _ a) := exp.R a +| (stmt.seq S T) := stmt.R S ∪ stmt.R T +| (stmt.ite b S T) := exp.R b ∪ stmt.R S ∪ stmt.R T +| (stmt.while b S) := exp.R b ∪ stmt.R S + +def stmt.V : stmt → set string +| S := stmt.W S ∪ stmt.R S + +lemma par_step_diamond_VW_disjoint {i j Ss Ts Ts' s t t'} + (hiS : i < list.length Ss) + (hjT : j < list.length Ts) + (hij : i ≠ j) + (hT : par_step i (Ss, s) (Ts, t)) + (hT' : par_step j (Ss, s) (Ts', t')) + (hWV : stmt.W (list.nth_le Ss i hiS) + ∩ stmt.V (list.nth_le Ts j hjT) = ∅) + (hVW : stmt.V (list.nth_le Ss i hiS) + ∩ stmt.W (list.nth_le Ts j hjT) = ∅) : + ∃u Us, par_step j (Ts, t) (Us, u) ∧ + par_step i (Ts', t') (Us, u) := +sorry -- this should be provable + +end LoVe diff --git a/lean/love08_operational_semantics_exercise_sheet.lean b/lean/love08_operational_semantics_exercise_sheet.lean new file mode 100644 index 0000000..8578e10 --- /dev/null +++ b/lean/love08_operational_semantics_exercise_sheet.lean @@ -0,0 +1,185 @@ +import .love08_operational_semantics_demo + + +/-! 
# LoVe Exercise 8: Operational Semantics -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Question 1: Guarded Command Language
+
+In 1976, E. W. Dijkstra introduced the guarded command language (GCL), a
+minimalistic imperative language with built-in nondeterminism. A grammar for one
+of its variants is given below:
+
+    S ::= x := e        -- assignment
+      | assert b        -- assertion
+      | S ; S           -- sequential composition
+      | S | ⋯ | S       -- nondeterministic choice
+      | loop S          -- nondeterministic iteration
+
+Assignment and sequential composition are as in the WHILE language. The other
+statements have the following semantics:
+
+* `assert b` aborts if `b` evaluates to false; otherwise, the command is a
+  no-op.
+
+* `S | ⋯ | S` chooses any of the branches and executes it, ignoring the other
+  branches.
+
+* `loop S` executes `S` any number of times.
+
+In Lean, GCL is captured by the following inductive type: -/
+
+namespace gcl
+
+inductive stmt (σ : Type) : Type
+| assign : string → (σ → ℕ) → stmt
+| assert : (σ → Prop) → stmt
+| seq : stmt → stmt → stmt
+| choice : list stmt → stmt
+| loop : stmt → stmt
+
+infixr ` ;; ` : 90 := stmt.seq
+
+/-! The parameter `σ` abstracts over the state type. It is necessary as a
+workaround for a Lean bug. Below we will take `σ := state`.
+
+1.1. Complete the following big-step semantics, based on the informal
+specification of GCL above. -/
+
+inductive big_step : (stmt state × state) → state → Prop
+-- enter the missing `assign` rule here
+| assert {b : state → Prop} {s} (hcond : b s) :
+  big_step (stmt.assert b, s) s
+-- enter the missing `seq` rule here
+| choice {Ss s t} (i) (hless : i < list.length Ss)
+    (hbody : big_step (list.nth_le Ss i hless, s) t) :
+  big_step (stmt.choice Ss, s) t
+-- enter the missing `loop` rules here
+
+infix ` ⟹ ` : 110 := big_step
+
+/-! 1.2. Prove the following inversion rules, as we did in the lecture for the
+WHILE language.
-/
+
+@[simp] lemma big_step_assign_iff {x a s t} :
+  (stmt.assign x a, s) ⟹ t ↔ t = s{x ↦ a s} :=
+sorry
+
+@[simp] lemma big_step_assert {b s t} :
+  (stmt.assert b, s) ⟹ t ↔ t = s ∧ b s :=
+sorry
+
+@[simp] lemma big_step_seq_iff {S₁ S₂ s t} :
+  (stmt.seq S₁ S₂, s) ⟹ t ↔ (∃u, (S₁, s) ⟹ u ∧ (S₂, u) ⟹ t) :=
+sorry
+
+lemma big_step_loop {S s u} :
+  (stmt.loop S, s) ⟹ u ↔
+  (s = u ∨ (∃t, (S, s) ⟹ t ∧ (stmt.loop S, t) ⟹ u)) :=
+sorry
+
+@[simp] lemma big_step_choice {Ss s t} :
+  (stmt.choice Ss, s) ⟹ t ↔
+  (∃(i : ℕ) (hless : i < list.length Ss),
+     (list.nth_le Ss i hless, s) ⟹ t) :=
+sorry
+
+end gcl
+
+/-! 1.3. Complete the translation below of a deterministic program to a GCL
+program by filling in the `sorry` placeholders. -/
+
+def gcl_of : stmt → gcl.stmt state
+| stmt.skip := gcl.stmt.assert (λ_, true)
+| (stmt.assign x a) :=
+  sorry
+| (S ;; T) :=
+  sorry
+| (stmt.ite b S T) :=
+  sorry
+| (stmt.while b S) :=
+  sorry
+
+/-! 1.4. In the definition of `gcl_of` above, `skip` is translated to
+`assert (λ_, true)`. Looking at the big-step semantics of both constructs, we
+can convince ourselves that it makes sense. Can you think of other correct ways
+to define the `skip` case? -/
+
+-- enter your answer here
+
+
+/-! ## Question 2: Program Equivalence
+
+For this question, we introduce the notion of program equivalence: `S₁ ≈ S₂`. -/
+
+def big_step_equiv (S₁ S₂ : stmt) : Prop :=
+∀s t, (S₁, s) ⟹ t ↔ (S₂, s) ⟹ t
+
+infix ` ≈ ` := big_step_equiv
+
+/-! Program equivalence is an equivalence relation, i.e., it is reflexive,
+symmetric, and transitive.
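
+The `@[refl]`, `@[symm]`, and `@[trans]` attributes on the lemmas below make
+`≈` available to the corresponding tactics and to `calc` proofs, allowing
+equivalence chains such as the following (a hypothetical sketch, assuming
+`h₁₂ : S₁ ≈ S₂` and `h₂₃ : S₂ ≈ S₃`):
+
+    calc S₁ ≈ S₂ : h₁₂
+       ... ≈ S₃ : h₂₃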
-/
+
+@[refl] lemma big_step_equiv.refl {S : stmt} :
+  S ≈ S :=
+fix s t,
+show (S, s) ⟹ t ↔ (S, s) ⟹ t, from
+  by refl
+
+@[symm] lemma big_step_equiv.symm {S₁ S₂ : stmt} :
+  S₁ ≈ S₂ → S₂ ≈ S₁ :=
+assume h : S₁ ≈ S₂,
+fix s t,
+show (S₂, s) ⟹ t ↔ (S₁, s) ⟹ t, from
+  iff.symm (h s t)
+
+@[trans] lemma big_step_equiv.trans {S₁ S₂ S₃ : stmt} (h₁₂ : S₁ ≈ S₂)
+    (h₂₃ : S₂ ≈ S₃) :
+  S₁ ≈ S₃ :=
+fix s t,
+show (S₁, s) ⟹ t ↔ (S₃, s) ⟹ t, from
+  iff.trans (h₁₂ s t) (h₂₃ s t)
+
+/-! 2.1. Prove the following program equivalences. -/
+
+lemma big_step_equiv.skip_assign_id {x} :
+  stmt.assign x (λs, s x) ≈ stmt.skip :=
+sorry
+
+lemma big_step_equiv.seq_skip_left {S : stmt} :
+  stmt.skip ;; S ≈ S :=
+sorry
+
+lemma big_step_equiv.seq_skip_right {S : stmt} :
+  S ;; stmt.skip ≈ S :=
+sorry
+
+lemma big_step_equiv.ite_seq_while {b} {S : stmt} :
+  stmt.ite b (S ;; stmt.while b S) stmt.skip ≈ stmt.while b S :=
+sorry
+
+/-! 2.2 (**optional**). Program equivalence can be used to replace subprograms
+by other subprograms with the same semantics. Prove the following so-called
+congruence rules: -/
+
+lemma big_step_equiv.seq_congr {S₁ S₂ T₁ T₂ : stmt} (hS : S₁ ≈ S₂)
+    (hT : T₁ ≈ T₂) :
+  S₁ ;; T₁ ≈ S₂ ;; T₂ :=
+sorry
+
+lemma big_step_equiv.ite_congr {b} {S₁ S₂ T₁ T₂ : stmt} (hS : S₁ ≈ S₂)
+    (hT : T₁ ≈ T₂) :
+  stmt.ite b S₁ T₁ ≈ stmt.ite b S₂ T₂ :=
+sorry
+
+lemma big_step_equiv.while_congr {b} {S₁ S₂ : stmt} (hS : S₁ ≈ S₂) :
+  stmt.while b S₁ ≈ stmt.while b S₂ :=
+sorry
+
+end LoVe
diff --git a/lean/love09_hoare_logic_demo.lean b/lean/love09_hoare_logic_demo.lean
new file mode 100644
index 0000000..461b697
--- /dev/null
+++ b/lean/love09_hoare_logic_demo.lean
@@ -0,0 +1,523 @@
+import .love08_operational_semantics_demo
+
+
+/-! # LoVe Demo 9: Hoare Logic
+
+We review a second way to specify the semantics of a programming language: Hoare
+logic.
If operational semantics corresponds to an idealized interpreter,
+__Hoare logic__ (also called __axiomatic semantics__) corresponds to a verifier.
+Hoare logic is particularly convenient for reasoning about concrete programs. -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## First Things First: Formalization Projects
+
+Instead of two of the homework sheets, you can do a verification project, worth
+20 points. If you choose to do so, please send your lecturer a message by email
+by the end of the week. For a fully successful project, we expect about 200 (or
+more) lines of Lean, including definitions and proofs.
+
+Some ideas for projects follow.
+
+Computer science:
+
+* extended WHILE language with static arrays or other features;
+* functional data structures (e.g., balanced trees);
+* functional algorithms (e.g., bubble sort, merge sort, Tarjan's algorithm);
+* compiler from expressions or imperative programs to, e.g., stack machine;
+* type systems (e.g., Benjamin Pierce's __Types and Programming Languages__);
+* security properties (e.g., Volpano–Smith-style noninterference analysis);
+* theory of first-order terms, including matching and term rewriting;
+* automata theory;
+* normalization of context-free grammars or regular expressions;
+* process algebras and bisimilarity;
+* soundness and possibly completeness of proof systems (e.g., Gentzen's sequent
+  calculus, natural deduction, tableaux);
+* separation logic;
+* verified program using Hoare logic.
+
+Mathematics:
+
+* graphs;
+* combinatorics;
+* number theory.
+
+Evaluation from 2018–2019:
+
+Q: How did you find the project?
+
+A: Enjoyable.
+
+A: Fun and hard.
+
+A: Good, I think the format was excellent in a way that it gave people the
+   chance to do challenging exercises and hand them in incomplete.
+
+A: I really really liked it. I think it's a great way of learning—find
+   something you like, dig in it a little, get stuck, ask for help.
I wish I + could do more of that! + +A: It was great to have some time to try to work out some stuff you find + interesting yourself. + +A: lots of fun actually!!! + +A: Very helpful. It gave the opportunity to spend some more time on a + particular aspect of the course. + + +## Hoare Triples + +The basic judgments of Hoare logic are often called __Hoare triples__. They have +the form + + `{P} S {Q}` + +where `S` is a statement, and `P` and `Q` (called __precondition__ and +__postcondition__) are logical formulas over the state variables. + +Intended meaning: + + If `P` holds before `S` is executed and the execution terminates normally, + `Q` holds at termination. + +This is a __partial correctness__ statement: The program is correct if it +terminates normally (i.e., no run-time error, no infinite loop or divergence). + +All of these Hoare triples are valid (with respect to the intended meaning): + + `{true} b := 4 {b = 4}` + `{a = 2} b := 2 * a {a = 2 ∧ b = 4}` + `{b ≥ 5} b := b + 1 {b ≥ 6}` + `{false} skip {b = 100}` + `{true} while i ≠ 100 do i := i + 1 {i = 100}` + + +## Hoare Rules + +The following is a complete set of rules for reasoning about WHILE programs: + + ———————————— Skip + {P} skip {P} + + ——————————————————— Asn + {Q[a/x]} x := a {Q} + + {P} S {R} {R} S' {Q} + —————————————————————— Seq + {P} S; S' {Q} + + {P ∧ b} S {Q} {P ∧ ¬b} S' {Q} + ——————————————————————————————— If + {P} if b then S else S' {Q} + + {I ∧ b} S {I} + ————————————————————————— While + {I} while b do S {I ∧ ¬b} + + P' → P {P} S {Q} Q → Q' + ——————————————————————————— Conseq + {P'} S {Q'} + +`Q[a/x]` denotes `Q` with `x` replaced by `a`. + +In the `While` rule, `I` is called an __invariant__. + +Except for `Conseq`, the rules are syntax-driven: by looking at a program, we +see immediately which rule to apply. 
+ +Example derivations: + + —————————————————————— Asn —————————————————————— Asn + {a = 2} b := a {b = 2} {b = 2} c := b {c = 2} + ——————————————————————————————————————————————————— Seq + {a = 2} b := a; c := b {c = 2} + + + —————————————————————— Asn + x > 10 → x > 5 {x > 5} y := x {y > 5} y > 5 → y > 0 + ——————————————————————————————————————————————————————— Conseq + {x > 10} y := x {y > 0} + +Various __derived rules__ can be proved to be correct in terms of the standard +rules. For example, we can derive bidirectional rules for `skip`, `:=`, and +`while`: + + P → Q + ———————————— Skip' + {P} skip {Q} + + P → Q[a/x] + —————————————— Asn' + {P} x := a {Q} + + {P ∧ b} S {P} P ∧ ¬b → Q + —————————————————————————— While' + {P} while b do S {Q} + + +## A Semantic Approach to Hoare Logic + +We can, and will, define Hoare triples **semantically** in Lean. + +We will use predicates on states (`state → Prop`) to represent pre- and +postconditions, following the shallow embedding style. -/ + +def partial_hoare (P : state → Prop) (S : stmt) + (Q : state → Prop) : Prop := +∀s t, P s → (S, s) ⟹ t → Q t + +notation `{* ` P : 1 ` *} ` S : 1 ` {* ` Q : 1 ` *}` := +partial_hoare P S Q + +namespace partial_hoare + +lemma skip_intro {P} : + {* P *} stmt.skip {* P *} := +begin + intros s t hs hst, + cases' hst, + assumption +end + +lemma assign_intro (P : state → Prop) {x} {a : state → ℕ} : + {* λs, P (s{x ↦ a s}) *} stmt.assign x a {* P *} := +begin + intros s t P hst, + cases' hst, + assumption +end + +lemma seq_intro {P Q R S T} (hS : {* P *} S {* Q *}) + (hT : {* Q *} T {* R *}) : + {* P *} S ;; T {* R *} := +begin + intros s t hs hst, + cases' hst, + apply hT, + { apply hS, + { exact hs }, + { assumption } }, + { assumption } +end + +lemma ite_intro {b P Q : state → Prop} {S T} + (hS : {* λs, P s ∧ b s *} S {* Q *}) + (hT : {* λs, P s ∧ ¬ b s *} T {* Q *}) : + {* P *} stmt.ite b S T {* Q *} := +begin + intros s t hs hst, + cases' hst, + { apply hS, + exact and.intro hs 
hcond, + assumption }, + { apply hT, + exact and.intro hs hcond, + assumption } +end + +lemma while_intro (P : state → Prop) {b : state → Prop} {S} + (h : {* λs, P s ∧ b s *} S {* P *}) : + {* P *} stmt.while b S {* λs, P s ∧ ¬ b s *} := +begin + intros s t hs hst, + induction' hst, + case while_true { + apply ih_hst_1 P h, + exact h _ _ (and.intro hs hcond) hst }, + case while_false { + exact and.intro hs hcond } +end + +lemma consequence {P P' Q Q' : state → Prop} {S} + (h : {* P *} S {* Q *}) (hp : ∀s, P' s → P s) + (hq : ∀s, Q s → Q' s) : + {* P' *} S {* Q' *} := +fix s t, +assume hs : P' s, +assume hst : (S, s) ⟹ t, +show Q' t, from + hq _ (h s t (hp s hs) hst) + +lemma consequence_left (P' : state → Prop) {P Q S} + (h : {* P *} S {* Q *}) (hp : ∀s, P' s → P s) : + {* P' *} S {* Q *} := +consequence h hp (by cc) + +lemma consequence_right (Q) {Q' : state → Prop} {P S} + (h : {* P *} S {* Q *}) (hq : ∀s, Q s → Q' s) : + {* P *} S {* Q' *} := +consequence h (by cc) hq + +lemma skip_intro' {P Q : state → Prop} (h : ∀s, P s → Q s) : + {* P *} stmt.skip {* Q *} := +consequence skip_intro h (by cc) + +lemma assign_intro' {P Q : state → Prop} {x} {a : state → ℕ} + (h : ∀s, P s → Q (s{x ↦ a s})): + {* P *} stmt.assign x a {* Q *} := +consequence (assign_intro Q) h (by cc) + +lemma seq_intro' {P Q R S T} (hT : {* Q *} T {* R *}) + (hS : {* P *} S {* Q *}) : + {* P *} S ;; T {* R *} := +seq_intro hS hT + +lemma while_intro' {b P Q : state → Prop} {S} + (I : state → Prop) + (hS : {* λs, I s ∧ b s *} S {* I *}) + (hP : ∀s, P s → I s) + (hQ : ∀s, ¬ b s → I s → Q s) : + {* P *} stmt.while b S {* Q *} := +consequence (while_intro I hS) hP (by finish) + +/-! `finish` applies a combination of techniques, including normalization of +logical connectives and quantifiers, simplification, congruence closure, and +quantifier instantiation. It either fully succeeds or fails. 
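
+The next two lemmas derive two directed variants of the assignment rule: a
+forward (Floyd-style) rule, which pushes an arbitrary precondition `P` through
+the assignment (the `∃n₀` remembers the previous value of `x`), and a backward
+rule, which computes a precondition from an arbitrary postcondition `Q`:
+
+    `{P} x := a {λs, ∃n₀, P (s{x ↦ n₀}) ∧ s x = a (s{x ↦ n₀})}`
+    `{λs, ∃n', Q (s{x ↦ n'}) ∧ n' = a s} x := a {Q}`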
-/ + +lemma assign_intro_forward (P) {x a} : + {* P *} + stmt.assign x a + {* λs, ∃n₀, P (s{x ↦ n₀}) ∧ s x = a (s{x ↦ n₀}) *} := +begin + apply assign_intro', + intros s hP, + apply exists.intro (s x), + simp [*] +end + +lemma assign_intro_backward (Q : state → Prop) {x} + {a : state → ℕ} : + {* λs, ∃n', Q (s{x ↦ n'}) ∧ n' = a s *} + stmt.assign x a + {* Q *} := +begin + apply assign_intro', + intros s hP, + cases' hP, + cc +end + +end partial_hoare + + +/-! ## First Program: Exchanging Two Variables -/ + +def SWAP : stmt := +stmt.assign "t" (λs, s "a") ;; +stmt.assign "a" (λs, s "b") ;; +stmt.assign "b" (λs, s "t") + +lemma SWAP_correct (a₀ b₀ : ℕ) : + {* λs, s "a" = a₀ ∧ s "b" = b₀ *} + SWAP + {* λs, s "a" = b₀ ∧ s "b" = a₀ *} := +begin + apply partial_hoare.seq_intro', + apply partial_hoare.seq_intro', + apply partial_hoare.assign_intro, + apply partial_hoare.assign_intro, + apply partial_hoare.assign_intro', + simp { contextual := tt } +end + +lemma SWAP_correct₂ (a₀ b₀ : ℕ) : + {* λs, s "a" = a₀ ∧ s "b" = b₀ *} + SWAP + {* λs, s "a" = b₀ ∧ s "b" = a₀ *} := +begin + intros s t hP hstep, + cases' hstep, + cases' hstep, + cases' hstep_1, + cases' hstep_1_1, + cases' hstep_1, + finish +end + + +/-! ## Second Program: Adding Two Numbers -/ + +def ADD : stmt := +stmt.while (λs, s "n" ≠ 0) + (stmt.assign "n" (λs, s "n" - 1) ;; + stmt.assign "m" (λs, s "m" + 1)) + +lemma ADD_correct (n₀ m₀ : ℕ) : + {* λs, s "n" = n₀ ∧ s "m" = m₀ *} + ADD + {* λs, s "n" = 0 ∧ s "m" = n₀ + m₀ *} := +partial_hoare.while_intro' (λs, s "n" + s "m" = n₀ + m₀) + begin + apply partial_hoare.seq_intro', + { apply partial_hoare.assign_intro }, + { apply partial_hoare.assign_intro', + simp, + intros s hnm hnz, + rw ←hnm, + cases s "n", + { finish }, + { simp [nat.succ_eq_add_one], + linarith } } + end + (by simp { contextual := true }) + (by simp { contextual := true }) + +/-! How did we come up with this invariant? The invariant must + +1. be true before we enter the loop; + +2. 
remain true after each iteration of the loop if it was true before the + iteration; + +3. be strong enough to imply the desired loop postcondition. + +The invariant `true` meets 1 and 2 but usually not 3. Similarly, `false` meets +2 and 3 but usually not 1. Suitable invariants are often of the form + +__work done__ + __work remaining__ = __desired result__ + +where `+` is some suitable operator. When we enter the loop, __work done__ will +often be `0`. And when we exit the loop, __work remaining__ should be `0`. + +For the `ADD` loop: + +* __work done__ is `m`; +* __work remaining__ is `n`; +* __desired result__ is `n₀ + m₀`. + + +## A Verification Condition Generator + +__Verification condition generators__ (VCGs) are programs that apply Hoare rules +automatically, producing __verification conditions__ that must be proved by the +user. The user must usually also provide strong enough loop invariants, as an +annotation in their programs. + +We can use Lean's metaprogramming framework to define a simple VCG. + +Hundreds of program verification tools are based on these principles. + +VCGs typically work backwards from the postcondition, using backward rules +(rules stated to have an arbitrary `Q` as their postcondition). This works well +because `Asn` is backward. 
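
+To supply invariants to our VCG, we annotate them directly in the program: the
+construct `stmt.while_inv I b S` defined below unfolds to plain
+`stmt.while b S`, so the invariant `I` is a phantom argument whose only purpose
+is to be picked up by the `vcg` tactic. For example, the annotated loop of the
+`ADD` program reads
+
+    `stmt.while_inv (λs, s "n" + s "m" = n₀ + m₀) (λs, s "n" ≠ 0) …`
+
+as in `ADD_correct₂` below.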
-/ + +def stmt.while_inv (I b : state → Prop) (S : stmt) : stmt := +stmt.while b S + +namespace partial_hoare + +lemma while_inv_intro {b I Q : state → Prop} {S} + (hS : {* λs, I s ∧ b s *} S {* I *}) + (hQ : ∀s, ¬ b s → I s → Q s) : + {* I *} stmt.while_inv I b S {* Q *} := +while_intro' I hS (by cc) hQ + +lemma while_inv_intro' {b I P Q : state → Prop} {S} + (hS : {* λs, I s ∧ b s *} S {* I *}) + (hP : ∀s, P s → I s) (hQ : ∀s, ¬ b s → I s → Q s) : + {* P *} stmt.while_inv I b S {* Q *} := +while_intro' I hS hP hQ + +end partial_hoare + +meta def vcg : tactic unit := +do + t ← tactic.target, + match t with + | `({* %%P *} %%S {* _ *}) := + match S with + | `(stmt.skip) := + tactic.applyc + (if expr.is_mvar P then ``partial_hoare.skip_intro + else ``partial_hoare.skip_intro') + | `(stmt.assign _ _) := + tactic.applyc + (if expr.is_mvar P then ``partial_hoare.assign_intro + else ``partial_hoare.assign_intro') + | `(stmt.seq _ _) := + tactic.applyc ``partial_hoare.seq_intro'; vcg + | `(stmt.ite _ _ _) := + tactic.applyc ``partial_hoare.ite_intro; vcg + | `(stmt.while_inv _ _ _) := + tactic.applyc + (if expr.is_mvar P then ``partial_hoare.while_inv_intro + else ``partial_hoare.while_inv_intro'); + vcg + | _ := + tactic.fail (to_fmt "cannot analyze " ++ to_fmt S) + end + | _ := pure () + end + +end LoVe + +/-! Register `vcg` as a proper tactic: -/ + +meta def tactic.interactive.vcg : tactic unit := +LoVe.vcg + +namespace LoVe + + +/-! 
## Second Program Revisited: Adding Two Numbers -/ + +lemma ADD_correct₂ (n₀ m₀ : ℕ) : + {* λs, s "n" = n₀ ∧ s "m" = m₀ *} + ADD + {* λs, s "n" = 0 ∧ s "m" = n₀ + m₀ *} := +show {* λs, s "n" = n₀ ∧ s "m" = m₀ *} + stmt.while_inv (λs, s "n" + s "m" = n₀ + m₀) + (λs, s "n" ≠ 0) + (stmt.assign "n" (λs, s "n" - 1) ;; + stmt.assign "m" (λs, s "m" + 1)) + {* λs, s "n" = 0 ∧ s "m" = n₀ + m₀ *}, from + begin + vcg; simp { contextual := tt }, + intros s hnm hnz, + rw ←hnm, + cases s "n", + { finish }, + { simp [nat.succ_eq_add_one], + linarith } + end + + +/-! ## Hoare Triples for Total Correctness + +__Total correctness__ asserts that the program not only is partially correct but +also that it always terminates normally. Hoare triples for total correctness +have the form + + [P] S [Q] + +Intended meaning: + + If `P` holds before `S` is executed, the execution terminates normally and + `Q` holds in the final state. + +For deterministic programs, an equivalent formulation is as follows: + + If `P` holds before `S` is executed, there exists a state in which execution + terminates normally and `Q` holds in that state. + +Example: + + `[i ≤ 100] while i ≠ 100 do i := i + 1 [i = 100]` + +In our WHILE language, this only affects while loops, which must now be +annotated by a __variant__ `V` (a natural number that decreases with each +iteration): + + [I ∧ b ∧ V = v₀] S [I ∧ V < v₀] + ——————————————————————————————— While-Var + [I] while b do S [I ∧ ¬b] + +What is a suitable variant for the example above? -/ + +end LoVe diff --git a/lean/love09_hoare_logic_exercise_sheet.lean b/lean/love09_hoare_logic_exercise_sheet.lean new file mode 100644 index 0000000..c475a59 --- /dev/null +++ b/lean/love09_hoare_logic_exercise_sheet.lean @@ -0,0 +1,131 @@ +import .love09_hoare_logic_demo + + +/-! # LoVe Exercise 9: Hoare Logic -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! 
## Question 1: Program Verification + +The following WHILE program is intended to compute the Gaussian sum up to `n`, +leaving the result in `r`. -/ + +def GAUSS : stmt := +stmt.assign "r" (λs, 0) ;; +stmt.while (λs, s "n" ≠ 0) + (stmt.assign "r" (λs, s "r" + s "n") ;; + stmt.assign "n" (λs, s "n" - 1)) + +/-! The summation function: -/ + +def sum_upto : ℕ → ℕ +| 0 := 0 +| (n + 1) := n + 1 + sum_upto n + +/-! 1.1. Prove the correctness of `GAUSS` using `vcg`. The main challenge is to +figure out which invariant to use for the while loop. The invariant should +capture both the work that has been done already (the intermediate result) and +the work that remains to be done. -/ + +lemma GAUSS_correct (n₀ : ℕ) : + {* λs, s "n" = n₀ *} GAUSS {* λs, s "r" = sum_upto n₀ *} := +sorry + +/-! 1.2. The following WHILE program is intended to compute the product of `n` +and `m`, leaving the result in `r`. Prove its correctness using `vcg`. + +Hint: If a variable `x` does not change in a program, it might be useful to +record this in the invariant, by adding a conjunct `s "x" = x₀`. -/ + +def MUL : stmt := +stmt.assign "r" (λs, 0) ;; +stmt.while (λs, s "n" ≠ 0) + (stmt.assign "r" (λs, s "r" + s "m") ;; + stmt.assign "n" (λs, s "n" - 1)) + +lemma MUL_correct (n₀ m₀ : ℕ) : + {* λs, s "n" = n₀ ∧ s "m" = m₀ *} MUL {* λs, s "r" = n₀ * m₀ *} := +sorry + + +/-! ## Question 2: Hoare Triples for Total Correctness -/ + +def total_hoare (P : state → Prop) (S : stmt) (Q : state → Prop) : Prop := +∀s, P s → ∃t, (S, s) ⟹ t ∧ Q t + +notation `[* ` P : 1 ` *] ` S : 1 ` [* ` Q : 1 ` *]` := +total_hoare P S Q + +namespace total_hoare + +/-! 2.1. Prove the consequence rule. -/ + +lemma consequence {P P' Q Q' : state → Prop} {S} + (hS : [* P *] S [* Q *]) (hP : ∀s, P' s → P s) (hQ : ∀s, Q s → Q' s) : + [* P' *] S [* Q' *] := +sorry + +/-! 2.2. Prove the rule for `skip`. -/ + +lemma skip_intro {P} : + [* P *] stmt.skip [* P *] := +sorry + +/-! 2.3. Prove the rule for `assign`. 
-/
+
+lemma assign_intro {P : state → Prop} {x} {a : state → ℕ} :
+  [* λs, P (s{x ↦ a s}) *] stmt.assign x a [* P *] :=
+sorry
+
+/-! 2.4. Prove the rule for `seq`. -/
+
+lemma seq_intro {P Q R S T} (hS : [* P *] S [* Q *]) (hT : [* Q *] T [* R *]) :
+  [* P *] S ;; T [* R *] :=
+sorry
+
+/-! 2.5. Complete the proof of the rule for `ite`.
+
+Hint: This requires a case distinction on the truth value of `b s`. -/
+
+lemma ite_intro {b P Q : state → Prop} {S T}
+    (hS : [* λs, P s ∧ b s *] S [* Q *])
+    (hT : [* λs, P s ∧ ¬ b s *] T [* Q *]) :
+  [* P *] stmt.ite b S T [* Q *] :=
+sorry
+
+/-! 2.6 (**optional**). Try to prove the rule for `while`.
+
+The rule is parameterized by a loop invariant `I` and by a variant `V` that
+decreases with each iteration of the loop body.
+
+Before we prove the desired lemma, we introduce an auxiliary lemma. Its proof
+requires well-founded induction. When using `while_var_intro_aux` as the
+induction hypothesis, we recommend invoking it directly after proving that the
+argument is less than `v₀`:
+
+    have ih : ∃u, (stmt.while b S, t) ⟹ u ∧ I u ∧ ¬ b u :=
+      have V t < v₀ :=
+        …,
+      while_var_intro_aux (V t) …,
+
+Similarly to `ite`, the proof requires a case distinction on `b s ∨ ¬ b s`.
-/ + +lemma while_var_intro_aux {b : state → Prop} (I : state → Prop) (V : state → ℕ) + {S} (h_inv : ∀v₀, [* λs, I s ∧ b s ∧ V s = v₀ *] S [* λs, I s ∧ V s < v₀ *]) : + ∀v₀ s, V s = v₀ → I s → ∃t, (stmt.while b S, s) ⟹ t ∧ I t ∧ ¬ b t +| v₀ s V_eq hs := +sorry + +lemma while_var_intro {b : state → Prop} (I : state → Prop) (V : state → ℕ) {S} + (hinv : ∀v₀, [* λs, I s ∧ b s ∧ V s = v₀ *] S [* λs, I s ∧ V s < v₀ *]) : + [* I *] stmt.while b S [* λs, I s ∧ ¬ b s *] := +sorry + +end total_hoare + +end LoVe diff --git a/lean/love10_denotational_semantics_demo.lean b/lean/love10_denotational_semantics_demo.lean new file mode 100644 index 0000000..ac9ea89 --- /dev/null +++ b/lean/love10_denotational_semantics_demo.lean @@ -0,0 +1,493 @@ +import .love08_operational_semantics_demo + + +/-! # LoVe Demo 10: Denotational Semantics + +We review a third way to specify the semantics of a programming language: +denotational semantics. Denotational semantics attempt to directly specify the +meaning of programs. + +If operational semantics is an idealized interpreter and Hoare logic is an +idealized verifier, then denotational semantics is an idealized compiler. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Compositionality + +A __denotational semantics__ defines the meaning of each program as a +mathematical object: + + `⟦ ⟧ : syntax → semantics` + +A key property of denotational semantics is __compositionality__: The meaning of +a compound statement should be defined in terms of the meaning of its +components. This disqualifies + + `⟦S⟧ = {st | (S, prod.fst st) ⟹ prod.snd st}` + +i.e. + + `⟦S⟧ = {(s, t) | (S, s) ⟹ t}` + +because operational semantics are not compositional. + +In short, we want + + `⟦S ; T⟧ = … ⟦S⟧ … ⟦T⟧ …` + `⟦if b then S else T⟧ = … ⟦S⟧ … ⟦T⟧ …` + `⟦while b do S⟧ = … ⟦S⟧ …` + +An evaluation function on arithmetic expressions + + `eval : aexp → ((string → ℤ) → ℤ)` + +is a denotational semantics. 
We want the same for imperative programs. + + +## A Relational Denotational Semantics + +We can represent the semantics of an imperative program as a function from +initial state to final state or more generally as a relation between initial +state and final state: `set (state × state)`. + +For `skip`, `:=`, `;`, and `if then else`, the denotational semantics is +easy: -/ + +namespace sorry_defs + +def denote : stmt → set (state × state) +| stmt.skip := Id +| (stmt.assign n a) := + {st | prod.snd st = (prod.fst st){n ↦ a (prod.fst st)}} +| (stmt.seq S T) := denote S ◯ denote T +| (stmt.ite b S T) := + (denote S ⇃ b) ∪ (denote T ⇃ (λs, ¬ b s)) +| (stmt.while b S) := sorry + +end sorry_defs + +/-! We write `⟦S⟧` for `denote S`. For `while`, we would like to write + + `((denote S ◯ denote (stmt.while b S)) ⇃ b) ∪ (Id ⇃ (λs, ¬ b s))` + +but this is ill-founded due to the recursive call on `stmt.while b S`. + +What we are looking for is an `X` such that + + `X = ((denote S ◯ X) ⇃ b) ∪ (Id ⇃ (λs, ¬ b s))` + +In other words, we are looking for a fixpoint. + +Most of this lecture is concerned with building a least fixpoint operator +`lfp` that will allow us to define the `while` case as well: + + `lfp (λX, ((denote S ◯ X) ⇃ b) ∪ (Id ⇃ (λs, ¬ b s)))` + + +## Fixpoints + +A __fixpoint__ (or fixed point) of `f` is a solution for `X` in the equation + + `X = f X` + +In general, fixpoints may not exist at all (e.g., `f := nat.succ`), or there may +be several fixpoints (e.g., `f := id`). But under some conditions on `f`, a +unique __least fixpoint__ and a unique __greatest fixpoint__ are guaranteed to +exist. + +Consider this __fixpoint equation__: + + `X = (λ(p : ℕ → Prop) (n : ℕ), n = 0 ∨ ∃m : ℕ, n = m + 2 ∧ p m) X` + `= (λn : ℕ, n = 0 ∨ ∃m : ℕ, n = m + 2 ∧ X m)` + +where `X : ℕ → Prop` and +`f := (λ(p : ℕ → Prop) (n : ℕ), n = 0 ∨ ∃m : ℕ, n = m + 2 ∧ p m)`. + +The above example admits only one fixpoint. The fixpoint equation uniquely +specifies `X` as the set of even numbers. 
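
+In Lean, this fixpoint is precisely what an inductive predicate gives us. As a
+sketch (the name `even` is ours):
+
+    inductive even : ℕ → Prop
+    | zero            : even 0
+    | add_two {n : ℕ} : even n → even (n + 2)
+
+The two introduction rules correspond to the two disjuncts of `f`, and the
+predicate being defined is the least one closed under them.
+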
+ +In general, the least and greatest fixpoint may be different: + + `X = X` + +Here, the least fixpoint is `(λ_, False)` and the greatest fixpoint is +`(λ_, True)`. Conventionally, `False < True`, and thus +`(λ_, False) < (λ_, True)`. Similarly, `∅ < @set.univ α` (assuming `α` is +inhabited). + +For the semantics of programming languages: + +* `X` will have type `set (state × state)` (which is isomorphic to + `state → state → Prop`), representing relations between states; + +* `f` will correspond to either taking one extra iteration of the loop (if the + condition `b` is true) or the identity (if `b` is false). + +Kleene's fixpoint theorem: + + `f^0(∅) ∪ f^1(∅) ∪ f^2(∅) ∪ ⋯ = lfp f` + +The least fixpoint corresponds to finite executions of a program, which is all +we care about. + +**Key observation**: + + Inductive predicates correspond to least fixpoints, but they are built into + Lean's logic (the calculus of inductive constructions). + + +## Monotone Functions + +Let `α` and `β` be types with partial order `≤`. A function `f : α → β` is +__monotone__ if + + `a ≤ b → f a ≤ f b` for all `a`, `b` + +Many operations on sets (e.g., `∪`), relations (e.g., `◯`), and functions +(e.g., `λx, x`, `λ_, k`, `∘`) are monotone or preserve monotonicity. + +All monotone functions `f : set α → set α` admit least and greatest fixpoints. + +**Example of a nonmonotone function**: + + `f A = (if A = ∅ then set.univ else ∅)` + +Assuming `α` is inhabited, we have `∅ ⊆ set.univ`, but +`f ∅ = set.univ ⊈ ∅ = f set.univ`. 
-/
+
+def monotone {α β : Type} [partial_order α] [partial_order β]
+  (f : α → β) : Prop :=
+∀a₁ a₂, a₁ ≤ a₂ → f a₁ ≤ f a₂
+
+lemma monotone_id {α : Type} [partial_order α] :
+  monotone (λa : α, a) :=
+begin
+  intros a₁ a₂ ha,
+  exact ha
+end
+
+lemma monotone_const {α β : Type} [partial_order α]
+    [partial_order β] (b : β) :
+  monotone (λ_ : α, b) :=
+begin
+  intros a₁ a₂ ha,
+  exact le_refl b
+end
+
+lemma monotone_union {α β : Type} [partial_order α]
+    (f g : α → set β) (hf : monotone f) (hg : monotone g) :
+  monotone (λa, f a ∪ g a) :=
+begin
+  intros a₁ a₂ ha b hb,
+  cases' hb,
+  { exact or.intro_left _ (hf a₁ a₂ ha h) },
+  { exact or.intro_right _ (hg a₁ a₂ ha h) }
+end
+
+/-! We will prove the following two lemmas in the exercise. -/
+
+namespace sorry_lemmas
+
+lemma monotone_comp {α β : Type} [partial_order α]
+    (f g : α → set (β × β)) (hf : monotone f)
+    (hg : monotone g) :
+  monotone (λa, f a ◯ g a) :=
+sorry
+
+lemma monotone_restrict {α β : Type} [partial_order α]
+    (f : α → set (β × β)) (p : β → Prop) (hf : monotone f) :
+  monotone (λa, f a ⇃ p) :=
+sorry
+
+end sorry_lemmas
+
+
+/-! ## Complete Lattices
+
+To define the least fixpoint on sets, we need `⊆` and `⋂`. Complete lattices
+capture this concept abstractly. A __complete lattice__ is an ordered type `α`
+for which each set of type `set α` has an infimum.
+
+More precisely, a complete lattice consists of
+
+* a partial order `≤ : α → α → Prop` (i.e., a reflexive, transitive, and
+  antisymmetric binary predicate);
+
+* an operator `Inf : set α → α`, called __infimum__.
+
+Moreover, `Inf A` must satisfy these two properties:
+
+* `Inf A` is a lower bound of `A`: `Inf A ≤ b` for all `b ∈ A`;
+
+* `Inf A` is a greatest lower bound: `b ≤ Inf A` for all `b` such that
+  `∀a, a ∈ A → b ≤ a`.
+
+**Warning:** `Inf A` is not necessarily an element of `A`.
+
+Examples:
+
+* `set α` is an instance w.r.t. `⊆` and `⋂` for all `α`;
+* `Prop` is an instance w.r.t.
`→` and `∀` (`Inf A := ∀a ∈ A, a`);
+* `enat := ℕ ∪ {∞}`;
+* `ereal := ℝ ∪ {- ∞, ∞}`;
+* `β → α` if `α` is a complete lattice;
+* `α × β` if `α`, `β` are complete lattices.
+
+Finite example (with apologies for the ASCII art):
+
+        Z          Inf {} = ?
+       / \         Inf {Z} = ?
+      A   B        Inf {A, B} = ?
+       \ /         Inf {Z, A} = ?
+        Y          Inf {Z, A, B, Y} = ?
+
+Nonexamples:
+
+* `ℕ`, `ℤ`, `ℚ`, `ℝ`: no infimum for `∅`, `Inf ℤ`, etc.
+* `erat := ℚ ∪ {- ∞, ∞}`: `Inf {q | 0 < q ∧ 2 < q * q} = sqrt 2` is not in
+  `erat`. -/
+
+@[class] structure complete_lattice (α : Type)
+  extends partial_order α : Type :=
+(Inf    : set α → α)
+(Inf_le : ∀A b, b ∈ A → Inf A ≤ b)
+(le_Inf : ∀A b, (∀a, a ∈ A → b ≤ a) → b ≤ Inf A)
+
+/-! For sets: -/
+
+@[instance] def set.complete_lattice {α : Type} :
+  complete_lattice (set α) :=
+{ le          := (⊆),
+  le_refl     := by tautology,
+  le_trans    := by tautology,
+  le_antisymm :=
+    begin
+      intros A B hAB hBA,
+      apply set.ext,
+      tautology
+    end,
+  Inf         := λX, {a | ∀A, A ∈ X → a ∈ A},
+  Inf_le      := by tautology,
+  le_Inf      := by tautology }
+
+
+/-! ## Least Fixpoint -/
+
+def lfp {α : Type} [complete_lattice α] (f : α → α) : α :=
+complete_lattice.Inf ({a | f a ≤ a})
+
+lemma lfp_le {α : Type} [complete_lattice α] (f : α → α)
+    (a : α) (h : f a ≤ a) :
+  lfp f ≤ a :=
+complete_lattice.Inf_le _ _ h
+
+lemma le_lfp {α : Type} [complete_lattice α] (f : α → α)
+    (a : α) (h : ∀a', f a' ≤ a' → a ≤ a') :
+  a ≤ lfp f :=
+complete_lattice.le_Inf _ _ h
+
+/-! **Knaster-Tarski theorem:** For any monotone function `f`:
+
+* `lfp f` is a fixpoint: `lfp f = f (lfp f)` (lemma `lfp_eq`);
+* `lfp f` is smaller than any other fixpoint: `X = f X → lfp f ≤ X`.
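+
+Dually, a greatest fixpoint operator can be sketched using only the `Inf`
+field above: the supremum of a set is the infimum of its upper bounds. The
+name `gfp` and this definition are ours, not part of the course library: -/
+
+def gfp {α : Type} [complete_lattice α] (f : α → α) : α :=
+complete_lattice.Inf {a | ∀b, b ≤ f b → b ≤ a}
+
+/-! Here `{b | b ≤ f b}` is the set of postfixpoints of `f`, and `gfp f` is
+its supremum, encoded as the infimum of its upper bounds.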
-/ + +lemma lfp_eq {α : Type} [complete_lattice α] (f : α → α) + (hf : monotone f) : + lfp f = f (lfp f) := +begin + have h : f (lfp f) ≤ lfp f := + begin + apply le_lfp, + intros a' ha', + apply @le_trans _ _ _ (f a'), + { apply hf, + apply lfp_le, + assumption }, + { assumption } + end, + apply le_antisymm, + { apply lfp_le, + apply hf, + assumption }, + { assumption } +end + + +/-! ## A Relational Denotational Semantics, Continued -/ + +def denote : stmt → set (state × state) +| stmt.skip := Id +| (stmt.assign x a) := + {st | prod.snd st = (prod.fst st){x ↦ a (prod.fst st)}} +| (stmt.seq S T) := denote S ◯ denote T +| (stmt.ite b S T) := + (denote S ⇃ b) ∪ (denote T ⇃ (λs, ¬ b s)) +| (stmt.while b S) := + lfp (λX, ((denote S ◯ X) ⇃ b) ∪ (Id ⇃ (λs, ¬ b s))) + +notation `⟦` S `⟧`:= denote S + + +/-! ## Application to Program Equivalence + +Based on the denotational semantics, we introduce the notion of program +equivalence: `S₁ ~ S₂`. (Compare with exercise 8.) -/ + +def denote_equiv (S₁ S₂ : stmt) : Prop := +⟦S₁⟧ = ⟦S₂⟧ + +infix ` ~ ` := denote_equiv + +/-! It is obvious from the definition that `~` is an equivalence relation. + +Program equivalence can be used to replace subprograms by other subprograms with +the same semantics. This is achieved by the following congruence rules: -/ + +lemma denote_equiv.seq_congr {S₁ S₂ T₁ T₂ : stmt} + (hS : S₁ ~ S₂) (hT : T₁ ~ T₂) : + S₁ ;; T₁ ~ S₂ ;; T₂ := +by simp [denote_equiv, denote, *] at * + +lemma denote_equiv.ite_congr {b} {S₁ S₂ T₁ T₂ : stmt} + (hS : S₁ ~ S₂) (hT : T₁ ~ T₂) : + stmt.ite b S₁ T₁ ~ stmt.ite b S₂ T₂ := +by simp [denote_equiv, denote, *] at * + +lemma denote_equiv.while_congr {b} {S₁ S₂ : stmt} + (hS : S₁ ~ S₂) : + stmt.while b S₁ ~ stmt.while b S₂ := +by simp [denote_equiv, denote, *] at * + +/-! Compare the simplicity of these proofs with the corresponding proofs for a +big-step semantics (exercise 8). + +Let us prove some program equivalences. 
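+
+As a warm-up, the fact that `~` is an equivalence relation can be checked
+directly, since `denote_equiv` is just equality of denotations (a sketch; the
+lemma names below are ours): -/
+
+lemma denote_equiv.refl {S : stmt} :
+  S ~ S :=
+rfl
+
+lemma denote_equiv.symm {S₁ S₂ : stmt} (h : S₁ ~ S₂) :
+  S₂ ~ S₁ :=
+eq.symm h
+
+lemma denote_equiv.trans {S₁ S₂ S₃ : stmt} (h₁₂ : S₁ ~ S₂)
+    (h₂₃ : S₂ ~ S₃) :
+  S₁ ~ S₃ :=
+eq.trans h₁₂ h₂₃
+
+/-! All three follow from the corresponding properties of `=`.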
-/ + +lemma denote_equiv.skip_assign_id {x} : + stmt.assign x (λs, s x) ~ stmt.skip := +by simp [denote_equiv, denote, Id] + +lemma denote_equiv.seq_skip_left {S : stmt} : + stmt.skip ;; S ~ S := +by simp [denote_equiv, denote, Id, comp] + +lemma denote_equiv.seq_skip_right {S : stmt} : + S ;; stmt.skip ~ S := +by simp [denote_equiv, denote, Id, comp] + +lemma denote_equiv.ite_seq_while {b} {S : stmt} : + stmt.ite b (S ;; stmt.while b S) stmt.skip ~ stmt.while b S := +begin + simp [denote_equiv, denote], + apply eq.symm, + apply lfp_eq, + apply monotone_union, + { apply sorry_lemmas.monotone_restrict, + apply sorry_lemmas.monotone_comp, + { exact monotone_const _ }, + { exact monotone_id } }, + { apply sorry_lemmas.monotone_restrict, + exact monotone_const _ } +end + + +/-! ## Equivalence of the Denotational and the Big-Step Semantics +## (**optional**) -/ + +lemma denote_of_big_step (S : stmt) (s t : state) (h : (S, s) ⟹ t) : + (s, t) ∈ ⟦S⟧ := +begin + induction' h; try { solve1 { simp [denote, *] } }, + case seq { + apply exists.intro t, + exact and.intro ih_h ih_h_1 }, + case while_true { + rw eq.symm denote_equiv.ite_seq_while, + simp [denote, *], + apply exists.intro t, + apply and.intro; assumption }, + case while_false { + rw eq.symm denote_equiv.ite_seq_while, + simp [denote, *] } +end + +lemma big_step_of_denote : + ∀(S : stmt) (s t : state), (s, t) ∈ ⟦S⟧ → (S, s) ⟹ t +| stmt.skip s t := by simp [denote] +| (stmt.assign n a) s t := by simp [denote] +| (stmt.seq S T) s t := + begin + intro h, + cases' h with u hu, + exact big_step.seq + (big_step_of_denote S _ _ (and.elim_left hu)) + (big_step_of_denote T _ _ (and.elim_right hu)) + end +| (stmt.ite b S T) s t := + begin + intro h, + cases' h, + case inl { + cases' h, + apply big_step.ite_true right, + exact big_step_of_denote _ _ _ left }, + case inr { + cases' h, + apply big_step.ite_false right, + exact big_step_of_denote _ _ _ left } + end +| (stmt.while b S) s t := + begin + have hw : ⟦stmt.while b S⟧ + 
≤ {st | (stmt.while b S, prod.fst st) ⟹ prod.snd st} := + begin + apply lfp_le _ _ _, + intros x hx, + cases' x with s' t', + simp at hx, + cases' hx, + case inl { + cases' h with hst hs, + cases' hst with u hu, + apply big_step.while_true hs, + { exact big_step_of_denote S _ _ (and.elim_left hu) }, + { exact and.elim_right hu } }, + case inr { + cases' h, + cases' left, + apply big_step.while_false right } + end, + apply hw + end + +lemma denote_iff_big_step (S : stmt) (s t : state) : + (s, t) ∈ ⟦S⟧ ↔ (S, s) ⟹ t := +iff.intro (big_step_of_denote S s t) (denote_of_big_step S s t) + + +/-! ## A Simpler Approach Based on an Inductive Predicate (**optional**) -/ + +inductive awhile (b : state → Prop) + (r : set (state × state)) : + state → state → Prop +| true {s t u} (hcond : b s) (hbody : (s, t) ∈ r) + (hrest : awhile t u) : + awhile s u +| false {s} (hcond : ¬ b s) : + awhile s s + +def denote_ind : stmt → set (state × state) +| stmt.skip := Id +| (stmt.assign x a) := + {st | prod.snd st = (prod.fst st){x ↦ a (prod.fst st)}} +| (stmt.seq S T) := denote_ind S ◯ denote_ind T +| (stmt.ite b S T) := + (denote_ind S ⇃ b) ∪ (denote_ind T ⇃ (λs, ¬ b s)) +| (stmt.while b S) := + {st | awhile b (denote_ind S) (prod.fst st) (prod.snd st)} + +end LoVe diff --git a/lean/love10_denotational_semantics_exercise_sheet.lean b/lean/love10_denotational_semantics_exercise_sheet.lean new file mode 100644 index 0000000..a45caf2 --- /dev/null +++ b/lean/love10_denotational_semantics_exercise_sheet.lean @@ -0,0 +1,92 @@ +import .love10_denotational_semantics_demo + + +/-! # LoVe Exercise 10: Denotational Semantics -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Monotonicity + +1.1. Prove the following lemma from the lecture. -/ + +lemma monotone_comp {α β : Type} [partial_order α] (f g : α → set (β × β)) + (hf : monotone f) (hg : monotone g) : + monotone (λa, f a ◯ g a) := +sorry + +/-! 1.2. Prove its cousin. 
-/
+
+lemma monotone_restrict {α β : Type} [partial_order α] (f : α → set (β × β))
+    (p : β → Prop) (hf : monotone f) :
+  monotone (λa, f a ⇃ p) :=
+sorry
+
+
+/-! ## Question 2: Regular Expressions
+
+__Regular expressions__, or __regexes__, are a popular tool in software
+development for analyzing textual input. Regexes are generated by the
+following grammar:
+
+    R ::= ∅
+        | ε
+        | a
+        | R ⬝ R
+        | R + R
+        | R*
+
+Informally, the semantics of regular expressions is as follows:
+
+* `∅` accepts nothing;
+* `ε` accepts the empty string;
+* `a` accepts the atom `a`;
+* `R ⬝ R` accepts the concatenation of two regexes;
+* `R + R` accepts either of two regexes;
+* `R*` accepts arbitrarily many repetitions of a regex.
+
+Notice the rough correspondence with a WHILE language:
+
+    `∅` ~ diverging statement (e.g., `while true do skip`)
+    `ε` ~ `skip`
+    `a` ~ `:=`
+    `⬝` ~ `;`
+    `+` ~ `if then else`
+    `*` ~ `while` loop -/
+
+inductive regex (α : Type) : Type
+| nothing {} : regex
+| empty {} : regex
+| atom : α → regex
+| concat : regex → regex → regex
+| alt : regex → regex → regex
+| star : regex → regex
+
+/-! In this exercise, we explore an alternative semantics of regular
+expressions. Namely, we can imagine that the atoms represent binary relations,
+instead of letters or symbols. Concatenation corresponds to composition of
+relations, and alternation is union. Mathematically, regexes and binary
+relations are both instances of Kleene algebras.
+
+2.1. Complete the following translation of regular expressions to relations.
+
+Hint: Exploit the correspondence with the WHILE language. -/
+
+def rel_of_regex {α : Type} : regex (set (α × α)) → set (α × α)
+| regex.nothing := ∅
+| regex.empty := Id
+| (regex.atom s) := s
+-- enter the missing cases here
+
+/-! 2.2. Prove the following recursive equation about your definition.
-/
+
+lemma rel_of_regex_star {α : Type} (r : regex (set (α × α))) :
+  rel_of_regex (regex.star r) =
+  rel_of_regex (regex.alt (regex.concat r (regex.star r)) regex.empty) :=
+sorry
+
+end LoVe
diff --git a/lean/love11_logical_foundations_of_mathematics_demo.lean b/lean/love11_logical_foundations_of_mathematics_demo.lean
new file mode 100644
index 0000000..d74e04b
--- /dev/null
+++ b/lean/love11_logical_foundations_of_mathematics_demo.lean
@@ -0,0 +1,606 @@
+import .love05_inductive_predicates_demo
+
+
+/-! # LoVe Demo 11: Logical Foundations of Mathematics
+
+We dive deeper into the logical foundations of Lean. Most of the features
+described here are especially relevant for defining mathematical objects and
+proving theorems about them. -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Universes
+
+Not only do terms have a type; types themselves have a type. For example, by
+the PAT principle:
+
+    `@and.intro : ∀a b, a → b → a ∧ b`
+
+Moreover:
+
+    `∀a b, a → b → a ∧ b : Prop`
+
+Now, what is the type of `Prop`? `Prop` has the same type as virtually all other
+types we have constructed so far:
+
+    `Prop : Type`
+
+What is the type of `Type`? The typing `Type : Type` would lead to a
+contradiction, called **Girard's paradox**, resembling Russell's paradox.
+Instead:
+
+    `Type : Type 1`
+    `Type 1 : Type 2`
+    `Type 2 : Type 3`
+    ⋮
+
+Aliases:
+
+    `Type` := `Type 0`
+    `Prop` := `Sort 0`
+    `Type u` := `Sort (u + 1)`
+
+The types of types (`Sort u`, `Type u`, and `Prop`) are called __universes__.
+The `u` in `Sort u` is a __universe level__.
+
+The hierarchy is captured by the following typing judgment:
+
+    ————————————————————————— Sort
+    C ⊢ Sort u : Sort (u + 1) -/
+
+#check @and.intro
+#check ∀a b : Prop, a → b → a ∧ b
+#check Prop
+#check ℕ
+#check Type
+#check Type 1
+#check Type 2
+
+universe variables u v
+
+#check Type u
+
+#check Sort 0
+#check Sort 1
+#check Sort 2
+#check Sort u
+
+#check Type*
+
+
+/-!
## The Peculiarities of Prop
+
+`Prop` is different from the other universes in many respects.
+
+
+### Impredicativity
+
+The function type `σ → τ` is put into the larger one of the universes that
+`σ` and `τ` live in:
+
+    C ⊢ σ : Type u    C ⊢ τ : Type v
+    ————————————————————————————————— Arrow-Type
+    C ⊢ σ → τ : Type (max u v)
+
+For dependent types, this generalizes to
+
+    C ⊢ σ : Type u    C, x : σ ⊢ τ[x] : Type v
+    ——————————————————————————————————————————— Forall-Type
+    C ⊢ (∀x : σ, τ[x]) : Type (max u v)
+
+This behavior of the universes `Type v` is called __predicativity__.
+
+To force expressions such as `∀a : Prop, a → a` to be of type `Prop` anyway, we
+need a special typing rule for `Prop`:
+
+    C ⊢ σ : Sort u    C, x : σ ⊢ τ[x] : Prop
+    —————————————————————————————————————— Forall-Prop
+    C ⊢ (∀x : σ, τ[x]) : Prop
+
+This behavior of `Prop` is called __impredicativity__.
+
+The rules `Forall-Type` and `Forall-Prop` can be generalized into a single rule:
+
+    C ⊢ σ : Sort u    C, x : σ ⊢ τ[x] : Sort v
+    ——————————————————————————————————————————— Forall
+    C ⊢ (∀x : σ, τ[x]) : Sort (imax u v)
+
+where
+
+    `imax u 0 = 0`
+    `imax u (v + 1) = max u (v + 1)` -/
+
+#check λ(α : Type u) (β : Type v), α → β
+#check ∀a : Prop, a → a
+
+
+/-! ### Proof Irrelevance
+
+A second difference between `Prop` and `Type u` is __proof irrelevance__:
+
+    `∀(a : Prop) (h₁ h₂ : a), h₁ = h₂`
+
+This makes reasoning about dependent types easier.
+
+When viewing a proposition as a type and a proof as an element of that type,
+proof irrelevance means that a proposition is either an empty type or has
+exactly one inhabitant.
+
+Proof irrelevance can be proved `by refl`.
+
+An unfortunate consequence of proof irrelevance is that it prevents us from
+performing rule induction by pattern matching. -/
+
+#check proof_irrel
+
+lemma proof_irrel {a : Prop} (h₁ h₂ : a) :
+  h₁ = h₂ :=
+by refl
+
+
+/-!
### No Large Elimination
+
+A further difference between `Prop` and `Type u` is that `Prop` does not allow
+__large elimination__, meaning that it is impossible to extract data from a
+proof of a proposition.
+
+This is necessary to allow proof irrelevance. -/
+
+#check 0
+
+-- fails
+def unsquare (i : ℤ) (hsq : ∃j, i = j * j) : ℤ :=
+match hsq with
+| Exists.intro j _ := j
+end
+
+/-! If the above were accepted, we could derive `false` as follows.
+
+Let
+
+    `hsq₁` := `Exists.intro 3 (by linarith)`
+    `hsq₂` := `Exists.intro (-3) (by linarith)`
+
+be two proofs of `∃j, (9 : ℤ) = j * j`. Then
+
+    `unsquare 9 hsq₁ = 3`
+    `unsquare 9 hsq₂ = -3`
+
+However, by proof irrelevance, `hsq₁ = hsq₂`. Hence
+
+    `unsquare 9 hsq₂ = 3`
+
+Thus
+
+    `3 = -3`
+
+a contradiction.
+
+As a compromise, Lean allows __small elimination__. It is called small
+elimination because it eliminates only into `Prop`, whereas large elimination
+can eliminate into an arbitrarily large universe `Sort u`. This means we can
+use `match` to analyze the structure of a proof, extract existential witnesses,
+and so on, as long as the `match` expression is itself a proof. We have seen
+several examples of this in lecture 5.
+
+As a further compromise, Lean allows large elimination for
+__syntactic subsingletons__: types in `Prop` for which it can be established
+syntactically that they have cardinality 0 or 1. These are propositions such as
+`false` and `a ∧ b` that can be proved in at most one way.
+
+
+## The Axiom of Choice
+
+Lean's logic includes the axiom of choice, which makes it possible to obtain an
+arbitrary element from any nonempty type.
+
+Consider Lean's `nonempty` inductive predicate: -/
+
+#print nonempty
+
+/-! The predicate states that `α` has at least one element.
+
+To prove `nonempty α`, we must provide an `α` value to `nonempty.intro`: -/
+
+lemma nat.nonempty :
+  nonempty ℕ :=
+nonempty.intro 0
+
+/-!
Since `nonempty` lives in `Prop`, large elimination is not available, and +thus we cannot extract the element that was used from a proof of `nonempty α`. + +The axiom of choice allows us to obtain some element of type `α` if we can show +`nonempty α`: -/ + +#print classical.choice + +/-! It will just give us an arbitrary element of `α`; we have no way of knowing +whether this is the element that was used to prove `nonempty α`. + +The constant `classical.choice` is noncomputable, one of the reasons why some +logicians prefer to work without this axiom. -/ + +#eval classical.choice nat.nonempty -- fails +#reduce classical.choice nat.nonempty -- result: classical.choice _ + +/-! The axiom of choice is only an axiom in Lean's core library, giving users +the freedom to work with or without it. + +Definitions using it must be marked as `noncomputable`: -/ + +noncomputable def arbitrary_nat : ℕ := +classical.choice nat.nonempty + +/-! The following tools rely on choice. + + +### Law of Excluded Middle -/ + +#check classical.em +#check classical.by_contradiction + + +/-! ### Hilbert Choice -/ + +#print classical.some +#print classical.some_spec + +#check λ(p : ℕ → Prop) (h : ∃n : ℕ, p n), classical.some h +#check λ(p : ℕ → Prop) (h : ∃n : ℕ, p n), classical.some_spec h + + +/-! ### Set-Theoretic Axiom of Choice -/ + +#print classical.axiom_of_choice + + +/-! ## Subtypes + +Subtyping is a mechanism to create new types from existing ones. + +Given a predicate on the elements of the base type, the __subtype__ contains +only those elements of the base type that fulfill this property. More precisely, +the subtype contains element–proof pairs that combine an element of the base +type and a proof that it fulfills the property. + +Subtyping is useful for those types that cannot be defined as an inductive +type. 
For example, any attempt at defining the type of finite sets along the +following lines is doomed to fail: -/ + +inductive finset (α : Type) : Type +| empty : finset +| insert : α → finset → finset + +/-! Given a base type and a property, the subtype has the syntax + + `{` _variable_ `:` _base-type_ `//` _property-applied-to-variable_ `}` + +Example: + + `{i : ℕ // i ≤ n}` + +Alias: + + `{x : τ // P[x]}` := `@subtype τ (λx, P[x])` + + +### First Example: Full Binary Trees -/ + +#check btree +#check is_full +#check mirror +#check is_full_mirror +#check mirror_mirror + +def full_btree (α : Type) : Type := +{t : btree α // is_full t} + +#print subtype +#check subtype.mk + +/-! To define elements of `full_btree`, we must provide a `btree` and a proof +that it is full: -/ + +def empty_full_btree : full_btree ℕ := +subtype.mk btree.empty is_full.empty + +def full_btree_6 : full_btree ℕ := +subtype.mk (btree.node 6 btree.empty btree.empty) + begin + apply is_full.node, + apply is_full.empty, + apply is_full.empty, + refl + end + +#reduce subtype.val full_btree_6 +#check subtype.property full_btree_6 + +/-! We can lift existing operations on `btree` to `full_btree`: -/ + +def full_btree.mirror {α : Type} (t : full_btree α) : + full_btree α := +subtype.mk (mirror (subtype.val t)) + begin + apply is_full_mirror, + apply subtype.property t + end + +#reduce subtype.val (full_btree.mirror full_btree_6) + +/-! And of course we can prove lemmas about the lifted operations: -/ + +lemma full_btree.mirror_mirror {α : Type} (t : full_btree α) : + (full_btree.mirror (full_btree.mirror t)) = t := +begin + apply subtype.eq, + simp [full_btree.mirror], + apply mirror_mirror +end + +#check subtype.eq + + +/-! 
### Second Example: Vectors -/
+
+#check vector
+
+def vector (α : Type) (n : ℕ) : Type :=
+{xs : list α // list.length xs = n}
+
+def vector_123 : vector ℤ 3 :=
+subtype.mk [1, 2, 3] (by refl)
+
+def vector.neg {n : ℕ} (v : vector ℤ n) : vector ℤ n :=
+subtype.mk (list.map int.neg (subtype.val v))
+  begin
+    rw list.length_map,
+    exact subtype.property v
+  end
+
+lemma vector.neg_neg (n : ℕ) (v : vector ℤ n) :
+  vector.neg (vector.neg v) = v :=
+begin
+  apply subtype.eq,
+  simp [vector.neg]
+end
+
+#check finset
+
+
+/-! ## Quotient Types
+
+Quotients are a powerful construction in mathematics used to construct `ℤ`, `ℚ`,
+`ℝ`, and many other types.
+
+Like subtyping, quotienting constructs a new type from an existing type. Unlike
+a subtype, a quotient type contains all of the elements of the base type, but
+some elements that were different in the base type are considered equal in the
+quotient type. In mathematical terms, the quotient type is isomorphic to a
+partition of the base type.
+
+To define a quotient type, we need to provide the type it is derived from and
+an equivalence relation on it that determines which elements are considered
+equal. -/
+
+#check quotient
+#print setoid
+
+#check quotient.mk
+#check quotient.sound
+#check quotient.exact
+
+#check quotient.lift
+#check quotient.lift₂
+#check quotient.induction_on
+
+
+/-! ## First Example: Integers
+
+Let us build the integers `ℤ` as a quotient over pairs of natural numbers
+`ℕ × ℕ`.
+
+A pair `(p, n)` of natural numbers represents the integer `p - n`. Nonnegative
+integers `p` can be represented by `(p, 0)`. Negative integers `-n` can be
+represented by `(0, n)`. However, many representations of the same integer are
+possible; e.g., `(7, 0)`, `(8, 1)`, `(9, 2)`, and `(10, 3)` all represent the
+integer `7`.
+
+Which equivalence relation can we use?
+
+We want two pairs `(p₁, n₁)` and `(p₂, n₂)` to be equal if `p₁ - n₁ = p₂ - n₂`.
+However, this does not work because subtraction on `ℕ` is ill-behaved (e.g., +`0 - 1 = 0`). Instead, we use `p₁ + n₂ = p₂ + n₁`. -/ + +@[instance] def int.rel : setoid (ℕ × ℕ) := +{ r := + λpn₁ pn₂ : ℕ × ℕ, + prod.fst pn₁ + prod.snd pn₂ = prod.fst pn₂ + prod.snd pn₁, + iseqv := + begin + repeat { apply and.intro }, + { intro pn, + refl }, + { intros pn₁ pn₂ h, + rw h }, + { intros pn₁ pn₂ pn₃ h₁₂ h₂₃, + apply @add_right_cancel _ _ _ (prod.snd pn₂), + cc } + end } + +#print equivalence + +lemma int.rel_iff (pn₁ pn₂ : ℕ × ℕ) : + pn₁ ≈ pn₂ ↔ + prod.fst pn₁ + prod.snd pn₂ = prod.fst pn₂ + prod.snd pn₁ := +by refl + +def int : Type := +quotient int.rel + +def int.zero : int := +⟦(0, 0)⟧ + +lemma int.zero_eq (m : ℕ) : + int.zero = ⟦(m, m)⟧ := +begin + rw int.zero, + apply quotient.sound, + rw int.rel_iff, + simp +end + +def int.add : int → int → int := +quotient.lift₂ + (λpn₁ pn₂ : ℕ × ℕ, + ⟦(prod.fst pn₁ + prod.fst pn₂, + prod.snd pn₁ + prod.snd pn₂)⟧) + begin + intros pn₁ pn₂ pn₁' pn₂' h₁ h₂, + apply quotient.sound, + rw int.rel_iff at *, + linarith + end + +lemma int.add_eq (p₁ n₁ p₂ n₂ : ℕ) : + int.add ⟦(p₁, n₁)⟧ ⟦(p₂, n₂)⟧ = ⟦(p₁ + p₂, n₁ + n₂)⟧ := +by refl + +lemma int.add_zero (i : int) : + int.add int.zero i = i := +begin + apply quotient.induction_on i, + intro pn, + rw int.zero, + cases' pn with p n, + rw int.add_eq, + apply quotient.sound, + simp +end + +/-! This definitional syntax would be nice: -/ + +-- fails +def int.add : int → int → int +| ⟦(p₁, n₁)⟧ ⟦(p₂, n₂)⟧ := ⟦(p₁ + p₂, n₁ + n₂)⟧ + +/-! But it would be dangerous: -/ + +-- fails +def int.fst : int → ℕ +| ⟦(p, n)⟧ := p + +/-! Using `int.fst`, we could derive `false`. First, we have + + `int.fst ⟦(0, 0)⟧ = 0` + `int.fst ⟦(1, 1)⟧ = 1` + +But since `⟦(0, 0)⟧ = ⟦(1, 1)⟧`, we get + + `0 = 1` -/ + + +/-! ### Second Example: Unordered Pairs + +__Unordered pairs__ are pairs for which no distinction is made between the first +and second components. They are usually written `{a, b}`. 
+
+We will introduce the type `upair` of unordered pairs as the quotient of pairs
+`(a, b)` with respect to the relation "contains the same elements as". -/
+
+@[instance] def upair.rel (α : Type) : setoid (α × α) :=
+{ r     := λab₁ ab₂ : α × α,
+    ({prod.fst ab₁, prod.snd ab₁} : set α) =
+    ({prod.fst ab₂, prod.snd ab₂} : set α),
+  iseqv := by repeat { apply and.intro }; finish }
+
+-- needed for technical reasons, to deprioritize `upair.rel` w.r.t. `int.rel`
+attribute [instance, priority 999] upair.rel
+
+lemma upair.rel_iff {α : Type} (ab₁ ab₂ : α × α) :
+  ab₁ ≈ ab₂ ↔
+  ({prod.fst ab₁, prod.snd ab₁} : set α) =
+  ({prod.fst ab₂, prod.snd ab₂} : set α) :=
+by refl
+
+def upair (α : Type) : Type :=
+quotient (upair.rel α)
+
+def upair.mk {α : Type} (a b : α) : upair α :=
+⟦(a, b)⟧
+
+/-! It is easy to prove that the `upair.mk` constructor is symmetric: -/
+
+lemma upair.mk_symm {α : Type} (a b : α) :
+  upair.mk a b = upair.mk b a :=
+begin
+  unfold upair.mk,
+  apply quotient.sound,
+  rw upair.rel_iff,
+  apply set.union_comm
+end
+
+/-! Another representation of unordered pairs is as sets of cardinality 1 or 2.
+The following operation converts `upair` to that representation: -/
+
+def set_of_upair {α : Type} : upair α → set α :=
+quotient.lift (λab : α × α, {prod.fst ab, prod.snd ab})
+  begin
+    intros ab₁ ab₂ h,
+    rw upair.rel_iff at *,
+    exact h
+  end
+
+
+/-! ### Alternative Definitions via Normalization and Subtyping
+
+Each element of a quotient type corresponds to an `≈`-equivalence class.
+If there exists a systematic way to obtain a **canonical representative** for
+each class, we can use a subtype instead of a quotient, keeping only the
+canonical representatives.
+
+Consider the quotient type `int` above. We could say that a pair `(p, n)` is
+__canonical__ if `p` or `n` is `0`.
-/
+
+namespace alternative
+
+inductive int.is_canonical : ℕ × ℕ → Prop
+| nonpos {n : ℕ} : int.is_canonical (0, n)
+| nonneg {p : ℕ} : int.is_canonical (p, 0)
+
+def int : Type :=
+{pn : ℕ × ℕ // int.is_canonical pn}
+
+/-! **Normalizing** pairs of natural numbers is easy: -/
+
+def int.normalize : ℕ × ℕ → ℕ × ℕ
+| (p, n) := if p ≥ n then (p - n, 0) else (0, n - p)
+
+lemma int.is_canonical_normalize (pn : ℕ × ℕ) :
+  int.is_canonical (int.normalize pn) :=
+begin
+  cases' pn with p n,
+  simp [int.normalize],
+  split_ifs,
+  { exact int.is_canonical.nonneg },
+  { exact int.is_canonical.nonpos }
+end
+
+/-! For unordered pairs, there is no obvious normal form, except to always put
+the smaller element first (or last). This requires a linear order `≤` on `α`. -/
+
+def upair.is_canonical {α : Type} [linear_order α] :
+  α × α → Prop
+| (a, b) := a ≤ b
+
+def upair (α : Type) [linear_order α] : Type :=
+{ab : α × α // upair.is_canonical ab}
+
+end alternative
+
+end LoVe
diff --git a/lean/love11_logical_foundations_of_mathematics_exercise_sheet.lean b/lean/love11_logical_foundations_of_mathematics_exercise_sheet.lean
new file mode 100644
index 0000000..be1ed35
--- /dev/null
+++ b/lean/love11_logical_foundations_of_mathematics_exercise_sheet.lean
@@ -0,0 +1,85 @@
+import .love11_logical_foundations_of_mathematics_demo
+
+
+/-! # LoVe Exercise 11: Logical Foundations of Mathematics -/
+
+
+set_option pp.beta true
+set_option pp.generalized_field_notation false
+
+namespace LoVe
+
+
+/-! ## Question 1: Vectors as Subtypes
+
+Recall the definition of vectors from the demo: -/
+
+#check vector
+
+/-! The following function adds two lists of integers elementwise. If one
+list is longer than the other, the tail of the longer list is truncated. -/
+
+def list.add : list ℤ → list ℤ → list ℤ
+| [] [] := []
+| (x :: xs) (y :: ys) := (x + y) :: list.add xs ys
+| [] (y :: ys) := []
+| (x :: xs) [] := []
+
+/-! 1.1.
Show that if the lists have the same length, the resulting list also +has that length. -/ + +lemma list.length_add : + ∀(xs : list ℤ) (ys : list ℤ) (h : list.length xs = list.length ys), + list.length (list.add xs ys) = list.length xs +| [] [] := + sorry +| (x :: xs) (y :: ys) := + sorry +| [] (y :: ys) := + sorry +| (x :: xs) [] := + sorry + +/-! 1.2. Define componentwise addition on vectors using `list.add` and +`list.length_add`. -/ + +def vector.add {n : ℕ} : vector ℤ n → vector ℤ n → vector ℤ n := +sorry + +/-! 1.3. Show that `list.add` and `vector.add` are commutative. -/ + +lemma list.add.comm : + ∀(xs : list ℤ) (ys : list ℤ), list.add xs ys = list.add ys xs := +sorry + +lemma vector.add.comm {n : ℕ} (x y : vector ℤ n) : + vector.add x y = vector.add y x := +sorry + + +/-! ## Question 2: Integers as Quotients + +Recall the construction of integers from the lecture, not to be confused with +Lean's predefined type `int` (= `ℤ`): -/ + +#check int.rel +#check int.rel_iff +#check int + +/-! 2.1. Define negation on these integers. -/ + +def int.neg : int → int := +sorry + +/-! 2.2. Prove the following lemmas about negation. -/ + +lemma int.neg_eq (p n : ℕ) : + int.neg ⟦(p, n)⟧ = ⟦(n, p)⟧ := +sorry + +lemma int.neg_neg (a : int) : + int.neg (int.neg a) = a := +sorry + +end LoVe diff --git a/lean/love12_basic_mathematical_structures_demo.lean b/lean/love12_basic_mathematical_structures_demo.lean new file mode 100644 index 0000000..fa675c1 --- /dev/null +++ b/lean/love12_basic_mathematical_structures_demo.lean @@ -0,0 +1,372 @@ +import .love05_inductive_predicates_demo + + +/-! # LoVe Demo 12: Basic Mathematical Structures + +We introduce definitions and proofs about basic mathematical structures such as +groups, fields, and linear orders. -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! 
## Type Classes over a Single Binary Operator + +Mathematically, a __group__ is a set `G` with a binary operator `• : G × G → G` +with the following properties, called __group axioms__: + +* Associativity: For all `a, b, c ∈ G`, we have `(a • b) • c = a • (b • c)`. +* Identity element: There exists an element `e ∈ G` such that for all `a ∈ G`, + we have `e • a = a`. +* Inverse element: For each `a ∈ G`, there exists an inverse element + `inv(a) ∈ G` such that `inv(a) • a = e`. + +Examples of groups are +* `ℤ` with `+`; +* `ℝ` with `+`; +* `ℝ \ {0}` with `*`. + +In Lean, a type class for groups can be defined as follows: -/ + +namespace monolithic_group + +@[class] structure group (α : Type) := +(mul : α → α → α) +(one : α) +(inv : α → α) +(mul_assoc : ∀a b c, mul (mul a b) c = mul a (mul b c)) +(one_mul : ∀a, mul one a = a) +(mul_left_inv : ∀a, mul (inv a) a = one) + +end monolithic_group + +/-! In Lean, however, group is part of a larger hierarchy of algebraic +structures: + +Type class | Properties | Examples +------------------------ | -----------------------------------------|------------------- +`semigroup` | associativity of `*` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`monoid` | `semigroup` with unit `1` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`left_cancel_semigroup` | `semigroup` with `c * a = c * b → a = b` | +`right_cancel_semigroup` | `semigroup` with `a * c = b * c → a = b` | +`group` | `monoid` with inverse `⁻¹` | + +Most of these structures have commutative versions: `comm_semigroup`, +`comm_monoid`, `comm_group`. 
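+
+To see the monolithic type class in action, here is a sketch (our own
+instance, not part of the course library) registering `ℤ` with addition as a
+`monolithic_group.group`: -/
+
+def int.monolithic_add_group : monolithic_group.group ℤ :=
+{ mul          := (+),
+  one          := 0,
+  inv          := λa, - a,
+  mul_assoc    := add_assoc,
+  one_mul      := zero_add,
+  mul_left_inv := add_left_neg }
+
+/-! The additive lemmas `add_assoc`, `zero_add`, and `add_left_neg` match the
+three group axioms exactly once `mul`, `one`, and `inv` are instantiated with
+`+`, `0`, and negation.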
+ +The __multiplicative__ structures (over `*`, `1`, `⁻¹`) are copied to produce +__additive__ versions (over `+`, `0`, `-`): + +Type class | Properties | Examples +---------------------------- | ---------------------------------------------|------------------- +`add_semigroup` | associativity of `+` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`add_monoid` | `add_semigroup` with unit `0` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`add_left_cancel_semigroup` | `add_semigroup` with `c + a = c + b → a = b` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`add_right_cancel_semigroup` | `add_semigroup` with `a + c = b + c → a = b` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`add_group` | `add_monoid` with inverse `-` | `ℝ`, `ℚ`, `ℤ` -/ + +#print group +#print add_group + +/-! Let us define our own type, of integers modulo 2, and register it as an +additive group. -/ + +inductive int2 : Type +| zero +| one + +def int2.add : int2 → int2 → int2 +| int2.zero a := a +| a int2.zero := a +| int2.one int2.one := int2.zero + +@[instance] def int2.add_group : add_group int2 := +{ add := int2.add, + add_assoc := + by intros a b c; cases' a; cases' b; cases' c; refl, + zero := int2.zero, + zero_add := by intro a; cases' a; refl, + add_zero := by intro a; cases' a; refl, + neg := λa, a, + add_left_neg := by intro a; cases' a; refl } + +#reduce int2.one + 0 - 0 - int2.one + +lemma int2.add_right_neg: + ∀a : int2, a + - a = 0 := +add_right_neg + +/-! Another example: Lists are an `add_monoid`: -/ + +@[instance] def list.add_monoid {α : Type} : + add_monoid (list α) := +{ zero := [], + add := (++), + add_assoc := list.append_assoc, + zero_add := list.nil_append, + add_zero := list.append_nil } + + +/-! ## Type Classes with Two Binary Operators + +Mathematically, a __field__ is a set `F` such that + +* `F` forms a commutative group under an operator `+`, called addition, with + identity element `0`. +* `F\{0}` forms a commutative group under an operator `*`, called + multiplication. 
+* Multiplication distributes over addition—i.e., + `a * (b + c) = a * b + a * c` for all `a, b, c ∈ F`. + +In Lean, fields are also part of a larger hierarchy: + +Type class | Properties | Examples +-----------------|-----------------------------------------------------|------------------- +`semiring` | `monoid` and `add_comm_monoid` with distributivity | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`comm_semiring` | `semiring` with commutativity of `*` | `ℝ`, `ℚ`, `ℤ`, `ℕ` +`ring` | `monoid` and `add_comm_group` with distributivity | `ℝ`, `ℚ`, `ℤ` +`comm_ring` | `ring` with commutativity of `*` | `ℝ`, `ℚ`, `ℤ` +`division_ring` | `ring` with multiplicative inverse `⁻¹` | `ℝ`, `ℚ` +`field` | `division_ring` with commutativity of `*` | `ℝ`, `ℚ` +`discrete_field` | `field` with decidable equality and `∀n, n / 0 = 0` | `ℝ`, `ℚ` -/ + +#print field + +/-! Let us continue with our example: -/ + +def int2.mul : int2 → int2 → int2 +| int2.one a := a +| a int2.one := a +| int2.zero int2.zero := int2.zero + +@[instance] def int2.field : field int2 := +{ one := int2.one, + mul := int2.mul, + inv := λa, a, + add_comm := by intros a b; cases' a; cases' b; refl, + exists_pair_ne := + by apply exists.intro int2.zero; + apply exists.intro int2.one; finish, + one_mul := by intros a; cases' a; refl, + mul_one := by intros a; cases' a; refl, + mul_inv_cancel := by intros a h; cases' a; finish, + inv_zero := by refl, + mul_assoc := + by intros a b c; cases' a; cases' b; cases' c; refl, + mul_comm := by intros a b; cases' a; cases' b; refl, + left_distrib := + by intros a b c; cases' a; cases' b; cases' c; refl, + right_distrib := + by intros a b c; cases' a; cases' b; cases' c; refl, + ..int2.add_group } + +#reduce (1 : int2) * 0 / (0 - 1) + +#reduce (3 : int2) + +lemma ring_example (a b : int2) : + (a + b) ^ 3 = a ^ 3 + 3 * a ^ 2 * b + 3 * a * b ^ 2 + b ^ 3 := +by ring + +lemma ring_exp_example (a b : int2) (n : ℕ): + (a + b) ^ (2 + n) = + (a + b) ^ n * (a ^ 2 + 2 * a * b + b ^ 2) := +by ring_exp + +/-! 
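Since `int2` is now registered as a `field`, the generic ring-normalization tactics apply to any equation over it. A further sketch of ours (the lemma name is made up):

```lean
lemma int2.ring_example_2 (a b : int2) :
  (a + b) * (a - b) = a ^ 2 - b ^ 2 :=
by ring
```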
`ring` and `ring_exp` prove equalities over commutative rings and semirings
by normalizing expressions. The `ring_exp` variant also normalizes exponents. -/

lemma abel_example (a b : ℤ) :
  a + b + 0 - (b + a + a) = - a :=
by abel

/-! `abel` proves equalities over additive commutative monoids and groups by
normalizing expressions.


## Coercions

When combining numbers from `ℕ`, `ℤ`, `ℚ`, and `ℝ`, we might want to cast from
one type to another. Lean has a mechanism to automatically introduce coercions,
represented by `coe` (syntactic sugar: `↑`). `coe` can be set up to provide
implicit coercions between arbitrary types.

Many coercions are already in place, including the following:

* `coe : ℕ → α` casts `ℕ` to another semiring `α`;
* `coe : ℤ → α` casts `ℤ` to another ring `α`;
* `coe : ℚ → α` casts `ℚ` to another division ring `α`.

For example, this works, although negation `- n` is not defined on natural
numbers: -/

lemma neg_mul_neg_nat (n : ℕ) (z : ℤ) :
  (- z) * (- n) = z * n :=
neg_mul_neg z n

/-! Notice how Lean introduced a `↑` coercion: -/

#print neg_mul_neg_nat

/-! Another example: -/

lemma neg_nat_mul_neg (n : ℕ) (z : ℤ) :
  (- n : ℤ) * (- z) = n * z :=
neg_mul_neg n z

#print neg_nat_mul_neg

/-! In proofs involving coercions, the tactic `norm_cast` can be convenient. -/

lemma norm_cast_example_1 (m n : ℕ) (h : (m : ℤ) = (n : ℤ)) :
  m = n :=
begin
  norm_cast at h,
  exact h
end

lemma norm_cast_example_2 (m n : ℕ) :
  (m : ℤ) + (n : ℤ) = ((m + n : ℕ) : ℤ) :=
by norm_cast

/-! `norm_cast` moves coercions towards the inside of expressions, as a form of
simplification. Like `simp`, it will generally produce a subgoal.

`norm_cast` relies on lemmas such as the following: -/

#check nat.cast_add
#check int.cast_add
#check rat.cast_add


/-! 
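In the same vein, a further example of ours: `norm_cast` also normalizes casts around multiplication, using `nat.cast_mul`:

```lean
lemma norm_cast_example_3 (m n : ℕ) :
  ((m * n : ℕ) : ℤ) = (m : ℤ) * (n : ℤ) :=
by norm_cast
```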
## Lists, Multisets, and Finite Sets

For finite collections of elements, different structures are available:

* lists: order and duplicates matter;
* multisets: only duplicates matter;
* finsets: neither order nor duplicates matter. -/

lemma list_duplicates_example :
  [2, 3, 3, 4] ≠ [2, 3, 4] :=
dec_trivial

lemma list_order_example :
  [4, 3, 2] ≠ [2, 3, 4] :=
dec_trivial

lemma multiset_duplicates_example :
  ({2, 3, 3, 4} : multiset ℕ) ≠ {2, 3, 4} :=
dec_trivial

lemma multiset_order_example :
  ({2, 3, 4} : multiset ℕ) = {4, 3, 2} :=
dec_trivial

lemma finset_duplicates_example :
  ({2, 3, 3, 4} : finset ℕ) = {2, 3, 4} :=
dec_trivial

lemma finset_order_example :
  ({2, 3, 4} : finset ℕ) = {4, 3, 2} :=
dec_trivial

/-! `dec_trivial` is a special lemma that can be used on trivial decidable
goals (e.g., true closed executable expressions). -/

def list.elems : btree ℕ → list ℕ
| btree.empty := []
| (btree.node a l r) := a :: list.elems l ++ list.elems r

def multiset.elems : btree ℕ → multiset ℕ
| btree.empty := ∅
| (btree.node a l r) :=
  {a} ∪ (multiset.elems l ∪ multiset.elems r)

def finset.elems : btree ℕ → finset ℕ
| btree.empty := ∅
| (btree.node a l r) := {a} ∪ (finset.elems l ∪ finset.elems r)

#eval list.sum [2, 3, 4]                          -- result: 9
#eval multiset.sum ({2, 3, 4} : multiset ℕ)       -- result: 9
#eval finset.sum ({2, 3, 4} : finset ℕ) (λn, n)   -- result: 9

#eval list.prod [2, 3, 4]                         -- result: 24
#eval multiset.prod ({2, 3, 4} : multiset ℕ)      -- result: 24
#eval finset.prod ({2, 3, 4} : finset ℕ) (λn, n)  -- result: 24


/-! ## Order Type Classes

Many of the structures introduced above can be ordered. For example, the
well-known order on the natural numbers can be defined as follows: -/

inductive nat.le : ℕ → ℕ → Prop
| refl : ∀a : ℕ, nat.le a a
| step : ∀a b : ℕ, nat.le a b → nat.le a (b + 1)

/-! This is an example of a linear order. 
A __linear order__ (or +__total order__) is a binary relation `≤` such that for all `a`, `b`, `c`, the +following properties hold: + +* Reflexivity: `a ≤ a`. +* Transitivity: If `a ≤ b` and `b ≤ c`, then `a ≤ c`. +* Antisymmetry: If `a ≤ b` and `b ≤ a`, then `a = b`. +* Totality: `a ≤ b` or `b ≤ a`. + +If a relation has the first three properties, it is a __partial order__. An +example is `⊆` on sets, finite sets, or multisets. If a relation has the first +two properties, it is a __preorder__. An example is comparing lists by their +length. + +In Lean, there are type classes for these different kinds of orders: +`linear_order`, `partial_order`, and `preorder`. The `preorder` class has the +fields + + `le : α → α → Prop` + `le_refl : ∀a : α, le a a` + `le_trans : ∀a b c : α, le a b → le b c → le a c` + +The `partial_order` class also has + + `le_antisymm : ∀a b : α, le a b → le b a → a = b` + +and `linear_order` also has + + `le_total : ∀a b : α, le a b ∨ le b a` + +We can declare the preorder on lists that compares lists by their length as +follows: -/ + +@[instance] def list.length.preord {α : Type} : + preorder (list α) := +{ le := λxs ys, list.length xs ≤ list.length ys, + le_refl := by intro xs; exact nat.le_refl _, + le_trans := by intros xs ys zs; exact nat.le_trans } + +/-! This instance introduces the infix syntax `≤` and the relations `≥`, `<`, +and `>`: -/ + +lemma list.length.preord_example {α : Type} (c : α) : + [c] > [] := +dec_trivial + +/-! Complete lattices (lecture 10) are formalized as another type class, +`complete_lattice`, which inherits from `partial_order`. 
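The `≤` of the length preorder on lists can be used directly as well. A small example of ours (like the `>` lemma above, it reduces to a decidable comparison of lengths, here `1 ≤ 2`):

```lean
lemma list.length.preord_le_example {α : Type} (b c d : α) :
  [b] ≤ [c, d] :=
dec_trivial
```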
+

Type classes combining orders and algebraic structures are also available:

  `ordered_cancel_comm_monoid`
  `ordered_comm_group`
  `ordered_semiring`
  `linear_ordered_semiring`
  `linear_ordered_comm_ring`
  `linear_ordered_field`

All these mathematical structures relate `≤` and `<` with `0`, `1`, `+`, and `*`
by monotonicity rules (e.g., `a ≤ b → c ≤ d → a + c ≤ b + d`) and cancellation
rules (e.g., `c + a ≤ c + b → a ≤ b`). -/

end LoVe

diff --git a/lean/love12_basic_mathematical_structures_exercise_sheet.lean b/lean/love12_basic_mathematical_structures_exercise_sheet.lean
new file mode 100644
index 0000000..9822879
--- /dev/null
+++ b/lean/love12_basic_mathematical_structures_exercise_sheet.lean
@@ -0,0 +1,105 @@
import .love12_basic_mathematical_structures_demo


/-! # LoVe Exercise 12: Basic Mathematical Structures -/


set_option pp.beta true
set_option pp.generalized_field_notation false

namespace LoVe


/-! ## Question 1: Type Classes

Recall the inductive type `btree` we introduced earlier: -/

#check btree

/-! The following function takes two trees and attaches copies of the second
tree to each leaf of the first tree. -/

def btree.graft {α : Type} : btree α → btree α → btree α
| btree.empty u := u
| (btree.node a l r) u := btree.node a (btree.graft l u) (btree.graft r u)

#reduce btree.graft (btree.node 1 btree.empty btree.empty)
  (btree.node 2 btree.empty btree.empty)

/-! 1.1. Prove the following two lemmas by structural induction on `t`. -/

lemma btree.graft_assoc {α : Type} (t u v : btree α) :
  btree.graft (btree.graft t u) v = btree.graft t (btree.graft u v) :=
sorry

lemma btree.graft_empty {α : Type} (t : btree α) :
  btree.graft t btree.empty = t :=
sorry

/-! 1.2. Declare `btree` an instance of `add_monoid` using `graft` as the
addition operator. -/

#print add_monoid

@[instance] def btree.add_monoid {α : Type} : add_monoid (btree α) :=
sorry

/-! 1.3. 
Explain why `btree` with `graft` as addition cannot be declared an
instance of `add_group`. -/

#print add_group

/-! 1.4 (**optional**). Prove the following lemma illustrating why `btree` with
`graft` as addition does not constitute an `add_group`. -/

lemma btree.add_left_neg_counterexample :
  ∃x : btree ℕ, ∀y : btree ℕ, btree.graft y x ≠ btree.empty :=
sorry


/-! ## Question 2: Multisets and Finsets

Recall the following definitions from the lecture: -/

#check multiset.elems
#check finset.elems
#check list.elems

/-! 2.1. Prove that the multiset of nodes does not change when mirroring a tree.

Hints:

* Perform structural induction on `t`.

* The `cc` tactic also works with set operations. -/

lemma multiset.elems_mirror (t : btree ℕ) :
  multiset.elems (mirror t) = multiset.elems t :=
sorry

/-! 2.2. Prove that the finite set of nodes does not change when mirroring a
tree. -/

lemma finset.elems_mirror (t : btree ℕ) :
  finset.elems (mirror t) = finset.elems t :=
sorry

/-! 2.3. Show that this does not hold for the list of nodes by providing a
tree `t` for which `list.elems t ≠ list.elems (mirror t)`.

If you define a suitable counterexample, the proof below will succeed. -/

def rotten_tree : btree ℕ :=
sorry

#eval list.elems rotten_tree
#eval list.elems (mirror rotten_tree)

lemma list.elems_mirror_counterexample :
  ∃t : btree ℕ, list.elems t ≠ list.elems (mirror t) :=
begin
  apply exists.intro rotten_tree,
  exact dec_trivial
end

end LoVe

diff --git a/lean/love13_rational_and_real_numbers_demo.lean b/lean/love13_rational_and_real_numbers_demo.lean
new file mode 100644
index 0000000..ed8fe9d
--- /dev/null
+++ b/lean/love13_rational_and_real_numbers_demo.lean
@@ -0,0 +1,516 @@
import .lovelib


/-! # LoVe Demo 13: Rational and Real Numbers

We review the construction of `ℚ` and `ℝ` as quotient types.

Our procedure to construct types with specific properties:

1. 
Create a new type that can represent all elements, but not necessarily in a
   unique manner.

2. Quotient this representation, equating elements that should be equal.

3. Define operators on the quotient type by lifting functions from the base
   type and prove that they are compatible with the quotient relation.

We used this approach in lecture 11 to construct `ℤ`. It can be used for `ℚ` and
`ℝ` as well. -/


set_option pp.beta true
set_option pp.generalized_field_notation false

namespace LoVe


/-! ## Rational Numbers

**Step 1:** A rational number is a number that can be expressed as a fraction
`n / d` of integers `n` and `d ≠ 0`: -/

structure fraction :=
(num : ℤ)
(denom : ℤ)
(denom_ne_zero : denom ≠ 0)

/-! The number `n` is called the numerator, and the number `d` is called the
denominator.

The representation of a rational number as a fraction is not unique—e.g.,
`1 / 2 = 2 / 4 = -1 / -2`.

**Step 2:** Two fractions `n₁ / d₁` and `n₂ / d₂` represent the same rational
number if the ratio between numerator and denominator is the same—i.e.,
`n₁ * d₂ = n₂ * d₁`. This will be our equivalence relation `≈` on fractions. -/

namespace fraction

@[instance] def setoid : setoid fraction :=
{ r := λa b : fraction, num a * denom b = num b * denom a,
  iseqv :=
    begin
      repeat { apply and.intro },
      { intros a; refl },
      { intros a b h; cc },
      { intros a b c eq_ab eq_bc,
        apply int.eq_of_mul_eq_mul_right (denom_ne_zero b),
        cc }
    end }

lemma setoid_iff (a b : fraction) :
  a ≈ b ↔ num a * denom b = num b * denom a :=
by refl

/-! **Step 3:** Define `0 := 0 / 1`, `1 := 1 / 1`, addition, multiplication, etc.

  `n₁ / d₁ + n₂ / d₂` := `(n₁ * d₂ + n₂ * d₁) / (d₁ * d₂)`
  `(n₁ / d₁) * (n₂ / d₂)` := `(n₁ * n₂) / (d₁ * d₂)`

Then show that they are compatible with `≈`. 
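To see the equivalence relation at work, here is a small example of ours (stated with the anonymous constructor; the `norm_num` calls discharge the nonzero-denominator conditions): the fractions `1 / 2` and `2 / 4` are identified by `≈`, since `1 * 4 = 2 * 2`:

```lean
example : (⟨1, 2, by norm_num⟩ : fraction) ≈ ⟨2, 4, by norm_num⟩ :=
by rw setoid_iff; norm_num
```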
-/ + +def of_int (i : ℤ) : fraction := +{ num := i, + denom := 1, + denom_ne_zero := by simp } + +@[instance] def has_zero : has_zero fraction := +{ zero := of_int 0 } + +@[instance] def has_one : has_one fraction := +{ one := of_int 1 } + +@[instance] def has_add : has_add fraction := +{ add := λa b : fraction, + { num := num a * denom b + num b * denom a, + denom := denom a * denom b, + denom_ne_zero := + by apply mul_ne_zero; exact denom_ne_zero _ } } + +@[simp] lemma add_num (a b : fraction) : + num (a + b) = num a * denom b + num b * denom a := +by refl + +@[simp] lemma add_denom (a b : fraction) : + denom (a + b) = denom a * denom b := +by refl + +lemma add_equiv_add {a a' b b' : fraction} (ha : a ≈ a') + (hb : b ≈ b') : + a + b ≈ a' + b' := +begin + simp [setoid_iff, add_denom, add_num] at *, + calc (num a * denom b + num b * denom a) + * (denom a' * denom b') + = num a * denom a' * denom b * denom b' + + num b * denom b' * denom a * denom a' : + by simp [add_mul, mul_add]; ac_refl + ... = num a' * denom a * denom b * denom b' + + num b' * denom b * denom a * denom a' : + by simp [*] + ... 
= (num a' * denom b' + num b' * denom a') + * (denom a * denom b) : + by simp [add_mul, mul_add]; ac_refl +end + +@[instance] def has_neg : has_neg fraction := +{ neg := λa : fraction, + { num := - num a, + ..a } } + +@[simp] lemma neg_num (a : fraction) : + num (- a) = - num a := +by refl + +@[simp] lemma neg_denom (a : fraction) : + denom (- a) = denom a := +by refl + +lemma setoid_neg {a a' : fraction} (hab : a ≈ a') : + - a ≈ - a' := +by simp [setoid_iff] at hab ⊢; exact hab + +@[instance] def has_mul : has_mul fraction := +{ mul := λa b : fraction, + { num := num a * num b, + denom := denom a * denom b, + denom_ne_zero := + mul_ne_zero (denom_ne_zero a) (denom_ne_zero b) } } + +@[simp] lemma mul_num (a b : fraction) : + num (a * b) = num a * num b := +by refl + +@[simp] lemma mul_denom (a b : fraction) : + denom (a * b) = denom a * denom b := +by refl + +lemma setoid_mul {a a' b b' : fraction} (ha : a ≈ a') + (hb : b ≈ b') : + a * b ≈ a' * b' := +by simp [setoid_iff] at ha hb ⊢; cc + +@[instance] def has_inv : has_inv fraction := +{ inv := λa : fraction, + if ha : num a = 0 then + 0 + else + { num := denom a, + denom := num a, + denom_ne_zero := ha } } + +lemma inv_def (a : fraction) (ha : num a ≠ 0) : + a⁻¹ = + { num := denom a, + denom := num a, + denom_ne_zero := ha } := +dif_neg ha + +lemma inv_zero (a : fraction) (ha : num a = 0) : + a⁻¹ = 0 := +dif_pos ha + +@[simp] lemma inv_num (a : fraction) (ha : num a ≠ 0) : + num (a⁻¹) = denom a := +by rw inv_def a ha + +@[simp] lemma inv_denom (a : fraction) (ha : num a ≠ 0) : + denom (a⁻¹) = num a := +by rw inv_def a ha + +lemma setoid_inv {a a' : fraction} (ha : a ≈ a') : + a⁻¹ ≈ a'⁻¹ := +begin + cases' classical.em (num a = 0), + case inl : ha0 { + cases' classical.em (num a' = 0), + case inl : ha'0 { + simp [ha0, ha'0, inv_zero] }, + case inr : ha'0 { + simp [ha0, ha'0, setoid_iff, denom_ne_zero] at ha, + cc } }, + case inr : ha0 { + cases' classical.em (num a' = 0), + case inl : ha'0 { + simp [setoid_iff, 
ha'0, denom_ne_zero] at ha, + cc }, + case inr : ha'0 { + simp [setoid_iff, ha0, ha'0] at ha ⊢, + cc } } +end + +end fraction + +def rat : Type := +quotient fraction.setoid + +namespace rat + +@[instance] def has_zero : has_zero rat := +{ zero := ⟦0⟧ } + +@[instance] def has_one : has_one rat := +{ one := ⟦1⟧ } + +@[instance] def has_add : has_add rat := +{ add := quotient.lift₂ (λa b : fraction, ⟦a + b⟧) + begin + intros a b a' b' ha hb, + apply quotient.sound, + exact fraction.add_equiv_add ha hb + end } + +@[instance] def has_neg : has_neg rat := +{ neg := quotient.lift (λa : fraction, ⟦- a⟧) + begin + intros a a' ha, + apply quotient.sound, + exact fraction.setoid_neg ha + end } + +@[instance] def has_mul : has_mul rat := +{ mul := quotient.lift₂ (λa b : fraction, ⟦a * b⟧) + begin + intros a b a' b' ha hb, + apply quotient.sound, + exact fraction.setoid_mul ha hb + end } + +@[instance] def has_inv : has_inv rat := +{ inv := quotient.lift (λa : fraction, ⟦a⁻¹⟧) + begin + intros a a' ha, + apply quotient.sound, + exact fraction.setoid_inv ha + end } + +end rat + + +/-! ### Alternative Definitions of `ℚ` + +**Alternative 1:** Define `ℚ` as a subtype of `fraction`, with the requirement +that the denominator is positive and that the numerator and the denominator have +no common divisors except `1` and `-1`: -/ + +namespace alternative_1 + +def rat.is_canonical (a : fraction) : Prop := +fraction.denom a > 0 +∧ nat.coprime (int.nat_abs (fraction.num a)) + (int.nat_abs (fraction.denom a)) + +def rat : Type := +{a : fraction // rat.is_canonical a} + +end alternative_1 + +/-! This is more or less the `mathlib` definition. + +Advantages: + +* no quotient required; +* more efficient computation; +* more properties are syntactic equalities up to computation. + +Disadvantage: + +* more complicated function definitions. 
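For illustration (our own example, not from the original file): the fraction `1 / 2` is in canonical form, whereas `2 / 4` would not be, since `2` and `4` share the divisor `2`:

```lean
example : alternative_1.rat.is_canonical ⟨1, 2, by norm_num⟩ :=
⟨by norm_num, nat.coprime_one_left 2⟩
```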
+ +**Alternative 2**: Define all elements syntactically, including the desired +operations: -/ + +namespace alternative_2 + +inductive pre_rat : Type +| zero : pre_rat +| one : pre_rat +| add : pre_rat → pre_rat → pre_rat +| sub : pre_rat → pre_rat → pre_rat +| mul : pre_rat → pre_rat → pre_rat +| div : pre_rat → pre_rat → pre_rat + +/-! Then quotient `pre_rat` to enforce congruence rules and the field axioms: -/ + +inductive rat.rel : pre_rat → pre_rat → Prop +| add_congr {a b c d : pre_rat} : + rat.rel a b → rat.rel c d → + rat.rel (pre_rat.add a c) (pre_rat.add b d) +| add_assoc {a b c : pre_rat} : + rat.rel (pre_rat.add a (pre_rat.add b c)) + (pre_rat.add (pre_rat.add a b) c) +| zero_add {a : pre_rat} : + rat.rel (pre_rat.add pre_rat.zero a) a +-- etc. + +def rat : Type := +quot rat.rel + +end alternative_2 + +/-! Advantages: + +* no dependency on `ℤ`; +* easy proofs of the field axioms; +* general recipe reusable for other algebraic constructions (e.g., free monoids, + free groups). + +Disadvantage: + +* the definition of orders and lemmas about them are more complicated. + + +### Real Numbers + +Some sequences of rational numbers seem to converge because the numbers in the +sequence get closer and closer to each other, and yet do not converge to a +rational number. + +Example: + + `a₀ = 1` + `a₁ = 1.4` + `a₂ = 1.41` + `a₃ = 1.414` + `a₄ = 1.4142` + `a₅ = 1.41421` + `a₆ = 1.414213` + `a₇ = 1.4142135` + ⋮ + +This sequence seems to converge because each `a_n` is at most `10^-n` away from +any of the following numbers. But the limit is `√2`, which is not a rational +number. + +The rational numbers are incomplete, and the reals are their __completion__. + +To construct the reals, we need to fill in the gaps that are revealed by these +sequences that seem to converge, but do not. + +Mathematically, a sequence `a₀, a₁, …` of rational numbers is __Cauchy__ if for +any `ε > 0`, there exists an `N ∈ ℕ` such that for all `m ≥ N`, we have +`|a_N - a_m| < ε`. 
+

In other words, no matter how small we choose `ε`, we can always find a point in
the sequence from which all following numbers deviate by less than `ε`. -/

def is_cau_seq (f : ℕ → ℚ) : Prop :=
∀ε > 0, ∃N, ∀m ≥ N, abs (f N - f m) < ε

/-! Not every sequence is a Cauchy sequence: -/

lemma id_not_cau_seq :
  ¬ is_cau_seq (λn : ℕ, (n : ℚ)) :=
begin
  rw is_cau_seq,
  intro h,
  cases' h 1 zero_lt_one with i hi,
  have hi_succi :=
    hi (i + 1) (by simp),
  simp [←sub_sub] at hi_succi,
  assumption
end

/-! We define a type of Cauchy sequences as a subtype: -/

def cau_seq : Type :=
{f : ℕ → ℚ // is_cau_seq f}

def seq_of (f : cau_seq) : ℕ → ℚ :=
subtype.val f

/-! Cauchy sequences represent real numbers:

* `a_n = 1 / n` represents the real number `0`;
* `1, 1.4, 1.41, …` represents the real number `√2`;
* `a_n = 0` also represents the real number `0`.

Since different Cauchy sequences can represent the same real number, we need to
take the quotient. Formally, two sequences represent the same real number when
their difference converges to zero: -/

namespace cau_seq

@[instance] def setoid : setoid cau_seq :=
{ r := λf g : cau_seq,
    ∀ε > 0, ∃N, ∀m ≥ N, abs (seq_of f m - seq_of g m) < ε,
  iseqv :=
    begin
      apply and.intro,
      { intros f ε hε,
        apply exists.intro 0,
        finish },
      apply and.intro,
      { intros f g hfg ε hε,
        cases' hfg ε hε with N hN,
        apply exists.intro N,
        intros m hm,
        rw abs_sub,
        apply hN m hm },
      { intros f g h hfg hgh ε hε,
        cases' hfg (ε / 2) (half_pos hε) with N₁ hN₁,
        cases' hgh (ε / 2) (half_pos hε) with N₂ hN₂,
        apply exists.intro (max N₁ N₂),
        intros m hm,
        calc abs (seq_of f m - seq_of h m)
            ≤ abs (seq_of f m - seq_of g m)
              + abs (seq_of g m - seq_of h m) :
          by apply abs_sub_le
        ... < ε / 2 + ε / 2 :
          add_lt_add (hN₁ m (le_of_max_le_left hm))
            (hN₂ m (le_of_max_le_right hm))
        ... 
= ε : + by simp } + end } + +lemma setoid_iff (f g : cau_seq) : + f ≈ g ↔ + ∀ε > 0, ∃N, ∀m ≥ N, abs (seq_of f m - seq_of g m) < ε := +by refl + +/-! We can define constants such as `0` and `1` as a constant sequence. Any +constant sequence is a Cauchy sequence: -/ + +def const (q : ℚ) : cau_seq := +subtype.mk (λ_ : ℕ, q) (by rw is_cau_seq; intros ε hε; finish) + +/-! Defining addition of real numbers requires a little more effort. We define +addition on Cauchy sequences as pairwise addition: -/ + +@[instance] def has_add : has_add cau_seq := +{ add := λf g : cau_seq, + subtype.mk (λn : ℕ, seq_of f n + seq_of g n) sorry } + +/-! Above, we omit the proof that the addition of two Cauchy sequences is again +a Cauchy sequence. + +Next, we need to show that this addition is compatible with `≈`: -/ + +lemma add_equiv_add {f f' g g' : cau_seq} (hf : f ≈ f') + (hg : g ≈ g') : + f + g ≈ f' + g' := +begin + intros ε₀ hε₀, + simp [setoid_iff], + cases' hf (ε₀ / 2) (half_pos hε₀) with Nf hNf, + cases' hg (ε₀ / 2) (half_pos hε₀) with Ng hNg, + apply exists.intro (max Nf Ng), + intros m hm, + calc abs (seq_of (f + g) m - seq_of (f' + g') m) + = abs ((seq_of f m + seq_of g m) + - (seq_of f' m + seq_of g' m)) : + by refl + ... = abs ((seq_of f m - seq_of f' m) + + (seq_of g m - seq_of g' m)) : + begin + have arg_eq : + seq_of f m + seq_of g m - (seq_of f' m + seq_of g' m) = + seq_of f m - seq_of f' m + (seq_of g m - seq_of g' m), + by linarith, + rw arg_eq + end + ... ≤ abs (seq_of f m - seq_of f' m) + + abs (seq_of g m - seq_of g' m) : + by apply abs_add + ... < ε₀ / 2 + ε₀ / 2 : + add_lt_add (hNf m (le_of_max_le_left hm)) + (hNg m (le_of_max_le_right hm)) + ... = ε₀ : + by simp +end + +end cau_seq + +/-! 
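As a small sanity check of ours before taking the quotient (the `show` step exposes the underlying rational arithmetic), the constant sequences for `1 + 1` and for `2` are equivalent, so they will denote the same real number:

```lean
example : cau_seq.const 1 + cau_seq.const 1 ≈ cau_seq.const 2 :=
begin
  intros ε hε,
  apply exists.intro 0,
  intros m hm,
  show abs ((1 + 1 : ℚ) - 2) < ε,
  norm_num,
  exact hε
end
```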
The real numbers are the quotient: -/ + +def real : Type := +quotient cau_seq.setoid + +namespace real + +@[instance] def has_zero : has_zero real := +{ zero := ⟦cau_seq.const 0⟧ } + +@[instance] def has_one : has_one real := +{ one := ⟦cau_seq.const 1⟧ } + +@[instance] def has_add : has_add real := +{ add := quotient.lift₂ (λa b : cau_seq, ⟦a + b⟧) + begin + intros a b a' b' ha hb, + apply quotient.sound, + exact cau_seq.add_equiv_add ha hb, + end } + +end real + + +/-! ### Alternative Definitions of `ℝ` + +* Dedekind cuts: `r : ℝ` is represented essentially as `{x : ℚ | x < r}`. + +* Binary sequences `ℕ → bool` can represent the interval `[0, 1]`. This can be + used to build `ℝ`. -/ + +end LoVe diff --git a/lean/love13_rational_and_real_numbers_exercise_sheet.lean b/lean/love13_rational_and_real_numbers_exercise_sheet.lean new file mode 100644 index 0000000..0824b3d --- /dev/null +++ b/lean/love13_rational_and_real_numbers_exercise_sheet.lean @@ -0,0 +1,50 @@ +import .love05_inductive_predicates_demo +import .love13_rational_and_real_numbers_demo + + +/-! # LoVe Exercise 13: Rational and Real Numbers -/ + + +set_option pp.beta true +set_option pp.generalized_field_notation false + +namespace LoVe + + +/-! ## Question 1: Rationals + +1.1. Prove the following lemma. + +Hint: The lemma `fraction.mk.inj_eq` might be useful. -/ + +#check fraction.mk.inj_eq + +lemma fraction.ext (a b : fraction) (hnum : fraction.num a = fraction.num b) + (hdenom : fraction.denom a = fraction.denom b) : + a = b := +sorry + +/-! 1.2. Extending the `fraction.has_mul` instance from the lecture, declare +`fraction` as an instance of `semigroup`. + +Hint: Use the lemma `fraction.ext` above, and possibly `fraction.mul_num`, and +`fraction.mul_denom`. -/ + +#check fraction.ext +#check fraction.mul_num +#check fraction.mul_denom + +@[instance] def fraction.semigroup : semigroup fraction := +{ mul_assoc := + sorry, + ..fraction.has_mul } + +/-! 1.3. 
Extending the `rat.has_mul` instance from the lecture, declare `rat` as
an instance of `semigroup`. -/

@[instance] def rat.semigroup : semigroup rat :=
{ mul_assoc :=
    sorry,
  ..rat.has_mul }

end LoVe

diff --git a/lean/lovelib.lean b/lean/lovelib.lean
new file mode 100644
index 0000000..241a3e4
--- /dev/null
+++ b/lean/lovelib.lean
@@ -0,0 +1,295 @@
import algebra
import data.real.basic
import data.vector
import tactic.explode
import tactic.find
import tactic.induction
import tactic.linarith
import tactic.rcases
import tactic.rewrite
import tactic.ring_exp
import tactic.tidy
import tactic.where


/-! # LoVe Library

This file contains a few extensions on top of Lean's core libraries and
`mathlib`. -/


namespace LoVe


/-! ## Structured Proofs -/

notation `fix ` binders `, ` r:(scoped f, f) := r


/-! ## Logical Connectives -/

meta def tactic.dec_trivial := `[exact dec_trivial]

lemma not_def (a : Prop) :
  ¬ a ↔ a → false :=
by refl

@[simp] lemma not_not_iff (a : Prop) [decidable a] :
  ¬¬ a ↔ a :=
by by_cases a; simp [h]

@[simp] lemma and_imp_distrib (a b c : Prop) :
  (a ∧ b → c) ↔ (a → b → c) :=
iff.intro
  (assume h ha hb, h ⟨ha, hb⟩)
  (assume h ⟨ha, hb⟩, h ha hb)

@[simp] lemma or_imp_distrib {a b c : Prop} :
  a ∨ b → c ↔ (a → c) ∧ (b → c) :=
iff.intro
  (assume h,
    ⟨assume ha, h (or.intro_left _ ha), assume hb, h (or.intro_right _ hb)⟩)
  (assume ⟨ha, hb⟩ h, match h with or.inl h := ha h | or.inr h := hb h end)

@[simp] lemma exists_imp_distrib {α : Sort*} {p : α → Prop} {a : Prop} :
  ((∃x, p x) → a) ↔ (∀x, p x → a) :=
iff.intro
  (assume h hp ha, h ⟨hp, ha⟩)
  (assume h ⟨hp, ha⟩, h hp ha)

lemma and_exists {α : Sort*} {p : α → Prop} {a : Prop} :
  (a ∧ (∃x, p x)) ↔ (∃x, a ∧ p x) :=
iff.intro
  (assume ⟨ha, x, hp⟩, ⟨x, ha, hp⟩)
  (assume ⟨x, ha, hp⟩, ⟨ha, x, hp⟩)

@[simp] lemma exists_false {α : Sort*} :
  (∃x : α, false) ↔ false :=
iff.intro (assume ⟨a, f⟩, f) (assume h, h.elim)


/-! 
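A quick usage sketch of ours showing how these `@[simp]` lemmas are intended to be applied: `simp` rewrites an implication out of an existential quantifier via `exists_imp_distrib`, after which the hypothesis closes the goal:

```lean
example {α : Type} (p : α → Prop) (q : Prop) (h : ∀x, p x → q) :
  (∃x, p x) → q :=
by simp [exists_imp_distrib]; exact h
```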
## Natural Numbers -/ + +attribute [simp] nat.add + + +/-! ## Integers -/ + +@[simp] lemma int.neg_comp_neg : + int.neg ∘ int.neg = id := +begin + apply funext, + apply neg_neg +end + + +/-! ## Reflexive Transitive Closure -/ + +namespace rtc + +inductive star {α : Sort*} (r : α → α → Prop) (a : α) : α → Prop +| refl {} : star a +| tail {b c} : star b → r b c → star c + +attribute [refl] star.refl + +namespace star + +variables {α : Sort*} {r : α → α → Prop} {a b c d : α} + +@[trans] lemma trans (hab : star r a b) (hbc : star r b c) : + star r a c := +begin + induction' hbc, + case refl { + assumption }, + case tail : c d hbc hcd hac { + exact (tail (hac hab)) hcd } +end + +lemma single (hab : r a b) : + star r a b := +refl.tail hab + +lemma head (hab : r a b) (hbc : star r b c) : + star r a c := +begin + induction' hbc, + case refl { + exact (tail refl) hab }, + case tail : c d hbc hcd hac { + exact (tail (hac hab)) hcd } +end + +lemma head_induction_on {α : Sort*} {r : α → α → Prop} {b : α} + {P : ∀a : α, star r a b → Prop} {a : α} (h : star r a b) + (refl : P b refl) + (head : ∀{a c} (h' : r a c) (h : star r c b), P c h → P a (h.head h')) : + P a h := +begin + induction' h, + case refl { + exact refl }, + case tail : b c hab hbc ih { + apply ih, + show P b _, from + head hbc _ refl, + show ∀a a', r a a' → star r a' b → P a' _ → P a _, from + assume a a' hab hbc, head hab _ } +end + +lemma trans_induction_on {α : Sort*} {r : α → α → Prop} + {p : ∀{a b : α}, star r a b → Prop} {a b : α} (h : star r a b) + (ih₁ : ∀a, @p a a refl) (ih₂ : ∀{a b} (h : r a b), p (single h)) + (ih₃ : ∀{a b c} (h₁ : star r a b) (h₂ : star r b c), p h₁ → + p h₂ → p (h₁.trans h₂)) : + p h := +begin + induction' h, + case refl { + exact ih₁ a }, + case tail : b c hab hbc ih { + exact ih₃ hab (single hbc) (ih ih₁ @ih₂ @ih₃) (ih₂ hbc) } +end + +lemma lift {β : Sort*} {s : β → β → Prop} (f : α → β) + (h : ∀a b, r a b → s (f a) (f b)) (hab : star r a b) : + star s (f a) (f b) := 
+hab.trans_induction_on + (assume a, refl) + (assume a b, single ∘ h _ _) + (assume a b c _ _, trans) + +lemma mono {p : α → α → Prop} : + (∀a b, r a b → p a b) → star r a b → star p a b := +lift id + +lemma star_star_eq : + star (star r) = star r := +funext + (assume a, + funext + (assume b, + propext (iff.intro + (assume h, + begin + induction' h, + { refl }, + { transitivity; + assumption } + end) + (star.mono (assume a b, + single))))) + +end star + +end rtc + +export rtc + + +/-! ## States -/ + +def state : Type := +string → ℕ + +def state.update (name : string) (val : ℕ) (s : state) : state := +λname', if name' = name then val else s name' + +notation s `{` name ` ↦ ` val `}` := state.update name val s + +instance : has_emptyc state := +{ emptyc := λ_, 0 } + +@[simp] lemma update_apply (name : string) (val : ℕ) (s : state) : + s{name ↦ val} name = val := +if_pos rfl + +@[simp] lemma update_apply_ne (name name' : string) (val : ℕ) (s : state) + (h : name' ≠ name . tactic.dec_trivial) : + s{name ↦ val} name' = s name' := +if_neg h + +@[simp] lemma update_override (name : string) (val₁ val₂ : ℕ) (s : state) : + s{name ↦ val₂}{name ↦ val₁} = s{name ↦ val₁} := +begin + apply funext, + intro name', + by_cases name' = name; + simp [h] +end + +@[simp] lemma update_swap (name₁ name₂ : string) (val₁ val₂ : ℕ) (s : state) + (h : name₁ ≠ name₂ . 
tactic.dec_trivial) : + s{name₂ ↦ val₂}{name₁ ↦ val₁} = s{name₁ ↦ val₁}{name₂ ↦ val₂} := +begin + apply funext, + intro name', + by_cases name' = name₁; + by_cases name' = name₂; + simp * at * +end + +@[simp] lemma update_id (name : string) (s : state) : + s{name ↦ s name} = s := +begin + apply funext, + intro name', + by_cases name' = name; + simp * at * +end + +@[simp] lemma update_same_const (name : string) (val : ℕ) : + (λ_, val){name ↦ val} = (λ_, val) := +by apply funext; simp + +example (s : state) : + s{"a" ↦ 0}{"a" ↦ 2} = s{"a" ↦ 2} := +by simp + +example (s : state) : + s{"a" ↦ 0}{"b" ↦ 2} = s{"b" ↦ 2}{"a" ↦ 0} := +by simp + +example (s : state) : + s{"a" ↦ s "a"}{"b" ↦ 0} = s{"b" ↦ 0} := +by simp + + +/-! ## Relations -/ + +def Id {α : Type} : set (α × α) := +{ab | prod.snd ab = prod.fst ab} + +@[simp] lemma mem_Id {α : Type} (a b : α) : + (a, b) ∈ @Id α ↔ b = a := +by refl + +def comp {α : Type} (r₁ r₂ : set (α × α)) : set (α × α) := +{ac | ∃b, (prod.fst ac, b) ∈ r₁ ∧ (b, prod.snd ac) ∈ r₂} + +infixl ` ◯ ` : 90 := comp + +@[simp] lemma mem_comp {α : Type} (r₁ r₂ : set (α × α)) + (a b : α) : + (a, b) ∈ r₁ ◯ r₂ ↔ (∃c, (a, c) ∈ r₁ ∧ (c, b) ∈ r₂) := +by refl + +def restrict {α : Type} (r : set (α × α)) (p : α → Prop) : + set (α × α) := +{ab | ab ∈ r ∧ p (prod.fst ab)} + +infixl ` ⇃ ` : 90 := restrict + +@[simp] lemma mem_restrict {α : Type} (r : set (α × α)) + (p : α → Prop) (a b : α) : + (a, b) ∈ r ⇃ p ↔ (a, b) ∈ r ∧ p a := +by refl + + +/-! ## Setoids -/ + +def equivalence_rel : Type → Type := +setoid + +end LoVe diff --git a/leanpkg.toml b/leanpkg.toml new file mode 100644 index 0000000..8a1abaa --- /dev/null +++ b/leanpkg.toml @@ -0,0 +1,8 @@ +[package] +name = "logical-verification" +version = "2.0" +lean_version = "leanprover-community/lean:3.30.0" +path = "." + +[dependencies] +mathlib = {git = "https://github.com/leanprover-community/mathlib", rev = "48104c3a22ae4b35d92b8ad30831eb1591049958"}