This is a repository for the course 18.06: Linear Algebra at MIT in Fall 2022. See other branches of this repository for previous semesters.
Instructor: Prof. Steven G. Johnson. Course administrator: Sergei Korotkikh.
Lectures: MWF11 in 26-100. Handwritten notes are posted online, along with video recordings (linked below) and other materials (slides, further reading) in the lecture summaries below.
Exams: 11am in 26-100, on 10/7, 11/14, & 12/9. Final exam: date TBA.
Recitations:
- R01,R02 — Chirag Falor: T9 in 2-143, T10 in 2-146 (office hours Mon 3pm via Zoom, Wed 3pm in TBD).
- R03,R05 — Melissa Sherman-Bennett: T11 in 2-147, T12 in 2-147 (office hours Wed 10am via Zoom, Thurs 10am in TBD).
- R04 — Sergei Korotkikh: T11 in 2-146 (office hours Tues 6pm via Zoom, Thurs 6pm in 2-231D).
- R06,R09 — Victor Rong: T12 in 2-146, T1 in 2-146 (office hours Mon 8pm via Zoom, Tues 2pm in TBD).
- R07,R08 — Mitchell Harris: T12 in 2-361, T1 in 2-142 (office hours Mon 2pm via Zoom, Fri 2pm in 32-D 6th-floor lounge).
- R10,R11 — Ishan Levy: T1 in 2-136, T2 in 2-142 (office hours Thurs 10:30am via Zoom, Wed 2pm in 2-390).
- R12,R13 — Gefei Dang: T2 in 2-146, T3 in 2-142 (office hours Thurs 4pm via Zoom, Wed 11am in 2-239).
Undergraduate Assistants: TBA.
Resources: Piazza discussion forum, Math Learning Center, TSR^2 study/resource room, pset partners.
This document is a brief summary of what was covered in each 18.06 lecture, along with links and suggestions for further reading. It is not a good substitute for attending lecture, but may provide a useful study guide. (You can also look at the analogous summaries from Spring 2022.)
- course overview/syllabus
- pset 1: due Friday Sep 16 at 11am (submit your solutions on Gradescope).
- video
Slides giving the syllabus and the "big picture" of what 18.06 is about. Introduction to thinking about matrices as linear operations, not just as "bags of numbers".
Further reading: Strang, chapter 1, and section 8.1 on linear transformations. 3blue1brown has a nice video on matrix multiplication as composition of linear transformations. If you've forgotten the basics of how to multiply matrices by vectors or matrices by matrices, google for some tutorial material online (e.g. Khan Academy) and do a quick brush-up.
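To make the "composition" viewpoint concrete, here is a tiny Julia check (with made-up matrices, purely for illustration) that multiplying a vector by the product AB is the same as applying B first and then A:

```julia
A = [1 2; 3 4]        # arbitrary example matrices
B = [0 1; 1 1]
x = [5, 6]

(A * B) * x           # one step: apply the composed transformation AB
A * (B * x)           # two steps: apply B, then A (same result, [28, 62])
```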
- handwritten notes: see link above (at beginning)
- video
Gaussian elimination for Ax=b: I started with the grade-school/high-school viewpoint of writing out three equations in three unknowns, adding/subtracting multiples of equations until we were left with one equation in one unknown. Then, I wrote the same equations in matrix form, and renamed this process "Gaussian elimination": we add/subtract multiples of matrix rows to introduce zeros below the diagonal, i.e. to make the matrix upper triangular U. We then do the same row operations to the right hand side b to get a new vector c. Finally, we solve Ux=c for x by working from bottom (1 equation in 1 variable) to top, a process called "backsubstitution".
To do the same operations to both A and b, a useful trick for hand calculations is to augment the matrix with a new column representing the right-hand side, forming [A b] before starting Gaussian elimination.
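For a concrete (made-up) 3×3 example, the whole hand calculation looks roughly like this in Julia, working on the augmented matrix and finishing with backsubstitution:

```julia
A = [1.0 2 1; 3 8 1; 0 4 1]    # an arbitrary example system Ax = b
b = [2.0, 12, 2]
M = [A b]                      # augmented matrix [A b]

M[2,:] -= 3 * M[1,:]           # zero out the entry below the first pivot
M[3,:] -= 2 * M[2,:]           # zero out the entry below the second pivot
U, c = M[:, 1:3], M[:, 4]      # upper-triangular U and transformed right-hand side c

# backsubstitution: solve Ux = c from the bottom row up
x = zeros(3)
x[3] = c[3] / U[3,3]
x[2] = (c[2] - U[2,3]*x[3]) / U[2,2]
x[1] = (c[1] - U[1,2]*x[2] - U[1,3]*x[3]) / U[1,1]
x                              # = [2, 1, -2], and A*x ≈ b
```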
What comes next? The problem with expressing Gaussian elimination this way, as operations on individual numbers in the matrix, is that it is impossible to follow the process in detail for anything except a very tiny matrix. We need a higher-level "algebraic" way to express the process, both to help us understand it and to help us use it (e.g. to perform additional algebraic transformations afterwards). To do this, we want to express the process not as operations on individual numbers, but as matrix operations.
Rewrote Gaussian elimination in matrix form: we multiply a matrix A on the left by a sequence of lower-triangular "elimination matrices" Eₙ to arrive at an upper-triangular matrix U = EA. To solve Ax=b, we can think of the earlier process as multiplying both sides on the left by E, the linear operator representing the composition (product) of all of the elimination steps: the left-hand side becomes EAx = Ux and the right-hand side becomes Eb = c.
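In code, each step is just multiplication on the left by a lower-triangular matrix. Here is the same made-up example as above written that way (a sketch, not the lecture's exact numbers):

```julia
A = [1.0 2 1; 3 8 1; 0 4 1]
E1 = [1 0 0; -3 1 0; 0 0 1]    # subtract 3×(row 1) from row 2
E2 = [1 0 0; 0 1 0; 0 -2 1]    # subtract 2×(row 2) from row 3
E = E2 * E1                    # composition of all the elimination steps
U = E * A                      # upper triangular: [1 2 1; 0 2 -2; 0 0 5]
```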
We're not done: it turns out to be even more fruitful to reverse the process, and write A = LU: L represents the operations required to turn the matrix U back into A, and turns out to be a lower-triangular matrix whose entries are just a record of the elimination steps. This LU factorization is extremely useful and important because it allows us to replace a complicated matrix A with two much simpler (triangular) ones. For example, solving Ax=b turns into LUx=b, which we can solve with just two "triangular" solves: a forward-substitution with L followed by a backsubstitution with U. More on this next time.
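Julia's LinearAlgebra standard library can compute this factorization for us. A minimal sketch, using the same made-up A as above and NoPivot() to skip row re-ordering (which we haven't needed yet for this matrix):

```julia
using LinearAlgebra

A = [1.0 2 1; 3 8 1; 0 4 1]
b = [2.0, 12, 2]

F = lu(A, NoPivot())   # A == F.L * F.U, with no row swaps
c = F.L \ b            # forward-substitution: solve L*c = b
x = F.U \ c            # backsubstitution:     solve U*x = c
A * x ≈ b              # true
```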
Further reading: Textbook sections 2.1, 2.2, 2.3. Strang lecture 2 video. And there is a Gaussian-elimination Julia notebook that covers the same steps in Julia form. See also "The key reason why A = LU" in section 2.6 of the textbook.
Optional Julia Tutorial (Monday Sep 12 @ 5pm): Zoom
- video recording: to be posted
A basic overview of the Julia programming environment for numerical computations that we will use in 18.06 for simple computational exploration. This (Zoom-based) tutorial will cover what Julia is and the basics of interaction, scalar/vector/matrix arithmetic, and plotting — we'll be using it as just a "fancy calculator" and no "real programming" will be required.
- Tutorial materials (and links to other resources)
If possible, try to install Julia on your laptop beforehand using the instructions at the above link. Failing that, you can run Julia in the cloud (see instructions above).