(Left to right): Avalanche activity cascades in a sandpile automaton; a vortex street formed by flow past a cylinder; and Turing patterns in a reaction-diffusion model. All simulations are from the course homeworks; a higher-resolution video may be viewed here
Materials for UT Austin's graduate computational physics course, taught by William Gilpin.
This course provides a broad survey of computational methods that are particularly relevant to modern physics research. It covers efficient algorithm design and performance analysis, traditional numerical recipes such as integration and matrix manipulation, and emerging methods in data analysis and machine learning. By the end of the class, the goal is to feel comfortable approaching diverse, open-ended computational problems that arise during research, and to be ready to design and share new algorithms with the broader research community.
The class website is located here. If you are enrolled in the course at UT, the syllabus and calendar are here.
- HW1: The sandpile cellular automaton and directed percolation. Covers recursion, runtime scaling, vectorization
- HW2: Linear dynamical systems and decomposing a chaotic flow. Covers numerical linear algebra, optimization, and unsupervised learning
- HW3: Turing patterns and phase separation. Covers numerical integration; finite-difference and spectral methods
- HW4: Predicting turbulence with operator methods. Covers supervised learning, time series forecasting, and ridge, kernel, and logistic regression
- Homework solutions
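As a taste of the vectorization ideas in HW1, a single relaxation sweep of the Abelian sandpile can be written without any explicit loop over lattice sites. This is only an illustrative sketch, not the homework's reference implementation; the function names and the open-boundary convention are assumptions.

```python
import numpy as np

def topple(grid, threshold=4):
    """One sweep of the Abelian sandpile: every site holding at least
    `threshold` grains sheds one grain to each of its four neighbors.
    Grains pushed past the edge fall off (open boundaries)."""
    unstable = grid >= threshold
    grid = grid - threshold * unstable
    # shift one grain from each unstable site to its four neighbors
    grid[1:, :] += unstable[:-1, :]
    grid[:-1, :] += unstable[1:, :]
    grid[:, 1:] += unstable[:, :-1]
    grid[:, :-1] += unstable[:, 1:]
    return grid

def relax(grid, threshold=4):
    """Sweep until no site is unstable; returns the stable configuration."""
    while np.any(grid >= threshold):
        grid = topple(grid, threshold)
    return grid
```

For example, a single site loaded with four grains topples once, leaving one grain on each of its four neighbors.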
- Lecture 1: Python for Scientific Computing, Vectorization, and the Mandelbrot set
- Lecture 2: Inheritance, Object-Oriented Programming, and the Game of Life
- Lecture 3: Time & Space complexity, recursion, and solving a labyrinth
- Lecture 4: Recursion and the Fast Fourier Transform
- Lecture 5: Numerical Linear Algebra, Condition Number, Preconditioning
- Lecture 6: Matrix inversion and LU factorization
- Lecture 7: The QR algorithm for eigenvalues
- Lecture 8: Singular Value Decomposition
- Lecture 9: Krylov subspace methods & Conjugate gradient methods
- Lecture 10: Optimization in low dimensions
- Lecture 11: Multivariate Optimization and Potential Flows
- Lecture 12: Numerical Integration and predicting chaos
- Lecture 13: Variable step integration, symplectic and stochastic systems
- Lecture 14: A pragmatist's guide to numerical integration [video]
- Lecture 15: Boundary value problems and electrodynamics
- Lecture 16: Shocks, solitons, and hyperbolic partial differential equations
- Lecture 17: Spectral methods using the Dedalus Python Package [video]
- Lecture 18: Supervised Learning & The Ising Model [video]
- Lecture 19: Classification, Logistic Regression, and phases of matter
- Lecture 20: Overfitting, bias-variance tradeoff, and double-descent
- Lecture 21: Unsupervised learning, embedding, clustering
- Lecture 22: Time series representation, featurizing chaos, kernel methods
- Lecture 23: Gaussian mixtures, expectation-maximization, and superresolution microscopy
- Lecture 24: Introduction to deep learning, predicting the Reynolds number of turbulence
- Lecture 25: Types of neural networks; symmetries in physical systems
- Lecture 26: Training neural networks with backpropagation
- Lecture 27: Using convolutional neural networks to predict the phases of the Ising Model
- Lab 1: Getting started with Python
- Lab 2: git, GitHub, and GitHub Pages
- Lab 3: Documentation and Formatting
- Lab 4: Automatically creating online documentation with Sphinx
- Lab 5: Unit Testing
- Lab 6: Structuring an Open-Source Repository
- Quantum Reinforcement Learning with the Grover method
- Modelling the contractile dynamics of muscle
- Neural System Identification by Training Recurrent Neural Networks
- Testing particle phenomenology beyond the Standard Model with Bayesian classification
If you are teaching a similar course, please feel free to use any or all of these materials. If you have suggestions for improvements, or if you find any errors or typos, please open an issue or submit a correction as a pull request on GitHub.
For students, course-related questions are best posted on GitHub as Discussions or Issues on the course repository; for other matters, I can be reached via email.
We will primarily use Python 3 with the following packages:
- numpy
- matplotlib
- scipy
- scikit-learn
- jupyter
For projects and other parts of the class, you might also need:
- ipykernel
- scikit-image
- umap-learn
- statsmodels
- pytorch
- jax
- numba
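Assuming a standard pip-based setup (a conda environment works equally well), the packages above can be installed in two lines. Note that the `pytorch` package is published on PyPI under the name `torch`; the exact package names here are otherwise taken from the lists above.

```shell
# core packages used throughout the course
pip install numpy matplotlib scipy scikit-learn jupyter

# optional extras for projects (pytorch installs as "torch" on PyPI)
pip install ipykernel scikit-image umap-learn statsmodels torch jax numba
```
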
Portions of the material in this course are adapted or inspired by other open-source classes, including: Pankaj Mehta's Machine Learning for Physics course, Chris Rycroft's Numerical Recipes course, Volodymyr Kuleshov's Applied Machine Learning course, Fei-Fei Li's Deep Learning for Computer Vision course, Lorena Barba's CFD course, and Jim Crutchfield's Nonlinear Dynamics course.