A course on electromagnetism, starting from the Maxwell equations and describing their application to electrostatics, magnetostatics, induction, light and radiation. The course also covers the relativistic form of the equations and electromagnetism in materials.
An introduction to the quantum Hall effect. The first half uses only quantum mechanics and is at a level suitable for undergraduates. The second half covers more advanced field-theoretic techniques of Chern-Simons and conformal field theories.
An introduction to fluid mechanics, aimed at undergraduates. The course covers the basic flows arising from the Euler and Navier-Stokes equations, including discussions of waves, stability, and turbulence.
An introduction to statistical mechanics and thermodynamics, aimed at final year undergraduates. After developing the fundamentals of the subject, the course covers classical gases, quantum gases and phase transitions.
An introduction to general relativity, aimed at first year graduate students. It starts with a gentle introduction to geodesics in curved spacetime. The course then describes the basics of differential geometry before turning to more advanced topics in gravitation.
These notes provide an introduction to the fun bits of quantum field theory, in particular those topics related to topology and strong coupling. They are aimed at beginning graduate students and assume a familiarity with the path integral.
An elementary course on elementary particles. This is, by some margin, the least mathematically sophisticated of all my lecture notes, requiring little more than high school mathematics. The lectures provide a pop-science, but detailed, account of particle physics and quantum field theory. These lectures were given at the CERN summer school.
A course on particle physics that most definitely uses more than high school mathematics. The lectures describe the mathematical structure of the Standard Model, and explore features of the strong and weak forces. There are also sections on spontaneous symmetry breaking and anomalies.
An introduction to N=1 supersymmetry in d=3+1 dimensions, aimed at first year graduate students. The lectures describe how to construct supersymmetric actions before unpacking the details of their quantum dynamics and dualities.
The goal of these notes is to give an introduction to fundamental models and techniques in graduate-level modern discrete probability. Topics are taken mostly from probability on graphs: percolation, random graphs, Markov random fields, random walks on graphs, etc. No attempt is made at covering these areas in depth. Rather, the emphasis is on developing and illustrating common and important techniques. Various applications, in particular in the theoretical foundations of data science and machine learning, are also discussed along the way.
The notes are aimed at graduate students in mathematics, statistics, computer science, electrical engineering, physics, economics, etc. with previous exposure to basic probability theory (ideally measure-theoretic probability theory; at Wisconsin, Math 733; my own course notes) and stochastic processes (at Wisconsin, Math 632). These notes were developed for a one-semester course at UW-Madison (Fall 2014, Fall 2017, Fall 2020, and Fall 2023). I also gave a version of this course at the TRIPODS Summer School on Fundamentals of Data Analysis. The slides are here.
Much of the material covered can also be found in the following excellent texts (conveniently available online):

[vdH] Random Graphs and Complex Networks, Vol. I, by van der Hofstad
[LP] Probability on Trees and Networks, by Lyons and Peres
[LPW] Markov Chains and Mixing Times, by Levin, Peres and Wilmer
[St] A Mini Course on Percolation Theory, by Steif
[Gr] Probability on Graphs, by Grimmett
[Lu] Concentration-of-Measure Inequalities, by Lugosi
[Ve] High-Dimensional Probability: An Introduction with Applications in Data Science, by Vershynin
[Du] Random Graph Dynamics, by Durrett
[vH] Probability in High Dimension, by van Handel
Chapter 2: Moments and tails (updated: Dec 20, 2023)
Review: Markov's inequality, Chebyshev's inequality, moment-generating function.
Techniques: probabilistic method, first moment principle, second moment method, Chernoff-Cramer method, sub-Gaussian and sub-exponential variables, epsilon-nets, chaining.
Examples: random k-SAT threshold, percolation on trees and lattices (critical value), Erdos-Renyi graph (containment, connectivity), stochastic knapsack problem, Johnson-Lindenstrauss, VC theory.
Summary: In this chapter we look at the moments of a random variable. Specifically, we demonstrate that moments capture useful information about the tail of a random variable while often being simpler to compute, or at least bound. Several well-known inequalities quantify this intuition. Although they are straightforward to derive, such inequalities are surprisingly powerful. Through a range of applications, we illustrate the utility of controlling the tail of a random variable, typically by allowing one to dismiss certain "bad events" as rare. Two applications in data science are also introduced: sparse recovery and empirical risk minimization.
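As a quick illustration of the kind of tail bound this chapter reviews (a minimal sketch, not taken from the notes themselves): for a Binomial(100, 1/2) variable, both Markov's and Chebyshev's inequalities upper-bound the probability of a large deviation, with Chebyshev's second-moment bound being much tighter here.

```python
import random

# Sketch: empirically checking Markov's and Chebyshev's inequalities
# for X = number of heads in n fair coin flips (Binomial(n, 1/2)).
random.seed(0)
n, trials = 100, 10000
samples = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(trials)]

mean = n * 0.5    # E[X] = 50
var = n * 0.25    # Var[X] = 25
a = 60

# empirical estimate of P(X >= a)
tail = sum(x >= a for x in samples) / trials

markov = mean / a                  # Markov: P(X >= a) <= E[X] / a
chebyshev = var / (a - mean) ** 2  # Chebyshev: P(|X - E[X]| >= t) <= Var[X] / t^2

assert tail <= chebyshev <= markov
```

Both bounds hold, but the empirical tail (about 0.03) is far below even Chebyshev's bound of 0.25, which is the gap that the sharper Chernoff-Cramer method closes.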
Chapter 3: Martingales and potentials (updated: Dec 20, 2023)
Review: stopping times, martingales.
Techniques: concentration for martingales, method of bounded differences, Talagrand's inequality, slicing method, electrical networks.
Examples: Markov chains (hitting times, cover times, recurrence), percolation on trees (critical regime), Erdos-Renyi graphs (chromatic number), preferential attachment graphs (degree sequence), uniform spanning trees (Wilson's method), bandit problems.
Summary: In this chapter we turn to martingales, which play a central role in probability theory. We illustrate their use in a number of applications to the analysis of discrete stochastic processes. After some background on stopping times and a brief review of basic martingale properties and results, we develop two major directions. We show how martingales can be used to derive a substantial generalization of our previous concentration inequalities: from the sums of independent random variables we focused on previously to nonlinear functions with Lipschitz properties. In particular, we give several applications of the method of bounded differences to random graphs. We also discuss bandit problems in machine learning. In the second thread, we give an introduction to potential theory and electrical network theory for Markov chains.
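The method of bounded differences mentioned above can be previewed in its simplest instance (a hypothetical sketch, not from the notes): if flipping any one of n independent bits changes f by at most 1, McDiarmid's inequality gives P(|f - E f| >= t) <= 2 exp(-2 t^2 / n).

```python
import math
import random

# Sketch: f(X_1, ..., X_n) = number of ones among n fair bits changes by at
# most c_i = 1 when any single coordinate flips, so McDiarmid's inequality
# (a consequence of the Azuma-Hoeffding martingale bound) applies.
random.seed(1)
n, trials, t = 100, 20000, 12

# empirical estimate of P(|f - E f| >= t), where E f = n / 2
dev = sum(
    abs(sum(random.getrandbits(1) for _ in range(n)) - n / 2) >= t
    for _ in range(trials)
) / trials

bound = 2 * math.exp(-2 * t ** 2 / n)  # McDiarmid: 2 exp(-2 t^2 / sum c_i^2)
assert dev <= bound
```

Here f is just a sum, so this recovers Hoeffding's inequality; the point of the martingale machinery is that the same bound holds for genuinely nonlinear Lipschitz functions such as the chromatic number of an Erdos-Renyi graph.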
Chapter 4: Coupling (updated: Dec 20, 2023)
Review: coupling inequality, maximal coupling, Markovian coupling.
Techniques: stochastic domination, Strassen's theorem, correlation inequalities (FKG, Holley), path coupling.
Examples: harmonic functions on lattices and trees, Markov chains (mixing time, Glauber dynamics), Erdos-Renyi graph (degree sequence, Janson's inequality), percolation on lattices (RSW theory, Harris' theorem), and Ising model on lattices (extremal measures).
Summary: In this chapter we move on to coupling, another probabilistic technique with a wide range of applications (far beyond discrete stochastic processes). The idea behind the coupling method is deceptively simple: to compare two probability measures, it is sometimes useful to construct a joint probability space with the corresponding marginals. We begin by defining coupling formally and deriving its connection to the total variation distance through the coupling inequality. Then we introduce the concept of stochastic domination and some related correlation inequalities. Coupling of Markov chains is the next topic, where it serves as a powerful tool to derive mixing time bounds. Finally, we end with the Chen-Stein method for Poisson approximation, a technique that applies in particular in some natural settings with dependent variables.
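The coupling inequality can be checked by hand on a three-point space (an illustrative sketch under assumed distributions, not an example from the notes): any coupling (X, Y) of mu and nu satisfies TV(mu, nu) <= P(X != Y), and the maximal coupling, which stacks as much mass as possible on the diagonal, achieves equality.

```python
# Sketch: total variation distance and the maximal coupling on a finite set.
# The distributions mu, nu below are arbitrary choices for illustration.
mu = {0: 0.5, 1: 0.3, 2: 0.2}
nu = {0: 0.2, 1: 0.3, 2: 0.5}

# TV(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)|
tv = 0.5 * sum(abs(mu[x] - nu[x]) for x in mu)

# The maximal coupling puts mass min(mu(x), nu(x)) on the diagonal,
# so P(X = Y) = sum_x min(mu(x), nu(x)) and hence P(X != Y) = TV(mu, nu).
overlap = sum(min(mu[x], nu[x]) for x in mu)
assert abs((1 - overlap) - tv) < 1e-12
```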
Chapter 5: Spectral methods (updated: Dec 20, 2023)
Review: spectral theorem, Courant-Fischer, perturbation, spectral graph theory.
Techniques: spectral gap, canonical paths, bottleneck ratio, expander graphs.
Examples: Markov chains (mixing time, Glauber dynamics, Varopoulos-Carne), expander graphs, community detection.
Summary: In this chapter, we develop spectral techniques. We highlight some applications to Markov chain mixing and network analysis. The main tools are the spectral theorem and the variational characterization of eigenvalues, which we review together with some related results. We also give a brief introduction to spectral graph theory and detail an application to community recovery. Then we apply the spectral theorem to reversible Markov chains. In particular, we define the spectral gap and establish its close relationship to the mixing time. We also show that the spectral gap can be bounded using certain isoperimetric properties of the underlying network.
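The link between the spectral gap and mixing can be seen concretely on the lazy random walk on an n-cycle (a minimal sketch in pure Python, assuming the standard fact that this chain's eigenvalues are (1 + cos(2*pi*k/n))/2): the total variation distance to the uniform stationary distribution decays geometrically at the rate of the second-largest eigenvalue.

```python
import math

# Sketch: lazy simple random walk on the n-cycle (stay w.p. 1/2, step
# left/right w.p. 1/4 each). Its second-largest eigenvalue is
# lambda_2 = (1 + cos(2*pi/n)) / 2, so the spectral gap is 1 - lambda_2.
n = 6
lam2 = (1 + math.cos(2 * math.pi / n)) / 2  # = 0.75 for n = 6

def step(p):
    # one step of the lazy walk applied to a distribution p on {0, ..., n-1}
    return [0.5 * p[i] + 0.25 * p[(i - 1) % n] + 0.25 * p[(i + 1) % n]
            for i in range(n)]

def tv(p):
    # total variation distance to the uniform stationary distribution
    return 0.5 * sum(abs(q - 1 / n) for q in p)

p = [1.0] + [0.0] * (n - 1)  # start at state 0
for _ in range(50):
    p = step(p)

# asymptotically, tv shrinks by a factor lambda_2 per step
ratio = tv(step(p)) / tv(p)
assert abs(ratio - lam2) < 1e-6
```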
Chapter 6: Branching processes (updated: Dec 20, 2023)
Review: Galton-Watson processes, extinction, multitype branching processes.
Techniques: random-walk representation, duality principle, comparison to branching processes.
Examples: percolation on trees (critical exponents) and on Galton-Watson trees, Erdos-Renyi graph (phase transition), random binary search trees, the reconstruction problem.
Summary: Branching processes, which are the focus of this chapter, arise naturally in the study of stochastic processes on trees and locally tree-like graphs. Similarly to martingales, finding a hidden branching process within a probabilistic model can lead to useful bounds and insights into asymptotic behavior. After a review of extinction theory for branching processes and of a useful random-walk perspective, we give a couple of examples of applications in discrete probability. In particular, we analyze the height of a binary search tree, a standard data structure in computer science. We also give an introduction to phylogenetics, where a "multitype" variant of the Galton-Watson branching process plays an important role. We end this chapter with a detailed look into the phase transition of the Erdos-Renyi graph model.
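A core fact from the extinction theory reviewed here is that the extinction probability q of a Galton-Watson process is the smallest fixed point of the offspring generating function f on [0, 1]. The sketch below (an illustration with an assumed Poisson(1.5) offspring law, and truncation parameters chosen arbitrarily) compares the fixed-point iteration q <- f(q) = exp(1.5 (q - 1)) with a direct simulation.

```python
import math
import random

lam = 1.5  # supercritical: mean offspring > 1, so q < 1

# extinction probability: iterate q <- f(q) from 0; this converges
# monotonically to the smallest fixed point of f on [0, 1]
q = 0.0
for _ in range(200):
    q = math.exp(lam * (q - 1))

def poisson(l):
    # Knuth's product method for sampling Poisson(l)
    threshold, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

random.seed(2)

def extinct(max_gen=50):
    # simulate one Galton-Watson tree, truncating once survival is clear
    z = 1
    for _ in range(max_gen):
        z = sum(poisson(lam) for _ in range(z))
        if z == 0:
            return True
        if z > 1000:  # a population this large essentially never dies out
            return False
    return False

est = sum(extinct() for _ in range(2000)) / 2000
assert abs(est - q) < 0.05  # q is roughly 0.417 for lambda = 1.5
```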