Course Description: Error-correcting codes play an important role in many areas of science and engineering. This course is a graduate-level introduction to error-correcting codes, with a focus on the theoretical and algorithmic aspects arising in the context of the "channel coding" problem: we want to transmit data over a noisy communication channel so that the receiver can recover the correct data despite the adverse effects of the channel.
Starting from the basics of coding theory and some of the classic theorems of the subject, the course will discuss more recent progress on code constructions and error-correction algorithms. The list of potential topics includes:
- Shannon coding theorem and noise models (worst-case, stochastic)
- Basic notions and combinatorial bounds of coding theory
- Classical code constructions (algebraic, LDPC, concatenated, etc.)
- Reed-Solomon decoding and applications
- Expander codes and linear time error-correction
- List decoding; error-correction with optimal redundancy
- Iterative decoding methods and belief propagation
- Rateless codes; convolutional codes
- Codes in computational complexity
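As a small taste of the basic notions above, the following sketch encodes 4-bit messages with the [7,4,3] Hamming code and corrects a single bit flip by syndrome decoding. The generator and parity-check matrices are written in one common systematic form (conventions vary across textbooks), and the function names are illustrative:

```python
import numpy as np

# Generator and parity-check matrices for the [7,4,3] Hamming code,
# in systematic form G = [I | P], H = [P^T | I] (one common convention).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword."""
    return (np.array(msg) @ G) % 2

def correct(received):
    """Correct up to one bit flip using the syndrome H r^T."""
    r = np.array(received)
    syndrome = (H @ r) % 2
    if syndrome.any():
        # A single flip at position i produces a syndrome equal to
        # the i-th column of H, which locates the error.
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                r[i] ^= 1
                break
    return r
```

Since the code has minimum distance 3, any single error yields a nonzero syndrome matching exactly one column of H, so decoding is a simple table lookup over the 7 columns.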
The main prerequisites for this class are mathematical maturity, exposure to basic mathematics courses such as COMPSCI 240 and COMPSCI 250, and a solid grounding in linear algebra and probability theory. Students without this background can seek permission of the instructor.
Coding theory has played an important role in theoretical computer science, in topics such as probabilistically checkable proofs, list decoding, and local testability. It also has applications in other areas of computer science, such as networking and security. This course will be useful to students interested in any of these areas and will expose them to a wide range of mathematical tools and techniques.
A group of two students will submit a single homework reflecting the collaborative effort of that group. Assignments must be submitted at the beginning of class on the deadline, by email, or in Moodle. Late submission by a day (e-submission only) will incur a 20% penalty. Submissions will not be accepted after any further delay without a substantial reason (such as a doctor's note).
A group of two students will scribe the notes for one (or two) lectures in TeX. A template for the lecture notes will be provided. Scribed notes must be submitted by email to the instructor within 3 days of the lecture. Late submission by a day will incur a 20% penalty. Any further delay without a doctor's note will incur the full penalty.
Can anybody suggest good coding theory books? I took a cryptography class last semester and studied with the Handbook of Applied Cryptography by Alfred J. Menezes; I found it very useful. Is there a coding theory book like this with many examples?
A book I quite like that's more recent is R.M. Roth's Introduction to Coding Theory -- has a bit of a CS flavor to the approach. For engineering (esp. applications), I think Error correction coding by Moon is good.
It was a nice taste of single-error-correcting codes and provided a strong foundation to build on. In addition, if you are looking for a book with many examples, this textbook has exercises at the end of every chapter and answers to EVERY exercise in the back of the book.
Ron M. Roth received the BSc degree in computer engineering (1980), the MSc in electrical engineering (1984), and the DSc in computer science (1988) from Technion, Haifa, Israel. Since 1988 he has been with the Computer Science Department at Technion, where he now holds the General Yaakov Dori Chair in Engineering. He is an IEEE Fellow, and has held visiting research positions at IBM Research Division, HP Labs, Hewlett-Packard Enterprise, and UC San Diego. Dr. Roth is the author of the book "Introduction to Coding Theory" (published in 2006). He served as an Associate Editor for Coding Theory for the IEEE Transactions on Information Theory, and he is currently serving as an Associate Editor for the SIAM Journal on Discrete Mathematics. His 1993 paper "New Array Codes for Multiple Phased Burst Correction," co-authored with M. Blaum, received the 2021 NVMW Persistent Impact Prize for ECC. His research interests include coding theory, information theory, and their application to storage, computation, and the theory of complexity.
Instructor: Venkatesan Guruswami
Meeting times: Wednesdays 3:00-4:20pm and Fridays 10:30-11:50am in CSE 403
Office hours: After class or by appointment.
Mailing List
- It is important to subscribe to the class mailing list immediately. Everyone is expected to read cse533 e-mail to keep up-to-date on the course.
- To subscribe to the CSE 533 mailing list, click here
Course announcement

Problem Sets
- Problem Set 1, due October 30. Latex source.
- Problem Set 2, due Friday, December 1st. Latex source.
Lecture notes

Here is the style file scribe.sty for preparing lecture notes.
- Lecture 1 - A Hat puzzle; Basic definitions about codes; Rate and Distance; Linear codes; [7,4,3] Hamming code
- Lecture 2 - Parity check matrix, Hamming code family, Volume bound, Dual codes, Hadamard code.
- Lecture 3 - Introduction to Shannon theory; stochastic channels and examples; entropy function; Capacity theorem for BSC.
- Lecture 4 - Proof of Shannon's theorem for BSC, Elias' constructive method using product of Hamming codes
- Lecture 5 - Bounds on codes: Hamming upper bound, Singleton bound, Gilbert-Varshamov bound, Plotkin bound.
- Lecture 6 - Johnson bound, Elias-Bassalygo bound.
- Lecture 7 - Reed-Solomon (RS) codes, Bounds on number of roots of polynomials, Multivariate polynomial and Reed-Muller codes.
- Lecture 8 - Majority logic decoding of Reed-Muller codes, Basics of extension fields, Binary codes from RS codes.
- Lecture 9 - Concatenated codes, Explicit asymptotically good codes meeting Zyablov bound, GV, Zyablov & MRRW bounds, Justesen codes.
- Lecture 10 - Justesen codes proof, Reed-Solomon decoding -- History and Welch-Berlekamp decoder (Gemmell-Sudan description).
- Lecture 11 - Decoding concatenated codes: A naive decoder, and Forney's GMD decoding.
- Lecture 12 - Achieving capacity of BSC with polytime encoding and decoding.
- Lecture 13 - Expander based asymptotically good codes and linear time decoding.
- Lecture 14 - Tanner codes; Linear time decodable codes using spectral expanders; Boosting error-correction using expander based symbol redistribution.
- Related material appears in these notes from the Winter 2003 course at UW.
- Lectures 15 and 16 - Wrap-up of linear time codes with optimal rate; Introduction to list decoding, List decoding capacity, Connection to Johnson bounds, Toy Reed-Solomon decoding problem
- Lecture 17 - Interpolation based list decoding of RS codes, Soft decoding, Using multiplicities in decoding.
- Lecture 18 - Decoding RS codes up to Johnson bound (wrap-up), Beyond RS codes: Folded RS codes, Multivariate interpolation based decoding.
- Lecture 19 - Achieving list decoding capacity using folded RS codes. Topics we didn't cover: AG codes, LDPC decoding, Convolutional/turbo coding.
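As a toy companion to the Reed-Solomon lectures above, the sketch below builds an RS code over the small prime field GF(13): the message is the coefficient vector of a low-degree polynomial, and the codeword is its evaluations at distinct field points. It recovers the message from any k surviving symbols by Lagrange interpolation (erasure decoding only; real decoders such as Welch-Berlekamp handle errors, and practical codes use extension fields). The parameters and function names here are illustrative choices:

```python
P = 13  # prime field size; practical RS codes typically use extension fields like GF(2^8)
K = 3   # message length (polynomial degree < K)
N = 7   # codeword length; evaluation points are 0..N-1 (requires N <= P)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-order coefficients first) at x, mod P (Horner's rule)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def rs_encode(msg):
    """Codeword = evaluations of the message polynomial at 0, 1, ..., N-1."""
    return [poly_eval(msg, x) for x in range(N)]

def poly_mul_linear(p, a):
    """Multiply polynomial p by (x - a), mod P."""
    out = [0] * (len(p) + 1)
    for t, c in enumerate(p):
        out[t] = (out[t] - a * c) % P
        out[t + 1] = (out[t + 1] + c) % P
    return out

def rs_recover(points):
    """Recover the K message coefficients from any K (x, y) pairs
    by Lagrange interpolation -- i.e., decoding from erasures."""
    msg = [0] * K
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj)   # multiply in (x - xj)
                denom = (denom * (xi - xj)) % P
        scale = (yi * pow(denom, -1, P)) % P  # modular inverse (Python 3.8+)
        for t in range(K):
            msg[t] = (msg[t] + scale * basis[t]) % P
    return msg
```

Since any K of the N symbols determine the degree-&lt;K polynomial, the code tolerates N - K erasures, matching the Singleton bound (minimum distance N - K + 1) from Lecture 5.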
Reading links
- Error-free coding, a paper by Peter Elias that achieves positive rate for the BSC with small but positive crossover probability.
Reference material

We will not follow any one particular textbook. The closest resource is the excellent set of lecture notes for Madhu Sudan's coding theory course at MIT: notes from 2001 and 2004.
The basic material on codes we discuss in the initial lectures can be found in one of many textbooks (some of which are listed below), but the recent algorithmic developments are probably not adequately covered in any of these:
- Introduction to Coding Theory, by J. H. van Lint, GTM 86. One of my favorites due to its compactness, and of course GTM yellow cover.
- Introduction to Coding Theory, by Ron M. Roth, 2006 (very recent book!). Includes discussion of some recent algorithmic progress (such as list decoding, decoding expander codes).
- Algebraic Codes for Data Transmission, by Richard E. Blahut.
Though we won't cover much information theory in this course, if your curiosity is aroused about aspects such as entropy, mutual information, capacity theorems, source coding, etc., there is the classic information theory text:
- Elements of Information Theory, Thomas M. Cover and Joy A. Thomas, Wiley Series in Telecommunications, 1991.
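For orientation, the quantity at the heart of that text, and of the capacity theorem for the BSC covered in Lecture 3, is the binary entropy function. For a binary symmetric channel with crossover probability p, Shannon's theorem gives (standard formulas, recorded here for reference):

```latex
H(p) = -p \log_2 p - (1-p) \log_2 (1-p), \qquad
C_{\mathrm{BSC}(p)} = 1 - H(p).
```

So, for example, at p = 1/2 the capacity is zero (the output is independent of the input), while as p tends to 0 or 1 the capacity approaches one bit per channel use.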
Recommended preparation

The principal prerequisite for this class is some "mathematical maturity": more specifically, comfort with the basics of linear algebra (vector spaces, bases, dual spaces); finite fields, field extensions, and polynomials over finite fields; elementary probability; and analysis of algorithms.
If you are not comfortable with your algebra background, you can read the algebra material in any of the above three coding texts, your favorite algebra text, or these notes due to Madhu Sudan. You can also do this "as the need arises" during the lectures, since we won't get to much algebraic material until a few lectures into the class.

Class Assignments and Grading

Coursework will include preparing LaTeX scribe notes for one or two lectures and doing a couple of problem sets. Posting scribe notes promptly is important to complement the lectures, so students are urged to cooperate in this regard. The "grade" on scribe notes will be a function of both the quality of the notes and the timeliness with which they are handed in.
Final grades will be based on the scribe notes, class participation, and performance on the problem sets. There will be no exams.
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.