Prof.dr.ir. Henk C.A. van Tilborg is a retired professor in the Department of Mathematics and Computer Science at the Technische Universiteit Eindhoven. He became a Fellow of the IEEE Information Theory Society in 2000. His main research interests are coding theory, cryptology and discrete mathematics.
Can anybody suggest good coding theory books to me? I took a cryptography class last semester and studied from the Handbook of Applied Cryptography by Alfred J. Menezes, which I found very useful. Is there a coding theory book like it, with many examples?
A more recent book I quite like is R.M. Roth's Introduction to Coding Theory -- it has a bit of a CS flavor in its approach. For engineering (especially applications), I think Error Correction Coding by Moon is good.
It was a nice taste of single-error-correcting codes and provided a strong foundation on which to build. In addition, if you are looking for a book with many examples, this textbook has exercises at the end of every chapter and answers to EVERY exercise in the back of the book.
Lecture notes of the course "An Algorithmic Introduction to Coding Theory," by Madhu Sudan. Publication date: 2001. The first chapter offers useful comments regarding several textbooks on coding theory. Available at
Course description: Error-correcting codes play an important role in many areas of science and engineering. This course is a graduate-level introduction to error-correcting codes, with a focus on the theoretical and algorithmic aspects arising in the context of the "channel coding" problem: we want to transmit data over a noisy communication channel so that the receiver can recover the correct data despite the adverse effects of the channel.
Starting from the basics of coding theory and some of the classic theorems of the subject, the course will discuss more recent progress on code constructions and error-correction algorithms. The list of potential topics includes:

- Shannon's coding theorem and noise models (worst-case, stochastic)
- Basic notions and combinatorial bounds of coding theory
- Classical code constructions (algebraic, LDPC, concatenated, etc.)
- Reed-Solomon decoding and applications
- Expander codes and linear-time error-correction
- List decoding; error-correction with optimal redundancy
- Iterative decoding methods and belief propagation
- Rateless codes; convolutional codes
- Codes in computational complexity
In recent weeks people all over the world have been fascinated by the pictures and scientific data being relayed from Mars by NASA's Pathfinder mission. For decades space probes have been sending back similar data from the farthest planets. Yet the power of the radio transmitters on these craft is only a few watts, comparable to the strength of a dim electric light bulb. How can this information be reliably transmitted across hundreds of millions of miles without being completely swamped by noise?
We assume that our message is in the form of binary digits, or bits: strings of 0s and 1s. We have to transmit these bits along a channel (such as a telephone line) in which errors occur randomly, but at a predictable overall rate. To compensate for the errors we need to transmit more bits than there are in the original message.
The simplest method for detecting errors in binary data is the parity code, which transmits an extra "parity" bit after every 7 bits of the source message, chosen so that each 8-bit block contains an even number of 1s; a single flipped bit then leaves an odd count. However, this method can only detect errors; the only way to correct them is to ask for the data to be transmitted again!
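A minimal sketch of the parity scheme just described, in Python. The helper names are hypothetical and even parity is assumed:

```python
def add_parity(block7):
    """Append a parity bit so the 8-bit block has an even number of 1s."""
    assert len(block7) == 7
    return block7 + [sum(block7) % 2]

def check_parity(block8):
    """Return True if the block's parity is consistent (no single-bit error seen)."""
    return sum(block8) % 2 == 0

sent = add_parity([1, 0, 1, 1, 0, 0, 1])
print(check_parity(sent))            # True: no error introduced

corrupted = sent.copy()
corrupted[2] ^= 1                    # flip one bit in transit
print(check_parity(corrupted))       # False: error detected, but not locatable
```

Note that the check only says *whether* an error occurred, not *where* -- exactly the limitation the text points out.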
A simple way to correct as well as detect errors is to repeat each bit a set number of times. The recipient sees which value, 0 or 1, occurs more often in each group and assumes that was the intended bit. By increasing the amount of repetition, the scheme can tolerate error rates approaching 1 error in every 2 bits transmitted.
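The repeat-and-majority-vote scheme can be sketched as follows (hypothetical function names; three repetitions per bit are assumed):

```python
from collections import Counter

def encode_repetition(bits, n=3):
    """Repeat each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Majority vote within each group of n received bits."""
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        out.append(Counter(block).most_common(1)[0][0])
    return out

msg = [1, 0, 1, 1]
tx = encode_repetition(msg)          # 12 bits on the wire for 4 data bits
tx[4] ^= 1                           # one error inside the second group
print(decode_repetition(tx) == msg)  # True: majority vote recovers the message
```

With n = 3, any single error per group is outvoted; the price is tripling the transmission, which is the inefficiency the next paragraph addresses.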
The disadvantage of the repetition scheme is that it multiplies the number of bits transmitted by a factor which may prove unacceptably high. In 1948, Claude Shannon, working at Bell Laboratories in the USA, inaugurated the whole subject of coding theory by showing that it was possible to encode messages in such a way that the number of extra bits transmitted was as small as possible. Unfortunately his proof did not give any explicit recipes for these optimal codes.
It was two years later that Richard Hamming, also at Bell Labs, began studying explicit error-correcting codes with information transmission rates more efficient than simple repetition. His first attempt produced a code in which four data bits were followed by three check bits which allowed not only the detection but the correction of a single error. (The repetition code would require eight check bits to achieve this.)
It is said that Hamming invented his code after several attempts to punch out a message on paper tape using the parity code. "If it can detect the error," he complained, "why can't it correct it?"
While Shannon and Hamming were working on information transmission in the States, John Leech invented similar codes while working on Group Theory at Cambridge. This research included work on the sphere packing problem (see this issue's mathematical mystery) and culminated in the remarkable, 24-dimensional Leech lattice, the study of which was a key element in the programme to understand and classify finite symmetry groups.
The value of error-correcting codes for information transmission, both on Earth and from space, was immediately apparent, and a wide variety of codes were constructed which achieved both economy of transmission and error-correction capacity. Between 1969 and 1973 the NASA Mariner probes used a powerful Reed-Muller code capable of correcting 7 errors out of 32 bits transmitted, with each block now consisting of 6 data bits and 26 check bits! Over 16,000 bits per second were relayed back to Earth.
A less obvious application of error-correcting codes came with the development of the compact disc. On CDs the signal is encoded digitally. To guard against scratches, cracks and similar damage, two "interleaved" codes, which can correct up to 4,000 consecutive errors (about 2.5 mm of track), are used. (Audio disc players can recover from even more damage by interpolating the signal.)
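The idea behind interleaving can be shown in a few lines: codewords are written as rows but transmitted column by column, so a burst of consecutive errors on the disc is spread out as at most one error per codeword. This is a toy illustration of the principle, not the actual cross-interleaved Reed-Solomon scheme used on CDs:

```python
def interleave(blocks):
    """Write codewords as rows, transmit column by column."""
    return [blocks[r][c] for c in range(len(blocks[0])) for r in range(len(blocks))]

def deinterleave(stream, nrows):
    """Invert interleave(), recovering the original rows."""
    ncols = len(stream) // nrows
    return [[stream[c * nrows + r] for c in range(ncols)] for r in range(nrows)]

codewords = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 1, 0]]
tx = interleave(codewords)
tx[3:6] = [b ^ 1 for b in tx[3:6]]   # a burst of 3 consecutive errors
rx = deinterleave(tx, 3)
# After de-interleaving, the burst lands as exactly one error per codeword,
# which a single-error-correcting code applied to each row could repair.
```

In the real CD format the same principle lets two Reed-Solomon codes handle the long scratch-shaped bursts described above.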
In the past two years the goal of finding explicit codes which reach the limits predicted by Shannon's original work has been achieved. The constructions require techniques from a surprisingly wide range of pure mathematics: linear algebra, the theory of fields and algebraic geometry all play a vital role. Not only has coding theory helped to solve problems of vital importance in the world outside mathematics, it has enriched other branches of mathematics, with new problems as well as new solutions.
To prepare for the exam:

- go through your class notes
- do some old exams for practice without a computer
- know your tools: you will likely need to compute modular inverses or so, so practice doing the extended Euclidean algorithm; for small fields get used to inverting 'by hand'; another important tool is the Chinese Remainder Theorem
- get used to your pocket calculator

The lectures take place Tuesdays 10:45 - 12:30 (Matrix 1.41) and Thursdays 08:45 - 10:30 (HG 6.09). There are no lessons on September 20 and 22. At TU/e the semester is interrupted by an exam break at the end of October; since this course does not have intermediate exams there won't be any program during this time. Note that there will be classes on October 27 at the regular time and location; there won't be lessons on the 25th.
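As a small practice aid for the Chinese Remainder Theorem mentioned in the tips above, here is one standard way to combine two congruences (my own sketch, not from the course; it uses Python's built-in modular inverse, `pow(m1, -1, m2)`, available since Python 3.8):

```python
def crt(r1, m1, r2, m2):
    """Solve x = r1 (mod m1), x = r2 (mod m2) for coprime moduli m1, m2."""
    # Lift r1 by a multiple of m1 chosen so the second congruence also holds.
    k = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * k) % (m1 * m2)

print(crt(2, 3, 3, 5))   # 8: the unique solution modulo 15
```

On the exam this step is exactly what the extended Euclidean algorithm delivers by hand: the inverse of m1 modulo m2.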
Make sure to register for this course; in the new system (OASE) you shouldn't wait until registering for the exam becomes necessary. On the bright side, this makes it possible for me to send email to all of you - that is, to those of you who are registered.
There will be one exam on January 27 and one on April 13.
06 Sep 2011
General introduction to cryptography; the concepts of public-key and symmetric-key cryptography. We took pictures of all blackboards before erasing them. They are here.
If you want to try your skills on some interactive Caesar cipher challenges, visit and select MysteryTwister I from the task bar. There are also lots of other nice challenges, which even come with a hall of fame (but are less interactive).
Here are the slides in PDF for the perfect code system. The homework is to figure out how and why this worked, and to break the system.
08 Sep 2011
Discussion of why the perfect-code system works and how to break it. This system was proposed as an educational tool by Fellows and Koblitz. Attention: this was never proposed for cryptography, but only as a teaching tool.
You can read more about it and a few other systems in their Cryptologia article.
Quick recap of algebra: groups, order, cyclic groups. Diffie-Hellman key exchange in a general cyclic group.
We took pictures of all blackboards before erasing them. They are here.
13 Sep 2011
Cyclic groups, lots of examples, Diffie-Hellman (DH) key exchange, the discrete logarithm problem, several examples of totally insecure groups for DH, one more secure group. Lagrange's theorem. Extended Euclidean algorithm.
We took pictures of all blackboards before erasing them. They are here.
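The DH exchange from the lecture can be sketched in a few lines. The prime, generator and secret exponents below are my own toy values, far too small for real use; they only illustrate the mechanics:

```python
# Toy Diffie-Hellman in the multiplicative group modulo a small prime.
p = 1019            # a small prime (insecure; for illustration only)
g = 2               # group element used as the public base
a = 337             # Alice's secret exponent
b = 584             # Bob's secret exponent

A = pow(g, a, p)    # Alice publishes A = g^a mod p
B = pow(g, b, p)    # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)         # Alice computes (g^b)^a
shared_bob = pow(A, b, p)           # Bob computes (g^a)^b
print(shared_alice == shared_bob)   # True: both hold g^(a*b) mod p
```

An eavesdropper sees only p, g, A and B; recovering the shared key is the discrete logarithm problem, which is why the choice of group (one of the lecture's themes) matters so much.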
If you're not familiar with the extended Euclidean algorithm and efficient ways to compute it, now is a good moment to start training!
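For reference, here is the iterative form of the extended Euclidean algorithm (my own sketch; the function name is hypothetical), which is also how you compute the modular inverses needed throughout the course:

```python
def extended_euclid(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r                       # one division step of Euclid
        old_r, r = r, old_r - q * r          # remainders
        old_x, x = x, old_x - q * x          # Bezout coefficient for a
        old_y, y = y, old_y - q * y          # Bezout coefficient for b
    return old_r, old_x, old_y

g, x, y = extended_euclid(240, 46)
print(g, x, y)           # 2 -9 47, since 240*(-9) + 46*47 == 2

# Modular inverse of 46 mod 81 (gcd is 1, so the inverse exists):
g, x, _ = extended_euclid(46, 81)
print(x % 81)            # 37, because 46*37 = 1 (mod 81)
```

Running the same loop on paper, one quotient per row, is good training for doing it 'by hand' on the exam.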
15 Nov 2011
Attacks on the discrete logarithm problem: Pohlig-Hellman, baby-step giant-step (BSGS), Pollard rho, parallel Pollard rho, and index calculus attacks. Please read section 8.3 in Henk van Tilborg's book and test your understanding by solving Problem 8.8. The slides I used for Pollard rho are available here.
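Of the attacks listed, baby-step giant-step is the easiest to sketch. The following is a minimal illustration on a toy prime of my own choosing (not an example from the book or the slides):

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), p prime."""
    m = isqrt(p - 1) + 1                              # ceil(sqrt(group order))
    baby = {pow(g, j, p): j for j in range(m)}        # baby steps: g^j
    giant = pow(g, -m, p)                             # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:                             # h * g^(-i*m) == g^j
            return i * m + baby[gamma]                # so x = i*m + j
        gamma = gamma * giant % p
    return None

p, g = 1019, 2
h = pow(g, 777, p)
x = bsgs(g, h, p)
print(pow(g, x, p) == h)   # True
```

The point of the exercise: the attack needs only about sqrt(p) time and memory instead of p, which is why group sizes must be chosen with the square-root attacks in mind.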
For keysize recommendations see
Pictures are here.