Introduction To Coding And Information Theory Steven Roman Apr 2026
Data is fragile. A scratch on a CD, a crackle on a radio wave, or cosmic radiation hitting a memory chip corrupts bits. A '0' flips to a '1'. How do you know? How do you fix it?
This is not a tutorial on Python. This is an exploration of the mathematical bones of the digital age. Before Claude Shannon, the father of information theory, information was a philosophical or semantic concept. Shannon did something radical: he stripped meaning away entirely.
What remained was surprise. If I tell you something you already know (e.g., "The sun will rise tomorrow"), I have transmitted very little information. If I tell you something shocking (e.g., "The sun did not rise today"), I have transmitted a massive amount of information.

Mathematically, the information content ( h(x) ) of an event ( x ) with probability ( p ) is:

( h(x) = \log_2(1/p) = -\log_2 p )

Why the logarithm? Because information is additive. If you flip two coins, the total surprise is the sum of the individual surprises. The logarithm turns multiplication of probabilities into addition of information.
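As a concrete sketch in Python (the function name surprisal is my own label for this quantity, not notation taken from Roman's text):

    import math

    def surprisal(p: float) -> float:
        """Information content, in bits, of an event with probability p."""
        return -math.log2(p)

    print(surprisal(0.5))          # fair coin flip: exactly 1 bit
    print(surprisal(0.999999))     # near-certain event: ~0.0000014 bits
    print(surprisal(0.5 * 0.5))    # two independent flips: 1 + 1 = 2 bits

Note the last line: the probability of two independent flips multiplies (0.5 * 0.5 = 0.25), but the surprisal adds (1 + 1 = 2 bits), which is exactly the additivity the logarithm buys.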
The most famous equation in information theory is entropy ( H ):

( H = -\sum_i p_i \log_2 p_i )

Entropy is the average amount of information produced by a source. It is also the minimum number of bits required, on average, to encode the source without losing any information.
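A minimal sketch of the same formula in code (the helper name entropy is mine):

    import math

    def entropy(probs: list[float]) -> float:
        """Shannon entropy: the average surprisal, in bits per symbol."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit per flip
    print(entropy([0.9, 0.1]))    # biased coin: ~0.47 bits per flip
    print(entropy([1.0]))         # total certainty: 0.0 bits

The biased coin is the interesting case: each flip carries less than half a bit on average, so a good code could pack more than two flips into every transmitted bit.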
Think of entropy as the "randomness temperature." High entropy (like white noise or scrambled text) means high information density. Low entropy (like a repeating loop of silence or a predictable string of zeroes) means you can compress it down to almost nothing.
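You can watch this effect with any off-the-shelf compressor. A quick sketch using Python's built-in zlib and os modules (exact byte counts will vary slightly by zlib version):

    import os
    import zlib

    predictable = b"0" * 10_000    # low entropy: one symbol, repeated
    noise = os.urandom(10_000)     # high entropy: incompressible bytes

    print(len(zlib.compress(predictable)))   # a few dozen bytes
    print(len(zlib.compress(noise)))          # ~10,000 bytes: nothing to squeeze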
Coding Theory: The Art of Reliable Imperfection

If information theory is about efficiency, coding theory is about survival.
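As a preview of what survival means in practice, here is the simplest error-correcting code, the triple-repetition code: send every bit three times and take a majority vote at the receiver. (The sketch is mine; practical codes such as Hamming or Reed-Solomon codes buy the same protection far more cheaply.)

    def encode(bits: list[int]) -> list[int]:
        """Repeat each bit three times: [1, 0] -> [1, 1, 1, 0, 0, 0]."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(received: list[int]) -> list[int]:
        """Majority vote over each group of three received bits."""
        return [int(sum(received[i:i + 3]) >= 2)
                for i in range(0, len(received), 3)]

    sent = encode([1, 0, 1])
    sent[4] ^= 1                  # a cosmic ray flips one bit in transit
    print(decode(sent))           # [1, 0, 1] -- the single error is corrected

The cost is brutal (three bits sent for every bit of payload), which is why the rest of coding theory is a search for codes that protect more while wasting less.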