Essentials of Error-Control Coding

Jorge Castiñeira Moreira
Patrick Guy Farrell


The subject of this book is the detection and correction of errors in digital information. Such errors almost inevitably occur during the transmission, storage or processing of information in digital (mainly binary) form, because of noise and interference in communication channels, or imperfections in storage media, for example. Protecting digital information with a suitable error-control code enables the efficient detection and correction of any errors that may have occurred.

Error-control codes are now used in almost the entire range of information communication, storage and processing systems. Rapid advances in electronic and optical devices and systems have enabled the implementation of very powerful codes with close to optimum error-control performance. In addition, new types of code, and new decoding methods, have recently been developed and are starting to be applied. However, error-control coding is a complex, novel and unfamiliar subject, not yet widely understood and appreciated. This book sets out to provide a clear description of the essentials of the topic, with comprehensive and up-to-date coverage of the most useful codes and their decoding algorithms. The book has a practical engineering and information technology emphasis, but includes relevant background material and fundamental theoretical aspects. Several system applications of error-control codes are described, and there are many worked examples and problems for the reader to solve.

The book is an advanced text aimed at postgraduate and third/final year undergraduate students of courses on telecommunications engineering, communication networks, electronic engineering, computer science, information systems and technology, digital signal processing, and applied mathematics, and at engineers and researchers working in any of these areas. The book is designed to be virtually self-contained for a reader with any of these backgrounds. Enough information and signal theory, and coding mathematics, is included to enable a full understanding of any of the error-control topics described in the book.

Chapter 1 provides an introduction to information theory and how it relates to error-control coding. The theory defines what we mean by information, determines limits on the capacity of an information channel, and tells us how efficient a code is at detecting and correcting errors. Chapter 2 describes the basic concepts of error detection and correction, in the context of the parameters, encoding and decoding of some simple binary block error-control codes. Block codes were the first type of error-control code to be discovered, in the decade from about 1940 to 1950. The two basic ways in which error coding is applied to an information system are also described: forward error correction and retransmission error control. A particularly useful kind of block code, the cyclic code, is introduced in Chapter 3, together with an example of a practical application, the cyclic redundancy check (CRC) code for the Ethernet standard. In Chapters 4 and 5 two very effective and widely used classes of cyclic code are described, the Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, named after their inventors. BCH codes can be binary or non-binary, but the RS codes are non-binary and are particularly effective in a large number of error-control scenarios. One of the best known of these, also described in Chapter 5, is the application of RS codes to error correction in the compact disc (CD).
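As a small illustration of the CRC idea, the sketch below uses Python's standard `zlib.crc32`, which implements the same CRC-32 generator polynomial specified for the Ethernet frame check sequence; the payload bytes are, of course, just an example.

```python
import zlib

# Ethernet frames end with a 32-bit CRC (the frame check sequence);
# Python's zlib.crc32 uses the same CRC-32 generator polynomial, so it
# serves to illustrate the detection step. The payload is just an example.
frame = b"example payload"
fcs = zlib.crc32(frame)              # checksum appended by the sender

# The receiver recomputes the CRC over what it received and compares.
assert zlib.crc32(frame) == fcs                  # intact frame passes
assert zlib.crc32(b"exbmple payload") != fcs     # corrupted byte is caught
print(f"FCS = {fcs:#010x}")
```

A single corrupted byte is a burst of at most 8 bits, well within the burst-detection capability of a 32-bit CRC, so the mismatch above is guaranteed.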

Not long after the discovery of block codes, a second type of error-control code emerged, initially called recurrent and later convolutional codes. Encoding and decoding even a quite powerful convolutional code involves rather simple, repetitive, quasi-continuous processes, applied on a very convenient trellis representation of the code, instead of the more complex block processing that seems to be required in the case of a powerful block code. This makes it relatively easy to use maximum likelihood (soft-decision) decoding with convolutional codes, in the form of the optimum Viterbi algorithm (VA). Convolutional codes, their trellis and state diagrams, soft-decision detection, the Viterbi decoding algorithm, and practical punctured and rate-compatible coding schemes are all presented in Chapter 6. Disappointingly, however, even very powerful convolutional codes were found to be incapable of achieving performances close to the limits first published by Shannon, the father of information theory, in 1948. This was still true even when very powerful combinations of block and convolutional codes, called concatenated codes, were devised. The breakthrough, by Berrou, Glavieux and Thitimajshima in 1993, was to use a special kind of interleaved concatenation, in conjunction with iterative soft-decision decoding. All aspects of these very effective coding schemes, called turbo codes because of the supercharging effect of the iterative decoding algorithm, are fully described in Chapter 7.
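The trellis search performed by the Viterbi algorithm can be sketched in a few lines. The toy decoder below handles the standard rate-1/2, constraint-length-3 code with octal generators (7, 5); the function names, the hard-decision Hamming metric and the truncated (non-terminated) trellis are our illustrative choices, not a definitive implementation.

```python
# A toy hard-decision Viterbi decoder for the standard rate-1/2,
# constraint-length-3 convolutional code with octal generators (7, 5).
# Illustrative sketch only: hard decisions, truncated trellis.

def conv_encode(bits):
    """Encode with g1 = 1 + D + D^2 (7) and g2 = 1 + D^2 (5)."""
    out, s1, s2 = [], 0, 0
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]   # two code bits per message bit
        s1, s2 = b, s1                 # shift-register update
    return out

def viterbi_decode(received, n_bits):
    """Keep, per trellis state, the survivor path whose re-encoded output
    disagrees with the received bits in the fewest places (Hamming metric)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metric, new_path = {}, {}
        for s in states:               # next state s = (input bit, old s1)
            b = s[0]
            cands = []
            for s2 in (0, 1):          # two possible predecessor states
                prev = (s[1], s2)
                expected = [b ^ prev[0] ^ prev[1], b ^ prev[1]]
                d = sum(x != y for x, y in zip(r, expected))
                cands.append((metric[prev] + d, prev))
            m, best_prev = min(cands)  # add-compare-select step
            new_metric[s] = m
            new_path[s] = path[best_prev] + [b]
        metric, path = new_metric, new_path
    return path[min(metric, key=metric.get)]  # best surviving path

msg = [1, 0, 1, 1]
noisy = conv_encode(msg)
noisy[2] ^= 1                           # introduce a single channel error
print(viterbi_decode(noisy, len(msg)))  # recovers [1, 0, 1, 1]
```

Replacing the Hamming metric with a Euclidean distance on the unquantised channel outputs turns this into the soft-decision form discussed in Chapter 6.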

The final chapter returns to the topic of block codes, in the form of low-density parity-check (LDPC) codes. Block codes had been found to have trellis representations, so that they could be soft-decision decoded with performances almost as good as those of convolutional codes. Also, they could be used in effective turbo coding schemes. Complexity remained a problem, however, until it was quite recently realised that a particularly simple class of codes, the LDPC codes discovered by Gallager in 1962, were capable of delivering performances as good as or better than those of turbo codes when decoded by an appropriate iterative algorithm. All aspects of the construction, encoding, decoding and performance of LDPC codes are fully described in Chapter 8, together with various forms of LDPC codes which are particularly effective for use in communication networks.
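The flavour of Gallager's iterative decoding can be conveyed by its simplest hard-decision variant, bit flipping. The sketch below uses the 3 x 7 parity-check matrix of the small Hamming code purely to keep the example readable; a genuine LDPC matrix is far larger and sparse, and the algorithm is only well behaved in that regime.

```python
# A toy version of Gallager's hard-decision bit-flipping decoder. The
# 3 x 7 matrix H below is the parity-check matrix of the small Hamming
# code, chosen only for readability; real LDPC matrices are large and
# sparse, which is what makes iterative decoding effective.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, H, max_iters=20):
    word = list(word)
    for _ in range(max_iters):
        # syndrome: which parity checks the current word violates
        syndrome = [sum(h * x for h, x in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            return word                        # every check satisfied
        # per bit, count the violated checks it participates in
        votes = [sum(s * row[j] for s, row in zip(syndrome, H))
                 for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1     # flip the most-suspect bit
    return word                                # give up after max_iters

received = [0, 0, 0, 1, 0, 0, 0]    # all-zero codeword with bit 3 flipped
print(bit_flip_decode(received, H))  # recovers [0, 0, 0, 0, 0, 0, 0]
```

The soft-decision sum-product (belief propagation) algorithm described in Chapter 8 replaces these hard flips with probability messages exchanged between bits and checks, which is what delivers near-capacity performance.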

Appendix A shows how to calculate the error probability of digital signals transmitted over additive white Gaussian noise (AWGN) channels, and Appendix B introduces various topics in discrete mathematics. These are followed by a list of the answers to the problems located at the end of each chapter. Detailed solutions will be available on this website.
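As a taste of the Appendix A material, the well-known bit-error probability of uncoded BPSK on the AWGN channel, Pb = Q(sqrt(2 Eb/N0)) with Q(x) = (1/2) erfc(x / sqrt(2)), can be evaluated in a few lines; the Eb/N0 points chosen below are arbitrary.

```python
import math

# Bit-error probability of uncoded BPSK on the AWGN channel:
# Pb = Q(sqrt(2 * Eb/N0)), where Q(x) = 0.5 * erfc(x / sqrt(2)).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)      # convert dB to a linear ratio
    return Q(math.sqrt(2 * ebn0))

for ebn0_db in (0, 4, 8):            # illustrative Eb/N0 points
    print(f"Eb/N0 = {ebn0_db} dB: Pb = {bpsk_ber(ebn0_db):.2e}")
```

Comparing such uncoded curves with the coded performance figures in the later chapters is the standard way of expressing the coding gain of a scheme.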

This website will also contain additional material, which will be regularly updated in response to comments and questions from readers.


We are very grateful for all the help, support and encouragement we have had during the writing of this book, from our colleagues past and present, from many generations of research assistants and students, from the reviewers, and from our families and friends. We would particularly like to thank: Damian Levin and Leonardo Arnone for their contributions to Chapters 7 and 8, respectively; Mario Blaum, Rolando Carrasco, Evan Ciner, Bahram Honary, Garik Markarian, and Robert McEliece for stimulating discussions and very welcome support; and Sarah Hinton at John Wiley & Sons, who patiently waited for her initial suggestion to bear fruit.