Email (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on www.wileyeurope.com or www.wiley.com
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except
under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the
Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in
writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John
Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to
permreq@wiley.co.uk, or faxed to (+44) 1243 770620.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names
and product names used in this book are trade names, service marks, trademarks or registered trademarks of
their respective owners. The Publisher is not associated with any product or vendor mentioned in this book. All
trademarks referred to in the text of this publication are the property of their respective owners.
This publication is designed to provide accurate and authoritative information in regard to the subject matter
covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If
professional advice or other expert assistance is required, the services of a competent professional should be
sought.
Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, Ontario, L5R 4J3, Canada
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books.
Anniversary Logo Design: Richard J. Pacifico
Library of Congress Cataloging-in-Publication Data
Neubauer, André.
Coding theory : algorithms, architectures and applications / André Neubauer, Jürgen Freudenberger, Volker Kühn.
p. cm.
ISBN 978-0-470-02861-2 (cloth)
1. Coding theory. I. Freudenberger, Jürgen. II. Kühn, Volker. III. Title.
QA268.N48 2007
003′.54–dc22
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 978-0-470-02861-2 (HB)
Typeset in 10/12pt Times by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry
in which at least two trees are planted for each one used for paper production.
Preface

Modern information and communication systems are based on the reliable and efficient
transmission of information. Channels encountered in practical applications are usually
disturbed regardless of whether they correspond to information transmission over noisy
and time-variant mobile radio channels or to information transmission on optical discs that
might be damaged by scratches. Owing to these disturbances, appropriate channel coding
schemes have to be employed such that errors within the transmitted information can be
detected or even corrected. To this end, channel coding theory provides suitable coding
schemes for error detection and error correction. Besides good code characteristics with
respect to the number of errors that can be detected or corrected, the complexity of the
architectures used for implementing the encoding and decoding algorithms is important for
practical applications.
The present book provides a concise overview of channel coding theory and practice
as well as the accompanying algorithms, architectures and applications. The selection of
the topics presented in this book is oriented towards those subjects that are relevant for
information and communication systems in use today or in the near future. The focus
is on those aspects of coding theory that are important for the understanding of these
systems. This book places emphasis on the algorithms for encoding and decoding and their
architectures, as well as the applications of the corresponding coding schemes in a unified
framework.
The idea for this book originated from a two-day seminar on coding theory in the
industrial context. We have tried to keep this seminar style in the book by highlighting the
most important facts within the figures and by restricting the scope to the most important
topics with respect to the applications of coding theory, especially within communication
systems. This also means that many important and interesting topics could not be covered
in order to be as concise as possible.
The target audience for the book comprises students of communication and information engineering as well as computer science at universities, and also applied mathematicians who are interested in a presentation that subsumes the theory and practice of coding theory without sacrificing exactness or relevance with regard to real-world applications. Therefore, this book is also well suited for engineers in industry who want to know about the theoretical basics of coding theory and their application in currently relevant communication systems.
The book is organised as follows. In Chapter 1 a brief overview of the principle
architecture of a communication system is given and the information theory fundamentals
underlying coding theory are summarised. The most important concepts of information theory, such as entropy and channel capacity as well as simple channel models, are described.
Chapter 2 presents the classical, i.e. algebraic, coding theory. The fundamentals of the
encoding and decoding of block codes are explained, and the maximum likelihood decoding
rule is derived as the optimum decoding strategy for minimising the word error probability
after decoding a received word. Linear block codes and their definition based on generator
and parity-check matrices are discussed. General performance measures and bounds relating
important code characteristics such as the minimum Hamming distance and the code rate
are presented, illustrating the compromises necessary between error detection and error
correction capabilities and transmission efficiency. It is explained how new codes can be
constructed from already known codes. Repetition codes, parity-check codes, Hamming codes, simplex codes and Reed-Muller codes are presented as examples. Since the task
of decoding linear block codes is difficult in general, the algebraic properties of cyclic
codes are exploited for efficient decoding algorithms. These cyclic codes, together with
their generator and parity-check polynomials, are discussed, as well as efficient encoding
and decoding architectures based on linear feedback shift registers. Important cyclic codes
such as BCH codes and Reed-Solomon codes are presented, and an efficient algebraic
decoding algorithm for the decoding of these cyclic codes is derived.
Chapter 3 deals with the fundamentals of convolutional coding. Convolutional codes
can be found in many applications, for instance in dial-up modems, satellite communications
and digital cellular systems. The major reason for this popularity is the existence of efficient
decoding algorithms that can utilise soft input values from the demodulator. This so-called
soft-input decoding leads to significant performance gains. Two famous examples of soft-input decoding algorithms are the Viterbi algorithm and the Bahl, Cocke, Jelinek, Raviv (BCJR) algorithm, which also provides a reliability output. Both algorithms are based on
the trellis representation of the convolutional code. This highly repetitive structure makes
trellis-based decoding very suitable for hardware implementations.
We start our discussion with the encoding of convolutional codes and some of their
basic properties. This is followed by a presentation of the Viterbi algorithm and an analysis of the error correction performance achieved with this maximum likelihood decoding procedure. The
concept of soft-output decoding and the BCJR algorithm are considered in Section 3.5. Soft-output decoding is a prerequisite for the iterative decoding of concatenated convolutional
codes as introduced in Chapter 4. Finally, we consider an application of convolutional
codes for mobile communication channels as defined in the Global System for Mobile
communications (GSM) standard. In particular, the considered hybrid ARQ protocols are
excellent examples of the adaptive coding systems that are required for strongly time-variant
mobile channels.
As mentioned above, Chapter 4 is dedicated to the construction of long powerful codes
based on the concatenation of simple convolutional component codes. These concatenated
convolutional codes, for example the famous turbo codes, are capable of achieving low
bit error rates at signal-to-noise ratios close to the theoretical Shannon limit. The term
turbo reflects a property of the employed iterative decoding algorithm, where the decoder
output of one iteration is used as the decoder input of the next iteration. This concept
of iterative decoding was first introduced for the class of low-density parity-check codes.
Therefore, we first introduce low-density parity-check codes in Section 4.1 and discuss
the relation between these codes and concatenated code constructions. Then, we introduce
some popular encoding schemes for concatenated convolutional codes and present three
methods to analyse the performance of the corresponding codes. The EXIT chart method
in Section 4.4 makes it possible to predict the behaviour of the iterative decoder by looking
at the input/output relations of the individual constituent soft-output decoders. Next, we
present a common approach in coding theory. We estimate the code performance with
maximum likelihood decoding for an ensemble of concatenated codes. This method explains
why many concatenated code constructions lead to a low minimum Hamming distance and
therefore to a relatively poor performance for high signal-to-noise ratios. In Section 4.6 we
consider code designs that lead to a higher minimum Hamming distance owing to a special
encoder construction, called the woven encoder, or the application of designed interleavers.
The fifth chapter addresses space-time coding concepts, a still rather new topic in the
area of radio communications. Although these techniques do not represent error-correcting
codes in the classical sense, they can also be used to improve the reliability of a data
link. Since space-time coding became popular only a decade ago, only a few concepts have so far found their way into current standards. However, many other approaches are
currently being discussed. As already mentioned, we restrict this book to the most
important and promising concepts.
While classical encoders and decoders are separated from the physical channel by
modulators, equalisers, etc., and experience rather simple hyperchannels, this is not true
for space-time coding schemes, which work directly on the physical channel. Therefore,
Chapter 5 starts with a short survey of linear modulation schemes and explains the principle of diversity. Next, spatial channel models are described and different performance
measures for their quantitative evaluation are discussed. Sections 5.4 and 5.5 introduce
two space-time coding concepts with the highest practical relevance, namely orthogonal space-time block codes, which increase the diversity degree, and spatial multiplexing techniques, which boost the achievable data rate. For the latter approach, sophisticated signal processing algorithms are required at the receiver in order to separate the superimposed data streams again.
In the appendices a brief summary of algebraic structures such as finite fields and polynomial rings is given, which are needed especially for the treatment of classical algebraic codes, and the basics of linear algebra are briefly reviewed.
Finally, we would like to thank the Wiley team, especially Sarah Hinton as the respon-
sible editor, for their support during the completion of the manuscript. We also thank
Dr. rer. nat. Jens Schlembach who was involved from the beginning of this book project
and who gave encouragement when needed. Last but not least, we would like to give special thanks to our families – Fabian, Heike, Jana and Erik, Claudia, Hannah and Jakob and
Christiane – for their emotional support during the writing of the manuscript.
André Neubauer
Münster University of Applied Sciences
Jürgen Freudenberger
HTWG Konstanz University of Applied Sciences
Volker Kühn
University of Rostock
1
Introduction
The reliable transmission of information over noisy channels is one of the basic requirements of digital information and communication systems. Here, transmission is understood
both as transmission in space, e.g. over mobile radio channels, and as transmission in time
by storing information in appropriate storage media. Because of this requirement, modern
communication systems rely heavily on powerful channel coding methodologies. For practical applications these coding schemes not only need to have good characteristics with respect to the capability of detecting or correcting errors introduced on the channel; they also have to be efficiently implementable, e.g. in digital hardware within integrated
circuits. Practical applications of channel codes include space and satellite communications, data transmission, digital audio and video broadcasting and mobile communications,
as well as storage systems such as computer memories or the compact disc (Costello et al.,
1998).
In this introductory chapter we will give a brief introduction into the field of channel
coding. To this end, we will describe the information theory fundamentals of channel
coding. Simple channel models will be presented that will be used throughout the text.
Furthermore, we will present the binary triple repetition code as an illustrative example of
a simple channel code.
1.1 Communication Systems
In Figure 1.1 the basic structure of a digital communication system is shown, which represents the architecture of the communication systems in use today. Within the transmitter
of such a communication system the following tasks are carried out:
● source encoding,
● channel encoding,
● modulation.
Principal structure of digital communication systems

[block diagram: the information sequence u from the source passes through the source encoder, the channel encoder and the modulator onto the channel; the received sequence r passes through the demodulator, the channel decoder and the source decoder, yielding the estimate û]

■ The sequence of information symbols u is encoded into the sequence of code symbols b which are transmitted across the channel after modulation.

■ The sequence of received symbols r is decoded into the sequence of information symbols û which are estimates of the originally transmitted information symbols.

Figure 1.1: Basic structure of digital communication systems
In the receiver the corresponding inverse operations are implemented:
● demodulation,
● channel decoding,
● source decoding.
According to Figure 1.1 the modulator generates the signal that is used to transmit the
sequence of symbols b across the channel (Benedetto and Biglieri, 1999; Neubauer, 2007;
Proakis, 2001). Due to the noisy nature of the channel, the transmitted signal is disturbed.
The noisy received signal is demodulated by the demodulator in the receiver, leading to the
sequence of received symbols r. Since the received symbol sequence r usually differs from
the transmitted symbol sequence b, a channel code is used such that the receiver is able to
detect or even correct errors (Bossert, 1999; Lin and Costello, 2004; Neubauer, 2006b). To
this end, the channel encoder introduces redundancy into the information sequence u. This
redundancy can be exploited by the channel decoder for error detection or error correction
by estimating the transmitted symbol sequence û.
In his fundamental work, Shannon showed that it is theoretically possible to realise an
information transmission system with as small an error probability as required (Shannon,
1948). The prerequisite for this is that the information rate of the information source
be smaller than the so-called channel capacity. In order to reduce the information rate,
source coding schemes are used which are implemented by the source encoder in the
transmitter and the source decoder in the receiver (McEliece, 2002; Neubauer, 2006a).
Further information about source coding can be found elsewhere (Gibson et al., 1998;
Sayood, 2000, 2003).
In order better to understand the theoretical basics of information transmission as well
as channel coding, we now give a brief overview of information theory as introduced by
Shannon in his seminal paper (Shannon, 1948). In this context we will also introduce the
simple channel models that will be used throughout the text.
1.2 Information Theory
An important result of information theory is the finding that error-free transmission across a
noisy channel is theoretically possible – as long as the information rate does not exceed the
so-called channel capacity. In order to quantify this result, we need to measure information.
Within Shannon’s information theory this is done by considering the statistics of symbols
emitted by information sources.
1.2.1 Entropy
Let us consider the discrete memoryless information source shown in Figure 1.2. At a given time instant, this discrete information source emits the random discrete symbol X = x_i which assumes one out of M possible symbol values x_1, x_2, ..., x_M. The rate at which these symbol values appear is given by the probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M) with

$$ P_X(x_i) = \Pr\{X = x_i\}. $$
Discrete information source

[diagram: information source emitting the symbol X]

■ The discrete information source emits the random discrete symbol X.

■ The symbol values x_1, x_2, ..., x_M appear with probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M).

■ Entropy

$$ I(X) = - \sum_{i=1}^{M} P_X(x_i) \cdot \log_2 P_X(x_i) \tag{1.1} $$

Figure 1.2: Discrete information source emitting discrete symbols X
The average information associated with the random discrete symbol X is given by the so-called entropy measured in the unit 'bit'

$$ I(X) = - \sum_{i=1}^{M} P_X(x_i) \cdot \log_2 P_X(x_i). $$

For a binary information source that emits the binary symbols X = 0 and X = 1 with probabilities Pr{X = 0} = p_0 and Pr{X = 1} = 1 − Pr{X = 0} = 1 − p_0, the entropy is given by the so-called Shannon function or binary entropy function

$$ I(X) = -p_0 \log_2(p_0) - (1 - p_0) \log_2(1 - p_0). $$
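The entropy definition translates directly into a few lines of code. The following minimal Python sketch (our own illustration, not part of the original text; the function names are ours) evaluates Equation (1.1) for an arbitrary discrete source and for the binary entropy function; terms with probability zero are skipped, consistent with the convention 0 · log₂ 0 = 0.

```python
import math

def entropy(probs):
    """Entropy I(X) in bits of a discrete source with symbol probabilities probs."""
    # Terms with p = 0 contribute nothing (p * log2(p) -> 0 as p -> 0).
    return -sum(p * math.log2(p) for p in probs if p > 0)

def binary_entropy(p0):
    """Shannon function: entropy of a binary source with Pr{X = 0} = p0."""
    return entropy([p0, 1.0 - p0])

print(binary_entropy(0.5))   # 1.0 bit, the maximum for a binary source
print(binary_entropy(0.25))  # about 0.811 bit
```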
1.2.2 Channel Capacity
With the help of the entropy concept we can model a channel according to Berger's channel diagram shown in Figure 1.3 (Neubauer, 2006a). Here, X refers to the input symbol and R denotes the output symbol or received symbol. We now assume that M input symbol values x_1, x_2, ..., x_M and N output symbol values r_1, r_2, ..., r_N are possible. With the help of the conditional probabilities

$$ P_{X|R}(x_i|r_j) = \Pr\{X = x_i \mid R = r_j\} $$

and

$$ P_{R|X}(r_j|x_i) = \Pr\{R = r_j \mid X = x_i\} $$

the conditional entropies are given by

$$ I(X|R) = - \sum_{i=1}^{M} \sum_{j=1}^{N} P_{X,R}(x_i, r_j) \cdot \log_2 P_{X|R}(x_i|r_j) $$

and

$$ I(R|X) = - \sum_{i=1}^{M} \sum_{j=1}^{N} P_{X,R}(x_i, r_j) \cdot \log_2 P_{R|X}(r_j|x_i). $$

With these conditional probabilities the mutual information

$$ I(X;R) = I(X) - I(X|R) = I(R) - I(R|X) $$

can be derived which measures the amount of information that is transmitted across the channel from the input to the output for a given information source.

The so-called channel capacity C is obtained by maximising the mutual information I(X;R) with respect to the statistical properties of the input X, i.e. by appropriately choosing the probabilities {P_X(x_i)}_{1≤i≤M}. This leads to

$$ C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X;R). $$

If the input entropy I(X) is smaller than the channel capacity C,

$$ I(X) \overset{!}{<} C, $$

then information can be transmitted across the noisy channel with arbitrarily small error probability. Thus, the channel capacity C in fact quantifies the information transmission capacity of the channel.
Berger's channel diagram

[diagram: the input entropy I(X) splits into I(X|R) and the mutual information I(X;R); the output entropy I(R) comprises I(X;R) and I(R|X)]

■ Mutual information

$$ I(X;R) = I(X) - I(X|R) = I(R) - I(R|X) \tag{1.2} $$

■ Channel capacity

$$ C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X;R) \tag{1.3} $$

Figure 1.3: Berger's channel diagram
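To make Equation (1.2) concrete, the following Python sketch (our own illustrative code, not from the book) computes I(X;R) for a discrete memoryless channel from an input distribution and the transition probabilities, using the equivalent form I(X;R) = Σ_{i,j} P_{X,R}(x_i, r_j) log₂(P_{R|X}(r_j|x_i)/P_R(r_j)). For the binary symmetric channel of the next section the uniform input distribution is capacity achieving, so the printed value equals the capacity of Equation (1.4).

```python
import math

def mutual_information(p_x, p_r_given_x):
    """I(X;R) in bits for input distribution p_x[i] = P_X(x_i) and
    channel transition matrix p_r_given_x[i][j] = P_{R|X}(r_j | x_i)."""
    n_out = len(p_r_given_x[0])
    # Output distribution: P_R(r_j) = sum_i P_X(x_i) * P_{R|X}(r_j | x_i)
    p_r = [sum(p_x[i] * p_r_given_x[i][j] for i in range(len(p_x)))
           for j in range(n_out)]
    info = 0.0
    for i, px in enumerate(p_x):
        for j in range(n_out):
            prx = p_r_given_x[i][j]
            if px > 0 and prx > 0:
                info += px * prx * math.log2(prx / p_r[j])
    return info

eps = 0.25  # binary symmetric channel with bit error probability 0.25
print(mutual_information([0.5, 0.5], [[1 - eps, eps], [eps, 1 - eps]]))  # ~0.189
```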
1.2.3 Binary Symmetric Channel
As an important example of a memoryless channel we turn to the binary symmetric channel or BSC. Figure 1.4 shows the channel diagram of the binary symmetric channel with bit error probability ε. This channel transmits the binary symbol X = 0 or X = 1 correctly with probability 1 − ε, whereas the incorrect binary symbol R = 1 or R = 0 is emitted with probability ε.

By maximising the mutual information I(X;R), the channel capacity of a binary symmetric channel is obtained according to

$$ C = 1 + \varepsilon \log_2(\varepsilon) + (1 - \varepsilon) \log_2(1 - \varepsilon). $$

This channel capacity is equal to 1 if ε = 0 or ε = 1; for ε = 1/2 the channel capacity is 0. In contrast to the binary symmetric channel, which has discrete input and output symbols taken from binary alphabets, the so-called AWGN channel is defined on the basis of continuous real-valued random variables.¹

¹ In Chapter 5 we will also consider complex-valued random variables.
Binary symmetric channel

[channel diagram: X = 0 → R = 0 and X = 1 → R = 1 each with probability 1 − ε; X = 0 → R = 1 and X = 1 → R = 0 each with probability ε]

■ Bit error probability ε

■ Channel capacity

$$ C = 1 + \varepsilon \log_2(\varepsilon) + (1 - \varepsilon) \log_2(1 - \varepsilon) \tag{1.4} $$

Figure 1.4: Binary symmetric channel with bit error probability ε
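The closed-form capacity of Equation (1.4) is easily evaluated numerically. A minimal sketch (again our own code, not from the book), including the two deterministic limiting cases:

```python
import math

def bsc_capacity(eps):
    """Capacity of the binary symmetric channel, Equation (1.4)."""
    if eps in (0.0, 1.0):  # a deterministic channel transmits 1 bit per use
        return 1.0
    return 1.0 + eps * math.log2(eps) + (1.0 - eps) * math.log2(1.0 - eps)

print(bsc_capacity(0.0))   # 1.0
print(bsc_capacity(0.5))   # 0.0
print(bsc_capacity(0.25))  # about 0.189, matching the mutual information above
```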
1.2.4 AWGN Channel
Up to now we have exclusively considered discrete-valued symbols. The concept of entropy can be transferred to continuous real-valued random variables by introducing the so-called differential entropy. It turns out that a channel with real-valued input and output symbols can again be characterised with the help of the mutual information I(X;R) and its maximum, the channel capacity C. In Figure 1.5 the so-called AWGN channel is illustrated which is described by the additive white Gaussian noise term Z.

With the help of the signal power

$$ S = \mathrm{E}\{X^2\} $$

and the noise power

$$ N = \mathrm{E}\{Z^2\} $$

the channel capacity of the AWGN channel is given by

$$ C = \frac{1}{2} \log_2\left(1 + \frac{S}{N}\right). $$

The channel capacity exclusively depends on the signal-to-noise ratio S/N.
In order to compare the channel capacities of the binary symmetric channel and the
AWGN channel, we assume a digital transmission scheme using binary phase shift keying
(BPSK) and optimal reception with the help of a matched filter (Benedetto and Biglieri,
1999; Neubauer, 2007; Proakis, 2001). The signal-to-noise ratio of the real-valued output
AWGN channel

[diagram: R = X + Z, with the noise term Z added to the input X]

■ Signal-to-noise ratio S/N

■ Channel capacity

$$ C = \frac{1}{2} \log_2\left(1 + \frac{S}{N}\right) \tag{1.5} $$

Figure 1.5: AWGN channel with signal-to-noise ratio S/N
R of the matched filter is then given by

$$ \frac{S}{N} = \frac{E_b}{N_0/2} $$

with bit energy E_b and noise power spectral density N_0. If the output R of the matched filter is compared with the threshold 0, we obtain the binary symmetric channel with bit error probability

$$ \varepsilon = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right). $$

Here, erfc(·) denotes the complementary error function. In Figure 1.6 the channel capacities of the binary symmetric channel and the AWGN channel are compared as a function of E_b/N_0. The signal-to-noise ratio S/N or the ratio E_b/N_0 must be higher for the binary symmetric channel compared with the AWGN channel in order to achieve the same channel capacity. This gain also translates to the coding gain achievable by soft-decision decoding as opposed to hard-decision decoding of channel codes, as we will see later (e.g. in Section 2.2.8).
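The comparison plotted in Figure 1.6 can be reproduced numerically. The following sketch (our own code, assuming the BPSK/matched-filter model just described) evaluates Equations (1.4)-(1.7) over a range of E_b/N_0 values; the AWGN capacity exceeds the BSC capacity throughout, which quantifies the hard-decision loss.

```python
import math

def bsc_capacity(eps):
    return 1.0 + eps * math.log2(eps) + (1.0 - eps) * math.log2(1.0 - eps)

def awgn_capacity(snr):
    return 0.5 * math.log2(1.0 + snr)  # Equation (1.5)

for eb_n0_db in (-4, -2, 0, 2, 4):
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    snr = eb_n0 / 0.5                        # S/N = Eb/(N0/2), Equation (1.6)
    eps = 0.5 * math.erfc(math.sqrt(eb_n0))  # Equation (1.7)
    print(f"{eb_n0_db:3d} dB: C_AWGN = {awgn_capacity(snr):.3f}, "
          f"C_BSC = {bsc_capacity(eps):.3f}")
```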
Although information theory tells us that it is theoretically possible to find a channel
code that for a given channel leads to as small an error probability as required, the design
of good channel codes is generally difficult. Therefore, in the next chapters several classes
of channel codes will be described. Here, we start with a simple example.
Channel capacity of BSC vs AWGN channel

[plot: channel capacity C versus 10 log₁₀(E_b/N_0) in dB, from −5 dB to 5 dB; the AWGN curve lies above the BSC curve]

■ Signal-to-noise ratio of AWGN channel

$$ \frac{S}{N} = \frac{E_b}{N_0/2} \tag{1.6} $$

■ Bit error probability of binary symmetric channel

$$ \varepsilon = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right) \tag{1.7} $$

Figure 1.6: Channel capacity of the binary symmetric channel vs the channel capacity of the AWGN channel
1.3 A Simple Channel Code
As an introductory example of a simple channel code we consider the transmission of the
binary information sequence
00101110
over a binary symmetric channel with bit error probability ε = 0.25 (Neubauer, 2006b).
On average, every fourth binary symbol will be received incorrectly. In this example we
assume that the binary sequence
00000110
is received at the output of the binary symmetric channel (see Figure 1.7).
Channel transmission

00101110 → BSC → 00000110

■ Binary symmetric channel with bit error probability ε = 0.25
■ Transmission w/o channel code

Figure 1.7: Channel transmission without channel code
Encoder

00101110 → Encoder → 000000111000111111111000

■ Binary information symbols 0 and 1
■ Binary code words 000 and 111
■ Binary triple repetition code {000, 111}

Figure 1.8: Encoder of a triple repetition code
In order to implement a simple error correction scheme we make use of the so-called
binary triple repetition code. This simple channel code is used for the encoding of binary
data. If the binary symbol 0 is to be transmitted, the encoder emits the code word 000.
Alternatively, the code word 111 is issued by the encoder when the binary symbol 1 is to
be transmitted. The encoder of a triple repetition code is illustrated in Figure 1.8.
For the binary information sequence given above we obtain the binary code sequence
000 000 111 000 111 111 111 000
at the output of the encoder. If we again assume that on average every fourth binary
symbol is incorrectly transmitted by the binary symmetric channel, we may obtain the
received sequence
010 000 011 010 111 010 111 010.
This is illustrated in Figure 1.9.
Channel transmission

000000111000111111111000 → BSC → 010000011010111010111010

■ Binary symmetric channel with bit error probability ε = 0.25
■ Transmission with binary triple repetition code

Figure 1.9: Channel transmission of a binary triple repetition code
Decoder

010000011010111010111010 → Decoder → 00101010

■ Decoding of triple repetition code by majority decision:
000 → 000, 001 → 000, 010 → 000, 011 → 111, ..., 110 → 111, 111 → 111

Figure 1.10: Decoder of a triple repetition code
The decoder in Figure 1.10 tries to estimate the original information sequence with the
help of a majority decision. If the number of 0s within a received 3-bit word is larger than
the number of 1s, the decoder emits the binary symbol 0; otherwise a 1 is decoded. With
this decoding algorithm we obtain the decoded information sequence
00101010.
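The whole encode-transmit-decode example can be replayed in a few lines of Python (a small illustrative sketch of our own, not from the book):

```python
def encode(info_bits):
    """Triple repetition encoder: 0 -> 000, 1 -> 111."""
    return "".join(bit * 3 for bit in info_bits)

def decode(received):
    """Majority decision on each received 3-bit word."""
    words = [received[i:i + 3] for i in range(0, len(received), 3)]
    return "".join("1" if w.count("1") >= 2 else "0" for w in words)

print(encode("00101110"))                  # 000000111000111111111000
print(decode("010000011010111010111010"))  # 00101010, as derived above
```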
As can be seen from this example, the binary triple repetition code is able to correct a single
error within a code word. More errors cannot be corrected. With the help of this simple
channel code we are able to reduce the number of errors. Compared with the unprotected
transmission without a channel code, the number of errors has been reduced from two to
one. However, this is achieved at the cost of a significantly reduced transmission rate because, for a given symbol rate on the channel, it takes three times longer to transmit an information symbol with the help of the triple repetition code. It is one of the main topics
of the following chapters to present more efficient coding schemes.
2
Algebraic Coding Theory
In this chapter we will introduce the basic concepts of algebraic coding theory. To this end,
the fundamental properties of block codes are first discussed. We will define important code
parameters and describe how these codes can be used for the purpose of error detection
and error correction. The optimal maximum likelihood decoding strategy will be derived
and applied to the binary symmetric channel.
With these fundamentals at hand we will then introduce linear block codes. These
channel codes can be generated with the help of so-called generator matrices owing to
their special algebraic properties. Based on the closely related parity-check matrix and the
syndrome, the decoding of linear block codes can be carried out. We will also introduce
dual codes and several techniques for the construction of new block codes based on known
ones, as well as bounds for the respective code parameters and the accompanying code
characteristics. As examples of linear block codes we will treat the repetition code, parity-check code, Hamming code, simplex code and Reed-Muller code.
Although code generation can be carried out efficiently for linear block codes, the
decoding problem for general linear block codes is difficult to solve. By introducing further
algebraic structures, cyclic codes can be derived as a subclass of linear block codes for
which efficient algebraic decoding algorithms exist. Similar to general linear block codes,
which are defined using the generator matrix or the parity-check matrix, cyclic codes
are defined with the help of the so-called generator polynomial or parity-check polynomial.
Based on linear feedback shift registers, the respective encoding and decoding architectures
for cyclic codes can be efficiently implemented. As important examples of cyclic codes
we will discuss BCH codes and Reed-Solomon codes. Furthermore, an algebraic decoding algorithm is presented that can be used for the decoding of BCH and Reed-Solomon codes.
In this chapter the classical algebraic coding theory is presented. In particular, we
will follow work (Berlekamp, 1984; Bossert, 1999; Hamming, 1986; Jungnickel, 1995;
Lin and Costello, 2004; Ling and Xing, 2004; MacWilliams and Sloane, 1998; McEliece,
2002; Neubauer, 2006b; van Lint, 1999) that contains further details about algebraic coding
theory.
2.1 Fundamentals of Block Codes
In Section 1.3, the binary triple repetition code was given as an introductory example of a
simple channel code. This specific channel code consists of two code words 000 and 111 of
length n = 3, which represent k = 1 binary information symbol 0 or 1 respectively. Each
symbol of a binary information sequence is encoded separately. The respective code word
of length 3 is then transmitted across the channel. The potentially erroneously received word
is finally decoded into a valid code word 000 or 111 – or equivalently into the respective
information symbol 0 or 1. As we have seen, this simple code is merely able to correct
one error by transmitting three code symbols instead of just one information symbol across
the channel.
In order to generalise this simple channel coding scheme and to come up with more efficient and powerful channel codes, we now consider an information sequence u_0 u_1 u_2 u_3 u_4 u_5 u_6 u_7 ... of discrete information symbols u_i. This information sequence is grouped into blocks of length k according to

$$ \underbrace{u_0 u_1 \cdots u_{k-1}}_{\text{block}} \;\; \underbrace{u_k u_{k+1} \cdots u_{2k-1}}_{\text{block}} \;\; \underbrace{u_{2k} u_{2k+1} \cdots u_{3k-1}}_{\text{block}} \; \cdots $$

In so-called q-nary (n, k) block codes the information words

$$ u_0 u_1 \cdots u_{k-1}, \quad u_k u_{k+1} \cdots u_{2k-1}, \quad u_{2k} u_{2k+1} \cdots u_{3k-1}, \quad \ldots $$

of length k with u_i ∈ {0, 1, ..., q − 1} are encoded separately from each other into the corresponding code words

$$ b_0 b_1 \cdots b_{n-1}, \quad b_n b_{n+1} \cdots b_{2n-1}, \quad b_{2n} b_{2n+1} \cdots b_{3n-1}, \quad \ldots $$

of length n with b_i ∈ {0, 1, ..., q − 1} (see Figure 2.1).¹ These code words are transmitted across the channel and the received words are appropriately decoded, as shown in Figure 2.2. In the following, we will write the information word u_0 u_1 ··· u_{k−1} and the code word b_0 b_1 ··· b_{n−1} as vectors u = (u_0, u_1, ..., u_{k−1}) and b = (b_0, b_1, ..., b_{n−1}), whereas the received word is denoted by r = (r_0, r_1, ..., r_{n−1}).

¹ For q = 2 we obtain the important class of binary channel codes.
Encoding of an (n, k) block code

(u_0, u_1, ..., u_{k−1}) → Encoder → (b_0, b_1, ..., b_{n−1})

■ The sequence of information symbols is grouped into words (or blocks) of equal length k which are independently encoded.

■ Each information word (u_0, u_1, ..., u_{k−1}) of length k is uniquely mapped onto a code word (b_0, b_1, ..., b_{n−1}) of length n.

Figure 2.1: Encoding of an (n, k) block code
Decoding of an (n, k) block code

(r_0, r_1, ..., r_{n−1}) → Decoder → (b̂_0, b̂_1, ..., b̂_{n−1})

■ The received word (r_0, r_1, ..., r_{n−1}) of length n at the output of the channel is decoded into the code word (b̂_0, b̂_1, ..., b̂_{n−1}) of length n or the information word û = (û_0, û_1, ..., û_{k−1}) of length k respectively.

Figure 2.2: Decoding of an (n, k) block code
The decoded code word and decoded information word are given by b̂ = (b̂_0, b̂_1, ..., b̂_{n−1}) and û = (û_0, û_1, ..., û_{k−1}) respectively. In general, we obtain the transmission scheme for an (n, k) block code as shown in Figure 2.3.
Without further algebraic properties of the (n, k) block code, the encoding can be carried
out by a table look-up procedure. The information word u to be encoded is used to address
a table that for each information word u contains the corresponding code word b at the
respective address. If each information symbol can assume one out of q possible values, such a table contains q^k entries, one for each possible information word of length k.
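A minimal sketch of such a table look-up encoder (our own illustrative code, using the binary triple repetition code of Section 1.3 as the stored table):

```python
# Code table of a q-nary (n, k) block code, addressed by the information word.
# Here: the binary triple repetition code with q = 2, k = 1, n = 3,
# i.e. q**k = 2 table entries.
code_table = {
    (0,): (0, 0, 0),
    (1,): (1, 1, 1),
}

def encode(u):
    """Look up the code word b stored at the address given by u."""
    return code_table[tuple(u)]

print(encode([0]))  # (0, 0, 0)
print(encode([1]))  # (1, 1, 1)
```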