Loughborough University Institutional Repository
Trellis-coded modulation for voiceband data modems
This item was submitted to Loughborough University's Institutional Repository by the author.
Additional Information:
A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy at Loughborough University.
Metadata Record: https://dspace.lboro.ac.uk/2134/27238
Publisher: © S.F.A. Ip
Rights: This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 2.5 Generic (CC BY-NC-ND 2.5) licence. Full details of this licence are available at: http://creativecommons.org/licenses/by-nc-nd/2.5/
Please cite the published version.
https://dspace.lboro.ac.uk/2134/27238
This item was submitted to Loughborough University as a PhD thesis by the author and is made available in the Institutional Repository
(https://dspace.lboro.ac.uk/) under the following Creative Commons Licence conditions.
For the full text of this licence, please go to: http://creativecommons.org/licenses/by-nc-nd/2.5/
TRELLIS-CODED MODULATION FOR
VOICEBAND DATA MODEMS
by
SHU FUN ALEXIS IP, B.Sc., M.Sc.
A Doctoral Thesis
Submitted in partial fulfilment of the requirements for the award of
Doctor of Philosophy
of the Loughborough University of Technology
September 1988
Supervisor : Professor Adrian Percy Clark
Department of Electronic and Electrical Engineering
© by S. F. A. Ip, 1988
ABSTRACT
The thesis is concerned with a combined convolutional coding and
modulation technique, known as Trellis-Coded Modulation (TCM), with AM and
QAM signals. TCM provides a useful error correction capability, without
bandwidth expansion, by increasing appropriately the number of possible
levels in a transmitted signal element. The development of the
Correlative-Level Modulo Arithmetic (CLMA) coding technique has opened up
a new approach to the understanding and implementation of TCM. The study
goes further in optimizing the performance of the CLMA coder by
investigating various combinations of the correlative-level coding vectors
and the modulo arithmetic operations. A new CLMA coding technique is also
studied for the coding of a 16QAM signal. A further application of modulo
arithmetic is then investigated for Ungerboeck's convolutional codes. An
equivalent set of codes using the CLMA coding technique is obtained for
Ungerboeck's rate 1/2, 2/3 and 4/5 convolutional codes with multilevel
signals. Rate 4/5 90 degree and 180 degree 32QAM rotationally invariant
codes are next introduced and simulated using a novel technique that
involves the application of an adaptive CLMA coding scheme. A new set of
optimum 180 degree rotationally invariant codes is also developed, and
these are complemented by an automatic 90 degree phase correction
technique. Performances of codes using a Viterbi-algorithm decoder and
various near-maximum-likelihood decoders are studied in the thesis. The
most cost-effective decoder needs only a fraction of the equipment
complexity required by the corresponding maximum-likelihood decoder. It
therefore allows the use of convolutional codes with a long constraint
length, which have a better asymptotic coding gain than shorter codes. The
systems studied are suitable for data modems transmitting at rates up to
9600 bit/s. Computer simulation test results are presented for the
comparison of the relative tolerances to white Gaussian noise of various
systems.
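The correlative-level modulo-arithmetic idea summarized above can be sketched in a few lines: each transmitted symbol is a weighted sum of the current and past data symbols, reduced modulo N. This is an illustrative sketch only; the coding vector y, the modulus N and the function name clma_encode are assumptions for illustration, not the codes developed in the thesis.

```python
# Sketch of a correlative-level modulo-N (CLMA) coder: the coded symbol is
# q_i = MOD_N( sum_j y_j * s_{i-j} ), where s_i are K-level data symbols.
# The coding vector y = (1, 2) and modulus N = 4 are illustrative only.

def clma_encode(data, y=(1, 2), N=4):
    """Encode a sequence of K-level data symbols into coded symbols."""
    memory = [0] * (len(y) - 1)      # past data symbols s_{i-1}, s_{i-2}, ...
    coded = []
    for s in data:
        taps = [s] + memory          # current symbol followed by the memory
        q = sum(w * t for w, t in zip(y, taps)) % N
        coded.append(q)
        memory = [s] + memory[:-1]   # shift the coder memory
    return coded

print(clma_encode([0, 1, 3, 2]))     # -> [0, 1, 1, 0]
```

Because the coder output is reduced modulo N, the coded symbol alphabet stays the same size as the modulus, which is the sense in which such schemes avoid bandwidth expansion.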
ACKNOWLEDGEMENTS
I would like to express my deepest gratitude to Professor Adrian Clark for his constant and enthusiastic encouragement of the research project. His
superb supervision and most professional approach have, in every way,
benefited me tremendously. I would also like to thank CASE Communication
Ltd. and Professor Clark for their financial support, which made the
completion of the project possible.
I would also like to thank my Christian brothers and sisters in
Loughborough for supporting me, both in prayers and in actions, during the
preparation of the thesis.
Much gratitude must go to my wife, Kwen, who has persevered so much in
every way. Without her daily support and exhortation, the completion of
" the thesis would not have been possible.
Finally, I would like to express my gratitude towards my parents for their
hard work and patience in assisting me in my education and career.
LIST OF PRINCIPAL SYMBOLS
ai = binary data symbol at the input to the data-transmission system (eqn.2.16)
ai,j = input binary data symbol when the {ai} are separated into groups of j symbols (eqn.2.17)
Ci = cost of a stored vector obtained after the receipt of ri in a decoder (eqn.2.50)
ci = incremental cost of a stored vector after the receipt of ri in a decoder (eqn.2.51)
dist(x,y) = Euclidean or unitary distance between sequence x and sequence y (eqns.2.53 and 2.54)
dfree = minimum free distance of a code (eqn.2.55)
d̄free = normalized minimum free distance of a code (eqn.2.56)
duncoded = normalized minimum distance of an uncoded system (eqn.2.58)
E[.] = expected value or mean value
Es = average transmitted signal energy per symbol
Eb = average transmitted signal energy per bit
fi = recoded symbol of a stored vector in the decoder after the receipt of ri (eqn.2.47)
f( ) = coded signal mapping function
G = semi-infinite convolutional code generator matrix (eqn.2.22)
Ga = asymptotic coding gain over an uncoded system (eqn.2.59)
G(D) = convolutional code generator matrix of polynomials of delay operators (eqn.2.27)
g = number of symbols (or sampling intervals) involved in the memory of a code
g+1 = constraint length of a code
gb = effective number of bits in the code memory
H = semi-infinite parity-check matrix (eqn.2.33)
H(D) = parity-check matrix of polynomials of delay operators (eqn.2.36)
Im[.] = the imaginary part of a complex number
j = √-1 (when not being used as a subscript)
K = number of possible values of si (eqn.2.14)
m = number of stored vectors in a decoder
m̄ = average number of stored vectors in a decoder
MOD(N) = a function that defines a set of operations involving the use of real modulo-N arithmetic (eqn.3.9)
n = delay in decoding (measured in number of data symbols)
N' = number of data symbols in the sequences used for measuring dfree (eqn.2.55)
N0/2 = two-sided power spectral density of the zero-mean stationary AWGN added at the input to the receiver filter
Pi = expanded vectors derived from a Zi-1 in a decoder
Pb = average probability of bit error in a decoding process
Q(.) = Q-function (eqn.2.4)
qi = transmitted coded symbol
ri = received signal sample at time t=iT
Re[.] = the real part of a complex number
Si = state of a coder at time t=iT (eqn.2.40)
si = K-level uncoded data symbol
T = sampling interval
wi = additive white Gaussian noise (AWGN) component
y = coding vector in a CLMA code
Zi = stored vectors (sequences) in a decoder
ν = number of information bits per sampling interval
ν̄ = number of coded bits per sampling interval
σ² = variance of the real-valued noise component wi
Γ = signal-to-noise ratio (SNR) (eqn.2.60 and eqn.2.61)
δ(t) = unit impulse function (Dirac function)
|.| = modulus or absolute value
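The distance and gain quantities in the list above (dist(x,y), dfree, and the asymptotic coding gain) are related by a simple calculation, sketched below. The function names and the example figures are illustrative assumptions, with equal average transmitted energies assumed for the coded and uncoded systems; the thesis's own definitions are in eqns.2.53-2.59.

```python
import math

def dist(x, y):
    """Euclidean distance between two signal sequences x and y."""
    return math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(x, y)))

def coding_gain_db(dfree_coded, dmin_uncoded):
    """Asymptotic coding gain over the uncoded system, in dB, taking
    both distances as normalized (equal-energy) values."""
    return 20 * math.log10(dfree_coded / dmin_uncoded)

# Illustrative numbers: a code whose normalized free distance is
# sqrt(2) times that of the uncoded system gains about 3 dB.
print(round(coding_gain_db(math.sqrt(2), 1.0), 2))   # -> 3.01
```

The 20 log10 form follows from comparing squared Euclidean distances on a 10 log10 dB scale, which is why distance ratios rather than distance differences determine the gain.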
CONTENTS
Abstract
Acknowledgements
List of Principal Symbols
CHAPTER 1 Introduction
CHAPTER 2 Basic Assumptions
2.1 Model of Binary or Quaternary Uncoded
Data-Transmission System
2.2 Model of a 16QAM Uncoded Data-Transmission System
2.3 Model of a Data-Transmission System Using
Trellis-Coded Modulation (TCM)
2.4 Convolutional Coding and Signal Mapping Process
2.5 Viterbi-Algorithm Decoding
2.6 Minimum Free Distance and Asymptotic Coding Gain
2.7 Computer Simulation Tests
CHAPTER 3 Correlative-Level MOD(N) Coding of Quaternary
Baseband Signals
3.1 Model of the Data-Transmission System
3.2 Quaternary Correlative-Level MOD(N) Coder
3.3 Code-Search Procedures for Correlative-Level
MOD(N) Codes
3.4 Correlative-Level MOD(N) Codes with Coding Memory g=2
3.5 Viterbi-Algorithm Decoding of Correlative-Level
MOD(N) Codes
3.6 Correlative-Level MOD(N) Codes with Coding Memory g>2
3.7 Near-Maximum-Likelihood Decoding Processes
3.7.1 System-1
3.7.2 Pseudobinary System-A
CHAPTER 4 Correlative-Level Modulo Arit