Dorf, R.C., Wan, Z., Milstein, L.B., and Simon, M.K., "Digital Communication," in The Electrical Engineering Handbook, Ed. Richard C. Dorf, Boca Raton: CRC Press LLC, 2000.
© 2000 by CRC Press LLC

70 Digital Communication

70.1 Error Control Coding
  Block Codes • Convolutional Codes • Code Performance • Trellis-Coded Modulation
70.2 Equalization
  Linear Transversal Equalizers • Nonlinear Equalizers • Linear Receivers • Nonlinear Receivers
70.3 Spread Spectrum Communications
  A Brief History • Why Spread Spectrum? • Basic Concepts and Terminology • Spread Spectrum Techniques • Applications of Spread Spectrum

70.1 Error Control Coding

Richard C. Dorf and Zhen Wan

Error-correcting codes may be classified into two broad categories: block codes and tree codes. A block code is a mapping of k input binary symbols into n output binary symbols. Consequently, the block coder is a memoryless device. Since n > k, the code can be selected to provide redundancy, such as parity bits, which are used by the decoder to provide some error detection and error correction. The codes are denoted by (n, k), where the code rate R is defined by R = k/n. Practical values of R range from 1/4 to 7/8, and k ranges from 3 to several hundred [Clark and Cain, 1981]. Some properties of block codes are given in Table 70.1.

A tree code is produced by a coder that has memory. Convolutional codes are a subset of tree codes. The convolutional coder accepts k binary symbols at its input and produces n binary symbols at its output, where the n output symbols are affected by v + k input symbols. Memory is incorporated since v > 0. The code rate is defined by R = k/n. Typical values for k and n range from 1 to 8, and the values for v range from 2 to 60. The range of R is between 1/4 and 7/8 [Clark and Cain, 1981].

Block Codes

In a block code, the n code digits generated in a particular time unit depend only on the k message digits within that time unit. Some of the errors can be detected and corrected if d ≥ s + t + 1, where s is the number of errors that can be detected, t is the number of errors that can be corrected, and d is the Hamming distance. Usually, s ≥ t; thus, d ≥ 2t + 1.
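The distance bound above is easy to check numerically. A minimal sketch, using a toy (3, 2) single-parity-check code chosen for illustration (it is not one of the codes discussed in the text):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Minimum distance d of a code: smallest distance between any two code words.
# Hypothetical toy code: the (3, 2) single-parity-check code.
code = ["000", "011", "101", "110"]
d = min(hamming_distance(x, y) for x, y in combinations(code, 2))
t = (d - 1) // 2   # errors that can be corrected, from d >= 2t + 1
s = d - 1          # errors that can be detected (detection only)
print(d, t, s)     # → 2 0 1: this code detects single errors but corrects none
```

With d = 2 the bound d ≥ s + t + 1 admits s = 1, t = 0, which matches the familiar behavior of a parity check: it flags single-bit errors but cannot correct them.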
A general code word can be expressed as a1, a2, . . ., ak, c1, c2, . . ., cr, where k is the number of information bits and r is the number of check bits. The total word length is n = k + r. In Fig. 70.1, the gains hij (i = 1, 2, . . ., r; j = 1, 2, . . ., k) are elements of the parity-check matrix H. The k data bits are shifted in each time, while k + r bits are simultaneously shifted out by the commutator.

Cyclic Codes

Cyclic codes are block codes such that another code word can be obtained by taking any one code word, shifting the bits to the right, and placing the dropped-off bits on the left. An encoding circuit with (n – k) shift registers is shown in Fig. 70.2.

Richard C. Dorf, University of California, Davis
Zhen Wan, University of California, Davis
L. B. Milstein, University of California
M. K. Simon, Jet Propulsion Laboratory
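The cyclic-shift property can be verified directly for a small cyclic code. A sketch using the (7, 4) cyclic Hamming code with generator polynomial g(x) = x³ + x + 1 (a standard textbook example; the specific generator is an assumption, not taken from the text):

```python
# Generator polynomial g(x) = x^3 + x + 1 of the (7, 4) cyclic Hamming code,
# written as a bit list with the highest-degree coefficient first.
G = [1, 0, 1, 1]

def poly_mul_gf2(a, b):
    """Multiply two polynomials over GF(2), given as bit lists (MSB first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

# Every code word is a multiple m(x)g(x) of the generator polynomial.
codewords = set()
for m in range(16):
    msg = [(m >> i) & 1 for i in range(3, -1, -1)]   # 4-bit message
    codewords.add(tuple(poly_mul_gf2(msg, G)))

# Cyclic property: rotating any code word yields another code word.
ok = all((c[-1],) + c[:-1] in codewords for c in codewords)
print(ok)  # → True
```

The check succeeds because g(x) divides x⁷ + 1, so a cyclic shift of a multiple of g(x) (taken modulo x⁷ + 1) is again a multiple of g(x).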
In Fig. 70.2, the gains gk are the coefficients of the generator polynomial

    g(x) = x^(n–k) + g1 x^(n–k–1) + . . . + g(n–k–1) x + 1

The gains gk are either 0 or 1. The k data digits are shifted in one at a time at the input with the switch s held at position p1. The symbol D represents a one-digit delay. As the data digits move through the encoder, they are also shifted out onto the output lines, because the first k digits of the code word are the data digits themselves. As soon as the last (or kth) data digit clears the last (n – k) register, all the registers contain the parity-check digits. The switch s is now thrown to position p2, and the n – k parity-check digits are shifted out one at a time onto the line.

TABLE 70.1 Properties of Block Codes

Property                     BCH                           Reed–Solomon         Hamming      Maximal Length
Block length                 n = 2^m – 1, m = 3, 4, 5, ...  n = m(2^m – 1) bits  n = 2^m – 1  n = 2^m – 1
Number of parity bits        —                             r = 2mt bits         r = m        —
Minimum distance             d ≥ 2t + 1                    d = m(2t + 1) bits   d = 3        d = 2^m – 1
Number of information bits   k ≥ n – mt                    —                    —            k = m

Note: m is any positive integer unless otherwise indicated; n is the block length; k is the number of information bits; t is the number of errors that can be corrected; r is the number of parity bits; d is the minimum distance.

FIGURE 70.1 An encoding circuit of (n, k) block code.

FIGURE 70.2 An encoder for systematic cyclic code. (Source: B.P. Lathi, Modern Digital and Analog Communications, New York: CBS College Publishing, 1983. With permission.)
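The shift-register circuit of Fig. 70.2 computes the parity-check digits as the remainder of x^(n–k)·d(x) divided by g(x). A minimal software sketch of that long division over GF(2); the generator and data word below are illustrative choices, not values from the text:

```python
def cyclic_encode(data, gen, n):
    """Systematic cyclic encoding over GF(2): shift the k data digits up by
    n - k positions, divide by the generator polynomial, and append the
    remainder as the n - k parity-check digits.  Bit lists are MSB first."""
    k = len(data)
    reg = list(data) + [0] * (n - k)   # x^(n-k) * d(x)
    for i in range(k):                 # polynomial long division via XOR
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return list(data) + reg[k:]        # data digits followed by parity digits

# (7, 4) code with g(x) = x^3 + x + 1
cw = cyclic_encode([1, 0, 0, 1], [1, 0, 1, 1], n=7)
print(cw)  # → [1, 0, 0, 1, 1, 1, 0]
```

The first k digits of the output are the data digits themselves, mirroring the circuit's behavior with switch s at position p1; the final n – k digits are the remainder, shifted out when the switch moves to p2.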
Examples of cyclic and related codes are

1. Bose–Chaudhuri–Hocquenghem (BCH)
2. Reed–Solomon
3. Hamming
4. Maximal length
5. Reed–Muller
6. Golay codes

Convolutional Codes

In a convolutional code, the block of n code digits generated by the encoder in a particular time unit depends not only on the block of k message digits within that time unit but also on the block of data digits within a previous span of N – 1 time units (N > 1). A convolutional encoder is illustrated in Fig. 70.3. Here k bits (one input frame) are shifted in each time, and concurrently n bits (the output frame) are shifted out, where n > k. Thus, every k-bit input frame produces an n-bit output frame. Redundancy is provided in the output, since n > k. Also, there is memory in the coder, since the output frame depends on the previous K input frames, where K > 1. The code rate is R = k/n, which is 3/4 in this illustration. The constraint length, K, is the number of input frames that are held in the kK-bit shift register. Depending on the particular convolutional code that is to be generated, data from the kK stages of the shift register are added (modulo 2) and used to set the bits in the n-stage output register.

Code Performance

The improvement in the performance of a digital communication system that can be achieved by the use of coding is illustrated in Fig. 70.4. It is assumed that a digital signal plus channel noise is present at the receiver input. The performance of a system that uses binary phase-shift-keyed (BPSK) signaling is shown both for the case when coding is used and for the case when there is no coding. For the BPSK no-code case, Pe = Q(√(2Eb/N0)). For the coded case a (23, 12) Golay code is used; Pe is the probability of bit error, also called the bit error rate (BER), that is measured at the receiver output.

FIGURE 70.3 Convolutional encoding (k = 3, n = 4, K = 5, and R = 3/4).
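A convolutional encoder is just a shift register plus modulo-2 adders. A minimal sketch for the classic rate-1/2, constraint-length-3 code with generators 7 and 5 (octal), a standard textbook example rather than the k = 3, n = 4 encoder of Fig. 70.3:

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder: for each input bit, shift it into a
    K-bit register and emit one modulo-2 sum per generator polynomial."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # kK-bit shift register
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # modulo-2 adder
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input frame of k = 1 bit produces an output frame of n = 2 bits, and the memory of the register makes each output frame depend on the current and previous K – 1 input bits, exactly as described above.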
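The uncoded BPSK curve of Fig. 70.4 can be reproduced from the formula Pe = Q(√(2Eb/N0)). A short sketch; the 9.6 dB operating point is an illustrative reading of the figure, not a value stated in the text:

```python
import math

def Q(x):
    """Gaussian tail probability, Q(x) = P(N(0,1) > x), via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit error rate: Pe = Q(sqrt(2 Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return Q(math.sqrt(2.0 * ebno))

# Uncoded BPSK needs roughly 9.6 dB for a BER near 1e-5; a code with 2 dB of
# coding gain would reach the same BER at about 7.6 dB.
print(f"{bpsk_ber(9.6):.1e}")
```

Coding gain is read horizontally off such curves: the reduction in required Eb/N0, in dB, for a fixed target BER.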
FIGURE 70.4 Performance of digital systems, with and without coding. Eb/N0 is the energy-per-bit to noise-density ratio at the receiver input. The function Q(x) is Q(x) = (1/(x√(2π))) e^(–x²/2).

TABLE 70.2 Coding Gains with BPSK or QPSK

                                                                Coding Gain (dB)  Coding Gain (dB)  Data Rate
Coding Technique Used                                           at 10^–5 BER      at 10^–8 BER      Capability
Ideal coding                                                    11.2              13.6
Concatenated Reed–Solomon and convolutional (Viterbi decoding)  6.5–7.5           8.5–9.5           Moderate
Convolutional with sequential decoding (soft decisions)         6.0–7.0           8.0–9.0           Moderate
Block codes (soft decisions)                                    5.0–6.0           6.5–7.5           Moderate
Concatenated Reed–Solomon and short block                       4.5–5.5           6.5–7.5           Very high
Convolutional with Viterbi decoding                             4.0–5.5           5.0–6.5           High
Convolutional with sequential decoding (hard decisions)         4.0–5.0           6.0–7.0           High
Block codes (hard decisions)                                    3.0–4.0           4.5–5.5           High
Block codes with threshold decoding                             2.0–4.0           3.5–5.5           High
Convolutional with threshold decoding                           1.5–3.0           2.5–4.0           Very high

BPSK: modulation technique, binary phase-shift keying; QPSK: modulation technique, quadrature phase-shift keying; BER: bit error rate.
Source: V.K. Bhargava, "Forward error correction schemes for digital communications," IEEE Communications Magazine, 21, 11–19, © 1983 IEEE. With permission.

Trellis-Coded Modulation

Trellis-coded modulation (TCM) combines multilevel modulation and coding to achieve coding gain without bandwidth expansion [Ungerboeck, 1982, 1987]. TCM has been adopted for use in the new CCITT V.32 modem that allows an information data rate of 9600 b/s (bits per second) to be transmitted over VF (voice frequency) lines. The TCM has a coding gain of 4 dB [Wei, 1984]. The combined modulation and coding operation of TCM is shown in Fig. 70.5(b). Here, the serial data from the source, m(t), are converted into parallel (m-bit)
data, which are partitioned into k-bit and (m – k)-bit words, where k ≤ m. The k-bit words (frames) are convolutionally encoded into (n = k + 1)-bit words so that the code rate is R = k/(k + 1). The amplitude and phase are then set jointly on the basis of the coded n-bit word and the uncoded (m – k)-bit word. Almost 6 dB of coding gain can be realized if coders of constraint length 9 are used.

Defining Terms

Block code: A mapping of k input binary symbols into n output binary symbols.
Convolutional code: A subset of tree codes, accepting k binary symbols at its input and producing n binary symbols at its output.
Cyclic code: Block code such that another code word can be obtained by taking any one code word, shifting the bits to the right, and placing the dropped-off bits on the left.
Tree code: Produced by a coder that has memory.

Related Topics

69.1 Modulation • 70.2 Equalization

GEORGE ANSON HAMILTON (1843–1935)

Telegraphy captivated George Hamilton's interest while he was still a boy, to the extent that he built a small telegraph line himself, from sinking the poles to making the necessary apparatus. By the time he was 17, he was the manager of the telegraph office of the Atlantic & Great Western Railroad at Ravenna, Ohio. Hamilton continued to hold managerial positions with telegraph companies until 1873, when he became assistant to Moses G. Farmer in his work on general electrical apparatus and machinery. In 1875, Hamilton joined Western Union as assistant electrician and, for the next two years, worked with Gerritt Smith in establishing and maintaining the first quadruplex telegraph circuits in both America and England. He then focused on the development of the Wheatstone high-speed automatic system and was also the chief electrician on the Key West–Havana cable repair expedition.
Hamilton left Western Union in 1889, however, to join Western Electric, where he was placed in charge of the production of fine electrical instruments until the time of his retirement. (Courtesy of the IEEE Center for the History of Electrical Engineering.)
References

V.K. Bhargava, "Forward error correction schemes for digital communications," IEEE Communications Magazine, 21, 11–19, 1983.
G.C. Clark and J.B. Cain, Error-Correction Coding for Digital Communications, New York: Plenum, 1981.
L.W. Couch, Digital and Analog Communication Systems, New York: Macmillan, 1990.
B.P. Lathi, Modern Digital and Analog Communication, New York: CBS College Publishing, 1983.
G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Transactions on Information Theory, vol. IT-28 (January), pp. 55–67, 1982.
G. Ungerboeck, "Trellis-coded modulation with redundant signal sets, Parts 1 and 2," IEEE Communications Magazine, vol. 25, no. 2 (February), pp. 5–21, 1987.
L. Wei, "Rotationally invariant convolutional channel coding with expanded signal space, Part II: Nonlinear codes," IEEE Journal on Selected Areas in Communications, vol. SAC-2, no. 2, pp. 672–686, 1984.

Further Information

For further information refer to IEEE Communications and IEEE Journal on Selected Areas in Communications.

70.2 Equalization

Richard C. Dorf and Zhen Wan

In bandwidth-efficient digital communication systems the effect of each symbol transmitted over a time-dispersive channel extends beyond the time interval used to represent that symbol. The distortion caused by the resulting overlap of received symbols is called intersymbol interference (ISI) [Lucky et al., 1968]. ISI arises in all pulse-modulation systems, including frequency-shift keying (FSK), phase-shift keying (PSK), and quadrature amplitude modulation (QAM) [Lucky et al., 1968]. However, its effect can be most easily described for a baseband PAM system. The purpose of an equalizer, placed in the path of the received signal, is to reduce the ISI as much as possible to maximize the probability of correct decisions.

FIGURE 70.5 Transmitters for conventional coding and for TCM.
Linear Transversal Equalizers

Among the many structures used for equalization, the simplest is the transversal (tapped delay line or nonrecursive) equalizer shown in Fig. 70.6. In such an equalizer the current and past values r(t – nT) of the received signal are linearly weighted by equalizer coefficients (tap gains) cn and summed to produce the output. In the commonly used digital implementation, samples of the received signal at the symbol rate are stored in a digital shift register (or memory), and the equalizer output samples (sums of products) z(t0 + kT), or zk, are computed digitally, once per symbol, according to

    zk = Σ (n = 0 to N – 1) cn r(t0 + kT – nT)

where N is the number of equalizer coefficients and t0 denotes sample timing.

FIGURE 70.6 Linear transversal equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 648. With permission.)

The equalizer coefficients, cn, n = 0, 1, . . ., N – 1, may be chosen to force the samples of the combined channel and equalizer impulse response to zero at all but one of the NT-spaced instants in the span of the equalizer. Such an equalizer is called a zero-forcing (ZF) equalizer [Lucky, 1965]. If we let the number of coefficients of a ZF equalizer increase without bound, we would obtain an infinite-length equalizer with zero ISI at its output. An infinite-length zero-ISI equalizer is simply an inverse filter, which inverts the folded frequency response of the channel. Clearly, the ZF criterion neglects the effect of noise altogether. A finite-length ZF equalizer is approximately inverse to the folded frequency response of the channel. Also, a finite-length ZF equalizer is guaranteed to minimize the peak distortion, or worst-case ISI, only if the peak distortion before equalization is less than 100% [Lucky, 1965].

The least-mean-squared (LMS) equalizer [Lucky et al., 1968] is more robust. Here the equalizer coefficients are chosen to minimize the mean squared error (MSE), the sum of squares of all the ISI terms plus the noise power at the output of the equalizer. Therefore, the LMS equalizer maximizes the signal-to-distortion ratio (S/D) at its output within the constraints of the equalizer time span and the delay through the equalizer.

Automatic Synthesis

Before regular data transmission begins, automatic synthesis of the ZF or LMS equalizers for unknown channels may be carried out during a training period. During the training period, a known signal is transmitted and a synchronized version of this signal is generated in the receiver to acquire information about the channel characteristics. The automatic adaptive equalizer is shown in Fig. 70.7. A noisy but unbiased estimate

    d(ek²)/dcn(k) = 2 ek r(t0 + kT – nT)
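The tapped-delay-line sum zk = Σ cn r(t0 + kT – nT) can be sketched in a few lines of code. The two-tap channel and equalizer coefficients below are hypothetical illustrations, not values from the text:

```python
def equalize(received, coeffs):
    """Linear transversal equalizer: z_k = sum_n c_n * r[k - n],
    the weighted sum of the N most recent symbol-spaced samples."""
    N = len(coeffs)
    return [sum(coeffs[n] * received[k - n] for n in range(N) if k - n >= 0)
            for k in range(len(received))]

# Hypothetical two-tap channel that leaks 30% of each symbol into the next.
tx = [1, -1, 1, 1]
h = [1.0, 0.3]
rx = [sum(h[j] * tx[k - j] for j in range(len(h)) if 0 <= k - j < len(tx))
      for k in range(len(tx) + len(h) - 1)]

# A two-tap approximate inverse (ZF-style): residual ISI remains because a
# finite-length equalizer can only approximate the true inverse filter.
z = equalize(rx, [1.0, -0.3])
print([round(v, 2) for v in z])  # → [1.0, -1.0, 0.91, 1.09, -0.09]
```

The small residual terms (0.91, 1.09, –0.09 instead of exact ±1 and 0) illustrate the point made above: a finite-length ZF equalizer is only approximately the inverse of the channel.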
© 2000 by CRC Press LLC

is used. Thus, the tap gains are updated according to

cn(k + 1) = cn(k) – Δ ek r(t0 + kT – nT),  n = 0, 1, . . ., N – 1

where cn(k) is the nth tap gain at time k, ek is the error signal, and Δ is a positive adaptation constant or step size. Error signals ek = zk – qk can be computed at the equalizer output and used to adjust the equalizer coefficients to reduce the sum of the squared errors. Note that qk = x̂k.

The most popular equalizer adjustment method involves updates to each tap gain during each symbol interval. The adjustment to each tap gain is in a direction opposite to an estimate of the gradient of the MSE with respect to that tap gain. The idea is to move the set of equalizer coefficients closer to the unique optimum set corresponding to the minimum MSE. This symbol-by-symbol procedure, developed by Widrow and Hoff [Feher, 1987], is commonly referred to as the stochastic gradient method.

Adaptive Equalization

After the initial training period (if there is one), the coefficients of an adaptive equalizer may be continually adjusted in a decision-directed manner. In this mode the error signal ek = zk – qk is derived from the final (not necessarily correct) receiver estimate {qk} of the transmitted sequence {xk}, where qk is the estimate of xk. In normal operation the receiver decisions are correct with high probability, so the error estimates are correct often enough to allow the adaptive equalizer to maintain precise equalization. Moreover, a decision-directed adaptive equalizer can track slow variations in the channel characteristics or linear perturbations in the receiver front end, such as slow jitter in the sampler phase.

Nonlinear Equalizers

Decision-Feedback Equalizers

A decision-feedback equalizer (DFE) is a simple nonlinear equalizer [Monsen, 1971] that is particularly useful for channels with severe amplitude distortion; it uses decision feedback to cancel the interference from symbols that have already been detected. Fig.
70.8 shows the diagram of the equalizer. The equalized signal is the sum of the outputs of the forward and feedback parts of the equalizer. The forward part is like the linear transversal equalizer discussed earlier. Decisions made on the equalized signal are fed back via a second transversal filter. The basic idea is that if the values of the symbols already detected are known (past decisions are assumed to be correct), then the ISI contributed by these symbols can be canceled exactly by subtracting past symbol values, with appropriate weighting, from the equalizer output. The forward and feedback coefficients may be adjusted simultaneously to minimize the MSE. The update equation for the forward coefficients is the same as for the linear equalizer. The feedback coefficients are adjusted according to

bm(k + 1) = bm(k) + Δ ek x̂k–m,  m = 1, . . ., M

where x̂k is the kth symbol decision, bm(k) is the mth feedback coefficient at time k, and there are M feedback coefficients in all. The optimum LMS settings of bm, m = 1, . . ., M, are those that reduce the ISI to zero, within the span of the feedback part, in a manner similar to a ZF equalizer.

FIGURE 70.7 Automatic adaptive equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 651. With permission.)
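The two update rules above (the stochastic-gradient adjustment of the forward taps cn and the feedback taps bm) can be sketched in a short simulation. The Python sketch below is illustrative only and is not taken from the source: the three-tap ISI channel, the filter lengths, the step size, and the BPSK alphabet are all assumptions. It trains on known symbols first and then switches to decision-directed operation, as described in the text.

```python
import numpy as np

# Hypothetical sketch of a decision-feedback equalizer adapted with the
# LMS (stochastic gradient) rules above. Channel, step size, and filter
# lengths are illustrative assumptions, not values from the source.

rng = np.random.default_rng(0)

N, M = 7, 3                    # forward taps c_n, feedback taps b_m
delta = 0.02                   # adaptation constant (step size)
d = N // 2                     # decision delay through the forward filter

c = np.zeros(N); c[d] = 1.0    # centre-spike initialization of forward taps
b = np.zeros(M)                # feedback taps start at zero

channel = np.array([1.0, 0.4, 0.2])        # assumed ISI channel
x = rng.choice([-1.0, 1.0], size=6000)     # BPSK symbols x_k
r = np.convolve(x, channel)[: len(x)]      # received samples r(t0 + kT)
r += 0.01 * rng.standard_normal(len(x))    # mild additive noise

xhat = np.zeros(len(x))        # receiver decisions (estimates of x_{k-d})
errors = 0
for k in range(N, len(x)):
    r_vec = r[k - np.arange(N)]             # r(t0 + kT - nT), n = 0..N-1
    fb_vec = xhat[k - np.arange(1, M + 1)]  # M most recent past decisions
    z = c @ r_vec - b @ fb_vec              # equalized signal z_k
    xhat[k] = 1.0 if z >= 0 else -1.0       # symbol decision
    # reference q_k: known training symbols first, then decision-directed
    q = x[k - d] if k < 3000 else xhat[k]
    e = z - q                               # error signal e_k = z_k - q_k
    c -= delta * e * r_vec                  # c_n(k+1) = c_n(k) - delta*e_k*r(t0+kT-nT)
    b += delta * e * fb_vec                 # b_m(k+1) = b_m(k) + delta*e_k*xhat_{k-m}
    if k >= 5000:
        errors += int(xhat[k] != x[k - d])

print("decision errors over final 1000 symbols:", errors)
```

With these assumed parameters the taps converge well before the decision-directed phase begins; in practice the step size trades convergence speed against residual (excess) MSE.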
Fractionally Spaced Equalizers

The optimum receive filter in a linear modulation system is the cascade of a filter matched to the actual channel with a transversal T-spaced equalizer [Forney, 1972]. The fractionally spaced equalizer (FSE), by virtue of its sampling rate, can synthesize the best combination of the characteristics of an adaptive matched filter and a T-spaced equalizer, within the constraints of its length and delay. A T-spaced equalizer, with symbol-rate sampling at its input, cannot perform matched filtering. A fractionally spaced equalizer can effectively compensate for more severe delay distortion and deal with amplitude distortion with less noise enhancement than a T-equalizer. A fractionally spaced transversal equalizer [Monsen, 1971] is shown in Fig. 70.9. The delay-line taps of such an equalizer are spaced at an interval τ which is less than, or a fraction of, the symbol interval T. The tap spacing τ is typically selected such that the bandwidth occupied by the signal at the equalizer input is | f | < 1/2τ.

FIGURE 70.8 Decision-feedback equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 655. With permission.)

FIGURE 70.9 Fractionally spaced equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 656. With permission.)
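The defining property of the FSE, taps spaced at τ < T while the output and the coefficient update occur once per symbol interval, can be sketched as follows. This Python sketch is an assumption-laden illustration (τ = T/2, an invented five-tap channel response, and arbitrary lengths and step size), not the source's design; it runs the LMS update in training mode only.

```python
import numpy as np

# Hypothetical sketch of a T/2-spaced fractionally spaced equalizer:
# the delay line is tapped every tau = T/2, but the output z and the
# LMS update occur once per symbol interval T. All parameters below
# (channel, lengths, step size) are illustrative assumptions.

rng = np.random.default_rng(1)

N = 16                          # number of tau-spaced taps, tau = T/2
delta = 0.01                    # step size
d = 4                           # assumed decision delay in symbols
c = np.zeros(N); c[2 * d] = 1.0 # centre-spike initialization

h = np.array([0.1, 1.0, 0.5, 0.2, 0.1])    # assumed channel sampled at T/2
x = rng.choice([-1.0, 1.0], size=4000)     # symbols at rate 1/T
up = np.zeros(2 * len(x)); up[::2] = x     # zero-stuff to rate 2/T
r = np.convolve(up, h)[: len(up)]          # received T/2-spaced samples
r += 0.01 * rng.standard_normal(len(r))    # mild additive noise

errors = 0
for k in range(N, len(x)):                 # one output per symbol interval
    r_vec = r[2 * k - np.arange(N)]        # N most recent T/2-spaced samples
    z = c @ r_vec                          # symbol-rate equalizer output
    e = z - x[k - d]                       # training-mode error signal
    c -= delta * e * r_vec                 # LMS update, once per symbol
    if k >= 3000:
        errors += int((1.0 if z >= 0 else -1.0) != x[k - d])

print("decision errors over final 1000 symbols:", errors)
```

Because the input is sampled at 2/T, the equalizer sees the full signal bandwidth and can jointly shape amplitude and delay response before symbol-rate decimation, which is what lets it act as an adaptive matched filter as well as an equalizer.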