13.6 Linear Prediction and Linear Predictive Coding

Linear Predictive Coding (LPC)

A different, though related, method to which the formalism above can be applied is the “compression” of a sampled signal so that it can be stored more compactly. The original form should be exactly recoverable from the compressed version. Obviously, compression can be accomplished only if there is redundancy in the signal. Equation (13.6.11) describes one kind of redundancy: It says that the signal, except for a small discrepancy, is predictable from its previous values and from a small number of LP coefficients. Compression of a signal by the use of (13.6.11) is thus called linear predictive coding, or LPC.

The basic idea of LPC (in its simplest form) is to record as a compressed file (i) the number of LP coefficients M, (ii) their M values, e.g., as obtained by memcof, (iii) the first M data points, and then (iv) for each subsequent data point only its residual discrepancy xi (equation 13.6.1). When you are creating the compressed file, you find the residual by applying (13.6.1) to the previous M points, subtracting the sum from the actual value of the current point. When you are reconstructing the original file, you add the residual back in, at the point indicated in the routine predic.
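In outline, the two loops look like the following minimal sketch. This is our illustration, not the book's predic routine: the function names lpc_residuals and lpc_rebuild are hypothetical, and the coefficient array d[0..m-1] is assumed to have come from a routine like memcof.

/* Fill resid[m..n-1] with the discrepancy between each point and its
 * linear prediction from the m preceding points. */
void lpc_residuals(const double *data, int n, const double *d, int m,
                   double *resid)
{
    int i, j;
    for (i = m; i < n; i++) {
        double sum = 0.0;
        for (j = 0; j < m; j++)
            sum += d[j] * data[i-1-j];   /* prediction from previous m */
        resid[i] = data[i] - sum;        /* store only the discrepancy */
    }
}

/* Inverse loop: data[0..m-1] must already hold the m stored points;
 * each later point is its prediction plus the stored residual. */
void lpc_rebuild(double *data, int n, const double *d, int m,
                 const double *resid)
{
    int i, j;
    for (i = m; i < n; i++) {
        double sum = 0.0;
        for (j = 0; j < m; j++)
            sum += d[j] * data[i-1-j];   /* same prediction as above */
        data[i] = sum + resid[i];        /* add the residual back in */
    }
}

One caveat on this floating-point form: sum + (data[i] - sum) is not guaranteed to round back to data[i] exactly in floating arithmetic. The integer variant described next is what makes the reconstruction exact.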
It may not be obvious why there is any compression at all in this scheme. After all, we are storing one value of residual per data point! Why not just store the original data point? The answer depends on the relative sizes of the numbers involved. The residual is obtained by subtracting two very nearly equal numbers (the data and the linear prediction). Therefore, the discrepancy typically has only a very small number of nonzero bits. These can be stored in a compressed file. How do you do it in a high-level language? Here is one way: Scale your data to have integer values, say between +1000000 and −1000000 (supposing that you need six significant figures). Modify equation (13.6.1) by enclosing the sum term in an “integer part of” operator. The discrepancy will now, by definition, be an integer. Experiment with different values of M, to find LP coefficients that make the range of the discrepancy as small as you can. If you can get to within a range of ±127 (and in our experience this is not at all difficult) then you can write it to a file as a single byte. This is a compression factor of 4, compared to 4-byte integer or floating formats.
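A sketch of this integer scheme, under the stated assumptions: the data have already been scaled to integers within ±1000000, the coefficients sit in d[0..m-1], and truncation plays the role of the “integer part of” operator. The names predict_int, write_residuals, and read_residuals are ours.

#include <stdio.h>
#include <stdlib.h>

/* Integer part of the linear prediction of x[i] from x[i-1..i-m]. */
long predict_int(const long *x, int i, const double *d, int m)
{
    double sum = 0.0;
    int j;
    for (j = 0; j < m; j++)
        sum += d[j] * x[i-1-j];
    return (long) sum;               /* truncation = "integer part of" */
}

/* Write one residual byte per point beyond the first m, assuming every
 * residual fits in ±127; otherwise give up and try a different M. */
void write_residuals(FILE *fp, const long *x, int n,
                     const double *d, int m)
{
    int i;
    for (i = m; i < n; i++) {
        long r = x[i] - predict_int(x, i, d, m);
        if (r < -127 || r > 127) {
            fprintf(stderr, "residual %ld out of range; try another M\n", r);
            exit(1);
        }
        putc((int)(r & 0xff), fp);   /* 1 byte instead of 4: factor of 4 */
    }
}

/* Read residuals back; x[0..m-1] must already hold the stored points. */
void read_residuals(FILE *fp, long *x, int n, const double *d, int m)
{
    int i;
    for (i = m; i < n; i++) {
        long r = (signed char) getc(fp);     /* sign-extend the byte */
        x[i] = predict_int(x, i, d, m) + r;  /* exact: all integers  */
    }
}

Because the prediction is truncated identically on both sides and the residual is an exact integer, predict_int plus the stored residual reproduces the quantized data bit for bit, which is the point of the next paragraph.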
Notice that the LP coefficients are computed using the quantized data, and that the discrepancy is also quantized, i.e., quantization is done both outside and inside the LPC loop. If you are careful in following this prescription, then, apart from the initial quantization of the data, you will not introduce even a single bit of roundoff error into the compression-reconstruction process: While the evaluation of the sum in (13.6.11) may have roundoff errors, the residual that you store is the value which, when added back to the sum, gives exactly the original (quantized) data value. Notice also that you do not need to massage the LP coefficients for stability; by adding the residual back in to each point, you never depart from the original data, so instabilities cannot grow. There is therefore no need for fixrts, above.

Look at §20.4 to learn about Huffman coding, which will further compress the residuals by taking advantage of the fact that smaller values of discrepancy will occur more often than larger values. A very primitive version of Huffman coding would be this: If most of the discrepancies are in the range ±127, but an occasional one is outside, then reserve the value 127 to mean “out of range,” and then record on the file (immediately following the 127) a full-word value of the out-of-range discrepancy. §20.4 explains how to do much better.
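The escape scheme just described might be coded as follows, continuing the sketch above. We take “full-word” to mean a 4-byte quantity written high byte first; that byte order, like the function names put_residual and get_residual, is our assumption, not the book's specification.

#include <stdio.h>

/* Put one residual: a single byte when it fits, else the reserved
 * escape byte 127 followed by the value as four bytes. */
void put_residual(FILE *fp, long r)
{
    if (r >= -128 && r <= 126) {             /* 127 itself is reserved */
        putc((int)(r & 0xff), fp);           /* the common one-byte case */
    } else {
        unsigned long u = (unsigned long) r; /* two's complement bytes */
        putc(127, fp);                       /* "out of range" marker */
        putc((int)((u >> 24) & 0xff), fp);   /* full word, high byte first */
        putc((int)((u >> 16) & 0xff), fp);
        putc((int)((u >>  8) & 0xff), fp);
        putc((int)( u        & 0xff), fp);
    }
}

/* Get one residual, undoing the escape if present. */
long get_residual(FILE *fp)
{
    long r = (signed char) getc(fp);         /* sign-extend the byte */
    if (r == 127) {                          /* escaped: a full word follows */
        long hi = (signed char) getc(fp);    /* high byte carries the sign */
        int  b2 = getc(fp);
        int  b1 = getc(fp);
        int  b0 = getc(fp);
        r = ((hi * 256 + b2) * 256 + b1) * 256 + b0;
    }
    return r;
}

Since an escaped residual costs five bytes instead of one, the scheme pays off only when out-of-range values are rare, which is exactly the premise; the Huffman coding of §20.4 exploits the same skewed distribution systematically.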