min J ⇒ w* = R⁻¹p

FIGURE 20.5 Computing analytically optimal weights for the linear PE.

    w_i(k+1) = w_i(k) − η∇J(k),   ∇J = ∂J/∂w_i    (20.2)

where η is a small constant called the step size, and ∇J(k) is the gradient of the performance surface at iteration k. Bernard Widrow in the late 1960s proposed a very efficient estimate to compute the gradient at each iteration,

    ∇J(k) = ∂J̃(k)/∂w_i = ∂/∂w_i (½ ε²(k)) = −ε(k) x_i(k)    (20.3)

which, when substituted into Eq. (20.2), produces the so-called LMS algorithm. He showed that the LMS converges to the analytic solution provided the step size η is small enough. Since it is a steepest-descent procedure, the largest step size is limited by the inverse of the largest eigenvalue of the input autocorrelation matrix. The larger the step size (below this limit), the faster the convergence, but the final values will "rattle" around the optimal value in a basin whose radius is proportional to the step size. Hence, there is a fundamental trade-off between speed of convergence and accuracy of the final weight values. One great appeal of the LMS algorithm is that it is very efficient (just one multiplication per weight) and requires only local quantities to be computed.

The LMS algorithm can be framed as a computation of partial derivatives of the cost with respect to the unknowns, i.e., the weight values. In fact, writing the chain rule

    ∂J/∂w_i = (∂J/∂y)(∂y/∂w_i) = ∂/∂y [Σ_k (d_k − y_k)²] · ∂/∂w_i (Σ_j w_j x_j) = −2 ε x_i    (20.4)

we obtain the LMS algorithm for the linear PE. What happens if the PE is nonlinear? If the nonlinearity is differentiable (smooth), we can still apply the same method, because the chain rule prescribes that (Fig. 20.6)

    ∂J/∂w_i = (∂J/∂y)(∂y/∂net)(∂net/∂w_i),   f′_logistic(net) = f(1 − f),   f′_tanh(net) = 0.5(1 − f²)

FIGURE 20.6 How to extend LMS to nonlinear PEs with the chain rule.
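The following is a minimal NumPy sketch of the two updates just described: the LMS rule for a linear PE and its chain-rule (delta-rule) extension to a logistic PE. The function names (lms_linear, lms_logistic), the step sizes, and the synthetic data are illustrative choices introduced here, not definitions from the text; the step size η must simply stay below the stability limit set by the largest eigenvalue of the input autocorrelation matrix, as discussed above.

```python
import numpy as np


def lms_linear(x, d, eta=0.01, epochs=50):
    """LMS for a linear PE: w(k+1) = w(k) + eta * e(k) * x(k)  (Eqs. 20.2-20.3).

    x : (N, D) inputs, d : (N,) desired responses.  Each update needs only the
    local quantities e(k) and x(k) -- one multiplication per weight.
    """
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for xk, dk in zip(x, d):
            e = dk - w @ xk          # instantaneous error eps(k)
            w += eta * e * xk        # step against the gradient estimate -e*x
    return w


def lms_logistic(x, d, eta=0.5, epochs=100):
    """Delta rule for a logistic PE: the chain rule multiplies the same local
    update by f'(net) = f(1 - f)."""
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for xk, dk in zip(x, d):
            y = 1.0 / (1.0 + np.exp(-(w @ xk)))   # logistic output f(net)
            e = dk - y
            w += eta * e * y * (1.0 - y) * xk     # e * f'(net) * x
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                 # unit-power inputs
    w_true = np.array([1.5, -2.0, 0.5])

    # Linear PE: for a small step size, LMS "rattles" around w_true.
    d_lin = X @ w_true + 0.01 * rng.normal(size=500)
    print("LMS weights :", lms_linear(X, d_lin))
    print("true weights:", w_true)

    # Nonlinear PE: delta rule on 0/1 targets generated from the same direction.
    d_cls = (X @ w_true > 0).astype(float)
    w_nl = lms_logistic(X, d_cls)
    acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w_nl))) > 0.5) == d_cls)
    print("logistic PE training accuracy:", acc)
```

With unit-variance inputs the largest eigenvalue of the input autocorrelation matrix is close to one, so a step size of 0.01 sits well inside the stability region while keeping the final weights close to the analytic solution.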