FIGURE 20.9 A focused TLN.
FIGURE 20.10 Tap delay line memory.
FIGURE 20.11 Memory by feedback (context PE).

It is possible to combine the advantages of memory by feedback with those of memory by delays in linear systems called dispersive delay lines. The most studied of these memories is a cascade of low-pass functions called the gamma memory [de Vries and Principe, 1992]. The gamma memory has a free parameter μ that controls and decouples memory depth from resolution of the memory. Memory depth D is defined as the first moment of the impulse response from the input to the last tap K, while memory resolution R is the number of taps per unit time. For the gamma memory D = K/μ and R = μ; i.e., changing μ modifies the memory depth and resolution inversely. This recursive parameter μ can be adapted with the output MSE like the other network parameters; i.e., the ANN is able to choose the best memory depth to minimize the output error, which is unlike the tap delay memory.

Training-Focused TLN Architectures

The appeal of the focused architecture is that the MLP weights can still be adapted with back-propagation. However, the input/output mapping produced by these networks is static. The input memory layer simply brings in past input information to establish the value of the mapping.
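As a concrete illustration of the focused arrangement, the following sketch (in Python, not part of the original text) feeds a K-tap delay line into a one-hidden-layer MLP and trains only the MLP weights with ordinary back-propagation, precisely because the delay line contributes no adaptive parameters. The tap count, hidden-layer size, tanh nonlinearity, learning rate, and the toy one-step prediction task are illustrative assumptions, not values prescribed by the text.

import numpy as np

# Minimal focused TLN sketch: a K-tap delay line (static memory, no adaptive
# parameters) feeding a one-hidden-layer MLP trained by plain back-propagation.
# Sizes, nonlinearity, and learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
K, H = 5, 8                                   # number of taps, hidden units (assumed)
W1 = rng.normal(scale=0.1, size=(H, K + 1))   # hidden weights: current sample + K delayed
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(1, H))       # linear output layer
b2 = np.zeros(1)
lr = 0.01

def taps(x, n, K):
    """Tap delay line output at time n: [x(n), x(n-1), ..., x(n-K)]."""
    return np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(K + 1)])

# Toy one-step prediction task (NMA-style modeling from past inputs only).
x = np.sin(0.2 * np.arange(500)) + 0.05 * rng.normal(size=500)

for epoch in range(50):
    for n in range(K, len(x) - 1):
        u = taps(x, n, K)                     # memory layer: present and past inputs
        h = np.tanh(W1 @ u + b1)              # static MLP: hidden layer
        y = W2 @ h + b2                       # linear output unit
        e = x[n + 1] - y                      # one-step prediction error
        # Back-propagation: only the MLP weights adapt; the delay line has none.
        delta2 = -e                                   # dLoss/dy for Loss = 0.5*e^2
        delta1 = (W2.T @ delta2) * (1.0 - h ** 2)     # error reaching the hidden layer
        W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
        W1 -= lr * np.outer(delta1, u); b1 -= lr * delta1

The same structure becomes an NMA system identifier when the desired signal x(n+1) is replaced by the output of an unknown plant driven by x(n).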
As we know in engineering, the size of the memory is fundamental to identify, for instance, an unknown plant or to perform prediction with a small error. But note that with the focused TLN the models for system identification become nonlinear (i.e., nonlinear moving average, NMA). When the tap delay implements the short-term memory, straight back-propagation can be utilized, since the only adaptive parameters are the MLP weights. When the gamma memory (or the context PE) is utilized, the recursive parameter is either adapted in a total adaptive framework or preset by some external consideration. The equations to adapt the context PE and the gamma memory are shown in Figs. 20.11 and 20.12, respectively. For the context PE, δ(n) refers to the total error that is back-propagated from the MLP and that reaches the dual context PE.
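The gamma memory itself can be sketched as a cascade of identical first-order low-pass stages sharing the single recursive parameter μ; the recursion below is the commonly cited form from de Vries and Principe [1992], with illustrative variable names. The adaptation equations for μ (Fig. 20.12) are not reproduced here; adapting μ amounts to back-propagating the output MSE through this recursion.

import numpy as np

# Gamma memory sketch: a cascade of K identical first-order low-pass stages
# (leaky integrators) with one shared recursive parameter mu:
#   g_0(n) = x(n)
#   g_k(n) = (1 - mu) * g_k(n-1) + mu * g_{k-1}(n-1),   k = 1..K
# Memory depth D = K/mu (first moment of the tap-K impulse response) and
# resolution R = mu taps per unit time, as stated in the text.
def gamma_memory(x, K, mu):
    """Return an array of shape (len(x), K + 1) holding taps g_0..g_K over time."""
    g = np.zeros(K + 1)                        # tap states at time n-1
    out = np.zeros((len(x), K + 1))
    for n, xn in enumerate(x):
        g_prev = g.copy()
        g[0] = xn                              # tap 0 is the raw input
        for k in range(1, K + 1):              # each tap low-pass filters the previous one
            g[k] = (1.0 - mu) * g_prev[k] + mu * g_prev[k - 1]
        out[n] = g
    return out

# Example: an impulse through a 4-tap gamma memory with mu = 0.5
# (memory depth D = K/mu = 8 samples); the taps feed the MLP exactly as the
# delay-line taps do in the focused TLN sketch above.
impulse = np.zeros(40); impulse[0] = 1.0
taps_over_time = gamma_memory(impulse, K=4, mu=0.5)

Setting μ = 1 reduces each stage to a pure delay and recovers the tap delay line of Fig. 20.10, while smaller μ trades resolution for a longer memory depth D = K/μ.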