FIGURE 20.8 Radial Basis Function (RBF) network (output y = Σi wi φ(x − xi)).

The RBF network is also a layered net, with the hidden layer built from Gaussian kernels and a linear (or nonlinear) output layer (Fig. 20.8). Training of the RBF network is normally done in two stages [Haykin, 1994]: first, the centers xi are adaptively placed in the input space using competitive learning or k-means clustering [Bishop, 1995], which are unsupervised procedures. Competitive learning is explained later in the chapter. The variance of each Gaussian is chosen as a percentage (30 to 50%) of the distance to the nearest center. The goal is to cover the input data distribution adequately. Once the RBF centers are located, the second-layer weights wi are trained using the LMS procedure.

RBF networks are easy to work with, they train very fast, and they have shown good properties for both function approximation and classification. The problem is that they require many Gaussian kernels in high-dimensional spaces.
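As a concrete illustration of this two-stage procedure, the sketch below (Python/NumPy) places the centers with k-means, sets each Gaussian width to a fraction of the distance to the nearest center, and then trains the output weights with the LMS rule. The function names, the number of centers, the width fraction, and the LMS step size are illustrative assumptions, not values from the text.

```python
import numpy as np

def kmeans_centers(x, n_centers, n_iter=50, seed=0):
    """Stage 1: unsupervised placement of the RBF centers with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), n_centers, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest center, then move each center to the mean
        labels = np.argmin(((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean(axis=0)
    return centers

def rbf_hidden(x, centers, sigmas):
    """Gaussian hidden layer: phi_i(x) = exp(-||x - x_i||^2 / (2 sigma_i^2))."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))

def train_rbf(x, d, n_centers=10, width_frac=0.5, lms_step=0.05, epochs=100):
    # Stage 1: centers by k-means; widths as a fraction (here 50%) of the
    # distance to the nearest neighboring center.
    centers = kmeans_centers(x, n_centers)
    cdist = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(cdist, np.inf)
    sigmas = width_frac * cdist.min(axis=1)
    # Stage 2: linear output weights trained with the LMS (stochastic gradient) rule.
    phi = rbf_hidden(x, centers, sigmas)
    w = np.zeros(n_centers)
    for _ in range(epochs):
        for n in range(len(x)):
            err = d[n] - phi[n] @ w        # instantaneous error
            w += lms_step * err * phi[n]   # LMS weight update
    return centers, sigmas, w

# Example: approximate a 1-D function d = sin(2x) from samples.
x = np.linspace(-3, 3, 200)[:, None]
d = np.sin(2 * x[:, 0])
centers, sigmas, w = train_rbf(x, d)
y = rbf_hidden(x, centers, sigmas) @ w     # network output: sum_i w_i phi_i(x)
```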
20.4 Time-Lagged Networks

The MLP is the most common neural network topology, but it can only handle instantaneous information, since the system has no memory and is feedforward. In engineering, the processing of signals that exist in time requires systems with memory, i.e., linear filters. Another way to implement memory is to use feedback, which gives rise to recurrent networks. Fully recurrent networks are difficult to train and to stabilize, so it is preferable to develop topologies based on MLPs in which explicit subsystems that store the past information are included. These subsystems are called short-term memory structures [de Vries and Principe, 1992]. The combination of an MLP with short-term memory structures is called a time-lagged network (TLN). The memory structures may themselves be recurrent, but the feedback is local, so stability is still easy to guarantee. Here, we will cover just one TLN topology, called focused, where the memory is at the input layer. The most general TLNs have memory added anywhere in the network, but they require other, more involved training strategies (BPTT [Haykin, 1994]). The interested reader is referred to de Vries and Principe [1992] for further details. The function of the short-term memory in the focused TLN is to represent the past of the input signal, while the nonlinear PEs provide the mapping as in the MLP (Fig. 20.9).

Memory Structures

The simplest memory structure is built from a tap delay line (Fig. 20.10). The memory by delays is a single-input, multiple-output system that has no free parameters except its size K. The tap delay memory is the memory utilized in the time-delay neural network (TDNN), which has been used successfully in speech recognition and system identification [Kung, 1993].

A different mechanism for linear memory is feedback (Fig. 20.11). Feedback allows the system to remember past events because of the exponential decay of the response. This memory has limited resolution because of the low-pass filtering required for long memories. But notice that, unlike the memory by delay, memory by feedback provides the learning system with a free parameter μ that controls the length of the memory. Memory by feedback has been used in Elman and Jordan networks [Haykin, 1994].
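A minimal sketch of the two memory structures follows (Python/NumPy). The tap delay line simply holds the last K+1 input samples, while the feedback memory is assumed here to be the common first-order context unit y(n) = μ·y(n−1) + x(n), whose impulse response decays as μ^n; the function names and the impulse-response demonstration are illustrative, not from the text.

```python
import numpy as np

def tap_delay_memory(x, K):
    """Memory by delays: at time n the outputs are [x(n), x(n-1), ..., x(n-K)].
    Single input, K+1 outputs, and no free parameters other than the size K."""
    x = np.asarray(x, dtype=float)
    taps = np.zeros((len(x), K + 1))
    for n in range(len(x)):
        for k in range(K + 1):
            taps[n, k] = x[n - k] if n - k >= 0 else 0.0
    return taps

def feedback_memory(x, mu):
    """Memory by feedback (context unit): y(n) = mu * y(n-1) + x(n).
    The free parameter mu (0 < mu < 1) sets the exponential decay, i.e., the
    effective length of the memory."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        prev = mu * prev + x[n]
        y[n] = prev
    return y

# A unit impulse shows the difference: the tap delay stores exactly K+1 samples,
# while the feedback memory decays exponentially (1, mu, mu^2, ...).
impulse = np.zeros(10)
impulse[0] = 1.0
print(tap_delay_memory(impulse, K=3)[:5])
print(feedback_memory(impulse, mu=0.7)[:5])
```

In a focused TLN, the outputs of such a memory stage form the input vector that is then fed to an ordinary MLP, which provides the nonlinear mapping.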