FIGURE 20.2 MLP with one hidden layer (d-k-m): input layer, hidden layer, and output layer.

FIGURE 20.3 A PE and the most common nonlinearities: the adder computes net = Σ_i w_i x_i + b (bias), and f(net) is the tanh, f(net) = tanh(α net); the logistic, f(net) = 1/(1 + exp(−α net)); or the threshold, f(net) = 1 if net > 0, 0 otherwise.

It is clear that multilayer perceptrons (MLPs), the back-propagation algorithm and its extensions — time-lagged networks (TLN) and back-propagation through time (BPTT), respectively — hold a prominent position in ANN technology. It is therefore only natural to spend most of our overview presenting the theory and tools of back-propagation learning. It is also important to notice that Hebbian learning (and its extension, the Oja rule) is also a very useful (and biologically plausible) learning mechanism. It is an unsupervised learning method, since there is no need to specify the desired or target response to the ANN.

20.2 Multilayer Perceptrons

Multilayer perceptrons are a layered arrangement of nonlinear PEs, as shown in Fig. 20.2. The layer that receives the input is called the input layer, and the layer that produces the output is the output layer. The layers that do not have direct access to the external world are called hidden layers. A layered network with just the input and output layers is called the perceptron. Each connection between PEs is weighted by a scalar, w_i, called a weight, which is adapted during learning. The PEs in the MLP are composed of an adder followed by a smooth saturating nonlinearity of the sigmoid type (Fig. 20.3). The most common saturating nonlinearities are the logistic function and the hyperbolic tangent; the threshold is used in other nets.
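The PE of Fig. 20.3 is simple enough to state directly in code. The following is a minimal sketch, in Python with NumPy, of a single PE: the adder forms net = Σ_i w_i x_i + b and the result is passed through one of the nonlinearities named above. The function name pe_output, the slope parameter alpha, and the sample numbers are illustrative assumptions, not anything fixed by the text.

import numpy as np

def pe_output(x, w, b, nonlinearity="tanh", alpha=1.0):
    # Adder: weighted sum of the inputs plus the bias.
    net = np.dot(w, x) + b
    if nonlinearity == "tanh":
        return np.tanh(alpha * net)                # hyperbolic tangent
    if nonlinearity == "logistic":
        return 1.0 / (1.0 + np.exp(-alpha * net))  # logistic function
    return 1.0 if net > 0 else 0.0                 # threshold (used in other nets)

# Illustrative three-input PE.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
print(pe_output(x, w, b=0.1, nonlinearity="logistic"))

Note that the tanh saturates at -1 and +1 while the logistic saturates at 0 and 1, which makes the logistic a natural fit when the targets code classes as 0 and 1.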
The importance of the MLP is that it is a universal mapper (it implements arbitrary input/output maps) when the topology has at least two hidden layers and a sufficient number of PEs [Haykin, 1994]. Even MLPs with a single hidden layer are able to approximate continuous input/output maps. This means that we will rarely need to choose topologies with more than two hidden layers. But these are existence proofs, so the issue that we must solve as engineers is to choose how many layers and how many PEs in each layer are required to produce good results.

Many problems in engineering can be thought of in terms of a transformation of an input space, containing the input, to an output space where the desired response exists. For instance, dividing data into classes can be thought of as transforming the input into 0 and 1 responses that will code the classes [Bishop, 1995]. Likewise, identification of an unknown system can also be framed as a mapping (function approximation) from the input to the system output [Kung, 1993]. The MLP is highly recommended for these applications.
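To make the d-k-m topology of Fig. 20.2 concrete, the sketch below stacks the same PEs into one hidden layer and one output layer. The layer sizes, the choice of tanh at both layers, and the random initial weights are assumptions for illustration; in an actual MLP the weights are adapted during learning rather than fixed at random.

import numpy as np

rng = np.random.default_rng(0)
d, k, m = 4, 3, 2   # input, hidden, and output layer sizes (the d-k-m topology)

# Placeholder weights and biases; learning would adapt these values.
W1, b1 = rng.normal(size=(k, d)), np.zeros(k)
W2, b2 = rng.normal(size=(m, k)), np.zeros(m)

def mlp_forward(x):
    # Each layer: adder (matrix-vector product plus bias) followed by tanh.
    h = np.tanh(W1 @ x + b1)   # hidden layer of nonlinear PEs
    return np.tanh(W2 @ h + b2)  # output layer

print(mlp_forward(rng.normal(size=d)))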