$$ y_j = f_j\left( \sum_{i=1}^{n} w_{ij}\, x_i^{(2)} - \theta_j \right), \qquad j = 1, 2, \ldots, m, $$

where $x_i^{(2)}$ denotes the output of the $i$th neuron of the second hidden layer. The parameters (scalar real numbers) $w_{ij}^{(1)}$ are called the weights of the first hidden layer. The $w_{ij}^{(2)}$ are called the weights of the second hidden layer. The $w_{ij}$ are called the weights of the output layer. The parameters $\theta_j^{(1)}$ are called the biases of the first hidden layer. The parameters $\theta_j^{(2)}$ are called the biases of the second hidden layer, and the $\theta_j$ are the biases of the output layer. The functions $f_j$ (for the output layer), $f_j^{(2)}$ (for the second hidden layer), and $f_j^{(1)}$ (for the first hidden layer) represent the activation functions. The activation functions can be different for each neuron in the multilayer perceptron (e.g., the first layer could have one type of sigmoid, while the next two layers could have different sigmoid functions or threshold functions). This completes the definition of the multilayer perceptron. Next, we will introduce the radial basis function neural network. After that, we explain how both of these neural networks relate to the other topics covered in this book.
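As a concrete illustration, here is a minimal NumPy sketch of the forward pass of such a multilayer perceptron. The layer sizes, the choice of the logistic sigmoid for every neuron, and all variable names are our own assumptions; the text fixes only the form $f(\sum_i w_{ij} x_i - \theta_j)$, with the bias subtracted inside the activation.

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid; the text allows a different activation
    # per neuron/layer, so this is just one possible choice.
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, th1, W2, th2, W3, th3):
    """Forward pass of a two-hidden-layer perceptron.

    Following the text's convention, each neuron computes
    f(sum_i w_ij * x_i - theta_j), i.e. the bias theta_j is subtracted.
    W1, W2, W3 hold the weights w^(1)_ij, w^(2)_ij, w_ij with shape
    (number of inputs, number of neurons); th1, th2, th3 are the
    bias vectors theta^(1), theta^(2), theta.
    """
    x1 = sigmoid(x @ W1 - th1)   # first hidden layer outputs
    x2 = sigmoid(x1 @ W2 - th2)  # second hidden layer outputs x_i^(2)
    y = sigmoid(x2 @ W3 - th3)   # output layer, y_j for j = 1..m
    return y
```

Because every layer here uses the logistic sigmoid, each output $y_j$ lies strictly between 0 and 1; replacing `sigmoid` in the output layer with the identity would give an unbounded output instead.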
5.3.2 Radial Basis Function Neural Networks

A locally tuned, overlapping receptive field is found in parts of the cerebral cortex, in the visual cortex, and in other parts of the brain. The radial basis function neural network model is based on these biological systems.

A radial basis function neural network is shown in Figure 5.3. There, the inputs are $x_i$, $i = 1, 2, \ldots, n$, and the output is $y = f(x)$, where $f$ represents the processing by the entire radial basis function neural network. Let $x = [x_1, x_2, \ldots, x_n]^T$. The input to the $i$th receptive field unit is $x$, and its output is denoted by $R_i(x)$. It has what is called a "strength," which we denote by $y_i$. Assume that there are $M$ receptive field units. Hence, from Figure 5.3,

$$ y = f(x) = \sum_{i=1}^{M} y_i R_i(x) \tag{5.3} $$

is the output of the radial basis function neural network.
Figure 5.3 Radial basis function neural network model.

There are several possible choices for the "receptive field units" $R_i(x)$:

1. We could choose

$$ R_i(x) = \exp\left( -\frac{\left\| x - c^i \right\|^2}{\left(\sigma^i\right)^2} \right) $$

where $c^i = [c_1^i, c_2^i, \ldots, c_n^i]^T$, $\sigma^i$ is a scalar, and if $z$ is a vector, then $\|z\| = \sqrt{z^T z}$.
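Equation (5.3) with these Gaussian receptive field units can be sketched directly in NumPy. This is a minimal illustration, not an implementation from the text; the function and argument names are our own.

```python
import numpy as np

def rbf_output(x, centers, sigmas, strengths):
    """Output of the radial basis function network of Eq. (5.3):

        y = f(x) = sum_i y_i * R_i(x),

    with Gaussian receptive field units
        R_i(x) = exp(-||x - c^i||^2 / (sigma^i)^2).

    centers:   (M, n) array whose rows are the centers c^i
    sigmas:    (M,) array of the scalar spreads sigma^i
    strengths: (M,) array of the strengths y_i
    """
    # Squared Euclidean distance from x to each receptive field center
    d2 = np.sum((centers - x) ** 2, axis=1)
    R = np.exp(-d2 / sigmas ** 2)   # receptive field activations R_i(x)
    return np.dot(strengths, R)     # weighted sum y = f(x)
```

Note the "locally tuned" behavior the text describes: $R_i(x)$ is 1 when $x$ lies exactly at the center $c^i$ and decays toward 0 as $x$ moves away, at a rate set by $\sigma^i$, so each unit responds mainly to inputs near its own center.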