covered here. For instance, we do not discuss associative memories and Hopfield neural networks, recurrent networks, Boltzmann machines, or Hebbian or competitive learning. We refer the reader to Section 5.8, For Further Study, for references that cover these topics in detail.

5.3.1 Multilayer Perceptrons

The multilayer perceptron is a feed-forward neural network (i.e., it does not use past values of its outputs or other internal variables to compute its current output). It is composed of an interconnection of basic neuron processing units.

The Neuron

For a single neuron, suppose that we use x_i, i = 1, 2, ..., n, to denote its inputs and suppose that it has a single output y. Figure 5.1 shows the neuron. Such a neuron first forms a weighted sum of the inputs

z = \left( \sum_{i=1}^{n} \omega_i x_i \right) - \theta

where the \omega_i are the interconnection "weights" and \theta is the "bias" for the neuron (these parameters model the interconnections between the cell bodies in the neurons of a biological neural network). The signal z represents a signal in the biological neuron, and the processing that the neuron performs on this signal is represented with an "activation function." This activation function is represented with a function f, and the output that it computes is

y = f(z) = f\left( \sum_{i=1}^{n} \omega_i x_i - \theta \right)    (5.1)

Basically, the neuron model represents the biological neuron that "fires" (turns on) when its inputs are significantly excited (i.e., z is big enough). The manner in which the neuron fires is defined by the activation function f. There are many ways to define the activation function:

• Threshold function: For this type of activation function we have

f(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}

so that once the input signal z is above zero the neuron turns on.

• Sigmoid function: For this type of activation function we have

f(z) = \frac{1}{1 + \exp(-bz)}    (5.2)

so that the input signal z continuously turns on the neuron an increasing amount as it increases (plot the function values against z to convince yourself of this). The parameter b affects the slope of the sigmoid function.
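As a concrete illustration of the neuron of Equation (5.1), the following sketch (not from the text; the weights, bias, and input values are made up for the example) computes the weighted sum z and applies either the threshold activation or the sigmoid activation of Equation (5.2):

```python
import math

def neuron_output(x, w, theta, f):
    """Single-neuron computation of Eq. (5.1): y = f(sum_i w_i*x_i - theta)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return f(z)

def threshold(z):
    """Threshold activation: the neuron turns on (outputs 1) once z >= 0."""
    return 1.0 if z >= 0 else 0.0

def sigmoid(z, b=1.0):
    """Sigmoid activation of Eq. (5.2); the parameter b sets the slope."""
    return 1.0 / (1.0 + math.exp(-b * z))

# Illustrative values (hypothetical, not from the text):
x = [1.0, 0.5]       # inputs x_1, x_2
w = [2.0, -1.0]      # weights omega_1, omega_2
theta = 0.5          # bias
# z = 2.0*1.0 + (-1.0)*0.5 - 0.5 = 1.0
print(neuron_output(x, w, theta, threshold))  # prints 1.0
print(neuron_output(x, w, theta, sigmoid))    # prints sigmoid(1.0) ~ 0.731
```

Passing the activation function in as an argument mirrors the way the text separates the weighted-sum computation from the choice of f.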
There are many functions that take on a shape that is sigmoidal. For instance, one that is often used in neural networks is the hyperbolic tangent function

f(z) = \tanh\left( \frac{z}{2} \right) = \frac{1 - \exp(-z)}{1 + \exp(-z)}

Equation (5.1), with one of the above activation functions, represents the computations made by one neuron in the neural network. Next, we define how we interconnect these neurons to form a neural network, in particular, the multilayer perceptron.
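The two expressions given for the hyperbolic tangent activation are algebraically identical (multiply the numerator and denominator of tanh(z/2) = (e^{z/2} - e^{-z/2})/(e^{z/2} + e^{-z/2}) by e^{-z/2}), and this can be checked numerically. This sketch is not from the text, just a verification of the identity:

```python
import math

def tanh_activation(z):
    """Hyperbolic tangent activation: f(z) = tanh(z/2)."""
    return math.tanh(z / 2.0)

def tanh_via_exp(z):
    """Equivalent exponential form: (1 - exp(-z)) / (1 + exp(-z))."""
    return (1.0 - math.exp(-z)) / (1.0 + math.exp(-z))

# The two forms agree to machine precision over a range of z values.
for z in [-3.0, -0.5, 0.0, 1.0, 4.0]:
    assert abs(tanh_activation(z) - tanh_via_exp(z)) < 1e-12
```

Note also that this activation is just a rescaling of the sigmoid of Equation (5.2) with b = 1: tanh(z/2) = 2/(1 + exp(-z)) - 1, i.e., the sigmoid shifted and stretched to output values in (-1, 1) rather than (0, 1).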