y = f(z) = f\left( \sum_{i=1}^{n} x_i w_i - \theta \right)        (4.1)

Basically, the neuron model represents the biological neuron that "fires" (turns on) when its inputs are significantly excited (i.e., z is big enough). The manner in which the neuron fires is defined by the activation function f. There are many ways to define the activation function:

Threshold function: For this type of activation function we have

    f(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}

so that once the input signal z is above zero the neuron turns on.

Sigmoid function: For this type of activation function we have

    f(z) = \frac{1}{1 + \exp(-bz)}        (4.2)

so that the input signal z continuously turns on the neuron an increasing amount as it increases (plot the function values against z to convince yourself of this). The parameter b affects the slope of the sigmoid function. There are many functions that take on a shape that is sigmoidal. For instance, one that is often used in neural networks is the hyperbolic tangent function

    f(z) = \tanh\left( \frac{z}{2} \right) = \frac{1 - \exp(-z)}{1 + \exp(-z)}

Equation (4.1), with one of the above activation functions, represents the computations made by one neuron in the neural network. Next, we define how we interconnect these neurons to form a neural network, in particular the multilayer perceptron.

Figure 4.1 Single neuron model
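To see (4.1) in action, here is a minimal Python sketch of a single neuron with each of the activation functions above. The function names (neuron_output, threshold, sigmoid, hyperbolic_tangent) and the sample numbers are illustrative choices, not notation from the text.

import math

def threshold(z):
    # Threshold activation: the neuron turns on (outputs 1) once z >= 0.
    return 1.0 if z >= 0 else 0.0

def sigmoid(z, b=1.0):
    # Sigmoid activation (4.2); the parameter b sets the slope of the curve.
    return 1.0 / (1.0 + math.exp(-b * z))

def hyperbolic_tangent(z):
    # tanh(z/2) = (1 - exp(-z)) / (1 + exp(-z)), another sigmoidal shape, ranging over (-1, 1).
    return math.tanh(z / 2.0)

def neuron_output(x, w, theta, f):
    # Equation (4.1): z = sum_i x_i * w_i - theta, then y = f(z).
    z = sum(xi * wi for xi, wi in zip(x, w)) - theta
    return f(z)

# Illustrative inputs, weights, and bias (made-up values).
x = [0.5, -1.0, 2.0]
w = [1.0, 0.25, 0.5]
theta = 0.2
for f in (threshold, sigmoid, hyperbolic_tangent):
    print(f.__name__, neuron_output(x, w, theta, f))

Evaluating sigmoid(z, b) for a few values of b and plotting the results against z shows how b controls the steepness of the transition around z = 0.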
Network of Neurons

The basic structure for the multilayer perceptron is shown in Figure 4.2. There, the circles represent the neurons (weights, bias, and activation function) and the lines represent the connections between the inputs and neurons, and between the neurons in one layer and those in the next layer. This is a three-layer perceptron since there are three stages of neural processing between the inputs and outputs. More layers can be added by concatenating additional "hidden" layers of neurons. The multilayer perceptron has inputs x_i, i = 1, 2, ..., n, and outputs y_j, j = 1, 2, ..., m. The number of neurons in the first hidden layer (see Figure 4.2) is n_1. In the second hidden layer there are n_2 neurons, and in the output layer there are m neurons. Hence, in an N-layer perceptron there are n_i neurons in the i-th hidden layer, i = 1, 2, ..., N - 1.
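As a rough sketch of the structure just described (not the text's own notation, which is developed with Figure 4.2), the following Python code passes the inputs through a multilayer perceptron by applying (4.1) neuron by neuron in each layer. The names mlp_forward and layer_output and the example weights are assumptions made for illustration.

import math

def sigmoid(z, b=1.0):
    # Sigmoid activation (4.2).
    return 1.0 / (1.0 + math.exp(-b * z))

def layer_output(inputs, weights, biases, f):
    # One layer: neuron j computes f(sum_i inputs[i] * weights[j][i] - biases[j]), as in (4.1).
    return [f(sum(x * w for x, w in zip(inputs, weights[j])) - biases[j])
            for j in range(len(weights))]

def mlp_forward(x, layers, f=sigmoid):
    # layers is a list of (weights, biases) pairs: the first hidden layer (n_1 neurons),
    # the second hidden layer (n_2 neurons), ..., and the output layer (m neurons).
    # The outputs of each layer become the inputs of the next.
    for weights, biases in layers:
        x = layer_output(x, weights, biases, f)
    return x

# Made-up example: n = 2 inputs, hidden layers with n_1 = 3 and n_2 = 2 neurons, m = 1 output.
layers = [
    ([[0.5, -0.3], [0.8, 0.2], [-0.4, 0.9]], [0.1, 0.0, -0.2]),  # first hidden layer
    ([[0.3, -0.7, 0.5], [0.6, 0.1, -0.2]], [0.05, 0.0]),         # second hidden layer
    ([[1.0, -1.0]], [0.0]),                                      # output layer
]
print(mlp_forward([0.4, -0.6], layers))

A three-layer perceptron as in Figure 4.2 corresponds to a layers list with exactly three entries; adding further hidden layers simply means appending more (weights, biases) pairs.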