Learning algorithm for the single-layer perceptron (单层感知器学习算法):

Step 1: Initialize the connection weights and threshold. Set each weight w_i (i = 1, ..., n) to a small nonzero random value, and set θ to its initial value; w_i(t) denotes the connection weight of the i-th input at time t.

Step 2: Input a training sample X = (x_1(t), x_2(t), ..., x_n(t)) and its expected output d(t).

Step 3: Compute the actual output of the network:
    y(t) = f( Σ_{i=1}^{n} w_i(t) x_i(t) − θ )

Step 4: Compute the difference between the actual output and the expected output:
    DEL = d(t) − y(t)
If |DEL| < ε (where ε is a very small positive number), training is finished; otherwise go to Step 5.

Step 5: Adjust the connection weights according to the following formula:
    w_i(t+1) = w_i(t) + η (d(t) − y(t)) x_i(t),  i = 1, ..., n
where 0 < η ≤ 1 is an incremental factor, called the learning rate, which controls the speed of adjustment. Usually the value of η should be neither too large nor too small: if it is too large, it harms the convergence of w_i(t); if it is too small, it makes the convergence of w_i(t) slower.

Step 6: Go to Step 2.
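The six steps above can be sketched as a short Python program. This is a minimal illustration, not from the original courseware: the step activation function, the AND-gate training data, and the specific values η = 0.1, ε = 1e-6, and the 100-epoch cap are all illustrative assumptions.

```python
# Sketch of the single-layer perceptron learning algorithm described above.
# Assumed choices (not from the source): step activation, eta=0.1, epsilon=1e-6.
import random

def step(s):
    """Step activation: fires 1 when the weighted sum reaches the threshold."""
    return 1 if s >= 0 else 0

def train_perceptron(samples, eta=0.1, epsilon=1e-6, max_epochs=100):
    n = len(samples[0][0])
    # Step 1: small nonzero random initial weights and an initial threshold.
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    theta = 0.0
    for _ in range(max_epochs):
        converged = True
        for x, d in samples:                     # Step 2: present sample and d(t)
            # Step 3: y(t) = f( sum_i w_i(t) x_i(t) - theta )
            y = step(sum(wi * xi for wi, xi in zip(w, x)) - theta)
            delta = d - y                        # Step 4: DEL = d(t) - y(t)
            if abs(delta) > epsilon:
                converged = False
                # Step 5: w_i(t+1) = w_i(t) + eta * (d(t) - y(t)) * x_i(t)
                w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
                theta -= eta * delta             # threshold adjusted like a bias
        if converged:                            # Step 4's stopping condition held
            break                                # for every sample in the epoch
    return w, theta                              # Step 6 is the epoch loop itself

# Usage: learn the logical AND function (linearly separable, so the
# perceptron convergence theorem guarantees termination).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, theta = train_perceptron(data)
```

Note that the threshold θ is updated alongside the weights by treating −θ as an extra bias weight, a common convention the slide leaves implicit.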