FIGURE 20.12 Gamma memory (dispersive delay line): y_k(n) = (1 - μ) y_k(n - 1) + μ y_{k-1}(n - 1).

20.5 Hebbian Learning and Principal Component Analysis Networks

Hebbian Learning

Hebbian learning is an unsupervised learning rule that captures similarity between an input and an output through correlation. To adapt a weight w_i using Hebbian learning we adjust the weights according to Δw_i = η x_i y, or, in equation form [Haykin, 1994],

    w_i(n + 1) = w_i(n) + η x_i(n) y(n)                    (20.12)

where η is the step size, x_i is the ith input, and y is the PE output. The output of the single PE is an inner product between the input and the weight vector (formula in Fig. 20.13). It measures the similarity between the two vectors; i.e., if the input is close to the weight vector the output y is large, otherwise it is small. The weights are computed by an outer product of the input X and output Y, i.e., W = XY^T, where T means transpose. The problem with Hebbian learning is that it is unstable; i.e., the weights will keep on growing with the number of iterations [Haykin, 1994].

FIGURE 20.13 Hebbian PE.

Oja proposed to stabilize the Hebbian rule by normalizing the new weight by its size, which gives the rule [Haykin, 1994]

    w_i(n + 1) = w_i(n) + η y(n) [x_i(n) - y(n) w_i(n)]    (20.13)

The weights now converge to finite values. They still define in the input space the direction where the data cluster has its largest projection, which corresponds to the eigenvector with the largest eigenvalue of the input correlation matrix [Kung, 1993]. The output of the PE provides the largest eigenvalue of the input correlation matrix.
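The following sketch, assuming Python with NumPy (the variable names and the synthetic data are illustrative, not from the text), trains a single linear PE with Oja's rule of Eq. (20.13) and checks that the learned weight vector aligns with the principal eigenvector of the sample correlation matrix.

import numpy as np

# Minimal sketch of Oja's rule (Eq. 20.13) on a single linear PE, y = w^T x.
# Synthetic data and names are illustrative assumptions, not from the text.
rng = np.random.default_rng(0)

# 2-D inputs with unequal variances so there is a clear principal direction.
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

eta = 0.01                     # step size (eta in Eqs. 20.12 and 20.13)
w = rng.normal(size=2) * 0.1   # initial weight vector

for x in X:
    y = w @ x                  # PE output: inner product of input and weights
    # Oja's rule: Hebbian term eta*y*x plus a -eta*y^2*w "forgetting" term
    # that keeps the weight norm finite (Eq. 20.13).
    w += eta * y * (x - y * w)

# The learned weights should align (up to sign) with the eigenvector of the
# input correlation matrix that has the largest eigenvalue.
R = (X.T @ X) / len(X)                    # sample correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
principal = eigvecs[:, np.argmax(eigvals)]

print("learned w (normalized):", w / np.linalg.norm(w))
print("principal eigenvector :", principal)

Replacing the update line with the plain Hebbian rule of Eq. (20.12), w += eta * y * x, makes the weight norm grow without bound, which is the instability the text describes.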