Synaptic Dynamics: Unsupervised Learning Part II
Wang Xiumei
2023/7/9
1. Stochastic unsupervised learning and stochastic equilibrium
2. Signal Hebbian learning
3. Competitive learning
1. Stochastic unsupervised learning and stochastic equilibrium
(1) The noisy random unsupervised learning law
(2) Stochastic equilibrium
(3) The random competitive learning law
(4) The learning vector quantization system
The noisy random unsupervised learning law

The random-signal Hebbian learning law:

$dm_{ij} = -m_{ij}\,dt + S_i(x_i)\,S_j(y_j)\,dt + dB_{ij}$   (4-92)

$\{B_{ij}(t)\}$ denotes a Brownian-motion diffusion process; each term in (4-92) denotes a separate random process.
The noisy random unsupervised learning law

Using the white-noise relationship $n_{ij} = \dfrac{dB_{ij}}{dt}$, we can rewrite (4-92) as

$\dot{m}_{ij} = -m_{ij} + S_i(x_i)\,S_j(y_j) + n_{ij}$   (4-93)

We assume $\{n_{ij}(t)\}$ is a zero-mean, Gaussian white-noise process, and define

$f_{ij}(x, y, M) = -m_{ij} + S_i(x_i)\,S_j(y_j)$
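A minimal simulation sketch, not from the source, of the noisy law (4-93) using a simple Euler step. The logistic signal function and the Gaussian input statistics are illustrative assumptions, not part of the original slides.

```python
# Sketch: Euler discretization of m_ij_dot = -m_ij + S_i(x_i) S_j(y_j) + n_ij  (4-93)
# S(.) and the activation statistics are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def S(u):
    return 1.0 / (1.0 + np.exp(-u))   # logistic signal function (assumed choice)

dt, T, sigma = 0.01, 5000, 0.1
m = 0.0                                # a single synapse m_ij
for _ in range(T):
    x_i, y_j = rng.normal(size=2)                 # illustrative neuronal activations
    n = sigma * rng.normal() / np.sqrt(dt)        # white-noise sample n_ij = dB_ij/dt
    m += dt * (-m + S(x_i) * S(y_j) + n)          # Euler step of (4-93)

print("m_ij after simulation:", m)
```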
The noisy random unsupervised learning law

We obtain a noisy random unsupervised learning law:

$\dot{m}_{ij} = f_{ij}(x, y, M) + n_{ij}$   (4-94)

Lemma:

$E[\dot{m}_{ij}^2] \ge \sigma_{ij}^2 > 0$   (4-95)

where $\sigma_{ij}^2$ is the finite variance of the noise $n_{ij}$. (Proof: p. 132.)
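A hypothetical numeric illustration of the lemma: because $n_{ij}$ is zero-mean and independent of the drift, $E[\dot{m}_{ij}^2] = E[f_{ij}^2] + \sigma_{ij}^2 \ge \sigma_{ij}^2$. The drift distribution below is an arbitrary assumption used only to show the inequality.

```python
# Sketch of (4-95): E[m_dot^2] = E[f^2] + sigma^2 >= sigma^2 for zero-mean noise.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                  # noise standard deviation (assumed)
f = rng.normal(0.3, 0.2, size=100_000)       # illustrative drift samples f_ij(x, y, M)
n = rng.normal(0.0, sigma, size=100_000)     # zero-mean Gaussian white noise n_ij
m_dot = f + n                                # equation (4-94)

print("E[m_dot^2] =", np.mean(m_dot**2))     # empirically exceeds sigma^2
print("sigma^2    =", sigma**2)
```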
The noisy random unsupervised learning law

The lemma implies two points:

1. Stochastic synapses vibrate in equilibrium, and they vibrate at least as much as the driving noise process vibrates.
2. The synaptic vector $m_j$ changes, or vibrates, at every instant $t$, even though its expected value equals a constant: $m_j$ wanders in a Brownian motion about the constant value $E[m_j]$.
Stochastic equilibrium

When the synaptic vector $m_j$ stops moving, synaptic equilibrium occurs in "steady state":

$\dot{m}_j = 0$   (4-101)

The synaptic vector $m_j$ reaches stochastic equilibrium when only the random noise vector $n_j$ changes it:

$\dot{m}_j = n_j$   (4-103)
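A small sketch, under assumed constant signal values, showing that when the deterministic drift $-m_{ij} + S_i S_j$ vanishes, the law (4-94) reduces to pure noise, which is the stochastic-equilibrium condition (4-103).

```python
# Sketch: at equilibrium the drift is zero, so m_dot equals the noise sample alone.
import numpy as np

rng = np.random.default_rng(3)
S_i, S_j = 0.8, 0.6            # assumed constant signal values at equilibrium
m_eq = S_i * S_j               # the drift -m + S_i*S_j vanishes exactly here
drift = -m_eq + S_i * S_j
n = rng.normal(0.0, 0.1)       # a sample of the noise n_ij
m_dot = drift + n              # equation (4-94) evaluated at equilibrium

print("drift at equilibrium:", drift)      # 0.0
print("m_dot equals the noise:", m_dot == n)  # True
```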
The random competitive learning law

The random competitive learning law:

$\dot{m}_j = S_j(y_j)\,[S(x) - m_j] + n_j$

The random linear competitive learning law:

$\dot{m}_j = S_j(y_j)\,[x - m_j] + n_j$
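A minimal sketch, assuming a winner-take-all competitive signal and Gaussian inputs (both assumptions, not from the source), of a discretized random linear competitive learning law: only the winning neuron ($S_j = 1$) moves its synaptic vector toward the input, while losing neurons are perturbed only by noise.

```python
# Sketch: discretized m_j_dot = S_j(y_j) [x - m_j] + n_j with winner-take-all S_j.
import numpy as np

rng = np.random.default_rng(4)
dt, sigma = 0.05, 0.01
M = rng.normal(size=(3, 2))                   # 3 competing neurons, 2-D synaptic vectors

for _ in range(2000):
    x = rng.normal(size=2)                    # illustrative input sample
    winner = np.argmin(np.linalg.norm(M - x, axis=1))
    S = np.zeros(3)
    S[winner] = 1.0                           # competitive (winner-take-all) signal
    noise = sigma * np.sqrt(dt) * rng.normal(size=(3, 2))
    M += dt * S[:, None] * (x - M) + noise    # discretized learning law

print("learned synaptic vectors:\n", M)
```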
The learning vector quantization system

$m_j(k+1) = m_j(k) + c_k\,[x_k - m_j(k)]$   if $x_k \in D_j$
$m_j(k+1) = m_j(k) - c_k\,[x_k - m_j(k)]$   if $x_k \notin D_j$
$m_i(k+1) = m_i(k)$   if $i \ne j$
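A sketch of these LVQ update equations on synthetic two-class data (the data, labels, and learning-rate schedule are assumptions for illustration): the winning prototype moves toward $x_k$ when $x_k$ lies in its decision class $D_j$, away from it otherwise, and all other prototypes stay unchanged.

```python
# Sketch of the LVQ update rules listed above, on illustrative 2-D data.
import numpy as np

rng = np.random.default_rng(5)
prototypes = rng.normal(size=(2, 2))    # one prototype m_j per class
proto_labels = np.array([0, 1])

X = np.vstack([rng.normal(-1.0, 0.3, size=(100, 2)),
               rng.normal(+1.0, 0.3, size=(100, 2))])
labels = np.array([0] * 100 + [1] * 100)

for k in range(len(X)):
    c_k = 0.1 / (1 + k)                                        # decreasing learning rate
    j = np.argmin(np.linalg.norm(prototypes - X[k], axis=1))   # winning prototype
    sign = +1.0 if proto_labels[j] == labels[k] else -1.0      # x_k in D_j or not
    prototypes[j] += sign * c_k * (X[k] - prototypes[j])       # LVQ update
    # prototypes[i] for i != j are left unchanged

print("final prototypes:\n", prototypes)
```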