Principal Component Analysis

Principal component analysis (PCA) is a well-known technique in signal processing that is used to project a signal onto a signal-specific basis. The importance of PCA is that it provides the best linear projection onto a subspace in terms of preserving the signal energy [Haykin, 1994]. Normally, PCA is computed analytically through a singular value decomposition. PCA networks offer an alternative to this computation by providing an iterative implementation that may be preferred for real-time operation in embedded systems.

The PCA network is a one-layer network with linear processing elements (Fig. 20.14). One can extend Oja's rule to many output PEs (fewer than or equal to the number of input PEs) according to the formula shown in Fig. 20.14, which is called Sanger's rule [Haykin, 1994]:

$y_i(n) = \sum_j w_{ij}(n)\, x_j(n), \qquad \Delta w_{ij}(n) = \eta\, y_i(n)\Big[ x_j(n) - \sum_{k=1}^{i} w_{kj}(n)\, y_k(n) \Big]$

The weight matrix rows (which contain the weights connected to the output PEs, in descending order) are the eigenvectors of the input correlation matrix. If we set the number of output PEs equal to M < D, we project the input data onto the M largest principal components, and their output variances will be proportional to the M largest eigenvalues. Note that we are performing an eigendecomposition through an iterative procedure.

FIGURE 20.14 PCA network.
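To make the iterative procedure concrete, the following is a minimal NumPy sketch of a PCA network trained with Sanger's rule. It is not taken from the original text: the function name sanger_pca, the learning rate, the epoch count, and the assumption of zero-mean inputs are illustrative choices.

```python
import numpy as np

def sanger_pca(X, n_components, lr=0.01, n_epochs=100):
    """Sketch of a PCA network trained with Sanger's rule (illustrative, not from the text).

    X            : (n_samples, D) zero-mean input data
    n_components : M, number of output PEs (M <= D)
    Returns W, an (M, D) weight matrix whose rows move toward the eigenvectors
    of the input correlation matrix associated with the M largest eigenvalues.
    """
    n_samples, D = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_components, D))

    for _ in range(n_epochs):
        for x in X:
            y = W @ x                      # y_i(n) = sum_j w_ij(n) x_j(n)
            dW = np.zeros_like(W)
            for i in range(n_components):
                # Sanger's rule: dw_i = lr * y_i * (x - sum_{k<=i} y_k w_k)
                x_hat = W[: i + 1].T @ y[: i + 1]
                dW[i] = lr * y[i] * (x - x_hat)
            W += dW
    return W
```

After training, the rows of W can be checked against the leading eigenvectors obtained from a direct eigendecomposition of the sample correlation matrix, which is the analytic computation the network replaces.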
Associative Memories

Hebbian learning is also the rule used to create associative memories [Zurada, 1992]. The most widely used associative memory implements heteroassociation, where the system associates an input X with a designated output Y, which can be of a different dimension (Fig. 20.15). So, in heteroassociation the signal Y works as the desired response. We can train such a memory using Hebbian learning or LMS, but LMS provides a more efficient encoding of the information. Associative memories differ from conventional computer memories in several respects. First, they are content addressable, and the information is distributed throughout the network, so they are robust to noise in the input. With nonlinear PEs or recurrent connections (as in the famous Hopfield network) [Haykin, 1994], they display the important property of pattern completion; i.e., when the input is distorted or only partially available, the recall can still be perfect.

FIGURE 20.15 Associative memory (heteroassociation).
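As an illustration of the two training options mentioned above, here is a sketch of a linear heteroassociative memory in NumPy. The function names and hyperparameters are hypothetical; the Hebbian version uses the batch outer-product rule, and the LMS version applies the delta rule, which is one way to obtain the more efficient encoding noted in the text.

```python
import numpy as np

def hebbian_heteroassociation(X, Y):
    """Heteroassociative memory trained with the Hebbian (outer-product) rule.

    X : (n_pairs, D_in)  input patterns
    Y : (n_pairs, D_out) desired associated outputs (may have a different dimension)
    Returns W, a (D_out, D_in) weight matrix; recall is y_hat = W @ x.
    """
    # Batch Hebbian rule: W = sum_k y_k x_k^T
    return Y.T @ X

def lms_heteroassociation(X, Y, lr=0.01, n_epochs=200):
    """Same memory trained with the LMS (delta) rule, which corrects the
    cross-talk that the pure Hebbian rule leaves between non-orthogonal inputs."""
    D_in, D_out = X.shape[1], Y.shape[1]
    W = np.zeros((D_out, D_in))
    for _ in range(n_epochs):
        for x, y in zip(X, Y):
            e = y - W @ x             # error between desired and recalled output
            W += lr * np.outer(e, x)  # LMS update
    return W
```

Recall with either weight matrix is simply y_hat = W @ x; with noisy or partial inputs the recalled output degrades gracefully, reflecting the distributed, content-addressable storage described above.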