FIGURE 20.16 Autoassociator.

A special case of associative memories is called the autoassociator (Fig. 20.16), where the training output of size D is equal to the input signal (also of size D) [Kung, 1993]. Note that the hidden layer has fewer PEs (M < D) than the input (the bottleneck layer). W1 = W2^T is enforced. The function of this network is one of encoding, or data reduction. The training of this network (the W2 matrix) is done with LMS. It can be shown that this network also implements PCA with M components, even when the hidden layer is built from nonlinear PEs.
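The tied-weight bottleneck trained with LMS can be sketched in a few lines. The data, sizes (D = 5, M = 2), step size, and epoch count below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch of the autoassociator: D inputs, an M-PE linear bottleneck
# (M < D), tied weights W1 = W2.T, trained by LMS to reproduce the input.
rng = np.random.default_rng(0)
D, M, eta, epochs = 5, 2, 0.01, 20     # illustrative sizes and step size

# Zero-mean synthetic data confined to an M-dimensional subspace.
X = rng.normal(size=(1000, M)) @ rng.normal(size=(M, D))
X -= X.mean(axis=0)

W = rng.normal(scale=0.1, size=(D, M))   # W2; W1 = W2.T is enforced
for _ in range(epochs):
    for x in X:                          # sample-by-sample LMS
        h = W.T @ x                      # bottleneck (hidden) activations
        e = x - W @ h                    # reconstruction error
        W += eta * np.outer(e, h)        # LMS update of the W2 matrix

# The columns of W now span (approximately) the same subspace as the
# top-M principal components of X, illustrating the PCA property.
```

With the weights tied, this LMS update is equivalent to Oja's subspace rule, which is one way to see why the network converges to the principal subspace rather than to arbitrary encodings.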
20.6 Competitive Learning and Kohonen Networks

Competition is a very efficient way to divide the computing resources of a network. Instead of having each output PE more or less sensitive to the full input space, as in the associative memories, in a competitive network each PE specializes into a piece of the input space and represents it [Haykin, 1994]. Competitive networks are linear, single-layer nets (Fig. 20.17). Their functionality is directly related to the competitive learning rule, which belongs to the unsupervised category. Only the PE that has the largest output gets its weights updated. The weights of the winning PE are updated according to the formula in Fig. 20.17, in such a way that they approach the present input. The step size controls exactly how large this adjustment is (see Fig. 20.17).

FIGURE 20.17 Competitive neural network: y_i(n) = 1 for i = i* (the winner), y_i(n) = 0 for all other PEs; w_i*(n+1) = w_i*(n) + η(x(n) − w_i*(n)).

© 2000 by CRC Press LLC
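The winner-take-all rule of Fig. 20.17 can be sketched as follows. The two-cluster data, the step size, and the nearest-weight winner criterion (a common stand-in for "largest output", equivalent when the weights are normalized) are illustrative assumptions:

```python
import numpy as np

# Sketch of the competitive learning rule of Fig. 20.17: only the winning
# PE is updated, and its weight vector moves a fraction eta toward the
# present input; all other weights remain unchanged.
rng = np.random.default_rng(1)
M, eta = 2, 0.05                       # illustrative cluster count, step size

# Two well-separated clusters of 2-D input samples.
X = np.vstack([rng.normal(loc=(-3.0, 0.0), scale=0.3, size=(200, 2)),
               rng.normal(loc=(+3.0, 0.0), scale=0.3, size=(200, 2))])
rng.shuffle(X)

W = X[:M].copy()                       # init from data samples (avoids dead PEs)
for x in X:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # competition
    W[winner] += eta * (x - W[winner]) # w(n+1) = w(n) + eta*(x(n) - w(n))

# Each row of W ends up near the center of mass of one cluster; labeling
# a new sample by its winning PE assigns it to that cluster.
```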
Notice that there is an intrinsic nonlinearity in the learning rule: only the PE that has the largest output (the winner) has its weights updated; all the other weights remain unchanged. This is the mechanism that allows the competitive net PEs to specialize. Competitive networks are used for clustering; i.e., an M-output PE net will seek M clusters in the input space. The weights of each PE will correspond to the center of mass of one of the M clusters of input samples. When a given pattern is shown to the trained net, only one of the outputs will be active, and it can be used to label the sample as belonging to one of the clusters. No more information about the input data is preserved. Competitive learning is one of the fundamental components of the Kohonen self-organizing feature map (SOFM) network, which is also a single-layer network with linear PEs [Haykin, 1994]. Kohonen learning creates annealed competition in the output space by adapting not only the winner PE's weights but also those of its spatial neighbors.
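The neighborhood adaptation that distinguishes Kohonen learning from plain competition can be sketched on a 1-D map. The Gaussian neighborhood, the linearly annealed schedules, and the map size are illustrative assumptions, not specifics from the text:

```python
import numpy as np

# Sketch of Kohonen (SOFM) learning on a 1-D map: the winner AND its
# spatial neighbors are adapted, with a neighborhood width and step size
# that shrink ("anneal") over training.
rng = np.random.default_rng(2)
n_map, eta0, sigma0, epochs = 10, 0.5, 3.0, 30   # illustrative schedules

X = rng.uniform(size=(500, 2))          # inputs in the unit square
W = rng.uniform(size=(n_map, 2))        # one weight vector per map PE
idx = np.arange(n_map)                  # PE positions on the 1-D map

for t in range(epochs):
    eta = eta0 * (1.0 - t / epochs)                # annealed step size
    sigma = max(sigma0 * (1.0 - t / epochs), 0.5)  # shrinking neighborhood
    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma**2))
        W += eta * h[:, None] * (x - W)            # neighbors move too

# Neighboring map PEs end up with neighboring weight vectors, so the map
# preserves the topological ordering of the input distribution.
```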