KIM AND LEWIS: OPTIMAL DESIGN OF NEURAL-NETWORK CONTROLLER

Fig. 1. Architecture of a CMAC neural network.

A. CMAC Neural Networks

Fig. 1 shows the architecture and operation of the CMAC. The CMAC can be used to approximate a nonlinear mapping $y(x): X^n \to Y^m$, where $X^n \subset \Re^n$ is the application $n$-dimensional input space and $Y^m \subset \Re^m$ is the application output space. The CMAC algorithm consists of two primary functions for determining the value of a complex function, as shown in Fig. 1:

    \[ R: X \to A, \qquad P: A \to Y \tag{1} \]

where $X$ is the continuous $n$-dimensional input space, $A$ is the $N_A$-dimensional association space, and $Y$ is the $m$-dimensional output space. The function $p = R(x)$ is fixed and maps each point in the input space onto the association space $A$. The function $P(\cdot)$ computes an output $y \in Y$ by projecting the association vector determined by $R(x)$ onto a vector of adjustable weights such that

    \[ y = P(p) = W^{T} p(x). \tag{2} \]

$R(\cdot)$ in (1) is the multidimensional receptive-field function.

1) Receptive-Field Function: Given $x = [x_1\; x_2\; \cdots\; x_n]^{T} \in \Re^n$, let $[x_{i,\min},\, x_{i,\max}]$, $1 \le i \le n$, be the domain of interest. For this domain, select integers $N_i$ and strictly increasing partitions

    \[ \pi_i = [x_{i,1}\; x_{i,2}\; \cdots\; x_{i,N_i}], \qquad 1 \le i \le n. \]

For each component of the input space, the receptive-field basis function can be defined as rectangular [1], triangular [4], or any continuously bounded function, e.g., Gaussian [3].

2) Multidimensional Receptive-Field Functions: Given any $x = [x_1\; x_2\; \cdots\; x_n]^{T} \in \Re^n$, the multidimensional receptive-field functions are defined as

    \[ p_{j_1 j_2 \cdots j_n}(x) = \mu_{1,j_1}(x_1)\,\mu_{2,j_2}(x_2) \cdots \mu_{n,j_n}(x_n) \tag{3} \]

with $j_i = 1, \ldots, N_i$, $i = 1, \ldots, n$. The output of the CMAC is given by

    \[ y_j(x) = \sum_{k=1}^{N_A} w_{jk}\, p_k(x), \qquad j = 1, \ldots, m \tag{4} \]

where $w_{jk} \in \Re$ are the output-layer weight values, $p_k(\cdot)$ is a continuous multidimensional receptive-field function, and $N_A$ is the number of association points. The effect of receptive-field basis-function type and partition number along each dimension on the CMAC performance has not yet been systematically studied.

The output of the CMAC can be expressed in vector notation as

    \[ y(x) = W^{T} p(x) \tag{5} \]

where $W$ is the matrix of adjustable weight values and $p(x)$ is the vector of receptive-field functions. Based on the approximation property of the CMAC, there exist ideal weight values $W$ so that the function to be approximated can be represented as

    \[ f(x) = W^{T} p(x) + \varepsilon(x) \tag{6} \]

with $\varepsilon(x)$ the "functional reconstruction error," bounded by $\|\varepsilon(x)\| < \varepsilon_N$.
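The construction in (3)-(6) can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes Gaussian receptive fields (one of the admissible basis choices named above), a uniform partition on $[0,1]^2$, and an illustrative target function; all names (`cmac_features`, `centers`, `width`) and the least-squares fit used to stand in for the ideal weights of (6) are assumptions for the example.

```python
import numpy as np

def cmac_features(x, centers, width):
    """Multidimensional receptive-field vector p(x) as in eq. (3):
    each entry is a product of per-dimension Gaussian basis values,
    one entry per combination (j_1, ..., j_n) of partition points."""
    # Per-dimension basis values mu_{i,j}(x_i), Gaussian basis assumed.
    per_dim = [np.exp(-((xi - c) / width) ** 2) for xi, c in zip(x, centers)]
    # Tensor product over dimensions -> N_A = prod(N_i) features.
    p = per_dim[0]
    for mu in per_dim[1:]:
        p = np.outer(p, mu).ravel()
    return p

rng = np.random.default_rng(0)
# Strictly increasing partitions pi_1, pi_2 with N_1 = N_2 = 7 (assumed).
centers = [np.linspace(0.0, 1.0, 7), np.linspace(0.0, 1.0, 7)]
width = 0.2  # Gaussian width, chosen so neighboring fields overlap

# Illustrative smooth target f(x) with m = 1 output (an assumption).
f = lambda x: np.array([np.sin(2 * np.pi * x[0]) * x[1]])

# Fit W by least squares on samples, mimicking the existence of ideal
# weights in eq. (6): f(x) = W^T p(x) + eps(x).
X = rng.uniform(0.0, 1.0, size=(200, 2))
P = np.stack([cmac_features(x, centers, width) for x in X])  # 200 x N_A
F = np.stack([f(x) for x in X])                              # 200 x m
W, *_ = np.linalg.lstsq(P, F, rcond=None)                    # N_A x m

# CMAC output y(x) = W^T p(x), eq. (5), at a test point.
x_test = np.array([0.3, 0.6])
y = W.T @ cmac_features(x_test, centers, width)
err = abs(y[0] - f(x_test)[0])  # small residual = eps(x_test)
```

Here $N_A = N_1 N_2 = 49$ association points; the residual `err` plays the role of the bounded reconstruction error $\varepsilon(x)$ in (6).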