a given point x′ in the input space, we first find its K nearest neighbors in the transformed feature space F, and then search for an optimal separating hyperplane only over these K nearest neighbors. In practice, this means that an SVM is built over the neighborhood of each test point x′. Accordingly, the constraints in Equ. (6) become:

$$y_{r_{x'}(i)}\left[w \cdot \Phi\left(x_{r_{x'}(i)}\right) + b\right] \geq 1 - \xi_{r_{x'}(i)}, \quad i = 1, \dots, k. \qquad (10)$$

where $r_{x'}\colon \{1, \dots, N\} \to \{1, \dots, N\}$ is a function that maps the indexes of the training points defined in Equ. (4). In this way, $x_{r_{x'}(j)}$ is the $j$-th point of the set $D$ in terms of distance from $x'$, and thus

$$j < k \;\Rightarrow\; \left\|\Phi\left(x_{r_{x'}(j)}\right) - \Phi(x')\right\| < \left\|\Phi\left(x_{r_{x'}(k)}\right) - \Phi(x')\right\|$$

because of the monotonicity of the quadratic operator. The computation is expressed in terms of kernels as:

$$\left\|\Phi(x) - \Phi(x')\right\|^2 = \left\langle\Phi(x), \Phi(x)\right\rangle_F + \left\langle\Phi(x'), \Phi(x')\right\rangle_F - 2\left\langle\Phi(x), \Phi(x')\right\rangle_F = k(x, x) + k(x', x') - 2k(x, x'). \qquad (11)$$

In the case of linear kernels, the ordering function can be built using the Euclidean distance, whereas if the kernel is not linear, the ordering can be different. If the kernel is the RBF kernel, the ordering function is equivalent to using the Euclidean metric. The decision rule associated with the method is:

$$\mathrm{SVM\text{-}NN}(x) = \operatorname{sign}\left(\sum_{i=1}^{k} \alpha_{r_x(i)}\, y_{r_x(i)}\, k\left(x_{r_x(i)}, x\right) + b\right). \qquad (12)$$
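To make this decision rule concrete, the following is a minimal sketch (not the paper's code) that fits one SVM over the K nearest neighbors of each test point and classifies with the resulting local model. The RBF kernel and the values of k, C, and gamma are illustrative assumptions, and scikit-learn stands in for any SVM solver; since the RBF ordering is equivalent to the Euclidean one, a plain Euclidean neighbor search suffices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def knn_svm_predict(X_train, y_train, X_test, k=50, C=1.0, gamma=0.1):
    """KNN-SVM sketch: one SVM per test point, trained only on its
    k nearest neighbors x_{r_x(1)}, ..., x_{r_x(k)}."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    preds = []
    for x in X_test:
        _, idx = nn.kneighbors(x.reshape(1, -1))       # indexes r_x(1..k)
        neighbors, labels = X_train[idx[0]], y_train[idx[0]]
        if np.unique(labels).size == 1:
            preds.append(labels[0])                    # single-class neighborhood
            continue
        local_svm = SVC(kernel="rbf", C=C, gamma=gamma).fit(neighbors, labels)
        preds.append(local_svm.predict(x.reshape(1, -1))[0])
    return np.array(preds)
```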
3.3 Local support vector machines

KNN-SVM is a combination of KNN and SVM, but this method abandons nearest-neighbor information in the SVM learning algorithm itself. Once the K nearest neighbors are identified, the SVM algorithm completely ignores their similarity to the given test example when solving the dual optimization problem given in Equ. (7).

We therefore developed a new LSVM algorithm, which incorporates neighborhood information directly into SVM learning. The principle of LSVM is to reduce the impact of support vectors located far away from a given test example. This can be accomplished by weighting the classification error of each training example according to its similarity to the test example. The similarity is captured by a distance-based function σ, the same approach used by KNN.

For each test sample x′, we construct its local SVM model by solving the following optimization problem:

$$\min \ \frac{1}{2}\|w\|_2^2 + C\sum_{i=1}^{n}\sigma(x', x_i)\,\xi_i,$$
$$\text{s.t.} \quad y_i\left(w^{\mathrm{T}}x_i - b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, 2, \dots, n. \qquad (13)$$

where σ(x′, x_i) is a similarity weight based on the L2 distance between x′ and x_i, large for nearby examples and small for distant ones. The solution to Equ. (13) identifies the decision surface as well as the local neighborhood of the samples. The function σ discounts training examples that are located far away from the test example. As a result, classification of the test example depends only on the support vectors in its local neighborhood. To further appreciate the role of the weight function, consider the dual form of Equ. (13):

$$\max \ \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{n}\alpha_i\alpha_j y_i y_j\, k(x_i, x_j),$$
$$\text{s.t.} \quad \sum_{i=1}^{n}\alpha_i y_i = 0, \quad 0 \leq \alpha_i \leq C\,\sigma(x', x_i), \quad i = 1, 2, \dots, n. \qquad (14)$$

Compared to Equ. (7), the difference between LSVM and SVM is that the upper bound on α_i has been modified from C to Cσ(x′, x_i). This modification has two effects: it reduces the impact of distant support vectors, and non-support vectors of the nonlinear SVM may become support vectors of LSVM.

3.4 LSVM in facial expression recognition

For facial expression recognition using LSVM, geometric features are used as the input. Six classes were considered in the experiments, each one representing one of the basic facial expressions (anger, disgust, fear, happiness, sadness, and surprise). The LSVM classifies geometric features as one of these six basic facial expressions. Pseudo code of the basic version of
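As an independent illustration of Equ. (13)-(14) (a sketch, not the paper's pseudo code): a per-example rescaling of C via the `sample_weight` argument of scikit-learn's `SVC.fit` reproduces the dual box constraint $0 \leq \alpha_i \leq C\sigma(x', x_i)$. The similarity σ is modeled here as $\exp(-\|x' - x_i\|^2/\tau)$, an assumed form chosen so that the weight decays with L2 distance, matching the stated effect of shrinking distant examples' influence; τ is an illustrative bandwidth.

```python
import numpy as np
from sklearn.svm import SVC

def lsvm_predict(X_train, y_train, X_test, C=1.0, gamma=0.1, tau=1.0):
    """LSVM sketch: one weighted SVM per test sample x'. sample_weight
    multiplies C per training example, so the dual upper bound becomes
    C * sigma(x', x_i), as in Equ. (14)."""
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared L2 distances to x'
        sigma = np.exp(-d2 / tau)                 # assumed similarity: decays with distance
        local_svm = SVC(kernel="rbf", C=C, gamma=gamma)
        local_svm.fit(X_train, y_train, sample_weight=sigma)
        preds.append(local_svm.predict(x.reshape(1, -1))[0])
    return np.array(preds)
```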
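For the six-class recognition task, a hypothetical end-to-end usage of the sketch above could look as follows; it reuses `lsvm_predict` and the numpy import from the previous block, and the feature dimensionality and randomly generated data are placeholders for the paper's geometric features and labels, not its actual data.

```python
# Hypothetical stand-ins for geometric feature vectors and expression labels.
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 34))        # placeholder geometric features
y = rng.integers(0, 6, size=600)      # placeholder labels 0..5

pred = lsvm_predict(X[:500], y[:500], X[500:])
print([EXPRESSIONS[int(p)] for p in pred[:5]])
```

SVC handles the six classes internally via one-vs-one voting, so each local model directly yields one of the six basic expressions per test sample.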