Multivariate Statistical Analysis course reading material: SVM in R


Journal of Statistical Software
April 2006, Volume 15, Issue 9. http://www.jstatsoft.org/

Support Vector Machines in R

Alexandros Karatzoglou (Technische Universität Wien)
David Meyer (Wirtschaftsuniversität Wien)
Kurt Hornik (Wirtschaftsuniversität Wien)

Abstract

Being among the most popular and efficient classification and regression methods currently available, implementations of support vector machines exist in almost every popular programming language. Currently four R packages contain SVM related software. The purpose of this paper is to present and compare these implementations.

Keywords: support vector machines, R.

1. Introduction

Support Vector learning is based on simple ideas which originated in statistical learning theory (Vapnik 1998). The simplicity comes from the fact that Support Vector Machines (SVMs) apply a simple linear method to the data, but in a high-dimensional feature space non-linearly related to the input space. Moreover, even though we can think of SVMs as a linear algorithm in a high-dimensional space, in practice it does not involve any computations in that high-dimensional space. This simplicity, combined with state-of-the-art performance on many learning problems (classification, regression, and novelty detection), has contributed to the popularity of the SVM. The remainder of the paper is structured as follows. First, we provide a short introduction to Support Vector Machines, followed by an overview of the SVM-related software available in R and other programming languages. Next follows a section on the data sets we will be using. Then, we describe the four available SVM implementations in R. Finally, we present the results of a timing benchmark.

2. Support vector machines

SVMs use an implicit mapping Φ of the input data into a high-dimensional feature space defined by a kernel function, i.e., a function returning the inner product ⟨Φ(x), Φ(x′)⟩ between the images of two data points x, x′ in the feature space. The learning then takes place in the feature space, and the data points only appear inside dot products with other points. This is often referred to as the "kernel trick" (Schölkopf and Smola 2002). More precisely, if a projection Φ : X → H is used, the dot product ⟨Φ(x), Φ(x′)⟩ can be represented by a kernel function k,

$$ k(x, x') = \langle \Phi(x), \Phi(x') \rangle, \tag{1} $$

which is computationally simpler than explicitly projecting x and x′ into the feature space H.

One interesting property of support vector machines and other kernel-based systems is that, once a valid kernel function has been selected, one can practically work in spaces of any dimension without any significant additional computational cost, since the feature mapping is never effectively performed. In fact, one does not even need to know which features are being used.

Another advantage of SVMs and kernel methods is that one can design and use a kernel for a particular problem that can be applied directly to the data, without the need for a feature extraction process. This is particularly important in problems where a lot of structure of the data is lost by the feature extraction process (e.g., text processing).

Training an SVM for classification, regression or novelty detection involves solving a quadratic optimization problem. Using a standard quadratic problem solver for training an SVM would involve solving a big QP problem even for a moderately sized data set, including the computation of an m × m matrix in memory (m being the number of training points). This would seriously limit the size of problems an SVM could be applied to. To handle this issue, methods like SMO (Platt 1998), chunking (Osuna, Freund, and Girosi 1997) and Simple SVM (Vishwanathan, Smola, and Murty 2003) exist that iteratively compute the solution of the SVM, scale as $O(N^k)$ with k between 1 and 2.5, and have linear space complexity.

2.1. Classification

In classification, support vector machines separate the different classes of data by a hyperplane

$$ \langle w, \Phi(x) \rangle + b = 0 \tag{2} $$

corresponding to the decision function

$$ f(x) = \operatorname{sign}(\langle w, \Phi(x) \rangle + b). \tag{3} $$

It can be shown that the optimal hyperplane, in terms of classification performance, is the one with the maximal margin of separation between the two classes (Vapnik 1998). It can be constructed by solving a constrained quadratic optimization problem whose solution w has an expansion $w = \sum_i \alpha_i \Phi(x_i)$ in terms of a subset of training patterns that lie on the margin. These training patterns, called support vectors, carry all relevant information about the classification problem. Omitting the details of the calculation, there is just one crucial property of the algorithm that we need to emphasize: both the quadratic programming problem and the final decision function depend only on dot products between patterns. This allows the use of the "kernel trick" and the generalization of this linear algorithm to the nonlinear case.
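To make Equation 1 concrete, the following minimal R sketch (illustrative only, not from the article; the vectors and the choice of a degree-2 polynomial kernel are ours) compares an explicit feature map with the corresponding kernel (⟨x, x′⟩ + 1)² on two small vectors. Both routes yield the same inner product, which is exactly what the kernel trick exploits.

# Illustrative kernel-trick check for a degree-2 polynomial kernel on 2-d inputs
phi <- function(x) {                          # explicit feature map
  c(x[1]^2, x[2]^2, sqrt(2) * x[1] * x[2],
    sqrt(2) * x[1], sqrt(2) * x[2], 1)
}
k_poly <- function(x, y) (sum(x * y) + 1)^2   # kernel computed in input space

x <- c(1, 2)
y <- c(3, -1)
sum(phi(x) * phi(y))   # inner product in feature space: 4
k_poly(x, y)           # same value, without ever forming phi(): 4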

In the case of the L2-norm soft margin classification, the primal optimization problem takes the form:

$$
\begin{aligned}
\text{minimize} \quad & t(w, \xi) = \frac{1}{2}\|w\|^2 + \frac{C}{m}\sum_{i=1}^{m} \xi_i \\
\text{subject to} \quad & y_i(\langle \Phi(x_i), w \rangle + b) \ge 1 - \xi_i \quad (i = 1, \dots, m) \\
& \xi_i \ge 0 \quad (i = 1, \dots, m)
\end{aligned}
\tag{4}
$$

where m is the number of training patterns and $y_i = \pm 1$. As in most kernel methods, the SVM solution w can be shown to have an expansion

$$ w = \sum_{i=1}^{m} \alpha_i y_i \Phi(x_i) \tag{5} $$

where non-zero coefficients (support vectors) occur when a point $(x_i, y_i)$ meets the constraint. The coefficients $\alpha_i$ are found by solving the following (dual) quadratic programming problem:

$$
\begin{aligned}
\text{maximize} \quad & W(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{m} \alpha_i \alpha_j y_i y_j k(x_i, x_j) \\
\text{subject to} \quad & 0 \le \alpha_i \le \frac{C}{m} \quad (i = 1, \dots, m) \\
& \sum_{i=1}^{m} \alpha_i y_i = 0.
\end{aligned}
\tag{6}
$$

This is a typical quadratic problem of the form:

$$
\begin{aligned}
\text{minimize} \quad & c^\top x + \frac{1}{2} x^\top H x \\
\text{subject to} \quad & b \le Ax \le b + r \\
& l \le x \le u
\end{aligned}
\tag{7}
$$

where $H \in \mathbb{R}^{m \times m}$ with entries $H_{ij} = y_i y_j k(x_i, x_j)$, $c = (1, \dots, 1) \in \mathbb{R}^m$, $u = (C, \dots, C) \in \mathbb{R}^m$, $l = (0, \dots, 0) \in \mathbb{R}^m$, $A = (y_1, \dots, y_m) \in \mathbb{R}^m$, $b = 0$, $r = 0$. The problem can easily be solved by a standard QP solver such as quadprog() in package quadprog (Weingessel 2004) or ipop() in package kernlab (Karatzoglou, Smola, Hornik, and Zeileis 2005), both available in R (R Development Core Team 2005). Techniques taking advantage of the special structure of the SVM QP problem, like SMO and chunking (Osuna et al. 1997), offer much better performance in terms of speed, scalability and memory usage, though.

The cost parameter C of the SVM formulation in Equation 7 controls the penalty paid by the SVM for misclassifying a training point, and thus the complexity of the prediction function. A high cost value C will force the SVM to create a prediction function complex enough to misclassify as few training points as possible, while a lower cost parameter will lead to a simpler prediction function. Therefore, this type of SVM is usually called C-SVM.

Another formulation of the classification problem, with a more intuitive hyperparameter than C, is the ν-SVM (Schölkopf, Smola, Williamson, and Bartlett 2000). The ν parameter has the interesting property of being an upper bound on the training error and a lower bound on the fraction of support vectors found in the data set, thus controlling the complexity of the classification function built by the SVM (see the Appendix for details).
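As an illustration of the quadratic program in Equation 7, the following R sketch (ours, not from the article; the toy data, the linear kernel and all variable names are illustrative assumptions) sets up the dual of a small two-class C-SVM by hand and hands it to ipop() from kernlab. Since ipop() minimizes while Equation 6 is a maximization, the linear term enters with a negative sign.

library(kernlab)

# Toy two-class problem (our choice): setosa vs. versicolor, two features
x <- as.matrix(iris[1:100, 1:2])
y <- ifelse(iris$Species[1:100] == "setosa", 1, -1)
m <- length(y)
Cost <- 1                                  # cost parameter C of Equation 7

K <- x %*% t(x)                            # linear kernel matrix k(x_i, x_j)
H <- outer(y, y) * K + diag(1e-8, m)       # H_ij = y_i y_j k(x_i, x_j); small ridge for numerical stability

sol <- ipop(c = rep(-1, m),                # maximization of Eq. 6 becomes minimization, hence the minus sign
            H = H,
            A = matrix(y, nrow = 1),       # encodes sum_i alpha_i y_i = 0 (with b = 0, r = 0)
            b = 0, r = 0,
            l = rep(0, m), u = rep(Cost, m))

alpha <- primal(sol)                       # dual coefficients alpha_i
sum(alpha > 1e-6)                          # rough count of support vectors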

For multi-class classification, mostly voting schemes such as one-against-one and one-against-all are used. In the one-against-all method, k binary SVM classifiers are trained, where k is the number of classes, each trained to separate one class from the rest. The classifiers are then combined by comparing their decision values on a test data instance and labeling it according to the classifier with the highest decision value.

In the one-against-one classification method (also called pairwise classification; see Knerr, Personnaz, and Dreyfus 1990; Kreßel 1999), $\binom{k}{2}$ classifiers are constructed, where each one is trained on data from two classes. Prediction is done by voting: each classifier gives a prediction, and the class which is most frequently predicted wins ("Max Wins"). This method has been shown to produce robust results when used with SVMs (Hsu and Lin 2002a). Although this suggests a higher number of support vector machines to train, the overall CPU time used is less than for the one-against-all method, since the problems are smaller and the SVM optimization problem scales super-linearly.

Furthermore, SVMs can also produce class probabilities as output instead of class labels. This can be done by an improved implementation (Lin, Lin, and Weng 2001) of Platt's a posteriori probabilities (Platt 2000), where a sigmoid function

$$ P(y = 1 \mid f) = \frac{1}{1 + e^{Af + B}} \tag{8} $$

is fitted to the decision values f of the binary SVM classifiers, A and B being estimated by minimizing the negative log-likelihood function. This is equivalent to fitting a logistic regression model to the estimated decision values. To extend the class probabilities to the multi-class case, the class probability outputs of all binary classifiers can be combined as proposed in Wu, Lin, and Weng (2003).

In addition to these heuristics for extending a binary SVM to the multi-class problem, there have been reformulations of the support vector quadratic problem that deal with more than two classes. One of the many approaches for native support vector multi-class classification is the one proposed in Crammer and Singer (2000), which we will refer to as 'spoc-svc'. This algorithm works by solving a single optimization problem including the data from all classes. The primal formulation is:

$$
\begin{aligned}
\text{minimize} \quad & t(\{w_n\}, \xi) = \frac{1}{2}\sum_{n=1}^{k}\|w_n\|^2 + \frac{C}{m}\sum_{i=1}^{m}\xi_i \\
\text{subject to} \quad & \langle \Phi(x_i), w_{y_i} \rangle - \langle \Phi(x_i), w_n \rangle \ge b_i^n - \xi_i \quad (i = 1, \dots, m)
\end{aligned}
\tag{9}
$$

where

$$ b_i^n = 1 - \delta_{y_i, n} \tag{10} $$

and where the decision function is

$$ \operatorname{argmax}_{n = 1, \dots, k}\; \langle \Phi(x_i), w_n \rangle. \tag{11} $$

Details on performance and benchmarks of various approaches to multi-class classification can be found in Hsu and Lin (2002b).
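Since fitting the sigmoid of Equation 8 is equivalent to a logistic regression on the decision values, a minimal R sketch of this idea (ours, not the article's; the decision values and labels below are made-up illustrations, and the refinements of Lin et al. 2001 are omitted) could look as follows. glm() parameterizes the model as P(y = 1 | f) = 1/(1 + exp(-(b0 + b1 f))), so A = -b1 and B = -b0 in the notation of Equation 8.

# Hypothetical Platt-scaling fit on binary SVM decision values
f <- c(-2.1, -1.4, -0.3, 0.2, 0.8, 1.7, 2.5)   # illustrative decision values
y <- c(0, 0, 1, 0, 1, 1, 1)                     # corresponding class labels (0/1)

fit <- glm(y ~ f, family = binomial())          # minimizes the negative log-likelihood
A <- -coef(fit)["f"]
B <- -coef(fit)["(Intercept)"]
prob <- 1 / (1 + exp(A * f + B))                # fitted P(y = 1 | f); identical to fitted(fit)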

2.2. Novelty detection

SVMs have also been extended to deal with the problem of novelty detection (or one-class classification; see Schölkopf, Platt, Shawe-Taylor, Smola, and Williamson 1999; Tax and Duin 1999), where essentially an SVM detects outliers in a data set. SVM novelty detection works by creating a spherical decision boundary around a set of data points, described by a set of support vectors on the sphere's boundary. The primal optimization problem for support vector novelty detection is the following:

$$
\begin{aligned}
\text{minimize} \quad & t(w, \xi, \rho) = \frac{1}{2}\|w\|^2 - \rho + \frac{1}{m\nu}\sum_{i=1}^{m}\xi_i \\
\text{subject to} \quad & \langle \Phi(x_i), w \rangle + b \ge \rho - \xi_i \quad (i = 1, \dots, m) \\
& \xi_i \ge 0 \quad (i = 1, \dots, m).
\end{aligned}
\tag{12}
$$

The ν parameter is used to control the volume of the sphere and consequently the number of outliers found. The value of ν sets an upper bound on the fraction of outliers found in the data.

2.3. Regression

By using a different loss function, the ε-insensitive loss function $\|y - f(x)\|_\varepsilon = \max\{0, \|y - f(x)\| - \varepsilon\}$, SVMs can also perform regression. This loss function ignores errors that are smaller than a certain threshold ε > 0, thus creating a tube around the true output. The primal becomes:

$$
\begin{aligned}
\text{minimize} \quad & t(w, \xi) = \frac{1}{2}\|w\|^2 + \frac{C}{m}\sum_{i=1}^{m}(\xi_i + \xi_i^*) \\
\text{subject to} \quad & (\langle \Phi(x_i), w \rangle + b) - y_i \le \varepsilon + \xi_i & (13) \\
& y_i - (\langle \Phi(x_i), w \rangle + b) \le \varepsilon + \xi_i^* & (14) \\
& \xi_i, \xi_i^* \ge 0 \quad (i = 1, \dots, m)
\end{aligned}
$$

We can estimate the accuracy of SVM regression by computing the scale parameter of a Laplacian distribution fitted to the residuals ζ = y − f(x), where f(x) is the estimated decision function (Lin and Weng 2004).

The dual problems of the various classification, regression and novelty detection SVM formulations can be found in the Appendix.
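As a small illustration of the Lin and Weng (2004) idea, the sketch below (ours; it uses ksvm() from kernlab, introduced in Section 4, and the parameter values are only illustrative) fits an ε-SVM regression to the BostonHousing data and estimates the Laplacian scale of the residuals by their mean absolute value, the maximum-likelihood estimate for a zero-mean Laplace distribution.

library(kernlab)
library(mlbench)
data(BostonHousing)

# epsilon-SVR with an RBF kernel; C and epsilon values are illustrative
breg <- ksvm(medv ~ ., data = BostonHousing, type = "eps-svr",
             kernel = "rbfdot", C = 10, epsilon = 0.1)

res <- BostonHousing$medv - predict(breg, BostonHousing)   # residuals zeta = y - f(x)
scale_hat <- mean(abs(res))                                 # ML estimate of the Laplacian scale
scale_hat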

2.4. Kernel functions

As seen before, the kernel functions return the inner product between two points in a suitable feature space, thus defining a notion of similarity, with little computational cost even in very high-dimensional spaces. Kernels commonly used with kernel methods and SVMs in particular include the following:

• the linear kernel, implementing the simplest of all kernel functions,
  $$ k(x, x') = \langle x, x' \rangle \tag{15} $$

• the Gaussian Radial Basis Function (RBF) kernel
  $$ k(x, x') = \exp(-\sigma \|x - x'\|^2) \tag{16} $$

• the polynomial kernel
  $$ k(x, x') = (\text{scale} \cdot \langle x, x' \rangle + \text{offset})^{\text{degree}} \tag{17} $$

• the hyperbolic tangent kernel
  $$ k(x, x') = \tanh(\text{scale} \cdot \langle x, x' \rangle + \text{offset}) \tag{18} $$

• the Bessel function of the first kind kernel
  $$ k(x, x') = \frac{\mathrm{Bessel}^n_{(\nu+1)}(\sigma \|x - x'\|)}{(\|x - x'\|)^{-n(\nu+1)}} \tag{19} $$

• the Laplace Radial Basis Function (RBF) kernel
  $$ k(x, x') = \exp(-\sigma \|x - x'\|) \tag{20} $$

• the ANOVA radial basis kernel
  $$ k(x, x') = \left( \sum_{k=1}^{n} \exp(-\sigma (x^k - x'^k)^2) \right)^{d} \tag{21} $$

• the linear splines kernel in one dimension
  $$ k(x, x') = 1 + x x' \min(x, x') - \frac{x + x'}{2} \min(x, x')^2 + \frac{\min(x, x')^3}{3} \tag{22} $$

  and, for the multidimensional case, $k(x, x') = \prod_{k=1}^{n} k(x^k, x'^k)$.

The Gaussian and Laplace RBF and Bessel kernels are general-purpose kernels used when there is no prior knowledge about the data. The linear kernel is useful when dealing with large sparse data vectors, as is usually the case in text categorization. The polynomial kernel is popular in image processing, and the sigmoid kernel is mainly used as a proxy for neural networks. The splines and ANOVA RBF kernels typically perform well in regression problems.
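The kernels listed above are available in kernlab as kernel-generating functions such as rbfdot(), polydot(), vanilladot() and laplacedot(). As a brief illustration (ours, with arbitrary hyperparameter values), a kernel object can be created and used both to evaluate Equation 16 on two vectors and to compute a kernel matrix on a few rows of the iris data:

library(kernlab)

rbf <- rbfdot(sigma = 0.1)                    # Gaussian RBF kernel of Equation 16
rbf(c(1, 2, 3), c(3, 2, 1))                   # kernel evaluated on two vectors

x <- as.matrix(iris[1:5, -5])                 # five observations, numeric columns only
K <- kernelMatrix(rbf, x)                     # 5 x 5 matrix with K_ij = k(x_i, x_j)
K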

2.5. Software

Support vector machines are currently used in a wide range of fields, from bioinformatics to astrophysics. Thus, the existence of many SVM software packages comes as little surprise. Most existing software is written in C or C++, such as the award-winning libsvm (Chang and Lin 2001), which provides a robust and fast SVM implementation and produces state-of-the-art results on most classification and regression problems (Meyer, Leisch, and Hornik 2003), SVMlight (Joachims 1999), SVMTorch (Collobert, Bengio, and Mariéthoz 2002), Royal Holloway Support Vector Machines (Gammerman, Bozanic, Schölkopf, Vovk, Vapnik, Bottou, Smola, Watkins, LeCun, Saunders, Stitson, and Weston 2001), mySVM (Rüping 2004), and M-SVM (Guermeur 2004). Many packages provide interfaces to MATLAB (The MathWorks 2005), such as libsvm, and there are some native MATLAB toolboxes as well, such as the SVM and Kernel Methods Matlab Toolbox (Canu, Grandvalet, and Rakotomamonjy 2003), the MATLAB Support Vector Machine Toolbox (Gunn 1998), and the SVM toolbox for Matlab (Schwaighofer 2005).

2.6. R software overview

The first implementation of SVM in R (R Development Core Team 2005) was introduced in the e1071 package (Dimitriadou, Hornik, Leisch, Meyer, and Weingessel 2005). The svm() function in e1071 provides a rigid interface to libsvm along with visualization and parameter tuning methods.

Package kernlab features a variety of kernel-based methods and includes an SVM method based on the optimizers used in libsvm and bsvm (Hsu and Lin 2002c). It aims to provide a flexible and extensible SVM implementation.

Package klaR (Roever, Raabe, Luebke, and Ligges 2005), which additionally offers classification tools such as Regularized Discriminant Analysis, includes an interface to SVMlight, a popular SVM implementation.

Finally, package svmpath (Hastie 2004) provides an algorithm that fits the entire path of the SVM solution (i.e., for any value of the cost parameter).

In the remainder of the paper we will extensively review and compare these four SVM implementations.

3. Data

Throughout the paper, we will use the following data sets accessible through R (see Table 1), most of them originating from the UCI machine learning database (Blake and Merz 1998):

iris This famous (Fisher's or Anderson's) iris data set gives the measurements in centimeters of the variables sepal length and width and petal length and width, respectively, for 50 flowers from each of 3 species of iris. The species are Iris setosa, versicolor, and virginica. The data set is provided by base R.

spam A data set collected at Hewlett-Packard Labs which classifies 4601 e-mails as spam or non-spam. In addition to this class label there are 57 variables indicating the frequency of certain words and characters in the e-mail. The data set is provided by the kernlab package.

musk This dataset in package kernlab describes a set of 476 molecules, of which 207 are judged by human experts to be musks and the remaining 269 molecules are judged to be non-musks. The data has 167 variables which describe the geometry of the molecules.

promotergene Promoters have a region where a protein (RNA polymerase) must make contact, and the helical DNA sequence must have a valid conformation so that the two pieces of the contact region spatially align. The dataset in package kernlab contains DNA sequences of promoters and non-promoters in a data frame with 106 observations and 58 variables. The DNA bases are coded as follows: 'a' adenine, 'c' cytosine, 'g' guanine, and 't' thymine.

Vowel Speaker-independent recognition of the eleven steady-state vowels of British English using a specified training set of LPC-derived log area ratios. The vowels are indexed by integers 0 to 10. This dataset in package mlbench (Leisch and Dimitriadou 2001) has 990 observations on 10 independent variables.

DNA This dataset in package mlbench consists of 3,186 data points (splice junctions). The data points are described by 180 binary indicator variables, and the problem is to recognize the 3 classes ('ei', 'ie', neither), i.e., the boundaries between exons (the parts of the DNA sequence retained after splicing) and introns (the parts of the DNA sequence that are spliced out).

BreastCancer This dataset in package mlbench is a data frame with 699 observations on 11 variables, one being a character variable, 9 being ordered or nominal, and 1 target class. The objective is to identify each of a number of benign or malignant classes.

BostonHousing Housing data in package mlbench for 506 census tracts of Boston from the 1970 census. There are 506 observations on 14 variables.

B3 German business cycles from 1955 to 1994, in package klaR. A data frame with 157 observations on 14 variables.

                             #Attributes
Dataset         #Examples    b     c     m    cl   Class Distribution (%)
iris                  150                5     3   33.3/33.3/33.3
spam                 4601               57     2   39.40/60.59
musk                  476              166     2   42.99/57.00
promotergene          106         57          2   50.00/50.00
Vowel                 990          1     9    10   10.0/10.0/...
DNA                  3186   180                3   24.07/24.07/51.91
BreastCancer          699          9          2   34.48/65.52
BostonHousing         506     1         12         (regression)
B3                    157               13     4   37.57/15.28/29.93/17.19

Table 1: The data sets used throughout the paper. Legend: b = binary, c = categorical, m = metric, cl = number of classes.
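For convenience, the following lines (ours, not part of the article; they assume that the kernlab, mlbench and klaR packages are installed) show how these data sets can be loaded in an R session:

library(kernlab)
library(mlbench)
library(klaR)

data(iris)            # base R
data(spam)            # kernlab
data(musk)            # kernlab
data(promotergene)    # kernlab
data(Vowel)           # mlbench
data(DNA)             # mlbench
data(BreastCancer)    # mlbench
data(BostonHousing)   # mlbench
data(B3)              # klaR

dim(spam)             # 4601 rows, 57 predictors plus the class column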


4. ksvm in kernlab

Package kernlab (Karatzoglou, Smola, Hornik, and Zeileis 2004) aims to provide the R user with basic kernel functionality (e.g., computing a kernel matrix using a particular kernel) along with some utility functions commonly used in kernel-based methods, like a quadratic programming solver, and modern kernel-based algorithms built on the functionality that the package provides. It also takes advantage of the inherent modularity of kernel-based methods, aiming to allow the user to switch between kernels for an existing algorithm and even to create and use their own kernel functions for the various kernel methods provided in the package.

kernlab uses R's new object model described in "Programming with Data" (Chambers 1998), which is known as the S4 class system and is implemented in package methods. In contrast to the older S3 model for objects in R, classes, slots, and method relationships must be declared explicitly when using the S4 system. The number and types of slots in an instance of a class have to be established at the time the class is defined. The objects from the class are validated against this definition and have to comply with it at any time. S4 also requires formal declarations of methods, unlike the informal system of using function names to identify a certain method in S3. Package kernlab is available from CRAN (http://CRAN.R-project.org/) under the GPL license.

The ksvm() function, kernlab's implementation of SVMs, provides a standard formula interface along with a matrix interface. ksvm() is mostly programmed in R but uses, through the .Call interface, the optimizers found in bsvm and libsvm (Chang and Lin 2001), which provide a very efficient C++ version of Sequential Minimal Optimization (SMO). The SMO algorithm solves the SVM quadratic problem (QP) without using any numerical QP optimization steps. Instead, it chooses to solve the smallest possible optimization problem, involving two elements of α_i, because they must obey one linear equality constraint. At every step, SMO chooses two α_i to jointly optimize, finds the optimal values for these α_i analytically, thus avoiding numerical QP optimization, and updates the SVM to reflect the new optimal values.

The SVM implementations available in ksvm() include the C-SVM classification algorithm along with the ν-SVM classification formulation. Also included is a bound-constraint version of C classification (C-BSVM), which solves a slightly different QP problem (Mangasarian and Musicant 1999, including the offset β in the objective function) using a modified version of the TRON (Lin and Moré 1999) optimization software. For regression, ksvm() includes the ε-SVM regression algorithm along with the ν-SVM regression formulation. In addition, a bound-constraint version (ε-BSVM) is provided, and novelty detection (one-class classification) is supported.

For classification problems which include more than two classes (the multi-class case), two options are available: a one-against-one (pairwise) classification method, or the native multi-class formulation of the SVM ('spoc-svc') described in Section 2. The optimization problem of the native multi-class SVM implementation is solved by a decomposition method proposed in Hsu and Lin (2002c), where optimal working sets are found (that is, sets of α_i values which have a high probability of being non-zero). The QP sub-problems are then solved by a modified version of the TRON optimization software.

The ksvm() implementation can also compute class-probability output by using Platt's probability method (Equation 8) along with the multi-class extension of the method in Wu et al. (2003).
The prediction method can also return the raw decision values of the support vector model:

> library("kernlab")
> data("iris")
> irismodel <- ksvm(Species ~ ., data = iris,
+     type = "C-bsvc", kernel = "rbfdot",
+     kpar = list(sigma = 0.1), C = 10,
+     prob.model = TRUE)

> irismodel

Support Vector Machine object of class "ksvm"

SV type: C-bsvc  (classification)
 parameter : cost C = 10

Gaussian Radial Basis kernel function.
 Hyperparameter : sigma = 0.1

Number of Support Vectors : 32

Training error : 0.02
Probability model included.

> predict(irismodel, iris[c(3, 10, 56, 68, 107, 120), -5],
+     type = "probabilities")

          setosa  versicolor   virginica
[1,] 0.986432820 0.007359407 0.006207773
[2,] 0.983323813 0.010118992 0.006557195
[3,] 0.004852528 0.967555126 0.027592346
[4,] 0.009546823 0.988496724 0.001956452
[5,] 0.012767340 0.069496029 0.917736631
[6,] 0.011548176 0.150035384 0.838416441

> predict(irismodel, iris[c(3, 10, 56, 68, 107, 120), -5],
+     type = "decision")

          [,1]       [,2]       [,3]
[1,] -1.460398 -1.1910251 -3.8868836
[2,] -1.357355 -1.1749491 -4.2107843
[3,]  1.647272  0.7655001 -1.3205306
[4,]  1.412721  0.4736201 -2.7521640
[5,]  1.844763  1.0000000  1.0000019
[6,]  1.848985  1.0069010  0.6742889

ksvm allows for the use of any valid user-defined kernel function by just defining a function which takes two vector arguments and returns its Hilbert space dot product in scalar form.

> k <- function(x, y) {
+     (sum(x * y) + 1) * exp(0.001 * sum((x - y)^2))
+ }
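A user-defined kernel such as k above can then be passed to ksvm() in place of a built-in kernel. The lines below are our own sketch, not part of this excerpt: in current versions of kernlab the function typically has to be registered as a kernel object first (an assumption noted in the code), and the data set and parameter values are only illustrative.

library(kernlab)
data(promotergene)

class(k) <- "kernel"             # register k as a kernel object (assumed requirement in current kernlab)

# Illustrative fit of the custom kernel on the promotergene data
gene <- ksvm(Class ~ ., data = promotergene, kernel = k,
             C = 10, cross = 5)
gene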

(End of preview. The complete document has 28 pages.)