2.2. Novelty detection

SVMs have also been extended to deal with the problem of novelty detection (or one-class classification; see Schölkopf, Platt, Shawe-Taylor, Smola, and Williamson 1999; Tax and Duin 1999), where essentially an SVM detects outliers in a data set. SVM novelty detection works by creating a spherical decision boundary around a set of data points, with a set of support vectors describing the sphere's boundary. The primal optimization problem for support vector novelty detection is the following:

\[
\begin{aligned}
\text{minimize} \quad & t(w, \xi, \rho) = \frac{1}{2}\|w\|^2 - \rho + \frac{1}{m\nu}\sum_{i=1}^{m} \xi_i \\
\text{subject to} \quad & \langle \Phi(x_i), w \rangle + b \geq \rho - \xi_i \quad (i = 1, \dots, m) \qquad (12) \\
& \xi_i \geq 0 \quad (i = 1, \dots, m).
\end{aligned}
\]

The ν parameter is used to control the volume of the sphere and, consequently, the number of outliers found. The value of ν sets an upper bound on the fraction of outliers found in the data.

2.3. Regression

By using a different loss function, the ε-insensitive loss function ‖y − f(x)‖_ε = max{0, ‖y − f(x)‖ − ε}, SVMs can also perform regression. This loss function ignores errors smaller than a certain threshold ε > 0, thus creating a tube around the true output. The primal becomes:

\[
\begin{aligned}
\text{minimize} \quad & t(w, \xi) = \frac{1}{2}\|w\|^2 + \frac{C}{m}\sum_{i=1}^{m} (\xi_i + \xi_i^*) \\
\text{subject to} \quad & (\langle \Phi(x_i), w \rangle + b) - y_i \leq \epsilon + \xi_i \qquad (13) \\
& y_i - (\langle \Phi(x_i), w \rangle + b) \leq \epsilon + \xi_i^* \qquad (14) \\
& \xi_i^{(*)} \geq 0 \quad (i = 1, \dots, m).
\end{aligned}
\]

We can estimate the accuracy of SVM regression by computing the scale parameter of a Laplacian distribution on the residuals ζ = y − f(x), where f(x) is the estimated decision function (Lin and Weng 2004).

The dual problems of the various classification, regression, and novelty detection SVM formulations can be found in the Appendix.
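As a concrete illustration of the two formulations above, the following sketch fits a one-class novelty detector and an ε-SVR with the ksvm() function from kernlab. The simulated data and the values chosen here for ν, σ, ε, and C are illustrative assumptions only, not recommendations; for "one-svc" models predict() is taken to return TRUE for points inside the learned region, so the share of FALSE values estimates the outlier fraction bounded above by ν.

R> library("kernlab")
R> set.seed(1)
R> ## novelty detection (Section 2.2): nu = 0.1 bounds from above the
R> ## fraction of training points that may be flagged as outliers
R> x <- matrix(rnorm(200), ncol = 2)
R> novel <- ksvm(x, type = "one-svc", kernel = "rbfdot",
+    kpar = list(sigma = 0.1), nu = 0.1)
R> mean(!predict(novel, x))          ## observed outlier fraction
R> ## regression (Section 2.3): residuals smaller than epsilon = 0.1
R> ## lie inside the tube and incur no loss
R> xr <- matrix(seq(-5, 5, length = 100), ncol = 1)
R> yr <- sin(xr[, 1]) + rnorm(100, sd = 0.1)
R> reg <- ksvm(xr, yr, type = "eps-svr", kernel = "rbfdot",
+    epsilon = 0.1, C = 10)
R> mean(abs(yr - predict(reg, xr)))  ## mean absolute residual

The last quantity is also the maximum likelihood estimate of the scale parameter of a zero-mean Laplacian fitted to the residuals ζ = y − f(x), i.e., the accuracy measure for SVM regression discussed above (Lin and Weng 2004).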
2.4. Kernel functions

As seen before, the kernel functions return the inner product between two points in a suitable feature space, thus defining a notion of similarity, with little computational cost even in very high-dimensional spaces. Kernels commonly used with kernel methods and SVMs in particular include the following: