lems, we show performance evaluation of these methods for the benchmark data sets.

Since support vector machines were proposed, many variants of support vector machines have been developed. In Chapter 4, we discuss some of them: least-squares support vector machines, whose training results in solving a set of linear equations; linear programming support vector machines; robust support vector machines; and so on.

In Chapter 5, we discuss some training methods for support vector machines. Because we need to solve a quadratic optimization problem with the number of variables equal to the number of training data, it is impractical to solve a problem with a huge number of training data. For example, for 10,000 training data, 800 MB of memory is necessary to store the Hessian matrix in double precision. Therefore, several methods have been developed to speed up training. One approach reduces the number of training data by preselecting them. Another is to speed up training by decomposing the problem into two subproblems and repeatedly solving one subproblem while fixing the other, exchanging variables between the two subproblems.
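The 800 MB figure follows directly from the size of the Hessian matrix: with one variable per training datum, the matrix for M training data has M x M entries, each occupying 8 bytes in double precision. For M = 10,000,

\[
10{,}000 \times 10{,}000 \times 8\ \text{bytes} = 8 \times 10^{8}\ \text{bytes} = 800\ \text{MB}.
\]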
Optimal selection of features is important in realizing high-performance classification systems. Because support vector machines are trained so that the margins are maximized, they are said to be robust to nonoptimal features. In Chapter 7, we discuss several methods for selecting optimal features and show, using some benchmark data sets, that feature selection is important even for support vector machines. Then we discuss feature extraction, which transforms input features by linear and nonlinear transformations.

Some classifiers need clustering of training data before training. But support vector machines do not require clustering because mapping into a feature space results in clustering in the input space. In Chapter 8, we discuss how we can realize support vector machine-based clustering.

One of the features of support vector machines is that by mapping the input space into the feature space, nonlinear separation of class data is realized. Thus conventional linear models become nonlinear if they are formulated in the feature space. They are usually called kernel-based methods. In Chapter 6, we discuss typical kernel-based methods: kernel least squares, kernel principal component analysis, and the kernel Mahalanobis distance.

The concept of maximum margins can be used for conventional classifiers to enhance generalization ability. In Chapter 9, we discuss methods for maximizing margins of multilayer neural networks, and in Chapter 10 we discuss maximum-margin fuzzy classifiers with ellipsoidal regions and polyhedral regions.

Support vector machines can be applied to function approximation. In Chapter 11, we discuss how to extend support vector machines to function approximation and compare the performance of the support vector machine with that of other function approximators.