Preface to the Second Edition

Since the introduction of support vector machines, we have witnessed huge developments in the theory, models, and applications of what are called kernel-based methods: advancements in generalization theory, kernel classifiers and regressors and their variants, various feature selection and extraction methods, and a wide variety of applications such as pattern classification and regression in biology, medicine, and chemistry, as well as computer science.

In Support Vector Machines for Pattern Classification, Second Edition, I try to reflect the development of kernel-based methods since 2005. In addition, I have included more intensive performance comparisons of classifiers and regressors, added new references, and corrected many errors in the first edition. The major modifications of, and additions to, the first edition are as follows:

Symbols: I have changed the symbol of the mapping function to the feature space from g(x) to the more commonly used φ(x), and that of the associated kernel from H(x, x') to K(x, x').

1.3 Data Sets Used in the Book: I have added publicly available two-class data sets, microarray data sets, multiclass data sets, and regression data sets.

1.4 Classifier Evaluation: Evaluation criteria for classifiers and regressors are discussed.

2.3.2 Kernels: Mahalanobis kernels, graph kernels, etc., are added.

2.3.6 Empirical Feature Space: The high-dimensional feature space is treated implicitly via kernel tricks. This is an advantage and also a disadvantage, because we treat the feature space without knowing its structure. The empirical feature space is equivalent to the feature space in that it gives the same kernel value as that of the feature space; a small numerical sketch of this equivalence follows. The introduction of the empirical feature space greatly enhances the interpretability and manipulability of the feature space.
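A minimal sketch of the equivalence stated above, not taken from the book: the empirical feature map is built from the eigendecomposition of the kernel (Gram) matrix of the training data, and inner products in that finite-dimensional space reproduce the kernel values. The RBF kernel, the sample data, and all function names here are illustrative assumptions.

```python
# Sketch: the empirical feature space reproduces the kernel values of the
# implicit feature space on the training data (illustrative, not the book's code).
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2) (an assumed choice)."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))        # 20 training samples in 3 dimensions (toy data)

K = rbf_kernel(X, X)                # M x M kernel matrix
w, P = np.linalg.eigh(K)            # eigendecomposition K = P diag(w) P^T
keep = w > 1e-10                    # drop numerically zero eigenvalues

def empirical_map(Xnew):
    """Explicit map h(x) = diag(w)^(-1/2) P^T k(x), with k(x) = (K(x_1, x), ..., K(x_M, x))^T."""
    k = rbf_kernel(X, Xnew)         # M x N kernel evaluations against the training set
    return np.diag(w[keep] ** -0.5) @ P[:, keep].T @ k

H = empirical_map(X)                # columns are h(x_1), ..., h(x_M)
# Inner products in the empirical feature space equal the kernel values:
print(np.allclose(H.T @ H, K))      # True
```

Because the map is explicit and finite-dimensional, the structure of the space can be inspected directly, which is the interpretability gain the preface refers to.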