The support vector machine (SVM) is a supervised learning method that generates input–output mapping functions from a set of labeled training data. The mapping function can be either a classification function, which assigns a category to the input data, or a regression function. For classification, nonlinear kernel functions are often used to transform the input data into a high-dimensional feature space in which they become more separable than in the original input space; maximum-margin hyperplanes are then constructed in that space. The resulting model depends on only a subset of the training data near the class boundaries. Similarly, the model produced by support vector regression ignores any training data that is sufficiently close to the model prediction. For this reason, SVMs are said to belong to the family of “kernel methods”.
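The idea that the trained model depends on only a subset of the training data can be made concrete with a small sketch. The example below is not taken from this volume; it is a hypothetical illustration using the scikit-learn library, assuming an RBF-kernel classifier on synthetic two-class data:

```python
# Hypothetical sketch (scikit-learn, not code from this volume): an
# RBF-kernel SVM classifier whose decision function depends only on
# the support vectors lying near the class boundary.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated Gaussian blobs serve as toy labeled training data.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=1.0, random_state=0)

# A nonlinear kernel implicitly maps inputs to a high-dimensional
# feature space, where a maximum-margin hyperplane is constructed.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

# Only a subset of the training points become support vectors.
n_sv = clf.support_vectors_.shape[0]
print(n_sv, "support vectors out of", len(X), "training points")
```

Removing any training point that is not a support vector and refitting would leave the decision function unchanged, which is precisely the sparseness property described above.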
In addition to their solid mathematical foundation in statistical learning theory, SVMs have demonstrated highly competitive performance in numerous real-world applications, such as bioinformatics, text mining, face recognition, and image processing. This has established SVMs as one of the state-of-the-art tools for machine learning and data mining, alongside other soft computing techniques, e.g., neural networks and fuzzy systems.
This volume comprises 20 chapters selected from a myriad of recent contributions: novel SVM applications, powerful SVM algorithms, and enlightening theoretical analyses. Written by experts in their respective fields, the first 12 chapters concentrate on SVM theory, whereas the subsequent 8 chapters emphasize practical applications, although the “decision boundary” separating these two categories is rather “fuzzy”.