Date

2002

The theory of the Support Vector Machine (SVM) algorithm is based on statistical learning theory and can be applied to pattern recognition and regression. Training an SVM leads to either a quadratic programming (QP) problem or a linear programming (LP) problem, depending on the specific norm used when the distance between the two classes is computed. The l1 or l∞ norm distance leads to a large-scale linear programming problem when the sample size is very large. We propose to apply the Benders decomposition technique to the resulting LP in the regression case. In addition, other decomposition techniques such as support clusters and bagging methods have been developed. A very efficient data preprocessing method for SVMs, called gamma-SVM, has also been developed. This method reduces the size of the training set while preserving the important data points that are strong candidates to become the support vectors defining the decision function.
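As a concrete illustration of how an l1-norm formulation turns SVM regression training into an LP, the sketch below solves the epsilon-insensitive l1-norm regression problem with a generic LP solver. It is a minimal sketch, not the thesis's Benders-decomposed solver: the use of scipy.optimize.linprog and the parameter names C and eps are assumptions made for this example.

```python
# Minimal sketch: l1-norm SVM regression written as a linear program.
# Assumes scipy's generic LP solver; the thesis instead decomposes this
# LP (which is large when n is large) via Benders decomposition.
import numpy as np
from scipy.optimize import linprog

def l1_svr_lp(X, y, C=1.0, eps=0.1):
    """min ||w||_1 + C*sum(xi + xi*) subject to epsilon-insensitive constraints.

    The l1 norm is linearized by splitting w = u - v with u, v >= 0.
    Decision vector z = [u (d), v (d), b (1), xi (n), xi* (n)].
    """
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(2 * n)])

    #  y_i - x_i.(u - v) - b - xi_i  <= eps   (slack for under-prediction)
    #  x_i.(u - v) + b - y_i - xi*_i <= eps   (slack for over-prediction)
    A1 = np.hstack([-X, X, -np.ones((n, 1)), -np.eye(n), np.zeros((n, n))])
    A2 = np.hstack([X, -X, np.ones((n, 1)), np.zeros((n, n)), -np.eye(n)])
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([eps - y, eps + y])

    # u, v and both slack blocks are nonnegative; the bias b is free.
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * (2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:d] - z[d:2 * d], z[2 * d]   # (w, b)
```

The constraint matrix has 2n rows and 2d + 1 + 2n columns, which is why a very large sample size makes the LP expensive and motivates decomposition.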


Comparisons with other decomposition methods such as SVMTorch and SVMFu reveal that the gamma-SVM, support clusters, subsampling, and bagging methods are more efficient in terms of CPU time. The gamma-SVM method outperforms SVMFu and SVMTorch in generalization error. For the support clusters, subsampling, and bagging methods, the generalization error is close to or higher than that of SVMFu and SVMTorch; some information is lost in exchange for speeding up the algorithm. SVM regression has been applied to an option pricing model and to prediction of the S&P 500 daily return. Comparisons with multilayer perceptron and radial basis function networks are also presented.
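For readers who want a feel for the kind of return-prediction experiment described above, the sketch below fits an SVM regressor to a synthetic daily-return series using lagged returns as features. Everything here is an assumption made for illustration: the synthetic data, the 5-lag feature window, and scikit-learn's SVR (a modern RBF-kernel QP solver, not the thesis's LP/Benders machinery or its actual S&P 500 data).

```python
# Hedged illustration: SVM regression on a synthetic daily-return series,
# predicting tomorrow's return from a window of the previous 5 returns.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(500)   # stand-in for daily returns

lags = 5
X = np.array([returns[i:i + lags] for i in range(len(returns) - lags)])
y = returns[lags:]

split = 400                                  # chronological train/test split
model = SVR(kernel="rbf", C=1.0, epsilon=0.001)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.5f}")
```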

Keywords

Quadratic programming; Computer science; Operations research; Linear programming; Data mining; Decomposition (Mathematics); Engineering, Industrial
