Kernel Methods: Fast Algorithms and Real Life Applications
Electronic Theses of Indian Institute of Science
Title |
Kernel Methods: Fast Algorithms and Real Life Applications
Creator |
Vishwanathan, S V N
Subject |
Computer and Information Science
Computer programming; Machine Learning; Fast String Kernel methods; Jigsawing; Support vector machines
Description |
Support Vector Machines (SVMs) have recently gained prominence in the field of machine learning and pattern classification (Vapnik, 1995; Herbrich, 2002; Schölkopf and Smola, 2002). Classification is achieved by finding a separating hyperplane in a feature space, which can be mapped back onto a non-linear surface in the input space. However, training an SVM involves solving a quadratic optimization problem, which tends to be computationally intensive, can suffer from stability problems, and is non-trivial to implement. This thesis proposes a fast iterative Support Vector training algorithm that overcomes some of these problems. Our algorithm, which we christen SimpleSVM, works mainly for the quadratic soft-margin loss (also called the l2 formulation); we also sketch an extension to the linear soft-margin loss (also called the l1 formulation). SimpleSVM works by incrementally changing a candidate Support Vector set using a locally greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived by a simple (yet computationally crucial) modification of the incremental SVM training algorithms of Cauwenberghs and Poggio (2001), which allows us to perform update operations very efficiently. We give constant-time methods for initializing the algorithm, and present experimental evidence of its speed compared to methods such as Sequential Minimal Optimization and the Nearest Point Algorithm. We present results on a variety of real-life datasets to validate our claims.

In many real-life applications, especially for the l2 formulation, the kernel matrix K ∈ R^(n×n) can be written as K = Zᵀ Z + Λ, where Z ∈ R^(m×n) with m …
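The abstract describes SimpleSVM only at a high level. The sketch below illustrates the kind of locally greedy working-set loop it refers to, for the l2 soft-margin formulation: a candidate Support Vector set is grown one violating point at a time, and the KKT system restricted to that set is solved exactly before the next point is admitted. This is a schematic reading of the abstract, not the thesis's implementation: the function names and the RBF kernel are assumptions, and where SimpleSVM performs cheap rank-one updates in the spirit of Cauwenberghs and Poggio (2001), this sketch simply re-solves the restricted system from scratch.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian RBF kernel matrix (an assumed choice; any PSD kernel would do).
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def greedy_working_set_svm(X, y, C=10.0, gamma=1.0, max_iter=200, tol=1e-6):
    """Schematic greedy working-set training for the l2 soft-margin SVM dual.

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    The l2 formulation folds the slack penalty into the kernel
    (K_tilde = K + I/C), so the dual only has alpha_i >= 0 constraints.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma) + np.eye(n) / C             # l2 soft-margin kernel
    S = [int(np.argmax(y == 1)), int(np.argmax(y == -1))]   # seed: one point per class
    alpha, b = np.zeros(n), 0.0
    for _ in range(max_iter):
        # Solve the KKT system restricted to the working set S:
        #   [ diag(y_S) K_SS diag(y_S)  y_S ] [alpha_S]   [1]
        #   [ y_S^T                      0  ] [   b   ] = [0]
        ys = y[S].astype(float)
        Q = K[np.ix_(S, S)] * np.outer(ys, ys)
        A = np.block([[Q, ys[:, None]], [ys[None, :], np.zeros((1, 1))]])
        sol = np.linalg.solve(A, np.append(np.ones(len(S)), 0.0))
        alpha[:] = 0.0
        alpha[S], b = sol[:-1], sol[-1]
        # Points whose multiplier went negative leave the candidate set.
        # (Degenerate cases, e.g. the set emptying out, are not handled here.)
        if np.any(alpha[S] < -tol):
            S = [i for i in S if alpha[i] > tol]
            continue
        # Add the worst violator of the margin condition y_i f(x_i) >= 1.
        margins = y * (K @ (alpha * y) + b)
        outside = [i for i in range(n) if i not in S]
        worst = min(outside, key=lambda i: margins[i], default=None)
        if worst is None or margins[worst] >= 1 - tol:
            break                                            # all KKT conditions hold
        S.append(worst)
    return alpha, b
```

The closing (truncated) sentence of the abstract observes that, for the l2 formulation, the kernel matrix often has low-rank-plus-diagonal structure, K = Zᵀ Z + Λ. One standard way to exploit such structure, not necessarily the scheme developed in the thesis, is the Sherman-Morrison-Woodbury identity, which reduces a linear solve with K from O(n³) to O(n m² + m³) when Z is m×n with m much smaller than n. A minimal sketch, again with hypothetical names:

```python
import numpy as np

def solve_low_rank_plus_diag(Z, lam, r):
    """Solve (Z^T Z + diag(lam)) x = r via the Woodbury identity.

    Z: (m, n) factor with m << n; lam: (n,) positive diagonal of Lambda;
    r: (n,) right-hand side.  Cost is O(n m^2 + m^3) instead of O(n^3).
    """
    m, _ = Z.shape
    lam_inv_r = r / lam                      # Lambda^{-1} r
    Z_lam_inv = Z / lam                      # Z Lambda^{-1} (scales the columns of Z)
    small = np.eye(m) + Z_lam_inv @ Z.T      # I_m + Z Lambda^{-1} Z^T  (m x m)
    correction = Z_lam_inv.T @ np.linalg.solve(small, Z @ lam_inv_r)
    return lam_inv_r - correction

# Quick consistency check against a dense solve on a small random instance.
rng = np.random.default_rng(0)
m, n = 5, 200
Z = rng.standard_normal((m, n))
lam = rng.uniform(0.5, 2.0, size=n)
r = rng.standard_normal(n)
x = solve_low_rank_plus_diag(Z, lam, r)
assert np.allclose((Z.T @ Z + np.diag(lam)) @ x, r)
```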
Publisher |
Indian Institute of Science
Contributor |
Murthy, Narasimha M
Date |
2005-02-08T06:09:35Z
2003-06
Type |
Electronic Thesis and Dissertation
Format |
7491868 bytes
application/pdf
Identifier |
http://hdl.handle.net/2005/49
Language |
en
Rights |
I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.