Kernel learning as minimization of the single validation estimate
Abstract
To prevent overfitting in traditional support vector kernel learning, we propose to learn a kernel (jointly with the cost parameter C) by minimizing the single validation estimate with a sequential linear filter algorithm. Additionally, we introduce a simple heuristic that improves risk estimation by randomly swapping several points between the validation and training sets. Contrary to previous works, which use several validation sets to improve risk estimation, our strategy does not increase the number of optimization variables. This is made possible by Karasuyama and Takeuchi's multiple incremental decremental support vector learning algorithm. A synthetic signal classification problem demonstrates the effectiveness of our method; the main parameters of the learned kernel are the finite impulse responses of a filter bank.
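The random-swap heuristic mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the function name `swap_points`, its arguments, and the use of index lists are assumptions for the sake of the example, not details taken from the paper.

```python
import random

def swap_points(train_idx, val_idx, n_swaps, rng=None):
    """Randomly exchange n_swaps points between the training and
    validation index lists (illustrative sketch, not the paper's code).

    The sizes of both sets are preserved; only the membership of the
    swapped points changes, which perturbs the validation estimate
    without adding optimization variables.
    """
    rng = rng or random.Random()
    train_idx, val_idx = list(train_idx), list(val_idx)
    for _ in range(n_swaps):
        i = rng.randrange(len(train_idx))
        j = rng.randrange(len(val_idx))
        # Exchange one training point with one validation point.
        train_idx[i], val_idx[j] = val_idx[j], train_idx[i]
    return train_idx, val_idx
```

In an actual run, such a swap would be followed by a warm-started update of the SVM solution (e.g. via an incremental/decremental solver) rather than retraining from scratch.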
Keywords
Synthetic signals
Support vector learning
Optimization variables
Machine learning
Kernel learning
Finite-impulse response
Complementary constraints
Bi-level optimization
Support vector machines
Signal processing
Validation sets
Learning algorithms
Risk estimation
Optimization
Learning systems
Filter banks
Artificial intelligence