Journal of Systems Engineering and Electronics ›› 2010, Vol. 21 ›› Issue (1): 72-80. DOI: 10.3969/j.issn.1004-4132.2010.01.013

• CONTROL THEORY AND APPLICATION •

Kernel matrix learning with a general regularized risk functional criterion

Chengqun Wang1, Jiming Chen1,∗, Chonghai Hu2, and Youxian Sun1   

  1. State Key Laboratory of Industrial Control Technology, Department of Control Science and Engineering, Zhejiang University, Hangzhou 310027, P. R. China;
    2. Department of Mathematics, Zhejiang University, Hangzhou 310027, P. R. China
  • Online:2010-02-26 Published:2010-01-03
  • Supported by:

    This work was supported by the National Natural Science Foundation of China (60736021) and the Joint Funds of NSFC-Guangdong Province (U0735003).

Abstract:

Kernel-based methods work by embedding the data into a feature space and then searching for linear hypotheses among the embedded data points. Their performance depends largely on which kernel is used. A promising approach is to learn the kernel from the data automatically. A general regularized risk functional (RRF) criterion for kernel matrix learning is proposed. Compared with the RRF criterion, the general RRF criterion takes into account the geometric distributions of the embedded data points. It is proven that the distance between different geometric distributions can be estimated by their centroid distance in the reproducing kernel Hilbert space. Using this criterion for kernel matrix learning leads to a convex quadratically constrained quadratic programming (QCQP) problem. For several commonly used loss functions, their mathematical formulations are given. Experimental results on a collection of benchmark data sets demonstrate the effectiveness of the proposed method.
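The centroid-distance idea mentioned above can be illustrated with a short sketch. In the RKHS induced by a kernel k, the squared distance between the empirical centroids (kernel mean embeddings) of two samples expands entirely in terms of kernel evaluations, so it can be computed from the kernel matrix alone. The Gaussian kernel and the function names below are illustrative choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def centroid_distance_sq(X, Y, sigma=1.0):
    # Squared RKHS distance between the empirical centroids of X and Y,
    # expanded via the kernel trick:
    #   ||mu_X - mu_Y||^2 = mean k(x, x') + mean k(y, y') - 2 mean k(x, y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()
```

For identical samples the distance is exactly zero, and it grows as the two samples' distributions separate, which is what makes it usable as a surrogate for the distance between the underlying geometric distributions.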