Journal of Systems Engineering and Electronics ›› 2019, Vol. 30 ›› Issue (5): 875-885.doi: 10.21629/JSEE.2019.05.06

• Electronics Technology •

A combined algorithm of K-means and MTRL for multi-class classification

Mengfan XUE, Lei HAN, Dongliang PENG*

  • Received:2018-10-15 Online:2019-10-08 Published:2019-10-09
  • Contact: Dongliang PENG
  • About authors: XUE Mengfan was born in 1990. She received her Ph.D. degree from Xidian University in 2016. She is currently a lecturer at Hangzhou Dianzi University. Her research interests are signal processing and machine learning. HAN Lei was born in 1994. He received his M.S. degree from Hangzhou Dianzi University. His research interests are machine learning and automatic target recognition. PENG Dongliang was born in 1977. He received his B.S. and M.S. degrees in flight vehicle design and engineering from the Harbin Institute of Technology, Harbin, China, in 1998 and 2000, respectively, and his Ph.D. degree in control science and engineering from Zhejiang University, Hangzhou, China, in 2013. In 2003, he joined Hangzhou Dianzi University, Hangzhou, China, where he is currently a professor in the School of Automation. His research interests include information fusion and estimation theory.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61703131; 61703129; 61701148; 61703128)


Multi-class classification is commonly handled by decomposition: a multi-class classification task is broken down into several binary classification tasks. To improve multi-class classification accuracy when samples are insufficient, this paper proposes a multi-class classification method that combines K-means with multi-task relationship learning (MTRL). The method first uses the one-vs-rest (OvR) strategy to decompose the multi-class task into binary classification tasks. K-means is then used to downsample the dataset of each task, which reduces training cost while helping to prevent over-fitting of the model. Finally, the sampled datasets are applied to MTRL, and the binary classifiers are trained jointly. By exploiting the relationships among tasks during training, MTRL improves the classification accuracy of each binary classifier. The effectiveness of the proposed approach is demonstrated by experimental results on the Iris, Wine, Multiple Features, Wireless Indoor Localization and Avila datasets.
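The OvR decomposition and K-means downsampling steps described above can be sketched as follows. This is a minimal illustration using scikit-learn on the Iris dataset; the paper's MTRL joint-training stage is not reproduced here, so independent logistic-regression classifiers stand in for the jointly trained binary classifiers:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def ovr_tasks(X, y):
    """Decompose a multi-class problem into one-vs-rest binary tasks."""
    return [(X, (y == c).astype(int)) for c in np.unique(y)]

def kmeans_downsample(X, y, k):
    """Replace each class's samples with at most k K-means centroids."""
    Xs, ys = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        n = min(k, len(Xc))
        km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(Xc)
        Xs.append(km.cluster_centers_)
        ys.append(np.full(n, label))
    return np.vstack(Xs), np.concatenate(ys)

X, y = load_iris(return_X_y=True)

# one binary classifier per OvR task, each trained on a downsampled set
classifiers = []
for Xt, yt in ovr_tasks(X, y):
    Xs, ys = kmeans_downsample(Xt, yt, k=20)
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    classifiers.append(clf)

# predict by taking the class whose binary classifier scores highest
scores = np.column_stack([c.decision_function(X) for c in classifiers])
pred = scores.argmax(axis=1)
acc = (pred == y).mean()
print("training accuracy:", acc)
```

Downsampling with centroids (rather than random subsampling) preserves the shape of each class's distribution while shrinking the training set of every binary task.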

Key words: machine learning, multi-class classification, K-means, multi-task relationship learning (MTRL), over-fitting