Journal of Systems Engineering and Electronics ›› 2023, Vol. 34 ›› Issue (2): 439-459. doi: 10.23919/JSEE.2023.000051

• CONTROL THEORY AND APPLICATION •

A review of mobile robot motion planning methods: from classical motion planning workflows to reinforcement learning-based architectures

Lu DONG1(), Zichen HE2,3(), Chunwei SONG3(), Changyin SUN3,4,*()   

  1 School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China
    2 Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 201804, China
    3 College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
    4 School of Automation, Southeast University, Nanjing 210096, China
  • Received: 2022-03-08 Online: 2023-04-18 Published: 2023-04-18
  • Contact: Changyin SUN E-mail: ldong90@seu.edu.cn; 1910646@tongji.edu.cn; 2030739@tongji.edu.cn; cysun@seu.edu.cn
  • About author:
    DONG Lu was born in 1990. She received her B.S. degree in School of Physics and Ph.D. degree in School of Automation from Southeast University, Nanjing, China, in 2012 and 2017, respectively. She is currently an associate professor with the School of Cyber Science and Engineering, Southeast University. Her research interests include adaptive dynamic programming, event-triggered control, nonlinear system control, and optimization. E-mail: ldong90@seu.edu.cn

    HE Zichen was born in 1995. He received his B.S. and M.S. degrees in mechanical engineering from China University of Petroleum, Beijing, China, in 2016 and 2019, respectively. He is currently pursuing his Ph.D. degree in control science and engineering with Tongji University, Shanghai, China. His research interests include reinforcement learning, multi-robot collaborative navigation, and motion planning. E-mail: 1910646@tongji.edu.cn

    SONG Chunwei was born in 1998. He received his B.E. degree in automation from Hunan University, Changsha, China, in 2020. He is currently pursuing his M.S. degree in control science and engineering at the School of Electronics and Information Engineering, Tongji University, Shanghai, China. His research interests include multi-agent reinforcement learning and robot navigation. E-mail: 2030739@tongji.edu.cn

    SUN Changyin was born in 1975. He received his B.S. degree in applied mathematics from the College of Mathematics, Sichuan University, Chengdu, China, in 1996, and M.S. and Ph.D. degrees in electrical engineering from Southeast University, Nanjing, China, in 2001 and 2004, respectively. He is a professor with the School of Automation, Southeast University, Nanjing, China. His research interests include intelligent control, flight control, and optimal theory. E-mail: cysun@seu.edu.cn
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (62173251), the "Zhishan" Scholars Program of Southeast University, the Fundamental Research Funds for the Central Universities, and the Shanghai Gaofeng & Gaoyuan Project for University Academic Program Development (22120210022)

Abstract:

Motion planning is critical to realizing the autonomous operation of mobile robots. As the complexity and randomness of robot application scenarios increase, the planning capability of classical hierarchical motion planners is challenged. With the development of machine learning, the deep reinforcement learning (DRL)-based motion planner has gradually become a research hotspot due to several advantageous features. The DRL-based motion planner is model-free and does not rely on a prior structured map. Most importantly, the DRL-based motion planner unifies the global planner and the local planner. In this paper, we provide a systematic review of various motion planning methods. First, we summarize the representative and state-of-the-art works for each submodule of the classical motion planning architecture and analyze their performance features. Then, we concentrate on summarizing reinforcement learning (RL)-based motion planning approaches, including motion planners combined with RL improvements, map-free RL-based motion planners, and multi-robot cooperative planning methods. Finally, we analyze in detail the urgent challenges faced by these mainstream RL-based motion planners, review some state-of-the-art works addressing these issues, and propose suggestions for future research.

Key words: mobile robot, reinforcement learning (RL), motion planning, multi-robot cooperative planning