Journal of Systems Engineering and Electronics ›› 2021, Vol. 32 ›› Issue (4): 927-938. doi: 10.23919/JSEE.2021.000079

• CONTROL THEORY AND APPLICATION •

A guidance method for coplanar orbital interception based on reinforcement learning

Xin ZENG, Yanwei ZHU*, Leping YANG, Chengming ZHANG

  1. College of Aeronautics and Astronautics, National University of Defense Technology, Changsha 410073, China
  • Received: 2020-11-12; Online: 2021-08-18; Published: 2021-09-30
  • Contact: Yanwei ZHU. E-mail: xzavier0214@outlook.com; zywnudt@163.com; ylpnudt@163.com; zhchm_vincent@163.com
  • About the authors:
    ZENG Xin was born in 1992. He received his B.S. and M.S. degrees from the National University of Defense Technology (NUDT), Changsha, China, in 2014 and 2016, respectively. He is a Ph.D. student with the College of Aeronautics and Astronautics, NUDT. His research interests include aerospace dynamics, guidance and control, and the application of artificial intelligence to the control of astronautic systems. E-mail: xzavier0214@outlook.com
    ZHU Yanwei was born in 1981. He received his B.S., M.S., and Ph.D. degrees from the National University of Defense Technology (NUDT), Changsha, China, in 2002, 2004, and 2009, respectively. He is an associate professor with the College of Aeronautics and Astronautics, NUDT. His research interests include aerospace dynamics, guidance and control, and astronautic mission planning and design. E-mail: zywnudt@163.com
    YANG Leping was born in 1964. He received his B.S. and M.S. degrees from the National University of Defense Technology (NUDT), Changsha, China, in 1984 and 1987, respectively. He is a professor with the College of Aeronautics and Astronautics, NUDT. His research interests include aerospace dynamics, guidance and control, and astronautic mission planning and design. E-mail: ylpnudt@163.com
    ZHANG Chengming was born in 1998. He received his B.S. degree from the National University of Defense Technology (NUDT), Changsha, China, in 2019. He is a student with the College of Aeronautics and Astronautics, NUDT. His research interests include aerospace dynamics, guidance and control, and the application of artificial intelligence to the control of astronautic systems. E-mail: zhchm_vincent@163.com
  • Supported by:
    This work was supported by the National Defense Science and Technology Innovation (18-163-15-LZ-001-004-13).

Abstract:

This paper investigates a guidance method based on reinforcement learning (RL) for coplanar orbital interception in a continuous low-thrust scenario. The problem is formulated as a Markov decision process (MDP) model, and a well-designed RL algorithm, experience-based deep deterministic policy gradient (EBDDPG), is proposed to solve it. By taking advantage of prior information generated through an optimal control model, the proposed algorithm not only resolves the convergence problem common to RL algorithms, but also trains an efficient deep neural network (DNN) controller for the chaser spacecraft to generate the control sequence. Numerical simulation results show that the proposed algorithm is feasible and that the trained DNN controller improves computational efficiency over traditional optimization methods by roughly two orders of magnitude.

Key words: orbital interception, reinforcement learning (RL), Markov decision process (MDP), deep neural network (DNN)