Journal of Systems Engineering and Electronics ›› 2023, Vol. 34 ›› Issue (1): 117-128. doi: 10.23919/JSEE.2023.000036
• SYSTEMS ENGINEERING •
Guangran CHENG1,2, Lu DONG3, Xin YUAN1, Changyin SUN1,2,*
[1] FUSELLI D, DE ANGELIS F, BOARO M, et al. Action dependent heuristic dynamic programming for home energy resource scheduling. International Journal of Electrical Power & Energy Systems, 2013, 48: 148–160.
[2] ERSEGHE T, ZANELLA A, CODEMO C G. Optimal and compact control policies for energy storage units with single and multiple batteries. IEEE Trans. on Smart Grid, 2014, 5(3): 1308–1317.
[3] ALBADI M H, EL-SAADANY E F. A summary of demand response in electricity markets. Electric Power Systems Research, 2008, 78(11): 1989–1996.
[4] SETLHAOLO D, XIA X H. Optimal scheduling of household appliances with a battery storage system and coordination. Energy and Buildings, 2015, 94: 61–70.
[5] LIU C Y, WANG X L, WU X, et al. Economic scheduling model of microgrid considering the lifetime of batteries. IET Generation, Transmission & Distribution, 2017, 11(3): 759–767.
[6] LUNA A C, DIAZ N L, GRAELLS M, et al. Mixed-integer-linear-programming-based energy management system for hybrid PV-wind-battery microgrids: modeling, design, and experimental verification. IEEE Trans. on Power Electronics, 2016, 32(4): 2769–2783.
[7] GAN L K, ZHANG P, LEE J, et al. Data-driven energy management system with Gaussian process forecasting and MPC for interconnected microgrids. IEEE Trans. on Sustainable Energy, 2020, 12(1): 695–704.
[8] ARASTEH F, RIAHY G H. MPC-based approach for online demand side and storage system management in market based wind integrated power systems. International Journal of Electrical Power & Energy Systems, 2019, 106: 124–137.
[9] ZHANG Y, WANG R, ZHANG T, et al. Model predictive control-based operation management for a residential microgrid with considering forecast uncertainties and demand response strategies. IET Generation, Transmission & Distribution, 2016, 10(10): 2367–2378.
[10] HABIB M, LADJICI A A, BOLLIN E, et al. One-day ahead predictive management of building hybrid power system improving energy cost and batteries lifetime. IET Renewable Power Generation, 2019, 13(3): 482–490.
[11] HU K Y, LI W J, WANG L D, et al. Energy management for multi-microgrid system based on model predictive control. Frontiers of Information Technology & Electronic Engineering, 2018, 19(11): 1340–1351.
[12] LU R Z, HONG S H, YU M M. Demand response for home energy management using reinforcement learning and artificial neural network. IEEE Trans. on Smart Grid, 2019, 10(6): 6629–6639.
[13] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529–533.
[14] SILVER D, LEVER G, HEESS N, et al. Deterministic policy gradient algorithms. Proc. of the 31st International Conference on Machine Learning, 2014, 32: 387–395.
[15] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning. https://arxiv.org/abs/1509.02971v2.
[16] SCHULMAN J, LEVINE S, ABBEEL P, et al. Trust region policy optimization. Proc. of the 32nd International Conference on Machine Learning, 2015. DOI: 10.48550/arXiv.1502.05477.
[17] DONG L, TANG Y F, HE H B, et al. An event-triggered approach for load frequency control with supplementary ADP. IEEE Trans. on Power Systems, 2016, 32(1): 581–589.
[18] DONG L, ZHONG X N, SUN C Y, et al. Adaptive event-triggered control based on heuristic dynamic programming for nonlinear discrete-time systems. IEEE Trans. on Neural Networks and Learning Systems, 2016, 28(7): 1594–1605.
[19] WU Z Q, WEI J, ZHANG F, et al. MDLB: a metadata dynamic load balancing mechanism based on reinforcement learning. Frontiers of Information Technology & Electronic Engineering, 2020, 21(7): 1034–1046.
[20] XU X, JIA Y W, XU Y, et al. A multi-agent reinforcement learning-based data-driven method for home energy management. IEEE Trans. on Smart Grid, 2020, 11(4): 3201–3211.
[21] BAHRAMI S, CHEN Y C, WONG V W. Deep reinforcement learning for demand response in distribution networks. IEEE Trans. on Smart Grid, 2020, 12(2): 1496–1506.
[22] WAN Z Q, LI H P, HE H B, et al. Model-free real-time EV charging scheduling based on deep reinforcement learning. IEEE Trans. on Smart Grid, 2018, 10(5): 5246–5257.
[23] CAO J, HARROLD D, FAN Z, et al. Deep reinforcement learning-based energy storage arbitrage with accurate lithium-ion battery degradation model. IEEE Trans. on Smart Grid, 2020, 11(5): 4513–4521.
[24] YU L, XIE W W, XIE D, et al. Deep reinforcement learning for smart home energy management. IEEE Internet of Things Journal, 2019, 7(4): 2751–2762.
[25] MOCANU E, MOCANU D C, NGUYEN P H, et al. On-line building energy optimization using deep reinforcement learning. IEEE Trans. on Smart Grid, 2018, 10(4): 3698–3708.
[26] GOROSTIZA F S, GONZALEZ-LONGATT F M. Deep reinforcement learning-based controller for SOC management of multi-electrical energy storage system. IEEE Trans. on Smart Grid, 2020, 11(6): 5039–5050.
[27] ZHU F Q, YANG Z P, LIN F, et al. Decentralized cooperative control of multiple energy storage systems in urban railway based on multiagent deep reinforcement learning. IEEE Trans. on Power Electronics, 2020, 35(9): 9368–9379.
[28] HUANG T, LIU D R. A self-learning scheme for residential energy system control and management. Neural Computing and Applications, 2013, 22(2): 259–269.
[29] MBUWIR B V, RUELENS F, SPIESSENS F, et al. Battery energy management in a microgrid using batch reinforcement learning. Energies, 2017, 10(11): 1846.
[30] KIM S, LIM H. Reinforcement learning based energy management algorithm for smart energy buildings. Energies, 2018, 11(8): 2010.
[31] LIU L T, GAURAV S. A solution to time-varying Markov decision processes. IEEE Robotics and Automation Letters, 2018, 3(3): 1631–1638.
[32] VAZQUEZ-CANTELI J R, NAGY Z. Reinforcement learning for demand response: a review of algorithms and modeling techniques. Applied Energy, 2019, 235: 1072–1089.
[33] WEI Q L, LIU D R, SHI G. A novel dual iterative Q-learning method for optimal battery management in smart residential environments. IEEE Trans. on Industrial Electronics, 2014, 62(4): 2509–2518.
[34] ZHU Y H, ZHAO D B, LI X J, et al. Control-limited adaptive dynamic programming for multi-battery energy storage systems. IEEE Trans. on Smart Grid, 2018, 10(4): 4235–4244.
[35] KONG W C, ZHAO Y D, JIA Y W, et al. Short-term residential load forecasting based on LSTM recurrent neural network. IEEE Trans. on Smart Grid, 2017, 10(1): 841–851.
[36] JONSSON T, PINSON P, NIELSEN H A, et al. Forecasting electricity spot prices accounting for wind power predictions. IEEE Trans. on Sustainable Energy, 2012, 4(1): 210–218.
[37] BELLMAN R. Dynamic programming. Science, 1966, 153(3731): 34–37.
[38] LIN L J. Reinforcement learning for robots using neural networks. Pittsburgh: Carnegie Mellon University, 1992.