[1] U.S. ARMY. The U.S. Army in multi-domain operations 2028. https://publicintelligence.net/usarmy-multidomain-ops-2028/.
[2] Singapore Government Agency. Fact sheet: see more, shoot further, smarter – RSAF’s island air defence system. https://www.mindef.gov.sg/web/portal/mindef/news-and-events/latest-releases/article-detail/2020/December/17dec20_fs.
[3] FUJIWARA-GREVE T. Non-cooperative game theory. Tokyo: Springer Japan, 2015.
[4] KORZHYK D, YIN Z, KIEKINTVELD C, et al. Stackelberg vs. Nash in security games: an extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 2011, 41: 297–327.
[5] YUAN W L, LUO J R, LU L N, et al. Methods in adversarial intelligent game: a holistic comparative analysis from perspective of game theory and reinforcement learning. Computer Science, 2022, 49(8): 191–204.
[6] PITA J, JAIN M, MARECKI J, et al. Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport. Proc. of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Industrial Track, 2008: 125–132.
[7] TSAI J, RATHI S, KIEKINTVELD C, et al. IRIS: a tool for strategic security allocation in transportation networks. Cambridge: Cambridge University Press, 2011.
[8] PITA J, BELLAMANE H, JAIN M, et al. Security applications: lessons of real-world deployment. ACM SIGecom Exchanges, 2009, 8(2): 1–4.
[9] HAN C Y, LUNDAY B J, ROBBINS M J. A game theoretic model for the optimal location of integrated air defense system missile batteries. INFORMS Journal on Computing, 2016, 28(3): 405–416.
[10] KEITH A, AHNER D. Counterfactual regret minimization for integrated cyber and air defense resource allocation. European Journal of Operational Research, 2021, 292(1): 95–107.
[11] ROBERSON B. The Colonel Blotto game. Economic Theory, 2006, 29(1): 1–24.
[12] KOVENOCK D, ROBERSON B. Coalitional Colonel Blotto games with application to the economics of alliances. Journal of Public Economic Theory, 2012, 14(4): 653–676.
[13] ZOU M W, CHEN S F, LUO J R, et al. An evolutionary learning approach for anti-jamming game in cognitive radio confrontation. Proc. of the IEEE International Conference on Systems, Man, and Cybernetics, 2022: 3210–3215.
[14] HASAN K, SHETTY S, SOKOLOWSKI J A, et al. Security game for cyber physical systems. Proc. of the Communications and Networking Symposium, 2018: 1–12.
[15] CLEMPNER J, POZNYAK A. Stackelberg security games. Expert Systems with Applications, 2015, 42(8): 3967–3979.
[16] SINHA A, FANG F, AN B, et al. Stackelberg security games: looking beyond a decade of success. Proc. of the International Joint Conference on Artificial Intelligence, 2018: 5494–5501.
[17] CONITZER V, SANDHOLM T. Computing the optimal strategy to commit to. Proc. of the 7th ACM Conference on Electronic Commerce, 2006: 82–90.
[18] MUTZARI D, AUMANN Y, KRAUS S. Robust solutions for multi-defender Stackelberg security games. https://arxiv.org/pdf/2204.14000.pdf.
[19] NGUYEN T, JIANG A, TAMBE M. Stop the compartmentalization: unified robust algorithms for handling uncertainties in security games. Proc. of the International Conference on Autonomous Agents and Multi-Agent Systems, 2014: 317–324.
[20] ALCANTARA-JIMENEZ G, CLEMPNER J B. Repeated Stackelberg security games: learning with incomplete state information. https://www.sciencedirect.com/science/article/pii/S0951832019304478.
[21] GUO Q, AN B, BOSANSKY B, et al. Comparing strategic secrecy and Stackelberg commitment in security games. Proc. of the International Joint Conference on Artificial Intelligence, 2017: 3691–3699.
[22] VINYALS O, BABUSCHKIN I, CZARNECKI W M, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 2019, 575(7782): 350–354.
[23] SANDHOLM T, GILPIN A, CONITZER V. Mixed-integer programming methods for finding Nash equilibria. Proc. of the National Conference on Artificial Intelligence, 2005: 495–501.
[24] LIU W T, LEI J L, YI P, et al. No-regret learning for repeated non-cooperative games with lossy bandits. https://www.sciencedirect.com/science/article/abs/pii/S0005109823006222.
[25] ZINKEVICH M, JOHANSON M, BOWLING M, et al. Regret minimization in games with incomplete information. Advances in Neural Information Processing Systems, 2007, 20: 905–912.
[26] BROWN N, SANDHOLM T. Solving imperfect-information games via discounted regret minimization. https://doi.org/10.48550/arXiv.1809.04040.
[27] LANCTOT M. Monte Carlo sampling and regret minimization for equilibrium computation and decision-making in large extensive form games. Edmonton: University of Alberta, 2013.
[28] BOWLING M, BURCH N, JOHANSON M, et al. Heads-up limit hold’em poker is solved. Science, 2015, 347(6218): 145–149.
[29] BLAIR A, SAFFIDINE A. AI surpasses humans at six-player poker. Science, 2019, 365(6456): 864–865.
[30] MORAVCIK M, SCHMID M, BURCH N, et al. DeepStack: expert-level artificial intelligence in heads-up no-limit poker. Science, 2017, 356(6337): 508–513.
[31] SHOHAM Y, LEYTON-BROWN K. Multiagent systems: algorithmic, game-theoretic, and logical foundations. Cambridge: Cambridge University Press, 2008.
[32] BROWN N. Equilibrium finding for large adversarial imperfect-information games. Pittsburgh: Carnegie Mellon University, 2020.
[33] FRITH C, FRITH U. Theory of mind. https://www.researchgate.net/publication/232296544_Theory_of_mind.