
Learning Scalable Task Assignment with Imperative-Priori Conflict Resolution in Multi-UAV Adversarial Swarm Defense Problem

Published in: Journal of Systems Science and Complexity

Abstract

The multi-UAV adversarial swarm defense (MUASD) problem is to defend a static base with a defensive UAV swarm against an adversarial UAV swarm. A widely used approach is to decompose the problem into task assignment and low-level interception strategies, and learning-based methods for task assignment are a promising direction. Existing learning-based studies generally assume a decentralized decision-making architecture, which hinders conflict resolution; a centralized architecture, by contrast, facilitates conflict resolution but is often detrimental to scalability. To achieve scalability and conflict resolution simultaneously, inspired by a self-attention-based task assignment method for the sensor target coverage problem, a scalable centralized assignment method is proposed that combines a self-attention mechanism with defender-attacker pairwise observation preprocessing (DAP-SelfAtt). An imperative-priori conflict resolution (IPCR) mechanism is then proposed to achieve conflict-free assignment, and the IPCR mechanism is parallelized to enable efficient training. To validate the algorithm, a variant of the proximal policy optimization (PPO) algorithm is employed for training in scenarios of various scales. The experimental results show that the proposed algorithm not only achieves conflict-free task assignment but also maintains scalability, and significantly improves the defense success rate.
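The paper's full DAP-SelfAtt network and IPCR mechanism are not reproduced in this abstract, but the overall pipeline it describes can be illustrated with a minimal sketch: defender-attacker pairwise observations are built, scored (here by a toy linear map standing in for the self-attention module), and then resolved into a conflict-free assignment by a greedy, fixed-priority pass. All function names, the feature choice, and the priority order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pairwise_features(defenders, attackers):
    """Build a (D, A, F) tensor of defender-attacker pairwise observations.
    Illustrative F = 3: relative x, relative y, Euclidean distance."""
    rel = attackers[None, :, :] - defenders[:, None, :]    # (D, A, 2)
    dist = np.linalg.norm(rel, axis=-1, keepdims=True)     # (D, A, 1)
    return np.concatenate([rel, dist], axis=-1)            # (D, A, 3)

def assignment_scores(feats, w):
    """Toy linear scorer standing in for the self-attention network:
    maps each pairwise feature vector to a scalar preference score."""
    return feats @ w                                       # (D, A)

def ipcr_assign(scores):
    """Greedy sketch of priority-based conflict resolution: defenders pick
    targets in a fixed priority order (here, defender index); a target
    claimed by a higher-priority defender is masked out for later ones."""
    D, A = scores.shape
    taken = np.zeros(A, dtype=bool)
    assignment = np.full(D, -1)
    for d in range(D):
        masked = np.where(taken, -np.inf, scores[d])
        if np.isfinite(masked).any():
            j = int(np.argmax(masked))
            assignment[d] = j
            taken[j] = True
    return assignment

# Usage: two defenders, two attackers; prefer nearer attackers (w weights
# distance negatively). Each attacker ends up claimed by exactly one defender.
defenders = np.array([[0.0, 0.0], [1.0, 0.0]])
attackers = np.array([[5.0, 0.0], [0.0, 5.0]])
scores = assignment_scores(pairwise_features(defenders, attackers),
                           np.array([0.0, 0.0, -1.0]))
print(ipcr_assign(scores))  # conflict-free, e.g. [0 1]
```

The fixed-index priority used here is only one possible "imperative priori"; the point of the masking pass is that no two defenders can ever emit the same target, which is the conflict-free property the abstract claims.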



Author information


Corresponding author

Correspondence to Bin Xin.

Ethics declarations

CHEN Jie and XIN Bin are editorial board members for Journal of Systems Science & Complexity and were not involved in the editorial review or the decision to publish this article. All authors declare that there are no competing interests.

Additional information

This research was supported in part by the National Natural Science Foundation of China Basic Science Research Center Program under Grant No. 62088101, the National Natural Science Foundation of China under Grant Nos. 7217117 and 92367101, the Aeronautical Science Foundation of China under Grant No. 2023Z066038 001, the Shanghai Municipal Science and Technology Major Project under Grant No. 2021SHZDZX0100, and the Chinese Academy of Engineering Strategic Research and Consulting Program under Grant No. 2023-XZ-65.


About this article


Cite this article

Zhao, Z., Chen, J., Xin, B. et al. Learning Scalable Task Assignment with Imperative-Priori Conflict Resolution in Multi-UAV Adversarial Swarm Defense Problem. J Syst Sci Complex 37, 369–388 (2024). https://doi.org/10.1007/s11424-024-4029-8

