Robust Multi-Agent Bandits Over Undirected Graphs

Published: 27 June 2023

Abstract

We consider a multi-agent multi-armed bandit setting in which n honest agents collaborate over a network to minimize regret, but m malicious agents can disrupt learning arbitrarily. Assuming the network is the complete graph, existing algorithms incur O((m + K/n) log(T)/Δ) regret in this setting, where K is the number of arms and Δ is the arm gap. For m ≪ K, this improves over the single-agent baseline regret of O(K log(T)/Δ). In this work, we show the situation is murkier beyond the case of a complete graph. In particular, we provide an instance for which honest agents using the state-of-the-art algorithm suffer (nearly) linear regret until time is doubly exponential in n. In light of this negative result, we propose a new algorithm for which the i-th agent has regret O((d_mal(i) + K/n) log(T)/Δ) on any connected and undirected graph, where d_mal(i) is the number of i's neighbors who are malicious. Thus, we generalize existing regret bounds beyond the complete graph and show that the effect of malicious agents is entirely local.
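
For context, the single-agent baseline quoted above is the classical UCB1 strategy, whose regret scales as O(K log(T)/Δ). Below is a minimal illustrative sketch of that baseline on Bernoulli arms; the function name, arm means, and horizon are hypothetical choices for illustration, and the paper's multi-agent algorithm itself is not reproduced here.

import math
import random

def ucb1_regret(means, horizon, seed=0):
    # Single-agent UCB1 baseline on Bernoulli arms; its regret grows as
    # O(K log(T)/Delta), the benchmark the abstract compares against.
    # (Illustrative sketch, not the paper's multi-agent algorithm.)
    rng = random.Random(seed)
    K = len(means)
    pulls = [0] * K
    totals = [0.0] * K
    best = max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1  # initialization: pull each arm once
        else:
            # UCB index: empirical mean plus exploration bonus.
            arm = max(range(K),
                      key=lambda a: totals[a] / pulls[a]
                                    + math.sqrt(2 * math.log(t) / pulls[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        pulls[arm] += 1
        totals[arm] += reward
        regret += best - means[arm]  # expected (pseudo-)regret increment
    return regret

# Example: two arms with gap Delta = 0.1; regret should scale like log(T).
print(ucb1_regret([0.5, 0.6], horizon=10_000))

On such an instance the suboptimal arm is pulled O(log(T)/Δ²) times, so the regret curve flattens with the horizon; the multi-agent bounds in the abstract improve on this per-agent baseline when m ≪ K, since each agent effectively explores only K/n arms plus a local malicious overhead.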

Published In

ACM SIGMETRICS Performance Evaluation Review, Volume 51, Issue 1 (SIGMETRICS '23), June 2023, 108 pages. ISSN: 0163-5999. DOI: 10.1145/3606376.
SIGMETRICS '23: Abstract Proceedings of the 2023 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, June 2023, 123 pages. ISBN: 9798400700743. DOI: 10.1145/3578338.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 June 2023
Published in SIGMETRICS Volume 51, Issue 1


Author Tags

  1. malicious agents
  2. multi-armed bandits

Qualifiers

  • Abstract

Funding Sources

  • NSF
  • ONR
