Research article · DOI: 10.1145/3607947.3608094

An Empirical Assessment of the Performance of Multi-Armed Bandits and Contextual Multi-Armed Bandits in Handling Cold-Start Bugs

Published: 28 September 2023

Abstract

Bug triaging is a crucial activity in software development that involves identifying bugs and prioritizing them for fixing. Recommending a cold-start bug, one with no assignment history, to an appropriate developer is an especially challenging problem. This work presents an empirical assessment of two reinforcement-learning approaches, multi-armed bandits (MABs) and contextual multi-armed bandits, for triaging cold-start bugs. Five publicly available open-source bug-triaging datasets are used to evaluate the algorithms with two standard metrics, reward and average reward. Two MAB algorithms, ε-Greedy and UCB, are evaluated alongside LinUCB as a contextual MAB algorithm. The results show that both MABs and contextual bandits are effective at triaging cold-start bugs, but in different settings: contextual bandits outperform MABs in reward and average reward across the simulation settings, demonstrating that contextual MAB algorithms more accurately recommend cold-start bugs to an appropriate developer.
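The contrast the abstract draws can be illustrated with a small simulation. The sketch below is not the authors' experimental setup: the developer pool, reward model, feature dimension, and parameters (ε = 0.1, α = 1.0) are illustrative assumptions. It pits a context-free ε-Greedy MAB, which treats developers as arms and ignores the bug itself, against LinUCB, which scores each developer using a per-bug feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DEVS, N_BUGS, DIM = 5, 2000, 4          # developers (arms), rounds, feature size
fix_prob = rng.uniform(0.1, 0.9, N_DEVS)  # true fix rates, unknown to the learner

def eps_greedy(eps=0.1):
    """Context-free epsilon-greedy MAB: ignores the bug's features."""
    counts = np.zeros(N_DEVS)
    means = np.zeros(N_DEVS)               # running mean reward per developer
    total = 0.0
    for _ in range(N_BUGS):
        if rng.random() < eps:
            a = int(rng.integers(N_DEVS))  # explore: pick a random developer
        else:
            a = int(np.argmax(means))      # exploit: pick the best-looking one
        r = float(rng.random() < fix_prob[a])    # reward 1 if the bug gets fixed
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]   # incremental mean update
        total += r
    return total / N_BUGS                  # average reward

def linucb(alpha=1.0):
    """LinUCB: models expected reward as linear in the bug's feature vector."""
    theta = rng.normal(size=(N_DEVS, DIM))       # true (hidden) per-developer weights
    A = [np.eye(DIM) for _ in range(N_DEVS)]     # per-arm ridge design matrices
    b = [np.zeros(DIM) for _ in range(N_DEVS)]
    total = 0.0
    for _ in range(N_BUGS):
        x = rng.normal(size=DIM)
        x /= np.linalg.norm(x)                   # bug feature vector (context)
        scores = []
        for a in range(N_DEVS):
            A_inv = np.linalg.inv(A[a])
            theta_hat = A_inv @ b[a]
            # estimated reward + upper-confidence exploration bonus
            scores.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))
        a = int(np.argmax(scores))
        # simulated outcome: fix probability follows the true linear score
        r = float(rng.random() < 1.0 / (1.0 + np.exp(-theta[a] @ x)))
        A[a] += np.outer(x, x)
        b[a] += r * x
        total += r
    return total / N_BUGS

avg_eps = eps_greedy()
avg_lin = linucb()
print(f"eps-greedy average reward: {avg_eps:.3f}")
print(f"LinUCB     average reward: {avg_lin:.3f}")
```

On real triage data the context vector would come from bug-report features (title terms, component, severity) rather than random draws; the point of the sketch is only that LinUCB's selection depends on the incoming bug while ε-Greedy's cannot.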


Cited By

  • (2024) "Reinforcement Learning in Bug Triaging." Advancing Software Engineering Through AI, Federated Learning, and Large Language Models, ch. 11, pp. 162–182. DOI: 10.4018/979-8-3693-3502-4.ch011. Online publication date: 21 Jun 2024.


    Published In

    IC3-2023: Proceedings of the 2023 Fifteenth International Conference on Contemporary Computing
    August 2023, 783 pages
    ISBN: 9798400700224
    DOI: 10.1145/3607947

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Bug Cold-Start
    2. Contextual Multi-Armed Bandits
    3. Mining Software Repositories

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    IC3 2023

