Editorial

Thoughts about Artifact Badging

Published: 23 May 2020

Abstract

Reproducibility, the extent to which consistent results are obtained when an experiment is repeated, is important as a means to validate experimental results, promote the integrity of research, and accelerate follow-up work. A commitment to artifact reviewing and badging seeks to promote reproducibility and to rank the quality of submitted artifacts.
However, as illustrated in this issue, the current badging scheme, with its focus on whether an artifact is reusable, may not identify limitations of architecture, implementation, or evaluation.
We propose that, to improve insight into artifact reproducibility, the depth and nature of artifact evaluation must move beyond simply considering whether an artifact is reusable. Artifact evaluation should consider the methods of that evaluation alongside the variation of the inputs to it. To achieve this, we suggest an extension to the scope of artifact badging, and we describe approaches and best practices arising in other communities. We seek to promote conversation and make a call to action intended to strengthen the scientific method within our domain.
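
To make the proposal concrete, below is a minimal Python sketch of what varying the inputs to an artifact evaluation could look like. It is illustrative only: the names (evaluate_artifact, fake_throughput), the relative-tolerance consistency check, and the input configurations are hypothetical assumptions, not part of the editorial or of any badging policy.

import random
import statistics

def evaluate_artifact(run_experiment, input_configs, repetitions=5, rel_tolerance=0.05):
    """Repeat an experiment over varied inputs and flag inconsistent results.

    run_experiment: callable performing one run of the artifact (hypothetical).
    input_configs: dict mapping a label to keyword arguments for one input.
    """
    report = {}
    for name, config in input_configs.items():
        results = [run_experiment(**config) for _ in range(repetitions)]
        mean = statistics.mean(results)
        spread = statistics.pstdev(results)
        # Treat a configuration as consistent if repeated runs agree to within
        # a relative tolerance; a real evaluation would substitute a
        # domain-appropriate statistical test.
        consistent = spread <= rel_tolerance * abs(mean) if mean else spread <= rel_tolerance
        report[name] = {"mean": mean, "stdev": spread, "consistent": consistent}
    return report

# Hypothetical stand-in artifact: a noisy throughput measurement.
def fake_throughput(load):
    return 10.0 * (1.0 - load) + random.gauss(0.0, 0.1)

configs = {
    "paper-default": {"load": 0.5},  # the input used in the original evaluation
    "high-load": {"load": 0.9},      # a varied input, probing robustness
}
for name, verdict in evaluate_artifact(fake_throughput, configs).items():
    print(name, verdict)

A reusability check alone would pass both configurations so long as the artifact runs; only by varying the inputs and examining the consistency of the results can an evaluator surface the kinds of limitation the editorial describes.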

References

[1]
ACM. 2018. Artifact Review and Badging. https://www.acm.org/publications/ policies/artifact-review-badging.
[2]
Vaibhav Bajpai, Anna Brunstrom, Anja Feldmann, Wolfgang Kellerer, Aiko Pras, Henning Schulzrinne, Georgios Smaragdakis, Matthias Wählisch, and Klaus Wehrle. 2019. The Dagstuhl beginners guide to reproducibility for experimental networking research.
[3]
Lorena Barba and Grigori Fursin. 2019. Reproducibility Initiative. https: //sc19.supercomputing.org/submit/reproducibility-initiative/.
[4]
E. D. Berger, S. M. Blackburn, M. Hauswirth, and M. W. Hicks. 2019. A Checklist Manifesto for Empirical Evaluation: A Preemptive Strike Against a Replication Crisis in Computer Science.
[5]
E. D. Berger, S. M. Blackburn, M. Hauswirth, and M. W. Hicks. 2019. Empirical Evaluation Guidelines.
[6]
Ronald F Boisvert. 2016. Incentivizing reproducibility. Commun. ACM 59, 10 (2016), 5--5.
[7]
Mark Handley, Costin Raiciu, Alexandru Agache, Andrei Voinescu, Andrew W Moore, Gianni Antichi, and Marcin Wójcik. 2017. Re-architecting datacenter networks and stacks for low latency and high performance. In SIGCOMM. ACM, Los Angeles, CA, USA, 29--42.
[8]
Gernot Heiser. 2020. Systems Benchmarking Crimes. https://www.cse.unsw.edu. au/~gernot/benchmarking-crimes.html [accessed Jan. 2020].
[9]
M Heroux. 2015. The TOMS initiative and policies for replicated computational results (RCR). ACM Trans. Math. Softw. 41, 3, Article Article 13 (June 2015), 5 pages.
[10]
Michael A. Heroux. 2018. SC Reproducibility Initiative. https://sc18. supercomputing.org/submit/sc-reproducibility-initiative/index.html.
[11]
Beth Plale. 2020. Transparency and Reproducibility Initiative. https://sc20. supercomputing.org/submit/transparency-reproducibility-initiative/.
[12]
Damien Saucez, Luigi Iannone, and Olivier Bonaventure. 2019. Evaluating the artifacts of SIGCOMM papers. ACM SIGCOMM Computer Communication Review 49, 2 (2019), 44--47.
[13]
Quirin Scheitle, Matthias Wählisch, Oliver Gasser, Thomas C Schmidt, and Georg Carle. 2017. Towards an ecosystem for reproducible research in computer networking. In Proceedings of the Reproducibility Workshop (Reproducibility ’17). ACM, Los Angeles, CA, USA, 5--8.
[14]
Erik van der Kouwe, Dennis Andriesse, Herbert Bos, Cristiano Giuffrida, and Gernot Heiser. 2018. Benchmarking Crimes: An Emerging Threat in Systems Security. arXiv:1801.02381 http://arxiv.org/abs/1801.02381
[15]
Noa Zilberman. 2020. An artifact evaluation of NDP. ACM SIGCOMM Computer Communication Review 50, 2 (2020).
[16]
Noa Zilberman, Gabi Bracha, and Golan Schzukin. 2019. Stardust: Divide and Conquer in the Data Center Network. In NSDI. Boston, MA, USA, 141--160.



Published In

ACM SIGCOMM Computer Communication Review, Volume 50, Issue 2, April 2020, 64 pages
ISSN: 0146-4833
DOI: 10.1145/3402413

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. Artifact Evaluation
2. Reproducibility
3. Robustness


