DOI: 10.5555/3666122.3666758

A one-size-fits-all approach to improving randomness in paper assignment

Published: 30 May 2024

Abstract

The assignment of papers to reviewers is a crucial part of the peer review processes of large publication venues, where organizers (e.g., conference program chairs) rely on algorithms to perform automated paper assignment. As such, a major challenge for the organizers of these processes is to specify paper assignment algorithms that find appropriate assignments with respect to various desiderata. Although the main objective when choosing a good paper assignment is to maximize the expertise of each reviewer for their assigned papers, several other considerations make introducing randomization into the paper assignment desirable: robustness to malicious behavior, the ability to evaluate alternative paper assignments, reviewer diversity, and reviewer anonymity. However, it is unclear in what way one should randomize the paper assignment in order to best satisfy all of these considerations simultaneously. In this work, we present a practical, one-size-fits-all method for randomized paper assignment intended to perform well across different motivations for randomness. We show theoretically and experimentally that our method outperforms currently deployed methods for randomized paper assignment on several intuitive randomness metrics, demonstrating that the randomized assignments produced by our method are general-purpose.
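
To make the setup concrete, the sketch below shows the kind of probability-capped assignment linear program used by currently deployed randomized-assignment approaches that the abstract compares against: it maximizes total reviewer-paper similarity while capping each pair's marginal assignment probability at q, and the resulting fractional solution is then realized as a concrete assignment by sampling (e.g., via a Birkhoff-von Neumann-type decomposition). This is an illustrative baseline only, not the authors' proposed method; the function name, parameters, and toy data are hypothetical.

    # Illustrative baseline sketch (not the paper's method): a fractional
    # randomized-assignment LP with a per-pair assignment-probability cap q.
    import numpy as np
    from scipy.optimize import linprog

    def randomized_assignment_lp(S, k, ell, q):
        """S: (reviewers x papers) similarity matrix; k: reviewers per paper;
        ell: max papers per reviewer; q: cap on each pair's assignment
        probability (q = 1 recovers a deterministic max-similarity assignment)."""
        R, P = S.shape
        # Maximize total expected similarity == minimize its negation.
        # Variable x[r, p] is flattened row-major into a vector of length R*P.
        c = -S.ravel()

        # Each paper receives exactly k reviewers in expectation.
        A_eq = np.zeros((P, R * P))
        for p in range(P):
            A_eq[p, p::P] = 1.0
        b_eq = np.full(P, float(k))

        # Each reviewer is assigned at most ell papers in expectation.
        A_ub = np.zeros((R, R * P))
        for r in range(R):
            A_ub[r, r * P:(r + 1) * P] = 1.0
        b_ub = np.full(R, float(ell))

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0.0, q), method="highs")
        return res.x.reshape(R, P)  # marginal assignment probabilities

    # Toy example: 4 reviewers, 2 papers, 1 reviewer per paper, cap q = 0.5.
    S = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.9]])
    print(np.round(randomized_assignment_lp(S, k=1, ell=1, q=0.5), 2))

With q < 1, the optimum spreads each paper's probability mass over several well-matched reviewers instead of committing to a single one. The abstract's claim is that the proposed one-size-fits-all method chooses this distribution differently, outperforming such currently deployed randomized baselines on several intuitive randomness metrics.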



Published In

NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems
December 2023
80772 pages

Publisher

Curran Associates Inc.

Red Hook, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed limited
