research-article

PeerNomination: A novel peer selection algorithm to handle strategic and noisy assessments

Published: 01 March 2023

Abstract

In peer selection, a group of agents must choose a subset of themselves as winners, e.g., for peer-reviewed grants or prizes. We take a Condorcet view of this aggregation problem, assuming that there is an objective ground-truth ordering over the agents. We study agents that have a noisy perception of this ground truth and give assessments that, even when truthful, can be inaccurate. Our goal is to select the best set of agents according to the underlying ground truth by looking at the potentially unreliable assessments of the peers. Besides being potentially unreliable, agents may also be self-interested, attempting to influence the outcome of the decision in their favour. Hence, we focus on the problem of impartial (or strategyproof) peer selection: how do we prevent agents from manipulating their reviews while still selecting the most deserving individuals, all in the presence of noisy evaluations? We propose a novel impartial peer selection algorithm, PeerNomination, that aims to fulfil the above desiderata. We provide a comprehensive theoretical analysis of the recall of PeerNomination and prove various properties, including impartiality and monotonicity. We also provide empirical results, based on computer simulations, to show its effectiveness compared to state-of-the-art impartial peer selection algorithms. We then investigate the robustness of PeerNomination to various levels of noise in the reviews. To maintain good performance under such conditions, we extend PeerNomination with weights for reviewers which, informally, capture a notion of the reviewer's reliability. We show, theoretically, that the new algorithm preserves strategyproofness and, empirically, that the weights help identify noisy reviewers and hence increase selection performance.
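The abstract describes a nomination-based selection rule in which each agent grades a subset of its peers and winners are determined from those (possibly noisy, possibly strategic) assessments. The sketch below is a heavily simplified illustration of a nomination-quota rule in that spirit; it is not a reconstruction of the published PeerNomination algorithm, and the function name and parameters are hypothetical. Each reviewer nominates the top ⌈k·m/n⌉ of the m agents it graded, and an agent is selected when at least a majority fraction of its reviewers nominate it. Because a reviewer never ranks itself, its nominations cannot affect its own selection, which is the intuition behind impartiality.

```python
import math

def peer_select(scores, k, majority=0.5):
    """Simplified nomination-quota peer selection (illustrative only).

    scores[i] maps each reviewee j to the grade reviewer i gave j
    (reviewers never grade themselves).  Each reviewer nominates the
    top ceil(k*m/n) of the m agents it reviewed; an agent is selected
    when at least a `majority` fraction of its reviewers nominate it.
    """
    n = len(scores)
    nominations = {j: 0 for j in scores}
    reviewer_count = {j: 0 for j in scores}
    for reviews in scores.values():
        m = len(reviews)
        quota = math.ceil(k * m / n)
        # The reviewer's top-`quota` reviewees each receive one nomination.
        top = sorted(reviews, key=reviews.get, reverse=True)[:quota]
        for j in reviews:
            reviewer_count[j] += 1
        for j in top:
            nominations[j] += 1
    return sorted(j for j in scores
                  if reviewer_count[j] > 0
                  and nominations[j] >= majority * reviewer_count[j])

# Four agents each grade the other three; agent 0 is strongest.
scores = {
    0: {1: 5, 2: 3, 3: 1},
    1: {0: 9, 2: 4, 3: 2},
    2: {0: 8, 1: 6, 3: 1},
    3: {0: 9, 1: 7, 2: 5},
}
print(peer_select(scores, k=2))  # the strongest agents win; agent 3 does not
```

Note that a rule of this shape need not return exactly k winners; the selected set is whoever clears the majority threshold, trading exactness of the output size for robustness to individual noisy grades.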


Cited By

  • (2023) Impartial Selection with Prior Information. Proceedings of the ACM Web Conference 2023, pp. 3614–3624. https://doi.org/10.1145/3543507.3583553. Online publication date: 30 April 2023.

Published In

Artificial Intelligence, Volume 316, Issue C, March 2023, 421 pages

Publisher

Elsevier Science Publishers Ltd., United Kingdom


        Author Tags

        1. Peer selection
        2. Strategyproofness
        3. Optimality
        4. Noisy opinions
        5. Reweighting
