Research article
DOI: 10.1145/2736277.2741102

Incentivizing High Quality Crowdwork

Published: 18 May 2015

Abstract

We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets.
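
The abstract's final claim, that the standard principal-agent model can be extended with a worker's subjective beliefs about payment, can be made concrete with a small amount of notation. The sketch below is illustrative only and is not the paper's actual formulation; the effort variable e, the belief functions β_accept and β_bonus, and the cost function c are names introduced here for the sake of the example.

    % Illustrative sketch (not the paper's model): a worker chooses effort e to maximize
    \max_{e \ge 0} \; U(e) \;=\; p \cdot \beta_{\text{accept}}(e) \;+\; b \cdot \beta_{\text{bonus}}(e) \;-\; c(e)

Here p is the base payment, b is the performance-based bonus, β_accept(e) is the worker's subjective probability that work produced with effort e is accepted at all (capturing the observation that payments on Mechanical Turk are implicitly performance-based), β_bonus(e) is the subjective probability of clearing the bonus threshold, and c(e) is an increasing cost of effort. Under this reading, a task is effort-responsive when output quality rises with the worker's chosen effort e*, so raising b or making the bonus more salient can shift e* and hence quality upward; on tasks where quality does not depend on e, the same bonus changes the worker's pay but not the output.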




Published In

WWW '15: Proceedings of the 24th International Conference on World Wide Web
May 2015, 1460 pages
ISBN: 9781450334693

Sponsors

• IW3C2: International World Wide Web Conference Committee

Publisher

International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland

Author Tags

1. crowdsourcing
2. incentives
3. performance-based payments

Qualifiers

• Research-article

Funding Sources

• National Science Foundation

Conference

WWW '15
Sponsor: IW3C2

Acceptance Rates

WWW '15 paper acceptance rate: 131 of 929 submissions (14%)
Overall acceptance rate: 1,899 of 8,196 submissions (23%)
