DOI: 10.1145/3132847.3133002

Budgeted Task Scheduling for Crowdsourced Knowledge Acquisition

Published: 06 November 2017

Abstract

Knowledge acquisition (e.g., through labeling) is one of the most successful applications of crowdsourcing. In practice, it is very useful to collect knowledge that is as specific as possible, since specific knowledge can easily be generalized with the help of a knowledge base, whereas specific knowledge is difficult to infer from general knowledge. Meanwhile, tasks that acquire more specific knowledge can be more difficult for workers, and thus need more answers to infer high-quality results. Given a limited budget, assigning workers to difficult tasks is therefore more effective for the goal of specific knowledge acquisition. However, existing crowdsourcing task-scheduling approaches cannot incorporate the specificity of workers' answers. In this paper, we present a new framework for task scheduling under a limited budget, targeting an effective solution for acquiring more specific knowledge. We propose novel criteria for evaluating the quality of specificity-dependent answers, together with result-inference algorithms that aggregate more specific answers under budget constraints. We have implemented our framework with real crowdsourcing data and a real platform, and have achieved significant performance improvements over existing approaches.




    Published In

    CIKM '17: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management
    November 2017
    2604 pages
ISBN: 9781450349185
DOI: 10.1145/3132847


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. crowdsourcing
    2. knowledge acquisition
    3. task scheduling

    Qualifiers

    • Research-article

    Funding Sources

    • State Key Laboratory of Software Development Environment
    • National Basic Research 973 Program of China

    Conference

    CIKM '17

    Acceptance Rates

CIKM '17: 171 of 855 submissions accepted, 20%
Overall: 1,861 of 8,427 submissions accepted, 22%



    Cited By

    • (2023) "Incorporating Feature Labeling into Crowdsourcing for More Accurate Aggregation Labels". Collaborative Computing: Networking, Applications and Worksharing, 10.1007/978-3-031-24386-8_17, pp. 306-324. Online publication date: 25-Jan-2023.
    • (2022) "Accurate inference of crowdsourcing properties when using efficient allocation strategies". Scientific Reports 12(1), 10.1038/s41598-022-10794-9. Online publication date: 27-Apr-2022.
    • (2022) "Expertise-Aware Crowdsourcing Taxonomy Enrichment". Web Information Systems Engineering – WISE 2021, 10.1007/978-3-030-90888-1_2, pp. 14-29. Online publication date: 1-Jan-2022.
    • (2021) "Find truth in the hands of the few: acquiring specific knowledge with crowdsourcing". Frontiers of Computer Science 15(4), 10.1007/s11704-020-9364-x. Online publication date: 1-Aug-2021.
    • (2021) "Crowdsourcing usage, task assignment methods, and crowdsourcing platforms". Journal of Software: Evolution and Process 33(8), 10.1002/smr.2368. Online publication date: 1-Aug-2021.
    • (2020) "Suitability-based Task Assignment in Crowdsourcing Markets". 2020 IEEE International Conference on Services Computing (SCC), 10.1109/SCC49832.2020.00054, pp. 361-369. Online publication date: Nov-2020.
    • (2018) "Interplay of Game Incentives, Player Profiles and Task Difficulty in Games with a Purpose". Knowledge Engineering and Knowledge Management, 10.1007/978-3-030-03667-6_20, pp. 306-321. Online publication date: 31-Oct-2018.
