Leveraging crowd skills and consensus for collaborative web-resource labeling
Recommendations
Agreement/disagreement based crowd labeling
In many supervised learning problems, determining the true labels of training instances is expensive, laborious, and even practically impossible. As an alternative approach, it is much easier to collect multiple subjective (possibly noisy) labels from ...
Modeling annotator behaviors for crowd labeling
Machine learning applications can benefit greatly from vast amounts of data, provided that reliable labels are available. Mobilizing crowds to annotate the unlabeled data is a common solution. Although the labels provided by the crowd are subjective and ...
Consensus algorithms for biased labeling in crowdsourcing
Although it has become an accepted lay view that when labeling objects through crowdsourcing systems, non-expert annotators often exhibit biases, this argument lacks sufficient evidential observation and systematic empirical study. This paper initially ...
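The recommended papers above all revolve around aggregating multiple subjective crowd labels into a consensus label. As a purely illustrative sketch (not the method of the article or of any paper listed above), the snippet below shows a skill-weighted majority vote over per-worker labels; the function name, the weighting scheme, and the example worker weights are assumptions made for illustration only.

```python
from collections import defaultdict

def weighted_majority_vote(labels, worker_weights=None):
    """Aggregate crowd labels for one item by (optionally skill-weighted) majority vote.

    labels: list of (worker_id, label) pairs for a single item.
    worker_weights: optional dict mapping worker_id -> weight (e.g. an estimated
                    skill score); workers without an entry default to weight 1.0.
    Returns the label with the highest total weight.
    """
    worker_weights = worker_weights or {}
    scores = defaultdict(float)
    for worker_id, label in labels:
        scores[label] += worker_weights.get(worker_id, 1.0)
    # Ties are broken arbitrarily by max(); a real system would need an explicit policy.
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Three workers label the same web resource; worker "w3" carries the most weight.
    item_labels = [("w1", "sports"), ("w2", "news"), ("w3", "news")]
    skills = {"w1": 0.6, "w2": 0.9, "w3": 1.2}
    print(weighted_majority_vote(item_labels, skills))  # -> "news"
```

In practice, consensus approaches like those surveyed above go beyond such a fixed vote, for example by estimating worker skill or bias jointly with the labels, but the weighted vote conveys the basic aggregation step.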
Information
Published In
Publisher: Elsevier Science Publishers B. V., Netherlands
Qualifiers
- Research-article