Graham West; Matthew I. Swindall; Ben Keener; Timothy Player; Alex C. Williams et al. - Incorporating Crowdsourced Annotator Distributions into Ensemble Modeling to Improve Classification Trustworthiness for Ancient Greek Papyri - jdmdh:10297 - Journal of Data Mining & Digital Humanities, 7 February 2024, Documents historiques et reconnaissance automatique de texte - https://doi.org/10.46298/jdmdh.10297
Incorporating Crowdsourced Annotator Distributions into Ensemble Modeling to Improve Classification Trustworthiness for Ancient Greek Papyri
Article

Authors: Graham West 1; Matthew I. Swindall 1; Ben Keener 2; Timothy Player 2; Alex C. Williams 3; James H. Brusuelas 4; John F. Wallin 1

Performing classification on noisy, crowdsourced image datasets can prove challenging even for the best neural networks. Two issues that complicate the problem on such datasets are class imbalance and ground-truth uncertainty in labeling. The AL-ALL and AL-PUB datasets - consisting of tightly cropped, individual characters from images of ancient Greek papyri - are strongly affected by both issues. Applying ensemble modeling to such datasets can help identify images where the ground truth is questionable and quantify the trustworthiness of those samples. As such, we apply stacked generalization consisting of nearly identical ResNets with different loss functions: one utilizing sparse cross-entropy (CXE) and the other Kullback-Leibler divergence (KLD). Both networks use labels drawn from a crowdsourced consensus. This consensus is derived from a Normalized Distribution of Annotations (NDA) based on all annotations for a given character in the dataset. For the second network, the KLD is calculated with respect to the NDA. For our ensemble model, we apply a k-nearest neighbors model to the outputs of the CXE and KLD networks. Individually, the ResNet models achieve approximately 93% accuracy, while the ensemble model achieves an accuracy greater than 95%, increasing classification trustworthiness. We also analyze the Shannon entropy of the models' output distributions to measure classification uncertainty. Our results suggest that entropy is useful for predicting model misclassifications.
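
The stacking step described above can be summarized in a few lines. The sketch below is not the authors' code: it assumes the softmax outputs of the CXE- and KLD-trained ResNets are already available (simulated here with random Dirichlet draws), fits a k-nearest-neighbors meta-model on their concatenation, and computes the Shannon entropy used as the per-sample uncertainty score. The class count, variable names, choice of k, and the KL argument order are illustrative assumptions.

```python
# Minimal sketch of the stacked-generalization ensemble (assumptions noted above).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 24          # 24 Greek letter classes (assumption)

# Stand-ins for the per-character softmax outputs of the two base ResNets.
p_cxe = rng.dirichlet(np.ones(n_classes), size=n_samples)
p_kld = rng.dirichlet(np.ones(n_classes), size=n_samples)
labels = rng.integers(0, n_classes, size=n_samples)  # crowd-consensus labels

# Stacked generalization: concatenate the two output distributions and let a
# k-nearest-neighbors meta-model predict the consensus label from them.
features = np.hstack([p_cxe, p_kld])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("ensemble accuracy:", knn.score(X_test, y_test))

# KLD of the second network, computed with respect to the NDA for each character.
# (The argument order shown, KL(NDA || prediction), is an assumption.)
def kld_loss(nda, pred, eps=1e-12):
    return np.sum(nda * (np.log(nda + eps) - np.log(pred + eps)), axis=1)

# Shannon entropy of each output distribution: high entropy flags samples
# that are more likely to be misclassified.
def shannon_entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=1)

print("mean CXE-network entropy:", shannon_entropy(p_cxe).mean())
```

With real network outputs in place of the Dirichlet draws, the same k-NN meta-model and entropy scores can be computed unchanged; only the shapes of the softmax arrays and the consensus labels need to match the dataset.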


Volume: Documents historiques et reconnaissance automatique de texte
Section: Digital Humanities in Languages
Published on: 7 February 2024
Accepted on: 16 December 2023
Submitted on: 13 November 2022
Keywords: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
