
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets

Published: 01 July 2022

Abstract

Machine learning models are known to perpetuate and even amplify the biases present in the data. However, these data biases frequently do not become apparent until after the models are deployed. Our work tackles this issue and enables the preemptive analysis of large-scale datasets. REvealing VIsual biaSEs (REVISE) is a tool that assists in the investigation of a visual dataset, surfacing potential biases along three dimensions: (1) object-based, (2) person-based, and (3) geography-based. Object-based biases relate to the size, context, or diversity of the depicted objects. Person-based metrics focus on analyzing the portrayal of people within the dataset. Geography-based analyses consider the representation of different geographic locations. These three dimensions are deeply intertwined in how they interact to bias a dataset, and REVISE sheds light on this; the responsibility then lies with the user to consider the cultural and historical context, and to determine which of the revealed biases may be problematic. The tool further assists the user by suggesting actionable steps that may be taken to mitigate the revealed biases. Overall, the key aim of our work is to tackle the machine learning bias problem early in the pipeline. REVISE is available at https://github.com/princetonvisualai/revise-tool.
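
For a flavor of what an object-based analysis involves, the sketch below computes one such metric: the distribution of relative object sizes per category. This is a minimal illustration only, not REVISE's actual implementation, and it assumes a simplified, hypothetical annotation format in which each record carries a category name, a bounding box, and the dimensions of its source image.

```python
# Minimal sketch of an object-based bias metric: the distribution of
# relative object sizes per category. Illustration only, not REVISE's
# implementation; the annotation format here is hypothetical.
from collections import defaultdict


def relative_object_sizes(annotations):
    """Map each category to the fractions of image area its instances occupy."""
    sizes = defaultdict(list)
    for ann in annotations:
        _, _, w, h = ann["bbox"]  # (x, y, width, height)
        image_area = ann["image_width"] * ann["image_height"]
        sizes[ann["category"]].append((w * h) / image_area)
    return sizes


# Toy example: categories whose instances are consistently tiny may mostly
# appear as background clutter rather than as foreground subjects, one of
# the object-based patterns a tool like REVISE is designed to surface.
annotations = [
    {"category": "person", "bbox": (10, 20, 200, 400),
     "image_width": 640, "image_height": 480},
    {"category": "bottle", "bbox": (5, 5, 12, 30),
     "image_width": 640, "image_height": 480},
]
for category, fractions in sorted(relative_object_sizes(annotations).items()):
    mean_size = sum(fractions) / len(fractions)
    print(f"{category}: mean relative size {mean_size:.4f} "
          f"over {len(fractions)} instance(s)")
```

Analogous summaries over person and geography annotations would cover the other two dimensions the abstract describes.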

Published In

International Journal of Computer Vision, Volume 130, Issue 7
July 2022, 266 pages

Publisher

Kluwer Academic Publishers, United States

Publication History

Published: 01 July 2022
Accepted: 27 April 2022
Received: 17 July 2021

Author Tags

1. Computer vision datasets
2. Bias mitigation
3. Tool

Qualifiers

• Research-article
