DOI: 10.1145/3664476.3670934

Trustworthiness and explainability of a watermarking and machine learning-based system for image modification detection to combat disinformation

Published: 30 July 2024

Abstract

The widespread use of digital platforms, which prioritise content based on engagement metrics and reward content creators accordingly, has contributed to the proliferation of disinformation and its far-reaching social and political impact. In addition, digital platforms often operate as black boxes, concealing their decision-making processes from users and prioritising investor interests over ethical and social considerations. This has contributed to the erosion of general trust in verification systems. To mitigate this issue, our project proposes a two-stage verification system. The first stage allows media industries to watermark their image and video content. The second stage applies a machine-learning-based manipulation detection system to suspicious content. We present findings from an international user experience study in which potential online news consumers verified the authenticity of images on a prototype version of our system. In this paper, we reflect on the critical issues of explainability raised by participants in our user study and on how we addressed those issues in the platform’s design.
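The two-stage pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper’s implementation: `embed_watermark`, `watermark_intact`, and `manipulation_score` are hypothetical stand-ins for the semi-fragile watermarking stage and the machine-learning detector, and the byte-level watermark is a toy substitute for a real embedding scheme.

```python
# Hypothetical sketch of the two-stage verification flow: a real system would
# use semi-fragile watermark embedding/extraction and a trained ML classifier.

WATERMARK = b"NEWSROOM-WM"  # assumed publisher watermark payload (toy example)


def embed_watermark(image: bytes) -> bytes:
    """Stage 1, publisher side: attach the watermark to the content."""
    return image + WATERMARK


def watermark_intact(image: bytes) -> bool:
    """Stage 1, verifier side: check the watermark survived unmodified."""
    return image.endswith(WATERMARK)


def manipulation_score(image: bytes) -> float:
    """Stage 2: placeholder for an ML manipulation detector, returning a
    probability-like score (a trivial heuristic here, not a real model)."""
    return 0.9 if b"tampered" in image else 0.1


def verify(image: bytes, threshold: float = 0.5) -> str:
    """Run stage 1 first; fall back to stage 2 only for suspicious content."""
    if watermark_intact(image):
        return "authentic"  # watermark verified; no ML stage needed
    score = manipulation_score(image)
    return "manipulated" if score >= threshold else "unverified"


original = embed_watermark(b"press-photo-pixels")
print(verify(original))                 # authentic
print(verify(b"tampered-press-photo"))  # manipulated
print(verify(b"unknown-source-photo"))  # unverified
```

The design point the sketch captures is that the ML detector is a fallback: content whose watermark verifies is accepted without invoking the classifier, which keeps the expensive and less explainable stage off the common path.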



Published In

ARES '24: Proceedings of the 19th International Conference on Availability, Reliability and Security
July 2024
2032 pages
ISBN:9798400717185
DOI:10.1145/3664476

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Authentication
  2. Disinformation
  3. Fake news
  4. Forensic tools
  5. Integrity
  6. Machine learning
  7. Watermarking

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Japan Science and Technology Agency award number(s): Detection of fake newS on SocIal MedIa pLAtfoRms project from the EIG CONCERT-Japan
  • Ministerio de Ciencia e Innovación, the Agencia Estatal de Investigación, and the European Regional Development Fund (ERDF) award number(s): SECURING
  • Government of Spain award number(s): Detection of fake newS on SocIal MedIa pLAtfoRms project from the EIG CONCERT-Japan
  • National Centre for Research and Development, Poland award number(s): Detection of fake newS on SocIal MedIa pLAtfoRms project from the EIG CONCERT-Japan
  • DANGER Strategic Project of Cybersecurity - Next Generation EU and the Recovery, Transformation and Resilience Plan
  • ARTEMISA International Chair of Cybersecurity - Next Generation EU and the Recovery, Transformation and Resilience Plan

Conference

ARES 2024

Acceptance Rates

Overall Acceptance Rate 228 of 451 submissions, 51%

