
Anonymizing eye-tracking stimuli with stable diffusion

Published: 18 July 2024

Abstract

Casual users can nowadays create almost arbitrary image content by providing textual prompts to generative machine-learning models. These models improve rapidly with each new generation, providing the means to create photos, paintings in different styles, and even videos. One feature of such models is the ability to take an image as input and adjust its content according to a prompt. For static images and videos, this allows a visual obfuscation of content by slightly changing persons, text, and other objects. This technique can be applied in eye-tracking experiments for the post-hoc dissemination of analysis results and visualizations. In this work, we discuss how it could serve to anonymize stimuli (e.g., for double-blind reviews or to remove product placements) and to protect the privacy of people visible in the stimuli. We further investigate how this anonymization process influences visual saliency and the depiction of stimuli in visualization techniques. Our results show that slight image transformations do not drastically change the saliency of a scene but obfuscate objects and faces while preserving the image structures that are important for context.
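The abstract compares visual saliency before and after anonymization. A standard way to quantify such a comparison is the linear correlation coefficient (CC) between two saliency maps; the sketch below is a minimal, self-contained illustration assuming NumPy (the function name `saliency_cc` and the toy maps are hypothetical, not from the paper).

```python
import numpy as np

def saliency_cc(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Pearson linear correlation coefficient (CC) between two saliency maps.

    Both maps are normalized to zero mean and unit variance before
    comparison; a value near 1 means the anonymized stimulus attracts
    attention in nearly the same regions as the original.
    """
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-12)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-12)
    return float((a * b).mean())

# Toy example: a mild perturbation of a saliency map barely changes CC,
# mirroring the finding that slight image transformations do not
# drastically change the saliency of a scene.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
anonymized = original + 0.05 * rng.random((64, 64))  # slight content change

print(round(saliency_cc(original, original), 3))  # identical maps -> 1.0
print(saliency_cc(original, anonymized) > 0.9)    # mild change -> high CC
```

In practice, the two maps would come from a saliency model (such as DeepGaze IIE) applied to the original and the diffusion-anonymized stimulus; CC is one of several common metrics for this comparison.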


Highlights

Investigation of anonymization parameters on different stimuli.
Influence of altered image content on a saliency model.
Influence of altered image content on visualization techniques.
A discussion of important aspects for future applications.





Published In

Computers and Graphics, Volume 119, Issue C
Apr 2024
407 pages

Publisher

Pergamon Press, Inc.

United States


Author Tags

  1. Eye tracking
  2. Visualization
  3. Stable diffusion

Qualifiers

  • Research-article
