
Dynamic Guidance for Decluttering Photographic Compositions

Published: 12 October 2021
DOI: 10.1145/3472749.3474755

Abstract

Unwanted clutter in a photo can be incredibly distracting. However, in the moment, photographers have so many things to consider simultaneously that it can be hard to catch every detail. Designers have long known the benefits of abstraction for seeing a more holistic view of their design. We wondered if, similarly, some form of image abstraction might be helpful for photographers as an alternative perspective or “lens” through which to see their image. Specifically, we wondered whether such abstraction might draw the photographer’s attention away from details in the subject and toward objects in the background, such as unwanted clutter. We present our process for designing such a camera overlay, based on the idea of using abstraction to recognize clutter. Our final design uses object-based saliency and edge detection to highlight contrast along subject and image borders, outlining potential distractors in these regions. We describe the implementation and evaluation of a capture-time tool that interactively displays these overlays and find that the tool helps make users more confident in their ability to take decluttered photos that clearly convey their intended story.
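To make the overlay idea concrete, the rough sketch below shows one way to combine an edge map with a subject mask so that strong edges near the image border or along the subject's outline are flagged as potential distractors. This is not the paper's implementation; it is a minimal illustration using OpenCV and NumPy, and the subject_mask input, thresholds, and band sizes are all illustrative assumptions.

```python
import cv2
import numpy as np

def distractor_overlay(image_bgr, subject_mask, border_frac=0.1, band_px=15):
    """Draw candidate distractor edges (in red) near the image border and
    along the subject outline.

    image_bgr    -- H x W x 3 uint8 photo (BGR, as OpenCV loads it)
    subject_mask -- H x W uint8 mask, 255 where the subject is; assumed to
                    come from any off-the-shelf segmentation/saliency model
    border_frac  -- width of the image-border band, as a fraction of W/H
    band_px      -- thickness (pixels) of the band around the subject outline
    """
    h, w = image_bgr.shape[:2]

    # 1. Edge map of the whole frame (Canny on a lightly blurred grayscale copy).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # 2a. Region of interest, part A: a frame along the image borders.
    bw, bh = max(1, int(w * border_frac)), max(1, int(h * border_frac))
    border_region = np.zeros((h, w), np.uint8)
    border_region[:bh, :] = 255
    border_region[-bh:, :] = 255
    border_region[:, :bw] = 255
    border_region[:, -bw:] = 255

    # 2b. Region of interest, part B: a thin band straddling the subject outline.
    kernel = np.ones((band_px, band_px), np.uint8)
    subject_band = cv2.subtract(cv2.dilate(subject_mask, kernel),
                                cv2.erode(subject_mask, kernel))

    # 3. Keep only edges inside those regions but outside the subject itself,
    #    so the subject's own detail is not flagged as clutter.
    roi = cv2.bitwise_or(border_region, subject_band)
    roi = cv2.bitwise_and(roi, cv2.bitwise_not(subject_mask))
    candidate_edges = cv2.bitwise_and(edges, roi)

    # 4. Paint the surviving edge pixels red on a copy of the photo.
    overlay = image_bgr.copy()
    overlay[candidate_edges > 0] = (0, 0, 255)
    return overlay
```

In a live capture tool this kind of pass would run per frame, with the subject mask supplied by an on-device saliency or portrait-segmentation model.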

Supplementary Material

VTT File (p359-talk.vtt)
VTT File (p359-video_figure.vtt)
VTT File (p359-video_preview.vtt)
Supplemental materials (p359-supplement_material.zip)
MP4 File (p359-talk.mp4): Talk video and captions
MP4 File (p359-video_preview.mp4): Video preview and captions
MP4 File (p359-video_figure.mp4): Video figure and captions





Published In

UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology
October 2021
1357 pages
ISBN:9781450386357
DOI:10.1145/3472749
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 October 2021

Author Tags

  1. camera interfaces
  2. composition
  3. declutter
  4. photography

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

UIST '21

Acceptance Rates

Overall Acceptance Rate: 561 of 2,567 submissions, 22%


Article Metrics

  • Downloads (last 12 months): 41
  • Downloads (last 6 weeks): 2
Reflects downloads up to 05 Jan 2025

Cited By

  • (2024) Zooming In: A Review of Designing for Photo Taking in Human-Computer Interaction and Future Prospects. Proceedings of the ACM on Human-Computer Interaction 8 (ISS), 597–623. https://doi.org/10.1145/3698150
  • (2024) Personal Time-Lapse. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–13. https://doi.org/10.1145/3654777.3676383
  • (2024) ProcessGallery: Contrasting Early and Late Iterations for Design Principle Learning. Proceedings of the ACM on Human-Computer Interaction 8 (CSCW1), 1–35. https://doi.org/10.1145/3637389
  • (2024) When to Give Feedback: Exploring Tradeoffs in the Timing of Design Feedback. Proceedings of the 16th Conference on Creativity & Cognition, 292–310. https://doi.org/10.1145/3635636.3656183
  • (2023) Color Field: Developing Professional Vision by Visualizing the Effects of Color Filters. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–16. https://doi.org/10.1145/3586183.3606828
  • (2023) Interactive Flexible Style Transfer for Vector Graphics. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–14. https://doi.org/10.1145/3586183.3606751
  • (2022) ReCapture: AR-Guided Time-lapse Photography. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1–14. https://doi.org/10.1145/3526113.3545641
