
Schematic storyboarding for video visualization and editing

Published: 01 July 2006

Abstract

We present a method for visualizing short video clips in a single static image, using the visual language of storyboards. These schematic storyboards are composed from multiple input frames and annotated using outlines, arrows, and text describing the motion in the scene. The principal advantage of this storyboard representation over standard representations of video -- generally either a static thumbnail image or a playback of the video clip in its entirety -- is that it requires only a moment to observe and comprehend but at the same time retains much of the detail of the source video. Our system renders a schematic storyboard layout based on a small amount of user interaction. We also demonstrate an interaction technique to scrub through time using the natural spatial dimensions of the storyboard. Potential applications include video editing, surveillance summarization, assembly instructions, composition of graphic novels, and illustration of camera technique for film studies.
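
As a rough illustration of the time-scrubbing interaction described above (a minimal sketch, not the authors' implementation), the following Python snippet maps a cursor position along a storyboard layout back to a timestamp in the source video by interpolating between keyframe anchors. The Keyframe record, its fields, and scrub_time are hypothetical names introduced only for this example.

# Minimal sketch of storyboard-based time scrubbing (illustrative only).
# Assumes each keyframe composited into the storyboard records its source
# video time and the horizontal position of its anchor in the layout.
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Keyframe:
    time_sec: float   # timestamp of this frame in the source video
    layout_x: float   # horizontal position of the frame's anchor in the storyboard

def scrub_time(keyframes: list[Keyframe], cursor_x: float) -> float:
    """Map a cursor position along the storyboard to a video timestamp by
    linearly interpolating between the two nearest keyframe anchors."""
    kfs = sorted(keyframes, key=lambda k: k.layout_x)
    xs = [k.layout_x for k in kfs]
    i = bisect_right(xs, cursor_x)
    if i == 0:
        return kfs[0].time_sec          # cursor left of the first anchor
    if i == len(kfs):
        return kfs[-1].time_sec         # cursor right of the last anchor
    a, b = kfs[i - 1], kfs[i]
    t = (cursor_x - a.layout_x) / (b.layout_x - a.layout_x)
    return a.time_sec + t * (b.time_sec - a.time_sec)

# Example: three keyframes laid out left to right; a cursor halfway between
# the second and third anchors maps to the midpoint of their timestamps.
frames = [Keyframe(0.0, 0.0), Keyframe(2.0, 300.0), Keyframe(5.0, 700.0)]
print(scrub_time(frames, 500.0))        # -> 3.5

In an actual storyboard the mapping would presumably follow the annotated motion path or arrows rather than a single horizontal axis; the sketch assumes a simple left-to-right layout.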

Supplementary Material

  • p862-goldman-high.jpg (JPG image)
  • p862-goldman-low.jpg (JPG image)
  • p862-goldman-high.mov (high-resolution video)
  • p862-goldman-low.mov (low-resolution video)



Information

Published In

ACM Transactions on Graphics, Volume 25, Issue 3
July 2006
742 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/1141911
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 01 July 2006
Published in TOG Volume 25, Issue 3

Author Tags

  1. storyboards
  2. video editing
  3. video interaction
  4. video summarization
  5. video visualization

Qualifiers

  • Article

Article Metrics

  • Downloads (Last 12 months): 102
  • Downloads (Last 6 weeks): 13
Reflects downloads up to 05 Mar 2025

Cited By

  • (2025) Tangi: a Tool to Create Tangible Artifacts for Sharing Insights from 360° Video. Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction, 1-14. DOI: 10.1145/3689050.3704928. Online publication date: 4-Mar-2025.
  • (2024) “Previously on…” from Recaps to Story Summarization. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13635-13646. DOI: 10.1109/CVPR52733.2024.01294. Online publication date: 16-Jun-2024.
  • (2023) Unsupervised Video Summarization via Deep Reinforcement Learning With Shot-Level Semantics. IEEE Transactions on Circuits and Systems for Video Technology 33(1), 445-456. DOI: 10.1109/TCSVT.2022.3197819. Online publication date: Jan-2023.
  • (2023) A Survey on Evolution of Video Summarization Techniques in the Era of Deep Learning. 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS), 1656-1661. DOI: 10.1109/ICAIS56108.2023.10073688. Online publication date: 2-Feb-2023.
  • (2023) A review on video summarization techniques. Engineering Applications of Artificial Intelligence 118(C). DOI: 10.1016/j.engappai.2022.105667. Online publication date: 1-Feb-2023.
  • (2022) A Player-Specific Framework for Cricket Highlights Generation Using Deep Convolutional Neural Networks. Electronics 12(1), 65. DOI: 10.3390/electronics12010065. Online publication date: 24-Dec-2022.
  • (2022) Epistasis Storyboarded. The American Biology Teacher 84(9), 562-569. DOI: 10.1525/abt.2022.84.9.562. Online publication date: 1-Dec-2022.
  • (2022) Leveraging semantic saliency maps for query-specific video summarization. Multimedia Tools and Applications 81(12), 17457-17482. DOI: 10.1007/s11042-022-12442-w. Online publication date: 7-Mar-2022.
  • (2021) Film Directing for Computer Games and Animation. Computer Graphics Forum 40(2), 713-730. DOI: 10.1111/cgf.142663. Online publication date: 4-Jun-2021.
  • (2021) Text Synopsis Generation for Egocentric Videos. 2020 25th International Conference on Pattern Recognition (ICPR), 4252-4259. DOI: 10.1109/ICPR48806.2021.9412111. Online publication date: 10-Jan-2021.