
US20050243192A1 - Method for removal of moving objects from a video stream - Google Patents

Method for removal of moving objects from a video stream

Info

Publication number
US20050243192A1
Authority
US
United States
Prior art keywords
scene
frame
image
static
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/064,352
Other versions
US7483062B2 (en)
Inventor
Mark Allman
Scott Clee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALLMAN, MARK, CLEE, SCOTT JOHN
Publication of US20050243192A1
Application granted
Publication of US7483062B2
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method of removing a moving part from a video stream image comprising obtaining a plurality of frame-series images of the scene, each image comprising a moving part and a static part; comparing the plurality of frame-series images to identify parts of the scene that are static for a plurality of frames; and building part of the scene identified as static.

Description

    FIELD OF THE INVENTION
  • This invention relates to image processing, and particularly to removal of moving objects from a video stream.
  • BACKGROUND OF THE INVENTION
  • In the field of this invention it is known that it can be problematic to generate a video image of a scene with all moving objects removed. For example, in filming a motion picture the director may need to film an apocalyptic or relatively deserted view of the streets of a city. In order to do so, the director may “hire” streets of the city for a short period at a relatively affordable time (e.g., a Sunday morning). The streets would be blocked off and emptied and the filming would then take place. Clearly, such an approach is nevertheless extremely costly and disruptive.
  • There are known software techniques for ameliorating this problem, by digitising existing scanned images of buildings, etc., and pasting these onto three-dimensional (3D) computer models, thereby creating a realistic 3D model. However, building such a 3D computer model is cumbersome.
  • From patent publication WO/01/1685 there is known a method for real-time segmentation of video objects against a known stationary image background. This method segments foreground objects using the average value of individual image pixels over several takes. Foreground objects are marked, and the method ensures that the background is not considered as foreground because of changes in lighting conditions.
  • From U.S. Pat. No. 6,078,619 there is known an object-oriented video system implemented as a two-layer object model in a video compressor system. In this system bandwidth is reduced by not sending full pictures, there being less information in the background layer.
  • From U.S. Pat. No. 6,301,382 there is known a method for extracting a matte of a foreground object from a composite image by filming against two completely different backgrounds.
  • From U.S. Pat. No. 5,915,044 there is known a scheme for encoding video images using foreground/background segmentation. This scheme sends less data for background segments so as to concentrate bandwidth on foreground segments.
  • From U.S. Pat. No. 5,914,748 there is known a scheme for generating a composite image using the difference of two images. This scheme requires a clean background in order to obtain a fragment object, which is then placed as a foreground object on a new background.
  • From a demonstration published at the website http://www.cs.huji.ac.il/labs/vision/demos/removal/removal.html it is known to remove a moving object from a video stream of image frames by (i) using optical flow to identify and track the moving object and blacken its pixels, and (ii) using pixels from subsequent frames to substitute for the blackened pixels. However, this demonstrated technique uses only a single copy of each image, resulting in a low quality end result.
  • A need therefore exists for a method and arrangement for removal of moving objects from a video stream wherein the abovementioned disadvantage(s) may be alleviated.
  • STATEMENT OF INVENTION
  • According to a first aspect the present invention provides a method of removing a moving part from a video stream image of a given scene, the method comprising the steps of: obtaining a plurality of frame-series images of the scene, each image comprising a moving part and a static part; comparing the plurality of frame-series images to identify parts of the scene which are static for a plurality of frames; and building a single image of the scene comprising substantially a part of the scene identified as static.
  • According to a second aspect the present invention provides a computer program element comprising computer program means for performing substantially the method described above.
  • According to a third aspect the present invention provides an apparatus for removing a moving part from a video stream image of a given scene, the apparatus comprising: means for obtaining a plurality of frame-series images of the scene, each image comprising a moving part and a static part; means for comparing the plurality of frame-series images to identify parts of the scene which are static for a plurality of frames; and means for building a single image of the scene comprising substantially a part of the scene identified as static.
  • BRIEF DESCRIPTION OF THE DRAWING(S)
  • One method for removal of moving objects from a video stream incorporating the present invention will now be described, by way of example only, with reference to the accompanying drawing(s), in which:
  • FIG. 1 shows a flowchart illustrating a method for removal of moving objects from a video stream incorporating the present invention.
  • DESCRIPTION OF PREFERRED EMBODIMENT(S)
  • As will be described in greater detail below, this preferred embodiment is based on the conjunction of two techniques:
      • Firstly, histogramming a series of still images to derive a confidence value for each pixel, thereby removing noise from the images.
      • Secondly, deriving a confidence value for each frame, in order to allow a “clean” frame-series to be produced.
  • Briefly stated, described in greater detail below is a method for removing moving objects from a frame-series video image. The video is filmed several times to provide several copies of each image. For each image the copies are processed to identify static objects and then a new image is built which is made up of pixels taken from the various copies and which were identified as being part of static objects. Accordingly the new image does not include moving objects. Once every image has been processed in this way the video is reconstructed to provide a video sequence without the moving objects. It will be understood that, by obtaining and processing several copies of each image, the method makes it possible to produce a high quality end result.
  • For each frame in the fixed viewpoint streaming video, every pixel is analysed. Each pixel has a table which stores a history for every colour the pixel has been and how many times it has been that colour, e.g.
    colour (R, G, B)    hits
    123, 256, 99          10
    123, 256, 102          2
    124, 255, 100          1
  • A pixel confidence value for a pixel is calculated by:
    pixel confidence = (most common colour's hit count) / (total hits for all colours)
  • i.e., in the above table, pixel confidence=10/13=0.77.
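  • By way of illustration only, the following sketch (in Python, one of many languages a skilled person might choose) shows one way the per-pixel colour history table and the pixel confidence calculation described above might be implemented. The class and method names (PixelHistory, record, confidence) are illustrative and do not appear in the specification.

```python
# Minimal sketch, under the assumptions stated above, of the per-pixel colour
# history and pixel-confidence calculation.
from collections import Counter

class PixelHistory:
    """Stores how many times a pixel has held each (R, G, B) colour."""

    def __init__(self):
        self.hits = Counter()          # colour tuple -> hit count

    def record(self, colour):
        """Add one observation of this pixel's colour from a new copy of the frame."""
        self.hits[colour] += 1

    def confidence(self):
        """pixel confidence = most common colour's hit count / total hits for all colours."""
        total = sum(self.hits.values())
        if total == 0:
            return 0.0
        (_, most_common_hits), = self.hits.most_common(1)
        return most_common_hits / total

    def most_common_colour(self):
        """The colour used for this pixel when building the clean image."""
        (colour, _), = self.hits.most_common(1)
        return colour

# Reproduces the worked example from the table above
# (colour values copied verbatim from the text).
p = PixelHistory()
for colour, hits in {(123, 256, 99): 10, (123, 256, 102): 2, (124, 255, 100): 1}.items():
    for _ in range(hits):
        p.record(colour)
print(round(p.confidence(), 2))   # 0.77
```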
  • An image is considered “complete” when an overall image confidence value is reached; this is calculated using:
    image confidence = (total number of pixels with pixel confidence greater than X) / (total number of pixels)
  • where X is an adjustable lower bound pixel confidence constant which can be lowered for scenes with greater traffic (since it may be possible that the image confidence value is never reached in “busy” scenes). It will be understood that a minimum image confidence value, that will result in a suitably processed video stream, can be derived from testing an implementation.
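  • Similarly, the image confidence test might be sketched as follows, reusing the illustrative PixelHistory class from the previous sketch. The function names and the parameter x (the adjustable lower bound constant X above) are illustrative only.

```python
# Sketch of the image-confidence test described above.
def image_confidence(pixel_histories, x):
    """Fraction of pixels whose pixel confidence exceeds the lower bound x."""
    confident = sum(1 for p in pixel_histories if p.confidence() > x)
    return confident / len(pixel_histories)

def frame_is_complete(pixel_histories, x, target):
    """A frame is 'complete' once its image confidence reaches the target value."""
    return image_confidence(pixel_histories, x) >= target
```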
  • Referring now also to FIG. 1, application of the technique to motion control photography may be implemented as follows (to achieve an effect similar to that in such cinematic films as ‘28 Days Later’™ and ‘Vanilla Sky’™):
      • 1 (step 110) Film a scene with a motion control camera (with an actor walking down a street, say)
      • 2 (step 120) Move the camera back to the position of the first frame
      • 3 (step 130) Apply the above technique (calculating pixel confidence and image confidence values as detailed above) to the video stream at this position until an image confidence value is reached
      • 4 (step 140) Repeat previous steps 120 and 130, moving the camera to the next frame position (step 150) before each repetition, until all frame positions have been processed.
  • At this point there have been produced two versions of the same scene: one with the actor and various unwanted moving objects, and another which acts as a ‘backplate’ frame.
      • 5 (step 170) For the frame being considered (initially the first frame, step 160), find the delta between the original frame and the backplate frame (from step 130 above)
      • 6 (step 180) Remove any unwanted images from the delta frame (e.g., other people walking down the street)
      • 7 (step 190) Overlay the delta frame onto the backplate frame
      • 8 (steps 200 and 210) Until all frames have been processed (step 200), the next frame is considered (step 210) and steps 170, 180 and 190 are repeated.
  • At this point a finished scene with the desired effect has been produced, and the method ends (step 220).
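  • As a non-limiting illustration, the control flow of steps 110-220 might be sketched as follows. Camera control, frame capture, unwanted-object removal and overlaying are abstracted behind hypothetical callables (move_camera_to, capture_frame, remove_unwanted, overlay); PixelHistory and frame_is_complete are the helpers from the earlier sketches. Only the control flow follows the steps in the text.

```python
# Sketch of the FIG. 1 workflow under the assumptions stated above.
def build_backplate(move_camera_to, capture_frame, frame_positions,
                    width, height, x, target):
    """Steps 120-150: hold the camera at each frame position and accumulate
    pixel histories until the image confidence target is reached."""
    backplates = []
    for position in frame_positions:
        move_camera_to(position)                       # steps 120/150
        histories = [[PixelHistory() for _ in range(width)]
                     for _ in range(height)]
        while True:                                    # step 130
            frame = capture_frame()                    # frame[row][col] = (R, G, B)
            for row in range(height):
                for col in range(width):
                    histories[row][col].record(frame[row][col])
            if frame_is_complete(
                    [p for line in histories for p in line], x, target):
                break
        backplates.append([[histories[r][c].most_common_colour()
                            for c in range(width)] for r in range(height)])
    return backplates

def composite(original_frames, backplates, remove_unwanted, overlay):
    """Steps 160-210: delta each original frame against its backplate,
    strip unwanted moving objects, and overlay the result."""
    finished = []
    for original, backplate in zip(original_frames, backplates):
        delta = [[orig_px if orig_px != back_px else None        # step 170
                  for orig_px, back_px in zip(orig_row, back_row)]
                 for orig_row, back_row in zip(original, backplate)]
        delta = remove_unwanted(delta)                           # step 180
        finished.append(overlay(backplate, delta))               # step 190
    return finished
```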
  • As mentioned above, it will be appreciated that at the point after step 140, when all frames have been processed, two video streams or ‘movies’ have been produced, exactly aligned frame-by-frame: the original, with the actor and possibly unwanted crowds, and the backplate movie, with perfect buildings and background and no actor or people whatsoever. A problem arises at the subsequent stage 7 (step 180), ‘remove the unwanted images’. This is non-trivial, as these frames include the actor himself/herself (who presumably is wanted), so the problem is how to achieve only the desired removal. It will be understood that there are a number of known techniques for achieving this. The actor could be identified in the first frame, by laboriously outlining him/her, and then automatically tracked by computer in each sequential frame (such a technique is known from colorizing black-and-white movies); alternatively, the actor could be filmed on a different set, using ‘Ultimatte’™ or chroma-keying techniques, and then composited onto the backplate movie, which would avoid calculating the deltas at all.
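  • As a purely illustrative sketch of the chroma-keying alternative just mentioned, a frame of the actor filmed against a uniform key colour could be composited directly onto the backplate as follows; the key colour and tolerance values are assumptions made for the example, not values from the specification.

```python
# Hedged sketch of the chroma-keying alternative: no delta frames are computed.
def chroma_key_composite(actor_frame, backplate, key=(0, 255, 0), tolerance=60):
    """Copy every actor pixel that is not close to the key colour onto the backplate."""
    out = [row[:] for row in backplate]                 # start from the clean backplate
    for r, row in enumerate(actor_frame):
        for c, px in enumerate(row):
            # Keep the actor pixel unless it matches the key (green-screen) colour.
            if sum(abs(a - b) for a, b in zip(px, key)) > tolerance:
                out[r][c] = px
    return out
```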
  • It will be understood that the technique described above in relation to FIG. 1 allows processing to move through the filmed sequence, only moving on once the current frame is processed to an acceptable level. This means that frames need not be filmed more times than necessary (moving on only when ready), but it does not take into account the motion blur that occurs when the scene is filmed with a moving camera. This may be acceptable if the camera is moving slowly during the original sequence, but otherwise the technique described above could be modified by replacing steps 130-150 as follows:
      • (i) re-film sequence from start to finish
      • (ii) process each frame in the sequence that has not reached the image confidence level
      • (iii) if the image confidence level has not been reached for all frames then repeat from step (i)
  • The advantage to this modified method is that it takes into account the effect of motion blur caused by camera movement. Under this modified process, the final full sequence progressively improves until completion is reached. Although a disadvantage of this modified process is that it could involve repeatedly filming individual frames for which the confidence value has already been reached, it would not be necessary to process this redundant footage.
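  • A minimal sketch of this modified process, under the same illustrative abstractions as before (PixelHistory and frame_is_complete from the earlier sketches, plus a hypothetical film_sequence callable returning one complete pass of frames), might look as follows.

```python
# Sketch of steps (i)-(iii) above: re-film the whole sequence end to end and
# keep refining only the frames that have not yet reached the image confidence
# level, so motion blur from camera movement is preserved.
def build_backplate_with_motion(film_sequence, num_frames, width, height,
                                x, target, max_passes=50):
    # max_passes caps re-filming for the sketch; it is illustrative only.
    histories = [[[PixelHistory() for _ in range(width)] for _ in range(height)]
                 for _ in range(num_frames)]
    complete = [False] * num_frames
    for _ in range(max_passes):                         # step (i): re-film start to finish
        frames = film_sequence()
        for i, frame in enumerate(frames):
            if complete[i]:
                continue                                # skip frames already confident
            for row in range(height):                   # step (ii): process this frame
                for col in range(width):
                    histories[i][row][col].record(frame[row][col])
            complete[i] = frame_is_complete(
                [p for line in histories[i] for p in line], x, target)
        if all(complete):                               # step (iii): otherwise repeat
            break
    return [[[histories[i][r][c].most_common_colour() for c in range(width)]
             for r in range(height)] for i in range(num_frames)]
```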
  • It will be appreciated that the method for removal of moving objects from a video stream described above may be carried out in software running on processors (not shown) in a computer system (also not shown), and that the software may be provided as a computer program element carried on any suitable data carrier (also not shown) such as a magnetic or optical computer disc.
  • In conclusion, it will be understood that the method for removal of moving objects from a video stream described above provides the following advantages:
      • does not require construction of a 3D model
      • results in a high quality video image
      • once the software has been written it can be used many times over with no additional cost
      • reduces cost and disruption which would normally be involved in closing off a busy location to film a deserted scene
      • may allow filming in locations which it may not be possible to close off to film a deserted scene.
  • Note that a skilled person in the art would realize that the methods described herein and/or with reference to FIG. 1 could be implemented in a variety of programming languages, for example, Java™, C, and C++ (Java is a registered trademark of Sun Microsystems, Inc. in the United States, other countries, or both). Further, a skilled person would realize that once implemented the methods can be stored in a computer program product comprising one or more programs, in source or executable form, on media such as a floppy disk, CD, or DVD, suitable for loading onto a data processing host and causing the data processing host to carry out the methods. Further, a skilled person would realize that the methods described herein and/or with reference to FIG. 1 could be embodied in a data processing apparatus, and further used in providing a compensation service.

Claims (8)

1. A method of removing a moving part from a video stream image of a given scene, the method comprising the steps of:
obtaining a plurality of frame-series images of the scene, each image comprising a moving part and a static part;
comparing the plurality of frame-series images to identify parts of the scene which are static for a plurality of frames; and
building a single image of the scene comprising substantially a part of the scene identified as static.
2. The method of claim 1 wherein the steps of comparing and building comprise histogramming the frame-series images in order to remove noise therefrom, and combining the histogrammed plurality of frame-series images to produce the single image.
3. The method of claim 2 wherein the single image is formed using the most common value for each frame pixel, whereby the resulting image is a representation of the scene with an undesired moving object removed.
4. The method of claim 2 wherein the step of histogramming comprises producing for each pixel a pixel confidence value representative of the ratio of the number of times the pixel has been its most common colour and the total number of colours that the pixel has been.
5. The method of claim 4 wherein the step of building comprises producing for each frame an image confidence value representative of the ratio of the total number of pixels with pixel confidence value greater than a predetermined value and the total number of pixels.
6. The method of claim 1 wherein the step of obtaining comprises obtaining a plurality of frame-series images of the scene captured with a motion control camera.
7. A computer program storage product comprising computer program means for performing substantially the method of any one of claims 1-6.
8. An apparatus for removing a moving part from a video stream image of a given scene, the apparatus comprising:
means for obtaining a plurality of frame-series images of the scene, each image comprising a moving part and a static part;
means for comparing the plurality of frame-series images to identify parts of the scene which are static for a plurality of frames; and
means for building a single image of the scene comprising substantially a part of the scene identified as static.
US11/064,352 2004-04-28 2005-02-23 Method for removal of moving objects from a video stream Expired - Fee Related US7483062B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0409463.7A GB0409463D0 (en) 2004-04-28 2004-04-28 Method for removal of moving objects from a video stream
GB0409463.7 2004-04-28

Publications (2)

Publication Number Publication Date
US20050243192A1 (en) 2005-11-03
US7483062B2 US7483062B2 (en) 2009-01-27

Family

ID=32408173

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/064,352 Expired - Fee Related US7483062B2 (en) 2004-04-28 2005-02-23 Method for removal of moving objects from a video stream

Country Status (2)

Country Link
US (1) US7483062B2 (en)
GB (1) GB0409463D0 (en)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010004365A1 (en) 2008-07-10 2010-01-14 Ecole Polytechnique Federale De Lausanne (Epfl) Functional optical coherent imaging
EP2442720B1 (en) 2009-06-17 2016-08-24 3Shape A/S Focus scanning apparatus
US8340351B2 (en) * 2009-07-01 2012-12-25 Texas Instruments Incorporated Method and apparatus for eliminating unwanted objects from a streaming image
US9165605B1 (en) 2009-09-11 2015-10-20 Lindsay Friedman System and method for personal floating video
US20130172735A1 (en) * 2010-03-26 2013-07-04 Aimago S.A. Optical coherent imaging medical device
DK3401876T4 (en) 2011-07-15 2022-10-31 3Shape As DETECTION OF A MOVING OBJECT BY 3D SCANNING OF A RIGID OBJECT
CA2909914C (en) 2012-04-27 2018-05-01 Aimago S.A. Optical coherent imaging medical device
CA2914780C (en) 2012-07-10 2020-02-25 Aimago S.A. Perfusion assessment multi-modality optical medical device
US9479709B2 (en) 2013-10-10 2016-10-25 Nvidia Corporation Method and apparatus for long term image exposure with image stabilization on a mobile device
WO2015118120A1 (en) 2014-02-07 2015-08-13 3Shape A/S Detecting tooth shade
US9275284B2 (en) 2014-04-30 2016-03-01 Sony Corporation Method and apparatus for extraction of static scene photo from sequence of images
EP3291725B1 (en) 2015-05-07 2024-07-24 Stryker Corporation Methods and systems for laser speckle imaging of tissue using a color image sensor
US10902626B2 (en) 2018-04-11 2021-01-26 International Business Machines Corporation Preventing intrusion during video recording or streaming
US10839492B2 (en) 2018-05-23 2020-11-17 International Business Machines Corporation Selectively redacting unrelated objects from images of a group captured within a coverage area
US12125169B2 (en) 2022-03-03 2024-10-22 Microsoft Technology Licensing, Llc Device for replacing intrusive object in images


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19941644A1 (en) 1999-08-27 2001-03-01 Deutsche Telekom Ag Method for realtime segmentation of video objects with known stationary image background

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5915044A (en) * 1995-09-29 1999-06-22 Intel Corporation Encoding video images using foreground/background segmentation
US20010053248A1 (en) * 1996-02-15 2001-12-20 Mitsuru Maeda Image processing apparatus and method and medium
US6301382B1 (en) * 1996-06-07 2001-10-09 Microsoft Corporation Extracting a matte of a foreground object from multiple backgrounds by triangulation
US5914748A (en) * 1996-08-30 1999-06-22 Eastman Kodak Company Method and apparatus for generating a composite image using the difference of two images
US6078619A (en) * 1996-09-12 2000-06-20 University Of Bath Object-oriented video system
US6026181A (en) * 1996-12-25 2000-02-15 Sharp Kabushiki Kaisha Image processing apparatus
US6681058B1 (en) * 1999-04-15 2004-01-20 Sarnoff Corporation Method and apparatus for estimating feature values in a region of a sequence of images

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090316777A1 (en) * 2008-06-20 2009-12-24 Xin Feng Method and Apparatus for Improved Broadcast Bandwidth Efficiency During Transmission of a Static Code Page of an Advertisement
CN103440626A (en) * 2013-08-16 2013-12-11 北京智谷睿拓技术服务有限公司 Lighting method and lighting system
KR20190101802A (en) * 2018-02-23 2019-09-02 삼성전자주식회사 Electronic device and method for providing augmented reality object thereof
US11164388B2 (en) * 2018-02-23 2021-11-02 Samsung Electronics Co., Ltd. Electronic device and method for providing augmented reality object therefor
KR102450948B1 (en) * 2018-02-23 2022-10-05 삼성전자주식회사 Electronic device and method for providing augmented reality object thereof
KR102061867B1 (en) 2018-09-10 2020-01-02 한성욱 Apparatus for generating image and method thereof
WO2020054978A1 (en) * 2018-09-10 2020-03-19 한성욱 Device and method for generating image

Also Published As

Publication number Publication date
US7483062B2 (en) 2009-01-27
GB0409463D0 (en) 2004-06-02

Similar Documents

Publication Publication Date Title
US7483062B2 (en) Method for removal of moving objects from a video stream
US8363117B2 (en) Method and apparatus for photographing and projecting moving images
US11574655B2 (en) Modification of objects in film
US20090027549A1 (en) Method for processing motion pictures at high frame rates with improved temporal and spatial resolution, resulting in improved audience perception of dimensionality in 2-D and 3-D presentation
US5198902A (en) Apparatus and method for processing a video signal containing single frame animation material
US6950130B1 (en) Method of image background replacement
US7129961B1 (en) System and method for dynamic autocropping of images
US20060244917A1 (en) Method for exhibiting motion picture films at a higher frame rate than that in which they were originally produced
US7164462B2 (en) Filming using rear-projection screen and image projector
JP2005223487A (en) Digital camera work apparatus, digital camera work method, and digital camera work program
US9277169B2 (en) Method for enhancing motion pictures for exhibition at a higher frame rate than that in which they were originally produced
US11715495B2 (en) Modification of objects in film
US20050254011A1 (en) Method for exhibiting motion picture films at a higher frame rate than that in which they were originally produced
US6909491B2 (en) Electronic and film theatrical quality
US20150281637A1 (en) Method for correcting corrupted frames during conversion of motion pictures photographed at a low frame rate, for exhibition at a higher frame rate
CN113949832A (en) Delivering motion pictures to motion picture auditoriums at multiple frame rates
US20110025911A1 (en) Method of enhancing motion pictures for exhibition at a higher frame rate than that in which they were originally produced
US7474328B2 (en) Method for recomposing large format media
Clark et al. The Status of Cinematography Today
CA2924161C (en) Post production pipeline process for editing and manipulating 180 degree footage for half-dome theaters
Medioni Using Computer Vision in Real Applications: Two Success Stories,.
Van Rijsselbergen et al. Enabling universal media experiences through semantic adaptation in the creative drama productionworkflow
Massey The new cronophotography: novel applications of salient stills
Forbin Flicker and unsteadiness compensation for archived film sequences
Kuiper et al. Simulating of authentic movie faults

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ALLMAN, MARK;CLEE, SCOTT JOHN;REEL/FRAME:015750/0666

Effective date: 20050217

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20130127