
US20150271381A1 - Methods and systems for determining frames and photo composition within multiple frames - Google Patents

Methods and systems for determining frames and photo composition within multiple frames

Info

Publication number
US20150271381A1
US20150271381A1 US14/220,149 US201414220149A US2015271381A1 US 20150271381 A1 US20150271381 A1 US 20150271381A1 US 201414220149 A US201414220149 A US 201414220149A US 2015271381 A1 US2015271381 A1 US 2015271381A1
Authority
US
United States
Prior art keywords
frames
frame
moving speed
candidate
area corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/220,149
Inventor
Bing-Sheng Lin
Yi-Chi Lin
Tai-Ling Lu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HTC Corp
Original Assignee
HTC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HTC Corp filed Critical HTC Corp
Priority to US14/220,149 priority Critical patent/US20150271381A1/en
Priority to TW103137446A priority patent/TWI531911B/en
Priority to CN201410614516.5A priority patent/CN104933677B/en
Assigned to HTC CORPORATION reassignment HTC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LIN, YI-CHI, LU, TAI-LING, LIN, BING-SHENG
Publication of US20150271381A1 publication Critical patent/US20150271381A1/en
Priority to US15/159,825 priority patent/US9898828B2/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • H04N5/2353
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412Digitisers structurally integrated in a display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • G06T7/2006
    • G06T7/2073
    • G06T7/2086
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N5/23245
    • H04N5/23293
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/372
    • H04N5/374
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04101 2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00Details of colour television systems
    • H04N2209/04Picture signal generators
    • H04N2209/041Picture signal generators using solid-state devices
    • H04N2209/042Picture signal generators using solid-state devices having a single pick-up sensor

Definitions

  • the disclosure relates generally to image frame management, and, more particularly to methods and systems for determining frames and photo composition within multiple frames.
  • a handheld device may have telecommunications capabilities, e-mail message capabilities, image capture capabilities, an advanced address book management system, a media playback system, and various other functions. Due to increased convenience and functions of the devices, these devices have become necessities of life.
  • the image capture unit such as a camera takes images immediately one after another in a short amount of time. That is, when the continuous shot function is performed, a continuous image capture process is performed to continuously capture a plurality of images in sequence.
  • an inventive function called “dynamic continuous shot composition” may be also provided on the portable devices.
  • the dynamic continuous shot composition is a way for image composition.
  • a camera can be set on a tripod, and several images with the same scene are continuously captured by the camera.
  • the moving object within the images is extracted and overlapped onto the last image, thus presenting the dynamic effect of the track of the moving object.
  • the overlap of the moving object onto the last image can be achieved by using an image composition algorithm. It is understood that, the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
  • the respective images are continuously captured with a fixed time interval. If the time interval is too short or the moving speed of the object is too fast, the objects on the composed image may have a large overlapped portion, as shown in FIG. 1A . On the contrary, if the time interval is too long or the moving speed of the object is too slow, the objects may be scattered on the composed image, as shown in FIG. 1B , and the number of the objects on the composed image may not be enough, resulting in difficulties in presenting the dynamic effect.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. The candidate frames are composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
  • the processing unit composes the candidate frames to generate a composed photo.
  • a table is looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames are selected within the frames at intervals of the frame gap number. In some embodiments, the faster the moving speed is, the larger the frame gap number is.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
  • a plurality of frames which are respectively captured with a predefined time interval are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit.
  • the storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval.
  • the processing unit detects at least one object within at least two of the frames.
  • the processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
  • the processing unit composes the at least one candidate frame to generate a composed photo.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code embodied in a tangible media.
  • the program code When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed method.
  • FIG. 1A is a schematic diagram illustrating an example of a composed image with objects having a large overlapped portion
  • FIG. 1B is a schematic diagram illustrating an example of a composed image with scattered objects
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and photo composition within multiple frames of the invention
  • FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention.
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention.
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and/or photo composition within multiple frames of the invention.
  • the system for determining frames and/or photo composition within multiple frames 100 can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a GPS (Global Positioning System), or any picture-taking device.
  • the system for determining frames and/or photo composition within multiple frames 100 comprises a storage unit 110 and a processing unit 120 .
  • the storage unit 110 comprises a plurality of frames, which are respectively captured with a time interval. It is understood that, in some embodiments, a time interval would be predefined or dynamically defined. It is understood that, in some embodiments, the frames can be obtained from a video. It is understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise an image capture unit (not shown in FIG. 2 ).
  • the image capture unit may be a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), placed at the imaging position for objects inside the electronic device.
  • the image capture unit can continuously capture the frames within a predefined time interval.
  • the system for determining frames and/or photo composition within multiple frames 100 can also comprise a display unit (not shown in FIG. 2 ).
  • the display unit can display related figures and interfaces, and related data, such as the image frames continuously captured by the image capture unit.
  • the display unit may be a screen integrated with a touch-sensitive device (not shown).
  • the touch-sensitive device has a touch-sensitive surface comprising sensors in at least one dimension to detect contact and movement of an input tool, such as a stylus or finger on the touch-sensitive surface. That is, users can directly input related data via the display unit.
  • the processing unit 120 can control related components of the system for determining frames and/or photo composition within multiple frames 100 , process the image frames, and perform the methods for determining frames and/or photo composition within multiple frames, which will be discussed further in the following paragraphs. It is noted that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can further comprise a focus unit (not shown in FIG. 2 ). The processing unit 120 can control the focus unit to perform a focus process for at least one object during the photography process.
  • FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention.
  • the method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames for photo composition can be determined.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • In step S320, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting one grayscale frame from the other to obtain a contour of the object.
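As a concrete illustration of the frame-differencing idea above, the sketch below converts two successive frames to grayscale and thresholds their absolute difference. Modeling a frame as a nested list of (R, G, B) tuples and the threshold value are illustrative assumptions, not details from the disclosure; a real implementation would typically use a library such as OpenCV or NumPy.

```python
# A minimal sketch of the object-detection step: transform two successive
# frames into grayscale and subtract them so that pixels belonging to the
# moving object stand out. The pixel model and threshold are assumptions.

def to_grayscale(frame):
    """Convert an RGB frame to grayscale using the common luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]

def difference_mask(frame_a, frame_b, threshold=30.0):
    """Subtract one grayscale frame from the other; changed pixels form
    the contour region of the moving object."""
    gray_a, gray_b = to_grayscale(frame_a), to_grayscale(frame_b)
    return [[abs(pa - pb) > threshold for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(gray_a, gray_b)]
```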
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval.
  • In step S340, candidate frames are selected from the frames according to the moving speed of the object.
  • a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that, the actual value of the frame gap number can be flexibly designed according to different applications and requirements.
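The table lookup just described can be sketched as follows. The speed thresholds and gap values are illustrative assumptions (the disclosure expressly leaves the actual values to the application), chosen only to show the stated rule that a faster moving speed maps to a larger frame gap number.

```python
# Illustrative speed-to-gap table; thresholds and gaps are assumed values.
SPEED_TO_GAP = [
    (50.0, 4),  # fast object  -> keep every 4th frame
    (20.0, 2),  # medium speed -> keep every 2nd frame
    (0.0, 1),   # slow object  -> keep every frame
]

def frame_gap(moving_speed):
    """Look up the frame gap number for a given moving speed."""
    for threshold, gap in SPEED_TO_GAP:
        if moving_speed >= threshold:
            return gap
    return 1

def select_candidates(frames, moving_speed):
    """Select candidate frames at intervals of the frame gap number."""
    return frames[::frame_gap(moving_speed)]
```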
  • the selected candidate frames can be used for photo composition.
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention.
  • the method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames for photo composition can be determined.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • In step S420, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting one grayscale frame from the other to obtain a contour of the object.
  • an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated.
  • In step S440, at least one candidate frame is selected according to the overlapped area corresponding to the object.
  • the selected candidate frames can be used for photo composition.
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • In step S510, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object (Yes in step S510), in step S520, the rear frame, such as the second frame, is selected as the candidate frame. If the overlapped area corresponding to the object is not less than a specific percentage of the contour area of the object (No in step S510), the procedure is completed.
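The selection rule of FIG. 5 can be sketched as below, approximating the object's contour in each frame by an axis-aligned bounding box (x, y, width, height). The box model and the 50% default percentage are simplifying assumptions; the disclosure speaks of contour areas and leaves the specific percentage open.

```python
# Sketch of the overlap test: select the rear (second) frame when the
# overlapped area is less than a given percentage of the object's area.
# Boxes are (x, y, width, height); a simplification of the contour area.

def overlap_area(box_a, box_b):
    """Area of the intersection of two axis-aligned boxes (0 if disjoint)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

def select_rear_frame(box_first, box_second, percentage=0.5):
    """True when the overlapped area is below `percentage` of the object area,
    i.e. the second frame should be kept as a candidate."""
    contour_area = box_first[2] * box_first[3]
    return overlap_area(box_first, box_second) < percentage * contour_area
```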
  • an overlapped area corresponding to the object within the first frame and a subsequent frame such as a third frame (wherein the second frame is not selected as a candidate frame) can be calculated, and accordingly determined.
  • an overlapped area corresponding to the object within the second frame and a subsequent frame such as a third frame (wherein the second frame is selected as a candidate frame) can be calculated, and accordingly determined. The process is repeated until all frames are examined.
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention.
  • In step S610, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is equal to zero. If the overlapped area corresponding to the object is equal to zero (Yes in step S610), in step S620, a moving speed of the object is calculated according to the positions of the object in the two frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can be also calculated.
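The speed computation in step S620 can be sketched by taking the mass center of the object's contour in each frame and dividing its displacement by the capture interval. Representing the contour as a list of (x, y) pixel coordinates is an assumption for illustration; it is consistent with the "position of mass center of the area" mentioned above.

```python
# Sketch: moving speed from the mass centers of the object's contour
# pixels in two frames and the predefined time interval between them.
import math

def mass_center(pixels):
    """Mean (x, y) of the object's contour pixels."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def moving_speed(pixels_a, pixels_b, time_interval):
    """Displacement of the mass center divided by the capture interval."""
    (x1, y1), (x2, y2) = mass_center(pixels_a), mass_center(pixels_b)
    return math.hypot(x2 - x1, y2 - y1) / time_interval
```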
  • In step S630, candidate frames are selected within the frames according to the moving speed of the object.
  • a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that, the actual value of the frame gap number can be flexibly designed according to different applications and requirements.
  • the selected candidate frames can be used for photo composition. If the overlapped area corresponding to the object is not equal to zero (No in step S610), the procedure is completed.
  • an overlapped area corresponding to the object within the first frame and a subsequent frame such as a third frame (wherein the overlapped area corresponding to the object is not equal to zero) can be calculated, and accordingly determined. The process is repeated until all frames are examined.
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention.
  • the method for photo composition within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device.
  • candidate frames can be selected from frames and used for photo composition.
  • a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • In step S720, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting one grayscale frame from the other to obtain a contour of the object.
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval.
  • In step S740, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that, the actual value of the frame gap number can be flexibly designed according to different applications and requirements.
  • In step S750, the selected candidate frames are composed to generate a composed photo. It is understood that, the composition of candidate frames can be performed by using an image composition algorithm. It is understood that the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
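Since the disclosure leaves the composition algorithm open, the following is only one possible realization: paste the moving-object pixels of each candidate frame, given by per-frame boolean masks such as those produced by frame differencing, onto a copy of the last frame. The mask-based overlay and the list-of-rows frame model are assumptions for illustration.

```python
# One simple composition sketch: overlay masked object pixels from every
# candidate frame onto the last frame, yielding the track of the object.
import copy

def compose(candidate_frames, object_masks):
    """Paste object pixels (mask True) of each candidate onto the last frame."""
    composed = copy.deepcopy(candidate_frames[-1])
    for frame, mask in zip(candidate_frames, object_masks):
        for y, row in enumerate(mask):
            for x, is_object in enumerate(row):
                if is_object:
                    composed[y][x] = frame[y][x]
    return composed
```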
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
  • the method for photo composition can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames can be selected from frames and used for photo composition.
  • In step S810, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video.
  • In step S820, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting one grayscale frame from the other to obtain a contour of the object.
  • In step S830, an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated.
  • In step S840, at least one candidate frame is selected according to the overlapped area corresponding to the object. It is understood that, in some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than a specific percentage of the contour area of the object, the second frame is selected as the candidate frame. In some embodiments, it is determined whether the overlapped area corresponding to the object is equal to zero.
  • a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object.
  • the selected candidate frames are composed to generate a composed photo.
  • the composition of candidate frames can be performed by using an image composition algorithm. It is understood that, the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
  • the methods and systems for determining frames and photo composition within multiple frames of the present invention can select appropriate frames from continuously captured frames according to the moving speed of the object, and/or the overlapped area corresponding to the object with frames, and accordingly generate a composed frame, thus improving the dynamic effect of the track of the moving object.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMS, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine thereby becomes an apparatus for practicing the methods.
  • the methods may also be embodied in the form of a program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing, the disclosed methods.
  • the program code When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to application specific logic circuits.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Methods and systems for determining frames and photo composition within multiple frames are provided. First, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. In some embodiments, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. In some embodiments, an overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The disclosure relates generally to image frame management, and, more particularly to methods and systems for determining frames and photo composition within multiple frames.
  • 2. Description of the Related Art
  • Recently, portable devices, such as handheld devices, have become more and more technically advanced and multifunctional. For example, a handheld device may have telecommunications capabilities, e-mail message capabilities, image capture capabilities, an advanced address book management system, a media playback system, and various other functions. Due to increased convenience and functions of the devices, these devices have become necessities of life.
  • Currently, a function called continuous shot is provided on portable devices. In the continuous shot mode, the image capture unit, such as a camera, takes images immediately one after another in a short amount of time. That is, when the continuous shot function is performed, a continuous image capture process is performed to continuously capture a plurality of images in sequence. Additionally, an inventive function called “dynamic continuous shot composition” may also be provided on the portable devices. The dynamic continuous shot composition is a way for image composition. In the dynamic continuous shot composition, a camera can be set on a tripod, and several images of the same scene are continuously captured by the camera. The moving object within the images is extracted and overlapped onto the last image, thus presenting the dynamic effect of the track of the moving object. The overlap of the moving object onto the last image can be achieved by using an image composition algorithm. It is understood that the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
  • Conventionally, the respective images are continuously captured with a fixed time interval. If the time interval is too short or the moving speed of the object is too fast, the objects on the composed image may have a large overlapped portion, as shown in FIG. 1A. On the contrary, if the time interval is too long or the moving speed of the object is too slow, the objects may be scattered on the composed image, as shown in FIG. 1B, and the number of the objects on the composed image may not be enough, resulting in difficulties in presenting the dynamic effect.
  • BRIEF SUMMARY OF THE INVENTION
  • Methods and systems for determining frames and photo composition within multiple frames are provided.
  • In an embodiment of a method for determining frames within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object.
  • In an embodiment of a method for photo composition within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. A moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected from the frames according to the moving speed of the object. The candidate frames are composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selects candidate frames from the frames according to the moving speed of the object. The processing unit composes the candidate frames to generate a composed photo.
  • In some embodiments, a table is looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames are selected within the frames at intervals of the frame gap number. In some embodiments, the faster the moving speed is, the larger the frame gap number is.
  • In an embodiment of a method for determining frames within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object.
  • An embodiment of a system for determining frames within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object.
  • In an embodiment of a method for photo composition within multiple frames, a plurality of frames, which are respectively captured with a predefined time interval, are obtained. At least one object within at least two of the frames is detected. An overlapped area corresponding to the object within a first frame and a second frame is calculated, and at least one candidate frame is selected according to the overlapped area corresponding to the object. The at least one candidate frame is composed to generate a composed photo.
  • An embodiment of a system for photo composition within multiple frames comprises a storage unit and a processing unit. The storage unit comprises a plurality of frames, which are respectively captured with a predefined time interval. The processing unit detects at least one object within at least two of the frames. The processing unit calculates an overlapped area corresponding to the object within a first frame and a second frame, and selects at least one candidate frame according to the overlapped area corresponding to the object. The processing unit composes the at least one candidate frame to generate a composed photo.
  • In some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object, the second frame is selected as the candidate frame.
  • In some embodiments, it is determined whether the overlapped area corresponding to the object equals zero. If the overlapped area corresponding to the object equals zero, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code embodied in a tangible media. When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed method.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will become more fully understood by referring to the following detailed description with reference to the accompanying drawings, wherein:
  • FIG. 1A is a schematic diagram illustrating an example of a composed image with objects having a large overlapped portion;
  • FIG. 1B is a schematic diagram illustrating an example of a composed image with scattered objects;
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and photo composition within multiple frames of the invention;
  • FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention;
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention;
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention;
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention;
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention; and
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Methods and systems for determining frames and photo composition within multiple frames are provided.
  • FIG. 2 is a schematic diagram illustrating an embodiment of a system for determining frames and/or photo composition within multiple frames of the invention. The system for determining frames and/or photo composition within multiple frames 100 can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a GPS (Global Positioning System), or any picture-taking device.
  • The system for determining frames and/or photo composition within multiple frames 100 comprises a storage unit 110 and a processing unit 120. The storage unit 110 comprises a plurality of frames, which are respectively captured with a time interval. It is understood that, in some embodiments, the time interval can be predefined or dynamically defined. It is understood that, in some embodiments, the frames can be obtained from a video. It is understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise an image capture unit (not shown in FIG. 2). The image capture unit may be a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), placed at the imaging position for objects inside the electronic device. The image capture unit can continuously capture the frames with a predefined time interval. It is also understood that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can also comprise a display unit (not shown in FIG. 2). The display unit can display related figures and interfaces, and related data, such as the image frames continuously captured by the image capture unit. It is understood that, in some embodiments, the display unit may be a screen integrated with a touch-sensitive device (not shown). The touch-sensitive device has a touch-sensitive surface comprising sensors in at least one dimension to detect contact and movement of an input tool, such as a stylus or finger, on the touch-sensitive surface. That is, users can directly input related data via the display unit. The processing unit 120 can control related components of the system for determining frames and/or photo composition within multiple frames 100, process the image frames, and perform the methods for determining frames and/or photo composition within multiple frames, which will be discussed further in the following paragraphs.
It is noted that, in some embodiments, the system for determining frames and/or photo composition within multiple frames 100 can further comprise a focus unit (not shown in FIG. 2). The processing unit 120 can control the focus unit to perform a focus process for at least one object during the photography process.
  • FIG. 3 is a flowchart of an embodiment of a method for determining frames within multiple frames of the invention. The method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames for photo composition can be determined.
  • In step S310, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S320, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S330, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour is calculated based on the contour of the object, the position of the mass center of the area is calculated based on the area of the contour, and the moving speed of the object can also be calculated based on the position of the mass center of the area. In step S340, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. The selected candidate frames can be used for photo composition.
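As a non-limiting illustration of steps S330 and S340, the speed computation and table-based frame-gap selection might be sketched as follows. The gap-table thresholds, function names, and the sampling convention (keep one frame, then skip the gap number of frames) are assumptions made for illustration only; they are not part of the disclosed embodiments.

```python
# Hypothetical sketch of steps S330/S340: compute the object's moving
# speed from its mass-center positions in two frames, look the speed up
# in a gap table, and keep frames at intervals of the frame gap number.
# The table values below are illustrative assumptions.

def moving_speed(center_a, center_b, time_interval):
    """Speed (pixels per second) between two mass-center positions."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    return ((dx * dx + dy * dy) ** 0.5) / time_interval

# The faster the moving speed, the larger the frame gap number.
GAP_TABLE = [(50.0, 1), (150.0, 2), (float("inf"), 4)]

def frame_gap(speed):
    """Look up the frame gap number for a given moving speed."""
    for upper_bound, gap in GAP_TABLE:
        if speed <= upper_bound:
            return gap

def select_candidates(frames, speed):
    """Keep one frame, then skip `gap` frames, repeatedly."""
    gap = frame_gap(speed)
    return frames[::gap + 1]
```

With this assumed table, an object moving at 100 pixels per second yields a gap of 2, so every third frame becomes a candidate, thinning out fast-moving objects so they do not overlap on the composed photo.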
  • FIG. 4 is a flowchart of another embodiment of a method for determining frames within multiple frames of the invention. The method for determining frames within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames for photo composition can be determined.
  • In step S410, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S420, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S430, an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated. It is understood that, in some embodiments, once the contour of the object is detected, the area and the position of the contour of the object can be calculated, and the overlapped area can be accordingly calculated. In step S440, at least one candidate frame is selected according to the overlapped area corresponding to the object. The selected candidate frames can be used for photo composition.
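The grayscale subtraction of step S420 and the overlap computation of step S430 could look roughly like the following sketch over tiny frames represented as nested lists. The difference threshold and the bounding-box representation of the contour are simplifying assumptions; an actual embodiment may use any contour representation.

```python
# Illustrative sketch of steps S420/S430: difference two grayscale
# frames to find the moving object's pixels, take the object's bounding
# box in each frame, and intersect the boxes to get the overlapped
# area. The threshold value and box representation are assumptions.

def object_pixels(frame_a, frame_b, threshold=10):
    """Pixels whose grayscale difference exceeds the threshold."""
    return {(y, x)
            for y, row in enumerate(frame_a)
            for x, value in enumerate(row)
            if abs(value - frame_b[y][x]) > threshold}

def bounding_box(pixels):
    """Axis-aligned box (x0, y0, x1, y1) enclosing the pixel set."""
    ys = [y for y, _ in pixels]
    xs = [x for _, x in pixels]
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

def overlapped_area(box_a, box_b):
    """Area of the intersection of two boxes (zero if disjoint)."""
    width = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    height = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(width, 0) * max(height, 0)
```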
  • FIG. 5 is a flowchart of an embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention. In step S510, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object (Yes in step S510), in step S520, the rear frame, such as the second frame, is selected as the candidate frame. If the overlapped area corresponding to the object is not less than the specific percentage of the contour area of the object (No in step S510), the procedure is completed. It is understood that, thereafter, an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the second frame is not selected as a candidate frame), can be calculated, and accordingly determined. Alternatively, an overlapped area corresponding to the object within the second frame and a subsequent frame, such as a third frame (wherein the second frame is selected as a candidate frame), can be calculated, and accordingly determined. The process is repeated until all frames are examined.
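The FIG. 5 loop might be expressed as the following sketch, in which a frame becomes a candidate only when the object's overlap with the most recently kept frame drops below a percentage of its contour area. The 20% default and the `overlap` callback are assumptions for illustration; the specific percentage is left open by the description.

```python
# Illustrative version of the FIG. 5 selection loop. `overlap(a, b)` is
# any function returning the overlapped area of the object between two
# frames; the 20% threshold is an assumed default.

def select_by_overlap(frames, contour_area, overlap, percent=0.2):
    candidates = [frames[0]]      # the first frame is always kept here
    reference = frames[0]
    for frame in frames[1:]:
        if overlap(reference, frame) < percent * contour_area:
            candidates.append(frame)
            reference = frame     # later frames are compared to this one
        # otherwise the frame is skipped and the next one is examined
    return candidates
```

A quick way to exercise this is to model frames as 1-D object positions with an object of width 10, so the overlap of two frames is `max(0, 10 - distance)`.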
  • FIG. 6 is a flowchart of another embodiment of a method for selecting at least one candidate frame according to the overlapped area corresponding to the object of the invention. In step S610, it is determined whether the overlapped area corresponding to the object within two frames, such as a first frame and a second frame, equals zero. If the overlapped area corresponding to the object equals zero (Yes in step S610), in step S620, a moving speed of the object is calculated according to the positions of the object in the two frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can also be calculated. In step S630, candidate frames are selected within the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. The selected candidate frames can be used for photo composition. If the overlapped area corresponding to the object does not equal zero (No in step S610), the procedure is completed. It is understood that, thereafter, an overlapped area corresponding to the object within the first frame and a subsequent frame, such as a third frame (wherein the overlapped area corresponding to the object does not equal zero), can be calculated, and accordingly determined. The process is repeated until all frames are examined.
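The FIG. 6 branch can be sketched as follows: when the object's overlapped area between two successive frames reaches zero, selection falls back to a speed-based selector such as the frame-gap lookup of FIG. 3. The `overlap` and `speed_select` callbacks, and the empty-result convention for the No branch, are assumptions for illustration.

```python
# Illustrative sketch of the FIG. 6 decision: a zero overlapped area
# between two successive frames triggers speed-based selection
# (steps S620/S630). `overlap` and `speed_select` are assumed callbacks.

def select_candidates_fig6(frames, overlap, speed_select):
    for i in range(len(frames) - 1):
        if overlap(frames[i], frames[i + 1]) == 0:
            return speed_select(frames)   # fall back to moving speed
    return []   # overlap never reached zero; this path selects nothing
```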
  • FIG. 7 is a flowchart of an embodiment of a method for photo composition within multiple frames of the invention. The method for photo composition within multiple frames can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames can be selected from frames and used for photo composition.
  • In step S710, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S720, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S730, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval. It is understood that, in some embodiments, once the contour of the object is detected, the area of the contour, the position of the mass center of the area, and the moving speed of the object can also be calculated. In step S740, candidate frames are selected from the frames according to the moving speed of the object. It is understood that, in some embodiments, a table can be looked up according to the moving speed of the object to obtain a frame gap number, and the candidate frames can be selected within the frames at intervals of the frame gap number. In the table, the faster the moving speed is, the larger the frame gap number is. It is noted that the actual value of the frame gap number can be flexibly designed according to different applications and requirements. In step S750, the selected candidate frames are composed to generate a composed photo. It is understood that the composition of candidate frames can be performed by using an image composition algorithm. It is understood that the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
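As a toy illustration of the composition in step S750 (not the actual image composition algorithm, which the description leaves to the art), each candidate frame's object pixels might simply be copied onto the last frame. The nested-list frame format and the per-frame pixel sets are assumed inputs for this sketch.

```python
# Toy sketch of step S750: overlay each candidate frame's object pixels
# onto a copy of the last frame, so the moving object appears once per
# candidate. Frames are nested lists of grayscale values; the per-frame
# object pixel sets are an assumed input.

def compose(frames, pixels_per_frame):
    composed = [row[:] for row in frames[-1]]    # copy the last frame
    for frame, pixels in zip(frames, pixels_per_frame):
        for y, x in pixels:
            composed[y][x] = frame[y][x]         # paste the object
    return composed
```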
  • FIG. 8 is a flowchart of another embodiment of a method for photo composition within multiple frames of the invention. The method for photo composition can be used in an electronic device, such as a computer, or a portable device, such as a digital camera, a handheld device such as a mobile phone, a smart phone, a PDA, a GPS, or any picture-taking device. In the embodiment, candidate frames can be selected from frames and used for photo composition.
  • In step S810, a plurality of frames are obtained. It is understood that, in some embodiments, the frames are continuously and respectively captured with a predefined time interval. In some embodiments, the frames can be obtained from a video. In step S820, at least one object within at least two of the frames, such as the first two successive frames, is detected. It is understood that, in some embodiments, the object can be obtained by transforming the at least two frames into grayscale frames, and subtracting the grayscale frames from each other to obtain a contour of the object. In step S830, an overlapped area corresponding to the object within two frames, such as a first frame and a second frame, is calculated. It is understood that, in some embodiments, once the contour of the object is detected, the area and the position of the contour of the object can be calculated, and the overlapped area can be accordingly calculated. In step S840, at least one candidate frame is selected according to the overlapped area corresponding to the object. It is understood that, in some embodiments, it is determined whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object. If the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object, the second frame is selected as the candidate frame. In some embodiments, it is determined whether the overlapped area corresponding to the object equals zero. If the overlapped area corresponding to the object equals zero, a moving speed of the object is calculated according to the positions of the object in the respective frames and the predefined time interval, and candidate frames are selected within the frames according to the moving speed of the object. After all frames are examined, in step S850, the selected candidate frames are composed to generate a composed photo.
It is understood that the composition of candidate frames can be performed by using an image composition algorithm. It is understood that the image composition algorithm may be various and known in the art, and related descriptions are omitted here.
  • Therefore, the methods and systems for determining frames and photo composition within multiple frames of the present invention can select appropriate frames from continuously captured frames according to the moving speed of the object, and/or the overlapped area corresponding to the object with frames, and accordingly generate a composed frame, thus improving the dynamic effect of the track of the moving object.
  • Methods for determining frames and photo composition within multiple frames may take the form of a program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine thereby becomes an apparatus for practicing the methods. The methods may also be embodied in the form of a program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the disclosed methods. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to application specific logic circuits.
  • While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. Those who are skilled in this technology can still make various alterations and modifications without departing from the scope and spirit of this invention. Therefore, the scope of the present invention shall be defined and protected by the following claims and their equivalents.

Claims (13)

What is claimed is:
1. A method for determining frames within multiple frames for use in an electronic device, comprising:
obtaining a plurality of frames, wherein the respective frames are captured with a time interval;
detecting at least one object within at least two of the frames;
calculating a moving speed of the object according to the positions of the object in the respective frames and the time interval; and
selecting candidate frames from the frames according to the moving speed of the object.
2. The method of claim 1, wherein the step of detecting at least one object within at least two of the frames comprises the steps of:
transforming the at least two frames into grayscale frames; and
subtracting the grayscale frames with each other to obtain a contour of the object.
3. The method of claim 1, wherein the step of selecting candidate frames from the frames according to the moving speed of the object comprises the steps of:
looking up a table according to the moving speed of the object to obtain a frame gap number; and
selecting the candidate frames within the frames at intervals of the frame gap number.
4. The method of claim 3, wherein the faster the moving speed is, the larger the frame gap number is.
5. The method of claim 1, further comprising recording a video, wherein the frames are obtained from the video.
6. A method for determining frames within multiple frames for use in an electronic device, comprising:
obtaining a plurality of frames, wherein the respective frames are captured with a time interval;
detecting at least one object within at least two of the frames;
calculating an overlapped area corresponding to the object within a first frame and a second frame; and
selecting at least one candidate frame according to the overlapped area corresponding to the object.
7. The method of claim 6, wherein the step of selecting at least one candidate frame according to the overlapped area corresponding to the object comprises the steps of:
determining whether the overlapped area corresponding to the object is less than a specific percentage of a contour area of the object; and
if the overlapped area corresponding to the object is less than the specific percentage of the contour area of the object, selecting the second frame as the candidate frame.
8. The method of claim 6, wherein the step of selecting at least one candidate frame according to the overlapped area corresponding to the object comprises the steps of:
determining whether the overlapped area corresponding to the object equals zero;
if the overlapped area corresponding to the object equals zero, calculating a moving speed of the object according to the positions of the object in the respective frames and the time interval; and
selecting candidate frames within the frames according to the moving speed of the object.
9. The method of claim 6, wherein the step of detecting at least one object within at least two of the frames comprises the steps of:
transforming the at least two frames into grayscale frames; and
subtracting the grayscale frames with each other to obtain a contour of the object.
10. The method of claim 8, wherein the step of selecting candidate frames within the frames according to the moving speed of the object comprises the steps of:
looking up a table according to the moving speed of the object to obtain a frame gap number; and
selecting the candidate frames within the frames at intervals of the frame gap number.
11. The method of claim 10, wherein the faster the moving speed is, the larger the frame gap number is.
12. The method of claim 6, further comprising recording a video, wherein the frames are obtained from the video.
13. A system for determining frames within multiple frames for use in an electronic device, comprising:
a storage unit comprising a plurality of frames, wherein the respective frames are captured with a predefined time interval; and
a processing unit detecting at least one object within at least two of the frames, calculating a moving speed of the object according to the positions of the object in the respective frames and the predefined time interval, and selecting candidate frames from the frames according to the moving speed of the object.
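The candidate-selection pipeline the claims describe (grayscale differencing in claim 9, the overlap test in claim 7, the speed calculation in claim 8, and the speed-to-gap lookup in claims 10–11) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the BT.601 grayscale weights, the 50% overlap percentage, the difference threshold, and the speed-bracket/gap values in the table are all assumed for demonstration.

```python
import math
import numpy as np

def to_gray(frame):
    """Grayscale conversion using ITU-R BT.601 luma weights (assumed)."""
    return frame @ np.array([0.299, 0.587, 0.114])

def contour_mask(frame_a, frame_b, threshold=25):
    """Claim 9: subtract grayscale frames to expose the moving object.
    The threshold value is an assumption."""
    return np.abs(to_gray(frame_a) - to_gray(frame_b)) > threshold

def is_candidate(overlap_area, contour_area, percentage=0.5):
    """Claim 7: the second frame is a candidate when the overlapped
    area is less than a specific percentage (assumed 50%) of the
    object's contour area."""
    return overlap_area < percentage * contour_area

def moving_speed(pos_a, pos_b, time_interval):
    """Claim 8: object speed from its positions in two frames and the
    predefined capture interval (seconds)."""
    return math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]) / time_interval

# Claims 10-11: a lookup table where a faster moving speed maps to a
# larger frame gap number (bracket limits and gaps are assumed values).
SPEED_TO_GAP = [(50.0, 1), (150.0, 2), (300.0, 4), (float("inf"), 8)]

def frame_gap(speed):
    return next(gap for limit, gap in SPEED_TO_GAP if speed <= limit)

def select_candidates(frames, speed):
    """Claim 10: pick frames at intervals of the frame gap number."""
    return frames[::frame_gap(speed)]

# Example: object moves 30 px right and 40 px down over a 0.5 s interval.
speed = moving_speed((0, 0), (30, 40), 0.5)       # 100.0 px/s -> gap 2
print(select_candidates(list(range(10)), speed))  # [0, 2, 4, 6, 8]
```

The sketch mirrors how the dependent claims chain together: the contour mask yields the object's area, the overlap test flags frames where the object has moved substantially, and when the frames no longer overlap at all, the speed-to-gap table spaces candidates further apart for faster objects.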
US14/220,149 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames Abandoned US20150271381A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/220,149 US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames
TW103137446A TWI531911B (en) 2014-10-29 Methods for determining frames within multiple frames
CN201410614516.5A CN104933677B (en) 2014-03-20 2014-11-04 Method to determine multiple candidate frames in multiple frames
US15/159,825 US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/220,149 US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/159,825 Division US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Publications (1)

Publication Number Publication Date
US20150271381A1 true US20150271381A1 (en) 2015-09-24

Family

ID=54120832

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/220,149 Abandoned US20150271381A1 (en) 2014-03-20 2014-03-20 Methods and systems for determining frames and photo composition within multiple frames
US15/159,825 Expired - Fee Related US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/159,825 Expired - Fee Related US9898828B2 (en) 2014-03-20 2016-05-20 Methods and systems for determining frames and photo composition within multiple frames

Country Status (3)

Country Link
US (2) US20150271381A1 (en)
CN (1) CN104933677B (en)
TW (1) TWI531911B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9036943B1 (en) * 2013-03-14 2015-05-19 Amazon Technologies, Inc. Cloud-based image improvement
TWI647674B (en) * 2018-02-09 2019-01-11 光陽工業股份有限公司 Navigation method and system for presenting different navigation pictures

Citations (3)

Publication number Priority date Publication date Assignee Title
US20110043639A1 (en) * 2009-08-20 2011-02-24 Sanyo Electric Co., Ltd. Image Sensing Apparatus And Image Processing Apparatus
US20110205397A1 (en) * 2010-02-24 2011-08-25 John Christopher Hahn Portable imaging device having display with improved visibility under adverse conditions
US20120242853A1 (en) * 2011-03-25 2012-09-27 David Wayne Jasinski Digital camera for capturing an image sequence

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US6659344B2 (en) * 2000-12-06 2003-12-09 Ncr Corporation Automated monitoring of activity of shoppers in a market
US7623733B2 (en) * 2002-08-09 2009-11-24 Sharp Kabushiki Kaisha Image combination device, image combination method, image combination program, and recording medium for combining images having at least partially same background
CA2549883A1 (en) * 2003-12-15 2005-06-23 Technion Research & Development Foundation Ltd. Therapeutic drug-eluting endoluminal covering
US7860343B2 (en) * 2006-04-10 2010-12-28 Nokia Corporation Constructing image panorama using frame selection
JP4513869B2 (en) * 2008-02-13 2010-07-28 カシオ計算機株式会社 Imaging apparatus, strobe image generation method, and program
US8103134B2 (en) * 2008-02-20 2012-01-24 Samsung Electronics Co., Ltd. Method and a handheld device for capturing motion
TW201001338A (en) * 2008-06-16 2010-01-01 Huper Lab Co Ltd Method of detecting moving objects
JP4735742B2 (en) * 2008-12-18 2011-07-27 カシオ計算機株式会社 Imaging apparatus, strobe image generation method, and program
JP5321070B2 (en) * 2009-01-08 2013-10-23 カシオ計算機株式会社 Imaging apparatus, imaging method, and program
JP5131257B2 (en) * 2009-08-27 2013-01-30 カシオ計算機株式会社 Display control apparatus and display control program
JP5484097B2 (en) * 2010-01-27 2014-05-07 株式会社ワコム Position detection apparatus and method
US20120002112A1 (en) * 2010-07-02 2012-01-05 Sony Corporation Tail the motion method of generating simulated strobe motion videos and pictures using image cloning
US8736716B2 (en) * 2011-04-06 2014-05-27 Apple Inc. Digital camera having variable duration burst mode

Non-Patent Citations (1)

Title
Jacques, J. C. S., et al., "Background Subtraction and Shadow Detection in Grayscale Video Sequences," Proceedings of the 18th Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2005), Oct. 2005. *

Also Published As

Publication number Publication date
US9898828B2 (en) 2018-02-20
CN104933677A (en) 2015-09-23
TWI531911B (en) 2016-05-01
TW201537359A (en) 2015-10-01
CN104933677B (en) 2018-11-09
US20160267680A1 (en) 2016-09-15

Similar Documents

Publication Publication Date Title
KR101990073B1 (en) Method and apparatus for shooting and storing multi-focused image in electronic device
US20190208125A1 (en) Depth Map Calculation in a Stereo Camera System
US10586308B2 (en) Digital media environment for removal of obstructions in a digital image scene
CN107787463B (en) Optimized focus stack capture
US9030577B2 (en) Image processing methods and systems for handheld devices
US20120098981A1 (en) Image capture methods and systems
US20140204263A1 (en) Image capture methods and systems
US9445073B2 (en) Image processing methods and systems in accordance with depth information
US9898828B2 (en) Methods and systems for determining frames and photo composition within multiple frames
CN107003730A (en) Electronic device, photographing method, and camera system
KR102174808B1 (en) Control of shake blur and motion blur for pixel multiplexing cameras
US9071735B2 (en) Name management and group recovery methods and systems for burst shot
CN109889736B (en) Image acquisition method, device and equipment based on double cameras and multiple cameras
US20160373648A1 (en) Methods and systems for capturing frames based on device information
US9420194B2 (en) Methods and systems for generating long shutter frames
US9307160B2 (en) Methods and systems for generating HDR images
US20150355780A1 (en) Methods and systems for intuitively refocusing images
CN113364985B (en) Live broadcast lens tracking method, device and medium
US9202133B2 (en) Methods and systems for scene recognition
CN115278053A (en) Image shooting method and electronic equipment
US8233787B2 (en) Focus method and photographic device using the method

Legal Events

Date Code Title Description
AS Assignment

Owner name: HTC CORPORATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, BING-SHENG;LIN, YI-CHI;LU, TAI-LING;SIGNING DATES FROM 20070910 TO 20140513;REEL/FRAME:035485/0435

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION