WO2016011859A1 - Method for shooting light-painting video, mobile terminal and computer storage medium - Google Patents
Method for shooting light-painting video, mobile terminal and computer storage medium
- Publication number
- WO2016011859A1 (application PCT/CN2015/081871)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- composite image
- video
- mobile terminal
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/70—Media network packetisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
Description
- The present invention relates to the field of camera technology, and in particular to a method for shooting a light-painting video, a mobile terminal, and a computer storage medium.
- The shooting function of current mobile terminals relies on the processing algorithms provided by the camera hardware and chip suppliers, and offers only a few fixed shooting settings such as focus and white balance.
- A light-painting photography mode has recently been introduced, with which users can create artistic images.
- Light-painting photography is a shooting mode that uses a long exposure to form a special image from the movement of a light source during the exposure. Because of the long exposure, corresponding photosensitive hardware is required, and photosensitive hardware capable of supporting long exposures is relatively expensive. Therefore, only professional camera devices such as SLR cameras have offered a light-painting function.
- The main purpose of the present invention is to enable the shooting of light-painting video, satisfy the diverse needs of users, and improve the user experience.
- The present invention provides a method for shooting a light-painting video, the method comprising the following steps:
- after shooting starts, continuously collecting light-painting images through the camera; reading the light-painting images at intervals, and generating a composite image from the current light-painting image and the previously collected light-painting images; capturing the composite image, performing video encoding on the captured composite image, and generating a light-painting video from the encoded composite images.
- Optionally, the step of generating a composite image from the current light-painting image and the previously collected light-painting images comprises:
- selecting, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and performing an addition operation on the pixels at the same position to generate a composite image.
- Optionally, selecting the pixels that satisfy the preset condition comprises: determining whether the brightness parameter of a pixel is greater than a preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- Alternatively, selecting the pixels that satisfy the preset condition comprises: determining whether a pixel is an abrupt pixel;
- if the pixel is an abrupt pixel, calculating the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determining whether the average value is greater than a preset threshold; if so, determining that the abrupt pixel satisfies the preset condition and selecting the abrupt pixel;
- if the pixel is not an abrupt pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- Optionally, before the step of performing video encoding on the captured composite image, the method for shooting a light-painting video further includes: performing special effect processing on the captured composite image.
- Correspondingly, the present invention further provides a mobile terminal, where the mobile terminal includes:
- a collecting module configured to continuously collect light-painting images through the camera after shooting starts;
- an image generating module configured to read the light-painting images at intervals and generate a composite image from the current light-painting image and the previously collected light-painting images;
- a video generating module configured to capture the composite image, perform video encoding on the captured composite image, and generate a light-painting video from the encoded composite images.
- Optionally, the image generating module is configured to:
- select, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and perform an addition operation on the pixels at the same position to generate a composite image.
- Optionally, the image generating module is further configured to: determine whether the brightness parameter of a pixel is greater than a preset threshold, and if so, determine that the pixel satisfies the preset condition and select the pixel.
- Alternatively, the image generating module is further configured to: determine whether a pixel is an abrupt pixel;
- if the pixel is an abrupt pixel, calculate the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determine whether the average value is greater than a preset threshold; if so, determine that the abrupt pixel satisfies the preset condition and select the abrupt pixel;
- if the pixel is not an abrupt pixel, further determine whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determine that the pixel satisfies the preset condition and select the pixel.
- the mobile terminal further includes:
- a processing module configured to perform special effect processing on the captured composite image.
- The present invention also provides a computer storage medium having stored therein computer-executable instructions configured to perform the above processing.
- After shooting starts, the invention continuously collects light-painting images through the camera, reads the light-painting images at intervals, and generates a composite image from the current light-painting image and the previously collected light-painting images; it then captures the composite image,
- performs video encoding on the captured composite image, and generates a light-painting video from the encoded composite images, thereby realizing the shooting of a light-painting video.
- The user can therefore use the camera to record a video that shows the movement of a light source, or apply the method to similar scenarios, which satisfies the diverse needs of users and improves the user experience.
- In addition, because the composite image is encoded as it is captured, there is no need to store every generated composite image, so the video file finally obtained is not large and does not occupy too much storage space.
- FIG. 1 is a schematic flowchart of a first embodiment of a method for shooting a light-painting video according to the present invention;
- FIG. 2 is a schematic flowchart of a second embodiment of a method for shooting a light-painting video according to the present invention;
- FIG. 3 is a schematic diagram of the functional modules of a first embodiment of a mobile terminal according to the present invention;
- FIG. 4 is a schematic diagram of the functional modules of a second embodiment of a mobile terminal according to the present invention;
- FIG. 5 is a schematic diagram of the electrical structure of an apparatus for shooting a light-painting video according to an embodiment of the present invention.
- Embodiments of the present invention provide a method for shooting a light-painting video.
- FIG. 1 is a schematic flowchart of the first embodiment of the method for shooting a light-painting video according to the present invention.
- The method for shooting a light-painting video includes:
- Step S10: after shooting starts, continuously collecting light-painting images through the camera.
- The invention adds a light-painting photography mode to the shooting function of the mobile terminal, and the user can select either the light-painting photography mode or the ordinary photography mode for shooting. In the light-painting photography mode, parameters such as ISO, image
- quality and scene mode are adjusted and constrained in advance according to the requirements of light-painting scenes, and these parameters are output to the relevant hardware devices so that those devices can sample and process the collected image data accordingly.
- When the user selects the light-painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light-painting shooting and the camera continuously collects light-painting images. The speed at which the camera continuously
- collects light-painting images can be set in advance. To ensure the continuity of the light painting, the camera needs to collect at least ten images per second, and the subsequent image processing often cannot keep up with this acquisition speed; it is therefore preferable to buffer the light-painting images in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache can be omitted).
- The mobile terminal can adjust the acquisition speed in real time according to the remaining space of the cache module, thereby making maximum use of the processing capability of the mobile terminal while preventing data overflow, and hence data loss, caused by an acquisition speed that is too fast (sketched below).
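- The following minimal sketch illustrates this buffering idea only; it is not taken from the patent, and the camera interface, cache size and frame rate are assumptions:

```python
import queue
import time

class LightPaintingCapture:
    """Sketch: buffer camera frames and adapt the capture pace to the
    remaining cache space, as described above (interfaces are assumed)."""

    def __init__(self, camera, cache_size=32, base_fps=10):
        self.camera = camera              # assumed to expose read() -> frame
        self.cache = queue.Queue(maxsize=cache_size)
        self.base_fps = base_fps          # "at least ten images per second"

    def capture_loop(self, stop_event):
        while not stop_event.is_set():
            frame = self.camera.read()
            try:
                self.cache.put_nowait(frame)
            except queue.Full:
                pass                      # drop rather than overflow the cache
            # Slow down as the cache fills so the compositor can catch up.
            fill_ratio = self.cache.qsize() / self.cache.maxsize
            time.sleep((1.0 / self.base_fps) * (1.0 + fill_ratio))
```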
- Step S20: reading the light-painting images at intervals, and generating a composite image from the current light-painting image and the previously collected light-painting images.
- The image synthesis module in the mobile terminal, which processes the light-painting images to generate the composite image, either receives the collected light-painting images directly and reads them at intervals, or reads the light-painting images from the cache module in real time to perform image synthesis and then resets the cache
- module, clearing its data to free space for subsequent data.
- The speed or interval at which the image synthesis module reads the light-painting images may be preset, or may depend on the computing speed of the mobile terminal.
- The image synthesis module superimposes the pixels of the current light-painting image onto those of the previously collected light-painting images to generate a composite image. Since the camera collects light-painting images continuously, composite images are also generated continuously in real time.
- When the first light-painting image is collected, it is used as the image to be synthesized; after the second light-painting image is collected, it is combined with the image to be synthesized into the current composite image, and, in turn,
- each subsequently collected light-painting image is combined with the previously generated composite image, finally producing a composite image formed from all the light-painting images captured.
- The reading at intervals may be performed periodically, and the duration of the period may be set according to actual conditions.
- The image synthesis module selects, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and then performs an addition operation on those pixels.
- When the image synthesis module determines whether a pixel satisfies the preset condition, it may directly determine whether the brightness parameter of the pixel is greater than a threshold and, if so, determine that the pixel satisfies the preset condition.
- That is, the image synthesis module selects, from the current light-painting image and the previously collected light-painting images, the pixels whose brightness parameter is greater than a threshold (i.e. the absolute brightness of a point in the image is greater than a threshold), and then performs
- the addition operation only on the pixels that satisfy the preset condition. This filters out the pixels with lower brightness to a certain degree, thereby avoiding an accumulation of ambient light that would pollute the picture of the final composite image.
- The threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
- For example, suppose an image includes pixel unit 1, pixel unit 2, ..., pixel unit n, n pixel units in total, where the brightness parameters of pixel units 101 to 200 in the current light-painting image are greater than the threshold and the brightness parameters of pixel units 1 to 100
- in the previously collected light-painting images are greater than the threshold; the addition operation is then performed on the current and past brightness parameters of pixel units 1 to 200.
- For instance, if the brightness parameter of pixel unit 1 in the current light-painting image is 10
- and its brightness parameter in the past light-painting image is 100, the brightness parameter of pixel unit 1 after the addition is 110 (sketched below).
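- To make the pixel selection and addition concrete, here is a minimal sketch assuming single-channel 8-bit brightness values and a fixed threshold; the function name and the clipping to 255 are illustrative choices, not requirements of the patent:

```python
import numpy as np

def composite_step(past, current, threshold):
    """Add the current light-painting frame onto the running composite.

    past, current: uint8 arrays of the same shape holding brightness values.
    Only pixels whose brightness exceeds the threshold in either frame
    contribute, which filters out dim ambient light.
    """
    past16 = past.astype(np.uint16)
    cur16 = current.astype(np.uint16)
    selected = (past16 > threshold) | (cur16 > threshold)
    summed = np.clip(past16 + cur16, 0, 255)
    # Pixels that never exceed the threshold keep their previous value.
    return np.where(selected, summed, past16).astype(np.uint8)

# e.g. a pixel that was 100 in the past image and 10 in the current frame
# becomes 110 after the addition, as in the example above.
```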
- The image synthesis module also performs noise-reduction processing on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image, so as to suppress overexposure.
- Alternatively, the pixels that satisfy the preset condition may be selected by the following steps: determining whether a pixel is an abrupt pixel;
- if the pixel is an abrupt pixel, calculating the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determining whether the average value is greater than a preset threshold; if so, determining that the abrupt pixel satisfies the preset condition and selecting the abrupt pixel;
- if the pixel is not an abrupt pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- Specifically, the image synthesis module compares the brightness parameter of a pixel with the average value of the brightness parameters of several (preferably eight) surrounding pixels; if it is higher or lower than the average value by a preset multiple, the pixel is determined to be an abrupt pixel.
- The preset multiple is preferably 2 times (higher than the average value) or 0.5 times (lower than the average value).
- If the pixel is an abrupt pixel, the average of the brightness parameters of the surrounding pixels is taken.
- The surrounding pixels are preferably the pixels around the abrupt pixel, and the preset number is preferably eight. After the average value of the brightness parameters of the preset number of surrounding pixels is calculated, it is determined whether the average value is greater than the preset threshold. If the average value is greater than the preset threshold, the abrupt pixel is determined to satisfy the preset condition and is selected, and the addition operation is subsequently performed to generate the composite image; this removes noise from the image and avoids degrading the final composite image. If the average value is less than or equal to the preset threshold, the abrupt pixel is determined not to satisfy the preset condition and is not selected.
- If the pixel is not an abrupt pixel, its brightness parameter is compared directly with the preset threshold. If it is greater than the preset threshold, the pixel is determined to satisfy the preset condition and is selected, and the addition operation is subsequently performed to generate the composite image. If it is less than or equal to the preset threshold, the pixel is determined not to satisfy the preset condition and is not selected (a sketch of this test follows).
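- A rough sketch of this abrupt-pixel test, under the stated assumptions (eight neighbours, factors of 2 and 0.5); the helper is illustrative and not the patent's own implementation:

```python
import numpy as np

def satisfies_condition(img, y, x, threshold, high=2.0, low=0.5):
    """Decide whether pixel (y, x) of a brightness image should be selected.

    An 'abrupt' pixel (much brighter or darker than its 8 neighbours) is
    judged by the neighbourhood average; an ordinary pixel is judged by its
    own brightness, as described above.
    """
    h, w = img.shape
    y0, y1 = max(0, y - 1), min(h, y + 2)
    x0, x1 = max(0, x - 1), min(w, x + 2)
    block = img[y0:y1, x0:x1].astype(np.float64)
    neighbour_avg = (block.sum() - float(img[y, x])) / (block.size - 1)

    value = float(img[y, x])
    is_abrupt = value > high * neighbour_avg or value < low * neighbour_avg
    if is_abrupt:
        return neighbour_avg > threshold   # suppress isolated noise spikes
    return value > threshold
```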
- Although composite images are generated continuously, the process is limited by the processing speed of the image synthesis module, so there is in fact a certain time interval between adjacent generated images.
- The speed at which images are generated in turn affects the speed at which image data is collected:
- the faster images are generated, the faster the image data in the cache module is read and the faster its space is freed, so the mobile terminal can also collect light-painting image data faster.
- The mobile terminal displays the composite image on the screen in real time so that the user can preview the current light-painting effect in real time.
- The composite image displayed by the mobile terminal is a compressed, small-sized thumbnail, while the full-size image is the one stored; that is, display and storage run as two threads (sketched below).
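- Purely for illustration, a minimal sketch of running display and storage as two threads; the show_thumbnail and store_full_size helpers are hypothetical stand-ins for the terminal's own preview and storage paths:

```python
import threading

def publish_composite(composite, show_thumbnail, store_full_size):
    """Hand one composite frame to the preview and storage paths in parallel.

    show_thumbnail is expected to downscale and display the frame;
    store_full_size is expected to persist the full-resolution frame.
    """
    t_display = threading.Thread(target=show_thumbnail, args=(composite,))
    t_store = threading.Thread(target=store_full_size, args=(composite,))
    t_display.start()
    t_store.start()
    t_display.join()
    t_store.join()
```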
- When the user presses the capture button again or presses the end button, the shooting ends.
- The mobile terminal may store every composite image locally, or may store only the last composite image generated when shooting ended.
- Step S30: capturing the composite image;
- Step S31: performing video encoding on the captured composite image, and generating a light-painting video from the composite images after the video encoding.
- The composite images may be captured continuously or at intervals, and the captured composite images are video-encoded to generate the light-painting video.
- Capturing composite images continuously means that each time a composite image is generated, it is captured for encoding; that is, all the generated composite images are used as material for the synthesized video.
- The generation of composite images and the capture of composite images for encoding are performed simultaneously by two threads; since each composite image is encoded as it is captured, there is no need to store the generated composite images.
- Capturing at intervals means selectively capturing a portion of the composite images as material for the synthesized video.
- The interval mode can be a manual interval mode or an automatic interval mode.
- The manual interval mode provides an operation interface that the user taps to trigger the capture of image data, for example tapping the screen to capture the currently generated composite image (when a preview is shown, this is the current preview image);
- the automatic interval mode captures a composite image at a preset time interval, that is, one composite image is captured every preset period.
- The interval at which composite images are captured is preferably longer than the interval at which the camera collects images (i.e. the exposure time), which avoids capturing the same composite image two or more times and reduces the size of the final synthesized video file.
- For example, a composite image, i.e. the currently generated composite image (the photo at the current moment), can be captured every 1 to 2 minutes. The captured composite images are then video-encoded into a common video coding format such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file; the method of encoding the composite images is the same as in the prior art and is not repeated here (a sketch of such a grabbing and encoding loop follows).
- Video file formats include, but are not limited to, mp4, 3gp, avi, rmvb, and the like.
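- As a rough illustration of the automatic interval mode feeding an encoder, the sketch below uses OpenCV's VideoWriter purely as a stand-in for the terminal's hardware encoder; the grab interval, frame rate, output path and format are example values, not requirements of the patent:

```python
import time
import cv2

def record_light_painting(get_latest_composite, stop_event,
                          out_path="light_painting.mp4",
                          grab_interval_s=2.0, fps=25):
    """Grab the latest composite at a fixed interval and encode it as video.

    get_latest_composite() is assumed to return the most recent composite
    frame as a BGR uint8 array (or None if no frame is available yet).
    """
    writer = None
    while not stop_event.is_set():
        frame = get_latest_composite()
        if frame is not None:
            if writer is None:
                h, w = frame.shape[:2]
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
            writer.write(frame)          # encode this composite as one frame
        time.sleep(grab_interval_s)      # automatic interval mode
    if writer is not None:
        writer.release()
```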
- In this embodiment, after shooting starts, the camera continuously collects light-painting images, the light-painting images are read at intervals, and a composite image is generated from the current light-painting image and the previously collected light-painting images; the composite image is captured, the captured
- composite image is video-encoded, and a light-painting video is generated from the encoded composite images, thereby realizing the shooting of a light-painting video.
- The user can use the camera to record a video that shows the movement of a light source, or apply the method to similar scenarios, which satisfies the diverse needs of users and improves the user experience.
- Because each composite image is encoded as it is captured, there is no need to store the generated composite images, so the video file finally obtained is not large and does not occupy too much storage space.
- FIG. 2 is a schematic flowchart of the second embodiment of the method for shooting a light-painting video according to the present invention.
- Based on the first embodiment, the method further includes:
- Step S40: performing special effect processing on the captured composite image.
- Before the captured composite image is encoded, special effect processing is also performed on it; the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing, and the like.
- Basic effect processing includes noise reduction, brightness and chromaticity adjustment and the like;
- filter effect processing includes sketch, negative and black-and-white processing and the like;
- special scene effect processing includes processing for common scenes such as weather and starry sky (a brief sketch of two such filters follows).
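- For illustration, a minimal sketch of two of the filter effects mentioned above (negative and black-and-white) applied to an RGB composite frame; this is one possible realisation, not the patent's own implementation:

```python
import numpy as np

def negative_filter(rgb):
    """Invert an 8-bit RGB composite frame."""
    return 255 - rgb

def black_and_white_filter(rgb):
    """Convert an 8-bit RGB composite frame to grayscale (BT.601 luma)."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    gray = luma.astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)  # keep 3 channels for the encoder
```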
- The user can also record sound while the composite images are captured and encoded; the method then further includes: turning on the audio device and receiving audio data, and encoding the audio data.
- There are two main sources of audio data: microphone capture or a custom audio file.
- If the audio source is a custom audio file,
- the audio file is first decoded to obtain the original audio data.
- Special effect processing may also be performed on the received audio data; this special effect processing includes effect recording, voice changing, pitch shifting and/or tempo shifting, and the like.
- The video file is then generated as follows: upon the user's end-of-shooting command, the video file is generated from the encoded image data and the encoded audio data according to the video file format set by the user (illustrated by the sketch below).
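- As a schematic illustration of this final step, the call below uses the ffmpeg command-line tool as a stand-in for the terminal's own muxer to combine an already-encoded video stream and audio stream into one file; the paths and container are examples, not part of the patent:

```python
import subprocess

def mux_audio_video(video_path, audio_path, out_path="light_painting.mp4"):
    """Combine an encoded video stream and an encoded audio stream into one file.

    Both streams are copied without re-encoding; ffmpeg stands in for the
    terminal's own container writer.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-i", audio_path,
         "-c:v", "copy", "-c:a", "copy", out_path],
        check=True,
    )
```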
- the invention also provides a mobile terminal.
- FIG. 3 is a schematic diagram of functional modules of a first embodiment of a mobile terminal according to the present invention.
- the mobile terminal includes:
- a collecting module 10 configured to continuously collect light-painting images through the camera after shooting starts;
- an image generating module 20 configured to read the light-painting images at intervals and generate a composite image from the current light-painting image and the previously collected light-painting images;
- a video generating module 30 configured to capture the composite image, perform video encoding on the captured composite image, and generate a light-painting video from the encoded composite images.
- The invention adds a light-painting photography mode to the shooting function of the mobile terminal, and the user can select either the light-painting photography mode or the ordinary photography mode for shooting. In the light-painting photography mode, parameters such as ISO, image
- quality and scene mode are adjusted and constrained in advance according to the requirements of light-painting scenes, and these parameters are output to the relevant hardware devices so that those devices can sample and process the collected image data accordingly.
- When the user selects the light-painting photography mode and presses the shooting button or triggers the virtual shooting button, the mobile terminal starts light-painting shooting, and the collecting module 10 continuously collects light-painting images through the camera; the speed at which the camera continuously collects light-painting images can be set in advance.
- To ensure the continuity of the light painting, the camera needs to collect at least ten images per second, and the subsequent image processing often cannot keep up with this acquisition speed; it is therefore preferable to buffer the light-painting images in a cache module (of course, if the processing speed of the mobile terminal is fast enough, the cache can be omitted).
- The mobile terminal can adjust the acquisition speed in real time according to the remaining space of the cache module,
- which both makes maximum use of the processing capability of the mobile terminal and prevents data overflow, and hence data loss, caused by an acquisition speed that is too fast.
- The image generating module 20 generates the composite image by means of an image synthesis module in the mobile terminal that processes the light-painting images: it either receives the collected light-painting images directly and reads them at intervals, or reads the light-painting images from the cache module in real time for image synthesis and then resets the cache module, clearing its data to free space for subsequent data.
- The speed or interval at which the image synthesis module reads the light-painting images may be preset, or may depend on the computing speed of the mobile terminal.
- The image synthesis module superimposes the pixels of the current light-painting image onto those of the previously collected light-painting images to generate a composite image. Since the camera collects light-painting images continuously, composite images are also generated continuously in real time.
- When the first light-painting image is collected, it is used as the image to be synthesized; after the second light-painting image is collected, it is combined with the image to be synthesized into the current composite image, and, in turn,
- each subsequently collected light-painting image is combined with the previously generated composite image, finally producing a composite image formed from all the light-painting images captured.
- The image synthesis module selects, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and then performs an addition operation on those pixels.
- When determining whether a pixel satisfies the preset condition, the image synthesis module may directly determine whether the brightness parameter of the pixel is greater than a threshold and, if so, determine that the pixel satisfies the preset condition.
- That is, the image synthesis module selects, from the current light-painting image and the previously collected light-painting images, the pixels whose brightness parameter is greater than a threshold (i.e. the absolute brightness of a point in the image is greater than a threshold), and then performs
- the addition operation only on the pixels that satisfy the preset condition. This filters out the pixels with lower brightness to a certain degree, thereby avoiding an accumulation of ambient light that would pollute the picture of the final composite image.
- The threshold may be determined from the average brightness of the image; the brightness parameter is an optical parameter such as an RGB value or a YUV value.
- The image synthesis module also performs noise-reduction processing on the composite image, and controls the blending ratio of the newly synthesized image according to the exposure level of the existing image, so as to suppress overexposure.
- Alternatively, the pixels that satisfy the preset condition may be selected by the following steps: determining whether a pixel is an abrupt pixel;
- if the pixel is an abrupt pixel, calculating the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determining whether the average value is greater than a preset threshold; if so, determining that the abrupt pixel satisfies the preset condition and selecting the abrupt pixel;
- if the pixel is not an abrupt pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- Specifically, the image synthesis module compares the brightness parameter of a pixel with the average value of the brightness parameters of several (preferably eight) surrounding pixels; if it is higher or lower than the average value by a preset multiple, the pixel is determined to be an abrupt pixel.
- The preset multiple is preferably 2 times (higher than the average value) or 0.5 times (lower than the average value).
- If the pixel is an abrupt pixel, the average of the brightness parameters of the surrounding pixels is taken.
- The surrounding pixels are preferably the pixels around the abrupt pixel, and the preset number is preferably eight. After the average value of the brightness parameters of the preset number of surrounding pixels is calculated, it is determined whether the average value is greater than the preset threshold. If the average value is greater than the preset threshold, the abrupt pixel is determined to satisfy the preset condition and is selected, and the addition operation is subsequently performed to generate the composite image; this removes noise from the image and avoids degrading the final composite image. If the average value is less than or equal to the preset threshold, the abrupt pixel is determined not to satisfy the preset condition and is not selected.
- If the pixel is not an abrupt pixel, its brightness parameter is compared directly with the preset threshold. If it is greater than the preset threshold, the pixel is determined to satisfy the preset condition and is selected, and the addition operation is subsequently performed to generate the composite image. If it is less than or equal to the preset threshold, the pixel is determined not to satisfy the preset condition and is not selected.
- Although composite images are generated continuously, the process is limited by the processing speed of the image synthesis module, so there is in fact a certain time interval between adjacent generated images.
- The speed at which images are generated in turn affects the speed at which image data is collected:
- the faster images are generated, the faster the image data in the cache module is read and the faster its space is freed, so the mobile terminal can also collect light-painting image data faster.
- The mobile terminal displays the composite image on the screen in real time so that the user can preview the current light-painting effect in real time.
- The composite image displayed by the mobile terminal is a compressed, small-sized thumbnail, while the full-size image is the one stored; that is, display and storage run as two threads.
- When the user presses the capture button again or presses the end button, the shooting ends.
- The mobile terminal may store every composite image locally, or may store only the last composite image generated when shooting ended.
- The video generating module 30 may capture the composite images continuously or at intervals, and perform video encoding on the captured composite images to generate the light-painting video. Capturing composite images continuously means that each time a composite image is generated it is captured for encoding; that is, all the generated composite images are used as material for the synthesized video. The generation of composite images and the capture of composite images for encoding are performed simultaneously by two threads; since each composite image is encoded as it is captured, there is no need to store the generated composite images.
- Capturing at intervals means selectively capturing a portion of the composite images as material for the synthesized video.
- The interval mode can be a manual interval mode or an automatic interval mode.
- The manual interval mode provides an operation interface that the user taps to trigger the capture of image data, for example tapping the screen to capture the currently generated composite image (when a preview is shown, this is the current preview image);
- the automatic interval mode captures a composite image at a preset time interval, that is, one composite image is captured every preset period.
- The interval at which composite images are captured is preferably longer than the interval at which the camera collects images (i.e. the exposure time), which avoids capturing the same composite image two or more times and reduces the size of the final synthesized video file.
- For example, a composite image, i.e. the currently generated composite image (the photo at the current moment), can be captured every 1 to 2 minutes. The captured composite images are then video-encoded into a common video coding format such as MPEG-4, H.264, H.263 or VP8 for later generation of the video file; the method of encoding the composite images is the same as in the prior art and is not repeated here.
- Video file formats include, but are not limited to, mp4, 3gp, avi, rmvb, and the like.
- In this embodiment, after shooting starts, the camera continuously collects light-painting images, the light-painting images are read at intervals, and a composite image is generated from the current light-painting image and the previously collected light-painting images; the composite image is captured, the captured
- composite image is video-encoded, and a light-painting video is generated from the encoded composite images, thereby realizing the shooting of a light-painting video.
- The user can use the camera to record a video that shows the movement of a light source, or apply the method to similar scenarios, which satisfies the diverse needs of users and improves the user experience.
- Because each composite image is encoded as it is captured, there is no need to store the generated composite images, so the video file finally obtained is not large and does not occupy too much storage space.
- FIG. 4 is a schematic diagram of functional modules of a second embodiment of a mobile terminal according to the present invention.
- the mobile terminal further includes:
- the processing module 40 is configured to perform special effect processing on the captured composite image.
- Before the captured composite image is encoded, the processing module 40 performs special effect processing on it; the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing, and the like.
- Basic effect processing includes noise reduction, brightness and chromaticity adjustment and the like;
- filter effect processing includes sketch, negative and black-and-white processing and the like;
- special scene effect processing includes processing for common scenes such as weather and starry sky.
- The user can also record sound while the composite images are captured and encoded; in that case the mobile terminal further: turns on the audio device and receives audio data, and encodes the audio data.
- There are two main sources of audio data: microphone capture or a custom audio file.
- If the audio source is a custom audio file,
- the audio file is first decoded to obtain the original audio data.
- Special effect processing may also be performed on the received audio data; this special effect processing includes effect recording, voice changing, pitch shifting and/or tempo shifting, and the like.
- The video file is generated as follows: upon the user's end-of-shooting instruction, the video file is generated from the encoded image data and the encoded audio data according to the video file format set by the user.
- In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners.
- The device embodiments described above are merely illustrative.
- For example, the division of the units is only a logical functional division, and other division manners may be used in actual implementation;
- for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical, or of other forms.
- The units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- In addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated
- unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
- Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed, it performs the steps of the above method embodiments.
- The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
- An embodiment of the present invention further provides a computer storage medium in which computer-executable instructions are stored, the computer-executable instructions being configured to perform at least one of the above methods for shooting a light-painting video, for example the method shown in FIG. 1 and/or the method shown in FIG. 2.
- the computer storage medium may be various types of storage media such as a ROM/RAM, a magnetic disk, an optical disk, a DVD, or a USB flash drive.
- the computer storage medium may be a non-transitory storage medium.
- In an embodiment of the present invention, the apparatus for shooting a light-painting video may correspond to various structures capable of performing the above functions, for example, various types of processors having information-processing capability.
- The processor may include an application processor (AP), a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another information-processing structure or chip that can implement the above functions by executing specified code.
- Fig. 5 is a block diagram showing a main electrical configuration of a camera according to an embodiment of the present invention.
- the photographic lens 101 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
- The photographing lens 101 can be moved in the optical-axis direction by the lens driving unit 111, which controls the focus position of the photographing lens 101 based on a control signal from the lens drive control unit 112 and, in the case of a zoom lens, also controls the focal length.
- the lens drive control circuit 112 performs drive control of the lens drive unit 111 in accordance with a control command from the microcomputer 107.
- An imaging element 102 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 101 on the optical axis of the photographing lens 101.
- the imaging element 102 functions as an imaging unit that captures a subject image and acquires captured image data.
- Photodiodes constituting each pixel are two-dimensionally arranged in a matrix on the imaging element 102. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode.
- the front surface of each pixel is provided with a Bayer array of RGB color filters.
- The imaging element 102 is connected to an imaging circuit 103, which performs charge accumulation control and image signal readout control in the imaging element 102, reduces the reset noise of the read image signal (analog image signal), performs waveform shaping, and further adjusts the gain or the like to obtain an appropriate signal level.
- the imaging circuit 103 is connected to the A/D conversion unit 104, which performs analog-to-digital conversion on the analog image signal, and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
- the bus 199 is a transmission path for transmitting various data read or generated inside the camera.
- The A/D conversion unit 104 is connected to the bus 199, to which an image processor 105 and a JPEG processor 106 are also connected.
- The image processor 105 performs various kinds of image processing on the image data output from the imaging element 102, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color-difference signal processing, noise removal, synchronization (demosaicing) and edge processing.
- the JPEG processor 106 compresses the image data read out from the SDRAM 108 in accordance with the JPEG compression method. Further, the JPEG processor 106 performs decompression of JPEG image data for image reproduction display.
- For playback, the file recorded on the recording medium 115 is read, decompression processing is performed in the JPEG processor 106, and the decompressed image data is temporarily stored in the SDRAM 108 and displayed on the LCD 116.
- In this embodiment, the JPEG method is adopted as the image compression/decompression method;
- however, the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF and H.264 may of course be adopted.
- The operation unit 113 includes, but is not limited to, physical or virtual buttons; the physical or virtual buttons may be operation members such as a power button, a camera button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button and an enlarge button.
- The operation unit 113 detects the operation states of these operation members
- and outputs the detection results to the microcomputer 107.
- In addition, a touch panel is provided on the front surface of the LCD 116 serving as the display portion; it detects the position touched by the user and outputs that touch position to the microcomputer 107.
- The microcomputer 107 executes various processing sequences corresponding to the user's operation based on the detection results for the operation members from the operation unit 113; likewise, it may execute various processing sequences corresponding to the user's operation based on the detection result from the touch panel on the front of the LCD 116.
- the flash memory 114 stores programs for executing various processing sequences of the microcomputer 107.
- the microcomputer 107 performs overall control of the camera in accordance with the program. Further, the flash memory 114 stores various adjustment values of the camera, and the microcomputer 107 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
- the SDRAM 108 is an electrically rewritable volatile memory for temporarily storing image data or the like.
- the SDRAM 108 temporarily stores image data output from the A/D conversion unit 104 and image data processed in the image processor 105, the JPEG processor 106, and the like.
- the microcomputer 107 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
- the microcomputer 107 is connected to the operation unit 113 and the flash memory 114.
- By executing the program, the microcomputer 107 can control the apparatus in this embodiment to perform the following operations:
- acquiring an image through the camera every preset time; synthesizing the currently acquired image with the past images to obtain a composite image; capturing the composite image and performing encoding processing on the captured composite image; and generating a video file from the encoded image data.
- Optionally, synthesizing the current image with the past images comprises:
- performing image synthesis based on the brightness information of the current image and the past images.
- Performing image synthesis according to the brightness information of the current image and the past images comprises: determining whether the brightness of a pixel in the current image is greater than the brightness of the pixel at the same position in the past image; if so, replacing the pixel at the same position in the past image with the pixel in the current image, and performing image synthesis accordingly (sketched below).
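- A brief sketch of this brighter-pixel replacement rule, assuming single-channel brightness arrays; the function name is illustrative and not taken from the patent:

```python
import numpy as np

def replace_brighter(past, current):
    """Keep, at each position, whichever pixel is brighter.

    past and current are brightness arrays of the same shape; a pixel of the
    past image is replaced by the current pixel wherever the current image
    is brighter, as described above.
    """
    return np.where(current > past, current, past)
```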
- Optionally, the camera is a front camera.
- Optionally, after the step of acquiring an image through the camera every preset time, the method further comprises: performing image processing on the image.
- Optionally, before the step of performing encoding processing on the captured composite image, the method further comprises: performing special effect processing on the captured composite image, where the special effect processing includes basic effect processing, filter effect processing and/or special scene effect processing.
- the memory interface 109 is connected to the recording medium 115, and performs control for writing image data and a file header attached to the image data to the recording medium 115 and reading from the recording medium 115.
- the recording medium 115 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
- the recording medium 115 is not limited thereto, and may be a hard disk or the like built in the camera body.
- The LCD driver 110 is connected to the LCD 116. Image data processed by the image processor 105 is stored in the SDRAM 108, and when display is required, the image data stored in the SDRAM 108 is read and displayed on the LCD 116;
- alternatively, compressed image data produced by the JPEG processor 106 is stored in the SDRAM 108, and when display is required,
- the JPEG processor 106 reads the compressed image data from the SDRAM 108, decompresses it, and the decompressed image data is displayed on the LCD 116.
- the LCD 116 is disposed on the back surface of the camera body or the like to perform image display.
- the LCD 116 is provided with a touch panel that detects a user's touch operation.
- In this embodiment, a liquid crystal display panel (LCD 116) is used as the display portion;
- however, the present invention is not limited thereto, and various other display panels, such as an organic EL panel, may be employed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
Claims (11)
- 1. A method for shooting a light-painting video, the method comprising the following steps: after shooting starts, continuously collecting light-painting images through a camera; reading the light-painting images at intervals, and generating a composite image according to the current light-painting image and the previously collected light-painting images; capturing the composite image, performing video encoding processing on the captured composite image, and generating a light-painting video according to the video-encoded composite images.
- 2. The method for shooting a light-painting video according to claim 1, wherein the step of generating a composite image according to the current light-painting image and the previously collected light-painting images comprises: selecting, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and performing an addition operation on the pixels at the same position to generate a composite image.
- 3. The method for shooting a light-painting video according to claim 2, wherein selecting the pixels that satisfy the preset condition comprises: determining whether the brightness parameter of a pixel is greater than a preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- 4. The method for shooting a light-painting video according to claim 2, wherein selecting the pixels that satisfy the preset condition comprises: determining whether a pixel is an abrupt pixel; if the pixel is an abrupt pixel, calculating the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determining whether the average value is greater than a preset threshold, and if so, determining that the abrupt pixel satisfies the preset condition and selecting the abrupt pixel; if the pixel is not an abrupt pixel, further determining whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determining that the pixel satisfies the preset condition and selecting the pixel.
- 5. The method for shooting a light-painting video according to any one of claims 1 to 4, wherein before the step of performing video encoding processing on the captured composite image, the method further comprises: performing special effect processing on the captured composite image.
- 6. A mobile terminal, comprising: a collecting module configured to continuously collect light-painting images after shooting starts; an image generating module configured to read the light-painting images at intervals and generate a composite image according to the current light-painting image and the previously collected light-painting images; and a video generating module configured to capture the composite image, perform video encoding processing on the captured composite image, and generate a light-painting video according to the video-encoded composite images.
- 7. The mobile terminal according to claim 6, wherein the image generating module is configured to select, from the current light-painting image and the previously collected light-painting images, the pixels that satisfy a preset condition, and perform an addition operation on the pixels at the same position to generate a composite image.
- 8. The mobile terminal according to claim 7, wherein the image generating module is configured to determine whether the brightness parameter of a pixel is greater than a preset threshold, and if so, determine that the pixel satisfies the preset condition and select the pixel.
- 9. The mobile terminal according to claim 7, wherein the image generating module is configured to determine whether a pixel is an abrupt pixel; if the pixel is an abrupt pixel, calculate the average value of the brightness parameters of a preset number of pixels around the abrupt pixel, and determine whether the average value is greater than a preset threshold, and if so, determine that the abrupt pixel satisfies the preset condition and select the abrupt pixel; if the pixel is not an abrupt pixel, further determine whether the brightness parameter of the pixel is greater than the preset threshold, and if so, determine that the pixel satisfies the preset condition and select the pixel.
- 10. The mobile terminal according to any one of claims 6 to 9, further comprising: a processing module configured to perform special effect processing on the captured composite image.
- 11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform at least one of the methods according to claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/327,627 US10129488B2 (en) | 2014-07-23 | 2015-06-18 | Method for shooting light-painting video, mobile terminal and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410352575.XA CN104104798A (zh) | 2014-07-23 | 2014-07-23 | 拍摄光绘视频的方法和移动终端 |
CN201410352575.X | 2014-07-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016011859A1 (zh) | 2016-01-28 |
Family
ID=51672589
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/081871 WO2016011859A1 (zh) | 2014-07-23 | 2015-06-18 | 拍摄光绘视频的方法、移动终端和计算机存储介质 |
PCT/CN2015/082987 WO2016011877A1 (zh) | 2014-07-23 | 2015-06-30 | 拍摄光绘视频的方法和移动终端、存储介质 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/082987 WO2016011877A1 (zh) | 2014-07-23 | 2015-06-30 | 拍摄光绘视频的方法和移动终端、存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10129488B2 (zh) |
CN (1) | CN104104798A (zh) |
WO (2) | WO2016011859A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104079833A (zh) | 2014-07-02 | 2014-10-01 | 深圳市中兴移动通信有限公司 | 拍摄星轨视频的方法和装置 |
CN104104798A (zh) * | 2014-07-23 | 2014-10-15 | 深圳市中兴移动通信有限公司 | 拍摄光绘视频的方法和移动终端 |
CN105072350B (zh) * | 2015-06-30 | 2019-09-27 | 华为技术有限公司 | 一种拍照方法及装置 |
US20170208354A1 (en) * | 2016-01-15 | 2017-07-20 | Hi Pablo Inc | System and Method for Video Data Manipulation |
CN106331482A (zh) * | 2016-08-23 | 2017-01-11 | 努比亚技术有限公司 | 一种照片处理装置和方法 |
CN106534552B (zh) * | 2016-11-11 | 2019-08-16 | 努比亚技术有限公司 | 移动终端及其拍照方法 |
CN106713777A (zh) * | 2016-11-28 | 2017-05-24 | 努比亚技术有限公司 | 一种实现光绘摄影的方法、装置及拍摄设备 |
CN106686297A (zh) * | 2016-11-28 | 2017-05-17 | 努比亚技术有限公司 | 一种实现光绘摄影的方法、装置及拍摄设备 |
CN106713745A (zh) * | 2016-11-28 | 2017-05-24 | 努比亚技术有限公司 | 一种实现光绘摄影的方法、装置及拍摄设备 |
WO2018119632A1 (zh) * | 2016-12-27 | 2018-07-05 | 深圳市大疆创新科技有限公司 | 图像处理的方法、装置和设备 |
KR102401659B1 (ko) * | 2017-03-23 | 2022-05-25 | 삼성전자 주식회사 | 전자 장치 및 이를 이용한 카메라 촬영 환경 및 장면에 따른 영상 처리 방법 |
CN110913118B (zh) * | 2018-09-17 | 2021-12-17 | 腾讯数码(天津)有限公司 | 视频处理方法、装置及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103595925A (zh) * | 2013-11-15 | 2014-02-19 | 深圳市中兴移动通信有限公司 | 照片合成视频的方法和装置 |
WO2014035642A1 (en) * | 2012-08-28 | 2014-03-06 | Mri Lightpainting Llc | Light painting live view |
CN103634530A (zh) * | 2012-08-27 | 2014-03-12 | 三星电子株式会社 | 拍摄装置及其控制方法 |
CN103888683A (zh) * | 2014-03-24 | 2014-06-25 | 深圳市中兴移动通信有限公司 | 移动终端及其拍摄方法 |
CN104104798A (zh) * | 2014-07-23 | 2014-10-15 | 深圳市中兴移动通信有限公司 | 拍摄光绘视频的方法和移动终端 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006243701A (ja) * | 2005-02-07 | 2006-09-14 | Fuji Photo Film Co Ltd | カメラ及びレンズ装置 |
US9307212B2 (en) * | 2007-03-05 | 2016-04-05 | Fotonation Limited | Tone mapping for low-light video frame enhancement |
US9813638B2 (en) * | 2012-08-28 | 2017-11-07 | Hi Pablo, Inc. | Lightpainting live view |
US8830367B1 (en) * | 2013-10-21 | 2014-09-09 | Gopro, Inc. | Frame manipulation to reduce rolling shutter artifacts |
-
2014
- 2014-07-23 CN CN201410352575.XA patent/CN104104798A/zh active Pending
-
2015
- 2015-06-18 US US15/327,627 patent/US10129488B2/en active Active
- 2015-06-18 WO PCT/CN2015/081871 patent/WO2016011859A1/zh active Application Filing
- 2015-06-30 WO PCT/CN2015/082987 patent/WO2016011877A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2016011877A1 (zh) | 2016-01-28 |
US20170208259A1 (en) | 2017-07-20 |
US10129488B2 (en) | 2018-11-13 |
CN104104798A (zh) | 2014-10-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15824067 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15327627 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/06/17) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15824067 Country of ref document: EP Kind code of ref document: A1 |