CN113225490A - Time-delay photographing method and photographing device thereof - Google Patents
- Publication number
- CN113225490A (application CN202010079931.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- delayed
- time
- images
- delay
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention relates to a time-delay photography method, which comprises: acquiring multiple frames of images collected by a camera when a video is recorded; extracting first delayed images of a corresponding number of frames from the multiple frames of images according to a preset frame rate; performing high dynamic range image processing on the first delayed images to obtain second delayed images; and synthesizing the multiple frames of second delayed images in time sequence to obtain the delayed shooting video. In this time-delay photography method, the multiple frames of images collected by the camera are extracted according to the preset frame rate to obtain the first delayed images, high dynamic range image processing is then performed to obtain second delayed images with optimized brightness and darkness, and finally the second delayed images are synthesized into the delayed shooting video, so that the image quality of the delayed shooting video can be optimized and a different use experience can be provided for users.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a time-delay shooting method and a shooting device of intelligent equipment.
Background
A camera application in an electronic device such as a smart terminal is generally provided with a time-lapse photographing function. Time-lapse photography, which may also be referred to as time-lag photography or time-lapse video recording, is a shooting technique that compresses time and can reproduce the slow changing process of a scene within a short time. However, the image quality of videos shot by conventional time-lapse methods is often not clear enough.
Disclosure of Invention
The invention provides a time-delay shooting method and a shooting device thereof, which are used for improving the image quality of a delayed shooting video.
The invention discloses a time-delay shooting method in a first aspect, which comprises the following steps:
when a video is recorded, acquiring a plurality of frames of images collected by a camera;
extracting first delay images with corresponding frame numbers from the multi-frame images according to a preset frame rate;
carrying out high dynamic range image processing on the first delayed image to obtain a second delayed image;
and synthesizing the plurality of frames of the second delayed images according to a time sequence to obtain the delayed shooting video.
A second aspect of the present invention discloses a time-lapse photographing apparatus, comprising:
the image acquisition module is used for acquiring multi-frame images acquired by the camera;
the time-delay frame extracting module is used for extracting a first time-delay image with a corresponding frame number from the multi-frame images according to a preset frame rate;
the image processing module is used for carrying out high dynamic range image processing on the first time delay image to obtain a second time delay image;
and the video synthesis module is used for synthesizing the plurality of frames of the second delayed images according to the time sequence to obtain the delayed shooting video.
A third aspect of the present invention discloses an electronic device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor implements the steps of the delayed photography method as described above when executing the computer program.
A fourth aspect of the present invention discloses a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the delayed photography method as described above.
It can be seen from the foregoing embodiments that, in the delayed photography method of the present invention, multiple frames of images collected by a camera are extracted according to a preset frame rate to obtain first delayed images. The first delayed images are then subjected to high dynamic range image processing to obtain second delayed images with optimized brightness and darkness. Finally, the second delayed images are synthesized to obtain the delayed shooting video. In this way, the image quality of the delayed shooting video can be optimized, and a different use experience can be provided for the user.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a delayed photography method according to the present invention;
FIG. 2 is a block diagram of a delayed photographing apparatus according to the present invention;
fig. 3 is a schematic diagram of a module structure of an electronic device according to the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a first aspect of the present invention discloses a delayed shooting method, taking an intelligent terminal as an example, including:
s110, acquiring multi-frame images collected by a camera during video recording;
generally, a camera application of an intelligent terminal is provided with a function option of delayed photography. After the fact that the user opens the camera application is detected, a function option of delayed photography is set in a preview interface displayed by the intelligent terminal. If the user is detected to select the function option, the intelligent terminal can enter a delayed shooting mode of the camera application. Certainly, one or more shooting modes such as a photo mode, a portrait mode, a panorama mode, a video mode, or a slow motion mode may also be set in the preview interface, which is not limited in this embodiment of the application.
After the intelligent terminal enters the delayed photography mode, the preview interface can display a shooting picture currently captured by the camera. Since recording of the delayed video has not yet started, the shot image displayed in real time on the preview interface may be referred to as a preview image. In addition, the preview interface also comprises a recording button for delaying shooting. If the fact that the user clicks the recording button in the preview interface is detected, the fact that the user executes recording operation in the delayed shooting mode is indicated, at the moment, the intelligent terminal can continue to use the camera to collect each frame of shooting picture captured by the camera, and the delayed shooting video starts to be recorded.
For example, the intelligent terminal can collect the shot pictures at a certain acquisition frequency. If the collection frequency is 30 frames per second, then after the user clicks the recording button in the preview interface, the intelligent terminal collects 30 shot pictures within 0-1 second and another 30 shot pictures within 1-2 seconds. As the recording time elapses, the number of collected shot pictures gradually accumulates, and the frames that the intelligent terminal extracts from the collected pictures according to the frame extraction frequency finally form the delayed shooting video.
In other embodiments of the present invention, a resolution adjustment button may also be provided on the preview interface of the intelligent terminal, so that the user can select the resolution of each collected frame. Specifically, the resolution adjustment button may include resolution categories such as normal, high-definition and ultra-high-definition, allowing the user to choose the resolution of the camera of the intelligent terminal. It can be understood that, in practical applications, a user may upload a video processed by the delayed photography method to a network platform for sharing. Each network platform may limit the size and resolution of a single uploaded video, and these limits differ between platforms. By providing the resolution adjustment button, the user can select an appropriate resolution according to the limits of the corresponding platform, meeting the user's needs.
In the embodiment of the application, the intelligent terminal can duplicate the video stream formed by the collected shot pictures into two video streams. Each video stream contains every frame collected in real time after the intelligent terminal starts the delayed shooting and recording function. The intelligent terminal can then use one of the video streams to execute the following step S120, completing the display task of the preview interface during delayed shooting, while using the other video stream to execute the following steps S130-S150, completing the task of producing the delayed shooting video.
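As an illustration of this two-way split, the minimal Python sketch below duplicates each captured frame into a preview queue and a processing queue. OpenCV is an assumed capture backend, and the queue names, thread layout and camera index are illustrative, not prescribed by the method.

```python
import queue
import threading

import cv2  # assumed capture backend; the patent does not prescribe one

preview_q = queue.Queue()   # stream 1: drives the preview display (step S120)
process_q = queue.Queue()   # stream 2: feeds frame extraction and HDR (steps S130-S150)

def capture_loop(camera_index=0, stop_event=None):
    """Collect frames at the camera's native rate and duplicate each frame
    into two logical video streams, mirroring the two-way split described above."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while stop_event is None or not stop_event.is_set():
            ok, frame = cap.read()
            if not ok:
                break
            preview_q.put(frame)          # consumed by the preview renderer
            process_q.put(frame.copy())   # consumed by the time-lapse pipeline
    finally:
        cap.release()

# usage sketch: capture in a background thread while consumers drain both queues
stop = threading.Event()
threading.Thread(target=capture_loop, kwargs={"stop_event": stop}, daemon=True).start()
```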
S120, transmitting the multi-frame image to a preview interface for displaying;
the preview interface can display each frame of shooting picture currently acquired by the camera in real time, for example, a user shoots a sunrise picture through the camera, the sunrise picture is taken from the sun appearing on the horizon to the sun separating from the horizon, generally, the action needs 3 minutes, and then the preview interface can display each frame of shooting sunrise picture acquired within three minutes by the camera in real time; meanwhile, a prompt of shooting can be displayed in the preview interface so as to prompt the user that the user is currently in a recording state; the current recording time length can be displayed to reflect the current recording time length. Specifically, the display mode may be any one of a surface view (planar view) or a surface structure view (surface structure view)/surface structure view (template surface structure view) component, and draw each frame of shot image currently acquired by the camera onto the display of the image pickup device.
S130, extracting first delay images with corresponding frame numbers from the multi-frame images according to a preset frame rate;
the first delayed images are obtained from all the shot images, the first delayed images are distinguished according to the time sequence, and in a certain time zone, the first delayed images with corresponding frame numbers are uniformly selected according to the preset frame rate and the same time interval, so that the continuity of actions is ensured.
Specifically, extracting a first delayed image of a corresponding frame number from the multiple frame images according to a preset frame rate includes:
s131, obtaining a delay multiple of delay shooting;
the time delay multiple refers to the time delay length of time delay photography, the larger the time delay multiple is, the longer the time delay time is, the slower the action is, and the fewer the number of images to be extracted is; the smaller the delay time multiple, the shorter the delay time, the faster the operation, and the larger the number of images to be extracted.
For example, consider shooting a sunrise. Because the sun rises slowly, when an ordinary sunrise video is played back the user can hardly perceive the movement of the sun relative to the horizon with the naked eye, and the viewing experience is poor. With the delayed shooting method, the whole sunrise process is compressed into a short, continuously played sunrise video, so that the movement of the sun relative to the horizon is accelerated and the user can watch the sunrise more intuitively.
S132, calculating the number of buffer frames of the buffer according to the delay multiple;
after multi-frame images are collected, the images need to enter buffers in an application layer for temporary storage, and the frame rate of each buffer is set in advance, such as 30 frames/s or 60 frames/s.
The buffer frame number of the buffer is the frame rate/delay multiple.
S133, extracting the images of the buffer frame number from the multi-frame images to obtain a first delay image.
A part of the multiple frames of images is extracted according to the calculated number of buffer frames, so as to obtain the first delayed images. For example, if the camera captures 120 frames, then to achieve the delayed shooting effect only 30 of those 120 frames are extracted at intervals; specifically, one frame is kept after every three frames are skipped. In other words, a part of the pictures in the multi-frame image is extracted to obtain the first delayed images. A minimal sketch of steps S131-S133 is given below.
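The following Python sketch shows one way to realize S131-S133 under the stated formula (buffer frames = frame rate / delay multiple). The function names, and the reading that this amounts to keeping one frame out of every delay-multiple captured frames, are illustrative assumptions rather than terms of the patent.

```python
def buffered_frame_count(frame_rate: float, delay_multiple: float) -> int:
    """S132: number of buffer frames = frame rate / delay multiple."""
    return max(1, int(frame_rate / delay_multiple))

def extract_first_delayed_images(frames, frame_rate: float, delay_multiple: float):
    """S133: keep frames uniformly so that only the buffered number per second
    survives. Up to rounding, frame_rate / buffered_count equals delay_multiple,
    so this is the same as keeping one frame out of every delay_multiple captured
    frames (e.g. 120 captured frames, delay multiple 4 -> 30 first delayed images)."""
    step = max(1, round(frame_rate / buffered_frame_count(frame_rate, delay_multiple)))
    return frames[::step]

# usage sketch
captured = list(range(120))                                # stand-in for 120 shot pictures
print(buffered_frame_count(30, 4))                         # 7 frames kept per second at 30 fps
print(len(extract_first_delayed_images(captured, 30, 4)))  # 30
```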
S140, carrying out high dynamic range image processing on the first delayed image to obtain a second delayed image;
the high dynamic range image processor performs shading optimization processing on the image. It should be noted that in the field related to image technology, the requirement for an image is gradually increased, and then the image closer to a real scene is required to be presented, which means that the dynamic range of the image is larger and closer to the dynamic range visible to human eyes, thereby introducing a High Dynamic Range (HDR) image. Due to the characteristics of HDR images themselves, the processing of HDR images is roughly divided into: acquisition of an HDR image, storage of an HDR image, dynamic range compression, extension of an LDR image to HDR, and image rendering with an HDR image.
Specifically, the performing high dynamic range image processing on the first delayed image to obtain a second delayed image includes:
s141, carrying out image registration on multiple frames of first delayed images with different time sequences to obtain a target image;
the camera sensor generally shoots a plurality of images with different exposure degrees in the same scene, the images are subjected to image registration, the same characteristic points in the images are searched, and two or more images with accurate matching are target images. Specifically, a corresponding algorithm is set in an image processing module of the camera, the same feature points in the two images or the multiple images are searched, for example, if the exposure does not exceed 1% in the same position difference, the two images or the multiple images are defined as the same points, when the position difference in the two images or the multiple images does not exceed 1%, the two images or the multiple images are precisely matched, and correspondingly, the two images or the multiple images are target images.
S142, calculating the irradiance of each pixel point in the target images in multiple frames;
Each marked pixel point is computed from the exposure time and the gray value of the Complementary Metal Oxide Semiconductor (CMOS) chip to obtain the original irradiance of that pixel. Specifically, a RAW (unprocessed) image is obtained from the target image; the RAW image is the original data of the image. For example, if the exposure set by the user when taking the picture is -1, the data returned from the RAW image is the image exposed at -1. Irradiance is the radiant flux per unit area of the irradiated surface. The irradiance of each pixel point in the multiple frames of the target image is calculated by subtracting the exposure value from the original irradiance.
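A rough sketch of per-pixel irradiance estimation follows. It assumes, as many HDR pipelines do, that the RAW value is approximately proportional to irradiance times exposure time; the exposure-compensation subtraction described above and the sensor response curve are not modelled here.

```python
import numpy as np

def estimate_irradiance(raw_frames, exposure_times):
    """Rough per-pixel irradiance for each frame: the recorded RAW value is
    taken as proportional to irradiance x exposure time, so dividing by the
    exposure time recovers a relative irradiance map for every frame."""
    irradiance_maps = []
    for raw, exposure in zip(raw_frames, exposure_times):
        irradiance_maps.append(raw.astype(np.float64) / float(exposure))
    return irradiance_maps

# usage sketch: three registered exposures of the same scene
# maps = estimate_irradiance([raw_short, raw_mid, raw_long], [1/500, 1/125, 1/30])
```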
And S143, synthesizing the second delayed image according to the irradiance of each pixel point in the multiple frames of the target image. Places with a large brightness difference in the target image are marked, and the differing irradiance values are adjusted: places that are too bright are darkened and places that are too dark are brightened, so as to synthesize the second delayed image. For example, an algorithm set in the time-lapse photographing device compares the brightness of the same pixel point across the multiple frames of the target image; if the difference exceeds a threshold designed in the algorithm, for example 5%, the pixel is defined as a place with a large brightness difference and is marked, and the device then darkens the overly bright places and brightens the overly dark places to synthesize the second delayed image.
Specifically, the synthesizing the second delayed image according to the irradiance of each pixel point in the target images of a plurality of frames includes:
s1431, comparing the radiances of corresponding pixels in multiple frames of the target image to obtain a differential radiance value, for example, if the radiance of the corresponding pixel in one frame of the target image is 10, and the radiance of the corresponding pixel in zero frame of the target image is 20, the differential radiance value of the two frames of the target image is 10; and obtaining the differences of the radiance values on the pixel points according to the differences of the radiances, and obtaining all the distinguishing radiance values on the pixel points.
S1432, normalizing according to the sum of the Gaussian weights of the differential irradiance values to obtain the adjusted irradiance. All the irradiance values are weighted and averaged to obtain a rationalized adjusted irradiance, which is the optimized adjustment value. Specifically, normalizing by the Gaussian-weight sum of the differential irradiance values amounts to applying Gaussian blurring to them, a processing effect widely used in image processing software such as Adobe Photoshop, GIMP and Paint.NET. The visual effect of an image produced by this blurring technique is like viewing the image through frosted glass, which is clearly different from the out-of-focus effect of a lens or from the effect of an ordinary lighting shadow. Gaussian smoothing is also used in the pre-processing stage of computer vision algorithms to enhance images at different scales. Mathematically, Gaussian blurring of an image is the convolution of the image with a normal distribution; since the normal distribution is also called the Gaussian distribution, this technique is called Gaussian blur. Convolving the image with a circular box blur would produce a more accurate out-of-focus imaging effect.
S1433, using the adjusted irradiance for multiple frames of corresponding pixel points in the target image to obtain target pixel points; and adjusting the brightness of each pixel point by using the optimized adjusting value, so that the brightness of each pixel point is more uniform and the pixels are clearer.
And S1434, synthesizing the second delayed image by using the target pixel point. And splicing the clearer pixels to synthesize a final second delay image.
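The sketch below is one possible reading of S1431-S1434: differential irradiance values are turned into Gaussian weights, the weights are normalized by their sum, and the frames are merged as a weighted average to form the second delayed image. The sigma value and the use of the per-pixel mean as the reference are assumptions made for illustration.

```python
import numpy as np

def fuse_by_gaussian_weights(irradiance_maps, sigma=0.2):
    """Merge per-frame irradiance maps into one image: pixels whose irradiance
    differs strongly from the per-pixel mean receive a small Gaussian weight,
    the weights are normalized by their sum, and the frames are combined as a
    weighted average, which tempers overly bright spots and lifts dark ones."""
    stack = np.stack([m.astype(np.float64) for m in irradiance_maps], axis=0)
    stack = stack / (stack.max() + 1e-12)                  # scale to [0, 1]

    mean = stack.mean(axis=0, keepdims=True)
    diff = stack - mean                                    # differential irradiance (S1431)
    weights = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))    # Gaussian weights (S1432)
    weights /= weights.sum(axis=0, keepdims=True)          # normalize by the weight sum

    fused = (weights * stack).sum(axis=0)                  # adjusted irradiance per pixel (S1433)
    return (fused * 255.0).clip(0, 255).astype(np.uint8)   # target pixels -> image (S1434)
```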
And S150, synthesizing a plurality of frames of the second delayed images according to a time sequence to obtain the delayed shooting video. The time-delay photographic video synthesized by the second time-delay images after multi-frame optimization can be clearer, and the texture of the picture is stronger.
Specifically, the synthesizing multiple frames of the second delayed images according to a time sequence to obtain a delayed video includes:
s151, carrying out beautifying processing on a plurality of frames of the second delayed images in sequence to obtain a third delayed image;
and S152, synthesizing a plurality of frames of the third delayed images according to a time sequence to obtain the delayed shooting video.
The second delayed image can be subjected to beautifying processing, and the delayed video is more attractive.
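Once the processed delayed images (second or third delayed images) are available in time order, synthesizing the delayed shooting video amounts to writing them out at a chosen playback frame rate. The sketch below uses OpenCV's VideoWriter; the codec, container and 30 fps playback rate are illustrative choices, not values fixed by the method.

```python
import cv2

def write_time_lapse(frames_in_time_order, out_path="timelapse.mp4", fps=30):
    """Write the processed delayed images to a video file in time order
    (steps S150/S152). Frames must already be BGR and of identical size."""
    if not frames_in_time_order:
        raise ValueError("no frames to synthesize")
    h, w = frames_in_time_order[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
    for frame in frames_in_time_order:
        writer.write(frame)
    writer.release()
    return out_path
```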
In the present invention, the multi-frame image may be subjected to high dynamic range image processing before being subjected to the frame extraction operation. Therefore, the displayed image of the preview interface is kept consistent with the image in the delayed shooting video, and the uniformity of the displayed image and the delayed shooting video is improved.
Referring to fig. 2, a second aspect of the present invention discloses a time-lapse photographing apparatus, including: the image processing apparatus includes an image obtaining module 210 for obtaining a plurality of frames of images acquired by a camera, a delayed frame extracting module 220 for extracting a first delayed image with a corresponding frame number from the plurality of frames of images according to a preset frame rate, an image processing module 230 for performing high dynamic range image processing on the first delayed image to obtain a second delayed image, and a video synthesizing module 240 for synthesizing the plurality of frames of the second delayed image according to a time sequence to obtain a delayed video. The specific implementation process of the device is described in detail in the above-mentioned delayed photography method, so that the detailed description is omitted in this embodiment.
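For orientation, the apparatus can be pictured as the class skeleton below, with one method per module (210-240). This structure is an illustrative paraphrase of the module list above; the method bodies would reuse the routines sketched earlier in this description.

```python
class TimeLapseDevice:
    """Skeleton of the four modules of the time-lapse photographing apparatus."""

    def __init__(self, frame_rate=30, delay_multiple=4):
        self.frame_rate = frame_rate
        self.delay_multiple = delay_multiple

    def acquire_images(self, camera):              # image obtaining module 210
        raise NotImplementedError

    def extract_delayed_frames(self, frames):      # delayed frame extracting module 220
        return frames[::self.delay_multiple]

    def process_hdr(self, first_delayed):          # image processing module 230
        raise NotImplementedError

    def synthesize_video(self, second_delayed):    # video synthesizing module 240
        raise NotImplementedError
```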
A third aspect of the invention discloses an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program. Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. Referring to fig. 3, the electronic device 90 includes: a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, a processor 980, and a power supply 990. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 3 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. The following describes each component of the electronic device of the present embodiment with reference to fig. 3:
the RF circuit 910 may be used for receiving and transmitting signals during information transceiving, and in particular, for processing the downlink information of the base station to the processor 980 after receiving the downlink information; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 performs various functional applications and data processing of the electronic device by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.), and the like. Further, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user's operation on or near the touch panel 931 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Alternatively, the touch panel 931 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by a user or information provided to the user and various menus of the electronic device. The Display unit 940 may include a Display panel 941, and optionally, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 931 may cover the display panel 941, and when the touch panel 931 detects a touch operation on or near the touch panel 931, the touch panel transmits the touch operation to the processor 980 to determine the type of the touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of the touch event. Although in fig. 3, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The electronic device may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 941 according to the brightness of ambient light. The audio circuit 960, speaker 961 and microphone 962 may provide an audio interface between the user and the electronic device. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for output; conversely, the microphone 962 converts collected sound signals into electrical signals, which are received by the audio circuit 960 and converted into audio data. The audio data is then processed by the processor 980 and, for example, sent via the RF circuit 910 to another electronic device, or output to the memory 920 for further processing.
WiFi belongs to short-range wireless transmission technology, and the electronic device can provide wireless broadband internet access to the user through the WiFi module 970. Although fig. 3 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the electronic device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 980 is the control center of the electronic device. It connects the various parts of the entire device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the electronic device as a whole. Optionally, the processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which primarily handles the operating system, user interfaces, application programs, and the like. A modem processor may or may not be integrated into the processor 980.
The electronic device also includes a power supply 990 (e.g., a battery) for supplying power to the various components, which may be logically connected to the processor 980 via a power management system, such that the functions of managing charging, discharging, and power consumption are performed via the power management system. Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
The readable storage medium provided by the embodiment of the present invention stores program code, and the instructions contained in the program code can be used to execute the method in the foregoing method embodiment; for the specific implementation, reference may be made to the method embodiment, which is not repeated herein.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a storage medium readable by an electronic device. Based on such understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art in essence, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling an electronic device (which may be a mobile phone, a tablet computer, a vehicle-mounted computer, a PDA, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other media capable of storing program codes.
A fourth aspect of the present invention discloses a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the delayed photography method as described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the technical solutions provided by the present invention, those skilled in the art will recognize that there may be variations in the technical solutions and the application ranges according to the concepts of the embodiments of the present invention, and in summary, the content of the present specification should not be construed as limiting the present invention.
Claims (10)
1. A time-lapse photographing method, comprising:
when a video is recorded, acquiring a plurality of frames of images collected by a camera;
extracting first delay images with corresponding frame numbers from the multi-frame images according to a preset frame rate;
carrying out high dynamic range image processing on the first delayed image to obtain a second delayed image;
and synthesizing the plurality of frames of the second delayed images according to a time sequence to obtain the delayed shooting video.
2. The delayed photography method according to claim 1, wherein the performing high dynamic range image processing on the first delayed image to obtain a second delayed image comprises:
carrying out image registration on multiple frames of first time-delay images with different time sequences to obtain a target image;
calculating the irradiance of each pixel point in the target images of multiple frames;
and synthesizing the second delayed image according to the irradiance of each pixel point in the target images.
3. The delayed photography method of claim 2, wherein said synthesizing the second delayed image according to irradiance of each pixel point in a plurality of frames of the target image comprises:
comparing the radiances of corresponding pixel points in the target images of multiple frames to obtain a differential radiance value;
normalizing according to the sum of the Gaussian weights of the differential radiance values to obtain adjusted irradiance;
using the adjusted irradiance for corresponding pixel points in multiple frames of the target image to obtain target pixel points;
and synthesizing the second delayed image by using the target pixel point.
4. The delay shooting method of claim 1, wherein extracting a first delay image of a corresponding number of frames from the plurality of frame images at a preset frame rate comprises:
obtaining the time delay multiple of time delay photography;
calculating the number of buffer frames of the buffer according to the delay multiple;
and extracting the images of the number of the buffer frames from the multi-frame images to obtain a first delay image.
5. The delayed photography method of claim 1, wherein the synthesizing a plurality of frames of the second delayed image in time sequence to obtain a delayed photography video comprises:
carrying out face beautifying treatment on a plurality of frames of the second delayed images in sequence to obtain a third delayed image;
and synthesizing the plurality of frames of the third delayed images according to a time sequence to obtain a delayed shooting video.
6. The delayed photography method according to claim 1, wherein after acquiring the multi-frame image collected by the camera during the video recording, the method further comprises:
and transmitting the multi-frame image to a preview interface for displaying.
7. The delayed photography method according to claim 1, wherein after acquiring the multi-frame image collected by the camera during the video recording, the method further comprises:
and carrying out high dynamic range image processing on the multi-frame images.
8. A time-lapse photographing apparatus, comprising:
the image acquisition module is used for acquiring multi-frame images acquired by the camera;
the time-delay frame extracting module is used for extracting a first time-delay image with a corresponding frame number from the multi-frame images according to a preset frame rate;
the image processing module is used for carrying out high dynamic range image processing on the first time delay image to obtain a second time delay image;
and the video synthesis module is used for synthesizing the plurality of frames of the second delayed images according to the time sequence to obtain the delayed shooting video.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor executes the computer program to implement the steps of the delayed photography method according to any one of claims 1 to 7.
10. A readable storage medium having stored thereon a computer program for implementing the steps of the time-lapse photography method according to any one of claims 1 to 7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010079931.0A CN113225490B (en) | 2020-02-04 | 2020-02-04 | Time-delay photographing method and photographing device thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010079931.0A CN113225490B (en) | 2020-02-04 | 2020-02-04 | Time-delay photographing method and photographing device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113225490A true CN113225490A (en) | 2021-08-06 |
CN113225490B CN113225490B (en) | 2024-03-26 |
Family
ID=77085415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010079931.0A Active CN113225490B (en) | 2020-02-04 | 2020-02-04 | Time-delay photographing method and photographing device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113225490B (en) |
- 2020-02-04: CN application CN202010079931.0A, patent CN113225490B (en), status: Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7626614B1 (en) * | 2005-02-15 | 2009-12-01 | Apple Inc. | Transfer function and high dynamic range images |
US8340453B1 (en) * | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US20170187954A1 (en) * | 2015-12-25 | 2017-06-29 | Olympus Corporation | Information terminal apparatus, image pickup apparatus, image-information processing system, and image-information processing method |
JP2017118427A (en) * | 2015-12-25 | 2017-06-29 | オリンパス株式会社 | Information terminal device, imaging device, image information processing system, and image information processing method |
WO2017166954A1 (en) * | 2016-03-31 | 2017-10-05 | 努比亚技术有限公司 | Apparatus and method for caching video frame and computer storage medium |
CN107451970A (en) * | 2017-07-28 | 2017-12-08 | 电子科技大学 | A kind of high dynamic range images generation method based on single-frame images |
CN108012080A (en) * | 2017-12-04 | 2018-05-08 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN108492262A (en) * | 2018-03-06 | 2018-09-04 | 电子科技大学 | It is a kind of based on gradient-structure similitude without ghost high dynamic range imaging method |
CN109068052A (en) * | 2018-07-24 | 2018-12-21 | 努比亚技术有限公司 | video capture method, mobile terminal and computer readable storage medium |
CN110086985A (en) * | 2019-03-25 | 2019-08-02 | 华为技术有限公司 | A kind of method for recording and electronic equipment of time-lapse photography |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113676657A (en) * | 2021-08-09 | 2021-11-19 | 维沃移动通信(杭州)有限公司 | Time-delay shooting method and device, electronic equipment and storage medium |
CN113676657B (en) * | 2021-08-09 | 2023-04-07 | 维沃移动通信(杭州)有限公司 | Time-delay shooting method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113225490B (en) | 2024-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110177221B (en) | Shooting method and device for high dynamic range image | |
CN109547701B (en) | Image shooting method and device, storage medium and electronic equipment | |
Li et al. | Fast multi-exposure image fusion with median filter and recursive filter | |
CN107679482B (en) | Unlocking control method and related product | |
CN107507160B (en) | Image fusion method, terminal and computer readable storage medium | |
RU2731370C1 (en) | Method of living organism recognition and terminal device | |
CN107566739B (en) | photographing method and mobile terminal | |
CN107707827B (en) | High-dynamic image shooting method and mobile terminal | |
CN105809647B (en) | Automatic defogging photographing method, device and equipment | |
WO2018228168A1 (en) | Image processing method and related product | |
CN110930329B (en) | Star image processing method and device | |
WO2019052329A1 (en) | Facial recognition method and related product | |
CN107566749B (en) | Shooting method and mobile terminal | |
CN109218626B (en) | Photographing method and terminal | |
CN110619593A (en) | Double-exposure video imaging system based on dynamic scene | |
CN110876036B (en) | Video generation method and related device | |
WO2014099284A1 (en) | Determining exposure times using split paxels | |
CN107451454B (en) | Unlocking control method and related product | |
US11847769B2 (en) | Photographing method, terminal, and storage medium | |
CN112422798A (en) | Photographing method and device, electronic equipment and storage medium | |
CN115205172A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110363702B (en) | Image processing method and related product | |
CN113808066A (en) | Image selection method and device, storage medium and electronic equipment | |
CN113225490B (en) | Time-delay photographing method and photographing device thereof | |
CN111416936B (en) | Image processing method, image processing device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |