
CN117692791B - Image capturing method, terminal, storage medium and program product - Google Patents


Info

Publication number
CN117692791B
Authority
CN
China
Prior art keywords: image, original, quality, cache queue, terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310939433.2A
Other languages
Chinese (zh)
Other versions
CN117692791A (en)
Inventor
王菊远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202310939433.2A
Publication of CN117692791A
Application granted
Publication of CN117692791B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/665: Control of cameras or camera modules involving internal camera communication with the image sensor, e.g. synchronising or multiplexing SSIS control signals
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/84: Camera processing pipelines for processing colour signals
    • H04N 23/86: Camera processing pipelines for controlling the colour saturation of colour signals, e.g. automatic chroma control circuits
    • H04N 23/88: Camera processing pipelines for colour balance, e.g. white-balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide an image capturing method, a terminal, a storage medium and a program product, relating to the technical field of image processing. The method comprises the following steps: storing original images acquired by an image sensor into a cache queue in memory; detecting, among the detection images corresponding to the original images, a first image whose image quality does not reach the standard, and deleting the original image corresponding to the first image from the cache queue; detecting a highlight image among the detection images within the snapshot observation interval; selecting an original image from the cache queue based on information of the highlight image to obtain a second image; and generating a snapshot image based on the second image. Applying the image snapshot scheme provided by the embodiments of the present application reduces the memory requirement of the image snapshot process.

Description

Image capturing method, terminal, storage medium and program product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image capturing method, a terminal, a storage medium, and a program product.
Background
Terminals such as mobile phones and tablet computers are generally equipped with image sensors, giving them a photographing function. People often use photography to relax and to record their lives, hold an ever-higher standard for the photographing experience, and hope to capture the perfect moment when taking pictures.
In view of this, in the related art the terminal generally caches in memory a number of images acquired by the image sensor before and after the user clicks the photographing button. However, to obtain the image of a wonderful moment from the cached images, the number of cached images often has to be large, which results in a large memory requirement.
Disclosure of Invention
In view of the foregoing, the present application provides an image capturing method, a terminal, a storage medium and a program product, so as to reduce the memory requirement during image capturing.
In a first aspect, an embodiment of the present application provides an image capturing method, where the method includes:
storing the original image acquired by the image sensor into a cache queue in a memory;
detecting a first image whose image quality does not reach the standard among the detection images corresponding to the original image, and deleting the original image corresponding to the first image from the cache queue;
detecting a highlight image among the detection images within the snapshot observation interval;
selecting an original image from the cache queue based on information of the highlight image to obtain a second image; and
generating a snapshot image based on the second image.
As can be seen from the above, the first image has poor image quality. After the original image corresponding to the first image is removed from the cache queue, on the one hand the remaining original images stored in the cache queue are all images of higher quality; on the other hand, part of the cache space in the queue is released and can be used to store other original images of higher quality. The snapshot image can therefore be generated from the higher-quality original images in the cache queue. When the scheme provided by the embodiments of the present application is applied to image capture, the cache space occupied by first images of poor image quality is released in time without affecting the quality of the generated snapshot image, which reduces the memory requirement when generating the snapshot image.
In one embodiment of the present application, the detecting a first image with an image quality not reaching the standard in the detected image corresponding to the original image includes:
obtaining a dynamic quality characterization value of the detection image corresponding to the original image;
and detecting, based on the obtained dynamic quality characterization value and a first quality threshold, a first image whose image quality does not reach the standard among the detection images.
The dynamic quality characterization value of a detection image reflects the quality information of the detection image in the dynamic, or time, dimension. Based on the obtained dynamic quality characterization value and the first quality threshold, a first image whose quality does not reach the standard in the dynamic dimension can therefore be determined in time and deleted from the cache queue, so that the remaining original images stored in the cache queue are all of higher quality, which improves the image quality of the snapshot image generated from them.
In one embodiment of the present application, the detecting a first image with an image quality not reaching the standard in the detected image corresponding to the original image includes:
Obtaining a static quality characterization value of a detection image corresponding to the original image;
and detecting, based on the obtained static quality characterization value and a second quality threshold, a first image whose image quality does not reach the standard among the detection images.
The static quality characterization value of a detection image reflects the quality information of the detection image in the static, or spatial, dimension. Based on the obtained static quality characterization value and the second quality threshold, a first image whose quality does not reach the standard in the static dimension can therefore be determined in time and deleted from the cache queue, so that the remaining original images stored in the cache queue are all of higher quality, which improves the image quality of the snapshot image generated from them.
In one embodiment of the present application, the dynamic quality characterization value includes at least one of the following information:
The motion amplitude of the object in the detected image;
a first difference between the motion amplitude of the object in the detection image and the motion amplitude of the object in an adjacent image.
Thus, for a single detection image, the dynamic quality characterization value can be obtained from the motion amplitude of the object in that image. The motion amplitude reflects, in the time dimension, the motion of the object within a single detection image, and can accurately measure the dynamic quality of the detection image in that dimension; a dynamic quality characterization value obtained from the motion amplitude therefore reflects the dynamic quality of the object in the detection image in the time dimension, which improves the accuracy and comprehensiveness of the obtained value.
The first difference reflects the difference between the motion amplitudes of the object across several temporally consecutive detection images, that is, how the motion of the object changes over time, and can likewise accurately measure the dynamic quality of a detection image in the time dimension; a dynamic quality characterization value obtained from the first difference therefore also reflects the dynamic quality of the object in the detection image in the time dimension, improving the accuracy and comprehensiveness of the obtained value.
In one embodiment of the present application, the static quality characterization value includes at least one of the following information:
the auto exposure (AE) convergence of the detection image; the auto focus (AF) convergence of the detection image; the auto white balance (AWB) convergence of the detection image; the sharpness of the detection image; the contrast of the detection image; the color saturation of the detection image; and the color uniformity of the detection image.
Thus, the static quality characterization value can include information corresponding to several types of static evaluation indicators, so the quality information of the image in the static, or spatial, dimension can be reflected more comprehensively and accurately.
In one embodiment of the application, the quality threshold is determined based on, and is inversely proportional to, the length of the cache queue.
On the one hand, a larger quality threshold can be determined when the cache queue is short. Because a short queue can store only a few original images, a larger threshold allows more first images with substandard image quality to be detected and deleted from the queue, which improves the utilization of the cache space; the released space can then store other original images of higher quality, improving the image quality of the snapshot image generated from the original images in the queue. On the other hand, a smaller quality threshold can be determined when the cache queue is long. Because a long queue can store many original images, a smaller threshold causes fewer first images to be deleted, which increases the number of high-quality original images kept in the queue and likewise improves the image quality of the snapshot image generated from them.
In one embodiment of the present application, the length of the buffer queue is smaller than the length of the snapshot observation interval.
Thus, even though the length of the cache queue is smaller than that of the snapshot observation interval, the queue can still hold all original images in the interval other than the first images, so the original image corresponding to the highlight image can be fetched from the queue based on the information of the highlight image. For example, an observation interval covering 12 original images can be served by a queue of 8 buffers whenever at least 4 of the corresponding detection images are deleted as first images. The length of the queue reserved for image capture can therefore be smaller than the length of the observation interval, which reduces memory usage and improves memory utilization.
In one embodiment of the present application, the selecting the original image from the buffer queue based on the information of the highlight image to obtain the second image includes:
selecting, from the cache queue, the original image whose acquisition sequence information is closest to the acquisition sequence information corresponding to the highlight image, to obtain a third image;
a second image is determined based on the third image.
Thus, the original image whose acquisition sequence information is closest to that of the highlight image can be selected from the cache queue to obtain the third image. Since this original image is usually either the original image corresponding to the highlight image or the one closest to it, a third image strongly associated with the highlight image is obtained in this way; a second image strongly associated with the highlight image can then be obtained based on the third image, which improves the image quality of the snapshot image generated based on the second image.
In one embodiment of the present application, the determining the second image based on the third image includes:
determining a second difference between the motion amplitude of the object in the third image and the motion amplitude of the object in a fourth image, where the fourth image is an image adjacent to the third image in the cache queue;
if the second difference is greater than a difference threshold, determining the third image as a second image;
otherwise, determining the third image and the fourth image as the second image.
When the second difference is greater than the difference threshold, the third image and the adjacent fourth image differ significantly, and the third image alone is determined as the second image used to generate the snapshot image, which prevents the fourth image from adversely affecting the snapshot. When the second difference is not greater than the difference threshold, the two images are close to each other; both are then determined as second images, so that the snapshot image can be generated from the image features of both the third and the fourth image, improving the image quality of the generated snapshot image.
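By way of illustration only, this selection step might be sketched as follows; the Frame fields, the choice of neighbour, and the threshold handling are assumptions made for the sketch rather than details fixed by the present application:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int                 # acquisition sequence number of the original image (assumed field)
    motion_amplitude: float  # motion amplitude of the object in the frame (assumed field)
    data: bytes              # raw image payload

def select_second_images(queue: list[Frame], highlight_seq: int,
                         diff_threshold: float) -> list[Frame]:
    # Third image: the cached frame whose acquisition sequence number is
    # closest to that of the highlight image.
    third = min(queue, key=lambda f: abs(f.seq - highlight_seq))
    idx = queue.index(third)
    # Fourth image: a frame adjacent to the third image in the queue
    # (here simply the next one, falling back to the previous one).
    adjacent = queue[idx + 1:idx + 2] or queue[max(idx - 1, 0):idx]
    if not adjacent:
        return [third]       # queue holds a single frame
    fourth = adjacent[0]
    # Second difference between the two frames' motion amplitudes.
    if abs(third.motion_amplitude - fourth.motion_amplitude) > diff_threshold:
        return [third]       # frames differ too much: use the third image alone
    return [third, fourth]   # frames are close: use both to generate the snapshot
```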
In one embodiment of the present application, the detection image is obtained by performing at least one of the following processes on the original image:
downsampling, noise reduction, sharpening, and smoothing.
Processing the original image in this way reduces the storage space occupied by the image, or the influence of noise in it, while preserving the image characteristics of the original image, which facilitates the subsequent image quality evaluation.
In a second aspect, an embodiment of the present application further provides a terminal, including:
one or more processors, image sensors, and memory;
the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the terminal to perform the method of any one of the first aspect.
In a third aspect, embodiments of the present application also provide a computer readable storage medium comprising a computer program which, when run on a terminal, causes the terminal to perform the method of any one of the first aspects above.
In a fourth aspect, embodiments of the present application also provide a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any of the first aspects above.
In a fifth aspect, an embodiment of the present application further provides a chip system applied to a terminal. The chip system includes one or more processors, and the processors are configured to invoke computer instructions to cause the terminal to input data into the chip system, and to perform the method according to any one of the first aspect so as to process the data and output a processing result.
For the advantageous effects of the solutions provided by the embodiments in the second, third, fourth, and fifth aspects above, reference may be made to the advantageous effects of the solutions provided by the embodiments in the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 2 is a software structure block diagram of a terminal according to an embodiment of the present application;
Fig. 3 is a flowchart of a first image capturing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an image storage flow provided in an embodiment of the present application;
Fig. 5a is a schematic diagram of a first snapshot observation interval according to an embodiment of the present application;
Fig. 5b is a schematic diagram of a second snapshot observation interval according to an embodiment of the present application;
Fig. 5c is a schematic diagram of a third snapshot observation interval according to an embodiment of the present application;
Fig. 6a is a schematic diagram of an image capturing process in the related art;
Fig. 6b is a schematic diagram of an image capturing process according to an embodiment of the present application;
Fig. 7 is a flowchart of a second image capturing method according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solution of the present application, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
In order to clearly describe the technical solutions of the embodiments of the present application, words such as "first" and "second" are used in the embodiments to distinguish identical or similar items having substantially the same function and effect. For example, a first instruction and a second instruction merely distinguish different user instructions, with no ordering of the instructions implied. It will be appreciated by those skilled in the art that words such as "first" and "second" do not limit quantity or order of execution, nor do they require the items so labelled to necessarily be different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The scheme provided by the embodiments of the present application can be applied to terminals equipped with image sensors, such as mobile phones, tablet computers, personal digital assistants (PDA), smart watches, wearable electronic devices, augmented reality (AR) devices, virtual reality (VR) devices, robots, smart glasses, and the like.
By way of example, fig. 1 shows a schematic structural diagram of a terminal 100. The terminal 100 may include a processor 110, a display 120, an internal memory 130, a SIM (Subscriber Identity Module) card interface 140, a USB (Universal Serial Bus) interface 150, a charge management module 160, a battery management module 161, a battery 162, a sensor module 170, a mobile communication module 180, a wireless communication module 190, an antenna 1, an antenna 2, and the like. The sensor module 170 may include a pressure sensor 170A, a fingerprint sensor 170B, a touch sensor 170C, an ambient light sensor 170D, an image sensor 170E, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal 100. In other embodiments of the application, terminal 100 may include more or less components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the terminal 100 may also include one or more processors 110. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution. In other embodiments, memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache, holding instructions or data that the processor 110 has just used or reused; if the processor 110 needs the instruction or data again, it can be called directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the terminal 100 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others. The USB interface 150 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 150 may be used to connect a charger to charge the terminal 100, and may also be used to transfer data between the terminal 100 and peripheral devices. The USB interface 150 may also be used to connect headphones and play audio through them.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present application is for illustrative purposes, and is not limited to the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also use different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 180, the wireless communication module 190, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
Terminal 100 implements display functions through a GPU, display 120, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 120 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 120 is used to display images, videos, and the like. The display 120 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or more displays 120.
In some embodiments of the present application, when OLED, AMOLED, FLED or similar materials are used for the display panel, the display 120 in fig. 1 may be folded. Here, "may be folded" means that the display can be folded at any angle at any portion and held at that angle; for example, the display 120 may be folded in half from the middle, left to right or top to bottom.
The display 120 of the terminal 100 may be a flexible screen, which is currently attracting much attention for its unique characteristics and great potential. Compared with a traditional screen, a flexible screen is highly flexible and bendable, can provide the user with new bending-based interaction modes, and can meet more of the user's requirements on a terminal. For a terminal equipped with a foldable display, the foldable display can be switched at any time between a small screen in the folded configuration and a large screen in the unfolded configuration. Accordingly, users use the split-screen function more and more frequently on terminals configured with a foldable display.
The terminal 100 may implement a photographing function through an ISP, an image sensor 170E, a video codec, a GPU, a display screen 120, an application processor, and the like.
The ISP is used to process the data fed back by the image sensor 170E. For example, when photographing, the shutter is opened and light is transmitted through the lens to the image sensor 170E, where the optical signal is converted into an electrical signal; the image sensor 170E transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also run algorithmic optimization on the noise, brightness and color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene.
The image sensor 170E is used to capture photos or video. An object is projected through the lens to generate an optical image on the image sensor 170E. The image sensor 170E may include a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The image sensor 170E converts the optical signal into an electrical signal and then transfers it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as red-green-blue (RGB) or YUV. In some embodiments, the terminal 100 may include 1 or N image sensors 170E, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the terminal 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transmission mode between neurons in the human brain, it can rapidly process input information and can continuously learn. Applications such as intelligent cognition of the terminal 100 can be implemented through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
The internal memory 130 may be used to store one or more computer programs, including instructions. By executing the instructions stored in the internal memory 130, the processor 110 may cause the terminal 100 to perform the image capturing method provided in some embodiments of the present application, as well as various applications, data processing, and the like. The internal memory 130 may include a program storage area and a data storage area. The program storage area can store an operating system, and can also store one or more applications (such as a gallery or contacts). The data storage area may store data (e.g., photos, contacts) created during use of the terminal 100. In addition, the internal memory 130 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage units, flash memory units, or universal flash storage (UFS). In some embodiments, the processor 110 may cause the terminal 100 to perform the image capturing method provided in the embodiments of the present application, as well as other applications and data processing, by executing instructions stored in the internal memory 130 and/or instructions stored in a memory provided in the processor 110.
The internal memory 130 may be used to store the related program of the image capturing method provided in the embodiments of the present application, and the processor 110 may be used to call the related program of the image capturing method stored in the internal memory 130 when needed, so as to execute the image capturing method in the embodiments of the present application.
The sensor module 170 may include a pressure sensor 170A, a fingerprint sensor 170B, a touch sensor 170C, an ambient light sensor 170D, and the like.
The pressure sensor 170A is used to sense a pressure signal, which can be converted into an electrical signal. In some embodiments, the pressure sensor 170A may be disposed on the display 120. The pressure sensor 170A may be of various types, such as a resistive, inductive, or capacitive pressure sensor. A capacitive pressure sensor may be a device comprising at least two parallel plates of conductive material; the capacitance between the electrodes changes as force is applied to the sensor, and the terminal 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 120, the terminal 100 detects the touch operation through the pressure sensor 170A. The terminal 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 170A. In some embodiments, touch operations that act on the same touch location but with different strengths may correspond to different operation instructions. For example, when a touch operation whose strength is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose strength is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The fingerprint sensor 170B is used to collect a fingerprint. The terminal 100 can utilize the collected fingerprint characteristics to realize the functions of unlocking, accessing an application lock, shooting and receiving an incoming call, and the like.
The touch sensor 170C is also referred to as a touch device. The touch sensor 170C may be disposed on the display 120; together they form a touch screen. The touch sensor 170C is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 120. In other embodiments, the touch sensor 170C may also be disposed on the surface of the terminal 100 at a location different from that of the display 120.
The ambient light sensor 170D is used to sense ambient light level. The terminal 100 may adaptively adjust the brightness of the display 120 according to the perceived ambient light level. The ambient light sensor 170D may also be used to automatically adjust white balance at the time of photographing. Ambient light sensor 170D may also communicate the ambient information in which the device is located to the GPU.
The ambient light sensor 170D is also used to obtain the brightness, light ratio, color temperature, etc. of the acquisition environment.
Fig. 2 is a block diagram of a software architecture of a terminal to which an embodiment of the present application is applicable. The software system of the terminal can adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture.
The layered architecture divides the software system of the terminal into several layers, each with a distinct role and division of work. The layers communicate with each other through software interfaces. In some embodiments, the software system may be divided into five layers: an application layer, an application framework layer, a hardware abstraction layer (HAL), a driver layer, and a hardware layer.
The application layer may include a series of application packages that run applications by calling an application program interface (Application Programming Interface, API) provided by the application framework layer. For example, the application package may include applications such as a browser, camera, gallery, music, video, and the like. It will be appreciated that the ports of each of the applications described above may be used to receive data.
The application framework layer provides APIs and programming frameworks for the applications of the application layer, and includes a number of predefined functions. For example, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, a DHCP (Dynamic Host Configuration Protocol) module, a camera management module, and the like.
The hardware abstraction layer can comprise a plurality of library modules, wherein the library modules can be a display library module, a motor library module and the like, and the hardware abstraction layer can also comprise an image quality detection module and an image selection module. The terminal system can load a corresponding library module for the equipment hardware, so that the purpose of accessing the equipment hardware by the application program framework layer is achieved.
The driver layer is a layer between hardware and software. The driving layer is used for driving the hardware so that the hardware works. The driving layer at least includes a display driving, an audio driving, a camera device driving, a sensor driving, a motor driving, and the like, which is not limited in the embodiment of the present application. It will be appreciated that display driving, audio driving, camera device driving, sensor driving, motor driving, etc. may all be considered a driving node. Each of the drive nodes described above includes an interface that may be used to receive data.
The hardware layer is the bottom part of the terminal system, and can be composed of various physical components required by a processor, a memory, an input/output interface (I/O) and the like. The hardware layer may also include an image sensor and an image signal processor.
The following describes an interaction flow between layers of a software system in the process of performing image capturing by using the scheme provided by the embodiment of the present application with reference to fig. 2.
First, the user triggers an image acquisition instruction through the camera application in the application layer. The application layer passes the instruction to the application framework layer through the camera access interface; the camera management module in the application framework layer forwards it to the HAL layer, the HAL layer forwards it to the driver layer, and the driver layer finally forwards it, through the camera device driver, to the image sensor and the image signal processor in the hardware layer. In response to the image acquisition instruction, the image sensor and the image signal processor acquire an original image, store it into the cache queue, and notify the HAL layer, via the driver layer, to perform image detection and image selection.
Then, on the one hand, the image quality detection module in the HAL layer invokes an algorithm from the camera algorithm library, detects first images with substandard image quality in the tiny stream based on that algorithm, and clears them from the cache queue; on the other hand, the image selection module in the HAL layer invokes an algorithm from the camera algorithm library, determines a highlight image from the tiny stream based on that algorithm, and selects an original image from the cache queue based on the highlight image.
Finally, the HAL layer may generate a snapshot image based on the selected original image and feed it back, via the application framework layer, to the gallery application in the application layer, which can then display the snapshot image.
The concepts of the tiny stream and the like mentioned above are explained in the specific embodiments below, and are not described in detail here.
The following describes some concepts related to the image capturing scheme provided by the embodiment of the present application.
1. Instruction trigger time
The user may click the photographing button on the user interface of the camera application to trigger an image acquisition instruction; the moment the photographing button is clicked may be referred to as the instruction trigger time.
2. Instruction receiving time
The instruction receiving time refers to a time when the image sensor actually receives an image acquisition instruction.
Because transmitting an instruction inside the terminal takes a certain time, there is a delay between the moment the image sensor actually receives the image acquisition instruction and the instruction trigger time; that is, the instruction receiving time comes after the instruction trigger time.
3. Length of cache queue
The length of the cache queue reflects the size of the corresponding cache space of the cache queue.
In one case, the length of the buffer queue may be described by the number of buffers (buffers) included in the buffer queue, each buffer being used to store an original image.
For example, the length of the buffer queue is 8, which means that the buffer queue contains 8 buffers, that is, the buffer queue can store 8 original images in total.
It should be noted that, each buffer may be used to store, in addition to one original image, identification information such as a frame number, a sequence number, an acquisition timestamp, and the like related to the original image.
4. Length of snapshot observation interval
The snapshot observation interval refers to: the expected time interval to which the image acquisition time of the original image used to generate the snap shot image belongs.
In one case, the length of the snapshot observation interval can be reflected in the number of original images located within the snapshot observation interval at the time of image acquisition.
For example, the length of the snap observation interval is 12, which means that the image acquisition time of 12 original images is located in the snap observation interval, that is, the snap observation interval corresponds to 12 original images.
5. Original image within the snapshot observation interval
An original image within the snapshot observation interval is an original image whose image acquisition time falls within the snapshot observation interval.
The image capturing scheme provided by the embodiment of the application is described in detail below through a specific embodiment.
Referring to fig. 3, a flowchart of a first image capturing method according to an embodiment of the present application is shown, where the method includes the following steps S301 to S305.
Step S301: and storing the original image acquired by the image sensor into a cache queue in the memory.
The image sensor may refer to a device that, through its photoelectric conversion function, converts the optical image on its photosensitive surface into an electrical signal in proportional relationship with that image and generates an image based on the electrical signal; it may also be referred to as a sensor.
The image generated by the image sensor is not subjected to any additional processing and may be referred to as the raw image acquired by the image sensor.
The buffer queue is a block of buffer space with fixed size reserved in the memory and is used for buffering the original image acquired by the image sensor.
Each time the image sensor acquires an original image, the image is stored into the cache queue, so the original image acquired first is stored into the queue first. In addition, since the size of the cache queue is fixed, the number of original images it can hold is also fixed. When the number of cached original images reaches this maximum and a new original image needs to be stored, the original image that was stored into the queue first is cleared from it, releasing cache space for the new image.
In other words, the raw images acquired by the image sensor are stored in a buffer queue in a "first-in first-out" manner.
A more visual description is provided below in connection with fig. 4.
Referring to fig. 4, a schematic diagram of an image storage flow is provided in an embodiment of the present application.
As can be seen, the cache queue consists of 8 buffers and can store 8 original images. The first buffer from left to right may be called the queue tail (rear), or queue entry, and the last buffer from left to right may be called the queue head (front), or queue exit; original images are stored into the queue head or into the idle buffer closest to the queue head. The image storage flow shown in fig. 4 is described below through steps s1-s5.
Step s1: the image sensor does not collect images, and the buffer queue is empty.
Step s2: the image sensor acquires an original image 1, and the original image 1 is stored in a buffer queue.
The cache queue is initially empty, and original image 1 is the 1st image stored into it, so it is stored directly in the last buffer from left to right, that is, at the queue head.
Step s3: the image sensor collects the original image 2, the original image 2 is stored in the buffer queue, and so on, and the original images 3-7 collected by the image sensor are all stored in the buffer queue.
Original image 2 is the 2nd image stored into the cache queue, and original image 1 already occupies the queue head, so original image 2 is stored in the idle buffer closest to the queue head, that is, the second-to-last buffer from left to right.
Similarly, the remaining original images 3-7 are sequentially stored in the buffer queue.
Step s4: the image sensor captures an original image 8, and the original image 8 is stored in a buffer queue.
It can be seen that after the original image 8 is stored in the first buffer (tail of queue) from left to right, all 8 buffers in the buffer queue are occupied, i.e. the buffer queue is full.
Step s5: the image sensor collects original images 9, the original image 1 stored in the buffer queue first is cleared from the buffer queue, the rest of original images 2-8 in the buffer queue move one buffer towards the queue head direction, so that the queue tail is released, and the newly collected original images 9 are stored in the queue tail.
The execution timing of this step will be described below.
In one case, this step may be performed in response to the camera application being turned on.
That is, after the camera application is turned on, the image sensor may begin to capture the original image and store the original image to the cache queue.
In this case, before the instruction triggering time, the image sensor has collected the original image, so that the original image collected before the instruction triggering time is stored in the buffer queue; and after the user clicks the photographing button, the image sensor can further collect the original image, so that the original image collected after the instruction triggering time can be stored in the cache queue.
In another case, this step may be performed when the image sensor receives an image acquisition instruction.
That is, the present step starts to be executed at the instruction receiving timing.
Obviously, in this case, the acquisition time of each original image acquired by the image sensor is located after the instruction trigger time, so the original images stored in the buffer queue are all images acquired after the instruction trigger time.
Step S302: and detecting a first image with unqualified image quality in the detected image corresponding to the original image, and deleting the original image corresponding to the first image from the cache queue.
The detected image corresponding to the original image may be an image obtained after various preprocessing of the original image.
In one case, the above detection image may be obtained by performing at least one of the following processes on the original image:
downsampling, noise reduction, sharpening, and smoothing.
Processing the original image in this way reduces the storage space occupied by the image, or the influence of noise in it, while preserving the image characteristics of the original image, which facilitates the subsequent image quality evaluation.
In this case, the detection image may be an image obtained by downsampling the original image. As the image sensor continuously acquires original images, the detection images obtained by downsampling each original image form an image stream; since a detection image is smaller than its corresponding original image, this stream may be called a tiny stream.
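By way of illustration only, a detection image could be derived from each original frame roughly as in the following sketch, where OpenCV stands in for the actual ISP/HAL pre-processing and the 1/4 scale factor is an assumption:

```python
import cv2

def to_detection_image(original_bgr, scale: float = 0.25):
    """Downsample an original frame into a small detection image for the tiny stream."""
    small = cv2.resize(original_bgr, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)  # downsampling
    # One of the other pre-processing steps named above: noise reduction.
    return cv2.fastNlMeansDenoisingColored(small)
```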
The following describes a specific manner of detecting a first image whose image quality does not reach the standard in a detected image corresponding to an original image.
In one embodiment, a dynamic quality characterization value of the detection image corresponding to the original image may be obtained, and a first image whose image quality does not reach the standard is detected among the detection images based on the obtained dynamic quality characterization value and a first quality threshold.
The dynamic quality characterization value may be obtained based on the motion amplitude of the object in the detected image, and is used for evaluating the quality information of the detected image in the dynamic dimension or the time dimension.
In one case, the dynamic quality characterization value includes at least one of the following information:
1. The motion amplitude of the object in the detection image.
The object may be a person, an animal, or the like, and the object in the detected image may be identified based on various object detection algorithms or a pre-trained object recognition model, which is not limited in the embodiment of the present application.
The motion amplitude of the object may have various meanings based on the difference of the image scene corresponding to the detected image, which will be described by way of example.
For example, if the image scene corresponding to the detected image is a person jumping scene, the motion amplitude may be a jumping height of a person or the like.
For another example, if the image scene corresponding to the detected image is a smile scene of a person, the motion amplitude may be a lifting amplitude of a corner of the person's mouth, and so on.
The manner of determining the image scene corresponding to the detection image is not limited here. In one case, the detection image may be input into a pre-trained image scene classification model to obtain the image scene output by the model.
Thus, for a single detection image, the dynamic quality characterization value can be obtained from the motion amplitude of the object in that image. The motion amplitude reflects, in the time dimension, the motion of the object within a single detection image, and can accurately measure the dynamic quality of the detection image in that dimension; a dynamic quality characterization value obtained from the motion amplitude therefore reflects the dynamic quality of the object in the detection image in the time dimension, which improves the accuracy and comprehensiveness of the obtained value.
2. A first difference between the motion amplitude of the object in the detection image and the motion amplitude of the object in an adjacent image.
The adjacent image is another detection image with the corresponding image acquisition time adjacent to the image acquisition time of the detection image.
The meaning of the motion amplitude of the object in the adjacent image is the same as that of the object in the detected image.
The first difference reflects the difference between the motion amplitudes of the object across several temporally consecutive detection images, that is, how the motion of the object changes over time, and can likewise accurately measure the dynamic quality of a detection image in the time dimension; a dynamic quality characterization value obtained from the first difference therefore also reflects the dynamic quality of the object in the detection image in the time dimension, improving the accuracy and comprehensiveness of the obtained value.
After the dynamic quality characterization values of the detection images corresponding to the original images are obtained, whether the dynamic quality characterization value of each detection image is smaller than the first quality threshold can be judged; if so, the detection image is determined to be a first image whose image quality does not reach the standard.
The first quality threshold may be set by a worker according to experience or actual requirements.
The dynamic quality characterization value of a detection image reflects the quality information of the detection image in the dynamic, or time, dimension. Based on the obtained dynamic quality characterization value and the first quality threshold, a first image whose quality does not reach the standard in the dynamic dimension can therefore be determined in time and deleted from the cache queue, so that the remaining original images stored in the cache queue are all of higher quality, which improves the image quality of the snapshot image generated from them.
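By way of illustration only, the dynamic-quality check might be sketched as follows; how the motion amplitude and the first difference combine into a single characterization value is an assumption, since the present application does not fix a formula:

```python
def dynamic_quality_value(motion_amp: float, neighbour_motion_amp: float) -> float:
    # First difference between the motion amplitudes of adjacent detection images.
    first_difference = abs(motion_amp - neighbour_motion_amp)
    # Larger motion / larger jumps are assumed here to indicate blur,
    # i.e. a lower dynamic quality score (equal weighting assumed).
    return 1.0 / (1.0 + motion_amp + first_difference)

def is_first_image_dynamic(motion_amp: float, neighbour_motion_amp: float,
                           first_quality_threshold: float) -> bool:
    # The detection image is a "first image" (substandard) when its dynamic
    # quality characterization value falls below the first quality threshold.
    return dynamic_quality_value(motion_amp, neighbour_motion_amp) < first_quality_threshold
```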
In another embodiment, a static quality characterization value of the detection image corresponding to the original image may be obtained, and a first image whose image quality does not reach the standard among the detection images may be detected based on the obtained static quality characterization value and a second quality threshold.
The static quality characterization value may be obtained based on static evaluation indices of the detection image and is used to evaluate the quality of the detection image in the static dimension, i.e., the spatial dimension.
In one case, the static quality characterization value may include at least one of the following information:
the auto exposure (AE) convergence degree of the detection image;
the auto focus (AF) convergence degree of the detection image;
the auto white balance (AWB) convergence degree of the detection image;
the sharpness of the detection image;
the contrast of the detection image;
the color saturation of the detection image;
the color uniformity of the detection image.
Therefore, the static quality characterization value can include information corresponding to multiple types of static evaluation indices, so that the quality of the image in the static or spatial dimension is reflected more comprehensively and accurately.
After the static quality characterization values of the detection images corresponding to the original images are obtained, whether the value of each detection image is smaller than the second quality threshold can be judged; if so, that detection image is determined to be a first image whose image quality does not reach the standard.
The second quality threshold may be set empirically or according to actual requirements.
The static quality characterization value reflects the quality of the detection image in the static dimension, i.e., the spatial dimension. Based on the obtained values and the second quality threshold, a first image whose quality does not reach the standard in that dimension can therefore be determined in time and deleted from the cache queue, so that the remaining original images stored in the cache queue are of higher quality, improving the image quality of the snapshot image generated from them.
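As a purely illustrative sketch, the static evaluation indices listed above could be aggregated into a single static quality characterization value as follows; the normalization to [0, 1] and the equal weighting are assumptions of this sketch, not a formula prescribed by the present application.

    def static_quality(ae_convergence, af_convergence, awb_convergence,
                       sharpness, contrast, saturation, color_uniformity):
        # Each index is assumed normalized to [0, 1], higher meaning better.
        metrics = [ae_convergence, af_convergence, awb_convergence,
                   sharpness, contrast, saturation, color_uniformity]
        return sum(metrics) / len(metrics)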
In one embodiment of the present application, the first quality threshold and/or the second quality threshold may be determined based on the length of the cache queue and may be inversely proportional to that length.
For example, the longer the cache queue, the smaller the first quality threshold and/or the second quality threshold may be set; conversely, the shorter the cache queue, the larger the first quality threshold and/or the second quality threshold may be set.
On the one hand, a larger quality threshold can be determined when the cache queue is shorter. Because a shorter queue can store fewer original images, more first images with substandard quality are determined and deleted from the queue under the larger threshold; this improves the utilization of the cache space, and the released space can store other original images of higher quality, improving the image quality of the snapshot image generated from the original images stored in the queue. On the other hand, a smaller quality threshold can be determined when the cache queue is longer. Because a longer queue can store more original images, fewer first images are determined and deleted under the smaller threshold, which increases the number of high-quality original images retained in the queue and likewise improves the image quality of the generated snapshot image.
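The inverse relation between queue length and threshold can be sketched as follows; the specific functional form and the tuning constant k are hypothetical, and any mapping that decreases with the queue length would fit the description above.

    def quality_threshold(queue_length, k=4.0):
        # Shorter queue -> larger threshold (more aggressive culling);
        # longer queue -> smaller threshold. k is an assumed tuning constant.
        return k / max(queue_length, 1)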
In still another embodiment, the dynamic quality characterization value and the static quality characterization value of the detection image corresponding to the original image may both be obtained, and a first image whose image quality does not reach the standard may be detected based on the obtained characterization values and the corresponding quality thresholds.
This embodiment can be obtained by combining the foregoing two embodiments and is not described again here.
In this way, the image quality of the detection image can be evaluated comprehensively from both the dynamic dimension and the static dimension, and the first image whose quality does not reach the standard is determined from the detection images on that basis; the first image is then deleted from the cache queue, so that the remaining original images stored in the cache queue are of higher quality, further improving the image quality of the snapshot image subsequently generated from them.
It should be noted that step S302 may be performed after each original image is stored in the cache queue, or may be performed when the cache queue is detected to be full; this is not limited in the embodiments of the present application.
Step S303: a highlight image is detected among the detection images in the snapshot observation interval.
Depending on the timing at which the image sensor starts acquiring original images in step S301, the snapshot observation interval can take various forms, described below.
1. When the execution timing of step S301 is that the camera application is turned on:
in this case, the image sensor has already acquired original images before the instruction trigger time, so original images acquired before that time are stored in the cache queue; after the user taps the photographing button, the image sensor continues to acquire original images, so original images acquired after the instruction trigger time are also stored in the cache queue.
In this case, the snapshot observation interval may be a period of time before and after the instruction trigger time.
The sub-interval of the snapshot observation interval before the instruction trigger time may be called the preceding interval, and the sub-interval after the instruction trigger time may be called the following interval.
Depending on the instruction trigger time, two cases arise, illustrated by fig. 5a and fig. 5b respectively.
First case:
referring to fig. 5a, a schematic diagram of a first snapshot observation interval according to an embodiment of the present application is provided.
In fig. 5a, T1 represents the acquisition start time of the image sensor, that is, the program start time of the camera application, T2 represents the instruction trigger time, and T3 represents the instruction reception time.
The snapshot observation interval may be a period of time before and after T2, i.e., a period of time indicated by [ T4, T5 ].
Wherein T4 may be called the interval start time of the snapshot observation interval, and [T4, T2] is the preceding interval; T5 may be called the interval end time of the snapshot observation interval, and [T2, T5] is the following interval.
The time T4 may be determined from a first preset length configured for the preceding interval, for example T4 = T2 - the first preset length; the time T5 may be determined from a second preset length configured for the following interval, for example T5 = T2 + the second preset length.
In this case, the instruction trigger time T2 is relatively far from the acquisition start time T1; that is, the length of [T1, T2] is greater than the first preset length.
Second case:
Referring to fig. 5b, a schematic diagram of a second snapshot observation interval according to an embodiment of the present application is provided.
In fig. 5b, T1 represents the acquisition start time of the image sensor, that is, the program start time of the camera application; T2 represents the instruction trigger time; T3 represents the instruction reception time; and T4 represents the interval end time of the snapshot observation interval.
In this case, the instruction trigger time T2 is close to the acquisition start time T1; that is, the length of [T1, T2] is less than the first preset length.
At this time the preceding interval cannot reach the first preset length, so [T1, T2] may be determined directly as the preceding interval to make it as long as possible. To ensure that the snapshot observation interval still reaches a certain length, the following interval may be extended accordingly, and [T2, T4] is determined as the following interval.
For example, in this case the length of the following interval may be the sum of the second preset length and a target length, where the target length is the difference between the first preset length and the length of [T1, T2].
2. When the execution timing of step S301 is that an image acquisition instruction is received:
In this case, the acquisition time of every original image acquired by the image sensor is after the instruction trigger time, so all the original images stored in the cache queue are images acquired after that time.
In this case, the snapshot observation interval may be a period of time after the above-described instruction trigger timing.
Referring to fig. 5c, a third snapshot observation interval schematic diagram provided in an embodiment of the present application is shown.
In fig. 5c, T1 represents the instruction trigger time, T2 represents the acquisition start time, that is, the instruction reception time, and T3 represents the interval end time of the snapshot observation interval.
In this case, the snapshot observation interval may be a time interval shown by [ T2, T3], and T2 is also the interval start time of the snapshot observation interval.
Wherein T3 may be determined based on a third preset length of the snapshot observation interval, for example, T3 = T2 + the third preset length.
In this case, the acquisition start time may also be a time later than the instruction reception time; this is not limited here.
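A minimal sketch of the interval computations described above, for both acquisition timings; all times are in the same unit, and the preset lengths are hypothetical configuration values rather than values fixed by the present application.

    def interval_camera_open(t1, t2, first_preset, second_preset):
        # Case 1: acquisition starts when the camera application is opened.
        # t1: acquisition start time, t2: instruction trigger time.
        if t2 - t1 > first_preset:  # fig. 5a: the preceding interval fits
            return (t2 - first_preset, t2 + second_preset)
        # fig. 5b: preceding interval too short, extend the following interval
        target = first_preset - (t2 - t1)
        return (t1, t2 + second_preset + target)

    def interval_on_instruction(t2, third_preset):
        # Case 2: acquisition starts at the instruction reception time t2.
        return (t2, t2 + third_preset)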
After the snapshot observation interval is determined, the detection images whose acquisition times fall within it can be identified; these are referred to as the detection images in the snapshot observation interval, and the highlight image is selected from among them.
Specifically, the detection image with the largest dynamic quality characterization value may be determined as the highlight image, or the detection image with the largest static quality characterization value, or the detection image with the largest sum of the dynamic and static quality characterization values.
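The selection just described could look as follows; the dictionary keys and the use of the sum of the two characterization values are assumptions of this sketch (either value alone may equally be used, as stated above).

    def select_highlight(detections, interval):
        # detections: list of dicts with "time", "dynamic_quality", and
        # "static_quality" keys; assumed non-empty within the interval.
        start, end = interval
        candidates = [d for d in detections if start <= d["time"] <= end]
        return max(candidates,
                   key=lambda d: d["dynamic_quality"] + d["static_quality"])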
Step S304: and selecting an original image from the cache queue based on the information of the highlight image to obtain a second image.
Since each detection image corresponds to an original image and the highlight image is selected from among the detection images, the highlight image also corresponds to an original image; an original image can therefore be selected from the cache queue based on the information of the highlight image.
The information of the highlight image may be any information that can identify the corresponding original image, such as the acquisition timestamp of that original image, its image frame number, or the sequence number with which it is stored in the cache queue, so that the original image can be selected from the cache queue based on this information.
In one embodiment, the original image whose acquisition order information is closest to that corresponding to the highlight image may be selected from the cache queue to obtain a third image, and the second image is then determined based on the third image. A specific implementation is given in step S705 of the embodiment shown in fig. 7 and is not detailed here.
Step S305: a snapshot image is generated based on the second image.
Specifically, if the number of selected second images is 1, the second image may be directly determined as the snapshot image.
In one embodiment of the present application, if the number of selected second images is greater than 1, the snapshot image may be generated based on the plurality of second images, as described in step S706 of the embodiment shown in fig. 7.
As can be seen from the above, when the scheme provided by the embodiment of the present application is applied to image capture, after the original images acquired by the image sensor are stored in the cache queue in the memory, a first image with substandard quality among the corresponding detection images can be detected and the original image corresponding to it deleted from the cache queue. An original image can then be selected from the cache queue based on the information of the highlight image among the detection images in the snapshot observation interval, and the snapshot image is finally generated based on the selected original image.
After the original images corresponding to low-quality first images are removed from the cache queue, the remaining original images in the queue are all of higher quality, and part of the cache space is released and can be used to store other original images of higher quality. The snapshot image can therefore be generated from higher-quality original images in the cache queue; the cache space occupied by low-quality first images is released in time without affecting the quality of the generated snapshot image, which reduces the memory required when generating the snapshot image.
In one embodiment of the application, the length of the buffer queue is less than the length of the snapshot observation interval.
First, a relation between a snapshot observation interval and a cache queue is described.
As is clear from the foregoing explanation of concepts, the snapshot observation interval is the time interval to which the acquisition times of the original images intended for generating the snapshot image belong, and the cache queue is used to store the original images.
The cache queue is usually a reserved, fixed cache space. If it is insufficient to hold all the original images in the snapshot observation interval, original images stored earlier will be cleared from the queue as later-acquired images are stored into it.
Therefore, to ensure that every original image in the snapshot observation interval can be stored in the cache queue, so that any of them can later be selected from it, the related art often sets the length of the cache queue to be greater than or equal to the length of the snapshot observation interval.
When the scheme provided by the embodiment of the present application is applied, original images of lower quality are removed from the cache queue, and the released cache space can be used to store other original images acquired in the snapshot observation interval; the length of the cache queue can therefore be smaller than the length of the snapshot observation interval. Compared with the related art, image capture can be realized with a shorter cache queue, which reduces the cache queue length that must be reserved and improves memory utilization.
The advantages of the solution provided by the embodiments of the present application compared to the related art will be further described with reference to fig. 6a and 6b.
Referring first to fig. 6a, a schematic diagram of an image capturing process in the related art is shown.
In fig. 6a, the length of the cache queue is 4. P1-P5 denote original images in the snapshot observation interval, arranged from earliest to latest acquisition time; p1-p5 denote the detection images in the tiny stream corresponding to P1-P5; and p1 can be determined to be the highlight image based on the image quality characterization values of p1-p5.
As can be seen from the left branch of fig. 6a, P1-P4 are first stored in the cache queue in order of acquisition time. Then, to store P5, the earliest-stored image P1 must be cleared from the queue according to the first-in-first-out principle, so the original images finally stored in the queue are P2-P5.
Since P1 is no longer in the cache queue, the original image P1 corresponding to the highlight image p1 cannot be fetched from the queue based on the information of p1.
As can be seen from the above, in the related art, if the length of the cache queue is smaller than the length of the snapshot observation interval, some original images in the interval cannot be kept in the queue. The original image corresponding to the highlight image may then be impossible to fetch from the queue based on the information of the highlight image, which reduces the quality of the snapshot image generated from the image selected instead.
Referring next to fig. 6b, a schematic diagram of an image capturing process according to an embodiment of the present application is shown.
In fig. 6b, the length of the cache queue is likewise 4. P1-P5 denote original images in the snapshot observation interval, arranged from earliest to latest acquisition time; p1-p5 denote the detection images in the tiny stream corresponding to P1-P5; and p1 can be determined to be the highlight image based on the image quality characterization values of p1-p5.
After each original image is stored in the cache queue, it can be determined whether its detection image is a first image whose image quality does not reach the standard; fig. 6b shows this determination only for the original image P2.
As can be seen from the left branch of fig. 6b, P1 is stored in the cache queue first and P2 next, in order of acquisition time. At this point p2 is determined to be a first image with substandard image quality, so P2 can be deleted from the queue, releasing cache space for one original image; the subsequent P3-P5 can then be stored in turn. The original images finally stored in the cache queue are thus P1 and P3-P5.
Then, since P1 is stored in the cache queue, the original image P1 corresponding to the highlight image p1 can be fetched from the queue based on the information of p1.
As can be seen from the above, with the scheme provided by the embodiment of the present application, a first image whose image quality does not reach the standard is cleared from the cache queue in time, releasing part of the cache space to hold the remaining original images in the snapshot observation interval. Even if the length of the cache queue is smaller than the length of the snapshot observation interval, the queue can therefore hold all the higher-quality original images in the interval other than the first images, ensuring that these higher-quality originals can later be fetched from the queue.
In summary, compared with the related art, the scheme provided by the embodiment of the present application allows the length of the cache queue to be smaller than the length of the snapshot observation interval; that is, it reduces the cache queue length that must be reserved for image capture and improves memory utilization.
On the basis of the embodiment shown in fig. 3, when an original image is selected from the cache queue based on the information of the highlight image, the original image whose acquisition order information is closest to that corresponding to the highlight image may be selected. In view of this, an embodiment of the present application provides a second image capturing method.
Referring to fig. 7, a flowchart of a second image capturing method according to an embodiment of the present application is shown, where the method includes the following steps S701-S706.
Step S701: and storing the original image acquired by the image sensor into a cache queue in the memory.
Step S702: and detecting a first image with unqualified image quality in the detected image corresponding to the original image, and deleting the original image corresponding to the first image from the cache queue.
Step S703: a highlight image in the detected image in the snap observation section is detected.
Step S704: and selecting an original image with the closest acquisition sequence information corresponding to the highlight image from the cache queue to obtain a third image.
The acquisition order information corresponding to an original image may be any information that can identify that original image, such as its acquisition timestamp, its image frame number, or the sequence number with which it is stored in the cache queue; the acquisition order information corresponding to the highlight image is the acquisition order information of the original image corresponding to the highlight image.
In this step, when selecting from the cache queue the original image whose acquisition order information is closest to that corresponding to the highlight image, there are two cases:
In one case, an original image whose acquisition order information is identical to that corresponding to the highlight image is stored in the cache queue; that original image can be selected directly.
In the other case, no original image whose acquisition order information is identical to that corresponding to the highlight image is stored in the cache queue; the original image whose acquisition order information is closest to that corresponding to the highlight image is then selected.
Here, selecting the original image whose acquisition order information is closest to that corresponding to the highlight image may mean selecting the original image for which the absolute difference between the two pieces of acquisition order information is smallest.
For example, if the acquisition order information corresponding to the highlight image is an acquisition timestamp t1, the original image whose acquisition timestamp t2 has the smallest absolute difference from t1 can be selected from the cache queue.
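For example, with acquisition timestamps as the acquisition order information, the selection can be sketched as follows; the field name timestamp is illustrative.

    def select_third_image(queue_frames, highlight_timestamp):
        # queue_frames: the original images currently in the cache queue,
        # each carrying its acquisition timestamp; assumed non-empty.
        return min(queue_frames,
                   key=lambda f: abs(f["timestamp"] - highlight_timestamp))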
Step S705: based on the third image, a second image is determined.
Specifically, the second image may be determined based on the third image in the following manner.
In one embodiment, a second difference between the motion amplitude of the object in the third image and the motion amplitude of the object in a fourth image may be determined; if the second difference is greater than a difference threshold, the third image is determined to be the second image; otherwise, both the third image and the fourth image are determined to be second images.
Wherein the fourth image is: an image adjacent to the third image in the cache queue.
The difference threshold may be set empirically or according to actual requirements.
The concept of the motion amplitude has been described in step S302 of the embodiment shown in fig. 3, and the second difference may be obtained with reference to the first difference obtained in that step; details are not repeated here.
When the second difference is greater than the difference threshold, the third image differs considerably from the adjacent fourth image; determining the third image alone as the second image for generating the snapshot image prevents the fourth image from adversely affecting the generation of the snapshot image. When the second difference is not greater than the difference threshold, the third and fourth images are close to each other; determining both as second images allows the snapshot image to be generated more comprehensively from the image features of both images, improving the quality of the generated snapshot image.
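A sketch of this decision, assuming a callable motion_amplitude and an externally chosen difference threshold:

    def determine_second_images(third, fourth, motion_amplitude, diff_threshold):
        second_diff = abs(motion_amplitude(third) - motion_amplitude(fourth))
        if second_diff > diff_threshold:
            return [third]          # the fourth image differs too much
        return [third, fourth]      # similar frames: use both for fusion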
In another embodiment, the third image may be determined directly as the second image.
Step S706: a snapshot image is generated based on the second image.
As is known from step S705 above, in some cases there may be a plurality of second images; in that case, image fusion may be performed on the plurality of second images to generate the snapshot image.
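Purely as an illustration, simple pixel-wise averaging can stand in for the image fusion mentioned above; the present application does not fix the fusion algorithm here, and the NumPy-based averaging is an assumption of this sketch.

    import numpy as np

    def generate_snapshot(second_images):
        # second_images: list of images as NumPy arrays of identical shape.
        if len(second_images) == 1:
            return second_images[0]
        stack = np.stack([img.astype(np.float32) for img in second_images])
        return stack.mean(axis=0).astype(second_images[0].dtype)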
In this way, the original image whose acquisition order information is closest to that corresponding to the highlight image can be selected from the cache queue to obtain the third image. Because such an original image is usually either the original image corresponding to the highlight image or the original image closest to it, the third image obtained in this way is strongly associated with the highlight image; a second image strongly associated with the highlight image can then be obtained based on the third image, improving the quality of the snapshot image generated from the second image.
The user information involved in the embodiments of the present application is information authorized by the user; the acquisition, storage, use, processing, transmission, provision, and disclosure of such information comply with the relevant laws and regulations and do not violate public order and good customs.
In a specific implementation, the present application further provides a computer storage medium that can store a program; when the program runs, it controls the device on which the computer-readable storage medium resides to perform some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product includes executable instructions, where the executable instructions when executed on a computer cause the computer to perform some or all of the steps in the method embodiment described above.
In a specific implementation, the embodiment of the application further provides a terminal, which comprises:
one or more processors, image sensors, and memory;
The memory is coupled to the one or more processors for storing computer program code comprising computer instructions that are invoked by the one or more processors to cause the terminal to perform the aforementioned image capture method.
As shown in fig. 8, the present application further provides a chip system applied to the terminal 100. The chip system includes one or more processors 801 configured to invoke computer instructions so that the terminal 100 inputs data to be processed into the chip system; the chip system processes the data based on the image capturing method provided by the embodiments of the present application and outputs the processing result.
In one possible implementation, the chip system further includes input and output interfaces for inputting and outputting data.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of the present application, a processing system includes any system having a processor, such as a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy disks, optical discs, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memory used to transmit information over the internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the drawings of the specification. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that each unit/module mentioned in the device embodiments of the present application is a logic unit/module. Physically, a logic unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of several physical units/modules; the physical implementation of the logic unit/module itself is not what matters most, and the combination of functions implemented by the logic unit/module is the key to solving the technical problem posed by the present application. Furthermore, to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are less closely related to solving that technical problem; this does not mean that other units/modules are absent from the above device embodiments.
It should be noted that in the examples and description of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprises", "comprising", and any variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other identical elements in the process, method, article, or device that comprises it.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.

Claims (9)

1. A method of capturing images, the method comprising:
in response to the camera application being opened, storing the original image acquired by the image sensor into a cache queue in a memory;
detecting a first image whose image quality does not reach the standard among detection images corresponding to the original image, and deleting the original image corresponding to the first image from the cache queue, wherein the detection image is: an image obtained by preprocessing the original image;
detecting a highlight image among the detection images in a snapshot observation interval, wherein the number of original images that can be stored in the cache queue is smaller than the number of original images whose acquisition times are within the snapshot observation interval;
Selecting an original image from the cache queue based on the information of the highlight image to obtain a second image;
Generating a snap shot image based on the second image;
wherein the detecting a first image whose image quality does not reach the standard among the detection images corresponding to the original image comprises:
obtaining a dynamic quality characterization value and/or a static quality characterization value of the detection image corresponding to the original image; and detecting, based on the obtained quality characterization value and a quality threshold, a first image whose image quality does not reach the standard among the detection images, wherein the quality threshold is determined based on the length of the cache queue and is inversely proportional to the length of the cache queue;
the selecting an original image from the cache queue based on the information of the highlight image to obtain a second image includes:
selecting, from the cache queue, an original image whose acquisition order information is closest to the acquisition order information corresponding to the highlight image, to obtain a third image; and determining a second image based on the third image.
2. The method of claim 1, wherein the dynamic quality characterization value comprises at least one of the following information:
the motion amplitude of the object in the detection image;
a first difference between the motion amplitude of the object in the detection image and the motion amplitude of the object in an adjacent image.
3. The method of claim 1, wherein the static quality characterization value comprises at least one of the following information:
the automatic exposure AE convergence degree of the detection image;
the automatic focusing AF convergence degree of the detection image;
the automatic white balance AWB convergence degree of the detection image;
the sharpness of the detection image;
the contrast of the detection image;
the color saturation of the detection image;
the color uniformity of the detection image.
4. The method of claim 1, wherein the determining a second image based on the third image comprises:
determining a second difference between the motion amplitude of the object in the third image and the motion amplitude of the object in a fourth image, wherein the fourth image is: an image adjacent to the third image in the cache queue;
if the second difference is greater than a difference threshold, determining the third image as a second image;
otherwise, determining the third image and the fourth image as the second image.
5. A method according to any one of claims 1-3, wherein the detection image is obtained by subjecting the original image to at least one of the following:
downsampling, noise reduction, sharpening, and smoothing.
6. A terminal, comprising:
one or more processors, image sensors, and memory;
The memory is coupled with the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors invoke to cause the terminal to perform the method of any of claims 1-5.
7. A computer readable storage medium comprising a computer program which, when run on a terminal, causes the terminal to perform the method of any of claims 1 to 5.
8. A computer program product comprising executable instructions which, when executed on a terminal, cause the terminal to perform the method of any of claims 1 to 5.
9. A chip system for a terminal, the chip system comprising one or more processors, wherein the one or more processors are configured to invoke computer instructions to cause the terminal to input data into the chip system, and the chip system processes the data by performing the method of any of claims 1 to 5 and outputs the processing result.
CN202310939433.2A 2023-07-27 2023-07-27 Image capturing method, terminal, storage medium and program product Active CN117692791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310939433.2A CN117692791B (en) 2023-07-27 2023-07-27 Image capturing method, terminal, storage medium and program product

Publications (2)

Publication Number Publication Date
CN117692791A CN117692791A (en) 2024-03-12
CN117692791B true CN117692791B (en) 2024-10-18

Family

ID=90135960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310939433.2A Active CN117692791B (en) 2023-07-27 2023-07-27 Image capturing method, terminal, storage medium and program product

Country Status (1)

Country Link
CN (1) CN117692791B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112771612A (en) * 2019-09-06 2021-05-07 华为技术有限公司 Method and device for shooting image
CN113938602A (en) * 2021-09-08 2022-01-14 荣耀终端有限公司 Image processing method, electronic device, chip and readable storage medium
CN114827342A (en) * 2022-03-15 2022-07-29 荣耀终端有限公司 Video processing method, electronic device and readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172296A (en) * 2017-06-22 2017-09-15 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN110147792B (en) * 2019-05-22 2021-05-28 齐鲁工业大学 Medicine package character high-speed detection system and method based on memory optimization
CN113329175A (en) * 2021-05-21 2021-08-31 浙江大华技术股份有限公司 Snapshot method, device, electronic device and storage medium
CN116320783B (en) * 2022-09-14 2023-11-14 荣耀终端有限公司 Method for capturing images in video and electronic equipment

Also Published As

Publication number Publication date
CN117692791A (en) 2024-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant