
CN118509720B - Image processing method, electronic device, storage medium and program product - Google Patents


Info

Publication number
CN118509720B
Authority
CN
China
Prior art keywords
image
sensitivity calibration
channel
calibration matrix
pixel
Prior art date
Legal status
Active
Application number
CN202311733874.3A
Other languages
Chinese (zh)
Other versions
CN118509720A (en)
Inventor
眭新雨
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311733874.3A
Publication of CN118509720A
Application granted
Publication of CN118509720B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/50 Constructional details
    • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Adjustment Of Camera Lenses (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The embodiment of the application provides an image processing method, an electronic device, a storage medium and a program product, which are applied to the field of electronic technology. In the method, a first image acquired by the camera while the optical anti-shake module is enabled is obtained, and a sensitivity calibration matrix corresponding to the first image is determined according to a first timestamp at which acquisition of the first image starts, a second timestamp at which acquisition of the first image ends, the first coordinate of the position of the optical anti-shake module after each movement during the acquisition of the first image, and a plurality of pre-calibrated sensitivity calibration matrices; the first image is then calibrated with this matrix. The plurality of sensitivity calibration matrices are calibrated with different relative positions between the optical center of the lens and the optical center of the N-pixel-in-one image sensor. In this way, the embodiment of the application can mitigate the image-quality degradation caused by uneven light received by the N same-color pixels when the optical anti-shake module moves, and improve the image quality of the second image obtained after calibration.

Description

Image processing method, electronic device, storage medium, and program product
Technical Field
The present application relates to the field of electronic technology, and in particular, to an image processing method, an electronic device, a storage medium, and a program product.
Background
With the continuous development of electronic technology, electronic devices such as mobile phones and tablet computers become a common tool in daily life and work of people. Currently, some electronic devices are provided with a camera, and a photographing or video recording function is provided for a user based on the camera.
In some cameras of electronic devices, an optical anti-shake (optical image stabilization, OIS) module and an N-pixel-in-one image sensor may be disposed, and in a pixel array of the N-pixel-in-one image sensor, adjacent N same-color pixels may share the same microlens.
However, when the camera starts the optical anti-shake module to perform anti-shake, the light received by the N same-color pixels is uneven, so that the image quality of the image collected by the camera is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, electronic equipment, a storage medium and a program product, which are used for calibrating a first image acquired by a camera by adopting a sensitivity calibration matrix under the condition that the camera starts an optical anti-shake module to perform anti-shake, so as to improve the image quality of the image.
In a first aspect, an embodiment of the present application provides an image processing method applied to an electronic device. The electronic device includes a camera, the camera includes an optical anti-shake module, a lens and an N-pixel-in-one image sensor, the N-pixel-in-one image sensor includes a microlens array and a pixel array, the microlens array includes a plurality of microlenses, each microlens covers N adjacent same-color pixels in the pixel array, and N is an integer greater than 1. The image processing method includes: the electronic device obtains a first image acquired while the camera has the optical anti-shake module enabled; the electronic device obtains a first timestamp at which acquisition of the first image starts and a second timestamp at which acquisition of the first image ends; the electronic device obtains the first coordinates of the positions of the optical anti-shake module after each movement during the acquisition of the first image; the electronic device determines a sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and a plurality of sensitivity calibration matrices, where the relative positions between the optical center of the lens and the optical center of the N-pixel-in-one image sensor are different when the plurality of sensitivity calibration matrices are calibrated; and the electronic device calibrates the first image with the sensitivity calibration matrix to obtain a second image.
In this way, when the camera enables the optical anti-shake module for anti-shake, the sensitivity calibration matrix is calculated from the first timestamp, the second timestamp, the first coordinates of the positions of the optical anti-shake module after each movement, and the plurality of sensitivity calibration matrices, and is then used to calibrate the first image acquired by the camera. This overcomes the problem that calibrating the first image with a first sensitivity calibration matrix obtained at a single optical displacement fails, mitigates the poor image quality caused by uneven light received by the N same-color pixels when the optical anti-shake module moves, makes the sensitivities of the N same-color pixels in the calibrated second image nearly consistent, and improves the image quality of the second image obtained after calibration.
In a possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and the plurality of sensitivity calibration matrices as follows: the electronic device determines the number of movements of the optical anti-shake module during the acquisition of the first image according to the first timestamp and the second timestamp; the electronic device determines, according to the number of movements and each first coordinate, the second coordinate of the centroid of the positions at which the optical anti-shake module is located after each movement; and the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the second coordinate and the plurality of sensitivity calibration matrices. The centroid of the positions reached by the optical anti-shake module during the acquisition of the first image represents, to some extent, the average position of the lens during the acquisition; therefore, determining the sensitivity calibration matrix from the second coordinate of this centroid yields a better calibration effect and further improves the image quality of the second image obtained after calibration.
In one possible implementation, the electronic device determines the number of movements of the optical anti-shake module during the acquisition of the first image according to the first timestamp and the second timestamp as follows: the electronic device calculates the time interval between the second timestamp and the first timestamp, and determines the ratio between the time interval and the movement frequency of the optical anti-shake module as the number of movements of the optical anti-shake module during the acquisition of the first image. Determining the number of movements from this ratio keeps the calculation simple.
In one possible implementation, the electronic device determines the second coordinate of the centroid of the positions at which the optical anti-shake module is located after each movement according to the number of movements and each first coordinate as follows: the electronic device calculates the second coordinate according to the following formula:
S_n = (P_1 + P_2 + ... + P_n) / n
Wherein S_n is the second coordinate, n is the number of movements, and P_i is the first coordinate of the position of the optical anti-shake module after the i-th movement. That is, the first coordinates of the positions of the optical anti-shake module after the n movements are summed, and the sum is divided by the number of movements to obtain the second coordinate of the centroid of those positions, so the calculation of the second coordinate is simple.
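As an illustration only (not taken from the patent; the function name, the per-movement interval parameter and the guard are assumptions), the following Python sketch derives the number of movements from the two timestamps and then computes the centroid of the recorded optical anti-shake positions:

```python
from typing import List, Tuple

def ois_centroid(first_ts: float,
                 second_ts: float,
                 move_interval_s: float,
                 positions: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Centroid S_n of the OIS positions recorded during exposure of the
    first image.

    first_ts, second_ts : timestamps (s) at which acquisition starts and ends
    move_interval_s     : assumed time between two successive OIS movements
    positions           : first coordinates P_i of the OIS module after each
                          movement, as (x, y) pairs (must be non-empty)
    """
    # Number of movements during the exposure, from the timestamp interval.
    n = int((second_ts - first_ts) / move_interval_s)
    n = max(1, min(n, len(positions)))  # guard against missing samples

    # Centroid: S_n = (P_1 + P_2 + ... + P_n) / n
    xs = [p[0] for p in positions[:n]]
    ys = [p[1] for p in positions[:n]]
    return sum(xs) / n, sum(ys) / n
```

The x and y values of this centroid are the two coordinate components (S_nx, S_ny) that later select and weight the pre-calibrated sensitivity calibration matrices.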
In a possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the second coordinate and the plurality of sensitivity calibration matrices as follows: the electronic device decomposes the second coordinate along the directions in which the optical anti-shake module can move to obtain a first coordinate component in a first movable direction and a second coordinate component in a second movable direction, and determines the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices. The first movable direction and the second movable direction are perpendicular to each other, and both are perpendicular to the optical axis direction of the camera. Determining the sensitivity calibration matrix from the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices makes the calculated sensitivity calibration matrix more accurate and further improves the image quality of the second image obtained after calibration.
In one possible implementation, the plurality of sensitivity calibration matrices include a first sensitivity calibration matrix, a second sensitivity calibration matrix, a third sensitivity calibration matrix, a fourth sensitivity calibration matrix and a fifth sensitivity calibration matrix, where the first movable direction includes a first direction and a second direction that are opposite to each other, and the second movable direction includes a third direction and a fourth direction that are opposite to each other. When the first sensitivity calibration matrix is calibrated, the optical center of the lens and the optical center of the N-pixel-in-one image sensor coincide along the optical axis direction of the camera. When the second sensitivity calibration matrix is calibrated, the optical center of the lens is offset from the optical center of the N-pixel-in-one image sensor by a first distance along the first direction and by a second distance along the third direction, where the first distance is the maximum distance the lens can move along the first direction and the second distance is the maximum distance the lens can move along the third direction. When the third sensitivity calibration matrix is calibrated, the optical center of the lens is offset by the first distance along the first direction and by a third distance along the fourth direction, where the third distance is the maximum distance the lens can move along the fourth direction. When the fourth sensitivity calibration matrix is calibrated, the optical center of the lens is offset by a fourth distance along the second direction and by the third distance along the fourth direction, where the fourth distance is the maximum distance the lens can move along the second direction. When the fifth sensitivity calibration matrix is calibrated, the optical center of the lens is offset by the fourth distance along the second direction and by the second distance along the third direction. In this way, the sensitivity calibration matrix can be calculated from the plurality of sensitivity calibration matrices, which overcomes the problem that calibrating the first image with the first sensitivity calibration matrix obtained at a single optical displacement fails, so that the first image acquired by the camera can be accurately calibrated wherever the optical anti-shake module moves once it is enabled.
In one possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices as follows: in the case that the first coordinate component is greater than or equal to 0 and the second coordinate component is greater than or equal to 0, the electronic device calculates the sensitivity calibration matrix according to the following formula:
Wherein Q_n is the sensitivity calibration matrix, Q_0 is the first sensitivity calibration matrix, Q_1 is the second sensitivity calibration matrix, Q_2 is the third sensitivity calibration matrix, Q_4 is the fifth sensitivity calibration matrix, S_nx is the first coordinate component, S_ny is the second coordinate component, S_xmax1 is the first distance, and S_ymax1 is the second distance. In this way, the sensitivity calibration matrix can be calculated when the optical center of the lens is shifted toward the first direction and/or the third direction relative to the optical center of the N-pixel-in-one image sensor.
In one possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices as follows: in the case that the first coordinate component is greater than 0 and the second coordinate component is less than 0, the electronic device calculates the sensitivity calibration matrix according to the following formula:
Wherein Q_n is the sensitivity calibration matrix, Q_0 is the first sensitivity calibration matrix, Q_1 is the second sensitivity calibration matrix, Q_2 is the third sensitivity calibration matrix, Q_3 is the fourth sensitivity calibration matrix, S_nx is the first coordinate component, S_ny is the second coordinate component, S_xmax1 is the first distance, and S_ymax2 is the third distance. In this way, the sensitivity calibration matrix can be calculated when the optical center of the lens is shifted toward the first direction and the fourth direction relative to the optical center of the N-pixel-in-one image sensor.
In one possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices as follows: in the case that the first coordinate component is less than or equal to 0 and the second coordinate component is less than or equal to 0, the electronic device calculates the sensitivity calibration matrix according to the following formula:
Wherein Q_n is the sensitivity calibration matrix, Q_0 is the first sensitivity calibration matrix, Q_2 is the third sensitivity calibration matrix, Q_3 is the fourth sensitivity calibration matrix, Q_4 is the fifth sensitivity calibration matrix, S_nx is the first coordinate component, S_ny is the second coordinate component, S_xmax2 is the fourth distance, and S_ymax2 is the third distance. In this way, the sensitivity calibration matrix can be calculated when the optical center of the lens is shifted toward the second direction and/or the fourth direction relative to the optical center of the N-pixel-in-one image sensor.
In one possible implementation, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices as follows: in the case that the first coordinate component is less than 0 and the second coordinate component is greater than 0, the electronic device calculates the sensitivity calibration matrix according to the following formula:
Wherein Q_n is the sensitivity calibration matrix, Q_0 is the first sensitivity calibration matrix, Q_1 is the second sensitivity calibration matrix, Q_3 is the fourth sensitivity calibration matrix, Q_4 is the fifth sensitivity calibration matrix, S_nx is the first coordinate component, S_ny is the second coordinate component, S_xmax2 is the fourth distance, and S_ymax1 is the second distance. In this way, the sensitivity calibration matrix can be calculated when the optical center of the lens is shifted toward the second direction and the third direction relative to the optical center of the N-pixel-in-one image sensor.
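The formula images of the four cases above are not reproduced in this text. Purely as a hedged illustration, the Python sketch below implements one bilinear-style interpolation that is consistent with the matrices listed for each quadrant (Q_0, Q_1, Q_2, Q_4 for the first case, and so on); the exact weighting used by the patent may differ, and all names are assumptions:

```python
import numpy as np

def blend_calibration_matrices(s_nx, s_ny, q0, q1, q2, q3, q4,
                               s_xmax1, s_xmax2, s_ymax1, s_ymax2):
    """Assumed bilinear blend of the five pre-calibrated matrices for the
    quadrant containing the centroid components (s_nx, s_ny).

    q0 is calibrated with the optical centers coinciding; q1..q4 are the
    corner matrices for the (first, third), (first, fourth), (second, fourth)
    and (second, third) direction offsets; s_xmax1, s_xmax2, s_ymax1, s_ymax2
    are the maximum travels along the first, second, third and fourth
    directions respectively (first direction mapped to +x, third to +y).
    """
    corners = {(+1, +1): q1, (+1, -1): q2, (-1, -1): q3, (-1, +1): q4}
    sx = 1 if s_nx >= 0 else -1
    sy = 1 if s_ny >= 0 else -1

    x_max = s_xmax1 if sx > 0 else s_xmax2
    y_max = s_ymax1 if sy > 0 else s_ymax2
    u = min(abs(s_nx) / x_max, 1.0)  # normalized displacement along x
    v = min(abs(s_ny) / y_max, 1.0)  # normalized displacement along y

    corner = corners[(sx, sy)]
    edge_x = 0.5 * (corner + corners[(sx, -sy)])  # node on the x half-axis
    edge_y = 0.5 * (corner + corners[(-sx, sy)])  # node on the y half-axis

    # Bilinear interpolation over the quadrant: center, two axis nodes, corner.
    return ((1 - u) * (1 - v) * q0
            + u * (1 - v) * edge_x
            + (1 - u) * v * edge_y
            + u * v * corner)
```

With s_nx = s_ny = 0 this returns Q_0, and at a corner of the travel range it returns the corresponding corner matrix, which is the boundary behaviour one would expect from the five calibration positions.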
In one possible implementation, the pixel array includes a plurality of pixel sets, each pixel set includes pixel units corresponding to M color channels, each pixel unit includes pixels corresponding to N color sub-channels, the pixels corresponding to the N color sub-channels are N adjacent same-color pixels, and M is an integer greater than 1. The sensitivity calibration matrix includes first sensitivity calibration sub-matrices corresponding to the H color sub-channels, where H is equal to the product of M and N.
In a possible implementation, the electronic device calibrates the first image with the sensitivity calibration matrix to obtain the second image as follows: the electronic device splits the first image according to the H color sub-channels to obtain H first single-channel images; the electronic device resizes the first sensitivity calibration sub-matrices corresponding to the H color sub-channels to obtain second sensitivity calibration sub-matrices corresponding to the H color sub-channels, where the size of each second sensitivity calibration sub-matrix is equal to the size of the first single-channel image; the electronic device calibrates each first single-channel image with the second sensitivity calibration sub-matrix of the corresponding color sub-channel to obtain H second single-channel images; and the electronic device merges the H second single-channel images to obtain the second image. Because the pixels corresponding to different color sub-channels receive uneven light when the camera acquires the first image, calibrating each first single-channel image with the second sensitivity calibration sub-matrix of its own color sub-channel further improves the image quality of the calibrated second image.
In one possible implementation, the electronic device calibrates each first single-channel image with the second sensitivity calibration sub-matrix of the corresponding color sub-channel to obtain the H second single-channel images as follows: for each color sub-channel, the electronic device multiplies each calibration parameter in the second sensitivity calibration sub-matrix by the pixel value at the corresponding position in the first single-channel image to obtain the second single-channel image. Calibrating the single-channel image as a product of calibration parameters and pixel values keeps the calibration simple.
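As a rough illustration of this per-channel flow (a sketch under assumptions, not the patent's implementation: a four-pixel-in-one quad-Bayer layout with 16 sub-channels, OpenCV used only to resize the sub-matrices, and all names hypothetical):

```python
import numpy as np
import cv2

def split_subchannels(raw: np.ndarray) -> dict:
    """Split a quad-Bayer raw frame into 16 sub-channel planes; sub-channel
    (r, c) collects the pixel at row offset r and column offset c of every
    4x4 pixel set."""
    return {(r, c): raw[r::4, c::4] for r in range(4) for c in range(4)}

def merge_subchannels(planes: dict, shape) -> np.ndarray:
    """Inverse of split_subchannels."""
    out = np.empty(shape, dtype=np.float32)
    for (r, c), plane in planes.items():
        out[r::4, c::4] = plane
    return out

def calibrate(raw: np.ndarray, sub_matrices: dict) -> np.ndarray:
    """Multiply each sub-channel plane by its calibration sub-matrix, after
    resizing the sub-matrix to the plane size (the 'second' sub-matrix)."""
    planes = split_subchannels(raw.astype(np.float32))
    calibrated = {}
    for key, plane in planes.items():
        gain = cv2.resize(sub_matrices[key].astype(np.float32),
                          (plane.shape[1], plane.shape[0]),
                          interpolation=cv2.INTER_LINEAR)
        calibrated[key] = plane * gain
    return merge_subchannels(calibrated, raw.shape)
```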
In one possible implementation, before the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and the plurality of sensitivity calibration matrices, the method further includes: the plurality of sensitivity calibration matrices are calibrated in advance, and each sensitivity calibration matrix includes sensitivity calibration sub-matrices corresponding to the H color sub-channels. Each calibration parameter in a sensitivity calibration matrix is calculated from each pixel mean in a single-channel mean image and the pixel value at the corresponding position in the single-channel test image corresponding to that single-channel mean image. Each pixel mean in the single-channel mean image is the average of the pixel values at the same position in the N single-channel test images that belong to the same color channel among the H single-channel test images. The H single-channel test images are obtained by splitting a test image according to the H color sub-channels after the size of the test image has been reduced, and the test image is acquired with the focusing position of the lens at a preset focusing position and with the relative position between the optical center of the lens and the optical center of the N-pixel-in-one image sensor at a preset position. Calibrating the plurality of sensitivity calibration matrices in advance allows the sensitivity calibration matrix corresponding to the first image to be computed quickly, which speeds up image calibration.
In one possible implementation, each calibration parameter in the sensitivity calibration matrix is the ratio of the pixel mean at the corresponding position in the single-channel mean image to the pixel value at the corresponding position in the single-channel test image corresponding to that single-channel mean image. This keeps the calculation of the sensitivity calibration matrix simple.
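A minimal sketch of this offline step under the same assumptions (quad-Bayer layout, hypothetical names; the preliminary down-scaling of the test image described above is omitted) might look like:

```python
import numpy as np

def build_calibration_matrix(test_image: np.ndarray) -> dict:
    """Compute one sensitivity calibration matrix (one sub-matrix per
    sub-channel) from a flat-field test image captured at one known lens
    offset."""
    planes = {(r, c): test_image.astype(np.float32)[r::4, c::4]
              for r in range(4) for c in range(4)}

    # Group the 16 sub-channels into the 4 color channels (2x2 pixel units).
    channels = {
        "R":  [(0, 0), (0, 1), (1, 0), (1, 1)],
        "Gr": [(0, 2), (0, 3), (1, 2), (1, 3)],
        "Gb": [(2, 0), (2, 1), (3, 0), (3, 1)],
        "B":  [(2, 2), (2, 3), (3, 2), (3, 3)],
    }

    sub_matrices = {}
    for keys in channels.values():
        # Per-position mean over the N same-color sub-channels.
        mean_plane = np.mean([planes[k] for k in keys], axis=0)
        for k in keys:
            # Calibration parameter = pixel mean / pixel value.
            sub_matrices[k] = mean_plane / np.maximum(planes[k], 1e-6)
    return sub_matrices
```

Multiplying each sub-channel by its parameters (as in the previous sketch) then pulls the N same-color pixels toward their common mean, which is what equalizes their apparent sensitivities.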
In a second aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to invoke the computer program to execute the above-mentioned image processing method.
In a third aspect, an embodiment of the present application proposes a computer readable storage medium, in which a computer program or instructions are stored, which when executed, implement the above-mentioned image processing method.
In a fourth aspect, an embodiment of the present application proposes a computer program product comprising a computer program which, when executed, causes a computer to perform the above-mentioned image processing method.
The effects of each possible implementation manner of the second aspect to the fourth aspect are similar to those of the first aspect and the possible designs of the first aspect, and are not described herein.
Drawings
Fig. 1 is a schematic structural diagram of a camera according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image sensor integrated with N pixels in a camera according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a pixel array in an N-pixel integrated image sensor according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the lens and the N-pixel-in-one image sensor in a camera according to an embodiment of the present application, with the optical anti-shake module turned off and with the optical anti-shake module turned on;
fig. 5 is a schematic diagram of a hardware system of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic diagram of a software system of an electronic device according to an embodiment of the present application;
fig. 7 is an interface schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 8 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a first timestamp and a second timestamp of a first image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a track of a position of an optical anti-shake module according to an embodiment of the present application after each movement;
FIG. 11 is a schematic diagram of a calibration process of a plurality of sensitivity calibration matrices according to an embodiment of the present application when the optical center of a lens is shifted relative to the optical center of an N-pixel integrated image sensor;
FIG. 12 is a flowchart of a calibration process of a first sensitivity calibration matrix according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a plurality of single channel test images according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a plurality of single-channel mean images according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a plurality of sensitivity calibration matrices according to an embodiment of the present application;
FIG. 16 is a flow chart of a calibration process for a plurality of sensitivity calibration matrices provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of a calculation sensitivity calibration matrix according to an embodiment of the present application;
FIG. 18 is a schematic diagram of module interaction of an image processing method according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 20 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solution of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish the same item or similar items having substantially the same function and effect. For example, the first chip and the second chip are merely for distinguishing different chips, and the order of the different chips is not limited. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of plural items. For example, "at least one of a, b and c" may represent: a, b, c, a and b, a and c, b and c, or a, b and c, where a, b and c may each be singular or plural.
Currently, some electronic devices are provided with cameras, and in the process of using the electronic devices by users, the cameras in the electronic devices can be adopted to take pictures or record videos.
When a user holds the electronic equipment to shoot, the electronic equipment can shake to a certain extent, so that the problem of imaging blurring of an image acquired by the camera is caused. In order to enhance the stability of the image captured by the camera, an optical anti-shake technique is introduced into the camera, i.e. an optical anti-shake module is provided in the camera.
As shown in fig. 1, the camera may include a lens 10, an image sensor 20, an optical anti-shake module, a bracket 40, and a circuit board 50.
The lens 10 and the image sensor 20 are disposed in this order along the optical axis direction of the camera. When the lens 10 includes a plurality of optical lenses, the plurality of optical lenses may be sequentially stacked along the optical axis direction of the camera. The image sensor 20 may also be referred to as a camera sensor, and the image sensor 20 is fixed on the circuit board 50 and electrically connected to the circuit board 50.
The optical anti-shake module may include a driving motor 30, and the driving motor 30 may be disposed on the bracket 40. The driving motor 30 may be a Voice Coil Motor (VCM), a shape memory alloy (shape memory alloy, SMA) motor, a stepping motor (stepping motor), a piezoelectric motor (piezoelectric motor), etc., which is not limited in the embodiment of the present application.
The driving motor 30 may be electrically connected to a driving chip for controlling a movement state of the driving motor 30 after the driving chip is powered on.
The optical anti-shake technology corrects optical-axis deviation through a floating lens in the lens assembly. The principle is that a gyroscope sensor or an acceleration sensor in the electronic device detects tiny shakes and sends the detected shake data to a microprocessor, such as a driving chip electrically connected to the driving motor 30. The microprocessor calculates the displacement that needs to be compensated according to the shake data and then drives the driving motor 30 in the optical anti-shake module to move by this displacement, which drives the lens 10 to move along a first moving direction and/or a second moving direction, thereby compensating for the shake direction and shake displacement of the lens 10 and effectively reducing the imaging blur caused by shake of the electronic device. The first moving direction and the second moving direction are perpendicular to each other, and both are perpendicular to the optical axis direction of the camera.
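As a hedged illustration of this compensation step (not taken from the patent; the small-angle lens-shift relation and all names are assumptions), a simplified calculation could look like:

```python
import math

def ois_compensation_step(gyro_rate_x: float, gyro_rate_y: float,
                          dt: float, focal_length_mm: float):
    """Convert angular shake measured by the gyroscope into the lens shift
    (mm) that the OIS motor should apply in the opposite direction.

    gyro_rate_x, gyro_rate_y : angular velocity (rad/s) about the two axes
                               perpendicular to the optical axis
    dt                       : time since the previous gyro sample (s)
    focal_length_mm          : lens focal length
    """
    # Integrate angular velocity over the sample interval to get the tilt.
    angle_x = gyro_rate_x * dt
    angle_y = gyro_rate_y * dt

    # Small-angle approximation: image shift ~ f * tan(angle).
    shift_x = focal_length_mm * math.tan(angle_x)
    shift_y = focal_length_mm * math.tan(angle_y)

    # Move the lens opposite to the shake to cancel it.
    return -shift_x, -shift_y
```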
As shown in fig. 2, the image sensor 20 may be an N-pixel integrated image sensor, and the N-pixel integrated image sensor may include a microlens array 21, a filter 22, and a photosensitive element 23, where the microlens array 21 and the filter 22 are both located between the lens 10 and the photosensitive element 23, and the filter 22 is located between the microlens array 21 and the photosensitive element 23.
The microlens array 21 includes a plurality of microlenses 210 distributed in an array, where the microlenses 210 have a condensing effect, and the microlenses 210 may be OCLs (on chip micro lenses, on-chip microlenses). Light reflected by the photographed object sequentially passes through the lens 10 and then enters the micro lens 210, the micro lens 210 condenses the incident light, the condensed light passes through the optical filter 22 and then is projected onto the photosensitive element 23, and the photosensitive element 23 converts the optical signal into an electrical signal for imaging.
The optical filter 22 may include a filter cell array, which may include a plurality of filter cell sets, each of which may include M filter cells, each of which allows the same color of light to be transmitted therethrough. The photosensitive element 23 may include a photosensitive cell array, which may include a plurality of photosensitive cell sets, each of which may include M photosensitive cells, each of which includes N photosensitive pixels. And, each filter unit in the filter 22 corresponds to each photosensitive unit in the photosensitive element 23 one by one, and N photosensitive pixels in each photosensitive unit are used for receiving the light filtered by the corresponding filter unit. Wherein M and N are integers greater than 1.
Thus, the filter unit array and the photosensitive unit array covered by the filter unit array jointly form a pixel array of the N-pixel integrated image sensor. And the filter unit and the photosensitive unit covered by the filter unit jointly form a pixel unit in the pixel array of the N-pixel integrated image sensor.
In the embodiment of the application, the pixel array of the N-pixel-in-one image sensor comprises a plurality of pixel sets, each pixel set comprises pixel units corresponding to M color channels, each pixel unit comprises pixels corresponding to N color sub-channels, and the pixels corresponding to the N color sub-channels are adjacent N same-color pixels.
As the performance requirements that electronic devices place on cameras keep increasing, one trend is to increase image resolution by reducing the size of individual pixels and arranging more pixels; however, as the camera's pixels shrink, their sensitivity decreases. A four-pixel-in-one image sensor can therefore be used to combine high sensitivity with high resolution. In the pixel array of the four-pixel-in-one image sensor, four adjacent same-color pixels are arranged together to form a pixel four times the original pixel area, so that the camera achieves both high sensitivity and high resolution. For example, when a camera with a four-pixel-in-one image sensor captures images in a low-illumination environment such as a night scene, the loss of image resolution can be reduced, image noise can be lowered, and the image quality can be improved.
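For instance, in binning mode the four same-color pixels under one microlens can be read out as a single large pixel. A minimal sketch (assumed quad-Bayer layout and names) is:

```python
import numpy as np

def bin_quad_bayer(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of same-color pixels (one pixel unit) into a single
    value, quadrupling the light-collecting area per output pixel at the cost
    of halving the resolution in each dimension (height and width must be
    even)."""
    h, w = raw.shape
    blocks = raw.astype(np.uint32).reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))
```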
Taking an N-pixel-in-one image sensor as an example of a four-pixel-in-one image sensor, i.e., N is 4, as shown in fig. 3, the pixel array of the four-pixel-in-one image sensor may include a plurality of pixel sets 24, where each pixel set 24 includes pixel units corresponding to four color channels, such as a first pixel unit 241 corresponding to a red color channel, a second pixel unit 242 corresponding to a first green color channel, a third pixel unit 243 corresponding to a second green color channel, and a fourth pixel unit 244 corresponding to a blue color channel.
Each pixel unit includes pixels corresponding to four color sub-channels, and the pixels corresponding to the four color sub-channels are four adjacent same-color pixels. For example, the first pixel unit 241 includes a pixel corresponding to a first red sub-channel (R0 sub-channel), a pixel corresponding to a second red sub-channel (R1 sub-channel), a pixel corresponding to a third red sub-channel (R2 sub-channel) and a pixel corresponding to a fourth red sub-channel (R3 sub-channel); the second pixel unit 242 includes a pixel corresponding to a first green sub-channel (Gr0 sub-channel), a pixel corresponding to a second green sub-channel (Gr1 sub-channel), a pixel corresponding to a third green sub-channel (Gr2 sub-channel) and a pixel corresponding to a fourth green sub-channel (Gr3 sub-channel); the third pixel unit 243 includes a pixel corresponding to a fifth green sub-channel (Gb0 sub-channel), a pixel corresponding to a sixth green sub-channel (Gb1 sub-channel), a pixel corresponding to a seventh green sub-channel (Gb2 sub-channel) and a pixel corresponding to an eighth green sub-channel (Gb3 sub-channel); and the fourth pixel unit 244 includes a pixel corresponding to a first blue sub-channel (B0 sub-channel), a pixel corresponding to a second blue sub-channel (B1 sub-channel), a pixel corresponding to a third blue sub-channel (B2 sub-channel) and a pixel corresponding to a fourth blue sub-channel (B3 sub-channel).
It can be understood that, in addition to the four-pixel Bayer array shown in FIG. 3, the pixel array of the N-pixel-in-one image sensor in the embodiment of the present application may also be a nine-pixel Bayer array or a sixteen-pixel Bayer array; the specific form of the pixel array of the image sensor is not limited in the embodiment of the present application.
When the pixel array of the N-pixel-in-one image sensor is a nine-pixel Bayer array, the pixel array may include a plurality of pixel sets 24, each pixel set 24 includes pixel units corresponding to four color channels, each pixel unit includes pixels corresponding to nine color sub-channels, and the pixels corresponding to the nine color sub-channels are nine adjacent same-color pixels. When the pixel array of the N-pixel-in-one image sensor is a sixteen-pixel Bayer array, the pixel array may include a plurality of pixel sets 24, each pixel set 24 includes pixel units corresponding to four color channels, each pixel unit includes pixels corresponding to sixteen color sub-channels, and the pixels corresponding to the sixteen color sub-channels are sixteen adjacent same-color pixels.
In order to achieve a higher light intake, in the N-pixel-in-one image sensor of the embodiment of the present application, each microlens 210 covers N adjacent same-color pixels in the pixel array, that is, the N adjacent same-color pixels in each pixel unit share the same microlens 210.
Taking a four-pixel-in-one image sensor as an example, each microlens 210 covers four adjacent same-color pixels in the pixel array. For example, one of the microlenses 210 covers the pixel corresponding to the R0 sub-channel, the pixel corresponding to the R1 sub-channel, the pixel corresponding to the R2 sub-channel, and the pixel corresponding to the R3 sub-channel in the first pixel unit 241, and the other microlens 210 covers the pixel corresponding to the Gr0 sub-channel, the pixel corresponding to the Gr1 sub-channel, the pixel corresponding to the Gr2 sub-channel, and the pixel corresponding to the Gr3 sub-channel in the second pixel unit 242.
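To make this layout concrete, a small lookup (assumed orientation; the actual placement depends on the sensor) that maps each sub-channel name to its (row, column) offset within a 4x4 pixel set could be written as:

```python
# Offsets of each color sub-channel inside one 4x4 pixel set of the assumed
# quad-Bayer layout: R unit top-left, Gr top-right, Gb bottom-left, B
# bottom-right, with sub-channels 0..3 in row-major order under one microlens.
SUBCHANNEL_OFFSETS = {
    "R0": (0, 0),  "R1": (0, 1),  "R2": (1, 0),  "R3": (1, 1),
    "Gr0": (0, 2), "Gr1": (0, 3), "Gr2": (1, 2), "Gr3": (1, 3),
    "Gb0": (2, 0), "Gb1": (2, 1), "Gb2": (3, 0), "Gb3": (3, 1),
    "B0": (2, 2),  "B1": (2, 3),  "B2": (3, 2),  "B3": (3, 3),
}

def subchannel_plane(raw, name):
    """Extract the plane of one named sub-channel from a quad-Bayer raw frame."""
    r, c = SUBCHANNEL_OFFSETS[name]
    return raw[r::4, c::4]
```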
As described above, the camera is provided with the optical anti-shake module and the N-pixel-in-one image sensor, and in the pixel array of the N-pixel-in-one image sensor, N adjacent same-color pixels can share the same microlens.
However, when the camera starts the optical anti-shake module to perform anti-shake, the optical anti-shake module will move to drive the lens 10 to move, so that the optical center of the lens 10 and the optical center of the N-pixel integrated image sensor will deviate, and the positions of the N same-color pixels relative to the optical center of the lens 10 will not be consistent, so that after the light passing through the lens 10 is focused by the micro lens 210, the light received by the N same-color pixels will not be uniform, for example, some of the N same-color pixels will receive more light, and some of the N same-color pixels will receive less light. In this way, the image quality of the image captured by the camera is poor, that is, the image quality of the image is degraded.
In the related art, for a camera that is not provided with an optical anti-shake module or whose optical anti-shake module is turned off, and whose N-pixel-in-one image sensor has N adjacent same-color pixels sharing the same microlens in its pixel array, in order to minimize the influence of the fit between the lens 10 and the N-pixel-in-one image sensor on the first sensitivity calibration matrix obtained after calibration, as shown in (a) in FIG. 4, the optical center of the lens 10 and the optical center of the N-pixel-in-one image sensor may be set to coincide along the optical axis direction of the camera when the first sensitivity calibration matrix is calibrated, that is, the optical displacement between the optical center of the lens 10 and the optical center of the N-pixel-in-one image sensor is 0. The first sensitivity calibration matrix obtained in this way can then be used to calibrate the first image acquired by the camera.
It should be noted that, due to the influence of factors such as the manufacturing process of the N-pixel-in-one image sensor and the incomplete uniformity of the microlenses 210 disposed at the respective positions, the light received by the N same-color pixels may be uneven. Therefore, for the camera without the optical anti-shake module or with the optical anti-shake module closed, the first sensitivity calibration matrix is also required to calibrate the first image acquired by the camera, so as to improve the condition of poor image quality of the image caused by uneven light received by N same-color pixels, and make the sensitivities of N same-color pixels in the calibrated image nearly consistent.
However, in the related art, calibrating the first image acquired by the camera with the first sensitivity calibration matrix can only correct the unevenness of the light received by the N same-color pixels in the scene where the optical center of the lens 10 coincides with the optical center of the N-pixel-in-one image sensor along the optical axis direction of the camera.
As shown in fig. 4 (b), in the case where the camera starts the optical anti-shake module to perform anti-shake, the optical center of the lens 10 is offset from the optical center of the N-pixel integrated image sensor, that is, the optical displacement between the optical center of the lens 10 and the optical center of the N-pixel integrated image sensor may be d, where d is not equal to 0. Under such circumstances, even if the first sensitivity calibration matrix is used to calibrate the first image acquired by the camera, the situation that the image quality of the image is poor due to uneven light received by the N same-color pixels cannot be effectively improved, that is, the sensitivity difference of the N same-color pixels cannot be calibrated, so that the image quality of the image is degraded due to insufficient calibration.
Based on the above, the embodiment of the application provides an image processing method: a first image acquired by the camera while the optical anti-shake module is enabled is obtained, together with a first timestamp at which acquisition of the first image starts and a second timestamp at which acquisition ends; the first coordinates of the positions of the optical anti-shake module after each movement during the acquisition of the first image are obtained; a sensitivity calibration matrix corresponding to the first image is determined according to the first timestamp, the second timestamp, each first coordinate and a plurality of sensitivity calibration matrices, where the relative positions between the optical center of the lens and the optical center of the N-pixel-in-one image sensor are different when the plurality of sensitivity calibration matrices are calibrated; and the first image is calibrated with the sensitivity calibration matrix to obtain a second image.
Therefore, in the embodiment of the application, when the camera enables the optical anti-shake module for anti-shake, the sensitivity calibration matrix is calculated from the first timestamp, the second timestamp, the first coordinates of the positions of the optical anti-shake module after each movement and the plurality of sensitivity calibration matrices, and the first image acquired by the camera is calibrated with it. This overcomes the problem that calibrating the first image with the first sensitivity calibration matrix obtained at a single optical displacement (that is, an optical displacement of 0 between the optical center of the lens 10 and the optical center of the N-pixel-in-one image sensor) fails, mitigates the poor image quality caused by uneven light received by the N same-color pixels when the optical anti-shake module moves, makes the sensitivities of the N same-color pixels in the second image obtained after calibration nearly consistent, and improves the image quality of the second image.
The image processing method provided by the embodiment of the application can be applied to an electronic device with a camera. The electronic device includes a terminal device, which may also be referred to as a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. The electronic device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with a wireless transceiving function, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like. The embodiment of the application does not limit the specific technology and specific device form adopted by the electronic device.
In order to better understand the embodiments of the present application, the structure of the electronic device according to the embodiments of the present application is described below.
Fig. 5 shows a schematic diagram of a hardware system architecture of the electronic device 500. Electronic device 500 may include a processor 510, an external memory interface 520, an internal memory 521, a universal serial bus (universal serial bus, USB) interface 530, a charge management module 540, a power management module 541, a battery 542, an antenna 1, an antenna 2, a mobile communication module 550, a wireless communication module 560, an audio module 570, a speaker 570A, a receiver 570B, a microphone 570C, an ear-piece interface 570D, a sensor module 580, keys 590, a motor 591, an indicator 592, a camera 593, a display 594, and a subscriber identity module (subscriber identification module, SIM) card interface 595, among others. Among them, the sensor module 580 may include a gyro sensor 580A and an acceleration sensor 580B.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 500. In other embodiments of the application, electronic device 500 may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 510 may include one or more processing units. For example, processor 510 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 510 for storing instructions and data. In some embodiments, the memory in processor 510 is a cache memory. The memory may hold instructions or data that has just been used or recycled by the processor 510. If the processor 510 needs to reuse the instruction or data, it may be called from memory. Repeated accesses are avoided and the latency of the processor 510 is reduced, thereby improving the efficiency of the system.
The charge management module 540 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 540 may receive a charging input of a wired charger through the USB interface 530. In some wireless charging embodiments, the charge management module 540 may receive wireless charging input through a wireless charging coil of the electronic device 500. The charging management module 540 may also provide power to the electronic device through the power management module 541 while charging the battery 542.
The power management module 541 is configured to connect the battery 542, the charge management module 540, and the processor 510. The power management module 541 receives input from the battery 542 and/or the charge management module 540 and provides power to the processor 510, the internal memory 521, the display screen 594, the camera 593, the wireless communication module 560, and the like. The power management module 541 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance), etc. In other embodiments, the power management module 541 may also be disposed in the processor 510. In other embodiments, the power management module 541 and the charge management module 540 may be disposed in the same device.
The wireless communication function of the electronic device 500 may be implemented by the antenna 1, the antenna 2, the mobile communication module 550, the wireless communication module 560, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 550 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 500. The mobile communication module 550 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc.
The wireless communication module 560 may provide solutions for wireless communication applied to the electronic device 500, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 560 may be one or more devices integrating at least one communication processing module. The wireless communication module 560 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 510. The wireless communication module 560 may also receive a signal to be transmitted from the processor 510, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 550 of electronic device 500 are coupled, and antenna 2 and wireless communication module 560 are coupled, such that electronic device 500 may communicate with a network and other devices through wireless communication techniques.
Electronic device 500 implements display functionality through a GPU, a display screen 594, and an application processor, among others. The GPU is a microprocessor for image processing, and is connected to the display screen 594 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 510 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 594 is used for displaying images, displaying videos, receiving sliding operations, and the like. The display screen 594 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, electronic device 500 may include 1 or more display screens 594.
The electronic device 500 may implement shooting functions through an ISP, a camera 593, a video codec, a GPU, a display screen 594, an application processor, and the like.
The ISP is used to process the data fed back by the camera 593. For example, when photographing, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 593.
The camera 593 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, electronic device 500 may include 1 or more cameras 593.
The external memory interface 520 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 500. The external memory card communicates with the processor 510 via an external memory interface 520 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 521 may be used to store computer-executable program code that includes instructions. The internal memory 521 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 500 (e.g., audio data, phonebook, etc.), and so on. In addition, internal memory 521 may include high-speed random access memory and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash memory (universal flash storage, UFS), and the like. The processor 510 performs various functional applications of the electronic device 500 and data processing by executing instructions stored in the internal memory 521 and/or instructions stored in a memory provided in the processor.
Electronic device 500 may implement audio functionality through the audio module 570, speaker 570A, receiver 570B, microphone 570C, earphone interface 570D, the application processor, and the like, for example, music playing and recording.
The gyro sensor 580A may be used to determine a motion gesture of the electronic device 500. In some embodiments, the angular velocity of electronic device 500 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 580A. The gyro sensor 580A may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 580A detects the angle of the shake of the electronic apparatus 500, calculates the distance to be compensated for by the lens according to the angle, and allows the lens to counteract the shake of the electronic apparatus 500 by the reverse movement, thereby realizing anti-shake. The gyro sensor 580A can also be used for scenes such as navigation and motion sensing games.
The acceleration sensor 580B may detect the magnitude of acceleration of the electronic device 500 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 500 is stationary. It can also be used to identify the posture of the electronic device, and is applied to applications such as landscape/portrait switching and pedometers.
The keys 590 include a power key, a volume key, etc. The keys 590 may be mechanical keys or touch keys. The electronic device 500 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 500.
Motor 591 may generate a vibration alert. Motor 591 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 592 may be an indicator light, may be used to indicate a state of charge, a change in charge, may be used to indicate a message, missed call, notification, or the like.
The SIM card interface 595 is used to connect to a SIM card. The SIM card may be inserted into the SIM card interface 595 or removed from the SIM card interface 595 to enable contact and separation with the electronic device 500.
The software system of the electronic device 500 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, a cloud architecture, or the like. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 500 is illustrated.
Fig. 6 is a schematic diagram of a software system structure of an electronic device 500 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, which are, from top to bottom, an application layer, an application framework layer, Android runtime and system libraries, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 6, the application package may include applications such as cameras, settings, and calendars.
The camera application is an application with shooting and video recording functions, and the electronic device can respond to the operation of opening the camera application by a user to shoot or record video. It will be appreciated that the photographing and video recording functions of the camera application may also be invoked by other applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 6, the application framework layer may further include a camera service (Camera Service), which may be called by the camera application, so as to implement functions such as photographing or video recording.
In addition, as shown in FIG. 6, the application framework layer may also include a window manager, a content provider, a resource manager, a view system, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
Android runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
The core libraries comprise two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. Such as surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), two-dimensional graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG2, H.262, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The two-dimensional graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer is an abstraction layer between the kernel layer and Android runtime. The hardware abstraction layer may be an encapsulation of the hardware drivers of the kernel layer, and provides a call interface for the application framework layer.
In an embodiment of the application, the hardware abstraction layer may include a camera hardware abstraction module (camera hardware abstraction layer, Camera HAL). In some embodiments, in the process of executing the image processing method according to the embodiment of the present application, the camera hardware abstraction module may be configured to: obtain a first image acquired by the camera when the optical anti-shake module is started, a first timestamp at which the acquisition of the first image starts, a second timestamp at which the acquisition of the first image ends, and a first coordinate of the position of the optical anti-shake module after each movement during the acquisition of the first image; determine a sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate, and a plurality of sensitivity calibration matrices; and calibrate the first image by using the sensitivity calibration matrix to obtain a second image.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a camera driver, a sensor driver, a display driver and the like. In some embodiments, the camera driver is used to control the operation of the camera, the sensor driver is used to control the operation of the sensor, and the display driver is used to control the display screen to display images.
The hardware may be a camera, a sensor, a display screen, etc. In the embodiment of the application, the camera can be a front camera or a rear camera.
Although the embodiments of the present application are described by taking an Android system as an example, the principle of the image processing method is also applicable to electronic devices running iOS, Windows, or other operating systems.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be implemented independently or combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
For ease of understanding, the embodiments of the present application take a mobile phone as an example of the electronic device, and first describe application scenarios of the image processing method with reference to some user interfaces shown in the embodiments of the present application.
When a user lights up a screen of the electronic device and controls the electronic device to be in an unlocked state, the electronic device may display a first interface 701 as shown in (a) of fig. 7. The first interface 701 may be a desktop of an electronic device, on which icons of a plurality of installed application programs, such as a file management application icon, an email application icon, a weather application icon, a calculator application icon, a clock application icon, a recorder application icon, a music application icon, a setting application icon, an address book application icon, a phone application icon, an information application icon, and a camera application icon 7011 are displayed.
The user may perform a touch operation, such as a click operation or a long-press operation, on the camera application icon 7011. The electronic device receives the touch operation performed by the user on the camera application icon 7011 and starts the camera application in response to the touch operation.
After the camera application is started, the electronic device may display a second interface 702 as shown in (b) of fig. 7. The second interface 702 may be a preview interface provided by a camera application for implementing a shooting function, and includes a preview box 7021, a shooting control 7022, functional controls corresponding to multiple shooting modes, and the like.
The preview pane 7021 may be used to display a second image obtained by calibrating the first image using the sensitivity calibration matrix. The photographing control 7022 is used to trigger a photographing operation of the electronic device. The functional controls corresponding to the multiple shooting modes can include a night scene mode control, a portrait mode control, a shooting mode control, a video mode control, a professional mode control, more controls for starting more functions in camera applications, and the like.
Therefore, the embodiment of the present application can start the camera application by performing a touch operation on the camera application icon 7011. After the camera application is started, the electronic device may execute a flow corresponding to the image processing method provided by the embodiment of the present application.
It will be understood that the interfaces shown in (a) in fig. 7 and (b) in fig. 7 are merely examples of a user interface in the process of starting a camera application by performing a touch operation on a camera application icon by the electronic device, and are not limited to the embodiments of the present application.
In another scenario, the user may also access the camera application of the electronic device to launch the camera application by invoking a corresponding interface through a third party application installed on the electronic device. After the camera application is started, the electronic device may execute a flow corresponding to the image processing method provided by the embodiment of the present application.
In addition, when the electronic device is in the screen-locked state, the user may instruct the electronic device to start the camera application through a gesture of sliding rightward on the display screen of the electronic device. Alternatively, when the electronic device is in the screen-locked state and the lock-screen interface includes an icon of the camera application, the user may instruct the electronic device to start the camera application by clicking the icon of the camera application.
It should be appreciated that the foregoing is merely an example of an operation for opening the camera application. The camera application may also be opened by a voice instruction operation or another operation instructing the electronic device, which is not limited in the present application.
Fig. 8 is a flowchart of an image processing method according to an embodiment of the present application, where the image processing method may be applied to an electronic device, and the electronic device may include a camera, where the camera includes an optical anti-shake module, a lens 10, and an N-pixel-in-one image sensor, where the N-pixel-in-one image sensor includes a microlens array 21 and a pixel array, and the microlens array 21 includes a plurality of microlenses 210, where each microlens 210 covers N adjacent homochromatic pixels in the pixel array, and N is an integer greater than 1.
Referring to fig. 8, the image processing method may specifically include the steps of:
s801, the electronic device acquires a first image acquired by the camera when the optical anti-shake module is started.
In some embodiments, when the camera starts the optical anti-shake module to perform anti-shake, the camera may collect a first image, where the first image is an image in RAW format.
S802, the electronic device acquires a first time stamp when the first image starts to be acquired, and a second time stamp when the first image ends to be acquired.
When the camera acquires the first image, it controls the photosensitive unit array in the photosensitive element to start exposure at the exposure start time corresponding to the first image, and starts to sequentially read out the image data generated by the photosensitive unit array at the exposure end time corresponding to the first image, so as to obtain the first image. Therefore, during the acquisition of the first image, the acquisition duration includes the exposure duration of the first image and the readout duration of the first image.
The start point of the exposure of the first image is defined as the start of frame (SOF) point, and the end point of the readout of the first image is defined as the end of frame (EOF) point. The first timestamp at the SOF point is denoted T1, and the second timestamp at the EOF point is denoted T2.
Thus, as shown in fig. 9, during the acquisition of the first image, the acquisition duration of the first image is the time interval between the second timestamp T2 and the first timestamp T1. The first timestamp T1 represents the timestamp of the exposure start time of the first image, which is also the timestamp of the time when the camera starts to acquire the first image, and the second timestamp T2 represents the timestamp of the readout end time of the first image, which is also the timestamp of the time when the camera finishes acquiring the first image.
S803, the electronic device acquires the first coordinate of the position of the optical anti-shake module after each movement during the acquisition of the first image.
In the process of starting the optical anti-shake module to collect the first image by the camera, the optical anti-shake module can move to drive the lens 10 to move along the first moving direction and/or the second moving direction so as to compensate the influence caused by shake, thereby achieving the purpose of anti-shake.
In this way, the first coordinate of the position of the optical anti-shake module after each movement can be recorded during the acquisition of the first image, that is, within the time interval between the second timestamp T2 and the first timestamp T1.
As shown in fig. 10, during the acquisition of the first image, the optical anti-shake module may move along the first moving direction (i.e., the X direction in fig. 10) and the second moving direction (i.e., the Y direction in fig. 10). It moves n times in total, and the first coordinates of the positions of the optical anti-shake module after the n movements are P1 to Pn, respectively.
That is, during the acquisition of the first image, the first coordinate of the position of the optical anti-shake module after the i-th movement is Pi. The first coordinate may include a first coordinate value in the first moving direction and a second coordinate value in the second moving direction.
It can be understood that the first coordinate of the position of the optical anti-shake module after each movement actually characterizes the relative position of the optical anti-shake module with respect to the optical center of the N-pixel-integrated image sensor after the movement. This relative position may also be used to characterize the relative position between the optical center of the lens 10 and the optical center of the N-pixel-integrated image sensor after the optical anti-shake module moves and drives the lens 10 to move.
For example, in the case where the camera does not activate the optical anti-shake module, the position coordinates of the optical anti-shake module may be (0, 0), which may indicate that the optical center of the lens 10 coincides with the optical center of the N-pixel-integrated image sensor in the optical axis direction of the camera.
S804, the electronic device determines the moving times of the optical anti-shake module in the acquisition process of the first image according to the first timestamp and the second timestamp.
After the electronic device obtains the first timestamp at which the acquisition of the first image starts and the second timestamp at which the acquisition of the first image ends, it can determine the number of movements of the optical anti-shake module during the acquisition of the first image according to the first timestamp and the second timestamp.
In some embodiments, the electronic device calculates the time interval between the second timestamp and the first timestamp, and determines the number of movements of the optical anti-shake module during the acquisition of the first image as the ratio of this time interval to the movement period of the optical anti-shake module (the reciprocal of its movement frequency). Thus:
n = (T2 - T1) × f0
wherein n is the number of movements of the optical anti-shake module during the acquisition of the first image, T2 is the second timestamp, T1 is the first timestamp, and f0 is the movement frequency of the optical anti-shake module.
The moving frequency of the optical anti-shake module can also be called the refreshing frequency of the optical anti-shake module in the acquisition process of the first image.
For example, the time interval between the second timestamp and the first timestamp is 20ms (milliseconds), and the moving frequency of the optical anti-shake module may be 1000Hz, that is, the optical anti-shake module moves once every 1ms interval, so the number of times the optical anti-shake module moves during the acquisition of the first image may be 20 times.
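Purely as an informal illustration (not part of the patent text), the computation in S804 can be sketched as follows; the function name and the units (timestamps in milliseconds, movement frequency in Hz) are assumptions made for the example.

```python
def movement_count(t_start_ms: float, t_end_ms: float, freq_hz: float) -> int:
    """Number of OIS movements during the acquisition of one frame.

    Sketch only: assumes the module moves once per period 1/freq_hz,
    e.g. a 20 ms interval and a 1000 Hz movement frequency give 20 movements.
    """
    interval_s = (t_end_ms - t_start_ms) / 1000.0  # acquisition duration in seconds
    return int(round(interval_s * freq_hz))

# Example from the text: 20 ms interval, 1000 Hz movement frequency -> 20
assert movement_count(0.0, 20.0, 1000.0) == 20
```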
S805, the electronic device determines, according to the number of movements and each first coordinate, a second coordinate of a centroid position corresponding to each position where the optical anti-shake module is located after movement.
After the electronic device obtains the first coordinates of the positions of the optical anti-shake modules after each movement and the number of times of movement of the optical anti-shake modules in the acquisition process of the first image, the electronic device can determine the second coordinates of the mass center positions corresponding to the positions of the optical anti-shake modules after each movement according to the number of times of movement and each first coordinate.
The centroid position corresponding to the positions of the optical anti-shake module after its movements is the centroid of all the positions at which the optical anti-shake module is located after each movement.
In some embodiments, the electronic device calculates the second coordinate by the following formula:
Sn = (P1 + P2 + ... + Pn) / n
wherein Sn is the second coordinate, n is the number of movements, and Pi is the first coordinate of the position of the optical anti-shake module after the i-th movement. That is, the second coordinate is the average of the first coordinates of the n positions. The second coordinate may also include a first coordinate component in the first moving direction and a second coordinate component in the second moving direction.
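A minimal sketch of S805 (illustrative only; the names are assumptions): the centroid of the recorded positions is the mean of the first coordinates, and its two components are taken along the first and second moving directions.

```python
from typing import List, Tuple

def centroid(positions: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Second coordinate Sn: the centroid (mean) of the n first coordinates Pi."""
    n = len(positions)
    s_x = sum(p[0] for p in positions) / n  # first coordinate component (first moving direction)
    s_y = sum(p[1] for p in positions) / n  # second coordinate component (second moving direction)
    return s_x, s_y

# Example: three OIS positions recorded during the frame
print(centroid([(1.0, 2.0), (3.0, -2.0), (2.0, 0.0)]))  # (2.0, 0.0)
```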
S806, the electronic device determines a sensitivity calibration matrix corresponding to the first image according to the second coordinate and a plurality of sensitivity calibration matrices, wherein the relative position between the optical center of the lens and the optical center of the N-pixel-integrated image sensor is different when each of the plurality of sensitivity calibration matrices is calibrated.
The electronic device may also obtain a plurality of sensitivity calibration matrices calibrated in advance before using the plurality of sensitivity calibration matrices to determine the sensitivity calibration matrix corresponding to the first image. The plurality of sensitivity calibration matrices comprise a first sensitivity calibration matrix, a second sensitivity calibration matrix, a third sensitivity calibration matrix, a fourth sensitivity calibration matrix and a fifth sensitivity calibration matrix, and the relative positions of the optical center of the lens 10 and the optical center of the N-pixel integrated image sensor are different when the plurality of sensitivity calibration matrices are calibrated.
The optical anti-shake module may drive the lens 10 to move in a first moving direction and a second moving direction. The first moving direction comprises a first direction and a second direction which are opposite to each other, and the second moving direction comprises a third direction and a fourth direction which are opposite to each other. For example, as shown in fig. 11, the first direction may be the X1 direction, the second direction may be the X2 direction, the third direction may be the Y1 direction, and the fourth direction may be the Y2 direction.
Parts (a) to (e) of fig. 11 show the relative position between the optical center of the lens 10 and the optical center of the N-pixel-integrated image sensor during the calibration of each sensitivity calibration matrix. The intersection of the two dashed lines represents the optical center of the N-pixel-integrated image sensor.
As shown in fig. 11 (a), the optical center of the lens 10 coincides with the optical center of the N-pixel integrated image sensor in the optical axis direction of the camera at the time of calibration of the first sensitivity calibration matrix. In this case, the relative position of the optical center of the lens 10 with respect to the optical center of the N-pixel-integrated image sensor is G0, and the position coordinates of the G0 position are (0, 0).
As shown in fig. 11 (b), when the second sensitivity calibration matrix is calibrated, the optical center of the lens 10 is offset from the optical center of the N-pixel-integrated image sensor by a first distance along the first direction and by a second distance along the third direction, where the first distance is the maximum distance that the lens 10 can move along the first direction, and the second distance is the maximum distance that the lens 10 can move along the third direction. In this case, the relative position of the optical center of the lens 10 with respect to the optical center of the N-pixel-integrated image sensor is G1, and the position coordinates of the G1 position are (Sxmax1, Symax1), where Sxmax1 is the first distance and Symax1 is the second distance.
As shown in fig. 11 (c), when the third sensitivity calibration matrix is calibrated, the optical center of the lens 10 is offset from the optical center of the N-pixel-integrated image sensor by a first distance along the first direction and by a third distance along the fourth direction, where the third distance is the maximum distance that the lens 10 can move along the fourth direction. In this case, the relative position of the optical center of the lens 10 with respect to the optical center of the N-pixel-integrated image sensor is G2, and the position coordinates of the G2 position are (Sxmax1, -Symax2), where Symax2 is the third distance.
As shown in (d) of fig. 11, when the fourth sensitivity calibration matrix is calibrated, the optical center of the lens 10 is offset from the optical center of the N-pixel-integrated image sensor by a fourth distance along the second direction and by a third distance along the fourth direction, where the fourth distance is the maximum distance that the lens 10 can move along the second direction. In this case, the relative position of the optical center of the lens 10 with respect to the optical center of the N-pixel-integrated image sensor is G3, and the position coordinates of the G3 position are (-Sxmax2, -Symax2), where Sxmax2 is the fourth distance.
As shown in (e) of fig. 11, when the fifth sensitivity calibration matrix is calibrated, the optical center of the lens 10 is offset from the optical center of the N-pixel-integrated image sensor by a fourth distance along the second direction and by a second distance along the third direction. In this case, the relative position of the optical center of the lens 10 with respect to the optical center of the N-pixel-integrated image sensor is G4, and the position coordinates of the G4 position are (-Sxmax2, Symax1).
In some embodiments, the first distance may be equal to or different from the fourth distance, and the second distance may be equal to or different from the third distance. And the first distance, the second distance, the third distance and the fourth distance are all positive numbers.
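For reference, the five relative positions G0 to G4 described above can be written as coordinates in the plane spanned by the first and second moving directions. The sketch below is illustrative only; the stroke values are placeholder assumptions, not calibrated values.

```python
# Placeholder maximum strokes (assumed values, in arbitrary units)
S_XMAX1, S_XMAX2 = 100.0, 100.0   # max travel in the first / second direction
S_YMAX1, S_YMAX2 = 80.0, 80.0     # max travel in the third / fourth direction

# Relative positions of the lens optical center at calibration time
CALIBRATION_POSITIONS = {
    "G0": (0.0, 0.0),              # first sensitivity calibration matrix
    "G1": (S_XMAX1, S_YMAX1),      # second sensitivity calibration matrix
    "G2": (S_XMAX1, -S_YMAX2),     # third sensitivity calibration matrix
    "G3": (-S_XMAX2, -S_YMAX2),    # fourth sensitivity calibration matrix
    "G4": (-S_XMAX2, S_YMAX1),     # fifth sensitivity calibration matrix
}
```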
In some embodiments, each sensitivity calibration matrix includes sensitivity calibration sub-matrices corresponding to H color sub-channels, where H is equal to the product of M and N. Each calibration parameter in a sensitivity calibration sub-matrix is calculated from each pixel mean in a single-channel mean image and the pixel value at the corresponding position in the single-channel test image that corresponds to that single-channel mean image. Each pixel mean in the single-channel mean image is the average of the pixel values at the same position in the N single-channel test images that belong to the same color channel among the H single-channel test images. The H single-channel test images are obtained by splitting the test image according to the H color sub-channels after the size of the test image is reduced. The test image is acquired when the focus position of the lens is a preset focus position and the relative position between the optical center of the lens and the optical center of the N-pixel-integrated image sensor is a preset position.
In one implementation, each calibration parameter in the sensitivity calibration sub-matrix is the ratio of each pixel mean in the single-channel mean image to the pixel value at the corresponding position in the single-channel test image corresponding to that single-channel mean image.
Taking the calibration process of the first sensitivity calibration matrix as an example, fig. 12 is a flowchart of the calibration process of the first sensitivity calibration matrix according to the embodiment of the present application. Referring to fig. 12, the calibration process may specifically include the following steps:
S1201, under the condition that the focusing position of the lens is a preset focusing position and the optical center of the lens and the optical center of the N-pixel integrated image sensor are overlapped along the optical axis direction of the camera, the camera is adopted to collect the test image.
The focus position of the lens when focusing at infinity is Z1, and the focus position when focusing at the closest focusing distance is Z2. The focus position of the lens in the camera is set to (Z1+Z2)/2, that is, the preset focus position is at 1/2 of the optical stroke of the lens. The optical center of the lens and the optical center of the N-pixel-integrated image sensor coincide along the optical axis direction of the camera, that is, the optical displacement between the optical center of the lens 10 and the optical center of the N-pixel-integrated image sensor is 0.
Under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens and the optical center of the N-pixel integrated image sensor are set to coincide along the optical axis direction of the camera, the camera is adopted to shoot a uniform light plate.
When photographing a uniform light panel, the object distance may be 10mm (millimeters). The uniform light plate refers to a planar light source with uniform illuminance and uniform color temperature in an effective area. The test image acquired by the camera may be an image in RAW format of full pixel resolution.
S1202, the size of the test image is reduced.
The size of the test image can be reduced by adopting a bilinear interpolation mode. For example, the original size of the test image is a pixels by b pixels, where a represents the width of the test image, b represents the height of the test image, and a and b are both positive integers; the size of the reduced test image may be 64 pixels by 48 pixels.
It is understood that the size of the reduced test image is 64 pixels by 48 pixels, which is only an example, and the size of the reduced test image may be other sizes, which is not limited in the embodiment of the present application.
After the size of the test image is reduced, the data volume in the subsequent calculation of the first sensitivity calibration matrix can be reduced, so that the calculation complexity of the first sensitivity calibration matrix is reduced.
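One possible way to perform this size reduction is sketched below, using OpenCV's bilinear resize. Treating the RAW mosaic as a single plane here is a simplification made for the example; in practice the reduction could also be done per sub-channel.

```python
import cv2
import numpy as np

def shrink_test_image(raw: np.ndarray, width: int = 64, height: int = 48) -> np.ndarray:
    """Reduce the full-resolution RAW test image with bilinear interpolation."""
    # cv2.resize takes the target size as (width, height)
    return cv2.resize(raw.astype(np.float32), (width, height), interpolation=cv2.INTER_LINEAR)
```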
And S1203, splitting the test image with the reduced size according to the H color sub-channels to obtain H single-channel test images.
When the camera includes an N-pixel-integrated image sensor, the pixel array of the N-pixel-integrated image sensor includes pixel units corresponding to M color channels, and each pixel unit includes pixels corresponding to N color sub-channels, the test image includes M color channels and each color channel includes N color sub-channels. That is, the test image can include H color sub-channels, where H is equal to the product of M and N, and the reduced-size test image can also include H color sub-channels.
The test image with reduced size can be split according to the H color sub-channels to obtain H single-channel test images, and each single-channel test image corresponds to one color sub-channel.
Take the case in which the N-pixel-integrated image sensor is the four-pixel-in-one image sensor shown in fig. 3 as an example. The test image and the reduced-size test image may each include four color channels, which are a red channel (R channel), a first green channel (Gr channel), a second green channel (Gb channel), and a blue channel (B channel), respectively. In addition, each color channel includes four color sub-channels. For example, the red color channel includes a first red color sub-channel (R0 sub-channel), a second red color sub-channel (R1 sub-channel), a third red color sub-channel (R2 sub-channel), and a fourth red color sub-channel (R3 sub-channel); the first green color channel includes a first green sub-channel (Gr0 sub-channel), a second green sub-channel (Gr1 sub-channel), a third green sub-channel (Gr2 sub-channel), and a fourth green sub-channel (Gr3 sub-channel); the second green color channel includes a fifth green sub-channel (Gb0 sub-channel), a sixth green sub-channel (Gb1 sub-channel), a seventh green sub-channel (Gb2 sub-channel), and an eighth green sub-channel (Gb3 sub-channel); and the blue color channel includes a first blue sub-channel (B0 sub-channel), a second blue sub-channel (B1 sub-channel), a third blue sub-channel (B2 sub-channel), and a fourth blue sub-channel (B3 sub-channel).
Therefore, in the case where the N-pixel unified image sensor is the four-pixel unified image sensor shown in fig. 3, the test image and the reduced-size test image each include sixteen color sub-channels, which are respectively an R0 sub-channel, an R1 sub-channel, an R2 sub-channel, an R3 sub-channel, a Gr0 sub-channel, a Gr1 sub-channel, a Gr2 sub-channel, a Gr3 sub-channel, a Gb0 sub-channel, a Gb1 sub-channel, a Gb2 sub-channel, a Gb3 sub-channel, a B0 sub-channel, a B1 sub-channel, a B2 sub-channel, and a B3 sub-channel.
Thus, sixteen single-channel test images can be obtained after the reduced-size test image is split according to the color sub-channels, namely the single-channel test images corresponding to the R0 to R3 sub-channels shown in (a) to (d) of fig. 13, the single-channel test images corresponding to the Gr0 to Gr3 sub-channels shown in (e) to (h) of fig. 13, the single-channel test images corresponding to the Gb0 to Gb3 sub-channels shown in (i) to (l) of fig. 13, and the single-channel test images corresponding to the B0 to B3 sub-channels shown in (m) to (p) of fig. 13.
For example, the size of the test image after the size reduction is 64 pixels by 48 pixels, and the test image after the size reduction includes sixteen color sub-channels, and the size of each single-channel test image in sixteen single-channel test images obtained after the splitting is 16 pixels by 12 pixels.
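A sketch of S1203 for the four-pixel-in-one case is shown below (illustrative only; the ordering of the sixteen sub-channels inside each 4×4 block is an assumption about the color filter layout). Each sub-channel occupies a fixed position within every 4×4 block of the reduced mosaic, so strided slicing yields the sixteen single-channel test images.

```python
import numpy as np

# Assumed layout of the sixteen color sub-channels inside one 4x4 block of the mosaic
SUB_CHANNELS = [
    "R0", "R1", "Gr0", "Gr1",
    "R2", "R3", "Gr2", "Gr3",
    "Gb0", "Gb1", "B0", "B1",
    "Gb2", "Gb3", "B2", "B3",
]

def split_sub_channels(mosaic: np.ndarray) -> dict:
    """Split a reduced RAW mosaic into H = 16 single-channel test images."""
    planes = {}
    for idx, name in enumerate(SUB_CHANNELS):
        row, col = divmod(idx, 4)           # position of this sub-channel in the 4x4 block
        planes[name] = mosaic[row::4, col::4]
    return planes

# Example: a 64 (wide) x 48 (high) mosaic yields sixteen planes of 16 x 12 pixels each
planes = split_sub_channels(np.zeros((48, 64), dtype=np.float32))
assert planes["R0"].shape == (12, 16)  # (height, width)
```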
And S1204, calculating the average value of the pixel values at the same position in N single-channel test images belonging to the same color channel in the H single-channel test images, and obtaining each pixel average value in the single-channel average value image.
For the N single-channel test images belonging to the same color channel, the average of the pixel values at the same position in the N single-channel test images is calculated, so as to obtain each pixel mean in the corresponding single-channel mean image.
For example, the average value of the pixel values of the kth row and the jth column in the N single-channel test images of the same color channel is the pixel average value of the kth row and the jth column in the corresponding single-channel average value image, and k and j are positive integers.
For example, for the single channel test image corresponding to the R0 sub-channel shown in (a) in fig. 13, the single channel test image corresponding to the R1 sub-channel shown in (b) in fig. 13, the single channel test image corresponding to the R2 sub-channel shown in (c) in fig. 13, and the single channel test image corresponding to the R3 sub-channel shown in (d) in fig. 13, the average value of the pixel values at the same position is calculated, and each pixel average value in the single channel average value image corresponding to the R channel shown in (a) in fig. 14 can be obtained.
For the single channel test image corresponding to the Gr0 sub-channel shown in (e) in fig. 13, the single channel test image corresponding to the Gr1 sub-channel shown in (f) in fig. 13, the single channel test image corresponding to the Gr2 sub-channel shown in (g) in fig. 13, and the single channel test image corresponding to the Gr3 sub-channel shown in (h) in fig. 13, the average value of the pixel values at the same position is calculated, and each pixel average value in the single channel average value image corresponding to the Gr channel shown in (b) in fig. 14 can be obtained.
For the single channel test image corresponding to the Gb0 sub-channel shown in (i) of fig. 13, the single channel test image corresponding to the Gb1 sub-channel shown in (j) of fig. 13, the single channel test image corresponding to the Gb2 sub-channel shown in (k) of fig. 13, and the single channel test image corresponding to the Gb3 sub-channel shown in (l) of fig. 13, the average value of the pixel values at the same position is calculated, and each pixel average value in the single channel average value image corresponding to the Gb channel shown in (c) of fig. 14 can be obtained.
For the single channel test image corresponding to the B0 sub-channel shown in (m) in fig. 13, the single channel test image corresponding to the B1 sub-channel shown in (n) in fig. 13, the single channel test image corresponding to the B2 sub-channel shown in (o) in fig. 13, and the single channel test image corresponding to the B3 sub-channel shown in (p) in fig. 13, the average value of the pixel values at the same position is calculated, and each pixel average value in the single channel average value image corresponding to the B channel shown in (d) in fig. 14 can be obtained.
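A sketch of S1204 is given below (illustrative only; it assumes the sixteen planes are held in a dictionary keyed by sub-channel name, as in the splitting sketch above). The mean image of each color channel is the element-wise average of its four sub-channel planes.

```python
import numpy as np

def channel_mean_images(planes: dict) -> dict:
    """Average the N = 4 same-color single-channel test images element-wise."""
    groups = {
        "R": ["R0", "R1", "R2", "R3"],
        "Gr": ["Gr0", "Gr1", "Gr2", "Gr3"],
        "Gb": ["Gb0", "Gb1", "Gb2", "Gb3"],
        "B": ["B0", "B1", "B2", "B3"],
    }
    return {
        channel: np.mean([planes[name] for name in names], axis=0)
        for channel, names in groups.items()
    }
```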
And S1205, for each single-channel test image, calculating the ratio of each pixel mean in the corresponding single-channel mean image to the pixel value at the corresponding position in that single-channel test image, so as to obtain the sensitivity calibration sub-matrix.
The calibration parameter in the kth row and jth column of the sensitivity calibration sub-matrix is the ratio of the pixel mean in the kth row and jth column of the single-channel mean image to the pixel value in the kth row and jth column of the single-channel test image corresponding to that single-channel mean image, where k and j are positive integers.
In the case that the number of single-channel test images is H, the number of sensitivity calibration sub-matrices is also H; that is, the first sensitivity calibration matrix includes sensitivity calibration sub-matrices corresponding to the H color sub-channels, and each sensitivity calibration sub-matrix corresponds to one color sub-channel.
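A sketch of S1205 under the ratio convention used in this description (pixel mean divided by pixel value) is given below; the names follow the two previous sketches and are assumptions rather than the patent's notation.

```python
import numpy as np

def sensitivity_sub_matrices(planes: dict, means: dict) -> dict:
    """Calibration parameter = pixel mean / pixel value, per position and per sub-channel."""
    sub_matrices = {}
    for name, plane in planes.items():
        channel = name.rstrip("0123")                 # e.g. "Gr2" -> "Gr"
        sub_matrices[name] = means[channel] / np.maximum(plane, 1e-6)  # avoid division by zero
    return sub_matrices
```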
Illustratively, the ratio of each pixel mean in the single-channel mean image corresponding to the R channel shown in (a) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the R0 sub-channel shown in (a) of fig. 13 is calculated, to obtain the sensitivity calibration matrix corresponding to the R0 sub-channel shown in (a) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the R channel shown in (a) in fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the R1 sub-channel shown in (b) in fig. 13, to obtain the sensitivity calibration matrix corresponding to the R1 sub-channel shown in (b) in fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the R channel shown in (a) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the R2 sub-channel shown in (c) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the R2 sub-channel shown in (c) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the R channel shown in (a) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the R3 sub-channel shown in (d) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the R3 sub-channel shown in (d) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gr channel shown in (b) in FIG. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gr0 sub-channel shown in (e) in FIG. 13 to obtain the sensitivity calibration matrix corresponding to the Gr0 sub-channel shown in (e) in FIG. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gr channel shown in (b) in FIG. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gr1 sub-channel shown in (f) in FIG. 13 to obtain the sensitivity calibration matrix corresponding to the Gr1 sub-channel shown in (f) in FIG. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gr channel shown in (b) in FIG. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gr2 sub-channel shown in (g) in FIG. 13 to obtain the sensitivity calibration matrix corresponding to the Gr2 sub-channel shown in (g) in FIG. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gr channel shown in (b) in fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gr3 sub-channel shown in (h) in fig. 13, to obtain a sensitivity calibration matrix corresponding to the Gr3 sub-channel shown in (h) in fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gb channel shown in (c) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gb0 sub-channel shown in (i) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the Gb0 sub-channel shown in (i) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gb channel shown in (c) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gb1 sub-channel shown in (j) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the Gb1 sub-channel shown in (j) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gb channel shown in (c) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gb2 sub-channel shown in (k) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the Gb2 sub-channel shown in (k) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the Gb channel shown in (c) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the Gb3 sub-channel shown in (l) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the Gb3 sub-channel shown in (l) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the B channel shown in (d) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the B0 sub-channel shown in (m) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the B0 sub-channel shown in (m) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the B channel shown in (d) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the B1 sub-channel shown in (n) of fig. 13, to obtain the sensitivity calibration matrix corresponding to the B1 sub-channel shown in (n) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the B channel shown in (d) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the B2 sub-channel shown in (o) of fig. 13, to obtain a sensitivity calibration matrix corresponding to the B2 sub-channel shown in (o) of fig. 15.
Calculating the ratio of each pixel mean value in the single-channel mean value image corresponding to the B channel shown in (d) of fig. 14 to the pixel value at the corresponding position in the single-channel test image corresponding to the B3 sub-channel shown in (p) of fig. 13, to obtain a sensitivity calibration matrix corresponding to the B3 sub-channel shown in (p) of fig. 15.
Thus, in the case where the N-pixel unified image sensor is the four-pixel unified image sensor shown in fig. 3, the first sensitivity calibration matrix includes sensitivity calibration matrices corresponding to sixteen color sub-channels as shown in fig. 15, that is, H is equal to 16.
For example, each single-channel test image has a size of 16 pixels by 12 pixels, each single-channel mean image also has a size of 16 pixels by 12 pixels, and each sensitivity calibration matrix also has a size of 16 pixels by 12 pixels.
In summary, according to the steps S1201 to S1205 described above, a first sensitivity calibration matrix may be obtained. In the practical application process, if the camera is not provided with the optical anti-shake module or the optical anti-shake module is closed, the first image acquired by the camera can be calibrated by adopting the first sensitivity calibration matrix, so that the responses of all color sub-channels of the second image obtained after calibration under the same color channel are basically consistent, and the pixel difference degree of H color sub-channels in the second image is reduced.
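This excerpt does not spell out how the calibration parameters are applied to each pixel of the first image. One plausible sketch, assuming that each 16×12 sensitivity calibration sub-matrix is bilinearly upsampled to the resolution of the corresponding sub-channel plane and applied as a per-pixel multiplicative gain, is shown below.

```python
import cv2
import numpy as np

def apply_sub_matrix(plane: np.ndarray, sub_matrix: np.ndarray) -> np.ndarray:
    """Calibrate one sub-channel plane of a RAW image with its sensitivity sub-matrix.

    Sketch only: the sub-matrix (e.g. 16x12) is bilinearly upsampled to the plane's
    resolution and used as a per-pixel multiplicative gain.
    """
    height, width = plane.shape
    gain = cv2.resize(sub_matrix.astype(np.float32), (width, height), interpolation=cv2.INTER_LINEAR)
    return plane.astype(np.float32) * gain
```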
FIG. 16 is a flow chart of a calibration process for a plurality of sensitivity calibration matrices according to an embodiment of the present application. Referring to fig. 16, the calibration process may specifically include the following steps:
s1601, under the condition that the focusing position of the lens is a preset focusing position and the optical center of the lens and the optical center of the N-pixel integrated image sensor are sequentially positioned at different relative positions, a plurality of test images are acquired by adopting a camera.
S1602, calculating a plurality of sensitivity calibration matrixes according to the plurality of test images.
After the optical anti-shake module is powered on, the optical anti-shake module enters a command mode, which means that the optical anti-shake module is started to move so as to drive the lens to a fixed optical displacement position and hold it stable there.
For example, the optical anti-shake module may sequentially drive the lens to move to a G0 position shown in (a) of fig. 11, a G1 position shown in (b) of fig. 11, a G2 position shown in (c) of fig. 11, a G3 position shown in (d) of fig. 11, and a G4 position shown in (e) of fig. 11.
Under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens and the optical center of the N-pixel integrated image sensor are set to coincide along the optical axis direction of the camera, the camera is adopted to shoot a uniform light plate so as to obtain a test image, and the steps from S1202 to S1205 are carried out so as to obtain a first sensitivity calibration matrix.
Correspondingly, under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens and the optical center of the N-pixel integrated image sensor are offset by a first distance along a first direction and a second distance along a third direction, a camera is adopted to shoot a uniform light plate so as to obtain a test image, and the steps S1202 to S1205 are carried out so as to obtain a second sensitivity calibration matrix.
Under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens is offset by a first distance along a first direction and a third distance along a fourth direction relative to the optical center of the N-pixel integrated image sensor, a camera is adopted to shoot a uniform light plate to obtain a test image, and the steps from S1202 to S1205 are carried out to obtain a third sensitivity calibration matrix.
Under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens is offset by a fourth distance along the second direction and offset by a third distance along the fourth direction relative to the optical center of the N-pixel integrated image sensor, a camera is adopted to shoot a uniform light plate to obtain a test image, and the steps from S1202 to S1205 are carried out to obtain a fourth sensitivity calibration matrix.
Under the condition that the focusing position of the lens is set at a preset focusing position, and the optical center of the lens is offset by a fourth distance along a second direction and offset by a second distance along a third direction relative to the optical center of the N-pixel integrated image sensor, a camera is adopted to shoot a uniform light plate to obtain a test image, and the steps from S1202 to S1205 are carried out to obtain a fifth sensitivity calibration matrix.
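The calibration flow of fig. 16 can be summarised as a loop over the five optical displacement positions. In the sketch below, move_ois_to, capture_uniform_plate and compute_sensitivity_matrix are hypothetical placeholders for the steps the text describes (driving the optical anti-shake module, shooting the uniform light plate, and running steps S1202 to S1205); they are not real APIs.

```python
def calibrate_all_matrices(positions: dict, move_ois_to, capture_uniform_plate,
                           compute_sensitivity_matrix) -> dict:
    """Produce one sensitivity calibration matrix per calibration position G0..G4.

    The three callables are hypothetical placeholders: move_ois_to(x, y) holds the
    lens at a fixed optical displacement, capture_uniform_plate() returns a RAW test
    image, and compute_sensitivity_matrix(raw) runs steps S1202 to S1205.
    """
    matrices = {}
    for name, (x, y) in positions.items():
        move_ois_to(x, y)                        # command mode: drive lens to a fixed offset
        raw = capture_uniform_plate()            # shoot the uniform light plate
        matrices[name] = compute_sensitivity_matrix(raw)
    return matrices
```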
It should be noted that the camera used in the calibration process of the sensitivity calibration matrices and the camera used to acquire the first image during execution of the image processing method may or may not be the same camera. When they are not the same camera, the type of the camera used in the calibration process may be the same as the type of the camera used to acquire the first image; for example, the two cameras may have the same structural composition.
After the plurality of sensitivity calibration matrices are calibrated in advance according to the mode, the electronic device can determine the sensitivity calibration matrix corresponding to the first image according to the second coordinates and the plurality of sensitivity calibration matrices.
Illustratively, as shown in fig. 17, Q 0 is a first sensitivity calibration matrix, Q 1 is a second sensitivity calibration matrix, Q 2 is a third sensitivity calibration matrix, Q 3 is a fourth sensitivity calibration matrix, and Q 4 is a fifth sensitivity calibration matrix. S n is a second coordinate, and Q n is a sensitivity calibration matrix corresponding to the first image. Thus, the sensitivity calibration matrix corresponding to the first image is related to a plurality of sensitivity calibration matrices among the first, second, third, fourth, and fifth sensitivity calibration matrices in addition to the second coordinates.
In some embodiments, the electronic device decomposes the second coordinate according to the movable direction of the optical anti-shake module to obtain a first coordinate component in the first moving direction and a second coordinate component in the second moving direction, and determines a sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices. The first moving direction and the second moving direction are perpendicular to each other, and the first moving direction and the second moving direction are perpendicular to the optical axis direction of the camera.
The electronic device decomposes the second coordinate into a first moving direction to obtain a first coordinate component S nx in the first moving direction, and decomposes the second coordinate into a second moving direction to obtain a second coordinate component S ny in the second moving direction. Then, a first coordinate component, a second coordinate component and a plurality of sensitivity calibration matrixes are adopted, and a sensitivity calibration matrix corresponding to the first image is calculated in a linear interpolation mode.
Since the first moving direction includes a first direction and a second direction which are opposite to each other, the second moving direction includes a third direction and a fourth direction which are opposite to each other. Thus, the first coordinate component is a value greater than 0 when the second coordinate is offset toward the first direction relative to the optical center of the N-pixel unified image sensor, and the first coordinate component is a value less than 0 when the second coordinate is offset toward the second direction relative to the optical center of the N-pixel unified image sensor. Correspondingly, the second coordinate component is a value greater than 0 when the second coordinate is offset toward the third direction relative to the optical center of the N-pixel unified image sensor, and is a value less than 0 when the second coordinate is offset toward the fourth direction relative to the optical center of the N-pixel unified image sensor.
Therefore, based on the magnitude relation between the first coordinate component and 0 and the magnitude relation between the second coordinate component and 0, the following four cases may exist.
In the first case, in the case where the first coordinate component is greater than or equal to 0 and the second coordinate component is greater than or equal to 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
Wherein Q n is a sensitivity calibration matrix, Q 0 is a first sensitivity calibration matrix, Q 1 is a second sensitivity calibration matrix, Q 2 is a third sensitivity calibration matrix, Q 4 is a fifth sensitivity calibration matrix, S nx is a first coordinate component, S ny is a second coordinate component, S xmax1 is a first distance, and S ymax1 is a second distance.
Therefore, in the case where the first coordinate component is greater than or equal to 0 and the second coordinate component is greater than or equal to 0, the electronic device may determine the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component, the first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix, and the fifth sensitivity calibration matrix.
The first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix and the fifth sensitivity calibration matrix each comprise sensitivity calibration sub-matrices corresponding to the H color sub-channels, and the sensitivity calibration matrix correspondingly comprises first sensitivity calibration sub-matrices corresponding to the H color sub-channels.
Therefore, with the above formula, for each color sub-channel, the calibration parameters at each position in the sensitivity calibration sub-matrices of that color sub-channel included in the first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix and the fifth sensitivity calibration matrix are used to calculate the calibration parameter at the corresponding position in the first sensitivity calibration sub-matrix of that color sub-channel, so as to obtain the sensitivity calibration matrix.
For example, for the R0 sub-channel, the calibration parameter in the kth row and jth column of the first sensitivity calibration sub-matrix corresponding to the R0 sub-channel is calculated with the above formula from the calibration parameters in the kth row and jth column of the sensitivity calibration sub-matrices corresponding to the R0 sub-channel that are included in the first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix and the fifth sensitivity calibration matrix, respectively.
In the case where the size of each sensitivity calibration sub-matrix is 16 pixels by 12 pixels, the size of the first sensitivity calibration sub-matrix is also 16 pixels by 12 pixels.
In the second case, in the case where the first coordinate component is greater than 0 and the second coordinate component is less than 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
wherein Q n is a sensitivity calibration matrix, Q 0 is a first sensitivity calibration matrix, Q 1 is a second sensitivity calibration matrix, Q 2 is a third sensitivity calibration matrix, Q 3 is a fourth sensitivity calibration matrix, S nx is a first coordinate component, S ny is a second coordinate component, S xmax1 is a first distance, and S ymax2 is a third distance.
Therefore, in the case that the first coordinate component is greater than 0 and the second coordinate component is less than 0, the electronic device may determine the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component, the first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix, and the fourth sensitivity calibration matrix.
The first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix and the fourth sensitivity calibration matrix each comprise sensitivity calibration sub-matrices corresponding to the H color sub-channels, and the sensitivity calibration matrix correspondingly comprises first sensitivity calibration sub-matrices corresponding to the H color sub-channels.
Therefore, with the above formula, for each color sub-channel, the calibration parameters at each position in the sensitivity calibration sub-matrices of that color sub-channel included in the first sensitivity calibration matrix, the second sensitivity calibration matrix, the third sensitivity calibration matrix and the fourth sensitivity calibration matrix are used to calculate the calibration parameter at the corresponding position in the first sensitivity calibration sub-matrix of that color sub-channel, so as to obtain the sensitivity calibration matrix.
In the third case, in the case where the first coordinate component is less than or equal to 0 and the second coordinate component is less than or equal to 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
Wherein Q n is a sensitivity calibration matrix, Q 0 is a first sensitivity calibration matrix, Q 2 is a third sensitivity calibration matrix, Q 3 is a fourth sensitivity calibration matrix, Q 4 is a fifth sensitivity calibration matrix, S nx is a first coordinate component, S ny is a second coordinate component, S xmax2 is a fourth distance, and S ymax2 is a third distance.
Therefore, in the case where the first coordinate component is less than or equal to 0 and the second coordinate component is less than or equal to 0, the electronic device may determine the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component, the first sensitivity calibration matrix, the third sensitivity calibration matrix, the fourth sensitivity calibration matrix, and the fifth sensitivity calibration matrix.
The first sensitivity calibration matrix, the third sensitivity calibration matrix, the fourth sensitivity calibration matrix and the fifth sensitivity calibration matrix each comprise sensitivity calibration sub-matrices corresponding to the H color sub-channels, and the sensitivity calibration matrix correspondingly comprises first sensitivity calibration sub-matrices corresponding to the H color sub-channels.
Therefore, with the above formula, for each color sub-channel, the calibration parameters at each position in the sensitivity calibration sub-matrices of that color sub-channel included in the first sensitivity calibration matrix, the third sensitivity calibration matrix, the fourth sensitivity calibration matrix and the fifth sensitivity calibration matrix are used to calculate the calibration parameter at the corresponding position in the first sensitivity calibration sub-matrix of that color sub-channel, so as to obtain the sensitivity calibration matrix.
In the fourth case, in the case where the first coordinate component is smaller than 0 and the second coordinate component is larger than 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
wherein Q n is a sensitivity calibration matrix, Q 0 is a first sensitivity calibration matrix, Q 1 is a second sensitivity calibration matrix, Q 3 is a fourth sensitivity calibration matrix, Q 4 is a fifth sensitivity calibration matrix, S nx is a first coordinate component, S ny is a second coordinate component, S xmax2 is a fourth distance, and S ymax1 is a second distance.
Therefore, in the case that the first coordinate component is smaller than 0 and the second coordinate component is larger than 0, the electronic device may determine the sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component, the first sensitivity calibration matrix, the second sensitivity calibration matrix, the fourth sensitivity calibration matrix, and the fifth sensitivity calibration matrix.
The first sensitivity calibration matrix, the second sensitivity calibration matrix, the fourth sensitivity calibration matrix and the fifth sensitivity calibration matrix each comprise sensitivity calibration sub-matrices corresponding to the H color sub-channels, and the sensitivity calibration matrix correspondingly comprises first sensitivity calibration sub-matrices corresponding to the H color sub-channels.
Therefore, with the above formula, for each color sub-channel, the calibration parameters at each position in the sensitivity calibration sub-matrices of that color sub-channel included in the first sensitivity calibration matrix, the second sensitivity calibration matrix, the fourth sensitivity calibration matrix and the fifth sensitivity calibration matrix are used to calculate the calibration parameter at the corresponding position in the first sensitivity calibration sub-matrix of that color sub-channel, so as to obtain the sensitivity calibration matrix.
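Purely as an illustration of one linear-interpolation scheme that is consistent with the matrices and distances named in the four cases above, and not necessarily the exact formulas of the embodiment, a Python sketch might look as follows; the function and variable names, and the bilinear weighting between the center, edge and corner matrices, are assumptions introduced here.

import numpy as np

def interpolate_sensitivity_matrix(q0, q1, q2, q3, q4, s_nx, s_ny,
                                   s_xmax1, s_ymax1, s_xmax2, s_ymax2):
    # q0..q4: dicts mapping each of the H color sub-channels to a 16x12 array of
    # calibration parameters (the five pre-calibrated sensitivity calibration matrices).
    # The distances are the maximum travels of the lens named in the text.
    qn = {}
    for ch in q0:
        if s_nx >= 0 and s_ny >= 0:            # case 1: uses Q0, Q1, Q2 and Q4
            wx, wy = s_nx / s_xmax1, s_ny / s_ymax1
            edge_x = 0.5 * (q1[ch] + q2[ch])   # assumed estimate at (S_xmax1, 0)
            edge_y = 0.5 * (q1[ch] + q4[ch])   # assumed estimate at (0, S_ymax1)
            corner = q1[ch]
        elif s_nx > 0 and s_ny < 0:            # case 2: uses Q0, Q1, Q2 and Q3
            wx, wy = s_nx / s_xmax1, -s_ny / s_ymax2
            edge_x = 0.5 * (q1[ch] + q2[ch])
            edge_y = 0.5 * (q2[ch] + q3[ch])
            corner = q2[ch]
        elif s_nx <= 0 and s_ny <= 0:          # case 3: uses Q0, Q2, Q3 and Q4
            wx, wy = -s_nx / s_xmax2, -s_ny / s_ymax2
            edge_x = 0.5 * (q3[ch] + q4[ch])
            edge_y = 0.5 * (q2[ch] + q3[ch])
            corner = q3[ch]
        else:                                  # case 4: uses Q0, Q1, Q3 and Q4
            wx, wy = -s_nx / s_xmax2, s_ny / s_ymax1
            edge_x = 0.5 * (q3[ch] + q4[ch])
            edge_y = 0.5 * (q1[ch] + q4[ch])
            corner = q4[ch]
        # Bilinear blend between the center matrix, the two edge estimates and the corner matrix
        qn[ch] = ((1 - wx) * (1 - wy) * q0[ch] + wx * (1 - wy) * edge_x
                  + (1 - wx) * wy * edge_y + wx * wy * corner)
    return qn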
To sum up, according to the steps S804 to S806, the electronic device determines the sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and the plurality of sensitivity calibration matrices.
S807, the electronic device calibrates the first image with the sensitivity calibration matrix to obtain a second image.
After the electronic device calculates the sensitivity calibration matrix corresponding to the first image, it can calibrate the first image with this sensitivity calibration matrix to obtain the second image. This improves the situation in which image quality is degraded because the N same-color pixels receive uneven light when the optical anti-shake module moves, makes the sensitivities of the N same-color pixels in the calibrated second image nearly consistent, and improves the image quality of the second image obtained after calibration.
In some embodiments, the electronic device splits the first image according to the H color sub-channels to obtain H first single-channel images, and adjusts the sizes of the first sensitivity calibration sub-matrices corresponding to the H color sub-channels to obtain second sensitivity calibration sub-matrices corresponding to the H color sub-channels, where the size of each second sensitivity calibration sub-matrix is equal to the size of the first single-channel image. The electronic device then calibrates the corresponding first single-channel images with the second sensitivity calibration sub-matrices corresponding to the H color sub-channels to obtain H second single-channel images, and combines the H second single-channel images to obtain the second image.
In a possible manner, for the same color sub-channel, the electronic device uses each calibration parameter in the second sensitivity calibration sub-matrix to multiply the pixel value at the corresponding position in the first single-channel image to obtain the second single-channel image.
The electronic device may split the first image according to the H color sub-channels to obtain H first single-channel images, where each first single-channel image corresponds to one color sub-channel.
Because the size of the first single-channel image may not be consistent with the size of the first sensitivity calibration sub-matrix, the electronic device may adjust the size of the first sensitivity calibration sub-matrix corresponding to the H color sub-channels included in the sensitivity calibration matrix by adopting a bilinear interpolation manner, so as to obtain a second sensitivity calibration sub-matrix corresponding to the H color sub-channels.
For example, each first single-channel image has a size of 1200 pixels by 900 pixels, and the first sensitivity calibration sub-matrix has a size of 16 pixels by 12 pixels, so the first sensitivity calibration sub-matrix needs to be adjusted from 16 pixels by 12 pixels to 1200 pixels by 900 pixels to obtain the second sensitivity calibration sub-matrix, i.e. the second sensitivity calibration sub-matrix has a size of 1200 pixels by 900 pixels.
And aiming at the same color sub-channel, the electronic equipment multiplies the pixel value at the corresponding position in the first single-channel image by each calibration parameter in the second sensitivity calibration sub-matrix to obtain a second single-channel image. For example, for the same color sub-channel, the electronic device multiplies the pixel value of the kth row and the jth column in the first single-channel image by the calibration parameter of the kth row and the jth column in the second sensitivity calibration sub-matrix to obtain the pixel value of the kth row and the jth column in the second single-channel image.
In this manner, the corresponding second single-channel image is calculated for each of the H color sub-channels, so that H second single-channel images are obtained, each corresponding to one color sub-channel. Finally, the electronic device combines the H second single-channel images to obtain the second image. The second image may also be an image in RAW format.
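As a hedged illustration of this calibration step, a minimal Python sketch follows; the helper names and the use of OpenCV's bilinear resize are assumptions introduced here, and merging the calibrated single-channel images back into the RAW mosaic is omitted because it depends on the sensor's color filter layout.

import numpy as np
import cv2

def calibrate_first_image(first_channels, qn_submatrices):
    # first_channels: dict of the H first single-channel images (e.g. 1200x900 each)
    # qn_submatrices: dict of the H first sensitivity calibration sub-matrices (e.g. 16x12 each)
    second_channels = {}
    for ch, img in first_channels.items():
        h, w = img.shape
        # Resize the sub-matrix to the single-channel image size by bilinear interpolation
        gain = cv2.resize(qn_submatrices[ch].astype(np.float32), (w, h),
                          interpolation=cv2.INTER_LINEAR)
        # Multiply the pixel value at each position by the calibration parameter at the same position
        second_channels[ch] = img.astype(np.float32) * gain
    return second_channels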
In summary, in the embodiment of the application, when the camera performs anti-shake with the optical anti-shake module started, the sensitivity calibration matrix can be calculated according to the first timestamp, the second timestamp, the first coordinate of the position where the optical anti-shake module is located after each movement, and the plurality of sensitivity calibration matrices, so as to calibrate the first image acquired by the camera. This avoids the failure that occurs when the first image is calibrated with only the first sensitivity calibration matrix, which is calibrated at a single relative displacement of the optical centers; it improves the situation in which image quality is degraded because the N same-color pixels receive uneven light when the optical anti-shake module moves, makes the sensitivities of the N same-color pixels in the calibrated second image nearly consistent, and improves the image quality of the second image obtained after calibration.
For ease of understanding, the interaction procedure between the modules involved in the image processing method provided by the embodiment of the present application is described below with reference to fig. 18.
As shown in fig. 18, the electronic device may include a camera application, a camera service, a camera hardware abstraction module, a camera driver, and a camera. The camera includes an optical anti-shake module, a lens 10, and an N-pixel-in-one image sensor; the N-pixel-in-one image sensor includes a microlens array 21 and a pixel array, the microlens array 21 includes a plurality of microlenses 210, each microlens 210 covers N adjacent same-color pixels in the pixel array, and N is an integer greater than 1. Referring to fig. 18, the image processing method may specifically include the following steps:
S1801, the camera application receives a touch operation performed by the user on the camera application icon.
S1802, in response to a touch operation on the camera application icon, the camera application transmits an image preview request to the camera service.
S1803, the camera service sends an image preview request to the camera hardware abstraction module.
S1804, the camera hardware abstraction module sends an image preview request to the camera driver.
S1805, based on the image preview request, the camera driver drives the camera to acquire a first image with the optical anti-shake module started, and to obtain a first timestamp at which acquisition of the first image starts, a second timestamp at which acquisition of the first image ends, and a first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image.
For example, a camera application icon may be displayed on a desktop of the electronic device. When the user wants to use the camera application, the user may perform a touch operation, such as a click operation, on the camera application icon, and the camera application may receive the touch operation performed by the user on the camera application icon.
In response to the touch operation on the camera application icon, the camera application is started and begins to run on the electronic device. After being started, the camera application may send an image preview request to the camera service by invoking a camera access interface in the application framework layer. The image preview request is used to request the first image acquired by the camera when the optical anti-shake module is started, and may also be used to request, but is not limited to, a first timestamp at which acquisition of the first image starts, a second timestamp at which acquisition of the first image ends, the first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image, and the like.
After receiving the image preview request, the camera service sends the image preview request to the camera driver through the camera hardware abstraction module, so that, based on the image preview request, the camera driver can drive the camera to acquire the first image with the optical anti-shake module started, and obtain the first timestamp at which acquisition of the first image starts, the second timestamp at which acquisition of the first image ends, and the first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image.
S1806, the camera sends the first image, the first timestamp, the second timestamp, and each first coordinate to the camera driver.
In some embodiments, a Hall sensor may be disposed in the camera and electrically connected with a driving chip in the camera. The Hall sensor is used for collecting Hall data, and the Hall data can indicate the first coordinate of the position where the optical anti-shake module is located after each movement, that is, the current position information of the optical anti-shake module. The Hall sensor can send the Hall data to the driving chip in the camera, so that the camera can obtain the first coordinate of the position where the optical anti-shake module is located after each movement.
After the camera acquires the first image, the first timestamp, the second timestamp, and each first coordinate, the camera may send the first image, the first timestamp, the second timestamp, and each first coordinate to the camera driver.
S1807, the camera driver sends the first image, the first timestamp, the second timestamp, and each first coordinate to the camera hardware abstraction module.
S1808, the camera hardware abstraction module determines a sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and the plurality of sensitivity calibration matrices.
Therefore, after receiving the first timestamp, the second timestamp, and each first coordinate, the camera hardware abstraction module may determine the sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate, and the plurality of sensitivity calibration matrices, in the manner of steps S804 to S806 described above.
S1809, the camera hardware abstraction module calibrates the first image by adopting a sensitivity calibration matrix to obtain a second image.
S1810, the camera hardware abstraction module transmits the second image to the camera service.
S1811, the camera service transmits the second image to the camera application.
S1812, the camera application displays the second image.
The camera hardware abstraction module may send the second image to the camera application through the camera service after calibrating the first image with the sensitivity calibration matrix to obtain the second image. After receiving the second image, the camera application may display the second image at the preview interface.
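Purely to illustrate the data flow of fig. 18, the processing performed in the camera hardware abstraction module could be sketched as follows; the class and method names are hypothetical and do not correspond to actual Android camera interfaces, and determine_sensitivity_matrix stands for the combination of the earlier sketches.

class CameraHardwareAbstraction:
    def __init__(self, calibration_matrices):
        # Q0..Q4, calibrated in advance and loaded once
        self.calibration_matrices = calibration_matrices

    def on_frame(self, first_image_channels, t_start, t_end, ois_coords):
        # Steps S1808-S1809: derive the sensitivity calibration matrix Qn from the
        # timestamps and the OIS coordinates, then calibrate the H single-channel images.
        qn = determine_sensitivity_matrix(self.calibration_matrices,
                                          t_start, t_end, ois_coords)
        # Second image, subsequently sent up to the camera service and the camera application
        return calibrate_first_image(first_image_channels, qn)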
The image processing method provided by the embodiment of the application can be applied to the camera application shooting scene besides the camera application preview scene.
The camera application receives a touch operation performed by the user on a shooting control and, in response to the touch operation on the shooting control, sends an image shooting request to the camera service. The image shooting request may be used to request the first image acquired by the camera when the optical anti-shake module is started, and may also be used to request, but is not limited to, the first timestamp at which acquisition of the first image starts, the second timestamp at which acquisition of the first image ends, the first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image, and the like. The camera driver drives the camera to acquire the first image with the optical anti-shake module started, and obtains the first timestamp, the second timestamp, and the first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image. The camera sends the first image, the first timestamp, the second timestamp and each first coordinate to the camera driver, and the camera driver sends them to the camera hardware abstraction module. The camera hardware abstraction module determines the sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and the plurality of sensitivity calibration matrices, and calibrates the first image with the sensitivity calibration matrix to obtain the second image. The camera hardware abstraction module sends the second image to the camera service, the camera service sends the second image to the camera application, and the camera application stores the second image.
The image processing method provided by the embodiment of the application can be applied to various scenes such as portrait snapshot, pet snapshot, sports snapshot, security protection detection, medical health detection and the like.
The image processing method provided by the embodiment of the present application is described above with reference to fig. 8 to 18, and the device for executing the method provided by the embodiment of the present application is described below. As shown in fig. 19, fig. 19 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus may be an electronic device in an embodiment of the present application, or a chip system within an electronic device.
As shown in fig. 19, the image processing apparatus 1900 may include a processing unit 1901. The processing unit 1901 is configured to support the image processing apparatus 1900 to perform the above-described processing steps.
Specifically, the processing unit 1901 is configured to: obtain a first image acquired by the camera when the optical anti-shake module is started; obtain a first timestamp at which acquisition of the first image starts and a second timestamp at which acquisition of the first image ends; obtain a first coordinate of the position where the optical anti-shake module is located after each movement during the acquisition of the first image; determine a sensitivity calibration matrix corresponding to the first image according to the first timestamp, the second timestamp, each first coordinate and a plurality of sensitivity calibration matrices, wherein the relative positions of the optical center of the lens and the optical center of the N-pixel integrated image sensor are different when the plurality of sensitivity calibration matrices are calibrated; and calibrate the first image with the sensitivity calibration matrix to obtain a second image.
In one possible implementation, the image processing apparatus 1900 further includes a storage unit 1902. The storage unit 1902 and the processing unit 1901 are connected by a line. The storage unit 1902 may include one or more memories, which may be devices or circuit elements for storing programs or data. The storage unit 1902 may be provided separately and connected to the processing unit 1901 via a communication bus, or may be integrated with the processing unit 1901.
The storage unit 1902 may store computer-executable instructions of the method in the electronic device, so that the processing unit 1901 performs the method in the above embodiments. The storage unit 1902 may be a register, a cache, a random access memory (random access memory, RAM) or the like, and may be integrated with the processing unit 1901; or the storage unit 1902 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, and may be independent of the processing unit 1901.
Fig. 20 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in fig. 20, the chip 2000 includes one or more (including two) processors 2001, communication lines 2002, and communication interfaces 2003, and optionally, the chip 2000 further includes a memory 2004.
In some implementations, the memory 2004 stores elements of executable modules or data structures, or a subset thereof, or an extended set thereof.
The methods described above for the embodiments of the present application may be applied to the processor 2001 or implemented by the processor 2001. The processor 2001 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry of hardware in the processor 2001 or by instructions in the form of software. The processor 2001 may be a general purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate, transistor logic, or a discrete hardware component, and the processor 2001 may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application.
The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a read-only memory, or an electrically erasable programmable read-only memory (EEPROM). The storage medium is located in the memory 2004, and the processor 2001 reads the information in the memory 2004 and performs the steps of the above method in combination with its hardware.
The processor 2001, the memory 2004, and the communication interface 2003 can communicate with each other via a communication line 2002.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
Embodiments of the present application also provide a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. For example, usable media may include magnetic media (e.g., floppy disks, hard disks, or magnetic tapes), optical media (e.g., digital versatile discs (DVD)), or semiconductor media (e.g., solid state disks (SSD)), and the like.
An embodiment of the present application provides an electronic device including a processor and a memory, where the memory is configured to store a computer program, and the processor is configured to execute the computer program to perform the above-described image processing method.
The embodiment of the application provides a chip. The chip comprises a processor for invoking a computer program in a memory to perform the technical solutions in the above embodiments. The principle and technical effects of the present application are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application also provides a computer readable storage medium. The computer readable storage medium stores a computer program or instructions. The computer program or instructions, when executed by a processor, implement the above-described methods. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (CD-ROM), a RAM, a ROM, an EEPROM, or other optical disk storage, magnetic disk storage or other magnetic storage devices. Moreover, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the application.

Claims (17)

1. An image processing method, applied to an electronic device, wherein the electronic device comprises a camera, the camera comprises an optical anti-shake module, a lens and an image sensor, the image sensor comprises a microlens array and a pixel array, the pixel array comprises a plurality of pixel sets, each pixel set comprises pixel units corresponding to M color channels, each pixel unit comprises pixels corresponding to N color sub-channels, the pixels corresponding to the N color sub-channels are N adjacent same-color pixels, M is an integer greater than 1, the microlens array comprises a plurality of microlenses, each microlens covers the N adjacent same-color pixels in the pixel array, and N is an integer greater than 1, and the method comprises the following steps:
the electronic equipment acquires a first image acquired by the camera when the optical anti-shake module is started;
the electronic device obtains a first time stamp when the first image starts to be acquired and a second time stamp when the first image ends to be acquired;
the electronic device obtains a first coordinate of a position where the optical anti-shake module is located after each movement during acquisition of the first image;
the electronic equipment determines the moving times of the optical anti-shake module in the acquisition process of the first image according to the first timestamp and the second timestamp;
The electronic equipment determines second coordinates of centroid positions corresponding to all positions where the optical anti-shake module is located after moving according to the moving times and each first coordinate;
The electronic equipment determines a sensitivity calibration matrix corresponding to the first image according to the second coordinate and a plurality of sensitivity calibration matrices, wherein the relative positions of the optical center of the lens and the optical center of the image sensor are different when the plurality of sensitivity calibration matrices are calibrated;
And the electronic equipment calibrates the first image by adopting the sensitivity calibration matrix to obtain a second image.
2. The method of claim 1, wherein the determining, by the electronic device, a number of movements of the optical anti-shake module during the acquisition of the first image according to the first timestamp and the second timestamp comprises:
the electronic device calculates a time interval between the second timestamp and the first timestamp;
and the electronic equipment determines the ratio between the time interval and the moving frequency of the optical anti-shake module as the moving times of the optical anti-shake module in the acquisition process of the first image.
3. The method of claim 1, wherein the determining, by the electronic device, the second coordinates of the centroid position corresponding to each position where the optical anti-shake module is located after the movement according to the number of movements and each of the first coordinates includes:
the electronic device calculates the second coordinate by the following formula:
S n = (P 1 + P 2 + ... + P n) / n
wherein S n is the second coordinate, n is the number of movements, and P i is the first coordinate of the position where the optical anti-shake module is located after the ith movement.
4. The method of claim 1, wherein the electronic device determining a sensitivity calibration matrix corresponding to the first image based on the second coordinate and the plurality of sensitivity calibration matrices comprises:
The electronic equipment decomposes the second coordinate according to the movable direction of the optical anti-shake module to obtain a first coordinate component in the first moving direction and a second coordinate component in the second moving direction;
The electronic equipment determines a sensitivity calibration matrix corresponding to the first image according to the first coordinate component, the second coordinate component and the plurality of sensitivity calibration matrices;
The first moving direction and the second moving direction are perpendicular to each other, and the first moving direction and the second moving direction are perpendicular to the optical axis direction of the camera.
5. The method of claim 4, wherein the plurality of sensitivity calibration matrices comprises a first sensitivity calibration matrix, a second sensitivity calibration matrix, a third sensitivity calibration matrix, a fourth sensitivity calibration matrix, and a fifth sensitivity calibration matrix, wherein the first direction of movement comprises a first direction and a second direction that are opposite to each other, and wherein the second direction of movement comprises a third direction and a fourth direction that are opposite to each other;
When the first sensitivity calibration matrix is calibrated, the optical center of the lens and the optical center of the image sensor coincide along the optical axis direction of the camera;
When the second sensitivity calibration matrix is calibrated, the optical center of the lens is offset by a first distance along the first direction and by a second distance along the third direction relative to the optical center of the image sensor, the first distance is the maximum distance that the lens can move along the first direction, and the second distance is the maximum distance that the lens can move along the third direction;
When the third sensitivity calibration matrix is calibrated, the optical center of the lens is offset by the first distance along the first direction and by a third distance along the fourth direction relative to the optical center of the image sensor, and the third distance is the maximum distance that the lens can move along the fourth direction;
When the fourth sensitivity calibration matrix is calibrated, the optical center of the lens is offset by a fourth distance along the second direction and by the third distance along the fourth direction relative to the optical center of the image sensor, and the fourth distance is the maximum distance that the lens can move along the second direction;
When the fifth sensitivity calibration matrix is calibrated, the optical center of the lens is offset by the fourth distance along the second direction and by the second distance along the third direction relative to the optical center of the image sensor.
6. The method of claim 5, wherein the electronic device determining a sensitivity calibration matrix corresponding to the first image from the first coordinate component, the second coordinate component, and the plurality of sensitivity calibration matrices comprises:
In the case where the first coordinate component is greater than or equal to 0 and the second coordinate component is greater than or equal to 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
wherein Q n is the sensitivity calibration matrix, Q 0 is the first sensitivity calibration matrix, Q 1 is the second sensitivity calibration matrix, Q 2 is the third sensitivity calibration matrix, Q 4 is the fifth sensitivity calibration matrix, S nx is the first coordinate component, S ny is the second coordinate component, S xmax1 is the first distance, and S ymax1 is the second distance.
7. The method of claim 5, wherein the electronic device determining a sensitivity calibration matrix corresponding to the first image from the first coordinate component, the second coordinate component, and the plurality of sensitivity calibration matrices comprises:
In the case where the first coordinate component is greater than 0 and the second coordinate component is less than 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
Wherein Q n is the sensitivity calibration matrix, Q 0 is the first sensitivity calibration matrix, Q 1 is the second sensitivity calibration matrix, Q 2 is the third sensitivity calibration matrix, Q 3 is the fourth sensitivity calibration matrix, S nx is the first coordinate component, S ny is the second coordinate component, S xmax1 is the first distance, and S ymax2 is the third distance.
8. The method of claim 5, wherein the electronic device determining a sensitivity calibration matrix corresponding to the first image from the first coordinate component, the second coordinate component, and the plurality of sensitivity calibration matrices comprises:
in the case where the first coordinate component is less than or equal to 0 and the second coordinate component is less than or equal to 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
Wherein Q n is the sensitivity calibration matrix, Q 0 is the first sensitivity calibration matrix, Q 2 is the third sensitivity calibration matrix, Q 3 is the fourth sensitivity calibration matrix, Q 4 is the fifth sensitivity calibration matrix, S nx is the first coordinate component, S ny is the second coordinate component, S xmax2 is the fourth distance, and S ymax2 is the third distance.
9. The method of claim 5, wherein the electronic device determining a sensitivity calibration matrix corresponding to the first image from the first coordinate component, the second coordinate component, and the plurality of sensitivity calibration matrices comprises:
In the case where the first coordinate component is less than 0 and the second coordinate component is greater than 0, the electronic device calculates the sensitivity calibration matrix by the following formula:
Wherein Q n is the sensitivity calibration matrix, Q 0 is the first sensitivity calibration matrix, Q 1 is the second sensitivity calibration matrix, Q 3 is the fourth sensitivity calibration matrix, Q 4 is the fifth sensitivity calibration matrix, S nx is the first coordinate component, S ny is the second coordinate component, S xmax2 is the fourth distance, and S ymax1 is the second distance.
10. The method according to any one of claims 1 to 9, wherein the sensitivity calibration matrix comprises a first sensitivity calibration sub-matrix corresponding to H color sub-channels, the H being equal to the product of the M and the N.
11. The method of claim 10, wherein the electronic device uses the sensitivity calibration matrix to calibrate the first image to obtain a second image, comprising:
The electronic equipment splits the first image according to H color sub-channels to obtain H first single-channel images;
the electronic equipment respectively adjusts the sizes of the first sensitivity calibration submatrices corresponding to the H color sub-channels to obtain second sensitivity calibration submatrices corresponding to the H color sub-channels, wherein the size of each second sensitivity calibration submatrix is equal to the size of the first single-channel image;
The electronic equipment adopts second sensitivity calibration submatrices corresponding to the H color subchannels to calibrate the corresponding first single-channel images respectively to obtain H second single-channel images;
and the electronic equipment combines the H second single-channel images to obtain the second images.
12. The method of claim 11, wherein the electronic device performs calibration on the corresponding first single-channel images by using second sensitivity calibration sub-matrices corresponding to the H color sub-channels, to obtain H second single-channel images, including:
and aiming at the same color sub-channel, the electronic equipment multiplies the pixel value at the corresponding position in the first single-channel image by each calibration parameter in the second sensitivity calibration sub-matrix to obtain the second single-channel image.
13. The method of claim 10, further comprising, prior to the electronic device determining a sensitivity calibration matrix corresponding to the first image based on the first timestamp, the second timestamp, each of the first coordinates, and a plurality of sensitivity calibration matrices:
the electronic equipment acquires a plurality of sensitivity calibration matrixes calibrated in advance, wherein each sensitivity calibration matrix comprises sensitivity calibration matrixes corresponding to H color sub-channels;
Each calibration parameter in the sensitivity calibration matrix is calculated according to each pixel mean value in a single-channel mean value image and a pixel value at a corresponding position in each single-channel test image corresponding to the single-channel mean value image, wherein each pixel mean value in the single-channel mean value image is an average value of pixel values at the same position in N single-channel test images belonging to the same color channel in H single-channel test images;
The H single-channel test images are obtained by reducing the size of the test images and splitting the test images according to H color sub-channels, wherein the test images are acquired under the condition that the focusing position of the lens is a preset focusing position and the relative position of the optical center of the lens and the optical center of the image sensor is a preset position.
14. The method of claim 13, wherein each calibration parameter in the sensitivity calibration matrix is a ratio between each pixel mean value in the single-channel mean value image and the pixel value at the corresponding position in each single-channel test image corresponding to the single-channel mean value image.
15. An electronic device comprising a memory for storing a computer program and a processor for invoking the computer program to perform the image processing method of any of claims 1 to 14.
16. A computer-readable storage medium, in which a computer program or instructions is stored which, when executed, implements the image processing method according to any one of claims 1 to 14.
17. A computer program product comprising a computer program which, when run, causes a computer to perform the image processing method as claimed in any one of claims 1 to 14.
CN202311733874.3A 2023-12-15 2023-12-15 Image processing method, electronic device, storage medium and program product Active CN118509720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311733874.3A CN118509720B (en) 2023-12-15 2023-12-15 Image processing method, electronic device, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311733874.3A CN118509720B (en) 2023-12-15 2023-12-15 Image processing method, electronic device, storage medium and program product

Publications (2)

Publication Number Publication Date
CN118509720A CN118509720A (en) 2024-08-16
CN118509720B true CN118509720B (en) 2024-12-20

Family

ID=92242018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311733874.3A Active CN118509720B (en) 2023-12-15 2023-12-15 Image processing method, electronic device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN118509720B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217621A (en) * 2007-01-05 2008-07-09 安奇逻辑股份有限公司 Camera module, electronic device with same, and their manufacturing method
CN102907103A (en) * 2010-06-02 2013-01-30 索尼公司 Image processing device, image processing method and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5680797B2 (en) * 2012-06-07 2015-03-04 富士フイルム株式会社 Imaging apparatus, image processing apparatus, and image processing method
CN109194877B (en) * 2018-10-31 2021-03-02 Oppo广东移动通信有限公司 Image compensation method and apparatus, computer-readable storage medium, and electronic device
CN113129222A (en) * 2020-01-13 2021-07-16 华为技术有限公司 Color shading correction method, terminal device and computer-readable storage medium
JPWO2021261107A1 (en) * 2020-06-25 2021-12-30


Also Published As

Publication number Publication date
CN118509720A (en) 2024-08-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee after: Honor Terminal Co.,Ltd.

Country or region after: China

Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong

Patentee before: Honor Device Co.,Ltd.

Country or region before: China