
CN111027489A - Image processing method, terminal and storage medium - Google Patents


Info

Publication number
CN111027489A
Authority
CN
China
Prior art keywords
infrared
classification model
terminal
image
preset
Prior art date
Legal status
Granted
Application number
CN201911271535.1A
Other languages
Chinese (zh)
Other versions
CN111027489B (en)
Inventor
王琳 (Wang Lin)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911271535.1A priority Critical patent/CN111027489B/en
Publication of CN111027489A publication Critical patent/CN111027489A/en
Priority to PCT/CN2020/135630 priority patent/WO2021115419A1/en
Application granted granted Critical
Publication of CN111027489B publication Critical patent/CN111027489B/en
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/10: Image acquisition
    • G06V 10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/143: Sensing or illuminating at different wavelengths

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of this application disclose an image processing method, a terminal, and a storage medium. The image processing method includes: detecting, through a color temperature sensor, first infrared information, second infrared information, and a visible light component corresponding to a current image, where the first infrared information and the second infrared information are acquired by the color temperature sensor over two different transceiving bands; generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on a preset classification model, obtaining a scene prediction result according to a brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, and performing image processing according to the scene prediction result. The preset classification model is used to classify a plurality of scenes according to their different spectral energies.

Description

Image processing method, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image processing method, a terminal and a storage medium.
Background
When image processing is performed, determining the scene of the current image, for example an indoor or outdoor scene, helps achieve a better image restoration effect. That is, scene prediction has become one of the important functions a terminal needs when processing images. At present, when predicting a scene, a terminal can either deploy additional auxiliary devices to acquire specific data for scene recognition, or discriminate the scene by means of image processing.
However, scene prediction that relies on additional auxiliary equipment is costly in the deployment stage and requires complex preparation, which greatly limits its universality and usability and makes it inconvenient. Current methods that predict scenes through image processing have high computational complexity, which lowers prediction efficiency, and poor prediction accuracy, which in turn degrades the image processing effect.
Disclosure of Invention
The embodiments of the application provide an image processing method, a terminal, and a storage medium, which can reduce the complexity of prediction and thereby improve prediction efficiency, while also improving the accuracy of scene prediction and thus the image processing effect.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the image processing method is applied to a first terminal, and the method includes:
detecting first infrared information, second infrared information and visible light components corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component;
based on a preset classification model, obtaining a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, and performing image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In a second aspect, an embodiment of the present application provides an image processing method, where the image processing method is applied to a second terminal, and the method includes:
dividing a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energies;
training a preset loss function by using the training data to obtain an initial classification model;
obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In a third aspect, an embodiment of the present application provides a first terminal, where the first terminal includes a detection unit, a generation unit, and a first obtaining unit, wherein:
the detection unit is used for detecting first infrared information, second infrared information and visible light components corresponding to the current image through the color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
the generating unit is used for generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component;
the first obtaining unit is configured to obtain a scene prediction result according to a brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on a preset classification model, so as to perform image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In a fourth aspect, an embodiment of the present application provides a second terminal, where the second terminal includes a dividing unit, a second obtaining unit, and a processing unit, wherein:
the dividing unit is used for dividing a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energies;
the second obtaining unit is used for training a preset loss function by using the training data to obtain an initial classification model; obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In a fifth aspect, the present application provides a first terminal, where the first terminal includes a first processor, and a first memory storing instructions executable by the first processor, and when the instructions are executed by the first processor, the method for processing an image as described above is implemented.
In a sixth aspect, the present application provides a second terminal, where the second terminal includes a second processor, and a second memory storing instructions executable by the second processor, and when the instructions are executed by the second processor, the second terminal implements the image processing method as described above.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a program, and is applied to a first terminal and a second terminal, where the program, when executed by a processor, implements the image processing method as described above.
The embodiments of the application provide an image processing method, a terminal, and a storage medium. The first terminal detects first infrared information, second infrared information, and a visible light component corresponding to a current image through a color temperature sensor, the first infrared information and the second infrared information being acquired by the color temperature sensor over two different transceiving bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on a preset classification model, obtains a scene prediction result according to a brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, and performs image processing according to the scene prediction result; the preset classification model is used to classify a plurality of scenes according to different spectral energies. The second terminal divides a pre-stored image library to obtain training data and test data, where the pre-stored image library stores a plurality of images of different scenes and the different scenes correspond to different spectral energies; trains a preset loss function with the training data to obtain an initial classification model; and obtains the preset classification model according to the test data and the initial classification model. In this way, the image processing method provided by the embodiments of the application can use the color temperature sensor to collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image, determine the two corresponding infrared characteristic values from them, and, combined with the brightness parameter corresponding to the current image, realize scene prediction of the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of the images in the pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images, and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and thus improves the image processing effect.
Drawings
FIG. 1 is a first schematic implementation flowchart of the image processing method;
FIG. 2 is a first schematic diagram of the position of the color temperature sensor;
FIG. 3 is a second schematic diagram of the position of the color temperature sensor;
FIG. 4 is a schematic diagram of a current color temperature sensor arrangement;
FIG. 5 is a third schematic diagram of the position of the color temperature sensor;
FIG. 6 is a fourth schematic diagram of the position of the color temperature sensor;
FIG. 7 is a graphical representation of the spectral response of the color temperature sensor;
FIG. 8 is a schematic diagram of different detection channels;
FIG. 9 is a diagram of a time domain signal before time-frequency transformation;
FIG. 10 is a schematic diagram of a frequency domain signal after time-frequency transformation;
FIG. 11 is a second schematic implementation flowchart of the image processing method;
FIG. 12 is a schematic diagram of the spectral power distribution of a fluorescent lamp;
FIG. 13 is a schematic diagram of the spectral power distribution of sunlight;
FIG. 14 is a schematic diagram of the spectral power distribution of an incandescent lamp;
FIG. 15 is a first schematic structural diagram of the first terminal;
FIG. 16 is a second schematic structural diagram of the first terminal;
FIG. 17 is a first schematic structural diagram of the second terminal;
FIG. 18 is a second schematic structural diagram of the second terminal.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant application and are not limiting of the application. It should be noted that, for the convenience of description, only the parts related to the related applications are shown in the drawings.
There are many schemes by which a terminal can perform scene prediction. Specifically, there are methods based on external devices, such as wireless network (Wi-Fi), light-sensing, and infrared devices, and there are methods based on the image itself. The image-based methods can be further divided into traditional threshold-classification methods and machine-learning-based methods.
In different scenes, the terminal may process images differently. For example, in an indoor scene, the Automatic Exposure (AE) algorithm needs to constantly consider enabling a power-frequency anti-flicker strategy; for a low-brightness outdoor scene, a more appropriate Automatic White Balance (AWB) algorithm needs to be selected to restore the image. For example, in the AWB algorithm, if the current light source can be determined to be an outdoor light source, the AWB color temperature can simply be set to the position of D55 and the picture will obtain a good color restoration effect.
Therefore, a good scene prediction method can help the AWB algorithm easily achieve a pleasing restoration effect, and can reduce the restoration difficulty of the AWB algorithm for both low-brightness outdoor scenes and high-brightness indoor scenes. Likewise, in the AE algorithm, if the scene corresponding to the current image can be accurately determined to be outdoor, the anti-flicker problem does not need to be considered at all, which provides more flexibility.
At present, when scene prediction is performed with image processing methods, feature extraction on the one hand relies on a full-size image (for example, 4000 × 3000) and applies multi-scale filtering to extract a large number of structural features, whereas the Image Signal Processing (ISP) pipeline of a portable terminal such as a mobile phone can only provide a small-size image (for example, 120 × 90); the accuracy of features obtained by applying the full-size filtering method to such images drops sharply, which reduces the accuracy of scene prediction. On the other hand, these methods extract high-dimensional structural features from the current image, and the number of features is usually large, making real-time processing difficult on a portable terminal such as a mobile phone and thus lowering the efficiency of scene prediction.
Further, in terms of practical effect, complicated structural features lower the prediction accuracy of scene prediction when facing irregularly divided sky, solid-color scenes, and indoor artificial structures.
In addition, a scene recognition algorithm based on YUV data sits after the demosaic stage in the ISP pipeline, so the scene it finally observes tends to be of little use to AE, AWB, and Auto Focus (AF) at the front end of the ISP because of this time-domain lag.
In summary, existing methods that perform scene prediction through image processing have high computational complexity, reduced prediction efficiency, and poor prediction accuracy. To address these drawbacks, an embodiment of the present application provides an image processing method that first uses a color temperature sensor to collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image, then uses them to determine two corresponding infrared characteristic values, and, combined with the brightness parameter corresponding to the current image, realizes scene prediction of the current image based on a preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of images in a pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and thus improves the image processing effect.
An embodiment of the present application provides an image processing method, fig. 1 is a schematic implementation flow diagram of the image processing method, as shown in fig. 1, in the embodiment of the present application, a method for a first terminal to perform image processing may include the following steps:
Step 101, detecting first infrared information, second infrared information and visible light components corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different transceiving bands.
In an embodiment of the application, the first terminal may first obtain the first infrared information, the second infrared information, and the visible light component through detection of a configured color temperature sensor. The first infrared information and the second infrared information may be different infrared data respectively acquired by the color temperature sensor using two different transceiving bands.
It should be noted that, in the embodiment of the present application, the first terminal may be any device having communication and storage functions, for example: tablet computers, mobile phones, electronic readers, remote controllers, Personal Computers (PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices, and the like.
Specifically, the first terminal may be a device that performs image processing by using a preset classification model, and the first terminal may also be a device that performs learning training on the preset classification model.
Further, in the embodiment of the present application, the first terminal may be provided with a shooting device for image acquisition, and specifically, the first terminal may be provided with at least one front camera and at least one rear camera.
It is understood that, in the embodiment of the present application, the current image may be obtained by the first terminal being captured by the provided capturing device.
It should be noted that, in the embodiment of the present application, the first terminal may further be provided with a color temperature sensor. Specifically, the color temperature sensor may be arranged on the side of the front camera or on the side of the rear camera. Since the terminal has a front camera on its front cover and a rear camera on its rear cover, the color temperature sensor may be disposed in a first region of the front cover, the first region being an area adjacent to the front camera; alternatively, the color temperature sensor may be disposed in a second region of the rear cover, the second region being an area adjacent to the rear camera.
For example, in the present application, fig. 2 is a first schematic position diagram of a color temperature sensor, and fig. 3 is a second schematic position diagram of the color temperature sensor, where as shown in fig. 2, the color temperature sensor is disposed on the left side of the front camera of the first terminal, and as shown in fig. 3, the color temperature sensor is disposed on the lower side of the rear camera of the first terminal.
At present, a common placement for the color temperature sensor is inside the notch area of a full-face screen. Specifically, FIG. 4 is a schematic diagram of the current color temperature sensor arrangement; as shown in FIG. 4, the terminal places the color temperature sensor under the ink in the notch area. However, a color temperature sensor placed in the notch area requires the terminal to open a large ink hole, which has a large influence on the appearance of the Industrial Design (ID).
In contrast, in the embodiment of the present application, the top of the first terminal may be provided with a slit, and the first terminal may dispose the color temperature sensor in this top slit. FIG. 5 is a third schematic diagram of the position of the color temperature sensor, and FIG. 6 is a fourth schematic diagram of the position of the color temperature sensor. As shown in FIG. 5 and FIG. 6, the color temperature sensor is disposed in the slit at the top of the first terminal; whether viewed from the front (FIG. 5) or the back (FIG. 6), it does not affect the appearance of the first terminal, and a color temperature sensor disposed in the slit does not require the first terminal to open an ink hole.
Further, in this embodiment of the application, the first terminal may detect environmental parameters corresponding to the current image through the configured color temperature sensor. Specifically, for the current image, the color temperature sensor may detect parameters such as red (R), green (G), blue (B), visible light (C), full-spectrum wide band (WB), correlated color temperature (CCT), and flicker frequency (FD) on two channels, FD1 and FD2.
FIG. 7 is a graphical representation of the spectral response of the color temperature sensor; as shown in FIG. 7, the spectral response curves corresponding to the R, G, B, C, WB, FD1, and FD2 channels vary with wavelength.
It should be noted that, in the embodiment of the present application, the first infrared information and the second infrared information are different, and specifically, the first infrared information may be used to measure the intensity of an infrared band between 800nm and 900nm, and the second infrared information may be used to measure the intensity of an infrared band between 950nm and 1000 nm.
Further, in the embodiment of the application, the color temperature sensor configured at the first terminal may detect infrared light in an environment corresponding to a current image through different transceiving bands, so that the first infrared information and the second infrared information may be obtained. Fig. 8 is a schematic diagram of different detection channels, and as shown in fig. 8, the first terminal can perform infrared band detection by using two frequencies, 50Hz and 60Hz respectively.
It should be noted that, in the embodiment of the present application, the first terminal may obtain, through the color temperature sensor, first time domain information obtained by detecting the first infrared channel, that is, obtain the first infrared information; meanwhile, the first terminal can also obtain second time domain information detected and obtained by the second infrared channel through the color temperature sensor, namely obtain second infrared information.
Accordingly, the first terminal may also obtain a component of the visible light band, i.e., obtain a visible light component, through the color temperature sensor.
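As an illustrative aside, not part of the patent text, the channel readout described above can be held in a simple data structure. A minimal Python sketch, where the channel names follow the description and the type layout is an assumption:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ColorTempReading:
        """One readout of the color temperature sensor (channels as in FIG. 7)."""
        r: float            # red channel
        g: float            # green channel
        b: float            # blue channel
        c: float            # visible light component
        wb: float           # full-spectrum (wide band) channel
        fd1: List[float]    # time-domain samples of infrared channel FD1
        fd2: List[float]    # time-domain samples of infrared channel FD2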
Step 102, generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component.
In the embodiment of the application, after the first terminal detects and obtains the first infrared information, the second infrared information and the visible light component by using the color temperature sensor, the first terminal may directly generate the first infrared characteristic value and the second infrared characteristic value corresponding to the current image according to the first infrared information, the second infrared information and the visible light component.
It should be noted that, in the embodiment of the present application, when the first terminal generates the first infrared characteristic value and the second infrared characteristic value, the first terminal may first perform time-frequency transformation on the first infrared information, so as to obtain a first direct current component corresponding to the first infrared information, and may also perform time-frequency transformation on the second infrared information, so as to obtain a second direct current component corresponding to the second infrared information. Fig. 9 is a schematic diagram of a time domain signal before time-frequency transformation, and fig. 10 is a schematic diagram of a frequency domain signal after time-frequency transformation, as shown in fig. 9 and fig. 10, after time-frequency transformation processing is performed, infrared information of a time domain can be converted into a corresponding direct current component.
Further, in the embodiment of the present application, after the first terminal performs time-frequency transform processing on the first infrared information and the second infrared information respectively to obtain the first direct current component and the second direct current component, the first direct current component, the second direct current component, and the visible light component may be used to further generate the first infrared characteristic value and the second infrared characteristic value.
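A minimal sketch of this time-frequency step, assuming each FD channel arrives as an array of time-domain samples; the text does not name a specific transform, so numpy's FFT is used purely for illustration:

    import numpy as np

    def dc_component(samples: np.ndarray) -> float:
        """Return the DC (zero-frequency) component of a time-domain signal.

        The zero-frequency FFT bin is the sum of the samples, so dividing by
        the sample count gives the mean level of the signal.
        """
        spectrum = np.fft.rfft(samples)
        return float(spectrum[0].real) / len(samples)

    # Dc(FD1) and Dc(FD2), as used by formulas (1) and (2) further below:
    # fd1_dc = dc_component(np.asarray(reading.fd1))
    # fd2_dc = dc_component(np.asarray(reading.fd2))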
It should be noted that, in the embodiment of the present application, the first terminal may calculate to obtain the first infrared characteristic value by using the second direct current component and the visible light component, and meanwhile, the first terminal may also calculate to obtain the second infrared characteristic value by using the first direct current component and the second direct current component.
Further, in an embodiment of the present application, the first infrared characteristic may be used to measure the intensity of the infrared band from 800nm to 900nm, and the second infrared characteristic may be used to measure the intensity of the infrared band from 950nm to 1000 nm.
Step 103, based on a preset classification model, obtaining a scene prediction result according to a brightness parameter, a first infrared characteristic value, and a second infrared characteristic value corresponding to the current image, and performing image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In the embodiment of the application, after the first terminal generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component, a scene prediction result corresponding to the current image may be obtained by using a brightness parameter, the first infrared characteristic value and the second infrared characteristic value corresponding to the current image based on a preset classification model, and then the current image may be subjected to image processing according to the scene prediction result.
It should be noted that, in the embodiment of the present application, the preset classification model may be used to classify a plurality of scenes according to different spectral energies, so as to obtain the types of the scenes. Specifically, the preset classification model may be a classifier obtained by training the first terminal based on the infrared features and the brightness features. That is, the first terminal may distinguish the outdoor scene from the indoor scene according to the difference of the spectral energy by using the preset classification model.
It can be understood that, in the embodiment of the present application, the spectral energy distributions of different light sources such as fluorescent lamps, sunlight, and incandescent lamps show that the energy of the 800nm-900nm infrared band in an indoor fluorescent-lamp scene is very weak, whereas in sunlight it is quite strong and begins to attenuate sharply after 950nm; by contrast, an incandescent lamp shows strong energy across the whole 800nm-1000nm infrared band. Therefore, distinctive feature information can be obtained directly from the infrared band information detected by the color temperature sensor. That is, in the present application, the first terminal may use the infrared information obtained by the color temperature sensor as feature information for scene prediction based on a preset classification model, according to the different spectral energies.
Further, in the embodiment of the present application, the preset classification model for scene prediction may be a logistic regression model, a bayesian classifier, ensemble learning, a decision tree, a Support Vector Machine (SVM) model, or other typical classification models.
For example, in an embodiment of the application, the first terminal may train the preset classification model based on parameters, such as infrared characteristic data and luminance characteristic data, corresponding to a pre-stored image library, so that the preset classification model obtained through training may output a classification parameter corresponding to the current image based on a luminance parameter, a first infrared characteristic value, and a second infrared characteristic value corresponding to the current image.
Further, in an embodiment of the application, before obtaining the scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on the preset classification model, the first terminal needs to obtain the brightness parameter corresponding to the current image. Specifically, the first terminal may read the corresponding attribute parameters from the attribute information of the current image and then determine the brightness parameter from those attribute parameters.
It should be noted that, in the embodiment of the present application, the attribute parameter may be a specific parameter corresponding to an image obtained by a shooting device when the first terminal shoots a current image. Specifically, the attribute parameters may include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter.
Specifically, in the embodiment of the present application, the aperture value parameter Av is a quantized expression of the aperture value F_number (the aperture is usually expressed as an F value), and may be written as Av = log(F_number); the shutter speed parameter Tv is a quantized expression of the shutter speed, which is usually expressed in the fractional form 1/Shutter_Speed, and may be written as Tv = log(1/Shutter_Speed); the sensitivity parameter Sv is a quantized expression of the sensitivity (ISO), and may be written as Sv = log(ISO).
Further, in the embodiments of the present application, the AE algorithm generally adjusts the brightness of an image by adjusting the aperture size, shutter speed, and sensitivity. The Av value under outdoor natural light is larger than the Av value in the room, the Tv value under outdoor natural light is larger than the Tv value in the room, and the Sv value under outdoor natural light is smaller than the Sv value in the room.
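These quantized parameters follow directly from the Exif values mentioned below. A sketch assuming base-10 logarithms (the text writes log without a base) and treating the shutter speed as the exposure time in seconds:

    import math

    def exposure_parameters(f_number: float, exposure_time_s: float, iso: float):
        """Quantized exposure parameters: Av = log(F_number),
        Tv = log(1/Shutter_Speed), Sv = log(ISO)."""
        av = math.log10(f_number)
        tv = math.log10(1.0 / exposure_time_s)
        sv = math.log10(iso)
        return av, tv, sv

    # Hypothetical example: f/1.8, 1/100 s, ISO 400
    av, tv, sv = exposure_parameters(1.8, 1.0 / 100.0, 400.0)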
Specifically, in the embodiment of the present application, the attribute information is created for each image captured by the shooting device configured on the first terminal and is used to store the attributes and shooting data of the recorded image. That is, the attribute information includes the attribute parameters corresponding to the current image as well as the shooting data.
For example, the first terminal reads attribute parameters such as Av, Tv, and Sv corresponding to the current image from a pre-stored Exchangeable image file format (Exif).
It is understood that, in the embodiment of the present application, the aperture value parameter, the shutter speed parameter, and the sensitivity parameter may effectively reflect the brightness in the scene where the current image is located, and therefore, the brightness parameter may be further determined according to the aperture value parameter, the shutter speed parameter, and the sensitivity parameter, so as to predict the scene corresponding to the current image.
It should be noted that, in the embodiment of the present application, after acquiring the attribute parameter corresponding to the current image, the first terminal may perform normalization processing on the attribute parameter, and then obtain the luminance parameter.
In the embodiments of the present application, normalization is a dimensionless processing technique that converts absolute physical values into relative values. It has become an effective way to simplify calculation and reduce magnitude.
Further, in the embodiment of the application, since the preset classification model is trained by the first terminal on parameters of the pre-stored image library such as the infrared feature data and the brightness feature data, the first terminal needs to obtain the brightness parameter corresponding to the current image first when using the preset classification model for scene prediction.
It should be noted that, in the embodiment of the present application, although the aperture value parameter, the shutter speed parameter, and the sensitivity parameter are the specific attribute parameters of the shooting device when the first terminal captures the current image, their values differ greatly across scenes; for example, the Av value under outdoor natural light is greater than the indoor Av value, the Tv value under outdoor natural light is greater than the indoor Tv value, and the Sv value under outdoor natural light is smaller than the indoor Sv value. Therefore, when performing scene prediction on the current image, the first terminal may normalize the aperture value parameter, the shutter speed parameter, and the sensitivity parameter corresponding to the current image, and then input the normalized values into the preset classification model as the brightness feature information, as in the sketch below.
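A minimal normalization sketch, continuing the exposure sketch above; min-max scaling and the range bounds are assumptions, since the text only says the three parameters are normalized:

    def normalize(value: float, lo: float, hi: float) -> float:
        """Min-max normalize a parameter into [0, 1] (scheme assumed)."""
        return (value - lo) / (hi - lo)

    # Hypothetical per-parameter ranges, e.g. collected over the training set
    av_n = normalize(av, lo=0.0, hi=1.5)
    tv_n = normalize(tv, lo=-1.0, hi=4.0)
    sv_n = normalize(sv, lo=1.5, hi=4.0)
    luminance_features = [av_n, tv_n, sv_n]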
It should be noted that, in the embodiment of the present application, since the preset classification model is learned from both infrared feature data and brightness feature data, the first terminal needs the infrared parameters of the current image in addition to its brightness parameter when performing scene prediction with the preset classification model. The first terminal therefore combines the brightness parameter with the first infrared characteristic value and the second infrared characteristic value, which represent the infrared parameters, to obtain the classification parameter corresponding to the current image. The classification parameter is used for predicting the scene.
It can be understood that, in the embodiment of the application, when the first terminal obtains the scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on the preset classification model, and performs image processing according to the scene prediction result, the first terminal may first obtain the classification parameter based on the preset classification model, and then determine the scene prediction result corresponding to the current image according to the classification parameter.
It should be noted that, in the embodiment of the present application, after the first terminal outputs the classification parameter corresponding to the current image based on the preset classification model, the first terminal may directly determine the scene prediction result corresponding to the current image by using the classification parameter. Specifically, the first terminal may perform scene prediction by using the classification parameter, and obtain a scene prediction result. The scene prediction result may be an indoor scene or an outdoor scene.
It can be understood that, in the embodiment of the present application, when the first terminal performs scene prediction by using the classification parameter, when the classification parameter belongs to a first preset value range, the first terminal may consider a scene prediction result as an indoor scene; when the classification parameter belongs to the second preset value range, the first terminal may consider the scene prediction result as an outdoor scene.
For example, in the present application, the terminal may be provided with a first preset value range and a second preset value range corresponding to the preset classification model, wherein the first preset value range and the second preset value range are not overlapped. For example, the first preset numerical range may be set to (-20, 0), and the second preset numerical range may be set to (0, 33).
It should be noted that, in the embodiment of the present application, the first preset value range and the second preset value range are set corresponding to the preset classification model, that is, the first preset value range and the second preset value range set by the terminal may also be different for different preset classification models, and therefore, the values of the first preset value range and the second preset value range are not specifically limited in the present application.
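Putting the preceding paragraphs together, the decision step might look like the following sketch; the ranges (-20, 0) and (0, 33) are the example values quoted above, and the handling of values outside both ranges is an assumption:

    def predict_scene(classification_param: float,
                      indoor_range=(-20.0, 0.0),
                      outdoor_range=(0.0, 33.0)) -> str:
        """Map the classifier output to a scene label.

        The two ranges must not overlap and depend on the trained model.
        """
        if indoor_range[0] < classification_param < indoor_range[1]:
            return "indoor"
        if outdoor_range[0] < classification_param < outdoor_range[1]:
            return "outdoor"
        return "unknown"  # outside both ranges (handling assumed)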
Further, in an embodiment of the present application, after determining a scene prediction result corresponding to the current image according to the classification parameter, the first terminal may further process the current image by using the scene prediction result of the current image. Specifically, the first terminal may perform white balance processing, brightness adjustment processing, and the like on the current image using the scene prediction result.
For example, in the embodiment of the present application, if the scene prediction result is an outdoor scene, the color temperature and the color deviation value duv can be set directly when automatic white balance is applied to the current image, so that a fairly ideal white balance effect and a better-quality image can be obtained. For example, when the scene prediction result is used to perform AWB processing on a large-area solid-color scene, the processing parameters are R/G = 1.000 and B/G = 1.008, whereas without the scene prediction result the processing parameters are R/G = 0.9712 and B/G = 1.0594.
That is to say, in the application, an accurate scene prediction result is very important for the AWB algorithm: when the scene is determined to be outdoor, the AWB algorithm can set the color temperature to around 5000 K and the color deviation value to 0.001-0.005, so that a fairly ideal white balance effect can be obtained. This yields a better and more ideal processing effect for outdoor scenes that lack a sky reference under low brightness and for large-area solid-color scenes.
Illustratively, in the embodiment of the present application, if the scene prediction result is an outdoor scene, the AE algorithm does not need to consider the influence of strobing when adjusting the brightness of the current image with the scene prediction result; instead, motion blur can be suppressed directly by reducing the exposure time, so that the adjusted image avoids the blur problem. In other words, for an outdoor scene, brightness adjustment using the scene prediction result can directly shorten the exposure time to suppress motion blur, whereas without the scene prediction result the influence of strobing would have to be considered.
According to the image processing method provided by the embodiment of the application, the first terminal detects first infrared information, second infrared information, and a visible light component corresponding to a current image through a color temperature sensor, the first infrared information and the second infrared information being acquired by the color temperature sensor over two different transceiving bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on a preset classification model, obtains a scene prediction result according to a brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, and performs image processing according to the scene prediction result; the preset classification model is used to classify a plurality of scenes according to different spectral energies. In this way, the method can use the color temperature sensor to collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image, determine the two corresponding infrared characteristic values from them, and, combined with the brightness parameter corresponding to the current image, realize scene prediction of the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of images in a pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and thus improves the image processing effect.
Based on the foregoing embodiment, in an embodiment of the application, when the first terminal obtains the scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on the preset classification model, it may input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model and obtain the classification parameter as output.
In the embodiment of the application, after the first terminal generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component, and obtains the brightness parameter, it can input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model and obtain the corresponding classification parameter as output.
Further, in the embodiment of the application, the first terminal may use luminance parameters such as the normalized aperture value parameter, the normalized shutter speed parameter, and the normalized sensitivity parameter as luminance features corresponding to the current image, and use the first infrared feature value and the second infrared feature value as infrared features corresponding to the current image, that is, the first terminal inputs the luminance features and the infrared features corresponding to the current image into the preset classification model, and may output the classification parameters representing the scene type of the current image.
In the embodiment of the application, further, when the first terminal generates the first infrared characteristic value and the second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component, the first terminal may specifically perform time-frequency conversion processing on the first infrared information and the second infrared information, respectively, so as to obtain a first direct current component corresponding to the first infrared information and a second direct current component corresponding to the second infrared information; then, the first direct current component, the second direct current component and the visible light component can be used for further generating a first infrared characteristic value and a second infrared characteristic value.
It should be noted that, in the embodiment of the present application, when the first terminal generates the first infrared characteristic value, the first infrared characteristic value IR1 may be obtained by calculation based on the second direct current component Dc (FD2) and the visible light component C according to the following formula (1):
IR1=(Dc(FD2)-C)/Dc(FD2) (1)
it should be noted that, in the embodiment of the present application, when the first terminal generates the second infrared characteristic value, the second infrared characteristic value IR2 may be obtained by calculation based on the first direct current component Dc (FD1) and the second direct current component Dc (FD2) according to the following formula (2):
IR2=(Dc(FD1)-Dc(FD2))/Dc(FD2) (2)
the operator of Dc represents the direct current component of the corresponding channel, FD1DC is Dc (FD1), FD2DC is Dc (FD 2).
Thus, according to the image processing method provided by the embodiment of the application, the color temperature sensor can be used to collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from them and, combined with the brightness parameter corresponding to the current image, scene prediction of the current image is realized based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared feature data and brightness feature data of images in a pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images and then, based on that model, predicts the scene of the current image from its infrared and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and thus improves the image processing effect.
Yet another embodiment of the present application provides an image processing method, fig. 11 is a schematic implementation flow diagram of the image processing method, and as shown in fig. 11, the method for the second terminal to perform image processing may include the following steps:
Step 201, dividing a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energies.
In the embodiment of the application, the second terminal may first perform division processing on the pre-stored image library, so as to obtain training data and test data. Specifically, the pre-stored image library may store a plurality of images of different scenes, wherein the plurality of images of different scenes in the pre-stored image library correspond to different spectral energies.
Further, in the embodiments of the present application, the second terminal may be any device having communication and storage functions. For example: tablet computers, mobile phones, electronic readers, remote controllers, Personal Computers (PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices, and the like.
Specifically, the second terminal may be a device that performs learning training on a preset classification model, where the second terminal may also be a device that performs image processing by using the preset classification model. That is, in the present application, the first terminal and the second terminal may be the same device.
It should be noted that, in the embodiment of the present application, the pre-stored image library may be used for training and testing the preset classification model.
Further, in an embodiment of the present application, the pre-stored image library may include a plurality of images of indoor scenes and a plurality of images of outdoor scenes. Further, in the application, the terminal can randomly divide the images of different scenes in the pre-stored image library, so that training data and test data can be obtained. The training data and the test data are completely different, that is, the data corresponding to one image in the pre-stored image library can only be one of the training data or the test data.
It should be noted that, in the embodiment of the present application, when the terminal obtains the training data and the test data by dividing the pre-stored image library, the terminal may first divide the images of different scenes in the pre-stored image library into the training image and the test image. Specifically, in the embodiment of the application, when the second terminal divides the pre-stored image library, it is necessary to follow a principle that the training image and the test image are not coincident, that is, any one image in the pre-stored image library can only be one of the training image or the test image.
Illustratively, the second terminal stores a pre-stored image library containing 1024 images of indoor scenes and 1134 images of outdoor scenes; when training the preset classification model, the second terminal can randomly extract 80% of the images from the pre-stored image library as training images and 20% as test images, as in the sketch below.
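A sketch of the 80/20 split in this example, keeping the training and test images disjoint as required; representing the library as a list of file paths is an assumption:

    import random

    def split_library(image_paths, train_fraction=0.8, seed=0):
        """Randomly split a pre-stored image library into disjoint
        training and test sets (any image lands in exactly one set)."""
        rng = random.Random(seed)
        shuffled = list(image_paths)   # copy, leaving the library untouched
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        return shuffled[:cut], shuffled[cut:]

    # e.g. 1024 indoor images + 1134 outdoor images, as in the example above
    # train_imgs, test_imgs = split_library(indoor_paths + outdoor_paths)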
Further, in the embodiment of the application, after the second terminal divides the pre-stored image library into the training image and the test image, the training data may be generated according to the first infrared characteristic data and the first brightness characteristic data corresponding to the training image, and meanwhile, the test data may be generated according to the second infrared characteristic data and the second brightness characteristic data corresponding to the test image.
It should be noted that, in the embodiment of the present application, when the second terminal performs the training of the preset classification model, it needs to combine the luminance information and the infrared information of the image, and therefore, the training data includes the infrared information and the luminance information corresponding to the training image, that is, the first infrared feature data and the first luminance feature data; meanwhile, the test data includes infrared information and luminance information corresponding to the test image, i.e., second infrared characteristic data and second luminance characteristic data.
Further, in an embodiment of the present application, the first infrared feature data may include two different infrared direct current components corresponding to the training image. Accordingly, the second infrared characteristic data may include two different infrared direct current components corresponding to the test image.
Further, in an embodiment of the present application, the first luminance characteristic data may include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter corresponding to the training image. Accordingly, the second luminance characteristic data may include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter corresponding to the test image.
Therefore, in the embodiment of the application, the second terminal requires five pieces of feature information when training the preset classification model: two different infrared direct current components, and three parameters characterizing brightness, namely the aperture value parameter, the shutter speed parameter, and the sensitivity parameter.
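A minimal sketch of assembling these five features into one sample; the flat ordering is an assumption of this sketch, since the patent fixes only the feature set:

```python
def build_feature_vector(ir_dc_1, ir_dc_2, aperture, shutter_speed, sensitivity):
    """Assemble one training sample from the two infrared direct current
    components and the three brightness-related parameters. The ordering
    is an assumption; the patent fixes only the feature set."""
    return [ir_dc_1, ir_dc_2, aperture, shutter_speed, sensitivity]
```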
Step 202, training the preset loss function by using the training data to obtain an initial classification model.
In the embodiment of the application, after dividing the pre-stored image library to obtain the training data and the test data, the second terminal may train the preset loss function using the training data, so as to obtain the initial classification model.
It should be noted that, in the embodiment of the present application, the second terminal may train the preset classification model using typical classification models such as a logistic regression model, a Bayesian classifier, ensemble learning, a decision tree, or an SVM model. For example, when the second terminal trains using only the SVM model, the preset loss function may be the hinge loss shown in the following formula (3):
y = max(0, 1 − x)        (3)
where y characterizes the hinge loss and x characterizes the functional margin, i.e., x = t(w·x + b), where t denotes the true class label of the sample. Specifically, in the application, when the second terminal trains based on the above formula (3), w and b need to be solved on the premise of minimizing the hinge loss, so that the preset classification model can be obtained.
It can be understood that, in the embodiment of the present application, when the second terminal trains the preset loss function by using the training data, the preset loss function may be trained according to the first infrared characteristic data and the first luminance characteristic data, so that the initial classification model may be obtained.
Further, in the embodiment of the present application, when the second terminal trains the initial classification model using the training data, since the training data contain five pieces of feature information, the second terminal may choose to train the initial classification model with a linear kernel when selecting training parameters, specifically with a step size of 0.01 and a gamma of 60000.
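A hedged sketch of such hinge-loss training, using scikit-learn's SGDClassifier as a stand-in linear SVM with the constant step size of 0.01 from the embodiment; since this model is purely linear, the gamma parameter mentioned above has no direct counterpart here and is omitted, and the training arrays are hypothetical:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical training arrays: 5 features per sample
# (two infrared DC components + three brightness parameters).
X_train = np.random.rand(200, 5)
y_train = np.random.choice([-1, 1], size=200)  # -1 indoor, +1 outdoor (assumed labels)

# Linear SVM trained by minimizing the hinge loss of formula (3),
# with the constant step size 0.01 mentioned in the embodiment.
clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.01, max_iter=1000)
clf.fit(X_train, y_train)

w, b = clf.coef_[0], clf.intercept_[0]  # parameters that minimize the hinge loss
```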
Step 203, obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In the embodiment of the application, after the second terminal trains the preset loss function using the training data to obtain the initial classification model, the second terminal may further obtain the preset classification model according to the test data and the initial classification model.
It should be noted that, in the embodiment of the present application, the preset classification model may be used to classify a plurality of scenes according to different spectral energies, so as to obtain the scene type. Specifically, the preset classification model may be a classifier obtained through training based on the infrared features and the brightness features. That is, using the preset classification model, the first terminal may distinguish an outdoor scene from an indoor scene according to the difference in spectral energy.
Further, in the embodiments of the present application, after the second terminal completes training of the initial classification model based on the training data, the initial classification model may be tested with the test data, so that the preset classification model can be obtained.
It should be noted that, in the embodiment of the present application, when the second terminal obtains the preset classification model according to the test data and the initial classification model, the second terminal may first test the initial classification model by using the second infrared characteristic data and the second luminance characteristic data, so as to obtain a test result; and then, the initial classification model can be corrected according to the test result, and finally, the preset classification model can be obtained.
It can be understood that, in the embodiment of the present application, the test result may be an accuracy parameter. Specifically, when the second terminal performs test processing on the initial classification model according to the test data, it may obtain the accuracy parameter corresponding to the test data; if the accuracy parameter is smaller than a preset accuracy threshold, the second terminal may adjust the initial classification model according to the test data, thereby obtaining the preset classification model.
Therefore, in the embodiment of the application, the second terminal can feed the test data into the trained initial classification model for testing and verify the accuracy of the model, obtaining the accuracy parameter corresponding to the test data; it can then, according to the accuracy parameter, feed the misjudged test data into the initial classification model again for fine-tuning, which improves the generalization of the initial classification model and finally yields the preset classification model.
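A sketch of this test-and-fine-tune procedure, assuming the SGDClassifier from the previous sketch; the threshold value and the use of partial_fit for fine-tuning are assumptions, not specified by the patent:

```python
import numpy as np

def test_and_fine_tune(clf, X_test, y_test, accuracy_threshold=0.95):
    """Test the initial classification model and, if its accuracy parameter
    falls below a preset accuracy threshold, feed the misjudged test samples
    back in for fine-tuning."""
    predictions = clf.predict(X_test)
    accuracy = float(np.mean(predictions == y_test))  # accuracy parameter
    if accuracy < accuracy_threshold:
        wrong = predictions != y_test
        if wrong.any():
            clf.partial_fit(X_test[wrong], y_test[wrong])  # fine-tune on errors
    return clf, accuracy
```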
For example, in the present application, the second terminal may repeatedly train preset classification models over different numbers of rounds based on the pre-stored image library. Specifically, for different rounds, the training data and test data obtained by the second terminal's division differ, and so do the final results. Table 1 shows statistics of the test results: as shown in Table 1, preset classification models obtained from different training data and test data differ in per-scene prediction accuracy and in comprehensive accuracy.
TABLE 1

Number of test rounds | Indoor accuracy | Outdoor accuracy | Comprehensive accuracy
1                     | 96.41%          | 96.36%           | 96.38%
2                     | 95.52%          | 96.87%           | 96.19%
3                     | 96.21%          | 96.73%           | 96.47%
4                     | 96.79%          | 96.12%           | 96.45%
5                     | 96.33%          | 96.38%           | 96.35%
Average               | 96.25%          | 96.49%           | 96.36%
Therefore, after repeatedly training preset classification models over different rounds based on the pre-stored image library and obtaining different preset classification models, the second terminal can select the preset classification model with the best accuracy for image processing.
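A sketch of such multi-round training and model selection; each round's data tuple and the selection by comprehensive accuracy are illustrative assumptions consistent with Table 1:

```python
from sklearn.linear_model import SGDClassifier

def select_best_model(rounds):
    """Train one model per round on differently divided data and keep the
    model with the best comprehensive accuracy (cf. Table 1). Each round is
    a hypothetical tuple (X_train, y_train, X_test, y_test)."""
    best_model, best_accuracy = None, 0.0
    for X_train, y_train, X_test, y_test in rounds:
        model = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.01)
        model.fit(X_train, y_train)
        accuracy = float((model.predict(X_test) == y_test).mean())
        if accuracy > best_accuracy:
            best_model, best_accuracy = model, accuracy
    return best_model
```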
In this application, light in the spectrum from 380 nm to 780 nm is detectable by the human eye and is referred to as the visible light band. The region beyond 800 nm is usually referred to as the infrared band and is imperceptible to the human eye. Fig. 12 is a schematic diagram of the spectral energy distribution of a fluorescent lamp, Fig. 13 is a schematic diagram of the spectral energy distribution of sunlight, and Fig. 14 is a schematic diagram of the spectral energy distribution of an incandescent lamp. As shown in Figs. 12, 13, and 14, the spectral energy distributions of these different light sources show that the energy in the 800 nm to 900 nm infrared band is very weak in an indoor fluorescent-lamp scene, whereas in sunlight the 800 nm to 900 nm infrared band carries considerable energy, which attenuates sharply beyond 950 nm; by contrast, the energy of an incandescent lamp grows stronger across the 800 nm to 1000 nm infrared band. Therefore, the infrared band information detected directly by the color temperature sensor yields distinctive characteristic information. That is to say, in the present application, the second terminal may use the infrared information obtained by the color temperature sensor as the feature information for training the preset classification model, and, correspondingly, the terminal may use the infrared information obtained by the color temperature sensor as the feature information for scene prediction based on the preset classification model.
It should be noted that, in the embodiment of the present application, when generating the preset classification model, the terminal trains it based on parameters corresponding to the pre-stored image library, such as the infrared characteristic data and the luminance characteristic data; therefore, when performing image processing with the trained preset classification model, the terminal may determine the scene type of the current image using the luminance parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image.
That is, in the embodiment of the present application, whether in the process of generating the preset classification model or in the process of using the preset classification model, the feature information of the image required by the terminal includes both the corresponding infrared feature and the corresponding brightness feature.
According to the image processing method provided by the embodiment of the application, the second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores a plurality of images of different scenes and the different scenes correspond to different spectral energies; trains a preset loss function using the training data to obtain an initial classification model; and obtains a preset classification model according to the test data and the initial classification model. Therefore, with the image processing method provided by the embodiment of the application, the color temperature sensor can collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from them and combined with the brightness parameter corresponding to the current image, so that scene prediction of the current image is realized based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared characteristic data and brightness characteristic data of the images in the pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images, and then, based on the preset classification model, predicts the scene of the current image from its infrared features and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and improves the image processing effect.
Based on the foregoing embodiments, in yet another embodiment of the present application, fig. 15 is a schematic diagram of a first terminal, and as shown in fig. 15, the first terminal 1 according to the present application may include a detecting unit 11, a generating unit 12, a first obtaining unit 13, and a processing unit 14.
The detection unit 11 is configured to detect first infrared information, second infrared information, and a visible light component corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
the generating unit 12 is configured to generate a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component;
the first obtaining unit 13 is configured to obtain a scene prediction result according to a brightness parameter corresponding to the current image, the first infrared characteristic value, and the second infrared characteristic value based on a preset classification model, so as to perform image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
Further, in an embodiment of the present application, the first obtaining unit 13 is specifically configured to input the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value into the preset classification model, and output the classification parameter; when the classification parameter belongs to a first preset value range, determining that the scene prediction result is an indoor scene; when the classification parameter belongs to a second preset value range, determining that the scene prediction result is an outdoor scene; wherein the first predetermined range of values and the second predetermined range of values are not coincident.
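A minimal sketch of this range-based decision, assuming the SVM decision value serves as the classification parameter and that the two non-overlapping value ranges are the negative and non-negative reals; the patent does not fix the concrete ranges:

```python
def predict_scene(clf, feature_vector):
    """Map the classification parameter output for one sample onto two
    non-overlapping value ranges. Here clf is a fitted linear classifier;
    treating negative scores as indoor and non-negative scores as outdoor
    is an assumption of this sketch."""
    score = clf.decision_function([feature_vector])[0]  # classification parameter
    return "indoor" if score < 0 else "outdoor"
```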
Further, in an embodiment of the present application, the generating unit 12 is specifically configured to perform time-frequency transform processing on the first infrared information to obtain a first direct current component; performing time-frequency transformation processing on the second infrared information to obtain a second direct current component; determining the first infrared characteristic value according to the second direct current component and the visible light component; and determining the second infrared characteristic value according to the first direct current component and the second direct current component.
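A sketch of this feature generation under stated assumptions: the time-frequency transform is taken to be an FFT whose zero-frequency bin gives the direct current component, and the two feature values are formed as ratios, which the patent does not spell out:

```python
import numpy as np

def infrared_feature_values(ir_samples_1, ir_samples_2, visible_component):
    """Take the DC (zero-frequency) bin of an FFT of each infrared sample
    sequence as its direct current component, then derive the two feature
    values from the component pairs named in the embodiment."""
    dc_1 = np.abs(np.fft.fft(ir_samples_1)[0])  # first direct current component
    dc_2 = np.abs(np.fft.fft(ir_samples_2)[0])  # second direct current component
    eps = 1e-9                                  # guard against division by zero
    first_feature = dc_2 / (visible_component + eps)  # from DC2 and visible light
    second_feature = dc_1 / (dc_2 + eps)              # from DC1 and DC2
    return first_feature, second_feature
```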
Further, in an embodiment of the present application, the first obtaining unit 13 is further configured to, before the scene prediction result is obtained according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on the preset classification model and image processing is performed according to the scene prediction result, read the attribute parameters corresponding to the current image; and perform normalization processing on the attribute parameters to obtain the brightness parameter.
Further, in the embodiment of the present application, the attribute parameters include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter.
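A sketch of one possible normalization of these three attribute parameters into a single brightness parameter; the exposure-value style combination and the clamp into [0, 1] are purely assumptions of this sketch, since the patent does not fix the normalization formula:

```python
import math

def brightness_parameter(aperture, shutter_speed, iso):
    """Collapse aperture value, shutter speed (in seconds) and sensitivity
    into one normalized brightness parameter via an exposure-value style
    combination (an assumption, not the patent's formula)."""
    ev = math.log2(aperture ** 2 / shutter_speed) - math.log2(iso / 100.0)
    return min(max((ev + 5.0) / 25.0, 0.0), 1.0)  # squash typical EV range into [0, 1]
```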
Further, in an embodiment of the present application, the processing unit 14 is specifically configured to perform automatic white balance processing on the current image by using the scene prediction result, so as to obtain a white-balanced image.
Further, in an embodiment of the present application, the processing unit 14 is further specifically configured to perform brightness adjustment on the current image by using the scene prediction result, so as to obtain an adjusted image.
Further, in the embodiment of the present application, the first terminal is provided with a front camera on its front cover and a rear camera on its rear cover. The color temperature sensor is disposed in a first region of the front cover, where the first region characterizes a region adjacent to the front camera; alternatively, the color temperature sensor is disposed in a second region of the rear cover, where the second region characterizes a region adjacent to the rear camera.
Further, in the embodiment of the present application, a slit is disposed at a top portion of the first terminal, and the color temperature sensor is disposed in the slit.
Fig. 16 is a schematic diagram illustrating a composition structure of the first terminal. As shown in fig. 16, the first terminal 1 according to the embodiment of the present application may further include a first processor 15 and a first memory 16 storing instructions executable by the first processor 15; further, the first terminal 1 may further include a first communication interface 17 and a first bus 18 for connecting the first processor 15, the first memory 16, and the first communication interface 17.
In an embodiment of the present application, the first processor 15 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor functions may also be another device, which is not specifically limited in the embodiments of the present application. The first terminal 1 may further include a first memory 16, which may be connected to the first processor 15, where the first memory 16 is configured to store executable program code including computer operating instructions; the first memory 16 may include a high-speed RAM memory and may further include a non-volatile memory, e.g., at least two disk memories.
In the embodiment of the present application, the first bus 18 is used to connect the first communication interface 17, the first processor 15, and the first memory 16, and to enable intercommunication among these devices.
In an embodiment of the present application, the first memory 16 is used for storing instructions and data.
Further, in an embodiment of the present application, the first processor 15 is configured to detect first infrared information, second infrared information, and a visible light component corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands; generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component; based on a preset classification model, obtaining a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, and performing image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
In practical applications, the first memory 16 may be a volatile memory (volatile memory), such as a Random-Access Memory (RAM); or a non-volatile memory (non-volatile memory), such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the first processor 15.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
According to the first terminal provided by the embodiment of the application, the first terminal detects the first infrared information, the second infrared information, and the visible light component corresponding to the current image through the color temperature sensor, where the first infrared information and the second infrared information are respectively acquired by the color temperature sensor using two different receiving and transmitting wave bands; generates a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information, and the visible light component; and, based on a preset classification model, obtains a scene prediction result according to the brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image, so as to perform image processing according to the scene prediction result, where the preset classification model is used for classifying a plurality of scenes according to different spectral energies. In this way, the terminal collects, via the color temperature sensor, the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image, determines the two corresponding infrared characteristic values from them, and combines these with the brightness parameter corresponding to the current image to realize scene prediction of the current image based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared characteristic data and brightness characteristic data of the images in the pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images, and then, based on the preset classification model, predicts the scene of the current image from its infrared features and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and improves the image processing effect.
Based on the foregoing embodiment, in yet another embodiment of the present application, fig. 17 is a schematic diagram of a first composition structure of the second terminal, and as shown in fig. 17, the second terminal 2 provided in this embodiment of the present application may include a dividing unit 21 and a second obtaining unit 22.
The dividing unit 21 is configured to divide a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energy;
the second obtaining unit 22 is configured to train a preset loss function by using the training data to obtain an initial classification model; obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
Further, in an embodiment of the present application, the dividing unit 21 is specifically configured to divide the plurality of images into training images and test images; generate the training data according to the first infrared characteristic data and first brightness characteristic data corresponding to the training images; and generate the test data according to the second infrared characteristic data and second brightness characteristic data corresponding to the test images.
Further, in an embodiment of the present application, the second obtaining unit 22 is specifically configured to train the preset loss function according to the first infrared feature data and the first brightness feature data, so as to obtain the initial classification model.
Further, in an embodiment of the present application, the second obtaining unit 22 is further specifically configured to test the initial classification model by using the second infrared characteristic data and the second luminance characteristic data, so as to obtain a test result; and correcting the initial classification model according to the test result to obtain the preset classification model.
Fig. 18 is a schematic diagram illustrating a composition structure of the second terminal. As shown in fig. 18, the second terminal 2 according to the embodiment of the present application may further include a second processor 23 and a second memory 24 storing instructions executable by the second processor 23; further, the second terminal 2 may further include a second communication interface 25 and a second bus 26 for connecting the second processor 23, the second memory 24, and the second communication interface 25.
In an embodiment of the present application, the second terminal 2 may further include a second memory 24, which may be connected to the second processor 23, where the second memory 24 is configured to store executable program code including computer operation instructions; the second memory 24 may include a high-speed RAM memory and may further include a non-volatile memory, such as at least two disk memories.
In the embodiment of the present application, the second bus 26 is used to connect the second communication interface 25, the second processor 23, and the second memory 24, and to enable intercommunication among these devices.
In an embodiment of the present application, the second memory 24 is used for storing instructions and data.
Further, in an embodiment of the present application, the second processor 23 is configured to divide a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energy; training a preset loss function by using the training data to obtain an initial classification model; obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
According to the second terminal provided by the embodiment of the application, the second terminal divides the pre-stored image library to obtain training data and test data, where the pre-stored image library stores a plurality of images of different scenes and the different scenes correspond to different spectral energies; trains a preset loss function using the training data to obtain an initial classification model; and obtains a preset classification model according to the test data and the initial classification model. In this way, the color temperature sensor can collect the visible light component and two different pieces of infrared information in the spectrum corresponding to the current image; the two corresponding infrared characteristic values are then determined from them and combined with the brightness parameter corresponding to the current image, so that scene prediction of the current image is realized based on the preset classification model, where the preset classification model is obtained by training and testing on the infrared characteristic data and brightness characteristic data of the images in the pre-stored image library. That is to say, in the application, the terminal trains the preset classification model using the infrared features and brightness features of images, and then, based on the preset classification model, predicts the scene of the current image from its infrared features and brightness features, so that image processing can be performed according to the scene prediction result; this reduces prediction complexity, improves prediction efficiency, improves the accuracy of scene prediction, and improves the image processing effect.
An embodiment of the present application provides a computer-readable storage medium on which a program is stored, which when executed by a processor implements the image processing method as described above.
Specifically, the program instructions corresponding to the image processing method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive. When the program instructions in the storage medium corresponding to the image processing method are read or executed by an electronic device, the method includes the following steps:
detecting first infrared information, second infrared information and visible light components corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component;
based on a preset classification model, obtaining a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, and performing image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
When program instructions in a storage medium corresponding to an image processing method are read or executed by an electronic device, the method further comprises the steps of:
dividing a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energy;
training a preset loss function by using the training data to obtain an initial classification model;
obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (18)

1. An image processing method applied to a first terminal, the method comprising:
detecting first infrared information, second infrared information and visible light components corresponding to a current image through a color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component;
based on a preset classification model, obtaining a scene prediction result according to the brightness parameter corresponding to the current image, the first infrared characteristic value and the second infrared characteristic value, and performing image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
2. The method according to claim 1, wherein the obtaining a scene prediction result according to the brightness parameter, the first infrared feature value, and the second infrared feature value corresponding to the current image based on a preset classification model, so as to perform image processing according to the scene prediction result comprises:
inputting the brightness parameter, the first infrared characteristic value and the second infrared characteristic value into the preset classification model, and outputting a classification parameter;
when the classification parameter belongs to a first preset value range, determining that the scene prediction result is an indoor scene;
when the classification parameter belongs to a second preset value range, determining that the scene prediction result is an outdoor scene; wherein the first predetermined range of values and the second predetermined range of values are not coincident.
3. The method of claim 1, wherein generating a first infrared signature value and a second infrared signature value from the first infrared information, the second infrared information, and the visible light component comprises:
performing time-frequency transformation processing on the first infrared information to obtain a first direct current component; performing time-frequency transformation processing on the second infrared information to obtain a second direct current component;
determining the first infrared characteristic value according to the second direct current component and the visible light component;
and determining the second infrared characteristic value according to the first direct current component and the second direct current component.
4. The method according to claim 1, wherein before obtaining a scene prediction result according to the brightness parameter, the first infrared feature value, and the second infrared feature value corresponding to the current image based on a preset classification model, and performing image processing according to the scene prediction result, the method further comprises:
reading attribute parameters corresponding to the current image;
and carrying out normalization processing on the attribute parameters to obtain the brightness parameters.
5. The method according to claim 4, wherein the attribute parameters include an aperture value parameter, a shutter speed parameter, and a sensitivity parameter.
6. The method of claim 1, wherein the image processing according to the scene prediction result comprises:
and carrying out automatic white balance processing on the current image by using the scene prediction result to obtain a white-balanced image.
7. The method of claim 1, wherein the image processing according to the scene prediction result comprises:
and utilizing the scene prediction result to adjust the brightness of the current image to obtain an adjusted image.
8. The method of claim 1, wherein the first terminal is provided with a front camera on a front cover and a rear camera on a rear cover,
the color temperature sensor is disposed in a first region of the front cover; wherein the first region characterizes a region adjacent to the front camera;
or,
the color temperature sensor is arranged in a second area of the rear cover; wherein the second region characterizes a region adjacent to the rear camera.
9. The method of claim 1, wherein a top portion of the first terminal is provided with a slit,
the color temperature sensor is disposed in the slit.
10. An image processing method applied to a second terminal, the method comprising:
dividing a pre-stored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energy;
training a preset loss function by using the training data to obtain an initial classification model;
obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
11. The method according to claim 10, wherein the dividing the pre-stored image library to obtain training data and test data comprises:
dividing the plurality of images into training images and test images;
generating the training data according to first infrared characteristic data and first brightness characteristic data corresponding to the training image;
and generating the test data according to the second infrared characteristic data and the second brightness characteristic data corresponding to the test image.
12. The method of claim 11, wherein the training a predetermined loss function using the training data to obtain an initial classification model comprises:
and training the preset loss function according to the first infrared characteristic data and the first brightness characteristic data to obtain the initial classification model.
13. The method of claim 11, wherein obtaining a predetermined classification model based on the test data and the initial classification model comprises:
testing the initial classification model by using the second infrared characteristic data and the second brightness characteristic data to obtain a test result;
and correcting the initial classification model according to the test result to obtain the preset classification model.
14. A first terminal, characterized in that the first terminal comprises: a detection unit, a generation unit, a first acquisition unit,
the detection unit is used for detecting first infrared information, second infrared information and visible light components corresponding to the current image through the color temperature sensor; the first infrared information and the second infrared information are respectively acquired by the color temperature sensor by using two different receiving and transmitting wave bands;
the generating unit is used for generating a first infrared characteristic value and a second infrared characteristic value according to the first infrared information, the second infrared information and the visible light component;
the first obtaining unit is configured to obtain a scene prediction result according to a brightness parameter, the first infrared characteristic value, and the second infrared characteristic value corresponding to the current image based on a preset classification model, so as to perform image processing according to the scene prediction result; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
15. A second terminal, characterized in that the second terminal comprises: a dividing unit and a second obtaining unit,
the dividing unit is used for dividing a prestored image library to obtain training data and test data; the pre-stored image library stores a plurality of images of different scenes, and the different scenes correspond to different spectral energy;
the second obtaining unit is used for training a preset loss function by using the training data to obtain an initial classification model; obtaining a preset classification model according to the test data and the initial classification model; the preset classification model is used for classifying a plurality of scenes according to different spectral energies.
16. A first terminal, characterized in that the first terminal comprises a first processor, a first memory having stored therein first processor-executable instructions that, when executed by the first processor, implement the method according to any one of claims 1-9.
17. A second terminal, characterized in that the second terminal comprises a second processor, a second memory storing instructions executable by the second processor, which instructions, when executed by the second processor, implement the method according to any of claims 10-13.
18. A computer-readable storage medium, having a program stored thereon, for use in a first terminal and a second terminal, wherein the program, when executed by a processor, implements the method of any one of claims 1-13.
CN201911271535.1A 2019-12-12 2019-12-12 Image processing method, terminal and storage medium Active CN111027489B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911271535.1A CN111027489B (en) 2019-12-12 2019-12-12 Image processing method, terminal and storage medium
PCT/CN2020/135630 WO2021115419A1 (en) 2019-12-12 2020-12-11 Image processing method, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911271535.1A CN111027489B (en) 2019-12-12 2019-12-12 Image processing method, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111027489A true CN111027489A (en) 2020-04-17
CN111027489B CN111027489B (en) 2023-10-20

Family

ID=70208843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911271535.1A Active CN111027489B (en) 2019-12-12 2019-12-12 Image processing method, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN111027489B (en)
WO (1) WO2021115419A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071369B (en) * 2022-12-13 2023-07-14 哈尔滨理工大学 Infrared image processing method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779109A (en) * 2007-07-25 2010-07-14 Nxp股份有限公司 indoor/outdoor detection
CN103493212A (en) * 2011-03-29 2014-01-01 欧司朗光电半导体有限公司 Unit for determining the type of a dominating light source by means of two photodiodes
WO2017084428A1 (en) * 2015-11-17 2017-05-26 努比亚技术有限公司 Information processing method, electronic device and computer storage medium
CN106993175A (en) * 2016-01-20 2017-07-28 瑞昱半导体股份有限公司 Produce the method that the pixel used for realizing auto kine bias function computing screens scope
CN107622281A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method, device, storage medium and mobile terminal
CN108027278A (en) * 2015-08-26 2018-05-11 株式会社普瑞密斯 Lighting detecting device and its method
CN108304821A (en) * 2018-02-14 2018-07-20 广东欧珀移动通信有限公司 Image-recognizing method and device, image acquiring method and equipment, computer equipment and non-volatile computer readable storage medium storing program for executing
CN108881876A (en) * 2018-08-17 2018-11-23 Oppo广东移动通信有限公司 The method, apparatus and electronic equipment of white balance processing are carried out to image
WO2019071623A1 (en) * 2017-10-14 2019-04-18 华为技术有限公司 Method for capturing images and electronic device
CN110233971A (en) * 2019-07-05 2019-09-13 Oppo广东移动通信有限公司 A kind of image pickup method and terminal, computer readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898260B (en) * 2016-04-07 2018-01-19 广东欧珀移动通信有限公司 A kind of method and device for adjusting camera white balance
CN109977731B (en) * 2017-12-27 2021-10-29 深圳市优必选科技有限公司 Scene identification method, scene identification equipment and terminal equipment
CN109784237A (en) * 2018-12-29 2019-05-21 北京航天云路有限公司 The scene classification method of residual error network training based on transfer learning
CN109685746B (en) * 2019-01-04 2021-03-05 Oppo广东移动通信有限公司 Image brightness adjusting method and device, storage medium and terminal
CN111027489B (en) * 2019-12-12 2023-10-20 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈中钱: "Objective evaluation method for image quality color cast of camera imaging equipment" *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021115419A1 (en) * 2019-12-12 2021-06-17 Oppo广东移动通信有限公司 Image processing method, terminal, and storage medium
CN111918047A (en) * 2020-07-27 2020-11-10 Oppo广东移动通信有限公司 Photographing control method and device, storage medium and electronic equipment
CN112750448A (en) * 2020-08-07 2021-05-04 腾讯科技(深圳)有限公司 Sound scene recognition method, device, equipment and storage medium
CN112750448B (en) * 2020-08-07 2024-01-16 腾讯科技(深圳)有限公司 Sound scene recognition method, device, equipment and storage medium
CN114338962A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Image forming method and apparatus
CN114338962B (en) * 2020-09-29 2023-04-18 华为技术有限公司 Image forming method and apparatus
CN115242949A (en) * 2022-07-21 2022-10-25 Oppo广东移动通信有限公司 Camera module and electronic equipment

Also Published As

Publication number Publication date
WO2021115419A1 (en) 2021-06-17
CN111027489B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN111027489A (en) Image processing method, terminal and storage medium
US9813635B2 (en) Method and apparatus for auto exposure value detection for high dynamic range imaging
US10855885B2 (en) Image processing apparatus, method therefor, and storage medium
WO2018103314A1 (en) Photograph-capture method, apparatus, terminal, and storage medium
CN110830794B (en) Light source detection method, terminal and storage medium
US9460521B2 (en) Digital image analysis
CN113452980B (en) Image processing method, terminal and storage medium
US10382734B2 (en) Electronic device and color temperature adjusting method
CN108737728B (en) Image shooting method, terminal and computer storage medium
JP7152065B2 (en) Image processing device
US10721449B2 (en) Image processing method and device for auto white balance
US20140125836A1 (en) Robust selection and weighting for gray patch automatic white balancing
CN111654643B (en) Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
US8665355B2 (en) Image capture with region-based adjustment of contrast
US11457189B2 (en) Device for and method of correcting white balance of image
CN111163302B (en) Scene color restoration method, terminal and storage medium
US20200228770A1 (en) Lens rolloff assisted auto white balance
CN106454140B (en) A kind of information processing method and electronic equipment
KR20200145670A (en) Device and method for correcting white balance of image
CN110909696B (en) Scene detection method and device, storage medium and terminal equipment
CN105163040A (en) Image processing method and mobile terminal
CN110929663B (en) Scene prediction method, terminal and storage medium
CN109345602A (en) Image processing method and device, storage medium, electronic equipment
US8953063B2 (en) Method for white balance adjustment
CN113055665B (en) Image processing method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant