
CN111310541A - Scene prediction method, terminal and storage medium

Info

Publication number
CN111310541A
CN111310541A
Authority
CN
China
Prior art keywords
information, sample, direct current component, light
Prior art date
Legal status
Granted
Application number
CN201911184047.7A
Other languages
Chinese (zh)
Other versions
CN111310541B (en)
Inventor
王琳
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911184047.7A priority Critical patent/CN111310541B/en
Publication of CN111310541A publication Critical patent/CN111310541A/en
Application granted granted Critical
Publication of CN111310541B publication Critical patent/CN111310541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the application provide a scene prediction method, a terminal, and a storage medium. The method includes the following steps: acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel, and a visible light band component; extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene.

Description

Scene prediction method, terminal and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a scene prediction method, a terminal, and a storage medium.
Background
When taking a picture, knowing the scene in which the user is currently located, such as an indoor or outdoor scene, helps produce a better photographing result. That is, scene prediction can serve as an important piece of reference information when the terminal performs image processing. To predict the scene, the terminal can deploy additional auxiliary equipment to collect specific data for distinguishing indoor from outdoor scenes, or it can make the distinction by means of image processing.
However, performing scene prediction with additional auxiliary equipment is costly and complicated to set up in the deployment stage, which greatly limits the universality, usability, and convenience of scene prediction. Current image-processing-based scene prediction methods have high computational complexity, which reduces prediction efficiency, and their scene prediction accuracy is poor.
Disclosure of Invention
The embodiment of the application provides a scene prediction method, a terminal and a storage medium, which can reduce the complexity of prediction and improve the prediction efficiency and the accuracy of scene prediction.
The technical scheme of the application is realized as follows:
the embodiment of the application provides a scene prediction method, which comprises the following steps:
acquiring a first direct current component of light rays in a shooting scene in a first frequency channel, a second direct current component of the light rays in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is greater than that of the second frequency channel;
extracting first optical frequency information and second optical frequency information from the first direct current component, and acquiring first optical intensity information corresponding to the first optical frequency information and second optical intensity information corresponding to the second optical frequency information, wherein the first optical frequency information and the second optical frequency information are two optical frequency information with the maximum amplitude values in the first direct current component;
determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component;
and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
In the above method, the acquiring a first direct current component of a light ray in a shooting scene in a first frequency channel and a second direct current component of the light ray in a second frequency channel includes:
acquiring first time domain information of the first frequency channel and second time domain information of the second frequency channel through a color temperature sensor;
performing time-frequency transformation operation on the first time domain information to obtain first frequency domain information;
taking a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component;
performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
and taking a direct current component from the second frequency domain information of the second frequency channel to obtain the second direct current component.
In the above method, the determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component includes:
determining the first infrared band information according to the second direct current component and the visible light band component;
and determining the second infrared band information according to the first direct current component and the second direct current component.
In the above method, the inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene includes:
respectively normalizing the first optical frequency information and the second optical frequency information according to a preset frequency value to obtain normalized first optical frequency information and normalized second optical frequency information;
respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
and performing scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model to obtain a scene prediction result.
In the above method, before the acquiring the first direct current component of the light in the shooting scene at the first frequency channel, the second direct current component of the second frequency channel, and the visible light band component, the method further includes:
acquiring training sample data of a training sample image and a training sample scene of the training sample image;
inputting the training sample data into an initial classification model to obtain a sample classification result;
inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
and training the initial classification model by using the loss function value to obtain a preset classification model.
In the above method, the acquiring training sample data of the training sample image includes:
acquiring a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light wave band component in the training sample image;
determining first sample infrared band information, second sample infrared band information, first sample optical frequency information, first sample light intensity information, second sample optical frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component;
and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
In the above method, the determining, according to the first sample direct-current component, the second sample direct-current component, and the sample visible light band component, first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information, and second sample light intensity information of the training sample image includes:
extracting the first sample optical frequency information and the second sample optical frequency information from the first sample direct current component, and obtaining the first sample optical intensity information corresponding to the first sample optical frequency information and the second sample optical intensity information corresponding to the second sample optical frequency information;
determining the first sample infrared band information according to the second sample direct current component and the sample visible band component;
and determining the infrared band information of the second sample according to the first sample direct current component and the second sample direct current component.
In the above method, the inputting the training sample data into an initial classification model to obtain a sample classification result includes:
and inputting the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model to obtain the sample classification result.
In the above method, after the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information are input into a preset classification model to obtain a scene prediction result for the shooting scene, the method further includes:
determining an Automatic White Balance (AWB) parameter according to the scene prediction result;
and carrying out white balance correction on the image by adopting the AWB parameters.
An embodiment of the present application provides a terminal, including:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel and a visible light waveband component, and the radiation intensity of the first frequency channel is greater than that of the second frequency channel;
an extracting unit, configured to extract first optical frequency information and second optical frequency information from the first direct current component, and obtain first optical intensity information corresponding to the first optical frequency information and second optical intensity information corresponding to the second optical frequency information, where the first optical frequency information and the second optical frequency information are two optical frequency information with a largest amplitude value in the first direct current component;
a determining unit, configured to determine first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component;
and the scene prediction unit is used for inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
An embodiment of the present application provides a terminal, including: a processor, a memory, and a communication bus; the processor, when executing the operating program stored in the memory, implements the method of any of the above.
The embodiment of the application provides a storage medium, on which a computer program is stored, and the computer program is applied to a terminal, and when the computer program is executed by a processor, the computer program realizes the method according to any one of the above items.
The embodiments of the application provide a scene prediction method, a terminal, and a storage medium, where the method includes: acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel, and a visible light band component, where the radiation intensity of the first frequency channel is greater than that of the second frequency channel; extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, where the first light frequency information and the second light frequency information are the two pieces of light frequency information with the largest amplitudes in the first direct current component; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene. With this scheme, the terminal uses the first infrared band information and second infrared band information of two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features; different scenes can be distinguished according to the energy variation trends of different infrared bands, thereby improving the accuracy of scene prediction.
Drawings
Fig. 1 is a first flowchart of a scene prediction method according to an embodiment of the present application;
fig. 2 is a schematic view showing a first placement position of an exemplary color temperature sensor on a terminal according to an embodiment of the present disclosure;
fig. 3 is a schematic view illustrating a placement position of an exemplary color temperature sensor on a terminal according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a placement position of an exemplary color temperature sensor at a terminal according to the prior art;
fig. 5 is a schematic view illustrating a placement position of an exemplary color temperature sensor on a display screen side of a terminal according to an embodiment of the present disclosure;
fig. 6 is a schematic view illustrating a placement position of an exemplary color temperature sensor on a rear camera side of a terminal according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of radiation intensities of a first frequency channel and a second frequency channel of an exemplary color temperature sensor provided by an embodiment of the present application;
fig. 8 is a schematic diagram of time domain information corresponding to an exemplary frequency channel provided in an embodiment of the present application;
fig. 9 is a schematic diagram of exemplary frequency domain information obtained by performing time-frequency conversion on time domain information according to an embodiment of the present application;
FIG. 10 is a graphical illustration of an exemplary spectral response of a color temperature sensor provided by an embodiment of the present application;
FIG. 11 is a graph illustrating an exemplary spectral power distribution of a fluorescent lamp according to an embodiment of the present disclosure;
FIG. 12 is an exemplary daylight spectral power distribution provided by an embodiment of the present application;
FIG. 13 is an exemplary incandescent lamp spectral power distribution provided by an embodiment of the present application;
fig. 14 is a flowchart illustrating an exemplary scene determination method according to an embodiment of the present application;
fig. 15 is a flowchart of a scene prediction method according to an embodiment of the present application;
fig. 16 is a first schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
There are many schemes for the terminal to predict indoor and outdoor scenes, specifically, there are methods based on external devices, such as Wireless network (WiFi), light sensing, infrared, and other devices; there are also methods based on the image itself. The methods based on the images themselves can be further classified into a traditional threshold classification method and a machine learning-based method.
In different scenes, the terminal may process images differently. For example, in an indoor scene, Automatic Exposure (AE) must constantly consider enabling an anti-flicker strategy against power-frequency flash; for a low-brightness outdoor scene, a more suitable Automatic White Balance (AWB) algorithm needs to be selected to restore the image. For instance, in the AWB algorithm, if the current light source can be determined to be an outdoor light source, the AWB color temperature can simply be set to the D55 position and the picture obtains a good color restoration effect.
Therefore, a good scene prediction method can help the AWB algorithm improve the color restoration of images and can reduce the restoration difficulty of the AWB algorithm for both low-brightness outdoor scenes and high-brightness indoor scenes. Accordingly, in the AE algorithm, if the scene corresponding to the current image can be accurately determined to be outdoors, the anti-flicker problem does not need to be considered at all, which provides more flexibility.
At present, when an image processing method is used for scene prediction, on one hand, feature extraction relies on a full-size image (for example, 4000 × 3000) and applies multi-scale filtering to extract a large number of structural features, whereas the Image Signal Processing (ISP) pipeline of a portable terminal such as a mobile phone can only provide a small-size image (for example, 120 × 90); the accuracy of features obtained by a filtering method designed for full-size images is then greatly reduced, lowering the accuracy of scene prediction. On the other hand, the image processing method extracts a usually large number of high-dimensional structural features from the current image, which are difficult to process in real time on a portable terminal such as a mobile phone, thereby reducing the efficiency of scene prediction.
Further, in practical terms, the prediction accuracy of such complicated structural features drops when they face irregularly divided sky, solid-color scenes, or indoor artificial structures.
A scene recognition algorithm based on YUV data sits after the demosaic algorithm in the ISP pipeline, so the scene it finally recognizes lags in the time domain and cannot readily be used by AE, AWB, and Auto Focus (AF) at the front end of the ISP.
In summary, in the prior art, the method for performing scene prediction based on the image processing method has high computational complexity, reduces prediction efficiency, and has poor accuracy of scene prediction.
Example one
An embodiment of the present application provides a scene prediction method, as shown in fig. 1, the method may include:
s101, acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel and a visible light waveband component, wherein the radiation intensity of the first frequency channel is greater than that of the second frequency channel.
The scene prediction method provided by the embodiment of the application is suitable for the scene of judging indoor and outdoor scenes in the process of processing the acquired images.
In the embodiment of the present application, the terminal may be any device having communication and storage functions, for example: tablet computers, mobile phones, electronic readers, remote controllers, Personal Computers (PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices, and the like.
In the embodiment of the application, a color temperature sensor is arranged on the terminal, specifically, the color temperature sensor may be arranged on one side of a front camera of the terminal, as shown in fig. 2, the color temperature sensor is arranged on the left side of the front camera of the terminal; the color temperature sensor may be disposed on the rear camera side of the terminal, and as shown in fig. 3, the color temperature sensor may be disposed below the rear camera of the terminal.
The color temperature sensor can also be arranged in the notch area of a full screen. Specifically, fig. 4 is a schematic diagram of such an arrangement; as shown in fig. 4, the terminal places the color temperature sensor under the ink-coated region of the notch area.
The terminal may also have a color temperature sensor disposed in the slit at the top. Fig. 5 is a schematic view of the placement position of the color temperature sensor on the display screen side of the terminal, and fig. 6 is a schematic view of the placement position of the color temperature sensor on the rear camera side of the terminal.
In the embodiment of the application, when the terminal collects a current image, it starts the color temperature sensor and uses it to respectively obtain a first direct current component of a first frequency channel, a second direct current component of a second frequency channel, and a visible light band component, where the radiation intensity of the first frequency channel is greater than that of the second frequency channel. As shown in fig. 7, the abscissa represents time and the ordinate represents radiation intensity; since the radiation intensity of the channel corresponding to 50 Hz is greater than that of the channel corresponding to 60 Hz, the first frequency channel of the color temperature sensor can be the 50 Hz channel and the second frequency channel can be the 60 Hz channel.
Specifically, the terminal acquires first time domain information of a first frequency channel and second time domain information of a second frequency channel through a color temperature sensor; then, the terminal performs time-frequency transformation operation on the first time domain information to obtain first frequency domain information; obtaining a direct current component from the first frequency domain information of the first frequency channel to obtain a first direct current component; the terminal performs time-frequency transformation operation on the second time domain information to obtain second frequency domain information; and taking a direct current component from the second frequency domain information of the second frequency channel to obtain a second direct current component.
For example, fig. 8 is time domain information corresponding to a frequency channel, and fig. 9 is frequency domain information corresponding to the time domain information shown in fig. 8, which is obtained by performing time-frequency conversion on the time domain information shown in fig. 8.
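For illustration only, a minimal sketch of this transform-and-extract step is given below; the use of NumPy's FFT and the variable names are assumptions of the sketch, since the disclosure does not mandate a specific time-frequency transform:

```python
import numpy as np

def dc_component(time_domain: np.ndarray) -> float:
    """Apply a time-frequency transform to one channel's time domain
    information and take the direct current (zero-frequency) component."""
    freq_domain = np.fft.rfft(time_domain)  # time-frequency transformation
    return float(np.abs(freq_domain[0]))    # bin 0 holds the DC component

# Placeholder readouts standing in for the color temperature sensor's
# first (FD1) and second (FD2) frequency channel time domain information.
fd1_samples = np.sin(2 * np.pi * 50 * np.arange(256) / 1000.0) + 1.0
fd2_samples = np.sin(2 * np.pi * 60 * np.arange(256) / 1000.0) + 0.5

fd1_dc = dc_component(fd1_samples)  # first direct current component
fd2_dc = dc_component(fd2_samples)  # second direct current component
```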
In this embodiment, the terminal also uses the color temperature sensor to obtain the visible light band component. Fig. 10 is a spectral response curve diagram of the color temperature sensor; as shown in fig. 10, as the wavelength changes, the spectral response curves corresponding to R, G, B, C (visible light band component), WB (full spectrum), FD1 (first frequency channel), and FD2 (second frequency channel) detected by the color temperature sensor change differently, and the terminal may determine the first time domain information of the first frequency channel, the second time domain information of the second frequency channel, and the visible light band component accordingly.
S102, extracting first optical frequency information and second optical frequency information from the first direct current component, and acquiring first optical intensity information corresponding to the first optical frequency information and second optical intensity information corresponding to the second optical frequency information, wherein the first optical frequency information and the second optical frequency information are two optical frequency information with the maximum amplitude values in the first direct current component.
After the terminal acquires the first direct current component of light in the shooting scene in the first frequency channel, the second direct current component of the light in the second frequency channel, and the visible light band component, the terminal extracts first light frequency information and second light frequency information from the first direct current component, and acquires first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information.
In the embodiment of the application, the terminal searches the first direct current component for the first light frequency information with the largest amplitude and the second light frequency information with the second-largest amplitude, and obtains the first light intensity information corresponding to the first light frequency information and the second light intensity information corresponding to the second light frequency information by using the color temperature sensor.
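A minimal sketch of this peak search follows, reading the search "from the first direct current component" as a search over the first channel's spectrum excluding the zero-frequency bin; that reading, and the sampling rate, are assumptions of the sketch:

```python
import numpy as np

def top_two_frequencies(time_domain: np.ndarray, sample_rate: float):
    """Return the two frequencies with the largest spectral amplitudes
    and their intensities (FD1Q1/FD1M1 and FD1Q2/FD1M2 in the text)."""
    spectrum = np.fft.rfft(time_domain)
    mags = np.abs(spectrum)
    mags[0] = 0.0  # exclude the pure DC bin from the peak search
    freqs = np.fft.rfftfreq(len(time_domain), d=1.0 / sample_rate)
    order = np.argsort(mags)          # ascending by amplitude
    i1, i2 = order[-1], order[-2]     # two strongest bins
    return (freqs[i1], mags[i1]), (freqs[i2], mags[i2])

signal = np.sin(2 * np.pi * 50 * np.arange(1024) / 1000.0)
(f1, m1), (f2, m2) = top_two_frequencies(signal, sample_rate=1000.0)
```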
S103, determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component.
After the terminal respectively acquires a first direct current component of a first frequency channel, a second direct current component of a second frequency channel and a visible light band component through a color temperature sensor, the terminal determines first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component.
In the embodiment of the application, the terminal determines the first infrared band information according to the second direct current component and the visible light band component.
Specifically, the terminal inputs the second direct current component and the visible light band component into formula (1) to obtain the first infrared band information.
IR1 = (FD2DC - C)/FD2DC (1)
Where IR1 is the first infrared band information, C is the visible band component, FD2 is the second frequency domain information of the second frequency channel, and DC is the DC component operation, so FD2DC is the second DC component.
In the embodiment of the application, the terminal determines the second infrared band information according to the first direct current component and the second direct current component.
Specifically, the terminal inputs the first direct current component and the second direct current component into formula (2) to obtain second infrared band information.
IR2 = (FD1DC - FD2DC)/FD1DC (2)
Wherein, IR2 is the second infrared band information, FD1 is the first frequency domain information of the first frequency channel, and FD1DC is the first dc component.
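Combining equations (1) and (2), a minimal sketch of the feature computation is given below; the Python form, the function name, and the sample values are assumptions for illustration:

```python
def infrared_features(fd1_dc: float, fd2_dc: float, c: float):
    """Compute the first and second infrared band information from the
    two direct current components and the visible light band component C."""
    ir1 = (fd2_dc - c) / fd2_dc       # equation (1): 800 nm-900 nm measure
    ir2 = (fd1_dc - fd2_dc) / fd1_dc  # equation (2): 950 nm-1000 nm measure
    return ir1, ir2

ir1, ir2 = infrared_features(fd1_dc=1.2, fd2_dc=1.0, c=0.7)
```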
It should be noted that, as shown in fig. 11, the spectral energy distribution of a fluorescent lamp indicates that the energy of the 800 nm-900 nm infrared band is weak in an indoor scene. As shown in fig. 12, the spectral energy distribution of daylight shows that the 800 nm-900 nm infrared band has stronger energy in a daylight scene, and the energy of the infrared band attenuates severely after 950 nm. As shown in fig. 13, the spectral energy distribution of an incandescent lamp shows that the energy of the 800 nm-900 nm infrared band keeps increasing in an incandescent-lamp scene. Therefore, combining the infrared band intensity at 800 nm-900 nm with the infrared band intensity at 950 nm-1000 nm can distinguish different indoor and outdoor shooting scenes; in the embodiment of the application, the first infrared band information measures the 800 nm-900 nm infrared band intensity and the second infrared band information measures the 950 nm-1000 nm infrared band intensity.
S104, inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at a shooting scene.
After the terminal acquires the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information, the terminal inputs the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for a shooting scene.
In the embodiment of the application, after the terminal acquires the first optical frequency information, the second optical frequency information, the first optical intensity information and the second optical intensity information, the terminal normalizes the first optical frequency information and the second optical frequency information according to a preset frequency value to obtain normalized first optical frequency information and normalized second optical frequency information; the terminal respectively carries out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information.
In practical application, the terminal performs normalization processing on the first optical frequency information and the second optical frequency information by using 200 Hz; the terminal normalizes the first light intensity information and the second light intensity information using 65535.
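A minimal sketch of this normalization step (the function name is an assumption):

```python
def normalize(freq_hz: float, intensity: float):
    """Normalize a light frequency by the preset 200 Hz value and a
    light intensity by the preset 65535 value, as described above."""
    return freq_hz / 200.0, intensity / 65535.0

fd1q1_norm, fd1m1_norm = normalize(freq_hz=50.0, intensity=30000.0)
```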
In the embodiment of the application, a preset classification model is preset in the terminal, the terminal inputs 6 parameter characteristics of first infrared band information, second infrared band information, normalized first light frequency information, normalized first light intensity information, normalized second light frequency information and normalized second light intensity information into the preset classification model, and the terminal predicts the 6 parameters by using classification parameters obtained by self training of the preset classification model to obtain a scene prediction result.
Optionally, the preset classification model may be an SVM model, a Bayes classifier, an ensemble learning model, or a decision tree, selected according to the actual situation; the embodiment of the present application is not specifically limited in this respect.
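For instance, if the SVM option is chosen, a sketch using scikit-learn could look as follows; the library choice, the label encoding (0 = indoor, 1 = outdoor), and the toy data are assumptions of the sketch:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((40, 6))              # 6 features per sample, see S104
y_train = np.tile([0, 1], 20)              # assumed labels: 0 indoor, 1 outdoor

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# One live feature vector: IR1, IR2 and the normalized FD1Q1, FD1M1,
# FD1Q2, FD1M2 values would be filled in here.
sample = rng.random((1, 6))
print("scene prediction result:", clf.predict(sample)[0])
```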
Further, after the terminal determines the scene prediction result of the shooting scene, the terminal may determine the AWB parameter according to the scene prediction result and then perform white balance correction on the image using the AWB parameter. In this case, the terminal considers the different light source information corresponding to indoor and outdoor scenes when determining the AWB parameter, so the color restoration of the image can be improved.
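As an illustrative sketch of this mapping, recalling the earlier D55 example for outdoor light sources; the indoor value below is a placeholder assumption, not part of the disclosure:

```python
def awb_color_temperature(scene_is_outdoor: bool) -> int:
    """Choose an AWB reference color temperature (in kelvin) from the
    scene prediction result."""
    if scene_is_outdoor:
        return 5500   # D55 daylight position, as discussed above
    return 4000       # indoor: placeholder value, tuned per device

awb_k = awb_color_temperature(scene_is_outdoor=True)
```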
Exemplarily, as shown in fig. 14, a specific implementation of an exemplary scene determination procedure provided in an embodiment of the present application includes:
1. The terminal reads the time domain information of color temperature sensor channel FD1;
2. The terminal performs time-frequency transformation on the FD1 time domain information and acquires its direct current component FD1DC;
3. The terminal acquires the two frequencies FD1Q1 and FD1Q2 with the strongest amplitudes and the corresponding intensities FD1M1 and FD1M2 from FD1DC, and respectively normalizes FD1Q1, FD1Q2, FD1M1, and FD1M2;
4. The terminal reads the time domain information of color temperature sensor channel FD2;
5. The terminal performs time-frequency transformation on the FD2 time domain information and acquires its direct current component FD2DC;
6. The terminal acquires the visible light band component C;
7. The terminal calculates IR1 using IR1 = (FD2DC - C)/FD2DC;
8. The terminal calculates IR2 using IR2 = (FD1DC - FD2DC)/FD1DC;
9. The terminal selects a loss function for the initial classification model and sets the learning parameters;
10. The terminal trains the initial classification model with the loss function and learning parameters to obtain the preset classification model;
11. The terminal inputs IR1, IR2, FD1Q1, FD1Q2, FD1M1, and FD1M2 into the preset classification model for scene prediction and obtains a prediction result.
The terminal uses the first infrared band information and the second infrared band information of two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features; different scenes can thus be distinguished according to the energy variation trends of different infrared bands, improving the accuracy of scene prediction.
Example two
Based on the first embodiment, in the embodiment of the present application, before the terminal acquires the first direct current component of the first frequency channel, the second direct current component of the second frequency channel, and the visible light band component through the color temperature sensor, a scene prediction method is further provided, as shown in fig. 15, the method may include:
s201, obtaining training sample data of the training sample image and a training sample scene of the training sample image.
The scene prediction method provided by the embodiment of the application is suitable for the scene of training the classification model.
In the embodiment of the application, the terminal can firstly divide the pre-stored image library so as to obtain the training sample image and the test sample image.
In the embodiments of the present application, the terminal may be any device having communication and storage functions. For example: tablet computers, mobile phones, electronic readers, remote controllers, Personal Computers (PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices, and the like.
It should be noted that, in the embodiment of the present application, the pre-stored image library may be used for training and testing the preset classification model.
Further, in an embodiment of the present application, the pre-stored image library may include a plurality of images of indoor scenes and a plurality of images of outdoor scenes. Further, in the application, the terminal can randomly divide the images of different scenes in the pre-stored image library, so that a training sample image and a test sample image can be obtained. The training sample image and the test sample image are completely different, that is, the sample data corresponding to one sample image in the pre-stored image library can only be one of the training sample data or the test sample data.
Illustratively, 1024 images of the indoor scene and 1134 images of the outdoor scene are stored in a pre-stored image library stored in the terminal, and when the terminal performs training of the preset classification model, 80% of the images can be randomly extracted from the pre-stored image library as training images and 20% of the images can be randomly extracted as test images.
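A sketch of this random split using scikit-learn (the library and the placeholder feature matrix are assumptions of the sketch):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder features/labels standing in for the pre-stored image
# library: 1024 indoor (label 0) and 1134 outdoor (label 1) samples.
X = np.random.rand(1024 + 1134, 6)
y = np.concatenate([np.zeros(1024), np.ones(1134)])

# 80% training images, 20% test images; no sample appears in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```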
In the embodiment of the application, a terminal obtains a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light band component in a training sample image; then, the terminal determines first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component; the terminal determines the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as training sample data.
Specifically, the process of determining the first sample infrared band information, the second sample infrared band information, the first sample optical frequency information, the first sample optical intensity information, the second sample optical frequency information, and the second sample optical intensity information of the training sample image by the terminal according to the first sample direct current component, the second sample direct current component, and the sample visible light band component is as follows: the terminal extracts first sample light frequency information and second sample light frequency information from the first sample direct current component, and obtains first sample light intensity information corresponding to the first sample light frequency information and second sample light intensity information corresponding to the second sample light frequency information; then, the terminal determines first sample infrared band information according to the second sample direct current component and the sample visible band component; and the terminal determines the infrared band information of the second sample according to the direct current component of the first sample and the direct current component of the second sample.
It should be noted that the process by which the terminal acquires the training sample data in the training stage is consistent with the process by which the terminal acquires the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information in the prediction stage of the first embodiment, and is not described here again.
S202, inputting training sample data into the initial classification model to obtain a sample classification result.
After the terminal acquires training sample data of the training sample images and training sample scenes of the training sample images, the terminal inputs the training sample data into the initial classification model to obtain sample classification results.
In the embodiment of the application, the terminal inputs the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model to obtain a sample classification result.
And S203, inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value.
And when the terminal inputs training sample data into the initial classification model to obtain a sample classification result, the terminal inputs a training sample scene and the sample classification result into a preset loss function to obtain a loss function value.
In the embodiment of the application, the preset loss function used by the terminal is a hinge loss function.
In the embodiment of the application, the terminal inputs the sample classification result corresponding to the training sample data and the training sample scene of the training sample image into a preset loss function to obtain a loss function value.
And S204, training the initial classification model by using the loss function value to obtain a preset classification model.
And when the terminal inputs the training sample scene and the sample classification result into a preset loss function to obtain a loss function value, the terminal trains the initial classification model by using the loss function value to obtain a preset classification model.
In the embodiment of the present application, since the training sample data includes 6 training feature parameters, a linear kernel is used to train the initial classification model when the training parameters are selected; specifically, the step length is 0.01 and gamma is 60000.
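A sketch of such training is shown below using a hinge-loss linear classifier trained by gradient steps; scikit-learn's SGDClassifier is an assumed stand-in, and the quoted gamma has no counterpart for a purely linear kernel, so it is omitted here:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X_train = np.random.rand(100, 6)   # placeholder 6-feature training samples
y_train = np.tile([0, 1], 50)      # placeholder scene labels

# Hinge loss gives a linear SVM objective; eta0=0.01 mirrors the
# step length quoted above.
clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.01,
                    max_iter=1000, tol=1e-3)
clf.fit(X_train, y_train)
```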
In the embodiment of the application, the initial classification model is trained by utilizing the training parameters, so that the loss function value is minimum, at the moment, the preset classification model is trained, and then, the terminal can realize the scene prediction process by using the preset classification model.
The terminal uses the first infrared band information and the second infrared band information of two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features; different scenes can thus be distinguished according to the energy variation trends of different infrared bands, improving the accuracy of scene prediction.
EXAMPLE III
An embodiment of the present application provides a terminal, as shown in fig. 16, where the terminal 1 includes:
the device comprises an acquisition unit 10, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel and a visible light waveband component, and the radiation intensity of the first frequency channel is greater than that of the second frequency channel;
an extracting unit 11, configured to extract first optical frequency information and second optical frequency information from the first direct current component, and obtain first optical intensity information corresponding to the first optical frequency information and second optical intensity information corresponding to the second optical frequency information, where the first optical frequency information and the second optical frequency information are two optical frequency information with a largest amplitude in the first direct current component;
a determining unit 12, configured to determine first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component;
and a scene prediction unit 13, configured to input the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model, so as to obtain a scene prediction result for the shooting scene.
Optionally, the terminal further includes: the time-frequency transformation unit and the direct current component taking unit;
the acquiring unit 10 is further configured to acquire, by using a color temperature sensor, first time domain information of the first frequency channel and second time domain information of the second frequency channel;
the time-frequency transformation unit is further configured to perform time-frequency transformation operation on the first time domain information to obtain first frequency domain information; performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
the direct current component obtaining unit is further configured to obtain a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component; and taking a direct current component from the second frequency domain information of the second frequency channel to obtain the second direct current component.
Optionally, the determining unit 12 is further configured to determine the first infrared band information according to the second direct current component and the visible light band component; and determining the second infrared band information according to the first direct current component and the second direct current component.
Optionally, the terminal further includes: a normalization unit;
the normalization unit is configured to perform normalization processing on the first optical frequency information and the second optical frequency information respectively according to a preset frequency value to obtain normalized first optical frequency information and normalized second optical frequency information; respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
the scene prediction unit 13 is further configured to perform scene prediction on the first infrared band information, the second infrared band information, the normalized first optical frequency information, the normalized first optical intensity information, the normalized second optical frequency information, and the normalized second optical intensity information by using the classification parameters obtained by training the preset classification model, so as to obtain the scene prediction result.
Optionally, the terminal further includes: an input unit and a training unit;
the obtaining unit 10 is further configured to obtain training sample data of a training sample image and a training sample scene of the training sample image;
the input unit is used for inputting the training sample data into an initial classification model to obtain a sample classification result; inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
and the training unit is used for training the initial classification model by using the loss function value to obtain a preset classification model.
Optionally, the obtaining unit 10 is further configured to obtain a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel, and a sample visible light band component in the training sample image;
the determining unit 12 is further configured to determine first sample infrared band information, second sample infrared band information, first sample optical frequency information, first sample optical intensity information, second sample optical frequency information, and second sample optical intensity information of the training sample image according to the first sample direct current component, the second sample direct current component, and the sample visible light band component; and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
Optionally, the extracting unit 11 is further configured to extract the first sample optical frequency information and the second sample optical frequency information from the first sample direct current component, and obtain the first sample optical intensity information corresponding to the first sample optical frequency information and the second sample optical intensity information corresponding to the second sample optical frequency information;
the determining unit 12 is further configured to determine the first sample infrared band information according to the second sample direct current component and the sample visible band component; and determining the infrared band information of the second sample according to the first sample direct current component and the second sample direct current component.
Optionally, the input unit is further configured to input the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information, and the second sample light intensity information into the initial classification model, so as to obtain the sample classification result.
Optionally, the terminal further includes: a white balance correction unit;
the determining unit 12 is further configured to determine an automatic white balance AWB parameter according to the scene prediction result;
and the white balance correction unit is used for carrying out white balance correction on the image by adopting the AWB parameters.
The terminal provided by the embodiment of the application acquires a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel, and a visible light band component, where the radiation intensity of the first frequency channel is greater than that of the second frequency channel; extracts first light frequency information and second light frequency information from the first direct current component, and acquires first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, where the first light frequency information and the second light frequency information are the two pieces of light frequency information with the largest amplitudes in the first direct current component; determines first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component; and inputs the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene. The terminal thus uses the first infrared band information and second infrared band information of two frequency channels in the spectrum, together with the light frequency information and frequency intensity in the first frequency channel, as classification features for indoor/outdoor classification; different scenes can be distinguished according to the energy variation trends of different infrared bands, improving the accuracy of scene prediction.
Fig. 17 is a schematic diagram of the composition structure of a terminal 1 according to an embodiment of the present application. In practical application, based on the same disclosure concept as the foregoing embodiments and as shown in Fig. 17, the terminal 1 of this embodiment includes: a processor 14, a memory 15, and a communication bus 16.
In a specific embodiment, the obtaining unit 10, the extracting unit 11, the determining unit 12, the scene prediction unit 13, the time-frequency transformation unit, the direct current component obtaining unit, the normalization unit, the input unit, the training unit, and the white balance correction unit may be implemented by a processor 14 located on the terminal 1. The processor 14 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a CPU, a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor function may also be another device; this embodiment is not specifically limited.
In the embodiment of the present application, the communication bus 16 is used for realizing connection and communication between the processor 14 and the memory 15; and the processor 14 implements the following scene prediction method when executing the program stored in the memory 15:
the processor 14 is configured to obtain a first direct current component of a light ray in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel, and a visible light band component, where a radiation intensity of the first frequency channel is greater than a radiation intensity of the second frequency channel; extracting first optical frequency information and second optical frequency information from the first direct current component, and acquiring first optical intensity information corresponding to the first optical frequency information and second optical intensity information corresponding to the second optical frequency information, wherein the first optical frequency information and the second optical frequency information are two optical frequency information with the maximum amplitude values in the first direct current component; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
Optionally, the processor 14 is further configured to acquire, by using a color temperature sensor, first time domain information of the first frequency channel and second time domain information of the second frequency channel; performing time-frequency transformation operation on the first time domain information to obtain first frequency domain information; taking a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component; performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information; and taking a direct current component from the second frequency domain information of the second frequency channel to obtain the second direct current component.
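As a hedged sketch of this acquisition step, the time-frequency transformation and direct-current extraction might look as follows; the patent does not fix the transform, so a fast Fourier transform is assumed here, and the sensor samples are stand-in arrays rather than real color temperature sensor readings.

```python
import numpy as np

def channel_dc_component(time_domain_samples):
    """Transform one sensor channel from the time domain to the
    frequency domain and take its direct current component
    (the zero-frequency bin of the spectrum).

    Assumption: an FFT stands in for the unspecified time-frequency
    transformation operation.
    """
    frequency_domain = np.fft.rfft(time_domain_samples)
    dc_component = np.abs(frequency_domain[0]) / len(time_domain_samples)
    return dc_component, frequency_domain

# Hypothetical stand-ins for the first and second frequency channels.
first_time_domain = np.random.rand(256)
second_time_domain = np.random.rand(256)

first_dc, first_spectrum = channel_dc_component(first_time_domain)
second_dc, second_spectrum = channel_dc_component(second_time_domain)
```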
Optionally, the processor 14 is further configured to determine the first infrared band information according to the second direct current component and the visible light band component, and to determine the second infrared band information according to the first direct current component and the second direct current component.
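The patent states only which quantities each piece of infrared band information is derived from, not the arithmetic that combines them; the sketch below assumes simple differences purely for illustration.

```python
def infrared_band_information(first_dc, second_dc, visible_band):
    """Derive the two pieces of infrared band information.

    Assumed arithmetic (not disclosed by the patent): each piece of
    infrared band information is taken as a difference of the inputs
    named for it.
    """
    first_ir = second_dc - visible_band  # from the second DC component and the visible band
    second_ir = first_dc - second_dc     # from the first and second DC components
    return first_ir, second_ir
```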
Optionally, the processor 14 is further configured to normalize the first light frequency information and the second light frequency information respectively according to a preset frequency value, to obtain normalized first light frequency information and normalized second light frequency information; normalize the first light intensity information and the second light intensity information respectively according to a preset light intensity value, to obtain normalized first light intensity information and normalized second light intensity information; and perform scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information, and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model, to obtain the scene prediction result.
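A minimal sketch of the normalization step follows; the preset frequency value and preset light intensity value are illustrative assumptions, since the patent does not disclose them (120 Hz, for example, would match the flicker of lamps on a 60 Hz mains supply).

```python
def normalize_features(freq_1, freq_2, intensity_1, intensity_2,
                       preset_frequency=120.0, preset_intensity=1024.0):
    """Scale the light frequency and light intensity features by preset
    values so that all classifier inputs share a comparable range."""
    return (freq_1 / preset_frequency,
            freq_2 / preset_frequency,
            intensity_1 / preset_intensity,
            intensity_2 / preset_intensity)
```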
Optionally, the processor 14 is further configured to obtain training sample data of a training sample image and a training sample scene of the training sample image; input the training sample data into an initial classification model to obtain a sample classification result; input the training sample scene and the sample classification result into a preset loss function to obtain a loss function value; and train the initial classification model by using the loss function value to obtain the preset classification model.
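Because the patent leaves the classifier and the preset loss function unspecified, the following is a hedged sketch of a single training iteration, assuming a logistic-regression-style indoor/outdoor classifier trained with a cross-entropy loss.

```python
import numpy as np

def train_step(weights, bias, sample_features, scene_label, lr=0.01):
    """One gradient-descent update of a binary indoor/outdoor classifier:
    forward pass, cross-entropy loss, parameter update.

    sample_features: the six-element feature vector (two infrared band
    values, two light frequencies, two light intensities).
    scene_label: 1.0 for one scene class, 0.0 for the other.
    """
    logit = float(np.dot(weights, sample_features)) + bias
    prediction = 1.0 / (1.0 + np.exp(-logit))  # sample classification result
    loss = -(scene_label * np.log(prediction + 1e-12)
             + (1.0 - scene_label) * np.log(1.0 - prediction + 1e-12))
    grad = prediction - scene_label            # d(loss)/d(logit)
    weights = weights - lr * grad * sample_features
    bias = bias - lr * grad
    return weights, bias, loss
```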
Optionally, the processor 14 is further configured to obtain a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel, and a sample visible light band component in the training sample image; determine first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information, and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component, and the sample visible light band component; and determine the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information, and the second sample light intensity information as the training sample data.
Optionally, the processor 14 is further configured to extract the first sample light frequency information and the second sample light frequency information from the first sample direct current component, and obtain the first sample light intensity information corresponding to the first sample light frequency information and the second sample light intensity information corresponding to the second sample light frequency information; determine the first sample infrared band information according to the second sample direct current component and the sample visible light band component; and determine the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
Optionally, the processor 14 is further configured to input the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information, and the second sample light intensity information into the initial classification model, so as to obtain the sample classification result.
Optionally, the processor 14 is further configured to determine an automatic white balance (AWB) parameter according to the scene prediction result, and to perform white balance correction on the image by using the AWB parameters.
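As a final hedged illustration, applying AWB parameters chosen from the scene prediction result might look as follows; the per-scene gains here are invented for the example, whereas a real pipeline would derive them from calibrated illuminant statistics.

```python
import numpy as np

# Hypothetical per-scene (R, G, B) gains; real values would be calibrated.
SCENE_TO_AWB_GAINS = {
    "indoor": (1.8, 1.0, 1.4),
    "outdoor": (1.4, 1.0, 1.9),
}

def white_balance_correct(image_rgb, predicted_scene):
    """Scale each channel of an 8-bit RGB image by the AWB gains
    selected according to the scene prediction result."""
    gains = np.asarray(SCENE_TO_AWB_GAINS[predicted_scene], dtype=np.float32)
    corrected = image_rgb.astype(np.float32) * gains
    return np.clip(corrected, 0.0, 255.0).astype(np.uint8)
```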
An embodiment of the present application provides a storage medium on which a computer program is stored. The computer-readable storage medium stores one or more programs, which may be executed by one or more processors and applied to a terminal; when executed, the computer program implements the scene prediction method of the first embodiment and the second embodiment.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an image display device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present disclosure.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (12)

1. A method for scene prediction, the method comprising:
acquiring a first direct current component of light rays in a shooting scene in a first frequency channel, a second direct current component of the light rays in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is greater than that of the second frequency channel;
extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are the two pieces of light frequency information with the largest amplitudes in the first direct current component;
determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component;
and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene.
2. The method of claim 1, wherein the obtaining of the first direct current component of the light in the shooting scene in the first frequency channel and the second direct current component in the second frequency channel comprises:
acquiring first time domain information of the first frequency channel and second time domain information of the second frequency channel through a color temperature sensor;
performing time-frequency transformation operation on the first time domain information to obtain first frequency domain information;
taking a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component;
performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
and taking a direct current component from the second frequency domain information of the second frequency channel to obtain the second direct current component.
3. The method of claim 1, wherein determining first infrared band information and second infrared band information from the first direct current component, the second direct current component, and the visible band component comprises:
determining the first infrared band information according to the second direct current component and the visible light band component;
and determining the second infrared band information according to the first direct current component and the second direct current component.
4. The method of claim 1, wherein the inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene comprises:
respectively normalizing the first light frequency information and the second light frequency information according to a preset frequency value to obtain normalized first light frequency information and normalized second light frequency information;
respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
and performing scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model to obtain a scene prediction result.
5. The method of claim 1, wherein before the acquiring of the first direct current component of the light in the shooting scene in the first frequency channel, the second direct current component in the second frequency channel, and the visible light band component, the method further comprises:
acquiring training sample data of a training sample image and a training sample scene of the training sample image;
inputting the training sample data into an initial classification model to obtain a sample classification result;
inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
and training the initial classification model by using the loss function value to obtain a preset classification model.
6. The method of claim 5, wherein the obtaining training sample data for a training sample image comprises:
acquiring a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light band component in the training sample image;
determining first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component;
and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
7. The method of claim 6, wherein the determining first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component comprises:
extracting the first sample light frequency information and the second sample light frequency information from the first sample direct current component, and obtaining the first sample light intensity information corresponding to the first sample light frequency information and the second sample light intensity information corresponding to the second sample light frequency information;
determining the first sample infrared band information according to the second sample direct current component and the sample visible light band component;
and determining the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
8. The method of claim 6, wherein inputting the training sample data into an initial classification model to obtain a sample classification result comprises:
and inputting the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model to obtain the sample classification result.
9. The method of claim 1, wherein after the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information are input into a preset classification model to obtain a scene prediction result for the shooting scene, the method further comprises:
determining an Automatic White Balance (AWB) parameter according to the scene prediction result;
and carrying out white balance correction on the image by adopting the AWB parameters.
10. A terminal, characterized in that the terminal comprises:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a first direct current component of light in a shooting scene in a first frequency channel, a second direct current component of the light in a second frequency channel and a visible light waveband component, and the radiation intensity of the first frequency channel is greater than that of the second frequency channel;
an extracting unit, configured to extract first light frequency information and second light frequency information from the first direct current component, and to obtain first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are the two pieces of light frequency information with the largest amplitudes in the first direct current component;
a determining unit, configured to determine first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component;
and a scene prediction unit, configured to input the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene.
11. A terminal, characterized in that the terminal comprises: a processor, a memory, and a communication bus; wherein the processor, when executing a program stored in the memory, implements the method of any one of claims 1-9.
12. A storage medium having stored thereon a computer program applied to a terminal, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-9.
CN201911184047.7A 2019-11-27 2019-11-27 Scene prediction method, terminal and storage medium Active CN111310541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911184047.7A CN111310541B (en) 2019-11-27 2019-11-27 Scene prediction method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911184047.7A CN111310541B (en) 2019-11-27 2019-11-27 Scene prediction method, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111310541A (en) 2020-06-19
CN111310541B CN111310541B (en) 2023-09-29

Family

ID=71159674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911184047.7A Active CN111310541B (en) 2019-11-27 2019-11-27 Scene prediction method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111310541B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294983A (en) * 2012-02-24 2013-09-11 北京明日时尚信息技术有限公司 Scene recognition method in static picture based on partitioning block Gabor characteristics
CN103413142A (en) * 2013-07-22 2013-11-27 中国科学院遥感与数字地球研究所 Remote sensing image land utilization scene classification method based on two-dimension wavelet decomposition and visual sense bag-of-word model
CN103493212A (en) * 2011-03-29 2014-01-01 欧司朗光电半导体有限公司 Unit for determining the type of a dominating light source by means of two photodiodes
CN105846896A (en) * 2016-05-16 2016-08-10 苏州安莱光电科技有限公司 Visible light OFDM communication device for infrared compensation total range light modulation
CN107622281A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method, device, storage medium and mobile terminal
CN108027278A (en) * 2015-08-26 2018-05-11 株式会社普瑞密斯 Lighting detecting device and its method
CN108470169A (en) * 2018-05-23 2018-08-31 国政通科技股份有限公司 Face identification system and method
CN109379584A (en) * 2018-11-26 2019-02-22 北京科技大学 Camera system and image quality adjusting method under a kind of complex environment light application conditions

Also Published As

Publication number Publication date
CN111310541B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
JP6478129B2 (en) Method, processor, mobile device, program and computer-readable storage medium
US9813635B2 (en) Method and apparatus for auto exposure value detection for high dynamic range imaging
CN111027489B (en) Image processing method, terminal and storage medium
CN111163302B (en) Scene color restoration method, terminal and storage medium
WO2019052329A1 (en) Facial recognition method and related product
US10027878B2 (en) Detection of object in digital image
CN108668093A (en) The generation method and device of HDR image
CN105338338A (en) Method and device for detecting imaging condition
CN107871309B (en) Detection method, detection device, and recording medium
CN108174185A (en) A kind of photographic method, device and terminal
WO2022042573A1 (en) Application control method and apparatus, electronic device, and readable storage medium
CN104902143B (en) A kind of image de-noising method and device based on resolution ratio
US20200322530A1 (en) Electronic device and method for controlling camera using external electronic device
CN113906730B (en) Electronic apparatus for obtaining skin image and control method thereof
CN104535178A (en) Light strength value detecting method and terminal
CN105957020B (en) Video generation device and image generating method
CN111310541B (en) Scene prediction method, terminal and storage medium
CN112036277B (en) Face recognition method, electronic equipment and computer readable storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN110929663B (en) Scene prediction method, terminal and storage medium
CN110969196B (en) Scene prediction method, terminal and storage medium
CN111602390A (en) Terminal white balance processing method, terminal and computer readable storage medium
CN105163040B (en) A kind of image processing method and mobile terminal
CN111567034A (en) Exposure compensation method, device and computer readable storage medium
CN113422893A (en) Image acquisition method and device, storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant