CN111310541B - Scene prediction method, terminal and storage medium
- Publication number: CN111310541B
- Application number: CN201911184047.7A
- Authority
- CN
- China
- Prior art keywords
- information
- sample
- light
- frequency
- direct current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
Abstract
The embodiments of the application provide a scene prediction method, a terminal and a storage medium, the method comprising the following steps: acquiring a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel, and a visible light band component; extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a scene prediction method, a terminal, and a storage medium.
Background
When photographing, if the scene type, such as indoor or outdoor, can be determined, a better image capture effect can be obtained. That is, scene prediction has become one of the important pieces of reference information a terminal needs for image processing. To predict the scene, the terminal can either deploy additional auxiliary equipment to collect specific data and then recognize indoor and outdoor scenes, or distinguish indoor from outdoor scenes by means of image processing.
However, scene prediction that relies on additional auxiliary equipment is costly at the deployment stage and requires complex preparation, which greatly limits its universality and usability and makes it inconvenient. Current image-processing-based scene prediction methods, meanwhile, have high computational complexity, which reduces prediction efficiency, and their scene prediction accuracy is poor.
Disclosure of Invention
The embodiment of the application provides a scene prediction method, a terminal and a storage medium, which can reduce the complexity of prediction and improve the prediction efficiency and the accuracy of scene prediction.
The technical scheme of the application is realized as follows:
the embodiment of the application provides a scene prediction method, which comprises the following steps:
acquiring a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel;
extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude in the first direct current component;
Determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component;
and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
In the above method, the acquiring of the first direct current component of the light in the shooting scene in the first frequency channel and the second direct current component in the second frequency channel includes:
acquiring first time domain information of the first frequency channel and second time domain information of the second frequency channel through a color temperature sensor;
performing time-frequency transformation operation on the first time domain information to obtain first frequency domain information;
obtaining a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component;
performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
and obtaining a direct current component of the second frequency domain information of the second frequency channel to obtain the second direct current component.
In the above method, the determining the first infrared band information and the second infrared band information according to the first direct current component, the second direct current component, and the visible light band component includes:
determining the first infrared band information according to the second direct current component and the visible light band component;
and determining the second infrared band information according to the first direct current component and the second direct current component.
In the above method, the inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene includes:
respectively carrying out normalization processing on the first optical frequency information and the second optical frequency information according to a preset frequency value to obtain normalized first optical frequency information and normalized second optical frequency information;
respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
And performing scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model to obtain a scene prediction result.
In the above method, before the acquiring of the first direct current component of the light in the shooting scene in the first frequency channel, the second direct current component in the second frequency channel and the visible light band component, the method further includes:
acquiring training sample data of a training sample image and a training sample scene of the training sample image;
inputting the training sample data into an initial classification model to obtain a sample classification result;
inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
and training the initial classification model by using the loss function value to obtain a preset classification model.
In the above method, the acquiring training sample data of the training sample image includes:
acquiring a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light band component in the training sample image;
determining first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component;
and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
In the above method, the determining the first sample infrared band information, the second sample infrared band information, the first sample optical frequency information, the first sample optical intensity information, the second sample optical frequency information, and the second sample optical intensity information of the training sample image according to the first sample dc component, the second sample dc component, and the sample visible light band component includes:
Extracting the first sample optical frequency information and the second sample optical frequency information from the first sample direct current component, and acquiring the first sample light intensity information corresponding to the first sample optical frequency information and the second sample light intensity information corresponding to the second sample optical frequency information;
determining the first sample infrared band information according to the second sample direct current component and the sample visible light band component;
and determining the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
In the above method, the inputting the training sample data into the initial classification model to obtain a sample classification result includes:
and inputting the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model to obtain the sample classification result.
In the above method, after the inputting of the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into the preset classification model to obtain the scene prediction result for the shooting scene, the method further includes:
Determining an automatic white balance AWB parameter according to the scene prediction result;
and performing white balance correction on the image by adopting the AWB parameters.
The embodiment of the application provides a terminal, which comprises:
an acquisition unit, configured to acquire a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel;
the extraction unit is used for extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude value in the first direct current component;
a determining unit configured to determine first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component;
the scene prediction unit is used for inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
The embodiment of the application provides a terminal, which comprises: a processor, a memory and a communication bus; when the processor executes a running program stored in the memory, the method described in any one of the above is implemented.
An embodiment of the present application provides a storage medium having stored thereon a computer program for application to a terminal, which when executed by a processor implements a method as described in any of the above.
The embodiments of the application provide a scene prediction method, a terminal and a storage medium, the method comprising the following steps: acquiring a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel; extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are the two light frequency information with the largest amplitude in the first direct current component; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene. With this scheme, the terminal uses the first infrared band information and the second infrared band information of the two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features, so different scenes can be distinguished according to the energy variation trends of different infrared bands, which improves the accuracy of scene prediction. Moreover, because the terminal performs scene prediction with only six feature parameters, namely the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information, the complexity of prediction is reduced and the prediction efficiency is improved.
Drawings
Fig. 1 is a flowchart of a scene prediction method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a placement position of an exemplary color temperature sensor in a terminal according to an embodiment of the present application;
fig. 3 is a schematic diagram two of a placement position of an exemplary color temperature sensor in a terminal according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a placement position of an exemplary color temperature sensor in a terminal according to the prior art;
fig. 5 is a schematic view of a placement position of an exemplary color temperature sensor on a side of a terminal display screen according to an embodiment of the present application;
fig. 6 is a schematic view of a placement position of an exemplary color temperature sensor on one side of a rear camera of a terminal according to an embodiment of the present application;
FIG. 7 is a schematic diagram of radiation intensities of a first frequency channel and a second frequency channel of an exemplary color temperature sensor according to an embodiment of the present application;
fig. 8 is a schematic diagram of time domain information corresponding to an exemplary frequency channel according to an embodiment of the present application;
fig. 9 is a schematic diagram of frequency domain information obtained by performing time-frequency conversion on exemplary time domain information according to an embodiment of the present application;
FIG. 10 is a graph showing spectral response of an exemplary color temperature sensor according to an embodiment of the present application;
FIG. 11 is an exemplary spectral power distribution of a fluorescent lamp according to an embodiment of the present application;
FIG. 12 is an exemplary daylight spectral energy distribution provided by an embodiment of the application;
FIG. 13 is an exemplary spectral power distribution of an incandescent lamp in accordance with an embodiment of the application;
fig. 14 is a flowchart of an exemplary scene determination method according to an embodiment of the present application;
FIG. 15 is a second flowchart of a scene prediction method according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 17 is a schematic diagram of a second structure of a terminal according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
There are many schemes by which a terminal can perform indoor and outdoor scene prediction. Specifically, there are methods based on external devices, such as wireless fidelity (WiFi), light-sensing and infrared devices, and there are methods based on the image itself. Image-based methods can be divided into conventional threshold-classification methods and machine-learning-based methods.
The way the terminal performs image processing may differ from scene to scene. For example, in an indoor scene, automatic exposure (Automatic Exposure, AE) must always consider enabling the anti-power-frequency flicker policy; for a low-brightness outdoor scene, a more suitable automatic white balance (Automatic White Balance, AWB) algorithm needs to be selected to restore the image. For example, in the AWB algorithm, if the current light source can be judged to be an outdoor light source, the AWB color temperature can simply be set to the position of D55, and the picture obtains a good color restoration effect.
Therefore, a good scene prediction method can help the AWB algorithm improve the color restoration of the image, and it reduces the restoration difficulty of the AWB algorithm for low-brightness outdoor scenes and high-brightness indoor scenes alike. Likewise, in the AE algorithm, if the scene corresponding to the current image can be accurately determined to be outdoor, the anti-flicker problem need not be considered at all, which allows more flexibility.
Currently, when an image processing method is used for scene prediction, feature extraction on the one hand needs to rely on a full-size image (such as 4000×3000), from which a number of structural features are extracted by multi-scale filtering; however, the image signal processing (Image Signal Processing, ISP) pipeline of a portable terminal such as a mobile phone generally only provides a small-size image (such as 120×90), so the precision of the features the terminal obtains with a filtering method designed for full-size images drops sharply, which reduces the accuracy of scene prediction. On the other hand, the image processing method extracts high-dimensional structure-related features from the current image, and the number of features is generally large, so real-time processing is difficult on a portable terminal such as a mobile phone, which reduces the efficiency of scene prediction.
Further, in terms of practical effect, the prediction accuracy of such complex structural features drops when they face irregularly segmented sky, solid-color scenes and indoor artificial structures.
Scene recognition algorithms based on YUV data sit after the demosaicing algorithm in the ISP pipeline and therefore see the scene late; because of this time-domain lag, their results cannot be well used by AE, AWB and auto focus (Auto Focus, AF) at the front end of the ISP.
In summary, in the prior art, the method for performing scene prediction based on the image processing method has higher computational complexity, reduces prediction efficiency, and has poor accuracy of scene prediction.
Example 1
An embodiment of the present application provides a scene prediction method, as shown in fig. 1, where the method may include:
S101, acquiring a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel.
The scene prediction method provided by the embodiment of the application is suitable for judging indoor and outdoor scenes during image processing of acquired images.
In the embodiment of the present application, the terminal may be any device having communication and storage functions, for example: tablet computers, cell phones, electronic readers, remote controllers, personal computers (Personal Computer, PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices and the like.
In the embodiment of the application, a color temperature sensor is arranged on the terminal. Specifically, the color temperature sensor may be disposed on one side of the front camera of the terminal, as shown in fig. 2, where it is placed on the left side of the front camera; the color temperature sensor may also be disposed on one side of the rear camera of the terminal, as shown in fig. 3, where it is placed below the rear camera.
The color temperature sensor may also be disposed in the notch ("bangs") region of a full screen. Specifically, fig. 4 is a schematic diagram of one such arrangement, in which the terminal places the color temperature sensor under the ink coating in the notch region.
The terminal may also place a color temperature sensor in the slit at the top. Fig. 5 is a schematic view of a placement position of a color temperature sensor on a display screen side of a terminal, and fig. 6 is a schematic view of a placement position of a color temperature sensor on a rear camera side of a terminal.
In the embodiment of the application, when the terminal acquires the current image, the color temperature sensor is started, and the first direct current component of the first frequency channel, the second direct current component of the second frequency channel and the visible light band component are respectively acquired by the color temperature sensor, wherein the radiation intensity of the first frequency channel of the color temperature sensor is larger than that of the second frequency channel. In practical application, as shown in fig. 7, where the abscissa represents time and the ordinate represents radiation intensity, the radiation intensity of the channel corresponding to 50 Hz is larger than that of the channel corresponding to 60 Hz; the first frequency channel of the color temperature sensor can therefore be the channel corresponding to 50 Hz, and the second frequency channel the channel corresponding to 60 Hz.
Specifically, the terminal acquires first time domain information of a first frequency channel and second time domain information of a second frequency channel through a color temperature sensor; then, the terminal performs time-frequency conversion operation on the first time domain information to obtain first frequency domain information; obtaining a direct current component of the first frequency domain information of the first frequency channel to obtain a first direct current component; the terminal performs time-frequency transformation operation on the second time domain information to obtain second frequency domain information; and obtaining a direct current component of the second frequency domain information of the second frequency channel to obtain a second direct current component.
For example, fig. 8 shows time domain information corresponding to a frequency channel, and fig. 9 shows the frequency domain information obtained by performing time-frequency conversion on the time domain information shown in fig. 8.
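As an illustration of this step, the following minimal sketch assumes the sensor's channel readings are available as arrays of time-domain samples (the sampling rate, signal shapes and variable names such as fd1_samples are hypothetical); the time-frequency transform is an FFT whose zero-frequency bin yields the direct current component:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1024) / 4096.0  # assumed 4096 Hz sampling rate
# Synthetic stand-ins for the FD1/FD2 time-domain readings: a DC level plus
# a ripple at twice the mains frequency (100 Hz / 120 Hz) and some noise.
fd1_samples = 0.8 + 0.1 * np.sin(2 * np.pi * 100 * t) + 0.01 * rng.standard_normal(t.size)
fd2_samples = 0.5 + 0.05 * np.sin(2 * np.pi * 120 * t) + 0.01 * rng.standard_normal(t.size)

def dc_component(samples: np.ndarray) -> float:
    """Perform the time-frequency transform (FFT) and return the magnitude of
    the zero-frequency bin, normalized by the sample count so it equals the
    mean level of the signal, i.e. the channel's direct current component."""
    spectrum = np.fft.rfft(samples)
    return float(np.abs(spectrum[0]) / samples.size)

fd1_dc = dc_component(fd1_samples)  # first direct current component
fd2_dc = dc_component(fd2_samples)  # second direct current component
```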
In the embodiment of the present application, the terminal further obtains a visible light band component by using the color temperature sensor. Fig. 10 is a schematic diagram of the spectral response curves of the color temperature sensor; as shown in fig. 10, the spectral response curves corresponding to R, G, B, C (the visible light band component), WB (full spectrum), FD1 (the first frequency channel) and FD2 (the second frequency channel) detected by the color temperature sensor vary differently with wavelength, and the terminal may use these spectral response curves to determine the first time domain information of the first frequency channel, the second time domain information of the second frequency channel and the visible light band component.
S102, extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude in the first direct current component.
After the terminal acquires the first direct current component of the light in the shooting scene in the first frequency channel, the second direct current component in the second frequency channel and the visible light band component, the terminal extracts the first light frequency information and the second light frequency information from the first direct current component, and acquires the first light intensity information corresponding to the first light frequency information and the second light intensity information corresponding to the second light frequency information.
In the embodiment of the application, the terminal searches the first direct current component for the first light frequency information, whose amplitude is the largest, and the second light frequency information, whose amplitude is the second largest, and acquires the first light intensity information corresponding to the first light frequency information and the second light intensity information corresponding to the second light frequency information by using the color temperature sensor.
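Continuing the sketch above (and reusing its hypothetical fd1_samples), the two largest-amplitude non-DC frequency bins of the channel's spectrum can stand in for the first and second light frequency information, with their amplitudes as the corresponding light intensity information:

```python
import numpy as np

def top_two_frequencies(samples: np.ndarray, sample_rate: float):
    """Return the two non-DC frequencies with the largest spectral amplitude,
    each paired with its amplitude (the corresponding light intensity)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / sample_rate)
    order = np.argsort(spectrum[1:])[::-1] + 1  # sort bins, skipping DC (bin 0)
    i1, i2 = order[0], order[1]
    return (freqs[i1], spectrum[i1]), (freqs[i2], spectrum[i2])

# FD1Q1/FD1M1: largest-amplitude frequency and its intensity;
# FD1Q2/FD1M2: second-largest-amplitude frequency and its intensity.
(fd1_q1, fd1_m1), (fd1_q2, fd1_m2) = top_two_frequencies(fd1_samples, 4096.0)
```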
S103, determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component.
After the terminal respectively acquires the first direct current component of the first frequency channel, the second direct current component of the second frequency channel and the visible light band component through the color temperature sensor, the terminal determines the first infrared band information and the second infrared band information according to the first direct current component, the second direct current component and the visible light band component.
In the embodiment of the application, the terminal determines the first infrared band information according to the second direct current component and the visible light band component.
Specifically, the terminal inputs the second direct current component and the visible light band component into the formula (1) to obtain first infrared band information.
IR1=(FD2DC-C)/FD2DC (1)
Wherein, IR1 is the first infrared band information, C is the visible band component, FD2 is the second frequency domain information of the second frequency channel, and DC is the direct current component operation, so FD2DC is the second direct current component.
In the embodiment of the application, the terminal determines the second infrared band information according to the first direct current component and the second direct current component.
Specifically, the terminal inputs the first direct current component and the second direct current component into the formula (2) to obtain second infrared band information.
IR2=(FD1DC-FD2DC)/FD1DC (2)
Wherein IR2 is the second infrared band information, FD1 is the first frequency domain information of the first frequency channel, and FD1DC is the first direct current component.
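A minimal sketch of formulas (1) and (2), assuming the direct current components and the visible band component are available as plain numbers; note that formula (2) is reconstructed here as (FD1DC-FD2DC)/FD1DC from its stated inputs, the first and second direct current components:

```python
def infrared_band_info(fd1_dc: float, fd2_dc: float, c: float) -> tuple[float, float]:
    """Compute the first and second infrared band information.

    ir1 follows formula (1): (FD2DC - C) / FD2DC.
    ir2 follows formula (2) as reconstructed from its stated inputs:
    (FD1DC - FD2DC) / FD1DC.
    """
    ir1 = (fd2_dc - c) / fd2_dc
    ir2 = (fd1_dc - fd2_dc) / fd1_dc
    return ir1, ir2
```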
As shown in fig. 11, the spectral energy distribution of a fluorescent lamp indicates that in an indoor scene the infrared band energy at 800 nm-900 nm is weak; as shown in fig. 12, the daylight spectral energy distribution shows that in a daylight scene the 800 nm-900 nm infrared band energy is strong and begins to decay strongly after 950 nm; and as shown in fig. 13, the spectral energy distribution of an incandescent lamp shows that the 800 nm-900 nm infrared band energy is stronger in an incandescent-lamp scene. Therefore, combining the infrared band intensity at 800 nm-900 nm with that at 950 nm-1000 nm helps distinguish different indoor and outdoor shooting scenes.
S104, inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result for the shooting scene.
After the terminal acquires the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information, the terminal inputs them into the preset classification model to obtain a scene prediction result for the shooting scene.
In the embodiment of the application, after acquiring the first light frequency information, the second light frequency information, the first light intensity information and the second light intensity information, the terminal normalizes the first light frequency information and the second light frequency information according to a preset frequency value to obtain normalized first light frequency information and normalized second light frequency information, and normalizes the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information.
In practical application, the terminal normalizes the first light frequency information and the second light frequency information by using 200 Hz, and normalizes the first light intensity information and the second light intensity information by using 65535.
In the embodiment of the application, a preset classification model is configured in the terminal. The terminal inputs the 6 feature parameters, namely the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information, into the preset classification model, and predicts on these 6 parameters by using the classification parameters obtained when the preset classification model was trained, so as to obtain the scene prediction result.
Optionally, the preset classification model may be an SVM model, a Bayesian classifier, an ensemble learning method or a decision tree, etc., selected according to the actual situation; the embodiment of the present application is not specifically limited in this respect.
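As a sketch of this step, assuming (as one of the options listed above) a support vector machine and using the normalization constants from the previous paragraphs; the helper names and the label convention are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

FREQ_NORM = 200.0         # preset frequency value (Hz)
INTENSITY_NORM = 65535.0  # preset light intensity value

def make_feature_vector(ir1: float, ir2: float,
                        q1: float, m1: float,
                        q2: float, m2: float) -> np.ndarray:
    """Assemble the six classification features: the two infrared band values
    plus the normalized light frequencies and light intensities."""
    return np.array([[ir1, ir2,
                      q1 / FREQ_NORM, m1 / INTENSITY_NORM,
                      q2 / FREQ_NORM, m2 / INTENSITY_NORM]])

def predict_scene(model: SVC, features: np.ndarray) -> str:
    """Run the trained preset classification model; a label of 1 is taken to
    mean an outdoor scene here (this label convention is an assumption)."""
    return "outdoor" if model.predict(features)[0] == 1 else "indoor"
```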
Further, after the terminal determines the scene prediction result of the shooting scene, the terminal can determine the AWB parameters according to the scene prediction result and then use the AWB parameters to perform white balance correction on the image. Because the terminal takes into account the different light source information corresponding to indoor and outdoor scenes when determining the AWB parameters, the color restoration effect of the image can be improved.
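A minimal sketch of this mapping, assuming the AWB parameter reduces to a correlated color temperature; the 5500 K outdoor value reflects the D55 setting mentioned earlier, while the indoor value is purely an illustrative placeholder:

```python
def awb_color_temperature(scene: str) -> int:
    """Choose an AWB color temperature (in kelvin) from the scene prediction."""
    return 5500 if scene == "outdoor" else 4000  # 4000 K indoor is a placeholder
```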
As shown in fig. 14, an exemplary scene determination procedure provided by an embodiment of the present application includes the following steps (a code sketch composing the earlier helpers follows the list):
1. the terminal reads in the time domain information of the color temperature sensor FD 1;
2. the terminal performs time-frequency conversion on FD1 time domain information and acquires a direct current component FD1DC thereof;
3. acquiring the two frequencies FD1Q1 and FD1Q2 with the strongest amplitudes and their corresponding intensities FD1M1 and FD1M2 from FD1DC, and respectively carrying out normalization processing on FD1Q1, FD1Q2, FD1M1 and FD1M2;
4. the terminal reads in the time domain information of the color temperature sensor FD 2;
5. the terminal performs time-frequency conversion on FD2 time domain information and acquires a direct current component FD2DC thereof;
6. the terminal acquires a visible light band component C;
7. the terminal calculates IR1 using IR1=(FD2DC-C)/FD2DC;
8. the terminal calculates IR2 using IR2=(FD1DC-FD2DC)/FD1DC;
9. the terminal selects a loss function of a preset training model and sets learning parameters;
10. the terminal trains the initial classification model by using the loss function and the learning parameters to obtain the preset classification model;
11. the terminal inputs IR1, IR2, FD1Q1, FD1Q2, FD1M1 and FD1M2 into the preset classification model to conduct scene prediction, and a prediction result is obtained.
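Tying the earlier sketches together (and reusing their hypothetical helpers dc_component, top_two_frequencies, infrared_band_info, make_feature_vector and predict_scene), the flow above corresponds roughly to:

```python
import numpy as np
from sklearn.svm import SVC

def scene_prediction_pipeline(fd1_samples: np.ndarray, fd2_samples: np.ndarray,
                              c: float, model: SVC,
                              sample_rate: float = 4096.0) -> str:
    fd1_dc = dc_component(fd1_samples)                                  # steps 1-2
    (q1, m1), (q2, m2) = top_two_frequencies(fd1_samples, sample_rate)  # step 3
    fd2_dc = dc_component(fd2_samples)                                  # steps 4-5
    ir1, ir2 = infrared_band_info(fd1_dc, fd2_dc, c)                    # steps 6-8
    features = make_feature_vector(ir1, ir2, q1, m1, q2, m2)            # normalization
    return predict_scene(model, features)                               # step 11
```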
It can be understood that the terminal uses the first infrared band information and the second infrared band information of the two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features. Different scenes can thus be distinguished according to the energy variation trends of different infrared bands, which improves the accuracy of scene prediction. Moreover, because scene prediction uses only six feature parameters, namely the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information, the complexity of prediction is reduced and the prediction efficiency is improved.
Example two
Based on the first embodiment, in the embodiment of the present application, before the terminal obtains the first dc component of the first frequency channel, the second dc component of the second frequency channel, and the visible light band component respectively through the color temperature sensor, a scene prediction method is further provided, as shown in fig. 15, where the method may include:
s201, training sample data of a training sample image and a training sample scene of the training sample image are obtained.
The scene prediction method provided by this embodiment of the application applies to the stage of training the classification model.
In the embodiment of the application, the terminal can divide the pre-stored image library to obtain the training sample image and the test sample image.
In the embodiment of the application, the terminal can be any device with communication and storage functions. For example: tablet computers, cell phones, electronic readers, remote controllers, personal computers (Personal Computer, PCs), notebook computers, vehicle-mounted devices, network televisions, wearable devices and the like.
It should be noted that, in the embodiment of the present application, the pre-stored image library may be used for training and testing the preset classification model.
Further, in an embodiment of the present application, the pre-stored image library may include images of a plurality of indoor scenes and images of a plurality of outdoor scenes. Further, in the application, the terminal can randomly divide images of different scenes in the pre-stored image library, so that training sample images and test sample images can be obtained. The training sample image and the test sample image are completely different, that is, sample data corresponding to one sample image in the pre-stored image library can only be one of the training sample data or the test sample data.
In an exemplary embodiment, the pre-stored image library stored in the terminal stores 1024 images of indoor scenes and 1134 images of outdoor scenes, and when the terminal performs training of the preset classification model, 80% of the images can be randomly extracted from the pre-stored image library to serve as training images, and 20% of the images serve as test images.
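A minimal sketch of this random split, under the assumption that the pre-stored image library is simply a list of image paths or records:

```python
import random

def split_samples(image_library, train_fraction=0.8, seed=42):
    """Randomly split the pre-stored image library into disjoint training and
    test sets (80% / 20% in the example above); no item appears in both."""
    items = list(image_library)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]
```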
In the embodiment of the application, the terminal acquires a first sample direct current component of the first frequency channel, a second sample direct current component of the second frequency channel and a sample visible light band component in the training sample image; then, the terminal determines first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component; and the terminal determines the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
Specifically, the process of determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information of the training sample image by the terminal according to the first sample direct current component, the second sample direct current component and the sample visible light band component is as follows: the terminal extracts first sample light frequency information and second sample light frequency information from the first sample direct current component, and acquires first sample light intensity information corresponding to the first sample light frequency information and second sample light intensity information corresponding to the second sample light frequency information; then, the terminal determines the infrared band information of the first sample according to the direct current component of the second sample and the visible band component of the sample; and the terminal determines the infrared band information of the second sample according to the first sample direct current component and the second sample direct current component.
It should be noted that, the process of acquiring training sample data by the terminal in the training stage is consistent with the process of acquiring the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information by the terminal in the testing stage, which is not described herein.
S202, inputting training sample data into an initial classification model to obtain a sample classification result.
After the terminal acquires training sample data of the training sample image and a training sample scene of the training sample image, the terminal inputs the training sample data into the initial classification model to obtain a sample classification result.
In the embodiment of the application, a terminal inputs first sample infrared band information, second sample infrared band information, first sample optical frequency information, first sample light intensity information, second sample optical frequency information and second sample light intensity information into an initial classification model to obtain a sample classification result.
S203, inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value.
After the training sample data is input into the initial classification model by the terminal to obtain a sample classification result, the training sample scene and the sample classification result are input into a preset loss function by the terminal to obtain a loss function value.
In the embodiment of the application, the preset loss function used by the terminal is a hinge loss function.
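For reference, the standard hinge loss, written in the plain style of formulas (1) and (2), is:

loss = max(0, 1 - y*f(x))

where y ∈ {-1, +1} is the label given by the training sample scene and f(x) is the classification model's output score for training sample x.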
In the embodiment of the application, a terminal inputs a sample classification result corresponding to training sample data and a training sample scene of a training sample image into a preset loss function to obtain a loss function value.
S204, training the initial classification model by using the loss function value to obtain a preset classification model.
After a training sample scene and a sample classification result are input into a preset loss function by the terminal to obtain a loss function value, the terminal trains an initial classification model by using the loss function value to obtain the preset classification model.
In the embodiment of the application, since the training sample data in the application comprises 6 training feature parameters, the initial classification model is trained using a linear kernel when the training parameters are selected; specifically, the step length is 0.01 and the gamma is 60000.
In the embodiment of the application, the initial classification model is trained with these training parameters until the loss function value is minimized; at that point the preset classification model has been obtained, and the terminal can then use the preset classification model to carry out the scene prediction process.
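A training sketch under the assumption that the model is a linear-kernel SVM (sklearn's SVC minimizes a hinge-loss objective); how the quoted step length 0.01 and gamma 60000 map onto solver settings is not specified, so they appear below only as illustrative hyperparameters:

```python
import numpy as np
from sklearn.svm import SVC

def train_preset_model(train_features: np.ndarray, train_labels: np.ndarray) -> SVC:
    """Train the initial classification model on the six-feature training
    sample data (labels: +1 outdoor, -1 indoor, by the earlier convention)."""
    model = SVC(kernel="linear", tol=0.01, gamma=60000)  # illustrative settings
    return model.fit(train_features, train_labels)
```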
It can be understood that the terminal uses the first infrared band information and the second infrared band information of the two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features. Different scenes can thus be distinguished according to the energy variation trends of different infrared bands, which improves the accuracy of scene prediction. Moreover, because scene prediction uses only six feature parameters, namely the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information, the complexity of prediction is reduced and the prediction efficiency is improved.
Example III
An embodiment of the present application provides a terminal, as shown in fig. 16, the terminal 1 includes:
an obtaining unit 10, configured to obtain a first direct current component of a light ray in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel, and a visible light band component, where a radiation intensity of the first frequency channel is greater than a radiation intensity of the second frequency channel;
an extracting unit 11, configured to extract first light frequency information and second light frequency information from the first dc component, and obtain first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, where the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude in the first dc component;
a determining unit 12 for determining first infrared band information and second infrared band information from the first direct current component, the second direct current component, and the visible light band component;
the scene prediction unit 13 is configured to input the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information, and the second light intensity information into a preset classification model, and obtain a scene prediction result for the shooting scene.
Optionally, the terminal further includes: the time-frequency conversion unit and the direct current component taking unit;
the acquiring unit 10 is further configured to acquire, by using a color temperature sensor, first time domain information of the first frequency channel and second time domain information of the second frequency channel;
the time-frequency conversion unit is further used for performing time-frequency conversion operation on the first time domain information to obtain first frequency domain information; performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
the direct current component obtaining unit is further configured to obtain a direct current component of the first frequency domain information of the first frequency channel, so as to obtain the first direct current component; and obtaining a direct current component of the second frequency domain information of the second frequency channel to obtain the second direct current component.
Optionally, the determining unit 12 is further configured to determine the first infrared band information according to the second direct current component and the visible light band component; and determining the second infrared band information according to the first direct current component and the second direct current component.
Optionally, the terminal further includes: a normalization unit;
the normalization unit is used for respectively carrying out normalization processing on the first optical frequency information and the second optical frequency information according to a preset frequency value to obtain normalized first optical frequency information and normalized second optical frequency information; respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
The scene prediction unit 13 is further configured to perform scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information, and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model, so as to obtain the scene prediction result.
Optionally, the terminal further includes: an input unit and a training unit;
the acquiring unit 10 is further configured to acquire training sample data of a training sample image and a training sample scene of the training sample image;
the input unit is used for inputting the training sample data into an initial classification model to obtain a sample classification result; inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
the training unit is used for training the initial classification model by using the loss function value to obtain a preset classification model.
Optionally, the acquiring unit 10 is further configured to acquire a first sample dc component of a first frequency channel, a second sample dc component of a second frequency channel, and a sample visible light band component in the training sample image;
The determining unit 12 is further configured to determine first sample infrared band information, second sample infrared band information, first sample optical frequency information, first sample optical intensity information, second sample optical frequency information, and second sample optical intensity information of the training sample image according to the first sample dc component, the second sample dc component, and the sample visible light band component; and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
Optionally, the extracting unit 11 is further configured to extract the first sample optical frequency information and the second sample optical frequency information from the first sample dc component, and obtain the first sample optical intensity information corresponding to the first sample optical frequency information and the second sample optical intensity information corresponding to the second sample optical frequency information;
the determining unit 12 is further configured to determine the first sample infrared band information according to the second sample dc component and the sample visible band component; and determining the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
Optionally, the input unit is further configured to input the first sample infrared band information, the second sample infrared band information, the first sample optical frequency information, the first sample optical intensity information, the second sample optical frequency information, and the second sample optical intensity information into the initial classification model, so as to obtain the sample classification result.
Optionally, the terminal further includes: a white balance correction unit;
the determining unit 12 is further configured to determine an automatic white balance AWB parameter according to the scene prediction result;
the white balance correction unit is used for carrying out white balance correction on the image by adopting the AWB parameters.
According to the terminal provided by the embodiment of the application, a first direct current component of the light in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light band component are obtained, the radiation intensity of the first frequency channel being larger than that of the second frequency channel; first light frequency information and second light frequency information are extracted from the first direct current component, and first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information are acquired, the first light frequency information and the second light frequency information being the two light frequency information with the largest amplitude in the first direct current component; first infrared band information and second infrared band information are determined according to the first direct current component, the second direct current component and the visible light band component; and the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information are input into a preset classification model to obtain a scene prediction result for the shooting scene. The terminal provided by this embodiment thus uses the first infrared band information and the second infrared band information of the two frequency channels in the spectrum as classification features for indoor/outdoor classification, and likewise uses the light frequency information and frequency intensity in the first frequency channel as classification features, so different scenes can be distinguished according to the energy variation trends of different infrared bands, which improves the accuracy of scene prediction. Because scene prediction uses only the six feature parameters, the complexity of prediction is reduced and the prediction efficiency is improved.
Fig. 17 is a schematic diagram of a second component structure of a terminal 1 according to an embodiment of the present application, based on the same inventive concept as the above embodiment. As shown in Fig. 17, the terminal 1 of this embodiment includes: a processor 14, a memory 15 and a communication bus 16.
In a specific embodiment, the acquiring unit 10, the extracting unit 11, the determining unit 12, the scene predicting unit 13, the time-frequency transforming unit, the direct current component extracting unit, the normalizing unit, the input unit, the training unit and the white balance correcting unit may be implemented by a processor 14 located on the terminal 1, where the processor 14 may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a CPU, a controller, a microcontroller and a microprocessor. It will be appreciated that the electronics used to implement the above-described processor functions may differ between devices, and this embodiment does not specifically limit them.
In the embodiment of the present application, the communication bus 16 is used to implement connection communication between the processor 14 and the memory 15; the processor 14 implements the following scene prediction method when executing the running program stored in the memory 15:
the processor 14 is configured to obtain a first direct current component of a light ray in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel, and a visible light band component, where a radiation intensity of the first frequency channel is greater than a radiation intensity of the second frequency channel; extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude in the first direct current component; determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component; and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene.
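As an illustration of the "two light frequency information with the largest amplitude" step, the following Python sketch (all names hypothetical; the patent prescribes no particular implementation) selects the two strongest spectral lines of a channel spectrum:

```python
import numpy as np

def top_two_frequencies(spectrum: np.ndarray, freqs: np.ndarray):
    # spectrum: complex rfft output for one channel; freqs: matching
    # frequency axis from np.fft.rfftfreq. Returns the two spectral
    # lines with the largest amplitude as (frequency, intensity) pairs.
    mags = np.abs(spectrum)
    mags[0] = 0.0                 # exclude the zero-frequency (DC) bin
    order = np.argsort(mags)      # ascending by amplitude
    i1, i2 = order[-1], order[-2]
    return (freqs[i1], mags[i1]), (freqs[i2], mags[i2])
```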
Optionally, the processor 14 is further configured to obtain, by using a color temperature sensor, first time domain information of the first frequency channel and second time domain information of the second frequency channel; perform a time-frequency transformation operation on the first time domain information to obtain first frequency domain information; obtain a direct current component from the first frequency domain information to obtain the first direct current component; perform a time-frequency transformation operation on the second time domain information to obtain second frequency domain information; and obtain a direct current component from the second frequency domain information to obtain the second direct current component.
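A minimal sketch of this acquisition chain, assuming one channel's readings are available as a NumPy array; the use of rfft and the normalisation by sample count are illustrative choices, not mandated by the patent:

```python
import numpy as np

def dc_component(time_samples: np.ndarray) -> float:
    # Time-frequency transform of one channel's readings; bin 0 of the
    # FFT is the zero-frequency term, i.e. the direct current component.
    spectrum = np.fft.rfft(time_samples)
    return np.abs(spectrum[0]) / len(time_samples)  # mean signal level
```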
Optionally, the processor 14 is further configured to determine the first infrared band information according to the second direct current component and the visible light band component, and determine the second infrared band information according to the first direct current component and the second direct current component.
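One plausible reading of this step is a simple differencing scheme, sketched below; the exact arithmetic is an assumption, since the patent only states which inputs each band estimate depends on:

```python
def infrared_band_info(first_dc: float, second_dc: float,
                       visible: float) -> tuple[float, float]:
    # Assumed scheme: subtracting the visible-band component from the
    # second channel estimates the 800-900 nm band, and differencing
    # the two channels estimates the 950-1000 nm band.
    first_ir = second_dc - visible
    second_ir = first_dc - second_dc
    return first_ir, second_ir
```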
Optionally, the processor 14 is further configured to normalize the first light frequency information and the second light frequency information respectively according to a preset frequency value, to obtain normalized first light frequency information and normalized second light frequency information; normalize the first light intensity information and the second light intensity information respectively according to a preset light intensity value, to obtain normalized first light intensity information and normalized second light intensity information; and perform scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model, to obtain the scene prediction result.
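A sketch of the normalised prediction step, assuming a logistic model whose parameters (w, b) come from the training sketch further below; f_ref and i_ref stand in for the patent's unspecified preset frequency and light intensity values:

```python
import numpy as np

def predict_scene(ir1, ir2, f1, i1, f2, i2, w, b,
                  f_ref=120.0, i_ref=1024.0):
    # Normalise frequencies and intensities by the preset reference
    # values, then classify the six-feature vector. 120 Hz is only a
    # plausible reference (mains flicker sits at 100/120 Hz); the
    # patent fixes neither constant.
    x = np.array([ir1, ir2, f1 / f_ref, i1 / i_ref, f2 / f_ref, i2 / i_ref])
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return "outdoor" if p > 0.5 else "indoor"
```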
Optionally, the processor 14 is further configured to acquire training sample data of a training sample image and a training sample scene of the training sample image; input the training sample data into an initial classification model to obtain a sample classification result; input the training sample scene and the sample classification result into a preset loss function to obtain a loss function value; and train the initial classification model by using the loss function value to obtain the preset classification model.
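The patent leaves the model family and the "preset loss function" open; as one concrete possibility, the sketch below trains a logistic classifier with cross-entropy loss on the six-feature samples (the 0 = indoor, 1 = outdoor label encoding is an assumption):

```python
import numpy as np

def train_classifier(X: np.ndarray, y: np.ndarray,
                     lr: float = 0.1, epochs: int = 500):
    # X: (n_samples, 6) training sample data; y: (n_samples,) labels.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sample classification result
        loss = -np.mean(y * np.log(p + 1e-9)
                        + (1 - y) * np.log(1 - p + 1e-9))  # loss function value
        grad = (p - y) / len(y)                 # gradient of that loss
        w -= lr * (X.T @ grad)                  # train with the loss value
        b -= lr * grad.sum()
    return w, b                                 # classification parameters
```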
Optionally, the processor 14 is further configured to obtain a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light band component in the training sample image; determine first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component; and determine the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
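Composing the helpers sketched above, the six-element training feature vector might be assembled as follows (SAMPLE_RATE and all helper names are hypothetical):

```python
import numpy as np

SAMPLE_RATE = 2048.0  # assumed sensor sampling rate in Hz

def training_sample_features(ch1: np.ndarray, ch2: np.ndarray,
                             visible: float) -> np.ndarray:
    spectrum = np.fft.rfft(ch1)
    freqs = np.fft.rfftfreq(len(ch1), d=1.0 / SAMPLE_RATE)
    (f1, i1), (f2, i2) = top_two_frequencies(spectrum, freqs)
    ir1, ir2 = infrared_band_info(dc_component(ch1),
                                  dc_component(ch2), visible)
    # Six features in the order named by the patent.
    return np.array([ir1, ir2, f1, i1, f2, i2])
```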
Optionally, the processor 14 is further configured to extract the first sample light frequency information and the second sample light frequency information from the first sample direct current component, and obtain the first sample light intensity information corresponding to the first sample light frequency information and the second sample light intensity information corresponding to the second sample light frequency information; determine the first sample infrared band information according to the second sample direct current component and the sample visible light band component; and determine the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
Optionally, the processor 14 is further configured to input the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model, so as to obtain the sample classification result.
Optionally, the processor 14 is further configured to determine an automatic white balance (AWB) parameter according to the scene prediction result, and perform white balance correction on the image by using the AWB parameter.
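The patent states only that the AWB parameters are determined from the scene prediction result; a hypothetical gain-preset mapping is sketched below (the gain values are illustrative placeholders, not calibrated figures):

```python
import numpy as np

# Illustrative per-scene gains; a real terminal would calibrate these.
AWB_PRESETS = {
    "indoor":  {"r": 1.8, "g": 1.0, "b": 1.4},
    "outdoor": {"r": 2.1, "g": 1.0, "b": 1.2},
}

def white_balance(image: np.ndarray, scene: str) -> np.ndarray:
    # image: H x W x 3 RGB array; scale each channel by its gain.
    g = AWB_PRESETS[scene]
    gains = np.array([g["r"], g["g"], g["b"]])
    return np.clip(image.astype(float) * gains, 0, 255).astype(image.dtype)
```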
An embodiment of the present application provides a computer-readable storage medium storing one or more programs, where the one or more programs are executable by one or more processors and applied to a terminal, and where the computer program, when executed, implements the scene prediction method described in the first embodiment and the second embodiment.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present disclosure may be embodied, essentially or in the part contributing to the related art, in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including several instructions for causing an image display device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the present application.
Claims (12)
1. A method of scene prediction, the method comprising:
acquiring a first direct current component of light rays in a shooting scene in a first frequency channel, a second direct current component of light rays in a second frequency channel and a visible light wave band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel;
extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude in the first direct current component;
determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component; the first infrared band information is used for measuring the infrared band intensity of 800nm-900nm, and the second infrared band information is used for measuring the infrared band intensity of 950nm-1000 nm;
and inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene; the scene prediction result includes an indoor scene and an outdoor scene.
2. The method according to claim 1, wherein the acquiring a first direct current component of light rays in the shooting scene in a first frequency channel and a second direct current component in a second frequency channel comprises:
acquiring first time domain information of the first frequency channel and second time domain information of the second frequency channel through a color temperature sensor;
performing time-frequency transformation operation on the first time domain information to obtain first frequency domain information;
obtaining a direct current component from the first frequency domain information of the first frequency channel to obtain the first direct current component;
performing time-frequency transformation operation on the second time domain information to obtain second frequency domain information;
and obtaining a direct current component of the second frequency domain information of the second frequency channel to obtain the second direct current component.
3. The method according to claim 1, wherein the determining first infrared band information and second infrared band information according to the first direct current component, the second direct current component and the visible light band component comprises:
determining the first infrared band information according to the second direct current component and the visible light band component;
and determining the second infrared band information according to the first direct current component and the second direct current component.
4. The method according to claim 1, wherein the inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene comprises:
respectively carrying out normalization processing on the first light frequency information and the second light frequency information according to a preset frequency value to obtain normalized first light frequency information and normalized second light frequency information;
respectively carrying out normalization processing on the first light intensity information and the second light intensity information according to a preset light intensity value to obtain normalized first light intensity information and normalized second light intensity information;
and performing scene prediction on the first infrared band information, the second infrared band information, the normalized first light frequency information, the normalized first light intensity information, the normalized second light frequency information and the normalized second light intensity information by using the classification parameters obtained by training the preset classification model to obtain a scene prediction result.
5. The method according to claim 1, wherein before the acquiring a first direct current component of light rays in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light wave band component, the method further comprises:
acquiring training sample data of a training sample image and a training sample scene of the training sample image;
inputting the training sample data into an initial classification model to obtain a sample classification result;
inputting the training sample scene and the sample classification result into a preset loss function to obtain a loss function value;
and training the initial classification model by using the loss function value to obtain a preset classification model.
6. The method of claim 5, wherein the acquiring training sample data of the training sample image comprises:
acquiring a first sample direct current component of a first frequency channel, a second sample direct current component of a second frequency channel and a sample visible light wave band component in the training sample image;
determining first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component;
and determining the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information as the training sample data.
7. The method according to claim 6, wherein the determining first sample infrared band information, second sample infrared band information, first sample light frequency information, first sample light intensity information, second sample light frequency information and second sample light intensity information of the training sample image according to the first sample direct current component, the second sample direct current component and the sample visible light band component comprises:
extracting the first sample optical frequency information and the second sample optical frequency information from the first sample direct current component, and acquiring the first sample light intensity information corresponding to the first sample optical frequency information and the second sample light intensity information corresponding to the second sample optical frequency information;
determining the first sample infrared band information according to the second sample direct current component and the sample visible light band component;
and determining the second sample infrared band information according to the first sample direct current component and the second sample direct current component.
8. The method according to claim 6, wherein the inputting the training sample data into an initial classification model to obtain a sample classification result comprises:
and inputting the first sample infrared band information, the second sample infrared band information, the first sample light frequency information, the first sample light intensity information, the second sample light frequency information and the second sample light intensity information into the initial classification model to obtain the sample classification result.
9. The method according to claim 1, wherein after the inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene, the method further comprises:
determining an automatic white balance AWB parameter according to the scene prediction result;
and performing white balance correction on the image by adopting the AWB parameters.
10. A terminal, the terminal comprising:
an acquisition unit, configured to acquire a first direct current component of light rays in a shooting scene in a first frequency channel, a second direct current component in a second frequency channel and a visible light wave band component, wherein the radiation intensity of the first frequency channel is larger than that of the second frequency channel;
the extraction unit is used for extracting first light frequency information and second light frequency information from the first direct current component, and acquiring first light intensity information corresponding to the first light frequency information and second light intensity information corresponding to the second light frequency information, wherein the first light frequency information and the second light frequency information are two light frequency information with the largest amplitude value in the first direct current component;
a determining unit configured to determine first infrared band information and second infrared band information according to the first direct current component, the second direct current component, and the visible light band component; the first infrared band information is used for measuring the infrared band intensity of 800nm-900nm, and the second infrared band information is used for measuring the infrared band intensity of 950nm-1000 nm;
the scene prediction unit is used for inputting the first infrared band information, the second infrared band information, the first light frequency information, the first light intensity information, the second light frequency information and the second light intensity information into a preset classification model to obtain a scene prediction result aiming at the shooting scene; the scene prediction result includes an indoor scene and an outdoor scene.
11. A terminal, the terminal comprising: a processor, a memory, and a communication bus; the processor, when executing an operating program stored in the memory, implements the method according to any one of claims 1-9.
12. A storage medium having stored thereon a computer program for application to a terminal, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911184047.7A CN111310541B (en) | 2019-11-27 | 2019-11-27 | Scene prediction method, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911184047.7A CN111310541B (en) | 2019-11-27 | 2019-11-27 | Scene prediction method, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111310541A CN111310541A (en) | 2020-06-19 |
CN111310541B true CN111310541B (en) | 2023-09-29 |
Family
ID=71159674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911184047.7A Active CN111310541B (en) | 2019-11-27 | 2019-11-27 | Scene prediction method, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310541B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294983A (en) * | 2012-02-24 | 2013-09-11 | 北京明日时尚信息技术有限公司 | Scene recognition method in static picture based on partitioning block Gabor characteristics |
CN103413142A (en) * | 2013-07-22 | 2013-11-27 | 中国科学院遥感与数字地球研究所 | Remote sensing image land utilization scene classification method based on two-dimension wavelet decomposition and visual sense bag-of-word model |
CN103493212A (en) * | 2011-03-29 | 2014-01-01 | 欧司朗光电半导体有限公司 | Unit for determining the type of a dominating light source by means of two photodiodes |
CN105846896A (en) * | 2016-05-16 | 2016-08-10 | 苏州安莱光电科技有限公司 | Visible light OFDM communication device for infrared compensation total range light modulation |
CN107622281A (en) * | 2017-09-20 | 2018-01-23 | 广东欧珀移动通信有限公司 | Image classification method, device, storage medium and mobile terminal |
CN108027278A (en) * | 2015-08-26 | 2018-05-11 | 株式会社普瑞密斯 | Lighting detecting device and its method |
CN108470169A (en) * | 2018-05-23 | 2018-08-31 | 国政通科技股份有限公司 | Face identification system and method |
CN109379584A (en) * | 2018-11-26 | 2019-02-22 | 北京科技大学 | Camera system and image quality adjusting method under a kind of complex environment light application conditions |
Also Published As
Publication number | Publication date |
---|---|
CN111310541A (en) | 2020-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101858646B1 (en) | Apparatus and method for fusion of image | |
US9813635B2 (en) | Method and apparatus for auto exposure value detection for high dynamic range imaging | |
US8593522B2 (en) | Digital camera, image processing apparatus, and image processing method | |
CN111027489B (en) | Image processing method, terminal and storage medium | |
US11321830B2 (en) | Image detection method and apparatus and terminal | |
WO2019052329A1 (en) | Facial recognition method and related product | |
US10027878B2 (en) | Detection of object in digital image | |
CN105049718A (en) | Image processing method and terminal | |
US11678180B2 (en) | Iris recognition workflow | |
CN111163302B (en) | Scene color restoration method, terminal and storage medium | |
US20150131902A1 (en) | Digital Image Analysis | |
CN104902143B (en) | A kind of image de-noising method and device based on resolution ratio | |
US11989863B2 (en) | Method and device for processing image, and storage medium | |
US20100123802A1 (en) | Digital image signal processing method for performing color correction and digital image signal processing apparatus operating according to the digital image signal processing method | |
CN104535178A (en) | Light strength value detecting method and terminal | |
CN111310541B (en) | Scene prediction method, terminal and storage medium | |
EP3893488A1 (en) | Solid-state imaging device, solid-state imaging method, and electronic apparatus | |
CN111327827A (en) | Shooting scene recognition control method and device and shooting equipment | |
CN112036277B (en) | Face recognition method, electronic equipment and computer readable storage medium | |
US20100201866A1 (en) | Digital photographing apparatus which sets a shutter speed according to a frequency of an illumination apparatus | |
CN114418914A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110929663B (en) | Scene prediction method, terminal and storage medium | |
CN111416936B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110969196B (en) | Scene prediction method, terminal and storage medium | |
CN111567034A (en) | Exposure compensation method, device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||