Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, computer device, and storage medium for improving accuracy of evaluating a target image.
In a first aspect, the present application provides an image processing method, the method comprising:
acquiring a target image to be evaluated;
dividing the target image into a first area and a second area according to the pixel brightness value of the target image;
extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information;
and determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
In some embodiments of the present application, the acquiring a target image to be evaluated includes:
acquiring an image to be evaluated, and dividing the image to be evaluated into a first area and a second area according to the pixel brightness value of the image to be evaluated;
acquiring a first area brightness average value of the first area and a second area brightness average value of the second area;
obtaining the image light ratio of the image to be evaluated according to the ratio of the second area brightness average value to the first area brightness average value;
and if the image light ratio is larger than a preset light ratio threshold value, determining the image to be evaluated as a target image.
In some embodiments of the present application, the dividing the target image into a first region and a second region according to the pixel brightness value of the target image includes:
acquiring gray values corresponding to all pixel points in the target image;
determining, according to a gray division threshold and the gray value of each pixel point, a first pixel point with a gray value larger than the gray division threshold and a second pixel point with a gray value not larger than the gray division threshold;
and determining the area where the first pixel point is located as a first area, and determining the area where the second pixel point is located as a second area.
In some embodiments of the present application, the extracting the first image feature information and the second image feature information from the first region and the second region, respectively, includes:
acquiring a maximum value and a minimum value of pixel brightness in the target image;
acquiring a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value;
carrying out normalization processing on the first image information of the first area based on the first normalization parameter, and determining first image characteristic information according to the first image information after normalization processing;
and carrying out normalization processing on the second image information of the second area based on the second normalization parameter, and determining second image characteristic information according to the second image information after normalization processing.
In some embodiments of the present application, the first image information includes first gray scale information of each pixel point in the first region, and the second image information includes second gray scale information of each pixel point in the second region;
the normalizing processing is performed on the first image information of the first area based on the first normalizing parameter, and the determining the first image feature information according to the normalized first image information includes:
carrying out normalization processing on the first gray information of each pixel point in the first region based on the first normalization parameter, and calculating the entropy value of the normalized first gray information to obtain a first gray entropy value;
the normalizing processing is performed on the second image information of the second area based on the second normalizing parameter, and the determining the second image feature information according to the normalized second image information includes:
and carrying out normalization processing on the second gray information of each pixel point in the second region based on the second normalization parameter, and calculating the entropy value of the normalized second gray information to obtain a second gray entropy value.
In some embodiments of the present application, the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
if the first image characteristic information and the second image characteristic information both accord with the preset image quality parameter conditions, determining the evaluation result of the target image as a high dynamic range image, and acquiring weights corresponding to the image quality parameters;
and acquiring the dynamic range quality value of the target image according to each image quality parameter and the weight corresponding to each image quality parameter.
In some embodiments of the present application, the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
if any one of the first image characteristic information or the second image characteristic information does not accord with the preset image quality parameter condition, determining the evaluation result of the target image as a non-high dynamic range image.
In a second aspect, the present application provides an image processing apparatus comprising:
a target image acquisition module, used for acquiring a target image to be evaluated;
an image region segmentation module, used for segmenting the target image into a first region and a second region according to the pixel brightness value of the target image;
a characteristic information extraction module, used for extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information;
and an image information acquisition module, used for determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
In a third aspect, the present application also provides a computer device comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to implement the image processing method.
In a fourth aspect, the present application also provides a computer readable storage medium having a computer program stored thereon, the computer program being loaded by a processor to perform the steps of the image processing method.
In a fifth aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the first aspect.
According to the image processing method, apparatus, computer device, and storage medium described above, a target image to be evaluated is acquired, the target image is divided into a first area and a second area according to the pixel brightness value of the target image, first image characteristic information and second image characteristic information are then extracted from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information, and finally the evaluation result of the target image is determined according to the first image characteristic information and the second image characteristic information. The image is segmented according to the pixel brightness information in the target image, and the image characteristic information is then extracted separately for the areas of different brightness in the target image, so that the influence of in-image brightness on the acquisition of image characteristic information from the target image is reduced, the ability of the image characteristic information to represent the information in the target image is improved, and the accuracy of the subsequent evaluation of the target image is improved.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the word "for example" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In the embodiment of the present application, it should be noted that, because the image processing method provided by the present application is executed in a computer device, the processing objects of the computer device exist in the form of data or information; an image, for example, is essentially image data. It should be understood that in the subsequent embodiments, if brightness, gray information, color information, and the like are mentioned, corresponding data exist in each case so that the computer device can process them; details are not repeated herein.
In the embodiment of the present application, it should be further noted that the image processing method provided in the embodiment of the present application may be applied to an image processing system as shown in fig. 1. The image processing system includes a terminal 100 and a server 200. The terminal 100 may be a device including a camera device or a display device, and specifically may be a desktop terminal or a mobile terminal, for example, a mobile phone, a tablet computer, or a notebook computer, or may be a camera for information acquisition, storage, and transmission. The server 200 may be a stand-alone server, or may be a server network or a server cluster, including but not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers. The cloud server is composed of a large number of computers or network servers based on cloud computing (Cloud Computing).
It will be appreciated by those skilled in the art that the application environment shown in fig. 1 is merely one application scenario of the present application and does not constitute a limitation on its application scenarios. Other application environments may include more or fewer computer devices than shown in fig. 1; for example, only one server 200 is shown in fig. 1, but the image processing system may also include one or more other servers, which is not limited herein. In addition, as shown in fig. 1, the image processing system may also include a memory for storing data, such as image data.
It should be further noted that the schematic diagram of the image processing system shown in fig. 1 is only an example. The image processing system and scene described in the embodiments of the present application are intended to describe the technical solutions of these embodiments more clearly and do not constitute a limitation on the technical solutions provided herein; those skilled in the art will appreciate that, with the evolution of image processing systems and the appearance of new service scenes, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
Referring to fig. 2, an embodiment of the present application provides an image processing method, described below mainly by taking the server 200 in fig. 1 as the execution subject. The method includes steps S210 to S240, specifically as follows:
S210, acquiring a target image to be evaluated.
The target image refers to an image whose image quality needs to be evaluated, for example whether it is a high dynamic range image and what its dynamic range level is; it includes, but is not limited to, a picture, a video frame in a video, and the like.
Specifically, the target image may be an image acquired by a photographing device of the terminal 100. The terminal 100 acquires the image and then sends it to the server 200, and the server 200 evaluates the image to determine evaluation results such as whether it is a high dynamic range image and what its dynamic range level is, so as to obtain information about the image quality of the image acquired by the terminal 100. Parameters and procedures of links such as image imaging, image processing, and image storage may then be adjusted according to the evaluation result to improve the image quality of the target image. Further, the shooting performance of the photographing device corresponding to the terminal 100 can be determined according to the evaluation result.
Further, a high dynamic range image is an image with a larger exposure dynamic range than a normal image, that is, an image with richer luminance information; when the image light ratio of an image is small, the image is less likely to be a high dynamic range image. Therefore, in one embodiment, step S210 of acquiring the target image to be evaluated includes: acquiring an image to be evaluated; dividing the image to be evaluated into a first area and a second area according to the pixel brightness value of the image to be evaluated; acquiring a first area brightness average value of the first area and a second area brightness average value of the second area; obtaining the image light ratio of the image to be evaluated according to the ratio of the second area brightness average value to the first area brightness average value; and determining the image to be evaluated as the target image if the image light ratio is greater than a preset light ratio threshold.
The pixel brightness value reflects how bright a pixel point is (i.e., its brightness) and may be calculated from the values of the corresponding R channel, G channel, and B channel of each pixel point. Furthermore, the pixel brightness value can also be measured by a gray value: specifically, the image to be evaluated can be converted into a corresponding gray image, and the pixel brightness value of each pixel point in the image to be evaluated can then be determined according to the gray value of the corresponding pixel point in the gray image.
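For illustration, a minimal sketch of such a brightness computation is given below; it assumes the common BT.601 luma weights for combining the R, G, and B channels, which the present application does not prescribe:

```python
import numpy as np

def pixel_brightness(image_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel brightness of an H x W x 3 RGB image (assumed BT.601 weights)."""
    rgb = image_rgb.astype(np.float64)
    # Weighted sum of the R, G, and B channel values of each pixel point.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```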
The first area and the second area may refer to a bright area and a dark area in the image to be evaluated. Specifically, the first area may be the bright area and the second area the dark area, or the first area may be the dark area and the second area the bright area, which is not limited herein. After the image to be evaluated is obtained, the pixel brightness value of each pixel point can be obtained; the pixel points whose pixel brightness values are greater than a preset threshold are divided into one area, and the pixel points whose pixel brightness values are less than or equal to the preset threshold are divided into the other area, thereby obtaining the first area and the second area.
Wherein the image light ratio refers to the difference in brightness between the highlights and shadows of an image. Specifically, the image light ratio of the image to be evaluated can be determined from the brightness difference between the first area and the second area, that is, between the bright area and the dark area: the larger the image light ratio, the more likely the image to be evaluated is a high dynamic range image; the smaller the image light ratio, the less likely it is.
Specifically, after the image to be evaluated is obtained, it is divided into a first area and a second area according to its pixel brightness values. The average of the pixel brightness values of the pixel points in the first area is obtained as the first area brightness average value, and the average of the pixel brightness values of the pixel points in the second area is obtained as the second area brightness average value. The possibility that the image to be evaluated is a high dynamic range image is then determined according to the ratio of the second area brightness average value to the first area brightness average value (that is, the image light ratio). When the image light ratio is not greater than the preset light ratio threshold, the image to be evaluated is unlikely to be a high dynamic range image, and the subsequent image processing method does not need to be executed.
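As a minimal sketch of this screening step (the function name and both threshold values are hypothetical, and the light ratio is taken as the bright-area average divided by the dark-area average, matching L_h / L_l in the flow of fig. 4 below):

```python
import numpy as np

def is_high_light_ratio(image_rgb: np.ndarray,
                        brightness_threshold: float = 155.0,  # hypothetical value
                        light_ratio_threshold: float = 4.0    # hypothetical value
                        ) -> bool:
    """Return True if the image qualifies as a target image to be evaluated."""
    gray = pixel_brightness(image_rgb)          # sketch given earlier
    bright = gray[gray > brightness_threshold]  # bright-area pixels
    dark = gray[gray <= brightness_threshold]   # dark-area pixels
    if bright.size == 0 or dark.size == 0:
        return False                            # no bright/dark split to measure
    light_ratio = bright.mean() / max(dark.mean(), 1e-6)
    return light_ratio > light_ratio_threshold
```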
S220, dividing the target image into a first area and a second area according to the pixel brightness value of the target image.
For example, an area with higher brightness may be overexposed, in which case the brightness of the area needs to be reduced to extract richer and more accurate image information, while an area with lower brightness may be underexposed, in which case the brightness of the area needs to be increased to extract richer and more accurate image information. Therefore, after the target image is acquired, it is partitioned into regions according to its pixel brightness values, so that the image characteristic information of regions with different brightness can be extracted in a targeted manner.
As described above, the first area and the second area may refer to a bright area and a dark area in the image to be evaluated, specifically, pixel points with pixel brightness values greater than a preset threshold may be divided into one area, and pixel points with pixel brightness values less than or equal to the preset threshold may be divided into another area, so as to obtain the first area and the second area.
In one embodiment, step S220 of dividing the target image into a first region and a second region according to the pixel brightness value of the target image includes: acquiring the gray value corresponding to each pixel point in the target image; determining, according to a gray division threshold and the gray value of each pixel point, first pixel points whose gray values are greater than the gray division threshold and second pixel points whose gray values are not greater than the gray division threshold; determining the region where the first pixel points are located as the first region; and determining the region where the second pixel points are located as the second region.
The pixel brightness value can be measured by the gray value: specifically, the target image can be converted into a corresponding gray image, the pixel brightness value of each pixel point in the target image can be determined according to the gray value of the corresponding pixel point in the gray image, and the gray value of a pixel point can be calculated from the values of its R, G, and B channels. After the gray value of each pixel point of the target image is obtained, the pixel points whose gray values are greater than the gray division threshold are divided into one area, and the pixel points whose gray values are less than or equal to the gray division threshold are divided into another area, to obtain the first area and the second area.
The gray division threshold can be set according to the actual situation. For example, with pixel gray values ranging from 0 to 255, the gray division threshold may be set to 155. After the gray value of each pixel point of the target image is acquired, the pixel points with gray values in [0, 155] may be divided into one region, the region of lower brightness, i.e., the dark area, and the pixel points with gray values in (155, 255] may be divided into another region, the region of higher brightness, i.e., the bright area.
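Expressed as a sketch with boolean masks (155 being only the example threshold from the text):

```python
import numpy as np

def split_bright_dark(gray: np.ndarray, gray_threshold: int = 155):
    """Divide a gray image into first (bright) and second (dark) region masks."""
    first_mask = gray > gray_threshold  # gray values in (155, 255]: bright area
    second_mask = ~first_mask           # gray values in [0, 155]: dark area
    return first_mask, second_mask
```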
S230, extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information.
The image feature information, comprising the first image feature information and the second image feature information, refers to information extracted from the corresponding region and used to represent image features, such as color features, texture features, and gray scale (brightness) features.
Specifically, the image feature information includes at least gray feature information, image detail information, and color feature information. The gray feature information may be an entropy value of the gray information and reflects the extent of the dynamic range of the image gray levels. The color feature information may be the color saturation and color cast of the pixel points and reflects whether the image colors are vivid, realistic, and so on. The image detail information may be the contrast between pixel points, or an SFR (Spatial Frequency Response) value or MTF (Modulation Transfer Function) value, and reflects the sharpness of the image.
Wherein after dividing the target image into two regions, the server extracts first image feature information from the first region and second image feature information from the second region. Specifically, gray scale feature information, image detail information and color feature information corresponding to a first region are extracted from the first region and determined as first image feature information, and gray scale feature information, image detail information and color feature information corresponding to a second region are extracted from a second region and determined as second image feature information.
Further, as mentioned above, an over-bright region with larger brightness values tends to lose image information, and the brightness of such a region needs to be reduced to extract richer and more accurate image information; likewise, an over-dark region with smaller brightness values tends to lose image information, and the brightness of such a region needs to be increased to extract richer and more accurate image information. Thus, in one embodiment, as shown in fig. 3, step S230 of extracting the first image feature information and the second image feature information from the first region and the second region, respectively, includes:
S310, obtaining a maximum value and a minimum value of pixel brightness in the target image.
S320, obtaining a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value.
S330, carrying out normalization processing on the first image information of the first area based on the first normalization parameter, and determining the first image feature information according to the normalized first image information.
S340, carrying out normalization processing on the second image information of the second area based on the second normalization parameter, and determining the second image feature information according to the normalized second image information.
Taking the first area as the bright area and the second area as the dark area as an example, the first normalization parameter is the bright normalization parameter corresponding to the bright area and the second normalization parameter is the dark normalization parameter. Specifically, the server obtains the maximum value and the minimum value of the pixel brightness in the target image, that is, the pixel brightness maximum value R_h and the pixel brightness minimum value R_l. The ratio of the pixel brightness maximum value to the pixel brightness minimum value gives the bright normalization parameter M_h corresponding to the bright area (that is, M_h = R_h / R_l), and the ratio of the pixel brightness minimum value to the pixel brightness maximum value gives the dark normalization parameter M_l corresponding to the dark area (that is, M_l = R_l / R_h). The image information of the bright area is then obtained from the bright area and normalized with the bright normalization parameter M_h to uniformly reduce the brightness of the bright area, which reduces the influence of an over-bright picture on the extraction of image feature information, and the corresponding image feature information is extracted from the brightness-adjusted bright area. Likewise, the image information of the dark area is obtained from the dark area and normalized with the dark normalization parameter M_l to uniformly raise the brightness of the dark area, and the corresponding image feature information is extracted from the brightness-adjusted dark area.
It can be understood that the first area may be a dark area, the corresponding second area is a bright area, the first normalization parameter is a dark normalization parameter corresponding to the dark area, and the second normalization parameter is a bright normalization parameter.
The brightness of the first area is unified with the first normalization parameter corresponding to the first area, and the brightness of the second area with the second normalization parameter of the second area, which reduces the influence of an over-bright or over-dark picture on the extraction of image feature information and places the first area and the second area under the same light ratio. The gray entropy value, image detail information, and color feature information are then calculated under this same light ratio, so that richer and more accurate image feature information is extracted.
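A minimal sketch of steps S310 to S340 applied to the gray information, using the definitions M_h = R_h / R_l and M_l = R_l / R_h from the bright/dark example above (the helper name is hypothetical):

```python
import numpy as np

def normalize_region_gray(gray: np.ndarray,
                          first_mask: np.ndarray,
                          second_mask: np.ndarray):
    """Scale the gray values of the first (bright) and second (dark) regions."""
    r_h = float(gray.max())             # pixel brightness maximum value R_h
    r_l = max(float(gray.min()), 1e-6)  # pixel brightness minimum value R_l
    m_h = r_h / r_l                     # first normalization parameter M_h
    m_l = r_l / r_h                     # second normalization parameter M_l
    first_norm = m_h * gray[first_mask]    # formula (1) below: M_h x C(K_1)
    second_norm = m_l * gray[second_mask]  # formula (2) below: M_l x C(K_2)
    return first_norm, second_norm
```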
In one embodiment, the first image information includes first gray information of each pixel point in the first area, the second image information includes second gray information of each pixel point in the second area, and the gray feature information includes gray entropy values. Normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information according to the normalized first image information includes: normalizing the first gray information of each pixel point in the first area based on the first normalization parameter, and calculating the entropy value of the normalized first gray information to obtain a first gray entropy value. Normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information according to the normalized second image information includes: normalizing the second gray information of each pixel point in the second area based on the second normalization parameter, and calculating the entropy value of the normalized second gray information to obtain a second gray entropy value.
Specifically, the gray value normalization of the first region can be expressed by the following formula (1), and the gray value normalization of the second region can be expressed by the following formula (2):
hist_gs1(K_1) = M_h × C(K_1)    (1)
hist_gs2(K_2) = M_l × C(K_2)    (2)
where hist_gs1(K_1) represents the normalized gray information of each pixel point in the first region, hist_gs2(K_2) represents the normalized gray information of each pixel point in the second region, M_h represents the normalization parameter of the first region, M_l represents the normalization parameter of the second region, K_1 represents the pixel brightness value of each pixel point in the first region, K_2 represents the pixel brightness value of each pixel point in the second region, and C(·) represents the conversion of a pixel brightness value into a gray value; that is, C(K_1) is the gray value of a pixel point in the first region and C(K_2) is the gray value of a pixel point in the second region.
After the normalized gray information of the first region is obtained, the gray entropy value corresponding to the first region may be calculated by formula (3); after the normalized gray information of the second region is obtained, the gray entropy value corresponding to the second region may be calculated by formula (4).
The first area and the second area are normalized separately: the brightness of the second area is uniformly raised with the second normalization parameter M_l, and the brightness of the first area is uniformly reduced with the first normalization parameter M_h, so that the first area and the second area are under the same light ratio. The gray entropy values are then calculated under this same light ratio, and an accurate dynamic range entropy value is extracted.
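Formulas (3) and (4) are not reproduced in the text; the sketch below assumes the gray entropy is the Shannon entropy of the normalized gray information's distribution, a standard reading of a gray entropy value:

```python
import numpy as np

def gray_entropy(normalized_gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of a region's normalized gray values (assumed form)."""
    hist, _ = np.histogram(normalized_gray, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]  # empty bins contribute nothing (0 * log 0 taken as 0)
    return float(-(p * np.log2(p)).sum())
```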
In one embodiment, the first image information may further include first image detail information, and the second image information may further include second image detail information. Normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information according to the normalized first image information specifically includes: obtaining the image detail information of the first area from the first area and normalizing it based on the first normalization parameter, to obtain the image feature information corresponding to the image detail information. Similarly, normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information according to the normalized second image information specifically includes: obtaining the image detail information from the second area and normalizing it based on the second normalization parameter, to obtain the image feature information of the second area.
The brightness of the first area is unified with the first normalization parameter corresponding to the first area, and that of the second area with the second normalization parameter of the second area, so that the first area and the second area are under the same light ratio. Image details such as texture information and contrast are then calculated under this same light ratio, and rich and accurate image detail information is extracted.
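The application leaves the concrete detail metric open (inter-pixel contrast, an SFR value, or an MTF value); purely as a hypothetical stand-in for such a metric, the sketch below uses the RMS contrast of a region's gray values, which would then be scaled by the corresponding normalization parameter:

```python
import numpy as np

def detail_information(region_gray: np.ndarray) -> float:
    """RMS contrast of a region, a stand-in for the image detail metric."""
    mean = region_gray.mean()
    return float(np.sqrt(((region_gray - mean) ** 2).mean()))
```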
In one embodiment, the first image information may further include first color information, and the second image information may further include second color information. Normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information according to the normalized first image information specifically includes: obtaining the color information from the first area and normalizing it based on the first normalization parameter, to obtain the first image feature information corresponding to the color information. Normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information according to the normalized second image information specifically includes: obtaining the color information from the second area and normalizing it based on the second normalization parameter, to obtain the second image feature information corresponding to the color information.
Specifically, obtaining the color information from the first area may be implemented by converting the RGB-format image data corresponding to the first area into HSV-format image data using a color model conversion algorithm, and then extracting the different colors according to the converted HSV values. The step of acquiring the color information from the second area is the same, except that the data processing object differs. The first area and the second area, which have different brightness, are thus processed separately: the brightness of the first area is unified with the first normalization parameter corresponding to the first area and that of the second area with the second normalization parameter of the second area, so that the two areas are under the same light ratio; image color information such as saturation is then calculated under this same light ratio, and rich and accurate color feature information is extracted.
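As one concrete reading of the color feature extraction, the sketch below computes the HSV saturation of each pixel directly from its RGB values (S = (max - min) / max) and averages it over the region; using the mean saturation here is an assumption:

```python
import numpy as np

def mean_saturation(region_rgb: np.ndarray) -> float:
    """Mean HSV saturation of a region given as an N x 3 array of RGB values."""
    rgb = region_rgb.astype(np.float64)
    c_max = rgb.max(axis=-1)
    c_min = rgb.min(axis=-1)
    # Standard RGB -> HSV saturation; defined as 0 where the pixel is black.
    sat = np.where(c_max > 0, (c_max - c_min) / np.where(c_max > 0, c_max, 1.0), 0.0)
    return float(sat.mean())
```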
S240, determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
After the first image feature information and the second image feature information are obtained, whether the first image feature information and the second image feature information accord with the preset image quality parameter condition or not can be judged.
Specifically, the first image feature information includes at least the gray feature information, image detail information, and color feature information of the first region, and the second image feature information includes at least the gray feature information, image detail information, and color feature information of the second region. The gray feature information may be the gray entropy value of the corresponding region. If the first gray entropy value of the first region and the second gray entropy value of the second region are both greater than a preset entropy threshold, both are valid entropy values that meet the preset image quality parameter condition; if either the first gray entropy value of the first region or the second gray entropy value of the second region is not greater than the preset entropy threshold, the gray entropy value below the threshold is an invalid entropy value that does not meet the preset image quality parameter condition.
Similarly, a corresponding information threshold is set for the image detail information: if the image detail information of the first area and the image detail information of the second area are both greater than the preset information threshold, both are valid values that meet the preset image quality parameter condition; if either is not greater than the preset information threshold, the image detail information below the threshold is invalid and does not meet the preset image quality parameter condition. A corresponding information threshold is likewise set for the color feature information: if the color feature information of the first area and the color feature information of the second area are both greater than the preset information threshold, both are valid values that meet the preset image quality parameter condition; if either is not greater than the preset information threshold, the color feature information below the threshold is invalid information and does not meet the preset image quality parameter condition.
If either the first image feature information or the second image feature information does not meet the preset image quality parameter condition, the evaluation result of the target image is determined as a non-high dynamic range image. It can be understood that when any one of the gray feature information, image detail information, or color feature information of the first area and the second area does not meet the preset image quality parameter condition, the target image is a non-high dynamic range image. In this case, the reason the image light ratio of the target image is relatively large may be that a foreign object blocked the shooting lens when the photographing device captured the target image, making a local area too dark, or that the shooting lens directly captured a light source in the scene, making a local area too bright. Further, if the evaluation result of the target image is a non-high dynamic range image, abnormality information may be output to the terminal according to the evaluation result, so that the user knows that the captured image is abnormal.
If the first image feature information and the second image feature information both meet the preset image quality parameter conditions, the evaluation result of the target image is determined as a high dynamic range image, the weight corresponding to each image quality parameter is acquired, and the dynamic range quality value of the target image is acquired according to each image quality parameter and its corresponding weight. It can be understood that when the gray information, contrast information, and color information of the first area and the second area all meet the preset image quality parameter conditions, the target image is a high dynamic range image; the weights of the first image feature information and the second image feature information can be set according to the display requirements of the dark area and the bright area, and the dynamic range quality value of the target image is obtained by weighted calculation. Specifically, the dynamic range quality value of the target image can be calculated by the following formulas (5), (6), and (7):
F = F_1 + F_2    (5)
F_1 = A_1 × Entropy_gs1 + A_2 × T_1 + A_3 × C_1    (6)
F_2 = B_1 × Entropy_gs2 + B_2 × T_2 + B_3 × C_2    (7)
where A = A_1 + A_2 + A_3, B = B_1 + B_2 + B_3, and A + B = 1.
Here F represents the dynamic range quality value, F_1 represents the dynamic range quality value of the first region, F_2 represents the dynamic range quality value of the second region, Entropy_gs1 represents the first gray entropy value of the first region, Entropy_gs2 represents the second gray entropy value of the second region, T_1 represents the image detail information of the first region, T_2 represents the image detail information of the second region, C_1 represents the color feature information of the first region, and C_2 represents the color feature information of the second region.
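A minimal sketch of formulas (5) to (7); the six weights are hypothetical example values chosen only so that A + B = 1:

```python
def dynamic_range_quality(entropy_1: float, t_1: float, c_1: float,
                          entropy_2: float, t_2: float, c_2: float) -> float:
    """F = F_1 + F_2 with per-parameter weights, per formulas (5) to (7)."""
    a_1, a_2, a_3 = 0.25, 0.15, 0.10  # first-region weights, A = 0.5 (example)
    b_1, b_2, b_3 = 0.25, 0.15, 0.10  # second-region weights, B = 0.5 (example)
    f_1 = a_1 * entropy_1 + a_2 * t_1 + a_3 * c_1  # formula (6)
    f_2 = b_1 * entropy_2 + b_2 * t_2 + b_3 * c_2  # formula (7)
    return f_1 + f_2                               # formula (5)
```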
The image processing method above acquires a target image to be evaluated, divides the target image into a first area and a second area according to its pixel brightness values, extracts first image feature information and second image feature information from the first area and the second area respectively (where the image feature information includes at least gray feature information, image detail information, and color feature information), and finally determines the evaluation result of the target image according to the first image feature information and the second image feature information. By segmenting the image according to the pixel brightness information in the target image and then extracting the image feature information separately for areas of different brightness, the influence of in-image brightness on the image information of the target image is reduced, the ability of the image feature information to represent the information in the target image is improved, and the accuracy of the subsequent evaluation of the target image is improved.
Referring to fig. 4, the image processing method provided by the present application is further described below with reference to fig. 4. As shown in fig. 4, the image processing method includes:
Step 1, segmenting the bright and dark regions: acquiring an image to be evaluated, and segmenting the image to be evaluated into a bright region and a dark region according to the pixel brightness value of the image to be evaluated.
Step 2, judging whether the image light ratio reaches the determination threshold T: acquiring the brightness average value L_h of the bright region and the brightness average value L_l of the dark region, and judging whether their ratio (L_h / L_l) reaches the determination threshold T for a high dynamic picture. If L_h / L_l ≥ T, the image to be evaluated is a high dynamic range image, and the next step is executed to enter the high dynamic range image quality testing procedure; if L_h / L_l < T, the image to be evaluated is not a high dynamic range image, and a conventional image quality testing procedure is entered instead.
It can be understood that when the image light ratio of the image to be evaluated is smaller than the determination threshold T, the image is unlikely to be a high dynamic range image: the image is unlikely to be over-bright or over-dark, and the brightness of the whole picture is relatively even, so there is no need to perform image processing in targeted partitions.
Step 3, calculating the light ratio normalization factor M, which includes a highlight normalization factor M_h corresponding to the bright region and a lowlight normalization factor M_l corresponding to the dark region.
Specifically, the maximum value and the minimum value of the pixel brightness in the target image are obtained as the pixel brightness maximum value R_h and the pixel brightness minimum value R_l. The ratio of the pixel brightness maximum value to the pixel brightness minimum value gives the bright normalization parameter M_h corresponding to the bright region (that is, M_h = R_h / R_l), and the ratio of the pixel brightness minimum value to the pixel brightness maximum value gives the dark normalization parameter M_l corresponding to the dark region (that is, M_l = R_l / R_h).
Step 4, gray value normalization: the gray value normalization of the bright region can be realized by the following formula (8), and the gray value normalization of the dark region by the following formula (9):
hist_gs1(K_1) = M_h × C(K_1)    (8)
hist_gs2(K_2) = M_l × C(K_2)    (9)
where hist_gs1(K_1) represents the normalized gray information of each pixel point in the bright region, hist_gs2(K_2) represents the normalized gray information of each pixel point in the dark region, M_h represents the normalization parameter of the bright region, M_l represents the normalization parameter of the dark region, K_1 represents the pixel brightness value of each pixel point in the bright region, K_2 represents the pixel brightness value of each pixel point in the dark region, C(K_1) represents the gray value obtained by converting the pixel brightness value of a pixel point in the bright region, and C(K_2) represents the gray value obtained by converting the pixel brightness value of a pixel point in the dark region.
Step 5, calculating the entropy values of the bright and dark regions: after the normalized gray information of the bright region is obtained, the bright gray entropy value corresponding to the bright region is calculated according to formula (10); after the normalized gray information of the dark region is obtained, the dark gray entropy value corresponding to the dark region is calculated according to formula (11).
Step 6, judging the validity of the entropy values of the bright and dark regions: if the bright gray entropy value of the bright region and the dark gray entropy value of the dark region are both greater than a preset entropy threshold, both are valid entropy values that meet the preset image quality parameter condition; the bright and dark gray entropy values are stored, and step 7 and step 8 are executed. If either the bright gray entropy value or the dark gray entropy value is not greater than the preset entropy threshold, the gray entropy value below the threshold is an invalid entropy value that does not meet the preset image quality parameter condition; in this case the dynamic range value of the input image to be evaluated is invalid, the input image needs to be checked again, and image abnormality information can be output.
Step 7, extracting the image detail information of the bright and dark regions: extracting image detail information t_1 from the bright region and image detail information t_2 from the dark region; normalizing t_1 with the bright normalization factor (which can be realized by the following formula (12)) to obtain the image detail information of the bright region, and normalizing t_2 with the dark normalization factor (which can be realized by the following formula (13)) to obtain the image detail information of the dark region:
T_1 = M_h × f_1(t_1)    (12)
T_2 = M_l × f_1(t_2)    (13)
where T_1 is the normalized image detail information of the bright region, T_2 is the normalized image detail information of the dark region, f_1(·) represents the process of extracting image detail information from image information, f_1(t_1) represents the image detail information extracted from the bright region, and f_1(t_2) represents the image detail information extracted from the dark region.
Step 8, extracting the color feature information of the bright and dark regions: extracting image color information c_1 from the bright region and image color information c_2 from the dark region; normalizing c_1 with the bright normalization factor (which can be realized by the following formula (14)) to obtain the color feature information of the bright region, and normalizing c_2 with the dark normalization factor (which can be realized by the following formula (15)) to obtain the color feature information of the dark region:
C_1 = M_h × f_2(c_1)    (14)
C_2 = M_l × f_2(c_2)    (15)
where C_1 is the normalized color feature information of the bright region, C_2 is the normalized color feature information of the dark region, f_2(·) represents the process of extracting color feature information from image information, f_2(c_1) represents the color information extracted from the bright region, and f_2(c_2) represents the color information extracted from the dark region.
Step 9, judging the validity of the image detail information and the color feature information: if the image detail information and color feature information of the bright region and the image detail information and color feature information of the dark region are all greater than the preset information threshold, they are all valid values that meet the preset image quality parameter conditions; the image detail information and color feature information of the bright region and of the dark region are stored, and step 10 is executed. If any one of the image detail information and color feature information of the bright region or of the dark region is not greater than the preset information threshold, the image detail information or color feature information below the threshold is invalid information that does not meet the preset image quality parameter conditions; in this case the dynamic range value of the input image to be evaluated is invalid, the input image needs to be checked again, and image abnormality information can be output.
Step 10, calculating the high dynamic range quality value: the bright gray entropy value, image detail information, and color feature information of the bright region and the dark gray entropy value, image detail information, and color feature information of the dark region are weighted and summed to obtain the high dynamic range quality value, which can be realized by the following formulas (16) to (18):
F = F_1 + F_2    (16)
F_1 = A_1 × Entropy_gs1 + A_2 × T_1 + A_3 × C_1    (17)
F_2 = B_1 × Entropy_gs2 + B_2 × T_2 + B_3 × C_2    (18)
where A = A_1 + A_2 + A_3, B = B_1 + B_2 + B_3, and A + B = 1.
Here F represents the dynamic range quality value, F_1 represents the dynamic range quality value of the bright region, F_2 represents the dynamic range quality value of the dark region, Entropy_gs1 represents the bright gray entropy value of the bright region, Entropy_gs2 represents the dark gray entropy value of the dark region, T_1 represents the image detail information of the bright region, T_2 represents the image detail information of the dark region, C_1 represents the color feature information of the bright region, and C_2 represents the color feature information of the dark region.
Step 11, outputting the image evaluation result.
In the above image processing scheme, the image quality influence brought by different light ratios of the image to be evaluated is considered. The image light ratio of the bright and dark regions of the image to be evaluated is extracted to obtain the brightness ratio of the bright region to the dark region; after brightness normalization is performed on the image data of the bright region and of the dark region, the dynamic range entropy values (that is, the gray entropy values) of the bright and dark regions are calculated separately, and image quality parameters such as the image detail information and color feature information under the same light ratio are obtained. A dynamic range image quality evaluation result is then obtained based on the dynamic range entropy values, the image detail information, and the color feature information. Compared with existing dynamic range image quality evaluation methods, the influence of changes in the device light ratio is effectively reduced, the dynamic range image quality level is truly reflected, and the evaluation accuracy is improved.
In order to better implement the image processing method provided by the embodiment of the present application, on the basis of the image processing method provided by the embodiment of the present application, an image processing apparatus is further provided in the embodiment of the present application, as shown in fig. 5, where the image processing apparatus 500 includes:
a target image acquisition module 510, configured to acquire a target image to be evaluated;
an image region segmentation module 520, configured to segment the target image into a first region and a second region according to the pixel brightness value of the target image;
a feature information extraction module 530, configured to extract first image feature information and second image feature information from the first region and the second region, respectively, where the image feature information includes at least gray feature information, image detail information, and color feature information;
and an image information acquisition module 540, configured to determine an evaluation result of the target image according to the first image feature information and the second image feature information.
In some embodiments of the present application, the target image acquisition module 510 is further configured to: acquire an image to be evaluated; divide the image to be evaluated into a first area and a second area according to the pixel brightness value of the image to be evaluated; acquire a first area brightness average value of the first area and a second area brightness average value of the second area; obtain the image light ratio of the image to be evaluated according to the ratio of the second area brightness average value to the first area brightness average value; and determine the image to be evaluated as the target image if the image light ratio is greater than a preset light ratio threshold.
In some embodiments of the present application, the image region segmentation module 520 is further configured to obtain a gray value corresponding to each pixel in the target image, determine, according to the gray segmentation threshold and the gray value of each pixel, a first pixel having a gray value greater than the gray segmentation threshold and a second pixel having a gray value not greater than the gray segmentation threshold, determine a region where the first pixel is located as a first region, and determine a region where the second pixel is located as a second region.
In some embodiments of the present application, the feature information extraction module 530 is further configured to obtain a maximum pixel brightness value and a minimum pixel brightness value in the target image, obtain a first normalization parameter of the first area and a second normalization parameter of the second area according to a ratio between the maximum pixel brightness value and the minimum pixel brightness value, normalize the first image information of the first area based on the first normalization parameter, determine the first image feature information according to the normalized first image information, normalize the second image information of the second area based on the second normalization parameter, and determine the second image feature information according to the normalized second image information.
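The embodiments derive the two normalization parameters from the ratio between the maximum and minimum pixel brightness values but do not spell out the mapping. The sketch below is therefore only one plausible reading, in which each parameter acts as a multiplicative gain that lifts the dark region onto the same brightness scale as the bright region; it should not be taken as the exact formula of the embodiments:

```python
import numpy as np

def normalization_parameters(gray: np.ndarray):
    """One plausible reading of this step: derive a gain for each region from
    the ratio between the maximum and minimum pixel brightness, so that after
    scaling the two regions are compared under the same light ratio. The exact
    formula is an assumption."""
    l_max = float(gray.max())
    l_min = float(max(gray.min(), 1))   # guard against a zero minimum
    ratio = l_max / l_min
    first_param = 1.0 / l_max           # bright region: scale down by the maximum
    second_param = ratio / l_max        # dark region: lift by the light ratio
    return first_param, second_param

def normalize_region(values: np.ndarray, param: float) -> np.ndarray:
    """Apply a normalization parameter as a multiplicative gain. The scaled
    values need not lie in [0, 1]; the entropy step below histograms each
    region over its own value range."""
    return values.astype(np.float64) * param
```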
In some embodiments of the present application, the first image information includes first gray scale information, the second image information includes second gray scale information, the first image feature information includes a first gray scale entropy value, and the second image feature information includes a second gray scale entropy value; the feature information extraction module 530 is further configured to normalize the first gray scale information based on the first normalization parameter and calculate an entropy value of the normalized first gray scale information to obtain the first gray scale entropy value, and to normalize the second gray scale information based on the second normalization parameter and calculate an entropy value of the normalized second gray scale information to obtain the second gray scale entropy value.
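For the entropy computation itself, a standard Shannon entropy over the histogram of the normalized gray values can serve as a sketch; the bin count is an assumption, not a value given by the embodiments:

```python
import numpy as np

def gray_entropy(normalized: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of the gray-level distribution of a region,
    computed from a histogram over the region's own value range."""
    if normalized.size == 0:
        return 0.0
    hist, _ = np.histogram(normalized, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                          # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())
```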
In some embodiments of the present application, the image information obtaining module 540 is further configured to, when both the first image feature information and the second image feature information meet a preset image quality parameter condition, determine the evaluation result of the target image to be a high dynamic range image, obtain a weight corresponding to each image quality parameter, and obtain a dynamic range quality value of the target image according to each image quality parameter and its corresponding weight.
In some embodiments of the present application, the image information obtaining module 540 is further configured to determine the evaluation result of the target image to be a non-high dynamic range image when either the first image feature information or the second image feature information does not meet the preset image quality parameter condition.
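Combining the two embodiments above, the evaluation logic can be sketched as follows. The particular parameter conditions, weights, and the averaging of the two regions' values are placeholders, since the embodiments leave them to be configured:

```python
def evaluate_target(first_features: dict, second_features: dict,
                    conditions: dict, weights: dict):
    """If every image quality parameter in both regions meets its preset
    condition, the target image is judged a high dynamic range image and a
    weighted dynamic range quality value is returned; otherwise the image is
    judged non-high-dynamic-range. The condition/weight structure is assumed."""
    for name, minimum in conditions.items():
        if first_features.get(name, 0.0) < minimum or \
           second_features.get(name, 0.0) < minimum:
            return "non-HDR", None
    # Weighted sum over the image quality parameters, averaging the two regions.
    quality = sum(weights[name] * 0.5 * (first_features[name] + second_features[name])
                  for name in weights)
    return "HDR", quality

# Hypothetical usage with placeholder figures:
first = {"gray_entropy": 6.8, "detail": 0.72, "color": 0.81}
second = {"gray_entropy": 6.1, "detail": 0.65, "color": 0.77}
result, score = evaluate_target(
    first, second,
    conditions={"gray_entropy": 5.0, "detail": 0.5, "color": 0.5},
    weights={"gray_entropy": 0.5, "detail": 0.3, "color": 0.2})
```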
For specific limitations on the image processing apparatus, reference may be made to the limitations on the image processing method above, which are not repeated here. The respective modules in the above-described image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In some embodiments of the present application, the image processing apparatus 500 may be implemented in the form of a computer program that is executable on a computer device as shown in FIG. 6. The memory of the computer device may store the various program modules constituting the image processing apparatus 500, such as the target image acquisition module 510, the image region segmentation module 520, the feature information extraction module 530, and the image information obtaining module 540 shown in FIG. 5. The computer program constituted by these program modules causes the processor to execute the steps of the image processing method of the embodiments of the present application described in this specification.
For example, the computer device shown in FIG. 6 may perform step S210 through the target image acquisition module 510 in the image processing apparatus 500 shown in FIG. 5, perform step S220 through the image region segmentation module 520, perform step S230 through the feature information extraction module 530, and perform step S240 through the image information obtaining module 540. The computer device includes a processor, a memory, and a network interface connected by a system bus, where the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external computer device through a network connection. The computer program, when executed by the processor, implements an image processing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments of the present application, a computer device is provided, including one or more processors, a memory, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps of the image processing methods of the respective embodiments described above.
In some embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored; the computer program, when loaded and executed by a processor, causes the processor to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps of the image processing methods of the respective embodiments described above.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include Random Access Memory (RAM) or an external cache memory. By way of illustration, and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing describes in detail an image processing method, apparatus, computer device, and storage medium according to embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to aid in understanding the method and core concept of the present application. Meanwhile, those skilled in the art may make variations to the specific implementations and the application scope according to the concept of the present application. In summary, the content of this specification should not be construed as limiting the present application.