
CN114429476B - Image processing method, device, computer equipment and storage medium - Google Patents

Image processing method, device, computer equipment and storage medium

Info

Publication number
CN114429476B
CN114429476B (application number CN202210090096.XA)
Authority
CN
China
Prior art keywords
image
information
area
value
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210090096.XA
Other languages
Chinese (zh)
Other versions
CN114429476A (en)
Inventor
王鹏
张永兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd filed Critical Huizhou TCL Mobile Communication Co Ltd
Priority to CN202210090096.XA priority Critical patent/CN114429476B/en
Publication of CN114429476A publication Critical patent/CN114429476A/en
Application granted granted Critical
Publication of CN114429476B publication Critical patent/CN114429476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The present application provides an image processing method, device, computer equipment and storage medium. The method obtains a target image to be evaluated, and divides the target image into a first area and a second area according to the pixel brightness value of the target image; then extracts first image feature information and second image feature information from the first area and the second area respectively, wherein the image feature information at least includes grayscale feature information, image detail information and color feature information; finally, the evaluation result of the target image is determined according to the first image feature information and the second image feature information. The image is segmented according to the pixel brightness information in the target image, and then the image feature information is extracted separately for the areas with different brightness in the target image, so as to reduce the influence of the brightness in the image on the acquisition of the image feature information in the target image, and improve the representation ability of the image feature information on the information in the target image, so as to improve the accuracy of the subsequent evaluation of the target image.

Description

Image processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular, to an image processing method, an image processing device, a computer device, and a computer readable storage medium (hereinafter simply referred to as a storage medium).
Background
In the process of capturing an image, an image capturing apparatus (e.g., a digital camera) often performs photometry on the shooting environment, and then selectively adjusts the brightness of different areas in the captured image according to the ambient light ratio information obtained from the photometry result, so as to obtain a high dynamic range image. However, such brightness adjustment often degrades image quality, for example by reducing definition or shifting color, so the brightness-adjusted high dynamic range image needs to be evaluated, and the brightness adjustment process is then fine-tuned according to the evaluation result to improve image quality such as image definition. Existing evaluation methods, however, usually evaluate the image directly according to its various image quality parameters, and their accuracy is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, computer device, and storage medium that improve the accuracy of evaluating a target image.
In a first aspect, the present application provides an image processing method, the method comprising:
acquiring a target image to be evaluated;
Dividing the target image into a first area and a second area according to the pixel brightness value of the target image;
extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information;
and determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
In some embodiments of the present application, the acquiring a target image to be evaluated includes:
acquiring an image to be evaluated, and dividing the image to be evaluated into a first area and a second area according to the pixel brightness value of the image to be evaluated;
Acquiring a first area brightness average value of the first area and a second area brightness average value of the second area;
Obtaining the image light ratio of the image to be evaluated according to the ratio of the brightness average value of the second area to the brightness average value of the first area;
and if the image light ratio is larger than a preset light ratio threshold value, determining the image to be evaluated as a target image.
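The screening steps above can be sketched as follows. This is a minimal sketch under stated assumptions: the split threshold, the light ratio threshold, and the mapping of the brighter region to one of the two areas are illustrative choices, since the embodiment leaves them configurable.

```python
import numpy as np

def is_hdr_candidate(gray, split_threshold=155, light_ratio_threshold=2.0):
    """Screen an image to be evaluated by its image light ratio.

    gray: 2-D array of per-pixel brightness (gray) values in [0, 255].
    Both threshold values are illustrative assumptions, not values from
    the text.
    """
    bright = gray[gray > split_threshold]    # brighter of the two regions
    dark = gray[gray <= split_threshold]     # darker of the two regions
    if bright.size == 0 or dark.size == 0:
        return False                         # no bright/dark split at all
    # Image light ratio: ratio of the two region brightness averages.
    light_ratio = bright.mean() / max(dark.mean(), 1e-6)
    return light_ratio > light_ratio_threshold
```

An image whose light ratio stays below the threshold is screened out before the more expensive per-region feature extraction runs.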
In some embodiments of the present application, the dividing the target image into a first region and a second region according to the pixel brightness value of the target image includes:
Acquiring gray values corresponding to all pixel points in the target image;
According to the gray level dividing threshold value and the gray level value of each pixel point, determining a first pixel point with the gray level value larger than the gray level dividing threshold value and a second pixel point with the gray level value not larger than the gray level dividing threshold value;
And determining the area where the first pixel point is located as a first area, and determining the area where the second pixel point is located as a second area.
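A minimal sketch of this region split, assuming the gray values are available as a NumPy array; the helper name and the default threshold are illustrative, not part of the claims.

```python
import numpy as np

def split_by_gray(gray, threshold=155):
    """Split a gray image into first-region and second-region masks.

    First pixel points: gray value greater than the gray division threshold.
    Second pixel points: gray value not greater than the threshold.
    """
    first_mask = gray > threshold   # region where the first pixel points lie
    second_mask = ~first_mask       # region where the second pixel points lie
    return first_mask, second_mask
```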
In some embodiments of the present application, the extracting the first image feature information and the second image feature information from the first region and the second region, respectively, includes:
Acquiring a maximum value and a minimum value of pixel brightness in the target image;
Acquiring a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value;
Based on the first normalization parameters, carrying out normalization processing on the first image information of the first area, and determining first image characteristic information according to the first image information after normalization processing;
And carrying out normalization processing on the second image information of the second area based on the second normalization parameters, and determining second image characteristic information according to the second image information after normalization processing.
In some embodiments of the present application, the first image information includes first gray scale information of each pixel point in the first region, and the second image information includes second gray scale information of each pixel point in the second region;
the normalizing processing is performed on the first image information of the first area based on the first normalizing parameter, and the determining the first image feature information according to the normalized first image information includes:
Carrying out normalization processing on the first gray information of each pixel point in the first region based on the first normalization parameters, and calculating the entropy value of the normalized first gray information to obtain a first gray entropy value;
the normalizing processing is performed on the second image information of the second area based on the second normalizing parameter, and the determining the second image feature information according to the normalized second image information includes:
And carrying out normalization processing on the second gray information of each pixel point in the second region based on the second normalization parameters, and calculating the entropy value of the normalized second gray information to obtain a second gray entropy value.
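As a hedged sketch of the two entropy steps above, the fragment below normalizes a region's gray values by the given normalization parameter and computes the Shannon entropy of the resulting distribution. The 256-bin histogram is an assumption, since the embodiment does not fix how the entropy is discretized.

```python
import numpy as np

def gray_entropy(region_gray, norm_param, bins=256):
    """Normalize gray values by norm_param, then compute their entropy.

    Returns the Shannon entropy (in bits) of the histogram of the
    normalized gray values.
    """
    normalized = region_gray.astype(np.float64) * norm_param
    hist, _ = np.histogram(normalized, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                     # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```

A flat (uniform) region yields entropy 0, while a region spread across many gray levels yields a higher entropy, matching the text's use of entropy to reflect the extent of the gray dynamic range.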
In some embodiments of the present application, the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
If the first image characteristic information and the second image characteristic information both accord with the preset image quality parameter conditions, determining an evaluation result of the target image as a high dynamic range image, and acquiring weights corresponding to the image quality parameters;
And acquiring the dynamic range quality value of the target image according to each image quality parameter and the weight corresponding to each image quality parameter.
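A minimal sketch of the weighted combination; the parameter names and the weight values are placeholders, since the embodiment does not disclose which quality parameters are weighted or what their weights are.

```python
def dynamic_range_quality(quality_params, weights):
    """Combine image quality parameters into a dynamic range quality value
    as a weighted sum (one plain reading of the step above)."""
    return sum(quality_params[name] * weights[name] for name in quality_params)

# Hypothetical parameters and weights, purely for illustration.
score = dynamic_range_quality(
    {"gray_entropy": 6.5, "sharpness": 0.8, "color_saturation": 0.7},
    {"gray_entropy": 0.1, "sharpness": 0.5, "color_saturation": 0.4},
)
```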
In some embodiments of the present application, the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
And if any one of the first image characteristic information or the second image characteristic information does not accord with the preset image quality parameter condition, determining the evaluation result of the target image as a non-high dynamic range image.
In a second aspect, the present application provides an image processing apparatus comprising:
The target image acquisition module is used for acquiring a target image to be evaluated;
the image region segmentation module is used for segmenting the target image into a first region and a second region according to the pixel brightness value of the target image;
The characteristic information extraction module is used for extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information;
and the image information acquisition module is used for determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
In a third aspect, the present application also provides a computer device comprising:
One or more processors;
Memory, and
One or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the image processing method.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program to be loaded by a processor for performing the steps of the image processing method.
In a fifth aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the first aspect.
The image processing method, the image processing device, the computer equipment and the storage medium are used for obtaining the target image to be evaluated, dividing the target image into a first area and a second area according to the pixel brightness value of the target image, further extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information, and finally determining the evaluation result of the target image according to the first image characteristic information and the second image characteristic information. The image is segmented through the pixel brightness information in the target image, and then the image characteristic information is extracted independently aiming at areas with different brightness in the target image, so that the influence of the brightness in the image on the acquisition of the image characteristic information in the target image is reduced, the representation capability of the image characteristic information on the information in the target image is improved, and the accuracy of the subsequent evaluation on the target image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an image processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of an image processing method according to an embodiment of the application;
FIG. 3 is a flowchart illustrating steps for extracting first image feature information and second image feature information from a first region and a second region, respectively, according to an embodiment of the present application;
FIG. 4 is a flow chart of another image processing method according to an embodiment of the application;
fig. 5 is a schematic diagram of the structure of an image processing apparatus in the embodiment of the present application;
fig. 6 is a schematic diagram of a computer device in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the word "for example" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "for example" in this application is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In the embodiments of the present application, it should be noted that, because the image processing method provided by the present application is executed on a computer device, every object the computer device processes exists in the form of data or information; an image, for example, is essentially image data. It should therefore be understood that when brightness, gray information, color information, and the like are mentioned in subsequent embodiments, corresponding data exist for the computer device to process; this will not be repeated herein.
It should be further noted that the image processing method provided in the embodiments of the present application may be applied to an image processing system as shown in fig. 1. The image processing system includes a terminal 100 and a server 200. The terminal 100 may be a device including a camera device or a display device, and may specifically be a desktop terminal or a mobile terminal, for example one of a mobile phone, a tablet computer, a notebook computer, or the like, or may be a camera for information acquisition, storage, and transmission. The server 200 may be a stand-alone server, a server network, or a server cluster, including but not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers, where a cloud server consists of a large number of computers or web servers based on Cloud Computing.
It will be appreciated by those skilled in the art that the application environment shown in fig. 1 is merely an application scenario of the present application, and is not limited to the application scenario of the present application, and other application environments may also include more or fewer computer devices than those shown in fig. 1, for example, only 1 server 200 is shown in fig. 1, and it will be appreciated that the image processing system may also include one or more other servers, which is not limited herein. In addition, as shown in FIG. 1, the image processing system may also include a memory for storing data, such as image data.
It should be further noted that, the schematic view of the image processing system shown in fig. 1 is only an example, and the image processing system and the scene described in the embodiment of the present invention are for more clearly describing the technical solution of the embodiment of the present invention, and do not constitute a limitation on the technical solution provided by the embodiment of the present invention, and those skilled in the art can know that, with the evolution of the image processing system and the appearance of a new service scene, the technical solution provided by the embodiment of the present invention is equally applicable to similar technical problems.
Referring to fig. 2, an embodiment of the present application provides an image processing method, mainly for the server 200 in fig. 1, which includes steps S210 to S240, specifically as follows:
s210, acquiring a target image to be evaluated.
The target image is an image whose quality needs to be evaluated, for example whether it is a high dynamic range image and what its dynamic range interval is, and includes, but is not limited to, a picture, a video frame in a video, and the like.
Specifically, the target image may be an image captured by the photographing device of the terminal 100. The terminal 100 captures the image and sends it to the server 200, and the server 200 evaluates the image to obtain evaluation results such as whether it is a high dynamic range image and what its dynamic range interval is, thereby obtaining information about the image quality of the image captured by the terminal 100. Parameters and procedures in links such as image imaging, image processing, and image storage may then be adjusted according to the evaluation result to improve the image quality of the target image. Further, the shooting performance of the photographing device of the terminal 100 can be determined from the evaluation result.
Further, a high dynamic range image is an image with a larger exposure dynamic range than a normal image, that is, an image containing richer luminance information; when the light ratio of an image is small, the image is unlikely to be a high dynamic range image. Therefore, in one embodiment, step S210 of obtaining the target image to be evaluated includes: obtaining an image to be evaluated, and dividing the image to be evaluated into a first area and a second area according to its pixel brightness values; obtaining a first area brightness average value of the first area and a second area brightness average value of the second area; obtaining the image light ratio of the image to be evaluated according to the ratio of the second area brightness average value to the first area brightness average value; and determining the image to be evaluated as the target image if the image light ratio is greater than a preset light ratio threshold value.
The pixel brightness value reflects how bright a pixel point is (i.e., its luminance), and can specifically be calculated from the values of the R channel, G channel, and B channel of each pixel point. Furthermore, the pixel brightness value can also be measured by the gray value: the image to be evaluated can be converted into a corresponding gray image, and the pixel brightness value of each pixel point in the image to be evaluated can then be determined from the gray value of the corresponding pixel point in the gray image.
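For illustration, one standard way to compute a brightness value from the R, G, and B channels is the ITU-R BT.601 luma weighting; the specific weights are an assumption, as the text only says brightness is calculated from the three channel values.

```python
import numpy as np

def pixel_brightness(rgb):
    """Per-pixel brightness from an H x W x 3 RGB array (BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Green dominates perceived brightness, blue contributes least.
    return 0.299 * r + 0.587 * g + 0.114 * b
```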
The first area and the second area may refer to a bright area and a dark area in the image to be evaluated. Specifically, the first region may be a bright region and the second region may be a dark region, or the first region may be a dark region and the second region may be a bright region, and the present invention is not limited thereto. After the image to be evaluated is obtained, specifically, the pixel brightness value of the image to be evaluated can be obtained, the pixel point with the pixel brightness value larger than the preset threshold value is divided into one area, and the pixel point with the pixel brightness value smaller than or equal to the preset threshold value is divided into the other area, so that the first area and the second area are obtained.
Wherein the image light ratio refers to the difference in brightness between the highlights and shadows in the image. Specifically, the image light ratio of the image to be evaluated can be determined through the brightness difference of the first area and the second area, namely, the brightness difference of the bright area and the dark area, when the image light ratio is larger, the possibility that the image to be evaluated is a high dynamic range image is larger, and when the image light ratio is smaller, the possibility that the image to be evaluated is a high dynamic range image is smaller.
Specifically, after the image to be evaluated is obtained, it is divided into a first area and a second area according to its pixel brightness values. The average pixel brightness value of the pixel points in the first area is then obtained as the first area brightness average value, and the average pixel brightness value of the pixel points in the second area is obtained as the second area brightness average value. The possibility that the image to be evaluated is a high dynamic range image is then determined from the ratio of the second area brightness average value to the first area brightness average value (i.e., the image light ratio). When the image light ratio is not greater than the preset light ratio threshold value, the possibility that the image to be evaluated is a high dynamic range image is small, and the subsequent image processing method is not needed.
S220, dividing the target image into a first area and a second area according to the pixel brightness value of the target image.
For example, an area with higher brightness may be overexposed, in which case the brightness values of the area need to be reduced to extract richer and more accurate image information, while an area with lower brightness may be underexposed, in which case the brightness values of the area need to be increased to extract richer and more accurate image information. Therefore, after the target image is acquired, it is partitioned into regions according to its pixel brightness values, so that the image feature information of regions with different brightness can be extracted in a targeted manner.
As described above, the first area and the second area may refer to a bright area and a dark area in the image to be evaluated, specifically, pixel points with pixel brightness values greater than a preset threshold may be divided into one area, and pixel points with pixel brightness values less than or equal to the preset threshold may be divided into another area, so as to obtain the first area and the second area.
In one embodiment, step S220 of dividing the target image into a first region and a second region according to the pixel brightness values of the target image includes: obtaining the gray value corresponding to each pixel point in the target image; determining, according to a gray division threshold and the gray value of each pixel point, first pixel points whose gray values are greater than the gray division threshold and second pixel points whose gray values are not greater than the gray division threshold; and determining the region where the first pixel points are located as the first region and the region where the second pixel points are located as the second region.
The pixel brightness value can be measured through the gray value, specifically, the target image can be converted into a corresponding gray image, the pixel brightness value of the corresponding pixel point in the target image can be determined according to the gray value of each pixel point on the gray image, and the gray value of the pixel point can be calculated according to the values of the corresponding R channel, the corresponding B channel and the corresponding G channel on each pixel point. After the gray value of each pixel point of the target image is obtained, dividing the pixel point with the gray value larger than the gray dividing threshold value into one area, and dividing the pixel point with the gray value smaller than or equal to the gray dividing threshold value into another area to obtain a first area and a second area.
The gray division threshold can be set according to the actual situation. For example, with pixel gray values ranging from 0 to 255, the gray division threshold may be set to 155. After the gray value of each pixel point of the target image is acquired, the pixel points with gray values in [0, 155] may be divided into one region, which is the region of lower brightness, i.e., the dark region, and the pixel points with gray values in (155, 255] may be divided into another region, which is the region of higher brightness, i.e., the bright region.
S230, extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information.
The image feature information includes first image feature information and second image feature information, and refers to information extracted in a corresponding region and used for representing image features, such as color features, texture features, gray scale (brightness) features, and the like.
Specifically, the image feature information includes at least gray feature information, image detail information, and color feature information. The gray feature information may be the entropy of the gray information, which reflects the extent of the dynamic range of the image's gray levels. The color feature information may be the color saturation and color deviation of the pixel points, which reflect whether the image's colors are vivid, true to life, and so on. The image detail information may be the contrast between pixel points, an SFR (Spatial Frequency Response) value, or an MTF (Modulation Transfer Function) value, and reflects the definition of the image.
Wherein after dividing the target image into two regions, the server extracts first image feature information from the first region and second image feature information from the second region. Specifically, gray scale feature information, image detail information and color feature information corresponding to a first region are extracted from the first region and determined as first image feature information, and gray scale feature information, image detail information and color feature information corresponding to a second region are extracted from a second region and determined as second image feature information.
Further, as mentioned above, when a region with larger brightness values is too bright, image information tends to be lost, and the brightness values of the region need to be reduced to extract richer and more accurate image information; likewise, when a region with smaller brightness values is too dark, image information tends to be lost, and the brightness values of the region need to be increased to extract richer and more accurate image information. Thus, in one embodiment, as shown in fig. 3, step S230 of extracting the first image feature information and the second image feature information from the first region and the second region, respectively, includes:
S310, obtaining a maximum value and a minimum value of pixel brightness in the target image.
S320, obtaining a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value.
And S330, carrying out normalization processing on the first image information of the first area based on the first normalization parameter, and determining first image characteristic information according to the first image information after normalization processing.
And S340, carrying out normalization processing on the second image information of the second area based on the second normalization parameters, and determining second image characteristic information according to the second image information after normalization processing.
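The parameter computation of steps S310 and S320 can be sketched as follows. This Python example is illustrative only; the function name and the zero-luminance guard are assumptions not stated in the source:

```python
import numpy as np

def normalization_parameters(luma: np.ndarray) -> tuple:
    """Steps S310-S320: derive the bright-region parameter M_h = R_h / R_l
    and the dark-region parameter M_l = R_l / R_h from the pixel-brightness
    extremes of the target image."""
    r_h = float(luma.max())   # pixel brightness maximum R_h
    r_l = float(luma.min())   # pixel brightness minimum R_l
    r_l = max(r_l, 1.0)       # guard against division by zero (assumption)
    m_h = r_h / r_l           # first (bright) normalization parameter
    m_l = r_l / r_h           # second (dark) normalization parameter
    return m_h, m_l
```

With a target image whose brightest pixel is 200 and darkest is 10, this yields M_h = 20 and M_l = 0.05.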
Taking the first area as a bright area and the second area as a dark area, the first normalization parameter is the bright normalization parameter corresponding to the bright area and the second normalization parameter is the dark normalization parameter. Specifically, the server obtains the maximum and minimum pixel brightness values in the target image, namely the pixel brightness maximum R_h and the pixel brightness minimum R_l. The ratio of the maximum to the minimum then gives the bright normalization parameter M_h corresponding to the bright area (namely M_h = R_h/R_l), and the ratio of the minimum to the maximum gives the dark normalization parameter M_l corresponding to the dark area (namely M_l = R_l/R_h). The server further obtains the image information of the bright area, normalizes it with the bright normalization parameter M_h so as to uniformly reduce the brightness of the bright area and lessen the influence of an over-bright picture on the extraction of image feature information, and then extracts the corresponding image feature information from the brightness-adjusted bright area. Likewise, it obtains the image information of the dark area, normalizes it with the dark normalization parameter M_l so as to uniformly raise the brightness of the dark area and lessen the influence of an over-dark picture on the extraction of image feature information, and finally extracts the corresponding image feature information from the brightness-adjusted dark area.
It can be understood that the first area may be a dark area, the corresponding second area is a bright area, the first normalization parameter is a dark normalization parameter corresponding to the dark area, and the second normalization parameter is a bright normalization parameter.
The brightness of the first area is unified by the first normalization parameter corresponding to the first area, and the brightness of the second area is unified by the second normalization parameter of the second area, which reduces the influence of an over-bright or over-dark picture on the extraction of image feature information and places the first area and the second area under the same light ratio. The gray entropy value, image detail information and color feature information are then calculated under that same light ratio, so that richer and more accurate image feature information is extracted.
The first image information includes first gray information of each pixel point in the first area, the second image information includes second gray information of each pixel point in the second area, and the gray feature information includes gray entropy values. Normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information from the normalized result specifically means normalizing the first gray information of each pixel point in the first area based on the first normalization parameter, and calculating the entropy of the normalized first gray information to obtain a first gray entropy value. Likewise, normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information from the normalized result specifically means normalizing the second gray information of each pixel point in the second area based on the second normalization parameter, and calculating the entropy of the normalized second gray information to obtain a second gray entropy value.
Specifically, the gradation value normalization for the first region can be expressed by the following formula (1), and the gradation value normalization for the second region can be expressed by the following formula (2), specifically as follows:
hist_gs1(K_1) = M_h × C(K_1)   (1)
hist_gs2(K_2) = M_l × C(K_2)   (2)
Wherein hist_gs1(K_1) represents the normalized gray information of each pixel in the first region, hist_gs2(K_2) represents the normalized gray information of each pixel in the second region, M_h represents the normalization parameter of the first region, M_l represents the normalization parameter of the second region, K_1 represents the pixel brightness value of each pixel in the first region, K_2 represents the pixel brightness value of each pixel in the second region, and C(·) represents the conversion of a pixel brightness value into a gray value, i.e., C(K_1) is the gray value of a pixel in the first region and C(K_2) is the gray value of a pixel in the second region.
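A minimal sketch of formulas (1) and (2) in Python follows. Treating C(·) as the identity on an already-gray channel is an assumption of this sketch, since the source leaves the brightness-to-gray conversion unspecified:

```python
import numpy as np

def normalized_gray(region_k: np.ndarray, m: float) -> np.ndarray:
    """Formulas (1)/(2): hist_gs(K) = M × C(K). C(·) is taken here as the
    identity on an already-gray channel (assumption of this sketch)."""
    c_k = region_k.astype(float)   # C(K): gray value of each pixel
    return m * c_k                 # scale by the region's normalization parameter
```

For a bright region the parameter M_h would be passed as `m`; for a dark region, M_l.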
After the normalized gray information of the first region is obtained, the gray entropy value corresponding to the first region may be calculated by formula (3); after the normalized gray information of the second region is obtained, the gray entropy value corresponding to the second region may be calculated by formula (4):

Entropy_gs1 = -Σ hist_gs1(K_1) × log2(hist_gs1(K_1))   (3)
Entropy_gs2 = -Σ hist_gs2(K_2) × log2(hist_gs2(K_2))   (4)
The first area and the second area are normalized separately: the brightness of the second area is uniformly raised by the second normalization parameter M_l, and the brightness of the first area is uniformly reduced by the first normalization parameter M_h, so that the first area and the second area are under the same light ratio. The gray entropy values are then calculated under that same light ratio, and an accurate dynamic range entropy value is extracted.
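The gray entropy computation can be sketched as the Shannon entropy of the gray-level distribution, a common realization of the kind of gray entropy referenced above; the histogram binning and logarithm base used here are assumptions of this sketch:

```python
import numpy as np

def gray_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the gray-level distribution of a region
    (binning and log base 2 are assumptions of this sketch)."""
    hist, _ = np.histogram(gray, bins=bins)
    p = hist / hist.sum()          # probability of each gray level
    p = p[p > 0]                   # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```

A region with a single gray level has zero entropy, while a wider gray distribution (a larger dynamic range) gives a larger entropy value, matching the document's use of entropy as a dynamic range indicator.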
The first image information may further include first image detail information, and the second image information may further include second image detail information. In this case, normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information from the normalized result specifically means obtaining the image detail information of the first area and normalizing it based on the first normalization parameter to obtain the image feature information corresponding to the image detail information. Similarly, normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information from the normalized result specifically means obtaining the image detail information of the second area and normalizing it based on the second normalization parameter to obtain the image feature information of the second area.
The brightness of the first area is unified by the first normalization parameter corresponding to the first area, and the brightness of the second area is unified by the second normalization parameter of the second area, so that the first area and the second area are under the same light ratio. Image details such as texture information and contrast are then calculated under that same light ratio, and rich and accurate image detail information is extracted.
The first image information may further include first color information, and the second image information may include second color information. In this case, normalizing the first image information of the first area based on the first normalization parameter and determining the first image feature information from the normalized result specifically means obtaining the color information of the first area and normalizing it based on the first normalization parameter to obtain the first image feature information corresponding to the color information. Likewise, normalizing the second image information of the second area based on the second normalization parameter and determining the second image feature information from the normalized result specifically means obtaining the color information of the second area and normalizing it based on the second normalization parameter to obtain the second image feature information corresponding to the color information.
Specifically, obtaining the color information from the first area means converting the RGB-format image data corresponding to the first area into HSV-format image data by a color model conversion algorithm, and then extracting the different colors according to the converted HSV values. The step of obtaining color information from the second area is the same, except that the data processing object differs. The first area and the second area with different brightness are processed independently: the brightness of the first area is unified by the first normalization parameter corresponding to the first area, and the brightness of the second area is unified by the second normalization parameter of the second area, so that the two areas are under the same light ratio. Image color information such as saturation is then calculated under that same light ratio, and rich and accurate color feature information is extracted.
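The RGB-to-HSV conversion step can be sketched with the standard library; averaging the saturation channel as the color feature is one simple choice and is an assumption of this sketch, not the patent's prescribed metric:

```python
import colorsys

def mean_saturation(rgb_pixels):
    """Convert each RGB pixel to HSV and average the S channel as a
    simple color-saturation feature (an illustrative choice)."""
    total = 0.0
    for r, g, b in rgb_pixels:
        _, s, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        total += s
    return total / len(rgb_pixels)
```

A pure red pixel yields saturation 1.0, while a neutral gray pixel yields 0.0, so the average reflects how vivid a region's colors are.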
S240, determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information.
After the first image feature information and the second image feature information are obtained, whether the first image feature information and the second image feature information accord with the preset image quality parameter condition or not can be judged.
Specifically, the first image feature information includes at least the gray feature information, image detail information, and color feature information of the first region, and the second image feature information includes at least the gray feature information, image detail information, and color feature information of the second region. The gray feature information may be the gray entropy value of the corresponding region. If the first gray entropy value of the first region and the second gray entropy value of the second region are both greater than a preset entropy threshold, both are valid entropy values and meet the preset image quality parameter condition; if either the first gray entropy value of the first region or the second gray entropy value of the second region is not greater than the preset entropy threshold, the gray entropy value below the threshold is an invalid entropy value and does not meet the preset image quality parameter condition.
Similarly, a corresponding information threshold is set for the image detail information. If the image detail information of the first area and that of the second area are both greater than the preset information threshold, both are valid values and meet the preset image quality parameter condition; if either is not greater than the preset information threshold, the image detail information below the threshold is invalid and does not meet the preset image quality parameter condition. A corresponding information threshold is likewise set for the color information. If the color feature information of the first area and that of the second area are both greater than the preset information threshold, both are valid values and meet the preset image quality parameter condition; if either is not greater than the preset information threshold, the color feature information below the threshold is invalid information and does not meet the preset image quality parameter condition.
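The validity checks above can be sketched as a single predicate; the dictionary layout, key names, and threshold values are assumptions of this sketch:

```python
def meets_quality_conditions(first: dict, second: dict, thresholds: dict) -> bool:
    """Each feature (gray entropy, detail, color) of both regions must
    exceed its preset threshold to be valid; otherwise the image quality
    parameter condition is not met."""
    return all(first[k] > thresholds[k] and second[k] > thresholds[k]
               for k in ("entropy", "detail", "color"))
```

Usage: if this returns False for either region's features, the target image is evaluated as a non-high dynamic range image, as described below.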
If either the first image feature information or the second image feature information does not meet the preset image quality parameter condition, the evaluation result of the target image is determined to be a non-high dynamic range image. It can be understood that when any one of the gray feature information, image detail information or color feature information of the first area or the second area does not meet the preset image quality parameter condition, the target image is a non-high dynamic range image. In this case, the reason the light ratio of the target image is relatively large may be that, when the image capturing device captured the target image, a foreign object blocked the capturing lens so that a local area of the image is too dark, or the capturing lens directly captured a light source in the scene so that a local area of the image is too bright. Further, if the evaluation result of the target image is a non-high dynamic range image, abnormality information may be output to the terminal according to that evaluation result, so that the user knows that the photographed image is abnormal.
If the first image feature information and the second image feature information both meet the preset image quality parameter conditions, the evaluation result of the target image is determined to be a high dynamic range image; the weights corresponding to the image quality parameters are then obtained, and the dynamic range quality value of the target image is obtained according to the image quality parameters and their corresponding weights. It can be understood that when the gray feature information, image detail information and color feature information of the first area and the second area all meet the preset image quality parameter conditions, the target image is a high dynamic range image, and the weights of the first image feature information and the second image feature information can be set according to the display requirements of the dark area and the bright area, so that a weighted calculation over the first image feature information and the second image feature information yields the dynamic range quality value of the target image. Specifically, the dynamic range quality value of the target image can be calculated by the following formulas (5), (6) and (7):
F = F_1 + F_2   (5)
F_1 = A_1 × Entropy_gs1 + A_2 × T_1 + A_3 × C_1   (6)
F_2 = B_1 × Entropy_gs2 + B_2 × T_2 + B_3 × C_2   (7)
wherein A = A_1 + A_2 + A_3, B = B_1 + B_2 + B_3, and A + B = 1.
Wherein F represents the dynamic range quality value, F_1 represents the dynamic range quality value of the first region, F_2 represents the dynamic range quality value of the second region, Entropy_gs1 represents the first gray entropy value of the first region, Entropy_gs2 represents the second gray entropy value of the second region, T_1 represents the image detail information of the first region, T_2 represents the image detail information of the second region, C_1 represents the color feature information of the first region, and C_2 represents the color feature information of the second region.
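Formulas (5) to (7) amount to a weighted sum of the per-region features. The sketch below is illustrative; the weight values are assumptions chosen only to satisfy the constraint A + B = 1:

```python
def dynamic_range_quality(e1, t1, c1, e2, t2, c2,
                          a=(0.2, 0.2, 0.1), b=(0.2, 0.2, 0.1)):
    """F = F_1 + F_2, with F_1 = A_1*E_1 + A_2*T_1 + A_3*C_1 and
    F_2 = B_1*E_2 + B_2*T_2 + B_3*C_2. Default weights are illustrative
    and satisfy A + B = 1 with A = sum(a), B = sum(b)."""
    f1 = a[0] * e1 + a[1] * t1 + a[2] * c1   # first-region quality F_1
    f2 = b[0] * e2 + b[1] * t2 + b[2] * c2   # second-region quality F_2
    return f1 + f2
```

Raising the weights in `a` relative to `b` would emphasize the bright region's rendition, matching the note that weights can follow the display requirements of the dark and bright areas.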
The image processing method comprises the steps of obtaining a target image to be evaluated, dividing the target image into a first area and a second area according to pixel brightness values of the target image, further extracting first image feature information and second image feature information from the first area and the second area respectively, wherein the image feature information at least comprises gray feature information, image detail information and color feature information, and finally determining an evaluation result of the target image according to the first image feature information and the second image feature information. The image is segmented through the pixel brightness information in the target image, and then the image characteristic information is extracted from the areas with different brightness in the target image, so that the influence of the brightness in the image on the image information in the target image is reduced, the representation capability of the image characteristic information on the information in the target image is improved, and the accuracy of the subsequent evaluation on the target image is improved.
Referring to fig. 4, the image processing method provided by the present application is further described below with reference to fig. 4. As shown in fig. 4, the image processing method includes:
Step 1, segmenting bright and dark regions: an image to be evaluated is acquired, and the image to be evaluated is segmented into a bright region and a dark region according to its pixel brightness values.
Step 2, judging whether the image light ratio reaches a determination threshold T: the brightness average L_h of the bright region and the brightness average L_l of the dark region are obtained, and it is judged whether the ratio L_h/L_l reaches the determination threshold T for a high dynamic picture. If L_h/L_l ≥ T, the image to be evaluated is a high dynamic range image, and the next step is executed to enter the high dynamic range image quality testing procedure; if L_h/L_l < T, the image to be evaluated is not a high dynamic range image, and the conventional image quality testing procedure is entered.
It can be understood that when the image light ratio of the image to be evaluated is smaller than the determination threshold T, the image to be evaluated is unlikely to be a high dynamic range image: the image is unlikely to be too bright or too dark, the brightness of the whole image is relatively even, and there is no need to process the image in targeted partitions.
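The light-ratio test of step 2 can be sketched as follows; the threshold value 4.0 and the handling of an all-black dark region are assumptions of this sketch, since the patent does not fix a value for T:

```python
def is_high_dynamic_range(l_h: float, l_l: float, t: float = 4.0) -> bool:
    """Step 2: treat the image as high dynamic range when L_h / L_l >= T.
    The threshold 4.0 is illustrative only."""
    if l_l == 0:
        return True   # fully black dark region: ratio is unbounded (assumption)
    return l_h / l_l >= t
```

An image whose bright region averages 200 against a dark region averaging 20 would enter the high dynamic range testing procedure, while one with averages 100 and 90 would follow the conventional path.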
Step 3, calculating the light ratio normalization factor M, which includes a highlight normalization factor M_h corresponding to the bright region and a lowlight normalization factor M_l corresponding to the dark region.
Specifically, the maximum and minimum pixel brightness values in the target image are obtained as the pixel brightness maximum R_h and the pixel brightness minimum R_l; the ratio of the maximum to the minimum gives the bright normalization parameter M_h corresponding to the bright region (namely M_h = R_h/R_l), and the ratio of the minimum to the maximum gives the dark normalization parameter M_l corresponding to the dark region (namely M_l = R_l/R_h).
Step 4, gray value normalization: the gray values of the bright region are normalized by the following formula (8), and the gray values of the dark region are normalized by the following formula (9):
hist_gs1(K_1) = M_h × C(K_1)   (8)
hist_gs2(K_2) = M_l × C(K_2)   (9)
Wherein hist_gs1(K_1) represents the normalized gray information of each pixel in the bright region, hist_gs2(K_2) represents the normalized gray information of each pixel in the dark region, M_h represents the normalization parameter of the bright region, M_l represents the normalization parameter of the dark region, K_1 represents the pixel brightness value of each pixel in the bright region, K_2 represents the pixel brightness value of each pixel in the dark region, C(K_1) represents the conversion of the pixel brightness values of the bright region into gray values, i.e., the gray value of a pixel in the bright region, and C(K_2) represents the conversion of the pixel brightness values of the dark region into gray values, i.e., the gray value of a pixel in the dark region.
Step 5, calculating the entropy values of the bright and dark regions: after the normalized gray information of the bright region is obtained, the bright gray entropy value corresponding to the bright region is calculated by the following formula (10); after the normalized gray information of the dark region is obtained, the dark gray entropy value corresponding to the dark region is calculated by the following formula (11):

Entropy_gs1 = -Σ hist_gs1(K_1) × log2(hist_gs1(K_1))   (10)
Entropy_gs2 = -Σ hist_gs2(K_2) × log2(hist_gs2(K_2))   (11)
Step 6, judging the validity of the bright and dark region entropy values: if the bright gray entropy value of the bright region and the dark gray entropy value of the dark region are both greater than the preset entropy threshold, both are valid entropy values and meet the preset image quality parameter condition; the bright and dark gray entropy values are stored, and steps 7 and 8 are executed. If either the bright gray entropy value of the bright region or the dark gray entropy value of the dark region is not greater than the preset entropy threshold, the gray entropy value below the threshold is an invalid entropy value and does not meet the preset image quality parameter condition; in this case the dynamic range value of the input image to be evaluated is invalid, the input image needs to be checked again, and image abnormality information may be output.
Step 7, extracting the image detail information of the bright and dark regions: image detail information t_1 is extracted from the bright region and image detail information t_2 from the dark region; t_1 is normalized by the bright normalization factor (specifically, by the following formula (12)) to obtain the image detail information of the bright region, and t_2 is normalized by the dark normalization factor (specifically, by the following formula (13)) to obtain the image detail information of the dark region;
T_1 = M_h × f_1(t_1)   (12)
T_2 = M_l × f_1(t_2)   (13)
Where T_1 is the normalized image detail information of the bright region, T_2 is the normalized image detail information of the dark region, f_1(·) represents the process of extracting image detail information from image information, f_1(t_1) represents the image detail information extracted from the bright region, and f_1(t_2) represents the image detail information extracted from the dark region.
Step 8, extracting the color feature information of the bright and dark regions: image color information c_1 is extracted from the bright region and image color information c_2 from the dark region; c_1 is normalized by the bright normalization factor (specifically, by the following formula (14)) to obtain the color feature information of the bright region, and c_2 is normalized by the dark normalization factor (specifically, by the following formula (15)) to obtain the color feature information of the dark region;
C_1 = M_h × f_2(c_1)   (14)
C_2 = M_l × f_2(c_2)   (15)
Wherein C_1 is the normalized color feature information of the bright region, C_2 is the normalized color feature information of the dark region, f_2(·) represents the process of extracting color feature information from image information, f_2(c_1) represents the color feature information extracted from the bright region, and f_2(c_2) represents the color feature information extracted from the dark region.
Step 9, judging the validity of the image detail information and color feature information: if the image detail information and color feature information of the bright region and those of the dark region are all greater than the preset information threshold, they are all valid values and meet the preset image quality parameter conditions; the image detail information and color feature information of the bright and dark regions are stored, and step 10 is executed. If any one of the image detail information and color feature information of the bright region or of the dark region is not greater than the preset information threshold, the image detail information or color feature information below the threshold is invalid information and does not meet the preset image quality parameter conditions; in this case the dynamic range value of the input image to be evaluated is invalid, the input image needs to be checked again, and image abnormality information may be output.
Step 10, calculating a high dynamic range quality value, namely carrying out weighted summation on a bright part gray entropy value of a bright part area, image detail information and color characteristic information, and a dark part gray entropy value of a dark part area, the image detail information and the color characteristic information to obtain the high dynamic range quality value, wherein the high dynamic range quality value can be realized by the following formulas (16) to (18):
F = F_1 + F_2   (16)
F_1 = A_1 × Entropy_gs1 + A_2 × T_1 + A_3 × C_1   (17)
F_2 = B_1 × Entropy_gs2 + B_2 × T_2 + B_3 × C_2   (18)
wherein A = A_1 + A_2 + A_3, B = B_1 + B_2 + B_3, and A + B = 1.
Wherein F represents the dynamic range quality value, F_1 represents the dynamic range quality value of the bright region, F_2 represents the dynamic range quality value of the dark region, Entropy_gs1 represents the bright gray entropy value of the bright region, Entropy_gs2 represents the dark gray entropy value of the dark region, T_1 represents the image detail information of the bright region, T_2 represents the image detail information of the dark region, C_1 represents the color feature information of the bright region, and C_2 represents the color feature information of the dark region.
And step 11, outputting an image evaluation result.
According to the above image processing scheme, the image quality influence brought by different light ratios of the image to be evaluated is taken into account: the brightness ratio of the bright region to the dark region of the image to be evaluated is extracted as the image light ratio, and after brightness normalization of the image data of the bright region and of the dark region, the dynamic range entropy values (i.e., gray entropy values) of the bright and dark regions are calculated separately, and image quality parameters such as the image detail information and color feature information under the same light ratio are obtained; the dynamic range image quality evaluation result is then obtained based on the dynamic range entropy values, the image detail information and the color feature information. Compared with existing dynamic range image quality evaluation methods, this effectively reduces the influence of changes in the device light ratio, truly reflects the dynamic range image quality level, and improves the evaluation accuracy.
In order to better implement the image processing method provided by the embodiment of the present application, on the basis of the image processing method provided by the embodiment of the present application, an image processing apparatus is further provided in the embodiment of the present application, as shown in fig. 5, where the image processing apparatus 500 includes:
The target image acquisition module 510 is configured to acquire a target image to be evaluated;
an image region segmentation module 520, configured to segment the target image into a first region and a second region according to the pixel brightness value of the target image;
A feature information extracting module 530, configured to extract first image feature information and second image feature information from the first region and the second region, respectively, where the image feature information includes at least gray feature information, image detail information, and color feature information;
the image information obtaining module 540 is configured to determine an evaluation result of the target image according to the first image feature information and the second image feature information.
In some embodiments of the present application, the target image obtaining module 510 is further configured to obtain an image to be evaluated, divide the preprocessed image into a first area and a second area according to a pixel brightness value of the image to be evaluated, obtain a first area brightness average value of the first area and a second area brightness average value of the second area, obtain an image light ratio of the image to be evaluated according to a ratio of the second area brightness average value to the first area brightness average value, and determine the image to be evaluated as the target image if the image light ratio is greater than a preset light ratio threshold.
In some embodiments of the present application, the image region segmentation module 520 is further configured to obtain a gray value corresponding to each pixel in the target image, determine, according to the gray segmentation threshold and the gray value of each pixel, a first pixel having a gray value greater than the gray segmentation threshold and a second pixel having a gray value not greater than the gray segmentation threshold, determine a region where the first pixel is located as a first region, and determine a region where the second pixel is located as a second region.
In some embodiments of the present application, the feature information extraction module 530 is further configured to obtain a maximum pixel brightness value and a minimum pixel brightness value in the target image, obtain a first normalization parameter of the first area and a second normalization parameter of the second area according to a ratio between the maximum pixel brightness value and the minimum pixel brightness value, normalize the first image information of the first area based on the first normalization parameter, determine the first image feature information according to the normalized first image information, normalize the second image information of the second area based on the second normalization parameter, and determine the second image feature information according to the normalized second image information.
In some embodiments of the present application, the first image information includes first gray scale information, the second image information includes second gray scale information, the first image feature information includes a first gray scale entropy value, and the second image feature information includes a second gray scale entropy value. The feature information extraction module 530 is configured to normalize the first gray scale information based on the first normalization parameter and calculate an entropy value of the normalized first gray scale information to obtain the first gray scale entropy value, and to normalize the second gray scale information based on the second normalization parameter and calculate an entropy value of the normalized second gray scale information to obtain the second gray scale entropy value.
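The gray entropy computation can be sketched as follows; the histogram bin count and the base-2 logarithm are assumptions, since the disclosure does not fix either.

```python
import numpy as np

def gray_entropy(normalized_values, bins=256):
    """Shannon entropy (in bits) of a region's normalized gray values,
    estimated from a histogram over [0, 1]."""
    values = np.asarray(normalized_values, dtype=np.float64)
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    total = hist.sum()
    if total == 0:
        return 0.0  # empty region carries no information
    p = hist[hist > 0] / total
    return float(-(p * np.log2(p)).sum())
```

A flat region (all pixels identical) gives an entropy of zero, while a region spread evenly over many gray levels gives a high entropy, which is what the preset entropy threshold discriminates on.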
In some embodiments of the present application, the image information acquisition module 540 is further configured to, when the first image feature information and the second image feature information both meet a preset image quality parameter condition, determine the evaluation result of the target image as a high dynamic range image, obtain weights corresponding to the image quality parameters, and obtain the dynamic range quality value of the target image according to each image quality parameter and the weight corresponding to each image quality parameter.
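The weighted summation that yields the dynamic range quality value may be sketched as follows; the disclosure does not give concrete weight values, so the numbers and names below are illustrative.

```python
def dynamic_range_quality(quality_params, weights):
    """Weighted sum of the image quality parameters (e.g. the gray entropy,
    image detail information, and color information of the two regions)."""
    if len(quality_params) != len(weights):
        raise ValueError("one weight is needed per quality parameter")
    return sum(p * w for p, w in zip(quality_params, weights))
```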
In some embodiments of the present application, the image information acquisition module 540 is further configured to determine the evaluation result of the target image as a non-high dynamic range image when any one of the first image feature information or the second image feature information does not meet the preset image quality parameter condition.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, and no further description is given here. The respective modules in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In some embodiments of the application, the image processing apparatus 500 may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 6. The memory of the computer device may store various program modules constituting the image processing apparatus 500, such as the target image acquisition module 510, the image region segmentation module 520, the feature information extraction module 530, and the image information acquisition module 540 shown in fig. 5. The computer program constituted by the respective program modules causes the processor to execute the steps in the image processing method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 6 may perform step S210 through the target image acquisition module 510 in the image processing apparatus 500 shown in fig. 5, perform step S220 through the image region segmentation module 520, perform step S230 through the feature information extraction module 530, and perform step S240 through the image information acquisition module 540. The computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external computer device through a network connection. The computer program, when executed by the processor, implements the image processing method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of a part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
In some embodiments of the present application, a computer device is provided that includes one or more processors, a memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps in the image processing methods of the respective embodiments described above.
In some embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored; the computer program, when loaded by a processor, causes the processor to perform the steps of the above image processing method. The steps of the image processing method here may be the steps in the image processing methods of the respective embodiments described above.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The foregoing describes in detail an image processing method, apparatus, computer device, and storage medium according to embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to aid in understanding the method and core concept of the present application. Meanwhile, those skilled in the art may make variations to the specific implementations and application scope according to the concept of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. An image processing method, comprising:
acquiring a target image to be evaluated;
dividing the target image into a first area and a second area according to the pixel brightness value of the target image;
extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information;
determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information;
The gray scale feature information includes a gray scale entropy value, and the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
If the first gray entropy value of the first region and the second gray entropy value of the second region are both larger than a preset entropy threshold value, and the image detail information and the color feature information of the first region and the image detail information and the color feature information of the second region are all larger than a preset information threshold value, determining, as the evaluation result, that the target image is a high dynamic range image, and carrying out weighted summation on the first gray entropy value, the image detail information, and the color feature information of the first region and the second gray entropy value, the image detail information, and the color feature information of the second region to obtain a high dynamic range quality value;
if the first gray entropy value of the first region and the second gray entropy value of the second region are not both larger than the preset entropy threshold value, or the image detail information and the color feature information of the first region and the image detail information and the color feature information of the second region are not all larger than the preset information threshold value, determining, as the evaluation result, that the target image is a non-high dynamic range image;
the extracting first image feature information and second image feature information from the first region and the second region, respectively, includes:
acquiring a maximum value and a minimum value of pixel brightness in the target image;
acquiring a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value;
based on the first normalization parameter, carrying out normalization processing on the first image information of the first area, and determining first image characteristic information according to the first image information after normalization processing; and
carrying out normalization processing on the second image information of the second area based on the second normalization parameter, and determining second image characteristic information according to the second image information after normalization processing.
2. The method according to claim 1, wherein the acquiring the target image to be evaluated comprises:
acquiring an image to be evaluated, and dividing the image to be evaluated into a first area and a second area according to the pixel brightness value of the image to be evaluated;
acquiring a first area brightness average value of the first area and a second area brightness average value of the second area;
obtaining the image light ratio of the image to be evaluated according to the ratio of the brightness average value of the second area to the brightness average value of the first area;
and if the image light ratio is larger than a preset light ratio threshold value, determining the image to be evaluated as a target image.
3. The method of claim 1, wherein the dividing the target image into a first region and a second region according to the pixel brightness value of the target image comprises:
acquiring gray values corresponding to all pixel points in the target image;
according to the gray level dividing threshold value and the gray value of each pixel point, determining a first pixel point with a gray value larger than the gray level dividing threshold value and a second pixel point with a gray value not larger than the gray level dividing threshold value; and
determining the area where the first pixel point is located as a first area, and determining the area where the second pixel point is located as a second area.
4. The method of claim 1, wherein the first image information comprises first gray scale information for each pixel in the first region, and the second image information comprises second gray scale information for each pixel in the second region;
the normalizing processing is performed on the first image information of the first area based on the first normalizing parameter, and the determining the first image feature information according to the normalized first image information includes:
carrying out normalization processing on the first gray information of each pixel point in the first region based on the first normalization parameter, and calculating the entropy value of the normalized first gray information to obtain a first gray entropy value;
the normalizing processing is performed on the second image information of the second area based on the second normalizing parameter, and the determining the second image feature information according to the normalized second image information includes:
carrying out normalization processing on the second gray information of each pixel point in the second region based on the second normalization parameter, and calculating the entropy value of the normalized second gray information to obtain a second gray entropy value.
5. The method according to claim 1, wherein determining the evaluation result of the target image according to the first image feature information and the second image feature information includes:
If the first image characteristic information and the second image characteristic information both accord with the preset image quality parameter conditions, determining an evaluation result of the target image as a high dynamic range image, and acquiring weights corresponding to the image quality parameters;
and acquiring the dynamic range quality value of the target image according to each image quality parameter and the weight corresponding to each image quality parameter.
6. The method according to claim 1, wherein determining the evaluation result of the target image according to the first image feature information and the second image feature information includes:
if any one of the first image characteristic information or the second image characteristic information does not accord with the preset image quality parameter condition, determining the evaluation result of the target image as a non-high dynamic range image.
7. An image processing apparatus, characterized in that the apparatus comprises:
a target image acquisition module, used for acquiring a target image to be evaluated;
an image region segmentation module, used for segmenting the target image into a first region and a second region according to the pixel brightness value of the target image;
a characteristic information extraction module, used for extracting first image characteristic information and second image characteristic information from the first area and the second area respectively, wherein the image characteristic information at least comprises gray characteristic information, image detail information and color characteristic information; and
an image information acquisition module, used for determining an evaluation result of the target image according to the first image characteristic information and the second image characteristic information;
The gray scale feature information includes a gray scale entropy value, and the determining, according to the first image feature information and the second image feature information, an evaluation result of the target image includes:
If the first gray entropy value of the first region and the second gray entropy value of the second region are both larger than a preset entropy threshold value, and the image detail information and the color feature information of the first region and the image detail information and the color feature information of the second region are all larger than a preset information threshold value, determining, as the evaluation result, that the target image is a high dynamic range image, and carrying out weighted summation on the first gray entropy value, the image detail information, and the color feature information of the first region and the second gray entropy value, the image detail information, and the color feature information of the second region to obtain a high dynamic range quality value;
if the first gray entropy value of the first region and the second gray entropy value of the second region are not both larger than the preset entropy threshold value, or the image detail information and the color feature information of the first region and the image detail information and the color feature information of the second region are not all larger than the preset information threshold value, determining, as the evaluation result, that the target image is a non-high dynamic range image;
the extracting first image feature information and second image feature information from the first region and the second region, respectively, includes:
acquiring a maximum value and a minimum value of pixel brightness in the target image;
acquiring a first normalization parameter of the first area and a second normalization parameter of the second area according to the ratio between the maximum pixel brightness value and the minimum pixel brightness value;
based on the first normalization parameter, carrying out normalization processing on the first image information of the first area, and determining first image characteristic information according to the first image information after normalization processing; and
carrying out normalization processing on the second image information of the second area based on the second normalization parameter, and determining second image characteristic information according to the second image information after normalization processing.
8. A computer device, the computer device comprising:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement the image processing method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, the computer program being loaded by a processor to perform the steps of the image processing method of any one of claims 1 to 6.
CN202210090096.XA 2022-01-25 2022-01-25 Image processing method, device, computer equipment and storage medium Active CN114429476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210090096.XA CN114429476B (en) 2022-01-25 2022-01-25 Image processing method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114429476A CN114429476A (en) 2022-05-03
CN114429476B true CN114429476B (en) 2025-01-03

Family

ID=81313370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210090096.XA Active CN114429476B (en) 2022-01-25 2022-01-25 Image processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114429476B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115762439A (en) * 2022-11-04 2023-03-07 南京汇川工业视觉技术开发有限公司 Image debugging method, device, equipment and storage medium
CN116152369B (en) * 2022-12-31 2023-09-22 廊坊奎达信息技术有限公司 Image dynamic visualization method based on big data technology
CN116071351B (en) * 2023-03-06 2023-06-30 山东金利康面粉有限公司 Flour quality visual detection system based on flour bran star identification

Citations (3)

Publication number Priority date Publication date Assignee Title
CN110717892A (en) * 2019-09-18 2020-01-21 宁波大学 A Tone Mapped Image Quality Evaluation Method
CN111582100A (en) * 2020-04-28 2020-08-25 浙江大华技术股份有限公司 Target object detection method and device
CN113163127A (en) * 2020-12-31 2021-07-23 广州极飞科技股份有限公司 Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
JP5635677B2 (en) * 2010-04-19 2014-12-03 ドルビー ラボラトリーズ ライセンシング コーポレイション High dynamic range, visual dynamic range and wide color range image and video quality assessment
CN104346809A (en) * 2014-09-23 2015-02-11 上海交通大学 Image quality evaluation method for image quality dataset adopting high dynamic range
JP6718257B2 (en) * 2016-03-01 2020-07-08 日本テレビ放送網株式会社 Image quality evaluation device, image quality evaluation method and program
CN106067177B (en) * 2016-06-15 2020-06-26 深圳市万普拉斯科技有限公司 HDR scene detection method and device
CN107767363A (en) * 2017-09-05 2018-03-06 天津大学 It is a kind of based on natural scene without refer to high-dynamics image quality evaluation algorithm
CN108010023B (en) * 2017-12-08 2020-03-27 宁波大学 High dynamic range image quality evaluation method based on tensor domain curvature analysis
CN108074220B (en) * 2017-12-11 2020-07-14 上海顺久电子科技有限公司 Image processing method and device and television
CN109377465A (en) * 2018-10-26 2019-02-22 北京布本智能科技有限公司 An image quality identification method based on image information entropy
CN110706196B (en) * 2018-11-12 2022-09-30 浙江工商职业技术学院 Clustering perception-based no-reference tone mapping image quality evaluation algorithm
CN111080595A (en) * 2019-12-09 2020-04-28 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111899243B (en) * 2020-07-28 2025-02-07 阳光保险集团股份有限公司 Image clarity evaluation method, device and computer-readable storage medium
CN112750086B (en) * 2020-08-31 2024-10-15 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN113344803B (en) * 2021-05-08 2024-03-19 浙江大华技术股份有限公司 Image adjusting method, device, electronic device and storage medium
CN113327208B (en) * 2021-06-17 2022-10-04 烟台艾睿光电科技有限公司 High dynamic range image tone mapping method, device, electronic equipment and medium
CN113747062B (en) * 2021-08-25 2023-05-26 Oppo广东移动通信有限公司 HDR scene detection method and device, terminal and readable storage medium
CN113850828B (en) * 2021-11-30 2022-02-22 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, electronic device, storage medium, and program product



Similar Documents

Publication Publication Date Title
CN114429476B (en) Image processing method, device, computer equipment and storage medium
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN108335279B (en) Image fusion and HDR imaging
CN110033418B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112565636B (en) Image processing method, device, equipment and storage medium
CN109829859B (en) Image processing method and terminal equipment
CN111064904A (en) Dark light image enhancement method
CN109922275B (en) Self-adaptive adjustment method and device of exposure parameters and shooting equipment
WO2021143300A1 (en) Image processing method and apparatus, electronic device and storage medium
US20220174222A1 (en) Method for marking focused pixel, electronic device, storage medium, and chip
CN110047060B (en) Image processing method, image processing device, storage medium and electronic equipment
CN116645527A (en) Image recognition method, system, electronic device and storage medium
CN113691724A (en) HDR scene detection method and device, terminal and readable storage medium
CN117893455B (en) Image brightness and contrast adjusting method
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN117218039A (en) Image processing method, device, computer equipment and storage medium
CN112446833A (en) Image processing method, intelligent terminal and storage medium
CN113422893B (en) Image acquisition method and device, storage medium and mobile terminal
CN115170420A (en) Image contrast processing method and system
CN113436106A (en) Underwater image enhancement method and device and computer storage medium
CN113891081A (en) Video processing method, device and equipment
CN119094896B (en) Self-adaptive overexposure and darkness-inhibiting automatic exposure method and control device
CN111147760B (en) Light field camera, luminosity adjusting method and device thereof and electronic equipment
CN117135329A (en) Image processing method, device, display equipment and storage medium
CN116233379A (en) Image brightness adjustment method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant