CN113781428B - Image processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113781428B CN113781428B CN202111054581.3A CN202111054581A CN113781428B CN 113781428 B CN113781428 B CN 113781428B CN 202111054581 A CN202111054581 A CN 202111054581A CN 113781428 B CN113781428 B CN 113781428B
- Authority
- CN
- China
- Prior art keywords
- image
- target
- labeling
- value
- determining
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The embodiment of the invention discloses an image processing method and apparatus, a computer device, and a storage medium. The image processing method comprises the following steps: acquiring a target area image of an image to be processed; determining multi-dimensional image annotation parameters of the target area image, wherein the multi-dimensional image annotation parameters comprise at least one of image sharpness, image symmetry, image brightness, and image noise; determining a region annotation result of the target area image according to the image annotation parameters; determining the weight of each image annotation parameter; and determining a target image annotation result of the image to be processed according to the weights of the image annotation parameters and the region annotation results. The technical solution of the embodiment of the invention can determine image quality according to the multi-dimensional image annotation parameters, thereby improving the accuracy and rationality of image quality determination, and further improving the accuracy and rationality of image processing.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image processing method, an image processing device, computer equipment and a storage medium.
Background
Pictures and videos have become increasingly important information carriers in today's society, and image processing has become a widespread and fundamental problem. Existing image processing generally determines image quality by annotating the image, and processes the image accordingly. However, unreasonable image annotation in existing image processing methods lowers the accuracy and rationality of image quality determination, and thereby lowers the accuracy and rationality of image processing.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, electronic equipment and a storage medium, which can improve the accuracy and rationality of image quality determination, and further improve the accuracy and rationality of image processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a target area image of an image to be processed;
determining multi-dimensional image annotation parameters of the target area image;
determining a region labeling result of the target area image according to the image labeling parameters, wherein the multi-dimensional image annotation parameters comprise at least one of image sharpness, image symmetry, image brightness, and image noise;
determining the weight of each image annotation parameter; and
determining a target image labeling result of the image to be processed according to the weights of the image labeling parameters and the region labeling results.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
a target area image acquisition module, used for acquiring a target area image of an image to be processed;
an image annotation parameter determining module, used for determining multi-dimensional image annotation parameters of the target area image;
a region labeling result determining module, used for determining a region labeling result of the target area image according to the image labeling parameters, wherein the multi-dimensional image annotation parameters comprise at least one of image sharpness, image symmetry, image brightness, and image noise;
an image annotation parameter weight determining module, used for determining the weight of each image annotation parameter; and
a target image annotation result determining module, used for determining the target image annotation result of the image to be processed according to the weights of the image annotation parameters and the region annotation results.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors; and
a storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method provided by any embodiment of the present invention.
According to the embodiment of the invention, a target area image of the image to be processed is obtained, multi-dimensional image annotation parameters of the target area image are determined, and a region annotation result of the target area image is determined according to the image annotation parameters; after the weight of each image annotation parameter is determined, the target image annotation result of the image to be processed is determined according to the weights of the image annotation parameters and the region annotation results. This solves the problems of poor accuracy and rationality of image processing caused by unreasonable image annotation in existing image processing methods: image quality can be determined according to the multi-dimensional image annotation parameters, which improves the accuracy and rationality of image quality determination, and further improves the accuracy and rationality of image processing.
Drawings
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof.
It should be further noted that, for convenience of description, only some, but not all of the matters related to the present invention are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms first and second and the like in the description and in the claims and drawings of embodiments of the invention are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to the listed steps or elements but may include steps or elements not expressly listed.
Example 1
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention, where the method may be applied to determining image quality according to multi-dimensional image labeling parameters to process an image, and the method may be performed by an image processing apparatus, where the apparatus may be implemented by software and/or hardware, and may generally be directly integrated into an electronic device that performs the method. As shown in fig. 1, the image processing method may include the steps of:
S110, acquiring a target area image of the image to be processed.
The image to be processed may be any image to be processed, for example, may be a picture image, or may be a video image, etc., and the specific image content of the image to be processed is not limited in the embodiment of the present invention. The target area image may be an image of a certain area of the image to be processed, for example, may be a preset area image, or may be a randomly selected area image, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the target area image may be acquired from an image to be processed stored in a database, or from an image to be processed captured in real time. The embodiment of the present invention does not limit the method for acquiring the image to be processed.
In an optional implementation manner of the embodiment of the present invention, before acquiring the target area image of the image to be processed, the method may further include: acquiring an original gray level image of an image to be processed; dividing an original gray image into a first set number of area images; and respectively carrying out normalization processing on each area image to obtain a target area image.
Wherein the original gray image may be an original image expressed in gray. The first set number may be a set number, for example, may be a certain determined value, or may also be a value determined according to a specific application scenario, which is not limited in the embodiment of the present invention. The area image may be an image obtained by dividing an original gray image, for example, may be an image with the same size, or may be an image with different sizes. The normalization process may be an image conversion method that reduces or even eliminates gray scale inconsistencies in the image while preserving the diagnostically valuable gray scale differences.
Specifically, the original gray-scale image of the image to be processed may be obtained, or the original gray-scale image stored in the database may be obtained, or the original gray-scale image photographed in real time may be obtained. After the original gray level image of the image to be processed is obtained, the original gray level image can be further divided into a first set number of area images, and normalization processing is performed on each area image to obtain a target area image, so that the target area image is processed. Alternatively, the original gray-scale image may be divided into nine area images of the same size. For example, if the size of the original gray-scale image is 3m×3n, the size of the area image obtained after division is m×n.
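As an illustrative sketch (not part of the patent text), the division-and-normalization step described above can be expressed as follows. The function name `split_and_normalize` and the min-max normalization formula are assumptions, since the embodiment does not fix a particular normalization method:

```python
def split_and_normalize(gray, rows=3, cols=3):
    """Divide a 2-D grayscale image (list of lists of ints) into rows*cols
    equal tiles, then min-max normalize each tile to [0, 1].

    Min-max scaling is assumed here; the patent only says the region
    images are normalized, not how."""
    h, w = len(gray), len(gray[0])
    th, tw = h // rows, w // cols  # tile height/width (image must divide evenly)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tile = [row[c * tw:(c + 1) * tw] for row in gray[r * th:(r + 1) * th]]
            flat = [v for row in tile for v in row]
            lo, hi = min(flat), max(flat)
            span = (hi - lo) or 1  # avoid division by zero on flat tiles
            tiles.append([[(v - lo) / span for v in row] for row in tile])
    return tiles
```

For a 3m×3n image this yields nine m×n normalized region images, matching the example in the description.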
S120, determining multi-dimensional image annotation parameters of the target area image; wherein the multi-dimensional image annotation parameter comprises at least one of image sharpness, image symmetry, image brightness and image noise.
The image labeling parameter may be any parameter that can characterize the image quality, for example, may be image sharpness, image symmetry, etc., which is not limited in this embodiment.
In the embodiment of the invention, after the target area image of the image to be processed is acquired, the multi-dimensional image annotation parameters of the target area image can be further determined, so that the area annotation result of the target area image is determined according to the image annotation parameters, and the image to be processed is processed. Alternatively, the multi-dimensional image annotation parameter may comprise at least one of image sharpness, image symmetry, image brightness, and image noise.
S130, determining a region labeling result of the target region image according to the image labeling parameters.
The region labeling result may be a labeling result determined according to the image labeling parameter.
In the embodiment of the invention, after the multi-dimensional image annotation parameters of the target area image are determined, the area annotation result of the target area image can be further determined according to the image annotation parameters, so that the annotation result of the image to be processed is determined according to the area annotation result, and the image processing of the image to be processed is realized. For example, when it is determined that the image labeling parameter of the target area image includes image sharpness, the area labeling result of the target area image may be further determined according to the image sharpness. When the image annotation parameters of the target area image are determined to comprise image symmetry, the area annotation result of the target area image can be further determined according to the image symmetry. When the image annotation parameter of the target area image is determined to comprise the image brightness, the area annotation result of the target area image can be further determined according to the image brightness. When the image annotation parameter of the target area image is determined to comprise image noise, the area annotation result of the target area image can be further determined according to the image noise.
S140, determining the weight of each image annotation parameter.
In the embodiment of the invention, after the region labeling result of the target region image is determined according to the image labeling parameters, the weight of each image labeling parameter can be further determined, so that the target image labeling result of the image to be processed is determined according to the weights of the image labeling parameters and the region labeling results.
In an optional implementation manner of the embodiment of the present invention, determining the weight of the image labeling parameter may include: acquiring a sample image for determining the weight of the image annotation parameter; obtaining expected labeling results of image labeling parameters of each sample image; and determining the weight of the image annotation parameters according to the expected annotation result.
The sample image may be an image that can be used as a sample, as determined by screening. The expected labeling result may be a labeling result obtained by image labeling of the sample image.
Specifically, a sample image for determining the weight of the image annotation parameter is obtained, and the obtained image annotation parameter of the sample image is annotated to obtain the expected annotation result of the image annotation parameter of each sample image, so that the weight of the image annotation parameter is determined according to the expected annotation result.
Alternatively, the expected labeling result of the image labeling parameters of each sample image may be obtained by collecting a plurality of labeling results of the image labeling of each sample image, preprocessing them, and taking their weighted average. For example, 100 labeling results of the image labeling of each sample image are obtained, the 100 labeling results are preprocessed and weighted-averaged, and the expected labeling result is obtained by multiplying the weighted-average labeling result by 100.
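A minimal sketch of this expected-labeling computation follows. The function name, the score range [0, 1], and the outlier-dropping preprocessing are all assumptions for illustration; the patent specifies neither the preprocessing nor the averaging weights:

```python
def expected_labeling_result(annotations):
    """Aggregate the individual labeling scores collected for one sample
    image (the description uses 100 of them) into an expected result.

    Preprocessing is unspecified in the patent; dropping out-of-range
    scores is assumed here, and a plain mean stands in for the
    weighted average."""
    kept = [a for a in annotations if 0 <= a <= 1]  # assumed preprocessing
    mean = sum(kept) / len(kept)
    return mean * 100  # the description scales the averaged result by 100
```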
And S150, determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
The target image labeling result may be a labeling result of an image to be processed, and may be used for determining image quality, for example, the target image labeling result may be excellent or unqualified, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, after the weight of each image labeling parameter is determined, the target image labeling result of the image to be processed can be further determined according to the weight of the image labeling parameter and the region labeling result, so as to determine the image quality, thereby realizing the image processing. For example, if the target image labeling result is excellent, it may be determined that the image quality of the image to be processed is good. If the labeling result of the target image is unqualified, the image quality of the image to be processed can be determined to be poor.
In an optional implementation manner of the embodiment of the present invention, determining the target image labeling result of the image to be processed according to the weight of the image labeling parameters and the region labeling result may include: acquiring the pixel gray value of each target area image; calculating the image sharpness value, image symmetry value, image brightness value, and image noise value of each target area image according to the pixel gray values of each target area image; and determining the target image labeling result of the image to be processed according to the image sharpness value, image symmetry value, image brightness value, and image noise value of each target area image and the weight of each image labeling parameter.
The image sharpness value may be a value of the image in the sharpness dimension. The image symmetry value may be a value of the image in the dimension of symmetry. The image brightness value may be the value of the image in the dimension of brightness. The image noise value may be the value of the image in the noise dimension.
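The patent does not give formulas for these four values, so the sketch below uses common stand-ins purely for illustration: mean gray for brightness, gray-value standard deviation for noise, mean horizontal gradient magnitude for sharpness, and left-right mirror similarity for symmetry. The function name `image_metrics` is likewise hypothetical:

```python
def image_metrics(tile):
    """Compute assumed sharpness/symmetry/brightness/noise values for a
    normalized grayscale tile (2-D list of floats in [0, 1])."""
    flat = [v for row in tile for v in row]
    n = len(flat)
    brightness = sum(flat) / n                      # mean gray level
    noise = (sum((v - brightness) ** 2 for v in flat) / n) ** 0.5  # std dev
    # mean absolute horizontal gradient; n - len(tile) adjacent pairs exist
    sharpness = sum(abs(row[c + 1] - row[c])
                    for row in tile for c in range(len(row) - 1)) / (n - len(tile))
    # 1 minus mean absolute difference between the tile and its mirror image
    symmetry = 1 - sum(abs(row[c] - row[-1 - c])
                       for row in tile for c in range(len(row))) / n
    return {"sharpness": sharpness, "symmetry": symmetry,
            "brightness": brightness, "noise": noise}
```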
Specifically, by acquiring the pixel gray value of each target area image, the image sharpness value, image symmetry value, image brightness value, and image noise value of each target area image can be further calculated according to the pixel gray values, so as to determine the target image labeling result of the image to be processed according to these values and the weights of the image labeling parameters. For example, if the image sharpness value of target area image 1 is A1, the image symmetry value is B1, the image brightness value is C1, and the image noise value is D1, and the weight corresponding to image sharpness is a1, the weight corresponding to image symmetry is b1, the weight corresponding to image brightness is c1, and the weight corresponding to image noise is d1, then the labeling result of target area image 1 is a1×A1+b1×B1+c1×C1+d1×D1.
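The weighted combination a1×A1+b1×B1+c1×C1+d1×D1 in the example above is a plain dot product; a hedged sketch (hypothetical function name, dict-based representation assumed):

```python
def region_labeling_result(metrics, weights):
    """Combine per-region metric values (A1, B1, C1, D1) with the
    per-parameter weights (a1, b1, c1, d1) into one region score.

    metrics and weights are dicts keyed by parameter name, e.g.
    {'sharpness': ..., 'symmetry': ..., 'brightness': ..., 'noise': ...}."""
    return sum(weights[k] * metrics[k] for k in metrics)
```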
Optionally, determining the target image labeling result of the image to be processed according to the image sharpness value, image symmetry value, image brightness value, image noise value, and weight of each image labeling parameter of each target area image may include: acquiring the region image position of each target area image in the image to be processed; calculating the normalized gray difference values of the image sharpness value, image symmetry value, image brightness value, and image noise value of each target area image according to the region image position; determining the initial region labeling result of the normalized gray difference values according to the weight of each image labeling parameter; and determining the target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
The position of the region image may be the position of the target region image in the image to be processed, for example, the upper left corner of the image to be processed, or the middle of the image to be processed, which is not limited in the embodiment of the present invention. The normalized gray difference value may be the difference between the gray values before and after the image normalization process. For example, if the gray value before the image normalization process is X1 and the gray value after the normalization process is X2, the normalized gray difference value may be |X1 - X2|. The initial region annotation result may be an annotation result determined from the normalized gray difference values of the image annotation parameters.
Specifically, after the image sharpness value, image symmetry value, image brightness value, and image noise value of each target area image are calculated according to the pixel gray values, the region image position of each target area image in the image to be processed can be further obtained, and the normalized gray difference values of these four values can be calculated according to the region image position. The initial region labeling result of the normalized gray difference values is then determined according to the weight of each image labeling parameter, and the target image labeling result of the image to be processed is determined according to the initial region labeling result of each target area image. For example, if the normalized gray difference value of the image sharpness value of target area image 1 is AA1, the normalized gray difference value of the image symmetry value is BB1, the normalized gray difference value of the image brightness value is CC1, and the normalized gray difference value of the image noise value is DD1, and the weights corresponding to image sharpness, image symmetry, image brightness, and image noise are a1, b1, c1, and d1 respectively, then the initial region labeling result of target area image 1 is a1×AA1+b1×BB1+c1×CC1+d1×DD1.
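A sketch of this step, under the assumption (the patent leaves the computation loosely specified) that each normalized gray difference value is the absolute difference between a metric computed before and after normalization, then weighted as in the a1×AA1+b1×BB1+c1×CC1+d1×DD1 example:

```python
def initial_region_labeling_result(values_before, values_after, weights):
    """Hypothetical sketch: AA1 = |A1_before - A1_after| per metric,
    then a weighted sum with the per-parameter weights (a1, b1, c1, d1).

    All three arguments are dicts keyed by parameter name."""
    diffs = {k: abs(values_before[k] - values_after[k]) for k in values_before}
    return sum(weights[k] * diffs[k] for k in diffs)
```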
Optionally, determining the target image labeling result of the image to be processed according to the initial region labeling result of each target region image may include: determining initial region labeling results and values of any second set number of target region images; and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
The second set number may be another set number, for example, a certain determined value, or a value determined according to a specific application scenario, which is not limited in the embodiment of the present invention.
Specifically, after the initial region labeling result of the normalized gray difference value is determined according to the weight of each image labeling parameter, the sum of the initial region labeling results of any second set number of target region images can be further determined, so that the target image labeling result of the image to be processed is determined according to this sum. Optionally, the second set number may be 6; that is, determining the sum of the initial region labeling results of any second set number of target region images may be determining the sum of the initial region labeling results of any 6 target region images.
Optionally, determining the target image labeling result of the image to be processed according to the initial region labeling result and the value may include: under the condition that the initial region labeling result and the value are larger than a first preset threshold value, determining the target image labeling result as a first target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a first preset threshold value and larger than a second preset threshold value, determining the target image labeling result as a second target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a second preset threshold value and larger than a third preset threshold value, determining the target image labeling result as a third target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a third preset threshold value and larger than a fourth preset threshold value, determining the target image labeling result as a fourth target image labeling result; and under the condition that the initial region labeling result and the value are smaller than or equal to a fourth preset threshold value, determining the target image labeling result as a fifth target image labeling result.
The first preset threshold value can be a preset arbitrary threshold value, and can be used for first determining the target image labeling result. Alternatively, the first preset threshold may be 550. The first target image annotation result may be a target image annotation result determined according to a first preset threshold. Alternatively, the first target image annotation result may be a top-quality image. The second preset threshold may be another preset arbitrary threshold, and may be used to determine a second target image labeling result. Alternatively, the second preset threshold may be 540. The second target image annotation result may be another target image annotation result determined according to the first preset threshold and the second preset threshold. Alternatively, the second target image annotation result may be an excellent image. The third preset threshold may be another arbitrary threshold that is preset, and may be used to determine a third target image labeling result. Alternatively, the third preset threshold may be 480. The third target image annotation result may be another target image annotation result determined according to the second preset threshold and the third preset threshold. Alternatively, the third target image labeling result may be a good image. The fourth preset threshold may be another preset arbitrary threshold, and may be used to determine a fourth target image labeling result. Alternatively, the fourth preset threshold may be 360. The fourth target image annotation result may be another target image annotation result determined according to the third preset threshold and the fourth preset threshold. Alternatively, the fourth target image annotation result may be a qualified image. The fifth target image annotation result may be another target image annotation result determined according to the fourth preset threshold. Alternatively, the fifth target image annotation result may be a failed image.
Specifically, after determining the sum of the initial region labeling results of any second set number of target region images, the threshold range in which the sum falls may be further determined, so as to determine the target image labeling result. If the sum of the initial region labeling results is greater than the first preset threshold, the target image labeling result can be determined to be the first target image labeling result. If the sum is less than or equal to the first preset threshold and greater than the second preset threshold, the target image labeling result can be determined to be the second target image labeling result. If the sum is less than or equal to the second preset threshold and greater than the third preset threshold, the target image labeling result can be determined to be the third target image labeling result. If the sum is less than or equal to the third preset threshold and greater than the fourth preset threshold, the target image labeling result can be determined to be the fourth target image labeling result. If the sum is less than or equal to the fourth preset threshold, the target image labeling result can be determined to be the fifth target image labeling result. For example, if the sum of the initial region labeling results is greater than 550, the target image labeling result may be determined to be a top-quality image; if it is less than or equal to 550 and greater than 540, an excellent image; if it is less than or equal to 540 and greater than 480, a good image; if it is less than or equal to 480 and greater than 360, a qualified image; and if it is less than or equal to 360, a failed image.
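The five-tier grading above reduces to a cascade of threshold comparisons. A sketch using the example threshold values 550/540/480/360 from the description (the function name and the list-of-results input format are assumptions):

```python
def grade(region_results):
    """Map the sum of the initial region labeling results of the selected
    (e.g. six) target area images to a quality grade, using the example
    thresholds 550/540/480/360 given in the description."""
    total = sum(region_results)
    if total > 550:
        return "top-quality image"
    if total > 540:
        return "excellent image"
    if total > 480:
        return "good image"
    if total > 360:
        return "qualified image"
    return "failed image"
```

Note the boundary convention: each tier is open at the top and closed at the bottom (e.g. a sum of exactly 550 falls in the "excellent" tier, not "top-quality"), matching the "less than or equal to / greater than" wording of the description.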
According to the technical solution of this embodiment, a target region image of the image to be processed is acquired, multi-dimensional image labeling parameters of the target region image are determined, and a region labeling result of the target region image is determined according to the image labeling parameters; after the weight of each image labeling parameter is determined, the target image labeling result of the image to be processed is determined according to the weights of the image labeling parameters and the region labeling results. This solves the problem of existing image processing methods in which unreasonable image labeling leads to poor accuracy and rationality of image processing: image quality can be determined from multi-dimensional image labeling parameters, improving the accuracy and rationality of image quality determination and, in turn, of image processing.
Example II
Fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention, as shown in fig. 2, the apparatus includes: a target region image acquisition module 210, an image annotation parameter determination module 220, a region annotation result determination module 230, an image annotation parameter weight determination module 240, and a target image annotation result determination module 250, wherein:
a target area image acquisition module 210, configured to acquire a target area image of an image to be processed;
an image annotation parameter determining module 220, configured to determine multi-dimensional image annotation parameters of the target area image, where the multi-dimensional image annotation parameters include at least one of image definition, image symmetry, image brightness, and image noise;
a region labeling result determining module 230, configured to determine a region labeling result of the target area image according to the image annotation parameters;
an image annotation parameter weight determining module 240, configured to determine a weight of each of the image annotation parameters; and
a target image annotation result determining module 250, configured to determine a target image annotation result of the image to be processed according to the weights of the image annotation parameters and the region labeling results.
According to the technical solution of this embodiment, a target region image of the image to be processed is acquired, multi-dimensional image labeling parameters of the target region image are determined, and a region labeling result of the target region image is determined according to the image labeling parameters; after the weight of each image labeling parameter is determined, the target image labeling result of the image to be processed is determined according to the weights of the image labeling parameters and the region labeling results. This solves the problem of existing image processing methods in which unreasonable image labeling leads to poor accuracy and rationality of image processing: image quality can be determined from multi-dimensional image labeling parameters, improving the accuracy and rationality of image quality determination and, in turn, of image processing.
Optionally, the image labeling parameter weight determining module 240 may be specifically configured to:
Acquiring a sample image for determining the weight of the image annotation parameter; obtaining expected labeling results of image labeling parameters of each sample image; and determining the weight of the image annotation parameters according to the expected annotation result.
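One way to turn the expected labeling results into per-parameter weights is sketched below. The proportional (normalized) scheme is an assumption made for illustration; the description says the weights are determined according to the expected labeling results but does not fix a specific formula.

```python
def weights_from_expected(expected):
    """Derive per-parameter weights from expected labeling results by
    normalizing them to sum to 1.

    `expected` maps an image labeling parameter name (e.g. "definition",
    "symmetry", "brightness", "noise") to its expected labeling result
    aggregated over the sample images. The proportional normalization is
    an assumed scheme, not one disclosed by the patent."""
    total = sum(expected.values())
    return {name: value / total for name, value in expected.items()}
```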
Optionally, the target area image acquisition module 210 may be specifically configured to:
acquiring an original gray level image of an image to be processed; dividing an original gray image into a first set number of area images; and respectively carrying out normalization processing on each area image to obtain a target area image.
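The grayscale-split-and-normalize pipeline can be sketched as below. The uniform grid layout and min-max normalization to [0, 1] are assumptions for illustration; the patent specifies only that the original gray image is divided into a first set number of region images, each of which is normalized.

```python
import numpy as np

def split_and_normalize(gray, rows, cols):
    """Divide a grayscale image into rows*cols region images (the 'first
    set number') and min-max normalize each region to [0, 1]."""
    h, w = gray.shape
    regions = []
    for r in range(rows):
        for c in range(cols):
            # Slice out one region of the grid.
            block = gray[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols].astype(float)
            lo, hi = block.min(), block.max()
            # Min-max normalize; a constant block maps to all zeros.
            regions.append((block - lo) / (hi - lo) if hi > lo else block * 0.0)
    return regions
```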
Optionally, the target image labeling result determining module 250 may be specifically configured to:
Acquiring the gray value of each target area image pixel point; calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the pixel point gray value of each target area image; and determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value, the image noise value and the weight of each image labeling parameter of each target area image.
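Computing the four per-region values from pixel gray values might look like the sketch below. The specific estimators (gradient magnitude for definition, left-right mirror difference for symmetry, mean for brightness, standard deviation for noise) are illustrative stand-ins chosen here; the patent does not disclose the exact formulas.

```python
import numpy as np

def region_metrics(gray_region):
    """Compute illustrative proxies for the four image labeling parameters
    of one target region image from its pixel gray values."""
    g = gray_region.astype(float)
    brightness = g.mean()                          # image brightness value: mean gray level
    definition = np.abs(np.diff(g, axis=1)).mean() # image definition value: horizontal gradient proxy
    symmetry = -np.abs(g - g[:, ::-1]).mean()      # image symmetry value: left-right mirror difference
    noise = g.std()                                # image noise value: gray-level dispersion proxy
    return {"definition": definition, "symmetry": symmetry,
            "brightness": brightness, "noise": noise}
```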
Optionally, the target image labeling result determining module 250 may be further specifically configured to:
Acquiring the region image position of each target region image in the image to be processed; calculating normalized gray level difference values of the image definition value, the image symmetry value, the image brightness value, and the image noise value of each target region image according to the region image position; determining an initial region labeling result from the normalized gray level difference values according to the weight of each image labeling parameter; and determining a target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
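Combining a region's values with the per-parameter weights into an initial region labeling result can be sketched as a weighted sum. The weighted-sum form is an assumption for illustration; the description says the result is determined according to the weights but leaves the combination rule open.

```python
def initial_region_result(metric_values, weights):
    """Combine one region's (normalized) parameter values with the
    per-parameter weights into an initial region labeling result,
    assuming a simple weighted sum."""
    return sum(weights[name] * metric_values[name] for name in weights)
```

The initial results of a second set number of regions would then be summed and graded against the preset thresholds as described above.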
Optionally, the target image labeling result determining module 250 may be further configured to:
Determining initial region labeling results and values of any second set number of target region images; and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
Optionally, the target image labeling result determining module 250 may be further configured to:
Under the condition that the initial region labeling result and the value are larger than a first preset threshold value, determining the target image labeling result as a first target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a first preset threshold value and larger than a second preset threshold value, determining the target image labeling result as a second target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a second preset threshold value and larger than a third preset threshold value, determining the target image labeling result as a third target image labeling result; under the condition that the initial region labeling result and the value are smaller than or equal to a third preset threshold value and larger than a fourth preset threshold value, determining the target image labeling result as a fourth target image labeling result; and under the condition that the initial region labeling result and the value are smaller than or equal to a fourth preset threshold value, determining the target image labeling result as a fifth target image labeling result.
The image processing apparatus can execute the image processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the image processing method provided by any embodiment of the present invention.
Since the image processing apparatus described above is an apparatus capable of executing the image processing method of the embodiments of the present application, a person skilled in the art can understand the specific implementation of the apparatus, and its various variations, from the image processing method described in the embodiments of the present application; how the apparatus implements that method is therefore not described in detail herein. Any apparatus used by a person skilled in the art to implement the image processing method of the embodiments of the present application falls within the scope of the present application.
Example III
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. Fig. 3 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 3 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 3, the electronic device 12 takes the form of a general-purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors 16, a memory 28, and a bus 18 connecting the various system components (including the memory 28 and the processor 16).
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory, RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in fig. 3, commonly referred to as a "hard disk drive"). Although not shown in fig. 3, a disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from and writing to a removable nonvolatile optical disk (e.g., a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a digital versatile disc read-only memory (Digital Versatile Disc Read-Only Memory, DVD-ROM), or other optical media), may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may be via an Input/Output (I/O) interface 22. Also, electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), and/or a public network such as the Internet, via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown in fig. 3, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, data backup storage systems, and the like.
The processor 16 executes programs stored in the memory 28, thereby performing various functional applications and data processing and implementing the image processing method provided by the embodiment of the present invention: acquiring a target area image of an image to be processed; determining multi-dimensional image annotation parameters of the target area image, wherein the multi-dimensional image annotation parameters comprise at least one of image definition, image symmetry, image brightness and image noise; determining a region labeling result of the target region image according to the image annotation parameters; determining the weight of each image annotation parameter; and determining a target image labeling result of the image to be processed according to the weights of the image annotation parameters and the region labeling results.
Example IV
A fourth embodiment of the present invention also provides a computer storage medium storing a computer program which, when executed by a computer processor, is configured to perform the image processing method according to any one of the above embodiments of the present invention: acquiring a target area image of an image to be processed; determining multi-dimensional image annotation parameters of the target area image; wherein the multi-dimensional image annotation parameters comprise at least one of image definition, image symmetry, image brightness and image noise; determining a region labeling result of the target region image according to the image labeling parameters; determining the weight of each image annotation parameter; and determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied thereto. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to them, and may be embodied in many other equivalent forms without departing from its spirit, the scope of which is set forth in the following claims.
Claims (8)
1. An image processing method, comprising:
Acquiring a target area image of an image to be processed;
determining multi-dimensional image annotation parameters of the target area image; wherein the multi-dimensional image annotation parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
determining a region labeling result of the target region image according to the image labeling parameters;
determining the weight of each image annotation parameter;
determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result;
The determining the weight of each image annotation parameter comprises the following steps:
Acquiring a sample image for determining the weight of the image annotation parameter;
obtaining expected labeling results of image labeling parameters of each sample image;
Determining the weight of the image annotation parameters according to the expected annotation result;
the obtaining the expected labeling result of the image labeling parameters of each sample image includes:
Acquiring a plurality of labeling results of image labeling of each sample image;
preprocessing the plurality of labeling results, performing a weighted average, and obtaining the expected labeling result according to the weighted-averaged labeling results;
The determining the target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result comprises the following steps:
acquiring the gray value of each target area image pixel point;
Calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the pixel point gray value of each target area image;
and determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value and the image noise value of each target area image and the weight of each image labeling parameter.
2. The method of claim 1, wherein prior to acquiring the target area image of the image to be processed, further comprising:
acquiring an original gray level image of the image to be processed;
Dividing the original gray image into a first set number of area images;
and respectively carrying out normalization processing on each area image to obtain the target area image.
3. The method of claim 1, wherein determining the target image annotation result for the image to be processed based on the image sharpness values, the image symmetry values, the image brightness values, the image noise values, and the weights for the image annotation parameters for each of the target region images comprises:
Acquiring the region image position of each target region image in the image to be processed;
Calculating the normalized gray level difference value of the image definition value, the image symmetry value, the image brightness value and the image noise value of each target area image according to the area image position;
determining an initial region labeling result of the normalized gray level difference value according to the weight of each image labeling parameter;
and determining the target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
4. A method according to claim 3, wherein said determining the target image annotation result of the image to be processed based on the initial region annotation result of each of the target region images comprises:
Determining initial region labeling results and values of any second set number of target region images;
and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
5. The method of claim 4, wherein determining the target image annotation result for the image to be processed based on the initial region annotation result and the value comprises:
under the condition that the initial region labeling result and the value are larger than a first preset threshold value, determining the target image labeling result as a first target image labeling result;
under the condition that the initial region labeling result and the value are smaller than or equal to a first preset threshold value and larger than a second preset threshold value, determining the target image labeling result as a second target image labeling result;
under the condition that the initial region labeling result and the value are smaller than or equal to a second preset threshold value and larger than a third preset threshold value, determining the target image labeling result as a third target image labeling result;
under the condition that the initial region labeling result and the value are smaller than or equal to a third preset threshold value and larger than a fourth preset threshold value, determining the target image labeling result as a fourth target image labeling result;
and under the condition that the initial region labeling result and the value are smaller than or equal to a fourth preset threshold value, determining the target image labeling result as a fifth target image labeling result.
6. An image processing apparatus, comprising:
The target area image acquisition module is used for acquiring a target area image of the image to be processed;
The image annotation parameter determining module is used for determining multi-dimensional image annotation parameters of the target area image; wherein the multi-dimensional image annotation parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
the region labeling result determining module is used for determining a region labeling result of the target region image according to the image labeling parameters;
the image annotation parameter weight determining module is used for determining the weight of each image annotation parameter;
the target image annotation result determining module is used for determining a target image annotation result of the image to be processed according to the weight of the image annotation parameter and the region annotation result;
the image annotation parameter weight determining module is specifically configured to:
Acquiring a sample image for determining the weight of the image annotation parameter;
obtaining expected labeling results of image labeling parameters of each sample image;
Determining the weight of the image annotation parameters according to the expected annotation result;
The image annotation parameter weight determining module is further configured to:
Acquiring a plurality of labeling results of image labeling of each sample image;
preprocess the plurality of labeling results, perform a weighted average, and obtain the expected labeling result according to the weighted-averaged labeling results;
the target image annotation result determining module is specifically configured to:
acquiring the gray value of each target area image pixel point;
Calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the pixel point gray value of each target area image;
and determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value and the image noise value of each target area image and the weight of each image labeling parameter.
7. An electronic device, the electronic device comprising:
One or more processors;
a storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any of claims 1-5.
8. A computer storage medium having stored thereon a computer program, which when executed by a processor implements the image processing method according to any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111054581.3A CN113781428B (en) | 2021-09-09 | 2021-09-09 | Image processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781428A CN113781428A (en) | 2021-12-10 |
CN113781428B true CN113781428B (en) | 2024-10-11 |
Family
ID=78841998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111054581.3A Active CN113781428B (en) | 2021-09-09 | 2021-09-09 | Image processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781428B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112102309A (en) * | 2020-09-27 | 2020-12-18 | 中国建设银行股份有限公司 | Method, device and equipment for determining image quality evaluation result |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107481238A (en) * | 2017-09-20 | 2017-12-15 | 众安信息技术服务有限公司 | Image quality measure method and device |
CN111079740A (en) * | 2019-12-02 | 2020-04-28 | 咪咕文化科技有限公司 | Image quality evaluation method, electronic device, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113781428A (en) | 2021-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109447154B (en) | Picture similarity detection method, device, medium and electronic equipment | |
CN110189336B (en) | Image generation method, system, server and storage medium | |
CN111986159B (en) | Electrode defect detection method and device for solar cell and storage medium | |
CN113436100B (en) | Method, apparatus, device, medium, and article for repairing video | |
CN108156452B (en) | Method, device and equipment for detecting sensor and storage medium | |
CN110390295B (en) | Image information identification method and device and storage medium | |
CN110796108A (en) | Method, device and equipment for detecting face quality and storage medium | |
CN116433692A (en) | Medical image segmentation method, device, equipment and storage medium | |
CN113643260A (en) | Method, apparatus, device, medium and product for detecting image quality | |
CN111753114A (en) | Image pre-labeling method and device and electronic equipment | |
CN110781849A (en) | Image processing method, device, equipment and storage medium | |
CN112287734A (en) | Screen-fragmentation detection and training method of convolutional neural network for screen-fragmentation detection | |
CN114972113A (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN113936232A (en) | Screen fragmentation identification method, device, equipment and storage medium | |
CN113781428B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111382643B (en) | Gesture detection method, device, equipment and storage medium | |
CN112035732A (en) | Method, system, equipment and storage medium for expanding search results | |
CN111124862B (en) | Intelligent device performance testing method and device and intelligent device | |
CN113780163B (en) | Page loading time detection method and device, electronic equipment and medium | |
CN113610856B (en) | Method and device for training image segmentation model and image segmentation | |
CN114821034A (en) | Training method and device of target detection model, electronic equipment and medium | |
CN113591787A (en) | Method, device, equipment and storage medium for identifying optical fiber link component | |
CN111143346B (en) | Tag group variability determination method and device, electronic equipment and readable medium | |
CN114119365A (en) | Application detection method, device, equipment and storage medium | |
CN113205092A (en) | Text detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||