CN111192213A - Image defogging adaptive parameter calculation method, image defogging method and system - Google Patents
- Publication number
- CN111192213A CN111192213A CN201911374990.4A CN201911374990A CN111192213A CN 111192213 A CN111192213 A CN 111192213A CN 201911374990 A CN201911374990 A CN 201911374990A CN 111192213 A CN111192213 A CN 111192213A
- Authority
- CN
- China
- Prior art keywords
- image
- defogging
- value
- dark channel
- values
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration; G06T5/77—Retouching; Inpainting; Scratch removal
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality; G06T2207/10016—Video; Image sequence
- G06T2207/20—Special algorithmic details; G06T2207/20024—Filtering details
- G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The method trains an equipment gain circle; acquires the equipment number from image decoding data; calls up the corresponding equipment gain circle; identifies the image; and outputs the matching adaptive parameter k from the gain circle. The identified dark channel image is output through a dark channel prior algorithm; a sky region is identified and its image output; the values of the dark channel image and the sky region image are fused to obtain the input image. The atmospheric light curtain value before defogging is then calculated by formula, and the fog-free image is restored from it. The invention discloses a novel image defogging method and system that calculates adaptive parameters using the Radviz algorithm.
Description
Technical Field
The invention relates to the field of image processing, in particular to a method for calculating an image defogging adaptive parameter, an image defogging method and an image defogging system.
Background
Papers and patents on defogging are plentiful. The classic defogging algorithm of Dr. Kaiming He, described in "Single Image Haze Removal Using Dark Channel Prior", first solves for the atmospheric light value and the transmissivity based on the dark channel assumption, and then performs the defogging treatment based on a fog imaging model.
Existing defogging schemes run too slowly and consume too many resources. Each approach has a weakness: Dr. He's method, with its large number of floating-point operations, limits real-time processing of industrial CCD streams; defogging via the retinex algorithm easily produces color cast and suits only static images; and other new real-time algorithms have defects in handling boundaries, color cast, internal blocking and the like, and are especially sensitive to their parameters.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a novel image defogging method and a novel image defogging system for calculating adaptive parameters by using a Radviz algorithm.
In order to solve the technical problem, the invention is solved by the following technical scheme:
a method for calculating an image defogging adaptive parameter comprises the following steps:
step 1, acquiring an image data set P = {A, X, Ymean}, wherein A represents the global atmospheric light value, X is the complexity of the image texture, and Ymean is the average brightness of the image;
step 2, carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring a balance point position in a circle of the Radviz algorithm to obtain a self-adaptive parameter k;
step 3, repeating steps 1-2 with a plurality of images of different gains to obtain a plurality of balance point positions, and, after manually judging the adaptive parameters k to be error-free, distributing the output values into a circle;
step 4, obtaining gain circles distributed with k under all conditions through interpolation;
and 5, acquiring the equipment number through image decoding data, calculating an image data set P, calling out an equipment gain circle obtained in the step 4, substituting the P into the gain circle to identify the image, and outputting a self-adaptive parameter k matched with the input image.
Optionally, the normalization calculation formula is a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)), where t is an independent variable used to search out the maximum and minimum among all investigated values under attribute j of the object; j is the attribute of the object; i indexes the objects investigated under that attribute;
the tension balance condition Σ_j a_ij (O_j − x_i) = 0 is met, wherein O_j is the coordinate at which each attribute lies on the circle, x_i is the balance point position, and a_ij is the tension value of each attribute; the balance point position is then calculated as x_i = (Σ_j a_ij O_j) / (Σ_j a_ij).
optionally, the normalization process is performed after singular points are excluded from the image dataset P ═ { a, X, Ymean }.
Optionally, the global atmospheric light value A is calculated by taking the brightest 0.1% of the image's pixel values and averaging the selected pixels; the average brightness Ymean of the image is calculated by summing the values of all pixels and dividing by the number of pixels.
Optionally, the complexity X of the image texture is calculated through the histogram: let the gray levels of the image be l, L be the number of distinct gray levels, Lmean the mean gray level, and h(l) the corresponding histogram; X is then computed from h(l) and Lmean.
An image defogging method comprises: obtaining the calculated adaptive parameter k by acquiring the equipment number through image decoding data, calling up the corresponding equipment gain circle, identifying the image, and outputting the matching adaptive parameter k from the gain circle;
outputting the identified dark channel image through the dark channel prior algorithm;
identifying a sky area and outputting an identified sky area image;
fusing the values of the dark channel image and the sky area image to obtain Iin;
performing edge-preserving filtering on Iin to obtain Y1, computing Y2 = |Iin − Y1|, and performing edge-preserving filtering on Y2 once more to obtain Yf;
the atmospheric light curtain value Yshuchu before defogging is calculated as: Yshuchu = Y1 − Yf;
I(x): the hazy image; A: the global atmospheric light value; Yout: the restored fog-free image.
Optionally, the dark channel prior calculation formula is: Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) );
c is an independent variable; I_c(y) is any one color channel of image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined.
Optionally, the processing method for identifying the sky region:
converting the three-primary-color (RGB) image to YCbCr and extracting the Y component;
filtering the Y component by using Gaussian filtering;
identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain an output S, and storing the output S as the gradient of the image as the input of the next step;
setting 2 thresholds: one a gray-level threshold on the Y component, the other based on the gradient maximum S, for which the variance of S is calculated and a variance threshold set; the sky region is extracted through the gray-level threshold and the variance threshold to obtain Isky. A small mean filter is then applied to Isky.
Optionally, a value fusion formula of the dark channel image and the sky region image is as follows:
Iin=fix((b*Isky+(255-b)*Idark)/255);
b is a weight parameter used for controlling the effect of finally processing the sky.
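A minimal sketch of this fusion step (the default weight b = 128 below is an illustrative choice, not a value given in the text):

```python
import numpy as np

def fuse(Idark, Isky, b=128):
    """Value fusion of the dark-channel and sky-region images:
    Iin = fix((b*Isky + (255-b)*Idark)/255), where fix() truncates
    toward zero and b (0..255) weights the sky term."""
    return np.fix((b * Isky + (255 - b) * Idark) / 255.0)
```

With b = 0 the dark channel dominates; with b = 255 only the sky image contributes, which matches the stated purpose of b as the control over how strongly the sky is processed.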
Optionally, the edge-preserving filtering method is: set a window of radius r; identify the pixels within radius r whose values are similar to the center pixel, with pixel similarity threshold d; if the similarity is greater than the threshold d, apply mean filtering and replace the original pixel value with the mean-filtered value; regions whose similarity is below the threshold d are retained as boundary regions.
An image defogging system comprises a front-end module, a defogging module and a back-end module. The front-end module comprises a video data collection module and an encoder, and transmits the collected video data and the sensor device type to the encoder for compression; the back-end module comprises a video data encoding/collection module and a decoder, and decodes the data obtained over network transmission.
The video signal is input to the front-end module for video data acquisition and encoding; the processed data is transmitted over the network to the defogging module, defogged by the image defogging method above, and the defogged data is transmitted over the network to the back-end module for decoding.
Optionally, the defogging module includes a dark channel processing module, a sky area processing module, and a special defogging processing module;
the dark channel processing module is used for identifying a dark channel image;
the sky region processing module is used for identifying a sky region image;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module and inputting the fused image data into the special defogging processing module for defogging.
The invention has the beneficial effects that:
1. The Radviz algorithm adopted by the invention completes the adaptive parameter identification and can simply and quickly extract the information in images and the rules among them. Image processing can further be combined with visualization and clustering algorithms of this kind, making the processing effect more stable.
2. The invention effectively reserves the sky in the defogging module, and well inhibits the spiral line phenomenon caused by defogging in a sky area on a plurality of imaging systems.
3. The special defogging processing module adds an edge-preserving filtering stage, avoiding later abnormalities such as black edges, so objects appear more real with better contrast.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of an image defogging system;
FIG. 2 is a flow diagram of cloud defogging module processing;
FIG. 3 is a schematic diagram of the balance of Radviz at one point;
fig. 4 is a distribution diagram of the Radviz adaptive parameter k.
Detailed Description
The present invention will be described in further detail with reference to examples, which are illustrative of the present invention and are not to be construed as being limited thereto.
Example 1:
As shown in fig. 1 and 2: an image defogging system comprises a front-end module, a cloud defogging module and a back-end module. The front-end module comprises a video data collection module and an MPEG-4.10 (H.264/AVC) encoder; collected video data and the sensor device type are transmitted to the encoder for compression, and the compressed code stream is transmitted over the network to the cloud for the next processing step.
The back-end module comprises an MPEG-4.10 video data collection module and an MPEG-4.10 decoder; it decodes the data obtained over network transmission, which is then played by a video player.
As shown in fig. 2: the defogging module comprises a dark channel processing module, a sky area processing module and a special defogging processing module;
the dark channel processing module is used for identifying a dark channel image; the sky region processing module is used for identifying a sky region image;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module and inputting the fused image data into the special defogging processing module for defogging.
Example 2:
in the conventional input image I, the length of the image is m, the width is n, and the bit width of the image is Nbit.
An image defogging method first trains, via machine learning, a sensor device gain circle populated with defogging adaptive parameters based on the Radviz algorithm.
And acquiring the equipment number through image decoding data, calling out a corresponding sensor equipment gain circle, identifying the image, and outputting a self-adaptive parameter k matched in the gain circle.
Then, the identified dark channel image is output through the dark channel prior algorithm, whose calculation formula is: Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) ); c is an independent variable that selects one of the 3 RGB channels, so each pixel takes the minimum value across the 3 channels; I_c(y) is any one color channel of image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined. The radius is chosen between 3 and 7 pixels, with 7 the most suitable in the preferred embodiment.
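The dark channel computation above can be sketched as follows; this is a straightforward (unoptimized) reading of the formula, with the window radius defaulting to the embodiment's preferred value of 7:

```python
import numpy as np

def dark_channel(img, radius=7):
    """Dark channel prior: per-pixel minimum over the RGB channels,
    followed by a minimum filter over a (2*radius+1) square window
    s(x). The text suggests radius 3..7, preferring 7."""
    # inner min over the colour channels c in {r, g, b}
    min_rgb = img.min(axis=2)
    h, w = min_rgb.shape
    # edge padding so the window is defined at the borders
    padded = np.pad(min_rgb, radius, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # outer min over the sliding window centred at (i, j)
            dark[i, j] = padded[i:i + 2 * radius + 1,
                                j:j + 2 * radius + 1].min()
    return dark
```

A production implementation would replace the double loop with an erosion (grayscale minimum filter), but the loop form mirrors the formula term by term.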
Identifying a sky area and outputting an identified sky area image; and fusing the values of the dark channel image and the sky area image to obtain Iin.
Iin further undergoes edge-preserving filtering to obtain Y1; Y2 = |Iin − Y1| is computed, and Y2 undergoes one pass of edge-preserving filtering to obtain Yf;
the atmospheric light curtain value Yshuchu before defogging is calculated as: Yshuchu = Y1 − Yf;
I(x): the hazy image; A: the global atmospheric light value; Yout: the restored fog-free image.
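The recovery formula itself appears as an image in the original and did not survive extraction. Under the standard fog imaging model I(x) = Yout·t(x) + A·(1 − t(x)), with the atmospheric light curtain V = Yshuchu = A·(1 − t), one common reconstruction (an assumption, not the patent's verbatim formula) is Yout = (I − V) / (1 − V/A):

```python
import numpy as np

def recover(I, V, A, t0=0.1):
    """Restore the fog-free image Yout from the hazy image I, the
    atmospheric light curtain V (Yshuchu in the text) and the global
    atmospheric light value A, assuming the standard scattering model
    I = Yout*t + A*(1-t) with V = A*(1-t). The lower bound t0 is a
    hypothetical safeguard against division by zero, not from the text."""
    t = np.maximum(1.0 - V / A, t0)   # estimated transmission t(x)
    return (I - V) / t                # Yout
```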
The sky area identification processing method comprises the following steps:
1. converting the three-primary-color (RGB) image to YCbCr and extracting the Y component;
2. filtering the Y component by using Gaussian filtering;
3. identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain an output S, and storing the output S as the gradient of the image as the input of the next step;
4. setting 2 thresholds, one a gray-level threshold on the Y component, e.g. 0.7×2^N ≤ Y ≤ 2^N; for the other, calculating the variance of the gradient maximum S and setting a variance threshold; the sky region is intercepted through the gray-level threshold and the variance threshold to obtain Isky;
5. a small mean filtering is performed on Isky, for example a 5×5 all-ones mean filter; in practical applications another window size may be selected, and in the sky area Isky sets the result of the mean filtering to 0.
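The five sky-identification steps can be sketched as below. BT.601 luma weights stand in for the YCbCr conversion, a box blur stands in for the Gaussian filter, and a simple per-pixel gradient threshold stands in for the variance test; gray_frac and grad_thresh are illustrative values, not from the text:

```python
import numpy as np

def box_blur(img, r):
    """Mean filter over a (2r+1) square window with edge padding;
    stands in here for both the Gaussian pre-filter (step 2) and
    the small mean filter (step 5)."""
    p = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for di in range(2 * r + 1):
        for dj in range(2 * r + 1):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (2 * r + 1) ** 2

def sky_mask(rgb, n_bits=8, gray_frac=0.7, grad_thresh=1.0):
    """Steps 1-4: Y component, smoothing, max directional gradient S,
    then the gray-level test 0.7*2^N <= Y <= 2^N plus a flatness test."""
    # 1. luma as the Y component of YCbCr (BT.601 weights)
    Y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # 2. smooth (box blur in place of the Gaussian filter)
    Y = box_blur(Y, 1)
    # 3. per-pixel maximum of the directional gradient magnitudes -> S
    gy, gx = np.gradient(Y)
    S = np.maximum(np.abs(gx), np.abs(gy))
    # 4. gray-level threshold and (simplified) variance/flatness test
    bright = (Y >= gray_frac * 2 ** n_bits) & (Y <= 2 ** n_bits)
    flat = S < grad_thresh
    return bright & flat
```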
The edge-preserving filtering method is as follows: set the window radius to 3-9 pixels; identify the pixels within radius r whose values are similar to the center pixel, with similarity threshold d. If the similarity is below 10%, the pixel is considered a boundary and is kept. If the similarity is greater than the threshold d, a mean filter of size [m/40, n/40] is applied and the original pixel value is replaced by the mean-filtered value.
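A sketch of this edge-preserving filter. The parameters sim_tol and sim_frac are illustrative stand-ins for the similarity threshold d, and for brevity the [m/40, n/40] mean filter is replaced by the mean of the similar pixels inside the window:

```python
import numpy as np

def edge_preserving_filter(img, r=3, sim_tol=10, sim_frac=0.1):
    """For each pixel, count neighbours within sim_tol gray levels in
    a radius-r window. If the similar fraction is below sim_frac the
    pixel is treated as a boundary and kept unchanged; otherwise it is
    replaced by the mean of the similar pixels (standing in for the
    patent's [m/40, n/40] mean filter)."""
    h, w = img.shape
    out = img.astype(float).copy()
    p = np.pad(img.astype(float), r, mode='edge')
    for i in range(h):
        for j in range(w):
            win = p[i:i + 2 * r + 1, j:j + 2 * r + 1]
            similar = np.abs(win - img[i, j]) <= sim_tol
            if similar.mean() >= sim_frac:       # smooth region
                out[i, j] = win[similar].mean()  # replace with local mean
            # else: boundary pixel, keep the original value
    return out
```

Because only similar pixels are averaged, a sharp step edge passes through unchanged, which is the behaviour the text relies on to avoid black edges later.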
Example 3:
the Radviz algorithm is a tension and spring concept, and the whole algorithm operates on a circle.
A method for calculating image defogging adaptive parameters comprises the following steps:
Step 1: acquire the 3-component image data set P = {A, X, Ymean}, where A represents the global atmospheric light value, X the complexity of the image texture, and Ymean the average brightness of the image. Note: Radviz can support many data components, but works best with around 3-7.
Step 2, as shown in fig. 3: and carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring the position of a balance point in a circle of the Radviz algorithm to obtain a self-adaptive parameter k.
Step 3: repeat steps 1-2 with a plurality of images of different gains to obtain a plurality of adaptive parameters k, and, after manually judging the adaptive parameters k to be error-free, distribute their output values into a circle.
And 4, obtaining gain circles distributed with k under all conditions through interpolation.
And 5, acquiring the equipment number through image decoding data, calculating an image data set P, calling out the equipment gain circle obtained in the step 4, substituting the P into the gain circle to identify the image, and outputting the adaptive parameter k matched with the input image.
Wherein the normalization calculation formula is a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)); t is an independent variable used to search out the maximum and minimum among all investigated values under attribute j of the object; j is the attribute of the object; i indexes the objects investigated under that attribute;
the tension balance condition Σ_j a_ij (O_j − x_i) = 0 is met, wherein O_j is the coordinate at which each attribute lies on the circle, x_i is the balance point position, and a_ij is the tension value of each attribute; the balance point position is then calculated as x_i = (Σ_j a_ij O_j) / (Σ_j a_ij).
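The normalization and balance-point formulas can be sketched directly, with the attribute anchors O_j spaced evenly on the unit circle (a common Radviz convention; the patent does not specify the anchor layout):

```python
import math

def normalize(column):
    """Min-max normalization a_ij = (x_ij - min) / (max - min)
    over one attribute column."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def radviz_point(values):
    """Radviz balance point for one record. `values` are the
    normalized attribute values a_ij; anchors O_j sit evenly on the
    unit circle. Returns the (x, y) position where the spring
    tensions sum to zero: x_i = sum(a_ij * O_j) / sum(a_ij)."""
    n = len(values)
    anchors = [(math.cos(2 * math.pi * j / n),
                math.sin(2 * math.pi * j / n)) for j in range(n)]
    total = sum(values)
    x = sum(a * ox for a, (ox, _) in zip(values, anchors)) / total
    y = sum(a * oy for a, (_, oy) in zip(values, anchors)) / total
    return x, y
```

Equal attribute values pull the point to the circle's center, while a single dominant attribute pulls it toward that attribute's anchor, which is what makes the balance point usable as a per-image signature for looking up k.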
according to the method, at least 1000 images are investigated and calculated, at least 20 images with different gains of different scenes are obtained, the different scenes refer to the difference of indoor and outdoor backlight normal light, the gains are usually the gains in chips such as digital gain, analog gain, ISP gain and the like, the gains are usually changed along with the change of brightness, all image frames are distributed on a circle to obtain a distribution diagram of k, and then the automatic gain parameter k which is free of errors through observation of each frame is distributed on the circle as an output value to form a circle, wherein the circle at the moment is a regular circle with 1000 points.
Further, software interpolates the k value at each position by an interpolation method to obtain the adaptive gain parameter k distributed under all conditions. Fig. 4 is a distribution diagram of the Radviz adaptive parameter k, showing the law the obtained k values may follow; it may be an arc law or a straight-line law. For example, under the distribution law of the middle vertical line, the adaptive parameter k may decrease as the average brightness increases.
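The interpolation method of step 4 is not specified in the text; a simple inverse-distance-weighted scheme over the judged balance points is one possible stand-in:

```python
def interpolate_k(samples, point, power=2):
    """Inverse-distance-weighted interpolation of the adaptive
    parameter k over the gain circle. `samples` is a list of
    ((x, y), k) pairs of error-free balance points; `point` is the
    balance point of a new image. This is an illustrative stand-in
    for the patent's unspecified interpolation method."""
    num = den = 0.0
    for (sx, sy), k in samples:
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        if d2 == 0.0:
            return k          # exact hit on a judged sample
        w = 1.0 / d2 ** (power / 2)
        num += w * k
        den += w
    return num / den
```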
The image data set P is normalized after eliminating singular points, which are abnormal values of the 3 input components A, X and Ymean. The accepted range of each value can be set according to actual requirements, taking only values within range; for example, for the global atmospheric light value A, assuming an image bit width of N bits, the value range of A is 0.7×2^N ≤ A ≤ 2^N, so that outliers do not greatly influence the result.
The global atmospheric light value A is calculated by taking the brightest 0.1% of the image's pixel values and averaging the selected pixels; the average brightness Ymean of the image is calculated by summing the values of all pixels and dividing by the number of pixels.
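A direct reading of these two definitions:

```python
import numpy as np

def global_light_and_mean(img):
    """A: mean of the brightest 0.1% of pixel values (at least one
    pixel); Ymean: mean of all pixel values, as described above."""
    flat = np.sort(img.ravel())
    k = max(1, int(round(flat.size * 0.001)))  # top 0.1% of pixels
    A = flat[-k:].mean()
    Ymean = flat.mean()
    return A, Ymean
```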
The complexity X of the image texture is calculated through the histogram: let the gray levels of the image be l, L be the number of distinct gray levels, Lmean the mean gray level, and h(l) the corresponding histogram; X is then computed from h(l) and Lmean.
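The explicit formula for X appears as an image in the original and did not survive extraction. The sketch below is one plausible reading consistent with the quantities named (gray level l, mean Lmean, histogram h(l)), namely the histogram variance; it should not be taken as the patent's exact formula:

```python
import numpy as np

def texture_complexity(img, levels=256):
    """One plausible reconstruction of X: the variance of the gray
    levels taken over the normalized histogram h(l),
    X = sum_l h(l) * (l - Lmean)^2. Assumption, not verbatim."""
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    h = h / h.sum()                       # normalized histogram h(l)
    l = np.arange(levels)
    Lmean = (h * l).sum()                 # mean gray level
    return (h * (l - Lmean) ** 2).sum()   # spread as complexity
```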
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
Furthermore, it should be noted that all equivalent or simple changes made to the structure, features and principles described in the present patent concept are included in the protection scope of the present patent. Various modifications, additions and substitutions for the specific embodiments described may be made by those skilled in the art without departing from the scope of the invention as defined in the accompanying claims.
Claims (13)
1. A method for calculating an image defogging adaptive parameter is characterized by comprising the following steps of:
step 1, acquiring an image data set P = {A, X, Ymean}, wherein A represents the global atmospheric light value, X is the complexity of the image texture, and Ymean is the average brightness of the image;
step 2, carrying out normalization processing on the data set P, substituting the P into a Radviz algorithm, and acquiring a balance point position in a circle of the Radviz algorithm to obtain a self-adaptive parameter k;
step 3, repeating steps 1-2 with a plurality of images of different gains to obtain a plurality of balance point positions, and, after manually judging the adaptive parameters k to be error-free, distributing the output values into a circle;
step 4, obtaining gain circles distributed with k under all conditions through interpolation;
and 5, acquiring the equipment number through image decoding data, calculating an image data set P, calling out an equipment gain circle obtained in the step 4, substituting the P into the gain circle to identify the image, and outputting a self-adaptive parameter k matched with the input image.
2. The method of calculating adaptive parameters for image defogging according to claim 1,
normalization calculation formula: a_ij = (x_ij − min_t(x_tj)) / (max_t(x_tj) − min_t(x_tj)); t is an independent variable used to search out the maximum and minimum among all investigated values under attribute j of the object; j is the attribute of the object; i indexes the objects investigated under that attribute;
the tension balance condition Σ_j a_ij (O_j − x_i) = 0 is met, wherein O_j is the coordinate at which each attribute lies on the circle, x_i is the balance point position, and a_ij is the tension value of each attribute.
3. The method according to claim 1, wherein the normalization process is performed after singular points are excluded from the image data set P = {A, X, Ymean}.
4. The method for calculating the adaptive parameter for image defogging according to claim 1, wherein the global atmospheric light value A is calculated by taking the brightest 0.1% of the image's pixel values and averaging the selected pixels;
the average brightness Ymean of the image is calculated by summing the values of all pixels and dividing by the number of pixels.
5. The method of calculating adaptive parameters for image defogging according to claim 1,
the complexity X of the image texture is calculated through the histogram: let the gray levels of the image be l, L be the number of distinct gray levels, Lmean the mean gray level, and h(l) the corresponding histogram; X is then computed from h(l) and Lmean.
6. An image defogging method is characterized in that the obtained adaptive parameter k calculated according to any one of claims 1 to 5 is obtained, the device number is obtained through image decoding data, a corresponding device gain circle is called out, an image is identified, and the adaptive parameter k matched in the gain circle is output;
outputting the identified dark channel image through the dark channel prior algorithm;
identifying a sky area and outputting an identified sky area image;
fusing the values of the dark channel image and the sky area image to obtain Iin;
performing edge-preserving filtering on Iin to obtain Y1, computing Y2 = |Iin − Y1|, and performing edge-preserving filtering on Y2 once more to obtain Yf;
the atmospheric light curtain value Yshuchu before defogging is calculated as: Yshuchu = Y1 − Yf;
I(x): the hazy image; A: the global atmospheric light value; Yout: the restored fog-free image.
7. The image defogging method according to claim 6, wherein said dark channel prior's calculation formula:
Idark(x) = min_{y∈s(x)} ( min_{c∈{r,g,b}} I_c(y) );
c is an independent variable; I_c(y) is any one color channel of image I; Idark is the dark channel data; s(x) is a sliding window centered at x with radius R, where R is user-defined.
8. The image defogging method according to claim 6,
the sky area identification processing method comprises the following steps:
converting the three-primary-color (RGB) image to YCbCr and extracting the Y component;
filtering the Y component by using Gaussian filtering;
identifying gradient values of each pixel point of the Y component in each direction, comparing and selecting the maximum value to obtain an output S, and storing the output S as the gradient of the image as the input of the next step;
setting 2 thresholds: one a gray-level threshold on the Y component, the other based on the gradient maximum S, for which the variance of S is calculated and a variance threshold set; the sky region is extracted through the gray-level threshold and the variance threshold to obtain Isky.
9. The image defogging method according to claim 6, wherein the value fusion formula of the dark channel image and the sky area image is as follows:
Iin=fix((b*Isky+(255-b)*Idark)/255);
b is a weight parameter used for controlling the effect of finally processing the sky.
10. The image defogging method according to claim 6,
the edge-preserving filtering method is: set a window of radius r; identify the pixels within radius r whose values are similar to the center pixel, with pixel similarity threshold d; if the similarity is greater than the threshold d, apply mean filtering and replace the original pixel value with the mean-filtered value; regions whose similarity is below the threshold d are retained as boundary regions.
11. An image defogging method according to claim 8 or 9, wherein a small mean value filtering is performed on the Isky.
12. An image defogging system is characterized by comprising a front-end module, a defogging module and a rear-end module,
the front-end module comprises a video data collection module and an encoder, and transmits the collected video data and the sensor device type to the encoder for compression;
the back end module comprises a video data coding and collecting module and a decoder and is used for decoding the data obtained by network transmission;
the video signal is input into the front-end module to carry out video data acquisition and coding, the processed data is transmitted to the defogging module through the network, the defogging processing is carried out by the method of claim 6, and the defogged data is transmitted to the rear-end module through the network to be decoded.
13. The image defogging system according to claim 12, wherein the defogging module comprises a dark channel processing module, a sky region processing module and a special defogging processing module;
the dark channel processing module is used for identifying a dark channel image;
the sky region processing module is used for identifying a sky region image;
and the special defogging processing module is used for fusing the image data processed by the dark channel processing module and the sky area processing module and inputting the fused image data into the special defogging processing module for defogging.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911374990.4A CN111192213B (en) | 2019-12-27 | 2019-12-27 | Image defogging self-adaptive parameter calculation method, image defogging method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111192213A true CN111192213A (en) | 2020-05-22 |
CN111192213B CN111192213B (en) | 2023-11-14 |
Family
ID=70707734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911374990.4A Active CN111192213B (en) | 2019-12-27 | 2019-12-27 | Image defogging self-adaptive parameter calculation method, image defogging method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192213B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013141210A (en) * | 2011-12-30 | 2013-07-18 | Hitachi Ltd | Image defogging apparatus, image defogging method, and image processing system |
KR101426484B1 (en) * | 2014-04-29 | 2014-08-06 | 한양대학교 산학협력단 | System for processing spray image and the method for the same |
CN104050162A (en) * | 2013-03-11 | 2014-09-17 | 富士通株式会社 | Data processing method and data processing device |
CN106055580A (en) * | 2016-05-23 | 2016-10-26 | 中南大学 | Radviz-based fuzzy clustering result visualization method |
CN106530246A (en) * | 2016-10-28 | 2017-03-22 | 大连理工大学 | Image dehazing method and system based on dark channel and non-local prior |
WO2017175231A1 (en) * | 2016-04-07 | 2017-10-12 | Carmel Haifa University Economic Corporation Ltd. | Image dehazing and restoration |
CN209118462U (en) * | 2019-01-04 | 2019-07-16 | 江苏弘冉智能科技有限公司 | A kind of visualization phase battle array intelligent fire alarm system of front-end convergence |
US20190287219A1 (en) * | 2018-03-15 | 2019-09-19 | National Chiao Tung University | Video dehazing device and method |
Non-Patent Citations (1)
Title |
---|
YANG XU; REN SHIQING; MIAO FANG: "An Improved Image Defogging Algorithm Based on Dark Channel Prior", 沈阳理工大学学报 (Journal of Shenyang Ligong University), no. 06 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114004834A (en) * | 2021-12-31 | 2022-02-01 | 山东信通电子股份有限公司 | Method, equipment and device for analyzing foggy weather condition in image processing |
CN116110053A (en) * | 2023-04-13 | 2023-05-12 | 济宁能源发展集团有限公司 | Container surface information detection method based on image recognition |
CN116630349A (en) * | 2023-07-25 | 2023-08-22 | 山东爱福地生物股份有限公司 | Straw returning area rapid segmentation method based on high-resolution remote sensing image |
CN116630349B (en) * | 2023-07-25 | 2023-10-20 | 山东爱福地生物股份有限公司 | Straw returning area rapid segmentation method based on high-resolution remote sensing image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108876743B (en) | Image rapid defogging method, system, terminal and storage medium | |
CN116229276B (en) | River entering pollution discharge detection method based on computer vision | |
CN104517110B (en) | The binarization method and system of a kind of image in 2 D code | |
CN112288658A (en) | Underwater image enhancement method based on multi-residual joint learning | |
US20100098331A1 (en) | System and method for segmenting foreground and background in a video | |
CN109740721B (en) | Wheat ear counting method and device | |
CN111192213B (en) | Image defogging self-adaptive parameter calculation method, image defogging method and system | |
CN110097522B (en) | Single outdoor image defogging method based on multi-scale convolution neural network | |
CN108154492B (en) | A kind of image based on non-local mean filtering goes haze method | |
CN111476744A (en) | Underwater image enhancement method based on classification and atmospheric imaging model | |
CN112053298B (en) | Image defogging method | |
CN111369477A (en) | Method for pre-analysis and tool self-adaptation of video recovery task | |
CN115661008A (en) | Image enhancement processing method, device, equipment and medium | |
CN118297837B (en) | Infrared simulator virtual image enhancement system based on image processing | |
CN108898561A (en) | A kind of defogging method, server and the system of the Misty Image containing sky areas | |
CN111859022A (en) | Cover generation method, electronic device and computer-readable storage medium | |
CN117974459A (en) | Low-illumination image enhancement method integrating physical model and priori | |
CN117876233A (en) | Mapping image enhancement method based on unmanned aerial vehicle remote sensing technology | |
CN117496019A (en) | Image animation processing method and system for driving static image | |
CN112532938B (en) | Video monitoring system based on big data technology | |
CN110633705A (en) | Low-illumination imaging license plate recognition method and device | |
CN114418874A (en) | Low-illumination image enhancement method | |
CN114913099A (en) | Method and system for processing video file | |
CN113379631A (en) | Image defogging method and device | |
CN112533024A (en) | Face video processing method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
CB02 | Change of applicant information | | Address after: 311400 4th floor, building 9, Yinhu innovation center, No.9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province; Applicant after: Zhejiang Xinmai Microelectronics Co.,Ltd.; Address before: 311400 4th floor, building 9, Yinhu innovation center, No.9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province; Applicant before: Hangzhou xiongmai integrated circuit technology Co.,Ltd. |
GR01 | Patent grant | | |