CN113628153A - Shadow region detection method and device - Google Patents
- Publication number
- CN113628153A (application number CN202010322714.XA)
- Authority
- CN
- China
- Prior art keywords
- region
- sub
- image
- preset
- subregion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00—Image analysis > G06T7/0002—Inspection of images, e.g. flaw detection
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F18/00—Pattern recognition > G06F18/20—Analysing > G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00—Image analysis > G06T7/10—Segmentation; Edge detection > G06T7/11—Region-based segmentation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a shadow region detection method and device, and relates to the field of computer technology. In one embodiment, the method comprises: dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image features to obtain a plurality of sub-region images; acquiring a sub-region template image of each sub-region from a template image of the region to be detected; determining the feature similarity corresponding to each sub-region according to the preset image features of its sub-region image and sub-region template image; and judging whether the feature similarity meets a preset similarity condition: if so, determining that a shadow exists in the sub-region; otherwise, determining that no shadow exists in the sub-region. The method and device can improve the accuracy of the shadow detection result, are easy to extend and have good robustness.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for detecting a shadow area.
Background
In the prior art, shadow detection is generally performed by segmenting an object into regions and then judging whether each small region belongs to a shadow area by checking whether its features are similar to those of an adjacent non-shadow area. However, this detection method has low accuracy and poor robustness.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for detecting a shadow area, which can improve accuracy of a shadow detection result, and are easy to expand and have better robustness.
According to an aspect of the embodiments of the present invention, there is provided a shadow area detection method, including:
dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images;
acquiring a sub-region template image of each sub-region from the template image of the region to be detected;
determining the feature similarity corresponding to each subregion according to the preset image features of the subregion image of each subregion and the subregion template image;
judging whether the feature similarity meets a preset similarity condition or not; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
Optionally, dividing the scene image of the region to be detected into a plurality of sub-regions by a Quick-shift algorithm according to the image characteristics.
Optionally, before dividing the scene image of the region to be detected into a plurality of sub-regions according to the preset image features, the method further includes: confirming that an abnormal area exists in the scene image;
after obtaining the plurality of sub-region images, filtering the sub-region images without abnormal regions in the plurality of sub-region images.
Optionally, the preset image feature comprises at least one of: texture features, luminance features, color features, gradient features.
Optionally, the preset image features comprise texture features;
determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining first texture information of a sub-region image of the sub-region and second texture information of a sub-region template image of the sub-region by adopting an LBP algorithm; judging the LBP histogram distance between the subregion image of the subregion and the subregion template image according to the first texture information and the second texture information;
judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the distance of the LBP histogram is smaller than a preset distance threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise color features;
determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining a first color vector of a subregion image of the subregion in an RGB space and a second color vector of a subregion template image of the subregion in the RGB space; determining an included angle between the second color vector and the difference vector between the second color vector and the first color vector;
judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the included angle is larger than a preset included angle threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise brightness features;
determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining a first luminance in the LAB space of the sub-region image of the sub-region and a second luminance in the LAB space of the sub-region template image of the sub-region; determining a luminance ratio between the second luminance and the first luminance;
judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the brightness ratio is larger than a preset ratio threshold value or not; if so, judging that the shadow exists in the subarea image of the subarea; otherwise, judging that no shadow exists in the subregion image of the subregion.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for shadow area detection, including:
the region dividing module is used for dividing the scene image of the region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images;
the template acquisition module is used for acquiring a sub-region template image of each sub-region from the template image of the region to be detected;
the similarity determining module is used for determining the feature similarity corresponding to each subregion according to the preset image features of the subregion image and the subregion template image of each subregion;
the shadow judging module is used for judging whether the feature similarity meets a preset similarity condition or not; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
Optionally, the region dividing module divides the scene image of the region to be detected into a plurality of sub-regions according to the image features by a Quick-shift algorithm.
Optionally, the region dividing module is further configured to: before dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image characteristics, confirming that an abnormal region exists in the scene image; and after obtaining a plurality of subarea images, filtering the subarea images without abnormal areas in the plurality of subarea images.
Optionally, the preset image feature comprises at least one of: texture features, luminance features, color features, gradient features.
Optionally, the preset image features comprise texture features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining first texture information of a sub-region image of the sub-region and second texture information of a sub-region template image of the sub-region by adopting an LBP algorithm; judging the LBP histogram distance between the subregion image of the subregion and the subregion template image according to the first texture information and the second texture information;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the distance of the LBP histogram is smaller than a preset distance threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise color features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining a first color vector of a subregion image of the subregion in an RGB space and a second color vector of a subregion template image of the subregion in the RGB space; determining an included angle between the second color vector and the difference vector between the second color vector and the first color vector;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the included angle is larger than a preset included angle threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise brightness features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining a first luminance in the LAB space of the sub-region image of the sub-region and a second luminance in the LAB space of the sub-region template image of the sub-region; determining a luminance ratio between the second luminance and the first luminance;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the brightness ratio is larger than a preset ratio threshold value or not; if so, judging that the shadow exists in the subarea image of the subarea; otherwise, judging that no shadow exists in the subregion image of the subregion.
According to a third aspect of embodiments of the present invention, there is provided an electronic device for shadow area detection, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits: by dividing the scene image of the region to be detected into a plurality of sub-region images and comparing the preset image characteristics of each sub-region image with the image characteristics of the corresponding region in the template image collected in advance, the accuracy of the shadow detection result can be improved, the expansion is easy, and the robustness is good.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main flow of a method of shadow area detection of an embodiment of the invention;
FIG. 2 is a schematic representation of a Mean-shift vector in an alternative embodiment of the invention;
FIG. 3 is a schematic representation of another Mean-shift vector in an alternative embodiment of the invention;
FIG. 4 is a graphical illustration of the convergence of the Mean-shift algorithm in an alternative embodiment of the present invention;
FIG. 5 is a schematic data flow diagram illustrating a method for detecting whether a shadow exists in a monitoring image of a fire fighting access according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of detecting whether there is a shadow in a monitoring image of a fire fighting access by applying the method of the embodiment of the present invention;
FIG. 7 is a schematic diagram of the main blocks of an apparatus for shadow region detection according to an embodiment of the present invention;
FIG. 8 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 9 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Objects present in the area to be detected can themselves cast shadows under lighting conditions. In addition, when a foreign object exists in the area to be detected, it can partially occlude the area. Both situations are often mistaken for an abnormal condition. The embodiment of the invention can accurately identify whether the cause of the abnormal condition is a shadow or an occlusion.
According to an aspect of an embodiment of the present invention, there is provided a method of shadow region detection.
Fig. 1 is a schematic diagram of a main flow of a shadow area detection method according to an embodiment of the present invention, and as shown in fig. 1, the shadow area detection method includes: step S101, step S102, step S103, and step S104.
In step S101, a scene image of a region to be detected is divided into a plurality of sub-regions according to preset image features, so as to obtain a plurality of sub-region images.
The main purpose of this step is to divide the scene image into a plurality of sub-regions with similar preset image features. The preset image features reflect the feature information of the image, and the feature content and the number of features can be set according to the actual situation. Optionally, the preset image features comprise at least one of: texture features, brightness features, color features, and gradient features (in general, if a sub-region is a shadow region, the template image and the scene image corresponding to that sub-region should have gradient information of similar shape but weaker intensity). The preset image features may of course also include other features. For example, if the position and direction of the light source can be learned in advance from multiple frames of pictures, and an object occludes position A, then a shadow region is very likely to appear near position A along the light-source direction; combining this with other image features (texture, brightness, color, and the like) yields a detection result with higher accuracy. A sketch of a gradient comparison of this kind is given below as an illustration.
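The following Python sketch shows one possible way to compare the gradient "shape" of a sub-region image with that of its template; the function name, the normalized cross-correlation measure and the use of the Sobel operator are illustrative assumptions of this description, not requirements of the patent.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel

def gradient_similarity(sub_img, sub_tpl):
    """Compare the gradient shape of a sub-region image and its template.

    Returns the normalized cross-correlation of the two Sobel magnitude maps;
    values close to 1 mean similarly shaped gradients, as expected when the
    sub-region is shadowed rather than occluded.
    """
    g1 = sobel(rgb2gray(sub_img))   # gradient magnitude of the scene sub-region
    g2 = sobel(rgb2gray(sub_tpl))   # gradient magnitude of the template sub-region
    g1 = (g1 - g1.mean()) / (g1.std() + 1e-8)
    g2 = (g2 - g2.mean()) / (g2.std() + 1e-8)
    return float((g1 * g2).mean())
```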
The mode of dividing the scene image of the region to be detected into a plurality of sub-regions can be selectively set according to actual conditions, as long as each divided sub-region image has similar preset image characteristics.
In some embodiments, the partitioning is performed using the Mean-Shift algorithm. Illustratively, given n sample points x_i in a d-dimensional space R^d and an arbitrary point x in that space, the basic form of the Mean-Shift vector is defined as:

M_h(x) = (1/k) * Σ_{x_i ∈ S_k} (x_i − x)

where S_k is a high-dimensional sphere region of radius h, i.e. the set of points y satisfying the following relationship:

S_k(x) = { y : (y − x)^T (y − x) < h^2 }

and k denotes the number of the n sample points that fall within the region S_k. The arrows in FIG. 2 represent Mean-Shift vectors.
And then, a high-dimensional sphere area is made by taking the end point of the Mean-Shift vector as the center of a circle. As shown in fig. 3, the above steps are repeated to obtain a new Mean-Shift vector. This is repeated until convergence to the place where the probability density is the greatest, i.e., the most dense place, as shown in fig. 4.
A kernel function is then added to the basic Mean-Shift vector, and the Mean-Shift vector becomes:

M_h(x) = [ Σ_{i=1..n} G((x_i − x)/h) * (x_i − x) ] / [ Σ_{i=1..n} G((x_i − x)/h) ]

To find the maximum of the underlying density estimate, the expression is differentiated, and the Mean-Shift vector is taken along the resulting gradient direction. If the kernel function G is a Gaussian function, the new circle-center coordinate x is given by the following equation (1):

x = [ Σ_{i=1..n} x_i * exp(−||x_i − x||^2 / (2h^2)) ] / [ Σ_{i=1..n} exp(−||x_i − x||^2 / (2h^2)) ]    (1)
the specific flow of the Mean-Shift algorithm is as follows:
(1) select a point x in the space as the circle center and h as the radius to form a high-dimensional sphere, and record all sample points falling inside the sphere as x_i;

(2) compute the Mean-Shift vector M_h; if its modulus is less than or equal to the set value ε, the process ends. Otherwise, calculate a new circle center using equation (1) above and return to step (1). A minimal sketch of this iteration is given below.
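The following sketch iterates the Gaussian-kernel update of equation (1) until the shift modulus falls below ε; the function name, parameter names and stopping tolerance are illustrative choices rather than part of the patent.

```python
import numpy as np

def mean_shift_mode(points, x0, h, eps=1e-3, max_iter=100):
    """Iterate the Gaussian-kernel Mean-Shift update (equation (1)) starting from x0.

    points : (n, d) array of sample points
    x0     : (d,) initial circle center
    h      : window radius (bandwidth)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d2 = np.sum((points - x) ** 2, axis=1)                 # squared distances to the center
        w = np.exp(-d2 / (2.0 * h ** 2))                       # Gaussian kernel weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()    # equation (1): new circle center
        if np.linalg.norm(x_new - x) <= eps:                   # shift modulus <= epsilon: stop
            return x_new
        x = x_new
    return x
```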
The application of the Mean-Shift algorithm to image clustering generally has the following steps:
(1) set the feature space of the image (e.g., convert the image to LAB space, take the three components L, A and B as three components of the feature vector, and additionally take the pixel position (x, y) as the other two components of the feature vector);

(2) set the window size for each point (i.e., the radius h of the high-dimensional sphere, which defines the neighborhood range used when computing the mean shift);

(3) run the Mean-Shift algorithm and assign points whose computation converges to the same mode (i.e., the same basin of attraction) to the same class; each class yields a sub-region. A sketch of these clustering steps is given below.
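One way to realize these three steps is sketched below, assuming a scikit-learn MeanShift estimator and scikit-image color conversion; the bandwidth and the spatial scaling factor are illustrative parameters not specified in the patent, and clustering every pixel is shown only for clarity (it is slow on large images).

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import MeanShift

def mean_shift_segments(image_rgb, bandwidth=10.0, spatial_scale=0.5):
    """Cluster pixels on (L, A, B, x, y) feature vectors and return a label map."""
    lab = rgb2lab(image_rgb)                             # step (1): LAB feature space
    rows, cols = np.mgrid[0:lab.shape[0], 0:lab.shape[1]]
    feats = np.column_stack([
        lab.reshape(-1, 3),                              # L, A, B components
        spatial_scale * rows.ravel(),                    # pixel position as extra component
        spatial_scale * cols.ravel(),                    # pixel position as extra component
    ])
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)  # step (2): window size = bandwidth
    labels = ms.fit_predict(feats)                       # step (3): same mode -> same sub-region
    return labels.reshape(lab.shape[:2])
```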
When the Mean-Shift algorithm is adopted for region division, the number of classes does not need to be specified in advance, no parameters other than the window size need to be set, and the method is convenient to implement.
In other embodiments, the scene image of the region to be detected is divided into a plurality of sub-regions by the Quick-shift algorithm according to the image features. The Quick-shift algorithm is an accelerated variant of the Mean-Shift algorithm: it does not need gradients to find a probability density mode, and each iteration simply moves each point to the nearest point that increases the probability density, so the algorithm is simple and its computation is small. A possible call is sketched below.
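For instance, scikit-image ships a quickshift superpixel function that realizes this idea; the parameter values below are illustrative, and scene_image_rgb is assumed to hold the RGB scene image of the region to be detected.

```python
from skimage.segmentation import quickshift

# Divide the scene image into sub-regions with similar color/position features.
# `ratio` balances color against spatial distance, `kernel_size` sets the density
# estimation window, and `max_dist` limits how far a point may be linked.
segments = quickshift(scene_image_rgb, ratio=0.8, kernel_size=5, max_dist=10)
# `segments` is an integer label map; pixels sharing a label form one sub-region image.
```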
In the embodiment of the present invention, before dividing the scene image of the region to be detected into a plurality of sub-regions according to the preset image features, the method may further include: confirming that an abnormal region exists in the scene image. After obtaining the plurality of sub-region images, the method may further include filtering out those sub-region images in which no abnormal region exists. This reduces the amount of computation in the subsequent steps and improves the detection speed; a possible realization of the filtering is sketched below.
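A minimal sketch of this filtering step, assuming the abnormal region is available as a binary mask and the sub-regions as an integer label map (both assumptions of this sketch, not requirements of the patent):

```python
import numpy as np

def keep_abnormal_subregions(segments, abnormal_mask):
    """Keep only the sub-region labels that overlap the detected abnormal region.

    segments      : integer label map from the region-division step
    abnormal_mask : boolean map, True where the anomaly detector fired
    """
    labels = np.unique(segments[abnormal_mask])   # sub-regions touching the abnormal region
    keep = np.isin(segments, labels)              # pixel mask of the retained sub-regions
    return labels, keep
```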
In step S102, a sub-region template image of each sub-region is acquired from the template image of the region to be detected.
The template image is an image template of the area to be detected in a non-shadow and non-blocking state, which is acquired in advance. And dividing the template image to obtain a plurality of sub-region template images. The division mode of the template image is the same as the division mode of the scene image of the region to be detected, and details are not repeated here.
In step S103, a feature similarity corresponding to each sub-region is determined according to preset image features of the sub-region image of each sub-region and the sub-region template image. In step S104, it is determined whether the feature similarity satisfies a preset similarity condition; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
The feature similarity is used for indicating the similarity degree between the subregion image and the subregion template image, and the higher the similarity degree is, the more probable the shadow exists in the subregion image. The measurement index of the feature similarity can be selectively set according to the actual situation, such as Euclidean distance, cosine similarity, and the like. When there is more than one preset image feature, the similarity metric indexes corresponding to the preset image features may be the same or different.
Optionally, the preset image features comprise texture features. Determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining first texture information of a sub-region image of the sub-region and second texture information of a sub-region template image of the sub-region by adopting an LBP algorithm; and judging the LBP histogram distance between the subregion image of the subregion and the subregion template image according to the first texture information and the second texture information. Judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the distance of the LBP histogram is smaller than a preset distance threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
If the sub-region is a shadow region, its sub-region image Z_S and its sub-region template image Z_NS should have the same texture, so the distance between the LBP histogram of Z_S and the LBP histogram of Z_NS should be less than the preset distance threshold. Conversely, if Z_S is an occlusion region, the textures of the two regions should differ, and the corresponding LBP histogram distance should be greater than the preset distance threshold.
The LBP (Local Binary Pattern) algorithm is used to determine the degree of texture similarity between the sub-region image and the sub-region template image; it is fast to compute, insensitive to illumination and effective, which improves the accuracy of describing and comparing the texture of shadow and non-shadow regions. A sketch of this texture check is given below.
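The following sketch uses the uniform LBP from scikit-image and a simple L1 histogram distance; the distance measure, the LBP parameters and the threshold are illustrative choices, as the patent only requires that the LBP histogram distance be below a preset distance threshold.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.util import img_as_ubyte

def lbp_histogram(img_rgb, P=8, R=1.0):
    """Normalized uniform-LBP histogram of a sub-region image."""
    gray = img_as_ubyte(rgb2gray(img_rgb))
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                    # uniform patterns take values 0..P+1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def texture_is_similar(sub_img, sub_tpl, dist_threshold=0.2):
    """True if the L1 distance between the two LBP histograms is below the threshold."""
    d = np.abs(lbp_histogram(sub_img) - lbp_histogram(sub_tpl)).sum()
    return d < dist_threshold
```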
Optionally, the preset image features comprise color features. Determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining a first color vector of a subregion image of the subregion in an RGB space and a second color vector of a subregion template image of the subregion in the RGB space; an included angle between the second color vector and the difference vector between the second color vector and the first color vector is determined. Judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the included angle is larger than a preset included angle threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
The color similarity is judged based on the reflection theory, so that the accuracy and the robustness of a judgment result can be improved.
Illustratively, the reflection theory is as follows:

I_j = (t_j * cos(θ) * L_d + L_e) * R_j

where I_j is the color vector of point j in RGB space, L_d and L_e represent the direct light and the diffuse (ambient) light in the environment, θ is the angle between the direct light and the normal of the object surface, R_j is the reflectance, and t_j indicates whether the point lies in a shadow or non-shadow region: t_j = 0 denotes a shadow region and t_j = 1 a non-shadow region. To further improve the robustness of the detection result, the RGB color median of each pair of sub-regions to be judged may be recorded for analysis. Let I_NS denote the color median of the sub-region template image and I_S the color median of the sub-region image. If the sub-region is a shadow region, then:

I_D = I_NS − I_S = (cos(θ) * L_d) * R_median

where R_median represents the median reflectance within the sub-region. If there were no diffuse reflection in the environment, the angle between I_D and I_NS would theoretically be 0; since a certain proportion of diffuse light exists in the environment, a small angle exists between I_D and I_NS. Whether Z_S is a shadow region can therefore be judged by setting an angle threshold: specifically, when the angle between I_D and I_NS is larger than the preset angle threshold, the sub-region is judged to be a shadow region; otherwise, it is judged not to be a shadow region. A sketch of this color check is given below.
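The sketch below computes the angle between I_D and I_NS from per-channel RGB medians; the degree-valued threshold mentioned in the comment is hypothetical, and the decision direction follows the criterion stated above.

```python
import numpy as np

def color_angle_deg(sub_img, sub_tpl):
    """Angle (degrees) between I_D = I_NS - I_S and I_NS, using per-channel RGB medians."""
    i_s = np.median(sub_img.reshape(-1, 3), axis=0)    # color median of the sub-region image
    i_ns = np.median(sub_tpl.reshape(-1, 3), axis=0)   # color median of the template image
    i_d = i_ns - i_s                                   # difference vector I_D
    cos_a = np.dot(i_d, i_ns) / (np.linalg.norm(i_d) * np.linalg.norm(i_ns) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Per the criterion above, the similarity condition is met when this angle exceeds
# a preset angle threshold (e.g. a hypothetical value of a few degrees).
```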
Optionally, the preset image features comprise brightness features. Determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining a first luminance in the LAB space of the sub-region image of the sub-region and a second luminance in the LAB space of the sub-region template image of the sub-region; a luminance ratio between the second luminance and the first luminance is determined. Judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the brightness ratio is larger than a preset ratio threshold value or not; if so, judging that the shadow exists in the subarea image of the subarea; otherwise, judging that no shadow exists in the subregion image of the subregion.
The LAB (Lab color space) color space is a color-opponent space: the dimension L represents luminance, while A and B represent the color-opponent dimensions. Judging the brightness similarity in LAB space improves the accuracy and robustness of the judgment result. Based on the nonlinearly compressed color space coordinates of CIE XYZ (a standard chromaticity system), the sub-region image and the sub-region template image are each converted into LAB space, and, to further improve the robustness of the detection result, the medians of their L components in LAB space, denoted L_S and L_NS respectively, are used to judge the similarity. If a sub-region is a shadow region, its corresponding ratio L_NS/L_S should be greater than a predetermined ratio threshold T, where T > 1. A sketch of this brightness check is given below.
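A minimal sketch of this brightness check; the rgb2lab conversion and the example threshold T are illustrative choices consistent with the description above.

```python
import numpy as np
from skimage.color import rgb2lab

def luminance_ratio(sub_img, sub_tpl):
    """Ratio L_NS / L_S between the median LAB luminance of the template
    sub-region and that of the scene sub-region."""
    l_s = np.median(rgb2lab(sub_img)[..., 0])    # median L of the scene sub-region image
    l_ns = np.median(rgb2lab(sub_tpl)[..., 0])   # median L of the sub-region template image
    return l_ns / (l_s + 1e-8)

def brightness_indicates_shadow(sub_img, sub_tpl, T=1.2):
    """Shadow-consistent if the template is brighter than the scene by more than T (T > 1)."""
    return luminance_ratio(sub_img, sub_tpl) > T
```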
The following takes the field of fire fighting access monitoring as an example and exemplarily describes the embodiment of the invention with reference to FIGS. 5 and 6. A fire fighting access is a passage for firefighter rescue and for evacuating trapped persons. It is mainly used when a fire occurs, must be kept clear 24 hours a day, and serves as a dedicated passage for fire fighting, rescue and the evacuation of trapped people by firefighters when a fire occurs.
FIG. 5 is a schematic data flow diagram of applying the method of the embodiment of the present invention to detect whether a shadow exists in a monitoring image of a fire fighting access. As shown in FIG. 5, a camera acquires a scene image of the fire fighting access and a template image in a non-occluded, non-shadowed state (the template image is usually acquired in advance), and the scene image and the template image are transmitted to a monitoring center through a network. An abnormal area detection module judges whether the scene image contains an abnormality. If not, the process ends. Otherwise, a shadow area detection module (i.e., an execution body implementing the method of the embodiment of the invention) detects whether a shadow area exists. When a shadow area is judged to exist, it is further judged whether the scene image is still abnormal after the shadow area is removed. If so, an alarm is raised.
The input to the shadow region detection module is the ROI of the image to be judged, i.e., the sub-region images in which occlusion was detected, as shown in FIG. 6.
It is judged whether all sub-region images have been traversed. If so, the shadow areas are merged and the result is output; otherwise, the color-space similarity is calculated based on the reflection theory.
It is then judged whether the similarity is greater than the threshold. If not, jump back to the step of judging whether all sub-region images have been traversed; otherwise, convert the sub-region image into LAB space, compute the median of the L component, and judge whether the ratio between the median L_template of the L component of the sub-region template image and the median L_feature of the L component of the sub-region image is greater than the ratio threshold. If not, jump back to the step of judging whether all sub-region images have been traversed. If so, describe the texture of the sub-region using the LBP algorithm.
It is then determined whether the distance between the LBP histogram of the sub-region and that of the sub-region template image is smaller than the distance threshold. If so, the current sub-region is marked as a shadow region; otherwise, jump back to the step of judging whether all sub-region images have been traversed. Putting these checks together, the cascade can be sketched as shown below.
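The decision cascade of FIG. 6 might look like the sketch below; the helper functions are the illustrative ones from the earlier sketches and all thresholds are hypothetical.

```python
def detect_shadow_subregions(sub_images, sub_templates,
                             angle_thr=3.0, ratio_thr=1.2, lbp_thr=0.2):
    """Mark a sub-region as a shadow only if it passes the color, brightness
    and texture checks against its template sub-region (illustrative cascade)."""
    shadow_flags = []
    for sub_img, sub_tpl in zip(sub_images, sub_templates):
        is_shadow = (
            color_angle_deg(sub_img, sub_tpl) > angle_thr         # color check (reflection theory)
            and luminance_ratio(sub_img, sub_tpl) > ratio_thr     # brightness check in LAB space
            and texture_is_similar(sub_img, sub_tpl, lbp_thr)     # texture check (LBP histograms)
        )
        shadow_flags.append(is_shadow)
    return shadow_flags    # flagged sub-regions are afterwards merged into the shadow area
```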
Currently, the industry mostly uses a visual monitoring scheme to ensure that a fire fighting access remains clear. In actual use, however, the presence of shadow areas easily causes false alarms. The method and device make full use of the template information: by comparing the similarity of features such as color, brightness and texture between each sub-region of the scene image to be judged and the non-shadow region at the same position in the template image, whether each sub-region is a shadow region can be judged more accurately. Using the LBP algorithm to judge texture similarity further provides a more robust and accurate judgment result.
In addition to the fire monitoring field mentioned in the above example, the method of the embodiment of the present invention may also be applied to other fields, for example, a series of scenes using RGB images, such as image restoration, video monitoring, vehicle monitoring in a complex background (such as a complex scene of cloudy day, haze day, building obstruction, etc.), license plate recognition, and the like.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for implementing the above method.
Fig. 7 is a schematic diagram of main blocks of an apparatus for shadow area detection according to an embodiment of the present invention, and as shown in fig. 7, the apparatus 700 for shadow area detection includes:
the region dividing module 701 divides the scene image of the region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images;
a template obtaining module 702, configured to obtain a sub-region template image of each sub-region from the template image of the region to be detected;
the similarity determining module 703 is configured to determine a feature similarity corresponding to each sub-region according to a preset image feature of the sub-region image of each sub-region and the sub-region template image;
a shadow determining module 704, configured to determine whether the feature similarity satisfies a preset similarity condition; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
Optionally, the region dividing module divides the scene image of the region to be detected into a plurality of sub-regions according to the image features by a Quick-shift algorithm.
Optionally, the region dividing module is further configured to: before dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image characteristics, confirming that an abnormal region exists in the scene image; and after obtaining a plurality of subarea images, filtering the subarea images without abnormal areas in the plurality of subarea images.
Optionally, the preset image feature comprises at least one of: texture features, luminance features, color features, gradient features.
Optionally, the preset image features comprise texture features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining first texture information of a sub-region image of the sub-region and second texture information of a sub-region template image of the sub-region by adopting an LBP algorithm; judging the LBP histogram distance between the subregion image of the subregion and the subregion template image according to the first texture information and the second texture information;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the distance of the LBP histogram is smaller than a preset distance threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise color features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining a first color vector of a subregion image of the subregion in an RGB space and a second color vector of a subregion template image of the subregion in the RGB space; determining an included angle between the second color vector and the difference vector between the second color vector and the first color vector;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the included angle is larger than a preset included angle threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
Optionally, the preset image features comprise brightness features;
the similarity determining module determines the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, and the feature similarity determining module comprises: determining a first luminance in the LAB space of the sub-region image of the sub-region and a second luminance in the LAB space of the sub-region template image of the sub-region; determining a luminance ratio between the second luminance and the first luminance;
the shadow judging module judges whether the feature similarity meets a preset similarity condition or not, and comprises the following steps: judging whether the brightness ratio is larger than a preset ratio threshold value or not; if so, judging that the shadow exists in the subarea image of the subarea; otherwise, judging that no shadow exists in the subregion image of the subregion.
According to a third aspect of embodiments of the present invention, there is provided an electronic device for shadow area detection, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
Fig. 8 shows an exemplary system architecture 800 to which the method of shadow region detection or the apparatus of shadow region detection of embodiments of the invention may be applied.
As shown in fig. 8, the system architecture 800 may include terminal devices 801, 802, 803, a network 804, and a server 805. The network 804 serves to provide a medium for communication links between the terminal devices 801, 802, 803 and the server 805. Network 804 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 801, 802, 803 to interact with a server 805 over a network 804 to receive or send messages or the like. The terminal devices 801, 802, 803 may have installed thereon various messenger client applications such as a monitoring class application, a shopping class application, a web browser application, a search class application, an instant messaging tool, a mailbox client, social platform software, and the like (by way of example only).
The terminal devices 801, 802, 803 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 805 may be a server providing various services, such as a background management server (for example only) providing support for monitoring type websites browsed by users using the terminal devices 801, 802, 803. The background management server may analyze and perform other processing on the received data such as the shadow detection request, and feed back a processing result (e.g., a shadow area — just an example) to the terminal device.
It should be noted that the method for detecting a shadow area provided by the embodiment of the present invention is generally executed by the server 805, and accordingly, the device for detecting a shadow area is generally disposed in the server 805.
It should be understood that the number of terminal devices, networks, and servers in fig. 8 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 9, shown is a block diagram of a computer system 900 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the system 900 are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output section 907 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 910 as necessary, so that a computer program read out therefrom is mounted into the storage section 908 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The above-described functions defined in the system of the present invention are executed when the computer program is executed by a Central Processing Unit (CPU) 901.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising: the region dividing module is used for dividing the scene image of the region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images; the template acquisition module is used for acquiring a sub-region template image of each sub-region from the template image of the region to be detected; the similarity determining module is used for determining the feature similarity corresponding to each subregion according to the preset image features of the subregion image and the subregion template image of each subregion; the shadow judging module is used for judging whether the feature similarity meets a preset similarity condition or not; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region. The names of the modules do not form a limitation on the modules themselves under certain conditions, for example, the region division module may also be described as a "module for determining whether the feature similarity satisfies a preset similarity condition".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images; acquiring a sub-region template image of each sub-region from the template image of the region to be detected; determining the feature similarity corresponding to each subregion according to the preset image features of the subregion image of each subregion and the subregion template image; judging whether the feature similarity meets a preset similarity condition or not; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
According to the technical scheme of the embodiment of the invention, the scene image of the area to be detected is divided into a plurality of sub-area images, and the preset image characteristics of each sub-area image are compared with the image characteristics of the corresponding area in the template image collected in advance, so that the accuracy of the shadow detection result can be improved, the expansion is easy, and the robustness is good.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method of shadow region detection, comprising:
dividing a scene image of a region to be detected into a plurality of sub-regions according to preset image characteristics to obtain a plurality of sub-region images;
acquiring a sub-region template image of each sub-region from the template image of the region to be detected; the template image is an image template of the area to be detected in a shadow-free and shielding-free state;
determining the feature similarity corresponding to each subregion according to the preset image features of the subregion image of each subregion and the subregion template image;
judging whether the feature similarity meets a preset similarity condition or not; if yes, determining that the shadow exists in the sub-area; otherwise, determining that no shadow is present in the sub-region.
2. The method of claim 1, wherein the scene image of the region to be detected is divided into a plurality of sub-regions by a Quick-shift algorithm according to the image features.
3. The method of claim 1, wherein before dividing the scene image of the region to be detected into a plurality of sub-regions according to the preset image characteristics, the method further comprises: confirming that an abnormal area exists in the scene image;
after obtaining the plurality of sub-region images, filtering the sub-region images without abnormal regions in the plurality of sub-region images.
4. The method of claim 1, wherein the preset image features comprise at least one of: texture features, luminance features, color features, gradient features.
5. The method of claim 4, wherein the predetermined image features comprise texture features;
determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining first texture information of a sub-region image of the sub-region and second texture information of a sub-region template image of the sub-region by adopting an LBP algorithm; judging the LBP histogram distance between the subregion image of the subregion and the subregion template image according to the first texture information and the second texture information;
judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the distance of the LBP histogram is smaller than a preset distance threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
6. The method of claim 4, wherein the preset image features comprise color features;
determining the feature similarity corresponding to each sub-region according to the preset image features of the sub-region image of each sub-region and the sub-region template image, wherein the feature similarity comprises the following steps: determining a first color vector of a subregion image of the subregion in an RGB space and a second color vector of a subregion template image of the subregion in the RGB space; determining an included angle between the second color vector and the difference vector between the second color vector and the first color vector;
judging whether the feature similarity meets a preset similarity condition or not, including: judging whether the included angle is larger than a preset included angle threshold value or not; if so, judging that the feature similarity meets a preset similarity condition; otherwise, judging that the feature similarity does not meet the preset similarity condition.
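Claim 6 measures the angle, in RGB space, between the difference vector (second color vector minus first color vector) and the second color vector. The sketch below forms each color vector as the mean RGB value over the sub-region, which is an assumption; the claim does not fix how the per-region color vector is built.

```python
import numpy as np

def color_vector_angle(scene_rgb, template_rgb, mask):
    """Angle, in degrees, between (template - scene) and the template color vector."""
    first = scene_rgb[mask].reshape(-1, 3).mean(axis=0)     # first color vector (scene)
    second = template_rgb[mask].reshape(-1, 3).mean(axis=0) # second color vector (template)
    diff = second - first
    denom = np.linalg.norm(diff) * np.linalg.norm(second)
    if denom == 0.0:
        return 0.0                       # identical vectors: no meaningful angle
    cos_angle = np.clip(np.dot(diff, second) / denom, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))
```

The returned angle is the quantity that claim 6 compares against the preset included-angle threshold.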
7. The method of claim 4, wherein the preset image features comprise luminance features;
determining the feature similarity for each sub-region according to the preset image features of the sub-region image and the sub-region template image of the sub-region comprises: determining a first luminance of the sub-region image of the sub-region in LAB space and a second luminance of the sub-region template image of the sub-region in LAB space; and determining a luminance ratio of the second luminance to the first luminance;
and judging whether the feature similarity meets the preset similarity condition comprises: judging whether the luminance ratio is larger than a preset ratio threshold; if so, judging that a shadow exists in the sub-region image of the sub-region; otherwise, judging that no shadow exists in the sub-region image of the sub-region.
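Claim 7 relies on a luminance ratio in LAB space. A minimal sketch, assuming scikit-image's rgb2lab conversion and taking the mean of the L channel over the sub-region as its luminance:

```python
from skimage.color import rgb2lab

def luminance_ratio(scene_rgb, template_rgb, mask):
    """Ratio of the template (second) luminance to the scene (first) luminance."""
    first = rgb2lab(scene_rgb)[..., 0][mask].mean()      # first luminance (scene)
    second = rgb2lab(template_rgb)[..., 0][mask].mean()  # second luminance (template)
    return second / (first + 1e-6)
```

A ratio above the preset ratio threshold of claim 7 indicates that the sub-region is markedly darker than its shadow-free template, which the claim treats as the presence of a shadow.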
8. An apparatus for shadow region detection, comprising:
a region dividing module configured to divide a scene image of a region to be detected into a plurality of sub-regions according to preset image features, to obtain a plurality of sub-region images;
a template acquisition module configured to acquire a sub-region template image of each sub-region from a template image of the region to be detected;
a similarity determining module configured to determine a feature similarity for each sub-region according to the preset image features of the sub-region image and the sub-region template image of the sub-region;
a shadow judging module configured to judge whether the feature similarity meets a preset similarity condition, determine that a shadow exists in the sub-region if so, and determine that no shadow exists in the sub-region otherwise.
9. An electronic device for shadow region detection, comprising:
one or more processors;
a storage device storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 7.
10. A computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010322714.XA CN113628153A (en) | 2020-04-22 | 2020-04-22 | Shadow region detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113628153A (en) | 2021-11-09 |
Family
ID=78376340
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010322714.XA Pending CN113628153A (en) | 2020-04-22 | 2020-04-22 | Shadow region detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113628153A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101266132A (en) * | 2008-04-30 | 2008-09-17 | 西安工业大学 | Running disorder detection method based on MPFG movement vector |
CN101739551A (en) * | 2009-02-11 | 2010-06-16 | 北京智安邦科技有限公司 | Method and system for identifying moving objects |
JP2010237976A (en) * | 2009-03-31 | 2010-10-21 | Kyushu Institute Of Technology | Light source information obtaining device, shading detection device, shading removal device, and those methods and programs |
CN102298781A (en) * | 2011-08-16 | 2011-12-28 | 长沙中意电子科技有限公司 | Motion shadow detection method based on color and gradient characteristics |
CN105469054A (en) * | 2015-11-25 | 2016-04-06 | 天津光电高斯通信工程技术股份有限公司 | Model construction method of normal behaviors and detection method of abnormal behaviors |
KR20160037481A (en) * | 2014-09-29 | 2016-04-06 | 에스케이텔레콤 주식회사 | Shadow removal method for image recognition and apparatus using the same |
CN107146210A (en) * | 2017-05-05 | 2017-09-08 | 南京大学 | A kind of detection based on image procossing removes shadow method |
CN107767390A (en) * | 2017-10-20 | 2018-03-06 | 苏州科达科技股份有限公司 | The shadow detection method and its system of monitor video image, shadow removal method |
CN109544605A (en) * | 2018-05-23 | 2019-03-29 | 安徽大学 | Moving shadow detection method based on space-time relation modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||