[go: up one dir, main page]
More Web Proxy on the site http://driver.im/

CN118279841B - Intelligent expressway inspection escape-proof monitoring system based on data fusion - Google Patents

Intelligent expressway inspection escape-proof monitoring system based on data fusion Download PDF

Info

Publication number
CN118279841B
CN118279841B CN202410711303.8A CN202410711303A CN118279841B CN 118279841 B CN118279841 B CN 118279841B CN 202410711303 A CN202410711303 A CN 202410711303A CN 118279841 B CN118279841 B CN 118279841B
Authority
CN
China
Prior art keywords
image
suspected vehicle
processed
suspected
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410711303.8A
Other languages
Chinese (zh)
Other versions
CN118279841A (en
Inventor
张平
刘继峰
胡建斌
王周洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mcc Guizhou Construction Investment Development Co ltd
Original Assignee
Mcc Guizhou Construction Investment Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mcc Guizhou Construction Investment Development Co ltd filed Critical Mcc Guizhou Construction Investment Development Co ltd
Priority to CN202410711303.8A priority Critical patent/CN118279841B/en
Publication of CN118279841A publication Critical patent/CN118279841A/en
Application granted granted Critical
Publication of CN118279841B publication Critical patent/CN118279841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the field of image enhancement processing, in particular to an intelligent expressway inspection escape prevention monitoring system based on data fusion, which collects vehicle images and preprocesses the collected vehicle images to obtain images to be processed; identifying suspected vehicle areas from the image to be processed, and determining the initial value of a fuzzy core of each suspected vehicle area; determining an adjustment coefficient of a blur kernel initial value of each suspected vehicle region in the image to be processed according to the distance evaluation parameter of each suspected vehicle region close to the camera and the difference between the image to be processed and the adjacent frame image of the image to be processed; adjusting the initial value of the fuzzy kernel corresponding to each suspected vehicle area based on the adjustment coefficient corresponding to each suspected vehicle area, so as to obtain the preset value of the fuzzy kernel corresponding to each suspected vehicle area; deblurring the corresponding suspected vehicle region based on the fuzzy core preset value, so as to obtain an image to be detected; and carrying out vehicle identification based on the image to be detected.

Description

Intelligent expressway inspection escape-proof monitoring system based on data fusion
Technical Field
The application relates to the field of image enhancement processing, in particular to an intelligent expressway inspection escape-proof monitoring system based on data fusion.
Background
Along with the development of leading edge technologies such as 5G technology and intelligent technology, the highway inspection escape prevention system is further developed towards automation and intelligence for guaranteeing the fairness, disclosure and high efficiency of highway charging management. The current common intelligent expressway inspection escape-proof monitoring system generally comprises technologies of data fusion analysis, intelligent video monitoring, path restoration and the like.
The information of a plurality of data sources can be integrated together during data fusion analysis, so that the system can monitor and analyze the conditions of vehicles on the expressway more comprehensively. Because the information of a plurality of data sources is comprehensively analyzed, the accuracy of the system for identifying abnormal conditions and fee evasion vehicles is improved, and the false alarm rate is reduced.
At present, although data fusion analysis has many advantages, it also has some disadvantages due to the operation of fusion. For example, the acquired image is too blurred, which results in inaccurate and reliable analysis results after data fusion and even partial information loss. The AI auditing system distinguishes each vehicle through the label, but the system cannot identify the vehicle because the image before data fusion is too fuzzy, so that deblurring treatment is required to be carried out on the image before data fusion.
The commonly used deblurring algorithms in the prior art are divided into two classes, one class is based on blind deconvolution and the other class is based on non-blind deconvolution, the difference between the two is whether the size of the blur kernel is known. The algorithm based on the non-blind deblurring convolution comprisesThe filter, the inverse filter and the reverse projection algorithm are suitable for knowing the blur kernel, and can restore the image more accurately. But as such, it is necessary to know the exact blur kernel situation, which is difficult to achieve in real-world applications, and which is sensitive to noise and requires additional processing. The blind deconvolution-based algorithm includes inverse filtering-based algorithmsAlgorithm based on least squaresDeconvolution algorithmAlgorithms that are applicable to various types of blur cases where the blur kernel is not known. But the method is sensitive to various noise points, and is easy to cause inaccurate estimated fuzzy kernel, so that the quality of the processed image is low, a clearer image cannot be obtained, the accuracy of fused data is low, and the identification of the vehicle is affected.
Disclosure of Invention
In order to solve the problems, the application provides an intelligent expressway inspection escape-proof monitoring system based on data fusion, which can reduce the situations of false alarms or reminders. And the time and calculation force waste can be reduced.
The technical scheme provided by the application is as follows: the utility model provides an intelligent highway inspection prevents escaping monitoring system based on data fusion, include:
The image acquisition module is used for acquiring a vehicle image and preprocessing the acquired vehicle image to obtain an image to be processed;
the fuzzy core calculation module is used for identifying suspected vehicle areas from the image to be processed and determining fuzzy core initial values of the suspected vehicle areas; determining an adjustment coefficient of a blur kernel initial value of each suspected vehicle region in the image to be processed according to the distance evaluation parameter of each suspected vehicle region close to the camera and the difference between the image to be processed and the adjacent frame image of the image to be processed; adjusting the blur kernel initial value corresponding to each suspected vehicle region based on the adjustment coefficient corresponding to each suspected vehicle region, so as to obtain a blur kernel preset value corresponding to each suspected vehicle region;
the deblurring processing module is used for carrying out deblurring operation on the corresponding suspected vehicle area based on the blur kernel preset value so as to obtain an image to be detected;
the identification module is used for identifying the vehicle based on the image to be detected;
Wherein, the fuzzy core calculation module comprises:
The vehicle region identification module is used for identifying the connected regions in the image to be processed and calculating shape evaluation parameters of each connected region; determining suspected vehicle areas in the image to be processed based on shape evaluation parameters of each connected area;
the initial value calculation module is used for determining the initial value of the blur kernel of each suspected vehicle area based on the area of the suspected vehicle area.
Wherein, the vehicle region identification module is used for: counting the occurrence times of the shape evaluation parameters of each connected domain, and determining the connected domain with the occurrence times smaller than the corresponding preset value as the suspected vehicle region.
Wherein the vehicle region identification module includes:
The connected domain determining module is used for identifying the connected regions of the image to be processed by using a connected domain marking algorithm, and combining adjacent connected regions with the area smaller than a preset value, so as to obtain the connected domain in the image to be processed;
the parameter calculation module is used for determining the shape evaluation parameter of the current connected domain based on the shape description parameter of the current connected domain and the overall difference degree of all edge segments in the current connected domain relative to the current connected domain; the shape description parameter of the current connected domain is the ratio of the area and the perimeter of the current connected domain.
The parameter calculation module is used for calculating the shape evaluation parameter of the current connected domain by using the following formula:
Wherein, A shape evaluation parameter indicating the connected domain b,Representing connected domainIs defined by the area of the (c),Representing connected domainIs provided with a pair of grooves having a circumference,Representing connected domainIs used for the shape description parameters of the (c),And (3) withRespectively represent connected domainsMiddle (f)The length of the projection of the individual edge segments in the horizontal and vertical direction,And (3) withRespectively represent connected domainsThe average of the lengths of all edge segments projected in the horizontal and vertical directions,Representing connected domainThe total number of mid-edge segments,Indicating the degree of overall difference of all edge segments in connected domain b with respect to connected domain b.
Wherein, the fuzzy core calculation module comprises:
The distance evaluation module is used for obtaining a distance evaluation parameter of the current suspected vehicle area close to the camera based on the ratio of the area of the current suspected vehicle area to the average value of the areas of all the suspected vehicle areas;
The difference determining module is used for determining a first change difference between the image to be processed and the current suspected vehicle region in the previous frame image and the neighborhood thereof based on the distance between the current suspected vehicle region in the image to be processed and the neighborhood thereof and the distance between the current suspected vehicle region in the previous frame image of the image to be processed and the neighborhood thereof; the method comprises the steps of determining a first change difference between a current suspected vehicle region in a frame image of a to-be-processed image and a neighborhood of the current suspected vehicle region in the frame image of the to-be-processed image, and determining a second change difference between the to-be-processed image and the current suspected vehicle region in the next frame image and the neighborhood of the current suspected vehicle region in the next frame image;
the adjustment coefficient calculation module is used for calculating an adjustment coefficient of a fuzzy core initial value of the current suspected vehicle region based on the distance evaluation parameter corresponding to the current suspected vehicle region, the first change difference between the current suspected vehicle region and the neighborhood thereof, the second change difference between the current suspected vehicle region and the neighborhood thereof and the total neighborhood number of the current suspected vehicle region;
and the adjusting module is used for adjusting the initial value of the fuzzy core corresponding to each suspected vehicle area based on the adjusting coefficient corresponding to each suspected vehicle area, so as to obtain the preset value of the fuzzy core corresponding to each suspected vehicle area.
The difference determining module is used for calculating and obtaining a first change difference between the current suspected vehicle area and the neighborhood thereof by using the following formula:
Wherein, Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the image to be processed,Representing the distance from the suspected vehicle region c to the j-th neighborhood in the previous frame of the image to be processed,Distance features representing all suspected vehicle regions in the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedAverage value of (d), distance characteristics of suspected vehicle regionAn average value representing the distance of the suspected vehicle region from its neighborhood,Distance features representing all suspected vehicle regions in the image to be processedStandard deviation of (2),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedStandard deviation of (2);
The difference determining module is further configured to calculate a second variation difference between the current suspected vehicle region and its neighborhood using the following formula:
Wherein, Representing a second variation difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the next frame of the image to be processed,Distance features representing all suspected vehicle regions in the next frame of the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the next frame of the image to be processedStandard deviation of (2).
The adjustment coefficient calculation module is used for calculating an adjustment coefficient of a fuzzy core initial value of the current suspected vehicle area by using the following formula:
Wherein, An adjustment coefficient indicating an initial value of the blur kernel of the suspected vehicle region c,Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing a second variation difference between the suspected vehicle region c and the jth neighborhood, m representing the total number of neighbors of the suspected vehicle region c,The area of the suspected vehicle region c is indicated,Represents the average of the areas of all suspected vehicle areas,A distance evaluation parameter corresponding to the suspected vehicle region c is indicated,Is an exponential function based on a natural constant e.
Wherein, deblurring processing module is used for: setting the direction of a linear blur kernel according to the road direction, determining the blur kernel of the current suspected vehicle area by using a motion blur function based on the direction of the linear blur kernel and the blur kernel preset value of the current suspected vehicle area, performing deblurring operation on the current suspected vehicle area by using an inverse filtering algorithm based on the blur kernel of the current suspected vehicle area to obtain a deblurred area to be detected, and forming the image to be detected by all the areas to be detected after the deblurring operation of the suspected vehicle area.
Wherein, the identification module is used for: and processing the image to be detected by using a target detection algorithm to obtain tag information of the vehicle, acquiring running data of the vehicle based on the tag information of the vehicle, and fusing the form data of the vehicle by using a weighted average method and an information fusion algorithm so as to determine whether the vehicle has illegal behaviors.
The application has the beneficial effects that: the application provides an intelligent expressway inspection escape-proof monitoring system based on data fusion, which comprises the following components: the device comprises an image acquisition module, a fuzzy core calculation module, a deblurring processing module and an identification module. The image acquisition module is used for acquiring a vehicle image and preprocessing the acquired vehicle image to obtain an image to be processed; the fuzzy core calculation module is used for identifying suspected vehicle areas from the image to be processed and determining fuzzy core initial values of the suspected vehicle areas; determining an adjustment coefficient of a blur kernel initial value of each suspected vehicle region in the image to be processed according to the distance evaluation parameter of each suspected vehicle region close to the camera and the difference between the image to be processed and the adjacent frame image of the image to be processed; adjusting the blur kernel initial value corresponding to each suspected vehicle region based on the adjustment coefficient corresponding to each suspected vehicle region, so as to obtain a blur kernel preset value corresponding to each suspected vehicle region; the deblurring processing module is used for carrying out deblurring operation on the corresponding suspected vehicle area based on the blur kernel preset value so as to obtain an image to be detected; the identification module is used for identifying the vehicle based on the image to be detected. The fuzzy core preset value of the defuzzification algorithm of each suspected vehicle area is determined by the fuzzy core calculation module, defuzzification operation is carried out on the corresponding suspected vehicle area by the defuzzification processing module based on the fuzzy core preset value of each suspected vehicle area, an image to be detected is obtained, and vehicle identification is carried out by using the defuzzified image to be detected, so that false alarms or reminders caused by false analysis caused by image blurring can be effectively removed. The method can also reduce the waste of time and calculation force caused by the condition that the vehicles in the image range are more required to be identified for multiple times.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the description below are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a schematic diagram of a first embodiment of an intelligent highway inspection escape-proof monitoring system based on data fusion;
FIG. 2 is a schematic diagram illustrating an embodiment of the fuzzy core computing module of FIG. 1;
Fig. 3 is a schematic structural diagram of an embodiment of the vehicle region identification module in fig. 2.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Because data can be acquired from a plurality of different images when the data are fused, and because the speed of a vehicle on a highway is too high, a camera is difficult to focus in real time, the condition of vehicle blurring can occur in the shot image, the data obtained from the blurred image are inaccurate, and a plurality of negative influences such as information loss, vehicle misjudgment and the like can occur when the fused data are analyzed, so that the passing condition of a normal vehicle is influenced. Based on the above, the application provides an intelligent expressway inspection escape-proof monitoring system based on data fusion, which determines a fuzzy core initial value by analyzing the area where a vehicle is located; the method comprises the steps of analyzing the position of a vehicle in an image by combining an imaging principle of near and far, calculating an adjustment coefficient according to the distance between vehicle areas by combining the influence of shooting angles to adjust the initial value of a fuzzy core so as to calculate a preset value of the fuzzy core, determining the size of the fuzzy core, performing deblurring operation on the image according to the size of the fuzzy core and a motion fuzzy function to obtain a clear image to be detected, and fusing the image to be detected, so that the probability of false alarm can be effectively reduced, and the waste of time and calculation force can be reduced.
The deblurring method adopted in the intelligent expressway inspection escape-proof monitoring system based on data fusion is an inverse filtering method based on blind deconvolution, solves the problem of unknown blur kernels of an inverse filtering algorithm by estimating the size of the blur kernels, and solves the problem of different areas of images with different quality caused by deblurring of images by using the same blur kernel through self-adaptive adjustment of the size of the blur kernels. The following describes the scheme of the present application in detail.
Referring to fig. 1, a schematic structural diagram of a first embodiment of an intelligent expressway inspection escape-proof monitoring system based on data fusion according to the present invention specifically includes: an image acquisition module 11, a blur kernel calculation module 12, a deblurring processing module 13 and an identification module 14.
The image acquisition module 11 is used for acquiring a vehicle image and preprocessing the acquired vehicle image to obtain an image to be processed. Specifically, vehicle images at high speed are acquired through a high-definition bayonet snapshot system and a high-definition camera. Preprocessing the acquired vehicle image, for example, using a low-pass filtering algorithm to perform noise reduction processing, and converting the noise-reduced image into a gray level image to obtain a noise-reduced gray level image, wherein the noise-reduced gray level image is the image to be processed.
The blur kernel calculation module 12 is configured to identify suspected vehicle regions from the image to be processed, and determine a blur kernel initial value of each suspected vehicle region; determining an adjustment coefficient of a blur kernel initial value of each suspected vehicle region in the image to be processed according to the distance evaluation parameter of each suspected vehicle region close to the camera and the difference between the image to be processed and the adjacent frame image of the image to be processed; and adjusting the blur kernel initial value corresponding to each suspected vehicle region based on the adjustment coefficient corresponding to each suspected vehicle region, thereby obtaining the blur kernel preset value corresponding to each suspected vehicle region.
Specifically, in an embodiment, please refer to fig. 2, which is a schematic structural diagram of an embodiment of the blur kernel computing module in fig. 1, the blur kernel computing module 12 includes a vehicle region identifying module 21, an initial value computing module 22, a distance evaluating module 23, a difference determining module 24, an adjustment coefficient computing module 25, and an adjustment module 26.
The vehicle region identification module 21 is used for identifying connected regions in the image to be processed and calculating shape evaluation parameters of each connected region; and determining a suspected vehicle region in the image to be processed based on the shape evaluation parameters of each connected region.
Specifically, the vehicle region identification module 21 is configured to: counting the occurrence times of the shape evaluation parameters of each connected domain, and determining the connected domain with the occurrence times smaller than the corresponding preset value as the suspected vehicle region.
Further, please refer to fig. 3, which is a schematic structural diagram of an embodiment of the vehicle region identification module in fig. 2, the vehicle region identification module 21 includes a connected region determination module 31 and a parameter calculation module 32.
The connected domain determining module 31 is configured to identify the connected regions of the image to be processed by using a connected domain marking algorithm, and combine adjacent connected regions with an area smaller than a preset value, so as to obtain a connected domain in the image to be processed. Specifically, the Sobel operator is utilized to carry out edge detection on an image to be processed, an edge detection image of the image to be processed is obtained, the edge detection image is converted into a binary image, connected areas in the binary image are identified and marked by using a connected area marking algorithm such as a scanning line algorithm, adjacent connected areas with the area smaller than a preset value are combined to form a plurality of larger connected areas, so that the connected areas in the image to be processed are obtained, and the connected areas are marked as a total connected area set.
The parameter calculation module 32 is configured to determine a shape evaluation parameter of the current connected domain based on the shape description parameter of the current connected domain and the overall difference degree of all edge segments in the current connected domain relative to the current connected domain; the shape description parameter of the current connected domain is the ratio of the area and the perimeter of the current connected domain.
In one embodiment, all edge segments in the connected domain are determined using parameter calculation module 32. For example, any connected domain in the total connected domain set is selected as the current connected domain and is marked as connected domain b. Extracting connected domainIf a point on the edge is arbitrarily selected, moving the edge clockwise along the edge, stopping moving and marking the edge which is moved before as an edge segment once the moving direction changes by more than 15 degrees, and then restarting moving, and circulating until all edges of the connected domain b are marked completely; thus, the connected domainIs divided into smaller segments, denoted edge segments.
Further, the parameter calculation module 32 is configured to calculate the shape evaluation parameter of the current connected domain according to the following formula:
Wherein, A shape evaluation parameter indicating the connected domain b,Representing connected domainIs defined by the area of the (c),Representing connected domainIs provided with a pair of grooves having a circumference,Representing connected domainIs used for the shape description parameters of the (c),And (3) withRespectively represent connected domainsMiddle (f)The length of the projection of the individual edge segments in the horizontal and vertical direction,And (3) withRespectively represent connected domainsThe average of the lengths of all edge segments projected in the horizontal and vertical directions,Representing connected domainThe total number of mid-edge segments,Indicating the degree of difference of the ith edge segment in connected domain b with respect to connected domain b.The sum of the degrees of difference of all edge segments in the connected domain b with respect to the connected domain b, i.e., the overall degree of difference of all edge segments in the connected domain b with respect to the connected domain b, is represented.
In the above-mentioned formula(s),Representing connected domainFor determining the complexity or compactness of the connected domain b. If it isIf the ratio of the pattern (b) is larger, the shape of the connected domain (b) is simpler and is biased to be square or round, because the more complex the pattern is, the more the edge of the pattern is bent, the longer the perimeter of the pattern is, resulting in the ratioThe smaller the shape evaluation parameter of the connected domain b is, the smaller the shape evaluation parameter is; conversely, ifThe smaller the ratio of the connected domain b, the flatter the shape of the connected domain b is, and the more the connected domain b is inclined to be in a strip shape, and the smaller the shape evaluation parameter of the connected domain b is.Representing the overall difference degree of all edge segments in the connected domain b relative to the connected domain b, wherein the larger the overall difference degree is, the larger the shape evaluation parameter of the connected domain b is; the smaller the overall difference degree is, the smaller the shape evaluation parameter of the connected domain b is.
The shape evaluation parameters of the connected domain are calculated based on the above formula by the parameter calculation module 32. Because the street lamp and the white dotted line on the expressway appear once every other segment, and the white solid line on the roadside has an approximately parallel relation with the road edge, the gradient of the corresponding edge has similarity and repeatability, and the area where the vehicle is located has partial similarity and repeatability, but is obviously far less than that of the non-vehicle area. Recording the shape evaluation parameters of each connected domain in the total connected domain set to form a shape evaluation parameter sequence, circularly traversing the shape evaluation parameter sequence, and counting the occurrence times of the shape evaluation parameters of each connected domain to obtain a preset valueScreening each shape evaluation parameter for threshold value, setting preset valueFor the occurrence number smaller than the preset valueThe corresponding connected domain can be preliminarily determined to belong to a suspected vehicle region; for greater than or equal to the preset valueMay be initially determined to be not part of the suspected vehicle region.
By the method, all suspected vehicle areas in the image to be processed are obtained. Further, the initial value calculation module 22 in the blur kernel calculation module 12 is utilized to determine the blur kernel initial value of each suspected vehicle region based on the area of the suspected vehicle region. In one embodiment, one half of the mean value of the areas of the suspected vehicle regions is taken as the initial value of the blur kernel, and is recorded as the initial value of the blur kernel
And when the distances between the multiple frames of images and other vehicles of the same vehicle are changed, the relative position and the speed of the vehicle are changed. According to the visual perspective effect, the distant vehicle is relatively close in the image, the near vehicle is gradually far away from the camera, so that when the distance between the frames of the vehicle is changed greatly, the vehicle is indicated to be approaching the camera quickly, and if the distance between the frames is changed slightly, the vehicle is indicated to be approaching the camera at a lower speed. According to the effect of the near-far size in photography, when a vehicle approaches a camera, the vehicle presents a larger size in an image, and when the vehicle is far away from the camera, the vehicle presents a smaller size. Accordingly, the size of the blur kernel corresponding to the vehicle at the far distance at a low speed should be reduced, and the size of the blur kernel corresponding to the vehicle at the near distance at a high speed should be appropriately increased.
Further, referring to fig. 2, the distance evaluation module 23 in the blur kernel calculation module 12 is configured to obtain a distance evaluation parameter of the current suspected vehicle region near the camera based on a ratio of an area of the current suspected vehicle region to an average value of areas of all suspected vehicle regions.
The difference determining module 24 is configured to determine a first variation difference between the image to be processed and the current suspected vehicle region in the previous frame of image and its neighborhood based on a distance between the current suspected vehicle region in the image to be processed and its neighborhood and a distance between the current suspected vehicle region in the previous frame of image and its neighborhood. In a specific embodiment, selecting any one of the suspected vehicle regions as the current suspected vehicle region, recording the distance from the geometric center of the current suspected vehicle region to the geometric center of the other suspected vehicle regions, and determining the shortest distanceThe suspicious vehicle region is marked as a neighborhood of the current suspicious vehicle region. The connected domains with similar shapes in different frames are marked as the corresponding areas of the same vehicle, namely the corresponding suspected vehicle areas in different frames can be understood.
In one embodiment, the variance determining module 24 is configured to calculate a first variance between the current suspected vehicle region and its neighborhood using the following formula:
wherein any one of the suspected vehicle regions is designated as a suspected vehicle region c, Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the image to be processed,Representing the distance from the suspected vehicle region c to the j-th neighborhood in the previous frame of the image to be processed,Distance features representing all suspected vehicle regions in the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedAverage value of (d), distance characteristics of suspected vehicle regionAn average value representing the distance of the suspected vehicle region from its neighborhood,Distance features representing all suspected vehicle regions in the image to be processedStandard deviation of (2),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedStandard deviation of (2).
Further, the difference determining module 24 is configured to determine a second variation difference between the image to be processed and the current suspected vehicle region in the next frame of image and its neighborhood based on a distance between the current suspected vehicle region in the image to be processed and its neighborhood and a distance between the current suspected vehicle region in the next frame of image to be processed and its neighborhood. In one embodiment, the variance determining module 24 is further configured to calculate a second variance between the current suspected vehicle region and its neighborhood using the following formula:
Wherein, Representing a second variation difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the next frame of the image to be processed,Distance features representing all suspected vehicle regions in the next frame of the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the next frame of the image to be processedStandard deviation of (2).
Further, the adjustment coefficient calculating module 25 is utilized to calculate an adjustment coefficient of the blur kernel initial value of the current suspected vehicle region based on the distance evaluation parameter corresponding to the current suspected vehicle region, the first variation difference between the current suspected vehicle region and the neighborhood thereof, the second variation difference between the current suspected vehicle region and the neighborhood thereof, and the total number of the neighborhood of the current suspected vehicle region. In one embodiment, the adjustment coefficient calculating module 25 is configured to calculate the adjustment coefficient of the initial value of the blur kernel of the current suspected vehicle region according to the following formula:
Wherein, An adjustment coefficient indicating an initial value of the blur kernel of the suspected vehicle region c,Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing a second variation difference between the suspected vehicle region c and the jth neighborhood, m representing the total number of neighbors of the suspected vehicle region c,The area of the suspected vehicle region c is indicated,Represents the average of the areas of all suspected vehicle areas,A distance evaluation parameter corresponding to the suspected vehicle region c is indicated,Is an exponential function based on a natural constant e and is used for adjusting the value range of the function.
In the above-mentioned formula(s),Representing suspected vehicle regionsDistance evaluation parameters close to the camera, the smaller the distance evaluation parameters, the suspected vehicle regionThe smaller the area is relative to the areas of other suspected vehicle regionsFarther from the camera, the preset blur kernel initial value needs to be properly adjusted down; the greater the distance evaluation parameter, the suspected vehicle regionThe greater the difference in area from the areas of other suspected vehicle areas, the more suspected vehicle areasCloser to the camera, a preset blur kernel initial value needs to be properly adjusted. First variation differenceDifferent from the second variationThe larger the vehicle position change is, the larger the preset blur kernel initial value should be properly adjusted.For average adjustment of neighborhood vehicle to suspected vehicle regionIs a vehicle influence of the vehicle.
After determining the corresponding adjustment coefficient T of each suspected vehicle area and the initial value of the fuzzy core of each suspected vehicle areaThen, the adjustment module 26 is used to adjust the initial value of the blur kernel corresponding to each suspected vehicle region based on the adjustment coefficient T corresponding to each suspected vehicle regionAnd adjusting to obtain the fuzzy core preset value corresponding to each suspected vehicle area. Taking a suspected vehicle area c as an example, a fuzzy core preset value corresponding to the suspected vehicle area cThe calculation mode of (a) is as follows:
Wherein, Is the adjustment coefficient of the suspected vehicle region c,The initial value of the blur kernel for the suspected vehicle region c.
Traversing all the suspected vehicle areas, and determining the fuzzy core preset value of all the suspected vehicle areas. With continued reference to fig. 1, the deblurring processing module 13 performs deblurring operation on the corresponding suspected vehicle region based on the preset value of the blur kernel, so as to obtain an image to be detected. Since the vehicle travels on the expressway at a high speed and there is substantially no steering at a small angle, the blur of the vehicle is a motion blur formed by a quick movement of the vehicle linearly and a quick exposure of the camera. The deblurring processing module 13 sets the direction of the linear blur kernel according to the road direction. Fuzzy core preset value based on direction of linear fuzzy core, current suspected vehicle region such as suspected vehicle region c by using motion fuzzy functionDetermining a blur kernel of a current suspected vehicle area, for example, a suspected vehicle area c, performing deblurring operation on the current suspected vehicle area, for example, the suspected vehicle area c, based on the blur kernel of the current suspected vehicle area, for example, the suspected vehicle area c, to obtain a deblurred to-be-detected area, wherein all the to-be-detected areas after the deblurring operation of the suspected vehicle area form the to-be-detected image. It can be understood that the image to be detected after the deblurring operation is a clear image. In a specific embodiment, the blur kernel is further normalized, and then the deblurring operation is performed on the current suspected vehicle area, for example, the suspected vehicle area c, based on the normalized blur kernel.
The recognition module 14 performs vehicle recognition based on the image to be detected. Specifically, the image to be detected is processed by using a target detection algorithm to obtain tag information of the vehicle, running data of the vehicle is obtained based on the tag information of the vehicle, and form data of the vehicle is fused by using a weighted average method and an information fusion algorithm, so that whether the vehicle has illegal behaviors or not is determined.
In a specific embodiment, the image to be detected is processed through a target detection algorithm to obtain tag information of the vehicle, and driving data of the vehicle, such as user basic data, historical traffic data, inspection data and credit data information of the vehicle, is obtained by combining the tag information of the vehicle. And fusing the obtained data by using a weighted average method and an information fusion algorithm, and analyzing whether the vehicle has fee evasion or the risk of fee evasion by combining a charging system of the expressway.
And according to the analysis result, an alarm and early warning information can be generated, and decision support is provided for traffic management departments, such as interception of fee-escaping vehicles, treatment of traffic illegal behaviors and the like.
Aiming at the existing data fusion algorithm, the vehicle inspection escape prevention on the expressway needs to be realized, the information of the multiparty data sources is required to be fused and analyzed, but if the image acquired by the data sources is too fuzzy, the analysis result after the data fusion is not accurate or reliable enough, and even part of information is lost. According to the scheme, through edge detection and combination of the shape and edge length characteristics of the connected domain, a plurality of suspected vehicle areas and fuzzy kernel initial values can be obtained, the suspected vehicle areas are analyzed, a fuzzy kernel preset value corresponding to each suspected vehicle area is obtained according to the imaging rule of the near size and the far size and the relation between the shooting angle and the area where the vehicle is located, and the image is deblurred according to the fuzzy kernel preset value and by combining a motion fuzzy function and an inverse filtering method, so that a clear image is obtained. And (3) performing deblurring treatment on all the images, obtaining vehicle information by combining a target detection algorithm with a vehicle tag system, fusing the information, analyzing the information, and providing decision support for traffic management departments. Therefore, false alarms or reminders caused by false analysis due to image blurring can be effectively removed. The method can also reduce the waste of time and calculation force caused by the condition that the vehicles in the image range are more required to be identified for multiple times.
The foregoing is only the embodiments of the present application, and therefore, the scope of the present application is not limited by the above embodiments, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (6)

1. Intelligent highway inspection anti-escape monitoring system based on data fusion, its characterized in that includes:
The image acquisition module is used for acquiring a vehicle image and preprocessing the acquired vehicle image to obtain an image to be processed;
the fuzzy core calculation module comprises a distance evaluation module, a difference determination module, an adjustment coefficient calculation module and an adjustment module:
The distance evaluation module is used for obtaining a distance evaluation parameter of the current suspected vehicle area close to the camera based on the ratio of the area of the current suspected vehicle area to the average value of the areas of all the suspected vehicle areas;
The difference determining module is used for determining a first change difference between the image to be processed and the current suspected vehicle region in the previous frame image and the neighborhood thereof based on the distance between the current suspected vehicle region in the image to be processed and the neighborhood thereof and the distance between the current suspected vehicle region in the previous frame image of the image to be processed and the neighborhood thereof; the method comprises the steps of determining a first change difference between a current suspected vehicle region in a frame image of a to-be-processed image and a neighborhood of the current suspected vehicle region in the frame image of the to-be-processed image, and determining a second change difference between the to-be-processed image and the current suspected vehicle region in the next frame image and the neighborhood of the current suspected vehicle region in the next frame image;
the adjustment coefficient calculation module is used for calculating an adjustment coefficient of a fuzzy core initial value of the current suspected vehicle region based on the distance evaluation parameter corresponding to the current suspected vehicle region, the first change difference between the current suspected vehicle region and the neighborhood thereof, the second change difference between the current suspected vehicle region and the neighborhood thereof and the total neighborhood number of the current suspected vehicle region;
the adjusting module is used for adjusting the initial value of the fuzzy core corresponding to each suspected vehicle area based on the adjusting coefficient corresponding to each suspected vehicle area, so as to obtain the preset value of the fuzzy core corresponding to each suspected vehicle area;
the deblurring processing module is used for carrying out deblurring operation on the corresponding suspected vehicle area based on the blur kernel preset value so as to obtain an image to be detected;
the identification module is used for identifying the vehicle based on the image to be detected;
the fuzzy core calculation module includes:
The vehicle region identification module is used for identifying the connected regions in the image to be processed and calculating shape evaluation parameters of each connected region; determining suspected vehicle areas in the image to be processed based on shape evaluation parameters of each connected area;
The initial value calculation module is used for determining the initial value of the fuzzy core of each suspected vehicle area based on the area of the suspected vehicle area;
The deblurring processing module is used for: setting the direction of a linear blur kernel according to the road direction, determining the blur kernel of the current suspected vehicle area by using a motion blur function based on the direction of the linear blur kernel and the blur kernel preset value of the current suspected vehicle area, performing deblurring operation on the current suspected vehicle area by using an inverse filtering algorithm based on the blur kernel of the current suspected vehicle area to obtain a deblurred area to be detected, and forming the image to be detected by all the areas to be detected after deblurring operation of the suspected vehicle area;
The identification module is used for: and processing the image to be detected by using a target detection algorithm to obtain tag information of the vehicle, acquiring running data of the vehicle based on the tag information of the vehicle, and fusing the running data of the vehicle by using a weighted average method and an information fusion algorithm so as to determine whether the vehicle has illegal behaviors.
2. The intelligent highway inspection escape-proof monitoring system based on data fusion according to claim 1, wherein the vehicle region identification module is configured to: counting the occurrence times of the shape evaluation parameters of each connected domain, and determining the connected domain with the occurrence times smaller than the corresponding preset value as the suspected vehicle region.
3. The intelligent highway inspection escape-proof monitoring system based on data fusion according to claim 1, wherein the vehicle region identification module comprises:
The connected domain determining module is used for identifying the connected regions of the image to be processed by using a connected domain marking algorithm, and combining adjacent connected regions with the area smaller than a preset value, so as to obtain the connected domain in the image to be processed;
the parameter calculation module is used for determining the shape evaluation parameter of the current connected domain based on the shape description parameter of the current connected domain and the overall difference degree of all edge segments in the current connected domain relative to the current connected domain; the shape description parameter of the current connected domain is the ratio of the area and the perimeter of the current connected domain.
4. The intelligent expressway inspection escape-proof monitoring system based on data fusion according to claim 3, wherein the parameter calculation module is configured to calculate the shape evaluation parameter of the current connected domain by using the following formula:
Wherein, A shape evaluation parameter indicating the connected domain b,Representing connected domainIs defined by the area of the (c),Representing connected domainIs provided with a pair of grooves having a circumference,Representing connected domainIs used for the shape description parameters of the (c),And (3) withRespectively represent connected domainsMiddle (f)The length of the projection of the individual edge segments in the horizontal and vertical direction,And (3) withRespectively represent connected domainsThe average of the lengths of all edge segments projected in the horizontal and vertical directions,Representing connected domainThe total number of mid-edge segments,Indicating the degree of overall difference of all edge segments in connected domain b with respect to connected domain b.
5. The intelligent expressway inspection escape-proof monitoring system based on data fusion according to claim 1, wherein the difference determining module is configured to calculate a first variation difference between a current suspected vehicle region and a neighborhood thereof by using the following formula:
Wherein, Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the image to be processed,Representing the distance from the suspected vehicle region c to the j-th neighborhood in the previous frame of the image to be processed,Distance features representing all suspected vehicle regions in the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedAverage value of (d), distance characteristics of suspected vehicle regionAn average value representing the distance of the suspected vehicle region from its neighborhood,Distance features representing all suspected vehicle regions in the image to be processedStandard deviation of (2),Distance features representing all suspected vehicle regions in the previous frame of the image to be processedStandard deviation of (2);
The difference determining module is further configured to calculate a second variation difference between the current suspected vehicle region and its neighborhood using the following formula:
Wherein, Representing a second variation difference between the suspected vehicle region c and the jth neighborhood,Representing the distance from the suspected vehicle region c to the jth neighborhood in the next frame of the image to be processed,Distance features representing all suspected vehicle regions in the next frame of the image to be processedIs used for the average value of (a),Distance features representing all suspected vehicle regions in the next frame of the image to be processedStandard deviation of (2).
6. The intelligent expressway inspection escape prevention monitoring system based on data fusion according to claim 5, wherein the adjustment coefficient calculation module is configured to calculate an adjustment coefficient of a fuzzy core initial value of a current suspected vehicle area by using the following formula:
Wherein, An adjustment coefficient indicating an initial value of the blur kernel of the suspected vehicle region c,Representing a first variance difference between the suspected vehicle region c and the jth neighborhood,Representing a second variation difference between the suspected vehicle region c and the jth neighborhood, m representing the total number of neighbors of the suspected vehicle region c,The area of the suspected vehicle region c is indicated,Represents the average of the areas of all suspected vehicle areas,A distance evaluation parameter corresponding to the suspected vehicle region c is indicated,Is an exponential function based on a natural constant e.
CN202410711303.8A 2024-06-04 2024-06-04 Intelligent expressway inspection escape-proof monitoring system based on data fusion Active CN118279841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410711303.8A CN118279841B (en) 2024-06-04 2024-06-04 Intelligent expressway inspection escape-proof monitoring system based on data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410711303.8A CN118279841B (en) 2024-06-04 2024-06-04 Intelligent expressway inspection escape-proof monitoring system based on data fusion

Publications (2)

Publication Number Publication Date
CN118279841A CN118279841A (en) 2024-07-02
CN118279841B true CN118279841B (en) 2024-07-26

Family

ID=91649833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410711303.8A Active CN118279841B (en) 2024-06-04 2024-06-04 Intelligent expressway inspection escape-proof monitoring system based on data fusion

Country Status (1)

Country Link
CN (1) CN118279841B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462019A (en) * 2020-04-20 2020-07-28 武汉大学 Image deblurring method and system based on deep neural network parameter estimation
CN113256565A (en) * 2021-04-29 2021-08-13 中冶华天工程技术有限公司 Intelligent restoration method for motion blurred image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3234908A4 (en) * 2014-12-19 2018-05-23 Nokia Technologies OY Method, apparatus and computer program product for blur estimation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462019A (en) * 2020-04-20 2020-07-28 武汉大学 Image deblurring method and system based on deep neural network parameter estimation
CN113256565A (en) * 2021-04-29 2021-08-13 中冶华天工程技术有限公司 Intelligent restoration method for motion blurred image

Also Published As

Publication number Publication date
CN118279841A (en) 2024-07-02

Similar Documents

Publication Publication Date Title
US8798314B2 (en) Detection of vehicles in images of a night time scene
KR100912746B1 (en) Method for traffic sign detection
Pless et al. Evaluation of local models of dynamic backgrounds
TWI409718B (en) Method of locating license plate of moving vehicle
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
EP2665039B1 (en) Detection of near-field camera obstruction
Wicaksono et al. Speed estimation on moving vehicle based on digital image processing
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
CN118096815B (en) Road abnormal event detection system based on machine vision
CN112417955A (en) Patrol video stream processing method and device
Nejati et al. License plate recognition based on edge histogram analysis and classifier ensemble
CN116402852A (en) Dynamic high-speed target tracking method and device based on event camera
CN118279841B (en) Intelligent expressway inspection escape-proof monitoring system based on data fusion
KR et al. Moving vehicle identification using background registration technique for traffic surveillance
KR101563543B1 (en) Lpr system for recognition of compact car and two wheel vehicle
US11631183B2 (en) Method and system for motion segmentation
CN113936242B (en) Video image interference detection method, system, device and medium
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN116189038A (en) Picture abnormality judging method, device, equipment and storage medium
Chaiyawatana et al. Robust object detection on video surveillance
Shaweddy et al. Vehicle counting framework for intelligent traffic monitoring system
Bauer A Feature-based Approach for the Recognition of Image Quality Degradation in Automotive Applications
Praphananurak et al. A framework for origin-destination estimation using license plate recognition for Thai rural traffic
Naraghi A comparative study of background estimation algorithms
Tian et al. Intelligent Spot Detection for Degraded Image Sequences Based on Machine Vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant