The content of the invention
In order to overcome the drawbacks of the prior art described above, it is an object of the invention to provide a video-based method for automatically detecting an aircraft entering or leaving its parking berth. The method enables all-weather, intelligent detection of the moment an aircraft enters or leaves the berth, with high detection accuracy.
The technical scheme of the invention is as follows:
A video-based method for automatically detecting an aircraft entering or leaving its parking berth: obtain the video image of a monitoring region covering the aircraft parking berth region, identify moving targets and track them, extract feature points of the image within a detection zone and perform feature point matching and tracking to obtain a matched feature point set. When, over a certain number of consecutive frames of the video image, the number of feature point pairs satisfying the aircraft entry or departure direction requirement reaches the required quantity, a further judgment of formal entry or formal departure is made: if the displacement of the feature point set is less than a certain number of pixels and the moving target has not entered and then left the aircraft parking berth region, the aircraft is judged to have formally entered its position; if the displacement of the feature point set is less than or equal to a certain number of pixels and the moving target has not left and then re-entered the aircraft parking berth region, the aircraft is judged to have formally departed. The detection zone is a region set on the video image corresponding to the aircraft parking berth region.
The moving targets may be identified as follows: extract the foreground with a Gaussian mixture background model to obtain a binary image, apply dilation to the binary image, and extract bounding rectangles of the foreground in the dilated image to form target blobs; these target blobs are the moving targets.
The bounding rectangle may be extracted as follows: extract the contour of the foreground connected region, and take the rectangle spanned by the maximum and minimum values of the vertical and horizontal coordinates of the points on the contour as the bounding rectangle. This rectangular area is essentially the region occupied by the target blob.
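By way of illustration only, the foreground extraction and bounding rectangle step described above may be sketched as follows; the use of the OpenCV library, the structuring-element size and the function names are assumptions made for illustration and are not mandated by the invention.

```python
import cv2

# Gaussian mixture background model (OpenCV implementation, assumed here for illustration)
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

def extract_target_blobs(frame):
    """Return bounding rectangles of moving foreground blobs in one frame."""
    fg_mask = bg_model.apply(frame)                       # binary foreground image
    fg_mask = cv2.dilate(fg_mask, kernel)                 # dilation of the binary image
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # bounding rectangle = min/max of the contour-point coordinates
    return [cv2.boundingRect(c) for c in contours]        # (x, y, w, h) per target blob
```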
The moving target tracking is preferably performed as follows: determine whether a target blob exists in the current frame within a certain distance of the position of the previous frame's target blob. If so, calculate the ratio of the area of the overlap between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres. If the ratio is not less than a ratio threshold and/or the distance is not greater than a distance threshold, the two blobs are judged to be the same target blob, and the change of the blob's centre position is recorded. When a target blob enters and then leaves the detection zone, or leaves and then enters it, a corresponding mark is made. Different target blobs in the same frame can be distinguished by assigning each a tracking identifier and number.
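A minimal sketch of the overlap-ratio and centre-distance association described above is given below; the threshold values and function names are illustrative assumptions, and the two criteria may equally be combined with "and" rather than "or".

```python
def same_blob(prev, curr, ratio_thr=0.5, dist_thr=50.0):
    """Decide whether two bounding rectangles (x, y, w, h) belong to the same target blob.

    ratio_thr and dist_thr are illustrative values, not specified by the invention.
    """
    px, py, pw, ph = prev
    cx, cy, cw, ch = curr
    # overlap area between the two rectangles
    ow = max(0, min(px + pw, cx + cw) - max(px, cx))
    oh = max(0, min(py + ph, cy + ch) - max(py, cy))
    overlap_ratio = (ow * oh) / float(pw * ph)            # relative to the previous-frame blob
    # distance between the two blob centres
    dist = ((px + pw / 2 - cx - cw / 2) ** 2 +
            (py + ph / 2 - cy - ch / 2) ** 2) ** 0.5
    return overlap_ratio >= ratio_thr or dist <= dist_thr
```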
Target blobs containing fewer than 50 pixels, or whose length or width is less than 10 pixels, are discarded; such blobs are not taken as objects of moving target tracking.
During moving target tracking, if a target blob is continuously lost for a certain number of frames (for example, more than 10 frames), the target blob is judged to be invalid.
Harris corner points of the image within the detection zone are extracted from the grayscaled video image as feature points, and feature point matching and tracking is performed with a feature-based optical flow algorithm. Preferably, pyramid-layered iteration is applied on top of the optical flow algorithm to compute the optical flow field: the image is decomposed into a pyramid, with the lowest resolution at the top layer; the optical flow value is first computed at the top layer, the computed result plus that layer's initial value is used as the initial optical flow value of the next layer down, and the optical flow of that layer is then computed; optical flow iteration is carried out in every layer below the top, and the optical flow vectors are formed when the last layer has been reached.
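The Harris corner extraction and pyramid optical flow matching described above may be sketched, for illustration, with the OpenCV functions below; the corner count, window size and number of pyramid levels are assumed values, not values fixed by the invention.

```python
import cv2
import numpy as np

def match_feature_points(prev_gray, curr_gray, roi_mask):
    """Harris feature points in the detection zone, matched with pyramidal LK optical flow."""
    # Harris-based corner detection restricted to the detection zone (roi_mask)
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                       minDistance=5, mask=roi_mask,
                                       useHarrisDetector=True, k=0.04)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # pyramidal Lucas-Kanade optical flow (3 pyramid levels, illustrative choice)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts.reshape(-1, 2)[ok], curr_pts.reshape(-1, 2)[ok]
```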
The aircraft entry direction requirement is: 0° ≤ θ < 30°. The aircraft departure direction requirement is: 150° < θ ≤ 180°, where θ is the angle between the direction line and the vector of a matched feature point. The direction line is a straight line, set according to the prescribed direction in which the aircraft enters the detection zone, pointing from the aircraft tail to the aircraft nose. The vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame.
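A small sketch of the direction requirement check, assuming the angle is computed from the dot product of the feature point's motion vector and the direction line vector; the function names are illustrative.

```python
import numpy as np

def motion_angle_deg(prev_pt, curr_pt, direction_line_vec):
    """Angle between the direction line and a matched feature point's motion vector."""
    v = np.asarray(curr_pt, float) - np.asarray(prev_pt, float)
    d = np.asarray(direction_line_vec, float)
    denom = np.linalg.norm(v) * np.linalg.norm(d)
    if denom == 0:
        return None
    cos_theta = np.clip(np.dot(v, d) / denom, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def satisfies_entry(theta):
    """Aircraft entry direction requirement: 0 <= theta < 30 degrees."""
    return theta is not None and 0.0 <= theta < 30.0

def satisfies_departure(theta):
    """Aircraft departure direction requirement: 150 < theta <= 180 degrees."""
    return theta is not None and 150.0 < theta <= 180.0
```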
Any of the foregoing video-based methods for automatically detecting an aircraft entering or leaving its parking berth may adopt the following steps:
Image acquisition: obtain the video image of the monitoring area in real time, the monitoring area covering the aircraft parking berth region;
Image preprocessing: set the detection zone uniformly on the video image according to the aircraft parking berth region, and convert the acquired image into a grayscale image by graying; scaling may or may not be performed before the detection zone is set, the scaling processing being the scaling of video images acquired by different devices to a uniform length and width;
Moving target detection: extract the foreground from the video image or the grayscale image with a Gaussian mixture background model to obtain a binary image, apply dilation to the binary image, and extract bounding rectangles of the foreground in the dilated image to form target blobs; the bounding rectangle is the rectangle spanned by the maximum and minimum values of the vertical and horizontal coordinates of the points on the contour of the foreground connected region;
Moving target tracking: establish a tracking identifier and number for each target blob, and determine whether a target blob exists in the current frame within a certain distance of the position of the previous frame's target blob; if so, calculate the ratio of the area of the overlap between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; when the ratio is not less than the ratio threshold and/or the distance is not greater than the distance threshold, judge the two blobs to be the same target blob, copy the information of the previous-frame blob into the current frame so that the same target blob is numbered consistently across frames, and record the change of its centre position; if a target blob is continuously lost for a certain number of frames, judge the target blob to be invalid; when a target blob enters and then leaves the detection zone, or leaves and then enters it, make the corresponding mark;
Feature point matching and tracking: extract Harris corner points of the image within the detection zone in the grayscale image as feature points, and perform matched tracking of the feature points with a pyramid optical flow algorithm; pyramid-layered iteration is applied on top of the optical flow algorithm to compute the optical flow field: the image is decomposed into a pyramid, with the lowest resolution at the top layer; the optical flow value is computed starting from the top layer, the computed result plus that layer's initial value is used as the initial optical flow value of the next layer down, and the optical flow field of that layer is then computed; optical flow iteration is carried out in every layer below the top, and the optical flow vectors are formed when the last layer has been reached;
Judgment of the aircraft entering or leaving the parking berth: for the matched feature point set obtained by the feature point matching and tracking, compute the angle θ between the vector corresponding to each matched feature point and the direction line; the aircraft entry direction requirement is 0° ≤ θ < 30°, and the aircraft departure direction requirement is 150° < θ ≤ 180°, where θ is the angle between the direction line and the vector of a matched feature point, the direction line is a straight line, set according to the prescribed direction in which the aircraft enters the detection zone, pointing from the aircraft tail to the aircraft nose, and the vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame;
When, over a certain number of consecutive frames of the video image, the number of feature point pairs satisfying the aircraft entry direction requirement reaches the required quantity, make a quasi-entry mark and then make the formal entry judgment: if the displacement of the feature point set is less than a certain number of pixels, and the target blob does not carry the mark of having entered and then left the detection zone, the aircraft is judged to have formally entered its position. When, over a certain number of consecutive frames, the number of feature points satisfying the aircraft departure direction requirement reaches the set quantity, make a quasi-departure mark and then make the formal departure judgment: if the displacement of the feature point set is less than or equal to a certain number of pixels, and the target blob does not carry the mark of having left and then re-entered the detection zone, the aircraft is judged to have formally departed.
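The quasi-entry and formal-entry judgment described above may be sketched as follows; the pair-count threshold and the 0.1-pixel displacement reference are illustrative values, and the departure judgment is symmetric, using the departure angle range and the leave-then-re-enter mark instead.

```python
import numpy as np

def judge_formal_entry(matched_pairs, direction_vec, blob_entered_then_left,
                       min_pair_count=20, disp_thr=0.1):
    """Quasi-entry followed by the formal-entry judgment over one observation window.

    matched_pairs is a list of (prev_pt, curr_pt) pairs; min_pair_count and disp_thr
    (in pixels) are illustrative values, not fixed by the invention.
    """
    d = np.asarray(direction_vec, float)
    entering, displacements = 0, []
    for prev_pt, curr_pt in matched_pairs:
        v = np.asarray(curr_pt, float) - np.asarray(prev_pt, float)
        displacements.append(np.linalg.norm(v))
        denom = np.linalg.norm(v) * np.linalg.norm(d)
        if denom == 0:
            continue
        theta = np.degrees(np.arccos(np.clip(np.dot(v, d) / denom, -1.0, 1.0)))
        if theta < 30.0:                      # entry direction requirement: 0 <= theta < 30
            entering += 1
    if entering < min_pair_count:
        return False                          # quasi-entry threshold not yet reached
    # formal entry: the matched feature points have essentially stopped moving and the
    # blob never carried the "entered and then left the detection zone" mark
    return bool(displacements) and max(displacements) < disp_thr and not blob_entered_then_left
```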
One of the following preferred modes, or a combination of several of them, may be adopted (a consolidated parameter sketch follows this list):
(1) In the image preprocessing step, the detection zone is defined by successively selecting more than 3 and fewer than 10 boundary points; the detection zone is the closed geometric figure enclosed by connecting all the boundary points in sequence;
(2) In the moving target detection step, the parameter values of the Gaussian mixture background model are: number of Gaussian distributions 3, default standard deviation multiple 2.5, minimum proportion for estimating the background 0.5, initial weight 0.02, and initial variance 18;
(3) The moving target detection step is carried out once every certain number of frames; the frame interval can be determined from the speed at which aircraft move when entering or leaving the berth, and performing one moving target detection every several frames can appropriately improve detection efficiency while still achieving the purpose of identification;
(4) In the moving target tracking step, the size of each target blob is judged first; target blobs containing fewer than 50 pixels, or whose length or width is less than 10 pixels, are discarded and are not taken as objects of moving target tracking; if a target blob is continuously lost for 10 frames or more, the target blob is judged to be invalid;
(5) In the feature point matching and tracking step, the displacement between each pair of matched feature points is calculated first, and the corresponding angle θ is calculated only if the displacement is greater than 0.1 pixel;
(6) When judging aircraft quasi-entry or quasi-departure, a certain number of frames are continuously tracked, and the quasi-entry or quasi-departure mark is made only if, during this period, the track is not continuously lost for more than 3 frames;
(7) When judging aircraft quasi-entry or quasi-departure, tracking is continued for not less than 50 frames, or about 50 frames; the quasi-entry or quasi-departure mark is made only if the number of feature point pairs satisfying the aircraft entry direction requirement or the aircraft departure direction requirement reaches the set quantity;
(8) When making the formal entry or formal departure judgment, the reference value for judging the displacement of the feature point set is 0.1 pixel;
(9) The feature point matching and tracking step is carried out once every certain number of frames (for example, every 5 frames); the frame interval can be determined from the speed at which aircraft move when entering or leaving the berth, and performing one matching and tracking pass every several frames reduces the amount of computation and analysis while still achieving the purpose of matched tracking, thereby improving detection efficiency.
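For convenience, the preferred values listed above may be collected into a single configuration sketch; the variable names are assumptions chosen for illustration, not terms used by the invention.

```python
# Consolidated sketch of the preferred parameter values listed above (illustrative names).
PREFERRED_PARAMS = {
    "detection_zone_boundary_points": (4, 9),   # more than 3 and fewer than 10 points
    "gmm_num_gaussians": 3,
    "gmm_std_dev_multiple": 2.5,
    "gmm_background_min_proportion": 0.5,
    "gmm_initial_weight": 0.02,
    "gmm_initial_variance": 18,
    "blob_min_pixels": 50,
    "blob_min_side_pixels": 10,
    "blob_lost_frames_invalid": 10,
    "min_feature_displacement_px": 0.1,         # compute theta only above this displacement
    "entry_angle_deg": (0.0, 30.0),             # 0 <= theta < 30
    "departure_angle_deg": (150.0, 180.0),      # 150 < theta <= 180
    "quasi_mark_tracked_frames": 50,            # about 50 frames of continuous tracking
    "max_consecutive_lost_frames": 3,
    "feature_matching_frame_interval": 5,
}
```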
Beneficial effects of the present invention are:
By setting the detection zone, the detection related to aircraft entering or leaving the berth can be carried out only, or mainly, within that region, which improves detection efficiency and effectively reduces the probability of false alarms.
Since, under normal conditions, moving targets other than aircraft do not remain in the aircraft parking berth region, marking target blobs that enter and then leave the detection zone allows the different states in which moving targets enter or leave the berth region to be distinguished, thereby effectively excluding false alarms caused by moving objects other than aircraft, such as guide vehicles, entering or leaving the aircraft parking berth region.
Before moving target tracking is carried out, target blobs that obviously cannot be aircraft are first rejected according to their size, which greatly reduces the computational load of blob tracking. The size criterion used to decide whether to reject a blob can be determined from the actual situation, such as the resolution of the video capture device, the distance between the device and the target, the shooting angle, and the actual size of the target.
If a target blob is continuously lost for 10 frames or more, the target blob is judged to be invalid; in this way other moving targets, such as garbage trucks and lorries, can be effectively excluded.
Computing the optical flow field by pyramid-layered iteration on top of the optical flow algorithm makes it possible to compute the optical flow of fast-moving targets, overcoming the defect of the traditional optical flow algorithm, which satisfies the constant-brightness assumption only under small displacements and fails to estimate the optical flow under large displacements because the image grayscale is no longer continuous.
In the feature point matching and tracking step, the displacement between each pair of matched feature points is calculated first, and the corresponding angle θ is calculated, and the judgment of whether a moving target enters or leaves the parking berth is made, only if the displacement exceeds a certain value (for example 0.1 pixel). This effectively excludes feature points that actually belong to the background, greatly reducing the amount of computation for θ and for the analysis of whether a moving target enters or leaves the berth, and to a certain extent improving the efficiency of detecting aircraft entering or leaving the berth.
By limiting the range of the angle θ in the aircraft entry direction requirement and the aircraft departure direction requirement, false alarms caused by objects such as boarding bridges, which move together with the aircraft operation but follow a different trajectory, can be effectively avoided.
Specific embodiment
The present invention provides a video-based method for automatically detecting an aircraft entering or leaving its parking berth, as shown in Figs. 1-7, comprising the following steps:
(1) Image acquisition
Network cameras are installed in the airport monitoring area to obtain real-time airport surveillance images; the monitored image region covers the aircraft parking berth region.
(2) Video image preprocessing
Image preprocessing may include: image scaling, division of the detection zone, setting of the direction line, and image graying.
So that the subsequent unified image processing can be applied generally to the video images captured by cameras of different manufacturers, specifications and batches used at the airport, the video images captured by different cameras may first be scaled to a uniform length and width, the scaling method being bilinear interpolation. An aircraft detection zone (the polygon in Fig. 1) is then set manually according to the position of the aircraft parking berth by successively selecting more than 3 and fewer than 10 boundary points, as shown in Fig. 1, so that the parking berth is enclosed. By setting the detection zone, the detection related to aircraft entering or leaving the berth can be carried out only within this region, which improves detection efficiency and effectively reduces the probability of false alarms.
The direction line is set according to the direction in which the aircraft enters the parking berth; the direction line shown in Fig. 1 (the line with the arrow) points from the aircraft tail to the aircraft nose.
Image graying converts the colour image into a grayscale image according to formula 1, so as to provide the grayscale image for the subsequent feature point extraction and matching.
Gray = (R*30 + G*59 + B*11 + 50) / 100    (formula 1)
In formula 1, Gray is the grayscale value of the image, and R, G and B are respectively the R, G and B values of the current pixel.
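A minimal sketch of formula 1 applied to a whole image, assuming an H x W x 3 array in R, G, B channel order; the function name is illustrative.

```python
import numpy as np

def to_gray(image_rgb):
    """Grayscale conversion following formula 1: Gray = (R*30 + G*59 + B*11 + 50) / 100."""
    r = image_rgb[..., 0].astype(np.uint32)
    g = image_rgb[..., 1].astype(np.uint32)
    b = image_rgb[..., 2].astype(np.uint32)
    # the +50 term rounds the integer division to the nearest grey level
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
```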
(3) Moving target foreground extraction based on the Gaussian mixture background model
A pixel-level Gaussian mixture model is very effective for modelling a multi-modal background: it adapts to changes of the background (such as gradual changes of illumination) and, in practical applications, largely meets the real-time requirements of the algorithm. The background image is modelled with a Gaussian mixture model; the meaning of "mixture of Gaussians" is that each pixel is modelled as a mixture of several single Gaussian distributions. Moving target detection with the Gaussian-mixture-based algorithm proceeds according to the following steps.
1. Initialising the model parameters
For a single Gaussian distribution, the probability density function can be expressed by formula 2:

η(x; μ, Σ) = (2π)^(-3/2) · |Σ|^(-1/2) · exp(-(1/2)·(x - μ)^T · Σ^(-1) · (x - μ))    (formula 2)
Here x is the random variable vector representing the colour components of the three channels of the pixel; μ is the mean vector of the Gaussian distribution, i.e. the mean of the single Gaussian model, which characterises the centre of each unimodal distribution; Σ is the variance (covariance) of the single Gaussian model, which characterises the width of each unimodal distribution, i.e. the instability of the pixel. Several single Gaussian models are linearly combined to form the Gaussian mixture model P(x_t); the linear combination is shown in formula 3:

P(x_t) = Σ_{i=1..M} ω_{i,t} · η(x_t; μ_{i,t}, Σ_{i,t})    (formula 3)
Here t denotes time t, ω_{i,t} denotes the weight of the i-th single Gaussian distribution, and M is the total number of single Gaussian models, which corresponds to the total number of peaks in the multi-modal distribution of pixel values. The size of M usually depends on the specific distribution of the pixel values; the larger M is, the more peaks there are and the stronger the ability to handle fluctuations. M is generally 3 to 5.
2. Finding the matching distribution
Let the pixel value at a certain pixel at time t be x_t, let μ_{i,t} be the mean of the i-th single Gaussian distribution at time t, and let σ²_{i,t} be its variance. Every single Gaussian distribution is checked against formula 4; if the formula is satisfied, the current pixel matches that single Gaussian distribution, and the probability of that distribution should be relatively large:

|x_t - μ_{i,t}| ≤ τ_σ · σ_{i,t}    (formula 4)

where τ_σ is the default standard deviation multiple.
3. Updating the Gaussian mixture model
Updating the parameters of the Gaussian mixture model is relatively complex: not only the parameters of each Gaussian distribution function, including its mean and variance, must be updated, but also the weight of each distribution function. The weight update formula corresponding to the i-th single Gaussian distribution at time t is formula 5:

ω_{i,t} = (1 - α) · ω_{i,t-1} + α · M_{i,t}    (formula 5)
Here α is the learning rate of the weights of the single Gaussian distributions; the larger α is, the faster the weights are updated. M_{i,t} is the matching factor of the single Gaussian distribution and takes one of two values. In the first case, the current pixel value matches some single Gaussian model in the pixel's Gaussian mixture model set; if more than one model matches, only the best match is considered to exist, and the M_{i,t} corresponding to that Gaussian model is 1. In the other case, the M_{i,t} corresponding to a single Gaussian model that does not match the current pixel value is 0. After the weights of all single Gaussian distributions have been obtained, they must be normalised; formula 6 is the normalisation formula:

ω_{i,t} = ω_{i,t} / Σ_{j=1..M} ω_{j,t}    (formula 6)
When the current pixel value matches some single model, the mean μ_{i,t} and standard deviation σ_{i,t} of that model must be updated; according to the characteristics of the probability distribution, the new observation necessarily affects the previously estimated distribution. The update method is shown in formula 7:

μ_{i,t} = (1 - ρ) · μ_{i,t-1} + ρ · x_t
σ²_{i,t} = (1 - ρ) · σ²_{i,t-1} + ρ · (x_t - μ_{i,t})^T · (x_t - μ_{i,t})    (formula 7)

where ρ is the learning factor of the single Gaussian distribution, with ρ = α / ω_{i,t-1}.
When the current pixel value does not match any single model in the Gaussian mixture model set, the single model with the smallest weight is removed from the current Gaussian mixture model set and a new single model is added; its weight is the minimum of all single model weights, its mean is the currently observed pixel value, and its variance is a given, relatively large constant.
4. Ordering the distributions representing the pixel
f_{i,t} = ω_{i,t} / σ_{i,t} is used as the priority criterion for judging whether a single Gaussian distribution is a background distribution: the larger f_{i,t} = ω_{i,t} / σ_{i,t} is, the larger the weight of the distribution and the higher its probability of being a background distribution. The background pixel model can be established through the following steps:
(1) compute the priority factor ω_{i,t} / σ_{i,t} of each single Gaussian distribution;
(2) sort all the single Gaussian distributions from high to low by the priority factor ω_{i,t} / σ_{i,t}; the larger ω_{i,t} / σ_{i,t} is, the more likely the single Gaussian distribution is a background distribution, and the smaller it is, the less likely; if a newly established single Gaussian distribution exists, it replaces the distribution ranked last;
(3) select, according to formula 8, the first N single Gaussian distributions out of the M single Gaussian distributions as the background model of the scene:

N = argmin_n ( Σ_{i=1..n} ω_{i,t} > T )    (formula 8)
Here T is the minimum threshold (or minimum proportion) for estimating the background; adjusting the size of T yields the combination of single Gaussian distributions that best describes the background. T has a great influence on the validity of the algorithm, and the choice of its value is essential. If T is too small, for example if only one Gaussian distribution is used as the background distribution, the Gaussian mixture model degenerates into a single Gaussian model; if T is too large, distributions with very small weights are also treated as background distributions, and an overly sensitive background distribution may absorb some moving foreground pixels.
The overall flow of the moving target detection program based on the Gaussian mixture background model consists of two main loops (see Fig. 3): the first loops over each frame of the image, and the second processes each pixel within the same frame. When each pixel is processed, the following steps are carried out in turn. If this is the first frame, the pixel must first be modelled and each single Gaussian distribution initialised. If it is not the first frame, the single Gaussian distribution matching the currently observed pixel value must be found; if a matching single Gaussian distribution is found, the parameters and weight of that distribution are updated (if more than one single Gaussian distribution matches, only the parameters and weight of the single Gaussian distribution with the largest weight need to be updated), and in addition the weights of the remaining single Gaussian distributions are reduced appropriately; if there is no matching single Gaussian model, a new single Gaussian model is created to replace the single Gaussian distribution with the smallest weight in the original model. After the Gaussian model has been updated, foreground extraction is carried out: a moving-target pixel certainly has no matching Gaussian model among the background distributions, so the sum of the weight proportions of the other Gaussian models is necessarily greater than the background threshold, and foreground extraction is performed according to this principle.
The values of some useful parameters used in implementing the moving target detection algorithm based on the Gaussian mixture background model are shown in Table 1. The flow chart of the algorithm is shown in Fig. 3. Taking the video image of Fig. 2 as an example, the effect of foreground extraction with the above algorithm is shown in Fig. 4.
Table 1  Initial parameter values of the Gaussian mixture model
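For illustration, a Gaussian mixture background model may be configured with values in the spirit of Table 1 as follows; OpenCV's MOG2 implementation exposes only some of the parameters named above (it has, for example, no explicit initial-weight setting), so this is an approximation rather than the patented implementation itself.

```python
import cv2

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
bg_model.setNMixtures(3)            # number of Gaussian distributions
bg_model.setVarThreshold(2.5 ** 2)  # squared standard-deviation multiple for matching
bg_model.setBackgroundRatio(0.5)    # minimum proportion T for estimating the background
bg_model.setVarInit(18)             # initial variance of a newly created Gaussian

def foreground_mask(frame):
    """Return the binary foreground image for one video frame."""
    return bg_model.apply(frame)
```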
(4) Bounding rectangle extraction for foreground targets and target blob tracking
After the foreground has been extracted with the Gaussian mixture background model, a morphological operation (dilation) is applied to the binarised image, and bounding rectangles are then extracted for the dilated targets to form target blobs. The target bounding rectangle is obtained by extracting the contour of the connected region of the binarised foreground target and determining the bounding rectangle of the contour curve from the maximum and minimum values of the vertical and horizontal coordinates of the points on the contour (see the rectangular boxes in Fig. 5).
After the target bounding rectangles have been extracted, the target blobs must be tracked continuously. The size of each target blob is judged first: if a target blob contains fewer than 50 pixels, or its length or width is less than 10 pixels, the target is discarded and no longer analysed, so that moving targets that obviously cannot be aircraft are excluded, which improves detection efficiency and saves detection resources. The next target blob is then analysed; if it meets the size requirements, a tracking identifier and number are established for it. When the next frame (now the current frame) is processed, it is determined whether a target blob exists within a certain distance of the previous blob, and whether the two are the same target blob is judged by calculating the overlap relation between the current-frame blob and the previous-frame blob. If they are the same, the information of the previous-frame blob is copied into the current-frame blob, and the previous-frame blob centre and the current-frame blob centre are connected; the red curves in Fig. 5 are exactly the connecting lines between the blob centres during tracking. Because the guide vehicle at that time moved faster than the aircraft, the connecting line of the blob tracking corresponding to the guide vehicle is longer. During tracking, if a target blob is continuously lost for more than 10 frames, the target blob can be judged to be invalid; in this way, moving targets other than the aircraft, such as lorries and garbage trucks that stray into the scene, can be excluded.
The overlap relation refers to the ratio of the area of the overlap between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; when the ratio is not less than the ratio threshold and/or the distance is not greater than the distance threshold, the two blobs are judged to be the same target blob.
The target blob detection and tracking described above is processing based on the whole image. In order to improve the efficiency of aircraft detection, the present invention further analyses the targets extracted within the detection zone: if a target blob enters the detection zone, it is marked, and this mark plays a key role in the subsequent judgment of the aircraft entering or leaving the berth. If it is an aircraft that enters the parking berth, it remains in the berth; that is, the target blob representing the aircraft remains in the detection zone, whereas other moving targets do not remain in the aircraft parking berth region. When an aircraft leaves the parking berth, it finally passes from stillness to motion and drives out of the berth region; that is, the aircraft does not leave and then re-enter the berth region. Therefore, in order to exclude false alarms caused by targets other than aircraft entering or leaving the berth, the present invention judges the state of the target blobs within the detection zone: for example, if a target blob comes from outside the detection zone, enters it and then leaves it again, the target blob is given a corresponding mark, and this mark, together with the feature point tracking described below, serves as the key information for the final judgment of the aircraft entering or leaving the berth. For example, when judging formal entry, if some target blob carries the mark of having entered and then left the detection zone, it can be judged that the moving target represented by that blob is not an aircraft.
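The enter-then-leave and leave-then-re-enter marking described above may be sketched as a small state object; the class and attribute names are assumptions made for illustration.

```python
class BlobState:
    """Marks for one tracked target blob relative to the detection zone."""

    def __init__(self, initially_inside):
        self.inside = initially_inside
        self.has_entered = False          # made an outside -> inside transition
        self.has_left = False             # made an inside -> outside transition
        self.entered_then_left = False    # mark consulted when judging formal entry
        self.left_then_entered = False    # mark consulted when judging formal departure

    def update(self, inside_now):
        if inside_now and not self.inside:        # outside -> inside transition
            self.has_entered = True
            if self.has_left:
                self.left_then_entered = True
        elif not inside_now and self.inside:      # inside -> outside transition
            self.has_left = True
            if self.has_entered:
                self.entered_then_left = True
        self.inside = inside_now
```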
(5) Feature point matching and tracking based on the pyramid optical flow algorithm
Feature point matching and tracking based on the pyramid optical flow algorithm first requires feature points to be extracted over the whole detection zone. The feature points extracted by the present invention are preferably Harris corner points; the steps of extracting Harris corners are:
(1) filter each pixel of the image with horizontal and vertical difference operators to obtain the intensity variations I_x and I_y in the horizontal and vertical directions, and then compute the values of the four elements of the matrix m (formula 9):

m = [ I_x²      I_x·I_y
      I_x·I_y   I_y²  ]    (formula 9)

where I(x, y) denotes the grey value of the image sequence, and I_x, I_y are the partial derivatives of I(x, y) with respect to x and y respectively;
(2) apply Gaussian smoothing filtering to the four elements of m to obtain a new matrix m'; the smoothing filtering is carried out according to formula 10;
(3) compute the corner measure cim corresponding to each pixel from m' according to formula 11;
(4) a pixel is considered a Harris corner point if cim simultaneously satisfies two conditions: cim is greater than a certain threshold, and cim is a local maximum within a certain neighbourhood.
After the feature points have been extracted, feature point matching is performed with the pyramid optical flow algorithm. The optical flow algorithm is based on the following three assumptions: (a) brightness constancy between adjacent frames; (b) temporal continuity, i.e. the motion changes slowly compared with the image over time; (c) spatial coherence, i.e. neighbouring pixels of the same surface in the image have the same motion.
From the brightness constancy assumption we obtain

I_x·u + I_y·v + I_t = 0    (formula 12)
where u is the x-component of the velocity and v is the y-component of the velocity. When the motion between image frames is very small, the pixel motion within a local region can be taken as consistent, so a system of equations over the neighbouring pixels can be established to solve for the motion of the centre pixel. Assuming that n points form a rigid blob, the following n equations can be established (formula 13):

I_x(p_i)·u + I_y(p_i)·v = -I_t(p_i),  i = 1, ..., n    (formula 13)
The least squares problem for this system is set up, and the minimiser of ||A·d - b||² is solved from the equation (AᵀA)·d = Aᵀ·b; when (AᵀA) is invertible, the solution is

d = (AᵀA)⁻¹ · Aᵀ · b    (formula 14)
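A minimal sketch of solving formulas 13 and 14 for one neighbourhood, assuming the spatial and temporal derivatives at the window pixels are already available; the window choice and derivative computation are not shown.

```python
import numpy as np

def lk_velocity(Ix, Iy, It):
    """Solve the least-squares system d = (A^T A)^-1 A^T b for one local window.

    Ix, Iy, It are arrays of the spatial and temporal derivatives at the n pixels
    of the window.
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)    # n x 2 matrix of [Ix, Iy] rows
    b = -It.ravel()                                   # right-hand side -It
    ATA = A.T @ A
    if np.linalg.matrix_rank(ATA) < 2:                # (A^T A) not invertible: no unique flow
        return None
    u, v = np.linalg.solve(ATA, A.T @ b)              # velocity components of the centre pixel
    return u, v
```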
By this calculation, a velocity is assigned to every feature point detected in the image, thus forming a motion vector field. According to the velocity characteristics of each feature point, the image can be analysed dynamically. When a moving object is present in the image, there is relative motion between the target and the background; the velocity produced by the moving object necessarily differs from the velocity of the background, so the position of the moving object can be computed. The traditional optical flow algorithm satisfies the constant-brightness assumption only under small displacements; under large displacements the image grayscale is no longer continuous and the optical flow estimation fails. In order to compute the optical flow of fast-moving targets, pyramid-layered iteration on top of the optical flow algorithm is introduced to compute the optical flow field: the image is decomposed into a pyramid, with the lowest resolution at the top layer; the optical flow value is computed starting from the top layer, the computed result plus that layer's initial value is used as the initial optical flow value of the next layer down, and the optical flow field of the adjacent lower layer is then computed; optical flow iteration is carried out in every layer below the top, and the optical flow vectors are formed when the last layer has been reached.
In order to improve detection efficiency and reduce the amount of computation, the present invention does not perform feature point extraction and matching on every frame, but performs one feature point extraction and matching pass every 5 frames. Fig. 6 shows the result of matched tracking of feature points with the pyramid optical flow algorithm.
(6) Judgment of the aircraft entering or leaving the parking berth
After the series of processing steps above, the final goal is to detect the aircraft entering or leaving the parking berth. Feature point matching and tracking is carried out with the pyramid optical flow algorithm described above to obtain the matched feature point set; the displacement between each pair of feature points is then calculated, and if it is greater than 0.1 pixel, the angle formed by the direction line and the matched feature point vector is further calculated according to the angle formula:

θ = arccos( (a · b) / (|a| · |b|) )    (formula 15)

where vector a is the direction line, vector b is the vector of the matched feature point, pointing from the coordinates of the feature point at the previous moment (i.e. in the previous frame) to the coordinates of the matched feature point in the current frame, and θ is the angle between the two vectors.
If an aircraft is entering the parking berth, the angle θ should be less than 30 degrees; otherwise some other target is entering the berth. The present invention therefore counts only the number of feature point pairs satisfying the above angle condition. Only if the number of feature point pairs satisfies the given condition, and the target blob is marked as described above after entering the detection zone, can the next stage of analysis of the aircraft entering the berth be triggered. If, after the aircraft is found to be entering the berth, about 50 frames are tracked continuously without the track being lost for more than 3 consecutive frames, the aircraft quasi-entry mark is triggered. It is then determined from the displacement of the feature point set (whether it is less than 0.1 pixel, i.e. whether the target blob has stopped inside the detection zone) and from whether the target blob carries the mark of having entered and then left the detection zone (i.e. whether the target blob represents an aircraft) whether the aircraft has formally entered its position; if so, the entry information is transmitted.
Similarly, when an aircraft leaves the parking berth, the angle θ is greater than 150 degrees, and the number of feature point pairs satisfying this angle condition is counted. Only if the number of feature point pairs satisfies the given condition, and the target blob is marked as described above after leaving the detection zone, can the next stage of analysis of the aircraft leaving the berth be triggered. When the aircraft is found to start leaving the berth, about 50 frames are tracked continuously, and if the track is not lost for more than 3 consecutive frames the aircraft quasi-departure mark is triggered. It is then judged whether there are multiple pairs of feature points in the detection zone whose displacement exceeds 0.1 pixel (i.e. whether the target blob has left the detection zone) and whether the target blob carries the mark of having left and then re-entered the detection zone (i.e. whether the target blob represents an aircraft): if there are no feature point pairs with a displacement greater than 0.1 pixel, the target blob has already left the detection zone; if the target blob carries the mark of having left and then re-entered the detection zone, the target blob is not an aircraft. In this way it is determined whether the aircraft has formally departed; if so, the departure information is transmitted.
During the judgment of the aircraft entering or leaving its position, a certain number of frames are tracked continuously (about 50 frames, as described above), i.e. a certain amount of time is reserved for sustained observation. Typically 25 frames are acquired per second; with one feature point extraction, matching and tracking pass every 5 frames, continuously tracking 50 such frames corresponds to about 10 seconds. Since a typical aircraft needs about 20 seconds to drive into or out of the parking berth, an observation of about 10 seconds, even allowing for some error, satisfies the detection requirements.
The present invention relies on intelligent image processing and pattern recognition techniques and, according to the trajectory of an aircraft entering or leaving its parking berth, uses video surveillance images and a method based on Gaussian mixture background modelling and feature point matching and tracking to realise the automatic detection of aircraft entering and leaving the parking berth. The method uses multiple means to eliminate the various factors that may cause false alarms, and therefore greatly improves detection accuracy.