
CN104966045B - Aircraft disengaging berth automatic testing method based on video - Google Patents


Info

Publication number: CN104966045B
Authority: CN (China)
Prior art keywords: aircraft, targeted mass, image, berth, video
Legal status: Active (status as listed by Google Patents, not a legal conclusion)
Application number: CN201510153902.3A
Other languages: Chinese (zh)
Other versions: CN104966045A
Inventors: 苏杰, 何彬, 董华宇
Current assignee: BEIJING TERRAVISION TECHNOLOGY CO., LTD.
Original assignee: BEIJING TIANRUI KONGJIAN TECHNOLOGY Co Ltd
Application filed by: BEIJING TIANRUI KONGJIAN TECHNOLOGY Co Ltd
Priority application: CN201510153902.3A
Publications: CN104966045A (application), CN104966045B (grant)

Classification (Landscapes)

  • Image Analysis (AREA)

Abstract

The present invention relates to a video-based method for automatically detecting an aircraft entering or leaving its parking stand (berth). A video image covering the monitored area of the aircraft parking stand is obtained, moving targets are identified and tracked, and feature points are extracted from the image inside a detection zone and matched across frames. When, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the required quantity, a further judgement of formal entry or formal departure is made. If the displacement of the feature point set is smaller than a certain number of pixels and no moving target has entered and then left the aircraft parking stand region, the aircraft is judged to have formally entered its stand; if the displacement of the feature point set is smaller than or equal to a certain number of pixels and no moving target has left and then re-entered the aircraft parking stand region, the aircraft is judged to have formally left its stand. The detection zone is a region set on the video image corresponding to the aircraft parking stand region. The present invention enables all-weather intelligent detection of the on-time departure moment of a flight with high detection accuracy.

Description

Aircraft disengaging berth automatic testing method based on video
Technical field
The present invention relates to a method, based on video processing and analysis, for automatically detecting an aircraft entering or leaving its parking stand, and belongs to the fields of digital image processing and intelligent video surveillance.
Background technology
China's civil aviation now uses the removal of the wheel chocks as the new standard for an on-time departure. "Removing the wheel chocks" is a technical term in general use in the global civil aviation community. Just as a chock is placed in front of a car parked in a parking space to keep it from rolling, chocks are placed at an aircraft's wheels while it is parked; once the chocks are removed, the aircraft can start its engines and begin to taxi. Compared with the original standard, under which a flight counted as departing on time once the cabin door was closed, the new standard means that not only has the airline completed boarding of all passengers, but the airport has loaded the luggage onto the aircraft, the aircraft has been refuelled, and air traffic control has issued the instruction for the aircraft to taxi to the runway; in other words, every link in the civil aviation chain has completed its preparation for take-off. This is an improvement over the door-closing standard, because after the door is closed the aircraft may still not move, and passengers may sit on board for a long time waiting for take-off.
Under the new standard, the wheel chocks are placed as soon as the aircraft enters its parking stand, and removing the chocks just before the aircraft leaves the stand is referred to as the chocks-off event. The chocks-on and chocks-off times can therefore be obtained indirectly by detecting when the aircraft enters and leaves its stand. At present there is no research dedicated to the automatic detection of aircraft entering and leaving parking stands.
The content of the invention
In order to overcome the above drawbacks of the prior art, the object of the present invention is to provide a video-based method for automatically detecting an aircraft entering or leaving its parking stand. The method enables all-weather intelligent detection of the on-time departure moment of a flight with high detection accuracy.
The technical scheme of the present invention is as follows:
A video-based method for automatically detecting an aircraft entering or leaving its parking stand: a video image of a monitored area covering the aircraft parking stand region is obtained; moving targets are identified and tracked; feature points are extracted from the image inside a detection zone and matched and tracked across frames to obtain a matched feature point set. When, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the required quantity, a further judgement of formal entry or formal departure is made. If the displacement of the feature point set is smaller than a certain number of pixels, and no moving target has entered and then left the aircraft parking stand region, the aircraft is judged to have formally entered its stand. If the displacement of the feature point set is smaller than or equal to a certain number of pixels, and no moving target has left and then re-entered the aircraft parking stand region, the aircraft is judged to have formally left its stand. The detection zone is a region set on the video image corresponding to the aircraft parking stand region.
The moving targets may be identified as follows: the foreground is extracted with a mixture-of-Gaussians background model to obtain a binary image; the binary image is dilated; and a bounding rectangle is extracted for each foreground region in the dilated image to form a target blob (the "targeted mass"). Each target blob is a moving target.
The bounding rectangle may be extracted as follows: the contour of each foreground connected region is extracted, and the rectangle spanned by the minimum and maximum of the ordinates and abscissas of the points on the contour is taken as the bounding rectangle. This rectangular area essentially covers the region occupied by the target blob.
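As an illustration of the foreground-extraction and blob-forming step described above, the following Python/OpenCV sketch uses the library's MOG2 mixture-of-Gaussians background subtractor as a stand-in for the patent's model; the kernel size and subtractor parameters are illustrative assumptions, not values taken from the patent.

```python
import cv2

# Mixture-of-Gaussians background subtractor (MOG2 used here as a stand-in).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

def extract_target_blobs(frame_gray):
    """Return bounding rectangles (x, y, w, h) of foreground blobs in one frame."""
    fg = subtractor.apply(frame_gray)                  # foreground mask
    fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)[1]   # binary image
    fg = cv2.dilate(fg, kernel, iterations=1)          # dilation (morphological processing)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # The bounding rectangle is spanned by the min/max x and y of the contour points.
    return [cv2.boundingRect(c) for c in contours]
```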
The moving targets are preferably tracked as follows: check whether there is a target blob in the current frame within a certain distance of the position of a blob in the previous frame. If so, compute the ratio of the overlap area between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres. If the ratio is not smaller than a ratio threshold and/or the distance is not larger than a distance threshold, the two blobs are judged to be the same target blob, and the change of the blob centre position is recorded. When a blob enters and then leaves the detection zone, or leaves and then re-enters it, a corresponding mark is made. Different blobs within the same frame can be distinguished by assigning each a tracking identifier and number.
Target blobs containing fewer than 50 pixels, or whose length or width is smaller than 10 pixels, are discarded; such blobs are not used as objects of moving-target tracking.
During tracking, if a target blob is continuously lost for a certain number of frames (for example more than 10 frames), the blob is judged to have expired.
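A minimal sketch of the blob association and size filtering described above; the Blob structure, threshold values and field names are illustrative assumptions.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Blob:
    rect: tuple                                   # (x, y, w, h)
    track_id: int
    lost_frames: int = 0
    centers: list = field(default_factory=list)   # centre-point history

def center(rect):
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def overlap_ratio(prev, cur):
    """Overlap area divided by the area of the previous-frame rectangle."""
    px, py, pw, ph = prev
    cx, cy, cw, ch = cur
    ox = max(0, min(px + pw, cx + cw) - max(px, cx))
    oy = max(0, min(py + ph, cy + ch) - max(py, cy))
    return (ox * oy) / float(pw * ph)

def same_blob(prev_rect, cur_rect, ratio_thr=0.3, dist_thr=50.0):
    """Association test: overlap ratio and/or centre distance (thresholds illustrative)."""
    d = math.dist(center(prev_rect), center(cur_rect))
    return overlap_ratio(prev_rect, cur_rect) >= ratio_thr or d <= dist_thr

def keep_blob(rect, min_pixels=50, min_side=10):
    """Size filter: discard blobs that are obviously too small to be an aircraft.
    (The patent counts foreground pixels; the bounding-box area is used here as a stand-in.)"""
    _, _, w, h = rect
    return w * h >= min_pixels and w >= min_side and h >= min_side
```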
For the grayscale video image, Harris corner points are extracted from the image inside the detection zone as feature points, and the feature points are matched and tracked with a feature-based optical flow algorithm. Preferably, the optical flow field is computed by pyramidal layered iteration on top of the basic optical flow algorithm: the image is decomposed into a pyramid, with the top layer having the lowest resolution; the optical flow is first computed at the top layer, and the result plus the initial value of the layer above serves as the initial optical flow value of the next layer down; the optical flow is then computed for that layer, and the iteration is repeated for every layer below the top until the bottom layer is reached, where the final optical flow vectors are formed.
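A minimal Python/OpenCV sketch of this step, assuming Harris-based corner extraction restricted to the detection zone and pyramidal Lucas-Kanade matching; the quality, window and pyramid-level parameters are illustrative, not the patent's.

```python
import cv2
import numpy as np

def track_detection_zone_points(prev_gray, cur_gray, zone_mask):
    """Extract Harris corners inside the detection zone and track them into the next frame."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                       minDistance=5, mask=zone_mask,
                                       useHarrisDetector=True, k=0.04)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Pyramidal Lucas-Kanade optical flow: coarse-to-fine iteration over 4 pyramid levels.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts.reshape(-1, 2)[ok], cur_pts.reshape(-1, 2)[ok]
```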
The aircraft drive-in direction requirement is 0° ≤ θ < 30°; the drive-out direction requirement is 150° < θ ≤ 180°, where θ is the angle between the direction line and the vector of a matched feature point. The direction line is a straight line, set according to the prescribed direction in which an aircraft enters the detection zone, pointing from the aircraft tail towards the aircraft nose. The vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame.
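A minimal sketch of the direction test, counting matched pairs whose motion vector makes an angle of less than 30° (drive-in) or more than 150° (drive-out) with the direction line; the 0.1-pixel minimum displacement follows the preferred option given later, and the function name is an assumption.

```python
import numpy as np

def direction_votes(prev_pts, cur_pts, direction_line, min_disp=0.1):
    """Count matched feature-point pairs satisfying the drive-in / drive-out angle test.

    direction_line: 2-D vector pointing from the aircraft tail to the nose along the
    prescribed entry direction. min_disp filters near-static (background) points.
    """
    d = np.asarray(direction_line, dtype=float)
    d /= np.linalg.norm(d)
    enter_votes = leave_votes = 0
    for p, c in zip(prev_pts, cur_pts):
        v = np.asarray(c, dtype=float) - np.asarray(p, dtype=float)
        disp = np.linalg.norm(v)
        if disp <= min_disp:                      # likely a background point, skip
            continue
        cos_theta = np.clip(np.dot(v / disp, d), -1.0, 1.0)
        theta = np.degrees(np.arccos(cos_theta))  # angle between motion vector and direction line
        if theta < 30.0:
            enter_votes += 1                      # consistent with an aircraft driving in
        elif theta > 150.0:
            leave_votes += 1                      # consistent with an aircraft driving out
    return enter_votes, leave_votes
```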
Any of the foregoing video-based methods for automatically detecting an aircraft entering or leaving its parking stand may use the following steps:
Image acquisition: obtain video images of the monitored area in real time, the monitored area covering the aircraft parking stand region;
Image preprocessing: set the detection zone uniformly on the video images according to the aircraft parking stand region; convert the acquired images to grayscale; before setting the detection zone, optionally apply scaling, where scaling means rescaling the video images acquired by different devices to a uniform length and width;
Moving-target detection: extract the foreground from the video image or the grayscale image with a mixture-of-Gaussians background model to obtain a binary image; dilate the binary image; extract a bounding rectangle for each foreground region in the dilated image to form a target blob, the bounding rectangle being the rectangle spanned by the minimum and maximum ordinates and abscissas of the points on the contour of the foreground connected region;
Moving-target tracking: establish a tracking identifier and number for each target blob; check whether there is a target blob in the current frame within a certain distance of the position of a blob in the previous frame; if so, compute the ratio of the overlap area between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; if the ratio is not smaller than the ratio threshold and/or the distance is not larger than the distance threshold, judge the two blobs to be the same target blob, copy the information of the previous-frame blob into the current frame, unify the numbering of the same blob across frames, and record the change of its centre position; if a blob is continuously lost for a certain number of frames, judge the blob to have expired; when a blob enters and then leaves the detection zone, or leaves and then re-enters it, make the corresponding mark;
Feature-point matching and tracking: extract Harris corner points from the detection-zone portion of the grayscale image as feature points; match and track the feature points with a pyramidal optical flow algorithm, computing the optical flow field by pyramidal layered iteration: decompose the image into a pyramid, compute the optical flow starting from the lowest-resolution top layer, use the result plus the initial value of the layer above as the initial value of the next layer, and iterate layer by layer until the bottom layer, where the final optical flow vectors are formed;
Aircraft in/out judgement: for the matched feature point set obtained by the matching and tracking, compute the angle θ between the vector of each matched feature point and the direction line; the drive-in direction requirement is 0° ≤ θ < 30° and the drive-out direction requirement is 150° < θ ≤ 180°, where the direction line is the straight line pointing from the aircraft tail towards the nose set according to the prescribed direction in which an aircraft enters the detection zone, and the vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame;
When, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the required quantity, make a provisional entry or departure mark and then make the formal judgement: if the displacement of the feature point set is smaller than a certain number of pixels and no target blob carries the "entered and then left the detection zone" mark, the aircraft is judged to have formally entered its stand; if the displacement of the feature point set is smaller than or equal to a certain number of pixels and no target blob carries the "left and then re-entered the detection zone" mark, the aircraft is judged to have formally left its stand. A sketch of this final decision logic is given below.
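A minimal sketch of the formal entry/departure decision just described, combining the direction votes, the feature-point-set displacement and the blob entry/exit marks; the 0.1-pixel reference follows the preferred options below, while the vote counters and argument names are illustrative assumptions.

```python
def judge_formal_entry(enter_vote_frames, required_frames, featureset_disp,
                       blob_entered_then_left, disp_ref=0.1):
    """Provisional entry mark -> formal entry judgement."""
    provisional = enter_vote_frames >= required_frames   # enough consecutive frames voted "drive-in"
    stationary = featureset_disp < disp_ref              # the feature point set has come to rest
    return provisional and stationary and not blob_entered_then_left

def judge_formal_departure(leave_vote_frames, required_frames, featureset_disp,
                           blob_left_then_reentered, disp_ref=0.1):
    """Provisional departure mark -> formal departure judgement."""
    provisional = leave_vote_frames >= required_frames   # enough consecutive frames voted "drive-out"
    zone_static = featureset_disp <= disp_ref            # nothing large is still moving in the zone
    return provisional and zone_static and not blob_left_then_reentered
```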
One or more of the following preferred options may be adopted:
(1) In the image preprocessing step, the detection zone is defined by successively selecting more than 3 and fewer than 10 boundary points; the detection zone is the closed polygonal area enclosed by connecting all the boundary points in order;
(2) In the moving-target detection step, the mixture-of-Gaussians background model parameters are: number of Gaussian distributions 3, default standard-deviation multiple 2.5, minimal measure for estimating the background 0.5, initial weight 0.02, initial variance 18;
(3) The moving-target detection step is carried out once every certain number of frames; the frame interval can be chosen according to the speed at which aircraft enter and leave the stand, so that running the detection only every few frames improves efficiency while still achieving the identification purpose;
(4) In the moving-target tracking step, the size of each target blob is judged first; blobs containing fewer than 50 pixels, or whose length or width is smaller than 10 pixels, are discarded and not used as objects of tracking; if a blob is continuously lost for 10 or more frames, it is judged to have expired;
(5) In the feature-point matching and tracking step, the displacement between each pair of matched feature points is computed first, and the corresponding angle θ is computed only if the displacement exceeds 0.1 pixel;
(6) When making the provisional entry or departure judgement, a certain number of frames are tracked continuously, and the provisional entry or departure mark is made only if the track is not continuously lost for more than 3 frames during that period;
(7) When making the provisional entry or departure judgement, tracking continues for not less than 50 frames, or about 50 frames; the provisional entry or departure mark is made when the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the set quantity;
(8) When making the formal entry or departure judgement, the reference value for the feature-point-set displacement is 0.1 pixel;
(9) The feature-point matching and tracking step is carried out once every certain number of frames (for example every 5 frames); the interval can be chosen according to the speed at which aircraft enter and leave the stand, so that running the matching only every few frames reduces the amount of computation and analysis while still achieving its purpose, thereby improving detection efficiency.
Beneficial effects of the present invention:
By setting a detection zone, the detection related to aircraft entering and leaving the stand can be confined to, or focused on, that region, which improves detection efficiency and effectively reduces the false-alarm probability.
Under normal conditions, moving targets other than an aircraft do not remain inside the parking stand region. By marking target blobs that enter and leave the detection zone, the different behaviours of moving targets entering and leaving the stand region can be distinguished, which effectively prevents false alarms caused by moving objects other than the aircraft, such as a guide vehicle, entering or leaving the stand region.
Before tracking, target blobs that obviously cannot be an aircraft are first discarded according to their size, which greatly reduces the computation required for blob tracking. The size criterion for rejection can be determined from the resolution of the video capture device, the distance from the device to the target, the shooting angle, and the actual size of the target.
If a target blob is continuously lost for 10 or more frames it is judged to have expired; this effectively excludes other moving targets such as garbage trucks and lorries.
Computing the optical flow field by pyramidal layered iteration on top of the basic optical flow algorithm makes it possible to compute the optical flow of fast-moving targets, overcoming the defect of the traditional optical flow algorithm, which satisfies the constant-brightness assumption only for small displacements and fails to estimate the flow when the image brightness becomes discontinuous under large displacements.
In the feature-point matching and tracking step, the displacement between each pair of matched points is computed first, and the angle θ is computed only when the displacement exceeds a certain value (for example 0.1 pixel); this effectively excludes feature points that actually belong to the background, greatly reducing the computation of θ and of the entry/departure analysis, and thereby improving detection efficiency to a certain extent.
By restricting the angle θ to the drive-in and drive-out ranges, false alarms caused by objects such as a boarding bridge, which moves in coordination with aircraft operations but along a different trajectory, are effectively avoided.
Description of the drawings
Fig. 1 is a video image with the detection zone and direction line marked;
Fig. 2 is an example video frame;
Fig. 3 is the flow chart of moving-target detection based on the mixture-of-Gaussians background model;
Fig. 4 is the image of Fig. 2 after foreground extraction and morphological processing;
Fig. 5 illustrates the target blob recognition and tracking result;
Fig. 6 shows feature points tracked with the pyramidal optical flow algorithm;
Fig. 7 is the flow chart of the aircraft in/out detection method of the present invention.
Specific embodiment
The present invention provides a video-based method for automatically detecting an aircraft entering or leaving its parking stand, as shown in Figs. 1-7, comprising the following steps:
(1) Image acquisition
A network camera is installed in the monitored area of the airport to obtain real-time airport surveillance images; the monitored imaging area covers the aircraft parking stand region.
(2) Video image preprocessing
Image preprocessing may include image scaling, division of the detection zone, setting of the direction line, and image grayscale conversion.
So that the video images gathered by cameras of different manufacturers, specifications and batches used at the airport can all go through the same subsequent image processing, the images from different cameras may first be rescaled to a uniform length and width, for example with bilinear interpolation. An aircraft detection zone (the polygon in Fig. 1) is then set manually around the parking stand by successively selecting more than 3 and fewer than 10 boundary points, as shown in Fig. 1. Setting a detection zone confines the detection of aircraft entering and leaving the stand to that region, which improves detection efficiency and effectively reduces the false-alarm probability.
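A minimal sketch of a manually defined polygonal detection zone and an inside/outside test for blob centres, using OpenCV's point-in-polygon test; the vertex coordinates are placeholders.

```python
import cv2
import numpy as np

# Placeholder vertices for the manually selected detection-zone polygon.
zone = np.array([(420, 180), (900, 160), (980, 520), (380, 560)], dtype=np.int32)

def zone_mask(frame_shape):
    """Binary mask of the detection zone, usable to restrict feature extraction."""
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [zone], 255)
    return mask

def center_in_zone(center_xy):
    """True if a blob centre lies inside the detection zone."""
    pt = (float(center_xy[0]), float(center_xy[1]))
    return cv2.pointPolygonTest(zone.reshape(-1, 1, 2), pt, False) >= 0
```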
The direction line is set according to the direction in which an aircraft enters the parking stand; the direction line in Fig. 1 (the line with the arrow) points from the aircraft tail towards the nose.
Image grayscale conversion converts the colour image to a grayscale image according to Formula 1, providing the grayscale image used later for feature-point extraction and matching.
Gray = (R*30 + G*59 + B*11 + 50) / 100    (Formula 1)
where Gray is the grayscale value of the pixel and R, G, B are the RGB components of the current pixel.
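A minimal sketch implementing Formula 1 directly; the +50 term performs rounding under integer division, and the frame is assumed to be in OpenCV's BGR channel order.

```python
import numpy as np

def to_gray(frame_bgr):
    """Grayscale conversion per Formula 1: Gray = (R*30 + G*59 + B*11 + 50) / 100."""
    b = frame_bgr[:, :, 0].astype(np.uint32)
    g = frame_bgr[:, :, 1].astype(np.uint32)
    r = frame_bgr[:, :, 2].astype(np.uint32)
    gray = (r * 30 + g * 59 + b * 11 + 50) // 100   # integer division; +50 rounds the result
    return gray.astype(np.uint8)
```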
(3) Moving-target foreground extraction based on the mixture-of-Gaussians background model
A per-pixel Gaussian mixture model is very effective for modelling a multi-modal background: it adapts to background changes (such as gradual illumination change) and largely meets the real-time requirements of practical applications. The background image is modelled with a Gaussian mixture model; "mixture of Gaussians" means that each pixel is modelled as a mixture of several single Gaussian distributions. Detecting moving targets with the mixture-of-Gaussians target detection algorithm proceeds according to the following steps.
1. Initialise the model parameters
For a single Gaussian distribution, the probability density function can be written as Formula 2:
η(x; μ, Σ) = 1 / ((2π)^(n/2) |Σ|^(1/2)) · exp( −(x − μ)^T Σ^(−1) (x − μ) / 2 )    (Formula 2)
where x is the random variable vector representing the three colour channels of the pixel (n = 3); μ is the mean vector of the Gaussian distribution, i.e. the centre of the single-mode distribution; and Σ is the variance of the single Gaussian model, which reflects the width of the single-mode distribution, i.e. how unstable the pixel is. Several single Gaussian models are combined linearly to form the Gaussian mixture model P(x_t), as in Formula 3:
P(x_t) = Σ_{i=1..M} ω_{i,t} · η(x_t; μ_{i,t}, Σ_{i,t})    (Formula 3)
where t denotes the time, ω_{i,t} is the weight of the i-th single Gaussian distribution, and M is the total number of single Gaussian models, i.e. the number of peaks in the multi-modal pixel-value distribution. The size of M depends on the actual distribution of the pixel values: the larger M is, the more peaks can be represented and the stronger the ability to handle fluctuation; M is usually taken as 3-5.
2. Find the matching distribution
Let x_t be the pixel value at a pixel at time t, μ_{i,t} the mean of the i-th single Gaussian distribution at time t, and σ_{i,t}² its variance. For each single Gaussian distribution, test whether Formula 4 is satisfied; if it is, the current pixel matches that single Gaussian distribution, and the probability of that distribution should be comparatively large:
| x_t − μ_{i,t} | ≤ τ_σ · σ_{i,t}    (Formula 4)
where τ_σ is the default standard-deviation multiple.
3. Update the Gaussian mixture model
Updating the mixture model is relatively complex: not only the parameters of the Gaussian distribution functions (mean and variance) but also the weight of each distribution must be updated. The weight of the i-th single Gaussian distribution at time t is updated with Formula 5:
ω_{i,t} = (1 − α) · ω_{i,t−1} + α · M_{i,t}    (Formula 5)
where α is the learning rate of the weights (the larger α is, the faster the weights are updated) and M_{i,t} is the match indicator of the single Gaussian distribution, which takes one of two values. In the first case, the current pixel value matches some single Gaussian model in the pixel's mixture set (if more than one model matches, only the single best match is considered), and M_{i,t} = 1 for that model. In the other case, M_{i,t} = 0 for every single Gaussian model that does not match the current pixel value. After the weights of all single Gaussian distributions have been updated they must be normalised, as in Formula 6:
ω_{i,t} = ω_{i,t} / Σ_{j=1..M} ω_{j,t}    (Formula 6)
When the current pixel value matches some single model, the mean μ_{i,t} and standard deviation σ_{i,t} of that model must also be updated, since by the nature of the probability distribution the new observation necessarily affects the previously estimated distribution. The update is given by Formula 7:
μ_{i,t} = (1 − ρ) · μ_{i,t−1} + ρ · x_t
σ_{i,t}² = (1 − ρ) · σ_{i,t−1}² + ρ · (x_t − μ_{i,t})^T (x_t − μ_{i,t})    (Formula 7)
where ρ is the learning factor of the single Gaussian distribution, with ρ = α / ω_{i,t−1}.
When the current pixel value matches none of the single models in the Gaussian mixture set, the single model with the smallest weight is removed from the current set and a new single model is added, with weight equal to the minimum of all single-model weights, mean equal to the current observed pixel value, and a given, relatively large variance.
4. Sort the distributions representing the pixel
The quantity f_{i,t} = ω_{i,t} / σ_{i,t} is used as the priority criterion for judging whether a single Gaussian distribution is a background distribution: a large f_{i,t} means the distribution has a large weight and a small variance, so the probability that it is a background distribution is high. The background pixel model can be established by the following steps:
(1) compute the priority factor ω_{i,t} / σ_{i,t} of every single Gaussian distribution;
(2) sort all single Gaussian distributions by the priority factor from high to low; the larger ω_{i,t} / σ_{i,t} is, the more likely the single Gaussian distribution is a background distribution, and the smaller it is, the less likely; a newly created single Gaussian distribution replaces the one ranked last;
(3) select the first N single Gaussian distributions out of the M as the background model of the scene, according to Formula 8:
N = argmin_b ( Σ_{i=1..b} ω_{i,t} > T )    (Formula 8)
where T is the minimal measure (threshold) for estimating the background; adjusting T yields the optimal combination of single Gaussian distributions describing the background. T has a great influence on the validity of the algorithm, and its value is critical. If T is too small, for example so that only one Gaussian distribution serves as the background distribution, the Gaussian mixture model degenerates into a single Gaussian model; if T is too large, distributions with very small weights are also treated as background distributions, and an over-sensitive background distribution may absorb some moving foreground pixels.
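A minimal sketch of the ω/σ ranking and cumulative-weight selection of background distributions (Formula 8); the per-pixel data layout is an assumption.

```python
def select_background(weights, sigmas, T=0.5):
    """Rank single Gaussians by weight/sigma and keep the first N whose cumulative
    weight exceeds the background measure T (Formula 8). Inputs are parallel lists
    for one pixel's mixture; the layout is illustrative."""
    order = sorted(range(len(weights)), key=lambda i: weights[i] / sigmas[i], reverse=True)
    background, cumulative = [], 0.0
    for i in order:
        background.append(i)
        cumulative += weights[i]
        if cumulative > T:
            break
    return background   # indices of the distributions modelling the background
```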
The overall procedure of the moving-target detection program based on the Gaussian mixture background model consists of two outer loops (see Fig. 3): the first loops over the frames, and the second loops over every pixel of the current frame. Each pixel is processed step by step as follows. If this is the first frame, the pixel is modelled and each single Gaussian distribution is initialised. If it is not the first frame, the single Gaussian distribution matching the current observed pixel value is looked for. If a matching single Gaussian distribution is found, its parameters and weight are updated (if more than one distribution matches, only the parameters and weight of the distribution with the largest weight are updated), and the weights of the remaining single Gaussian distributions are reduced appropriately. If no single Gaussian model matches, a new single Gaussian model is created and replaces the single Gaussian distribution with the smallest weight in the original model. After the Gaussian model has been updated, the foreground is extracted: a pixel belonging to a moving target has no matching Gaussian model, so the sum of the weights of the other Gaussian models necessarily exceeds the background threshold, and the foreground is extracted according to this principle.
The values of the relevant parameters used when implementing the moving-target detection algorithm based on the Gaussian mixture background model are listed in Table 1. The flow chart of the algorithm is shown in Fig. 3; taking the video frame of Fig. 2 as an example, the result of foreground extraction with the above algorithm is shown in Fig. 4.
Table 1. Initial parameter values of the mixture-of-Gaussians model
Number of Gaussian distributions (M): 3
Default standard-deviation multiple (τ_σ): 2.5
Minimal measure for estimating the background (T): 0.5
Initial weight: 0.02
Initial variance: 18
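A from-scratch sketch of one pixel's mixture update (Formulas 4-7) using the Table 1 values, written for a single grayscale channel; the data structure and the per-pixel Python loop are illustrative only, since real implementations vectorise over the whole frame (or use a library routine such as the one shown earlier).

```python
import math

# Table 1 values: learning rate, std-dev multiple, initial weight, initial variance, M.
ALPHA, TAU_SIGMA, INIT_WEIGHT, INIT_VAR, M = 0.02, 2.5, 0.02, 18.0, 3

def init_pixel(x):
    """First frame: initialise M single Gaussians around the observed value."""
    return [{'w': INIT_WEIGHT, 'mu': float(x), 'var': INIT_VAR} for _ in range(M)]

def update_pixel(mixture, x):
    """One update step for a single pixel; mixture is a list of {'w','mu','var'} dicts."""
    matched = None
    for g in sorted(mixture, key=lambda g: g['w'], reverse=True):
        if abs(x - g['mu']) <= TAU_SIGMA * math.sqrt(g['var']):     # Formula 4: match test
            matched = g
            break
    if matched is not None:
        rho = ALPHA / max(matched['w'], 1e-6)                       # rho = alpha / w_{t-1}
    for g in mixture:
        g['w'] = (1 - ALPHA) * g['w'] + ALPHA * (1 if g is matched else 0)   # Formula 5
    if matched is not None:
        matched['mu'] = (1 - rho) * matched['mu'] + rho * x                  # Formula 7 (mean)
        matched['var'] = (1 - rho) * matched['var'] + rho * (x - matched['mu']) ** 2
    else:
        worst = min(mixture, key=lambda g: g['w'])                  # no match: replace weakest
        worst.update(w=min(g['w'] for g in mixture), mu=float(x), var=INIT_VAR)
    total = sum(g['w'] for g in mixture)
    for g in mixture:
        g['w'] /= total                                             # Formula 6: normalise
    return matched is None    # True -> treat the pixel as foreground (no matching model)
```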
(4) Bounding-rectangle extraction for foreground targets and target blob tracking
After the foreground has been extracted with the mixture-of-Gaussians background model, the binarised target image is morphologically processed (dilated), and a bounding rectangle is then extracted for each dilated target to form a target blob. The bounding rectangle is obtained by extracting the contour of the connected region of the binarised foreground target and taking the rectangle determined by the maxima and minima of the ordinates and abscissas of the points on the contour (see the rectangular frames in Fig. 5).
After the bounding rectangles have been extracted, the target blobs must be tracked continuously. The size of each blob is judged first: if a blob contains fewer than 50 pixels, or its length or width is smaller than 10 pixels, it is discarded and no longer analysed, so that moving targets which obviously cannot be an aircraft are excluded, improving detection efficiency and saving detection resources. For each remaining blob that meets the size requirement, a tracking identifier and number are established. When the next frame (now the current frame) is processed, the method checks whether there is a target blob within a certain distance of the previous blob, and judges whether the two are the same blob by computing the overlap relation between the current-frame blob and the previous-frame blob. If they are, the information of the previous-frame blob is copied into the current-frame blob, and the centre of the previous-frame blob is connected to the centre of the current-frame blob; the red curves in Fig. 5 are these centre-to-centre connecting lines drawn during blob tracking. Because the guide vehicle in that scene moves faster than the aircraft, the connecting line of the blob corresponding to the guide vehicle is longer. During tracking, if a blob is continuously lost for more than 10 frames, it is judged to have expired; in this way moving targets such as lorries and garbage trucks that stray into the scene can be excluded.
The overlap relation refers to the ratio of the overlap area between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; when the ratio is not smaller than the ratio threshold and/or the distance is not larger than the distance threshold, the two blobs are judged to be the same target blob.
The blob detection and tracking described above operates on the whole image. To improve the efficiency of aircraft detection, the present invention further analyses the targets inside the detection zone: if a target blob enters the detection zone it is marked, and this mark plays a key role in the subsequent in/out judgement. After an aircraft enters its parking stand it remains in the stand, i.e. the blob representing the aircraft stays inside the detection zone, whereas other moving targets do not remain inside the stand region. Conversely, an aircraft leaving its stand goes from stationary to moving and finally drives out of the stand region, i.e. it does not leave and then re-enter the stand. Therefore, in order to prevent false alarms caused by targets other than the aircraft entering or leaving the stand, the present invention judges the state of each blob with respect to the detection zone: for example, a blob that enters the detection zone from outside and then leaves it again is marked accordingly, and such marks, combined with the feature-point tracking below, form the final key information for judging whether an aircraft has entered or left its stand. For instance, when judging formal entry, if some blob carries the "entered and then left the detection zone" mark, it can be concluded that the moving target represented by that blob is not the aircraft.
(5) Feature-point matching and tracking based on the pyramidal optical flow algorithm
Feature-point matching and tracking with the pyramidal optical flow algorithm first requires extracting feature points over the whole detection zone. The feature points extracted in the present invention are preferably Harris corner points, extracted as follows:
(1) Filter every pixel of the image with horizontal and vertical difference operators to obtain the horizontal and vertical gradients I_x and I_y, where I(x, y) denotes the grayscale of the image sequence and I_x, I_y are its partial derivatives with respect to x and y, and then form the four elements of the matrix m (Formula 9):
m = [ I_x²  I_x·I_y ; I_x·I_y  I_y² ]    (Formula 9)
(2) Apply Gaussian smoothing to the four elements of m to obtain a new matrix m' (Formula 10: each element of m is convolved with a Gaussian kernel).
(3) Compute the corner response cim of each pixel from m' (Formula 11); the standard Harris response is cim = det(m') − k·(trace(m'))², with k a small empirical constant.
(4) A pixel is taken as a Harris corner point if it satisfies two conditions simultaneously: cim exceeds a certain threshold, and cim is a local maximum within a certain neighbourhood.
After the feature points have been extracted, they are matched with the pyramidal optical flow algorithm. The optical flow algorithm rests on three assumptions: (a) brightness constancy between consecutive frames; (b) temporal continuity, i.e. the motion, or the change of the image over time, is slow; (c) spatial coherence, i.e. pixels of the same surface patch have the same motion.
From the brightness-constancy assumption one obtains
I_x·u + I_y·v + I_t = 0    (Formula 12)
where u is the x-component and v the y-component of the velocity. When the inter-frame motion is very small, the pixels of a local region move consistently, so a system of equations over the neighbourhood pixels can be set up to solve for the motion of the centre pixel. Assuming n points form a rigid blob, n such equations can be written as A·d = b, where A is the n×2 matrix whose i-th row is (I_xi, I_yi), b is the n-vector with entries −I_ti, and d = (u, v)^T.
The least-squares problem for this system is solved through the normal equations (A^T A)·d = A^T b, i.e. by minimising ||A·d − b||², and when A^T A is invertible the solution is
d = (A^T A)^(−1) A^T b
By this computation a velocity vector is assigned to every feature point detected in the image, forming a motion vector field. The image can then be analysed dynamically from the velocity characteristics of each feature point: when a moving object is present, there is relative motion between the target and the background, and the velocity vectors formed by the moving object necessarily differ from those of the background, so the position of the moving object can be computed. The traditional optical flow algorithm satisfies the constant-brightness assumption only for small displacements; under large displacements the image brightness becomes discontinuous and the optical flow estimate fails. In order to compute the optical flow of fast-moving targets, a pyramidal layered iteration is introduced on top of the optical flow algorithm: the image is decomposed into a pyramid, with the top layer having the lowest resolution; the optical flow is first computed at the top layer, and the result plus the initial value of the layer above serves as the initial optical flow value of the next layer down; the optical flow is then computed for that layer, and the iteration is repeated for every layer below the top until the bottom layer is reached, where the final optical flow vectors are formed.
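A from-scratch sketch of the single-window Lucas-Kanade least-squares solve built on Formula 12 and the normal equations above; in practice the pyramidal library routine shown earlier would be used, and the window derivatives are assumed to be precomputed.

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Solve the window's flow (u, v) from I_x*u + I_y*v + I_t = 0 (Formula 12)
    via the normal equations (A^T A) d = A^T b. Ix, Iy, It hold the spatial and
    temporal derivatives sampled at the n pixels of one neighbourhood window."""
    A = np.column_stack([Ix.ravel(), Iy.ravel()])   # n x 2 gradient matrix
    b = -It.ravel()                                  # n-vector of temporal derivatives
    AtA = A.T @ A
    if np.linalg.det(AtA) < 1e-6:                    # A^T A not reliably invertible
        return None
    u, v = np.linalg.solve(AtA, A.T @ b)             # d = (A^T A)^-1 A^T b
    return float(u), float(v)
```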
In order to improve detection efficiency and reduce the amount of computation, the present invention does not perform feature-point extraction and matching on every frame, but once every 5 frames. Fig. 6 shows the result of matching and tracking feature points with the pyramidal optical flow algorithm.
(6) Judgement of the aircraft entering or leaving its parking stand
The final goal of the foregoing processing is to detect the aircraft entering or leaving its stand. Feature-point matching and tracking with the pyramidal optical flow algorithm yields the matched feature point set; the displacement between each pair of matched points is then computed, and if it exceeds 0.1 pixel the angle between the direction line and the matched feature-point vector is computed according to the angle formula
cos θ = (a · b) / (|a|·|b|)
where the vector a is the direction line, the vector b points from the coordinates of the feature point at the previous moment (the previous frame) to the coordinates of the matched feature point in the current frame, and θ is the angle between the two vectors.
If an aircraft is entering the stand, the angle θ should be less than 30 degrees; otherwise some other target is entering the stand. The present invention therefore counts only the feature-point pairs satisfying this angle condition. Only if the number of such pairs meets the set condition, and a target blob has been marked as having entered the detection zone as described above, is the next stage of the aircraft-entry analysis triggered. Once the aircraft is found to be entering the stand, about 50 frames are tracked continuously; if the track is not continuously lost for more than 3 frames, the provisional entry mark is triggered. It is then checked whether the displacement of the feature point set is below 0.1 pixel (i.e. whether the target blob has come to rest inside the detection zone) and whether the blob carries the "entered and then left the detection zone" mark (i.e. whether the blob really represents the aircraft); from this it is known whether the aircraft has formally entered its stand, and if so the entry information is transmitted.
Similarly, if an aircraft is leaving the stand, the angle θ is greater than 150 degrees; the feature-point pairs satisfying this angle condition are counted, and only if their number meets the set condition and a target blob has been marked as described above is the next stage of the departure analysis triggered. Once the aircraft is found to be starting to leave the stand, about 50 frames are tracked continuously; if the track is not continuously lost for more than 3 frames, the provisional departure mark is triggered. It is then checked whether there are multiple feature-point pairs in the detection zone with displacement greater than 0.1 pixel (i.e. whether the target blob has left the detection zone) and whether the blob carries the "left and then re-entered the detection zone" mark (i.e. whether the blob really represents the aircraft). If no feature point set has a displacement greater than 0.1 pixel, the target blob has left the detection zone; if the blob carries the "left and then re-entered" mark, the blob is not the aircraft. From this it is known whether the aircraft has formally left its stand, and if so the departure information is transmitted.
While judging whether the aircraft has entered or left, a certain number of frames (about 50, as above) are tracked continuously, i.e. a certain amount of time is reserved for sustained observation. At a typical 25 frames per second, with feature-point extraction, matching and tracking performed every 5 frames, tracking 50 of these samples corresponds to about 10 seconds; since a typical aircraft needs about 20 seconds to drive into or out of its stand, an observation window of about 10 seconds satisfies the detection requirement.
The present invention relies on intelligent image processing and pattern recognition technology. Based on the trajectory an aircraft follows when entering and leaving its stand, it uses video surveillance images and methods based on Gaussian mixture background modelling and feature-point matching and tracking to detect automatically when an aircraft enters or leaves its parking stand. The method eliminates, by multiple means, the various factors that could cause false alarms, and therefore greatly improves detection accuracy.

Claims (9)

1. A video-based method for automatically detecting an aircraft entering or leaving its parking stand, characterised in that: a video image of a monitored area covering the aircraft parking stand region is obtained; moving targets are identified and tracked; feature points of the image inside a detection zone are extracted and matched and tracked to obtain a matched feature point set; when, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the required quantity, a further judgement of formal entry or formal departure is made; when the displacement of the feature point set is smaller than a certain number of pixels and no moving target has entered and then left the aircraft parking stand region, the aircraft is judged to have formally entered its stand; when the displacement of the feature point set is smaller than or equal to a certain number of pixels and no moving target has left and then re-entered the aircraft parking stand region, the aircraft is judged to have formally left its stand; the detection zone is a region set on the video image corresponding to the aircraft parking stand region; and the moving targets are tracked as follows: check whether there is a target blob ("targeted mass") in the current frame within a certain distance of the position of a blob in the previous frame; if so, compute the ratio of the overlap area between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; when the ratio is not smaller than a ratio threshold and/or the distance is not larger than a distance threshold, judge the two blobs to be the same target blob and record the change of the blob centre position; and when a blob enters and then leaves the detection zone, or leaves and then re-enters it, make the corresponding mark.
2. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 1, characterised in that the moving targets are identified as follows: the foreground is extracted with a mixture-of-Gaussians background model to obtain a binary image; the binary image is dilated; and a bounding rectangle is extracted for each foreground region in the dilated image to form a target blob, each target blob being a moving target.
3. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 2, characterised in that the bounding rectangle is extracted as follows: the contour of each foreground connected region is extracted, and the rectangle spanned by the minimum and maximum of the ordinates and abscissas of the points on the contour is taken as the bounding rectangle.
4. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 3, characterised in that target blobs containing fewer than 50 pixels, or whose length or width is smaller than 10 pixels, are not used as objects of moving-target tracking.
5. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 4, characterised in that, during moving-target tracking, if a target blob is continuously lost for a certain number of frames, the blob is judged to have expired.
6. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 4, characterised in that Harris corner points of the image inside the detection zone are extracted from the grayscale video image as feature points, and the feature points are matched and tracked with a feature-based optical flow algorithm, the optical flow field being computed by pyramidal layered iteration on top of the optical flow algorithm: the image is decomposed into a pyramid, with the top layer having the lowest resolution; the optical flow is computed starting from the top layer; the result plus the initial value of the layer above serves as the initial optical flow value of the next layer; the optical flow is then computed for that layer; the iteration is repeated for every layer below the top; and the final optical flow vectors are formed at the bottom layer.
7. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 6, characterised in that the aircraft drive-in direction requirement is 0° ≤ θ < 30° and the drive-out direction requirement is 150° < θ ≤ 180°, where θ is the angle between the direction line and the vector of a matched feature point, the direction line is a straight line pointing from the aircraft tail towards the nose set according to the prescribed direction in which an aircraft enters the detection zone, and the vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame.
8. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 1, 2, 3, 4, 5, 6 or 7, characterised by comprising the following steps:
image acquisition: obtaining video images of the monitored area in real time, the monitored area covering the aircraft parking stand region;
image preprocessing: setting the detection zone uniformly on the video images according to the aircraft parking stand region, converting the acquired images to grayscale, and optionally applying scaling before setting the detection zone, the scaling being the rescaling of video images acquired by different devices to a uniform length and width;
moving-target detection: extracting the foreground from the video image or the grayscale image with a mixture-of-Gaussians background model to obtain a binary image, dilating the binary image, and extracting a bounding rectangle for each foreground region in the dilated image to form a target blob, the bounding rectangle being the rectangle spanned by the minimum and maximum ordinates and abscissas of the points on the contour of the foreground connected region;
moving-target tracking: establishing a tracking identifier and number for each target blob; checking whether there is a target blob in the current frame within a certain distance of the position of a blob in the previous frame; if so, computing the ratio of the overlap area between the current-frame blob and the previous-frame blob to the area of the previous-frame blob, and/or the distance between the two blob centres; when the ratio is not smaller than the ratio threshold and/or the distance is not larger than the distance threshold, judging the two blobs to be the same target blob, copying the information of the previous-frame blob into the current frame, unifying the numbering of the same blob across frames, and recording the change of its centre position; if a blob is continuously lost for a certain number of frames, judging the blob to have expired; and when a blob enters and then leaves the detection zone, or leaves and then re-enters it, making the corresponding mark;
feature-point matching and tracking: extracting Harris corner points from the detection-zone portion of the grayscale image as feature points, and matching and tracking the feature points with a pyramidal optical flow algorithm, the optical flow field being computed by pyramidal layered iteration on top of the optical flow algorithm: the image is decomposed into a pyramid, with the top layer having the lowest resolution; the optical flow is computed starting from the top layer; the result plus the initial value of the layer above serves as the initial value of the next layer; the optical flow is then computed for that layer; the iteration is repeated for every layer below the top; and the final optical flow vectors are formed at the bottom layer;
aircraft in/out judgement: for the matched feature point set obtained by the matching and tracking, computing the angle θ between the vector of each matched feature point and the direction line, the drive-in direction requirement being 0° ≤ θ < 30° and the drive-out direction requirement being 150° < θ ≤ 180°, where the direction line is a straight line pointing from the aircraft tail towards the nose set according to the prescribed direction in which an aircraft enters the detection zone, and the vector of a matched feature point points from the coordinates of the feature point in the previous frame to the coordinates of the matched feature point in the current frame;
when, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the required quantity, making a provisional entry mark and then the formal entry judgement: if the displacement of the feature point set is smaller than a certain number of pixels and no target blob carries the "entered and then left the detection zone" mark, the aircraft is judged to have formally entered its stand; and when, over a certain number of consecutive frames, the number of matched feature-point pairs satisfying the drive-out direction requirement reaches the set quantity, making a provisional departure mark and then the formal departure judgement: if the displacement of the feature point set is smaller than or equal to a certain number of pixels and no target blob carries the "left and then re-entered the detection zone" mark, the aircraft is judged to have formally left its stand.
9. The video-based aircraft parking-stand entry/exit detection method as claimed in claim 8, characterised by adopting one or more of the following preferred options:
in the image preprocessing step, the detection zone is defined by successively selecting more than 3 and fewer than 10 boundary points, the detection zone being the closed polygonal area enclosed by connecting all the boundary points in order;
in the moving-target detection step, the mixture-of-Gaussians background model parameters are: number of Gaussian distributions 3, default standard-deviation multiple 2.5, minimal measure for estimating the background 0.5, initial weight 0.02, initial variance 18;
the moving-target detection step is carried out once every certain number of frames;
in the moving-target tracking step, the size of each target blob is judged first: blobs containing fewer than 50 pixels, or whose length or width is smaller than 10 pixels, are discarded and not used as objects of tracking, and if a blob is continuously lost for 10 or more frames it is judged to have expired;
in the feature-point matching and tracking step, the displacement between each pair of matched feature points is computed first, and the corresponding angle θ is computed only if the displacement exceeds 0.1 pixel;
when making the provisional entry or departure judgement, a certain number of frames are tracked continuously, and the provisional entry or departure mark is made only if the track is not continuously lost for more than 3 frames during that period;
when making the provisional entry or departure judgement, tracking continues for not less than 50 frames, and the provisional entry or departure mark is made when the number of feature-point pairs satisfying the drive-in or drive-out direction requirement reaches the set quantity;
when making the formal entry or departure judgement, the reference value for the feature-point-set displacement is 0.1 pixel;
the feature-point matching and tracking step is carried out once every certain number of frames.
Application CN201510153902.3A, priority and filing date 2015-04-02: Aircraft disengaging berth automatic testing method based on video. Granted as CN104966045B (en), Active.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510153902.3A CN104966045B (en) 2015-04-02 2015-04-02 Aircraft disengaging berth automatic testing method based on video

Publications (2)

Publication Number Publication Date
CN104966045A CN104966045A (en) 2015-10-07
CN104966045B true CN104966045B (en) 2018-06-05

Family

ID=54220083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510153902.3A Active CN104966045B (en) 2015-04-02 2015-04-02 Aircraft disengaging berth automatic testing method based on video

Country Status (1)

Country Link
CN (1) CN104966045B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894823B (en) * 2016-06-03 2018-08-21 北京精英智通科技股份有限公司 A kind of parking detection method and apparatus and system
CN105844959B (en) * 2016-06-13 2018-07-24 北京精英智通科技股份有限公司 The determination method, device and vehicle that vehicle enters position go out the determination method of position, device
CN106127805A (en) * 2016-06-17 2016-11-16 北京精英智通科技股份有限公司 A kind of vehicle enters position detecting method and system
JP7111444B2 (en) * 2016-07-11 2022-08-02 株式会社リコー Judgment unit, process judgment device and process judgment method
CN106204594A (en) * 2016-07-12 2016-12-07 天津大学 A kind of direction detection method of dispersivity moving object based on video image
CN106228835B (en) * 2016-07-18 2019-04-26 北京精英智通科技股份有限公司 A kind of parking stall Parking judgment method and system
CN109478333A (en) * 2016-09-30 2019-03-15 富士通株式会社 Object detection method, device and image processing equipment
CN106530818A (en) * 2016-12-30 2017-03-22 北京航空航天大学 Intelligent parking lot management system based on video processing technology
CN107220983B (en) * 2017-04-13 2019-09-24 中国农业大学 A kind of live pig detection method and system based on video
CN108021883B (en) * 2017-12-04 2020-07-21 深圳市赢世体育科技有限公司 Method, device and storage medium for recognizing movement pattern of sphere
CN108257149A (en) * 2017-12-25 2018-07-06 翟玉婷 A kind of Ship Target real-time tracking detection method based on optical flow field
CN108960052A (en) * 2018-05-28 2018-12-07 南京邮电大学 Ship overload detecting method based on video flowing
CN109040708A (en) * 2018-09-20 2018-12-18 珠海瑞天安科技发展有限公司 A kind of aircraft level ground monitoring method and system based on panoramic video
CN109544592B (en) * 2018-10-26 2023-01-17 天津理工大学 Moving object detection algorithm for camera movement
CN109754411A (en) * 2018-11-22 2019-05-14 济南艾特网络传媒有限公司 Building pivot frame larceny detection method and system are climbed based on optical flow method target following
CN109697420A (en) * 2018-12-17 2019-04-30 长安大学 A kind of Moving target detection and tracking towards urban transportation
CN109584558A (en) * 2018-12-17 2019-04-05 长安大学 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals
CN109871786A (en) * 2019-01-30 2019-06-11 浙江大学 A kind of flight ground safeguard job specification process detection system
CN109887343B (en) * 2019-04-04 2020-08-25 中国民航科学技术研究院 Automatic acquisition and monitoring system and method for flight ground service support nodes
CN110136168B (en) * 2019-04-26 2021-06-18 北京航空航天大学 Multi-rotor speed measuring method based on feature point matching and optical flow method
CN110097659A (en) * 2019-05-16 2019-08-06 深圳市捷赛机电有限公司 Catch, the time recording method for removing catch and Related product on a kind of aircraft
CN110164153A (en) * 2019-05-30 2019-08-23 哈尔滨理工大学 A kind of adaptive timing method of traffic signals
CN110211159A (en) * 2019-06-06 2019-09-06 中国民航科学技术研究院 A kind of aircraft position detection system and method based on image/video processing technique
CN110210427B (en) * 2019-06-06 2021-04-23 中国民航科学技术研究院 Corridor bridge working state detection system and method based on image processing technology
CN112347810B (en) * 2019-08-07 2024-08-02 杭州萤石软件有限公司 Method and device for detecting moving target object and storage medium
CN111368277A (en) * 2019-11-21 2020-07-03 北汽福田汽车股份有限公司 Vehicle starting method and device, storage medium and vehicle
CN111539974B (en) * 2020-04-07 2022-11-11 北京明略软件系统有限公司 Method and device for determining track, computer storage medium and terminal
CN111709341B (en) * 2020-06-09 2023-04-28 杭州云视通互联网科技有限公司 Method and system for detecting operation state of passenger elevator car
CN112528729B (en) * 2020-10-19 2024-09-27 浙江大华技术股份有限公司 Video-based aircraft bridge event detection method and device
CN112530205A (en) * 2020-11-23 2021-03-19 北京正安维视科技股份有限公司 Airport parking apron airplane state detection method and device
CN114822084A (en) * 2021-01-28 2022-07-29 阿里巴巴集团控股有限公司 Traffic control method, target tracking method, system, device, and storage medium
CN112861773B (en) * 2021-03-04 2024-08-20 超级视线科技有限公司 Multi-level-based berth state detection method and system
CN112949548A (en) * 2021-03-19 2021-06-11 捻果科技(深圳)有限公司 Automatic identification method for wheel gear placement specification on civil aviation parking apron
CN113723222B (en) * 2021-08-12 2024-02-27 捻果科技(深圳)有限公司 Automatic identification method for temporary parking area occupied by unpowered equipment for long time
CN114155490B (en) * 2021-12-08 2024-02-27 北京航易智汇科技有限公司 Airport airplane berth warning lamp safety control system and method
CN115083203B (en) * 2022-08-19 2022-11-15 深圳云游四海信息科技有限公司 Method and system for inspecting parking in road based on image recognition berth
CN115880646A (en) * 2023-02-20 2023-03-31 中国民航大学 Method for identifying in-out-of-position state of airplane

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE273548T1 (en) * 2001-12-20 2004-08-15 Safegate Int Ab IDENTIFICATION OF THE CENTERLINE IN A COUPLING GUIDANCE SYSTEM
DE102010020208A1 (en) * 2010-05-12 2011-11-17 Volkswagen Ag Method for parking or parking a vehicle and corresponding assistance system and vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102252619A (en) * 2011-04-21 2011-11-23 中国民航大学 Displacement distance measuring and displaying system in airplane berthing process
CN102567093A (en) * 2011-12-20 2012-07-11 广州粤嵌通信科技股份有限公司 Berth type recognizing method applied in visual berth automatic guiding system
CN102915638A (en) * 2012-10-07 2013-02-06 复旦大学 Surveillance video-based intelligent parking lot management system
CN104071351A (en) * 2014-06-20 2014-10-01 中国民航大学 Monitoring system for taking off and landing of plane on airport runway

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Vision-Based Automatic Aircraft Docking Guidance; Guo Yanying; China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 15 June 2014 (No. 06, 2014); sections 1.2.1, 2.2.1, 5.5.1-5.5.2 *

Also Published As

Publication number Publication date
CN104966045A (en) 2015-10-07

Similar Documents

Publication Publication Date Title
CN104966045B (en) Aircraft disengaging berth automatic testing method based on video
CN106203265B (en) A kind of Construction Fugitive Dust Pollution source monitors automatically and coverage forecasting system and method
CN105373135B (en) A kind of method and system of aircraft docking guidance and plane type recognition based on machine vision
CN110136449A (en) Traffic video frequency vehicle based on deep learning disobeys the method for stopping automatic identification candid photograph
CN103218831B (en) A kind of video frequency motion target classifying identification method based on profile constraint
CN106845364B (en) Rapid automatic target detection method
CN109949361A (en) A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning
CN109785363A (en) A kind of unmanned plane video motion Small object real-time detection and tracking
Sommer et al. Flying object detection for automatic UAV recognition
CN103258432A (en) Traffic accident automatic identification processing method and system based on videos
CN103268470B (en) Object video real-time statistical method based on any scene
CN103903278A (en) Moving target detection and tracking system
CN103605971B (en) Method and device for capturing face images
CN104268851A (en) ATM self-service business hall behavior analysis method based on depth information
CN107133973A (en) A kind of ship detecting method in bridge collision prevention system
CN102447835A (en) Non-blind area multi-target cooperative tracking method and system
CN107220603A (en) Vehicle checking method and device based on deep learning
CN109190444A (en) A kind of implementation method of the lane in which the drivers should pay fees vehicle feature recognition system based on video
WO2012005461A2 (en) Method for automatically calculating information on clouds
CN106127812B (en) A kind of passenger flow statistical method of the non-gate area in passenger station based on video monitoring
CN104517095A (en) Head division method based on depth image
CN107273852A (en) Escalator floor plates object and passenger behavior detection algorithm based on machine vision
CN102930524A (en) Method for detecting heads based on vertically-placed depth cameras
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
CN112581503A (en) Multi-target detection and tracking method under multiple visual angles

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Su Jie

Inventor after: Lin Shuhan

Inventor after: He Bin

Inventor after: Dong Huayu

Inventor before: Su Jie

Inventor before: He Bin

Inventor before: Dong Huayu

CB03 Change of inventor or designer information
CP03 Change of name, title or address

Address after: 100102 201A 2, No. 106 building, Li Ze Chung garden, Chaoyang District, Beijing.

Patentee after: BEIJING TERRAVISION TECHNOLOGY CO., LTD.

Address before: 100102 201A, block E, two Wangjing Road, Chaoyang District, Beijing 2

Patentee before: Beijing Tianrui Kongjian Technology Co., Ltd.

CP03 Change of name, title or address