CN103400380B - Single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset - Google Patents
Single-camera underwater target three-dimensional trajectory simulation method fusing image matrix offset
- Publication number
- CN103400380B CN103400380B CN201310317119.7A CN201310317119A CN103400380B CN 103400380 B CN103400380 B CN 103400380B CN 201310317119 A CN201310317119 A CN 201310317119A CN 103400380 B CN103400380 B CN 103400380B
- Authority
- CN
- China
- Prior art keywords
- target
- camera
- underwater target
- motion
- underwater
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a single-camera underwater target three-dimensional trajectory simulation method that fuses the image matrix offset. Under single-camera imaging conditions, the three-dimensional motion trajectory of an underwater target is simulated within a Bayesian tracking framework by combining underwater scene depth information with camera motion offset vector information. Within the Bayesian tracking framework, target tracking is performed on the underwater video and the target center position parameters are output; the dark channel prior method is used to compute the underwater scene depth information; at the same time, the SURF features of background points in adjacent frames are computed and matched to obtain the camera motion offset vector between adjacent frames; finally, the target position information, the underwater scene depth information and the camera motion offset vector information are combined to simulate the three-dimensional motion trajectory of the underwater target. The present invention can truly and reliably simulate the three-dimensional motion trajectory of an underwater target from single-camera video, with high operating efficiency.
Description
Technical field
The present invention relates to a single-camera underwater target three-dimensional motion trajectory simulation method that fuses the image matrix offset, and belongs to the technical field of machine vision.
Background technology
In recent years, with the rapid development of science and technology, the principles of digital video cameras and digital still cameras have been successfully applied to the design and manufacture of underwater cameras, which are widely used in deep-sea scientific investigation and coastal ocean development. In underwater machine vision, higher-level applications need to locate the position of the target in every frame of the image; target tracking is the key technology here, and the final result should simulate the motion trajectory of the target.
The few existing algorithms capable of simulating the three-dimensional trajectory of an underwater target all use multi-camera stereo vision technology and require extremely strict camera calibration. This makes both the hardware complexity and the computational complexity of the algorithms high, so it is difficult to meet the needs of routine applications. In addition, the wide use of mobile cameras brings a new challenge: as the camera moves, not only the moving object but the position matrix of the whole video image is offset. In this case, common underwater target tracking methods are no longer applicable; the target trajectory deviation caused by the camera motion offset must be compensated.
In view of the above problems, it is necessary to develop a method that obtains the camera motion offset vector between adjacent video frames and uses it as an important parameter to simulate the three-dimensional motion trajectory of an underwater target.
Summary of the invention
Object of the invention: the technical problem to be solved by the present invention is to provide an underwater target three-dimensional motion trajectory simulation method that realizes the simulation of the three-dimensional motion trajectory of an underwater target by combining target position information, underwater scene depth information and camera motion offset vector information.
Technical solution: to solve the above technical problem, the technical solution adopted by the present invention is:
A single-camera underwater target three-dimensional trajectory simulation method fusing the image matrix offset comprises the following steps:
Under single-camera imaging conditions, the underwater target is tracked within a Bayesian filtering framework and the underwater target position coordinates are output; the dark channel prior method is used to compute the underwater scene depth information; at the same time, the SURF features of background points in adjacent frames are computed and image matching is performed based on these features to obtain the camera motion offset vector; finally, the underwater target position coordinates are corrected by the camera motion offset vector, the real underwater target center position coordinates are output, and, combined with the underwater scene depth information, the three-dimensional motion trajectory of the underwater target is simulated.
The computation of the SURF-based camera motion offset vector proceeds as follows: the SURF feature points of the image background in adjacent frames are computed and a feature vector is constructed for each feature point; the similarity of the feature vectors is then measured by Euclidean distance to obtain a distance set; a threshold is set and feature matching is performed; finally, each pair of matched feature points in adjacent frames is subtracted, giving a set of displacement differences whose mean value is the motion offset vector of the camera.
Beneficial effects: the present invention is the first method to simulate the three-dimensional motion trajectory of an underwater target from a single camera. Given ordinary monocular video, the method can truly and reliably simulate the three-dimensional motion trajectory of an underwater target. The underwater target three-dimensional motion trajectory simulation method of the present invention significantly reduces the hardware complexity of building the tracking system, requires no tedious camera calibration, and greatly reduces the computational complexity of the algorithm; the method can therefore be integrated into a wider range of underwater video systems, and its applicability is significantly improved.
Brief description of the drawings
Fig. 1 is a flow chart of the underwater target three-dimensional motion trajectory simulation method of the present invention;
Fig. 2 is a flow chart of the camera motion offset vector computation in the underwater target three-dimensional motion trajectory simulation method of the present invention;
Fig. 3 shows the 9×9 box filter templates.
Detailed description of the invention
The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope; after reading the present invention, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the appended claims of this application.
As shown in Fig. 1, the single-camera underwater target three-dimensional trajectory simulation method fusing the image matrix offset comprises the following steps:
Under single-camera imaging conditions, the underwater target is tracked within a Bayesian filtering framework and the underwater target position coordinates are output. The dark channel prior method is used to compute the underwater scene depth information, i.e. the distances from the underwater target and the background to the camera. At the same time, the SURF features of background points in adjacent frames are computed, image matching is performed based on these features, and the camera motion offset vector is obtained. The underwater target position coordinates are corrected by the camera motion offset vector and the real underwater target center position is output; combined with the underwater scene depth information, the three-dimensional motion trajectory of the underwater target is simulated.
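The dark channel prior depth step can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the patch size, the crude water-light estimate `A`, and the attenuation coefficient `beta` are assumed parameters.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min-filter over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def depth_from_dark_channel(img, beta=1.0, omega=0.95, patch=15):
    """Relative scene depth via the dark channel prior:
    transmission t(x) = 1 - omega * dark(I/A), depth d(x) = -ln(t(x)) / beta."""
    A = img.reshape(-1, 3).max(axis=0)  # crude estimate of the background (water) light
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, 1e-3, 1.0)           # avoid log(0)
    return -np.log(t) / beta
```

Pixels with a large dark channel value (heavily scattered regions) receive a small transmission and therefore a large relative depth.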
The computation of the SURF-based camera motion offset vector mainly comprises the following steps: feature point detection, generation of the feature descriptors, matching of the feature points of adjacent video frames, and computation of the camera motion offset vector.
In feature point detection with SURF features, the SURF algorithm uses the Hessian matrix to extract feature points:
L(x, σ) = G(σ) * I(x),
where σ is the scale, G(σ) is the two-dimensional Gaussian function, and L(x, σ) is the convolution of G(σ) with the image I at point x.
In the SURF algorithm, the second-order Gaussian filters are approximated by box filters; the 9×9 box filters are shown in Fig. 3 and correspond to second-order Gaussian filtering with scale factor σ = 1.2. An image pyramid over different scales is formed by enlarging the box filter size on the original image, the image convolutions are accelerated with the integral image, and the determinant of the Hessian matrix is then obtained as:
det(H) = Dxx·Dyy − (0.9·Dxy)²
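A minimal sketch of this determinant computation follows. For brevity, the second derivatives Dxx, Dyy and Dxy are taken with simple finite differences standing in for the 9×9 box filters of Fig. 3, so the filter geometry here is an assumption, not the patent's templates.

```python
import numpy as np

def hessian_det(img, w=0.9):
    """Approximate det(H) = Dxx*Dyy - (w*Dxy)^2 at every interior pixel.
    Finite-difference second derivatives stand in for the box filter responses."""
    Dxx = np.zeros_like(img)
    Dyy = np.zeros_like(img)
    Dxy = np.zeros_like(img)
    Dxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    Dyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    Dxy[1:-1, 1:-1] = (img[2:, 2:] - img[2:, :-2]
                       - img[:-2, 2:] + img[:-2, :-2]) / 4.0
    return Dxx * Dyy - (w * Dxy) ** 2
```

On a blob-like structure the determinant response peaks at the blob center, which is why it serves as the detection measure.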
For the extreme points detected by the Hessian matrix, each extreme point, its 8 neighboring points at the same scale, and the 9 points at each of the two adjacent scales above and below form a 3×3×3 three-dimensional neighborhood. Each extreme point is compared with the remaining 26 points in this neighborhood, and only when its value is greater than that of all 26 neighbors is it taken as a candidate feature point. After the candidate feature points are obtained, interpolation is performed in scale space to obtain stable feature point positions and their scale values.
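The 3×3×3 extremum test above can be sketched as follows; the stack layout (scales × height × width), the function name and the threshold parameter are illustrative assumptions.

```python
import numpy as np

def local_maxima_3d(stack, threshold=0.0):
    """Return (scale, row, col) triples whose det-of-Hessian response exceeds
    all 26 neighbours in the 3x3x3 scale-space neighbourhood."""
    s, h, w = stack.shape
    points = []
    for k in range(1, s - 1):          # skip the border scales
        for i in range(1, h - 1):      # skip the image border
            for j in range(1, w - 1):
                v = stack[k, i, j]
                if v <= threshold:
                    continue
                nb = stack[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                # strict maximum: v is the unique largest value of the 27 cells
                if v >= nb.max() and (nb == v).sum() == 1:
                    points.append((k, i, j))
    return points
```

The sub-voxel interpolation that refines these candidates into stable positions and scale values is omitted from the sketch.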
To ensure rotational invariance, the feature point orientation is determined first. In a circular neighborhood of radius 6s centered at the feature point (s is the scale of the feature point), the Haar wavelet responses in the x and y directions are computed, and the responses are weighted with Gaussian coefficients so that responses closer to the feature point contribute more. The x- and y-direction Haar wavelet responses within a 60° sector are then summed to form a local direction vector; the whole circular region is traversed, and the direction of the longest vector is selected as the principal direction of the feature point.
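The sliding-window orientation selection can be sketched as follows; the 72 window start angles and the function name `dominant_orientation` are illustrative choices, and the Haar responses are assumed to be precomputed.

```python
import numpy as np

def dominant_orientation(dx, dy, weights=None, window=np.pi / 3):
    """Slide a 60-degree window over the Haar responses (dx, dy), sum the
    responses inside the window, and return the direction of the longest
    summed vector as the feature point's principal orientation."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    if weights is None:
        weights = np.ones_like(dx)     # Gaussian weights would go here
    ang = np.arctan2(dy, dx)
    best_len, best_dir = -1.0, 0.0
    for start in np.linspace(-np.pi, np.pi, 72, endpoint=False):
        inside = (ang - start) % (2 * np.pi) < window
        sx = (weights * dx)[inside].sum()
        sy = (weights * dy)[inside].sum()
        length = np.hypot(sx, sy)
        if length > best_len:
            best_len, best_dir = length, np.arctan2(sy, sx)
    return best_dir
```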
After the feature point orientation is selected, a square neighborhood of side length 20s is constructed centered at the feature point and divided into 4×4 subregions. In each subregion, the Haar wavelet responses of 5×5 sample points in the horizontal and vertical directions, denoted dx and dy respectively, are computed and weighted with a Gaussian window function, yielding a four-dimensional vector V = (∑dx, ∑dy, ∑|dx|, ∑|dy|). The 16 subregions of each feature point thus form a 64-dimensional description vector, which is normalized to give the feature point descriptor.
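The descriptor construction can be sketched as follows, assuming the Haar responses have already been sampled on a 20×20 grid around the feature point; the Gaussian weighting is omitted for brevity.

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Build the 64-dimensional SURF descriptor: for each of the 4x4
    subregions (5x5 samples each) accumulate (sum dx, sum dy, sum |dx|,
    sum |dy|), then normalise the 64-vector to unit length."""
    dx = np.asarray(dx, float).reshape(20, 20)
    dy = np.asarray(dy, float).reshape(20, 20)
    desc = []
    for bi in range(4):
        for bj in range(4):
            sx = dx[5 * bi:5 * bi + 5, 5 * bj:5 * bj + 5]
            sy = dy[5 * bi:5 * bi + 5, 5 * bj:5 * bj + 5]
            desc += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    desc = np.array(desc)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```

The unit normalization makes the descriptor robust to global contrast changes.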
The similarity of two feature vectors is measured by the Euclidean distance:
d(Xi, Xj) = sqrt( Σ_{k=1..N} (Xik − Xjk)² )
where Xik is the k-th element of the feature vector of the i-th feature point in the previous frame image, Xjk is the k-th element of the feature vector of the j-th feature point in the next frame image, and N is the dimension of the feature vectors.
For the feature vector of each feature point in the previous frame image, compute its Euclidean distance to the feature vector of every feature point in the next frame image to obtain a distance set, and sort the set in ascending order. Set a threshold: when the ratio of the smallest Euclidean distance to the second-smallest Euclidean distance is less than the threshold, the two feature points are considered a match. The smaller the threshold, the fewer but the more stable the matched pairs. Let the matched feature points obtained from frame t and frame t−1 be Pt(m) and Pt−1(m), m = 1, …, M, both three-dimensional vectors comprising x, y and z. Finally, all matched feature point pairs are subtracted and the mean value of the resulting difference set is computed, giving the offset vector of the camera:
δt = (1/M) · Σ_{m=1..M} ( Pt(m) − Pt−1(m) )
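The ratio-test matching can be sketched as follows; the ratio value 0.7 is an illustrative threshold, not one fixed by the patent.

```python
import numpy as np

def ratio_match(desc_prev, desc_next, ratio=0.7):
    """For each descriptor of the previous frame, sort the Euclidean distances
    to all descriptors of the next frame and accept the nearest candidate only
    if nearest / second-nearest < ratio."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_next - d, axis=1)
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

A smaller ratio rejects ambiguous matches, which is exactly the fewer-but-more-stable trade-off described above.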
The image coordinate system of the first frame is taken as the initial coordinate system and reference, i.e. δ1 = (0, 0, 0). With the camera motion offset vectors obtained from the above formula, the coordinate position of frame t is corrected as follows:
P′t = Pt − Σ_{k=1..t} δk
where Pt is the target position output by the tracker in frame t and P′t is the corrected position.
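The offset estimation and cumulative correction under the first-frame reference convention δ1 = (0, 0, 0) can be sketched as follows; the 2-D example values stand in for the patent's three-component vectors, and the function names are illustrative.

```python
import numpy as np

def camera_offsets(matched_prev, matched_next):
    """Mean displacement of the matched background points between frame t-1
    and frame t: the camera motion offset vector delta_t."""
    return (np.asarray(matched_next, float)
            - np.asarray(matched_prev, float)).mean(axis=0)

def correct_positions(raw_positions, offsets):
    """Subtract the accumulated camera offsets from each tracked position,
    with delta_1 = 0 so the first frame is the reference coordinate system."""
    cum = np.cumsum(np.asarray(offsets, float), axis=0)
    return np.asarray(raw_positions, float) - cum
```

For a static target filmed by a panning camera, the raw track drifts with the camera while the corrected track stays fixed.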
Finally, combining the underwater scene depth information with the corrected underwater target center position coordinates, a three-dimensional trajectory reflecting the motion tendency of the target can be output in the image coordinate system.
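Assembling the simulated trajectory then amounts to pairing the corrected center coordinates with the per-frame depth of the target; the function name is an illustrative choice.

```python
def trajectory_3d(corrected_xy, depths):
    """Stack the corrected image coordinates with the dark-channel depth of
    the target centre: one (x, y, z) sample per frame of the simulated track."""
    return [(x, y, z) for (x, y), z in zip(corrected_xy, depths)]
```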
Claims (1)
1. A single-camera underwater target three-dimensional trajectory simulation method fusing an image matrix offset, characterized in that it comprises the following steps:
under single-camera imaging conditions, tracking the underwater target within a Bayesian filtering framework and outputting underwater target position coordinates; computing the underwater scene depth information with the dark channel prior method; at the same time computing the SURF features of background points in adjacent frames, performing image matching based on these features, and obtaining a camera motion offset vector; finally correcting the underwater target position coordinates by the camera motion offset vector, outputting the real underwater target center position coordinates, and, in combination with the underwater scene depth information, simulating the three-dimensional motion trajectory of the underwater target;
wherein the computation of said SURF-based camera motion offset vector is: computing the SURF feature points of the image background in adjacent frames and constructing a feature vector for each feature point; measuring the similarity of the feature vectors by Euclidean distance to obtain a distance set; setting a threshold and performing feature matching; and finally subtracting each pair of matched feature points in adjacent frames to obtain a set of displacement differences and computing their mean value, which gives the motion offset vector of the camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310317119.7A CN103400380B (en) | 2013-07-25 | 2013-07-25 | The single camera submarine target three-dimensional track analogy method of fusion image matrix offset |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103400380A CN103400380A (en) | 2013-11-20 |
CN103400380B true CN103400380B (en) | 2016-11-23 |
Family
ID=49563992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310317119.7A Expired - Fee Related CN103400380B (en) | 2013-07-25 | 2013-07-25 | The single camera submarine target three-dimensional track analogy method of fusion image matrix offset |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103400380B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280386B (en) * | 2017-01-05 | 2020-08-28 | 浙江宇视科技有限公司 | Monitoring scene detection method and device |
CN108184096B (en) * | 2018-01-08 | 2020-09-11 | 北京艾恩斯网络科技有限公司 | Panoramic monitoring device, system and method for airport running and sliding area |
CN110659547B (en) * | 2018-06-29 | 2023-07-14 | 比亚迪股份有限公司 | Object recognition method, device, vehicle and computer-readable storage medium |
CN114245096B (en) * | 2021-12-08 | 2023-09-15 | 安徽新华传媒股份有限公司 | Intelligent photographing 3D simulation imaging system |
CN114092523B (en) * | 2021-12-20 | 2024-07-02 | 常州星宇车灯股份有限公司 | Matrix reading lamp capable of tracking hands by lamplight and control method thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5448936A (en) * | 1994-08-23 | 1995-09-12 | Hughes Aircraft Company | Destruction of underwater objects |
CN102592290A (en) * | 2012-02-16 | 2012-07-18 | 浙江大学 | Method for detecting moving target region aiming at underwater microscopic video |
CN102622764A (en) * | 2012-02-23 | 2012-08-01 | 大连民族学院 | Target tracking method on basis of movable camera platform |
Non-Patent Citations (1)
Title |
---|
Cai Rongtai et al., "A Survey of Video Target Tracking Algorithms", Video Application and Engineering, 2010, No. 12, full text. * |
Also Published As
Publication number | Publication date |
---|---|
CN103400380A (en) | 2013-11-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20200622 Address after: 266590 No. 579, Bay Road, Huangdao District, Shandong, Qingdao Patentee after: Chen Erkui Address before: Xikang Road, Gulou District of Nanjing city of Jiangsu Province, No. 1 210098 Patentee before: HOHAI University |
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20161123 |