CN106658023A - End-to-end visual odometer and method based on deep learning - Google Patents
Info
- Publication number
- CN106658023A (application CN201611191845.9A)
- Authority
- CN
- China
- Prior art keywords
- network
- training
- optical flow
- inter-frame
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
Abstract
The invention discloses an end-to-end visual odometry system and method based on deep learning. The system comprises a cascaded optical flow network and an inter-frame estimation network. Taking adjacent frames of an image sequence in a data set as input, the optical flow network uses the endpoint error (EPE) between the output optical flow vectors and the reference data as its loss function, is trained, and outputs the generated optical flow image. The inter-frame estimation network takes the optical flow image as input, builds a loss function from the distance between the six-degree-of-freedom output pose vector and the reference data, and after iterative training performs inter-frame estimation. The optical flow network module and the inter-frame estimation network module are first trained separately on different input/output data, then cascaded into an end-to-end visual odometry module and trained further to optimize the parameters. This hierarchical training method can greatly reduce training time and improve training efficiency.
Description
Technical field
The present invention relates to an end-to-end visual odometry system and method based on deep learning.
Background art
Visual odometry is a method by which a robot estimates its displacement using vision sensors. It is a fundamental technology for higher-level tasks such as robot localization, map building, obstacle avoidance, and path planning.
Traditional visual odometry estimates the robot's inter-frame pose primarily from the spatial geometric relations of inter-frame visual features, and is therefore also called inter-frame estimation. Features fall into two classes, sparse and dense, corresponding respectively to local and global representations of image information. Traditional features must be selected or computed manually, so the resulting image representations carry a degree of arbitrariness and limitation; such methods also depend on the accuracy of feature matching and cope poorly with illumination changes, motion blur, and weakly textured images, which degrades their estimation accuracy.
Summary of the invention
To solve the above problems, the present invention proposes an end-to-end visual odometry system and method based on deep learning. Using an end-to-end deep neural network for inter-frame estimation, the invention realizes direct output from original images to inter-frame estimates. Compared with conventional methods, the technique requires no manual feature extraction or hand-crafted optical flow computation, no construction of feature descriptors, no inter-frame feature matching, and no complex geometric operations.
To achieve these goals, the present invention adopts the following technical scheme:
An end-to-end visual odometry system based on deep learning comprises a cascaded optical flow network and an inter-frame estimation network. Taking adjacent frames of an image sequence in a data set as input, the optical flow network uses the optical flow endpoint error between the output optical flow vectors and the reference data as its loss function, and after network training outputs the generated optical flow image. The inter-frame estimation network takes the optical flow image as input, builds a loss function from the distance between the six-degree-of-freedom output pose vector and the reference data, and after iterative training performs inter-frame estimation.
The optical flow network and the inter-frame estimation network are trained with a hierarchical training method.
The optical flow network is a convolutional neural network.
The optical flow network takes consecutive adjacent frames as input and uses the optical flow endpoint error between the output optical flow vectors and the reference data as the loss function, training the network to generate an optical flow image from the input consecutive frames.
The inter-frame estimation network takes optical flow images as input; its training is divided into training on the global optical flow image and local training on multiple optical flow sub-images. The features output by both are finally combined and fed to a fully connected layer, completing the optical-flow-based inter-frame estimation network.
The inter-frame estimation network can be trained on the KITTI data set.
The inter-frame estimation network can be trained on synthetic data.
An end-to-end visual odometry estimation method based on deep learning: taking adjacent frames of an image sequence in a data set, the optical flow endpoint error between the output optical flow vectors and the reference data is used as the loss function; after network training, an optical flow image is generated. From the optical flow image, a loss function is built from the distance between the six-degree-of-freedom output pose vector and the reference data, and the network is iteratively trained to perform inter-frame estimation.
The optical flow network module and the inter-frame estimation network module are trained separately on different input/output data, then cascaded and trained further in depth to optimize the parameters.
The beneficial effects of the present invention are:
(1) Compared with conventional methods, the present invention needs no manual feature selection or computation, eliminates the error-prone feature matching process, and requires no complex geometric operations, making it direct and simple;
(2) The hierarchical deep neural network training method proposed by the present invention allows the optical flow network and the inter-frame estimation network to be trained in parallel, improving training speed;
(3) The application of the optical flow network in the present invention improves the speed of optical flow computation, enhancing the real-time performance of the algorithm;
(4) The present invention trains the optical flow network module and the inter-frame estimation network module separately on different input/output data, then cascades them into an end-to-end visual odometry module and trains further in depth to optimize the parameters. This hierarchical training method can greatly reduce training time and improve training efficiency.
Description of the drawings
Fig. 1 is a schematic diagram of the system structure of the present invention;
Fig. 2 is a schematic diagram of the optical flow network of the present invention, based on convolutional neural networks;
Fig. 3 is a schematic diagram of the inter-frame estimation network of the present invention.
Specific embodiments:
The invention is further described below with reference to the accompanying drawings and embodiments.
An end-to-end deep neural network technique for inter-frame estimation realizes direct output from original images to inter-frame estimates and forms a modular visual odometry system. Compared with conventional methods, the technique requires no manual feature extraction or hand-crafted optical flow computation, no construction of feature descriptors, no inter-frame feature matching, and no complex geometric operations.
As shown in Fig. 1, the odometry system of the present invention includes two sub-modules: an optical flow network module and an inter-frame estimation network module. The two modules adopt a hierarchical training method: the optical flow network module and the inter-frame estimation network module are trained separately on different input/output data, then cascaded into an end-to-end visual odometry module and trained further to optimize the parameters. This hierarchical training method can greatly reduce training time and improve training efficiency, which is one of the advantages of deep neural networks. The specific steps are as follows:
Construction of the optical flow network: the optical flow network can be built from convolutional neural networks (CNNs) and trained on real or synthetic data. Taking consecutive adjacent frames as input, it uses the optical flow endpoint error (EPE) between the output optical flow vectors and the reference data as the loss function, realizing network training from input consecutive frames to generated optical flow.
As shown in Fig. 2, the i-th frame and the (i+1)-th frame are each fed into a CNN to obtain their respective image feature representations; the features of the two frames are combined and fed into deeper CNN layers; upconvolutional layers then restore the resolution lost in the CNN pooling operations and output a dense, per-pixel global optical flow image.
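A minimal sketch of one way two adjacent frames might be combined into a single network input, namely channel-wise stacking (a common choice in flow networks; the patent text does not fix the exact combination, so this is an assumption):

```python
import numpy as np

def stack_frame_pair(frame_i, frame_j):
    """Stack two H x W x 3 RGB frames along the channel axis, giving
    one H x W x 6 input tensor for the optical flow network."""
    assert frame_i.shape == frame_j.shape
    return np.concatenate([frame_i, frame_j], axis=-1)

pair = stack_frame_pair(np.zeros((64, 64, 3)), np.ones((64, 64, 3)))
print(pair.shape)  # (64, 64, 6)
```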
Construction of the inter-frame estimation network: the network takes optical flow images as input, builds a loss function from the distance between the six-degree-of-freedom output pose vector and the reference data, and is trained iteratively. Fig. 3 illustrates the process of training the network with local optical flow images and the global optical flow image, separately and then in combination, to complete optical-flow-based inter-frame estimation. The KITTI data set or synthetic data can be selected to train the network, with the input optical flow computed by a traditional optical flow algorithm.
When building the inter-frame estimation module, the global optical flow image is first divided into multiple local optical flow sub-images; the global image and the local sub-images are then fed into CNNs separately, yielding local and global optical flow feature representations. The local and global feature representations are combined and fed into a fully connected layer, which outputs the inter-frame estimate as a six-degree-of-freedom pose vector.
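The division of the global flow image into local sub-images might be sketched as a regular grid split (the patent does not specify the tiling; the grid size and names here are assumptions):

```python
import numpy as np

def split_flow(flow, rows, cols):
    """Split an (H, W, 2) global optical flow image into a list of
    rows*cols equally sized local optical flow sub-images."""
    h, w, _ = flow.shape
    sh, sw = h // rows, w // cols
    return [flow[r*sh:(r+1)*sh, c*sw:(c+1)*sw]
            for r in range(rows) for c in range(cols)]

subs = split_flow(np.zeros((120, 160, 2)), rows=2, cols=2)
print(len(subs), subs[0].shape)  # 4 (60, 80, 2)
```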
The training process can be divided into three stages: first, the network is trained with the local optical flow sub-images as input and the inter-frame estimate as output; second, it is trained with the global optical flow image as input and the inter-frame estimate as output; finally, it is trained further with the local sub-images and the global image together as input and the inter-frame estimate as output.
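The three-stage schedule above can be sketched as a generic driver; `train_step` is a stand-in for one optimization pass over the data and is not from the patent:

```python
def run_stages(train_step):
    """Run the three training stages in order (local sub-images only,
    global image only, then both jointly), passing each stage's input
    selection to a user-supplied training step."""
    stages = [
        ("local",  {"use_local": True,  "use_global": False}),
        ("global", {"use_local": False, "use_global": True}),
        ("joint",  {"use_local": True,  "use_global": True}),
    ]
    for name, inputs in stages:
        train_step(name, **inputs)
    return [name for name, _ in stages]

# Stub training step that just records the stage order.
log = []
print(run_stages(lambda name, **kw: log.append(name)))
# ['local', 'global', 'joint']
```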
Realizing the end-to-end visual odometry system: the trained optical flow network and the optical-flow-based inter-frame estimation network are cascaded, with adjacent frames of the image sequence in the data set as the input of the whole network. A loss function is constructed from the distance between the six-degree-of-freedom output vector and the reference data, and iterative training optimizes the parameters, realizing fast, accurate, and robust end-to-end visual odometry.
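One plausible reading of the six-degree-of-freedom distance loss is a Euclidean distance over a pose vector of three translations and three rotation angles, optionally reweighting the rotation part (the vector layout and weighting are assumptions, not stated in the patent):

```python
import numpy as np

def pose_loss(pose_pred, pose_gt, rot_weight=1.0):
    """Distance between predicted and reference 6-DOF pose vectors
    [tx, ty, tz, rx, ry, rz]; rotation residuals may be reweighted."""
    diff = np.asarray(pose_pred, dtype=float) - np.asarray(pose_gt, dtype=float)
    diff[3:] *= rot_weight              # scale the three rotation residuals
    return float(np.linalg.norm(diff))  # Euclidean distance over all 6 terms

gt = [0, 0, 0, 0, 0, 0]
pred = [3, 4, 0, 0, 0, 0]
print(pose_loss(pred, gt))  # 5.0
```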
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that modifications or variations made on the basis of the technical scheme of the present invention without creative effort still fall within the scope of protection of the present invention.
Claims (9)
1. An end-to-end visual odometry system based on deep learning, characterized by comprising a cascaded optical flow network and an inter-frame estimation network; taking adjacent frames of an image sequence in a data set as input, the optical flow network uses the optical flow endpoint error between the output optical flow vectors and the reference data as its loss function, and after network training outputs the generated optical flow image; the inter-frame estimation network takes the optical flow image as input, builds a loss function from the distance between the six-degree-of-freedom output pose vector and the reference data, and after iterative training performs inter-frame estimation.
2. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the optical flow network and the inter-frame estimation network are trained with a hierarchical training method.
3. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the optical flow network is a convolutional neural network.
4. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the optical flow network takes consecutive adjacent frames as input, uses the optical flow endpoint error between the output optical flow vectors and the reference data as the loss function, and trains the network to generate an optical flow image from the input consecutive frames.
5. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the inter-frame estimation network takes optical flow images as input; its training is divided into training on the global optical flow image and local training on multiple optical flow sub-images; the features output by both are finally combined and fed to a fully connected layer, completing the optical-flow-based inter-frame estimation network.
6. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the inter-frame estimation network is trained on the KITTI data set.
7. The end-to-end visual odometry system based on deep learning of claim 1, characterized in that the inter-frame estimation network is trained on synthetic data.
8. An end-to-end visual odometry estimation method based on deep learning, characterized in that: taking adjacent frames of an image sequence in a data set, the optical flow endpoint error between the output optical flow vectors and the reference data is used as the loss function; after network training, an optical flow image is generated; from the optical flow image, a loss function is built from the distance between the six-degree-of-freedom output pose vector and the reference data, and the network is iteratively trained to perform inter-frame estimation.
9. The end-to-end visual odometry estimation method based on deep learning of claim 8, characterized in that the optical flow network module and the inter-frame estimation network module are trained separately on different input/output data, then cascaded and trained further in depth to optimize the parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611191845.9A CN106658023B (en) | 2016-12-21 | 2016-12-21 | A kind of end-to-end visual odometry and method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106658023A true CN106658023A (en) | 2017-05-10 |
CN106658023B CN106658023B (en) | 2019-12-03 |
Family
ID=58833548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611191845.9A Active CN106658023B (en) | 2016-12-21 | 2016-12-21 | A kind of end-to-end visual odometry and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106658023B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160267678A1 (en) * | 2014-05-08 | 2016-09-15 | The Trustees Of The University Of Pennsylvania | Methods, systems, and computer readable media for visual odometry using rigid structures identified by antipodal transform |
US20160339587A1 (en) * | 2014-08-25 | 2016-11-24 | Google Inc. | Methods And Systems For Providing Landmarks To Facilitate Robot Localization And Visual Odometry |
US20160349379A1 (en) * | 2015-05-28 | 2016-12-01 | Alberto Daniel Lacaze | Inertial navigation unit enhaced with atomic clock |
Non-Patent Citations (3)
Title |
---|
ALEXEY DOSOVITSKIY et al.: "FlowNet: Learning Optical Flow with Convolutional Networks", IEEE International Conference on Computer Vision *
GABRIELE COSTANTE et al.: "Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation", IEEE Robotics and Automation Letters *
THOMAS BROX et al.: "High Accuracy Optical Flow Estimation Based on a Theory for Warping", European Conference on Computer Vision *
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107289967B (en) * | 2017-08-17 | 2023-06-09 | 珠海一微半导体股份有限公司 | Separable optical odometer and mobile robot |
CN107289967A (en) * | 2017-08-17 | 2017-10-24 | 珠海市微半导体有限公司 | Detachable optics odometer and mobile robot |
CN107527358A (en) * | 2017-08-23 | 2017-12-29 | 北京图森未来科技有限公司 | A kind of dense optical flow method of estimation and device |
CN111164969A (en) * | 2017-09-28 | 2020-05-15 | Lg电子株式会社 | Method and apparatus for transmitting or receiving 6DOF video using stitching and re-projection related metadata |
CN111164969B (en) * | 2017-09-28 | 2021-11-02 | Lg电子株式会社 | Method and apparatus for transmitting or receiving 6DOF video using stitching and re-projection related metadata |
CN109785376B (en) * | 2017-11-15 | 2023-02-28 | 富士通株式会社 | Training method of depth estimation device, depth estimation device and storage medium |
CN109785376A (en) * | 2017-11-15 | 2019-05-21 | 富士通株式会社 | Training method, estimation of Depth equipment and the storage medium of estimation of Depth device |
CN107909602A (en) * | 2017-12-08 | 2018-04-13 | 长沙全度影像科技有限公司 | A kind of moving boundaries method of estimation based on deep learning |
CN108122249A (en) * | 2017-12-20 | 2018-06-05 | 长沙全度影像科技有限公司 | A kind of light stream method of estimation based on GAN network depth learning models |
CN109978924A (en) * | 2017-12-27 | 2019-07-05 | 长沙学院 | A kind of visual odometry method and system based on monocular |
CN108303094A (en) * | 2018-01-31 | 2018-07-20 | 深圳市拓灵者科技有限公司 | The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor |
CN108648216A (en) * | 2018-04-19 | 2018-10-12 | 长沙学院 | A kind of visual odometry method and system based on light stream and deep learning |
CN108881952B (en) * | 2018-07-02 | 2021-09-14 | 上海商汤智能科技有限公司 | Video generation method and device, electronic equipment and storage medium |
CN108881952A (en) * | 2018-07-02 | 2018-11-23 | 上海商汤智能科技有限公司 | Video generation method and device, electronic equipment and storage medium |
CN109272493A (en) * | 2018-08-28 | 2019-01-25 | 中国人民解放军火箭军工程大学 | A kind of monocular vision odometer method based on recursive convolution neural network |
CN109656134A (en) * | 2018-12-07 | 2019-04-19 | 电子科技大学 | A kind of end-to-end decision-making technique of intelligent vehicle based on space-time joint recurrent neural network |
CN109708658A (en) * | 2019-01-14 | 2019-05-03 | 浙江大学 | A kind of visual odometry method based on convolutional neural networks |
CN111627051A (en) * | 2019-02-27 | 2020-09-04 | 中强光电股份有限公司 | Electronic device and method for estimating optical flow |
CN111627051B (en) * | 2019-02-27 | 2023-12-15 | 中强光电股份有限公司 | Electronic device and method for estimating optical flow |
US11532090B2 (en) | 2019-02-27 | 2022-12-20 | Coretronic Corporation | Electronic device and method for estimating optical flow |
TWI725398B (en) * | 2019-02-27 | 2021-04-21 | 中強光電股份有限公司 | Electronic device and method for estimating optical flow |
CN110335337B (en) * | 2019-04-28 | 2021-11-05 | 厦门大学 | Method for generating visual odometer of antagonistic network based on end-to-end semi-supervision |
CN110335337A (en) * | 2019-04-28 | 2019-10-15 | 厦门大学 | A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network |
CN110111366A (en) * | 2019-05-06 | 2019-08-09 | 北京理工大学 | A kind of end-to-end light stream estimation method based on multistage loss amount |
CN110310299A (en) * | 2019-07-03 | 2019-10-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for training light stream network and handling image |
CN110378936A (en) * | 2019-07-30 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Optical flow computation method, apparatus and electronic equipment |
CN110378936B (en) * | 2019-07-30 | 2021-11-05 | 北京字节跳动网络技术有限公司 | Optical flow calculation method and device and electronic equipment |
CN110599542A (en) * | 2019-08-30 | 2019-12-20 | 北京影谱科技股份有限公司 | Method and device for local mapping of adaptive VSLAM (virtual local area model) facing to geometric area |
CN112648997A (en) * | 2019-10-10 | 2021-04-13 | 成都鼎桥通信技术有限公司 | Method and system for positioning based on multitask network model |
CN111192312A (en) * | 2019-12-04 | 2020-05-22 | 中广核工程有限公司 | Depth image acquisition method, device, equipment and medium based on deep learning |
CN111192312B (en) * | 2019-12-04 | 2023-12-26 | 中广核工程有限公司 | Depth image acquisition method, device, equipment and medium based on deep learning |
CN111127557A (en) * | 2019-12-13 | 2020-05-08 | 中国电子科技集团公司第二十研究所 | Visual SLAM front-end attitude estimation method based on deep learning |
CN111260680B (en) * | 2020-01-13 | 2023-01-03 | 杭州电子科技大学 | RGBD camera-based unsupervised pose estimation network construction method |
CN111260680A (en) * | 2020-01-13 | 2020-06-09 | 杭州电子科技大学 | RGBD camera-based unsupervised pose estimation network construction method |
CN111539988A (en) * | 2020-04-15 | 2020-08-14 | 京东方科技集团股份有限公司 | Visual odometer implementation method and device and electronic equipment |
CN111539988B (en) * | 2020-04-15 | 2024-04-09 | 京东方科技集团股份有限公司 | Visual odometer implementation method and device and electronic equipment |
CN111833400A (en) * | 2020-06-10 | 2020-10-27 | 广东工业大学 | Camera position and posture positioning method |
CN111833400B (en) * | 2020-06-10 | 2023-07-28 | 广东工业大学 | Camera pose positioning method |
CN112344922A (en) * | 2020-10-26 | 2021-02-09 | 中国科学院自动化研究所 | Monocular vision odometer positioning method and system |
Also Published As
Publication number | Publication date |
---|---|
CN106658023B (en) | 2019-12-03 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |