CN110288650A - Data processing method and scanning terminal for VSLAM - Google Patents
- Publication number
- CN110288650A (application number CN201910443746.2A)
- Authority
- CN
- China
- Prior art keywords
- scan
- model
- coordinate
- module
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a data processing method for VSLAM and a scanning terminal. The data processing method is implemented by a scanning terminal and includes: the scanning terminal obtains a scene model of a target space and identifies the walls in the scene model; a basic framework model of the target space is obtained from the walls; a spatial coordinate system is established on the basic framework model with a preset unit; and the coordinates of the scanning terminal in the spatial coordinate system are obtained so as to obtain the position of the scanning terminal in the target space. The data processing method and scanning terminal for VSLAM of the invention can quickly locate the device's own position and facilitate system computation and the calibration of indoor objects; they can not only locate an object but also obtain its orientation, making it convenient for the scanning terminal to search for and obtain positions.
Description
Technical field
The present invention relates to a data processing method for VSLAM and a scanning terminal.
Background technique
SLAM is the abbreviation of Simultaneous Localization and Mapping. It refers to the process by which a moving object, from the information of its sensors, calculates its own position while simultaneously constructing a map of its environment. At present, the main application fields of SLAM are robotics, virtual reality, and augmented reality. Its uses include localization of the sensor itself, as well as subsequent path planning and scene understanding.
The implementation and difficulty of SLAM vary greatly with the sensor and its mounting. Divided by sensor, SLAM falls mainly into two categories: laser and vision. Of the two, laser SLAM was studied earlier, and its theory and engineering are relatively mature.
The earliest SLAM prototypes were military applications, with military radar as the main sensor. SLAM technology has since developed for decades, and technical solutions using lidar as the main sensor remain the stable, reliable mainstream today. With the rapid development of computer vision in recent years, however, SLAM technology is increasingly applied to household robots, drones, and AR devices, and vision-based SLAM (VSLAM for short) has gradually come to the fore.
In the prior art, positioning in a space is inaccurate, error-prone, and computationally slow.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art, namely that positioning in a space is inaccurate, error-prone, and computationally slow, by providing a data processing method for VSLAM and a scanning terminal that can quickly locate the device's own position, facilitate system computation and the calibration of indoor objects, not only locate an object but also obtain its orientation, and make it convenient for the scanning terminal to search for and obtain positions.
The present invention solves the above technical problem through the following technical solutions:
A data processing method for VSLAM, wherein the data processing method is implemented by a scanning terminal, the data processing method including:
the scanning terminal obtains a scene model of a target space and identifies the walls in the scene model;
a basic framework model of the target space is obtained from the walls;
a spatial coordinate system is established on the basic framework model with a preset unit;
the coordinates of the scanning terminal in the spatial coordinate system are obtained so as to obtain the position of the scanning terminal in the target space.
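The four steps above can be sketched as follows. This is a minimal 2D illustration with hypothetical helper names, not the patented implementation: walls are taken as the axis-aligned extents of the scanned points, the framework is just those extents, and the preset unit quantizes positions onto a grid.

```python
import numpy as np

def identify_walls(scene_points):
    """Toy wall identifier: treat the bounding planes of the scan as walls.
    Returns the min/max extents along x and y as four axis-aligned walls."""
    mins = scene_points.min(axis=0)
    maxs = scene_points.max(axis=0)
    return {"x_min": float(mins[0]), "x_max": float(maxs[0]),
            "y_min": float(mins[1]), "y_max": float(maxs[1])}

def build_framework(walls):
    """The basic framework model: only the wall extents (furniture removed)."""
    return walls

def to_grid(point, origin, unit=0.1):
    """Express a point in the framework's coordinate system with a preset unit."""
    return tuple(round((p - o) / unit) for p, o in zip(point, origin))
```

A terminal at (1.0, 0.5) m with a 0.5 m preset unit lands in grid cell (2, 1); locating the terminal then reduces to reading off its cell in this coordinate system.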
Preferably, obtaining the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space includes:
the scanning terminal captures at least one 3D image in the target space;
for a target feature point on a wall in the 3D image, the position of the target feature point is looked up in the basic framework model and the coordinates of that position in the spatial coordinate system are obtained;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature point.
Preferably, obtaining the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature point includes:
the scanning terminal captures two 3D images at a shooting position in the target space;
from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie is obtained and used as the positional relationship;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature points.
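The geometry of this preferred scheme can be illustrated with a minimal 2D sketch (hypothetical function names, not the patent's implementation). The angle between the two shooting directions together with the two measured distances gives the baseline between the feature points via the law of cosines; when the feature points' wall coordinates are known, intersecting the two distance circles recovers the camera position, whose perpendicular offset from the wall line is the camera-to-wall distance.

```python
import math

def baseline(d1, d2, theta):
    """Law of cosines: distance between the two feature points, given the
    distances from the camera to each point and the angle between the two
    shooting directions."""
    return math.sqrt(d1 * d1 + d2 * d2 - 2.0 * d1 * d2 * math.cos(theta))

def locate_camera(p1, p2, d1, d2):
    """Intersect the two distance circles centred on the known feature
    points p1 and p2. Returns both candidate positions (mirror-symmetric
    about the line p1-p2); in practice the one inside the room is kept."""
    (x1, y1), (x2, y2) = p1, p2
    b = math.hypot(x2 - x1, y2 - y1)
    a = (d1 * d1 - d2 * d2 + b * b) / (2.0 * b)   # offset of the foot point along p1->p2
    h = math.sqrt(max(d1 * d1 - a * a, 0.0))      # perpendicular distance to the wall line
    ux, uy = (x2 - x1) / b, (y2 - y1) / b         # unit vector p1 -> p2
    fx, fy = x1 + a * ux, y1 + a * uy             # foot of the perpendicular
    return (fx - h * uy, fy + h * ux), (fx + h * uy, fy - h * ux)
```

For feature points (0, 0) and (4, 0) on the wall y = 0 and a camera 3 m in front of the wall at x = 2, both measured distances are √13 and the recovered candidates are (2, 3) and its mirror (2, -3).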
Preferably, the scanning terminal includes a 2D camera of known height and the scene model is a 3D model, and obtaining the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space includes:
a 2D image is generated at each location point;
the 2D camera obtains its position in the target space according to a preset rule on the horizontal plane on which the 2D camera lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the position of the 2D camera.
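A brute-force version of this preset rule can be sketched as follows (hypothetical names; in the actual scheme, the candidate images would be rendered from the 3D scene model at observation points on the camera's horizontal plane, at the known camera height).

```python
import numpy as np

def best_observation_point(captured, renders):
    """renders: dict mapping observation-point coordinates (x, y) to the 2D
    image simulated from the scene model at that point. Returns the
    coordinate whose render best matches the captured 2D image (minimum
    sum of squared differences); per the method, that coordinate is taken
    as the 2D camera's position."""
    def ssd(a, b):
        return float(((np.asarray(a) - np.asarray(b)) ** 2).sum())
    return min(renders, key=lambda c: ssd(renders[c], captured))
```

Exhaustive search over grid observation points is quadratic in room size; a real system would prune candidates, e.g. around the last known position.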
Preferably, the scanning terminal obtaining a scene model of a target space and identifying the walls in the scene model includes:
the scanning terminal scans at several location points in the target space;
a 3D model is generated at each location point, and the farthest plane in each 3D model is obtained;
the stitching feature points on each 3D model are identified;
all the 3D models are stitched together using identical stitching feature points to generate the scene model, and the walls in the scene model are obtained from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
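The stitching step can be sketched in 2D as follows (hypothetical names). Only the translation between capture positions is estimated here, from the identical stitching feature points seen in both scans; a full implementation would also solve for rotation, e.g. with a Procrustes or ICP alignment.

```python
import numpy as np

def stitch_scans(scan_a, scan_b, feats_a, feats_b):
    """Merge scan_b into scan_a's coordinate frame using stitching feature
    points observed in both scans (the same physical points, expressed in
    each scan's local coordinates). The mean offset of the matched feature
    points estimates the terminal's movement between the two positions."""
    offset = (np.asarray(feats_a) - np.asarray(feats_b)).mean(axis=0)
    merged = np.vstack([scan_a, np.asarray(scan_b) + offset])
    return merged, offset
```

The returned offset is exactly the "distance and direction of the scanning terminal's movement" that, combined with each scan's farthest plane, lets walls be identified in the stitched scene model.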
The present invention also provides a scanning terminal based on VSLAM, wherein the scanning terminal includes an acquisition module, a computing module, a setting module, and a locating module:
the acquisition module is configured to obtain a scene model of a target space and identify the walls in the scene model;
the computing module is configured to obtain a basic framework model of the target space from the walls;
the setting module is configured to establish a spatial coordinate system on the basic framework model with a preset unit;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space.
Preferably, the scanning terminal includes a shooting module and a searching module:
the shooting module is configured to capture at least one 3D image in the target space;
for a target feature point on a wall in the 3D image, the searching module is configured to look up the position of the target feature point in the basic framework model and obtain the coordinates of that position in the spatial coordinate system;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature point.
Preferably:
the shooting module is configured to capture two 3D images at a shooting position in the target space;
the locating module is configured to obtain, from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie, to be used as the positional relationship;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature points.
Preferably, the scanning terminal includes a 2D camera of known height and the scene model is a 3D model:
the 2D camera is configured to generate a 2D image at each location point;
the locating module is configured to obtain the position of the 2D camera in the target space according to a preset rule on the horizontal plane on which the 2D camera lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the locating module is further configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the position of the 2D camera.
Preferably, the scanning terminal further includes a generation module and an identification module:
the acquisition module is configured to scan at several location points of the scanning terminal in the target space;
the generation module is configured to generate a 3D model at each location point and obtain the farthest plane in each 3D model;
the identification module is configured to identify the stitching feature points on each 3D model;
the acquisition module is further configured to stitch all the 3D models together using identical stitching feature points to generate the scene model, and to obtain the walls in the scene model from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effect of the present invention is that the data processing method and scanning terminal for VSLAM of the invention can quickly locate the device's own position and facilitate system computation and the calibration of indoor objects; they can not only locate an object but also obtain its orientation, making it convenient for the scanning terminal to search for and obtain positions.
Detailed description of the invention
Fig. 1 is a flowchart of the data processing method of Embodiment 1 of the present invention.
Specific embodiment
The present invention is further illustrated below by way of embodiments, but the present invention is not thereby limited to the scope of these embodiments.
Embodiment 1
This embodiment provides a scanning terminal based on VSLAM; the scanning terminal is a household floor-sweeping robot.
The scanning terminal includes an acquisition module, a computing module, a setting module, and a locating module:
the acquisition module is configured to obtain a scene model of a target space and identify the walls in the scene model;
the computing module is configured to obtain a basic framework model of the target space from the walls;
the setting module is configured to establish a spatial coordinate system on the basic framework model with a preset unit;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space.
Specifically, this embodiment obtains the coordinates of the scanning terminal in the spatial coordinate system, and thereby the position of the scanning terminal in the target space, in the following manner:
The scanning terminal includes a shooting module and a searching module:
the shooting module is configured to capture at least one 3D image in the target space;
for a target feature point on a wall in the 3D image, the searching module is configured to look up the position of the target feature point in the basic framework model and obtain the coordinates of that position in the spatial coordinate system;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature point.
Specifically:
the shooting module is configured to capture two 3D images at a shooting position in the target space;
the locating module is configured to obtain, from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie, to be used as the positional relationship;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature points.
The scanning terminal further includes a generation module and an identification module:
the acquisition module is configured to scan at several location points of the scanning terminal in the target space;
the generation module is configured to generate a 3D model at each location point and obtain the farthest plane in each 3D model;
the identification module is configured to identify the stitching feature points on each 3D model;
the acquisition module is further configured to stitch all the 3D models together using identical stitching feature points to generate the scene model, and to obtain the walls in the scene model from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
Referring to Fig. 1, using the above scanning terminal, this embodiment provides a data processing method, including:
Step 100: the scanning terminal obtains a scene model of a target space and identifies the walls in the scene model.
Step 101: a basic framework model of the target space is obtained from the walls.
The basic framework model in this embodiment refers to the model composed of the walls of the house, that is, the model with objects such as furniture and appliances removed.
Step 102: a spatial coordinate system is established on the basic framework model with a preset unit.
After the spatial coordinate system is established, the positions and orientations of the furniture and appliances in the basic framework model can be determined.
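Once the coordinate system with its preset unit is in place, an object's position and orientation can be expressed in it. A minimal sketch (hypothetical names; eight compass directions stand in for the "orientations" of furniture and appliances):

```python
import math

def object_pose_in_grid(position, heading_rad, origin=(0.0, 0.0), unit=0.1):
    """Quantise an object's position to a grid cell of the framework's
    coordinate system (preset unit) and its heading to the nearest of
    eight compass directions."""
    cell = tuple(round((p - o) / unit) for p, o in zip(position, origin))
    directions = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    idx = round((heading_rad % (2 * math.pi)) / (math.pi / 4)) % 8
    return cell, directions[idx]
```

A sofa at (1.0, 0.5) m facing +y, with a 0.5 m unit, is recorded as cell (2, 1) facing "N", which is what lets the terminal later search for and return to an object by its grid pose.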
Step 103: the scanning terminal captures at least one 3D image in the target space.
Step 104: for a target feature point on a wall in the 3D image, the position of the target feature point is looked up in the basic framework model and the coordinates of that position in the spatial coordinate system are obtained.
Step 105: the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature point.
In this embodiment, the coordinates of the target feature point in the basic framework model are known, so the coordinates of the scanning terminal are obtained from the target feature point.
Specifically, step 105 includes:
the scanning terminal captures two 3D images at a shooting position in the target space;
from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie is obtained and used as the positional relationship;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature points.
Steps 103 to 105 implement obtaining the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space.
Step 100 includes:
the scanning terminal scans at several location points in the target space;
a 3D model is generated at each location point, and the farthest plane in each 3D model is obtained;
the stitching feature points on each 3D model are identified;
all the 3D models are stitched together using identical stitching feature points to generate the scene model, and the walls in the scene model are obtained from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
Embodiment 2
This embodiment is substantially the same as Embodiment 1, differing only in the following:
Specifically, this embodiment obtains the coordinates of the scanning terminal in the spatial coordinate system, and thereby the position of the scanning terminal in the target space, in the following manner:
The scanning terminal includes a 2D camera of known height, and the scene model is a 3D model:
the 2D camera is configured to generate a 2D image at each location point;
the locating module is configured to obtain the position of the 2D camera in the target space according to a preset rule on the horizontal plane on which the 2D camera lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the locating module is further configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the position of the 2D camera.
Correspondingly, after step 102 of Embodiment 1, the method includes:
a 2D image is generated at each location point;
the 2D camera obtains its position in the target space according to a preset rule on the horizontal plane on which it lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the position of the 2D camera.
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these are merely illustrative and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications without departing from the principle and substance of the present invention, and such changes and modifications all fall within the protection scope of the present invention.
Claims (10)
1. A data processing method for VSLAM, characterized in that the data processing method is implemented by a scanning terminal, the data processing method including:
the scanning terminal obtains a scene model of a target space and identifies the walls in the scene model;
a basic framework model of the target space is obtained from the walls;
a spatial coordinate system is established on the basic framework model with a preset unit;
the coordinates of the scanning terminal in the spatial coordinate system are obtained so as to obtain the position of the scanning terminal in the target space.
2. The data processing method of claim 1, characterized in that obtaining the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space includes:
the scanning terminal captures at least one 3D image in the target space;
for a target feature point on a wall in the 3D image, the position of the target feature point is looked up in the basic framework model and the coordinates of that position in the spatial coordinate system are obtained;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature point.
3. The data processing method of claim 2, characterized in that obtaining the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature point includes:
the scanning terminal captures two 3D images at a shooting position in the target space;
from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie is obtained and used as the positional relationship;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the positional relationship between the scanning terminal and the target feature points.
4. The data processing method of claim 1, characterized in that the scanning terminal includes a 2D camera of known height and the scene model is a 3D model, and obtaining the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space includes:
a 2D image is generated at each location point;
the 2D camera obtains its position in the target space according to a preset rule on the horizontal plane on which it lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the coordinates of the scanning terminal in the spatial coordinate system are obtained from the position of the 2D camera.
5. The data processing method of claim 1, characterized in that the scanning terminal obtaining a scene model of a target space and identifying the walls in the scene model includes:
the scanning terminal scans at several location points in the target space;
a 3D model is generated at each location point, and the farthest plane in each 3D model is obtained;
the stitching feature points on each 3D model are identified;
all the 3D models are stitched together using identical stitching feature points to generate the scene model, and the walls in the scene model are obtained from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
6. A scanning terminal based on VSLAM, characterized in that the scanning terminal includes an acquisition module, a computing module, a setting module, and a locating module:
the acquisition module is configured to obtain a scene model of a target space and identify the walls in the scene model;
the computing module is configured to obtain a basic framework model of the target space from the walls;
the setting module is configured to establish a spatial coordinate system on the basic framework model with a preset unit;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system so as to obtain the position of the scanning terminal in the target space.
7. The scanning terminal of claim 6, characterized in that the scanning terminal includes a shooting module and a searching module:
the shooting module is configured to capture at least one 3D image in the target space;
for a target feature point on a wall in the 3D image, the searching module is configured to look up the position of the target feature point in the basic framework model and obtain the coordinates of that position in the spatial coordinate system;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature point.
8. The scanning terminal of claim 7, characterized in that:
the shooting module is configured to capture two 3D images at a shooting position in the target space;
the locating module is configured to obtain, from the distances from the shooting position to two target feature points and the angle between the shooting directions of the two 3D images, the distance between the shooting position and the wall on which the target feature points lie, to be used as the positional relationship;
the locating module is configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the positional relationship between the scanning terminal and the target feature points.
9. The scanning terminal of claim 6, characterized in that the scanning terminal includes a 2D camera of known height and the scene model is a 3D model:
the 2D camera is configured to generate a 2D image at each location point;
the locating module is configured to obtain the position of the 2D camera in the target space according to a preset rule on the horizontal plane on which the 2D camera lies, the preset rule being to set observation points on that horizontal plane in the scene model, search for the observation point at which the obtained image is identical to the 2D image, and take the coordinates of that observation point as the position of the 2D camera;
the locating module is further configured to obtain the coordinates of the scanning terminal in the spatial coordinate system from the position of the 2D camera.
10. The scanning terminal of claim 6, characterized in that the scanning terminal further includes a generation module and an identification module:
the acquisition module is configured to scan at several location points of the scanning terminal in the target space;
the generation module is configured to generate a 3D model at each location point and obtain the farthest plane in each 3D model;
the identification module is configured to identify the stitching feature points on each 3D model;
the acquisition module is further configured to stitch all the 3D models together using identical stitching feature points to generate the scene model, and to obtain the walls in the scene model from the farthest plane in each 3D model and the distance and direction of the scanning terminal's movement.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910443746.2A CN110288650B (en) | 2019-05-27 | 2019-05-27 | Data processing method and scanning terminal for VSLAM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288650A (en) | 2019-09-27 |
CN110288650B CN110288650B (en) | 2023-02-10 |
Family
ID=68002343
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910443746.2A Active CN110288650B (en) | 2019-05-27 | 2019-05-27 | Data processing method and scanning terminal for VSLAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288650B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103148804A (en) * | 2013-03-04 | 2013-06-12 | 清华大学 | Indoor unknown structure identification method based on laser scanning |
CN104484522A (en) * | 2014-12-11 | 2015-04-01 | 西南科技大学 | Method for building robot simulation drilling system based on reality scene |
CN106023307A (en) * | 2016-07-12 | 2016-10-12 | 深圳市海达唯赢科技有限公司 | Three-dimensional model rapid reconstruction method and system based on field environment |
CN206300653U (en) * | 2016-12-26 | 2017-07-04 | 影动(北京)科技有限公司 | A kind of space positioning apparatus in virtual reality system |
CN108230247A (en) * | 2017-12-29 | 2018-06-29 | 达闼科技(北京)有限公司 | Generation method, device, equipment and the application program of three-dimensional map based on high in the clouds |
CN108320333A (en) * | 2017-12-29 | 2018-07-24 | 中国银联股份有限公司 | The scene adaptive method of scene ecad virtual reality conversion equipment and virtual reality |
CN108564651A (en) * | 2018-02-26 | 2018-09-21 | 盎锐(上海)信息科技有限公司 | Body scan data device and data creation method with data systematic function |
CN108682027A (en) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | VSLAM realization method and systems based on point, line Fusion Features |
CN108759669A (en) * | 2018-05-31 | 2018-11-06 | 武汉中观自动化科技有限公司 | A kind of self-positioning 3-D scanning method and system in interior |
CN108885459A (en) * | 2018-06-08 | 2018-11-23 | 珊口(深圳)智能科技有限公司 | Air navigation aid, navigation system, mobile control system and mobile robot |
CN108932353A (en) * | 2017-05-23 | 2018-12-04 | 上海宝冶集团有限公司 | The localization method between buildings model and entity is realized using mixed reality technology |
CN109074084A (en) * | 2017-08-02 | 2018-12-21 | 珊口(深圳)智能科技有限公司 | Control method, device, system and the robot being applicable in of robot |
CN109102537A (en) * | 2018-06-25 | 2018-12-28 | 中德人工智能研究院有限公司 | A kind of three-dimensional modeling method and system of laser radar and the combination of ball curtain camera |
CN109587628A (en) * | 2018-12-14 | 2019-04-05 | 深圳力维智联技术有限公司 | A kind of interior real-time location method and device |
CN109643445A (en) * | 2016-09-19 | 2019-04-16 | 苏州宝时得电动工具有限公司 | Installation and locating method and device |
CN109685850A (en) * | 2018-12-29 | 2019-04-26 | 百度在线网络技术(北京)有限公司 | A kind of located lateral method and mobile unit |
CN109685889A (en) * | 2018-04-27 | 2019-04-26 | 福建优合创智教育发展有限公司 | A kind of scene Scan orientation method, storage medium and system |
Non-Patent Citations (1)
Title |
---|
Yu Zhichao (于志超): "Research and Implementation of Indoor Spatial Information Visualization Technology", China Master's Theses Full-text Database, Information Science and Technology series *
Also Published As
Publication number | Publication date |
---|---|
CN110288650B (en) | 2023-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN112525162B (en) | System and method for measuring image distance of power transmission line by unmanned aerial vehicle | |
WO2021208442A1 (en) | Three-dimensional scene reconstruction system and method, device, and storage medium | |
CN110033489B (en) | Method, device and equipment for evaluating vehicle positioning accuracy | |
CN108297115B (en) | Autonomous repositioning method for robot | |
AU2007355942B2 (en) | Arrangement and method for providing a three dimensional map representation of an area | |
WO2018145291A1 (en) | System and method for real-time location tracking of drone | |
CN112184812B (en) | Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system | |
CN111275015A (en) | Unmanned aerial vehicle-based power line inspection electric tower detection and identification method and system | |
CN110634138A (en) | Bridge deformation monitoring method, device and equipment based on visual perception | |
CN114004977A (en) | Aerial photography data target positioning method and system based on deep learning | |
CN113494906B (en) | Unmanned measuring method and system of image total station for recognizing target by machine learning | |
Koch et al. | Wide-area egomotion estimation from known 3d structure | |
CN118168545A (en) | Positioning navigation system and method for weeding robot based on multi-source sensor fusion | |
Ullrich et al. | Using hybrid multi-station adjustment for an integrated camera laser-scanner system | |
CN113987246A (en) | Automatic picture naming method, device, medium and electronic equipment for unmanned aerial vehicle inspection | |
CN109636850A (en) | Visible light localization method in faced chamber under intelligent lamp | |
CN110288650A (en) | Data processing method and end of scan for VSLAM | |
Troccoli et al. | A shadow based method for image to model registration | |
CN116957360A (en) | Space observation and reconstruction method and system based on unmanned aerial vehicle | |
CN110176034A | Localization method and scanning terminal for VSLAM | |
Zhang et al. | LiDAR and Camera Calibration Using Pyramid and Checkerboard Calibrators | |
Armenakis et al. | Feasibility study for pose estimation of small UAS in known 3D environment using geometric hashing | |
WO2023047799A1 (en) | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | Effective date of registration: 20221222. Address after: 200120 room 607, building 2, No. 2555, xiupu Road, Pudong New Area, Shanghai. Applicant after: SHANGHAI ONWING INFORMATION TECHNOLOGY Co.,Ltd. Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai. Applicant before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd. ||
GR01 | Patent grant | ||