
CN109724603A - Indoor robot navigation method based on environmental feature detection - Google Patents


Info

Publication number: CN109724603A
Application number: CN201910015546.7A
Authority: CN (China)
Prior art keywords: robot, point, global, map, semantic
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 董洪义, 王文奎, 丑武胜, 李宇航, 宋辉
Assignees: Beihang University; Daya Bay Nuclear Power Operations and Management Co Ltd
Application filed by Beihang University and Daya Bay Nuclear Power Operations and Management Co Ltd


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The present invention provides an indoor robot navigation method based on environmental feature detection, aimed at the problems that current localization and navigation carry insufficient semantic information and a low degree of intelligence. The method comprises: training an object detection model at the robot end and building a semantic map that combines the indoor grid map with object positions; having the robot rotate in place for one revolution, recognizing the observed objects with the object detection model, and solving the robot's global pose by maximum likelihood estimation; subscribing at the robot end to the voice-result topic of the remote control terminal, mapping the voice result against the semantic dictionary of the semantic map to identify the destination to be navigated to, and planning the global path along the corridor centerline at the robot end. The invention solves the problem that traditional methods cannot perform initial localization; localization is efficient and accurate, the planned path better matches the robot's perception of unknown environments and is safer, and the semantic navigation achieved can be integrated into robot products more conveniently.

Description

Indoor robot navigation method based on environmental feature detection
Technical field
The invention belongs to the field of robot navigation and relates to object detection technology, localization and navigation technology, centerline path planning technology, and the like.
Background technique
Localization is the premise and basis for a mobile robot to carry out various navigation tasks. According to the task phase, localization can be divided into global localization and pose tracking. Global localization means inferring the pose in the global map autonomously from on-board sensors and algorithms without knowing the initial pose; pose tracking means inferring the pose of the next period from sensors and the map, given the pose of the previous period.
Because global localization lacks prior pose information, the observation information it requires is all the more important. Point-cloud data such as lidar observations lack semantic information and can rarely achieve autonomous global localization, so current global localization algorithms are mostly vision-based. By mode of implementation, they can be divided into landmark-based localization, image-matching-based localization, and detection-based localization.
Landmark-based localization places certain landmarks in the environment, acquires landmark information with sensors, and computes the relative distance between the current robot and the landmarks to infer the global pose. The landmarks may be natural or artificial. Document [1] uses landmarks convenient to observe and detect, achieves accurate landmark detection through methods such as filtering and denoising, marker recognition, binarization, and information extraction, and further obtains an accurate global robot position. Document [2] acquires ceiling images, effectively recognizes landmarks on the ceiling with color segmentation, edge matching and similar algorithms, and further obtains the robot pose through coordinate transformation, realizing global localization of the robot. Some researchers also realize global localization by adding landmarks such as QR codes on the ground, as in Amazon's warehouse robots.
In image-matching-based localization, visual SLAM (simultaneous localization and mapping) and keyframe-based maps allow global localization by matching images against keyframes. In document [3], Dong first organizes all frames into the form of a vocabulary tree and then matches online, combining geometric constraints and keyframes for global localization; although this reduces unnecessary features relatively efficiently, the computation is still heavy. In document [4], Glocker defines a block-wise Hamming distance between keyframes and tightly encodes the images, effectively improving the matching speed of the current frame against the keyframes.
If the map is composed of a sparse point cloud, keypoints can be used for global localization. A typical method first computes the relationship between the map points and the features in the observed image, then solves the localization with random sample consensus (RANSAC). In document [5], Jaramillo projects the known 3D map onto the horizontal plane to form a 2D map, matches this 2D map against the real image with 2D features, and further computes the transformation between camera and environment; however, the computation of this method is huge. Later, Cavallari in document [6] and Shotton et al. in document [7] successively use regression forests with adaptive adjustment to find the 3D relationship between the image and the point cloud.
Detection-based localization uses a visual sensor to extract features from the environment and matches them against prior knowledge such as maps to compute the robot's pose. There is also a detection-based scheme that places the camera globally and the markers on the robot body: for the soccer robots in document [8], markers are placed on the robots and the global camera computes the robot poses after recognition, thereby realizing global localization. Document [9] designs a brand-new color-template butterfly badge, segments the image with thresholding, and achieves real-time global localization of mobile robots.
Current global path planning for robots generally uses the Dijkstra algorithm or the A* algorithm; A* in particular, with its heuristic search, is widely used in indoor robot path planning. The A* algorithm (also called A-star) was proposed in 1968 by Hart, Nilsson and Raphael; it is a fast heuristic path-finding algorithm on grid maps that can obtain the shortest path, and it is the most efficient way to compute an optimal path in a static environment. Search methods divide into state-space search and heuristic search. State-space search solves exhaustively, point by point, from start to goal, with breadth-first and depth-first variants; it is usable in small spaces, but in a large space the computation becomes huge and the efficiency very low. Heuristic search first makes an evaluation over the whole state space, obtains the best position, and then searches onward from that position until the goal is reached. The evaluation in the heuristic is done by an evaluation function, with the following formula:
f(n) = g(n) + h(n)
where f(n) is the heuristic cost from the current node n to the goal node, g(n) represents the true grid path cost from the initial node to the current node in configuration space, and h(n) is the heuristic value of the shortest path from n to the goal. Since g(n) is known, h(n) mainly embodies the heuristic information of the search; when h(n) ≫ g(n), g(n) can be omitted to improve efficiency, see document [10]. The heuristic function h(n) represents the information and constraints used when evaluating each node, and the system can prune some nodes accordingly based on the size of h(n); the balance of h(n) must be considered here. If h(n) contains much information, the computation grows and processing slows down; if it contains too little, constraints that should exist may be lost and accuracy suffers. How to balance the information content of h(n) is therefore the key point and emphasis of the A* algorithm; see document [10].
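As a sketch of the heuristic search just described (hypothetical code, not part of the patent), the following Python snippet runs A* with f(n) = g(n) + h(n) on a small 4-connected occupancy grid, using Manhattan distance as an admissible h(n):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid; grid[r][c] == 0 is free.
    f(n) = g(n) + h(n), with h the Manhattan distance to the goal,
    which is admissible and consistent for unit-cost 4-connected motion."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    g_cost = {start: 0}
    came_from = {start: None}
    open_set = [(h(start), start)]   # min-heap ordered by f(n)
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:             # reconstruct path start -> goal
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None  # no path exists
```

On large maps the choice of h(n) trades pruning power against accuracy, exactly the balance discussed above.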
In conclusion the location technology first in existing air navigation aid is dependent on handmarking etc., there are certain rings Border interference after the scene such as calamity of some complexity, is not easy in the case where placing road sign, it is not easy to play a role.In addition, Since road sign is artificially to place, the flexibility ratio of localization method is inadequate.Secondly, the method based on images match needs to consume largely Memory source, from the point of view of the realization of embedded end, it is still desirable to consume huge memory source.Finally, above-mentioned dependence point cloud Method is disagreeableness for robot there is no semantic information is taken out from environment.How ring is efficiently extracted out Semantic information in border, and using the vision processing algorithm in current forward position, it is the emphasis of Global localization research.
As for navigation, although the path planned by A* is the globally shortest, the shortest path is not necessarily best suited to the robot during navigation. As shown in Figure 1, the global shortest path (white line) from start to goal passes very close to two corners, and when the robot gets very close to a corner two problems usually arise. First, when obstacles are close, the on-board lidar often mis-measures and reports spurious obstacles; reflective materials at the corner may even make the laser ranging inaccurate, directly causing localization failure or unknown obstacles ahead. Second, the map may have partially changed: a once-open corner may now hold new objects, so problems occur more easily when the robot approaches it.
The references are as follows:
[1] Xu Decheng. Design of a mobile robot localization and navigation system based on artificial landmarks [D]. Southeast University, 2016.
[2] Zhu Yingying, Xie Ming, Wang Deming, et al. Research on a ceiling-based visual localization method for mobile robots [J]. Modern Electronics Technique, 2016, 39(23): 137-140.
[3] Dong Z, Zhang G, Jia J, et al. Keyframe-based real-time camera tracking [J]. ICCV, 2009, 118(2): 97-110.
[4] Glocker B, Izadi S, Shotton J, et al. Real-time RGB-D camera relocalization [C] // IEEE International Symposium on Mixed and Augmented Reality. IEEE Computer Society, 2013: 173-179.
[5] Jaramillo C, Dryanovski I, Valenti R G, et al. 6-DoF pose localization in 3D point-cloud dense maps using a monocular camera [C] // IEEE International Conference on Robotics and Biomimetics. IEEE, 2013: 1747-1752.
[6] Cavallari T, Golodetz S, Lord N A, et al. On-the-fly adaptation of regression forests for online camera relocalisation [J]. 2017: 218-227.
[7] Shotton J, Glocker B, Zach C, et al. Scene coordinate regression forests for camera relocalization in RGB-D images [C] // IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2013: 2930-2937.
[8] Zickler S, Bruce J, Biswas J, et al. CMDragons 2009 extended team description [C] // Proc. 14th International RoboCup Symposium, Singapore, 2010.
[9] Weitzenfeld A, Biswas J, Akar M, et al. RoboCup small-size league: past, present and future [C] // Robot Soccer World Cup. Springer, Cham, 2014: 611-623.
[10] Zhao Cheng. Hierarchical map construction and an indoor navigation system based on vision-speech interaction [D]. Xiamen University, 2014.
Summary of the invention
Aiming at the problems of insufficient semantic information and a low degree of intelligence in current localization and navigation, the present invention provides an indoor robot navigation method based on environmental feature detection. By building a semantic map, the localization and navigation method carries richer semantic information and achieves accurate localization and highly semantic navigation; it has the advantages of high precision, simplicity and effectiveness, and is meaningful for the commercial application of robot localization and navigation.
The indoor robot navigation method based on environmental feature detection provided by the invention comprises the following steps:
Step 1: at the robot end, first build an indoor object dataset and train an object detection model, then build a semantic map combining the indoor grid map with object positions;
Step 2: perform global localization of the robot at the robot end. The robot rotates in place for one revolution, recognizes the observed objects with the object detection model, matches them against the semantic information in the semantic map to obtain the positions of the observed objects, and then solves the robot's global pose by maximum likelihood estimation;
Step 3: after the remote control terminal receives voice input, it recognizes it and publishes a voice-result topic; the robot end subscribes to this topic, maps the voice result against the semantic dictionary in the semantic map, and identifies the destination to be navigated to. The robot end then plans the global path along the corridor centerline and controls the chassis motion.
The advantages of the indoor navigation method based on environmental feature detection of the invention are: it solves the missing-initial-pose problem of the AMCL algorithm in the current ROS (Robot Operating System), makes full use of highly semantic navigation information, and conforms better to human cognition. The path produced in path planning better matches the robot's perception of unknown environments and is safer. The localization method based on maximum likelihood estimation is efficient and accurate and overcomes the traditional inability to perform initial localization. The semantic-map-based localization and path planning realized by the invention are safe and intelligent, and the highly integrated semantic information can be embedded in robot products more conveniently.
Brief description of the drawings
Fig. 1 is a schematic of the problems of traditional path planning methods;
Fig. 2 is the flowchart of global robot localization based on maximum likelihood estimation in step 2 of the invention;
Fig. 3 is a schematic of the spatial model of the robot and a detected object;
Fig. 4 is a schematic of the selection of observations during robot rotation;
Fig. 5 is a schematic of observation-data extraction when measuring the robot-to-object distance with the lidar;
Fig. 6 is a schematic of accelerating the traversal of spatial positions with the spatial pyramid algorithm;
Fig. 7 compares the processing speed of global localization under spatial pyramid methods with different step sizes;
Fig. 8 is the flowchart of the semantic-map-based navigation method in step 3 of the invention;
Fig. 9 is a schematic of matching keywords to the navigation goal in semantic navigation;
Fig. 10 is the node communication graph in semantic navigation;
Fig. 11 is a block diagram of the implementation of the semantic navigation module;
Fig. 12 is a schematic of the centerline landmark points chosen by the improved path planning method of the invention;
Fig. 13 is a schematic of the experimental platform hardware framework on which the autonomous navigation system is realized;
Fig. 14 shows the detection results of the object detection model on the test set in the experiments;
Fig. 15 compares the forward (inference) speed of the models;
Fig. 16 is the semantic map built in the experiments;
Fig. 17 shows the global localization experiments of the method;
Fig. 18 shows the global localization accuracy in the experiments;
Fig. 19 is a schematic of the influence of robot rotation speed on global localization accuracy;
Fig. 20 is a schematic of the influence of the number of objects to be detected on localization accuracy;
Fig. 21 is a schematic of the planned centerline-based path.
Detailed description of the embodiments
The technical solution of the invention is described below with reference to the drawings and embodiments.
The indoor robot navigation method based on environmental feature detection provided by the invention generally includes the following steps:
Step 1: first build an indoor object dataset. The invention trains and tests the object detection model with SSD (Single Shot MultiBox Detector), then fuses it with the Gmapping algorithm to build a semantic map combining the grid map with object positions;
Step 2: perform global robot localization with the maximum likelihood estimation method. While the robot rotates in place, it detects objects with the object detection model, matches them against the semantic information in the semantic map, estimates with maximum likelihood, and finally obtains the robot's global pose;
Step 3: during navigation, based on the semantic map and global localization above, the goal position in the semantic map is recognized automatically from voice input. Meanwhile, for global path planning, a corridor-centerline-based planning method is used.
The Gmapping algorithm in step 1 is a simultaneous localization and mapping algorithm that can build an indoor map in real time; the computation needed for building small-scene maps is small and the accuracy high. The semantic map is built and stored at the robot end.
The global robot localization of step 2 is described below; the overall flow is shown in Fig. 2. The robot carries a camera and a lidar. The camera captures RGB images for object detection, the object detection model recognizes the observed objects, the lidar measures the distance between the robot and each observed object, and the observation data of each observed object are fused. After the robot rotates in place for one revolution, an observation dictionary is obtained; combined with the semantic map, the positions of the detected objects are obtained, and the optimal solution of the robot's initial pose, i.e. the global pose, is solved by maximum likelihood estimation.
After initial power-on, the global localization model of the robot is as shown in Fig. 3: under the global map coordinate system, the black dot is the robot position and the grey dots are the positions of the observed objects. Since there is no prior pose information and the camera can observe very little of the environment, the robot is made to rotate in place for one revolution in order to perceive the objects in the surroundings adequately. After the observed objects are recognized with the object detection model, their positions are obtained from the semantic map.
Based on the known semantic map and the object detection model, the robot pose satisfies the following observation equations:
x_k = x_0 + R_k · cos(θ_0 + Δθ_k)
y_k = y_0 + R_k · sin(θ_0 + Δθ_k)
where Δθ_k denotes the rotation angle of the robot relative to its initial pose at the moment the k-th observed object is detected; x_k and y_k denote the position of the k-th observed object in the map; and x_0, y_0 and θ_0 are the initial pose of the robot, which is the unknown to be solved in global localization. x_0 and y_0 are the robot's position coordinates and θ_0 its pose angle, i.e. the angle between the robot's heading and the x-axis of the map, as shown in Fig. 3. R_k denotes the measured distance between the k-th observed object and the robot, and θ_k = θ_0 + Δθ_k is the angle of the k-th observed object relative to the robot center.
During robot rotation, the same object may be detected across multiple frames, so in the embodiment of the invention only objects appearing in the central strip of the image are taken as valid observations. For the images acquired in the embodiment, which are 640 pixels wide, an object is selected when the lateral coordinate of its center falls between 315 and 325 in image coordinates, and the rotation angle Δθ at that moment is recorded. As shown in Fig. 4, although the white door at the far left is detected in a certain frame, its center lies outside the extraction range and must be discarded; the door whose center is detected in the lateral center of the image is retained.
The distance between the detected object and the robot is solved as shown in Fig. 5. The angular resolution of the lidar is 0.25°, so the observations of the ten beams numbered 356 to 365, straight ahead of the lidar, are selected and averaged as the observed distance between the current object and the robot. The lidar is mounted on the robot.
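The two extraction rules above can be sketched as follows; the function names and the detection tuple format are assumptions for illustration, not the patent's implementation:

```python
def select_centered_detection(detections):
    """Keep the first detection whose bounding-box centre falls in the
    narrow band [315, 325] of a 640-pixel-wide image, so each object is
    recorded once, at the moment it passes the camera's optical centre.
    detections: list of (label, (x_min, y_min, x_max, y_max)) tuples."""
    for label, (x_min, y_min, x_max, y_max) in detections:
        center_x = (x_min + x_max) / 2.0
        if 315 <= center_x <= 325:
            return label
    return None

def front_range(scan):
    """Average the ten beams (indices 356..365) straight ahead of the
    lidar as the observed robot-to-object distance."""
    beams = scan[356:366]
    return sum(beams) / len(beams)
```

Recording the rotation angle Δθ at the moment an object passes the centre band pairs each range measurement with a bearing, which is what the observation dictionary needs.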
The problem of solving the initial pose from the observations could be handled with the nonlinear optimization library g2o (General Graph Optimization). g2o is a graph-based optimizer composed of a hypergraph of many edges and vertices: the vertices represent the variables to be optimized and the edges are the bridges connecting them; during optimization the vertex values approach the optimum, and after optimization the vertex values are taken directly as the output. To solve the global pose with g2o, the initial pose and all observation-point coordinates would be vertices, and the observations and the transformation of the initial point would be edges; this graph optimization is computationally heavy and depends on several C++ libraries such as the Eigen matrix library, which is inconvenient for porting and computation.
The invention therefore computes the global pose with the maximum likelihood estimation (MLE) method. The whole computation can be divided into two steps: first estimate x_0 and y_0 by the MLE method; then, on that basis, further estimate the yaw angle θ_0.
In an indoor environment the distance between the robot and a detected object is essentially constant, so the distance obtained by the lidar is assumed to represent the actual distance between the robot and the detected object. Based on Bayesian probability, the following formula is available:
P(x_0, y_0 | x_k, y_k, R_k) ∝ P(x_k, y_k, R_k | x_0, y_0) · P(x_0, y_0)
where P(x_0, y_0 | x_k, y_k, R_k) is the probability that the robot is at (x_0, y_0) given the k-th object position (x_k, y_k) and distance R_k; P(x_0, y_0) is the probability that the robot is at position (x_0, y_0); and P(x_k, y_k, R_k | x_0, y_0) is the probability that the k-th object is at (x_k, y_k) at distance R_k under the condition that the robot is at (x_0, y_0).
According to maximum likelihood estimation theory, maximizing the probability distribution P(x_0, y_0 | x_k, y_k, R_k) only requires maximizing P(x_k, y_k, R_k | x_0, y_0). In the implementation, every possible point (x_0, y_0) within the map's value range is traversed, the total cost of each point is computed, and the point with the minimum cost is taken as the final location. The Euclidean distance is used here to measure the degree of match between the current point and the observed objects, with the following formula:
cost(x_0, y_0) = Σ_{k=1}^{N} | √((x_k − x_0)² + (y_k − y_0)²) − R_k |
where N denotes the total number of observed objects.
Further, from the position coordinates (x_0, y_0) found above, an initial angle is computed for each matched point and then averaged to obtain θ_0:
θ_0 = (1/N) Σ_{k=1}^{N} ( atan2(y_k − y_0, x_k − x_0) − Δθ_k )
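A minimal sketch of the two-step estimation: a brute-force grid search over (x0, y0) using the summed range residual Σ|dist − R_k| as the matching cost, then the average of atan2(y_k − y_0, x_k − x_0) − Δθ_k as the heading. Both expressions are plausible readings of the description, not the patent's exact code, and the naive angle average assumes the headings do not straddle the ±π wrap:

```python
import math

def locate(observations, x_range=(0.0, 10.0), y_range=(0.0, 10.0), step=0.1):
    """observations: list of (xk, yk, Rk, d_theta_k) per detected object.
    Step 1: grid search over (x0, y0) minimising the summed range residual.
    Step 2: average the per-object heading estimates to get theta0."""
    best_xy, best_cost = None, float("inf")
    nx = int(round((x_range[1] - x_range[0]) / step)) + 1
    ny = int(round((y_range[1] - y_range[0]) / step)) + 1
    for i in range(nx):
        x0 = x_range[0] + i * step
        for j in range(ny):
            y0 = y_range[0] + j * step
            cost = sum(abs(math.hypot(xk - x0, yk - y0) - rk)
                       for xk, yk, rk, _ in observations)
            if cost < best_cost:
                best_xy, best_cost = (x0, y0), cost
    x0, y0 = best_xy
    theta0 = sum(math.atan2(yk - y0, xk - x0) - dth
                 for xk, yk, _, dth in observations) / len(observations)
    return x0, y0, theta0
```

With three non-collinear objects the range residual has a unique minimum, so the grid point nearest the true position wins.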
When a detected object has several possible positions in the space, exact matching is used: all possible points in the space are each matched one round, and the overall minimum is sought, which guarantees the accuracy of localization. In the implementation, the observation is a sequence of detected objects, and all permutations that may exist in the semantic map are enumerated once. For example, if the global map contains 3 objects A, 4 objects B and 5 objects C, there are 3 × 4 × 5 = 60 matching possibilities; all 60 candidate sequences are matched against the robot position, and the robot position corresponding to the minimum-cost matching sequence is the best global pose estimate.
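The exact-matching enumeration (every combination of candidate positions, e.g. the 3 × 4 × 5 = 60 sequences above) can be sketched with itertools.product; the cost callback stands in for the localization cost and is an assumption for illustration:

```python
from itertools import product

def best_assignment(observed_labels, semantic_map, cost_fn):
    """Enumerate every combination of candidate map positions for the
    observed label sequence and keep the minimum-cost assignment.
    semantic_map: label -> list of candidate (x, y) positions."""
    candidates = [semantic_map[label] for label in observed_labels]
    best, best_cost = None, float("inf")
    for assignment in product(*candidates):
        c = cost_fn(assignment)
        if c < best_cost:
            best, best_cost = assignment, c
    return best, best_cost
```

The enumeration grows multiplicatively with candidate counts, which is why the coarse-to-fine acceleration described next matters.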
Further, to accelerate the traversal of spatial positions, a spatial pyramid method inspired by image pyramids is used to speed up the computation. The concrete scheme is shown in Fig. 6: the whole indoor map space is searched at three scales. First a global matching is carried out with a 5-meter step, and the 10-meter range around the obtained optimum represents the possible distribution of the global optimum; then a second search is carried out within that 10-meter range with a 1-meter step, yielding the optimum's distribution within a 2-meter range; finally, a third matching in that space at 0.1-meter precision gives the global optimum position at 0.1 m precision.
Fig. 7 shows the processing speed of the global localization algorithm under different spatial pyramid settings; the ordinate is time in ms. The first bar, with a step of 0.1 m, is equivalent to using no acceleration at all and takes 2135 ms in total. The second bar, going from a 1 m step down to a 0.1 m step, takes 287.5 ms, a drastic reduction in computation time. The third bar, from a 5 m step to 0.1 m, takes 193.3 ms, a further speedup. The last bar is the scheme finally adopted by the invention, from a 5 m step to 1 m and then to 0.1 m, taking 110.3 ms; the computation in this case is only about 5% of that without the method, significantly improving computational efficiency.
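The three-scale coarse-to-fine search can be sketched as follows. Narrowing the window to ± one step around each level's winner matches the 10 m / 2 m ranges described, though the exact windowing used by the patent is an assumption:

```python
def pyramid_search(cost_fn, x_range, y_range, steps=(5.0, 1.0, 0.1)):
    """Coarse-to-fine grid search: scan the whole map at a 5 m step,
    then re-scan a window around the winner at 1 m, then at 0.1 m,
    instead of scanning the whole map at 0.1 m."""
    (lo_x, hi_x), (lo_y, hi_y) = x_range, y_range
    best = (lo_x, lo_y)
    for step in steps:
        best_cost = float("inf")
        nx = int(round((hi_x - lo_x) / step)) + 1
        ny = int(round((hi_y - lo_y) / step)) + 1
        for i in range(nx):
            x = lo_x + i * step
            for j in range(ny):
                y = lo_y + j * step
                c = cost_fn(x, y)
                if c < best_cost:
                    best, best_cost = (x, y), c
        # shrink the search window to +/- one step around the winner
        lo_x, hi_x = best[0] - step, best[0] + step
        lo_y, hi_y = best[1] - step, best[1] + step
    return best
```

The speedup follows from evaluating three small grids instead of one dense one, mirroring the roughly 20x gain reported above.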
The construction of semantic navigation in step 3 from the semantic map and the global localization method is described below. The flow for realizing semantic navigation is shown in Fig. 8. Multi-machine communication between the remote control terminal and the robot end is configured so that data can be shared between the two. Voice input is performed at the remote control terminal and uploaded to the Baidu cloud platform; the recognition result is returned and published as a voice-result topic. The robot end subscribes to this topic, performs mapping matching with the semantic dictionary information in the semantic map, publishes the navigation goal after a successful match, and finally the robot chassis control program subscribes to the goal information and controls the chassis motion through serial communication. Meanwhile, localization must be completed before the navigation goal is published for navigation to start successfully.
For matching semantic instructions to the semantic map, the data stream is processed first. Speech recognition yields text, and Chinese in the data stream is encoded in Unicode. Unicode is an industry standard in computer science comprising a character set, encoding schemes, and so on; it was created to overcome the limitations of traditional character encodings by assigning a unified, unique binary code to every character in every language, to meet the requirements of cross-platform, cross-language text conversion and processing. Unicode commonly represents one character with two bytes. Here, the conversion and matching of the data stream are all based on the Unicode encoding format: after receiving the voice input, the remote control terminal sends it to the Baidu cloud platform for recognition and Unicode code conversion, then sends the Unicode-encoded voice result on to the robot end.
As shown in Fig. 9, semantic matching uses the keyword-matching principle: when a combination of a verb and a noun appears, for example "go to A302", "go to the tea room", or "go to B308", the system matches the data stream against the semantic dictionary in the semantic map. If the match succeeds, the world coordinates of the semantic location are published directly and the robot is then guided to navigate to the semantic goal; if the match fails, the system keeps listening for the next semantic instruction.
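A minimal sketch of the verb-plus-noun keyword matching against a semantic dictionary. The dictionary entries, English verb list and function name are hypothetical stand-ins (the real system matches Unicode Chinese text the same way):

```python
SEMANTIC_DICT = {  # hypothetical place -> world-coordinate entries
    "A302": (12.4, 3.1),
    "tea room": (5.0, 8.2),
    "B308": (20.1, 3.0),
}

MOTION_VERBS = ("go to", "head to", "navigate to")  # stand-ins for the Chinese verbs

def match_command(text, semantic_dict=SEMANTIC_DICT, verbs=MOTION_VERBS):
    """Fire only when a motion verb and a known place name co-occur
    (e.g. "go to A302"); otherwise return None and keep listening."""
    lowered = text.lower()
    if not any(verb in lowered for verb in verbs):
        return None
    for place, coords in semantic_dict.items():
        if place.lower() in lowered:
            return place, coords
    return None
```

On a successful match the returned world coordinates would be published as the navigation goal; a None result means the system keeps monitoring.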
In the semantic navigation system, the data communication graph of the ROS nodes is as shown in Fig. 10. The center of the whole system is move_base, premised on the localization of AMCL (Adaptive Monte Carlo Localization). First, by subscribing to sensor data and odometry, combined with coordinate transforms, AMCL completes the robot's self-localization and publishes the pose to move_base in real time; then, through the voice input, the voice result is recognized, the send_goal node sends the navigation goal to move_base, and finally the low-level controller is driven through cmd_vel.
The concrete framework with which the robot realizes semantic navigation is shown in Figure 11. The voice module performs semantic matching on the received speech result and identifies the endpoint location; the localization module computes the robot's global pose by maximum likelihood estimation. The autonomous navigation module realizing semantic navigation takes as input the endpoint identified by the voice module, the initial global pose computed by the global localization module, and the AMCL localization algorithm used for pose tracking. The robot end first acquires its initial pose via global localization, then tracks the robot pose with AMCL. Next, using the navigation point output by the voice module together with the sensor data, the autonomous navigation module starts the navigation task. When navigation begins, the global path is first produced by the center-line global path planning method; then, in each local control period, the DWA (Dynamic Window Approach) algorithm plans, from the current environment information, the control command the robot should execute in the next period and sends it to the robot controller, until the robot reaches the semantic navigation point and the navigation task ends.
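The global localization step above (rotate in place, detect objects, estimate the pose by maximum likelihood) can be sketched as follows. This is one plausible reading, not the patent's exact implementation: the cost sums the absolute differences between predicted and measured object distances (the Euclidean-distance matching of the claims), a plain fine grid search stands in for the coarse-to-fine pyramid search, and all names and parameters are placeholders:

```python
import math

def estimate_global_pose(observations, x_range, y_range, step=0.1):
    """Grid-search the initial position (x0, y0) minimizing the total cost
    sum(|dist(robot, object_k) - R_k|), then average the initial heading.
    observations: list of (xk, yk, Rk, dtheta_k) -- known map position of
    object k, measured distance, and robot rotation when it was centered."""
    best_xy, best_cost = None, float("inf")
    nx = int(round((x_range[1] - x_range[0]) / step))
    ny = int(round((y_range[1] - y_range[0]) / step))
    for i in range(nx + 1):
        x = x_range[0] + i * step
        for j in range(ny + 1):
            y = y_range[0] + j * step
            cost = sum(abs(math.hypot(xk - x, yk - y) - rk)
                       for xk, yk, rk, _ in observations)
            if cost < best_cost:
                best_xy, best_cost = (x, y), cost
    x0, y0 = best_xy
    # Object k was image-centered after the robot had turned dtheta_k, so the
    # initial heading is (map-frame bearing to object k) - dtheta_k, averaged
    # on the circle to avoid wrap-around problems.
    angles = [math.atan2(yk - y0, xk - x0) - dth
              for xk, yk, _, dth in observations]
    theta0 = math.atan2(sum(math.sin(a) for a in angles),
                        sum(math.cos(a) for a in angles))
    return x0, y0, theta0
```

With three or more non-collinear observed objects the position minimum is unique, which is why the position estimate is more accurate than the heading, as the experiments below also observe.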
The center-line global path planning method used is shown in Figure 12. First, a certain number of corner points are selected in the grid map as the mark points of the center line, and the coordinates of each corner point in the global indoor map are obtained; then, according to the possible travel trajectories, all mark points capable of forming a center line are connected to generate a trajectory container. Because the result is not a single polyline from one point to another but a mixture of multiple paths, all possible path lines must be taken into account during trajectory generation.
After the robot operating system receives the start and goal coordinates of a navigation request, it first computes the distance from the start point and from the goal point to every point of the whole center-line path, and selects the nearest point in each case as the proximity point, denoted the start proximity point and the goal proximity point respectively. Then, beginning from the goal, the points between the goal and the goal proximity point are pushed into the path one by one, followed by the points between the goal proximity point and the start proximity point. Finally, the points between the start proximity point and the start are pushed into the path, completing the whole global path planning process.
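The proximity-point snapping and path assembly just described can be sketched as below. This is a hypothetical simplification: it assumes the trajectory container has already been reduced to one ordered list of center-line mark points, and it returns the finished path in start-to-goal order (the patent pushes points goal-side first, which yields the same path reversed):

```python
import math

def plan_centerline_path(centerline, start, goal):
    """Snap the start and goal to their nearest center-line mark points
    (the 'proximity points'), then walk the center line between them.
    centerline: ordered list of (x, y) mark points forming the center line."""
    def nearest_index(p):
        return min(range(len(centerline)),
                   key=lambda i: math.hypot(centerline[i][0] - p[0],
                                            centerline[i][1] - p[1]))
    i_start, i_goal = nearest_index(start), nearest_index(goal)
    lo, hi = sorted((i_start, i_goal))
    segment = centerline[lo:hi + 1]
    if i_start > i_goal:           # walk the center line in the right direction
        segment = segment[::-1]
    return [start] + segment + [goal]

corridor = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(plan_centerline_path(corridor, (0.2, 0.5), (2.9, -0.3)))
```

Because the resulting path hugs the corridor center line, it stays far from walls by construction, which is the safety property the experiments below report.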
When the robot system performs autonomous semantic navigation, the global path planning algorithm is integrated with the whole navigation stack through the plugin mechanism (pluginlib). Pluginlib is a C++ library; it can be understood as enabling a ROS package to dynamically load and unload plugins. A plugin here is usually a functional class that exists at run time in a dynamically loadable form (such as a shared object / dynamic-link library). With pluginlib's help, users do not need to worry about how their application should be linked against the class libraries they want to use, because pluginlib automatically opens the required plugin library at call time. Extending or modifying an application's functionality with plugins is thus very convenient: there is no need to change the source code or recompile the application, since dynamically loading a plugin is enough.
In concrete use, pluginlib exploits the polymorphism of C++: as long as different plugins implement a unified interface, they can be substituted for one another. The process is as follows: first create the plugin base class and define the unified interface; then write the plugin class, inherit from the base class to implement the unified interface, export the plugin, and compile it into a dynamic library; finally register the plugin with the ROS system so that it can be recognized and managed.
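Pluginlib itself is a C++ library, but the unified-interface pattern it relies on can be illustrated language-agnostically. The sketch below uses Python only to show the idea of registering interchangeable planner classes behind one interface; none of these names are ROS or pluginlib APIs:

```python
from abc import ABC, abstractmethod

class GlobalPlanner(ABC):
    """Plugin base class: the unified interface every planner implements."""
    @abstractmethod
    def make_plan(self, start, goal):
        raise NotImplementedError

PLANNER_REGISTRY = {}

def register_planner(name):
    """Stand-in for exporting/registering a plugin so the framework can
    load it by name at run time (PLUGINLIB_EXPORT_CLASS plays this role
    in C++)."""
    def decorator(cls):
        PLANNER_REGISTRY[name] = cls
        return cls
    return decorator

@register_planner("centerline_planner")
class CenterlinePlanner(GlobalPlanner):
    def make_plan(self, start, goal):
        return [start, goal]   # trivial placeholder plan

# The navigation framework instantiates a planner by name, never by its
# concrete type, so planners can be swapped without touching the framework.
planner = PLANNER_REGISTRY["centerline_planner"]()
print(planner.make_plan((0, 0), (5, 5)))
```

Swapping in a different planner only requires registering another subclass under a new name; the framework code that calls `make_plan` never changes, which is exactly the benefit the plugin mechanism provides here.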
The hardware platform of the autonomous navigation system realized by the method of the present invention is a two-wheel differential robot independently developed in the laboratory. The overall structure of the hardware modules is shown in Figure 13. The robot's low-level data control board uses an STM32; the main control board is an NVIDIA Jetson TX2 platform; the visual sensor is a USB camera that communicates with the main control board over the USB protocol. The laser radar is a Hokuyo UST-10LX single-line lidar communicating over Ethernet. The Jetson TX2 platform mainly performs the robot's global pose computation, AMCL positioning and pose tracking, center-line path planning, and the DWA computation of the control command for the robot chassis in each control period; the STM32 controls the chassis motion.
Because the Jetson TX2 platform has only one USB interface, a USB hub was used to extend the USB ports. Since ordinary USB devices are assigned index numbers randomly under Linux, the USB number of the inertial navigation module is given a fixed mapping here, so that the system can automatically distinguish and open the two USB devices.
To run the latest convolutional neural networks, the present invention selected NVIDIA's Jetson TX2 platform with its on-board GPU module. Unlike other embedded platforms, the Jetson TX2 features an integrated Pascal GPU with 256 CUDA cores, connected through a high-performance coherent interconnect fabric. On the CPU side, it consists of two 64-bit ARMv8 CPU clusters: a dual-core Denver cluster optimized for single-thread performance, and a second cluster, an ARM Cortex-A57 quad core, better suited to multithreaded applications. Thanks to the GPU's strong computing capability, deep learning models can easily be deployed on this platform, realizing edge computing.
The memory subsystem of the Jetson TX2 includes a 128-bit memory controller providing high-bandwidth LPDDR4 support; 8 GB of LPDDR4 main memory and 32 GB of eMMC flash are integrated in the module. Compared with the previous generation's 64-bit design, the TX2's 128-bit memory interface is also a major performance boost.
Meanwhile Jetson TX2 also supports hardware video encoders and decoder, supports 4K ultra high-definition video and not apposition 60 frame videos of formula.This is slightly different with Jetson TX1 module is mixed, and Jetson TX1 has been used to be run on Tegra Soc Specialized hardware and software complete these tasks.In addition, Jetson TX2 further includes an audio processing engine, devices at full hardware is supported Multi-channel audio.Jetson TX2 supports Wi-Fi and bluetooth wireless connection, it may be convenient to remotely be controlled and channel radio News.
Figure 14 illustrates the detection effect of the object detection model of the present invention on the test set. As can be seen from the figure, the detection model is fairly robust in detecting objects and establishes a good detection basis for the subsequent experiments. Some objects appear partly inside the image and partly outside it; in this case, the bounding box is clipped to the image boundary during detection and taken as the detection box of the current object.
The forward-pass speed comparison of the object detection model of the present invention is shown in Figure 15. With the original VGG network model, the forward-pass time on the Jetson TX2 platform is 165 ms, which does not meet the requirements of embedded deployment. After adopting the lightweight MobileNet network, the model time improves to 83 ms; after further quantization and compression of the model with TensorRT on the TX2 platform, the final forward-pass time reaches 49 ms. The model can thus detect more objects during localization and navigation, and when combined with the rotation, the error in centering a detected object in the image can be further reduced. Therefore, in the method of the present invention, the SSD object detection model is preferably trained and deployed on the Jetson TX2 platform using the lightweight MobileNet network together with the TensorRT engine.
The semantic map built in the experiment is shown in Figure 16. It contains 33 objects in total, each represented by its world coordinates in the map. Since objects are considered from the vertical (top-down) direction and are simplified to point particles in the experiments of the present invention, a certain amount of accuracy is sacrificed: for objects with a larger footprint, such as a mailbox, the measured actual distance and the coordinates in the semantic map may not correspond exactly, so some precision is lost in this respect. It should be noted, however, that the semantic map is designed to provide the robot with semantic information, and this semantic information need not be highly precise: for the initial global pose, it only needs to supply a reasonably accurate global pose, and the downstream particle filter algorithm converges automatically as the robot moves. From this perspective, the particle simplification fully meets the requirements. In addition, compared with a map that retains all images, simplifying objects to particles minimizes the memory requirement and reduces the computation of subsequent tasks such as matching.
After the particle filter algorithm AMCL is started, the robot is commanded to rotate one full circle, and its global pose is located by maximum likelihood estimation. The experimental effect is shown in Figure 17: when localization is performed at different positions, the laser observations in the results match the global map well, demonstrating that the predicted pose is very close to the ground truth. In the actual solution, because the robot position is computed from multiple observations, the position estimate is quite accurate. By contrast, the attitude angle is averaged from the rotation angles of multiple observed objects, and each rotation angle carries a certain error, so the attitude angle has some error; from the localization results it can be seen that the estimated attitude has a certain rotation error compared with the true value.
Further, different positions in the corridor were chosen and 100 localization experiments were carried out to measure the accuracy and robustness of localization; the results are shown in Figure 18. The abscissa is the localization accuracy and the ordinate is the number of experiments in each accuracy interval. The interval 0.4 m to 0.6 m contains the most trials. Of the 100 trials, the localization error was large in only two; in all remaining cases the subsequent pose-tracking experiments successfully achieved particle convergence, demonstrating that the localization is quite effective.
Since the robot detects in a rotating manner during global localization, the rotation speed directly affects image sharpness and the number of detectable frames, so the influence of rotation speed on localization accuracy is considered first. The experimental results are shown in Figure 19, where the abscissa is the rotation speed in rad/s and the ordinate is the localization accuracy in m. As the figure shows, localization accuracy gradually degrades as the rotation speed increases; at lower rotation speeds the position error stays below 0.4 m, an error that fully satisfies the robot's global positioning requirements. The three curves represent different numbers of objects to be detected; as this number increases, the global localization accuracy also improves. In the present invention, the robot rotation speed is preferably set to 0.3 rad/s.
Figure 20 shows the influence of the number of detected objects on accuracy. From the perspective of object detection, factors such as the object being far from the robot, insufficient or excessive ambient light, or the object only partly appearing in the camera view all limit the number of objects that can be detected, making it fluctuate. As the figure shows, the more objects are detected, the lower the localization error; when detection conditions are good, the position error stays below 0.4 m. The three curves represent different rotation speeds, and their trends are consistent with the relationship embodied by the experiments of Figure 19.
The voice navigation experiment covers both correct speech recognition and the correct matching of speech to navigation. In the speech recognition experiments of the method of the present invention, the proportion of all voice navigation trials in which the speech was correctly recognized and the robot navigated correctly reached 80%. In 11% of the trials the speech recognition was partly wrong but the robot still navigated correctly to the designated place; this is mainly because semantic matching uses only keyword matching, so even partially misrecognized speech can still be mapped to the correct endpoint. In the remaining 9% the speech could not be recognized correctly and no navigation endpoint could be matched; this was mainly caused by factors such as a noisy speech recognition environment and a poor network connection.
The path planning effect of the present invention based on the center line is shown in Figure 21, where the trajectory line is the global path planned by the robot. As the figure shows, after the robot starts the navigation task, the planned global path keeps a relatively large distance from any obstacles that may exist, so the driving path is safer, yet it is not much longer than the shortest path.

Claims (9)

1. An indoor robot navigation method based on environmental feature detection, characterized by comprising the following steps:
Step 1: at the robot end, first building an indoor object data set and training an object detection model, then building a semantic map combining the indoor grid map with object positions;
Step 2: performing global localization of the robot at the robot end: the robot rotates in place for one full circle, identifies the observed objects using the object detection model, and matches them with the semantic information in the semantic map to obtain the positions of the observed objects; the global pose of the robot is then computed by maximum likelihood estimation;
Step 3: after the remote control terminal receives a voice input, recognizing it and publishing a speech-result topic; the robot end subscribes to the speech-result topic of the remote control terminal, performs mapping matching between the obtained speech result and the semantic dictionary in the semantic map, and identifies the destination to navigate to; then the robot end plans a global path using the center line inside the corridor and controls the motion of the robot chassis.
2. The method according to claim 1, characterized in that, in said step 2, an image acquisition device is installed on the robot; among the images acquired during rotation, only objects appearing at the image center are selected as valid observed objects, and the corresponding robot rotation angle is recorded.
3. The method according to claim 1 or 2, characterized in that, in said step 2, if the computed initial global pose of the robot has position coordinates x0, y0 and pose angle θ0, the robot pose is computed according to the following formula:
wherein xk and yk denote the position coordinates of the k-th observed object in the map, Rk denotes the measured distance between the k-th observed object and the robot, θk is the angle between the k-th observed object and the robot center, Δθk denotes the rotation angle of the robot relative to its initial pose when the k-th observed object is detected, and k is a positive integer.
4. The method according to claim 3, characterized in that, in said step 2, the initial position coordinates x0, y0 of the robot are first computed by maximum likelihood estimation; if the k-th object observed by the robot is at position (xk, yk) at distance Rk, then based on Bayesian probability the following formula is obtained:
P(x0, y0 | xk, yk, Rk) ∝ P(xk, yk, Rk | x0, y0) · P(x0, y0)
wherein P(x0, y0 | xk, yk, Rk) denotes the probability that the robot is located at (x0, y0) given the k-th object position (xk, yk) and distance Rk; P(x0, y0) denotes the probability that the robot is located at position (x0, y0); and P(xk, yk, Rk | x0, y0) denotes the probability that, given the robot is located at (x0, y0), the k-th object is located at (xk, yk) at distance Rk;
according to maximum likelihood estimation theory, to maximize the probability distribution P(x0, y0 | xk, yk, Rk) it is only necessary to maximize the probability distribution P(xk, yk, Rk | x0, y0); therefore, each point in the value range of (x0, y0) in the indoor map is traversed, the total cost of each point is calculated, and the point with the minimum cost is taken as the final robot location point;
the matching degree between the current point and the observed objects, measured by Euclidean distance, is as follows:
wherein N denotes the total number of observed objects;
further, based on the found position coordinates (x0, y0), the initial angle θ0 of the robot is calculated according to the following formula:
wherein Δθk denotes the rotation angle of the robot relative to its initial pose when the k-th observed object is detected.
5. The method according to claim 4, characterized in that, in said step 2, when seeking the initial global pose of the robot, if an object observed by the robot has multiple possible positions in the indoor space, exact matching is used for the computation; that is, every possible location point of the object in the space is matched against the robot location once, so as to find the best global pose of the robot.
6. The method according to claim 4, characterized in that, in said step 2, when traversing the values of (x0, y0) in the indoor map, a spatial pyramid method is used to speed up the computation; specifically, the space is scanned at three scales: first a global matching is performed with a unit of 5 m, yielding a 10 m region around the best position; then a second search with a unit of 1 m is performed within the 10 m range around the best position, yielding a 2 m region around the best position; finally a search with a unit of 0.1 m is performed within the 2 m range around the best position, yielding the final global best position.
7. The method according to claim 1, characterized in that, in said step 3, the robot end plans the global path using the center line inside the corridor, specifically: first, a preset number of corner points are selected in the indoor grid map as the mark points of the center line, and the coordinates of each corner point in the indoor map are obtained; then, according to the possible travel trajectories, all mark points capable of forming a center line are connected to generate a trajectory container;
after the start and goal coordinates of the navigation have been determined, the distances from the start point and the goal point to every point of the whole center-line path are calculated, and the nearest point is selected in each case as the proximity point, yielding the start proximity point and the goal proximity point respectively; then, beginning from the goal, the points between the goal and the goal proximity point are pushed into the path one by one, and the points between the goal proximity point and the start proximity point are also pushed into the path; finally, the points between the start proximity point and the start are pushed into the path, completing the whole global path planning.
8. The method according to claim 1 or 7, characterized in that, in said step 3, after the center-line-based global path is obtained, the autonomous navigation module of the robot end starts the navigation task in combination with the sensor data; in each control period, the autonomous navigation module uses the dynamic window approach (DWA) to plan, from the currently obtained sensor data, the control command for the robot chassis motion in the next period, until the navigation task ends.
9. The method according to claim 1, characterized in that, in said step 2, when the robot rotates in place for one circle, the robot rotation speed is set to 0.3 rad/s.
CN201910015546.7A 2019-01-08 2019-01-08 Indoor robot navigation method based on environmental feature detection Pending CN109724603A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910015546.7A CN109724603A (en) Indoor robot navigation method based on environmental feature detection


Publications (1)

Publication Number Publication Date
CN109724603A true CN109724603A (en) 2019-05-07

Family

ID=66298886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910015546.7A Pending CN109724603A (en) Indoor robot navigation method based on environmental feature detection

Country Status (1)

Country Link
CN (1) CN109724603A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914068A (en) * 2013-01-07 2014-07-09 中国人民解放军第二炮兵工程大学 Service robot autonomous navigation method based on raster maps
CN106056207A (en) * 2016-05-09 2016-10-26 武汉科技大学 Natural language-based robot deep interacting and reasoning method and device
CN106931975A (en) * 2017-04-14 2017-07-07 北京航空航天大学 A kind of many strategy paths planning methods of mobile robot based on semantic map
CN107689075A (en) * 2017-08-30 2018-02-13 北京三快在线科技有限公司 Generation method, device and the robot of navigation map
CN108958256A (en) * 2018-07-23 2018-12-07 浙江优迈德智能装备有限公司 A kind of vision navigation method of mobile robot based on SSD object detection model
CN109084749A (en) * 2018-08-21 2018-12-25 北京云迹科技有限公司 The method and device of semantic positioning is carried out by object in environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cheng Zhao et al.: "Building a grid-semantic map for the navigation of service robots through human-robot interaction", Digital Communications and Networks *
Dong Hongyi et al.: "Global Localization Using Object Detection in Indoor Environment Based on Semantic Map", 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA) *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11614633B2 (en) 2017-10-17 2023-03-28 Goer Optical Technology Co., Ltd. Optical module assembly device
CN110298320A (en) * 2019-07-01 2019-10-01 北京百度网讯科技有限公司 A kind of vision positioning method, device and storage medium
CN110298320B (en) * 2019-07-01 2021-06-22 北京百度网讯科技有限公司 Visual positioning method, device and storage medium
CN110531766A (en) * 2019-08-27 2019-12-03 熵智科技(深圳)有限公司 Based on the known continuous laser SLAM composition localization method for occupying grating map
CN110531766B (en) * 2019-08-27 2022-06-28 熵智科技(深圳)有限公司 Continuous laser SLAM (Simultaneous laser mapping) composition positioning method based on known occupied grid map
CN110838145A (en) * 2019-10-09 2020-02-25 西安理工大学 Visual positioning and mapping method for indoor dynamic scene
CN110795836A (en) * 2019-10-17 2020-02-14 浙江大学 Mechanical arm robust optimization design method based on mixed uncertainty of interval and bounded probability
CN112711249A (en) * 2019-10-24 2021-04-27 科沃斯商用机器人有限公司 Robot positioning method and device, intelligent robot and storage medium
EP4050449A4 (en) * 2019-10-24 2022-11-16 Ecovacs Commercial Robotics Co., Ltd. Method and device for robot positioning, smart robot, and storage medium
CN111319044A (en) * 2020-03-04 2020-06-23 达闼科技(北京)有限公司 Article grabbing method and device, readable storage medium and grabbing robot
CN111488419A (en) * 2020-03-30 2020-08-04 中移(杭州)信息技术有限公司 Method and device for creating indoor robot map, electronic equipment and storage medium
CN111488419B (en) * 2020-03-30 2023-11-03 中移(杭州)信息技术有限公司 Method and device for creating indoor robot map, electronic equipment and storage medium
CN111539994B (en) * 2020-04-28 2023-04-18 武汉科技大学 Particle filter repositioning method based on semantic likelihood estimation
CN111539994A (en) * 2020-04-28 2020-08-14 武汉科技大学 Particle filter repositioning method based on semantic likelihood estimation
CN111743462A (en) * 2020-06-18 2020-10-09 小狗电器互联网科技(北京)股份有限公司 Sweeping method and device of sweeping robot
CN112132951A (en) * 2020-08-18 2020-12-25 北京旋极伏羲科技有限公司 Method for constructing grid semantic map based on vision
CN112132951B (en) * 2020-08-18 2024-05-17 北斗伏羲信息技术有限公司 Construction method of grid semantic map based on vision
CN112068555A (en) * 2020-08-27 2020-12-11 江南大学 Voice control type mobile robot based on semantic SLAM method
CN111932675B (en) * 2020-10-16 2020-12-29 北京猎户星空科技有限公司 Map building method and device, self-moving equipment and storage medium
CN111932675A (en) * 2020-10-16 2020-11-13 北京猎户星空科技有限公司 Map building method and device, self-moving equipment and storage medium
CN114445323A (en) * 2020-11-06 2022-05-06 顺丰科技有限公司 Damaged package detection method and device, computer equipment and storage medium
CN112581535A (en) * 2020-12-25 2021-03-30 达闼机器人有限公司 Robot positioning method, device, storage medium and electronic equipment
CN113052189A (en) * 2021-03-30 2021-06-29 电子科技大学 Improved MobileNet V3 feature extraction network
CN113052189B (en) * 2021-03-30 2022-04-29 电子科技大学 Improved MobileNet V3 feature extraction network
CN113010631B (en) * 2021-04-20 2022-11-11 上海交通大学 Knowledge engine-based robot and environment interaction method
CN113010631A (en) * 2021-04-20 2021-06-22 上海交通大学 Knowledge engine-based robot and environment interaction method
CN113505646A (en) * 2021-06-10 2021-10-15 清华大学 Target searching method based on semantic map
CN113505646B (en) * 2021-06-10 2024-04-12 清华大学 Target searching method based on semantic map
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot
CN113916245A (en) * 2021-10-09 2022-01-11 上海大学 Semantic map construction method based on instance segmentation and VSLAM
CN113887508A (en) * 2021-10-25 2022-01-04 上海品览数据科技有限公司 Method for accurately identifying center line of public corridor space in building professional residential plan
CN113887508B (en) * 2021-10-25 2024-05-14 上海品览数据科技有限公司 Accurate identification method for central line of public corridor space in building professional residence plan

Similar Documents

Publication Publication Date Title
CN109724603A (en) Indoor robot navigation method based on environmental feature detection
US20220262115A1 (en) Visual-Inertial Positional Awareness for Autonomous and Non-Autonomous Tracking
JP6496323B2 (en) System and method for detecting and tracking movable objects
US9996936B2 (en) Predictor-corrector based pose detection
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
US20150371440A1 (en) Zero-baseline 3d map initialization
CN103162682B (en) Based on the indoor path navigation method of mixed reality
CN109631855A (en) High-precision vehicle positioning method based on ORB-SLAM
US20150138310A1 (en) Automatic scene parsing
EP3656138A1 (en) Aligning measured signal data with slam localization data and uses thereof
WO2020224305A1 (en) Method and apparatus for device positioning, and device
CN106291517A (en) Indoor cloud robot angle positioning method based on position and visual information optimization
WO2020000395A1 (en) Systems and methods for robust self-relocalization in pre-built visual map
CN103759724B (en) A kind of indoor navigation method based on lamp decoration feature and system
CN112348887A (en) Terminal pose determining method and related device
Shu et al. 3D point cloud-based indoor mobile robot in 6-DoF pose localization using a Wi-Fi-aided localization system
CN114332232B (en) Smart phone indoor positioning method based on space point, line and surface feature hybrid modeling
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
CN108534781A (en) Indoor orientation method based on video
CN115019167A (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
Jonas et al. IMAGO: Image-guided navigation for visually impaired people
US20210224538A1 (en) Method for producing augmented reality image
CN101499176B (en) Video game interface method
LU et al. Scene Visual Perception and AR Navigation Applications
US20240312056A1 (en) Method and system for determining a three dimensional position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190507