
CN1811644A - Automatic positioning method for intelligent robot under complex environment - Google Patents

Automatic positioning method for intelligent robot under complex environment

Info

Publication number
CN1811644A
CN1811644A (publication), CN100461058C (granted publication), CN200610056937A / CN200610056937 (application)
Authority
CN
China
Prior art keywords
robot
probability
point
information
sampled point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610056937
Other languages
Chinese (zh)
Other versions
CN100461058C (en)
Inventor
王启宁
刘亮环
李华
容春霞
谢广明
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CNB2006100569376A
Publication of CN1811644A
Application granted
Publication of CN100461058C
Expired - Fee Related (current status)
Anticipated expiration

Landscapes

  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention relates to a method for the autonomous localization of an intelligent robot in a complex environment. The method transplants human reasoning and cognitive processes into the robot's sensing and memory, substantially improving on the traditional localization of robots in complex environments. The invention is applicable to a wide range of mobile robots.

Description

Method for the autonomous localization of an intelligent robot in a complex environment
Technical field
The invention belongs to the fields of intelligent control and applied computer technology. It relates to a method for autonomous robot localization, and more specifically to a method by which a robot localizes itself in a complex environment using pre-stored information or group cooperation. The method can be used in intelligent robot control and in the self-localization of robots performing autonomous tasks in complex environments, providing a favorable precondition for correct autonomous decision-making.
Background art
Over the last hundred years, the development of robots has gone through roughly three stages, or three generations. The first generation consisted of simple individual robots; the second generation consists of working robot groups that possess perception; the third generation is the intelligent robot with a high degree of autonomy, whose future direction is to have consciousness, to think, and to converse with people. At present, robotics is in the second generation and is advancing toward the third.
At present, robot research and development are moving toward intelligence, so the autonomy of robots is particularly important. Mobile robots play an important role in many fields and are widely used in production and daily life. In practice, when a robot is to complete a task autonomously, the most important thing is self-localization: the robot must know its current position, including both its local coordinates and its global world coordinates, before it can decide what to do next.
In this localization process, the information available to the robot falls into two broad classes: internal sensor information and external sensor information. The former is usually obtained from odometry, i.e., displacement information derived from internal motion. This can directly determine the mobile robot's displacement, but it is accurate only over short distances. The latter is the detection of the external environment (e.g., landmarks). However, environmental information useful for localization is not available at all times and places, and it is always uncertain. This uncertainty often makes position data unreliable, which in turn affects the robot's decisions and subsequent behavior. It is therefore necessary to fuse complementary or redundant data from the various sensors, obtain a reliable estimate of the robot's position, and supply it to the robot's other subsystems.
Furthermore, when sensor data are processed, much sensor information is discarded because of measurement uncertainty, limitations of the fusion algorithm, the lack of an accurate description of the surrounding environment, and the scarcity of landmark features. For example, most existing mobile-robot localization methods require that sensor data be obtained simultaneously, but in real operation a robot often cannot obtain the multiple simultaneous sensor readings these methods require; if the data at some instant do not meet the requirements, the data already obtained at that instant are usually discarded. Improving the utilization of sensor data is therefore another problem that deserves attention.
Vision-based navigation for mobile robots mainly includes localization, path planning, and obstacle avoidance, of which localization is the most fundamental. Mobile robots with different working environments and different kinds and numbers of sensors use different localization methods; the two most common at present are landmark-based localization and localization based on global visual information. Landmarks may be artificial (e.g., straight lines, arrows, or other figures) or natural (e.g., doors, windows, corners, lamps, pillars). Landmark-based localization segments and identifies landmarks in the image and determines the robot's relative position from the known coordinates, shapes, and other features of these landmarks in the environment. Such landmarks should therefore have distinctive features and be easy to segment from the scene image. In unstructured environments, when artificial landmarks cannot be installed or natural landmarks are difficult to segment, localization based on global visual information is one feasible alternative. Methods based on global visual information usually obtain global information, including the current positions of the robot and of other objects in the scene, from a global camera; the robot itself does not need to segment images or extract local features with its own vision, but determines its position from the information provided by the global camera. This kind of localization places few or no demands on the robot's own visual and computing capabilities. Its biggest drawback, however, is how to obtain the global information. In most cases a global camera cannot be installed, the robot cannot obtain global information, and autonomous localization then becomes impossible. In open outdoor environments in particular, relying purely on global information is inaccurate or simply infeasible because of missing information. An autonomous localization method applicable to a variety of complex environments is therefore all the more important, and it is one of the core technologies for robots to develop to a more advanced stage.
Autonomous localization, as one of the core technologies of intelligent robots, plays a crucial role in robot applications. The various intelligent behaviors exhibited by an intelligent robot, and the intelligent strategies exhibited by cooperating multi-robot teams, all depend on accurate self-localization. Research on robot localization around the world focuses mainly on the active localization of autonomous robots, and many previously common methods based on a global camera have become unsuitable.
In actual production and daily life, robots with vision sensors as their main sensing tool are widespread. For autonomous mobile robots, especially biped and humanoid robots moving in complex environments, the three-dimensional motion of the body joints subjects the visual information to severe disturbance. Traditional landmark-based localization becomes very unreliable in practice, particularly when the environment is unknown. Current international research uses probability-based methods that solve the localization problem by optimizing the motion model and the sensing model. However, once the robot collides with another robot or with an obstacle, these methods introduce large errors into the localization result.
Summary of the invention
People often rely on past experience to determine their own position, rather than constantly identifying specific markers. When a person arrives at a place he has visited before, even if no specific marker appears for a long time, subtle visual cues can help him determine his position. In addition, once a person has determined his own position, he can judge the global positions of surrounding objects; if another person also sees those objects, that person can infer his own position from these global positions. This kind of cooperation is especially common in human group activity. Inspired by the advantages of these two basic processes of human thinking, we propose a brand-new self-localization method for intelligent robots.
The present invention combines the latest probabilistic localization methods and transplants human reasoning and cognitive processes into the robot's perception of the external environment. It effectively solves problems encountered in practical localization, such as an insufficient number of landmarks and the effect of collisions on localization accuracy, enabling an intelligent robot to localize itself fairly accurately in a complex environment.
The present invention addresses the problem that various complex disturbances degrade the precise localization of intelligent mobile robots, and provides a method for autonomous localization. The method enables mobile robots equipped with vision sensors to localize themselves in complex environments, providing a good foundation for complex tasks such as autonomous exploration and navigation. The invention transplants human reasoning and cognitive processes into the robot's perception and memory of the external environment, improving on the traditional robot localization mode and on the latest international research results. An intelligent autonomous robot that localizes itself with the method of the present invention can better adapt to actual production and daily life.
One aspect of the present invention provides a method for the autonomous localization of an intelligent robot in a complex environment. The method applies to the case in which a single robot works alone, or in which several robots work simultaneously but cannot accurately recognize one another. The localization method comprises the following steps:
1) Randomly generate a number of sample points in the robot's work region; each sample point represents a possible current position of the robot together with the probability of that position;
2) If the robot's position changes (i.e., the motor module is driven), update all sample points according to the motion model;
3) If a sensor obtains landmark information, update the probability of each sample point according to this information and add candidate points;
4) If no landmark information has been obtained for some period of time, query the experience database and perform image matching;
if information whose matching degree meets the requirement is found in the experience database, update the probabilities of the sample points according to this a-posteriori information and add candidate points;
5) Delete the sample points with the lowest probability values, add the same number of points chosen from the candidate points to the sample set, and delete all remaining candidate points;
6) Find the region with the largest probability sum and compute the weighted mean of the positions of the sample points in that region; the resulting position is the robot's current best position;
7) Repeat steps 2) to 6) to localize the robot in real time (the sample representation used by the code sketches below is shown after these steps).
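The steps above follow the structure of a Monte Carlo (particle-filter) localization loop. The code sketches in the detailed description below all use the same simple sample representation, initialized as in step 1. A minimal Python sketch is given here; the sample count, field bounds, and z value are illustrative choices and are not taken from the patent.

```python
import random

def init_samples(n=200, field_x=(0.0, 600.0), field_y=(0.0, 400.0), z=0.0):
    """Step 1: scatter n pose hypotheses uniformly over the work region,
    each carrying an equal initial probability."""
    return [{"x": random.uniform(*field_x),
             "y": random.uniform(*field_y),
             "z": z,
             "theta": random.uniform(-180.0, 180.0),
             "prob": 1.0 / n}
            for _ in range(n)]

samples = init_samples()
```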
Another aspect of the present invention provides a method for the autonomous localization of intelligent robots in a complex environment that applies to several robots working cooperatively, i.e., the robots can communicate with one another. The localization method comprises the following steps:
1) Randomly generate a number of sample points in the robot's work region; each sample point represents a possible current position of the robot together with the probability of that position;
2) If the robot's position changes (i.e., the motor module is driven), update all sample points according to the motion model;
3) If a sensor obtains landmark information, update the probability of each sample point according to this information and add candidate points;
4) If no landmark information has been obtained for some period of time, query the experience database and perform image matching; if information whose matching degree meets the requirement is found in the experience database, update the probabilities of the sample points according to this a-posteriori information and add candidate points;
5) If the robot finds that, for some period of time, it has neither obtained landmark information nor found a satisfactory match in the experience database, it communicates with the other robots, updates the probabilities of the sample points according to the information transmitted by the other robots, and adds candidate points;
6) Delete the sample points with the lowest probability values, add the same number of points chosen from the candidate points to the sample set, and delete all remaining candidate points;
7) Find the region with the largest probability sum and compute the weighted mean of the positions of the sample points in that region; the resulting position is the robot's current best position;
8) Repeat steps 2) to 7) to achieve cooperative real-time localization of multiple robots.
Description of drawings
The present invention is described in further detail below with reference to the accompanying drawings:
Fig. 1 is a schematic diagram of the field for a four-legged robot football match;
Fig. 2 is a flow chart of the real-time localization steps of a single robot working alone;
Fig. 3 shows the visual-information matching situation, where Figs. 3b and 3d are images pre-stored in the experience database, and Figs. 3a and 3c are images collected by the robot in real time that need to be matched;
Fig. 4 is a schematic diagram of the probability updating process of the present invention, where Fig. 4a shows the random sampling, Fig. 4b shows the position information after the probability update, and Fig. 4c shows the final best position;
Fig. 5 is a flow chart of cooperative real-time localization by multiple robots;
Fig. 6 compares the effect of the method of the present invention with that of the latest international method.
Embodiment
The preferred embodiments of the present invention are described in more detail below with reference to the accompanying drawings.
This preferred embodiment takes the real-time localization of four-legged soccer robots in a football match as an example to illustrate the method of autonomous localization of intelligent robots in complex environments. The autonomous localization method of the present invention comprises two variants. The first is autonomous localization by a single robot, which is also the main method used for robot localization. The second is cooperative localization by several robots. Cooperative localization is based on the autonomous localization information of each individual robot: by communicating with one another, the robots inform the others of their own localization information, and these pieces of information are then combined according to their confidence to finally obtain a more accurate localization.
Four-legged robot football is one of the most closely watched events in robot soccer competitions; it is an important link between the traditional wheeled robot and the eventual humanoid (biped) robot. Robot soccer is a comprehensive project involving many traditional theories and cutting-edge technologies. The construction of the robot itself brings together structural engineering, electronic circuits, precision machinery, and biomimetic materials; computing, automatic control, sensing, and wireless communication are indispensable for the robot to move and play football; and cooperation among several robots involves the still more complex problems of multi-agent coordination, cooperation, and strategy.
Fig. 1 shows the field of a four-legged robot football match. Four identification posts are placed along the two sides of the field; the top of each post carries a color combination formed by pink together with yellow or blue. The colors of the two goals also differ, and it is precisely by means of these different colors that the robot can localize itself. Relying only on this kind of localization, however, is not accurate enough.
The color differences are captured by the image sensor (i.e., the camera) carried on the robot's body. The robot carries not only an image sensor but also other sensors such as pressure sensors and touch sensors. Therefore, by comprehensively analyzing all of the information gathered by these sensors, more accurate localization can be achieved.
This embodiment describes in detail how the localization method of the present invention achieves precise localization of the robot on the field. To describe the localization method of the invention in detail, the following coordinate systems are first defined:
1. Absolute coordinate system: a fixed coordinate system chosen in the robot's work region.
2. Robot coordinate system: a moving coordinate system attached to the robot's body; its origin is the geometric center of the robot, and its positive x-axis points straight ahead of the robot's body.
3. Camera (vision) coordinate system: a moving coordinate system attached to the robot's camera; its origin is the optical center of the camera, and its positive x-axis is the outgoing direction of the principal optical axis.
The following method is used when a single robot plays alone, or when several robots work simultaneously but still cannot accurately recognize one another; it is also the robot's main localization method.
Fig. 2 is a schematic flow chart of the real-time localization steps of a single robot working alone. The localization method is carried out as follows:
1) The robot's position is modeled as a density over a set of points, each of which is an estimate of the current position. A number of estimate points are generated at random in the robot's work region to characterize the current position estimate. Each possible position has an associated position confidence.
Fig. 4 is a schematic diagram of the probability updating process of the present invention; the small arrows in Fig. 4a indicate the location probabilities of the random samples. Because at first there is no feature information or empirical information whatsoever, the randomly generated estimate points may be distributed anywhere on the field.
2) For the four-legged soccer robot, we choose the geometric center of its body as the position reference point and represent it by a four-dimensional vector φ. Its four components are the x/y/z coordinates with respect to a fixed point of the work region taken as origin, and the heading angle of the robot's body. If the robot's position changes, i.e., the motor module is driven, all sample points are moved according to the motion model:
φ_t(x, y, z, θ) = φ_{t-1}(x, y, z, θ) + Δ_t
where φ_t(x, y, z, θ) denotes a possible position of the robot at the current time t; φ_{t-1}(x, y, z, θ) denotes the position of the robot at the previous time step; and Δ_t denotes the displacement of the robot's motion module from time t-1 to time t. The displacement comprises the components of the movement along the x/y/z axes and the change in heading angle.
The location-probability update according to the motion model can be computed from the robot's odometry. For example, if the robot has moved on the field for 6 seconds, the distance walked is obtained from the robot's current average movement speed; the speed of our four-legged robot is 45 cm/s, so the robot has walked 270 cm, from which the updated position is then computed.
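A minimal sketch of the motion update, assuming the odometry increment Δ_t is supplied as per-axis displacements plus a heading change; the added Gaussian noise is an assumption (it keeps the hypotheses spread out) and is not specified in the text.

```python
import random

def motion_update(samples, delta, noise=(2.0, 2.0, 0.0, 2.0)):
    """Step 2: phi_t = phi_{t-1} + Delta_t for every sampled pose.
    delta = (dx, dy, dz, dtheta) from odometry; the Gaussian noise terms
    are an added assumption, not part of the patented formula."""
    for s in samples:
        s["x"] += delta[0] + random.gauss(0, noise[0])
        s["y"] += delta[1] + random.gauss(0, noise[1])
        s["z"] += delta[2] + random.gauss(0, noise[2])
        s["theta"] += delta[3] + random.gauss(0, noise[3])

# Worked example from the text: 6 s of walking at 45 cm/s gives a 270 cm displacement.
samples = [{"x": 0.0, "y": 0.0, "z": 0.0, "theta": 0.0, "prob": 1.0}]
motion_update(samples, delta=(270.0, 0.0, 0.0, 0.0))
print(samples[0]["x"])  # roughly 270
```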
3) If a sensor obtains landmark information, the probability of each sample point is updated according to this information, and candidate points are added:
The robot scans the image it has just captured. If a marker post is identified in the image, the position and size of the post in the image are recorded. Then, using the known actual size of the post and the camera attributes of the robot (optical center position, image-plane position, focal length, etc.), the coordinates P_bc(x_bc, y_bc) of the post in the camera coordinate system can be computed from the convex-lens imaging principle (the detailed computation can be found in any text on convex-lens imaging and, not being part of the present invention, is not elaborated here). In addition, the robot can obtain from its own sensors the coordinates P_cr(x_cr, y_cr) of the camera's optical center in the robot coordinate system and the direction θ_cr of the principal optical axis in the robot coordinate system. From P_bc, P_cr, and θ_cr, the coordinates P_br(x_br, y_br) of the post in the robot coordinate system can be computed:
x_br = x_cr + x_bc·cosθ_cr - y_bc·sinθ_cr
y_br = y_cr + x_bc·sinθ_cr + y_bc·cosθ_cr
From the coordinates P_ba(x_ba, y_ba) of the post in the absolute coordinate system and the coordinates S_rai(x_rai, y_rai, θ_rai) of each sample point S_i in the absolute coordinate system, the coordinates P_bri(x_bri, y_bri) of the post in the robot coordinate system represented by each sample point can be computed:
x_bri = (x_ba - x_rai)·cosθ_rai + (y_ba - y_rai)·sinθ_rai
y_bri = -(x_ba - x_rai)·sinθ_rai + (y_ba - y_rai)·cosθ_rai
For each sample point S_i, compare (x_br, y_br) with (x_bri, y_bri). If the difference is very small, increase the probability of this sample point; if the difference is large, decrease it. The threshold used to judge the size of the difference and the amounts by which the probability is increased or decreased may either be fixed in advance or varied in real time.
Bel_t(φ_t(x, y, z, θ)) ← α·P(s_t | φ_t(x, y, z, θ))·Bel_t(φ_t(x, y, z, θ))
where Bel_t(φ_t(x, y, z, θ)) denotes the confidence of the current position estimate and is updated by the sensor probability model P(s_t | φ_t(x, y, z, θ)).
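The weight update of step 3 can be sketched as follows, assuming a single post observed at known absolute coordinates. The two coordinate transforms follow the formulas above; the 20 cm agreement threshold and the multiplicative increase/decrease factors are illustrative, since the text allows them to be fixed in advance or varied in real time, and the final normalization plays the role of α.

```python
import math

def post_in_robot_frame(p_bc, p_cr, theta_cr_deg):
    """Camera-frame post position P_bc -> robot-frame position (x_br, y_br)."""
    th = math.radians(theta_cr_deg)
    x_bc, y_bc = p_bc
    x_cr, y_cr = p_cr
    return (x_cr + x_bc * math.cos(th) - y_bc * math.sin(th),
            y_cr + x_bc * math.sin(th) + y_bc * math.cos(th))

def expected_post(sample, p_ba):
    """Post position (x_bri, y_bri) in the robot frame implied by one sampled pose."""
    th = math.radians(sample["theta"])
    dx, dy = p_ba[0] - sample["x"], p_ba[1] - sample["y"]
    return (dx * math.cos(th) + dy * math.sin(th),
            -dx * math.sin(th) + dy * math.cos(th))

def landmark_weight_update(samples, p_br, p_ba, thresh=20.0, up=1.5, down=0.5):
    """Raise the probability of samples whose expected post position agrees with
    the measured one, lower it otherwise (Bel <- alpha * P(s|phi) * Bel)."""
    for s in samples:
        x_bri, y_bri = expected_post(s, p_ba)
        if math.hypot(p_br[0] - x_bri, p_br[1] - y_bri) < thresh:
            s["prob"] *= up
        else:
            s["prob"] *= down
    total = sum(s["prob"] for s in samples)  # normalization stands in for alpha
    for s in samples:
        s["prob"] /= total

# Typical use: p_br = post_in_robot_frame(p_bc, p_cr, theta_cr), then
# landmark_weight_update(samples, p_br, p_ba) with p_ba the post's known
# absolute coordinates.
```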
In addition, from the coordinates P_ba(x_ba, y_ba) of the post in the absolute coordinate system and the coordinates P_br(x_br, y_br) of the post in the robot coordinate system, the coordinates P_ra(x_ra, y_ra, θ_ra) of the robot in the absolute coordinate system must satisfy:
x_ba = x_ra + x_br·cosθ_ra - y_br·sinθ_ra
y_ba = y_ra + x_br·sinθ_ra + y_br·cosθ_ra
A number of points P_ra(x_ra, y_ra, θ_ra) satisfying the above conditions are generated at random and used as candidate points.
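One reading of this candidate-generation step is to draw a heading at random and solve the two constraint equations above for (x_ra, y_ra); the sketch below does exactly that. The number of candidates, the z value, and the zero initial probability (overwritten later when a candidate is promoted) are assumptions.

```python
import math
import random

def candidates_from_post(p_ba, p_br, n=20, z=0.0):
    """Generate n poses P_ra = (x_ra, y_ra, theta_ra) that satisfy
    x_ba = x_ra + x_br*cos(theta) - y_br*sin(theta)
    y_ba = y_ra + x_br*sin(theta) + y_br*cos(theta)."""
    cands = []
    for _ in range(n):
        theta = random.uniform(-180.0, 180.0)
        th = math.radians(theta)
        x_ra = p_ba[0] - p_br[0] * math.cos(th) + p_br[1] * math.sin(th)
        y_ra = p_ba[1] - p_br[0] * math.sin(th) - p_br[1] * math.cos(th)
        cands.append({"x": x_ra, "y": y_ra, "z": z, "theta": theta, "prob": 0.0})
    return cands
```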
4) If no landmark information has been obtained for some period of time, the experience database is queried and image matching is performed. First the feature value of the currently acquired image is computed; in this embodiment, image matching uses the geometric position of each pixel in the image, its gray value, and the relation between its gray value and the gray values of the surrounding pixels. The robot computes the feature value of the current image with the image-feature algorithm, searches the database for the image I_best whose feature value differs least from it, and records the difference between the two feature values. If this difference is small enough, the two images are considered to match well, and the position P_best corresponding to the image I_best can be used to update the probabilities of the sample points: for each sample point, its position is compared with P_best; if the difference is very small, the probability of the sample point is increased; if the difference is large, it is decreased. The threshold used to judge the size of the difference and the amounts by which the probability is increased or decreased may either be fixed in advance or varied in real time.
Bel_t(φ_t(x, y, z, θ)) ← β·P(ε_t | φ_t(x, y, z, θ))·Bel_t(φ_t(x, y, z, θ))
where P(ε_t | φ_t(x, y, z, θ)) denotes the probability of the experience-based position update model. In addition, several location points are generated at random near P_best as candidate points.
For example, suppose 5 images are stored in the database, with feature values 10, 20, 30, 40, and 50, and that the threshold on the feature-value difference is 4. If the feature value computed for the current image is 28.5, then I_best is the third image and the feature-value difference is 1.5, much smaller than the threshold 4. The current position of the robot dog is therefore taken to be the position corresponding to the third database image; the probabilities of all sample points are updated with this position, and several positions near it are generated at random as candidate points.
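The worked example translates directly into a lookup over (feature value, position) pairs; the stored positions other than that of the third image are placeholders, and the feature-value computation itself is not shown because the text only states that it uses pixel positions and gray-value relations.

```python
def best_experience_match(feature, db, thresh=4.0):
    """db: list of (feature_value, position) pairs from the experience database.
    Return the stored position whose feature value is closest to `feature`,
    or None when even the smallest difference exceeds the threshold."""
    diff, pos = min((abs(feature - f), p) for f, p in db)
    return pos if diff <= thresh else None

# Example from the text: stored feature values 10..50, current image = 28.5.
db = [(10, (0, 0, 0)), (20, (50, 0, 0)), (30, (2388, -1203, 300)),
      (40, (100, 100, 0)), (50, (200, 200, 0))]
print(best_experience_match(28.5, db))  # third image wins (difference 1.5 < 4)
```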
Fig. 3 shows the visual-information matching situation: Figs. 3b and 3d are images pre-stored in the experience database, and Figs. 3a and 3c are images collected by the robot in real time that need to be matched. As shown in Fig. 3a, (2388, -1186, 300) is the actual position coordinate of the robot when this image was sampled by hand. Using the method of the invention, the robot quickly found image information with matching features in the experience database and computed its current position as (2388, -1203, 300). The result is essentially consistent with the manually measured current position, with a very small error, demonstrating the practicality and efficiency of the method.
5) Delete the sample points with the lowest probability values, add the same number of points chosen from the candidate points to the sample set, and delete all remaining candidate points:
Compute the mean probability of all sample points, use this mean to determine a threshold (for example, 1/3 of the mean), and delete the sample points whose probability is below this threshold. For every sample point deleted, one point is chosen at random from the candidate points and added to the sample set. Finally, all remaining candidate points are deleted. (The way the threshold is determined guarantees that the number of deleted sample points does not exceed the total number of candidate points.)
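A sketch of this replacement step, using the 1/3-of-mean threshold from the example above; the starting probability given to promoted candidates is an assumption, and a min() guard is added even though the text argues it is unnecessary.

```python
import random

def resample(samples, candidates, frac=1.0 / 3.0):
    """Step 5: drop low-probability samples, promote candidates in their place,
    then discard all remaining candidates."""
    mean_p = sum(s["prob"] for s in samples) / len(samples)
    kept = [s for s in samples if s["prob"] >= frac * mean_p]
    n_replace = min(len(samples) - len(kept), len(candidates))
    promoted = random.sample(candidates, n_replace)
    for c in promoted:
        c["prob"] = mean_p  # starting weight for promoted candidates (an assumption)
    candidates.clear()      # all remaining candidates are discarded
    return kept + promoted
```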
6) Find the region with the largest probability sum, compute the weighted mean of the positions of the sample points in that region, and take the resulting position as the robot's current best position:
The robot's work region is divided into several regular, disjoint subregions, so that the position of every sample point falls into exactly one subregion. Sum the probabilities of the sample points in each subregion; the subregion with the largest probability sum is the best subregion. Compute the weighted mean of the positions of all sample points in the best subregion (the weights are the probabilities of the sample points); the resulting mean is the robot's current best position. The mean of the probabilities of all sample points in the best subregion is the confidence of this best position.
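A sketch of the best-region estimate, assuming square grid cells of an illustrative size; the weighted means and the confidence follow the description above.

```python
import math
from collections import defaultdict

def best_region_estimate(samples, cell=100.0):
    """Step 6: grid the work region into disjoint cells, pick the cell with the
    largest probability sum, and return the probability-weighted mean pose in
    that cell together with its confidence (mean probability of the cell)."""
    cells = defaultdict(list)
    for s in samples:
        cells[(math.floor(s["x"] / cell), math.floor(s["y"] / cell))].append(s)
    best = max(cells.values(), key=lambda grp: sum(s["prob"] for s in grp))
    w = sum(s["prob"] for s in best)
    x = sum(s["x"] * s["prob"] for s in best) / w
    y = sum(s["y"] * s["prob"] for s in best) / w
    theta = sum(s["theta"] * s["prob"] for s in best) / w
    confidence = w / len(best)
    return (x, y, theta), confidence
```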
Fig. 4b shows the location estimates after updating with the different models, and Fig. 4c shows the best position estimate obtained at the end.
7) Repeat steps 2) to 6) to localize the robot in real time.
Another aspect of the present invention provides a method for the autonomous localization of intelligent robots in a complex environment that applies to several robots working cooperatively, i.e., the robots can communicate with one another. Cooperative multi-robot localization can be used as an auxiliary localization method: because it requires real-time communication between robots, cooperative localization is not as fast as autonomous localization, but its position information can be more accurate. When a single robot cannot localize itself well, or when very accurate position information is needed, the cooperative mode is the better choice. The premise and basis of the cooperative method is the independent localization information of each robot. Compared with the preceding description, steps 1) to 8) of the cooperative method differ from steps 1) to 7) of the single-robot autonomous localization method only by the additional step 5), the mutual communication among the robots for cooperative localization; the detailed implementation of this step is described below.
When several robots localize cooperatively, they can communicate with one another. The communication protocol is not restricted; either UDP or TCP/IP may be used. The four-legged robots in this example are equipped with wireless network cards conforming to the IEEE 802.11b standard and communicate by UDP broadcast.
While robot A is moving, if it recognizes another robot B from its images, it computes from the images the coordinates P_BA of robot B in robot A's coordinate system (the computation is similar to that of the coordinates of a marker post in the robot coordinate system), and then, from its own current best position P_A and confidence, computes the absolute coordinates P_B of B and their confidence r_B (determined jointly by the confidence of the image recognition and the confidence of P_A). When the computation is finished, A broadcasts P_B and r_B.
If a robot finds that, for some period of time, it has neither obtained landmark information nor found a satisfactory match in the experience database, it receives all broadcast messages about its own position currently sent by the other robots and chooses the one with the highest confidence. If this confidence is greater than a certain threshold, the probabilities of all sample points are updated with this confidence and its corresponding position, and several positions near that position are generated at random as candidate points.
For example, suppose robot A finds that it has neither obtained landmark information nor found a satisfactory match in the experience database over a period of time, and that at this moment three other robots broadcast positions and confidences for A, the three messages (x, y, θ, r) being (100, 200, 30, 0.7), (120, 190, 35, 0.9), and (90, 210, 25, 0.6). A then chooses the second message, which has the highest confidence. If this confidence 0.9 exceeds the threshold, all sample points are updated with the second message: the probabilities of sample points whose positions are close to (120, 190, 35) are increased, and the probabilities of sample points far from (120, 190, 35) are decreased; in addition, several candidate points are generated at random near (120, 190, 35).
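The selection and reweighting in this example can be sketched as follows; the confidence threshold, distance threshold, and reweighting factors are assumptions, as in the landmark update.

```python
import math

def cooperative_update(samples, broadcasts, conf_thresh=0.8,
                       dist_thresh=30.0, up=1.5, down=0.5):
    """Step 5 of the cooperative method: take the broadcast (x, y, theta, r)
    about this robot with the highest confidence r; if r exceeds the threshold,
    reweight the samples toward that position and return it so that candidate
    points can be generated nearby."""
    x, y, theta, r = max(broadcasts, key=lambda m: m[3])
    if r <= conf_thresh:
        return None
    for s in samples:
        if math.hypot(s["x"] - x, s["y"] - y) < dist_thresh:
            s["prob"] *= up
        else:
            s["prob"] *= down
    return (x, y, theta)

# Example from the text: three broadcasts about robot A; the second one wins.
msgs = [(100, 200, 30, 0.7), (120, 190, 35, 0.9), (90, 210, 25, 0.6)]
print(cooperative_update([{"x": 118.0, "y": 191.0, "theta": 30.0, "prob": 0.5}], msgs))
```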
In addition, if the robots cannot accurately recognize one another, i.e., a robot does not know specifically which of the surrounding robots is which, then the method of broadcasting position information has limitations. To solve this problem and make cooperative localization applicable to various systems regardless of whether the robots can recognize one another accurately, the present invention introduces the notion of a dynamic reference object. A dynamic reference object is an object that, at a given moment, more than one robot can see and can accurately identify with its vision module. When the robots cannot accurately recognize one another, they can choose a dynamic reference object around them, set a position confidence for the recognized dynamic reference object according to their own position confidence, and broadcast the position information of this object, together with their own confidence, to the other robots. If at the same moment another robot needs to finish localizing by cooperation, it can take the broadcast position with the highest confidence as the reference, and from the distance and angle to the reference object that it has measured itself, compute its own current position. This solves the problem of robots that cannot accurately recognize one another; the invention can thus be used in a much wider range of systems, and accurate recognition between robots is no longer a restriction.
Fig. 6 shows the actual localization of a robot using the present invention. Fig. 6a shows the actual position of the robot on the field, (800, 700), with a heading of 180°. Fig. 6b shows the initial random distribution of sample points when the probabilistic method is used, where the arrows indicate the positions and heading angles of the sample points. Fig. 6c shows the distribution of sample points for the robot's perceived position after 10 seconds using the latest international probabilistic localization method (landmark-based Monte Carlo Localization). Fig. 6d shows the distribution of sample points for the robot's perceived position after the same 10 seconds using the present invention. The figure clearly shows the effectiveness and efficiency of the present invention for autonomous robot localization. To emphasize the influence of unknown factors in the real environment, during the runs of Figs. 6c and 6d the position of the robot was changed manually at random and then returned to the original position, simulating the influence of collisions and other unknown factors on autonomous localization under real conditions.
Although specific embodiments of the invention and the accompanying drawings have been disclosed for the purpose of illustration, to help the reader understand and implement the invention, those skilled in the art will appreciate that various substitutions, changes, and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to the preferred embodiments and the content disclosed in the drawings; the scope of protection of the invention is defined by the claims.

Claims (4)

1. A method for the autonomous localization of an intelligent robot in a complex environment, comprising the following steps:
1) randomly generating a number of sample points in the robot's work region, each representing a possible current position of the robot together with the probability of that position;
2) if the robot's position changes, updating all sample points according to the motion model;
3) if a sensor obtains landmark information, updating the probability of each sample point according to this information and adding candidate points;
4) if no landmark information has been obtained for some period of time, querying the experience database and performing image matching;
if information whose matching degree meets the requirement is found in the experience database, updating the probabilities of the sample points according to this a-posteriori information and adding candidate points;
5) deleting the sample points with the lowest probability values, adding the same number of points chosen from the candidate points to the sample set, and deleting all remaining candidate points;
6) finding the region with the largest probability sum and computing the weighted mean of the positions of the sample points in that region, the resulting position being the robot's current best position;
7) repeating steps 2) to 6) to localize the robot in real time.
2. The method for the autonomous localization of an intelligent robot in a complex environment as claimed in claim 1, characterized in that, when the experience database is queried, image matching is performed using image gray values.
3. the method for a plurality of robot cooperated location under complex environment specifically may further comprise the steps:
1) in robot work region, generate several sampled points at random, characterize the current position of robot and the probability of this position;
2) if the robot occurrence positions moves, then upgrade all sampled points according to motion model;
3),, and increase candidate point then according to the probability of this each sampled point of information updating if there is sensor to obtain landmark information;
4) if in a period of time, all do not obtain landmark information, then call experience database, carry out images match, if when in experience database, finding the satisfactory information of matching degree, upgrade the probability of sampled point according to this posterior infromation, and increase candidate point;
5) if both do not obtained landmark information in robot discovery oneself a period of time, in experience database, do not find satisfactory coupling again, then carry out communication, upgrade the probability of sampled point according to the information that the other machines people transmits, and increase candidate point with other robot;
6) the minimum some spots of probable value in the deletion sampled point, and the point of choosing same number from candidate point joins in the sampled point, deletes all candidate points;
7) find out probability and maximum zone, calculate the weighted mean value of the position of the sampled point in this zone, resulting position is as the current optimum position of robot;
8) execution in step 2 repeatedly) to 7), can realize a plurality of robot cooperated real-time positioning.
4. The method for the cooperative localization of several robots in a complex environment as claimed in claim 3, characterized in that the robots communicate with one another by UDP broadcast or by the TCP/IP protocol.
CNB2006100569376A 2006-03-07 2006-03-07 Automatic positioning method for intelligent robot under complex environment Expired - Fee Related CN100461058C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100569376A CN100461058C (en) 2006-03-07 2006-03-07 Automatic positioning method for intelligent robot under complex environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100569376A CN100461058C (en) 2006-03-07 2006-03-07 Automatic positioning method for intelligent robot under complex environment

Publications (2)

Publication Number Publication Date
CN1811644A true CN1811644A (en) 2006-08-02
CN100461058C CN100461058C (en) 2009-02-11

Family

ID=36844606

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100569376A Expired - Fee Related CN100461058C (en) 2006-03-07 2006-03-07 Automatic positioning method for intelligent robot under complex environment

Country Status (1)

Country Link
CN (1) CN100461058C (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101033971B (en) * 2007-02-09 2011-02-16 中国科学院合肥物质科学研究院 Mobile robot map building system and map building method thereof
CN103197684A (en) * 2013-04-25 2013-07-10 清华大学 Method and system for cooperatively tracking target by unmanned aerial vehicle cluster
CN104115082A (en) * 2012-02-08 2014-10-22 罗伯特有限责任公司 Method for automatically triggering a self-positioning process
CN105652871A (en) * 2016-02-19 2016-06-08 深圳杉川科技有限公司 Repositioning method for mobile robot
CN106444780A (en) * 2016-11-10 2017-02-22 速感科技(北京)有限公司 Robot autonomous navigation method and system based on vision positioning algorithm
JP2017188066A (en) * 2016-04-01 2017-10-12 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Autonomous mobile body system
CN107617220A (en) * 2017-09-06 2018-01-23 滨州学院 A kind of intelligent soccer robot control system and control method
CN108107882A (en) * 2016-11-24 2018-06-01 中国科学技术大学 Service robot automatic Calibration and detecting system based on optical motion tracking
CN108594169A (en) * 2018-03-15 2018-09-28 中国人民解放军63892部队 A kind of multirobot distributed collaborative localization method being adapted to time-varying communication topology
CN108733037A (en) * 2017-04-17 2018-11-02 哈工大机器人集团有限公司 A kind of sweeping robot avoids cleaning method
CN111113430A (en) * 2019-12-31 2020-05-08 深圳市优必选科技股份有限公司 Robot and tail end control method and device thereof
CN114485619A (en) * 2022-01-26 2022-05-13 清华大学 Multi-robot positioning and navigation method and device based on air-ground cooperation

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101033971B (en) * 2007-02-09 2011-02-16 中国科学院合肥物质科学研究院 Mobile robot map building system and map building method thereof
CN104115082A (en) * 2012-02-08 2014-10-22 罗伯特有限责任公司 Method for automatically triggering a self-positioning process
CN104115082B (en) * 2012-02-08 2016-09-07 罗伯特有限责任公司 Automatically the method starting self-align process
CN103197684A (en) * 2013-04-25 2013-07-10 清华大学 Method and system for cooperatively tracking target by unmanned aerial vehicle cluster
CN103197684B (en) * 2013-04-25 2016-09-21 清华大学 Unmanned aerial vehicle group works in coordination with the method and system following the tracks of target
CN105652871A (en) * 2016-02-19 2016-06-08 深圳杉川科技有限公司 Repositioning method for mobile robot
JP2017188066A (en) * 2016-04-01 2017-10-12 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Autonomous mobile body system
CN106444780B (en) * 2016-11-10 2019-06-28 速感科技(北京)有限公司 A kind of autonomous navigation method and system of the robot of view-based access control model location algorithm
CN106444780A (en) * 2016-11-10 2017-02-22 速感科技(北京)有限公司 Robot autonomous navigation method and system based on vision positioning algorithm
CN108107882A (en) * 2016-11-24 2018-06-01 中国科学技术大学 Service robot automatic Calibration and detecting system based on optical motion tracking
CN108733037A (en) * 2017-04-17 2018-11-02 哈工大机器人集团有限公司 A kind of sweeping robot avoids cleaning method
CN108733037B (en) * 2017-04-17 2021-03-16 哈工大机器人集团股份有限公司 Avoidable sweeping method of sweeping robot
CN107617220A (en) * 2017-09-06 2018-01-23 滨州学院 A kind of intelligent soccer robot control system and control method
CN107617220B (en) * 2017-09-06 2023-12-01 滨州学院 Intelligent football robot control system and control method
CN108594169A (en) * 2018-03-15 2018-09-28 中国人民解放军63892部队 A kind of multirobot distributed collaborative localization method being adapted to time-varying communication topology
CN111113430A (en) * 2019-12-31 2020-05-08 深圳市优必选科技股份有限公司 Robot and tail end control method and device thereof
CN111113430B (en) * 2019-12-31 2021-07-27 深圳市优必选科技股份有限公司 Robot and tail end control method and device thereof
CN114485619A (en) * 2022-01-26 2022-05-13 清华大学 Multi-robot positioning and navigation method and device based on air-ground cooperation

Also Published As

Publication number Publication date
CN100461058C (en) 2009-02-11

Similar Documents

Publication Publication Date Title
CN1811644A (en) Automatic positioning method for intelligent robot under complex environment
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
CN110084272B (en) Cluster map creation method and repositioning method based on cluster map and position descriptor matching
Ulrich et al. Appearance-based place recognition for topological localization
Breitenmoser et al. A monocular vision-based system for 6D relative robot localization
CN108297115B (en) Autonomous repositioning method for robot
CN110458161B (en) Mobile robot doorplate positioning method combined with deep learning
CN116630394B (en) Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint
CN104503449A (en) Positioning method based on environment line features
CN104657968B (en) Automatic vehicle-mounted three-dimensional laser point cloud facade classification and outline extraction method
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN103959307A (en) Method of detecting and describing features from an intensity image
CN114413909A (en) Indoor mobile robot positioning method and system
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN107063229A (en) Mobile robot positioning system and method based on artificial landmark
CN1820282A (en) Image processing device
CN109528089A (en) A kind of walk on method, apparatus and the chip of stranded clean robot
CN102729250A (en) Chess opening chessman-placing system and method
CN111161334B (en) Semantic map construction method based on deep learning
CN1569558A (en) Moving robot's vision navigation method based on image representation feature
CN108839035A (en) A method of it takes and borrows books
CN108171753A (en) Stereoscopic vision localization method based on centroid feature point Yu neighborhood gray scale cross correlation
CN110433467A (en) Picking up table tennis ball robot operation method and equipment based on binocular vision and ant group algorithm
CN108030452A (en) Vision sweeping robot and the method for establishing scene map
CN112101160A (en) Binocular semantic SLAM method oriented to automatic driving scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090211

Termination date: 20120307