CN113664826A - Robot grabbing method and system in unknown environment - Google Patents
- Publication number
- CN113664826A CN113664826A CN202110844861.8A CN202110844861A CN113664826A CN 113664826 A CN113664826 A CN 113664826A CN 202110844861 A CN202110844861 A CN 202110844861A CN 113664826 A CN113664826 A CN 113664826A
- Authority
- CN
- China
- Prior art keywords
- grabbing
- robot
- image data
- target object
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J15/00—Gripping heads and other end effectors
- B25J15/08—Gripping heads and other end effectors having finger members
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/04—Viewing devices
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention provides a robot grabbing method and system in an unknown environment. The method specifically comprises the following steps: step one, acquiring image data of an actual operation environment; step two, extracting a target object from the image data; step three, positioning according to the marked position of the target object; step four, generating a corresponding grabbing control instruction according to the positioning result; and step five, triggering the grabbing behavior according to the grabbing control instruction. By compensating the positioning deviation of the intelligent robot, the invention increases positioning accuracy and improves grabbing efficiency. Meanwhile, the acquired image data are preprocessed, which improves the accuracy of image feature extraction. To address surface damage to the clamped object during grabbing, a flexible gripper is further provided; compared with a traditional rigid-contact gripper, it is more adaptive and can grasp objects of different sizes and shapes more stably and without damage during actual operation.
Description
Technical Field
The invention relates to the technical field of automated intelligent robots, and in particular to a robot grabbing method and system in an unknown environment.
Background
With the development of computer technology and the rapid advance of automation, intelligent industrial equipment has been widely introduced into daily production. A typical reciprocating grabbing robot performs fixed-point grabbing in its working environment, carries the target object to a destination, and then returns to the picking environment for the next round of grabbing.
In the prior art, inaccurate positioning frequently occurs because the robot operates in an unfamiliar environment or must return to its working position under industrial conditions, so the grabbing accuracy of the intelligent robot is insufficient.
Disclosure of Invention
The purpose of the invention is as follows: a robot grabbing method and system in an unknown environment are provided to solve the problems in the prior art, increasing positioning accuracy and improving grabbing efficiency by compensating the positioning deviation of the intelligent robot.
The technical scheme is as follows: in a first aspect, a robot grabbing method in an unknown environment is provided, which includes the following steps:
step one, acquiring image data of an actual operation environment;
step two, extracting a target object from the image data;
step three, positioning according to the marked position of the target object;
step four, generating a corresponding grabbing control instruction according to the positioning result;
and step five, triggering the grabbing behavior according to the grabbing control instruction.
In some implementations of the first aspect, in step three, when locating the target position, the implementation flow further comprises: when the robot performs hand-eye calibration, the robot base is set at X1, where the laser data observed for the shapes of the grabbing work surface and table legs is D1, which is converted into a point cloud P1; when the robot stops at X2 for grabbing, the laser data observed for the shapes of the work surface and table legs is D2, which is converted into a point cloud P2. The laser sensor and the robot base form one body with a fixed positional relation, so the pose deviation of the laser sensor between calibration and grabbing is the pose deviation of the robot base, and an iterative closest point algorithm based on the laser data is used to estimate the deviation between X1 and X2. The laser point cloud P1 from hand-eye calibration serves as the reference point cloud and the laser point cloud P2 at grabbing time as the source point cloud; the two are optimally matched based on the least-squares method, with each iteration rotating and translating P2 to match P1 so that the two point sets overlap. The rotation and translation are optimized at each iteration, optimal registration is achieved through multiple iterations, and the final position is corrected using the computed deviation data.
In some implementations of the first aspect, after the actual working environment image data is acquired in step one, image preprocessing is performed on the environment image data. The image preprocessing further comprises: first, acquiring, through the information acquisition device of the intelligent robot, image data whose pixel values are the distances from the image sensor to scene points; second, computing the distance between the object and the information acquisition device from the depth values; third, obtaining coordinates in the information acquisition device's frame from the device's intrinsic matrix and the pixel coordinates, thereby obtaining the three-dimensional information of the target object; and finally, acquiring the three-dimensional data of the target object.
A filtering operation is performed on the acquired environment image data to improve the accuracy of data recovery.
In some implementations of the first aspect, during actual on-site operation, before performing the grabbing behavior the intelligent robot first plans and localizes its transportation path according to landmarks; upon reaching the target position, it acquires an actual image of the scene containing the current target object through the information acquisition device, and then determines the position accurately through the calibration relation and positioning-deviation compensation; once the position of the target object is determined, a corresponding grabbing instruction is generated, and the mechanical arm executes the grabbing behavior after receiving the instruction, completing the grasp.
In some implementations of the first aspect, before the grabbing in step five, the method further comprises judging the type of the object to be grabbed. Further, first, a target object type recognition network model is constructed; second, a training set is constructed; third, the model is trained on the training set; then, the trained target object type recognition network model receives and analyzes the image data acquired during actual operation; finally, the model outputs the final recognition result, and a grabbing instruction is generated according to the obtained target judgment result to complete the grabbing behavior.
During training of the target object type recognition network model, the mean square error is used as the loss function, and a dropout layer is added after the fully connected layer of the model to reduce complex co-adaptations among neurons.
In some implementations of the first aspect, in step five the grabbing action on the object is performed by a gripper. The top of the gripper is provided with a mounting hole for fixed connection to a driver that drives the gripper's movement; the internal support is a sheet support, and the parts that grip the object are two opposing clamping members which, when gripping, undergo corresponding flexible deformation to adapt to the shape of the object, so that gripping is achieved. When the gripper grasps an object, the clamping surfaces contact the object and deform to conform to its outer contour, thereby achieving the grasp.
In a second aspect, a robot gripping system in an unknown environment is provided, which specifically includes:
a vision unit configured to control the information acquisition device to acquire a target image;
the mechanical arm control unit is used for controlling the servo motors that drive the joints of the intelligent robot to rotate;
and the computer operation control unit is used for receiving the image data acquired by the vision unit, processing and analyzing the image data, and deriving corresponding control measures.
In some realizations of the second aspect, in the actual operation process of the intelligent robot, the vision unit acquires the surrounding operation environment through a camera arranged at the head of the intelligent robot, and transmits the acquired image data to the computer operation control unit; the computer operation control unit receives the image data transmitted by the vision unit, processes and analyzes the image data, and generates a corresponding control instruction for controlling the movement of the intelligent robot and controlling the mechanical arm to perform corresponding operation; the mechanical arm control unit receives a control instruction generated by the computer operation control unit, and regulates and controls the servo motor to drive each joint of the intelligent robot mechanical arm to rotate according to the control instruction, so that the target object is grabbed.
Wherein the computer operation control unit further comprises: a processor and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement any one of the robot gripping methods.
In a third aspect, a computer-readable storage medium having computer program instructions stored thereon is presented. Wherein the computer program instructions, when executed by a processor, implement any one of the robot gripping methods.
Advantageous effects: the invention provides a robot grabbing method and system in an unknown environment that increase positioning accuracy and improve grabbing efficiency by compensating the positioning deviation of the intelligent robot. Meanwhile, the acquired image data are preprocessed, which improves the accuracy of image feature extraction. To address surface damage to the clamped object during grabbing, a flexible gripper is further provided; compared with a traditional rigid-contact gripper, it is more adaptive and can grasp objects of different sizes and shapes more stably and without damage during actual operation.
Drawings
FIG. 1 is a flow chart of data processing according to an embodiment of the present invention.
Fig. 2 is a simplified schematic diagram of a gripper according to an embodiment of the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
Example one
A robot grabbing method in an unknown environment is proposed. As shown in fig. 1, the method specifically comprises the following steps:
step one, acquiring image data of an actual operation environment;
step two, extracting a target object from the image data;
step three, positioning according to the marked position of the target object;
step four, generating a corresponding grabbing control instruction according to the positioning result;
and step five, triggering the grabbing behavior according to the grabbing control instruction.
Specifically, in a further embodiment, the intelligent robot plans and localizes its transportation path according to landmarks during operation. Upon reaching the target position, it acquires an actual image of the scene containing the current target object through the information acquisition device, and then determines the position accurately through the calibration relation and positioning-deviation compensation. Once the position of the target object is determined, a corresponding grabbing instruction is generated, and the mechanical arm executes the grabbing behavior after receiving it, completing the grasp.
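Purely as an illustration, the five-step flow of this embodiment can be sketched as the following Python skeleton; every function and parameter name in it (acquire_image, extract_target, camera, arm, and so on) is a hypothetical placeholder rather than an interface disclosed by the invention.

```python
# Illustrative sketch of the five-step grasping cycle of Example one.
# All names below are hypothetical placeholders.

def acquire_image(camera):
    return camera.capture()                # step 1: image of the work environment

def extract_target(image):
    return {"image": image}                # step 2: extract the target object

def locate_target(target):
    return (0.0, 0.0, 0.0)                 # step 3: position from the marked landmark

def plan_grasp(pose):
    return {"pose": pose, "width": 0.05}   # step 4: grasp control instruction

def execute_grasp(arm, command):
    arm.move_to(command["pose"])           # step 5: trigger the grasping behavior
    arm.close_gripper(command["width"])

def grasp_cycle(camera, arm):
    image = acquire_image(camera)
    command = plan_grasp(locate_target(extract_target(image)))
    execute_grasp(arm, command)
```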
Example two
In a further embodiment based on the first embodiment, to solve the problem of inaccurate positioning leading to insufficient grasping accuracy of the intelligent robot, this embodiment provides a positioning-deviation compensation method used during positioning. Specifically, when the robot performs hand-eye calibration, the robot base is set at X1, where the laser data observed for the shapes of the grabbing work surface and table legs is D1, converted into a point cloud P1; when the robot stops at X2 for grabbing, the laser data observed for the shapes of the work surface and table legs is D2, converted into a point cloud P2. The laser sensor and the robot base form one body with a fixed positional relation, so the pose deviation of the laser sensor between calibration and grabbing equals the pose deviation of the robot base, and an iterative closest point algorithm based on the laser data is used to estimate the deviation between X1 and X2. The principle is as follows: the hand-eye calibration laser point cloud P1 serves as the reference point cloud and the grabbing-time laser point cloud P2 as the source point cloud; the two are optimally matched based on the least-squares method, with each iteration rotating and translating P2 to match P1 so that the two point sets overlap. The rotation and translation are optimized at each iteration, and optimal registration through multiple iterations yields the rotation and translation matrix [R1, T1] that optimally matches the two point clouds, which satisfies the following expression:
P1 = R1 × P2 + T1
The rotation and translation matrix [R1, T1] obtained by iteration represents the deviation between the base coordinate system during grabbing and the base coordinate system during calibration.
According to the pose deviation [R1, T1] between grabbing and calibration, the hand-eye relation [R0, T0] is compensated to obtain the hand-eye relation during actual grabbing, expressed as:
[R0′, T0′] = [R0, T0] × [R1, T1]
where R0 denotes the rotation matrix from the intelligent robot base coordinate system to the vision sensor coordinate system, and T0 denotes the corresponding translation matrix.
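The deviation estimation and compensation described above amount to a standard point-to-point ICP followed by a transform composition. The following NumPy/SciPy sketch is an assumed illustration of that computation, not the patent's own code; the function names and the use of a k-d tree for nearest-neighbour search are illustrative choices.

```python
# A minimal sketch, assuming a basic point-to-point ICP: align the grabbing-
# time laser cloud P2 to the calibration-time cloud P1 to estimate [R1, T1],
# then compose with the stored hand-eye relation [R0, T0].
import numpy as np
from scipy.spatial import cKDTree

def icp(P1, P2, iters=30):
    """Align source P2 (n,3) to reference P1 (m,3); return R1 (3,3), T1 (3,)."""
    R1, T1 = np.eye(3), np.zeros(3)
    src = P2.copy()
    tree = cKDTree(P1)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        dst = P1[idx]
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)       # least-squares (Kabsch) alignment
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        src = src @ R.T + t                 # apply the incremental transform
        R1, T1 = R @ R1, R @ T1 + t         # accumulate [R1, T1]
    return R1, T1

def compensate(R0, T0, R1, T1):
    """[R0', T0'] = [R0, T0] x [R1, T1] as a homogeneous-transform product."""
    return R0 @ R1, R0 @ T1 + T0
```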
Example three
In a further embodiment based on the first embodiment, the gripper, an important component of the intelligent robot in the actual grabbing process, grasps the target object according to the control instruction received by the robot. In the prior art, clamping devices grip by pressing against the clamped object through abutting contact to increase friction, and the contact between the two is mostly rigid. Such contact, which relies on increased pressure and friction, may damage the surface of the object; therefore, to reduce damage to the grasped object, this embodiment further provides a flexible gripper. Compared with the prior art, the gripper of this embodiment adds flexibility to the traditional gripper, so that it can rapidly adapt to the shape of the target object while reducing surface damage.
Specifically, fig. 2 is a simplified schematic diagram of the gripper of this embodiment. The top of the gripper is provided with a mounting hole for fixed connection to a driver that drives the gripper's movement; the internal support is a sheet support, and the parts that grip the object are two opposing clamping members which, when gripping, undergo corresponding flexible deformation to adapt to the shape of the object. When the gripper grasps the target object, the clamping surfaces contact it; because the internal support has little rigidity, the clamping surfaces deform to conform to the object's outer contour, thereby achieving the grasp.
Compared with a traditional rigid-contact gripper, this gripper is more adaptive and can grasp objects of different sizes and shapes more stably and without damage during actual operation.
Example four
In a further embodiment based on the first embodiment, an image data preprocessing method is further provided for the recognition processing of the image data. Specifically, first, image data whose pixel values are the distances from the image sensor to scene points are acquired through the information acquisition device of the intelligent robot; second, the distance between the object and the information acquisition device is computed from the depth values; third, coordinates in the information acquisition device's frame are obtained from the device's intrinsic matrix and the pixel coordinates, thereby yielding the three-dimensional information of the target object; finally, the three-dimensional data of the target object are acquired.
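As an illustration of this back-projection step, the sketch below (assumed, using the standard pinhole model with intrinsics fx, fy, cx, cy; none of these names appear in the patent) converts a depth image into camera-frame three-dimensional points.

```python
# A sketch, assuming a pinhole camera with intrinsic matrix
# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (h, w) array of distances; returns (h*w, 3) camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                   # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```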
In a further embodiment, during actual industrial operation, problems such as the surface material of the target object, occlusion, and contour shadows often affect the quality of the image data acquired by the intelligent robot's information acquisition device, so the measured depth values deviate from the actual values. To address these problems encountered in actual operation, this embodiment improves the accuracy of data recovery by performing a filtering operation on the acquired image data.
Specifically, for a pixel m(x, y) in the acquired image data, the corresponding depth value is data(m); with A denoting the neighborhood set of the pixel and n(a, b) a point in A with depth value data(n), if the depth value of pixel m is missing, it is estimated as:

data(m) = (1 / Hp) · Σn∈A wdis(x, y) · wc(x, y) · data(n)

where wdis(x, y) denotes the distance weight; wc(x, y) the image gray weight; Hp the normalization coefficient; σdis the standard deviation of the distance (spatial) weight; and σc the standard deviation of the gray weight of the image data. Preferably, in actual operation, σdis takes values in the range 4 to 8 mm; the smaller σc is, the clearer the edges of the image data and the closer the obtained depth values are to reality. Since the value of σc is linearly related to the standard deviation σn of the image noise, in the preferred embodiment the effect is best when σc lies in the range 2σn to 3σn, where the standard deviation σn of the image noise is given by:
σn = sqrt(π/2) · (1 / (6(w − 2)(h − 2))) · Σp |(g ∗ N)(p)|

where w denotes the number of rows of the image data; h denotes the number of columns; N denotes a Laplacian template; and g(p) denotes the acquired image data.
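A minimal sketch of this recovery filter follows. It assumes a registered gray image alongside the depth image and Gaussian forms for the distance and gray weights; the function names, the neighbourhood radius parameter, and the use of SciPy's convolve2d for the noise estimate are illustrative assumptions rather than details fixed by the embodiment.

```python
# A sketch, assuming Gaussian distance and gray weights: missing depth at
# pixel m is a joint bilateral average over its neighbourhood A.
import numpy as np
from scipy.signal import convolve2d

def fill_depth(depth, gray, sigma_dis=6.0, sigma_c=10.0, radius=3):
    """depth: (h, w) with 0 marking missing values; gray: (h, w) intensity."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    ys, xs = np.nonzero(depth == 0)         # pixels whose depth is missing
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = depth[y0:y1, x0:x1].astype(float)
        gpatch = gray[y0:y1, x0:x1].astype(float)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_dis = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_dis ** 2))
        w_c = np.exp(-((gpatch - gray[y, x]) ** 2) / (2 * sigma_c ** 2))
        weight = w_dis * w_c * (patch > 0)  # only valid neighbours contribute
        hp = weight.sum()                   # normalization coefficient Hp
        if hp > 0:
            out[y, x] = (weight * patch).sum() / hp
    return out

def noise_std(gray):
    """Laplacian-template noise estimate (an assumed reading of the formula above)."""
    N = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]])
    rows, cols = gray.shape
    conv = convolve2d(gray.astype(float), N, mode="valid")
    return np.sqrt(np.pi / 2) / (6 * (rows - 2) * (cols - 2)) * np.abs(conv).sum()
```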
Example five
In a further embodiment based on the first embodiment, before the grabbing in step five, the method further comprises judging the type of the object to be grabbed. Further, first, a target object type recognition network model is constructed; second, a training set is constructed; third, the model is trained on the training set; then, the trained target object type recognition network model receives and analyzes the image data acquired during actual operation; finally, the model outputs the final recognition result, and a grabbing instruction is generated according to the obtained target judgment result to complete the grabbing behavior.
the mean square error is adopted as a loss function in the training process of the target object type recognition network model, and meanwhile, a dropout layer is added after a full connection layer of the target object type recognition network model is formed, and due to the fact that updating of different weights does not depend on the combined action of hidden nodes with fixed relations, the situation that certain characteristics are only effective under other specific characteristics is prevented, weights are reduced, complex relations among neurons are reduced, and meanwhile robustness of the network for losing specific neuron connections is improved.
Example six
On the basis of the first embodiment, a robot grasping system in an unknown environment is further provided to implement the method of the first embodiment. The system more specifically comprises:
a vision unit configured to control the information acquisition device to acquire a target image;
the mechanical arm control unit is used for controlling the servo motors that drive the joints of the intelligent robot to rotate;
and the computer operation control unit is used for receiving the image data acquired by the vision unit, processing and analyzing the image data, and deriving corresponding control measures.
In a further embodiment, the computer arithmetic control unit further comprises: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to realize the robot grabbing method.
Specifically, in the actual operation process of the intelligent robot, the vision unit acquires the surrounding operation environment through a camera arranged at the head of the intelligent robot and transmits the acquired image data to the computer operation control unit; the computer operation control unit receives the image data transmitted by the vision unit, processes and analyzes the image data, and generates a corresponding control instruction for controlling the movement of the intelligent robot and controlling the mechanical arm to perform corresponding operation; the mechanical arm control unit receives a control instruction generated by the computer operation control unit, and regulates and controls the servo motor to drive each joint of the intelligent robot mechanical arm to rotate according to the control instruction, so that the target object is grabbed.
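The unit structure and data flow just described can be summarized in the following schematic sketch; the class and method names are hypothetical, and the six-joint placeholder command stands in for the real analysis pipeline.

```python
# A schematic sketch (hypothetical class layout) of the three units of
# Example six: vision -> computer operation control -> arm control.
class VisionUnit:
    def __init__(self, camera):
        self.camera = camera
    def capture(self):
        return self.camera.read()           # acquire the surrounding work scene

class ArmControlUnit:
    def execute(self, command):
        for joint, angle in command:        # drive each joint via its servo motor
            print(f"servo: joint {joint} -> {angle} deg")

class ComputerControlUnit:
    def __init__(self, vision, arm):
        self.vision, self.arm = vision, arm
    def run_cycle(self):
        image = self.vision.capture()       # receive image data from the vision unit
        command = self.analyze(image)       # process/analyze, produce an instruction
        self.arm.execute(command)           # hand the instruction to the arm unit
    def analyze(self, image):
        return [(i, 0.0) for i in range(6)]  # placeholder: a 6-joint command
```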
Example seven
On the basis of the first embodiment, a computer-readable storage medium is further provided, on which computer program instructions are stored; the computer program instructions, when executed by the processor, implement a robot grasping method.
As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A robot grabbing method in an unknown environment is characterized by comprising the following steps:
step one, acquiring image data of an actual operation environment;
step two, extracting a target object from the image data;
step three, positioning according to the marked position of the target object;
step four, generating a corresponding grabbing control instruction according to the positioning result;
and step five, triggering the grabbing behavior according to the grabbing control instruction.
2. The method according to claim 1, wherein, when locating the target position, step three further comprises:
when the robot performs hand-eye calibration, the robot base is set at X1, where the laser data observed for the shapes of the grabbing work surface and table legs is D1, which is converted into a point cloud P1; when the robot stops at X2 for grabbing, the laser data observed for the shapes of the work surface and table legs is D2, which is converted into a point cloud P2; the laser sensor and the robot base form one body with a fixed positional relation, so the pose deviation of the laser sensor between calibration and grabbing is the pose deviation of the robot base, and an iterative closest point algorithm based on the laser data is used to estimate the deviation between X1 and X2; the laser point cloud P1 from hand-eye calibration serves as the reference point cloud and the laser point cloud P2 at grabbing time as the source point cloud; the two are optimally matched based on the least-squares method, with each iteration rotating and translating P2 to match P1 so that the two point sets overlap; the rotation and translation are optimized at each iteration, optimal registration is achieved through multiple iterations, and the final position is corrected using the computed deviation data.
3. The robot grabbing method in an unknown environment according to claim 1, characterized in that after actual working environment image data is acquired in step one, image preprocessing is performed on the environment image data; the image preprocessing further comprises:
firstly, acquiring, through the information acquisition device of the intelligent robot, image data whose pixel values are the distances from the image sensor to scene points; secondly, computing the distance between the object and the information acquisition device from the depth values; thirdly, obtaining coordinates in the information acquisition device's frame from the device's intrinsic matrix and the pixel coordinates, thereby obtaining the three-dimensional information of the target object; and finally, acquiring the three-dimensional data of the target object.
4. The method of robotic grasping in an unknown environment according to claim 3,
wherein a filtering operation is performed on the acquired environment image data to improve the accuracy of data recovery.
5. The robot grabbing method in an unknown environment according to claim 1, wherein, during actual on-site operation, before performing the grabbing behavior the intelligent robot first plans and localizes its transportation path according to landmarks; upon reaching the target position, an actual image of the scene containing the current target object is acquired through the information acquisition device, and the position is then determined accurately through the calibration relation and positioning-deviation compensation; once the position of the target object is determined, a corresponding grabbing instruction is generated, and the mechanical arm executes the grabbing behavior after receiving the instruction, completing the grasp.
6. The robot grabbing method according to claim 1, further comprising, before the grabbing in step five, judging the type of the object to be grabbed; further, first, a target object type recognition network model is constructed; second, a training set is constructed; third, the model is trained on the training set; then, the trained target object type recognition network model receives and analyzes the image data acquired during actual operation; finally, the model outputs the final recognition result, and a grabbing instruction is generated according to the obtained target judgment result to complete the grabbing behavior;
during training of the target object type recognition network model, the mean square error is used as the loss function, and a dropout layer is added after the fully connected layer of the model to reduce complex co-adaptations among neurons.
7. The method according to claim 1, wherein in step five the grabbing action on the object is performed by a gripper; the top of the gripper is provided with a mounting hole for fixed connection to a driver that drives the gripper's movement; the internal support is a sheet support, and the parts that grip the object are two opposing clamping members which, when gripping, undergo corresponding flexible deformation to adapt to the shape of the object, so that gripping is achieved; when the gripper grasps an object, the clamping surfaces contact the object and deform to conform to its outer contour, thereby achieving the grasp.
8. A robot grabbing system in an unknown environment, for implementing the robot grabbing method of any one of claims 1 to 7, characterized by specifically comprising:
a vision unit configured to control the information acquisition device to acquire a target image;
the mechanical arm control unit is used for controlling the servo motors that drive the joints of the intelligent robot to rotate;
the computer operation control unit is used for receiving the image data acquired by the vision unit, processing and analyzing the image data, and deriving corresponding control measures;
in the actual operation process of the intelligent robot, the vision unit acquires the surrounding operation environment through a camera arranged at the head of the intelligent robot and transmits the acquired image data to the computer operation control unit; the computer operation control unit receives the image data transmitted by the vision unit, processes and analyzes the image data, and generates a corresponding control instruction for controlling the movement of the intelligent robot and controlling the mechanical arm to perform corresponding operation; the mechanical arm control unit receives a control instruction generated by the computer operation control unit, and regulates and controls the servo motors to drive each joint of the intelligent robot mechanical arm to rotate according to the control instruction, so that the target object is grabbed.
9. The system of claim 8,
the computer arithmetic control unit further includes: a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the robot gripping method according to any of claims 1-7.
10. A computer-readable storage medium, having computer program instructions stored thereon, which, when executed by a processor, implement a robotic grasping method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110844861.8A CN113664826A (en) | 2021-07-26 | 2021-07-26 | Robot grabbing method and system in unknown environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110844861.8A CN113664826A (en) | 2021-07-26 | 2021-07-26 | Robot grabbing method and system in unknown environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113664826A true CN113664826A (en) | 2021-11-19 |
Family
ID=78540141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110844861.8A Withdrawn CN113664826A (en) | 2021-07-26 | 2021-07-26 | Robot grabbing method and system in unknown environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113664826A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117549338A (en) * | 2024-01-09 | 2024-02-13 | 北京李尔现代坦迪斯汽车系统有限公司 | Grabbing robot for automobile cushion production workshop |
CN117549338B (en) * | 2024-01-09 | 2024-03-29 | 北京李尔现代坦迪斯汽车系统有限公司 | Grabbing robot for automobile cushion production workshop |
CN117549317A (en) * | 2024-01-12 | 2024-02-13 | 深圳威洛博机器人有限公司 | Robot grabbing and positioning method and system |
CN117549317B (en) * | 2024-01-12 | 2024-04-02 | 深圳威洛博机器人有限公司 | Robot grabbing and positioning method and system |
CN118397097A (en) * | 2024-07-01 | 2024-07-26 | 苏州华兴源创科技股份有限公司 | Control method, device, computer equipment, readable storage medium and program product for object grabbing |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20211119