Manipulability-Aware Task-Oriented Grasp Planning and Motion Control with Application in a Seven-DoF Redundant Dual-Arm Robot
Figure 1. Architecture of the proposed M-aware grasp planning and motion control system.
Figure 2. The JL-score plot of the joint limits evaluation function proposed in (a) previous studies [22,23] and (b) this work. The evaluation function in (a) has an unbounded value range, making it difficult to quantitatively assess the degree of joint limit proximity. In contrast, the function in (b) is bounded between 0 and 1, simplifying this quantitative assessment.
Figure 3. System configuration of the lab-made 7-DoF redundant dual-arm robot used in this study. Each arm has an additional prismatic joint that adjusts its z-axis position to extend the workspace along the z0-axis.
Figure 4. The S-score plot of the proposed singularity evaluation functions (15) and (16). These plots illustrate that the singularity evaluation of the robot arm is treated as a joint limit evaluation of its wrist singularity factors. Consequently, the S-score remains within the bounded range of 0 to 1.
Figure 5. System architecture of the proposed M-aware grasp planning method, which determines maximum-M-score grasp poses for the robot arm to perform object grasping tasks.
Figure 6. The result of each step in the TOGD method: (a) the input RGB image, (b) the result of Mask R-CNN, (c) the affordances of all segmented objects (the darker regions), and (d) the feasible grasp rectangles.
Figure 7. The state definition of the grasp rectangle used in this work.
Figure 8. Definition of position and orientation errors used in grasp matching: (a) position error and (b) orientation error.
Figure 9. Illustration of (a) the definition of the target pose vector used in this study and (b) the proposed grasp pose transformation, which transforms a target pose vector from the camera frame to the robot base frame.
Figure 10. The hardware equipment and environment settings used in the experiments of this study.
Figure 11. Three target objects used in the experiment and their default affordance masks: (a) a PET bottle with cap and body masks, (b) a cup with mouth and body masks, and (c) a scraper with handle and head masks.
Figure 12. Comparison between the proposed TOGD method and the existing Grasp R-CNN method: (a) input image showing two target objects (bottle and cup) and three non-target objects (vacuum gripper, ruler, and screwdriver); (b) task-agnostic grasp detection results of the Grasp R-CNN method, presenting multiple possible grasp rectangles for target and non-target objects; (c) object detection results obtained by the Mask R-CNN method; and (d) task-oriented grasp detection results of the proposed TOGD method, showing multiple grasp rectangles only for the two target objects.
Figure 13. A virtual 7-DoF dual-arm robot model used to test the performance of all compared methods in the Gazebo simulator.
Figure 14. Comparison results of the manipulability evolution of each control method in (a) Test 1 and (b) Test 2.
Figure 15. Comparison results of the angle evolution of all joints in Test 3 (a) without and (b) with the proposed JOSI method.
Figure 16. Experimental setup for verifying the performance of the proposed system: the planar platform in front of the robot is divided into five regions (A to E) for object grasping tests.
Figure 17. Demonstration of the proposed system controlling the dual-arm robot to perform a task-oriented pouring action: (a) system initialization; (b,c) the robot uses its left arm to grasp the bottle; (d,e) the robot uses its right arm to grasp the cup; (f) the robot prepares to pour; (g–i) the robot performs the pouring action.
Abstract
1. Introduction
- To quantitatively evaluate the manipulability of the robot arm, a new manipulability evaluation function is proposed to estimate the M-score of a given robot arm configuration with respect to a desired grasping pose, ensuring that the object grasping pose of the robot continuously and simultaneously moves away from both its joint limits and singularity points;
- Based on the proposed manipulability evaluation function, we propose a novel M-aware grasp planning method, which consists of two key components: deep-learning-based task-oriented grasp detection (TOGD) and M-aware grasp pose estimation. Together, these components enable task-oriented grasping while optimizing the manipulability of the robot arm;
- For the motion control of redundant robot arms, we introduce a novel M-aware motion control method, which consists of two algorithms: manipulability incremental redundant angle selection (MIRAS) and joint over-speed inhibition (JOSI). These algorithms aim to improve the M-score and suppress the joint over-speed of the redundant robot arm, respectively, thereby improving its manipulability. The proposed JOSI method is also applicable to non-redundant robots;
- By integrating the proposed M-aware grasp planning and M-aware motion control methods, we implement a novel M-aware grasp planning and motion control system, which allows the redundant multi-arm robot to perform task-oriented grasping and grasping control with a high level of manipulability and safety.
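As a loose illustration of the bounded scoring idea in the contributions above, the sketch below composes a hypothetical per-joint joint-limit score (JL-score) and a singularity score (S-score), each normalized to [0, 1], into a single M-score. The specific functional forms here are assumptions for illustration only; they are not the paper's Formulas.

```python
def jl_score(theta, theta_min, theta_max):
    """Bounded joint-limit score in [0, 1]: 1 at mid-range, 0 at either limit.
    The normalized parabolic form is an assumption, not the paper's formula."""
    t = (theta - theta_min) / (theta_max - theta_min)
    return 4.0 * t * (1.0 - t)

def m_score(jl_scores, s_score):
    """Combine per-joint JL-scores with the S-score.
    A product keeps the result bounded in [0, 1] and drives it to 0
    whenever any joint approaches a limit or the arm nears a singularity."""
    score = s_score
    for s in jl_scores:
        score *= s
    return score
```

Because every factor lies in [0, 1], a configuration only scores highly when all joints are simultaneously far from their limits and the arm is far from singularity, which matches the behavior the contributions describe.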
2. Related Work
3. Overview of the Proposed System
4. The Proposed Manipulability Evaluation Function
4.1. Joint Limits Evaluation Function
4.2. Singularity Evaluation Function
4.3. Manipulability Evaluation Function
5. The Proposed M-Aware Grasp Planning Method
5.1. TOGD Method
5.1.1. Grasp Rectangle Detection
5.1.2. Grasp Matching
5.2. M-Aware Grasp Pose Estimation Method
5.2.1. Grasp Pose Generation
5.2.2. Grasp Pose Transformation
5.2.3. Grasp Quality Evaluation
6. The Proposed M-Aware Motion Control Method
6.1. MIRAS Method
Algorithm 1. Pseudocode of the proposed MIRAS method

01: Input: Current joint angles of the j-th robot arm,
02:        current redundant angle of the j-th robot arm,
03:        position Pnext and orientation Onext at the next time step (n + 1)
04: Output: Redundant angle of the j-th robot arm at the next time step
05: Initialize: Maximum variation value of the redundant angle ϕmax,
06:        hardware constraints matrix of the j-th robot arm
07: Begin: Evaluate the current manipulability by Formula (21)
08:     …
09:     …
10:     for each … do
11:         …
12:         Evaluate the target manipulability by Formula (21)
13:         if … then
14:             …
15:         end if
16:     end for
17:     Calculate the adaptive variation of the redundant angle
18: Return: …
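A minimal Python sketch of the MIRAS-style search loop: sweep candidate redundant angles within ±ϕmax of the current one and keep the candidate with the highest manipulability. The evaluation callback `m_score` (standing in for Formula (21)) and the candidate count are assumptions for illustration, not the authors' implementation.

```python
def miras_step(m_score, phi_curr, phi_max, n_candidates=21):
    """Pick the candidate redundant angle within ±phi_max of phi_curr
    that maximizes the caller-supplied manipulability score."""
    best_phi = phi_curr
    best_m = m_score(phi_curr)  # current manipulability as the baseline
    for k in range(n_candidates):
        # Sweep candidates evenly across [phi_curr - phi_max, phi_curr + phi_max].
        phi = phi_curr - phi_max + 2.0 * phi_max * k / (n_candidates - 1)
        m = m_score(phi)
        if m > best_m:
            best_phi, best_m = phi, m
    return best_phi
```

A real implementation would additionally reject candidates that violate the hardware constraints matrix mentioned in the pseudocode; that check is omitted here for brevity.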
6.2. JOSI Method
Algorithm 2. Pseudocode of the proposed JOSI method

01: Input: Current joint angles …, target joint angles …
02: Output: Joint angles at the next time step
03: Initialize: Set a maximum value of the joint angle variation Δθmax
04: Begin:
05:     if (… > …) then
06:         …
07:     else
08:         …
09:     end if
10: Return: …
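A plain-Python sketch of the JOSI clamping idea: detect over-speed by comparing the largest requested per-joint step against Δθmax, and if it is exceeded, scale the whole step down so the fastest joint moves exactly Δθmax. The uniform-scaling choice (which preserves the direction of motion in joint space) is an assumption for illustration.

```python
def josi_step(theta_curr, theta_tar, dtheta_max):
    """Limit the per-control-cycle joint motion to dtheta_max."""
    deltas = [t - c for t, c in zip(theta_tar, theta_curr)]
    peak = max(abs(d) for d in deltas)  # largest requested joint step
    if peak > dtheta_max:
        # Over-speed detected: scale all joints uniformly so the fastest
        # joint moves exactly dtheta_max this cycle.
        scale = dtheta_max / peak
        return [c + d * scale for c, d in zip(theta_curr, deltas)]
    return list(theta_tar)  # within limits: go straight to the target
```

Because the check uses only current and target joint angles, the same clamp applies unchanged to non-redundant arms, consistent with the paper's remark that JOSI is not limited to redundant robots.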
6.2.1. Detect Robot Over-Speed
6.2.2. Suppress Robot Joint Speed
7. Experimental Results
7.1. Experiments of the Proposed TOGD Method
7.2. Experiments for the Proposed M-Aware Motion Control Method
7.2.1. Joint Limit and Singularity Avoidance Tests
7.2.2. Joint Over-Speed Suppression Test
7.3. Experiments of the Proposed System
8. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
6-D | Six-Dimensional |
CNN | Convolutional Neural Network |
DCM | Direction Cosine Matrix |
DoF | Degree-of-Freedom |
FCN | Fully Convolutional Network |
JL-score | Joint Limits score |
JOSI | Joint Over-Speed Inhibition |
M-aware | Manipulability-Aware |
M-score | Manipulability score |
MIRAS | Manipulability Incremental Redundant Angle Selection |
OOI | Object of Interest |
R-CNN | Region-based Convolutional Neural Network |
S-score | Singularity score |
TOGD | Task-Oriented Grasp Detection |
References
- Billard, A.; Kragic, D. Trends and challenges in robot manipulation. Science 2019, 364, eaat8414. [Google Scholar] [CrossRef] [PubMed]
- Bohg, J.; Morales, A.; Asfour, T.; Kragic, D. Data-driven grasp synthesis—A survey. IEEE Trans. Robot. 2013, 30, 289–309. [Google Scholar] [CrossRef]
- Mahler, J.; Matl, M.; Liu, X.; Li, A.; Gealy, D.; Goldberg, K. Dex-net 3.0: Computing robust vacuum suction grasp targets in point clouds using a new analytic model and deep learning. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018; pp. 5620–5627. [Google Scholar]
- Mahler, J.; Matl, M.; Satish, V.; Danielczuk, M.; DeRose, B.; McKinley, S.; Goldberg, K. Learning ambidextrous robot grasping policies. Sci. Robot. 2019, 4, eaau4984. [Google Scholar] [CrossRef] [PubMed]
- Fang, K.; Zhu, Y.; Garg, A.; Kurenkov, A.; Mehta, V.; Li, F.-F.; Savarese, S. Learning task-oriented grasping for tool manipulation from simulated self-supervision. Int. J. Robot. Res. 2020, 39, 202–216. [Google Scholar] [CrossRef]
- Lüddecke, T.; Kulvicius, T.; Wörgötter, F. Context-based affordance segmentation from 2D images for robot actions. Robot. Auton. Syst. 2019, 119, 92–107. [Google Scholar] [CrossRef]
- Kumra, S.; Kanan, C. Robotic grasp detection using deep convolutional neural networks. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; pp. 769–776. [Google Scholar]
- Guo, D.; Sun, F.; Fang, B.; Yang, C.; Xi, N. Robotic grasping using visual and tactile sensing. Inf. Sci. 2017, 417, 274–286. [Google Scholar] [CrossRef]
- Zeng, A.; Song, S.; Yu, K.; Donlon, E.; Hogan, F.R.; Bauza, M.; Ma, D.; Taylor, O.; Liu, M.; Romo, E.; et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. Int. J. Robot. Res. 2022, 41, 690–705. [Google Scholar] [CrossRef]
- Iizuka, M.; Hashimoto, M. Detection of semantic grasping-parameter using part-affordance recognition. In Proceedings of the 19th International Conference on Research and Education in Mechatronics, The Hague, The Netherlands, 7 June 2018; pp. 136–140. [Google Scholar]
- Chu, F.; Xu, R.; Vela, P.A. Learning affordance segmentation for real-world robotic manipulation via synthetic images. IEEE Robot. Autom. Lett. 2019, 4, 1140–1147. [Google Scholar] [CrossRef]
- Shao, Q.; Hu, J. Combining RGB and points to predict grasping region for robotic bin-picking. arXiv 2019, arXiv:1904.07394v2. [Google Scholar]
- Liang, H.; Ma, X.; Li, S.; Görner, M.; Tang, S.; Fang, B.; Sun, F.; Zhang, J. PointNetGPD: Detecting grasp configurations from point sets. In Proceedings of the International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; pp. 3629–3635. [Google Scholar]
- Liu, H.W.; Cao, C.Q. Grasp pose detection based on point cloud shape simplification. IOP Conf. Ser. Mater. Sci. Eng. 2020, 717, 012007. [Google Scholar] [CrossRef]
- Yin, R.; Wu, H.; Li, M.; Cheng, Y.; Song, Y.; Handroos, H. RGB-D-based robotic grasping in fusion application environments. Appl. Sci. 2022, 12, 7573. [Google Scholar] [CrossRef]
- Yin, Z.; Li, Y. Overview of robotic grasp detection from 2D to 3D. Cogn. Robot. 2022, 2, 73–82. [Google Scholar] [CrossRef]
- Mousavian, A.; Eppner, C.; Fox, D. 6-DOF GraspNet: Variational grasp generation for object manipulation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2901–2910. [Google Scholar]
- Choi, Y.; Kee, H.; Lee, K.; Choy, J.; Min, J.; Lee, S.; Oh, S. Hierarchical 6-DoF grasping with approaching direction selection. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020; pp. 1553–1559. [Google Scholar]
- Yoshikawa, T. Manipulability of robotic mechanisms. Int. J. Robot. Res. 1985, 4, 3–9. [Google Scholar] [CrossRef]
- Huo, L.; Baron, L. The joint-limits and singularity avoidance in robotic welding. Ind. Robot. 2008, 35, 456–464. [Google Scholar] [CrossRef]
- Rozo, L.; Jaquier, N.; Calinon, S.; Caldwell, D.G. Learning manipulability ellipsoids for task compatibility in robot manipulation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017. [Google Scholar]
- Zghal, H.; Dubey, R.V.; Euler, J.A. Efficient gradient projection optimization for manipulators with multiple degrees of redundancy. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13–18 May 1990; pp. 1006–1011. [Google Scholar]
- Chan, T.F.; Dubey, R.V. A weighted least-norm solution based scheme for avoiding joint limits for redundant joint manipulators. IEEE Trans. Robot. Autom. 1995, 11, 286–292. [Google Scholar] [CrossRef]
- Hayes, M.; Husty, M.; Zsombor-Murray, P. Singular configurations of wrist-partitioned 6r serial robots: A geometric perspective for users. Trans. Can. Soc. Mech. Eng. 2002, 26, 41–55. [Google Scholar] [CrossRef]
- Park, T.-W.; Yang, H.-S. A study on singularity avoidance and robust control of redundant robot. In Proceedings of the 4th World Congress on Intelligent Control and Automation, Shanghai, China, 10–14 June 2002; pp. 1687–1691. [Google Scholar]
- Zaplana, I.; Hadfield, H.; Lasenby, J. Singularities of serial robots: Identification and distance computation using geometric algebra. Mathematics 2022, 10, 2068. [Google Scholar] [CrossRef]
- Vahrenkamp, N.; Asfour, T.; Metta, G.; Sandini, G.; Dillmann, R. Manipulability analysis. In Proceedings of the 12th IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan, 29 November–1 December 2012; pp. 568–573. [Google Scholar]
- Shi, X.; Guo, Y.; Chen, X.; Chen, Z.; Yang, Z. Kinematics and singularity analysis of a 7-DOF redundant manipulator. Sensors 2021, 21, 7257. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Chu, F.-J.; Xu, R.; Vela, P.A. Real-world multiobject, multigrasp detection. IEEE Robot. Autom. Lett. 2018, 3, 3355–3362. [Google Scholar] [CrossRef]
- Xiang, T.; Jiang, X.; Qiao, G.; Gao, C.; Zuo, H. Kinematics parameter calibration of serial industrial robots based on partial pose measurement. Mathematics 2023, 11, 4802. [Google Scholar] [CrossRef]
- Singh, G.K.; Claassens, J. An analytical solution for the inverse kinematics of a redundant 7DoF manipulator with link offsets. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 2976–2982. [Google Scholar]
Left DH-Link | ai (m) | αi (deg) | di (m) | qi (deg) |
---|---|---|---|---|
Joint 0 (Base) | 0 | −90 | d0 | 0 |
Joint 1 | 0 | −90 | d1 | θ1 + 90 |
Joint 2 | 0 | −90 | 0 | θ2 − 90 |
Joint 3 | a3 | 90 | d3 | θ3 |
Joint 4 | a4 | −90 | 0 | θ4 |
Joint 5 | 0 | 90 | d5 | θ5 |
Joint 6 | 0 | −90 | 0 | θ6 |
Joint 7 | 0 | 0 | d7 | θ7 + 90 |
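The DH table above can be turned into a forward-kinematics chain by composing one homogeneous transform per row. The sketch below assumes the classic (distal) DH convention; whether the authors use the classic or modified convention is not stated in this excerpt, so treat it as an illustration of how such a table is consumed rather than the paper's kinematic model.

```python
import math

def dh_transform(a, alpha_deg, d, q_deg):
    """Homogeneous transform (4x4 nested list) for one classic DH link (a, α, d, θ)."""
    al, q = math.radians(alpha_deg), math.radians(q_deg)
    ca, sa, cq, sq = math.cos(al), math.sin(al), math.cos(q), math.sin(q)
    return [
        [cq, -sq * ca,  sq * sa, a * cq],
        [sq,  cq * ca, -cq * sa, a * sq],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product, used to chain link transforms base-to-tip."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining the transforms row by row (e.g. `T = mat_mul(T, dh_transform(a, alpha, d, q))` over the table) yields the end-effector pose in the robot base frame.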
| Parameter | Symbol | Test 1 | Test 2 |
|---|---|---|---|
| Initial posture of the end-effector | Pcurr | [0.5, 0.3, −0.5] | [0.3, 0.2, −0.35] |
| | Ocurr | [180°, 0°, 0°] | [90°, 0°, 0°] |
| | ϕcurr | 0° | 0° |
| Target posture of the end-effector | Ptar | [0.1, −0.1, −0.5] | [0.3, 0.05, −0.35] |
| | Otar | [90°, 0°, 0°] | [180°, 0°, 0°] |
| Maximum variation in the redundant angle | ϕmax | 2° | 2° |
| Parameter | Symbol | Value |
|---|---|---|
| Initial posture of the end-effector | Pcurr | [0.2, 0.55, −0.6] |
| | Ocurr | [180°, 0°, 0°] |
| | ϕcurr | 0° |
| Target posture of the end-effector | Ptar | [−0.2, 0.55, −0.6] |
| | Otar | [180°, 0°, 0°] |
| Maximum variation in joint angle | Δθmax | 1.7° |
| Metric | Arm Mode | PET Bottle | Cup | Scraper | Average |
|---|---|---|---|---|---|
| Total number of grasping tests | | 25 | 25 | 25 | 25 |
| Success rate | Single-arm | 96% | 92% | 96% | 94.6% |
| | Dual-arm | 100% | 100% | 96% | 98.6% |
| Minimum M-score | Single-arm | 0.38 | 0.16 | 0.47 | 0.33 |
| | Dual-arm | 0.91 | 0.89 | 0.84 | 0.88 |
| Improvement on minimum M-score | | +0.53 | +0.73 | +0.37 | +0.55 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wong, C.-C.; Tsai, C.-Y.; Lai, Y.-C.; Wong, S.-W. Manipulability-Aware Task-Oriented Grasp Planning and Motion Control with Application in a Seven-DoF Redundant Dual-Arm Robot. Electronics 2024, 13, 5025. https://doi.org/10.3390/electronics13245025