Fast Organization of Objects’ Spatial Positions in Manipulator Space from Single RGB-D Camera

Published: 08 December 2021

Abstract

For grasping tasks in a physical environment, it is important for a manipulator to know objects' spatial positions in real time with as few sensors as possible. This work proposes an effective framework that organizes objects' spatial positions in the manipulator's 3D workspace robustly and quickly using a single RGB-D camera. It consists of two main steps: (1) a 3D reconstruction strategy for object contours extracted from the environment; and (2) a distance-restricted outlier-elimination strategy that reduces reconstruction errors caused by sensor noise. The first step enables fast object extraction and 3D reconstruction from the scene image; the second step yields more accurate reconstructions by removing outlier points from the initial result of the first step. We validated the proposed method on a physical system comprising a Kinect 2.0 RGB-D camera and a Mico2 robot. Experiments show that the method runs in quasi real time on a common PC and outperforms traditional 3D reconstruction methods.
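The two steps described in the abstract can be sketched as follows. This is a hypothetical Python illustration, not the authors' implementation: it assumes a pinhole camera model with intrinsics (fx, fy, cx, cy), and uses a simple median-centroid distance threshold as a stand-in for the paper's distance-restricted outlier elimination, whose exact rule is not given in the abstract.

```python
import numpy as np

def backproject(pixels, depths, fx, fy, cx, cy):
    """Step 1 (sketch): back-project 2D contour pixels with depth into 3D
    camera coordinates via the pinhole model:
        X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d
    pixels: (N, 2) array of (u, v); depths: (N,) array in metres."""
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) * depths / fx
    y = (v - cy) * depths / fy
    return np.stack([x, y, depths], axis=1)

def eliminate_outliers(points, max_dist=0.05):
    """Step 2 (sketch): distance-restricted elimination — drop points whose
    distance from the per-axis median of the contour's point cloud exceeds
    max_dist (metres). The median keeps the reference point robust to the
    very outliers being removed."""
    centroid = np.median(points, axis=0)
    dist = np.linalg.norm(points - centroid, axis=1)
    return points[dist <= max_dist]

# Usage: three contour pixels, one corrupted by a depth-noise spike.
pixels = np.array([[320.0, 240.0], [330.0, 240.0], [320.0, 250.0]])
depths = np.array([1.0, 1.0, 3.0])          # third reading is noisy
cloud = backproject(pixels, depths, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
clean = eliminate_outliers(cloud, max_dist=0.5)  # noisy point is dropped
```

The intrinsic values above (fx = fy = 525, a common default for Kinect-class sensors) are illustrative; in practice they come from camera calibration.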


Published In

Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8–12, 2021, Proceedings, Part III
Dec 2021
723 pages
ISBN:978-3-030-92237-5
DOI:10.1007/978-3-030-92238-2

Publisher

Springer-Verlag

Berlin, Heidelberg

Author Tags

  1. 3D reconstruction
  2. Real-time system
  3. Robot grasping

Qualifiers

  • Article
