
OCID-Ref: A 3D Robotic Dataset With Embodied Language For Clutter Scene Grounding

Ke-Jyun Wang, Yun-Hsuan Liu, Hung-Ting Su, Jen-Wei Wang, Yu-Siang Wang, Winston Hsu, Wen-Chin Chen


Abstract
To effectively apply robots in working environments and assist humans, it is essential to develop and evaluate how visual grounding (VG) systems perform on occluded objects. However, current VG work is limited in working environments such as offices and warehouses, where objects are usually occluded due to space constraints. In this work, we propose OCID-Ref, a novel dataset featuring a referring expression segmentation task over occluded objects. OCID-Ref consists of 305,694 referring expressions from 2,300 scenes and provides both RGB image and point cloud inputs. We argue that it is crucial to take advantage of both 2D and 3D signals to resolve the challenging occlusion issues. Our experimental results demonstrate the effectiveness of aggregating 2D and 3D signals, but referring to occluded objects remains challenging for modern visual grounding systems. OCID-Ref is publicly available at https://github.com/lluma/OCID-Ref
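As a rough illustration of what the abstract describes (a referring expression paired with an RGB image, a point cloud, and a segmentation target for the referred, possibly occluded object), here is a minimal loading sketch in Python. The annotation file name and every dictionary key below (sentence, scene_path, pcd_path, segmentation) are assumptions for illustration only, not the actual schema of lluma/OCID-Ref; consult the repository for the real format.

import json

# Minimal sketch of iterating over OCID-Ref-style annotations.
# NOTE: the file name and all keys below are hypothetical; see
# https://github.com/lluma/OCID-Ref for the actual data layout.
def load_refs(annotation_path="train_expressions.json"):
    with open(annotation_path) as f:
        annotations = json.load(f)
    for ref_id, ann in annotations.items():
        yield {
            "id": ref_id,
            "expression": ann["sentence"],   # e.g. "the box behind the red mug"
            "rgb_path": ann["scene_path"],   # RGB image of the cluttered scene
            "pcd_path": ann["pcd_path"],     # point cloud of the same scene
            "mask": ann["segmentation"],     # mask of the referred (possibly occluded) object
        }

if __name__ == "__main__":
    for sample in load_refs():
        print(sample["expression"])
        break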
Anthology ID:
2021.naacl-main.419
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5333–5338
URL:
https://aclanthology.org/2021.naacl-main.419
DOI:
10.18653/v1/2021.naacl-main.419
Cite (ACL):
Ke-Jyun Wang, Yun-Hsuan Liu, Hung-Ting Su, Jen-Wei Wang, Yu-Siang Wang, Winston Hsu, and Wen-Chin Chen. 2021. OCID-Ref: A 3D Robotic Dataset With Embodied Language For Clutter Scene Grounding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5333–5338, Online. Association for Computational Linguistics.
Cite (Informal):
OCID-Ref: A 3D Robotic Dataset With Embodied Language For Clutter Scene Grounding (Wang et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.419.pdf
Optional supplementary data:
 2021.naacl-main.419.OptionalSupplementaryData.zip
Video:
 https://aclanthology.org/2021.naacl-main.419.mp4
Code
 lluma/OCID-Ref
Data
 Cops-Ref
 OCID