DOI: 10.1145/3627341.3630370

Three-Dimensional Object Segmentation Method based on YOLO, SAM, and NeRF

Published: 15 December 2023

Abstract

The neural radiance field (NeRF) representation has shown promising results in capturing 3D scenes. However, editing or moving a specific object within a virtual environment requires segmenting individual objects out of the radiance field. Existing methods rely primarily on feature computation and clustering for 3D segmentation, and they often suffer from poor segmentation quality and low accuracy. Building on recent advances in 2D image segmentation, we propose a 3D segmentation method that produces single-object segmentations from multiple viewpoint images. First, we compute camera poses from feature points extracted from the original images. Then, with user guidance, we use the You Only Look Once (YOLO) network to roughly locate the target objects and use the detected bounding boxes as hints. Next, we apply the Segment Anything Model (SAM) for fine-grained segmentation. The processed dataset is then used to train a neural radiance field that reconstructs the segmented object in 3D and produces high-quality rendered images. By training on specific objects, we can segment object categories that the publicly available YOLO model cannot recognize, thereby enhancing generalization. Through experiments, we analyze the real-time performance of each module and evaluate the effectiveness of our method both qualitatively and quantitatively. Compared to the N3F and ISRF methods, our approach achieves an average improvement of 17% in Intersection over Union (IoU) and 9% in Pixel Accuracy (Acc). We validate the performance of our method on the LLFF and mip-NeRF 360 datasets, demonstrating its generalizability and effectiveness.
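The snippet below is a minimal sketch of the 2D mask-generation stage described in the abstract: a YOLO detector roughly locates the object in each view, and SAM refines the box into a per-pixel mask so that only the object remains in the NeRF training images. It assumes the off-the-shelf ultralytics YOLO API and the public segment_anything predictor; the checkpoint names, file paths, and the "highest-confidence box" selection rule are illustrative assumptions, not the authors' exact pipeline (the paper uses YOLOv7, and camera-pose estimation and NeRF training are omitted here).

```python
# Hedged sketch: YOLO box -> SAM mask -> background-free views for NeRF training.
# Paths and checkpoints below are hypothetical.
import glob
import os

import cv2
import numpy as np
from ultralytics import YOLO                                   # stand-in for the YOLOv7 detector in the paper
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolov8n.pt")                                  # illustrative checkpoint
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

os.makedirs("masked", exist_ok=True)

for path in sorted(glob.glob("views/*.png")):                  # multi-viewpoint input images
    bgr = cv2.imread(path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

    # 1) Rough localization: keep the highest-confidence detection.
    #    (User guidance / class selection from the paper is omitted for brevity.)
    det = detector(rgb, verbose=False)[0]
    if len(det.boxes) == 0:
        continue
    box = det.boxes.xyxy[det.boxes.conf.argmax()].cpu().numpy()  # [x0, y0, x1, y1]

    # 2) Fine-grained segmentation: prompt SAM with the detector's box.
    predictor.set_image(rgb)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    mask = masks[0]                                            # boolean H x W mask

    # 3) Zero out the background so only the object contributes to NeRF training.
    masked = rgb.copy()
    masked[~mask] = 0
    out_path = os.path.join("masked", os.path.basename(path))
    cv2.imwrite(out_path, cv2.cvtColor(masked, cv2.COLOR_RGB2BGR))
```

In the setting the abstract describes, a detector fine-tuned on the specific target object could replace the off-the-shelf checkpoint, and the masked views, together with the estimated camera poses, could then be fed to a NeRF trainer such as an Instant-NGP-style implementation [18].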

References

[1] Bao, Chong, et al. "SINE: Semantic-driven image-based NeRF editing with prior-guided editing field." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Barron, Jonathan T., et al. "Mip-NeRF 360: Unbounded anti-aliased neural radiance fields." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Carion, Nicolas, et al. "End-to-end object detection with transformers." European Conference on Computer Vision. Cham: Springer International Publishing, 2020.
[4] Zhi, Shuaifeng, et al. "In-place scene labelling and understanding with implicit scene representation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[5] Long, Yanxin, et al. "CapDet: Unifying dense captioning and open-world detection pretraining." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[6] Creswell, Antonia, et al. "Generative adversarial networks: An overview." IEEE Signal Processing Magazine 35.1 (2018): 53-65.
[7] Wen, Bowen, et al. "BundleSDF: Neural 6-DoF tracking and 3D reconstruction of unknown objects." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[8] Goel, Rahul, et al. "Interactive segmentation of radiance fields." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[9] Hu, Jie, et al. "You Only Segment Once: Towards real-time panoptic segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[10] Kirillov, Alexander, et al. "Segment Anything." arXiv preprint arXiv:2304.02643 (2023).
[11] Yin, Youtan, et al. "OR-NeRF: Object removing from 3D scenes guided by multiview segmentation with neural radiance fields." arXiv preprint arXiv:2305.10503 (2023).
[12] Li, Sicheng, et al. "SteerNeRF: Accelerating NeRF rendering via smooth viewpoint trajectory." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[13] Lin, Chen-Hsuan, et al. "Magic3D: High-resolution text-to-3D content creation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[14] Wang, Chien-Yao, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[15] Mildenhall, Ben, et al. "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines." ACM Transactions on Graphics (TOG) 38.4 (2019): 1-14.
[16] Mildenhall, Ben, et al. "NeRF: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106.
[17] Liu, Fangfu, et al. "Semantic Ray: Learning a generalizable semantic field with cross-reprojection attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[18] Müller, Thomas, et al. "Instant neural graphics primitives with a multiresolution hash encoding." ACM Transactions on Graphics (TOG) 41.4 (2022): 1-15.
[19] Kerr, Justin, et al. "LERF: Language embedded radiance fields." arXiv preprint arXiv:2303.09553 (2023).
[20] Ren, Zhongzheng, et al. "Neural volumetric object selection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[21] Tschernezki, Vadim, et al. "Neural Feature Fusion Fields: 3D distillation of self-supervised 2D image representations." 2022 International Conference on 3D Vision (3DV). IEEE, 2022.
[22] Cheng, Bowen, et al. "Mask2Former for video instance segmentation." arXiv preprint arXiv:2112.10764 (2021).
[23] Zhang, Yuechen, et al. "Ref-NPR: Reference-based non-photorealistic radiance fields." arXiv preprint arXiv:2212.02766 (2022).

    Published In

    ICCVIT '23: Proceedings of the 2023 International Conference on Computer, Vision and Intelligent Technology
    August 2023
    378 pages
    ISBN: 9798400708701
    DOI: 10.1145/3627341

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. 3D segmentation
    2. Multi-viewpoint images
    3. Neural radiance fields
    4. Segment anything model

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • the Fundamental Research Funds for the Central Universities
    • the National Natural Science Foundation of China

    Conference

    ICCVIT 2023

    Acceptance Rates

    ICCVIT '23 Paper Acceptance Rate: 54 of 142 submissions, 38%
    Overall Acceptance Rate: 54 of 142 submissions, 38%

