
End-to-End Ultrametric Learning for Hierarchical Segmentation

  • Conference paper
Discrete Geometry and Mathematical Morphology (DGMM 2024)

Abstract

Hierarchical image segmentation aims to capture the structure of objects of different sizes at different scales, and thereby helps to understand the scene. With the success of neural networks for image segmentation and the recent emergence of object and part segmentation datasets, the task of supervised learning of segmentation hierarchies naturally arises. In previous work, we proposed a differentiable ultrametric layer that transforms any dissimilarity measure into an ultrametric distance, which is equivalent to a hierarchical segmentation. In this paper, we study several loss functions for end-to-end learning of a neural network that predicts hierarchical segmentations. In particular, we propose a generalization of the Rand index to hierarchical segmentation, together with exact and approximate algorithms to compute it. We introduce new metrics to compare hierarchical segmentations, and we demonstrate the suitability of the proposed pipeline with several loss function combinations on a simulated hierarchical dataset.
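
As a point of reference for the ultrametric layer mentioned above, the classical (non-differentiable) subdominant ultrametric already illustrates how a pairwise dissimilarity is turned into an ultrametric, and hence into a hierarchy: it is the cophenetic distance of single-linkage clustering, i.e. the minimax path distance. The sketch below, using SciPy, is only an illustration of this concept and not the authors' differentiable layer; the helper name subdominant_ultrametric is hypothetical.

    # Minimal sketch (not the paper's differentiable layer): the subdominant
    # ultrametric of a symmetric dissimilarity matrix, obtained as the
    # cophenetic distances of single-linkage hierarchical clustering.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, cophenet
    from scipy.spatial.distance import squareform

    def subdominant_ultrametric(dissim: np.ndarray) -> np.ndarray:
        """Largest ultrametric below the given dissimilarity (minimax path distance)."""
        condensed = squareform(dissim, checks=False)   # condensed pairwise form
        tree = linkage(condensed, method="single")     # single-linkage dendrogram
        return squareform(cophenet(tree))              # cophenetic = ultrametric distances

    # Toy example with 4 points and an arbitrary dissimilarity.
    d = np.array([[0., 1., 4., 5.],
                  [1., 0., 3., 6.],
                  [4., 3., 0., 2.],
                  [5., 6., 2., 0.]])
    u = subdominant_ultrametric(d)
    # u satisfies the strong triangle inequality: u[i, k] <= max(u[i, j], u[j, k]),
    # and its distinct values are the merge levels of the induced hierarchy.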

This work was supported by the French ANR grant ANR-20-CE23-0019 and was granted access to the HPC resources of IDRIS under the allocation 2023-AD011013101R1 made by GENCI.



Author information

Corresponding author

Correspondence to Raphael Lapertot.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lapertot, R., Chierchia, G., Perret, B. (2024). End-to-End Ultrametric Learning for Hierarchical Segmentation. In: Brunetti, S., Frosini, A., Rinaldi, S. (eds) Discrete Geometry and Mathematical Morphology. DGMM 2024. Lecture Notes in Computer Science, vol 14605. Springer, Cham. https://doi.org/10.1007/978-3-031-57793-2_22


  • DOI: https://doi.org/10.1007/978-3-031-57793-2_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-57792-5

  • Online ISBN: 978-3-031-57793-2

  • eBook Packages: Computer Science, Computer Science (R0)
