A Volumetric Saliency Guided Image Summarization for RGB-D Indoor Scene Classification

Published: 11 June 2024

Abstract

An image summary, an abridged version of the original visual content, can be used to represent a scene, allowing tasks such as scene classification, identification, and indexing to be performed efficiently on the summary alone. Saliency is the technique most commonly used to generate such a summary; however, the definition of saliency is subjective and depends on the application. Existing saliency detection methods for RGB-D data focus mainly on color, texture, and depth features, so the summaries they generate are dominated by foreground or non-stationary objects. Applications such as scene identification, by contrast, depend on the stationary characteristics of the scene, which these state-of-the-art methods fail to capture. This paper proposes a novel volumetric saliency-guided framework for indoor scene classification, and the results demonstrate its efficacy.
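
To make the summarization flow concrete, below is a minimal sketch of the kind of feature-driven RGB-D saliency step the abstract critiques: per-pixel saliency fused from color and depth contrast, thresholded into a summary mask. All function names, fusion weights, and the keep fraction here are illustrative assumptions rather than the paper's method; the proposed volumetric saliency cue, which targets stationary scene structure, would take the place of the toy contrast measure.

import numpy as np

def toy_rgbd_saliency(rgb, depth):
    """Per-pixel saliency from global color contrast fused with depth contrast.

    rgb   : (H, W, 3) float array scaled to [0, 1]
    depth : (H, W) float array (metric depth or disparity)
    """
    # Distance of every pixel's color from the image's mean color.
    color_contrast = np.linalg.norm(rgb - rgb.mean(axis=(0, 1)), axis=2)
    # Distance of every pixel's depth from the image's mean depth.
    depth_contrast = np.abs(depth - depth.mean())

    def rescale(x):
        # Map a cue to [0, 1]; guard against constant-valued images.
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    # Equal-weight fusion of the two cues (an assumption, not the paper's rule).
    return 0.5 * rescale(color_contrast) + 0.5 * rescale(depth_contrast)

def summary_mask(rgb, depth, keep=0.2):
    """Retain the `keep` fraction of most salient pixels as the image summary."""
    saliency = toy_rgbd_saliency(rgb, depth)
    threshold = np.quantile(saliency, 1.0 - keep)
    return saliency >= threshold  # boolean (H, W) summary mask

A downstream scene classifier would then compute its features only over the pixels the mask retains, which is why a saliency cue biased toward foreground objects, as above, is ill-suited to scene identification.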



            Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 34, Issue 11 (Part 1), Nov. 2024, 789 pages

            Publisher

            IEEE Press
