
MTCS-Net: A Novel Framework for Non-invasive Myocardial Tissue Quantitative Measurement and Instance Segmentation

  • Conference paper
Advanced Intelligent Computing in Bioinformatics (ICIC 2024)

Part of the book series: Lecture Notes in Computer Science (LNBI, volume 14882)


Abstract

In the realm of biomedicine, the in vitro cultivation of myocardial tissue, followed by precise measurement and analysis of its contractility, plays a crucial role in studying cardiac disease, exploring stem cell science, and enhancing drug development and screening. However, traditional methodologies for monitoring and analyzing myocardial tissue often necessitate invasive operations and fail to support high-throughput monitoring, posing substantial obstacles to disease research and drug development. Deep learning sheds promising light on this problem through the integration of advanced computer vision algorithms with biomedical data analysis. In this paper, we introduce the Multi-path Transformer-Convolution Network with a Spatial Matching Mechanism (MTCS-Net), a novel architecture for non-invasive, high-throughput monitoring of myocardial tissue contractility. To the best of our knowledge, this is the first work to quantitatively measure the myocardium’s functional state in a completely non-invasive manner. MTCS-Net uniquely combines Local-Global Feature Interaction (LGFI) for extracting both coarse-grained and fine-grained information at various scales within the same feature map, a Box-Based Spatial Matching (BBSM) module for accurate cross-frame data alignment, and a Region Proposal Network (RPN) for efficient region identification. Furthermore, this research introduces the first publicly available cultured myocardial tissue dataset, Cultured Myocardial Tissue (CMT) in vitro. A comprehensive evaluation on this dataset reveals that our approach not only surpasses traditional methods in monitoring efficacy, time, and financial cost, but also surpasses contemporary state-of-the-art (SOTA) deep learning algorithms in segmentation performance. The dataset and code are available at https://github.com/RainCh-zyq/MTCS-Net.
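The abstract describes BBSM only at a high level: aligning detected tissue instances across consecutive video frames so that contraction can be measured per instance over time. As an illustrative sketch of what such cross-frame box alignment could look like, the following greedy IoU matcher pairs each box from the previous frame with its best-overlapping box in the current frame. This is an assumption for illustration, not the paper's actual BBSM algorithm; the function names `iou` and `match_boxes` are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_boxes(prev_boxes, curr_boxes, thresh=0.3):
    """Greedily pair each previous-frame box with the unused
    current-frame box of highest IoU above `thresh`.
    Returns a list of (prev_index, curr_index) pairs."""
    pairs, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best_iou = -1, thresh
        for j, c in enumerate(curr_boxes):
            if j in used:
                continue
            v = iou(p, c)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

Under this sketch, tracking a tissue instance across a video reduces to chaining matches frame-to-frame; a production system would likely use an optimal assignment (e.g. Hungarian matching) rather than this greedy pass.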



Acknowledgments

This work was partly supported by the National Key R&D Program of China (Grant No. 2021YFA1101902) and the National Natural Science Foundation of China (Grant No. 32171107, 62106069 and 62002104).

Author information

Correspondence to Yang Yang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, Y. et al. (2024). MTCS-Net: A Novel Framework for Non-invasive Myocardial Tissue Quantitative Measurement and Instance Segmentation. In: Huang, D.S., Pan, Y., Zhang, Q. (eds) Advanced Intelligent Computing in Bioinformatics. ICIC 2024. Lecture Notes in Computer Science, vol 14882. Springer, Singapore. https://doi.org/10.1007/978-981-97-5692-6_38


  • DOI: https://doi.org/10.1007/978-981-97-5692-6_38

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5691-9

  • Online ISBN: 978-981-97-5692-6

  • eBook Packages: Computer Science, Computer Science (R0)
