Abstract
In biomedicine, the in vitro cultivation of myocardial tissue, followed by precise measurement and analysis of its contractility, plays a crucial role in studying cardiac disease, exploring stem cell science, and enhancing drug development and screening. However, traditional methodologies for monitoring and analyzing myocardial tissue often necessitate invasive operations and fail to support high-throughput monitoring, posing substantial obstacles to disease research and drug development. Deep learning sheds promising light on this problem by integrating advanced computer vision algorithms with biomedical data analysis. In this paper, we introduce the Multi-path Transformer-Convolution Network with a Spatial Matching Mechanism (MTCS-Net), a novel architecture for non-invasive, high-throughput monitoring of myocardial tissue contractility. To the best of our knowledge, this is the first work to quantitatively measure the myocardium’s functional state in a completely non-invasive manner. MTCS-Net combines a Local-Global Feature Interaction (LGFI) module, which extracts both coarse-grained and fine-grained information at multiple scales from the same feature map, a Box-Based Spatial Matching (BBSM) module for accurate cross-frame data alignment, and a Region Proposal Network (RPN) for efficient region identification. Furthermore, this work introduces the first publicly available in vitro cultured myocardial tissue dataset, Cultured Myocardial Tissue (CMT). A comprehensive evaluation on this dataset shows that our approach not only surpasses traditional methods in monitoring efficacy while reducing time and financial costs, but also outperforms contemporary state-of-the-art (SOTA) deep learning algorithms in segmentation performance. The dataset and code are available at https://github.com/RainCh-zyq/MTCS-Net.
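The abstract names MTCS-Net's three building blocks (LGFI, BBSM, and an RPN) without specifying their internals. The PyTorch sketch below is purely illustrative of how such components might compose: it assumes a depthwise-convolution (local) path fused with a self-attention (global) path for LGFI, and a simple IoU-based matcher as a stand-in for BBSM's cross-frame alignment. None of these internals are taken from the paper.

```python
# Schematic sketch only: module internals are illustrative assumptions,
# not the authors' implementation of MTCS-Net.
import torch
import torch.nn as nn
from torchvision.ops import box_iou


class LGFI(nn.Module):
    """Hypothetical local-global feature interaction block:
    a convolutional (local) path fused with an attention (global) path."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                           # fine-grained, local path
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, HW, C)
        global_, _ = self.attn(tokens, tokens, tokens)  # coarse-grained, global path
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_], dim=1))


def match_boxes_across_frames(boxes_t: torch.Tensor,
                              boxes_t1: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for BBSM: match each detected tissue box in
    frame t to its highest-IoU box in frame t+1."""
    iou = box_iou(boxes_t, boxes_t1)   # (N_t, N_t1) pairwise IoU
    return iou.argmax(dim=1)           # index of best match in frame t+1


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(LGFI(64)(feat).shape)        # torch.Size([1, 64, 32, 32])
    a = torch.tensor([[0., 0., 10., 10.], [20., 20., 40., 40.]])
    b = torch.tensor([[1., 1., 11., 11.], [19., 21., 41., 39.]])
    print(match_boxes_across_frames(a, b))  # tensor([0, 1])
```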
Acknowledgments
This work was partly supported by the National Key R&D Program of China (Grant No. 2021YFA1101902) and the National Natural Science Foundation of China (Grant No. 32171107, 62106069 and 62002104).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhang, Y. et al. (2024). MTCS-Net: A Novel Framework for Non-invasive Myocardial Tissue Quantitative Measurement and Instance Segmentation. In: Huang, D.S., Pan, Y., Zhang, Q. (eds.) Advanced Intelligent Computing in Bioinformatics. ICIC 2024. Lecture Notes in Computer Science, vol. 14882. Springer, Singapore. https://doi.org/10.1007/978-981-97-5692-6_38
DOI: https://doi.org/10.1007/978-981-97-5692-6_38
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5691-9
Online ISBN: 978-981-97-5692-6
eBook Packages: Computer Science, Computer Science (R0)