
Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification

  • Conference paper
Machine Learning in Medical Imaging (MLMI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12966)


Abstract

Learning robust representations that discriminate cell phenotypes from microscopy images is important for drug discovery. Drug development efforts typically analyse thousands of cell images to screen for potential treatments. Early works focus on creating hand-engineered features from these images or on learning such features with deep neural networks in a fully or weakly supervised framework; both require prior knowledge or labelled datasets. Subsequent works therefore propose unsupervised approaches based on generative models to learn these representations. Recently, representations learned with self-supervised contrastive-loss-based methods have yielded state-of-the-art results on various imaging tasks compared to earlier unsupervised approaches. In this work, we leverage a contrastive learning framework to learn appropriate representations from single-cell fluorescence microscopy images for the task of Mechanism-of-Action classification. The proposed approach is evaluated on the annotated BBBC021 dataset, where we obtain state-of-the-art results for an unsupervised method on the NSC and NSCB metrics and on the NSC-NSCB drop. We observe an improvement of 10% in NSCB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised method. Moreover, the performance of our unsupervised approach matches that of the best supervised approach. Additionally, our framework performs well even without post-processing, unlike earlier methods. We conclude that robust cell representations can be learned with contrastive learning. We make the code available on GitHub (https://github.com/SamriddhiJain/SimCLR-for-cell-profiling).
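
The contrastive framework the abstract refers to is SimCLR-style: embeddings of two augmented views of the same cell image are pulled together, while all other pairs in the batch are pushed apart via the NT-Xent (normalized temperature-scaled cross-entropy) loss. A minimal NumPy sketch of that loss is given below; it is illustrative only, and the function name, shapes, and default temperature are our assumptions, not the authors' exact implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two augmented views
    of the same N images; row i of z1 and row i of z2 are a positive pair.
    """
    n = len(z1)
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise rows
    sim = (z @ z.T) / temperature                      # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive of sample i is i+N (first half) or i-N (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # per-row cross-entropy: -log softmax at the positive entry
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In practice one would use a numerically stable log-sum-exp and a GPU framework, but the loss geometry (identical views score lower loss than unrelated pairs) is already visible in this sketch.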

A. Perakis, A. Gorji and S. Jain—These authors contributed equally to the paper.




Author information


Correspondence to Alexis Perakis.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 125 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Perakis, A., Gorji, A., Jain, S., Chaitanya, K., Rizza, S., Konukoglu, E. (2021). Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification. In: Lian, C., Cao, X., Rekik, I., Xu, X., Yan, P. (eds) Machine Learning in Medical Imaging. MLMI 2021. Lecture Notes in Computer Science, vol 12966. Springer, Cham. https://doi.org/10.1007/978-3-030-87589-3_58


  • DOI: https://doi.org/10.1007/978-3-030-87589-3_58

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87588-6

  • Online ISBN: 978-3-030-87589-3

  • eBook Packages: Computer Science (R0)
