Generative Adversarial Networks for Morphological–Temporal Classification of Stem Cell Images
Figure 1. Image examples of the four morphological classes observable in a single cell colony (debris: green; dense: red; spread: blue; differentiated: yellow). Throughout the differentiation process, varying proportions of each class can be found in cell colonies with contiguous cell boundaries. These multiclass images can be classified using image patches.
Figure 2. Data preprocessing and classification schematic. The binary map of colony locations is used to segment colonies from the original image, which are then sorted by hand during ground-truth generation (left). Patches from the resulting dataset are used to train the GAN. Generated images are added to balance the dataset for the temporal CNN classification scheme (right), in which images are sorted into their individual classes through multiple hierarchical stages.
Figure 3. Image entropy distribution histograms for GAN configurations. These graphs provide a quantitative measure of the overall generated image distribution in relation to the real image distribution and are used during GAN training to improve network learning. Values in parentheses indicate the percent overlap of the two histograms shown in the figure.
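The percent-overlap measure described in this caption can be reproduced with a short sketch: compute the Shannon entropy of each real and each generated patch, histogram the two entropy populations over a shared set of bins, and sum the bin-wise minima of the normalized histograms. This is a minimal numpy illustration of one plausible reading of the metric; the bin count and the per-patch entropy definition are assumptions, not values taken from the paper.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits) of a grayscale uint8 image patch."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_overlap(real_patches, fake_patches, bins=50):
    """Percent overlap of the entropy distributions of two patch sets (1.0 = identical)."""
    real_h = np.array([shannon_entropy(x) for x in real_patches])
    fake_h = np.array([shannon_entropy(x) for x in fake_patches])
    lo, hi = min(real_h.min(), fake_h.min()), max(real_h.max(), fake_h.max())
    edges = np.linspace(lo, hi, bins + 1)
    pr, _ = np.histogram(real_h, bins=edges)
    pf, _ = np.histogram(fake_h, bins=edges)
    pr = pr / pr.sum()
    pf = pf / pf.sum()
    return float(np.minimum(pr, pf).sum())
```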
Figure 4. Bar graph of the data breakdown, including values for the training/testing (blue/yellow) split. Generated images (red) are added to the dataset to make up for class imbalances during CNN training.
Figure 5. Graphs of network accuracy (left) and cross-entropy loss (right) for the training and validation datasets. A small bump/dip in accuracy/loss is observed at 100 epochs, where the learning rate is reduced. Training levels out before 200 epochs, indicating that the network has finished learning.
Figure 6. Image patch samples for real and generated images. A classwise comparison shows generally realistic image features indicative of morphological class. However, the visual appearance of images provides only a qualitative measure of image quality; quantitative metrics are necessary to determine image realness.
Figure 7. Normalized generator inception score (red) and FID (blue) per training epoch, with example images at various intervals for the spread class. Graphs include accompanying trend lines. Training epoch numbers are marked by a white 'E' at the bottom of each image. Agreement between the inception score and FID can be seen in their relative minima and maxima versus training epoch.
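The FID values tracked per epoch follow the standard definition of Heusel et al.: the Fréchet distance between Gaussians fitted to Inception activations of real and generated images. A minimal sketch, assuming the 2048-dimensional pooling features have already been extracted for both image sets:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(real_feats, fake_feats):
    """FID between two sets of Inception activations of shape (N, 2048)."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```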
Figure 8. Generator and discriminator loss values for the dense class. As training progresses, the GAN reaches an equilibrium, at which point training is considered finished. Using an individual GAN model for each image class allows each GAN to be trained for a different amount of time depending on the class.
Figure 9. Bar graph of network configuration vs. FID score by image class. The dcGAN+MSE configuration consistently displays the best performance on this metric.
Figure 10. Graphs of classification metrics (left: true positive rate; right: classification accuracy) vs. the number of added generated images for the dense and spread classes. These graphs are used to determine the saturation point of a CNN, beyond which the generated images no longer provide useful features to the model.
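The saturation point read off such curves can also be located programmatically: sweep the number of added generated images, record the resulting metric, and stop where the marginal gain drops below a tolerance. A small sketch under those assumptions (the counts, metric values, and tolerance below are illustrative, not results from the paper):

```python
def find_saturation_point(counts, metric_values, tol=0.002):
    """Return the smallest augmentation count beyond which the metric
    (e.g., true positive rate) improves by less than `tol` per step."""
    for i in range(1, len(counts)):
        if metric_values[i] - metric_values[i - 1] < tol:
            return counts[i - 1]
    return counts[-1]

# Illustrative sweep: TPR measured after adding 0, 1000, ..., 5000 generated images.
counts = [0, 1000, 2000, 3000, 4000, 5000]
tpr = [0.79, 0.83, 0.86, 0.87, 0.871, 0.870]
print(find_saturation_point(counts, tpr))  # -> 3000
```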
Abstract
1. Introduction
1.1. Developmental Toxicology
1.2. Video Bioinformatics and Machine Learning
1.3. Deep Learning Approaches
1.4. Generative Adversarial Learning
2. Related Works
Contributions of this Paper
- (1) Models complex, varied, and highly textured image patches using GANs
- (2) Incorporates domain knowledge in the form of temporal constraints on model learning as well as bio-inspired algorithm design
- (3) Introduces an image-entropy-based metric for model training, image postprocessing, and quality control
- (4) Explores dataset augmentation as a viable means of improving network performance for tasks involving patch-based classification
3. Materials and Methods
3.1. Technical Approach
3.2. GAN Architectures
3.2.1. Wasserstein GAN
3.2.2. Auxiliary GAN
3.2.3. Metropolis-Hastings GAN
3.3. Assessing Generated Image Quality
Image Entropy Distribution
3.4. CNN Training Configurations
Temporal Classification
4. Results and Discussion
4.1. Data
4.2. Ground-Truth Validation
4.3. Patch-Based Sampling
- to increase the apparent training dataset size
- to accommodate efficient network architectures (GANs are generally effective on relatively small images (≤64 × 64) but are prone to mode collapse on high-resolution images)
- to standardize input size, as image crops vary in dimension (see the patch-tiling sketch after this list)
- to model low-level features (i.e., fine-grained textures), which vary greatly across image patches of a given class
- to increase sample variability via patch sampling, which generally improves training
- to aid the analytical goal of classifying contiguous, multilabel cell colonies in a patchwise manner using only cellular morphology
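As motivated by the list above, a simple way to obtain fixed-size inputs is to tile each segmented colony crop with 64 × 64 patches. A minimal sketch, assuming grayscale crops as 2-D numpy arrays; the stride and border handling are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def extract_patches(crop, patch=64, stride=64):
    """Tile a 2-D grayscale colony crop into fixed-size square patches."""
    h, w = crop.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(crop[y:y + patch, x:x + patch])
    if not patches:  # crop smaller than one patch
        return np.empty((0, patch, patch), dtype=crop.dtype)
    return np.stack(patches)
```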
4.4. Assessment of Generated Image Quality
4.4.1. Inception Score
4.4.2. Fréchet Inception Distance
4.5. GAN Training Visualization
4.6. GAN Network Comparisons
4.7. Classification Metrics
4.8. Dataset Balancing Using Generated Image Augmentation
4.9. Effect of the Temporal Classification Scheme
4.10. Saturation Point of Generated Image Augmentation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Class | Morphological Description | Implication
---|---|---
Debris | Individual cells or cell aggregates showing circular morphology with a high-intensity white ‘halo’ marking distinct boundaries | Distressed, dead (apoptotic/necrotic) cells that float on top of the colony, indicating a negative response to experimental conditions
Dense | Homogeneous aggregates of small cells with indiscernible cell boundaries and no clear nucleus | Induced pluripotent stem cell colonies that maintain undifferentiated status under current conditions
Spread | Homogeneous aggregates of large cells with discernible cell boundaries, clear nuclei, and large protrusions | Downstream lineage intermediates or progenitor cells
Differentiated | Individual cells or spaced-out aggregates of cells with distinct, dark cell bodies, high-intensity white boundaries, and dark, axon-like protrusions | Differentiated neurons or neuron-like downstream lineages
Generator (left three columns) and discriminator (right three columns) architectures:

Module | Size | Maps | Module | Size | Maps
---|---|---|---|---|---
Linear | 1/512 |  | C2d | 64/32 | 1/64
Up | 16/36 | 512/512 | C2d | 32/16 | 64/128
C2d | 36/36 | 512/512 | C2d | 16/8 | 128/256
Up | 36/64 | 512/512 | C2d | 8/4 | 256/512
C2d | 64/64 | 512/256 | FC | 8192 | 512
C2d | 64/64 | 256/1 | Sig(·) | 1/1 | -/-
Tanh(·) | 64/64 | 1/1 |  |  |
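One reading of the table above is a DCGAN-style generator that maps a latent vector to a 16 × 16 × 512 tensor, upsamples to 36 × 36 and then 64 × 64 with convolutions in between, and ends with a single-channel Tanh output, while the discriminator is a strided-convolution stack whose 4 × 4 × 512 features feed a fully connected layer and a sigmoid. A loose PyTorch sketch under that reading; kernel sizes, padding, activations, the latent dimension, and the single-logit output are assumptions not stated in the table.

```python
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 512 * 16 * 16)  # Linear -> 16x16 with 512 maps
        self.body = nn.Sequential(
            nn.Upsample(size=(36, 36)), nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(True),  # Up 16->36, C2d 512->512
            nn.Upsample(size=(64, 64)), nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(True),  # Up 36->64, C2d 512->256
            nn.Conv2d(256, 1, 3, padding=1), nn.Tanh(),                                    # C2d 256->1, Tanh
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 16, 16)
        return self.body(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),    # 64 -> 32, 1 -> 64 maps
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),  # 32 -> 16, 64 -> 128 maps
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True), # 16 -> 8, 128 -> 256 maps
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True), # 8 -> 4, 256 -> 512 maps
            nn.Flatten(), nn.Linear(512 * 4 * 4, 1), nn.Sigmoid(),                # FC over 8192 features
        )

    def forward(self, x):
        return self.body(x)
```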
Training Hyperparameters

Parameter | Value
---|---
Learning rate—Adam | 0.002
β1—Adam | 0.5
β2—Adam | 0.999
Max feature maps—Discriminator | 512
Max feature maps—Generator | 512
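The hyperparameters above translate directly into the optimizer setup. A minimal PyTorch sketch, assuming `generator` and `discriminator` are the networks from the architecture table (for example, the sketch above):

```python
import torch
from torch import optim

# Adam with the learning rate and beta values listed in the table above.
opt_g = optim.Adam(generator.parameters(), lr=0.002, betas=(0.5, 0.999))
opt_d = optim.Adam(discriminator.parameters(), lr=0.002, betas=(0.5, 0.999))
```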
Image Class | Overlap Percentage—Mean (std.) |
---|---|
Debris | 0.6182 (0.0026) |
Dense | 0.7066 (0.0033) |
Diff | 0.3936 (0.0018) |
Spread | 0.3999 (0.0011) |
Class | # Samples |
---|---|
Debris | 3587 |
Dense | 3934 |
Diff | 656 |
Spread | 10,506 |
Total | 18,683 |
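Balancing by generation amounts to topping each minority class up to the majority-class count; this is one plausible balancing rule, and the paper's exact per-class augmentation counts may differ. A short sketch using the sample counts above:

```python
# Real-patch counts per class from the table above.
counts = {"Debris": 3587, "Dense": 3934, "Diff": 656, "Spread": 10506}
target = max(counts.values())
to_generate = {cls: target - n for cls, n in counts.items()}
print(to_generate)  # {'Debris': 6919, 'Dense': 6572, 'Diff': 9850, 'Spread': 0}
```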
Image Class | Optimal Generator Epoch | Inception Score |
---|---|---|
Debris | 116 | 2.60 |
Dense | 444 | 2.32 |
Diff | 225 | 2.38 |
Spread | 136 | 2.57 |
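The per-class inception scores above follow the standard definition: the exponentiated mean KL divergence between the Inception label distribution of each generated image and the marginal label distribution over all generated images. A short numpy sketch, assuming the softmax predictions p(y|x) have already been computed for the generated set:

```python
import numpy as np

def inception_score(pyx, eps=1e-12):
    """Inception score from an (N, num_classes) matrix of softmax predictions."""
    py = pyx.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
    kl = np.sum(pyx * (np.log(pyx + eps) - np.log(py + eps)), axis=1)
    return float(np.exp(kl.mean()))
```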
Config./Class/FID | Debris | Dense | Diff. | Spread | Average |
---|---|---|---|---|---|
dcGAN | 27.73 | 36.32 | 72.77 | 18.51 | 38.83 |
dcGAN + MSE | 19.5 | 29.5 | 70.7 | 13.67 | 33.34 |
wGAN | 33.85 | 81.45 | 393.94 | 24.05 | 133.32 |
auxGAN | 31.62 | 155.13 | 117.92 | 69.86 | 93.63 |
mhGAN | 125.22 | 35.03 | 90.63 | 23.05 | 68.48 |
aux-mhGAN (Dense, Diff, Spread) | x | 84.93 | 88.7 | 41.53 | 71.72 |
aux-mhGAN (Dense, Spread) | x | x | 83.37 | 29.55 | 57.54 |
aux-mhGAN (Diff, Spread) | x | 74.0 | x | 41.05 | 56.46 |
Configuration/Class/TPR (Std.) | Debris | Dense | Diff. | Spread | Average |
---|---|---|---|---|---|
Unbalanced | 0.9141 (0.0144) | 0.8093 (0.0211) | 0.8807 (0.0342) | 0.9144 (0.0093) * | 0.8789 |
Sampler Balanced | 0.8570 (0.0184) | 0.9300 (0.0189) | 0.9274 (0.0219) | 0.8410 (0.0073) | 0.8888 |
Weight Balanced | 0.9030 (0.0312) | 0.8065 (0.0249) | 0.8439 (0.0715) | 0.9300 (0.0290) | 0.8708 |
Generator Balanced | 0.9105 (0.0206) | 0.7940 (0.0116) | 0.8999 (0.0247) | 0.9172 (0.0124) | 0.8804 |
Temporally Balanced | 0.9277 (0.0148) | 0.8157 (0.0142) | 0.8856 (0.0289) | 0.9646 (0.0040) * | 0.8984 |
Configuration/Class/TPR (std.) | Debris | Dense/Diff./Spread | Average |
---|---|---|---|
Unbalanced | 0.9145 (0.0097) | 0.9570 (0.0058) | 0.9357 |
Generator Balanced | 0.9277 (0.0148) | 0.9545 (0.0053) | 0.9411 |
Configuration/Class/TPR (std.) | Diff. | Dense/Spread | Average |
---|---|---|---|
Unbalanced | 0.8792 (0.0255) | 0.9941 (0.0007) | 0.9367 |
Generator Balanced | 0.8856 (0.0289) | 0.9935 (0.0007) | 0.9396 |
Configuration/Class/TPR (std.) | Dense | Spread | Average |
---|---|---|---|
Unbalanced | 0.8187 (0.0140) | 0.9624 (0.0035) | 0.8906 |
Generator Balanced | 0.8157 (0.0142) | 0.9646 (0.0040) | 0.8902 |
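The three stage tables above correspond to a cascade of binary decisions: debris versus everything else, then differentiated versus dense/spread, then dense versus spread. A minimal sketch of such a cascade, assuming three hypothetical binary CNNs that each return the probability of their first-named class; the stage ordering follows the tables, but the thresholding is an illustrative choice.

```python
import torch

def temporal_classify(patch, debris_vs_rest, diff_vs_rest, dense_vs_spread, thr=0.5):
    """Sort one image patch through the hierarchical (temporal) stages."""
    with torch.no_grad():
        if debris_vs_rest(patch).item() > thr:   # stage 1: debris vs. dense/diff/spread
            return "debris"
        if diff_vs_rest(patch).item() > thr:     # stage 2: differentiated vs. dense/spread
            return "differentiated"
        if dense_vs_spread(patch).item() > thr:  # stage 3: dense vs. spread
            return "dense"
    return "spread"
```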
Configuration/Class/F1 (std.) | Debris | Dense | Diff. | Spread | Average |
---|---|---|---|---|---|
Unbalanced | 0.8732 (0.0059) | 0.8430 (0.0082) * | 0.8580 (0.0164) | 0.9119 (0.0036) ** | 0.8715 |
Generator-Balanced | 0.8599 (0.0099) | 0.8599 (0.0050) * | 0.8714 (0.0009) | 0.9433 (0.0030) ** | 0.8836 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Witmer, A.; Bhanu, B. Generative Adversarial Networks for Morphological–Temporal Classification of Stem Cell Images. Sensors 2022, 22, 206. https://doi.org/10.3390/s22010206