DeepGraviLens: a multi-modal architecture for classifying gravitational lensing data
Pages 19253–19277
Abstract
Gravitational lensing is the relativistic effect produced by massive bodies, which bend the space-time surrounding them. It is a deeply investigated topic in astrophysics: it enables the validation of theoretical relativistic results and the study of faint astrophysical objects that would otherwise be invisible. In recent years, machine learning methods have been applied to support the analysis of gravitational lensing phenomena by detecting lensing effects in datasets consisting of images associated with brightness-variation time series. However, state-of-the-art approaches either consider only images and neglect time-series data or achieve relatively low accuracy on the most difficult datasets. This paper introduces DeepGraviLens, a novel multi-modal network that classifies spatio-temporal data belonging to one non-lensed system type and three lensed system types. It surpasses the current state-of-the-art accuracy results, with improvements that depend on the considered dataset. Such an improvement will accelerate the analysis of lensed objects in upcoming astrophysical surveys, which will exploit the petabytes of data collected, e.g., by the Vera C. Rubin Observatory.
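The abstract describes a multi-modal classifier that fuses an image branch with a time-series branch before predicting one of four system types. The following is a minimal NumPy sketch of that late-fusion idea, not the paper's architecture: the cutout size, light-curve length, feature widths, and random projections are all hypothetical stand-ins (a real model would use a CNN for the image branch and an RNN for the brightness time series).

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(img, W):
    # Stand-in for a CNN: flatten the image cutout and project it
    # to a fixed-size feature vector.
    return np.tanh(img.reshape(-1) @ W)

def series_branch(ts, W):
    # Stand-in for an RNN/LSTM: project the brightness time series
    # to a feature vector of the same width.
    return np.tanh(ts @ W)

def softmax(z):
    # Numerically stable softmax over the class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: a 45x45 cutout, a 20-epoch light curve,
# 8-dimensional branch features, 4 output classes
# (one non-lensed type plus three lensed types).
W_img = rng.normal(size=(45 * 45, 8))
W_ts = rng.normal(size=(20, 8))
W_cls = rng.normal(size=(16, 4))

img = rng.normal(size=(45, 45))   # toy image cutout
ts = rng.normal(size=20)          # toy brightness time series

# Late fusion: concatenate the two branch features, then classify.
fused = np.concatenate([image_branch(img, W_img), series_branch(ts, W_ts)])
probs = softmax(fused @ W_cls)
print(probs.shape, round(float(probs.sum()), 6))  # prints: (4,) 1.0
```

The design choice sketched here, encoding each modality separately and fusing the feature vectors before the final classifier, is the standard late-fusion pattern for combining spatial and temporal inputs.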
Information
Published In
© The Author(s) 2023.
Publisher
Springer-Verlag
Berlin, Heidelberg
Publication History
Published: 23 June 2023
Accepted: 12 June 2023
Received: 14 March 2023
Qualifiers
- Research-article
Funding Sources
- Politecnico di Milano