Automated MRI Video Analysis for Pediatric Neuro-Oncology: An Experimental Approach
Figure 1. Progression of the questions in the study from general to specific, with the reasoning behind each step.
Figure 2. Sagittal plane MRI scan (A) and frontal plane MRI scan (B) showing a multifocal cystic lesion with a solid component, mainly in the medial part, peripherally in the left cerebellar hemisphere. The lesion measures approximately 53 × 44 × 40 mm (SD × AP × CC) and adheres to the inner plate of the occipital bone, causing its thinning, and to the cerebellar tentorium. The solid part of the lesion shows fairly uniform contrast enhancement.
Figure 3. Fragment of the answer from Gemini Pro regarding the analyzed MRI video material.
Figure 4. Analysis performed by ChatGPT 4o for the frontal plane. The results of the motion detection and contour analysis indicate the following: 1. Contours in frames (left side): the green contours highlight regions where significant changes or movements occurred between consecutive frames; these regions could correspond to moving objects, changing light conditions, or other dynamic elements in the video. 2. Thresholded difference (right side): the binary images show the areas where the differences between frames exceed a certain threshold; white areas represent significant changes, while black areas indicate little to no change.
Figure 5. Analysis performed by ChatGPT 4o for the sagittal plane. The frames with segmentation and annotations show the following: Segmentation: green contours highlight potential areas of interest in the images. Annotation detection: red lines indicate detected line segments, which may correspond to annotations such as arrows.
Figure 6. ChatGPT 4o’s response regarding tumor detection in the frontal plane. The analysis of the first frame indicates two potential tumor regions, highlighted with green contours.
Figure 7. ChatGPT 4o’s response regarding tumor detection in the sagittal plane. Potential areas of interest in the three selected frames are highlighted; the red rectangles indicate regions where there might be abnormal enhancement, suggesting the presence of a tumor.
Featured Application
Abstract
1. Introduction
2. Materials and Methods
2.1. Human Evaluation
2.2. Preparation of Radiological Material for Analysis
2.3. AI System Evaluation Methodology
2.3.1. Aim and Method of Question Gradation
- Preparation of AI Models: The strategy of question gradation allowed for the assessment of the AI models’ ability to recognize and analyze medical recordings at different levels of detail. The questions began with very general ones to check if the model could understand and identify the content of the video material. The questions then became more specific to examine the models’ analytical capabilities in more detail;
- Gradation of Difficulty: Starting with general questions, researchers could first establish whether the model had a basic ability to interpret the image. Subsequently, more detailed questions allowed for a more thorough examination of how well the model could identify specific pathological features. This approach minimized the risk of overlooking significant errors in the models’ performance at an early stage.
2.3.2. Set of Questions 1: General Questions
- Could you analyze this video?
- Do I need a more detailed scientific analysis of what you observe in the attached video?
- Are you able to recognize what this video contains?
2.3.3. Set of Questions 2: More Specific Questions
- Are you able to recognize what this video contains?
- What does the attached video show?
2.3.4. Set of Questions 3: Detailed Questions
- I am uploading a video file of a brain MRI with contrast showing a tumor. Are you able to analyze this video and identify the pathological change?
3. Results
3.1. Results of the AI Model Analysis
3.1.1. GEMINI PRO
3.1.2. ChatGPT 4o
3.2. Summary and Tabulation of Differences and Similarities in AI Models’ Responses to the First Set of Questions
3.2.1. Similarities
- Initial Analysis Methodology
- Both models began the analysis by extracting key frames from the video;
- In both cases, it was suggested to use image analysis methods such as contrast, edge detection, and motion analysis;
- Advanced Techniques
- Both models suggested using object detection algorithms, although they encountered technical limitations in accessing the necessary libraries and model files;
- Analysis Steps
- Both models conducted motion analysis and edge detection to identify regions of interest.
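The edge detection step that both models reported can be illustrated with a simple finite-difference gradient. The following is a minimal, stdlib-only sketch of the idea on a toy 2-D integer grid, not the models’ actual code; in practice a proper detector such as OpenCV’s `cv2.Canny` would be applied to the real MRI frames:

```python
def edge_map(frame, threshold=50):
    """Mark a pixel as an edge (1) when the horizontal or vertical
    intensity gradient exceeds the threshold -- a crude stand-in for
    a proper edge detector such as Canny."""
    h, w = len(frame), len(frame[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(frame[y][x + 1] - frame[y][x])  # horizontal gradient
            gy = abs(frame[y + 1][x] - frame[y][x])  # vertical gradient
            if max(gx, gy) > threshold:
                edges[y][x] = 1
    return edges

# Toy frame with a vertical step edge between columns 1 and 2.
frame = [[0, 0, 200, 200],
         [0, 0, 200, 200],
         [0, 0, 200, 200]]
edges = edge_map(frame)  # edge flagged at column 1 of the first two rows
```

The flagged pixels form the regions of interest that a contour-following step would then group and outline.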
3.2.2. Differences
- Detail of Initial Characterization
- Frontal Plane: Detailed video properties were provided (FPS, number of frames, duration, and dimensions);
- Sagittal Plane: Detailed video properties were not provided;
- Specificity of Analysis
- Frontal Plane: A wide range of analysis methods was suggested (frame-by-frame analysis, object detection, motion analysis, and image processing techniques);
- Sagittal Plane: The focus was on image content analysis, brightness and contrast, object detection, and temporal analysis;
- Results of Initial Analysis
- Frontal Plane: The results indicated regions of motion and contour analysis;
- Sagittal Plane: The results included brightness and contrast analysis and edge detection;
- Recognition of Video Content
- Frontal Plane: The model could not clearly recognize the medical context, suggesting motion and contour analysis;
- Sagittal Plane: The model suggested a medical imaging context and tumor detection based on context and image analysis.
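The thresholded-difference step described for the frontal plane (Figure 4) amounts to binarising the absolute per-pixel change between consecutive frames. A stdlib-only sketch on toy 2-D grids, hedged as an illustration of the technique rather than the model’s actual pipeline (which reportedly used OpenCV, e.g. `cv2.absdiff` followed by `cv2.threshold` and `cv2.findContours`):

```python
def threshold_difference(prev_frame, curr_frame, threshold=30):
    """Binarised absolute difference between two consecutive frames:
    255 (white) where the pixel change exceeds the threshold,
    0 (black) where there is little or no change."""
    return [[255 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

# Toy 3x3 "frames": only the centre pixel changes significantly.
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[12, 10, 10], [10, 200, 10], [10, 10, 11]]
mask = threshold_difference(prev, curr)
# -> [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
```

The white regions of such a mask are what the green contours in Figure 4 outline.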
3.3. Summary and Tabulation of Differences and Similarities in AI Models’ Responses to the Second Set of Questions
3.3.1. Similarities
- Analysis Methodology
- Both models suggest frame extraction as the fundamental step for analyzing video content;
- Basic Video Data
- Both models provide detailed information on the number of frames, frames per second (FPS), and video duration;
- Readiness for Further Analysis
- Both models are prepared for further analysis based on additional details or specific questions about the video content.
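The basic video data both models reported are mutually consistent and easy to verify: duration is simply frame count divided by frame rate. A minimal sketch of that check (in practice the metadata would be read from the file itself, e.g. via OpenCV’s `cv2.VideoCapture` with `CAP_PROP_FRAME_COUNT` and `CAP_PROP_FPS`):

```python
def video_summary(frame_count, fps):
    """Basic clip properties of the kind both models reported:
    frame count, frame rate, and the duration implied by them."""
    return {"frames": frame_count, "fps": fps,
            "duration_s": frame_count / fps}

frontal = video_summary(200, 20)   # duration_s: 10.0, as reported
sagittal = video_summary(180, 20)  # duration_s: 9.0, as reported
```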
3.3.2. Differences
- Direct Video Analysis
- In the frontal plane, the model indicates an inability to directly play or analyze the video, suggesting the need for additional information or specific queries. In the sagittal plane, the model undertakes direct analysis by extracting frames and providing key information about the video;
- Frame Extraction
- In the frontal plane, only the first frame is extracted without details about its content. In the sagittal plane, 10 frames are extracted at equal intervals, with a description of each frame’s content;
- Frame Content Description
- The frontal plane lacks a detailed description of the first frame, while the sagittal plane provides detailed descriptions of each of the 10 extracted frames, including scene context, perspective, motion, and interactions.
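The sagittal-plane behaviour — extracting 10 frames at equal intervals from the clip — reduces to a simple index computation. A stdlib-only sketch, assuming the model stepped evenly through the 180 frames (the exact indices it used were not reported):

```python
def sample_indices(total_frames, n_samples):
    """Indices of n_samples frames taken at (approximately) equal
    intervals across a clip of total_frames frames."""
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

indices = sample_indices(180, 10)
# -> [0, 18, 36, 54, 72, 90, 108, 126, 144, 162]
```

Each sampled index would then be decoded and described individually, as in the sagittal-plane responses.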
3.4. Summary and Tabulation of Differences and Similarities in AI Models’ Responses to the Third Set of Questions
3.4.1. Similarities
- Analysis Methodology
- Both models began by extracting frames from the video and suggested further frame analysis to identify pathological changes, with an emphasis on identifying tumor regions;
- Identified Elements
- Both models identify and mark potential tumor areas in the video frames;
- Next Steps
- Both models suggest the possibility of further frame analysis or focusing on more detailed aspects of the analysis, including the need for additional clinical or radiological information.
3.4.2. Differences
- Extent of Frame Extraction
- In the frontal plane, only the first 10 frames are extracted, potentially limiting the full video sequence analysis. In the sagittal plane, all 180 frames are extracted, allowing for a more comprehensive video sequence analysis;
- Marking Methodology
- In the frontal plane, potential tumor regions are marked with green contours. In the sagittal plane, potential tumor regions are marked with red rectangles;
- Detail of Analysis
- The frontal plane suggests further frame analysis without providing details on the needed clinical information, while the sagittal plane clearly suggests the need for additional clinical or radiological information for a more precise analysis.
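The red-rectangle marking used in the sagittal plane amounts to taking the bounding box of a binary mask of candidate pixels. A stdlib-only sketch of that step (OpenCV would use `cv2.boundingRect` on a detected contour); the toy mask here is a hypothetical output of an earlier thresholding step, not data from the study:

```python
def bounding_rect(mask):
    """Smallest axis-aligned rectangle (x, y, w, h) covering all
    non-zero pixels of a binary mask; None for an empty mask."""
    points = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not points:
        return None
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# Toy mask with a small cluster of candidate pixels.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
rect = bounding_rect(mask)  # -> (1, 1, 2, 2)
```

Drawing this rectangle onto the frame reproduces the style of marking shown in Figure 7.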
4. Discussion
4.1. Comprehensive Summary
4.2. Potential Reasons for Failure
- Lack of Medical Specialization in the Models: ChatGPT 4o and Gemini Pro are general-purpose language models, not specifically designed or trained to analyze medical video images. Although they process textual information proficiently and demonstrate some medical knowledge [22], their lack of specialization in medical image analysis results in low accuracy and inconsistency in interpreting MRI data. Specialized medical LLMs, trained on authoritative, human-validated medical databases, are recommended as providing greater accuracy and completeness in medical fields [23]. Evaluations of ChatGPT 4o’s performance on medical licensing exams show high proficiency in handling textual and visual questions, meeting passing criteria but demonstrating limitations in clinical assessment and prioritization [24]. Additionally, a study comparing LLMs with healthcare-specific NLP tools found that ChatGPT 4 performed similarly in some tasks but less accurately in others, highlighting the need for task-specific evaluation before implementing LLMs in medical contexts [25];
- Technical Limitations in Video Processing: The technical limitations of ChatGPT 4o and Gemini Pro in video processing arise primarily from their focus on text and static-image analysis and their lack of the advanced visual analysis algorithms necessary for complex MRI sequences [26,27]. Although ChatGPT excels in natural language processing tasks, it encounters challenges in tasks such as summarization and commonsense reasoning [27]. Additionally, the explainability of results in language models like ChatGPT poses significant challenges, hindering their use in sensitive applications [28]. Integrating video analysis with text-based language analysis remains a technological hurdle that has not been effectively addressed, underscoring the need for further development and optimization of AI models like ChatGPT [27]. Overcoming these limitations is crucial for enhancing the models’ capabilities for more comprehensive and integrated data analysis across different modalities;
- Limited Ability to Understand Medical Context: Although these models have some medical knowledge, their ability to understand specific clinical contexts is limited. The inability to correctly identify the tumor in the video material may stem from their incapacity to apply medical context in analyzing MRI images. Research articles provide insights into the performance of AI models like ChatGPT in medical contexts. While ChatGPT 4o demonstrated high proficiency in handling text- and image-based questions on the Japanese Medical Licensing Examination (JMLE) [20], it is noted that the model had difficulties with clinical assessment and prioritization, indicating limitations in applying medical context in certain task scenarios. Similarly, a study evaluating ChatGPT’s ability to diagnose keratinocyte tumors found that although ChatGPT 4 improved diagnostic accuracy compared to ChatGPT 3.5, it still had limitations in specific tumor identification [29]. Additionally, a study comparing ChatGPT with Google in diagnosing rare rheumatologic diseases highlighted ChatGPT’s comparable diagnostic effectiveness but with significantly reduced query execution time, suggesting its practicality in clinical settings [30]. These findings collectively suggest that while AI models like ChatGPT possess medical knowledge, their ability to understand specific clinical contexts, particularly in tasks such as identifying tumors in MRI images, may still be limited, necessitating further improvements to enhance accuracy and reliability [31,32];
- Issues with Access to Sufficient Training Data: For AI models to be effective in analyzing medical images, they must be trained on large diverse medical datasets. It is possible that the data used to train ChatGPT 4o and Gemini Pro did not include a sufficient number of cases involving contrast-enhanced brain MRI, resulting in their inability to analyze such data correctly;
- Mismatch with Diagnostic Tasks: ChatGPT 4o and Gemini Pro were not originally designed for diagnostic tasks. Gemini Pro, in particular, may have built-in limitations regarding medical analysis, which could have led to the refusal to conduct the analysis and generate responses suggesting a lack of diagnostic capability;
- Lack of Specialized Training for Models: ChatGPT 4o and Gemini Pro are general language models not specifically trained for analyzing medical video images. Their application in a medical context is thus limited, which significantly affects their ability to recognize MRI video content and identify pathological changes;
- Limited Number of Cases: The study was based on analyzing only one MRI video case of a child with a brain tumor. A larger number of diverse cases could provide more representative data and better assess the models’ ability to analyze different types of neoplastic changes;
- Specificity of the Selected Material: The research material came from a single patient, which may limit the generalizability of the results. Brain tumors can vary depending on many factors, such as patient age, tumor location, or histopathological type, which could affect the AI models’ analysis results;
- Lack of Comparison with Other Tools: The study did not include a comparison with other specialized AI tools designed for medical image analysis. Such a comparison could provide valuable insights into the relative effectiveness of ChatGPT 4o and Gemini Pro compared to tools specifically created for MRI image analysis;
- Technical Limitations: AI models may have technical limitations in video processing, which, combined with a limited number of frames per second and image quality, could affect the models’ ability to correctly analyze the material.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Uppalapati, V.K.; Nag, D.S. A Comparative Analysis of AI Models in Complex Medical Decision-Making Scenarios: Evaluating ChatGPT, Claude AI, Bard, and Perplexity. Cureus 2024, 16, e52485. [Google Scholar] [CrossRef] [PubMed]
- Waisberg, E.; Ong, J.; Masalkhi, M.; Zaman, N.; Sarker, P.; Lee, A.G.; Tavakkoli, A. GPT-4 and medical image analysis: Strengths, weaknesses and future directions. J. Med. Artif. Intell. 2023, 6, 29. [Google Scholar] [CrossRef]
- Zong, H.; Li, J.; Wu, E.; Wu, R.; Lu, J.; Shen, B. Performance of ChatGPT on Chinese national medical licensing examinations: A five-year examination evaluation study for physicians, pharmacists and nurses. BMC Med. Educ. 2024, 24, 143. [Google Scholar] [CrossRef] [PubMed]
- Saravia-Rojas, M.Á.; Camarena-Fonseca, A.R.; León-Manco, R.; Geng-Vivanco, R. Artificial intelligence: ChatGPT as a disruptive didactic strategy in dental education. J. Dent. Educ. 2024, 88, 872–876. [Google Scholar] [CrossRef]
- Pradhan, F.; Fiedler, A.; Samson, K.; Olivera-Martinez, M.; Manatsathit, W.; Peeraphatdit, T. Artificial intelligence compared with human-derived patient educational materials on cirrhosis. Hepatol. Commun. 2024, 8, e0367. [Google Scholar] [CrossRef]
- Masalkhi, M.; Ong, J.; Waisberg, E.; Lee, A.G. Google DeepMind’s gemini AI versus ChatGPT: A comparative analysis in ophthalmology. Eye 2024, 38, 1412–1417. [Google Scholar] [CrossRef]
- Maniaci, A.; Fakhry, N.; Chiesa-Estomba, C.; Lechien, J.R.; Lavalle, S. Synergizing ChatGPT and general AI for enhanced medical diagnostic processes in head and neck imaging. Eur. Arch. Otorhinolaryngol. 2024, 281, 3297–3298. [Google Scholar] [CrossRef]
- Maaz, M.; Rasheed, H.; Khan, S.; Khan, F.S. Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models. arXiv 2024, arXiv:2306.05424. [Google Scholar] [CrossRef]
- Reddy Kalli, V.D. Creating an AI-powered platform for neurosurgery alongside a usability examination: Progressing towards minimally invasive robotics. J. Artif. Intell. Gen. Sci. (JAIGS) 2024, 3, 256–268. [Google Scholar] [CrossRef]
- Dip, S.S.; Rahman, M.H.; Islam, N.; Arafat, M.E.; Bhowmick, P.K.; Yousuf, M.A. Enhancing Brain Tumor Classification in MRI: Leveraging Deep Convolutional Neural Networks for Improved Accuracy. Int. J. Inf. Technol. Comput. Sci. 2024, 16, 12–21. [Google Scholar] [CrossRef]
- Lemaire, R.; Raboutet, C.; Leleu, T.; Jaudet, C.; Dessoude, L.; Missohou, F.; Poirier, Y.; Deslandes, P.Y.; Lechervy, A.; Lacroix, J.; et al. Artificial intelligence solution to accelerate the acquisition of MRI images: Impact on the therapeutic care in oncology in radiology and radiotherapy departments. Cancer Radiother. 2024, 28, 251–257. [Google Scholar] [CrossRef] [PubMed]
- Wood, D.; Guilhem, E.; Kafiabadi, S.; Al Busaidi, A.; Hammam, A.; Mansoor, N.; Townend, M.; Agarwal, S.; Wei, Y.; et al. Automated Brain Abnormality Detection using a Self-Supervised Text-Vision Framework. Authorea 2024, 2, 1. [Google Scholar] [CrossRef]
- Chen, H.; Xu, Q.; Zhang, L.; Kiraly, A.P.; Novak, C.L. Automated definition of mid-sagittal planes for MRI brain scans. In Medical Imaging 2007: Image Processing; SPIE: Bellingham, WA, USA, 2007. [Google Scholar] [CrossRef]
- Kozel, G.; Gurses, M.E.; Gecici, N.N.; Gökalp, E.; Bahadir, S.; Merenzon, M.A.; Shah, A.H.; Komotar, R.J.; Ivan, M.E. Chat-GPT on brain tumors: An examination of Artificial Intelligence/Machine Learning’s ability to provide diagnoses and treatment plans for example neuro-oncology cases. Clin. Neurol. Neurosurg. 2024, 239, 108238. [Google Scholar] [CrossRef] [PubMed]
- Abbas, A.A.; Shitran, R.; Dagash, H.T.; Khalil, M.A.; Abdulrazzaq, R. Prevalence of Pediatric brain tumor in children from a tertiary neurosurgical center, during a period from 2010 to 2018 in Baghdad, Iraq. Ann. Trop. Med. Public Health 2021, 24, 315–321. [Google Scholar] [CrossRef]
- Elgamal, E.A.; Mohamed, R.M. Pediatric Brain Tumors. In Clinical Child Neurology; Salih, M.A., Ed.; Springer: Cham, Germany, 2020; pp. 1033–1068. [Google Scholar] [CrossRef]
- Jaju, A.; Yeom, K.W.; Ryan, M.E. MR Imaging of Pediatric Brain Tumors. Diagnostics 2022, 12, 961. [Google Scholar] [CrossRef]
- Sultan, L.R.; Mohamed, M.K.; Andronikou, S. ChatGPT-4: A Breakthrough in Ultrasound Image Analysis. Radiol. Adv. 2024, 1, umae006. [Google Scholar] [CrossRef]
- Lee, K.-H.; Lee, R.-W. ChatGPT’s Accuracy on Magnetic Resonance Imaging Basics: Characteristics and Limitations Depending on the Question Type. Diagnostics 2024, 14, 171. [Google Scholar] [CrossRef]
- Perera Molligoda Arachchige, A.S. Empowering Radiology: The Transformative Role of ChatGPT. Clin. Radiol. 2023, 78, 851–855. [Google Scholar] [CrossRef]
- Rawas, S.; Tafran, C.; AlSaeed, D. ChatGPT-powered Deep Learning: Elevating Brain Tumor Detection in MRI Scans. Appl. Comput. Inform. 2024, 1–13. [Google Scholar] [CrossRef]
- Yan, Z.; Liu, J.; Shuang, L.; Xu, D.; Yang, Y.; Wang, H.; Mao, J.; Tseng, H.; Chang, T.; Chen, Y.; et al. Large Language Models (LLMs) vs. Specialist Doctors: A Comparative Study on Health Information in specific medical domains. J. Med. Internet Res. 2024. in preprint. [Google Scholar] [CrossRef]
- Wang, S. Beyond ChatGPT: It Is Time to Focus More on Specialized Medical LLMs. J. Endourol. 2024, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Miyazaki, Y.; Hata, M.; Omori, H.; Hirashima, A.; Nakagawa, Y.; Etō, M.; Takahashi, S.; Ikeda, M. Performance and Errors of ChatGPT-4o on the Japanese Medical Licensing Examination: Solving All Questions Including Images with Over 90% Accuracy. JMIR Med. Educ. 2024. in preprint. [Google Scholar] [CrossRef]
- Rough, K.; Feng, H.; Milligan, P.B.; Tombini, F.; Kwon, T.; El Abidine, K.Z.; Mack, C.; Hughes, B. How well it works: Benchmarking performance of GPT models on medical natural language processing tasks. medRxiv 2024. [Google Scholar] [CrossRef]
- Patil, P.; Kulkarni, K.; Sharma, P. Algorithmic Issues, Challenges, and Theoretical Concerns of ChatGPT. In Applications, Challenges, and the Future of ChatGPT; Sharma, P., Jyotiyana, M., Senthil Kumar, A.V., Eds.; IGI Global: Hershey, PA, USA, 2024; Chapter 3; pp. 56–74. [Google Scholar] [CrossRef]
- Wu, Y. Evaluating ChatGPT: Strengths and Limitations in NLP Problem Solving. Highl. Sci. Eng. Technol. 2024, 94, 319–325. [Google Scholar] [CrossRef]
- Arnold, T. Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit. In KI:Text: Diskurse Über KI-Textgeneratoren; Schreiber, G., Ohly, L., Eds.; De Gruyter: Berlin, Germany; Boston, MA, USA, 2024; pp. 67–80. [Google Scholar] [CrossRef]
- Yang, K.; Zeb, L.; Bae, S.; Pavlidakey, P.G. Diagnostic Accuracy of ChatGPT for Textbook Descriptions of Epidermal Tumors: An Exploratory Study. Am. J. Dermatopathol. 2024, 46, 632–634. [Google Scholar] [CrossRef]
- Lasnier-Siron, J. Pos0749 Respective Performances of ChatGPT and Google for the Diagnosis of Rare Diseases in Rheumatology. Ann. Rheum. Dis. 2024, 83, 1115–1116. [Google Scholar] [CrossRef]
- Holland, A.; Lorenz, W.; Cavanaugh, J.; Ayuso, S.; Scarola, G.; Jorgensen, L.; Kercher, K.; Smart, N.; Fischer, J.; Janiset, J.; et al. ChatGPT, MD: A Pilot Study Utilizing Large Language Models to Write Medical Abstracts. Br. J. Surg. 2024, 111 (Suppl. S5), znae122-039. [Google Scholar] [CrossRef]
- Li, K.C.; Bu, Z.J.; Shahjalal, M.; He, B.X.; Zhuang, Z.F.; Li, C.; Liu, J.P.; Wang, B.; Liu, Z.L. Performance of ChatGPT on Chinese Master’s Degree Entrance Examination in Clinical Medicine. PLoS ONE 2024, 19, e0301702. [Google Scholar] [CrossRef]
| Criterion | Frontal Plane | Sagittal Plane |
|---|---|---|
| General Characteristics | Video properties: 20 FPS, 200 frames, 10 s, 1024 × 1024 pixels | Detailed video properties not provided |
| Analysis Methods | Frame-by-frame, object detection, motion analysis, image processing | Frame extraction, image content analysis, contrast and brightness analysis, object detection, temporal analysis |
| Analysis Specificity | Enhancements (contrast, edge detection) | Edge detection, regions of interest based on brightness and contrast |
| Identified Elements | Contours, significant changes, motion regions | Significant features, structural elements, potential regions of interest |
| Advanced Techniques | Object detection (OpenCV), contour analysis, motion detection | Object detection (OpenCV), edge detection |
| Initial Analysis Results | Motion regions, contour analysis | Brightness and contrast variations, edge detection results |
| Next Steps | Tracking motion, object recognition | Motion tracking, detailed region analysis |
| Criterion | Frontal Plane | Sagittal Plane |
|---|---|---|
| Video Content Recognition | I cannot directly play or analyze the video. Please provide more details or specific queries. | I can analyze video content starting with frame extraction and key information. |
| Detailed Video Data | 200 frames, 20 FPS, 10 s | 180 frames, 20 FPS, 9 s |
| Frame Extraction | Extracted first frame, no detailed content provided. | Extraction of 10 frames at equal intervals, with detailed description of each frame. |
| Frame Content Description | No detailed description of the first frame. | Detailed description of each of the 10 extracted frames, focusing on scene context, perspective, movement, and interactions. |
| Next Steps | Request for further information or specific queries. | Suggested further analysis or detailed video content information based on frames. |
| Criterion | Frontal Plane | Sagittal Plane |
|---|---|---|
| Video Content Recognition | Yes, analysis is possible, but detailed medical analysis should be performed by a specialist. | Yes, analysis is possible with a focus on identifying pathological changes, especially tumors. |
| Frame Extraction | Extracted the first 10 frames: Frames 0, 10, 20, 30, 40, 50, 60, 70, 80, and 90. | Extracted all 180 frames from the video. |
| Analysis Methods | Frame analysis to identify potential tumor areas, marked with green contours. | Frame analysis to identify pathological changes, especially tumors, marked with red rectangles. |
| Identified Elements | Potential tumor regions marked with green contours. | Potential tumor regions marked with red rectangles. |
| Next Steps | Further analysis of more frames or focus on a specific aspect of the analysis. | Suggested more detailed analysis based on clinical information or radiological markers. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Fabijan, A.; Zawadzka-Fabijan, A.; Fabijan, R.; Zakrzewski, K.; Nowosławska, E.; Kosińska, R.; Polis, B. Automated MRI Video Analysis for Pediatric Neuro-Oncology: An Experimental Approach. Appl. Sci. 2024, 14, 8323. https://doi.org/10.3390/app14188323