CN116596830A - Detecting robustness of machine learning models in clinical workflows - Google Patents
- Publication number
- CN116596830A (application CN202310100005.0A)
- Authority
- CN
- China
- Prior art keywords
- network
- medical analysis
- medical
- input
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for mining of medical data, e.g. analysing previous cases of other patients
- G06T7/0012—Biomedical image inspection
- G06N20/00—Machine learning
- G06N3/045—Neural networks; Combinations of networks
- G06N3/084—Learning methods; Backpropagation, e.g. using gradient descent
- G06T7/11—Segmentation; Region-based segmentation
- G06V10/764, G06V10/765—Image or video recognition using machine learning classification, e.g. using rules for classification or partitioning the feature space
- G06V10/82—Image or video recognition using neural networks
- G16H40/20—ICT for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
- G16H50/20—ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/0475—Generative networks
- G06N3/088—Non-supervised learning, e.g. competitive learning
- G06N3/09—Supervised learning
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
- G16H30/40—ICT for processing medical images, e.g. editing
- G16H50/30—ICT for calculating health indices; for individual health risk assessment
Abstract
Systems and methods are provided for determining the robustness of a machine learning based medical analysis network for performing a medical analysis task on input medical data. Input medical data is received. A result of the medical analysis task, performed on the input medical data using the machine learning based medical analysis network, is received. The robustness of the machine learning based medical analysis network for performing the medical analysis task is determined based on the input medical data and the result of the medical analysis task, using a machine learning based audit network. The determination of the robustness of the machine learning based medical analysis network is output.
Description
Technical Field
The present invention relates generally to machine learning models in clinical workflows, and in particular to detecting robustness of machine learning models in clinical workflows.
Background
Machine learning models have been applied to perform various medical analysis tasks such as, for example, detection, segmentation, quantification, and the like. Supervised machine learning models are typically trained offline and deployed at a clinical site (e.g., on a medical imaging scanner) or in the cloud for integration into a clinical workflow for clinical decision making (e.g., diagnosis, treatment planning, etc.). Such machine learning models are typically trained on large training data sets covering a wide range of variations to ensure robust performance. However, regardless of the size of the training data set, such a machine learning model may still be required to perform predictions on data that is significantly different from its training data set.
Disclosure of Invention
In accordance with one or more embodiments, systems and methods are provided for determining the robustness of a machine learning based medical analysis network for performing a medical analysis task on input medical data. Input medical data is received. A result of the medical analysis task, performed on the input medical data using the machine learning based medical analysis network, is received. The robustness of the machine learning based medical analysis network for performing the medical analysis task is determined based on the input medical data and the result of the medical analysis task, using a machine learning based audit network. The determination of the robustness of the machine learning based medical analysis network is output.
In one embodiment, in response to determining that the machine learning based medical analysis network is not robust, it is determined whether the lack of robustness is due to the input medical data being out-of-distribution relative to the training data on which the machine learning based medical analysis network was trained, or due to an artifact in the input medical data or in the result of the medical analysis task. In another embodiment, the machine learning based medical analysis network and the machine learning based audit network are retrained based on the input medical data in response to determining that the machine learning based medical analysis network is not robust. In another embodiment, in response to determining that the machine learning based medical analysis network is not robust, one or more alternative results of the medical analysis task, produced by other machine learning based medical analysis networks, are presented.
In one embodiment, user input is received that edits the result of the medical analysis task to generate a final result of the medical analysis task. The robustness of the machine learning based medical analysis network is then determined based on the final result of the medical analysis task.
In one embodiment, the machine learning based audit network is implemented using a normalizing flow model.
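As a hypothetical illustration of how a flow model can act as an audit network, the sketch below implements a single affine coupling step in plain Python and uses the log-likelihood under the flow as a robustness score: low likelihood suggests out-of-distribution input. All functions, parameters, and values here are invented for illustration and are not taken from the patent.

```python
import math

def coupling_forward(x1, x2, scale=0.5, shift=1.0):
    """One affine coupling step: x1 passes through unchanged, x2 is
    scaled and translated by (fixed, illustrative) functions of x1."""
    s = scale * x1            # log-scale computed from x1
    t = shift * x1            # translation computed from x1
    z1, z2 = x1, x2 * math.exp(s) + t
    return z1, z2, s          # s is log|det Jacobian| of the step

def log_likelihood(x1, x2):
    """Log-density of (x1, x2) under the flow with a standard-normal
    base distribution; used here as a robustness score."""
    z1, z2, log_det = coupling_forward(x1, x2)
    log_pz = -0.5 * (z1**2 + z2**2) - math.log(2 * math.pi)
    return log_pz + log_det

# A point far from the training distribution gets a much lower score.
assert log_likelihood(0.1, 0.2) > log_likelihood(5.0, -6.0)
```

In a trained audit network, the scale and translation functions of each coupling step would be learned networks rather than fixed linear maps, and the resulting score would be thresholded to decide whether the analysis network's output can be trusted.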
In one embodiment, in response to determining that the machine learning based medical analysis network is not robust, an alert is generated informing the user that the machine learning based medical analysis network is not robust or requesting input from the user. Input may be received from the user to override the determination that the medical analysis network is not robust or to edit the result of the medical analysis task.
In one embodiment, the medical analysis task includes at least one of segmenting, determining a vessel centerline, or calculating Fractional Flow Reserve (FFR).
These and other advantages of the present invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and accompanying drawings.
Drawings
FIG. 1 illustrates a method for determining the robustness of a medical analysis network for performing medical imaging analysis tasks on input medical data in accordance with one or more embodiments;
FIG. 2 illustrates a workflow for training and applying a machine-learning based medical analysis network for performing medical imaging analysis tasks on input medical data and a machine-learning based audit network for determining robustness of the medical analysis network in accordance with one or more embodiments;
FIG. 3 illustrates a workflow for evaluating user input received for performing a medical analysis task in accordance with one or more embodiments;
FIG. 4 illustrates a workflow for calculating FFR (fractional flow reserve) from CCTA (coronary CT angiography) images in accordance with one or more embodiments;
FIG. 5 illustrates a graph comparing robustness determined for independent input data sets with ground truth labels according to embodiments described herein;
FIG. 6 illustrates an exemplary Glow-based normalizing flow network architecture for an audit network in accordance with one or more embodiments;
FIG. 7 illustrates a table for implementing a Glow-based normalizing flow network in accordance with one or more embodiments;
FIG. 8 illustrates a table for implementing a dual-headed 3D CNN (convolutional neural network) implementing a scale function and a translation function in accordance with one or more embodiments;
FIG. 9 illustrates a table for implementing a CNN that computes kernel k and vector b in accordance with one or more embodiments;
FIG. 10 illustrates a network architecture for implementing a normalizing flow audit network in accordance with one or more embodiments;
FIG. 11 illustrates a table for implementing the network architecture of FIG. 10 in accordance with one or more embodiments;
FIG. 12 illustrates training images for training a normalizing flow audit network in accordance with one or more embodiments;
FIG. 13 illustrates a graph showing the probability variation across one vessel segment from 80 cross-sections of a test dataset in accordance with one or more embodiments;
FIG. 14 illustrates a saliency map of a normalizing flow audit model in accordance with one or more embodiments;
FIG. 15 illustrates a workflow for reducing the number of rejected cases in a clinical center in accordance with one or more embodiments;
FIG. 16 illustrates an exemplary artificial neural network that can be used to implement one or more embodiments;
FIG. 17 illustrates a convolutional neural network that may be used to implement one or more embodiments; and
FIG. 18 depicts a high-level block diagram of a computer that may be used to implement one or more embodiments.
Detailed Description
The present invention relates generally to methods and systems for detecting robustness of machine learning models in clinical workflows. Embodiments of the present invention are described herein to give an intuitive understanding of such methods and systems. A digital image is typically made up of a digital representation of one or more objects (or shapes). The digital representation of an object is generally described herein in terms of identifying and manipulating the object. Such manipulations are virtual manipulations that are accomplished in the memory or other circuitry/hardware of a computer system. Thus, it should be understood that embodiments of the invention may be performed within a computer system using data stored within the computer system.
The machine learning based medical analysis network (or model) may be applied to perform various medical analysis tasks on input medical data, such as, for example, detection, segmentation, quantification, clinical decision making, and the like. According to embodiments described herein, a machine learning based audit network is provided to evaluate the robustness of the medical analysis network for performing a medical analysis task on input medical data. Robustness of a medical analysis network refers to the ability of the medical analysis network to accurately perform the medical analysis task on the input medical data. The medical analysis network may not be robust for performing the medical analysis task where, for example, the input medical data is out-of-distribution relative to the training data set on which the medical analysis network was trained, or where the input medical data includes artifacts. Advantageously, the embodiments described herein enable flagging of input medical data, whether already input or about to be input into the medical analysis network, where the medical analysis network is not robust for performing the medical analysis task on that data. For such flagged input medical data, input may be requested from the user, or the user may be alerted that the predictions of the medical analysis network cannot be trusted.
FIG. 1 illustrates a method 100 for determining the robustness of a medical analysis network for performing medical imaging analysis tasks on input medical data in accordance with one or more embodiments. The steps of method 100 may be performed by one or more suitable computing devices, such as, for example, computer 1802 of fig. 18. FIG. 2 illustrates a workflow 200 for training and applying a machine-learning based medical analysis network for performing medical imaging analysis tasks on input medical data and a machine-learning based audit network for determining robustness of the medical analysis network in accordance with one or more embodiments. Fig. 1 and 2 will be described together. The workflow 200 of fig. 2 shows an offline stage 202 for training a medical analysis network and an audit network, and an online stage 204 for applying a trained medical analysis network and a trained audit network. In one example, the steps of the method 100 of fig. 1 are performed during the online phase 204 of fig. 2.
At step 102 of fig. 1, input medical data is received. In one example, as shown in fig. 2, the input medical data may be input medical data 206 of workflow 200. The input medical data may include any suitable medical data of the patient.
In one embodiment, the input medical data may include an input medical image of the patient. The input medical image may have any suitable modality, such as, for example, CT (computed tomography), MRI (magnetic resonance imaging), ultrasound, x-ray or any other medical imaging modality or combination of medical imaging modalities. The input medical image may be a 2D (two-dimensional) image and/or a 3D (three-dimensional) volume and may comprise a single image or multiple images.
The input medical data may include any other suitable medical data of the patient. For example, the input medical data may include sensor data acquired from medical sensors on or in the patient, other clinical data associated with the patient (e.g., a patient questionnaire), or any other medical data of the patient. In one embodiment, the input medical data includes data output from an upstream machine learning based network, such as a network performing an upstream medical analysis task in a cascaded workflow.
The input medical data may be received by loading previously acquired medical data from a storage device or memory of the computer system or receiving medical data that has been transmitted from a remote computer system. In the case where the input medical data includes an input medical image, the medical image may be received directly from an image acquisition device (e.g., image acquisition device 1814 of fig. 18), such as, for example, a CT scanner, when the medical image is acquired.
At step 104 of fig. 1, results of a medical analysis task performed based on input medical data using a machine learning based medical analysis network are received. In one example, as shown in fig. 2, the result of the medical analysis task may be a result 210 of running the medical analysis network based on the input medical data 206 at block 208. The medical analysis task may be any suitable medical analysis task, such as, for example, diagnosis, treatment planning, etc. In one embodiment, the medical analysis task is a medical imaging analysis task performed based on the input medical image, such as, for example, detection, quantification, segmentation, and the like. The medical analysis network may be implemented according to any suitable machine learning based architecture. In one embodiment, the medical analysis network is a supervised machine learning model.
At step 106 of fig. 1, the robustness of the machine learning-based medical analysis network for performing the medical analysis task is determined based on the input medical data and the results of the medical analysis task using the machine learning-based audit network. In one example, as shown in fig. 2, the determination of the robustness of the machine learning based medical analysis network is a robustness determination 214 determined by running an audit network based on the input medical data 206 and the results 210 at block 212. The audit network receives as input the input medical data and results of the medical analysis tasks and generates as output a robustness determination. The audit network may be implemented according to any suitable machine learning-based architecture. In one embodiment, the audit network is an unsupervised machine learning model.
Robustness of a medical analysis network refers to the ability of the medical analysis network to accurately perform the medical analysis task on the input medical data. The medical analysis network may not be robust for performing the medical analysis task where, for example, the input medical data is out-of-distribution relative to the training dataset on which the medical analysis network was trained (i.e., the input medical data falls outside the data distribution of the training dataset), or where the input medical data and/or the result of the medical analysis task includes artifacts. Artifacts may be due to erroneous data acquisition, the output of upstream algorithms (e.g., preprocessing algorithms that generate incorrect output), erroneous user input, and so forth.
The determination of the robustness of the medical analysis network may be represented in any suitable form. In one embodiment, the determination of robustness includes a binary output indicating whether the medical analysis network is robust or not, or whether the results of the medical analysis task should be accepted or not. In another embodiment, the determination of robustness includes multiple classifications. For example, the determination of robustness may include a multi-class output that indicates 1) that the medical analysis network is robust (e.g., the output of the medical analysis network may be trusted without user interaction), 2) that user feedback is requested (e.g., the output of the medical analysis network should be verified by a user), or 3) that the medical analysis network is not robust (e.g., the output of the medical analysis network cannot be trusted). In category 3, the user may verify the output of the medical analysis network and overrule the determination of the audit network. In further embodiments, the determination of the robustness comprises a continuous output, wherein the robustness is represented in a continuous range. For example, the determination of robustness may be a robustness score representing a measure of dissimilarity of the input medical data with the training data on which the medical analysis network is trained. One or more thresholds may be applied to the robustness scores to generate a binary output or a multi-class output.
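As a minimal sketch, the thresholding of a continuous robustness score into the binary or multi-class outputs described above might look as follows; the threshold values and the [0, 1] score range are illustrative assumptions, not values from the patent.

```python
def classify_robustness(score, trust_thresh=0.8, reject_thresh=0.4):
    """Map a continuous robustness score in [0, 1] to a three-way
    decision (illustrative thresholds, not from the patent)."""
    if score >= trust_thresh:
        return "robust"       # output can be trusted without interaction
    if score >= reject_thresh:
        return "verify"       # request user feedback on the output
    return "not_robust"       # output cannot be trusted

assert classify_robustness(0.9) == "robust"
assert classify_robustness(0.5) == "verify"
assert classify_robustness(0.1) == "not_robust"
```

Dropping the middle threshold reduces the same function to the binary accept/reject output, which is why the patent can describe all three output forms as thresholdings of one continuous score.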
At step 108 of fig. 1, a determination of robustness of the machine learning based medical analysis network is output. For example, the determination of robustness may be output by displaying the determination of robustness on a display device of the computer system, storing the determination of robustness on a memory or storage of the computer system, or by transmitting the determination of robustness to a remote computer system.
In one embodiment, step 104 of method 100 of fig. 1 is not performed, and the determination of the robustness of the machine-learning based medical analysis network at step 106 of method 100 is based on the input medical data, but not on the results of the medical analysis tasks.
In one embodiment, in response to determining that the medical analysis network is not robust to performing medical analysis tasks on the input medical data, an alert may be generated, for example, to inform a user that the medical analysis network is not robust and/or to request input from the user. In response to the alert, user input may be received from the user to, for example, override the determination of the audit network, edit the results of the medical analysis task, confirm the determination of the audit network, and the like.
In one embodiment, in response to determining that the medical analysis network is not robust to performing medical analysis tasks on the input medical data, it may be further determined whether the medical analysis network is not robust due to the input medical data being out of distribution relative to training data on which the medical analysis network is trained or due to the input medical data including artifacts. The determination of whether the medical analysis network is not robust due to the input medical data being out of distribution or due to the input medical data comprising artifacts may be performed automatically, manually or semi-automatically. The automatic determination may be performed using a separate machine learning model or a rule-based method. The manual determination may be performed by the user marking the input medical data as out of distribution or with artifacts. Semi-automatic determination may be performed as a combination of automatic and manual determination.
The medical analysis network and the audit network are trained during a prior offline or training phase. For example, as shown in fig. 2, during the offline phase 202 the medical analysis network is trained at block 218 and the audit network is trained at block 220 based on the training data set 216. The training dataset 216 may include training medical images with labels annotating ground truth results of the medical analysis tasks. Once the audit network is trained, robustness criteria may be defined at block 222. For example, the robustness criteria may define one or more thresholds to be applied to the output of the audit network to define a binary output (e.g., robust or non-robust) or a multi-class output (e.g., robust, user feedback required, or non-robust).
In one embodiment, in the event that it is determined that the input medical data 206 is out of distribution relative to the training data on which the medical analysis network is trained, the input medical data 206 may be added to update the training data set 216, and the medical analysis network and the audit network may be retrained based on the updated training data set 216. In some embodiments, data augmentation techniques may be applied to such out-of-distribution input medical data 206, using, for example, standard augmentation techniques or by generating synthetic data similar to such input medical data.
In one embodiment, the audit network is trained as a standardized flow model. The standardized flow model is a bijective generative model based on a deep neural network. The standardized flow model utilizes a stack of coupling layers (or stages). At each layer, some inputs pass through unchanged (equation 1), while the other inputs are modified in a reversible manner based on the passed inputs (equation 2). Affine coupling may be defined as follows:
y_{0...k} = u_{0...k} (equation 1)
y_{k+1...m} = u_{k+1...m} ⊙ s(u_{0...k}) + t(u_{0...k}) (equation 2)
where u denotes the input of each layer, y denotes the output of each layer, and k is the index of the cut between the unchanged inputs and the modified inputs. Each coupling stage computes two functions: a scaling function s(·) that scales the input and a translation function t(·) that translates the input. A permutation is performed at each coupling stage to ensure that each original input is modified at least several times as it passes through the stack of coupling layers. Each affine transformation is a step toward transforming the original input distribution into the desired target distribution.
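A minimal sketch of one affine coupling stage is shown below. The toy scaling and translation functions stand in for learned networks, and for simplicity the passed and modified halves have equal size; all names are illustrative assumptions:

```python
import numpy as np

def s(u):  # stand-in for a learned scaling function (positive-valued)
    return np.exp(np.tanh(u))

def t(u):  # stand-in for a learned translation function
    return 0.5 * u

def coupling_forward(u, k):
    """Affine coupling: u[:k] passes unchanged, u[k:] is scaled/translated
    based on the passed half (assumes len(u) == 2*k)."""
    y = u.copy()
    y[k:] = u[k:] * s(u[:k]) + t(u[:k])
    return y

def coupling_inverse(y, k):
    """Invertible because y[:k] == u[:k], so s and t can be recomputed."""
    u = y.copy()
    u[k:] = (y[k:] - t(y[:k])) / s(y[:k])
    return u
```

The inverse only requires evaluating s and t forward, which is what makes the stage cheap to invert regardless of how complex s and t are.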
The normalized flow model is denoted p(x), where x is input data from the dataset. The normalized flow model p(x) is a one-to-one mapping f from x ∈ X to z ∈ Z. The normalized flow model p(x) may be calculated based on the change-of-variables formula as follows:
p(x) = p_z(f(x)) · |det(∂f(x)/∂x)| (equation 3)
log(p(x)) = log(p_z(f(x))) + log|det(∂f(x)/∂x)| (equation 4)
The raw input data x is projected by f onto z ∈ Z, where z is a latent variable. In one example, p_z may be a simple multivariate Gaussian distribution. The second term in equation 4 describes the amount of spatial stretching or compression performed by the normalized flow model p(x) around x. A loss function may be applied to maximize log(p_X(x)) over all x ∈ X. The basic idea of this approach is to use a simple distribution (for which the density can be easily and quickly calculated) to model a nonlinear embedding of the original input data x. The second term in equation 4 imposes the constraint that f must be bijective.
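The change-of-variables computation in equation 4 can be sketched for a simple elementwise bijection f(x) = a·x + b, whose Jacobian is diagonal so the log-determinant is just sum(log|a|). The specific f is an illustrative assumption; p_z is a standard Gaussian:

```python
import numpy as np

def log_prob(x, a, b):
    """log p(x) for the bijection f(x) = a*x + b with standard-normal p_z.

    The first term is log p_z(f(x)); the second term, sum(log|a|), is the
    log-det Jacobian measuring local stretching/compression around x.
    """
    z = a * x + b
    log_pz = -0.5 * np.sum(z**2 + np.log(2 * np.pi))
    log_det = np.sum(np.log(np.abs(a)))
    return log_pz + log_det
```

With a learned multi-layer flow, log_det is accumulated layer by layer instead of computed in closed form.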
Once trained, a trained medical analysis network and a trained audit network may be applied during an online or inference phase. For example, as shown in fig. 2, a medical analysis network and an audit network are applied during the online phase 204. In one example, the trained medical analysis network and the trained audit network are applied to perform steps 104 and 106, respectively, of fig. 1. The trained audit network may be used to label low probability input data, such as, for example, input medical data that is dissimilar to (i.e., out of distribution relative to) the training data set on which the medical analysis network was trained. Given that the labeled input data is outside of the training distribution, medical analysis networks that perform medical analysis tasks on such labeled input medical data may lack robustness.
In one embodiment, the audit network may be applied to evaluate user input received for performing medical analysis tasks. Some medical analysis tasks are not fully automated and may involve user input, for example, to edit the results of a medical analysis network. As shown in fig. 3, where the medical analysis task involves user input, an audit network may be applied to evaluate whether the user input is likely to be correct.
FIG. 3 illustrates a workflow 300 for evaluating user input received for performing a medical analysis task in accordance with one or more embodiments. Workflow 300 illustrates an updated online phase of online phase 204 of fig. 2. In workflow 300, a medical analysis network receives input medical data 302 and runs at block 304 for performing medical analysis tasks to generate initial results 306. At block 308, user input is received, for example, to edit the initial result 306 to generate a final result 310. The audit network receives as input the input medical data 302, the initial results 306, and the final results 310, and the audit network operates at block 312 to determine a robustness determination 314. The audit network further evaluates whether the final result 310 incorporating the user input is correct (i.e., potentially trusted). The audit network may be trained (during a previous offline or training phase) on training medical data and corresponding final result pairs.
One example of a medical analysis task involving user input is semi-automatic segmentation. The medical analysis network outputs the proposed segmentation as an initial result 306. User input may be received to correct the proposed segmentation as the final result 310. The audit network may then evaluate whether the corrected proposed segmentation is correct. In the event that the input medical data is determined to deviate from the distribution, the final result 310 may be correct, but the robustness determination 314 output from the audit network may indicate that the final result 310 may not be trusted. In this case, the user may overrule the audit network and/or the input medical data may be used to retrain the medical analysis network and the audit network.
In one embodiment, FFR (fractional flow reserve) may be calculated using a plurality of medical analysis networks and audit networks. Fig. 4 illustrates a workflow 400 for computing FFR from CCTA (coronary CT angiography) images in accordance with one or more embodiments. The calculation of FFR utilizes three medical analysis networks: a centerline detection network, a segmentation network, and an FFR calculation network. The centerline detection network receives as input a CCTA image 402 of a patient's blood vessel and generates as output a vessel centerline image 404 identifying the vessel centerline. The segmentation network receives as input a CCTA image 402 of the vessel and a vessel centerline image 404 and generates as output a segmentation map 406, the segmentation map 406 identifying the segmentation of the cross-sectional area along the vessel centerline. The FFR calculation network receives as input a set of features calculated based on the centerline in the vessel centerline image 404 and the cross-sectional areas in the segmentation map 406 and generates calculated FFR values 408 for each location along the vessel centerline. The centerline detection network and segmentation network are followed by a user input step via the user interface 410, with user input received at the user interface 410 to correct/prune/add centerlines and correct segmentation/cross-sectional areas, respectively. An audit network is trained for each medical analysis network. The audit network may or may not take into account user input steps in performing the evaluation. In accordance with one or more embodiments, three use cases related to the calculation of FFR are described below.
In a first use case, an audit network may be applied to detect artifacts in CCTA images along the coronary centerline. In this use case, the input medical data includes 32 × 32 pixel image patches perpendicular to the centerline, with a spacing of 0.5 mm (millimeters) between the patches. Each cross section is labeled as one of: healthy, diseased, motion artifact, stent, ignore. The input to the audit network is a single 2D cross section or a 3D patch comprising a plurality of adjacent 2D cross sections.
The same data preprocessing and data augmentation techniques may be applied to both the audit network and the medical analysis network. In general, the audit network and the medical analysis network share the same training data set. Training of the audit network is performed end-to-end in a similar manner and timeframe as the medical analysis network.
In an experimental evaluation, the audit network was trained on 3D patches of 16 adjacent cross sections. The audit network was applied to the entire blood vessel using a sliding window approach. Only "healthy" cross sections without artifacts were used for training, thus leaving cross sections with artifacts out of distribution with respect to the training data. Fig. 5 shows graphs 502 and 504 comparing the robustness determined for independent input data sets against ground truth labels according to embodiments described herein. Lines 506 and 510 represent the probability that the input medical data is robust, as determined by an audit network in accordance with embodiments described herein. Lines 508 and 512 represent ground truth labels, where 3 indicates ignore, 2 indicates motion artifact, 1 indicates healthy, 0 indicates diseased, and -1 indicates stent. As can be seen in graphs 502 and 504, the audit network outputs significantly lower probabilities for the cross sections of the stent region. Since reliable segmentation is difficult to perform in these areas, the correct detection of stent segments is of particular interest.
In one embodiment, an additional machine learning based network (employing only dense layers) may be added on top of the z-embedding provided by the audit network f. This top-level network may be, for example, a classifier that detects the type of artifact present, given a low-probability cross section. Since the embedded z-vectors are structured such that they conform to a multivariate Gaussian distribution, which is a much simpler distribution than the pixel distribution in the original image space, the new top-level classifier is able to reliably distinguish artifact types.
In a second use case, an audit network is used to evaluate the correctness of the cross-sectional lumen profile. The cross-sectional lumen profile is obtained after automatic segmentation (by the segmentation network) and manual editing. The input medical data includes 32 × 32 pixel image patches perpendicular to the centerline, paired with the corresponding lumen contours. The input to the audit network may be a 4-dimensional tensor: 2 channels (cross-sectional image and lumen mask), 8 pairs of adjacent cross sections and masks (giving a depth range of 4 mm), and a 2D resolution of 32 × 32 pixels.
In one embodiment, one or more artificial mask perturbations may be applied for increasing the sensitivity of the audit network to certain types of mask defects. Exemplary artificial mask perturbations include: 1) Enlarging the region of interest around the lumen mask to make the audit network more aware of over-segmentation and under-segmentation; 2) Translating the lumen mask to model potential offsets between the cross-section and the proposed segmentation; and 3) deforming (e.g., squeezing or shrinking) the lumen mask along a plurality of directions to model structural mask defects, wherein a portion of the mask erroneously includes or excludes small areas of the cross-section, while the rest of the mask is correct.
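The three perturbations above can be sketched on a binary lumen mask with plain numpy operations. The function names and one-pixel magnitudes are illustrative assumptions:

```python
import numpy as np

def dilate_mask(mask):
    """Perturbation 1: grow the mask by one pixel (4-neighborhood) to
    model over-segmentation around the lumen."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def translate_mask(mask, dy, dx):
    """Perturbation 2: shift the mask to model an offset between the
    cross section and the proposed segmentation."""
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def squeeze_mask(mask, axis=0):
    """Perturbation 3: erode the mask along one direction to model a
    structural defect that wrongly excludes a small area."""
    out = mask.copy()
    if axis == 0:
        out[1:, :] &= mask[:-1, :]
    else:
        out[:, 1:] &= mask[:, :-1]
    return out
```

Perturbed masks are paired with the unmodified cross-sectional image, so the audit network learns to flag the mismatch.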
The training data of the audit network may be constructed from the same training data as used in the development of the medical analysis network. The loss function may be modified as follows. If the training data is untouched (i.e., as observed by the medical analysis network), the probability output of the audit network is maximized. If the training data is perturbed, a hinge loss may be employed to force the probability output of the audit network below a margin that is much lower than the probability values associated with the untouched training data.
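The modified loss can be sketched as follows, in log-probability space. The margin value and function name are assumptions:

```python
import numpy as np

def audit_loss(log_p, perturbed, margin=-50.0):
    """Combined loss over a batch of audit-network log-probabilities.

    log_p: array of log-probabilities output by the audit network.
    perturbed: boolean array, True where the sample was artificially perturbed.
    Untouched samples: maximize log p (minimize -log p).
    Perturbed samples: hinge-penalize any log p above the margin.
    """
    clean_loss = -log_p[~perturbed]
    hinge_loss = np.maximum(0.0, log_p[perturbed] - margin)
    return clean_loss.sum() + hinge_loss.sum()
```

A perturbed sample already below the margin contributes zero loss, so the network is not pushed to assign it arbitrarily low probability.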
In one embodiment, the audit network may be implemented with a Glow-style standardized flow architecture that combines layers such as, for example, checkerboard and channel mask coupling layers, reversible 1 × 1 convolutions, and split and squeeze layers. FIG. 6 illustrates an exemplary Glow-style standardized flow network architecture 600 for an audit network in accordance with one or more embodiments. According to one or more embodiments, the Glow-style standardized flow network architecture 600 may be implemented according to the table 700 of fig. 7.
As described in the table 700 of fig. 7, the Glow-style standardized flow network architecture 600 includes 4 stages. Stage 1 includes 4 affine checkerboard coupling layers 604. The affine checkerboard coupling layers 604 receive as input the input medical image 602. The input medical image 602 comprises a 2-channel (concatenation of CTA and binary mask volumes) 3D image with a resolution of 8 × 32 × 32 (8 slices of 32 × 32 width/height). Three squeeze operations 606 reduce the input resolution by a factor of 2³ to 1 × 4 × 4 as the number of channels increases.
After each squeeze operation 606, 3 convolutional coupling layers 608 are applied at 3 scales: 4 × 16 × 16, 2 × 8 × 8, and 1 × 4 × 4. The coupling layers 608 apply the operations defined by equations 5 and 6. The effective receptive field of a coupling layer is given by the receptive fields of the scaling function s and the translation function t, in this case 5 × 5. Stacking the coupling layers and using multiple scales (i.e., squeeze layers) increases the final receptive field of the normalized flow, similar to the operation of classical CNNs.
y_a = x_a (equation 5)
y_b = (x_b − t_DNN(x_a)) ⊙ s_DNN(x_a) (equation 6)
where x and y are the input and output tensors, respectively. Subscripts a and b denote the two halves of the tensor: the first half passes through unchanged, and the second half is updated in a way that is linear with respect to itself but highly non-linear with respect to the first half, with the scaling function s and translation function t implemented as DNNs (deep neural networks). According to one or more embodiments, the functions s and t may be implemented as a double-headed 3D CNN (convolutional neural network) according to the table 800 of fig. 8. In one example, the final activation function of s may be exp(tanh(x)), which makes the logDet contribution easy to calculate (e.g., Σ tanh(x) across all spatial dimensions and channels) and bounds the scaling performed at each coupling layer to [e⁻¹, e¹], ensuring numerical stability and a bounded global maximum of logDet. The split layer 610 cuts the tensor in half, e.g., along the channel axis, so that half of the channels are retained as input to the next layer of the computational graph, while the remaining half of the channels are factored out.
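The exp(tanh(x)) scaling activation can be sketched directly; the function name is an assumption:

```python
import numpy as np

def scale_activation(x):
    """Final activation for the scaling head s: s = exp(tanh(x)).

    The elementwise scale is bounded in [e^-1, e^1], and the logDet
    contribution of the scaling is simply the sum of tanh(x) over all
    spatial dimensions and channels (log of a product of scales).
    """
    s = np.exp(np.tanh(x))
    log_det = np.sum(np.tanh(x))
    return s, log_det
```

Because tanh saturates at ±1, logDet per element is bounded in [-1, 1], which keeps the total logDet bounded regardless of the pre-activation magnitudes.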
In one embodiment, a coupling layer is provided that operates efficiently in both normalizing flow directions, does not focus only on local pixel correlations, and has an inductive bias similar to that of a conventional CNN. The coupling layer resembles a standard Glow-style 1 × 1 convolution (with an applied bias), the parameters of which are calculated based on the passed channels. The applied bias is propagated to all spatial positions and is thus the same across the width, height, and depth of the resulting tensor, which means that the coupling layer can no longer reproduce individual masked pixel values. In contrast to element-wise calculations, the same (sample-specific) convolution kernel is applied at all spatial locations. Equations 7 and 8 describe the operation of the coupling layer as follows.
y_a = x_a (equation 7)
y_b = x_b * k(x_a) + b(x_a) (equation 8)
where * denotes a 1 × 1 convolution with kernel k, and + denotes a broadcast sum. k is calculated by a CNN and has shape c_mod × c_mod, where c_mod is the number of updated channels. b is a c_mod-element vector. According to one or more embodiments, the CNN for calculating k and b is implemented according to the table 900 of fig. 9.
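For one sample, the coupling in equations 7 and 8 amounts to the same channel-mixing matrix k and bias b, both computed from the passed half x_a, applied at every spatial location. The toy k_fn and b_fn below stand in for the CNN of table 900 and are assumptions:

```python
import numpy as np

def k_fn(x_a):
    """Stand-in kernel network: identity scaled by the mean of x_a."""
    c = x_a.shape[0]
    return np.eye(c) * (1.0 + x_a.mean())

def b_fn(x_a):
    """Stand-in bias network: per-channel mean of x_a."""
    return x_a.mean(axis=(1, 2))

def conv1x1_coupling(x_a, x_b):
    """x_a, x_b: arrays of shape (channels, height, width).

    A 1x1 convolution is channel mixing at each spatial location, so it
    is an einsum over the channel axis; the bias is broadcast spatially.
    """
    k = k_fn(x_a)                                    # (c_mod, c_mod)
    b = b_fn(x_a)                                    # (c_mod,)
    y_b = np.einsum('ij,jhw->ihw', k, x_b) + b[:, None, None]
    return x_a, y_b                                  # x_a passes unchanged
```

Inversion requires only k⁻¹ and b from the (unchanged) x_a, which is what makes the layer efficient in both flow directions.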
The coupling layer is self-conditioned (i.e., it does not employ an external conditioning network or another parallel flow) because the lumen binary mask and the angiographic image are not processed separately, but rather are concatenated on the channel axis. This is possible because the mask and image should be highly spatially correlated in order to achieve a high log probability.
Fig. 10 illustrates a network architecture 1000 for implementing a normalized flow audit network in accordance with one or more embodiments. The network architecture 1000 employs the coupling layer defined by equations 7 and 8. According to one or more embodiments, the network architecture 1000 is implemented according to the table 1100 of fig. 11.
Stage 1 of network architecture 1000 includes a coupling layer 1004. The coupling layer 1004 receives as input the input medical data 1002. The coupling layer 1004 includes a series of additive coupling layers with a checkerboard mask, which focus mainly on local pixel correlations. In contrast to affine couplings, additive couplings are volume-preserving (i.e., they do not contribute directly to logDet and the final log(p(x)), but rather contribute through interactions with upstream layers).
Stage 2 of network architecture 1000 includes a cascade of coupling layers. In contrast to classical CNNs, which use filters of 3 × 3 (or larger) shape and strides greater than 1 (in convolution or max-pooling layers) to increase the effective FOV (field of view), the FOV of the network architecture 1000 is increased only by the squeeze operations 1006. In a squeeze operation 1006, each patch of 2 × 2 pixels is spatially flattened into the channel dimension, forming a patch of 1 × 1 pixels. Thus, with each squeeze step, the FOV is doubled on each spatial axis. This allows the reversible 1 × 1 convolutions 1008 to operate over a larger and larger FOV while still maintaining efficient forward/backward normalizing flow computation. Enough squeeze operations 1006 are applied that the resolution of the final stage is reduced to 1 × 1. The input spatial dimensions are limited to powers of 2.
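The squeeze operation for a 2D slice can be sketched as a pure reshape/transpose, which is lossless and therefore trivially invertible; the function name is an assumption:

```python
import numpy as np

def squeeze2d(x):
    """Flatten each 2x2 spatial patch into the channel dimension.

    x: array of shape (channels, height, width), with height and width even.
    Returns shape (channels*4, height//2, width//2): width and height are
    halved, channels are quadrupled, and the FOV of any subsequent 1x1
    operation doubles on each spatial axis.
    """
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)
    x = x.transpose(0, 2, 4, 1, 3)          # gather the 2x2 patch axes
    return x.reshape(c * 4, h // 2, w // 2)
```

Since no values are discarded, the inverse squeeze is the same pair of reshapes in reverse order.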
After each squeeze operation 1006, 4 convolutional coupling layers 1010 are applied at 5 scales: 4 × 16 × 16, 2 × 8 × 8, 1 × 4 × 4, 1 × 2 × 2, and 1 × 1 × 1. The coupling layers 1010 apply the operations defined by equations 7 and 8. As shown in the table 1100 of fig. 11, the number of channels c_i (at stage i) increases exponentially with the number of squeeze operations. This directly affects the runtime and complexity of the coupling layer, since it must produce a matrix k whose size is proportional to the square of c_i. Furthermore, inference and sampling involve computing the determinant and the inverse of k, respectively. To alleviate this problem, the split layer 1012 may be modified so that the tensor is not split into two halves along the channel axis; instead, for example, only one quarter is retained as input to the next layer of the computational graph, while the remaining 75% of the channels are factored out. This may be applied to stage 1, where most descriptive textures are embedded. After such a cut, the input to the next squeeze has only c_i/4 channels, half of which are then cut conventionally. Such splitting across the network cascade may mitigate the exponential growth of c_i, particularly for larger-resolution inputs. In one embodiment, only the first split layer keeps c_i/4 channels. The net effect is that there are only 512 channels in the final stage, as opposed to the original 1024 channels (as shown in the table 1100 of fig. 11), resulting in faster run times and fewer model parameters.
In the network architecture 1000, BatchNorm is used instead of ActNorm. Normalization is performed with two running averages of the batch mean and standard deviation, which are updated with the current batch statistics after their use, so that the normalization depends only on past batches and any cross-talk between samples in the current batch is eliminated. The main purpose of BatchNorm is to provide "checkpoints" for the activations inside the network (i.e., after each BatchNorm layer, the activations have preset statistics, e.g., centered on 0 with a standard deviation of 1) to improve the training process.
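The normalize-then-update ordering described above can be sketched as follows; the class name and momentum value are assumptions:

```python
import numpy as np

class PastBatchNorm:
    """BatchNorm variant that normalizes with running statistics from
    *past* batches only, then updates them with the current batch, so
    samples in the current batch never influence each other's output."""

    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x):  # x: (batch, dim)
        y = (x - self.mean) / np.sqrt(self.var + self.eps)
        # update running statistics *after* use
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * x.mean(axis=0)
        self.var = m * self.var + (1 - m) * x.var(axis=0)
        return y
```

Because the normalization constants are fixed before the batch is seen, the per-sample Jacobian (and hence the logDet contribution) is deterministic for the whole batch.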
The standardized flow audit network implemented according to the network architecture 1000 of fig. 10 was experimentally verified. FIG. 12 illustrates a training image 1200 for training a normalized flow audit network in accordance with one or more embodiments. Image 1202 shows an original cross-sectional image of a blood vessel and image 1204 shows a corresponding lumen mask. Image 1206 shows a lumen mask perturbed by applying a 20% severity mask extrusion perturbation in the bottom left to top right direction of image 1204. Fig. 13 illustrates a graph 1300 showing probability changes across one vessel segment from 80 cross sections (equivalent to 40mm depth) of a test dataset in accordance with one or more embodiments. Mask perturbations are applied at random locations and durations are randomly selected across the cross-section. The severity of the disturbance is indicated by line 1302 (as defined by the right y-axis). The output (log probability) of the normalized flow audit network prior to application of the perturbation(s) is indicated by line 1304 (as defined by the left y-axis). The output (log probability) of the normalized flow audit network after the perturbation is applied is indicated by line 1306 (as defined by the left y-axis). The normalized flow audit network detects the presence of a mask perturbation and, as a result, its probability output drops when the perturbation level is sufficiently large.
FIG. 14 illustrates a saliency chart 1400 of a standardized flow audit model in accordance with one or more embodiments. Graph 1402 shows an image gradient superimposed on a cross-sectional image. Graph 1404 shows the original lumen mask. Graph 1406 shows a pixel-by-pixel gradient of the lumen mask. The mask gradients in graph 1406 are greater in magnitude than the cross-sectional image gradients in graph 1402. As shown in graph 1406, the gradient inside the mask is almost zero, while the gradients at the edges and outside are larger (i.e., if some edge pixels were to increase their intensity, the log probability would increase, but if any pixel outside the mask neighborhood were set to 1, the log probability would immediately decay). The saliency chart 1400 reveals that the normalized flow audit model focuses on both the content of the cross-sectional image and the provided lumen mask, but penalizes the boundaries of the lumen mask more severely. Thus, small deviations at the mask boundaries result in a substantial reduction in the predicted log probability, which indicates that a standardized flow audit model trained using synthetic mask perturbations can be used to check whether the mask is properly aligned with the cross section.
In a third use case, it is assessed whether FFR can be reliably calculated by determining whether the feature vector for calculating a given centerline position of the FFR value lies within the distribution of training data over which the FFR calculation network is trained. The FFR calculation network may be trained based on the synthetic data and evaluated based on the synthetic and real patient data. The audit network is implemented as a standardized flow model to estimate the probability density of the input medical data to determine the likelihood that the input medical data is similar (e.g., in the same distribution) to the composite data on which the FFR calculation network was trained.
For this use case, the same training data set may be used to develop both the FFR calculation network and the audit network. In an experimental implementation, the audit network is a standardized flow architecture that employs a stack of coupling layers. The audit network was found to be fast and lightweight because it operates on 0D data.
The synthetic training data is sliced at the case level: 90% is used as a training dataset and the remaining 10% is used as a validation set for the standardized flow audit network. The patient dataset is used only as a test set. The standardized flow model for implementing the audit network is selected such that the log probability of its training data set approximates the log probability of its validation set and maximizes the separation between the log probability of the random features and the log probability of the true data features. By averaging all centerline positions, the probabilities obtained using the audit network are also aggregated at the patient level.
To evaluate the performance of the audit network, another experiment was performed. For a subset of features, for each feature, the sample with the highest value for that feature was determined (and 1-10 standard deviations were added), and the sample with the lowest value for that feature was determined (and 1-10 standard deviations were subtracted). The log probability was found to decrease progressively as the feature values become less likely.
In another experiment, the value of the feature "percent diameter stenosis upstream of the main stenosis" was modified in 5% increments/decrements. It was found that once the value changed by more than +/-10%, the log probability decreased (even though the absolute value was still a possible value). Thus, the audit network learns relationships between features and detects outlier combinations.
In one embodiment, the embodiments described herein may be applied to increase the success rate of a clinical center by reducing the number of cases rejected by the clinical center. Depending on the equipment employed for data collection and the experience of the clinician, the number of cases rejected by the audit network may vary. Minimizing the number of cases rejected by the audit network is in the best interest of both the clinical center and the equipment manufacturer/developer of the medical analysis network. While the occurrence of out-of-distribution input medical data rejected by the audit network cannot be directly controlled or minimized (it can be addressed indirectly by collecting as many out-of-distribution cases as possible and iteratively refining the medical analysis network and the audit network), the number of cases with artifacts can be minimized. Cases rejected due to erroneous data acquisition and cases rejected due to erroneous user input are distinguished.
For erroneous data collection, cases rejected by the audit network are sent back to the manufacturer and advice for improving the data collection process is sent back to the clinical center. These suggestions may relate to, for example, data acquisition protocols, equipment settings, equipment issues (e.g., maintenance or replacement of certain equipment components), and so forth. The suggestions can be determined automatically (e.g., using a machine learning based model based on natural language processing), semi-automatically, or manually.
For erroneous user inputs, cases rejected by the audit network may be due to incorrect edits or other user inputs by the user. As described above, one example of erroneous user input relates to segmentation of the cross-sectional lumen profile. To reduce the number of rejected cases, the clinician should be trained to provide the correct edits/inputs. For example, as depicted in fig. 15, clinician training may be performed by an experienced clinician in a real-time session, or may be performed automatically. Fig. 15 illustrates a workflow 1500 for reducing the number of rejected cases in a clinical center in accordance with one or more embodiments. The data sets (including the input data and user edits) that were rejected due to erroneous user input are stored in database 1502. At step 1504, the rejected cases are sent back to the developer of the machine learning models (e.g., the medical analysis network and the audit network). At step 1506, similar cases are extracted from the training dataset based on the rejected cases. At step 1508, the extracted input/output pairs are sent to the clinical center as learning examples for training the clinician. Where the audit network is implemented as a normalized flow audit network, the normalized flow audit network may be used to extract the similar cases. The predicted log probability value of the audit network depends on the z-embedding of the input medical data x and on the amount of spatial stretching/compression performed by the standardized flow model (i.e., the second term in equation 4). Given a query sample x_q, a heuristic based on the vector distance between the dataset embeddings z_D and the query sample embedding z_q, and on the amount of spatial stretching/compression around each sample x ∈ D and around x_q, may be evaluated to find the closest samples from the dataset D.
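The retrieval heuristic might be sketched as follows; the combined score and the weighting factor alpha are assumptions, not taken from the disclosure:

```python
import numpy as np

def closest_samples(z_dataset, logdet_dataset, z_query, logdet_query,
                    n=3, alpha=0.1):
    """Rank dataset samples by similarity to a query sample.

    z_dataset: (N, d) flow embeddings of the dataset samples.
    logdet_dataset: (N,) log-det terms (local stretching/compression).
    Returns the indices of the n closest samples, ranked by embedding
    distance plus an alpha-weighted difference in the log-det term.
    """
    emb_dist = np.linalg.norm(z_dataset - z_query, axis=1)
    logdet_dist = np.abs(logdet_dataset - logdet_query)
    score = emb_dist + alpha * logdet_dist
    return np.argsort(score)[:n]
```

Because the embeddings live in a simple Gaussian latent space, plain Euclidean distance is a reasonable proxy for sample similarity.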
Given a multi-scale normalizing flow architecture, the heuristic may also operate only on the top-level embedding components to find samples that are semantically close to $x_q$, or only on the bottom-level embedding components to find samples that are texturally close to $x_q$.
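As an illustrative sketch of this retrieval heuristic, the snippet below ranks training samples by a combination of embedding distance and the difference in the flow's local stretching/compression (log-determinant) term. The function name `closest_samples` and the simple additive weighting of the two terms are assumptions made for illustration, not the patent's exact method:

```python
import numpy as np

def closest_samples(z_query, logdet_query, z_dataset, logdet_dataset, k=3):
    """Rank training samples by proximity to a query in flow embedding space,
    combining embedding distance with the difference in local spatial
    stretching/compression (the flow's log-determinant term)."""
    # Euclidean distance between the query embedding and each dataset embedding.
    emb_dist = np.linalg.norm(z_dataset - z_query, axis=1)
    # Difference in the amount of local stretching/compression around each sample.
    logdet_dist = np.abs(logdet_dataset - logdet_query)
    # Simple additive heuristic; the relative weighting is a free parameter.
    score = emb_dist + logdet_dist
    return np.argsort(score)[:k]

# Toy example: 4 dataset embeddings in R^2 with per-sample log-determinants.
z_D = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.1, 0.0]])
ld_D = np.array([0.0, 0.1, 2.0, 0.0])
idx = closest_samples(np.array([0.0, 0.1]), 0.0, z_D, ld_D, k=2)
print(idx)  # indices of the nearest dataset samples, closest first
```

The returned indices can then be used to look up the corresponding input/output pairs sent to the clinical center as learning examples.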
In one embodiment, the output of the audit network may be used to assist in editing tasks. For example, in the case of image segmentation, in response to determining that the medical analysis network is not robust for performing image segmentation, one or more alternative segmentations that give a higher consistency score with the original training dataset may be suggested to the user from other medical imaging analysis networks (e.g., generated from an ensemble of machine learning based models). The user may edit a suggested segmentation directly, or an interaction mechanism (e.g., an on-screen slider) may be provided that allows moving continuously along the direction of increasing/decreasing consistency scores. Each suggested segmentation may be presented to the user together with the audit network output, allowing the user to select an option that is acceptable according to the audit network.
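A minimal sketch of how alternative segmentations could be ordered by consistency score before being suggested to the user. Here `audit_score` is a toy placeholder standing in for the audit network's consistency score, and the 1-D "image" is purely illustrative:

```python
# Hypothetical stand-in for the audit network's consistency score.
def audit_score(image, mask):
    # Placeholder: reward masks whose foreground size matches the image's
    # bright pixels; a real audit network would score distribution consistency.
    target = sum(v > 0.5 for v in image)
    return -abs(sum(mask) - target)

def rank_suggestions(image, candidate_masks):
    """Order candidate segmentations so the mask most consistent with the
    training distribution (per the audit score) is suggested first."""
    return sorted(candidate_masks, key=lambda m: audit_score(image, m), reverse=True)

image = [0.9, 0.8, 0.1, 0.2]          # toy 1-D "image" with 2 foreground pixels
candidates = [[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 0, 0]]
best = rank_suggestions(image, candidates)[0]
print(best)  # [1, 1, 0, 0]
```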
In one embodiment, the output of the audit network may be used to automatically correct the output of the medical analysis network. For example, in image segmentation, the predicted segmentation mask may be optimized with respect to the output of the audit network to increase its similarity score with the original training dataset. In one embodiment, an iterative process may be employed in which the audit network is treated as a function whose input (i.e., the segmentation mask of the current iteration) is adjusted to maximize its output. In another embodiment, a saliency map of the input segmentation mask may be computed, and a heuristic may be utilized to obtain a segmentation mask with a higher similarity score. The advantage of this approach is that it can be performed in a single step, in contrast to the iterative process.
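The iterative variant can be sketched as greedy hill climbing over mask pixels, treating the audit score as a black-box function of the mask. The single-bit-flip scheme and the toy score below are illustrative assumptions, not the patent's specific optimizer:

```python
def refine_mask(mask, score_fn, iterations=10):
    """Greedy hill climbing: treat the audit score as a function of the mask
    and repeatedly flip the single pixel that improves the score most."""
    mask = list(mask)
    for _ in range(iterations):
        best_gain, best_i = 0, None
        for i in range(len(mask)):
            flipped = mask[:]
            flipped[i] ^= 1                      # flip one binary mask pixel
            gain = score_fn(flipped) - score_fn(mask)
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:                       # local optimum reached
            break
        mask[best_i] ^= 1
    return mask

# Toy score: hypothetical stand-in for the audit network's similarity score.
target = [1, 1, 0, 0]
score = lambda m: -sum(a != b for a, b in zip(m, target))
print(refine_mask([0, 1, 1, 0], score))  # converges to [1, 1, 0, 0]
```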
In one embodiment, user input (e.g., user edits or other interactions) received during the online phase may be used to learn about updates to the medical analysis network and the audit network.
In one embodiment, the output from the medical analysis network and the audit network may be used in combination to identify additional high-value data sets that would provide the greatest value as part of the training dataset. In one example, when a large new dataset comprising multiple samples is provided, both the medical analysis network and the audit network may be run on the new dataset, and the samples are ordered by decreasing dissimilarity score with respect to the original training dataset. The samples with the highest dissimilarity scores may be annotated by the user and included in the training dataset used to train updated models. This approach may also be used during online real-time use of the medical analysis and audit networks, where cases with high dissimilarity scores that require extensive editing may be flagged by the audit network and, after appropriate data-cleaning procedures, passed on for retraining the medical analysis and audit networks.
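The selection step can be sketched as follows, assuming per-sample dissimilarity scores are already available from the audit network (the case names, scores, and annotation budget below are hypothetical):

```python
def select_high_value(samples, dissimilarity, budget=2):
    """Order new samples by decreasing dissimilarity to the original training
    set and keep the top `budget` samples for annotation and retraining."""
    ranked = sorted(samples, key=dissimilarity, reverse=True)
    return ranked[:budget]

# Toy dissimilarity values: hypothetical stand-ins for audit-network output.
scores = {"case_a": 0.1, "case_b": 0.9, "case_c": 0.5}
picked = select_high_value(list(scores), lambda s: scores[s], budget=2)
print(picked)  # ['case_b', 'case_c']
```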
In one embodiment, where a dataset shows a high consistency score, downstream processing tasks that depend on the medical analysis network output may be triggered in advance to obtain results faster, thereby reducing overall latency for the user. In case no editing is required, the results can be shown to the user immediately. In case editing is required, the results are updated.
In one embodiment, the output of the audit network may be used to infer the uncertainty of the medical analysis network. This uncertainty can further be used as an input to clinical decision making (which, in turn, can be performed by a clinician or provided as input to a higher-order clinical decision support system).
The embodiments described herein are described with respect to the claimed systems and with respect to the claimed methods. Features, advantages, or alternative embodiments herein may be allocated to other claimed objects, and vice versa. In other words, the claims of the system may be modified with features described or claimed in the context of the method. In this case, the functional features of the method are embodied by the target unit of the providing system.
Furthermore, certain embodiments described herein are described with respect to methods and systems that utilize a trained machine-learning based network (or model) and with respect to methods and systems for training a machine-learning based network. Features, advantages, or alternative embodiments herein may be allocated to other claimed objects, and vice versa. In other words, the claims of the method and system for training a machine learning based network may be improved with features described or claimed in the context of a method and system for utilizing a trained machine learning based network, and vice versa.
In particular, the trained machine learning based networks applied in the embodiments described herein may be adapted by methods and systems for training machine learning based networks. Furthermore, the input data of the trained machine learning based network may include advantageous features and embodiments of the training input data, and vice versa. Further, the output data of the trained machine learning based network may include advantageous features and embodiments of the output training data, and vice versa.
In general, a trained machine learning based network mimics cognitive functions that humans associate with other human minds. In particular, by training based on training data, the trained machine learning based network is able to adapt to new circumstances and to detect and extrapolate patterns.
In general, parameters of a machine learning based network can be adapted by means of training. In particular, supervised training, semi-supervised training, unsupervised training, reinforcement learning, and/or active learning can be used. Furthermore, representation learning (an alternative term is "feature learning") can be used. In particular, the parameters of a trained machine learning based network can be adapted iteratively by several steps of training.
In particular, the trained machine learning based network may comprise a neural network, a support vector machine, a decision tree, and/or a Bayesian network, and/or the trained machine learning based network may be based on k-means clustering, Q-learning, genetic algorithms, and/or association rules. In particular, the neural network may be a deep neural network, a convolutional neural network, or a convolutional deep neural network. Furthermore, the neural network may be an adversarial network, a deep adversarial network, and/or a generative adversarial network.
Fig. 16 illustrates an embodiment of an artificial neural network 1600 in accordance with one or more embodiments. Alternative terms for "artificial neural network" are "neural network", "artificial neural net", or "neural net". The artificial neural network 1600 may be used to implement the machine learning networks described herein, such as, for example, the machine learning based medical analysis network 104 and the machine learning based audit network 106 of fig. 1, and the medical analysis networks and audit networks of fig. 2 and 3.
The artificial neural network 1600 comprises nodes 1602-1622 and edges 1632, 1634, 1636, wherein each edge 1632, 1634, 1636 is a directed connection from a first node 1602-1622 to a second node 1602-1622. In general, the first node 1602-1622 and the second node 1602-1622 are different nodes 1602-1622; it is also possible that the first node 1602-1622 and the second node 1602-1622 are identical. For example, in fig. 16, edge 1632 is a directed connection from node 1602 to node 1606, and edge 1634 is a directed connection from node 1604 to node 1606. An edge 1632, 1634, 1636 from a first node 1602-1622 to a second node 1602-1622 is also denoted as an "input edge" of the second node 1602-1622 and as an "output edge" of the first node 1602-1622.
In this embodiment, nodes 1602-1622 of the artificial neural network 1600 may be arranged in layers 1624-1630, where the layers may include an inherent order introduced by edges 1632, 1634, 1636 between the nodes 1602-1622. In particular, edges 1632, 1634, 1636 may only exist between adjacent layers of nodes. In the embodiment shown in fig. 16, there is an input layer 1624 that includes only nodes 1602 and 1604, no input edges, an output layer 1630 that includes only node 1622, no output edges, and hidden layers 1626, 1628 between the input layer 1624 and the output layer 1630. In general, the number of hidden layers 1626, 1628 may be arbitrarily selected. The number of nodes 1602 and 1604 within the input layer 1624 is generally related to the number of input values of the neural network 1600, and the number of nodes 1622 within the output layer 1630 is generally related to the number of output values of the neural network 1600.
In particular, a (real) number may be assigned as a value to each node 1602-1622 of the neural network 1600. Here, $x_i^{(n)}$ denotes the value of the i-th node 1602-1622 of the n-th layer 1624-1630. The values of the nodes 1602-1622 of the input layer 1624 are equivalent to the input values of the neural network 1600, and the values of the node 1622 of the output layer 1630 are equivalent to the output values of the neural network 1600. Furthermore, each edge 1632, 1634, 1636 may comprise a weight being a real number, in particular a real number within the interval $[-1, 1]$ or within the interval $[0, 1]$. Here, $w_{i,j}^{(m,n)}$ denotes the weight of the edge between the i-th node 1602-1622 of the m-th layer 1624-1630 and the j-th node 1602-1622 of the n-th layer 1624-1630. Furthermore, the abbreviation $w_{i,j}^{(n)}$ is defined for the weight $w_{i,j}^{(n,n+1)}$.
In particular, to calculate the output values of the neural network 1600, the input values are propagated through the neural network. In particular, the values of the nodes 1602-1622 of the (n+1)-th layer 1624-1630 may be calculated based on the values of the nodes 1602-1622 of the n-th layer 1624-1630 by

$$x_j^{(n+1)} = f\left(\sum_i x_i^{(n)} \cdot w_{i,j}^{(n)}\right).$$
Herein, the function $f$ is a transfer function (another term is "activation function"). Known transfer functions are step functions, sigmoid functions (e.g., the logistic function, the generalized logistic function, the hyperbolic tangent, the arctangent, the error function, the smoothstep function), or rectifier functions. The transfer function is mainly used for normalization purposes.
In particular, values are propagated layer by layer through the neural network, where the value of the input layer 1624 is given by the input of the neural network 1600, where the value of the first hidden layer 1626 may be calculated based on the value of the input layer 1624 of the neural network, where the value of the second hidden layer 1628 may be calculated based on the value of the first hidden layer 1626, and so on.
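The layer-by-layer propagation above can be sketched in a few lines. The sigmoid transfer function and the random example weights below are illustrative choices, not values prescribed by the description:

```python
import numpy as np

def forward(x, weights, f=lambda s: 1.0 / (1.0 + np.exp(-s))):
    """Propagate input values layer by layer:
    x_j^(n+1) = f(sum_i x_i^(n) * w_ij^(n)), here with a sigmoid transfer
    function applied after each weighted sum."""
    for W in weights:          # one weight matrix per pair of adjacent layers
        x = f(x @ W)
    return x

# Tiny 2-3-1 network with random example weights (illustration only).
rng = np.random.default_rng(0)
Ws = [rng.uniform(-1, 1, (2, 3)), rng.uniform(-1, 1, (3, 1))]
y = forward(np.array([1.0, 0.5]), Ws)
print(y.shape)  # (1,): one output node, as in the output layer 1630
```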
To set the values $w_{i,j}^{(m,n)}$ of the edges, the neural network 1600 must be trained using training data. In particular, the training data comprises training input data and training output data (denoted as $t_i$). For a training step, the neural network 1600 is applied to the training input data to generate calculated output data. In particular, the training data and the calculated output data comprise a number of values equal to the number of nodes of the output layer.
In particular, a comparison between the calculated output data and the training data is used to recursively adapt the weights within the neural network 1600 (backpropagation algorithm). In particular, the weights are changed according to

$$w_{i,j}^{\prime(n)} = w_{i,j}^{(n)} - \gamma \cdot \delta_j^{(n)} \cdot x_i^{(n)},$$

wherein $\gamma$ is a learning rate, and the numbers $\delta_j^{(n)}$ can be recursively calculated as

$$\delta_j^{(n)} = \left(\sum_k \delta_k^{(n+1)} \cdot w_{j,k}^{(n+1)}\right) \cdot f'\left(\sum_i x_i^{(n)} \cdot w_{i,j}^{(n)}\right)$$

based on $\delta_j^{(n+1)}$, if the (n+1)-th layer is not the output layer, and

$$\delta_j^{(n)} = \left(x_j^{(n+1)} - y_j^{(n+1)}\right) \cdot f'\left(\sum_i x_i^{(n)} \cdot w_{i,j}^{(n)}\right),$$

if the (n+1)-th layer is the output layer 1630, wherein $f'$ is the first derivative of the activation function, and $y_j^{(n+1)}$ is the comparison training value of the j-th node of the output layer 1630.
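For a single sigmoid output layer, the weight update can be sketched as follows. The learning rate, toy data, and iteration count are illustrative choices:

```python
import numpy as np

def backprop_step(x, t, W, gamma=0.5):
    """One weight update for a single sigmoid layer, following
    w'_ij = w_ij - gamma * delta_j * x_i, with
    delta_j = (y_j - t_j) * f'(sum_i x_i w_ij) for the output layer."""
    f = lambda s: 1.0 / (1.0 + np.exp(-s))
    y = f(x @ W)
    delta = (y - t) * y * (1.0 - y)   # f'(s) = f(s) * (1 - f(s)) for the sigmoid
    return W - gamma * np.outer(x, delta)

x = np.array([1.0, 0.5])              # input values
t = np.array([1.0])                   # training output value
W = np.zeros((2, 1))                  # initial weights
for _ in range(500):                  # repeated steps drive the output toward t
    W = backprop_step(x, t, W)
out = 1.0 / (1.0 + np.exp(-(x @ W)))
print(out[0] > 0.9)  # True: the output has moved close to the target
```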
Fig. 17 illustrates a convolutional neural network 1700 in accordance with one or more embodiments. Machine learning networks described herein, such as, for example, machine learning-based medical analysis network 104 and machine learning-based audit network 106 of fig. 1, and medical analysis networks and audit networks of fig. 2 and 3, may be implemented using convolutional neural network 1700.
In the embodiment shown in fig. 17, the convolutional neural network 1700 comprises an input layer 1702, a convolutional layer 1704, a pooling layer 1706, a fully connected layer 1708, and an output layer 1710. Alternatively, the convolutional neural network 1700 may comprise several convolutional layers 1704, several pooling layers 1706, and several fully connected layers 1708, as well as other types of layers. The order of the layers may be chosen arbitrarily; usually, fully connected layers 1708 are used as the last layers before the output layer 1710.
In particular, within the convolutional neural network 1700, the nodes 1712-1720 of one layer 1702-1710 may be considered to be arranged as a d-dimensional matrix or as a d-dimensional image. In particular, in the two-dimensional case, the value of a node 1712-1720 indexed by i and j in the n-th layer 1702-1710 may be denoted as $x^{(n)}[i,j]$. However, the arrangement of the nodes 1712-1720 of one layer 1702-1710 as such has no effect on the calculations performed within the convolutional neural network 1700, since these are given solely by the structure and the weights of the edges.

In particular, the convolutional layer 1704 is characterized by the structure and the weights of the input edges forming a convolution operation based on a certain number of kernels. In particular, the structure and the weights of the input edges are chosen such that the values $x_k^{(n)}$ of the nodes 1714 of the convolutional layer 1704 are calculated as a convolution $x_k^{(n)} = K_k * x^{(n-1)}$ based on the values $x^{(n-1)}$ of the nodes 1712 of the previous layer 1702, where, in the two-dimensional case, the convolution is defined as

$$x_k^{(n)}[i,j] = (K_k * x^{(n-1)})[i,j] = \sum_{i'} \sum_{j'} K_k[i',j'] \cdot x^{(n-1)}[i - i', j - j'].$$
Here, the k-th kernel $K_k$ is a d-dimensional matrix (in this embodiment, a two-dimensional matrix), which is typically small compared to the number of nodes 1712-1718 (e.g., a 3×3 matrix or a 5×5 matrix). In particular, this implies that the weights of the input edges are not independent, but are chosen such that they produce the convolution equation. In particular, for a kernel being a 3×3 matrix, there are only 9 independent weights (each entry of the kernel matrix corresponding to one independent weight), irrespective of the number of nodes 1712-1720 in the respective layers 1702-1710. In particular, for the convolutional layer 1704, the number of nodes 1714 in the convolutional layer is equal to the number of nodes 1712 in the previous layer 1702 multiplied by the number of kernels.
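A small sketch of the kernel operation on a toy input. Note that, as in most deep learning libraries, the snippet computes cross-correlation (no kernel flip), whereas the equation above uses the flipped-kernel convolution convention; for illustrating the local, weight-sharing structure the distinction does not matter:

```python
import numpy as np

def conv2d(x, K):
    """'Valid'-style 2-D correlation of input x with kernel K: each output
    node depends only on a small K-shaped neighborhood of the input."""
    kh, kw = K.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(K * x[i:i + kh, j:j + kw])
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
K = np.zeros((3, 3)); K[1, 1] = 1.0      # kernel that picks the window center
print(conv2d(x, K))                      # equals the central 2x2 block of x
```

Regardless of the input size, this 3×3 kernel has only 9 independent weights, as noted above.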
If the nodes 1712 of the previous layer 1702 are arranged as a d-dimensional matrix, using a plurality of kernels may be interpreted as adding one further dimension (denoted the "depth" dimension), so that the nodes 1714 of the convolutional layer 1704 are arranged as a (d+1)-dimensional matrix. If the nodes 1712 of the previous layer 1702 are already arranged as a (d+1)-dimensional matrix comprising a depth dimension, using a plurality of kernels may be interpreted as expanding along the depth dimension, so that the nodes 1714 of the convolutional layer 1704 are likewise arranged as a (d+1)-dimensional matrix, wherein, with respect to the depth dimension, the (d+1)-dimensional matrix is larger than in the previous layer 1702 by a factor of the number of kernels.
The advantage of using convolutional layers 1704 is that the spatial local correlation of the input data can be exploited by implementing a local connection pattern between nodes of adjacent layers, in particular by each node being connected only to a small region of the nodes of the previous layer.
In the embodiment shown in fig. 17, the input layer 1702 includes 36 nodes 1712 arranged in a two-dimensional 6 x 6 matrix. The convolution layer 1704 includes 72 nodes 1714 arranged into two-dimensional 6 x 6 matrices, each of which is the result of the convolution of the values of the input layer with the kernel. Equivalently, the nodes 1714 of the convolutional layer 1704 may be interpreted as being arranged as a three-dimensional 6×6×2 matrix, with the last dimension being the depth dimension.
The pooling layer 1706 may be characterized by the structure and the weights of the input edges and the activation function of its nodes 1716 forming a pooling operation based on a nonlinear pooling function $f$. For example, in the two-dimensional case, the values $x^{(n)}$ of the nodes 1716 of the pooling layer 1706 may be calculated based on the values $x^{(n-1)}$ of the nodes 1714 of the previous layer 1704 as

$$x^{(n)}[i,j] = f\left(x^{(n-1)}[i d_1, j d_2], \ldots, x^{(n-1)}[i d_1 + d_1 - 1, j d_2 + d_2 - 1]\right).$$
In other words, by using the pooling layer 1706, the number of nodes 1714, 1716 may be reduced by replacing a number $d_1 \cdot d_2$ of neighboring nodes 1714 in the previous layer 1704 with a single node 1716, whose value is calculated as a function of the values of said number of neighboring nodes. In particular, the pooling function $f$ may be a max function, an average, or the L2 norm. In particular, for the pooling layer 1706, the weights of the input edges are fixed and are not modified by training.
The advantage of using the pooling layer 1706 is that the number of nodes 1714, 1716 and the number of parameters are reduced. This results in a reduced amount of computation in the network and controls the overfitting.
In the embodiment shown in fig. 17, the pooling layer 1706 is a maximum pooling layer, replacing four neighboring nodes with only one node, which is the maximum of the four neighboring nodes. Max pooling is applied to each d-dimensional matrix of the previous layer; in this embodiment, max pooling is applied to each of the two-dimensional matrices, reducing the number of nodes from 72 to 18.
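The non-overlapping max-pooling step described above can be sketched as follows (the reshape trick is a common implementation choice, not something specified by the description):

```python
import numpy as np

def max_pool(x, d1=2, d2=2):
    """Non-overlapping max pooling: each d1 x d2 block of the previous layer
    is replaced by a single node holding the block maximum."""
    H, W = x.shape
    # Split the matrix into (H/d1) x (W/d2) blocks and take each block's max.
    return x.reshape(H // d1, d1, W // d2, d2).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)
print(max_pool(x).shape)  # (3, 3): one 6x6 matrix reduced to 9 nodes
```

Applied to each of the two 6×6 matrices of the convolutional layer, this reduces 72 nodes to 18, as in the embodiment above.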
The fully connected layer 1708 may be characterized by the fact that a majority, in particular all, of the edges between the nodes 1716 of the previous layer 1706 and the nodes 1718 of the fully connected layer 1708 are present, and wherein the weight of each of these edges may be adjusted individually.
In this embodiment, nodes 1716 of a previous layer 1706 of fully connected layer 1708 are shown as a two-dimensional matrix and additionally as uncorrelated nodes (indicated as a row of nodes, where the number of nodes is reduced for better presentability). In this embodiment, the number of nodes 1718 in fully connected layer 1708 is equal to the number of nodes 1716 in the previous layer 1706. Alternatively, the number of nodes 1716, 1718 may be different.
Further, in this embodiment, the value of node 1720 of output layer 1710 is determined by applying a Softmax function to the value of node 1718 of previous layer 1708. By applying the Softmax function, the sum of the values of all nodes 1720 of the output layer 1710 is 1, and all values of all nodes 1720 of the output layer are real numbers between 0 and 1.
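A sketch of the Softmax function applied to the last layer's values; subtracting the maximum before exponentiating is a standard numerical-stability choice and does not change the result:

```python
import numpy as np

def softmax(z):
    """Softmax over the last layer's values: all outputs are real numbers
    between 0 and 1, and they sum to 1, so they can be read as probabilities."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(round(float(p.sum()), 6))  # 1.0
```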
The convolutional neural network 1700 may also comprise a ReLU (rectified linear unit) layer or an activation layer with a nonlinear transfer function. In particular, the number of nodes and the structure of the nodes contained in a ReLU layer are equivalent to the number of nodes and the structure of the nodes contained in the previous layer. In particular, the value of each node in the ReLU layer is calculated by applying a rectifying function to the value of the corresponding node of the previous layer.
The inputs and outputs of different convolutional neural network blocks can be connected using summation (residual/dense neural networks), element-wise multiplication (attention), or other differentiable operators. Therefore, the convolutional neural network architecture can be nested rather than sequential, as long as the whole pipeline remains differentiable.
In particular, the convolutional neural network 1700 can be trained based on the backpropagation algorithm. To prevent overfitting, regularization methods can be used, e.g., dropout of individual nodes 1712-1720, stochastic pooling, use of artificial data, weight decay based on the L1 or the L2 norm, or max-norm constraints. Different loss functions can be combined for training the same neural network to reflect a joint training objective. A subset of the neural network parameters can be excluded from optimization in order to retain weights pre-trained on another dataset.
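Of the listed regularization methods, dropout is the easiest to sketch. The "inverted dropout" rescaling used below (dividing survivors by 1 − p so the expected activation is unchanged) is a common implementation choice rather than something specified here:

```python
import numpy as np

def dropout(x, p=0.5, rng=np.random.default_rng(0), training=True):
    """Inverted dropout: randomly zero nodes with probability p during
    training and rescale the survivors so the expected activation is
    unchanged; at inference time (training=False) it is a no-op."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p          # keep each node with probability 1-p
    return x * mask / (1.0 - p)

x = np.ones(8)
y = dropout(x)
print(sorted(set(y.tolist())))   # dropped nodes are 0, survivors scaled to 2
```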
The systems, apparatus, and methods described herein may be implemented using digital electronic circuitry, or using one or more computers using well known computer processors, memory units, storage devices, computer software, and other components. Generally, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include or be coupled to one or more mass storage devices, such as one or more magnetic disks, internal hard and removable magnetic disks, magneto-optical disks, and the like.
The systems, apparatuses, and methods described herein may be implemented using a computer operating in a client-server relationship. Typically, in such systems, the client computers are located remotely from the server computer and interact via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers.
The systems, apparatuses, and methods described herein may be implemented in a network-based cloud computing system. In such a network-based cloud computing system, a server or another processor connected to a network communicates with one or more client computers via the network. For example, a client computer may communicate with a server via a web browser application resident and operating on the client computer. The client computer may store data on the server and access the data via the network. The client computer may transmit a request for data or a request for online services to the server via the network. The server may execute the requested service and provide data to the client computer(s). The server may also transmit data suitable for causing the client computer to perform specified functions, such as performing calculations, displaying specified data on a screen, and the like. For example, the server may transmit a request adapted to cause the client computer to perform one or more steps or functions of the methods and workflows described herein, including one or more steps or functions of fig. 1-3. Some of the steps or functions of the methods and workflows described herein, including one or more of the steps or functions of fig. 1-3, may be performed by a server or another processor in a network-based cloud computing system. Some of the steps or functions of the methods and workflows described herein, including one or more of the steps of fig. 1-3, may be performed by a client computer in a network-based cloud computing system. The steps or functions of the methods and workflows described herein, including one or more of the steps of fig. 1-3, may be performed by a server and/or client computer in a network-based cloud computing system in any combination.
The systems, apparatus, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor; as well as the method and workflow steps described herein, including one or more steps or functions of fig. 1-3, may be implemented using one or more computer programs executable by such a processor. A computer program is a set of computer program instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
A high-level block diagram of an example computer 1802 that can be used to implement the systems, apparatus, and methods described herein is depicted in fig. 18. The computer 1802 includes a processor 1804 that is operatively coupled to a data storage device 1812 and a memory 1810. The processor 1804 controls the overall operation of the computer 1802 by executing computer program instructions that define such operations. The computer program instructions may be stored in a data storage device 1812 or other computer-readable medium and loaded into memory 1810 when execution of the computer program instructions is desired. Thus, the method and workflow steps or functions of fig. 1-3 may be defined by computer program instructions stored in the memory 1810 and/or the data storage 1812 and controlled by the processor 1804 executing the computer program instructions. For example, the computer program instructions may be implemented as computer executable code programmed by one skilled in the art to perform the methods and workflow steps or functions of fig. 1-3. Thus, by executing computer program instructions, the processor 1804 performs the method and workflow steps or functions of fig. 1-3. The computer 1802 may also include one or more network interfaces 1806 for communicating with other devices via a network. The computer 1802 may also include one or more input/output devices 1808 that enable a user to interact with the computer 1802 (e.g., display, keyboard, mouse, speakers, buttons, etc.).
The processor 1804 may include both general-purpose and special-purpose microprocessors, and may be the only processor or one of multiple processors of the computer 1802. For example, the processor 1804 may include one or more Central Processing Units (CPUs). The processor 1804, data storage 1812, and/or memory 1810 may include or be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field-programmable gate arrays (FPGAs).
Data storage device 1812 and memory 1810 each include a tangible, non-transitory computer-readable storage medium. The data storage device 1812 and memory 1810 may each include high speed random access memory, such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM) or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, such as internal hard disks and removable disks, magneto-optical disk storage devices, flash memory devices, semiconductor memory devices, such as Erasable Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM), compact disk read only memory (CD-ROM), digital versatile disk read only memory (DVD-ROM), or other non-volatile solid state memory devices.
The input/output devices 1808 may include peripheral devices such as printers, scanners, display screens, and the like. For example, the input/output devices 1808 may include a display device such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor, a keyboard, and a pointing device such as a mouse or trackball, by which a user may provide input to the computer 1802.
The image acquisition device 1814 may be connected to the computer 1802 to input image data (e.g., medical images) to the computer 1802. It is possible to implement image capture device 1814 and computer 1802 as one device. It is also possible that the image capture device 1814 and the computer 1802 communicate wirelessly over a network. In a possible embodiment, the computer 1802 may be located remotely from the image acquisition device 1814.
Any or all of the systems and devices discussed herein may be implemented using one or more computers, such as computer 1802.
Those skilled in the art will recognize that an actual computer or implementation of a computer system may have other structures and may contain other components as well, and that FIG. 18 is a high-level representation of some of the components of such a computer for illustrative purposes.
The foregoing detailed description should be understood as being in all respects illustrative and exemplary, rather than limiting, and the scope of the invention disclosed herein is to be determined not by the detailed description, but rather by the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Various other combinations of features may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
Claims (20)
1. A computer-implemented method, comprising:
receiving input medical data;
receiving, using a machine learning based medical analysis network, results of a medical analysis task performed based on the input medical data;
determining, using a machine learning based audit network, a robustness of the machine learning based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task; and
the determination of the robustness of the machine learning based medical analysis network is output.
2. The computer-implemented method of claim 1, further comprising:
in response to determining that the machine learning based medical analysis network is not robust, determining whether the machine learning based medical analysis network is not robust due to the input medical data being out of distribution with respect to training data on which the machine learning based medical analysis network is trained, or due to artifacts in at least one of the input medical data or the results of the medical analysis task.
3. The computer-implemented method of claim 1, further comprising:
in response to determining that the machine-learning-based medical analysis network is not robust, retraining the machine-learning-based medical analysis network and the machine-learning-based audit network based on the input medical data.
4. The computer-implemented method of claim 1, further comprising:
in response to determining that the machine learning based medical analysis network is not robust, presenting one or more alternative results of the medical analysis task from other machine learning based medical analysis networks.
5. The computer-implemented method of claim 1, further comprising receiving user input editing results of the medical analysis task to generate final results of the medical analysis task, and wherein determining, using the machine-learning-based audit network, robustness of the machine-learning-based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task comprises:
The robustness of the machine learning based medical analysis network is determined based on the final result of the medical analysis task.
6. The computer-implemented method of claim 1, wherein the machine learning based audit network is implemented using a normalizing flow model.
7. The computer-implemented method of claim 1, further comprising:
in response to determining that the machine learning based medical analysis network is not robust, generating an alert to a user informing the user that the machine learning based medical analysis network is not robust or requesting input from the user.
8. The computer-implemented method of claim 7, further comprising:
receiving input from the user to override the determination that the machine-learning-based medical analysis network is not robust or to edit the results of the medical analysis task.
9. The computer-implemented method of claim 1, wherein the medical analysis task comprises at least one of segmenting, determining a vessel centerline, or calculating Fractional Flow Reserve (FFR).
10. An apparatus, comprising:
means for receiving input medical data;
means for receiving results of a medical analysis task performed based on the input medical data using a machine learning based medical analysis network;
means for determining, using the machine-learning-based audit network, a robustness of the machine-learning-based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task; and
means for outputting a determination of robustness of the machine learning based medical analysis network.
11. The apparatus of claim 10, further comprising:
means for determining, in response to determining that the machine-learning-based medical analysis network is not robust, whether the machine-learning-based medical analysis network is not robust due to the input medical data being out of distribution relative to training data on which the machine-learning-based medical analysis network was trained, or due to artifacts in at least one of the input medical data or the results of the medical analysis task.
12. The apparatus of claim 10, further comprising:
means for retraining, in response to determining that the machine-learning-based medical analysis network is not robust, the machine-learning-based medical analysis network and the machine-learning-based audit network based on the input medical data.
13. The apparatus of claim 10, further comprising:
means for presenting, in response to determining that the machine-learning-based medical analysis network is not robust, one or more alternative results of the medical analysis task from other machine-learning-based medical analysis networks.
14. The apparatus of claim 10, further comprising means for receiving user input editing results of the medical analysis task to generate final results of the medical analysis task, and wherein means for determining a robustness of the machine learning based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task using the machine learning based audit network comprises:
means for determining a robustness of the machine learning based medical analysis network based on the final result of the medical analysis task.
15. A non-transitory computer-readable medium storing computer program instructions that, when executed by a processor, cause the processor to perform operations comprising:
receiving input medical data;
receiving results of a medical analysis task performed based on the input medical data using a machine-learning-based medical analysis network;
determining, using a machine learning based audit network, a robustness of the machine learning based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task; and
outputting the determination of the robustness of the machine-learning-based medical analysis network.
16. The non-transitory computer-readable medium of claim 15, wherein the machine-learning-based audit network is implemented using a normalizing flow model.
17. The non-transitory computer-readable medium of claim 15, the operations further comprising receiving user input editing results of the medical analysis task to generate final results of the medical analysis task, and wherein determining, using the machine-learning-based audit network, a robustness of the machine-learning-based medical analysis network for performing the medical analysis task based on the input medical data and the results of the medical analysis task comprises:
determining the robustness of the machine-learning-based medical analysis network based on the final result of the medical analysis task.
18. The non-transitory computer-readable medium of claim 15, the operations further comprising:
in response to determining that the machine-learning-based medical analysis network is not robust, generating an alert informing the user that the machine-learning-based medical analysis network is not robust or requesting input from the user.
19. The non-transitory computer-readable medium of claim 18, the operations further comprising:
receiving input from the user to override the determination that the machine-learning-based medical analysis network is not robust or to edit the results of the medical analysis task.
20. The non-transitory computer-readable medium of claim 15, wherein the medical analysis task comprises at least one of segmenting, determining a vessel centerline, or calculating Fractional Flow Reserve (FFR).
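Claims 6 and 16 recite that the audit network may be implemented using a normalizing flow model: a flow assigns an explicit likelihood to each input, so inputs with unusually low likelihood can be flagged as out of distribution relative to the training data (the failure mode named in claims 2 and 11). A minimal, hypothetical sketch of that idea, using a single RealNVP-style affine coupling layer with untrained random parameters and a percentile-calibrated likelihood threshold; all names, shapes, and parameters here are illustrative and are not taken from the patent, which does not disclose a specific implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters of one affine coupling layer (RealNVP-style) on 2-D inputs.
# Left untrained here purely to keep the sketch short; the claimed audit
# network would be trained on the analysis network's training distribution.
W_s = rng.normal(scale=0.1, size=(1, 1))  # produces the log-scale
W_t = rng.normal(scale=0.1, size=(1, 1))  # produces the shift

def coupling_forward(x):
    """Split x into halves; transform the second half conditioned on the first."""
    x1, x2 = x[:, :1], x[:, 1:]
    s = np.tanh(x1 @ W_s)              # log-scale, conditioned on x1
    t = x1 @ W_t                       # shift, conditioned on x1
    z2 = x2 * np.exp(s) + t
    z = np.concatenate([x1, z2], axis=1)
    log_det = s.sum(axis=1)            # log |det Jacobian| of the coupling
    return z, log_det

def log_likelihood(x):
    """Change-of-variables: log p(x) = log p_base(z) + log |det J|."""
    z, log_det = coupling_forward(x)
    log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
    return log_pz + log_det

# Calibrate a threshold on in-distribution "training" data: anything below
# the 1st percentile of training log-likelihoods is treated as not robust.
train = rng.normal(size=(1000, 2))
threshold = np.percentile(log_likelihood(train), 1)

def is_robust(x):
    """Audit decision: flag inputs to which the flow assigns low likelihood."""
    return log_likelihood(x) >= threshold

in_dist = rng.normal(size=(5, 2))            # looks like the training data
out_dist = rng.normal(loc=8.0, size=(5, 2))  # far outside the training data
print(is_robust(in_dist), is_robust(out_dist))
```

In the claimed workflow, the audit network would score the input medical data together with the analysis result rather than a raw 2-D vector, and a "not robust" decision would trigger the alert, override, or retraining steps of the dependent claims; this sketch only shows the likelihood-thresholding mechanism itself.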
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US17/650,669 | 2022-02-11 | |
EP22465510.0 | 2022-02-11 | |
US17/650,669 US20230260106A1 (en) | 2022-02-11 | 2022-02-11 | Detecting robustness of machine learning models in clinical workflows
Publications (1)
Publication Number | Publication Date |
---|---|
CN116596830A true CN116596830A (en) | 2023-08-15 |
Family
ID=87558777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310100005.0A Pending CN116596830A (en) | 2022-02-11 | 2023-02-09 | Detecting robustness of machine learning models in clinical workflows |
Country Status (2)
Country | Link |
---|---|
US (1) | US20230260106A1 (en) |
CN (1) | CN116596830A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117218457B (en) * | 2023-11-07 | 2024-01-26 | 成都理工大学 | Self-supervision industrial anomaly detection method based on double-layer two-dimensional normalized flow |
- 2022-02-11: US application US17/650,669 filed; published as US20230260106A1 (status: pending)
- 2023-02-09: CN application CN202310100005.0A filed; published as CN116596830A (status: pending)
Also Published As
Publication number | Publication date |
---|---|
US20230260106A1 (en) | 2023-08-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | |
Effective date of registration: 2024-09-02. Address after: Forchheim, Germany; Applicant after: Siemens Medical AG (Germany). Address before: Erlangen; Applicant before: Siemens Healthineers AG (Germany).