CN110753517A - Ultrasound scanning based on probability mapping - Google Patents
- Publication number: CN110753517A (application CN201880030236.6A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification of physiological signals or data involving training the classification device
- A61B8/085 — Clinical applications for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
- A61B8/0866 — Clinical applications involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
- A61B8/0883 — Clinical applications for diagnosis of the heart
- A61B8/0891 — Clinical applications for diagnosis of blood vessels
- A61B8/14 — Echo-tomography
- A61B8/42 — Details of probe positioning or probe attachment to the patient
- A61B8/463 — Displaying means characterised by displaying multiple images or images and diagnostic data on one display
- A61B8/5223 — Processing of medical diagnostic data for extracting a diagnostic or physiological parameter
- A61B8/5292 — Data or image processing using additional data, e.g. patient information, image labeling, acquisition parameters
- A61B8/565 — Details of data transmission or power supply involving data transmission via a network
- G06N3/045 — Combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06N7/01 — Probabilistic graphical models, e.g. probabilistic networks
- G06T7/0012 — Biomedical image inspection
- G06T7/0014 — Biomedical image inspection using an image reference approach
- G16H30/20 — ICT for handling medical images, e.g. DICOM, HL7 or PACS
- G16H30/40 — ICT for processing medical images, e.g. editing
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- A61B5/4325 — Evaluation of the uterine cavities, e.g. uterus, fallopian tubes, ovaries
- A61B5/4381 — Prostate evaluation or disorder diagnosis
- G06T2207/10132 — Ultrasound image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30084 — Kidney; Renal
Abstract
A system may include a probe configured to transmit ultrasound signals to a target of interest and to receive echo information associated with the transmitted ultrasound signals. The system may also include at least one processing device configured to process the received echo information using a machine learning algorithm to generate probability information associated with the target of interest. The at least one processing device may further classify the probability information and output image information corresponding to the target of interest based on the classified probability information.
Description
Related Applications
This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application No. 62/504,709, filed May 11, 2017, the contents of which are incorporated herein by reference in their entirety.
Background
Ultrasound scanners are typically used to identify a target organ or other structures in the body and/or to determine features associated with the target organ/structure, such as the size of the organ/structure or the volume of fluid in the organ. For example, ultrasound scanners are used to identify a patient's bladder and estimate the volume of fluid in the bladder. In typical scenarios, an ultrasound scanner is placed on the patient and triggered to generate ultrasound signals, which include sound waves output at particular frequencies. The scanner may receive echoes from the ultrasound signals and analyze the echoes to determine the volume of fluid in the bladder. For example, the received echoes may be used to generate corresponding images, which may then be analyzed to detect boundaries of the target organ, such as the bladder walls. The volume of the bladder may then be estimated based on the detected boundary information. However, typical ultrasound scanners often suffer from inaccuracies caused by a number of factors, such as variability in the size and/or shape of the target organ of interest between patients, and obstructions within the body that make it difficult to accurately detect features such as the boundaries of the target organ/structure.
Brief Description of the Drawings
FIG. 1A shows an exemplary configuration of a scanning system according to an exemplary embodiment;
FIG. 1B illustrates the operation of the scanning system of FIG. 1A with respect to detecting an organ in a patient;
FIG. 2 shows an exemplary configuration of logic elements included in the scanning system of FIG. 1A;
FIG. 3 shows a portion of the data acquisition unit of FIG. 2 in an exemplary embodiment;
FIG. 4 illustrates a portion of the autoencoder unit of FIG. 2 in an exemplary embodiment;
FIG. 5 illustrates an exemplary configuration of components included in one or more of the elements of FIG. 2;
FIG. 6 is a flowchart illustrating processing performed by the various components shown in FIG. 2 according to an exemplary embodiment;
FIG. 7 illustrates output generated by the autoencoder of FIG. 2 in an exemplary embodiment;
FIG. 8 illustrates binarization processing in accordance with the processing of FIG. 6;
FIG. 9 is a flowchart associated with displaying information via the base unit of FIG. 1A; and
FIG. 10 shows exemplary image data output by the base unit in accordance with the processing of FIG. 9.
Detailed Description
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. In addition, the following detailed description does not limit the invention.
Embodiments described herein relate to using machine learning, including neural networks and deep learning, to identify an organ or structure of interest in a patient based on information obtained via an ultrasound scanner. For example, a scanner may be used to transmit a number of ultrasound signals toward a target organ, and machine learning techniques/algorithms may be used to process echo information associated with the transmitted signals. The machine learning processing may be used to identify the target of interest and generate probability information associated with each portion or pixel of an image generated based on the received ultrasound echo data.
For example, in one embodiment, ultrasound echo data (e.g., B-mode echo data associated with ultrasound signals transmitted on a number of different scan planes directed at the target organ) may be used to generate a probability map for each B-mode image. In one embodiment, each pixel in a B-mode image may be mapped to a probability indicating whether that particular pixel is located within, or is part of, the target organ/structure. The results of the pixel-by-pixel analysis are used to generate a target probability map. Binarization and post-processing may then be performed to remove noise and provide a more accurate representation of the organ as compared to conventional scanners that attempt to determine the boundary walls of the target organ and estimate its size based on the boundary information. In some embodiments, the output from the post-processing is displayed to medical personnel and may aid in easily locating the organ while performing the ultrasound scan. Additional post-processing may also be performed to estimate the volume of the target organ, such as the volume of fluid in a patient's bladder.
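The probability-map, binarization, and size-estimation flow described above can be sketched as follows. This is a minimal illustration, not an implementation from the embodiments: the function names, the 0.5 threshold, and the toy 4x4 probability map are assumptions chosen for clarity.

```python
import numpy as np

def binarize_probability_map(prob_map, threshold=0.5):
    """Convert a per-pixel probability map into a binary organ mask."""
    return (prob_map >= threshold).astype(np.uint8)

def estimate_area(mask, pixel_area_mm2):
    """Estimate organ cross-sectional area from the binary mask."""
    return int(mask.sum()) * pixel_area_mm2

# Toy 4x4 probability map: high values mark the "organ" region.
prob_map = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.9, 0.2],
    [0.0, 0.1, 0.2, 0.1],
])
mask = binarize_probability_map(prob_map)
area = estimate_area(mask, pixel_area_mm2=1.0)  # 4 pixels above threshold
```

A volume estimate would extend the same idea by summing masked voxels across the scan planes, scaled by the per-voxel volume.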
FIG. 1A is a diagram illustrating a scanning system 100 according to an exemplary embodiment. Referring to FIG. 1A, scanning system 100 includes a probe 110, a base unit 120, and a cable 130.
Probe 110 includes a handle portion 112 (also referred to as handle 112), a trigger 114, and a nose portion 116 (also referred to as a dome or dome portion 116). Medical personnel may hold probe 110 by handle 112 and press trigger 114 to activate one or more ultrasound transceivers and transducers located in nose portion 116 to transmit ultrasound signals toward a target organ of interest. For example, FIG. 1B shows probe 110 located on the pelvic area of patient 150 and over a target organ of interest, which in this example is the patient's bladder 152.
Handle 112 allows a user to move probe 110 relative to patient 150. As described above, trigger 114 initiates an ultrasound scan of a selected anatomical portion while dome 116 is in contact with a surface portion of patient 150 when the selected anatomical portion is scanned. Dome 116 is typically formed of a material that provides appropriate acoustic impedance matching to the anatomical portion and/or permits ultrasound energy to be properly focused as it is projected into the anatomical portion. For example, an acoustic gel or gel pad, illustrated at area 154 in FIG. 1B, may be applied to the skin of patient 150 over the region of interest (ROI) to provide acoustic impedance matching when dome 116 is placed against the skin.
Dome 116 includes one or more ultrasound transceiver elements and one or more transducer elements (not shown in FIG. 1A or FIG. 1B). The transceiver elements transmit ultrasound energy outwardly from dome 116 and receive acoustic reflections or echoes generated by internal structures/tissue within the anatomical portion. The one or more ultrasound transducer elements may include a one-dimensional or two-dimensional array of piezoelectric elements that may be moved within dome 116 by a motor to provide different scan directions for the transmission of ultrasound signals by the transceiver elements. Alternatively, the transducer elements may be fixed with respect to probe 110 so that the selected anatomical region may be scanned by selectively energizing the elements in the array.
In some embodiments, probe 110 may include a directional indicator panel (not shown in FIG. 1A) that includes a number of arrows that may be illuminated for initial targeting and guiding the user toward the target organ or structure within the ROI. For example, in some embodiments, if the organ or structure is centered with respect to the placement of probe 110 against the skin surface at a first location on patient 150, the directional arrows may not be illuminated. However, if the organ is off-center, an arrow or set of arrows may be illuminated to direct the user to reposition probe 110 at a second or subsequent skin location on patient 150. In other embodiments, the directional indicators may be presented on display 122 of base unit 120.
One or more transceivers located in probe 110 may include an inertial reference unit that includes an accelerometer and/or gyroscope, preferably positioned within or near dome 116. The accelerometer may be operable to sense an acceleration of the transceiver, preferably relative to a coordinate system, while the gyroscope may be operable to sense an angular velocity of the transceiver relative to the same or another coordinate system. Accordingly, the gyroscope may be of a conventional configuration that employs dynamic elements, or it may be an optoelectronic device, such as an optical ring gyroscope. In one embodiment, the accelerometer and the gyroscope may include commonly packaged and/or solid-state devices. In other embodiments, the accelerometer and/or gyroscope may include commonly packaged micro-electromechanical system (MEMS) devices. In each case, the accelerometer and gyroscope cooperatively permit the determination of positional and/or angular changes relative to a known position that is proximate to an anatomical region of interest in the patient.
Probe 110 may communicate with base unit 120 via a wired connection, such as cable 130. In other embodiments, probe 110 may communicate with base unit 120 via a wireless connection (e.g., Bluetooth, WiFi, etc.). In each case, base unit 120 includes display 122 to allow the user to view processed results from an ultrasound scan and/or to allow operational interaction with respect to the user during operation of probe 110. For example, display 122 may include an output display/screen, such as a liquid crystal display (LCD), a light-emitting diode (LED) based display, or another type of display that provides text and/or image data to the user. For example, display 122 may provide instructions for positioning probe 110 relative to a selected anatomical portion of patient 150. Display 122 may also display two-dimensional or three-dimensional images of the selected anatomical region.
In some embodiments, display 122 may include a graphical user interface (GUI) that allows the user to select various features associated with an ultrasound scan. For example, display 122 may allow the user to select whether patient 150 is male, female, or a child. This allows system 100 to automatically adapt the transmission, reception, and processing of ultrasound signals to the anatomy of the selected patient, such as adapting system 100 to accommodate various anatomical details of male and female patients. For example, when a male patient is selected via the GUI on display 122, system 100 may be configured to locate a single cavity, such as the bladder, in the male patient. In contrast, when a female patient is selected via the GUI, system 100 may be configured to image an anatomical portion having multiple cavities, such as a body region that includes a bladder and a uterus. Similarly, when a child patient is selected, system 100 may be configured to adjust the transmission based on the smaller size of the child patient. In alternative embodiments, system 100 may include a cavity selector configured to select either a single-cavity or multi-cavity scanning mode that may be used for male and/or female patients. The cavity selector may thus allow imaging of a single-cavity region, or a multi-cavity region, such as a region that includes an aorta and a heart. Additionally, as described below, the selection of patient type (e.g., male, female, child) may be used when analyzing the images to aid in providing an accurate representation of the target organ.
To scan a selected anatomical portion of a patient, dome 116 may be positioned against a surface portion of patient 150, as shown in FIG. 1B, that is proximate to the anatomical portion to be scanned. The user actuates the transceiver by depressing trigger 114. In response, the transducer elements optionally position the transceiver, which transmits ultrasound signals into the body and receives corresponding return echo signals that may be at least partially processed by the transceiver to generate an ultrasound image of the selected anatomical portion. In a particular embodiment, system 100 transmits ultrasound signals in a range that extends from approximately two megahertz (MHz) to approximately 10 or more MHz (e.g., 18 MHz).
In one embodiment, probe 110 may be coupled to base unit 120, which is configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and to transfer the ultrasound energy to the transceiver. Base unit 120 also includes one or more processors or processing logic configured to process reflected ultrasound energy received by the transceiver to produce an image of the scanned anatomical region.
In yet another particular embodiment, probe 110 may be a self-contained device that includes a microprocessor positioned within probe 110 and software associated with the microprocessor to operably control the transceiver and to process the reflected ultrasound energy to generate the ultrasound image. Accordingly, a display on probe 110 may be used to display the generated image and/or view other information associated with the operation of the transceiver. For example, the information may include alphanumeric data indicating a preferred position of the transceiver prior to performing a series of scans. In other embodiments, the transceiver may be coupled to a general-purpose computer, such as a laptop or desktop computer, that includes software that at least partially controls the operation of the transceiver, and also includes software to process information transferred from the transceiver so that an image of the scanned anatomical region may be generated.
FIG. 2 is a block diagram of functional logic components implemented in system 100 in accordance with an exemplary embodiment. Referring to FIG. 2, system 100 includes a data acquisition unit 210, a convolutional neural network (CNN) autoencoder unit 220, a post-processing unit 230, aiming logic 240, and volume estimation logic 250. In an exemplary embodiment, probe 110 may include data acquisition unit 210, and the other functional units (e.g., CNN autoencoder unit 220, post-processing unit 230, aiming logic 240, and volume estimation logic 250) may be implemented in base unit 120. In other embodiments, particular units and/or logic may be implemented by other devices, such as via a computing device or server located externally with respect to both probe 110 and base unit 120 (e.g., a computing device or server accessible via a wireless connection to the Internet, to a local area network within a hospital, etc.). For example, probe 110 may transmit echo data and/or image data to a processing system located remotely from probe 110 and base unit 120 via, for example, a wireless connection (e.g., WiFi or some other wireless protocol/technology).
As described above, probe 110 may include a transceiver that produces ultrasound signals, receives echoes from the transmitted signals, and generates B-mode image data based on the received echoes (e.g., the magnitude or intensity of the received echoes). In an exemplary embodiment, data acquisition unit 210 obtains data associated with multiple scan planes corresponding to the region of interest in patient 150. For example, probe 110 may receive echo data that is processed by data acquisition unit 210 to generate two-dimensional (2D) B-mode image data to determine the size and/or volume of the bladder. In other embodiments, probe 110 may receive echo data that is processed to generate three-dimensional (3D) image data that may be used to determine the size and/or volume of the bladder.
For example, FIG. 3 shows an exemplary data acquisition unit 210 used to obtain 3D image data. Referring to FIG. 3, data acquisition unit 210 includes a transducer 310, an outer surface 320 of dome portion 116, and a base 360. The elements shown in FIG. 3 may be included within dome portion 116 of probe 110.
Transducer 310 may transmit ultrasound signals from probe 110, as indicated at 330 in FIG. 3. Transducer 310 may be mounted to allow rotation about two perpendicular axes. For example, transducer 310 may rotate about a first axis 340 relative to base 360 and rotate about a second axis 350 relative to base 360. The first axis 340 is referred to herein as the theta axis, and the second axis 350 is referred to herein as the phi axis. In an exemplary embodiment, the theta and phi ranges of motion may be less than 180 degrees. In one embodiment, interlaced scanning may be performed with respect to the theta motion and the phi motion. For example, transducer 310 may move in the theta direction followed by movement in the phi direction. This enables data acquisition unit 210 to obtain smooth and continuous volume scans and increases the rate at which scan data is acquired.
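The interlaced theta/phi motion described above can be sketched as a simple angle-sequence generator. The serpentine ordering (alternating sweep direction), the angular span, and the step counts below are illustrative assumptions, not parameters from the embodiments:

```python
import numpy as np

def interlaced_scan_angles(n_theta, n_phi, span_deg=120.0):
    """Generate (theta, phi) scan positions: sweep theta at each phi step,
    reversing alternate theta sweeps so the motor moves continuously."""
    thetas = np.linspace(-span_deg / 2, span_deg / 2, n_theta)
    phis = np.linspace(-span_deg / 2, span_deg / 2, n_phi)
    pairs = []
    for i, phi in enumerate(phis):
        sweep = thetas if i % 2 == 0 else thetas[::-1]
        for theta in sweep:
            pairs.append((float(theta), float(phi)))
    return pairs

# 4 theta positions per plane, 3 phi planes: 12 scan positions total,
# all within the less-than-180-degree motion range noted above.
angles = interlaced_scan_angles(n_theta=4, n_phi=3)
```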
In an exemplary embodiment, data acquisition unit 210 may resize the B-mode images before forwarding them to CNN autoencoder unit 220. For example, data acquisition unit 210 may include logic that reduces the size of the B-mode images via a reduction or decimation process. The reduced-size B-mode images may then be input to CNN autoencoder unit 220, which generates output probability maps, as described in more detail below. In alternative embodiments, CNN autoencoder unit 220 may itself reduce or decimate the input B-mode images at the input layer. In either case, reducing the size/amount of the B-mode image data may reduce the processing time and processing power required by CNN autoencoder unit 220 to process the B-mode image data. In still other embodiments, data acquisition unit 210 may not perform resizing prior to inputting the B-mode image data to CNN autoencoder unit 220. In other embodiments, data acquisition unit 210 and/or CNN autoencoder unit 220 may perform image enhancement operations, such as brightness normalization, contrast enhancement, or scan conversion, to improve the accuracy with respect to generating the output data.
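The decimation and brightness-normalization steps mentioned above can be sketched as follows. The stride-based decimation, min-max normalization, and stand-in frame are illustrative assumptions; a production pipeline might use low-pass filtering before decimation to avoid aliasing:

```python
import numpy as np

def decimate_image(image, factor=2):
    """Reduce B-mode image size by keeping every `factor`-th sample."""
    return image[::factor, ::factor]

def normalize_brightness(image):
    """Scale pixel intensities to the [0, 1] range (min-max normalization)."""
    image = image.astype(np.float32)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

bmode = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in B-mode frame
small = decimate_image(bmode)        # 8x8 -> 4x4, quartering the pixel count
norm = normalize_brightness(small)   # intensities now span 0.0 .. 1.0
```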
Referring again to FIG. 2, CNN autoencoder unit 220 may include logic for processing data received via data acquisition unit 210. In an exemplary embodiment, CNN autoencoder unit 220 may perform deep neural network (DNN) processing that includes multiple convolutional layers of processing and multiple kernels or filters for each layer, as described in more detail below. The terms "CNN autoencoder unit" and "autoencoder unit" as used herein should be broadly construed to include neural network and/or machine learning systems/units in which both the input and the output carry spatial information, as opposed to classifiers that output a global label without spatial information.
For example, CNN autoencoder unit 220 includes logic that maps the received image input to the output with the smallest possible amount of distortion. CNN processing may be similar to other types of neural network processing, but CNN processing uses the explicit assumption that the input is an image, which allows CNN processing to more easily encode various properties/constraints into the processing, thereby reducing the number of parameters that must be processed or factored in by CNN autoencoder unit 220. In an exemplary embodiment, CNN autoencoder unit 220 performs convolution processing to generate feature maps associated with the input image. The feature maps may then be sampled multiple times to generate the output. In an exemplary embodiment, the kernel size of the CNN used by CNN autoencoder unit 220 may be 17x17 or less to provide adequate speed for generating the output. In addition, the 17x17 kernel size allows CNN autoencoder unit 220 to capture adequate information surrounding a point of interest within the B-mode image data. Further, in accordance with an exemplary embodiment, the number of convolutional layers may be eight or fewer, with five or fewer kernels per layer. It should be understood, however, that smaller kernel sizes (e.g., 3x3, 7x7, 9x9, etc.) or larger kernel sizes (e.g., greater than 17x17), additional kernels per layer (e.g., more than five), and additional convolutional layers (e.g., more than ten and up to hundreds) may be used in other embodiments.
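The speed trade-off behind the kernel-size choice above can be illustrated by counting multiply-accumulate (MAC) operations. This is a back-of-the-envelope sketch under the simplifying assumption of a single input channel and "same"-sized output; the 128x128 image size is an arbitrary example:

```python
def conv_layer_macs(height, width, kernel=17, n_kernels=5):
    """Multiply-accumulate operations for one convolutional layer
    (single input channel, output the same size as the input)."""
    return height * width * kernel * kernel * n_kernels

cost_17 = conv_layer_macs(128, 128, kernel=17)  # 23,674,880 MACs
cost_3 = conv_layer_macs(128, 128, kernel=3)    # 737,280 MACs
# A 17x17 kernel costs (17/3)^2, roughly 32 times, more MACs per layer
# than a 3x3 kernel, which is why kernel size is balanced against speed.
```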
In typical applications involving CNN processing, the data dimensionality (size) is reduced by adding a narrow bottleneck layer within the processing so that only data of interest can pass through the narrow layer. This reduction in data dimensionality is typically achieved by adding "pooling" layers or using larger "strides" to reduce the size of the image processed by the neural network. However, in some embodiments described herein relating to bladder detection, in which the spatial accuracy of the detected bladder wall position is important for accurate volume calculation, pooling and/or large strides are used minimally or are combined with other spatial-resolution-preserving techniques (e.g., residual connections or dilated convolutions).
Although exemplary system 100 is described as using CNN autoencoder unit 220 to process the B-mode input data, in other embodiments, system 100 may include other types of autoencoder units or machine learning units. For example, CNN autoencoder unit 220 may include a neural network structure in which the output layer has the same number of nodes as the input layer. In other embodiments, other types of machine learning modules or units may be used in which the size of the input layer is not equal to the size of the output layer. For example, a machine learning module may generate a probability map output that is (in terms of number of elements) more than twice the size of the input image or less than half the size of the input image. In still other embodiments, the machine learning unit included in system 100 may use various machine learning techniques and algorithms, such as decision trees, support vector machines, Bayesian networks, etc. In each case, system 100 uses machine learning algorithms to generate probability information with respect to the B-mode input data, which in turn may be used to estimate the volume of the target organ of interest, as described in detail below.
FIG. 4 schematically illustrates a portion of CNN autoencoder unit 220 in accordance with an exemplary embodiment. Referring to FIG. 4, CNN autoencoder unit 220 may include a spatial input 410, an FFT input 420, a lookup 422, a feature map 430, a feature map 440, a lookup 442, kernels 450, biases 452, kernels 460, and biases 462. Spatial input 410 may represent the 2D B-mode image data provided by data acquisition unit 210. CNN autoencoder 220 may perform a fast Fourier transform (FFT) to convert the image data to the frequency domain and apply filters or weights to the input FFT via kernel FFTs 450. The output of the convolution processing may be biased via bias values 452, and an inverse fast Fourier transform (IFFT) function is applied, with the result passed to lookup table 422 to generate spatial feature maps 430. CNN autoencoder unit 220 may apply an FFT to spatial feature maps 430 to generate FFT feature maps 440, and the process may be repeated for additional convolutions and kernels. For example, if CNN autoencoder unit 220 includes eight convolutional layers, the process may continue seven more times. In addition, the kernels applied to each subsequent feature map correspond to the number of kernels multiplied by the number of feature maps, as illustrated by the four kernels 460 in FIG. 4. Biases 452 and 462 may also be applied to improve the performance of the CNN processing.
As described above, CNN autoencoder unit 220 may perform the convolutions in the frequency domain using FFTs. This approach allows system 100 to implement the CNN algorithm using less computing power than larger systems that may use multiple computers to execute CNN algorithms. In this manner, system 100 may perform the CNN processing using a hand-held unit and a base station (e.g., probe 110 and base unit 120). In other embodiments, a spatial-domain approach may be used. The spatial-domain approach may use additional processing power in situations where system 100 is able to communicate with other processing devices, such as processing devices connected to system 100 via a network (e.g., a wireless or wired network) and/or processing devices operating with system 100 via a client/server approach (e.g., where system 100 is the client).
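The equivalence that frequency-domain convolution relies on can be demonstrated in a few lines of NumPy. This sketch uses circular convolution for simplicity (a real pipeline would zero-pad to obtain linear convolution); the function names are illustrative:

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """Circular 2D convolution computed in the frequency domain:
    zero-pad the kernel, multiply the FFTs, inverse-transform."""
    kf = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * kf))

def direct_circular_convolve2d(image, kernel):
    """Reference circular convolution computed in the spatial domain."""
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for m in range(kernel.shape[0]):
                for n in range(kernel.shape[1]):
                    # Wrap indices to match the FFT's circular boundary.
                    out[(i + m) % h, (j + n) % w] += image[i, j] * kernel[m, n]
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
ker = rng.random((3, 3))
# Both paths produce the same result (up to floating-point error).
same = np.allclose(fft_convolve2d(img, ker), direct_circular_convolve2d(img, ker))
```

The frequency-domain path replaces the quadruple loop with two FFTs, a pointwise multiply, and an inverse FFT, which is the computational saving the embodiment relies on for the larger 17x17 kernels.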
The output of CNN autoencoder unit 220 is probability information associated with the probability that each processed portion or pixel of the processed input image is within the target organ of interest. For example, CNN autoencoder unit 220 may generate a probability map in which each pixel associated with the processed input image data is mapped to a probability corresponding to a value between 0 and 1, where a value of zero indicates a 0% probability that the pixel is within the target organ and a value of 1 indicates a 100% probability that the pixel is within the target organ, as described in more detail below. CNN autoencoder unit 220 performs the pixel analysis or spatial-location analysis on the processed image, rather than on the input image. As a result, the pixel-by-pixel analysis of the processed image may not correspond one-to-one with the input image. For example, based on resizing of the input image, one processed pixel or spatial location analyzed by CNN autoencoder unit 220 to generate probability information may correspond to multiple pixels in the input image, and vice versa. In addition, the term "probability" as used herein should be construed broadly to include the likelihood that a pixel or portion of an image is within a target or organ of interest. The term "probability information" as used herein should also be construed broadly to include discrete values, such as binary values or other values.
In other embodiments, CNN autoencoder unit 220 may generate a probability map in which each pixel is mapped to various values that may be related to a probability value or indicator (e.g., a value ranging from -10 to 10, a value corresponding to one of 256 grayscale values, etc.). In each case, the values or units generated by CNN autoencoder unit 220 may be used to determine the probability that a pixel or portion of the image is within the target organ. For example, in the 256-grayscale example, a value of 1 may indicate a 0% probability that a pixel or portion of the image is within the target organ, and a value of 256 may indicate a 100% probability that the pixel or portion of the image is within the target organ.
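The 256-grayscale convention above amounts to a linear mapping, sketched here under the stated assumption that level 1 corresponds to 0% and level 256 to 100%:

```python
def gray_to_probability(value, levels=256):
    """Map a grayscale level in [1, levels] to a probability in [0, 1],
    assuming a linear scale: level 1 -> 0% and level `levels` -> 100%."""
    if not 1 <= value <= levels:
        raise ValueError("grayscale value out of range")
    return (value - 1) / (levels - 1)
```

Any other indicator range (e.g., -10 to 10) can be normalized the same way before downstream thresholding.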
In other embodiments, CNN autoencoder unit 220 may generate discrete output values, such as binary values, that indicate whether a pixel or output region is within the target organ. For example, CNN autoencoder unit 220 may include binarization or classification processing that generates discrete values, e.g., a "1" when a pixel is within the target organ and a "0" when the pixel is not within the target organ. In other cases, the generated values may not be binary, but may be related to whether a pixel is inside or outside the target organ.
In some embodiments, CNN autoencoder unit 220 may consider various factors when analyzing the pixel-by-pixel data. For example, CNN autoencoder unit 220 may receive input from the user, via a GUI displayed on display 122 of base unit 120 (Fig. 1A), indicating whether patient 150 is a man, a woman or a child, and adjust the probability values based on stored information relating to the likely size, shape, volume, etc., of the target organ for the particular type of patient. In such embodiments, CNN autoencoder unit 220 may include three different CNNs trained with male, female and child data, respectively, and CNN autoencoder unit 220 may use the appropriate CNN based on the selection.
In some embodiments, CNN autoencoder unit 220 may automatically identify patient demographic information of the subject, such as gender, age, age range, adult or child status, etc., using, for example, the B-mode image data associated with the subject. CNN autoencoder unit 220 may also automatically identify clinical conditions of the subject, such as body mass index (BMI), body size and/or weight, etc., using, for example, the B-mode image data. CNN autoencoder unit 220 may further automatically identify device information while system 100 is scanning, such as position information for probe 110, the aiming quality of probe 110 with respect to the target of interest, etc.
In other embodiments, another processing device (e.g., a processing device similar to autoencoder unit 220 and/or processor 520) may perform the automatic detection of patient demographic information, clinical conditions and/or device information using, for example, another neural network or other processing logic, and may provide the automatically determined output as input to CNN autoencoder unit 220. Furthermore, in still other embodiments, the patient demographic information, clinical conditions and/or device information, patient data, etc., may be entered manually via, for example, display 122 of base unit 120 or via input selections on probe 110. In each case, the information automatically identified by CNN autoencoder unit 220, or manually input to CNN autoencoder unit 220/system 100, may be used to select the appropriate CNN for processing the image data.
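The select-the-appropriate-CNN step can be sketched as a simple dispatch on the identified or entered patient category; the model names and fallback key below are placeholders, not identifiers from the patent, and the strings stand in for trained networks.

```python
def select_cnn(models, patient_category):
    """Return the CNN trained for the given patient category, falling
    back to a general-purpose model when no specific one exists."""
    return models.get(patient_category, models["general"])

# Placeholder model registry; real entries would be trained networks.
models = {
    "male": "cnn_trained_on_male_data",
    "female": "cnn_trained_on_female_data",
    "child": "cnn_trained_on_child_data",
    "general": "cnn_trained_on_all_data",
}
chosen = select_cnn(models, "child")
```

The same dispatch works whether the category comes from a GUI selection or from an upstream classifier.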
In other embodiments, other information may be used to train CNN autoencoder unit 220. For example, CNN autoencoder unit 220 may be trained with patient data associated with the subject, which may include information obtained using the patient's medical history data, as well as information obtained via a physical examination of the patient performed prior to scanning the target of interest. For example, the patient data may include the patient's medical history information, such as the patient's surgical history, chronic disease history (e.g., bladder disease information), previous images of the target of interest (e.g., previous images of the subject's bladder), etc., as well as data obtained via a physical examination of the patient/subject, such as pregnancy status, the presence of scar tissue, hydration issues, abnormalities in the target region (e.g., abdominal distension or swelling), etc. In an exemplary embodiment, the patient data may be input to system 100 via display 122 of base unit 120. In each case, information automatically generated by CNN autoencoder unit 220 and/or another processing device, and/or information manually input to system 100, may be provided as input to the machine learning processing executed by system 100 to help improve the accuracy of the data generated by system 100 associated with the target of interest.
In other cases, autoencoder unit 220 may receive, via a GUI provided on display 122, input information relating to the type of organ being imaged (e.g., bladder, aorta, prostate, heart, kidney, uterus, blood vessels, amniotic fluid, fetus, etc.), the number of organs, etc., and use the appropriate CNN trained in accordance with the selected organ.
Post-processing unit 230 includes logic for receiving the pixel-by-pixel probability information and applying a "smart" binarization probability algorithm. For example, post-processing unit 230 may perform interpolation to more clearly define contour details, as described in detail below. In addition, post-processing unit 230 may adjust the output of CNN autoencoder unit 220 based on the subject type. For example, if "child" was selected via the GUI on display 122 prior to initiating the ultrasound scan using probe 110, post-processing unit 230 may ignore output from CNN autoencoder unit 220 corresponding to locations deeper than a certain depth, since the depth of the bladder in a child is typically shallow due to the small stature of a typical child. As another example, post-processing unit 230 may determine whether to select a single main region or multiple regions of interest based on the organ type. For example, if the type of organ being scanned is a bladder, post-processing unit 230 may select a single main region, since only one bladder exists in the body. However, if the target is the pubic bone, post-processing unit 230 may select up to two regions of interest, corresponding to the two sides of the pubic bone.
Aiming logic 240 includes logic for determining whether the target organ is properly centered with respect to probe 110 during an ultrasound scan. In some embodiments, aiming logic 240 may generate text or graphics to guide the user in adjusting the position of probe 110 to better scan the target organ. For example, aiming logic 240 may analyze data from probe 110 and determine that probe 110 needs to be moved to the left side of patient 150. In this case, aiming logic 240 may output text and/or graphics (e.g., a flashing arrow) to display 122 to guide the user to move probe 110 in the appropriate direction.
Volume estimating logic 250 may include logic for estimating the volume of the target organ. For example, volume estimating logic 250 may estimate the volume based on the 2D images generated by post-processing unit 230, as described in detail below. In the case where 3D images are provided, volume estimating logic 250 may simply use the 3D images to determine the volume of the target organ. Volume estimating logic 250 may output the estimated volume via display 122 and/or a display on probe 110.
The exemplary configuration illustrated in Fig. 2 is provided for simplicity. It should be understood that system 100 may include more or fewer logic units/devices than illustrated in Fig. 2. For example, system 100 may include multiple data acquisition units 210 and multiple processing units that process the received data. In addition, system 100 may include additional elements, such as communication interfaces (e.g., radio frequency transceivers) that transmit and receive information via external networks to aid in analyzing ultrasound signals to identify a target organ of interest.
In addition, various functions are described below as being performed by particular components in system 100. In other embodiments, various functions described as being performed by one device may be performed by another device or multiple other devices, and/or various functions described as being performed by multiple devices may be combined and performed by a single device. For example, in one embodiment, CNN autoencoder unit 220 may convert the input image into probability information, generate an intermediate mapping output (described below), and also convert the intermediate output into, for example, volume information, length information, area information, etc. That is, a single neural network processing device/unit may receive input image data and output processed image output data, along with volume and/or size information. In this example, a separate post-processing unit 230 and/or volume estimating logic 250 may not be needed. Also in this example, any intermediate mapping output may or may not be accessible or visible to an operator of system 100 (e.g., the intermediate mapping may be part of internal processing that is not directly accessible/visible to the user). That is, a neural network included in system 100 (e.g., CNN autoencoder unit 220) may convert the received ultrasound echo information and/or images, and output volume information or other size information for the target of interest, without additional input by a user of system 100 or with only minimal additional input by a user of system 100.
Fig. 5 illustrates an exemplary configuration of device 500. Device 500 may correspond to components such as CNN autoencoder unit 220, post-processing unit 230, aiming logic 240 and volume estimating logic 250. Referring to Fig. 5, device 500 may include bus 510, processor 520, memory 530, input device 540, output device 550 and communication interface 560. Bus 510 may include a path that permits communication among the various elements of device 500. In an exemplary embodiment, all or some of the components illustrated in Fig. 5 may be implemented and/or controlled by processor 520 executing software instructions stored in memory 530.
Processor 520 may include one or more processors, microprocessors or processing logic that may interpret and execute instructions. Memory 530 may include a random access memory (RAM) or another type of dynamic storage device that may store information and instructions for execution by processor 520. Memory 530 may also include a read-only memory (ROM) device or another type of static storage device that may store static information and instructions for use by processor 520. Memory 530 may further include a solid-state drive (SSD). Memory 530 may also include magnetic and/or optical recording media (e.g., a hard disk) and their corresponding drives.
Input device 540 may include mechanisms that permit a user to input information to device 500, such as a keyboard, a keypad, a mouse, a pen, a microphone, a touch screen, voice recognition and/or biometric mechanisms, etc. Output device 550 may include mechanisms that output information to the user, including a display (e.g., a liquid crystal display (LCD)), a printer, a speaker, etc. In some embodiments, a touch screen display may act as both an input device and an output device.
Communication interface 560 may include one or more transceivers that device 500 uses to communicate with other devices via wired, wireless or optical mechanisms. For example, communication interface 560 may include one or more radio frequency (RF) transmitters, receivers and/or transceivers and one or more antennas for transmitting and receiving RF data via a network. Communication interface 560 may also include a modem or an Ethernet interface to a LAN, or other mechanisms for communicating with elements in a network.
The exemplary configuration illustrated in Fig. 5 is provided for simplicity. It should be understood that device 500 may include more or fewer components than illustrated in Fig. 5. In an exemplary embodiment, device 500 performs operations in response to processor 520 executing sequences of instructions contained in a computer-readable medium, such as memory 530. A computer-readable medium may be defined as a physical or logical memory device. The software instructions may be read into memory 530 from another computer-readable medium (e.g., a hard disk drive (HDD), an SSD, etc.) or from another device via communication interface 560. Alternatively, hardwired circuitry, such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc., may be used in place of, or in combination with, software instructions to implement processes consistent with the embodiments described herein. Thus, the embodiments described herein are not limited to any specific combination of hardware circuitry and software.
Fig. 6 is a flow diagram illustrating exemplary processing associated with identifying a target of interest and identifying a parameter (e.g., volume) associated with the target of interest. Processing may begin with a user operating probe 110 to scan a target organ of interest. In this example, it is assumed that the target organ is a bladder. It should be understood that the features described herein may be used to identify other organs or structures within the body.
In an exemplary embodiment, the user may press trigger 114, and a transceiver included in probe 110 transmits ultrasound signals and acquires B-mode data associated with the echo signals received by probe 110 (block 610). In one embodiment, data acquisition unit 210 may transmit ultrasound signals on 12 different planes through the bladder and generate 12 B-mode images corresponding to the 12 different planes. In this embodiment, the data may correspond to 2D image data. In other embodiments, data acquisition unit 210 may generate 3D image data. For example, as discussed above with respect to Fig. 3, data acquisition unit 210 may perform interlaced scanning to generate a 3D image. In each case, the number of transmitted ultrasound signals/scan planes may vary based on the particular embodiment. As described above, in some embodiments data acquisition unit 210 may reduce the size of the B-mode images before forwarding the B-mode data to CNN autoencoder unit 220. For example, data acquisition unit 210 may reduce the size of the B-mode images by 10% or more.
In each case, assume that CNN autoencoder unit 220 receives the 2D B-mode data and processes the data to remove noise from the received data. For example, referring to Fig. 7, CNN autoencoder unit 220 may receive B-mode image data 710, in which the dark region or area 712 corresponds to the bladder. As illustrated, the B-mode image data includes irregularities, or areas that may appear unclear or blurry to the user. For example, region 712 in Fig. 7 includes brighter areas around the periphery of the bladder, as well as unclear boundaries. Such noisy regions may make it difficult to accurately estimate the volume of the bladder.
In this case, CNN autoencoder unit 220 performs denoising on the acquired B-mode image 710 by generating a target probability map (block 620). For example, as described above, CNN autoencoder unit 220 may use CNN techniques to generate probability information associated with each pixel in the input image.
Base unit 120 may then determine whether the entire cone of data (i.e., all scan plane data) has been acquired and processed (block 630). For example, base unit 120 may determine whether all 12 B-mode images corresponding to the 12 different scans through the bladder have been processed. If all of the B-mode image data has not been processed (block 630 - NO), base unit 120 controls moving to the next scan plane position (block 640), and processing continues to block 610 to process the B-mode image associated with another scan plane.
If all of the B-mode image data has been processed (block 630 - YES), base unit 120 may modify the probability maps using 3D information (block 650). For example, CNN autoencoder unit 220 may modify some of the probability information generated by CNN autoencoder unit 220 using stored assumption information regarding the 3D shape and size of the bladder, based on whether the patient is a man, a woman or a child, etc., effectively modifying the size and/or shape of the bladder. That is, as described above, CNN autoencoder unit 220 may use CNNs trained based on the patient's demographic information, the patient's clinical conditions, device information associated with system 100 (e.g., probe 110), the patient's patient data (e.g., patient medical history information and patient examination data), etc. For example, if patient 150 is a man, CNN autoencoder unit 220 may use a CNN trained with male patient data; if patient 150 is a woman, a CNN trained with female patient data; if patient 150 is a child, a CNN trained with child data; a CNN trained based on the patient's age range; a CNN trained with the patient's medical history; and so on. In other embodiments, for example when base unit 120 receives and processes 3D image data, the additional processing may not be performed and block 650 may be skipped. In either case, system 100 may display the P-mode image data (block 660), such as image 720 illustrated in Fig. 7.
In either case, base unit 120 may segment the target region using the probability map via binarization processing (block 670). For example, post-processing unit 230 may receive the output of CNN autoencoder unit 220 and resize the probability map (e.g., via interpolation), smooth the probability map and/or denoise the probability map (e.g., via filtering). For example, in one embodiment, the probability map may be resized via interpolation to a larger size for better resolution and/or to at least partially restore the spatial resolution of the original B-mode image data, which may have been reduced in size. In one embodiment, 2D Lanczos interpolation may be performed to resize the image associated with the target probability map.
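As a rough sketch of the resizing step, the following implements separable Lanczos-2 resampling in NumPy; the window size, normalization and edge handling are illustrative choices, not details taken from the patent.

```python
import numpy as np

def lanczos_kernel(x, a=2):
    """Lanczos-a windowed sinc; zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resize_1d(samples, new_len, a=2):
    """Resample a 1D signal to `new_len` points with Lanczos interpolation."""
    n = len(samples)
    positions = np.arange(new_len) * (n - 1) / (new_len - 1)
    out = np.zeros(new_len)
    for i, p in enumerate(positions):
        lo = int(np.floor(p)) - a + 1
        taps = np.arange(lo, lo + 2 * a)
        idx = np.clip(taps, 0, n - 1)          # clamp at the borders
        w = lanczos_kernel(p - taps, a)
        out[i] = np.sum(w * samples[idx]) / np.sum(w)
    return out

def lanczos_resize_2d(image, new_shape, a=2):
    """Separable 2D resize: resample rows first, then columns."""
    rows = np.array([lanczos_resize_1d(r, new_shape[1], a) for r in image])
    return np.array([lanczos_resize_1d(c, new_shape[0], a) for c in rows.T]).T

rng = np.random.default_rng(1)
img = rng.random((4, 4))
up = lanczos_resize_2d(img, (7, 7))
```

Because the kernel interpolates (it is 1 at zero and 0 at the other integers), the upsampled map reproduces the original values at the original grid positions while filling in smooth detail between them.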
In addition, base unit 120 may perform classification or binarization processing to convert the probability information from the probability mapping unit into binarized output data. For example, post-processing unit 230 may convert the probability values to binary values. When multiple candidate probability values are identified for a particular pixel, post-processing unit 230 may select the most prominent value. In this manner, post-processing unit 230 may apply some "intelligence" to select the most likely value when multiple candidate values are identified.
Fig. 8 schematically illustrates exemplary smart binarization processing. Referring to Fig. 8, image 810 illustrates the output from the probability mapping or pixel classification corresponding to a 2D ultrasound image, in which the probability information has been converted into a grayscale image with various intensities. As illustrated, image 810 includes a gray area labeled 812 and a gray area labeled 814, which represent possible locations of portions of the bladder. Post-processing unit 230 identifies the peak point, or cusp, within image 810 having the maximum intensity, as indicated by crosshair 822 illustrated in image 820. Post-processing unit 230 may then fill the area around the peak point for regions whose intensity is greater than a threshold intensity, as illustrated by area 832 in image 830. In this case, areas whose intensity values are less than the threshold intensity are not filled, resulting in the removal of the gray area 814 shown in image 810. Post-processing unit 230 may then fill the background, as illustrated by area 842 in image 840. Post-processing unit 230 then fills any holes or open areas within the image, as illustrated by area 852 in image 850. The holes in area 842 may correspond to noisy areas or areas associated with some obstruction in patient 150. In this manner, post-processing unit 230 identifies the most likely location and size of the bladder. That is, area 852 is considered to be part of the bladder of patient 150.
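The peak / threshold-fill / background / hole-fill sequence above can be sketched as follows, assuming a NumPy probability map and simple 4-connected flood fills; the array size, threshold and helper names are illustrative, not taken from the patent.

```python
import numpy as np
from collections import deque

def flood(mask, start):
    """4-connected flood fill over True cells of `mask`, from `start`."""
    region = np.zeros_like(mask, dtype=bool)
    region[start] = True
    queue = deque([start])
    h, w = mask.shape
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not region[nr, nc]:
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

def smart_binarize(prob_map, threshold=0.5):
    # 1. Peak point with maximum intensity (crosshair 822).
    peak = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    # 2. Grow the region around the peak where intensity > threshold;
    #    disconnected bright blobs (like area 814) are dropped.
    region = flood(prob_map > threshold, peak)
    # 3. Fill the background from the image border (area 842)...
    outside = ~region
    background = np.zeros_like(region)
    h, w = region.shape
    border = [(r, c) for r in range(h) for c in (0, w - 1)]
    border += [(r, c) for r in (0, h - 1) for c in range(w)]
    for r, c in border:
        if outside[r, c] and not background[r, c]:
            background |= flood(outside, (r, c))
    # 4. ...so anything that is neither region nor background is a hole
    #    inside the target (area 852); fill it in.
    return region | ~(region | background)

pm = np.zeros((7, 7))
pm[1:5, 1:5] = 0.8      # main blob
pm[2, 2] = 1.0          # peak
pm[3, 3] = 0.2          # hole inside the blob
pm[5, 6] = 0.9          # disconnected bright pixel (dropped)
seg = smart_binarize(pm)
```

On this toy map, the disconnected bright pixel is discarded, while the low-intensity hole inside the main blob is filled, mirroring the progression from image 810 to image 850.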
In other embodiments, post-processing unit 230 may use information within image 810 other than the peak intensity value. For example, post-processing unit 230 may use the peak of processed probabilities (e.g., the peak of a smoothed probability map), use multiple peaks to identify multiple filled regions, etc. As further examples, post-processing unit 230 may select a "main" region based on the area, peak probability or average probability in each region. In still other embodiments, post-processing unit 230 may identify the region of the patient's bladder using one or more seed points manually input by the operator via, for example, display 122, using an algorithm that generates one or more seed points, performing another type of thresholding that does not use seed points, etc.
After processing image 810 in this manner, base unit 120 may output an image, such as image 720 illustrated in Fig. 7. Referring to Fig. 7, image 720 includes region 722 corresponding to the bladder. As illustrated, the edges of bladder 722 are much more clearly defined than the boundaries in image 712, providing a more accurate representation of the bladder. In this manner, base unit 120 may use the brightness value of each pixel and the local gradient values of neighboring pixels, along with statistical methods (e.g., hidden Markov models) and neural network algorithms (e.g., CNNs), to generate a probability value for each pixel in the B-mode image and denoise the B-mode data.
Base unit 120 may then convert the segmentation results into a target volume (block 680). For example, post-processing unit 230 may sum the volumes of all voxels in 3D space corresponding to each valid target pixel in the binarized map. That is, volume estimating logic 250 may sum the voxels in the 12 segmented target images to estimate the volume of the bladder. For example, the contribution or volume of each voxel may be precomputed and stored in a lookup table within base unit 120. In this case, volume estimating logic 250 may use the sum of the voxels as an index into the lookup table to determine the estimated volume. Volume estimating logic 250 may also display the volume via display 122 of base unit 120. For example, volume estimating logic 250 may display the estimated volume of the bladder (i.e., 135 milliliters (mL) in this example) at area 724 in Fig. 7, which is output to display 122 of base unit 120. Alternatively, volume estimating logic 250 may display the volume information via a display on probe 110. Post-processing unit 230 may also display the segmentation results (block 690). That is, post-processing unit 230 may display the 12 segments of the bladder via display 122 of base unit 120.
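The voxel-summation and lookup-table step can be sketched as follows; the voxel volume, table size and grid shape are invented for illustration (a real system would precompute per-voxel contributions from the actual scan geometry rather than assume a uniform voxel size).

```python
import numpy as np

# Hypothetical scan geometry: every voxel in the reconstruction grid is
# assumed to cover the same physical volume, a simplifying assumption.
VOXEL_VOLUME_ML = 0.05

# Precomputed lookup table: index = number of segmented voxels,
# value = estimated volume in mL.
MAX_VOXELS = 10_000
VOLUME_LUT = np.arange(MAX_VOXELS + 1) * VOXEL_VOLUME_ML

def estimate_volume(binary_planes):
    """Sum the `1` pixels across all segmented planes and use the total
    as an index into the precomputed volume lookup table."""
    count = int(sum(plane.sum() for plane in binary_planes))
    return VOLUME_LUT[count]

planes = [np.zeros((10, 10), dtype=int) for _ in range(12)]
for p in planes:
    p[3:7, 3:7] = 1            # 16 segmented pixels per plane
volume_ml = estimate_volume(planes)
```

Here 12 planes of 16 segmented pixels each give 192 voxels, so the table returns 9.6 mL; the lookup avoids recomputing per-voxel geometry on every scan.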
In some embodiments, system 100 may not perform binarization processing on the probability map information. For example, in some embodiments, CNN autoencoder unit 220 and/or post-processing unit 230 may apply a lookup table to the probability map information to identify likely portions of the target organ of interest, and display the output via display 122.
Referring back to block 620, in some embodiments post-processing unit 230 may display information in real time as the information is generated. Fig. 9 illustrates exemplary processing associated with providing additional display information to the user. For example, post-processing unit 230 may display the probability mode information (referred to herein as P-mode) via display 122 in real time as it is generated (Fig. 9, block 910). Post-processing unit 230 may also segment the target (block 920) and display the segmentation results with the B-mode images (block 930). For example, Fig. 10 illustrates three B-mode images 1010, 1012 and 1014 and corresponding P-mode images 1020, 1022 and 1024. In other embodiments, all 12 B-mode images and the 12 corresponding P-mode images may be displayed. As illustrated, P-mode images 1020, 1022 and 1024 are much clearer than B-mode images 1010, 1012 and 1014. In addition, in some embodiments, post-processing unit 230 may provide an outline of the bladder boundary displayed in each P-mode image. For example, as illustrated in Fig. 10, each of P-mode images 1020, 1022 and 1024 may include an outline that is, for example, a different color than the interior portion of the bladder, or a brighter color compared to the interior portion of the bladder.
Embodiments described herein use machine learning to identify an organ or structure of interest within a patient based on information obtained via an ultrasound scanner. The machine learning processing may receive image data and generate probability information for each particular portion (e.g., pixel) of the image to determine the probability that the particular portion is within the target organ. Post-processing analysis may also refine the probability information using additional information, such as the gender or age of the patient, the particular target organ, etc. In some cases, the volume of the target organ may also be provided to the user, along with real-time probability-mode images.
The foregoing description of exemplary embodiments provides illustration and description, but is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings, or may be acquired from practice of the embodiments.
For example, features have been described above with respect to identifying a target of interest (e.g., a patient's bladder) and using CNN processing to estimate the volume of the target (e.g., the bladder). In other embodiments, other organs or structures may be identified, and sizes or other parameters associated with the organs/structures may be estimated. For example, the processing described herein may be used to identify and display the prostate, kidneys, uterus, ovaries, aorta, heart, blood vessels, amniotic fluid, a fetus, etc., as well as particular features associated with these targets (e.g., measurements related to volume and/or size).
For example, in embodiments in which the processing described herein is used with respect to various organs or targets other than the bladder (e.g., the aorta, prostate, kidneys, heart, uterus, ovaries, blood vessels, amniotic fluid, a fetus, etc.), additional size-related measurements may be generated. For example, the length, height, width, depth, diameter, area, etc., of the organ or region of interest may be calculated. For example, for a scan of the aorta, measuring the diameter of the aorta may be important when attempting to identify an abnormality, such as an aneurysm. For a prostate scan, the width and height of the prostate may need to be measured. In these cases, the machine learning processing described above may be used to generate/estimate measurements such as length, height, width, depth, diameter, area, etc. That is, the machine learning described above may be used to identify boundary walls or other items of interest, and estimate the particular size-related parameters of interest to medical personnel.
Furthermore, features have been described above primarily with respect to using echo data to generate B-mode images and applying machine learning to the B-mode images to identify volume, length or other information associated with the target. In other embodiments, other types of ultrasound input image data may be used. For example, in other embodiments, C-mode image data may be used, which typically includes a representation of the target of interest (e.g., a bladder) formed in a plane oriented perpendicular to the B-mode images. Still further, in other embodiments, radio frequency (RF) or quadrature signals (e.g., IQ signals) may be used as input to CNN autoencoder unit 220 to generate a probability output map associated with the target.
In addition, features have been described above with respect to generating a single probability map. In other embodiments, multiple probability maps may be generated. For example, system 100 may generate one probability map for the target organ of interest (e.g., the bladder), another probability map for the pubic bone/pubic shadow, and another probability map for the prostate. In this manner, a more accurate representation of the internal organs of patient 150 may be generated, which may enable a more accurate volume estimation for the target organ (e.g., the bladder).
In addition, features described herein relate to pixel-by-pixel analysis of B-mode image data. In other embodiments, an edge map may be used as an alternative to a pixel-by-pixel map. In such an embodiment, a CNN algorithm may be used to detect the edges of the target. In a further embodiment, a polygon-coordinate approach may be used to identify discrete portions of the bladder and then connect the points. In this embodiment, a contour edge-tracking algorithm may be used to connect the points of the target organ.
Furthermore, various inputs have been described above (e.g., information indicating whether the patient is a man, a woman or a child, etc.). Other inputs to the probability mapping and/or binarization may also be used. For example, a body mass index (BMI), age or age range may be input to base unit 120, and base unit 120 may automatically adjust the processing based on the particular BMI, age or age range. Other inputs to the probability mapping and/or binarization processing (e.g., the depth of each pixel, the plane orientation, etc.) may be used to improve the accuracy of the volume estimates and/or output images generated by system 100.
Furthermore, as described above, training data associated with various types of patients (men, women and children) may be used to help generate the P-mode data. For example, thousands or more training data images may be used to generate the CNN algorithm used to process the B-mode input data to identify the target of interest. In addition, thousands or more images may be input to or stored in base unit 120 to help modify the output of CNN autoencoder unit 220. This may be particularly useful in situations where an expected obstruction (e.g., the pubic bone for a bladder scan) adversely affects the image. In these embodiments, base unit 120 may store information regarding how to account for and minimize the effects of the obstruction. CNN autoencoder unit 220 and/or post-processing unit 230 may then more accurately account for the obstruction.
Furthermore, features described herein refer to using B-mode image data as the input to CNN autoencoder unit 220. In other embodiments, other data may be used. For example, the echo data associated with the transmitted ultrasound signals may include harmonic information, which may be used to detect a target organ such as the bladder. In this case, higher-order harmonic echo information relative to the frequency of the transmitted ultrasound signals (e.g., second or higher-order harmonics) may be used to generate the probability map information without generating B-mode images. In other embodiments, the higher-order harmonic information may be used to enhance the P-mode image data in addition to the B-mode data described above. In still further embodiments, probe 110 may transmit ultrasound signals at multiple frequencies, and the echo information associated with the multiple frequencies may be used as input to CNN autoencoder unit 220 or other machine learning modules to detect the target organ and estimate parameters such as the volume, size, etc., of the target organ.
For example, multiple B-mode images at the fundamental frequency and multiple B-mode images at a higher-order harmonic frequency, or at multiple higher-order harmonic frequencies, may be used as inputs to CNN autoencoder unit 220. In addition, the fundamental frequency and harmonic frequency information may be preprocessed and used as input to CNN autoencoder unit 220 to help generate the probability map. For example, the ratio between the harmonic power and the fundamental power may be used as an input to CNN autoencoder unit 220 to enhance the accuracy of the probability map.
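The harmonic-to-fundamental power ratio mentioned above can be computed, for example, as follows; the sample rate, frequencies and band width here are illustrative values, not parameters from the patent.

```python
import numpy as np

def band_power(signal, freq, fs, bandwidth=50.0):
    """Power of `signal` in a narrow band around `freq` (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= freq - bandwidth) & (freqs <= freq + bandwidth)
    return spectrum[band].sum()

def harmonic_ratio(echo, fundamental_hz, fs):
    """Ratio of second-harmonic power to fundamental power, one possible
    preprocessed feature for the probability-map network."""
    p0 = band_power(echo, fundamental_hz, fs)
    p2 = band_power(echo, 2 * fundamental_hz, fs)
    return p2 / p0

fs = 40_000.0                      # sample rate (illustrative)
t = np.arange(4000) / fs
f0 = 2_000.0                       # "transmit" fundamental (illustrative)
echo = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
ratio = harmonic_ratio(echo, f0, fs)
```

For the synthetic echo above, with a second harmonic at 0.3 of the fundamental amplitude, the power ratio comes out near 0.09, i.e., the square of the amplitude ratio.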
In addition, in some embodiments, the post-processing described above may use a second machine learning (e.g., CNN) algorithm to denoise the image data and/or perform contour/edge tracking on the images.
Furthermore, embodiments have been described above with respect to data acquisition unit 210 acquiring two-dimensional (2D) B-mode image data. In other embodiments, higher-dimensional image data (e.g., 2.5D or 3D) may be input to CNN autoencoder unit 220. For example, for a 2.5D embodiment, CNN autoencoder unit 220 may use B-mode images associated with several scan planes, as well as neighboring scan planes, to improve accuracy. For a 3D embodiment, CNN autoencoder unit 220 may generate 12 probability maps, one for each of the 12 scan planes, and post-processing unit 230 may use all 12 probability maps to generate a 3D image based on the 12 probability maps (e.g., via a 3D flood-fill algorithm). Classification and/or binarization processing may then be performed on the 2.5D or 3D image to generate, for example, a 3D output image.
Further, while series of acts have been described with respect to Figs. 6 and 9, the order of the acts may be different in other embodiments. Moreover, non-dependent acts may be implemented in parallel.
It will be apparent that the various features described above may be implemented in many different forms of software, firmware and hardware in the embodiments illustrated in the figures. The actual software code or specialized control hardware used to implement the various features is not limiting. Thus, the operation and behavior of the features were described without reference to the specific software code, it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the various features based on the description herein.
Further, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as one or more processors, microprocessors, application-specific integrated circuits, field-programmable gate arrays or other processing logic, software, or a combination of hardware and software.
In the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the appended claims. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
No element, act or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
Claims (21)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762504709P | 2017-05-11 | 2017-05-11 | |
US62/504,709 | 2017-05-11 | ||
PCT/US2018/032247 WO2018209193A1 (en) | 2017-05-11 | 2018-05-11 | Probability map-based ultrasound scanning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110753517A true CN110753517A (en) | 2020-02-04 |
Family
ID=62685100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880030236.6A Pending CN110753517A (en) | 2017-05-11 | 2018-05-11 | Ultrasound scanning based on probability mapping |
Country Status (7)
Country | Link |
---|---|
US (1) | US12217445B2 (en) |
EP (1) | EP3621525A1 (en) |
JP (1) | JP6902625B2 (en) |
KR (2) | KR20200003400A (en) |
CN (1) | CN110753517A (en) |
CA (1) | CA3062330A1 (en) |
WO (1) | WO2018209193A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112184683A (en) * | 2020-10-09 | 2021-01-05 | 深圳度影医疗科技有限公司 | Ultrasonic image identification method, terminal equipment and storage medium |
CN113616235A (en) * | 2020-05-07 | 2021-11-09 | 中移(成都)信息通信科技有限公司 | Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2740698C2 (en) | 2016-03-09 | 2021-01-19 | Эконаус, Инк. | Systems and methods of recognizing ultrasonic images performed using a network with artificial intelligence |
KR102139856B1 (en) * | 2017-06-23 | 2020-07-30 | 울산대학교 산학협력단 | Method for ultrasound image processing |
EP3420913B1 (en) * | 2017-06-26 | 2020-11-18 | Samsung Medison Co., Ltd. | Ultrasound imaging apparatus and control method thereof |
US11622744B2 (en) * | 2017-07-07 | 2023-04-11 | Massachusetts Institute Of Technology | System and method for automated ovarian follicular monitoring |
WO2019189386A1 (en) * | 2018-03-30 | 2019-10-03 | 富士フイルム株式会社 | Ultrasound diagnostic device and control method of ultrasound diagnostic device |
US11391817B2 (en) | 2018-05-11 | 2022-07-19 | Qualcomm Incorporated | Radio frequency (RF) object detection using radar and machine learning |
US10878570B2 (en) * | 2018-07-17 | 2020-12-29 | International Business Machines Corporation | Knockout autoencoder for detecting anomalies in biomedical images |
US20210265042A1 (en) * | 2018-07-20 | 2021-08-26 | Koninklijke Philips N.V. | Ultrasound imaging by deep learning and associated devices, systems, and methods |
WO2020122606A1 (en) | 2018-12-11 | 2020-06-18 | 시너지에이아이 주식회사 | Method for measuring volume of organ by using artificial neural network, and apparatus therefor |
JP7192512B2 (en) * | 2019-01-11 | 2022-12-20 | 富士通株式会社 | Learning program, learning device and learning method |
CA3126020C (en) * | 2019-01-17 | 2024-04-23 | Verathon Inc. | Systems and methods for quantitative abdominal aortic aneurysm analysis using 3d ultrasound imaging |
JP7273518B2 (en) * | 2019-01-17 | 2023-05-15 | キヤノンメディカルシステムズ株式会社 | Ultrasound diagnostic equipment and learning program |
JP7258568B2 (en) * | 2019-01-18 | 2023-04-17 | キヤノンメディカルシステムズ株式会社 | ULTRASOUND DIAGNOSTIC DEVICE, IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING PROGRAM |
JP7302988B2 (en) * | 2019-03-07 | 2023-07-04 | 富士フイルムヘルスケア株式会社 | Medical imaging device, medical image processing device, and medical image processing program |
JP7242409B2 (en) * | 2019-04-26 | 2023-03-20 | キヤノンメディカルシステムズ株式会社 | MEDICAL IMAGE PROCESSING DEVICE, ULTRASOUND DIAGNOSTIC DEVICE, AND LEARNED MODEL CREATION METHOD |
WO2020252330A1 (en) | 2019-06-12 | 2020-12-17 | Carnegie Mellon University | System and method for labeling ultrasound data |
US11986345B2 (en) | 2019-07-12 | 2024-05-21 | Verathon Inc. | Representation of a target during aiming of an ultrasound probe |
US20210045716A1 (en) * | 2019-08-13 | 2021-02-18 | GE Precision Healthcare LLC | Method and system for providing interaction with a visual artificial intelligence ultrasound image segmentation module |
CN110567558B (en) * | 2019-08-28 | 2021-08-10 | 华南理工大学 | Ultrasonic guided wave detection method based on deep convolution characteristics |
CN112568935B (en) * | 2019-09-29 | 2024-06-25 | 中慧医学成像有限公司 | Three-dimensional ultrasonic imaging method and system based on three-dimensional tracking camera |
US11583244B2 (en) * | 2019-10-04 | 2023-02-21 | GE Precision Healthcare LLC | System and methods for tracking anatomical features in ultrasound images |
US20210183521A1 (en) * | 2019-12-13 | 2021-06-17 | Korea Advanced Institute Of Science And Technology | Method and apparatus for quantitative imaging using ultrasound data |
JP7093093B2 (en) * | 2020-01-08 | 2022-06-29 | 有限会社フロントエンドテクノロジー | Ultrasonic urine volume measuring device, learning model generation method, learning model |
KR102246966B1 (en) | 2020-01-29 | 2021-04-30 | 주식회사 아티큐 | Method for Recognizing Object Target of Body |
EP4132366A1 (en) * | 2020-04-07 | 2023-02-15 | Verathon, Inc. | Automated prostate analysis system |
KR102238280B1 (en) * | 2020-12-09 | 2021-04-08 | 박지현 | Underwater target detection system and method of thereof |
US20230070062A1 (en) * | 2021-08-27 | 2023-03-09 | Clarius Mobile Health Corp. | Method and system, using an ai model, for identifying and predicting optimal fetal images for generating an ultrasound multimedia product |
JP2023034400A (en) * | 2021-08-31 | 2023-03-13 | DeepEyeVision株式会社 | Information processing device, information processing method and program |
JP2023087273A (en) | 2021-12-13 | 2023-06-23 | 富士フイルム株式会社 | Ultrasonic diagnostic device and control method of ultrasonic diagnostic device |
JP2023143418A (en) * | 2022-03-25 | 2023-10-06 | 富士フイルム株式会社 | Ultrasonic diagnostic device and operation method thereof |
WO2024101255A1 (en) * | 2022-11-08 | 2024-05-16 | 富士フイルム株式会社 | Medical assistance device, ultrasonic endoscope, medical assistance method, and program |
CN118071746B (en) * | 2024-04-19 | 2024-08-30 | 广州索诺星信息科技有限公司 | Ultrasonic image data management system and method based on artificial intelligence |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6238342B1 (en) * | 1998-05-26 | 2001-05-29 | Riverside Research Institute | Ultrasonic tissue-type classification and imaging methods and apparatus |
WO2001082787A2 (en) * | 2000-05-03 | 2001-11-08 | University Of Washington | Method for determining the contour of an in vivo organ using multiple image frames of the organ |
US20090093717A1 (en) * | 2007-10-04 | 2009-04-09 | Siemens Corporate Research, Inc. | Automated Fetal Measurement From Three-Dimensional Ultrasound Data |
CN102629376A (en) * | 2011-02-11 | 2012-08-08 | 微软公司 | Image registration |
US20140052001A1 (en) * | 2012-05-31 | 2014-02-20 | Razvan Ioan Ionasec | Mitral Valve Detection for Transthoracic Echocardiography |
CN104840209A (en) * | 2014-02-19 | 2015-08-19 | 三星电子株式会社 | Apparatus and method for lesion detection |
CN106204465A (en) * | 2015-05-27 | 2016-12-07 | 美国西门子医疗解决公司 | Knowledge based engineering ultrasonoscopy strengthens |
US9536054B1 (en) * | 2016-01-07 | 2017-01-03 | ClearView Diagnostics Inc. | Method and means of CAD system personalization to provide a confidence level indicator for CAD system recommendations |
Family Cites Families (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2212267B (en) | 1987-11-11 | 1992-07-29 | Circulation Res Ltd | Methods and apparatus for the examination and treatment of internal organs |
US5081933A (en) * | 1990-03-15 | 1992-01-21 | Utdc Inc. | Lcts chassis configuration with articulated chassis sections between vehicles |
JPH06233761A (en) * | 1993-02-09 | 1994-08-23 | Hitachi Medical Corp | Image diagnostic device for medical purpose |
US5734739A (en) * | 1994-05-31 | 1998-03-31 | University Of Washington | Method for determining the contour of an in vivo organ using multiple image frames of the organ |
US5871019A (en) | 1996-09-23 | 1999-02-16 | Mayo Foundation For Medical Education And Research | Fast cardiac boundary imaging |
US5984870A (en) * | 1997-07-25 | 1999-11-16 | Arch Development Corporation | Method and system for the automated analysis of lesions in ultrasound images |
AU5117699A (en) | 1998-07-21 | 2000-02-14 | Acoustic Sciences Associates | Synthetic structural imaging and volume estimation of biological tissue organs |
WO2001082225A2 (en) | 2000-04-24 | 2001-11-01 | Washington University | Method and apparatus for probabilistic model of ultrasonic images |
US8435181B2 (en) | 2002-06-07 | 2013-05-07 | Verathon Inc. | System and method to identify and measure organ wall boundaries |
GB2391625A (en) | 2002-08-09 | 2004-02-11 | Diagnostic Ultrasound Europ B | Instantaneous ultrasonic echo measurement of bladder urine volume with a limited number of ultrasound beams |
US7744534B2 (en) | 2002-06-07 | 2010-06-29 | Verathon Inc. | 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume |
JP4244300B2 (en) | 2003-03-24 | 2009-03-25 | 富士フイルム株式会社 | Ultrasonic transceiver |
US6932770B2 (en) | 2003-08-04 | 2005-08-23 | Prisma Medical Technologies Llc | Method and apparatus for ultrasonic imaging |
US7720269B2 (en) | 2003-10-02 | 2010-05-18 | Siemens Medical Solutions Usa, Inc. | Volumetric characterization using covariance estimation from scale-space hessian matrices |
US20050089205A1 (en) * | 2003-10-23 | 2005-04-28 | Ajay Kapur | Systems and methods for viewing an abnormality in different kinds of images |
US7555151B2 (en) | 2004-09-02 | 2009-06-30 | Siemens Medical Solutions Usa, Inc. | System and method for tracking anatomical structures in three dimensional images |
US7627386B2 (en) | 2004-10-07 | 2009-12-01 | Zonaire Medical Systems, Inc. | Ultrasound imaging system parameter optimization via fuzzy logic |
US7831081B2 (en) | 2005-08-15 | 2010-11-09 | Boston Scientific Scimed, Inc. | Border detection in medical image analysis |
US8047990B2 (en) | 2006-01-19 | 2011-11-01 | Burdette Everette C | Collagen density and structural change measurement and mapping in tissue |
US8055098B2 (en) | 2006-01-27 | 2011-11-08 | Affymetrix, Inc. | System, method, and product for imaging probe arrays with small feature sizes |
US8078255B2 (en) | 2006-03-29 | 2011-12-13 | University Of Georgia Research Foundation, Inc. | Virtual surgical systems and methods |
US8157736B2 (en) | 2006-04-18 | 2012-04-17 | Siemens Corporation | System and method for feature detection in ultrasound images |
US20110137172A1 (en) | 2006-04-25 | 2011-06-09 | Mcube Technology Co., Ltd. | Apparatus and method for measuring an amount of urine in a bladder |
KR100779548B1 (en) | 2006-04-25 | 2007-11-27 | (주) 엠큐브테크놀로지 | Ultrasound diagnostic device and ultrasound diagnostic method |
US20140024937A1 (en) | 2006-04-25 | 2014-01-23 | Mcube Technology Co., Ltd. | Apparatus and method for measuring an amount of urine in a bladder |
CN101448461B (en) | 2006-05-19 | 2011-04-06 | 株式会社日立医药 | Ultrasonic diagnostic device and boundary extraction method |
US8905932B2 (en) | 2006-08-17 | 2014-12-09 | Jan Medical Inc. | Non-invasive characterization of human vasculature |
US8167803B2 (en) | 2007-05-16 | 2012-05-01 | Verathon Inc. | System and method for bladder detection using harmonic imaging |
CN101677805B (en) | 2007-06-01 | 2013-05-29 | 皇家飞利浦电子股份有限公司 | Wireless ultrasound probe cable |
CN101848677B (en) * | 2007-09-26 | 2014-09-17 | 麦德托尼克公司 | Frequency selective monitoring of physiological signals |
US8175351B2 (en) * | 2008-09-16 | 2012-05-08 | Icad, Inc. | Computer-aided detection and classification of suspicious masses in breast imagery |
US8265390B2 (en) * | 2008-11-11 | 2012-09-11 | Siemens Medical Solutions Usa, Inc. | Probabilistic segmentation in computer-aided detection |
EP2194486A1 (en) | 2008-12-04 | 2010-06-09 | Koninklijke Philips Electronics N.V. | A method, apparatus, and computer program product for acquiring medical image data |
WO2010066007A1 (en) | 2008-12-12 | 2010-06-17 | Signostics Limited | Medical diagnostic method and apparatus |
US20100158332A1 (en) * | 2008-12-22 | 2010-06-24 | Dan Rico | Method and system of automated detection of lesions in medical images |
US8467856B2 (en) | 2009-07-17 | 2013-06-18 | Koninklijke Philips Electronics N.V. | Anatomy modeling for tumor region of interest definition |
US8343053B2 (en) | 2009-07-21 | 2013-01-01 | Siemens Medical Solutions Usa, Inc. | Detection of structure in ultrasound M-mode imaging |
JP5645432B2 (en) | 2010-03-19 | 2014-12-24 | Canon Inc. | Image processing apparatus, image processing system, image processing method, and program for causing a computer to execute image processing |
US8396268B2 (en) | 2010-03-31 | 2013-03-12 | Isis Innovation Limited | System and method for image sequence processing |
US8532360B2 (en) | 2010-04-20 | 2013-09-10 | Atheropoint Llc | Imaging based symptomatic classification using a combination of trace transform, fuzzy technique and multitude of features |
US20110257527A1 (en) * | 2010-04-20 | 2011-10-20 | Suri Jasjit S | Ultrasound carotid media wall classification and imt measurement in curved vessels using recursive refinement and validation |
AU2011213889B2 (en) | 2010-08-27 | 2016-02-18 | Signostics Limited | Method and apparatus for volume determination |
JP2013542046A (en) | 2010-11-10 | 2013-11-21 | Echometrix, LLC | Ultrasound image processing system and method |
JP6106190B2 (en) * | 2011-12-21 | 2017-03-29 | Volcano Corporation | Method for visualizing blood and blood likelihood in vascular images |
US20160270757A1 (en) | 2012-11-15 | 2016-09-22 | Konica Minolta, Inc. | Image-processing apparatus, image-processing method, and program |
US10226227B2 (en) * | 2013-05-24 | 2019-03-12 | Sunnybrook Research Institute | System and method for classifying and characterizing tissues using first-order and second-order statistics of quantitative ultrasound parametric maps |
JP6200249B2 (en) | 2013-09-11 | 2017-09-20 | Canon Inc. | Information processing apparatus and information processing method |
KR102328269B1 (en) | 2014-10-23 | 2021-11-19 | Samsung Electronics Co., Ltd. | Ultrasound imaging apparatus and control method for the same |
US20180140282A1 (en) | 2015-06-03 | 2018-05-24 | Hitachi, Ltd. | Ultrasonic diagnostic apparatus and image processing method |
WO2017033502A1 (en) * | 2015-08-21 | 2017-03-02 | Fujifilm Corporation | Ultrasonic diagnostic device and method for controlling ultrasonic diagnostic device |
US10420523B2 (en) * | 2016-03-21 | 2019-09-24 | The Board Of Trustees Of The Leland Stanford Junior University | Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same |
US10643092B2 (en) * | 2018-06-21 | 2020-05-05 | International Business Machines Corporation | Segmenting irregular shapes in images using deep region growing with an image pyramid |
US11164067B2 (en) * | 2018-08-29 | 2021-11-02 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
2018
- 2018-05-11 CA CA3062330A patent/CA3062330A1/en active Pending
- 2018-05-11 EP EP18732986.7A patent/EP3621525A1/en active Pending
- 2018-05-11 KR KR1020197035041A patent/KR20200003400A/en active Application Filing
- 2018-05-11 CN CN201880030236.6A patent/CN110753517A/en active Pending
- 2018-05-11 US US15/977,091 patent/US12217445B2/en active Active
- 2018-05-11 KR KR1020227008925A patent/KR102409090B1/en active IP Right Review Request
- 2018-05-11 WO PCT/US2018/032247 patent/WO2018209193A1/en unknown
- 2018-05-11 JP JP2019561958A patent/JP6902625B2/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6238342B1 (en) * | 1998-05-26 | 2001-05-29 | Riverside Research Institute | Ultrasonic tissue-type classification and imaging methods and apparatus |
WO2001082787A2 (en) * | 2000-05-03 | 2001-11-08 | University Of Washington | Method for determining the contour of an in vivo organ using multiple image frames of the organ |
US20090093717A1 (en) * | 2007-10-04 | 2009-04-09 | Siemens Corporate Research, Inc. | Automated Fetal Measurement From Three-Dimensional Ultrasound Data |
CN102629376A (en) * | 2011-02-11 | 2012-08-08 | 微软公司 | Image registration |
US20140052001A1 (en) * | 2012-05-31 | 2014-02-20 | Razvan Ioan Ionasec | Mitral Valve Detection for Transthoracic Echocardiography |
CN104840209A (en) * | 2014-02-19 | 2015-08-19 | 三星电子株式会社 | Apparatus and method for lesion detection |
CN106204465A (en) * | 2015-05-27 | 2016-12-07 | Siemens Medical Solutions USA, Inc. | Knowledge-based ultrasound image enhancement |
US9536054B1 (en) * | 2016-01-07 | 2017-01-03 | ClearView Diagnostics Inc. | Method and means of CAD system personalization to provide a confidence level indicator for CAD system recommendations |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113616235A (en) * | 2020-05-07 | 2021-11-09 | China Mobile (Chengdu) Information & Communication Technology Co., Ltd. | Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe |
CN113616235B (en) * | 2020-05-07 | 2024-01-19 | China Mobile (Chengdu) Information & Communication Technology Co., Ltd. | Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe |
CN112184683A (en) * | 2020-10-09 | 2021-01-05 | Shenzhen Duying Medical Technology Co., Ltd. | Ultrasound image recognition method, terminal device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR20200003400A (en) | 2020-01-09 |
CA3062330A1 (en) | 2018-11-15 |
KR20220040507A (en) | 2022-03-30 |
US20180330518A1 (en) | 2018-11-15 |
KR102409090B1 (en) | 2022-06-15 |
JP2020519369A (en) | 2020-07-02 |
WO2018209193A1 (en) | 2018-11-15 |
EP3621525A1 (en) | 2020-03-18 |
US12217445B2 (en) | 2025-02-04 |
JP6902625B2 (en) | 2021-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12217445B2 (en) | Probability map-based ultrasound scanning | |
US7819806B2 (en) | System and method to identify and measure organ wall boundaries | |
CN106204465B (en) | Knowledge-based ultrasound image enhancement | |
CN110325119B (en) | Ovarian follicle count and size determination | |
US8435181B2 (en) | System and method to identify and measure organ wall boundaries | |
US20080146932A1 (en) | 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume | |
US11684344B2 (en) | Systems and methods for quantitative abdominal aortic aneurysm analysis using 3D ultrasound imaging | |
US11464490B2 (en) | Real-time feedback and semantic-rich guidance on quality ultrasound image acquisition | |
CN111629670B (en) | Echo window artifact classification and visual indicators for ultrasound systems | |
US20080139938A1 (en) | System and method to identify and measure organ wall boundaries | |
US10949976B2 (en) | Active contour model using two-dimensional gradient vector for organ boundary detection | |
US11278259B2 (en) | Thrombus detection during scanning | |
KR20220163445A (en) | Automated Prostate Analysis System | |
KR20150103956A (en) | Apparatus and method for processing medical image, and computer-readable recoding medium | |
US9364196B2 (en) | Method and apparatus for ultrasonic measurement of volume of bodily structures | |
WO2020133236A1 (en) | Spinal imaging method and ultrasonic imaging system | |
CN116258736A (en) | System and method for segmenting an image | |
WO2021230230A1 (en) | Ultrasonic diagnosis device, medical image processing device, and medical image processing method | |
JP2018157982A (en) | Ultrasonic diagnosis apparatus and program | |
EP3848892A1 (en) | Generating a plurality of image segmentation results for each node of an anatomical structure model to provide a segmentation confidence value for each node | |
JP2018157981A (en) | Ultrasonic diagnosis apparatus and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||