
CN110753517A - Ultrasound scanning based on probability mapping

Info

Publication number: CN110753517A
Application number: CN201880030236.6A
Authority: CN (China)
Prior art keywords: information, interest, probability, processing, processing device
Priority date: May 11, 2017 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: J·H·崔
Current Assignee: Verathon Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Verathon Inc
Application filed by Verathon Inc
Publication of CN110753517A

Classifications

    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06T 7/0014: Image analysis; biomedical image inspection using an image reference approach
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device
    • A61B 5/4325: Evaluation of the lower reproductive system; uterine cavities, e.g. uterus, fallopian tubes, ovaries
    • A61B 5/4381: Prostate evaluation or disorder diagnosis
    • A61B 8/085: Clinical applications involving detecting or locating foreign bodies or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/0866: Clinical applications involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B 8/0883: Clinical applications for diagnosis of the heart
    • A61B 8/0891: Clinical applications for diagnosis of blood vessels
    • A61B 8/14: Echo-tomography
    • A61B 8/42: Details of probe positioning or probe attachment to the patient
    • A61B 8/461: Displaying means of special interest
    • A61B 8/463: Displaying means characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 8/5215: Data or image processing involving processing of medical diagnostic data
    • A61B 8/5223: Data or image processing for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 8/5292: Data or image processing using additional data, e.g. patient information, image labeling, acquisition parameters
    • A61B 8/565: Details of data transmission or power supply involving data transmission via a network
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10132: Image acquisition modality; ultrasound image
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30084: Subject of image; kidney, renal

Abstract

A system may include a probe configured to transmit ultrasound signals to a target of interest and to receive echo information associated with the transmitted ultrasound signals. The system may also include at least one processing device configured to process the received echo information using a machine learning algorithm to generate probability information associated with the target of interest. The at least one processing device may further classify the probability information and output image information corresponding to the target of interest based on the classified probability information.

Description

Ultrasound Scanning Based on Probability Mapping

Related Applications

This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application No. 62/504,709, filed May 11, 2017, the contents of which are incorporated herein by reference in their entirety.

Background

Ultrasound scanners are typically used to identify a target organ or other structure in the body and/or to determine features associated with the target organ/structure, such as the size of the organ/structure or the volume of fluid in the organ. For example, an ultrasound scanner may be used to identify a patient's bladder and estimate the volume of fluid in the bladder. In a typical scenario, the ultrasound scanner is placed on the patient and triggered to generate ultrasound signals consisting of sound waves output at particular frequencies. The scanner receives echoes from the transmitted signals and analyzes them to determine the volume of fluid in the bladder. For example, the received echoes may be used to generate corresponding images, which may be analyzed to detect the boundary of the target organ, such as the bladder wall. The volume of the bladder can then be estimated based on the detected boundary information. However, typical ultrasound scanners often suffer from inaccuracies caused by a number of factors, such as variability between patients in the size and/or shape of the target organ of interest, and obstructions within the body that make it difficult to accurately detect features of the target organ/structure, such as its boundaries.

Brief Description of the Drawings

FIG. 1A illustrates an exemplary configuration of a scanning system according to an exemplary embodiment;

FIG. 1B illustrates operation of the scanning system of FIG. 1A in detecting an organ within a patient;

FIG. 2 illustrates an exemplary configuration of logic elements included in the scanning system of FIG. 1A;

FIG. 3 illustrates a portion of the data acquisition unit of FIG. 2 in an exemplary embodiment;

FIG. 4 illustrates a portion of the autoencoder unit of FIG. 2 in an exemplary embodiment;

FIG. 5 illustrates an exemplary configuration of components included in one or more of the elements of FIG. 2;

FIG. 6 is a flowchart illustrating processing performed by the components shown in FIG. 2 according to an exemplary embodiment;

FIG. 7 illustrates output generated by the autoencoder of FIG. 2 in an exemplary embodiment;

FIG. 8 illustrates binarization processing in accordance with the processing of FIG. 6;

FIG. 9 is a flowchart associated with displaying information via the base unit of FIG. 1A; and

FIG. 10 illustrates exemplary image data output by the base unit in accordance with the processing of FIG. 9.

Detailed Description

The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. In addition, the following detailed description does not limit the invention.

Embodiments described herein relate to using machine learning, including neural networks and deep learning, to identify an organ or structure of interest within a patient based on information obtained via an ultrasound scanner. For example, the scanner may be used to transmit a number of ultrasound signals toward the target organ, and machine learning techniques/algorithms may be used to process echo information associated with the transmitted signals. The machine learning processing may be used to identify the target of interest and generate probability information associated with each portion or pixel of an image generated based on the received ultrasound echo data.

For example, in one implementation, ultrasound echo data (e.g., B-mode echo data associated with ultrasound signals transmitted on a number of different scan planes directed toward the target organ) may be used to generate a probability map for each B-mode image. In one implementation, each pixel in a B-mode image may be mapped to a probability indicating whether that particular pixel is located within, or is part of, the target organ/structure. The results of the pixel-by-pixel analysis are used to generate a target probability map. Binarization and post-processing may then be performed to remove noise and provide a more accurate representation of the organ than conventional scanners that attempt to determine the boundary walls of the target organ and estimate its size based on the boundary information. In some implementations, the output of the post-processing is displayed to medical personnel and can help easily locate the organ while performing the ultrasound scan. Additional post-processing may also be performed to estimate the volume of the target organ, such as the volume of fluid in the patient's bladder.

FIG. 1A is a diagram illustrating a scanning system 100 according to an exemplary embodiment. Referring to FIG. 1A, scanning system 100 includes a probe 110, a base unit 120, and a cable 130.

Probe 110 includes a handle portion 112 (also referred to as handle 112), a trigger 114, and a nose portion 116 (also referred to as dome or dome portion 116). Medical personnel may hold probe 110 via handle 112 and press trigger 114 to activate one or more ultrasound transceivers and transducers located in nose portion 116 to transmit ultrasound signals toward the target organ of interest. For example, FIG. 1B shows probe 110 located on the pelvic area of patient 150, over the target organ of interest (in this example, the patient's bladder 152).

Handle 112 allows a user to move probe 110 relative to patient 150. As described above, trigger 114 initiates an ultrasound scan of a selected anatomical portion while dome 116 is in contact with a surface portion of patient 150. Dome 116 is typically formed of a material that provides appropriate acoustic impedance matching to the anatomical portion and/or allows ultrasound energy to be properly focused as it is projected into the anatomical portion. For example, an acoustic gel or gel pad, illustrated at area 154 in FIG. 1B, may be applied to the skin of patient 150 over the region of interest (ROI) to provide acoustic impedance matching when dome 116 is placed against the skin.

Dome 116 includes one or more ultrasound transceiver elements and one or more transducer elements (not shown in FIG. 1A or FIG. 1B). The transceiver elements transmit ultrasound energy outwardly from dome 116 and receive acoustic reflections or echoes generated by internal structures/tissue within the anatomical portion. The one or more ultrasound transducer elements may include a one-dimensional or two-dimensional array of piezoelectric elements that may be moved within dome 116 by a motor to provide different scan directions for the transmission of ultrasound signals by the transceiver elements. Alternatively, the transducer elements may be stationary with respect to probe 110 so that the selected anatomical region may be scanned by selectively energizing the elements in the array.

In some implementations, probe 110 may include a directional indicator panel (not shown in FIG. 1A) that includes a number of arrows that may be illuminated for initial targeting and for guiding a user toward a target organ or structure within the ROI. For example, in some implementations, the directional arrows may not be illuminated if the organ or structure is centered with respect to probe 110 placed against the skin surface at a first location on patient 150. However, if the organ is off-center, an arrow or set of arrows may be illuminated to direct the user to reposition probe 110 at a second or subsequent skin location on patient 150. In other implementations, direction indicators may be presented on display 122 of base unit 120.

One or more transceivers located in probe 110 may include an inertial reference unit that includes an accelerometer and/or a gyroscope, preferably positioned within or near dome 116. The accelerometer may be operable to sense an acceleration of the transceiver, preferably relative to a coordinate system, while the gyroscope may be operable to sense an angular velocity of the transceiver relative to the same or another coordinate system. Accordingly, the gyroscope may be of a conventional configuration that employs dynamic elements, or it may be an optoelectronic device, such as an optical ring gyroscope. In one embodiment, the accelerometer and the gyroscope may include commonly packaged and/or solid-state devices. In other embodiments, the accelerometer and/or the gyroscope may include commonly packaged micro-electromechanical system (MEMS) devices. In each case, the accelerometer and gyroscope cooperatively permit the determination of positional and/or angular changes relative to a known position that is proximate to an anatomical region of interest in the patient.

Probe 110 may communicate with base unit 120 via a wired connection, such as cable 130. In other implementations, probe 110 may communicate with base unit 120 via a wireless connection (e.g., Bluetooth, WiFi, etc.). In each case, base unit 120 includes display 122 to allow a user to view processed results from an ultrasound scan and/or to allow operational interaction with the user during operation of probe 110. For example, display 122 may include an output display/screen, such as a liquid crystal display (LCD), a light emitting diode (LED) based display, or another type of display that provides text and/or image data to a user. For example, display 122 may provide instructions for positioning probe 110 relative to a selected anatomical portion of patient 150. Display 122 may also display two-dimensional or three-dimensional images of the selected anatomical region.

In some implementations, display 122 may include a graphical user interface (GUI) that allows the user to select various features associated with an ultrasound scan. For example, display 122 may allow a user to select whether patient 150 is male, female, or a child. This allows system 100 to automatically adapt the transmitting, receiving, and processing of ultrasound signals to the anatomy of the selected patient, such as adapting system 100 to accommodate the various anatomical details of male and female patients. For example, when a male patient is selected via the GUI on display 122, system 100 may be configured to locate a single cavity, such as the bladder, in the male patient. In contrast, when a female patient is selected via the GUI, system 100 may be configured to image an anatomical portion having multiple cavities, such as a body region that includes the bladder and the uterus. Similarly, when a child patient is selected, system 100 may be configured to adjust the transmissions based on the smaller size of the child patient. In alternative implementations, system 100 may include a cavity selector configured to select a single-cavity scanning mode or a multiple-cavity scanning mode that may be used with male and/or female patients. The cavity selector may therefore permit imaging of a single cavity region, or of a multiple cavity region, such as a region that includes the aorta and the heart. In addition, as described below, the selection of the patient type (e.g., male, female, child) may be used when analyzing the images to aid in providing an accurate representation of the target organ.

To scan a selected anatomical portion of a patient, dome 116 may be positioned against a surface portion of patient 150, as illustrated in FIG. 1B, that is proximate to the anatomical portion to be scanned. The user actuates the transceiver by depressing trigger 114. In response, the transducer elements optionally position the transceiver, which transmits ultrasound signals into the body and receives corresponding echo signals that may be at least partially processed by the transceiver to generate an ultrasound image of the selected anatomical portion. In a particular embodiment, system 100 transmits ultrasound signals in a range that extends from approximately 2 megahertz (MHz) to approximately 10 MHz or more (e.g., 18 MHz).

In one embodiment, probe 110 may be coupled to base unit 120, which is configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and to transfer the ultrasound energy to the transceiver. Base unit 120 also includes one or more processors or processing logic configured to process the reflected ultrasound energy received by the transceiver to produce an image of the scanned anatomical region.

In yet another particular embodiment, probe 110 may be a self-contained device that includes a microprocessor positioned within probe 110 and software associated with the microprocessor to operably control the transceiver and to process the reflected ultrasound energy to generate the ultrasound image. Accordingly, a display on probe 110 may be used to display the generated image and/or to view other information associated with the operation of the transceiver. For example, the information may include alphanumeric data that indicates a preferred position of the transceiver prior to performing a series of scans. In other implementations, the transceiver may be coupled to a general-purpose computer, such as a laptop or a desktop computer, that includes software that at least partially controls the operation of the transceiver, and that also includes software to process information transferred from the transceiver so that an image of the scanned anatomical region may be generated.

FIG. 2 is a block diagram of functional logic components implemented in system 100 in accordance with an exemplary implementation. Referring to FIG. 2, system 100 includes a data acquisition unit 210, a convolutional neural network (CNN) autoencoder unit 220, a post-processing unit 230, targeting logic 240, and volume estimation logic 250. In an exemplary implementation, probe 110 may include data acquisition unit 210, and the other functional units (e.g., CNN autoencoder unit 220, post-processing unit 230, targeting logic 240, and volume estimation logic 250) may be implemented in base unit 120. In other implementations, particular units and/or logic may be implemented by other devices, such as via computing devices or servers located externally with respect to both probe 110 and base unit 120 (e.g., accessible via a wireless connection to the Internet or to a local area network within a hospital, etc.). For example, probe 110 may transmit echo data and/or image data to a processing system located remotely from probe 110 and base unit 120 via, for example, a wireless connection (e.g., WiFi or some other wireless protocol/technology).

As described above, probe 110 may include a transceiver that produces ultrasound signals, receives echoes from the transmitted signals, and generates B-mode image data based on the received echoes (e.g., the magnitude or intensity of the received echoes). In an exemplary implementation, data acquisition unit 210 obtains data associated with multiple scan planes corresponding to the region of interest in patient 150. For example, probe 110 may receive echo data that is processed by data acquisition unit 210 to generate two-dimensional (2D) B-mode image data used to determine the size and/or volume of the bladder. In other implementations, probe 110 may receive echo data that is processed to generate three-dimensional (3D) image data that can be used to determine the size and/or volume of the bladder.

For example, FIG. 3 illustrates an exemplary data acquisition unit 210 used to obtain 3D image data. Referring to FIG. 3, data acquisition unit 210 includes a transducer 310, an outer surface 320 of dome portion 116, and a base 360. The elements illustrated in FIG. 3 may be included within dome portion 116 of probe 110.

Transducer 310 may transmit ultrasound signals from probe 110, as illustrated at 330 in FIG. 3. Transducer 310 may be mounted in a manner that allows it to rotate about two perpendicular axes. For example, transducer 310 may rotate about a first axis 340 with respect to base 360, and rotate about a second axis 350 with respect to base 360. The first axis 340 is referred to herein as the theta axis and the second axis 350 is referred to herein as the phi axis. In an exemplary implementation, the range of motion in theta and phi may be less than 180 degrees. In one implementation, interlacing may be performed with respect to the theta motion and the phi motion. For example, transducer 310 may move in the theta direction followed by movement in the phi direction. This enables data acquisition unit 210 to obtain smooth and continuous volume scans and increases the rate at which scan data is acquired.

In an exemplary implementation, data acquisition unit 210 may resize the B-mode images prior to forwarding them to CNN autoencoder unit 220. For example, data acquisition unit 210 may include logic to reduce the size of the B-mode images via a reduction or decimation process. The reduced-size B-mode images may then be input to CNN autoencoder unit 220, which generates output probability mappings, as described in more detail below. In alternative implementations, CNN autoencoder unit 220 may reduce or decimate the input B-mode images itself at its input layer. In either case, reducing the size/amount of B-mode image data may reduce the processing time and processing power needed by CNN autoencoder unit 220 to process the B-mode image data. In still other implementations, data acquisition unit 210 may not perform resizing prior to inputting the B-mode image data to CNN autoencoder unit 220. In other implementations, data acquisition unit 210 and/or CNN autoencoder unit 220 may perform image enhancement operations, such as brightness normalization, contrast enhancement, and scan conversion, to improve accuracy with respect to generating output data.
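
As a rough illustration of the decimation step described above, the following Python sketch reduces a 2D B-mode image by an integer factor using block averaging. The factor of 2 and the block-averaging approach are assumptions for illustration; the patent does not specify a particular reduction method.

```python
import numpy as np

def decimate(image, factor=2):
    """Reduce a 2D B-mode image by averaging factor x factor blocks."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor        # trim to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))                # averaging limits aliasing

small = decimate(np.random.rand(480, 640))         # result has shape (240, 320)
```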

Referring again to FIG. 2, CNN autoencoder unit 220 may include logic to process data received via data acquisition unit 210. In an exemplary implementation, CNN autoencoder unit 220 may perform deep neural network (DNN) processing that includes multiple convolutional layers of processing and multiple kernels or filters for each layer, as described in more detail below. The terms "CNN autoencoder unit" or "autoencoder unit" as used herein should be broadly construed to include neural networks and/or machine learning systems/units in which both the input and the output carry spatial information, in contrast to classifiers that output global labels without spatial information.

For example, CNN autoencoder unit 220 includes logic to map the received image input to an output with the smallest possible amount of distortion. CNN processing may be similar to other types of neural network processing, but CNN processing uses the explicit assumption that the input is an image, which allows various properties/constraints to be more easily encoded into the processing, thereby reducing the number of parameters that must be processed or factored in by CNN autoencoder unit 220. In an exemplary implementation, CNN autoencoder unit 220 performs convolution processing to generate feature maps associated with the input image. The feature maps may then be sampled a number of times to generate the output. In an exemplary implementation, the kernel size of the CNN used by CNN autoencoder unit 220 may be 17x17 or smaller to provide adequate speed in generating the output. In addition, the 17x17 kernel size allows CNN autoencoder unit 220 to capture sufficient information surrounding a point of interest within the B-mode image data. Further, in accordance with an exemplary implementation, the number of convolutional layers may be eight or fewer, with each layer having five or fewer kernels. It should be understood, however, that smaller kernel sizes (e.g., 3x3, 7x7, 9x9, etc.) or larger kernel sizes (e.g., larger than 17x17), additional kernels per layer (e.g., more than five), and additional convolutional layers (e.g., more than ten and up to hundreds) may be used in other implementations.
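
For concreteness, the following sketch shows one way a network matching the figures above (eight convolutional layers, at most five 17x17 kernels per layer) might be laid out in PyTorch. The specific layer arrangement, the ReLU activations, and the sigmoid output are illustrative assumptions; padding of 8 is chosen so that the per-pixel probability map keeps the input's spatial dimensions.

```python
import torch
import torch.nn as nn

layers = []
in_channels = 1                                    # single-channel B-mode image
for _ in range(7):                                 # seven hidden conv layers
    layers += [nn.Conv2d(in_channels, 5, kernel_size=17, padding=8), nn.ReLU()]
    in_channels = 5
# Eighth layer maps to one channel of per-pixel probabilities.
layers += [nn.Conv2d(in_channels, 1, kernel_size=17, padding=8), nn.Sigmoid()]
cnn_autoencoder = nn.Sequential(*layers)

probs = cnn_autoencoder(torch.rand(1, 1, 128, 128))   # output shape (1, 1, 128, 128)
```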

In typical applications involving CNN processing, the data dimensionality (size) is reduced by adding a narrow bottleneck layer within the processing so that only the data of interest can pass through the narrow layer. This reduction in data dimensionality is typically achieved by adding "pooling" layers or using larger "strides" to reduce the size of the images processed by the neural network. However, in some implementations described herein with respect to bladder detection, in which the spatial accuracy of the detected bladder wall locations is important for accurate volume calculation, pooling and/or large strides are used minimally or are combined with other techniques that preserve spatial resolution (e.g., residual connections or dilated convolutions).
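
As a small example of one of the resolution-preserving alternatives mentioned above, a dilated convolution enlarges the receptive field without discarding spatial precision the way pooling does; the 3x3 kernel and dilation of 2 below are illustrative values, not parameters taken from the patent.

```python
import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 covers a 5x5 neighborhood; padding=2 keeps
# the feature map the same size, so no spatial resolution is lost.
dilated = nn.Conv2d(1, 1, kernel_size=3, dilation=2, padding=2)
out = dilated(torch.rand(1, 1, 64, 64))            # output shape (1, 1, 64, 64)
```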

Although exemplary system 100 is described as using CNN autoencoder unit 220 to process the B-mode input data, in other implementations, system 100 may include other types of autoencoder units or machine learning units. For example, CNN autoencoder unit 220 may include a neural network structure in which the output layer has the same number of nodes as the input layer. In other implementations, other types of machine learning modules or units in which the size of the input layer is not equal to the size of the output layer may be used. For example, a machine learning module may generate a probability mapping output that is more than twice the size of the input image, or less than half the size of the input image. In still other implementations, a machine learning unit included in system 100 may use various machine learning techniques and algorithms, such as decision trees, support vector machines, Bayesian networks, etc. In each case, system 100 uses machine learning algorithms to generate probability information with respect to the B-mode input data, which in turn can be used to estimate the volume of the target organ of interest, as described in detail below.

FIG. 4 schematically illustrates a portion of CNN autoencoder unit 220 in accordance with an exemplary implementation. Referring to FIG. 4, CNN autoencoder unit 220 may include spatial input 410, FFT input 420, lookup table 422, feature map 430, feature map 440, lookup table 442, kernel 450, bias 452, kernels 460, and biases 462. Spatial input 410 may represent the 2D B-mode image data provided by data acquisition unit 210. CNN autoencoder unit 220 may perform a fast Fourier transform (FFT) to convert the image data to the frequency domain, and apply filters or weights to the input FFT via kernel FFT 450. The output of the convolution processing may be biased via bias value 452, and an inverse fast Fourier transform (IFFT) function is applied, the result of which is passed to lookup table 422 to generate spatial feature map 430. CNN autoencoder unit 220 may apply an FFT to spatial feature map 430 to generate FFT feature map 440, and the process may be repeated for additional convolutions and kernels. For example, if CNN autoencoder unit 220 includes eight convolutional layers, the process may continue seven more times. In addition, the kernels applied to each subsequent feature map correspond to the number of kernels multiplied by the number of feature maps, as illustrated by the four kernels 460 in FIG. 4. Biases 452 and 462 may also be applied to improve the performance of the CNN processing.
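
The frequency-domain filtering attributed to FIG. 4 can be sketched with NumPy's FFT routines. This toy version convolves one spatial input with one 17x17 kernel by multiplying in the frequency domain (the convolution theorem) and then adds a bias; the random values, the circular boundary handling, and the ReLU-style activation standing in for the lookup table are assumptions for illustration.

```python
import numpy as np

h, w = 64, 64
spatial_input = np.random.rand(h, w)          # stands in for spatial input 410
kernel = np.random.rand(17, 17)               # stands in for kernel 450
bias = 0.1                                    # stands in for bias 452

# Zero-pad the kernel to the image size, multiply the transforms, and
# invert; multiplying FFTs yields a circular convolution of the inputs.
kernel_padded = np.zeros((h, w))
kernel_padded[:17, :17] = kernel
feature = np.fft.irfft2(np.fft.rfft2(spatial_input) * np.fft.rfft2(kernel_padded),
                        s=(h, w))
feature = np.maximum(feature + bias, 0.0)     # bias, then a ReLU-like nonlinearity
```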

As described above, CNN autoencoder unit 220 may perform convolutions in the frequency domain using FFTs. This approach allows system 100 to implement the CNN algorithm using less computing power than larger systems that may use multiple computers to execute CNN algorithms. In this manner, system 100 can perform the CNN processing using a hand-held unit and a base station (e.g., probe 110 and base unit 120). In other implementations, a spatial-domain approach may be used. The spatial-domain approach may make use of additional processing power in situations where system 100 is able to communicate with other processing devices (e.g., processing devices connected to system 100 via a network (e.g., a wireless or wired network), and/or processing devices operating with system 100 via a client/server approach in which system 100 is the client).

The output of CNN autoencoder unit 220 is probability information associated with the probability that each processed portion or pixel of the processed input image is located within the target organ of interest. For example, CNN autoencoder unit 220 may generate a probability mapping in which each pixel associated with the processed input image data is mapped to a probability corresponding to a value between zero and one, where a value of zero indicates a 0% probability that the pixel is located within the target organ and a value of one indicates a 100% probability that the pixel is located within the target organ, as described in more detail below. CNN autoencoder unit 220 performs the pixel analysis or spatial location analysis on the processed image, as opposed to the input image. As a result, the pixel-by-pixel analysis of the processed image may not correspond one-to-one to the input image. For example, based on resizing of the input image, one processed pixel or spatial location analyzed by CNN autoencoder unit 220 to generate the probability information may correspond to multiple pixels in the input image, and vice versa. In addition, the term "probability" as used herein should be broadly construed to include a likelihood that a pixel or portion of an image is located within a target or organ of interest. The term "probability information" as used herein should also be broadly construed to include discrete values, such as binary values or other values.

In other implementations, CNN autoencoder unit 220 may generate a probability mapping in which each pixel is mapped to any of various values that can be correlated to a probability value or indicator (e.g., a value ranging from -10 to 10, a value corresponding to one of 256 grayscale values, etc.). In each case, the values or units generated by CNN autoencoder unit 220 can be used to determine the probability that a pixel or portion of an image is located within the target organ. For example, in the 256 grayscale value example, a value of one may indicate a 0% probability that a pixel or portion of an image is located within the target organ, and a value of 256 may indicate a 100% probability that the pixel or portion of the image is located within the target organ.

In still other implementations, CNN autoencoder unit 220 may generate discrete output values, such as binary values, that indicate whether a pixel or output region is located within the target organ. For example, CNN autoencoder unit 220 may include binarization or classification processing that generates discrete values, such as a "1" when a pixel is located within the target organ and a "0" when the pixel is not located within the target organ. In other instances, the generated values may not be binary but may be correlated to whether a pixel is located within or outside of the target organ.
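
The output conventions described in the last three paragraphs (probabilities in [0, 1], 256 grayscale levels, and binary labels) are straightforward to convert between; in this short sketch the 0.5 cut-off is an assumed threshold, not a value stated in the patent.

```python
import numpy as np

prob_map = np.random.rand(128, 128)                    # per-pixel probabilities in [0, 1]

grayscale = np.round(prob_map * 255).astype(np.uint8)  # 256-level grayscale encoding
binary = (prob_map >= 0.5).astype(np.uint8)            # 1 = inside organ, 0 = outside
```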

In some implementations, CNN autoencoder unit 220 may consider various factors when analyzing the pixel-by-pixel data. For example, CNN autoencoder unit 220 may receive input from a user via the GUI presented on display 122 of base unit 120 (FIG. 1A) indicating whether patient 150 is a male, female, or child, and adjust the probability values based on stored information associated with the likely size, shape, volume, etc., of the target organ for the particular type of patient. In such implementations, CNN autoencoder unit 220 may include three different CNNs trained with male, female, and child data, and CNN autoencoder unit 220 may use the appropriate CNN based on the selection.
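
One simple way to realize the selection among separately trained networks is a lookup keyed on the GUI choice. The dictionary-based dispatch below is an illustrative assumption (and the placeholder architecture is untrained), not a structure specified by the patent.

```python
import torch.nn as nn

def make_cnn():
    # Placeholder architecture; in practice each network would be trained
    # on data from the corresponding population (male, female, or child).
    return nn.Sequential(nn.Conv2d(1, 5, kernel_size=17, padding=8), nn.ReLU(),
                         nn.Conv2d(5, 1, kernel_size=17, padding=8), nn.Sigmoid())

models = {"male": make_cnn(), "female": make_cnn(), "child": make_cnn()}

def probability_map_for(patient_type, b_mode_image):
    return models[patient_type](b_mode_image)      # run the CNN matching the GUI choice
```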

In some implementations, CNN autoencoder unit 220 may automatically identify patient demographic information for the subject, such as gender, age, age range, adult or child status, etc., using, for example, B-mode image data associated with the subject. CNN autoencoder unit 220 may also automatically identify clinical conditions of the subject, such as body mass index (BMI), body size and/or weight, etc., using, for example, the B-mode image data. CNN autoencoder unit 220 may also automatically identify device information while system 100 is scanning, such as position information for probe 110, the aiming quality of probe 110 with respect to the target of interest, etc.

In other implementations, another processing device (e.g., a processing device similar to autoencoder unit 220 and/or processor 520) may perform the automated detection of the patient demographic information, clinical conditions, and/or device information using, for example, another neural network or other processing logic, and the automatically determined output may be provided as an input to CNN autoencoder unit 220. Further, in other implementations, the patient demographic information, clinical conditions and/or device information, patient data, etc., may be entered manually via, for example, display 122 of base unit 120 or via input selections on probe 110. In each case, the information automatically identified by CNN autoencoder unit 220, or manually input to CNN autoencoder unit 220/system 100, may be used to select the appropriate CNN to process the image data.

In other implementations, CNN autoencoder unit 220 may be trained with other information. For example, CNN autoencoder unit 220 may be trained with patient data associated with the subject, which may include information obtained using the patient's medical history data, as well as information obtained via a physical examination of the patient performed prior to scanning the target of interest. For example, the patient data may include medical history information, such as the patient's surgical history, chronic disease history (e.g., bladder disease information), prior images of the target of interest (e.g., prior images of the subject's bladder), etc., as well as data obtained via a physical examination of the patient/subject, such as pregnancy status, the existence of scar tissue, hydration issues, abnormalities in the target area (e.g., abdominal distension or swelling), etc. In an exemplary implementation, the patient data may be input to system 100 via display 122 of base unit 120. In each case, information automatically generated by CNN autoencoder unit 220 and/or another processing device, and/or information manually input to system 100, may be provided as inputs to the machine learning processing performed by system 100 to help improve the accuracy of the data generated by system 100 in association with the target of interest.

In other instances, autoencoder unit 220 may receive input information via a GUI provided on display 122 regarding the type of organ being imaged (e.g., bladder, aorta, prostate, heart, kidney, uterus, blood vessels, amniotic fluid, fetus, etc.), as well as the number of organs, etc., and use an appropriate CNN trained in accordance with the selected organ.

Post-processing unit 230 includes logic to receive the pixel-by-pixel probability information and apply a "smart" probability binarization algorithm. For example, post-processing unit 230 may perform interpolation to more clearly define contour details, as described in detail below. In addition, post-processing unit 230 may adjust the output of CNN autoencoder unit 220 based on the subject type. For example, if "child" was selected via the GUI on display 122 prior to initiating the ultrasound scan using probe 110, post-processing unit 230 may ignore output from CNN autoencoder unit 220 corresponding to locations deeper than a certain depth, since the depth of a bladder within a child is typically shallow, based on the small stature of a typical child. As another example, post-processing unit 230 may determine whether to select a single main region or multiple regions of interest based on the organ type. For example, if the type of organ being scanned is a bladder, post-processing unit 230 may select a single main region, since only one bladder exists in the body. However, if the target is a pubic bone, post-processing unit 230 may select up to two regions of interest, corresponding to the two sides of the pubic bone.
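
The region-selection behavior described above can be pictured as a connected-component pass over the binarized map: keep the largest region for a bladder, up to two for a pubic bone, and optionally zero out returns below a depth cut-off for a child. This sketch uses scipy.ndimage; the depth cut-off row index is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def select_regions(binary_mask, max_regions=1, max_depth_rows=None):
    """Keep only the max_regions largest connected components of the mask."""
    mask = binary_mask.copy()
    if max_depth_rows is not None:
        mask[max_depth_rows:, :] = False           # e.g., ignore deep returns for a child
    labels, count = ndimage.label(mask)
    if count == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    keep = np.argsort(sizes)[::-1][:max_regions] + 1   # label ids of the largest regions
    return np.isin(labels, keep)

# Bladder: a single main region; pubic bone: up to two regions.
# cleaned = select_regions(binary_mask, max_regions=1, max_depth_rows=96)
```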

Targeting logic 240 includes logic for determining whether the target organ is properly centered relative to probe 110 during the ultrasound scan. In some embodiments, targeting logic 240 may generate text or graphics that guide the user in adjusting the position of probe 110 to achieve a better scan of the target organ. For example, targeting logic 240 may analyze the data from probe 110 and determine that probe 110 needs to be moved toward the left side of patient 150. In this case, targeting logic 240 may output text and/or graphics (e.g., a flashing arrow) to display 122 to direct the user to move probe 110 in the appropriate direction.

Volume estimation logic 250 may include logic for estimating the volume of the target organ. For example, volume estimation logic 250 may estimate the volume based on the 2D images generated by post-processing unit 230, as described in detail below. In situations where a 3D image is provided, volume estimation logic 250 may simply use the 3D image to determine the volume of the target organ. Volume estimation logic 250 may output the estimated volume via display 122 and/or a display on probe 110.

The exemplary configuration illustrated in FIG. 2 is provided for simplicity. It should be understood that system 100 may include more or fewer logic units/devices than those illustrated in FIG. 2. For example, system 100 may include multiple data acquisition units 210 and multiple processing units that process the received data. In addition, system 100 may include additional elements, such as a communication interface (e.g., a radio frequency transceiver) that transmits and receives information via an external network to aid in analyzing the ultrasound signals to identify the target organ of interest.

In addition, various functions are described below as being performed by particular components in system 100. In other embodiments, various functions described as being performed by one device may be performed by another device or multiple other devices, and/or various functions described as being performed by multiple devices may be combined and performed by a single device. For example, in one embodiment, CNN autoencoder unit 220 may convert input images into probability information, generate intermediate map outputs (described below), and also convert the intermediate outputs into, for example, volume information, length information, area information, etc. That is, a single neural network processing device/unit may receive input image data and output processed image output data, along with volume and/or size information. In this example, a separate post-processing unit 230 and/or volume estimation logic 250 may not be needed. In addition, in this example, any intermediate map output may be accessible or visible to an operator of system 100, or may be inaccessible or invisible (e.g., the intermediate maps may be part of internal processing that is not directly accessible/visible to the user). That is, a neural network included in system 100 (e.g., CNN autoencoder unit 220) may convert received ultrasound echo information and/or images and output volume information or other size information for the target of interest with no additional input, or with only a small amount of additional input, from a user of system 100.

FIG. 5 illustrates an exemplary configuration of an apparatus 500. Apparatus 500 may correspond to components of system 100, such as CNN autoencoder unit 220, post-processing unit 230, targeting logic 240, and volume estimation logic 250. Referring to FIG. 5, apparatus 500 may include a bus 510, a processor 520, a memory 530, an input device 540, an output device 550, and a communication interface 560. Bus 510 may include a path that permits communication among the elements of apparatus 500. In an exemplary embodiment, all or some of the components illustrated in FIG. 5 may be implemented and/or controlled by processor 520 executing software instructions stored in memory 530.

Processor 520 may include one or more processors, microprocessors, or processing logic that may interpret and execute instructions. Memory 530 may include a random access memory (RAM) or another type of dynamic storage device that may store information and instructions for execution by processor 520. Memory 530 may also include a read only memory (ROM) device or another type of static storage device that may store static information and instructions for use by processor 520. Memory 530 may further include a solid state drive (SSD). Memory 530 may also include magnetic and/or optical recording media (e.g., a hard disk) and its corresponding drive.

Input device 540 may include a mechanism that permits a user to input information to apparatus 500, such as a keyboard, a keypad, a mouse, a pen, a microphone, a touch screen, voice recognition and/or biometric mechanisms, etc. Output device 550 may include a mechanism that outputs information to the user, including a display (e.g., a liquid crystal display (LCD)), a printer, a speaker, etc. In some embodiments, a touch screen display may act as both an input device and an output device.

Communication interface 560 may include one or more transceivers that apparatus 500 uses to communicate with other devices via wired, wireless, or optical mechanisms. For example, communication interface 560 may include one or more radio frequency (RF) transmitters, receivers, and/or transceivers and one or more antennas for transmitting and receiving RF data via a network. Communication interface 560 may also include a modem or an Ethernet interface to a LAN or other mechanisms for communicating with elements in a network.

The exemplary configuration illustrated in FIG. 5 is provided for simplicity. It should be understood that apparatus 500 may include more or fewer components than those illustrated in FIG. 5. In an exemplary embodiment, apparatus 500 performs operations in response to processor 520 executing sequences of instructions contained in a computer-readable medium, such as memory 530. A computer-readable medium may be defined as a physical or logical memory device. The software instructions may be read into memory 530 from another computer-readable medium (e.g., a hard disk drive (HDD), an SSD, etc.) or from another device via communication interface 560. Alternatively, hardwired circuitry, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), etc., may be used in place of, or in combination with, software instructions to implement processes consistent with the embodiments described herein. Thus, the embodiments described herein are not limited to any specific combination of hardware circuitry and software.

FIG. 6 is a flow diagram illustrating exemplary processing associated with identifying a target of interest and identifying a parameter (e.g., volume) associated with the target of interest. Processing may begin with a user operating probe 110 to scan a target organ of interest. In this example, assume that the target organ is the bladder. It should be understood that features described herein may be used to identify other organs or structures within the body.

In an exemplary embodiment, the user may press trigger 114, and the transceiver included in probe 110 transmits ultrasound signals and acquires B-mode data associated with the echo signals received by probe 110 (block 610). In one embodiment, data acquisition unit 210 may transmit ultrasound signals on 12 different planes through the bladder and generate 12 B-mode images corresponding to the 12 different planes. In this embodiment, the data may correspond to 2D image data. In other embodiments, data acquisition unit 210 may generate 3D image data. For example, as discussed above with respect to FIG. 3, data acquisition unit 210 may perform interlaced scanning to generate a 3D image. In each case, the number of transmitted ultrasound signals/scan planes may vary based on the particular implementation. As described above, in some embodiments, data acquisition unit 210 may reduce the size of the B-mode images before forwarding the B-mode data to CNN autoencoder unit 220. For example, data acquisition unit 210 may reduce the size of the B-mode images by 10% or more.
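
As a rough illustration of the image-size reduction mentioned above, the sketch below uses OpenCV's resize function; the 10% factor mirrors the example in the text, while the function name and the choice of interpolation are assumptions.

```python
import cv2
import numpy as np

def downsize_bmode(bmode: np.ndarray, factor: float = 0.9) -> np.ndarray:
    """Reduce a B-mode image's dimensions (e.g., by 10%) before CNN input."""
    h, w = bmode.shape[:2]
    new_size = (int(w * factor), int(h * factor))  # cv2.resize takes (width, height)
    return cv2.resize(bmode, new_size, interpolation=cv2.INTER_AREA)
```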

In each case, assume that CNN autoencoder unit 220 receives the 2D B-mode data and processes the data to remove noise from the received data. For example, referring to FIG. 7, CNN autoencoder unit 220 may receive B-mode image data 710, in which the dark region or area 712 corresponds to the bladder. As illustrated, the B-mode image data includes regions that are irregular, or that may appear unclear or blurry to a user. For example, region 712 in FIG. 7 includes brighter areas around the perimeter of the bladder, as well as an indistinct boundary. Such noisy regions may make it difficult to accurately estimate the volume of the bladder.

In this case, CNN autoencoder unit 220 performs denoising on the acquired B-mode image 710 by generating a target probability map (block 620). For example, as described above, CNN autoencoder unit 220 may use CNN techniques to generate probability information associated with each pixel in the input image.
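
The disclosure does not specify a particular network architecture. As a toy sketch only, a small convolutional encoder-decoder in PyTorch that maps a single-channel B-mode image to a same-size per-pixel probability map might look like the following; every layer size here is an assumption.

```python
import torch
import torch.nn as nn

class TinyProbabilityNet(nn.Module):
    """Illustrative encoder-decoder: 1-channel B-mode image in, per-pixel
    probability map of the same spatial size out (input height and width
    are assumed divisible by 4)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),
        )

    def forward(self, x):
        # Sigmoid squashes logits into [0, 1] per-pixel probabilities.
        return torch.sigmoid(self.decoder(self.encoder(x)))
```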

Base unit 120 may then determine whether the entire cone of data (i.e., all scan plane data) has been acquired and processed (block 630). For example, base unit 120 may determine whether all 12 B-mode images, corresponding to 12 different scans through the bladder, have been processed. If all of the B-mode image data has not been processed (block 630 - NO), base unit 120 controls the movement to the next scan plane position (block 640), and processing returns to block 610 to process the B-mode image associated with another scan plane.

If all of the B-mode image data has been processed (block 630 - YES), base unit 120 may modify the probability maps using 3D information (block 650). For example, CNN autoencoder unit 220 may modify some of the probability information it generated using stored assumption information regarding the 3D shape and size of the bladder based on whether the patient is a male, a female, a child, etc., effectively modifying the size and/or shape of the bladder. That is, as described above, CNN autoencoder unit 220 may use a CNN trained based on the patient's demographic information, the patient's clinical condition, device information associated with system 100 (e.g., probe 110), the patient's patient data (e.g., patient medical history information and patient examination data), etc. For example, CNN autoencoder unit 220 may use a CNN trained with male patient data if patient 150 is a male, a CNN trained with female patient data if patient 150 is a female, a CNN trained with child data if patient 150 is a child, a CNN trained based on the patient's age range, a CNN trained with the patient's medical history, etc. In other embodiments, for example, when base unit 120 receives and processes 3D image data, the additional processing may not be performed and block 650 may be skipped. In either case, system 100 may display the P-mode image data (block 660), such as image 720 shown in FIG. 7.

In either case, base unit 120 may segment the target region via binarization processing using the probability maps (block 670). For example, post-processing unit 230 may receive the output of CNN autoencoder unit 220 and resize the probability map (e.g., via interpolation), smooth the probability map, and/or denoise the probability map (e.g., via filtering). For example, in one embodiment, the probability map may be resized to a larger size via interpolation to obtain better resolution and/or to at least partially restore the spatial resolution of the original B-mode image data, whose size may have been reduced. In one embodiment, 2D Lanczos interpolation may be performed to resize the image associated with the target probability map.
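
A minimal sketch of the resizing step, assuming OpenCV's INTER_LANCZOS4 resampling is an acceptable stand-in for the 2D Lanczos interpolation mentioned above:

```python
import cv2
import numpy as np

def upsample_prob_map(prob_map: np.ndarray, out_hw: tuple) -> np.ndarray:
    """Resize a probability map back toward the original B-mode resolution."""
    h, w = out_hw
    up = cv2.resize(prob_map.astype(np.float32), (w, h),
                    interpolation=cv2.INTER_LANCZOS4)
    # Lanczos kernels can overshoot slightly, so clamp back to [0, 1].
    return np.clip(up, 0.0, 1.0)
```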

In addition, base unit 120 may perform classification or binarization processing to convert the probability information from the probability mapping into binarized output data. For example, post-processing unit 230 may convert the probability values into binary values. When multiple candidate probability values are identified for a particular pixel, post-processing unit 230 may select the most prominent value. In this manner, post-processing unit 230 may apply some "intelligence" to select the most likely value when multiple candidate values are identified.

FIG. 8 schematically illustrates an exemplary smart binarization process. Referring to FIG. 8, image 810 illustrates the output of the probability mapping or pixel classification corresponding to a 2D ultrasound image, in which the probability information has been converted into a grayscale image with varying intensities. As illustrated, image 810 includes a gray region labeled 812 and a gray region labeled 814, which represent possible locations of portions of the bladder. Post-processing unit 230 identifies the peak point, or point of maximum intensity, within image 810, as indicated by crosshair 822 shown in image 820. Post-processing unit 230 may then fill the area around the peak point for regions whose intensity is greater than a threshold intensity, as indicated by region 832 in image 830. In this case, regions within image 820 whose intensity is less than the threshold intensity are not filled, so the gray region 814 shown in image 810 is removed. Post-processing unit 230 may then fill the background, as indicated by region 842 in image 840. Post-processing unit 230 then fills any holes or open areas within the image, as indicated by region 852 in image 850. Holes in region 842 may correspond to noisy regions or regions associated with some obstruction within patient 150. In this manner, post-processing unit 230 identifies the most likely location and size of the bladder. That is, region 852 is considered to be part of patient 150's bladder.
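
A minimal sketch of the peak-seeded fill described above, assuming the probability map is a 2D NumPy array and using SciPy's connected-component labeling and hole filling as stand-ins for the fill steps; the threshold value and function name are illustrative:

```python
import numpy as np
from scipy import ndimage

def smart_binarize(prob_map: np.ndarray, fill_thresh: float = 0.5) -> np.ndarray:
    """Keep only the above-threshold region connected to the peak, holes filled."""
    peak = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    above = prob_map >= fill_thresh
    if not above[peak]:
        # No region exceeds the threshold at the peak; nothing to fill.
        return np.zeros_like(above)
    labels, _ = ndimage.label(above)          # connected components
    region = labels == labels[peak]           # component containing the peak
    return ndimage.binary_fill_holes(region)  # close interior holes
```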

In other embodiments, post-processing unit 230 may use information within image 810 other than the peak intensity value. For example, post-processing unit 230 may use the peak of a processed probability (e.g., the peak of a smoothed probability map), use multiple peaks to identify multiple filled regions, etc. As other examples, post-processing unit 230 may select the "primary" region based on the area, the peak probability, or the average probability of each region. In still other embodiments, post-processing unit 230 may identify the region of the patient's bladder using one or more seed points manually input by an operator via, for example, display 122, using an algorithm that generates one or more seed points, performing another type of thresholding that does not use seed points, etc.

After processing image 810 in this manner, base unit 120 may output an image, such as image 720 shown in FIG. 7. Referring to FIG. 7, image 720 includes region 722, which corresponds to the bladder. As illustrated, the edges of bladder 722 are much more clearly defined than the boundaries in image 712, providing a more accurate representation of the bladder. In this manner, base unit 120 may use the brightness value of each pixel and the local gradient values of neighboring pixels, along with statistical methods such as hidden Markov models and neural network algorithms (e.g., CNNs), to generate a probability value for each pixel in the B-mode image and denoise the B-mode data.

Base unit 120 may then convert the segmentation results into a target volume (block 680). For example, post-processing unit 230 may sum the volumes of all voxels in 3D space that correspond to each valid target pixel in the binarized maps. That is, volume estimation logic 250 may sum the voxels in the 12 segmented target images to estimate the volume of the bladder. For example, the contribution or volume of each voxel may be precomputed and stored in a lookup table within base unit 120. In this case, volume estimation logic 250 may use the sum of the voxels as an index into the lookup table to determine the estimated volume. Volume estimation logic 250 may also display the volume via display 122 of base unit 120. For example, volume estimation logic 250 may display the estimated volume of the bladder (i.e., 135 milliliters (mL) in this example) at area 724 in FIG. 7, which is output to display 122 of base unit 120. Alternatively, volume estimation logic 250 may display the volume information via a display on probe 110. Post-processing unit 230 may also display the segmentation results (block 690). That is, post-processing unit 230 may display the 12 segments of the bladder via display 122 of base unit 120.
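
As a hedged illustration of the voxel summation described above, the sketch below assumes one binary mask per scan plane and a precomputed table of per-pixel volume contributions; in a real scanner those contributions would depend on the scan geometry and depth, and the names used here are hypothetical.

```python
import numpy as np

def estimate_volume_ml(masks, voxel_volume_ml: np.ndarray) -> float:
    """Sum the precomputed volume contribution (mL) of every segmented pixel.

    masks: iterable of 2D boolean masks, one per scan plane (e.g., 12 planes).
    voxel_volume_ml: per-pixel volume contributions, same shape as each mask.
    """
    return float(sum((m * voxel_volume_ml).sum() for m in masks))
```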

In some embodiments, system 100 may not perform binarization processing on the probability map information. For example, in some embodiments, CNN autoencoder unit 220 and/or post-processing unit 230 may apply a lookup table to the probability map information to identify likely portions of the target organ of interest and display the output via display 122.

Referring back to block 620, in some embodiments, post-processing unit 230 may display information in real time as the information is generated. FIG. 9 illustrates exemplary processing associated with providing additional display information to the user. For example, post-processing unit 230 may display probability mode information (referred to herein as P-mode) via display 122 in real time as the probability mode information is generated (FIG. 9, block 910). Post-processing unit 230 may also segment the target (block 920) and display the segmentation results along with the B-mode images (block 930). For example, FIG. 10 illustrates three B-mode images 1010, 1012, and 1014 and corresponding P-mode images 1020, 1022, and 1024. In other embodiments, all 12 B-mode images and the 12 corresponding P-mode images may be displayed. As illustrated, P-mode images 1020, 1022, and 1024 are much clearer than B-mode images 1010, 1012, and 1014. In addition, in some embodiments, post-processing unit 230 may provide an outline of the bladder boundary displayed in each P-mode image. For example, as illustrated in FIG. 10, each of P-mode images 1020, 1022, and 1024 may include an outline that is, for example, a different color than, or brighter than, the interior portion of the bladder.

Embodiments described herein use machine learning to identify an organ or structure of interest within a patient based on information obtained via an ultrasound scanner. The machine learning processing may receive image data and generate probability information for each particular portion (e.g., pixel) of the image to determine the probability that the particular portion is within the target organ. Post-processing analysis may also refine the probability information using additional information, such as the patient's gender or age, the particular target organ, etc. In some cases, the volume of the target organ may also be provided to the user, along with real-time probability mode images.

The foregoing description of exemplary embodiments provides illustration and description, but is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the embodiments.

For example, features have been described above with respect to identifying a target of interest (e.g., a patient's bladder) and using CNN processing to estimate the volume of the target (e.g., the bladder). In other embodiments, other organs or structures may be identified, and sizes or other parameters associated with those organs/structures may be estimated. For example, the processing described herein may be used to identify and display the prostate, kidneys, uterus, ovaries, aorta, heart, blood vessels, amniotic fluid, a fetus, etc., as well as particular features associated with these targets (e.g., measurements related to volume and/or size).

For example, in embodiments in which the processing described herein is used with respect to various organs or targets other than the bladder (e.g., the aorta, prostate, kidney, heart, uterus, ovary, blood vessels, amniotic fluid, fetus, etc.), additional size-related measurements may be generated. For example, the length, height, width, depth, diameter, area, etc. of the organ or region of interest may be calculated. For example, for a scan of the aorta, measuring the diameter of the aorta may be important when attempting to identify an abnormality, such as an aneurysm. For a prostate scan, measuring the width and height of the prostate may be needed. In these cases, the machine learning processing described above may be used to generate/estimate measurements such as length, height, width, depth, diameter, area, etc. That is, the machine learning described above may be used to identify boundary walls or other items of interest and estimate the particular size-related parameters of interest to medical personnel.

In addition, features have been described above primarily with respect to using echo data to generate B-mode images and applying machine learning to the B-mode images to identify volume, length, or other information associated with the target. In other embodiments, other types of ultrasound input image data may be used. For example, in other embodiments, C-mode image data may be used, which typically includes a representation of the target of interest (e.g., the bladder) formed in a plane oriented perpendicular to the B-mode images. Still further, in other embodiments, radio frequency (RF) or quadrature signals (e.g., IQ signals) may be used as the input to CNN autoencoder unit 220 to generate the probability output maps associated with the target.

In addition, features have been described above with respect to generating a single probability map. In other embodiments, multiple probability maps may be generated. For example, system 100 may generate one probability map for the target organ of interest (e.g., the bladder), another probability map for the pubic bone/pubic shadow, and another probability map for the prostate. In this manner, a more accurate representation of patient 150's internal organs may be generated, which may enable a more accurate volume estimation for the target organ (e.g., the bladder).

In addition, features described herein involve a pixel-by-pixel analysis of B-mode image data. In other embodiments, an edge map may be used as an alternative to a pixel-by-pixel map. In such embodiments, a CNN algorithm may be used to detect the edges of the target. In further embodiments, a polygon coordinate approach may be used to identify discrete portions of the bladder and then connect the points. In such embodiments, a contour edge tracing algorithm may be used to connect the points of the target organ.
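
As one hedged example of a contour edge tracing step, OpenCV's contour finder could connect the boundary points of a binarized target; the function below is illustrative only and is not the tracing algorithm used in this disclosure.

```python
import cv2
import numpy as np

def trace_target_contour(mask: np.ndarray) -> np.ndarray:
    """Return the largest closed boundary of a binary mask as (N, 2) pixel coords."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2)
```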

In addition, various inputs (e.g., information indicating whether the patient is a male, a female, a child, etc.) have been described above. Other inputs to the probability mapping and/or binarization may also be used. For example, a body mass index (BMI), age, or age range may be input to base unit 120, and base unit 120 may automatically adjust the processing based on the particular BMI, age, or age range. Other inputs to the probability mapping and/or binarization processing (e.g., the depth of each pixel, the plane orientation, etc.) may be used to improve the accuracy of the volume estimates and/or output images generated by system 100.

In addition, as described above, training data associated with various types of patients (males, females, and children) may be used to help generate the P-mode data. For example, thousands or more training data images may be used to generate the CNN algorithm used to process the B-mode input data to identify the target of interest. In addition, thousands or more images may be input to or stored in base unit 120 to help modify the output of CNN autoencoder unit 220. This is particularly useful in situations where an expected obstruction (e.g., the pubic bone for a bladder scan) adversely affects the image. In these embodiments, base unit 120 may store information regarding how to account for and minimize the effects of the obstruction. CNN autoencoder unit 220 and/or post-processing unit 230 may then account for the obstruction more accurately.

In addition, features described herein refer to using B-mode image data as the input to CNN autoencoder unit 220. In other embodiments, other data may be used. For example, the echo data associated with the transmitted ultrasound signals may include harmonic information, which may be used to detect a target organ such as the bladder. In this case, higher order harmonic echo information relative to the frequency of the transmitted ultrasound signals (e.g., the second or higher order harmonics) may be used to generate the probability mapping information without generating B-mode images. In other embodiments, the higher order harmonic information may be used to enhance the P-mode image data in addition to the B-mode data described above. In still further embodiments, probe 110 may transmit ultrasound signals at multiple frequencies, and the echo information associated with the multiple frequencies may be used as inputs to CNN autoencoder unit 220 or another machine learning module to detect the target organ and estimate parameters such as the volume, size, etc. of the target organ.

For example, multiple B-mode images at the fundamental frequency and multiple B-mode images at a higher order harmonic frequency, or at multiple higher order harmonic frequencies, may be used as inputs to CNN autoencoder unit 220. In addition, the fundamental and harmonic frequency information may be preprocessed and used as an input to CNN autoencoder unit 220 to help generate the probability map. For example, the ratio between the harmonic power and the fundamental power may be used as an input to CNN autoencoder unit 220 to enhance the accuracy of the probability mapping.
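
One way such a ratio could be computed from a sampled RF line is sketched below, assuming an FFT-based band-power estimate; the band edges and parameter names are assumptions.

```python
import numpy as np

def harmonic_to_fundamental_ratio(rf_line: np.ndarray, fs: float,
                                  f0: float, half_bw: float = 0.5e6) -> float:
    """Ratio of second-harmonic to fundamental band power for one RF line.

    rf_line: sampled RF echo signal; fs: sampling rate (Hz);
    f0: transmit (fundamental) frequency (Hz); half_bw: half-bandwidth (Hz).
    """
    spectrum = np.abs(np.fft.rfft(rf_line)) ** 2
    freqs = np.fft.rfftfreq(rf_line.size, d=1.0 / fs)

    def band_power(fc: float) -> float:
        band = (freqs >= fc - half_bw) & (freqs <= fc + half_bw)
        return float(spectrum[band].sum())

    return band_power(2 * f0) / band_power(f0)
```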

In addition, in some embodiments, the post-processing described above may use a second machine learning (e.g., CNN) algorithm to denoise the image data and/or perform contour/edge tracing on the images.

In addition, embodiments have been described above with respect to data acquisition unit 210 acquiring two-dimensional (2D) B-mode image data. In other embodiments, higher dimensional image data (e.g., 2.5D or 3D) may be input to CNN autoencoder unit 220. For example, for a 2.5D implementation, CNN autoencoder unit 220 may use B-mode images associated with several scan planes, along with the adjacent scan planes, to improve accuracy. For a 3D implementation, CNN autoencoder unit 220 may generate 12 probability maps, one for each of the 12 scan planes, and post-processing unit 230 may use all 12 probability maps to generate a 3D image (e.g., via a 3D flood fill algorithm). Classification and/or binarization processing may then be performed on the 2.5D or 3D image to generate, for example, a 3D output image.
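
A minimal sketch of combining per-plane probability maps into a 3D segmentation, using SciPy's 3D connected-component labeling as a stand-in for the 3D flood fill algorithm mentioned above:

```python
import numpy as np
from scipy import ndimage

def segment_3d(prob_stack: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Keep the 3D connected component containing the highest-probability voxel.

    prob_stack: array of shape (planes, height, width), e.g., 12 stacked
    per-plane probability maps.
    """
    above = prob_stack >= thresh
    peak = np.unravel_index(np.argmax(prob_stack), prob_stack.shape)
    if not above[peak]:
        return np.zeros_like(above)
    labels, _ = ndimage.label(above)  # 3D connectivity across planes
    return labels == labels[peak]
```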

In addition, while series of acts have been described with respect to FIGS. 6 and 9, the order of the acts may be different in other embodiments. Moreover, non-dependent acts may be implemented in parallel.

It will be apparent that various features described above may be implemented in many different forms of software, firmware, and hardware in the embodiments illustrated in the figures. The actual software code or specialized control hardware used to implement the various features is not limiting. Thus, the operation and behavior of the features have been described without reference to the specific software code, it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the various features based on the description herein.

Further, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as one or more processors, microprocessors, application-specific integrated circuits, field-programmable gate arrays, or other processing logic, software, or a combination of hardware and software.

In the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the appended claims. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.

No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Further, the phrase "based on" is intended to mean "based at least in part on" unless explicitly stated otherwise.

Claims (21)

1. A system, comprising:
a probe configured to:
transmitting an ultrasonic signal to an object of interest, and
receiving echo information associated with the transmitted ultrasound signals; and
at least one processing device configured to:
processing the received echo information using a machine learning algorithm to generate probability information associated with the object of interest,
classifying the probability information, and
outputting, based on the classified probability information, image information corresponding to the object of interest.
2. The system of claim 1, wherein when classifying the probability information, the at least one processing device is configured to binarize the probability information, and the at least one processing device is further configured to:
estimating at least one of a volume, length, height, width, depth, diameter, or area associated with the object of interest based on the binarized probability information.
3. The system of claim 1, wherein the machine learning algorithm comprises a convolutional neural network algorithm.
4. The system of claim 1, further comprising:
a display configured to receive the image information and display the image information.
5. The system of claim 4, wherein the display is further configured to:
simultaneously displaying B-mode image data corresponding to the received echo information and output image information corresponding to the target of interest.
6. The system of claim 1, wherein the at least one processing device is further configured to:
generating aiming instructions for directing the probe to a target of interest.
7. The system of claim 1, wherein the object of interest comprises a bladder.
8. The system of claim 1, wherein the at least one processing device is further configured to:
receiving at least one of gender information of a subject, information indicating that the subject is a child, or patient data associated with the subject, and
processing the received echo information based on the received information.
9. The system of claim 1, wherein the at least one processing device is further configured to:
automatically determining at least one of demographic information of the subject, clinical information of the subject, or device information associated with the probe, and
processing the received echo information based on the automatically determined information.
10. The system of claim 1, wherein, when processing the received echo information, the at least one processing device is configured to:
processing the received echo information to generate output image data,
processing pixels associated with the output image data,
determining a value of each of the processed pixels,
identifying a peak, and
filling a region around a point associated with the peak to identify a portion of the target of interest.
11. The system of claim 1, wherein, when processing the received echo information, the at least one processing device is configured to:
identifying higher order harmonic information about a frequency associated with the transmitted ultrasound signal, and
generating probability information based on the identified higher order harmonic information.
12. The system of claim 1, wherein the probe is configured to transmit the received echo information to the at least one processing device via a wireless interface.
13. The system of claim 1, wherein the object of interest comprises one of an aorta, a prostate, a heart, a uterus, a kidney, a blood vessel, amniotic fluid, or a fetus.
14. A method, comprising:
transmitting an ultrasound signal to a target of interest via an ultrasound scanner;
receiving echo information associated with the transmitted ultrasound signals;
processing the received echo information using a machine learning algorithm to generate probability information associated with the object of interest;
classifying the probability information; and
outputting, based on the classified probability information, image information corresponding to the object of interest.
15. The method of claim 14, wherein classifying the probability information comprises binarizing the probability information, the method further comprising:
estimating at least one of a volume, a length, a height, a width, a depth, a diameter, or an area associated with the object of interest based on the binarized probability information; and
outputting the at least one of the volume, length, height, width, depth, diameter, or area to a display.
16. The method of claim 14, further comprising:
simultaneously displaying B-mode image data corresponding to the echo information and output image information corresponding to the target of interest.
17. The method of claim 14, further comprising:
receiving at least one of gender information, age range information, or body mass index information; and
processing the received echo information based on the received information.
18. A system, comprising:
a memory; and
at least one processing device configured to:
receiving image information corresponding to an object of interest,
processing the received image information using a machine learning algorithm to generate probability information associated with the object of interest,
classifying the probability information, and
outputting, based on the classified probability information, second image information corresponding to the object of interest.
19. The system of claim 18, wherein the at least one processing device is further configured to:
estimating at least one of a volume, length, height, width, depth, diameter, or area associated with the object of interest based on the classified probability information.
20. The system of claim 18, wherein the machine learning algorithm comprises a convolutional neural network algorithm and the memory stores instructions to execute the convolutional neural network algorithm.
21. The system of claim 18, further comprising:
a probe configured to:
an ultrasound signal is transmitted to a target of interest,
receiving echo information associated with the transmitted ultrasound signals, and
forwarding the echo information to the at least one processing device,
wherein the at least one processing device is further configured to:
generating, using the machine learning algorithm, image information corresponding to an object of interest based on the echo information.
CN201880030236.6A 2017-05-11 2018-05-11 Ultrasound scanning based on probability mapping Pending CN110753517A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762504709P 2017-05-11 2017-05-11
US62/504,709 2017-05-11
PCT/US2018/032247 WO2018209193A1 (en) 2017-05-11 2018-05-11 Probability map-based ultrasound scanning

Publications (1)

Publication Number Publication Date
CN110753517A true CN110753517A (en) 2020-02-04

Family

ID=62685100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880030236.6A Pending CN110753517A (en) 2017-05-11 2018-05-11 Ultrasound scanning based on probability mapping

Country Status (7)

Country Link
US (1) US12217445B2 (en)
EP (1) EP3621525A1 (en)
JP (1) JP6902625B2 (en)
KR (2) KR20200003400A (en)
CN (1) CN110753517A (en)
CA (1) CA3062330A1 (en)
WO (1) WO2018209193A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184683A (en) * 2020-10-09 2021-01-05 深圳度影医疗科技有限公司 Ultrasonic image identification method, terminal equipment and storage medium
CN113616235A (en) * 2020-05-07 2021-11-09 中移(成都)信息通信科技有限公司 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2740698C2 (en) 2016-03-09 2021-01-19 Эконаус, Инк. Systems and methods of recognizing ultrasonic images performed using a network with artificial intelligence
KR102139856B1 (en) * 2017-06-23 2020-07-30 울산대학교 산학협력단 Method for ultrasound image processing
EP3420913B1 (en) * 2017-06-26 2020-11-18 Samsung Medison Co., Ltd. Ultrasound imaging apparatus and control method thereof
US11622744B2 (en) * 2017-07-07 2023-04-11 Massachusetts Institute Of Technology System and method for automated ovarian follicular monitoring
WO2019189386A1 (en) * 2018-03-30 2019-10-03 富士フイルム株式会社 Ultrasound diagnostic device and control method of ultrasound diagnostic device
US11391817B2 (en) 2018-05-11 2022-07-19 Qualcomm Incorporated Radio frequency (RF) object detection using radar and machine learning
US10878570B2 (en) * 2018-07-17 2020-12-29 International Business Machines Corporation Knockout autoencoder for detecting anomalies in biomedical images
US20210265042A1 (en) * 2018-07-20 2021-08-26 Koninklijke Philips N.V. Ultrasound imaging by deep learning and associated devices, systems, and methods
WO2020122606A1 (en) 2018-12-11 2020-06-18 시너지에이아이 주식회사 Method for measuring volume of organ by using artificial neural network, and apparatus therefor
JP7192512B2 (en) * 2019-01-11 2022-12-20 富士通株式会社 Learning program, learning device and learning method
CA3126020C (en) * 2019-01-17 2024-04-23 Verathon Inc. Systems and methods for quantitative abdominal aortic aneurysm analysis using 3d ultrasound imaging
JP7273518B2 (en) * 2019-01-17 2023-05-15 キヤノンメディカルシステムズ株式会社 Ultrasound diagnostic equipment and learning program
JP7258568B2 (en) * 2019-01-18 2023-04-17 キヤノンメディカルシステムズ株式会社 ULTRASOUND DIAGNOSTIC DEVICE, IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING PROGRAM
JP7302988B2 (en) * 2019-03-07 2023-07-04 富士フイルムヘルスケア株式会社 Medical imaging device, medical image processing device, and medical image processing program
JP7242409B2 (en) * 2019-04-26 2023-03-20 キヤノンメディカルシステムズ株式会社 MEDICAL IMAGE PROCESSING DEVICE, ULTRASOUND DIAGNOSTIC DEVICE, AND LEARNED MODEL CREATION METHOD
WO2020252330A1 (en) 2019-06-12 2020-12-17 Carnegie Mellon University System and method for labeling ultrasound data
US11986345B2 (en) 2019-07-12 2024-05-21 Verathon Inc. Representation of a target during aiming of an ultrasound probe
US20210045716A1 (en) * 2019-08-13 2021-02-18 GE Precision Healthcare LLC Method and system for providing interaction with a visual artificial intelligence ultrasound image segmentation module
CN110567558B (en) * 2019-08-28 2021-08-10 华南理工大学 Ultrasonic guided wave detection method based on deep convolution characteristics
CN112568935B (en) * 2019-09-29 2024-06-25 中慧医学成像有限公司 Three-dimensional ultrasonic imaging method and system based on three-dimensional tracking camera
US11583244B2 (en) * 2019-10-04 2023-02-21 GE Precision Healthcare LLC System and methods for tracking anatomical features in ultrasound images
US20210183521A1 (en) * 2019-12-13 2021-06-17 Korea Advanced Institute Of Science And Technology Method and apparatus for quantitative imaging using ultrasound data
JP7093093B2 (en) * 2020-01-08 2022-06-29 有限会社フロントエンドテクノロジー Ultrasonic urine volume measuring device, learning model generation method, learning model
KR102246966B1 (en) 2020-01-29 2021-04-30 주식회사 아티큐 Method for Recognizing Object Target of Body
EP4132366A1 (en) * 2020-04-07 2023-02-15 Verathon, Inc. Automated prostate analysis system
KR102238280B1 (en) * 2020-12-09 2021-04-08 박지현 Underwater target detection system and method of thereof
US20230070062A1 (en) * 2021-08-27 2023-03-09 Clarius Mobile Health Corp. Method and system, using an ai model, for identifying and predicting optimal fetal images for generating an ultrasound multimedia product
JP2023034400A (en) * 2021-08-31 2023-03-13 DeepEyeVision株式会社 Information processing device, information processing method and program
JP2023087273A (en) 2021-12-13 2023-06-23 富士フイルム株式会社 Ultrasonic diagnostic device and control method of ultrasonic diagnostic device
JP2023143418A (en) * 2022-03-25 2023-10-06 富士フイルム株式会社 Ultrasonic diagnostic device and operation method thereof
WO2024101255A1 (en) * 2022-11-08 2024-05-16 富士フイルム株式会社 Medical assistance device, ultrasonic endoscope, medical assistance method, and program
CN118071746B (en) * 2024-04-19 2024-08-30 广州索诺星信息科技有限公司 Ultrasonic image data management system and method based on artificial intelligence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6238342B1 (en) * 1998-05-26 2001-05-29 Riverside Research Institute Ultrasonic tissue-type classification and imaging methods and apparatus
WO2001082787A2 (en) * 2000-05-03 2001-11-08 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US20090093717A1 (en) * 2007-10-04 2009-04-09 Siemens Corporate Research, Inc. Automated Fetal Measurement From Three-Dimensional Ultrasound Data
CN102629376A (en) * 2011-02-11 2012-08-08 微软公司 Image registration
US20140052001A1 (en) * 2012-05-31 2014-02-20 Razvan Ioan Ionasec Mitral Valve Detection for Transthoracic Echocardiography
CN104840209A (en) * 2014-02-19 2015-08-19 三星电子株式会社 Apparatus and method for lesion detection
CN106204465A (en) * 2015-05-27 2016-12-07 美国西门子医疗解决公司 Knowledge based engineering ultrasonoscopy strengthens
US9536054B1 (en) * 2016-01-07 2017-01-03 ClearView Diagnostics Inc. Method and means of CAD system personalization to provide a confidence level indicator for CAD system recommendations

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2212267B (en) 1987-11-11 1992-07-29 Circulation Res Ltd Methods and apparatus for the examination and treatment of internal organs
US5081933A (en) * 1990-03-15 1992-01-21 Utdc Inc. Lcts chassis configuration with articulated chassis sections between vehicles
JPH06233761A (en) * 1993-02-09 1994-08-23 Hitachi Medical Corp Image diagnostic device for medical purpose
US5734739A (en) * 1994-05-31 1998-03-31 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US5871019A (en) 1996-09-23 1999-02-16 Mayo Foundation For Medical Education And Research Fast cardiac boundary imaging
US5984870A (en) * 1997-07-25 1999-11-16 Arch Development Corporation Method and system for the automated analysis of lesions in ultrasound images
AU5117699A (en) 1998-07-21 2000-02-14 Acoustic Sciences Associates Synthetic structural imaging and volume estimation of biological tissue organs
WO2001082225A2 (en) 2000-04-24 2001-11-01 Washington University Method and apparatus for probabilistic model of ultrasonic images
US8435181B2 (en) 2002-06-07 2013-05-07 Verathon Inc. System and method to identify and measure organ wall boundaries
GB2391625A (en) 2002-08-09 2004-02-11 Diagnostic Ultrasound Europ B Instantaneous ultrasonic echo measurement of bladder urine volume with a limited number of ultrasound beams
US7744534B2 (en) 2002-06-07 2010-06-29 Verathon Inc. 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
JP4244300B2 (en) 2003-03-24 2009-03-25 富士フイルム株式会社 Ultrasonic transceiver
US6932770B2 (en) 2003-08-04 2005-08-23 Prisma Medical Technologies Llc Method and apparatus for ultrasonic imaging
US7720269B2 (en) 2003-10-02 2010-05-18 Siemens Medical Solutions Usa, Inc. Volumetric characterization using covariance estimation from scale-space hessian matrices
US20050089205A1 (en) * 2003-10-23 2005-04-28 Ajay Kapur Systems and methods for viewing an abnormality in different kinds of images
US7555151B2 (en) 2004-09-02 2009-06-30 Siemens Medical Solutions Usa, Inc. System and method for tracking anatomical structures in three dimensional images
US7627386B2 (en) 2004-10-07 2009-12-01 Zonaire Medical Systems, Inc. Ultrasound imaging system parameter optimization via fuzzy logic
US7831081B2 (en) 2005-08-15 2010-11-09 Boston Scientific Scimed, Inc. Border detection in medical image analysis
US8047990B2 (en) 2006-01-19 2011-11-01 Burdette Everette C Collagen density and structural change measurement and mapping in tissue
US8055098B2 (en) 2006-01-27 2011-11-08 Affymetrix, Inc. System, method, and product for imaging probe arrays with small feature sizes
US8078255B2 (en) 2006-03-29 2011-12-13 University Of Georgia Research Foundation, Inc. Virtual surgical systems and methods
US8157736B2 (en) 2006-04-18 2012-04-17 Siemens Corporation System and method for feature detection in ultrasound images
US20110137172A1 (en) 2006-04-25 2011-06-09 Mcube Technology Co., Ltd. Apparatus and method for measuring an amount of urine in a bladder
KR100779548B1 (en) 2006-04-25 2007-11-27 (주) 엠큐브테크놀로지 Ultrasound diagnostic device and ultrasound diagnostic method
US20140024937A1 (en) 2006-04-25 2014-01-23 Mcube Technology Co., Ltd. Apparatus and method for measuring an amount of urine in a bladder
CN101448461B (en) 2006-05-19 2011-04-06 株式会社日立医药 Ultrasonic diagnostic device and boundary extraction method
US8905932B2 (en) 2006-08-17 2014-12-09 Jan Medical Inc. Non-invasive characterization of human vasculature
US8167803B2 (en) 2007-05-16 2012-05-01 Verathon Inc. System and method for bladder detection using harmonic imaging
CN101677805B (en) 2007-06-01 2013-05-29 皇家飞利浦电子股份有限公司 Wireless ultrasound probe cable
CN101848677B (en) * 2007-09-26 2014-09-17 麦德托尼克公司 Frequency selective monitoring of physiological signals
US8175351B2 (en) * 2008-09-16 2012-05-08 Icad, Inc. Computer-aided detection and classification of suspicious masses in breast imagery
US8265390B2 (en) * 2008-11-11 2012-09-11 Siemens Medical Solutions Usa, Inc. Probabilistic segmentation in computer-aided detection
EP2194486A1 (en) 2008-12-04 2010-06-09 Koninklijke Philips Electronics N.V. A method, apparatus, and computer program product for acquiring medical image data
WO2010066007A1 (en) 2008-12-12 2010-06-17 Signostics Limited Medical diagnostic method and apparatus
US20100158332A1 (en) * 2008-12-22 2010-06-24 Dan Rico Method and system of automated detection of lesions in medical images
US8467856B2 (en) 2009-07-17 2013-06-18 Koninklijke Philips Electronics N.V. Anatomy modeling for tumor region of interest definition
US8343053B2 (en) 2009-07-21 2013-01-01 Siemens Medical Solutions Usa, Inc. Detection of structure in ultrasound M-mode imaging
JP5645432B2 (en) 2010-03-19 2014-12-24 キヤノン株式会社 Image processing apparatus, image processing system, image processing method, and program for causing computer to execute image processing
US8396268B2 (en) 2010-03-31 2013-03-12 Isis Innovation Limited System and method for image sequence processing
US8532360B2 (en) 2010-04-20 2013-09-10 Atheropoint Llc Imaging based symptomatic classification using a combination of trace transform, fuzzy technique and multitude of features
US20110257527A1 (en) * 2010-04-20 2011-10-20 Suri Jasjit S Ultrasound carotid media wall classification and imt measurement in curved vessels using recursive refinement and validation
AU2011213889B2 (en) 2010-08-27 2016-02-18 Signostics Limited Method and apparatus for volume determination
JP2013542046A (en) 2010-11-10 2013-11-21 EchoMetrix, LLC Ultrasound image processing system and method
JP6106190B2 (en) * 2011-12-21 2017-03-29 Volcano Corporation Method for visualizing blood and blood likelihood in blood vessel images
US20160270757A1 (en) 2012-11-15 2016-09-22 Konica Minolta, Inc. Image-processing apparatus, image-processing method, and program
US10226227B2 (en) * 2013-05-24 2019-03-12 Sunnybrook Research Institute System and method for classifying and characterizing tissues using first-order and second-order statistics of quantitative ultrasound parametric maps
JP6200249B2 (en) 2013-09-11 2017-09-20 Canon Inc. Information processing apparatus and information processing method
KR102328269B1 (en) 2014-10-23 2021-11-19 Samsung Electronics Co., Ltd. Ultrasound imaging apparatus and control method for the same
US20180140282A1 (en) 2015-06-03 2018-05-24 Hitachi, Ltd. Ultrasonic diagnostic apparatus and image processing method
WO2017033502A1 (en) * 2015-08-21 2017-03-02 FUJIFILM Corporation Ultrasonic diagnostic device and method for controlling ultrasonic diagnostic device
US10420523B2 (en) * 2016-03-21 2019-09-24 The Board Of Trustees Of The Leland Stanford Junior University Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same
US10643092B2 (en) * 2018-06-21 2020-05-05 International Business Machines Corporation Segmenting irregular shapes in images using deep region growing with an image pyramid
US11164067B2 (en) * 2018-08-29 2021-11-02 Arizona Board Of Regents On Behalf Of Arizona State University Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6238342B1 (en) * 1998-05-26 2001-05-29 Riverside Research Institute Ultrasonic tissue-type classification and imaging methods and apparatus
WO2001082787A2 (en) * 2000-05-03 2001-11-08 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US20090093717A1 (en) * 2007-10-04 2009-04-09 Siemens Corporate Research, Inc. Automated Fetal Measurement From Three-Dimensional Ultrasound Data
CN102629376A (en) * 2011-02-11 2012-08-08 Microsoft Corporation Image registration
US20140052001A1 (en) * 2012-05-31 2014-02-20 Razvan Ioan Ionasec Mitral Valve Detection for Transthoracic Echocardiography
CN104840209A (en) * 2014-02-19 2015-08-19 Samsung Electronics Co., Ltd. Apparatus and method for lesion detection
CN106204465A (en) * 2015-05-27 2016-12-07 Siemens Medical Solutions USA, Inc. Knowledge-based ultrasound image enhancement
US9536054B1 (en) * 2016-01-07 2017-01-03 ClearView Diagnostics Inc. Method and means of CAD system personalization to provide a confidence level indicator for CAD system recommendations

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113616235A (en) * 2020-05-07 2021-11-09 China Mobile (Chengdu) Information and Communication Technology Co., Ltd. Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
CN113616235B (en) * 2020-05-07 2024-01-19 China Mobile (Chengdu) Information and Communication Technology Co., Ltd. Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
CN112184683A (en) * 2020-10-09 2021-01-05 Shenzhen Duying Medical Technology Co., Ltd. Ultrasonic image identification method, terminal device and storage medium

Also Published As

Publication number Publication date
KR20200003400A (en) 2020-01-09
CA3062330A1 (en) 2018-11-15
KR20220040507A (en) 2022-03-30
US20180330518A1 (en) 2018-11-15
KR102409090B1 (en) 2022-06-15
JP2020519369A (en) 2020-07-02
WO2018209193A1 (en) 2018-11-15
EP3621525A1 (en) 2020-03-18
US12217445B2 (en) 2025-02-04
JP6902625B2 (en) 2021-07-14

Similar Documents

Publication Publication Date Title
US12217445B2 (en) Probability map-based ultrasound scanning
US7819806B2 (en) System and method to identify and measure organ wall boundaries
CN106204465B (en) Knowledge-based ultrasound image enhancement
CN110325119B (en) Ovarian follicle count and size determination
US8435181B2 (en) System and method to identify and measure organ wall boundaries
US20080146932A1 (en) 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume
US11684344B2 (en) Systems and methods for quantitative abdominal aortic aneurysm analysis using 3D ultrasound imaging
US11464490B2 (en) Real-time feedback and semantic-rich guidance on quality ultrasound image acquisition
CN111629670B (en) Echo window artifact classification and visual indicators for ultrasound systems
US20080139938A1 (en) System and method to identify and measure organ wall boundaries
US10949976B2 (en) Active contour model using two-dimensional gradient vector for organ boundary detection
US11278259B2 (en) Thrombus detection during scanning
KR20220163445A (en) Automated Prostate Analysis System
KR20150103956A (en) Apparatus and method for processing medical image, and computer-readable recording medium
US9364196B2 (en) Method and apparatus for ultrasonic measurement of volume of bodily structures
WO2020133236A1 (en) Spinal imaging method and ultrasonic imaging system
CN116258736A (en) System and method for segmenting an image
WO2021230230A1 (en) Ultrasonic diagnosis device, medical image processing device, and medical image processing method
JP2018157982A (en) Ultrasonic diagnosis apparatus and program
EP3848892A1 (en) Generating a plurality of image segmentation results for each node of an anatomical structure model to provide a segmentation confidence value for each node
JP2018157981A (en) Ultrasonic diagnosis apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination