
CN110753517A - Ultrasound scanning based on probability mapping - Google Patents

Ultrasound scanning based on probability mapping

Info

Publication number: CN110753517A
Application number: CN201880030236.6A
Authority: CN (China)
Prior art keywords: information, interest, probability, processing, processing device
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventor: J·H·崔
Current Assignee: Verathon Inc (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Verathon Inc
Application filed by Verathon Inc
Publication of CN110753517A

Classifications

    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A61B8/085 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B8/0866 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B8/0883 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the heart
    • A61B8/0891 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of blood vessels
    • A61B8/14 Echo-tomography
    • A61B8/42 Details of probe positioning or probe attachment to the patient
    • A61B8/463 Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A61B8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B8/5292 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
    • A61B8/565 Details of data transmission or power supply involving data transmission via a network
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06T7/0012 Biomedical image inspection
    • G06T7/0014 Biomedical image inspection using an image reference approach
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B5/4325 Evaluation of the lower reproductive system of the uterine cavities, e.g. uterus, fallopian tubes, ovaries
    • A61B5/4381 Prostate evaluation or disorder diagnosis
    • G06T2207/10132 Ultrasound image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30084 Kidney; Renal

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Physiology (AREA)
  • Vascular Medicine (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Fuzzy Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Pregnancy & Childbirth (AREA)

Abstract

A system may include a probe configured to transmit an ultrasound signal to a target of interest and receive echo information associated with the transmitted ultrasound signal. The system may also include at least one processing device configured to process the received echo information using a machine learning algorithm to generate probability information associated with the object of interest. The at least one processing device may further classify the probability information and output image information corresponding to an object of interest based on the classified probability information.

Description

Ultrasound scanning based on probability mapping
RELATED APPLICATIONS
This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application No. 62/504,709, filed May 11, 2017, the contents of which are incorporated herein by reference in their entirety.
Background
Ultrasound scanners are commonly used to identify a target organ or other structure within the body and/or to determine characteristics associated with the target organ/structure, such as the size of the organ/structure or the volume of fluid in the organ. For example, ultrasound scanners are used to identify a patient's bladder and estimate the volume of fluid in the bladder. Typically, an ultrasound scanner is placed on a patient and triggered to generate an ultrasound signal that includes sound waves output at a particular frequency. The scanner may receive echoes from the ultrasound signals and analyze them to determine the volume of fluid in the bladder. For example, the received echoes may be used to generate corresponding images, which may be analyzed to detect boundaries of a target organ, such as a bladder wall. The volume of the bladder can then be estimated based on the detected boundary information. However, typical ultrasound scanners often suffer from inaccuracies caused by a number of factors, such as variability in the size and/or shape of the target organ of interest between patients, obstructions within the body that make it difficult to accurately detect the boundaries of the target organ/structure, and the like.
Drawings
FIG. 1A shows an exemplary configuration of a scanning system according to an exemplary embodiment;
FIG. 1B illustrates operation of the scanning system of FIG. 1A with respect to detecting an organ within a patient;
FIG. 2 illustrates an exemplary configuration of logic elements included in the scanning system of FIG. 1A;
FIG. 3 illustrates a portion of the data acquisition unit of FIG. 2 in an exemplary embodiment;
FIG. 4 illustrates a portion of the autoencoder unit of FIG. 2 in an exemplary embodiment;
FIG. 5 illustrates an exemplary configuration of components included in one or more elements of FIG. 2;
FIG. 6 is a flowchart illustrating processing by the various components shown in FIG. 2 according to an exemplary embodiment;
FIG. 7 illustrates an output generated by the autoencoder of FIG. 2 in an exemplary embodiment;
FIG. 8 shows binarization processing according to the processing of FIG. 6;
FIG. 9 is a flow diagram associated with displaying information via the base unit of FIG. 1A; and
FIG. 10 shows exemplary image data output by the base unit according to the process of FIG. 9.
Detailed Description
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. In addition, the following detailed description does not limit the invention.
Embodiments described herein relate to the use of machine learning, including the use of neural networks and deep learning, to identify organs or structures of interest within a patient based on information obtained via an ultrasound scanner. For example, a scanner may be used to transmit a plurality of ultrasound signals to a target organ, and machine learning techniques/algorithms may be used to process echo information associated with the transmitted signals. The machine learning process may be used to identify objects of interest and generate probability information associated with each portion or pixel of an image generated based on the received ultrasound echo data.
For example, in one embodiment, ultrasound echo data (e.g., B-mode echo data associated with ultrasound signals transmitted on a plurality of different scan planes directed at a target organ) may be used to generate a probability map for each B-mode image. In one embodiment, each pixel in the B-mode image may be mapped to a probability indicating whether the particular pixel is within or part of the target organ/structure. The results of the pixel-by-pixel analysis are used to generate a target probability map. Binarization processing and post-processing may then be performed to remove noise and provide a more accurate representation of the organ than conventional scanners that attempt to determine the boundary walls of the target organ and estimate the size based on the boundary information. In some embodiments, the output from the post-processing is displayed to medical personnel and can help easily locate the organ when performing the ultrasound scan. Other post-processing may also be performed to estimate the volume of the target organ, such as the volume of fluid in the patient's bladder.
FIG. 1A is a diagram illustrating a scanning system 100 according to an exemplary embodiment. Referring to FIG. 1A, the scanning system 100 includes a probe 110, a base unit 120, and a cable 130.
The probe 110 includes a handle portion 112 (also referred to as handle 112), a trigger 114, and a nose 116 (also referred to as dome or dome portion 116). Medical personnel can hold the probe 110 by the handle 112 and depress the trigger 114 to activate one or more ultrasonic transceivers and transducers located in the nose 116 to emit ultrasonic signals to a target organ of interest. For example, FIG. 1B shows the probe 110 positioned on the pelvic region of a patient 150 and over a target organ of interest (in this example, the bladder 152 of the patient).
The handle 112 allows a user to move the probe 110 relative to the patient 150. As discussed above, when scanning a selected anatomical portion, trigger 114 initiates an ultrasonic scan of the selected anatomical portion when dome 116 is in contact with a surface portion of patient 150. Dome 116 is typically formed of a material that provides a suitable acoustic impedance match to the anatomical portion and/or allows ultrasonic energy to be properly focused as it is projected into the anatomical portion. For example, an acoustic gel or gel pad shown at region 154 in fig. 1B may be applied to the skin of patient 150 over a region of interest (ROI) to provide acoustic impedance matching when dome 116 is placed on the skin of patient 150.
Dome 116 includes one or more ultrasonic transceiver elements and one or more transducer elements (not shown in FIG. 1A or FIG. 1B). The transceiver elements transmit ultrasound energy outward from dome 116 and receive acoustic reflections or echoes generated by internal structures/tissues within the anatomical portion. The one or more ultrasound transducer elements may comprise a one- or two-dimensional array of piezoelectric elements that may be moved within dome 116 by a motor to provide different scan directions for the emission of ultrasound signals by the transceiver elements. Alternatively, the transducer elements may be fixed relative to the probe 110 so that a selected anatomical region may be scanned by selectively exciting the elements in the array.
In some embodiments, the probe 110 can include a direction indicator panel (not shown in fig. 1A) that includes a plurality of arrows that can be illuminated for initially targeting and guiding a user to access a target organ or structure within the ROI. For example, in some embodiments, if the organ or structure is centered with respect to the landing point of the probe 110 placed against the skin surface at the first location on the patient 150, the directional arrow may not be illuminated. However, if the organ is off center, an arrow or set of arrows may be illuminated to guide the user to reposition the probe 110 at a second or subsequent skin location on the patient 150. In other embodiments, a directional indicator may be presented on the display 122 of the base unit 120.
One or more transceivers located in probe 110 may include an inertial reference unit including an accelerometer and/or gyroscope, preferably positioned within or near dome 116. The accelerometer may be operable to sense acceleration of the transceiver (preferably relative to a coordinate system), while the gyroscope may be operable to sense angular velocity of the transceiver relative to the same or another coordinate system. Thus, the gyroscope may be in a conventional configuration employing dynamic elements, or the gyroscope may be an optoelectronic device, such as an optical ring gyroscope. In one embodiment, the accelerometer and gyroscope may comprise a common package device and/or a solid state device. In other embodiments, the accelerometer and/or gyroscope may comprise a commonly packaged microelectromechanical system (MEMS) device. In each case, the accelerometer and gyroscope collectively allow for the determination of position and/or angular changes relative to known locations near an anatomical region of interest within the patient.
The probe 110 may communicate with the base unit 120 via a wired connection, such as a cable 130. In other embodiments, the probe 110 may communicate with the base unit 120 via a wireless connection (e.g., bluetooth, WiFi, etc.). In each case, the base unit 120 includes a display 122 to allow a user to view the processing results from the ultrasound scan and/or to allow for operational interaction with respect to the user during operation of the probe 110. For example, the display 122 may include an output display/screen, such as a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) based display, or other type of display that provides text and/or image data to a user. For example, the display 122 may provide instructions for positioning the probe 110 relative to a selected anatomical portion of the patient 150. The display 122 may also display a two-dimensional or three-dimensional image of the selected anatomical region.
In some implementations, the display 122 can include a Graphical User Interface (GUI) that allows a user to select various features associated with the ultrasound scan. For example, display 122 may allow a user to select whether patient 150 is male, female, or a child. This allows the system 100 to automatically adjust the transmission, reception, and processing of ultrasound signals to accommodate the anatomy of a selected patient, e.g., to adjust the system 100 to accommodate various anatomical details of male and female patients. For example, when a male patient is selected via the GUI on the display 122, the system 100 may be configured to position a single cavity (e.g., bladder) within the male patient. Conversely, when a female patient is selected via the GUI, the system 100 may be configured to image an anatomical portion having multiple cavities (e.g., a body region including the bladder and uterus). Similarly, when a pediatric patient is selected, the system 100 may be configured to adjust the emissions based on the child patient's smaller size. In alternative embodiments, the system 100 may include a cavity selector configured to select a single or multi-cavity scan mode that may be used for male and/or female patients. The lumen selector may thus allow imaging of a single lumen region, or imaging of a multi-lumen region (e.g., a region including the aorta and the heart). Additionally, as described below, selection of patient type (e.g., male, female, child) can be used in analyzing the images to help provide an accurate representation of the target organ.
To scan a selected anatomical portion of a patient, dome 116 may be positioned against a surface portion of patient 150 proximate to the anatomical portion to be scanned, as shown in FIG. 1B. The user activates the transceiver by depressing the trigger 114. In response, the transceiver (optionally positioned via the transducer elements) transmits ultrasound signals into the body and receives corresponding echo signals that may be at least partially processed by the transceiver to produce an ultrasound image of the selected anatomical portion. In a particular embodiment, the system 100 transmits ultrasound signals in a range extending from about two megahertz (MHz) to about 10 or more MHz (e.g., 18 MHz).
In one embodiment, the probe 110 may be coupled to a base unit 120, the base unit 120 configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and transmit the ultrasound energy to a transceiver. The base unit 120 also includes one or more processors or processing logic configured to process the reflected ultrasound energy received by the transceiver to generate an image of the scanned anatomical region.
In yet another particular embodiment, the probe 110 may be a stand-alone device that includes a microprocessor positioned within the probe 110 and software associated with the microprocessor to operatively control the transceiver and process the reflected ultrasonic energy to generate an ultrasound image. Accordingly, the display on the probe 110 may be used to display the generated images and/or view other information associated with the operation of the transceiver. For example, the information may include alphanumeric data indicating a preferred location of the transceiver before performing a series of scans. In other embodiments, the transceiver may be coupled to a general purpose computer (e.g., a laptop computer or desktop computer) that includes software that at least partially controls the operation of the transceiver, and that also includes software for processing information transmitted from the transceiver so that an image of the scanned anatomical region may be generated.
FIG. 2 is a block diagram of functional logic components implemented in the system 100 in accordance with an exemplary embodiment. Referring to FIG. 2, the system 100 includes a data acquisition unit 210, a Convolutional Neural Network (CNN) autoencoder unit 220, a post-processing unit 230, aiming logic 240, and volume estimation logic 250. In an exemplary embodiment, the probe 110 may include the data acquisition unit 210, and the other functional units (e.g., CNN autoencoder unit 220, post-processing unit 230, aiming logic 240, and volume estimation logic 250) may be implemented in the base unit 120. In other embodiments, particular elements and/or logic may be implemented by other means, such as via a computing device or server external to both the probe 110 and the base unit 120 (e.g., accessible via a wireless connection to the Internet or to a local area network within a hospital, etc.). For example, the probe 110 may transmit the echo data and/or image data to a processing system located remotely from the probe 110 and base unit 120 via, for example, a wireless connection (e.g., WiFi or some other wireless protocol/technology).
As described above, the probe 110 may include a transceiver that produces ultrasound signals, receives echoes from the transmitted signals, and generates B-mode image data based on the received echoes (e.g., the magnitude or intensity of the received echoes). In an exemplary embodiment, the data acquisition unit 210 obtains data associated with a plurality of scan planes corresponding to a region of interest within the body of the patient 150. For example, the probe 110 may receive echo data processed by the data acquisition unit 210 to generate two-dimensional (2D) B-mode image data to determine the size and/or volume of the bladder. In other embodiments, the probe 110 may receive echo data that is processed to generate three-dimensional (3D) image data that may be used to determine the size and/or volume of the bladder.
For example, fig. 3 shows an exemplary data acquisition unit 210 for obtaining 3D image data. Referring to fig. 3, the data acquisition unit 210 includes a transducer 310, an outer surface 320 of the dome portion 116, and a base 360. The elements shown in fig. 3 may be included within the dome portion 116 of the probe 110.
The transducer 310 may emit an ultrasonic signal from the probe 110, as shown at 330 in fig. 3. Transducer 310 may be mounted to allow transducer 310 to rotate about two perpendicular axes. For example, transducer 310 may rotate about a first axis 340 relative to base 360 and rotate about a second axis 350 relative to base 360. First axis 340 is referred to herein as the theta axis and second axis 350 is referred to herein as the phi axis. In an exemplary embodiment, the theta and phi ranges of motion may be less than 180 degrees. In one embodiment, interlacing may be performed with respect to theta motion and phi motion. For example, the transducer 310 may be moved in the theta direction and then in the phi direction. This enables the data acquisition unit 210 to obtain smooth and continuous volume scans and increases the rate at which scan data is acquired.
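To make the interleaved motor motion concrete, the following Python sketch generates one possible ordering of (theta, phi) transducer positions; the serpentine ordering, angle span, and step counts are illustrative assumptions rather than values taken from this disclosure.

```python
# Sketch: interleave theta/phi transducer positions for a volume scan.
# Angle ranges, step counts, and the serpentine order are illustrative assumptions only.
def interleaved_scan_positions(n_theta=12, n_phi=12, span_deg=120.0):
    """Yield (theta, phi) pairs, sweeping theta at one phi position, then stepping phi."""
    theta_step = span_deg / (n_theta - 1)
    phi_step = span_deg / (n_phi - 1)
    for j in range(n_phi):
        phi = -span_deg / 2 + j * phi_step
        thetas = range(n_theta) if j % 2 == 0 else reversed(range(n_theta))
        for i in thetas:                      # alternating direction keeps motion smooth
            theta = -span_deg / 2 + i * theta_step
            yield theta, phi

for theta, phi in interleaved_scan_positions(n_theta=3, n_phi=2):
    print(f"theta={theta:6.1f}  phi={phi:6.1f}")
```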
In an exemplary embodiment, the data acquisition unit 210 may adjust the size of the B-mode image before forwarding the B-mode image to the CNN auto-encoder unit 220. For example, the data acquisition unit 210 may include logic to reduce the size of the B-mode image through a reduction or decimation process. The reduced-size B-mode image may then be input to the CNN auto-encoder unit 220, which generates an output probability map, as described in more detail below. In an alternative embodiment, the CNN auto-encoder unit 220 may itself reduce or decimate the input B-mode image at its input layer. In either case, reducing the size/amount of B-mode image data may reduce the processing time and processing power required for the CNN auto-encoder unit 220 to process the B-mode image data. In other embodiments, the data acquisition unit 210 may not perform resizing before inputting the B-mode image data to the CNN auto-encoder unit 220. In still other embodiments, the data acquisition unit 210 and/or CNN autoencoder unit 220 may perform image enhancement operations, such as brightness normalization, contrast enhancement, scan conversion, and the like, to improve accuracy with respect to generating output data.
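As a rough illustration of the size-reduction step, a B-mode frame could be decimated by block averaging before being passed to the CNN auto-encoder unit; the reduction factor below is an arbitrary example, not a value specified here.

```python
import numpy as np

def decimate_bmode(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Reduce a 2D B-mode frame by block averaging (a simple decimation)."""
    h, w = frame.shape
    h2, w2 = h - h % factor, w - w % factor          # crop so blocks divide evenly
    blocks = frame[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

bmode = np.random.rand(480, 640).astype(np.float32)  # stand-in for acquired echo intensities
small = decimate_bmode(bmode, factor=4)
print(bmode.shape, "->", small.shape)                 # (480, 640) -> (120, 160)
```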
Referring again to fig. 2, the CNN autoencoder unit 220 may include logic for processing data received via the data acquisition unit 210. In an exemplary embodiment, the CNN auto-encoder unit 220 may perform Deep Neural Network (DNN) processing that includes multiple convolutional layer processing and multiple cores or filters for each layer, as described in more detail below. The term "CNN autoencoder unit" or "autoencoder unit" as used herein should be broadly interpreted to include neural networks and/or machine learning systems/units, where both the input and output have spatial information, as compared to a classifier that outputs a global label without spatial information.
For example, CNN autoencoder unit 220 includes logic to map the received image input to an output with the least amount of distortion possible. The CNN process may be similar to other types of neural network processes, but the CNN process uses an explicit assumption (i.e., that the input is an image), which makes it easier to encode various attributes/constraints into the process, thereby reducing the number of parameters that must be processed or factored by the CNN autoencoder unit 220. In an exemplary embodiment, the CNN auto-encoder unit 220 performs a convolution process to generate a feature map associated with the input image. The feature map may then be sampled multiple times to generate an output. In an exemplary embodiment, the kernel size of the CNN used by the CNN autoencoder unit 220 may be 17x17 or less to provide sufficient speed in generating output. In addition, a 17x17 kernel size allows the CNN auto-encoder unit 220 to capture enough information around the points of interest within the B-mode image data. Further, according to an exemplary embodiment, the number of convolutional layers may be eight or fewer, each layer having five or fewer kernels. However, it should be understood that smaller kernel sizes (e.g., 3x3, 7x7, 9x9, etc.) or larger kernel sizes (e.g., greater than 17x17), additional kernels per layer (e.g., greater than five), and additional convolutional layers (e.g., greater than ten and up to several hundred) may be used in other embodiments.
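A minimal PyTorch sketch of a fully convolutional network with the proportions described above (eight convolutional layers, at most five 17x17 kernels per layer, and a same-size per-pixel probability output) is shown below; the activation functions and channel counts are assumptions for illustration only, not the network actually used.

```python
import torch
import torch.nn as nn

def build_autoencoder(num_layers: int = 8, kernels_per_layer: int = 5, ksize: int = 17) -> nn.Sequential:
    """Fully convolutional stack: B-mode image in, per-pixel probability map out.

    Padding keeps the spatial size constant, so every input pixel receives a probability.
    """
    layers, in_ch = [], 1
    for i in range(num_layers):
        out_ch = 1 if i == num_layers - 1 else kernels_per_layer
        layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=ksize, padding=ksize // 2))
        layers.append(nn.Sigmoid() if i == num_layers - 1 else nn.ReLU())
        in_ch = out_ch
    return nn.Sequential(*layers)

net = build_autoencoder()
bmode = torch.rand(1, 1, 120, 160)        # batch of one reduced-size B-mode frame
prob_map = net(bmode)                     # values in (0, 1), one per pixel
print(prob_map.shape)                     # torch.Size([1, 1, 120, 160])
```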
In typical applications involving CNN processing, the data dimension (size) is reduced by adding a narrow bottleneck layer within the process so that only data of interest can pass through the narrow layer. This reduction in data dimension is typically achieved by adding a "pooling" layer or using a larger "stride" to reduce the size of the image processed by the neural network. However, in some embodiments described herein with respect to bladder detection, where spatial accuracy of the detected bladder wall locations is important for accurate volume calculations, the use of pooling and/or large strides is minimized, or is combined with other spatial-resolution-preserving techniques (e.g., residual connections or dilated convolutions).
Although the exemplary system 100 depicts the use of the CNN auto-encoder unit 220 to process B-mode input data, in other embodiments, the system 100 may include other types of auto-encoder units or machine learning units. For example, the CNN autoencoder unit 220 may include a neural network structure in which the output layer has the same number of nodes as the input layer. In other embodiments, other types of machine learning modules or units may be used, where the size of the input layer is not equal to the size of the output layer. For example, the machine learning module may generate a probability map output that is greater than twice the input image (in terms of number of layers) or less than half the input image. In other embodiments, the machine learning units included in the system 100 may use various machine learning techniques and algorithms, such as decision trees, support vector machines, Bayesian networks, and the like. In each case, the system 100 uses a machine learning algorithm to generate probability information about the B-mode input data, which in turn can be used to estimate the volume of the target organ of interest, as described in detail below.
FIG. 4 schematically illustrates a portion of the CNN auto-encoder unit 220 according to an exemplary embodiment. Referring to FIG. 4, the CNN autoencoder unit 220 may include a spatial input 410, an FFT input 420, a lookup table 422, a feature map 430, a feature map 440, a lookup table 442, a kernel 450, a bias 452, kernels 460, and a bias 462. The spatial input 410 may represent 2D B-mode image data provided by the data acquisition unit 210. The CNN auto-encoder 220 may perform a Fast Fourier Transform (FFT) to convert the image data to the frequency domain and apply filters or weights to the FFT input via the kernel FFT 450. The output of the convolution process may be biased via bias value 452 and an Inverse Fast Fourier Transform (IFFT) function applied, the result of which is passed through lookup table 422 to generate spatial feature map 430. CNN autoencoder unit 220 may apply an FFT to spatial feature map 430 to generate FFT feature map 440, and the process may be repeated for additional convolutions and kernels. For example, if the CNN auto-encoder unit 220 includes eight convolutional layers, the process may continue seven more times. In addition, the number of kernels applied to each subsequent feature map corresponds to the number of kernels multiplied by the number of feature maps, as shown by the four kernels 460 in FIG. 4. Biases 452 and 462 may also be applied to improve the performance of the CNN processing.
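The frequency-domain convolution described above can be sketched with NumPy as follows; the single-channel setup and the ReLU standing in for the lookup-table nonlinearity are simplifying assumptions, and multiplication of the two FFTs yields a circular convolution.

```python
import numpy as np

def conv2d_fft(image: np.ndarray, kernel: np.ndarray, bias: float = 0.0) -> np.ndarray:
    """One convolutional layer evaluated in the frequency domain."""
    H = np.fft.fft2(image)
    K = np.fft.fft2(kernel, s=image.shape)          # zero-pad the kernel to the image size
    feature = np.real(np.fft.ifft2(H * K)) + bias   # IFFT back to the spatial domain, add bias
    return np.maximum(feature, 0.0)                 # nonlinearity (stand-in for the lookup table)

img = np.random.rand(120, 160)
k = np.random.randn(17, 17) * 0.01
fmap = conv2d_fft(img, k, bias=0.1)
print(fmap.shape)                                   # (120, 160)
```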
As described above, the CNN auto-encoder unit 220 may perform convolution in the frequency domain using FFTs. This approach allows the system 100 to implement the CNN algorithm using less computing power than a larger system that may use multiple computers to execute the CNN algorithm. In this manner, the system 100 may perform CNN processing using a handheld unit and a base unit (e.g., probe 110 and base unit 120). In other embodiments, a spatial domain approach may be used. The spatial domain approach may use additional processing power in the event that the system 100 is capable of communicating with other processing devices, such as processing devices connected to the system 100 via a network (e.g., a wireless or wired network) and/or processing devices operating with the system 100 via a client/server approach (e.g., where the system 100 is a client).
The output of the CNN auto-encoder unit 220 is probability information associated with the probability that each processed portion or pixel of the processed input image is within the target organ of interest. For example, the CNN auto-encoder unit 220 may generate a probability map in which each pixel associated with the processed input image data is mapped to a probability corresponding to a value between 0 and 1, where a value of zero represents a 0% probability that the pixel is within the target organ and a value of 1 represents a 100% probability that the pixel is within the target organ, as described in more detail below. The CNN auto-encoder unit 220 performs the pixel or spatial-location analysis on the processed image rather than on the input image. As a result, the pixel-by-pixel analysis of the processed image may not correspond one-to-one to the input image. For example, based on the resizing of the input image, one processed pixel or spatial location analyzed by the CNN auto-encoder unit 220 to generate probability information may correspond to multiple pixels in the input image, and vice versa. Additionally, the term "probability" as used herein should be interpreted to broadly include the likelihood that a pixel or portion of an image is within a target or organ of interest. The term "probability information" as used herein should also be broadly construed to include discrete values, such as binary or other values.
In other embodiments, the CNN auto-encoder unit 220 may generate a probability map in which each pixel is mapped to various values that may be associated with probability values or indicators (e.g., values ranging from-10 to 10, values corresponding to one of 256 gray values, etc.). In each case, the values or units generated by the CNN autoencoder unit 220 may be used to determine the probability that a pixel or portion of the image is within the target organ. For example, in the 256 gray scale example, a value of 1 may indicate a probability of 0% of a pixel or portion of the image being within the target organ, while a value of 256 may indicate a probability of 100% of the pixel or image being within the target organ.
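For the 256-level gray-scale example, one linear mapping consistent with the endpoints given above is shown in the short sketch below.

```python
# Toy conversion between a 256-level gray value v (1..256) and a probability p (0..1),
# matching the example endpoints described above (1 -> 0%, 256 -> 100%).
def gray_to_probability(v: int) -> float:
    return (v - 1) / 255.0

print(gray_to_probability(1), gray_to_probability(256))   # 0.0 1.0
```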
In other embodiments, CNN autoencoder unit 220 may generate a discrete output value, such as a binary value, that indicates whether a pixel or output region is within the target organ. For example, the CNN auto-encoder unit 220 may include a binarization or classification process that generates discrete values, e.g., "1" when the pixel is within the target organ and "0" when the pixel is not within the target organ. In other cases, the generated value may not be binary, but may be related to whether the pixel is inside or outside the target organ.
In some embodiments, CNN autoencoder unit 220 may consider various factors in analyzing pixel-by-pixel data. For example, the CNN autoencoder unit 220 may receive input from the user indicating whether the patient 150 is male, female, or a child via a GUI displayed on the display 122 of the base unit 120 (fig. 1A) and adjust the probability values based on stored information relating to the likely size, shape, volume, etc. of the target organ for a particular type of patient. In such embodiments, the CNN auto-encoder unit 220 may include three different CNNs trained with male, female, and child data, and the CNN auto-encoder unit 220 may use the appropriate CNN based on the selection.
In some embodiments, the CNN autoencoder unit 220 may automatically identify patient demographic information of the subject, such as gender, age range, adult or child status, etc., using, for example, B-mode image data associated with the subject. The CNN auto-encoder unit 220 may also automatically identify the clinical condition of the subject using, for example, B-mode image data (e.g., Body Mass Index (BMI), body size and/or weight, etc.). The CNN autoencoder unit 220 may also automatically identify device information, such as position information of the probe 110, aiming quality of the probe 110 with respect to the target of interest, etc., when the system 100 performs scanning.
In other embodiments, another processing device (e.g., a processing device similar to the autoencoder unit 220 and/or the processor 520) may perform automatic detection of patient demographic information, clinical condition, and/or device information using, for example, another neural network or other processing logic, and may provide the automatically determined output as input to the CNN autoencoder unit 220. Moreover, in other embodiments, patient demographic information, clinical status and/or device information, patient data, etc. may be manually entered via, for example, the display 122 of the base unit 120 or via input selections on the probe 110. In each case, information automatically identified by the CNN auto-encoder unit 220 or manually input to the CNN auto-encoder unit 220/system 100 may be used to select the appropriate CNN to process the image data.
In other embodiments, CNN autoencoder unit 220 may be trained with other information. For example, CNN autoencoder unit 220 may be trained with patient data associated with a subject, which may include information obtained using patient history data as well as information obtained via physical examination of the patient prior to scanning a target of interest. For example, patient data can include patient history information, such as patient surgical history, chronic disease history (e.g., bladder disease information), previous images of a target of interest (e.g., previous images of the subject's bladder), and the like, as well as data obtained via physical examination of the patient/subject, such as pregnancy status, the presence of scar tissue, hydration issues, abnormalities in the target area (e.g., abdominal distension or swelling), and the like. In an exemplary embodiment, patient data may be entered into the system 100 via the display 122 of the base unit 120. In each case, information automatically generated by CNN autoencoder unit 220 and/or another processing device and/or information manually entered into system 100 may be provided as input to a machine learning process performed by system 100 to help improve the accuracy of data generated by system 100 associated with an object of interest.
In other cases, the auto-encoder unit 220 may receive input information regarding the type of organ being imaged (e.g., bladder, aorta, prostate, heart, kidney, uterus, blood vessels, amniotic fluid, fetus, etc.) and the number of organs, etc., via a GUI provided on the display 122 and use the appropriate CNN trained on the selected organ.
The post-processing unit 230 includes logic for receiving the pixel-by-pixel probability information and applying a "smart" binarization algorithm to the probability map. For example, the post-processing unit 230 may perform interpolation to more clearly define profile details, as described in detail below. In addition, the post-processing unit 230 may adjust the output of the CNN auto-encoder unit 220 based on the subject type. For example, if "child" is selected through the GUI on the display 122 before initiating an ultrasound scan using the probe 110, the post-processing unit 230 may ignore the output from the CNN auto-encoder unit 220 corresponding to locations deeper than a certain depth, since the bladder in a child is generally shallower due to the short stature of a typical child. As another example, the post-processing unit 230 may determine whether to select a single main region or multiple regions of interest based on the organ type. For example, if the organ type being scanned is a bladder, the post-processing unit 230 may select a single main region because there is only one bladder in the body. However, if the target is the pubis, the post-processing unit 230 may select up to two regions of interest, corresponding to the two sides of the pubis.
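A hedged sketch of two of the adjustments described in this paragraph, ignoring outputs below an assumed depth for a child and limiting the number of candidate regions by organ type, might look like the following; the depth cutoff and row spacing are placeholder values, not values from this disclosure.

```python
import numpy as np

def adjust_probability_map(prob_map: np.ndarray, patient_type: str, organ: str,
                           rows_per_cm: float = 10.0, child_max_depth_cm: float = 8.0):
    """Illustrative post-processing tweaks; the depth cutoff is an assumed value."""
    adjusted = prob_map.copy()
    if patient_type == "child":
        cutoff_row = int(child_max_depth_cm * rows_per_cm)       # rows below this depth ignored
        adjusted[cutoff_row:, :] = 0.0
    max_regions = {"bladder": 1, "pubis": 2}.get(organ, 1)       # one bladder, two pubis sides
    return adjusted, max_regions

pm = np.random.rand(120, 160)
pm_adj, n_regions = adjust_probability_map(pm, patient_type="child", organ="bladder")
print(pm_adj.shape, n_regions)
```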
Aiming logic 240 includes logic for determining whether the target organ is properly centered with respect to probe 110 during the ultrasound scan. In some embodiments, aiming logic 240 may generate text or graphics to guide the user in adjusting the position of probe 110 to achieve better scanning of the target organ. For example, aiming logic 240 may analyze data from probe 110 and determine that probe 110 needs to be moved to the left of patient 150. In this case, aiming logic 240 may output text and/or graphics (e.g., a flashing arrow) to display 122 to guide the user to move probe 110 in the appropriate direction.
The volume estimation logic 250 may include logic for estimating the volume of the target organ. For example, the volume estimation logic 250 may estimate the volume based on the 2D image generated by the post-processing unit 230, as described in detail below. Where a 3D image is provided, the volume estimation logic 250 may simply use the 3D image to determine the volume of the target organ. The volume estimation logic 250 may output the estimated volume via the display 122 and/or a display on the probe 110.
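As a toy example only (the disclosure does not specify this formula), one simple way to turn segmented cross-sections from rotational scan planes into a volume estimate is a wedge integration in which each segmented pixel contributes r * dr * dz * dphi; the pixel spacing, plane count, and rotation-axis placement below are assumptions.

```python
import numpy as np

def volume_from_rotational_planes(masks, pixel_mm=0.5, n_planes=12):
    """Toy rotational estimate: each plane sweeps a wedge of pi/n_planes radians.

    r is the distance of a pixel column from the rotation axis, assumed to be the
    center column of each mask.
    """
    dphi = np.pi / n_planes
    dr = dz = pixel_mm
    total_mm3 = 0.0
    for mask in masks:                                # one binary mask per scan plane
        _, cols = np.nonzero(mask)
        r = np.abs(cols - mask.shape[1] / 2.0) * pixel_mm
        total_mm3 += np.sum(r * dr * dz * dphi)
    return total_mm3 / 1000.0                         # mm^3 -> mL

masks = [np.zeros((120, 160), dtype=bool) for _ in range(12)]
for m in masks:
    m[40:80, 60:100] = True                           # fake segmented cross-sections
print(f"estimated volume = {volume_from_rotational_planes(masks):.1f} mL")
```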
The exemplary configuration shown in FIG. 2 is provided for simplicity. It should be understood that system 100 may include more or fewer logic units/devices than those shown in FIG. 2. For example, the system 100 may include a plurality of data acquisition units 210 and a plurality of processing units that process the received data. Additionally, the system 100 may include additional elements, such as a communication interface (e.g., a radio frequency transceiver) that transmits and receives information via an external network to assist in analyzing the ultrasound signals to identify a target organ of interest.
Further, various functions performed by specific components in the system 100 will be described below. In other embodiments, various functions described as being performed by one device may be performed by another device or multiple other devices, and/or various functions described as being performed by multiple devices may be combined and performed by a single device. For example, in one embodiment, the CNN auto-encoder unit 220 may convert the input image into probability information, generate an intermediate mapping output (described below), and may also convert the intermediate output into, for example, volume information, length information, area information, and the like. That is, a single neural network processing device/unit may receive input image data and output processed image output data along with volume and/or size information. In this example, a separate post-processing unit 230 and/or volume estimation logic 250 may not be required. Additionally, in this example, any intermediate mapping output may be accessible or visible or inaccessible or invisible to an operator of the system 100 (e.g., the intermediate mapping may be part of an internal process that is not directly accessible/visible to a user). That is, a neural network (e.g., CNN autoencoder unit 220) included in system 100 may convert received ultrasound echo information and/or images and output volumetric information or other dimensional information of an object of interest without additional input by a user of system 100 or with only a small amount of additional input by a user of system 100.
Fig. 5 shows an exemplary configuration of the apparatus 500. The apparatus 500 may correspond to components such as the CNN autoencoder unit 220, the post-processing unit 230, the targeting logic 240, and the volume estimation logic 250. Referring to fig. 5, the apparatus 500 may include a bus 510, a processor 520, a memory 530, an input device 540, an output device 550, and a communication interface 560. Bus 510 may include a path that allows communication among the various elements of device 500. In an exemplary embodiment, all or some of the components shown in fig. 5 may be implemented and/or controlled by processor 520 executing software instructions stored in memory 530.
Processor 520 may include one or more processors, microprocessors, or processing logic that may interpret and execute instructions. Memory 530 may include a Random Access Memory (RAM) or another type of dynamic storage device that may store information and instructions for execution by processor 520. Memory 530 may also include a Read Only Memory (ROM) device or another type of static storage device that may store static information and instructions for use by processor 520. The memory 530 may further include a solid state drive (SSD). Memory 530 may also include a magnetic and/or optical recording medium (e.g., a hard disk) and its corresponding drive.
Input device 540 may include mechanisms that allow a user to input information to device 500, such as a keyboard, keypad, mouse, pen, microphone, touch screen, voice recognition and/or biometric mechanisms, and the like. Output device 550 may include mechanisms that output information to a user, including a display (e.g., a Liquid Crystal Display (LCD)), a printer, speakers, and the like. In some implementations, a touch screen display can be used as an input device and an output device.
Communication interface 560 may include one or more transceivers used by apparatus 500 to communicate with other apparatuses via wired, wireless, or optical mechanisms. For example, communication interface 560 may include one or more Radio Frequency (RF) transmitters, receivers, and/or transceivers and one or more antennas for transmitting and receiving RF data via a network. Communication interface 560 may also include a modem or ethernet interface to interface with a LAN or other mechanism for communicating with elements in a network.
The exemplary configuration shown in FIG. 5 is provided for simplicity. It should be understood that the apparatus 500 may include more or fewer components than those shown in FIG. 5. In an exemplary embodiment, the apparatus 500 performs operations in response to the processor 520 executing sequences of instructions contained in a computer-readable medium, such as the memory 530. A computer-readable medium may be defined as a physical or logical memory device. The software instructions may be read into memory 530 from another computer-readable medium (e.g., a Hard Disk Drive (HDD), SSD, etc.) or from another device via communication interface 560. Alternatively, hardwired circuitry, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or the like, may be used in place of or in combination with software instructions to implement processes according to embodiments described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
FIG. 6 is a flow diagram illustrating exemplary processing associated with identifying an object of interest and identifying a parameter (e.g., volume) associated with the object of interest. The process may begin with the user manipulating the probe 110 to scan a target organ of interest. In this example, it is assumed that the target organ is the bladder. It should be understood that the features described herein may be used to identify other organs or structures within the body.
In an exemplary embodiment, a user may press the trigger 114, and a transceiver included in the probe 110 transmits an ultrasound signal and acquires B-mode data associated with echo signals received by the probe 110 (block 610). In one embodiment, the data acquisition unit 210 may transmit ultrasound signals on 12 different planes through the bladder and generate 12 B-mode images corresponding to the 12 different planes. In this embodiment, the data may correspond to 2D image data. In other embodiments, the data acquisition unit 210 may generate 3D image data. For example, as discussed above with respect to FIG. 3, the data acquisition unit 210 may perform interleaving to generate a 3D image. In each case, the number of ultrasound signals/scan planes transmitted may vary based on the particular implementation. As described above, in some embodiments, the data acquisition unit 210 may reduce the size of the B-mode image before forwarding the B-mode data to the CNN auto-encoder unit 220. For example, the data acquisition unit 210 may reduce the size of the B-mode image by 10% or more.
In each case, it is assumed that CNN autoencoder unit 220 receives 2D B-mode data and processes the data to remove noise from the received data. For example, referring to FIG. 7, the CNN auto-encoder unit 220 may receive B-mode image data 710 in which a dark region or dark field 712 corresponds to the bladder. As shown, the B-mode image data includes regions that are irregular or may appear unclear or fuzzy to the user. For example, region 712 in FIG. 7 includes lighter areas around the bladder and ambiguous boundaries. Such noisy regions may make it difficult to accurately estimate the volume of the bladder.
In this case, the CNN auto-encoder unit 220 performs denoising on the acquired B-mode image 710 by generating a target probability map (block 620). For example, as described above, the CNN auto-encoder 220 may utilize CNN techniques to generate probability information associated with each pixel in the input image.
The base unit 120 may then determine whether the entire cone of data (i.e., all scan plane data) has been acquired and processed (block 630). For example, the base unit 120 may determine whether all 12 B-mode images corresponding to 12 different scans across the bladder have been processed. If all of the B-mode image data has not been processed (block 630-NO), scanning moves to the next scan plane position (block 640), and the process continues at block 610 to process a B-mode image associated with another scan plane.
If all B-mode image data has been processed (block 630-YES), the base unit 120 may use the 3D information to modify the probability map (block 650). For example, the CNN auto-encoder unit 220 may modify some of the probability information generated by the CNN auto-encoder unit 220 based on whether the patient is a male, female, child, or the like, using stored hypothesis information about the 3D shape and size of the bladder, thereby effectively modifying the size and/or shape of the bladder. That is, as described above, the CNN autoencoder unit 220 may use CNNs trained based on patient demographic information, patient clinical condition, device information associated with the system 100 (e.g., the probe 110), patient data (e.g., patient history information and patient examination data) of the patient, and so forth. For example, if patient 150 is a male, CNN autoencoder unit 220 may use CNNs trained with male patient data, if patient 150 is a female, CNNs trained with female patient data, if patient 150 is a child, CNNs trained with child data, CNNs trained based on age ranges of patients, CNNs trained with patient's medical history, and so forth. In other embodiments, for example, when the base unit 120 receives and processes 3D image data, no additional processing may be performed and block 650 may be skipped. In either case, the system 100 may display P-mode image data (block 660), such as the image 720 shown in FIG. 7.
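The per-plane loop of blocks 610 through 670 can be summarized in the following Python sketch; the callables are placeholder stand-ins for the data acquisition unit, the CNN auto-encoder unit, and the post-processing unit, not actual interfaces of the system.

```python
import numpy as np

def scan_target(n_planes, acquire_bmode, predict_probability, refine_with_3d,
                binarize, display_pmode):
    """Sketch of the FIG. 6 flow; the callables stand in for the real units."""
    prob_maps = []
    for plane in range(n_planes):                     # blocks 610-640: one pass per scan plane
        bmode = acquire_bmode(plane)                  # data acquisition unit
        prob_maps.append(predict_probability(bmode))  # CNN autoencoder: block 620
    prob_maps = refine_with_3d(prob_maps)             # block 650: optional 3D refinement
    display_pmode(prob_maps)                          # block 660: P-mode display
    return [binarize(p) for p in prob_maps]           # block 670: segmentation

# Trivial stand-ins just to show the data flow end to end.
masks = scan_target(
    n_planes=12,
    acquire_bmode=lambda i: np.random.rand(120, 160),
    predict_probability=lambda b: b,                  # pretend the frame is already a probability map
    refine_with_3d=lambda maps: maps,
    binarize=lambda p: p > 0.5,
    display_pmode=lambda maps: None,
)
print(len(masks), masks[0].shape)
```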
Next, the base unit 120 may segment the target region via binarization processing using the probability map (block 670). For example, the post-processing unit 230 may receive the output of the CNN auto-encoder unit 220 and resize the probability map (e.g., by interpolation), smooth the probability map, and/or de-noise the probability map (e.g., by filtering). For example, in one embodiment, the probability map may be enlarged by interpolation to obtain better resolution and/or to at least partially restore the spatial resolution of the original B-mode image data, which may have been reduced in size. In one embodiment, 2D Lanczos interpolation may be performed to resize the image associated with the target probability map.
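As an illustrative sketch of the resizing step only (Pillow and NumPy are assumed; the input and output shapes are arbitrary), a reduced-size probability map could be returned to the original B-mode resolution with Lanczos resampling as follows:

import numpy as np
from PIL import Image

def upscale_probability_map(prob_map, out_shape):
    # Resize a 2D probability map back to the original B-mode resolution using
    # Lanczos resampling; clip because Lanczos can slightly overshoot [0, 1].
    img = Image.fromarray(prob_map.astype(np.float32), mode="F")
    resized = img.resize((out_shape[1], out_shape[0]), resample=Image.LANCZOS)
    return np.clip(np.asarray(resized), 0.0, 1.0)

small_map = np.random.rand(230, 230).astype(np.float32)
full_map = upscale_probability_map(small_map, (256, 256))
print(full_map.shape)                                       # (256, 256)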
Further, the base unit 120 may perform classification or binarization processing to convert the probability information from the probability map into binarized output data. For example, the post-processing unit 230 may convert the probability values into binary values. When multiple candidate probability values are identified for a particular pixel, the post-processing unit 230 may select the most prominent value. In this way, when multiple candidate values are identified, the post-processing unit 230 may apply some "intelligence" to select the most likely value.
Fig. 8 schematically shows an exemplary intelligent binarization process. Referring to fig. 8, image 810 shows an output from the probability mapping or pixel classification corresponding to a 2D ultrasound image, in which the probability information has been converted into a grayscale image having various intensities. As shown, the image 810 includes gray regions labeled 812 and 814 that represent possible locations of portions of the bladder. The post-processing unit 230 identifies the peak point within the image 810 having the greatest intensity, as shown by the cross-hair 822 in image 820. The post-processing unit 230 may then fill the area around the peak point for areas whose intensity is greater than a threshold intensity, as shown by area 832 in image 830. In this case, regions within image 820 whose intensity is less than the threshold intensity are not filled in, resulting in the removal of gray region 814 displayed in image 810. The post-processing unit 230 may then fill in the background, as shown by area 842 in image 840. The post-processing unit 230 then fills any holes or open areas within the image, as shown by area 852 in image 850. The holes in region 842 may correspond to areas of noise or areas associated with certain obstructions in patient 150. In this manner, the post-processing unit 230 identifies the most likely location and size of the bladder. That is, region 852 is considered to be part of the bladder of patient 150.
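The peak-seeded fill illustrated in Fig. 8 could be approximated, for illustration only, by the following sketch (SciPy's ndimage module is assumed; the threshold value and the default connectivity are assumptions rather than parameters of the post-processing unit 230):

import numpy as np
from scipy import ndimage

def intelligent_binarize(prob_map, threshold=0.5):
    # Keep only the above-threshold connected region that contains the global
    # peak of the probability map, then fill interior holes, mirroring the
    # peak / region-fill / hole-fill steps of Fig. 8.
    above = prob_map >= threshold
    labels, _ = ndimage.label(above)
    peak = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    if labels[peak] == 0:                       # peak itself is below threshold
        return np.zeros_like(above)
    region = labels == labels[peak]             # region grown around the peak point
    return ndimage.binary_fill_holes(region)    # close holes caused by noise/obstructions

prob = np.random.rand(256, 256) * 0.3           # low-probability background
prob[100:160, 90:170] = 0.9                     # bright region containing the peak
mask = intelligent_binarize(prob)
print(mask.sum(), "pixels kept")                # 4800 pixels (the 60 x 80 block)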
In other embodiments, the post-processing unit 230 may use information within the image 810 other than the peak intensity value. For example, the post-processing unit 230 may use a peak of the processed probability (e.g., a peak of the smoothed probability map), use multiple peaks to identify multiple fill regions, and so on. As other examples, the post-processing unit 230 may select the "primary" region based on the area, peak probability, or average probability of each region. In still other embodiments, the post-processing unit 230 may identify the region of the patient's bladder using one or more seed points manually entered by the operator via, for example, the display 122, by using an algorithm that generates the one or more seed points, by performing another type of thresholding that does not use seed points, and so forth.
After processing image 810 in this manner, the base unit 120 may output an image, such as image 720 shown in FIG. 7. Referring to fig. 7, image 720 includes a region 722 corresponding to the bladder. As shown, the edges of bladder region 722 are much more distinct than the boundaries in region 712, providing a more accurate presentation of the bladder. In this way, the base unit 120 may generate a probability value for each pixel in the B-mode image and denoise the B-mode data using the luminance value of each pixel, local gradient values of neighboring pixels, and statistical methods such as a hidden Markov model and a neural network algorithm (e.g., a CNN).
The base unit 120 may then convert the segmentation results into a target volume (block 680). For example, the post-processing unit 230 may sum the volumes of all voxels in 3D space corresponding to each valid target pixel in the binarized map. That is, the volume estimation logic 250 may sum the voxels in the 12 segmented target images to estimate the volume of the bladder. For example, the contribution or volume of each voxel may be pre-calculated and stored in a look-up table within the base unit 120. In this case, the volume estimation logic 250 may use the sum of voxels as an index into the look-up table to determine the estimated volume. The volume estimation logic 250 may also display the volume via the display 122 of the base unit 120. For example, the volume estimation logic 250 may display the estimated volume of the bladder (i.e., 135 milliliters (mL) in this example) at region 724 in fig. 7, which is output to the display 122 of the base unit 120. Alternatively, the volume estimation logic 250 may display the volume information via a display on the probe 110. The post-processing unit 230 may also display the segmentation results (block 690). That is, the post-processing unit 230 may display the 12 segments of the bladder via the display 122 of the base unit 120.
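For illustration only, the volume summation might be sketched as follows; the per-voxel look-up table layout (one table per scan plane, indexed by row and column) and the 0.001 mL-per-voxel value are assumptions made for the example:

import numpy as np

def estimate_volume_ml(masks, voxel_volume_lut):
    # Sum the pre-computed volume contributions (in mL) of every voxel that the
    # binarized masks identify as part of the target, across all scan planes.
    total = 0.0
    for mask, lut in zip(masks, voxel_volume_lut):
        total += float(np.sum(lut[mask]))
    return total

masks = [np.zeros((256, 256), dtype=bool) for _ in range(12)]   # 12 segmented planes
masks[0][100:140, 100:140] = True                               # target found on one plane
luts = [np.full((256, 256), 0.001) for _ in range(12)]          # assumed 0.001 mL per voxel
print(f"{estimate_volume_ml(masks, luts):.1f} mL")              # 1.6 mL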
In some embodiments, the system 100 may not perform binarization processing on the probability map information. For example, in some embodiments, CNN autoencoder unit 220 and/or post-processing unit 230 may apply a look-up table to the probability mapping information to identify possible portions of the target organ of interest and display an output via display 122.
Referring back to block 620, in some embodiments, the post-processing unit 230 may display the information in real time as it is generated. FIG. 9 illustrates exemplary processing associated with providing additional display information to a user. For example, the post-processing unit 230 may display the probability mode information (referred to herein as P-mode) in real time via the display 122 as it is generated (fig. 9, block 910). The post-processing unit 230 may also segment the target (block 920) and display the segmentation results together with the B-mode images (block 930). For example, fig. 10 shows three B-mode images 1010, 1012, and 1014 and corresponding P-mode images 1020, 1022, and 1024. In other embodiments, all 12 B-mode images and the 12 corresponding P-mode images may be displayed. As shown, the P-mode images 1020, 1022, and 1024 are much sharper than the B-mode images 1010, 1012, and 1014. Additionally, in some embodiments, the post-processing unit 230 may provide an outline of the bladder boundary displayed in each P-mode image. For example, as shown in fig. 10, each of the P-mode images 1020, 1022, and 1024 may include an outline that is, for example, a different color or a brighter color than the interior portion of the bladder.
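A one-pixel-wide outline suitable for such an overlay could be derived from the binarized segmentation, for example as in the following sketch (SciPy is assumed; drawing the outline in a contrasting color on the display 122 is omitted):

import numpy as np
from scipy import ndimage

def outline_mask(mask):
    # Return only the boundary pixels of a binary segmentation so they can be
    # drawn in a different/brighter color on top of the P-mode image.
    return mask & ~ndimage.binary_erosion(mask)

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                 # toy 20 x 20 "bladder" region
edge = outline_mask(mask)
print(edge.sum())                         # 76 boundary pixels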
Embodiments described herein use machine learning to identify organs or structures of interest within a patient based on information obtained via an ultrasound scanner. A machine learning process may receive image data and generate probability information for each particular portion (e.g., pixel) of the image to determine a probability that the particular portion is within a target organ. The post-processing analysis may also refine the probability information using additional information (e.g., the patient's gender or age, specific target organs, etc.). In some cases, the volume of the target organ may also be provided to the user along with the real-time probability mode images.
The foregoing description of exemplary embodiments provides illustration and description, but is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the embodiments.
For example, the features have been described above with respect to identifying an object of interest (e.g., a patient's bladder) and estimating the volume of the object (e.g., bladder) using CNN processing. In other embodiments, other organs or structures may be identified, and dimensions or other parameters associated with the organs/structures may be estimated. For example, the processes described herein may be used to identify and display prostate, kidney, uterus, ovary, aorta, heart, blood vessels, amniotic fluid, fetus, etc., as well as specific features associated with these targets (e.g., measurements related to volume and/or size).
For example, in embodiments in which the processes described herein are used with respect to various organs or targets other than the bladder (e.g., the aorta, prostate, kidney, heart, uterus, ovary, blood vessels, amniotic fluid, fetus, etc.), additional size-related measurements may be generated. For example, the length, height, width, depth, diameter, area, etc., of the organ or region of interest may be calculated. For a scan of the aorta, for instance, measuring the diameter of the aorta may be important in attempting to identify an abnormality (e.g., an aneurysm). For prostate scanning, it may be desirable to measure the width and height of the prostate. In these cases, measurements such as length, height, width, depth, diameter, and area may be generated/estimated using the machine learning process described above. That is, the machine learning described above can be used to identify boundary walls or other items of interest and estimate certain size-related parameters of interest to medical personnel.
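As a purely illustrative sketch of such size-related measurements (the pixel spacing, the bounding-box definition of width/height, and the equivalent-circle diameter are all assumptions of the example, not prescribed by the embodiments):

import numpy as np

def mask_dimensions(mask, mm_per_pixel):
    # Rough height/width (bounding box) and equivalent-circle diameter of a
    # segmented region, assuming square pixels of known physical size.
    rows, cols = np.nonzero(mask)
    height = (rows.max() - rows.min() + 1) * mm_per_pixel
    width = (cols.max() - cols.min() + 1) * mm_per_pixel
    area = mask.sum() * mm_per_pixel ** 2
    diameter = 2.0 * np.sqrt(area / np.pi)          # circle with the same area
    return height, width, diameter

mask = np.zeros((128, 128), dtype=bool)
mask[30:70, 40:90] = True                           # 40 x 50 pixel region
print(mask_dimensions(mask, mm_per_pixel=0.5))      # (20.0, 25.0, ~25.2) in mm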
Furthermore, features have been described above primarily with respect to generating B-mode images using echo data and applying machine learning to the B-mode images to identify volumes, lengths, or other information associated with the target. In other embodiments, other types of ultrasound input image data may be used. For example, in other embodiments, C-mode image data may be used that generally includes a presentation of an object of interest (e.g., the bladder) formed in a plane oriented perpendicular to the B-mode image. Still further, in other embodiments, a Radio Frequency (RF) or quadrature signal (e.g., IQ signal) may be used as an input to CNN autoencoder unit 220 to generate a probability output map associated with the target.
Furthermore, the features have been described above with respect to generating a single probability map. In other embodiments, multiple probability maps may be generated. For example, the system 100 may generate one probability map for a target organ of interest (e.g., the bladder), another probability map for the pubic bone/pubic bone shadow, and another probability map for the prostate. In this way, a more accurate representation of the internal organs of the patient 150 may be generated, which may enable a more accurate volume estimation of the target organ (e.g., the bladder).
In addition, features described herein relate to performing a pixel-by-pixel analysis on B-mode image data. In other embodiments, edge mapping may be used instead of pixel-by-pixel mapping. In this embodiment, the CNN algorithm may be used to detect edges of the target. In further embodiments, a polygon coordinate method may be used to identify discrete portions of the bladder and then connect these points. In this embodiment, a contour edge tracking algorithm may be used to connect the points of the target organ.
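For illustration, one conventional stand-in for such a contour edge tracking step is a marching-squares contour tracer, sketched below (scikit-image is assumed; this is not necessarily the tracking algorithm contemplated above):

import numpy as np
from skimage import measure

prob = np.zeros((128, 128))
prob[40:90, 30:100] = 0.9                            # stand-in blob for the target organ

contours = measure.find_contours(prob, level=0.5)    # ordered boundary points
polygon = max(contours, key=len)                     # keep the longest contour
print(polygon.shape)                                 # (N, 2) array of (row, col) vertices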
In addition, various inputs (e.g., information indicating whether the patient is male, female, child, etc.) have been described above. Other inputs to probability mapping and/or binarization may also be used. For example, a Body Mass Index (BMI), age, or age range may be entered into the base unit 120, and the base unit 120 may automatically adjust the process based on the particular BMI, age, or age range. Other inputs to the probability mapping and/or binarization process (e.g., depth per pixel, plane orientation, etc.) may be used to improve the accuracy of the volume estimate and/or output image generated by the system 100.
Further, as described above, training data associated with various types of patients (male, female, and child) may be used to help generate the P-mode data. For example, thousands or more training data images may be used to generate a CNN algorithm for processing B-mode input data to identify targets of interest. In addition, thousands or more images may be input to or stored in the base unit 120 to help modify the output of the CNN auto-encoder unit 220. This is particularly useful where the image is adversely affected by an anticipated obstruction (e.g., the pubic bone for bladder scanning). In these embodiments, the base unit 120 may store information on how to account for and minimize the influence of the obstruction. The CNN auto-encoder unit 220 and/or the post-processing unit 230 can then handle the obstruction more accurately.
Further, the features described herein refer to using B-mode image data as an input to the CNN auto-encoder unit 220. In other embodiments, other data may be used. For example, echo data associated with the transmitted ultrasound signals may include harmonic information that may be used to detect a target organ such as the bladder. In this case, higher order harmonic echo information (e.g., second harmonic or higher order harmonics) related to the frequency of the transmitted ultrasound signal may be used to generate probability map information without generating a B-mode image. In other embodiments, higher order harmonic information may be used to enhance the P-mode image data in addition to the B-mode data described above. In still further embodiments, the probe 110 may transmit ultrasound signals at multiple frequencies, and echo information associated with the multiple frequencies may be used as an input to the CNN autoencoder unit 220 or other machine learning module to detect a target organ and estimate parameters of the target organ's volume, size, etc.
For example, multiple B-mode images at the fundamental frequency and multiple B-mode images at higher order harmonic frequencies or multiple higher order harmonic frequencies may be used as inputs to the CNN auto-encoder unit 220. In addition, the fundamental and harmonic frequency information may be pre-processed and used as inputs to the CNN autoencoder unit 220 to help generate the probability map. For example, the ratio between the harmonic power and the fundamental power may be used as input to the CNN auto-encoder unit 220 to enhance the accuracy of the probability mapping.
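For illustration only, a harmonic-to-fundamental power ratio of the kind mentioned above could be computed per echo line as sketched below (NumPy is assumed; the sampling rate, transmit frequency, and +/-0.5 MHz analysis bands are arbitrary example values):

import numpy as np

def harmonic_to_fundamental_ratio(echo, fs, f0, bandwidth=0.5e6):
    # Ratio of second-harmonic power to fundamental power in one RF echo line.
    spectrum = np.abs(np.fft.rfft(echo)) ** 2
    freqs = np.fft.rfftfreq(echo.size, d=1.0 / fs)

    def band_power(center):
        band = (freqs >= center - bandwidth) & (freqs <= center + bandwidth)
        return spectrum[band].sum()

    return band_power(2 * f0) / band_power(f0)

# Synthetic echo: 3 MHz fundamental plus a weaker 6 MHz harmonic, sampled at 40 MHz.
t = np.arange(2048) / 40e6
echo = np.sin(2 * np.pi * 3e6 * t) + 0.2 * np.sin(2 * np.pi * 6e6 * t)
print(harmonic_to_fundamental_ratio(echo, fs=40e6, f0=3e6))   # approximately 0.04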
Additionally, in some embodiments, the post-processing described above may use a second machine learning (e.g., CNN) algorithm to denoise the image data and/or perform contour/edge tracking on the image.
Further, the embodiments have been described above with respect to the data acquisition unit 210 acquiring two-dimensional (2D) B-mode image data. In other embodiments, higher dimensional image data (e.g., 2.5D or 3D) may be input to the CNN auto-encoder unit 220. For example, for a 2.5D implementation, the CNN auto-encoder unit 220 may use B-mode images associated with several scan planes, along with adjacent scan planes, to improve accuracy. For a 3D implementation, the CNN auto-encoder unit 220 may generate 12 probability maps, one for each of the 12 scan planes, and the post-processing unit 230 may generate a 3D image using all 12 probability maps (e.g., via a 3D flood-fill algorithm). Classification and/or binarization processing may then be performed on the 2.5D or 3D image to generate, for example, a 3D output image.
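A 3D variant of the fill step could, for illustration only, stack the 12 per-plane probability maps and keep the largest connected above-threshold component, as in the sketch below (SciPy is assumed; the threshold and plane geometry are simplified assumptions, e.g., the fan-shaped spacing of real scan planes is ignored):

import numpy as np
from scipy import ndimage

def largest_3d_component(plane_maps, threshold=0.5):
    # Stack per-plane probability maps into a volume and keep only the largest
    # connected above-threshold component (a 3D analogue of the flood fill).
    volume = np.stack(plane_maps, axis=0) >= threshold
    labels, count = ndimage.label(volume)
    if count == 0:
        return np.zeros_like(volume)
    sizes = ndimage.sum(volume, labels, index=range(1, count + 1))
    return labels == (int(np.argmax(sizes)) + 1)

maps = [np.random.rand(64, 64) * 0.4 for _ in range(12)]   # background noise only
for m in maps[4:8]:
    m[20:40, 25:50] = 0.9                                   # target visible on planes 4-7
mask3d = largest_3d_component(maps)
print(mask3d.shape, mask3d.sum())                           # (12, 64, 64) 2000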
Further, while series of acts have been described with regard to fig. 6 and 9, the order of the acts may be different in other implementations. Furthermore, non-dependent actions may be implemented in parallel.
It will be apparent that the various features described above may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement the various features is not limiting. Thus, the operation and behavior of the features were described without reference to the specific software code-it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the various features based on the description herein.
Furthermore, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware (e.g., one or more processors, microprocessors, application specific integrated circuits, field programmable gate arrays, or other processing logic), software, or a combination of hardware and software.
In the foregoing specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.

Claims (21)

1. A system, comprising:
a probe configured to:
transmit an ultrasound signal to an object of interest, and
receive echo information associated with the transmitted ultrasound signal; and
at least one processing device configured to:
process the received echo information using a machine learning algorithm to generate probability information associated with the object of interest,
classify the probability information, and
output image information corresponding to the object of interest based on the classified probability information.
2. The system of claim 1, wherein when classifying the probability information, the at least one processing device is configured to binarize the probability information, and the at least one processing device is further configured to:
estimate at least one of a volume, length, height, width, depth, diameter, or area associated with the object of interest based on the binarized probability information.
3. The system of claim 1, wherein the machine learning algorithm comprises a convolutional neural network algorithm.
4. The system of claim 1, further comprising:
a display configured to receive the image information and display the image information.
5. The system of claim 4, wherein the display is further configured to:
simultaneously display B-mode image data corresponding to the received echo information and the output image information corresponding to the target of interest.
6. The system of claim 1, wherein the at least one processing device is further configured to:
generate aiming instructions for directing the probe to a target of interest.
7. The system of claim 1, wherein the object of interest comprises a bladder.
8. The system of claim 1, wherein the at least one processing device is further configured to:
receive at least one of gender information of a subject, information indicating that the subject is a child, or patient data associated with the subject, and
process the received echo information based on the received information.
9. The system of claim 1, wherein the at least one processing device is further configured to:
automatically determine at least one of demographic information of the subject, clinical information of the subject, or device information associated with the probe, and
process the received echo information based on the automatically determined information.
10. The system of claim 1, wherein, when processing the received echo information, the at least one processing device is configured to:
process the received echo information to generate output image data,
process pixels associated with the output image data,
determine a value of each processed pixel,
identify a peak, and
fill a region around a point associated with the peak to identify a portion of the target of interest.
11. The system of claim 1, wherein, when processing the received echo information, the at least one processing device is configured to:
identify higher order harmonic information with respect to a frequency associated with the transmitted ultrasound signal, and
generate the probability information based on the identified higher order harmonic information.
12. The system of claim 1, wherein the probe is configured to transmit the received echo information to the at least one processing device via a wireless interface.
13. The system of claim 1, wherein the object of interest comprises one of an aorta, a prostate, a heart, a uterus, a kidney, a blood vessel, amniotic fluid, or a fetus.
14. A method, comprising:
transmitting an ultrasound signal to a target of interest via an ultrasound scanner;
receiving echo information associated with the transmitted ultrasound signals;
processing the received echo information using a machine learning algorithm to generate probability information associated with the object of interest;
classifying the probability information; and
outputting image information corresponding to the object of interest based on the classified probability information.
15. The method of claim 14, wherein classifying the probability information comprises binarizing the probability information, the method further comprising:
estimating at least one of a volume, a length, a height, a width, a depth, a diameter, or an area associated with the object of interest based on the binarized probability information; and
outputting the at least one of the volume, length, height, width, depth, diameter, or area to a display.
16. The method of claim 14, further comprising:
simultaneously displaying B-mode image data corresponding to the echo information and the output image information corresponding to the target of interest.
17. The method of claim 14, further comprising:
receiving at least one of gender information, age range information, or body mass index information; and
processing the received echo information based on the received information.
18. A system, comprising:
a memory; and
at least one processing device configured to:
receive image information corresponding to an object of interest,
process the received image information using a machine learning algorithm to generate probability information associated with the object of interest,
classify the probability information, and
output second image information corresponding to the object of interest based on the classified probability information.
19. The system of claim 18, wherein the at least one processing device is further configured to:
estimate at least one of a volume, length, height, width, depth, diameter, or area associated with the object of interest based on the classified probability information.
20. The system of claim 18, wherein the machine learning algorithm comprises a convolutional neural network algorithm and the memory stores instructions to execute the convolutional neural network algorithm.
21. The system of claim 18, further comprising:
a probe configured to:
transmit an ultrasound signal to a target of interest,
receive echo information associated with the transmitted ultrasound signal, and
forward the echo information to the at least one processing device,
wherein the at least one processing device is further configured to:
generate, using the machine learning algorithm, the image information corresponding to the object of interest based on the echo information.
CN201880030236.6A 2017-05-11 2018-05-11 Ultrasound scanning based on probability mapping Pending CN110753517A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762504709P 2017-05-11 2017-05-11
US62/504,709 2017-05-11
PCT/US2018/032247 WO2018209193A1 (en) 2017-05-11 2018-05-11 Probability map-based ultrasound scanning

Publications (1)

Publication Number Publication Date
CN110753517A true CN110753517A (en) 2020-02-04

Family

ID=62685100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880030236.6A Pending CN110753517A (en) 2017-05-11 2018-05-11 Ultrasound scanning based on probability mapping

Country Status (7)

Country Link
US (1) US20180330518A1 (en)
EP (1) EP3621525A1 (en)
JP (1) JP6902625B2 (en)
KR (2) KR20200003400A (en)
CN (1) CN110753517A (en)
CA (1) CA3062330A1 (en)
WO (1) WO2018209193A1 (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3426158A1 (en) 2016-03-09 2019-01-16 Echonous, Inc. Ultrasound image recognition systems and methods utilizing an artificial intelligence network
KR102139856B1 (en) * 2017-06-23 2020-07-30 울산대학교 산학협력단 Method for ultrasound image processing
EP3420913B1 (en) * 2017-06-26 2020-11-18 Samsung Medison Co., Ltd. Ultrasound imaging apparatus and control method thereof
WO2019009919A1 (en) * 2017-07-07 2019-01-10 Massachusetts Institute Of Technology System and method for automated ovarian follicular monitoring
EP3777699A4 (en) * 2018-03-30 2021-05-26 FUJIFILM Corporation Ultrasound diagnostic device and control method of ultrasound diagnostic device
US11391817B2 (en) 2018-05-11 2022-07-19 Qualcomm Incorporated Radio frequency (RF) object detection using radar and machine learning
US10878570B2 (en) * 2018-07-17 2020-12-29 International Business Machines Corporation Knockout autoencoder for detecting anomalies in biomedical images
JP2021530303A (en) * 2018-07-20 2021-11-11 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Ultrasound imaging with deep learning and related devices, systems, and methods
WO2020122606A1 (en) * 2018-12-11 2020-06-18 시너지에이아이 주식회사 Method for measuring volume of organ by using artificial neural network, and apparatus therefor
JP7192512B2 (en) * 2019-01-11 2022-12-20 富士通株式会社 Learning program, learning device and learning method
JP7273518B2 (en) * 2019-01-17 2023-05-15 キヤノンメディカルシステムズ株式会社 Ultrasound diagnostic equipment and learning program
WO2020150086A1 (en) * 2019-01-17 2020-07-23 Verathon Inc. Systems and methods for quantitative abdominal aortic aneurysm analysis using 3d ultrasound imaging
US20200281570A1 (en) * 2019-01-17 2020-09-10 Canon Medical Systems Corporaion Apparatus
JP7258568B2 (en) * 2019-01-18 2023-04-17 キヤノンメディカルシステムズ株式会社 ULTRASOUND DIAGNOSTIC DEVICE, IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING PROGRAM
JP7302988B2 (en) * 2019-03-07 2023-07-04 富士フイルムヘルスケア株式会社 Medical imaging device, medical image processing device, and medical image processing program
JP7242409B2 (en) * 2019-04-26 2023-03-20 キヤノンメディカルシステムズ株式会社 MEDICAL IMAGE PROCESSING DEVICE, ULTRASOUND DIAGNOSTIC DEVICE, AND LEARNED MODEL CREATION METHOD
US20220262146A1 (en) 2019-06-12 2022-08-18 Carnegie Mellon University System and Method for Labeling Ultrasound Data
JP7284337B2 (en) * 2019-07-12 2023-05-30 ベラソン インコーポレイテッド Representation of a target during aiming of an ultrasonic probe
US20210045716A1 (en) * 2019-08-13 2021-02-18 GE Precision Healthcare LLC Method and system for providing interaction with a visual artificial intelligence ultrasound image segmentation module
CN110567558B (en) * 2019-08-28 2021-08-10 华南理工大学 Ultrasonic guided wave detection method based on deep convolution characteristics
CN112568935B (en) * 2019-09-29 2024-06-25 中慧医学成像有限公司 Three-dimensional ultrasonic imaging method and system based on three-dimensional tracking camera
US11583244B2 (en) * 2019-10-04 2023-02-21 GE Precision Healthcare LLC System and methods for tracking anatomical features in ultrasound images
US20210183521A1 (en) * 2019-12-13 2021-06-17 Korea Advanced Institute Of Science And Technology Method and apparatus for quantitative imaging using ultrasound data
JP7093093B2 (en) * 2020-01-08 2022-06-29 有限会社フロントエンドテクノロジー Ultrasonic urine volume measuring device, learning model generation method, learning model
KR102246966B1 (en) * 2020-01-29 2021-04-30 주식회사 아티큐 Method for Recognizing Object Target of Body
WO2021207226A1 (en) * 2020-04-07 2021-10-14 Verathon Inc. Automated prostate analysis system
KR102238280B1 (en) * 2020-12-09 2021-04-08 박지현 Underwater target detection system and method of thereof
US20230070062A1 (en) * 2021-08-27 2023-03-09 Clarius Mobile Health Corp. Method and system, using an ai model, for identifying and predicting optimal fetal images for generating an ultrasound multimedia product
JP2023034400A (en) * 2021-08-31 2023-03-13 DeepEyeVision株式会社 Information processing device, information processing method and program
JP2023087273A (en) 2021-12-13 2023-06-23 富士フイルム株式会社 Ultrasonic diagnostic device and control method of ultrasonic diagnostic device
JP2023143418A (en) * 2022-03-25 2023-10-06 富士フイルム株式会社 Ultrasonic diagnostic device and operation method thereof
WO2024101255A1 (en) * 2022-11-08 2024-05-16 富士フイルム株式会社 Medical assistance device, ultrasonic endoscope, medical assistance method, and program
CN118071746B (en) * 2024-04-19 2024-08-30 广州索诺星信息科技有限公司 Ultrasonic image data management system and method based on artificial intelligence

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5081933A (en) * 1990-03-15 1992-01-21 Utdc Inc. Lcts chassis configuration with articulated chassis sections between vehicles
JPH06233761A (en) * 1993-02-09 1994-08-23 Hitachi Medical Corp Image diagnostic device for medical purpose
US5734739A (en) * 1994-05-31 1998-03-31 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US5984870A (en) * 1997-07-25 1999-11-16 Arch Development Corporation Method and system for the automated analysis of lesions in ultrasound images
US20050089205A1 (en) * 2003-10-23 2005-04-28 Ajay Kapur Systems and methods for viewing an abnormality in different kinds of images
US8167803B2 (en) 2007-05-16 2012-05-01 Verathon Inc. System and method for bladder detection using harmonic imaging
US20090082691A1 (en) * 2007-09-26 2009-03-26 Medtronic, Inc. Frequency selective monitoring of physiological signals
US8265390B2 (en) * 2008-11-11 2012-09-11 Siemens Medical Solutions Usa, Inc. Probabilistic segmentation in computer-aided detection
US20100158332A1 (en) * 2008-12-22 2010-06-24 Dan Rico Method and system of automated detection of lesions in medical images
US20110257527A1 (en) * 2010-04-20 2011-10-20 Suri Jasjit S Ultrasound carotid media wall classification and imt measurement in curved vessels using recursive refinement and validation
JP6106190B2 (en) * 2011-12-21 2017-03-29 ボルケーノ コーポレイション Visualization method of blood and blood likelihood in blood vessel image
JP6323335B2 (en) * 2012-11-15 2018-05-16 コニカミノルタ株式会社 Image processing apparatus, image processing method, and program
JP6200249B2 (en) * 2013-09-11 2017-09-20 キヤノン株式会社 Information processing apparatus and information processing method
WO2016194161A1 (en) * 2015-06-03 2016-12-08 株式会社日立製作所 Ultrasonic diagnostic apparatus and image processing method
WO2017033502A1 (en) * 2015-08-21 2017-03-02 富士フイルム株式会社 Ultrasonic diagnostic device and method for controlling ultrasonic diagnostic device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6238342B1 (en) * 1998-05-26 2001-05-29 Riverside Research Institute Ultrasonic tissue-type classification and imaging methods and apparatus
WO2001082787A2 (en) * 2000-05-03 2001-11-08 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US20090093717A1 (en) * 2007-10-04 2009-04-09 Siemens Corporate Research, Inc. Automated Fetal Measurement From Three-Dimensional Ultrasound Data
CN102629376A (en) * 2011-02-11 2012-08-08 微软公司 Image registration
US20140052001A1 (en) * 2012-05-31 2014-02-20 Razvan Ioan Ionasec Mitral Valve Detection for Transthoracic Echocardiography
CN104840209A (en) * 2014-02-19 2015-08-19 三星电子株式会社 Apparatus and method for lesion detection
CN106204465A (en) * 2015-05-27 2016-12-07 美国西门子医疗解决公司 Knowledge based engineering ultrasonoscopy strengthens
US9536054B1 (en) * 2016-01-07 2017-01-03 ClearView Diagnostics Inc. Method and means of CAD system personalization to provide a confidence level indicator for CAD system recommendations

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113616235A (en) * 2020-05-07 2021-11-09 中移(成都)信息通信科技有限公司 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
CN113616235B (en) * 2020-05-07 2024-01-19 中移(成都)信息通信科技有限公司 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
CN112184683A (en) * 2020-10-09 2021-01-05 深圳度影医疗科技有限公司 Ultrasonic image identification method, terminal equipment and storage medium

Also Published As

Publication number Publication date
JP2020519369A (en) 2020-07-02
JP6902625B2 (en) 2021-07-14
EP3621525A1 (en) 2020-03-18
KR102409090B1 (en) 2022-06-15
US20180330518A1 (en) 2018-11-15
KR20220040507A (en) 2022-03-30
WO2018209193A1 (en) 2018-11-15
CA3062330A1 (en) 2018-11-15
KR20200003400A (en) 2020-01-09

Similar Documents

Publication Publication Date Title
KR102409090B1 (en) Probability map-based ultrasound scanning
CN110325119B (en) Ovarian follicle count and size determination
CN112603361B (en) System and method for tracking anatomical features in ultrasound images
JP7022217B2 (en) Echo window artifact classification and visual indicators for ultrasound systems
US11464490B2 (en) Real-time feedback and semantic-rich guidance on quality ultrasound image acquisition
US11432803B2 (en) Method and system for generating a visualization plane from 3D ultrasound data
US10470744B2 (en) Ultrasound diagnosis apparatus, ultrasound diagnosis method performed by the ultrasound diagnosis apparatus, and computer-readable storage medium having the ultrasound diagnosis method recorded thereon
US9324155B2 (en) Systems and methods for determining parameters for image analysis
US10238368B2 (en) Method and system for lesion detection in ultrasound images
US10949976B2 (en) Active contour model using two-dimensional gradient vector for organ boundary detection
US11278259B2 (en) Thrombus detection during scanning
US11684344B2 (en) Systems and methods for quantitative abdominal aortic aneurysm analysis using 3D ultrasound imaging
CN112641464A (en) Method and system for context-aware enabled ultrasound scanning
KR20150103956A (en) Apparatus and method for processing medical image, and computer-readable recoding medium
Mendizabal-Ruiz et al. Probabilistic segmentation of the lumen from intravascular ultrasound radio frequency data
EP3409210B1 (en) Ultrasound diagnosis apparatus and operating method thereof
WO2020133236A1 (en) Spinal imaging method and ultrasonic imaging system
CN116258736A (en) System and method for segmenting an image
JP7336766B2 (en) Ultrasonic diagnostic device, ultrasonic diagnostic method and ultrasonic diagnostic program
US20190271771A1 (en) Segmented common anatomical structure based navigation in ultrasound imaging
EP3848892A1 (en) Generating a plurality of image segmentation results for each node of an anatomical structure model to provide a segmentation confidence value for each node
CN113939234A (en) Ultrasonic diagnostic apparatus, medical image processing apparatus, and medical image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination