
WO2019182520A1 - Method and system of segmenting image of abdomen of human into image segments corresponding to fat compartments - Google Patents

Method and system of segmenting image of abdomen of human into image segments corresponding to fat compartments

Info

Publication number
WO2019182520A1
Authority
WO
WIPO (PCT)
Prior art keywords
segment
image
adipose tissue
subcutaneous adipose
various embodiments
Prior art date
Application number
PCT/SG2019/050160
Other languages
French (fr)
Inventor
Bhanu Prakash Kirgaval Nagaraja Rao
Thuy Tien BUI
Krishna Kanth CHITTA
Original Assignee
Agency For Science, Technology And Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency For Science, Technology And Research
Publication of WO2019182520A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4869Determining body composition
    • A61B5/4872Body fat
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/032Transmission computed tomography [CT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/46Arrangements for interfacing with the operator or the patient
    • A61B6/461Displaying means of special interest
    • A61B6/463Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/4828Resolving the MR signals of different chemical species, e.g. water-fat imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/54Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
    • G01R33/56Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/5608Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • Various aspects of this disclosure relate to methods of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments.
  • Various aspects of this disclosure relate to a system for performing the method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments.
  • Various aspects of this disclosure relate to a computer program.
  • Obesity, and more specifically the accumulation of adipose tissue in the abdomen, is associated with health risks such as type II diabetes and cardiovascular disease, which rank eighth and first, respectively, in terms of worldwide mortality.
  • Modern advanced imaging modalities such as Computed Tomography (CT) and Magnetic Resonance (MR) can clearly visualize body fat volume in different depots, enabling non-invasive and accurate methods for quantitation of these fat depots.
  • the abdominal fat compartments can be broadly classified as Subcutaneous Adipose Tissue (SAT) and Visceral Adipose Tissue (VAT).
  • the abdominal SAT can be further divided into superficial and deep SAT (SSAT and DSAT), which are separated by a fascial plane called the fascia superficialis.
  • the fat depots differ in their anatomical locations and also differ in their influence on metabolic activity.
  • VAT shows stronger correlation to adverse metabolic risk profiles such as cardio-metabolic risk factors and insulin resistance level than SAT.
  • DSAT contains more saturated fat and lipolytically active adipocytes than SSAT.
  • DSAT is also an indicator of metabolic syndrome, which belongs to risk factor cluster of diabetes type II and cardiovascular diseases.
  • SSAT inversely correlates with these risks, and a higher distribution of SSAT in the abdominal region signifies beneficial cardio-metabolic effects in type II diabetes patients. Therefore, accurate quantification of the different fat compartments is crucial to evaluate the effectiveness of obesity management and interventional studies.
  • accurate quantification of abdominal VAT, DSAT and SSAT will aid the studies exploring the relationship between these fat depots and metabolic factors.
  • Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments.
  • the method may include identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image, wherein the subcutaneous adipose tissue (SAT) segment includes a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment.
  • the method may include detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment to obtain edge coordinates in Cartesian coordinates of the edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • the method may include transforming the edge coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • the method may include determining a continuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment, wherein determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve.
  • the method may include transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image.
  • the method may include identifying the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment, wherein the transformed boundary curve may separate the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the image, and wherein the superficial subcutaneous adipose tissue (SSAT) segment is between the abdominal wall segment and the transformed boundary curve, and the deep subcutaneous adipose tissue (DSAT) segment is between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
  • Various embodiments may provide a system for performing the method according to the present disclosure.
  • the system may include a memory and at least one processor communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments.
  • the system may include an input module communicatively coupled to the processor, wherein the input module is configured to receive the image of the abdomen of the human.
  • the system may include a user interface communicatively coupled to the processor, wherein the user interface is configured to receive instructions from a user and configured to communicate the instructions to the processor for execution.
  • Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments.
  • the method may include receiving training data comprising a plurality of images of abdomens of humans, each image of the plurality of images of abdomens of humans comprising image segments corresponding to a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment.
  • Various embodiments may provide a computer program including instructions executable by at least one processor according to the present disclosure.
  • FIG. 2A is a schematic illustration of a system 200 according to various embodiments.
  • FIG. 2B is a schematic illustration of a user interface 230 according to various embodiments
  • FIG. 3A shows an example of a load panel 240 of a user interface 230 according to various embodiments
  • FIG. 3B shows an example of removing an extraneous region segment in the image according to various embodiments
  • FIG. 3C shows an example of correcting inhomogeneity intensity of the image according to various embodiments
  • FIG. 4A shows an example of an image transformation from Cartesian coordinates to polar coordinates in an exemplary method of segmenting of the abdominal fat compartments into SAT and VAT, according to various embodiments;
  • FIG. 4B shows an example of an edge map of an abdomen image, according to various embodiments
  • FIG. 4C shows an example of an interpolated SAT boundary according to various embodiments
  • FIG. 4D shows a clearer image of the interpolated SAT boundary of FIG. 4C according to various embodiments
  • FIG. 5A shows an intensity corrected SAT image before filtering, according to various embodiments
  • FIG. 5B shows an intensity corrected SAT image after coherence filtering, according to various embodiments.
  • FIG. 5C shows a Sobel edge map of FIG. 5A, according to various embodiments.
  • FIG. 5D shows a Sobel edge map of FIG. 5B, according to various embodiments.
  • FIG. 6 shows edge positions from an edge map converted into polar coordinate system, according to various embodiments
  • FIG. 7 shows the results of segmentation, according to various embodiments.
  • FIG. 8 shows an exemplary deep learning architecture to segment the image into fat compartments according to various embodiments
  • FIG. 9A shows a plot of cross entropy loss according to various embodiments.
  • FIG. 9B shows a plot of training accuracy according to various embodiments.
  • FIG. 9C shows a plot of test accuracy according to various embodiments.
  • FIG. 10A and FIG. 10B illustrate the results of segmentation of abdominal fat pictorially of a test subject 1 and a test subject 2, according to various embodiments;
  • FIG. 10C illustrates the results of segmentation of abdominal fat pictorially of a test subject 3 according to various embodiments;
  • FIG. 11A shows a sagittal image S° crossing a body symmetrical axis, according to various embodiments;
  • FIG. 11B shows row-wise projection of S°, according to various embodiments
  • FIG. 12A shows a map of VAT without removal of bone structures, according to various embodiments
  • FIG. 12B shows a separated pelvic bone, according to various embodiments
  • FIGS. 12C and 12D show a right half and left half of the map of VAT, respectively, after diagonal image opening, according to various embodiments;
  • FIGS. 12E and 12F show structuring elements for diagonal image opening, according to various embodiments
  • FIG. 13A shows a sagittal image S° crossing body symmetrical axis, according to various embodiments;
  • FIG. 13B shows an axial image crossing spine region, according to various embodiments
  • FIG. 13C shows column-wise projection of S°, according to various embodiments
  • FIG. 14 shows a flowchart of an exemplary framework of a segmentation process according to various embodiments
  • FIG. 15A shows an example of image selection in a load panel, according to various embodiments
  • FIG. 15B shows an example of the segmented fat compartments in a segmentation panel, according to various embodiments
  • FIG. 15C shows an example of correcting the segmented fat compartments in a segmentation panel, according to various embodiments
  • FIG. 15D shows an example of quantitation of separated fat compartments in a report panel, according to various embodiments
  • FIG. 16 shows segmented abdominal fat compartments in 3D volume, according to various embodiments
  • FIG. 17A shows Bland-Altman plots for a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments, according to various embodiments;
  • FIG. 17B shows Bland Altman plots for a deep learning method, according to various embodiments.
  • the articles "a", "an" and "the" as used with regard to a feature or element include a reference to one or more of the features or elements.
  • Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments.
  • the image may be a Magnetic Resonance (MR) image or a Computed Tomography (CT) image.
  • the image may be acquired from any specified region of the abdomen.
  • the image may be acquired from a location of at least one of vertebral discs L1-L5 of a lumbar spine of the human.
  • the MR image scanner may be set up for the required image acquisition.
  • the acquisition bandwidth may be chosen to separate fat and water voxels; for example, the acquisition bandwidth may be between 400 and 800 Hz/pixel.
  • the image acquisition may be performed, for example, when the human holds his breath for a pre-determined time period. The time period for acquiring each image may be between 10 and 40 seconds, for example 20 seconds. The image acquisition may also be performed, for example, when the human breathes freely.
  • the data processing technique is not limited to the type of acquisition or imaging sequence. The data processing technique may be used for Dixon sequence, water suppressed fat imaging sequence or any other fat imaging sequences.
  • water-only and fat-only images may be generated by linear combination of in-phase and out-of-phase images.
  • distortions in the acquired images may be corrected during reconstruction processes.
  • the reconstruction process may be carried out during the process of image acquisition.
  • the distortions in the images may be filtered during the reconstruction process.
  • the reconstruction process may be conducted using any suitable correction technique, for example Cartesian, spiral or radial techniques.
  • the fat compartments may be a visceral adipose tissue (VAT), a subcutaneous adipose tissue (SAT), a superficial subcutaneous adipose tissue (SSAT), and a deep subcutaneous adipose tissue (DSAT).
  • the method may include: identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image.
  • the subcutaneous adipose tissue (SAT) segment may include a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment.
  • the visceral adipose tissue (VAT) segment is a segment on the image of the abdomen corresponding to the visceral adipose tissue (VAT).
  • the subcutaneous adipose tissue (SAT) segment is a segment on the image of the abdomen corresponding to the subcutaneous adipose tissue (SAT).
  • the superficial subcutaneous adipose tissue (SSAT) segment is a segment on the image of the abdomen corresponding to the superficial subcutaneous adipose tissue (SSAT).
  • the deep subcutaneous adipose tissue (DSAT) segment is a segment on the image of the abdomen corresponding to the deep subcutaneous adipose tissue (DSAT).
  • the method may include: detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment to obtain edge coordinates in Cartesian coordinates of the edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. In various embodiments, the edges may be detected, for example, using Sobel edge detection or Canny edge detection, as sketched below.
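  • As an illustration of this edge detection step, the following is a minimal Python sketch assuming scikit-image for the Sobel and Canny operators; the array name `sat_image` and the Otsu thresholding of the Sobel gradient magnitude are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np
from skimage import feature, filters

def detect_sat_edges(sat_image: np.ndarray, use_canny: bool = False) -> np.ndarray:
    """Return a binary edge map of the SAT segment."""
    if use_canny:
        # Canny is sensitive to less prominent edges such as the fascia.
        return feature.canny(sat_image, sigma=1.0)
    # Sobel gradient magnitude, thresholded into a binary edge map.
    gradient = filters.sobel(sat_image)
    return gradient > filters.threshold_otsu(gradient)

# Cartesian (row, column) coordinates of the detected edges:
# ys, xs = np.nonzero(detect_sat_edges(sat_image))
```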
  • the method may include: transforming the edge coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • the Cartesian coordinates may refer to Cartesian coordinates of a Cartesian coordinate system, for example applied to a CT image.
  • the polar coordinates may refer to polar coordinates of a polar coordinate system.
  • the transformation of the edges may use a center of the image in Cartesian coordinates as the reference, for example a point on the major human symmetry axis.
  • the center may be determined using the center of the row and the center of the column of the image; for example, the center may be [rows/2, cols/2] for an image in Cartesian coordinates, as sketched below.
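  • The following is a minimal sketch of converting edge coordinates to polar coordinates about the [rows/2, cols/2] center described above; the function and variable names are illustrative.

```python
import numpy as np

def edges_to_polar(edge_map: np.ndarray):
    """Map binary edge pixels to (radius, angle) pairs about the image center."""
    rows, cols = edge_map.shape
    cy, cx = rows / 2.0, cols / 2.0        # reference center of the image
    ys, xs = np.nonzero(edge_map)          # Cartesian edge coordinates
    r = np.hypot(ys - cy, xs - cx)         # radial distance from the center
    theta = np.arctan2(ys - cy, xs - cx)   # angle in radians, (-pi, pi]
    return r, theta
```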
  • the method may include: determining a continuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • the determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve.
  • interpolating missing points may be done using spline interpolation.
  • a shape preserving interpolation technique may be used to interpolate the missing points.
  • the shape preserving interpolation technique may be piecewise cubic Hermite interpolation.
  • the continuous boundary curve may undergo data smoothing. The data smoothing may be conducted using locally weighted linear regression; a combined interpolation-and-smoothing sketch is given below.
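  • A minimal sketch of the interpolation and smoothing steps above, assuming SciPy's PchipInterpolator for the piecewise cubic Hermite interpolation and statsmodels' LOWESS for the locally weighted linear regression; the density of the angle grid and the smoothing fraction are illustrative choices.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from statsmodels.nonparametric.smoothers_lowess import lowess

def continuous_boundary(theta: np.ndarray, r: np.ndarray, n_points: int = 360):
    """Fill missing points of r(theta) with shape-preserving interpolation,
    then smooth with locally weighted linear regression."""
    theta_u, idx = np.unique(theta, return_index=True)  # PCHIP needs increasing x
    pchip = PchipInterpolator(theta_u, r[idx])          # piecewise cubic Hermite
    theta_dense = np.linspace(theta_u.min(), theta_u.max(), n_points)
    r_dense = pchip(theta_dense)                        # interpolated missing points
    r_smooth = lowess(r_dense, theta_dense, frac=0.1, return_sorted=False)
    return theta_dense, r_smooth
```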
  • the method may include: transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image.
  • the DSAT and SSAT may be separated based on the transformed boundary curve.
  • the transformed boundary curve may substantially overlap with the fascia superficialis.
  • the method may include: identifying the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment, wherein the transformed boundary curve separates the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the image, and wherein the superficial subcutaneous adipose tissue (SSAT) segment is between the abdominal wall segment and the transformed boundary curve, and the deep subcutaneous adipose tissue (DSAT) segment is between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
  • the superficial subcutaneous adipose tissue (SSAT) segment may be identified as the fat regions between the abdominal wall segment and the transformed boundary curve.
  • the deep subcutaneous adipose tissue (DSAT) segment may be identified as the fat regions between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
  • the method of segmenting the image may include: conducting coherence filtering on the subcutaneous adipose tissue (SAT) segment of the image to remove unwanted edges before detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • identifying the visceral adipose tissue (VAT) segment and the subcutaneous adipose tissue (SAT) segment may include: transforming the image from Cartesian coordinates to polar coordinates.
  • the Cartesian coordinates may refer to Cartesian coordinates of a Cartesian coordinate system, for example applied to a CT image.
  • the polar coordinates may refer to polar coordinates of a polar coordinate system.
  • the transformation of the edges may use a center of the image in Cartesian coordinates as reference, for example the major human symmetry axis.
  • Identifying the VAT and SAT may further include detecting edges of an abdominal wall segment and a SAT-VAT boundary to obtain edge coordinates in polar coordinates of the edges of the abdominal wall segment and the SAT-VAT boundary, wherein the SAT-VAT boundary in polar coordinates is discontinuous.
  • the edge map may be obtained using an edge detection method, for example Sobel edge detection.
  • the method of segmenting the image may include interpolating missing points of the SAT-VAT boundary to obtain a connected SAT-VAT boundary.
  • the method may include identifying the visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image from the abdominal wall segment and the connected SAT-VAT boundary.
  • the method may include pre-processing the image of the abdomen of the human before segmenting the image into image segments corresponding to the fat compartments.
  • pre-processing the image may include removing at least one extraneous region segment in the image.
  • pre-processing the image may include removing a plurality of extraneous region segments in the image.
  • the extraneous region segment may be a limb segment of the human, such as an arm segment.
  • pre-processing the image may include correcting an inhomogeneity intensity of the image.
  • correcting inhomogeneity intensity of the image may include estimating a bias field of the image and applying the bias field to the image to obtain a corrected image.
  • the bias field of the image may be estimated using a biased fuzzy c-means algorithm.
  • the bias field may be subtracted from the image to obtain the corrected image, as sketched below.
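  • A minimal sketch of the subtraction step only: the disclosure estimates the bias field with a biased fuzzy c-means algorithm, whereas the stand-in below approximates it with a heavy Gaussian blur purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_inhomogeneity(image: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """Subtract an estimated slowly varying bias field from the image."""
    # Crude stand-in for the bias-field estimate (the disclosure uses a
    # biased fuzzy c-means algorithm instead of a Gaussian blur).
    bias_field = gaussian_filter(image.astype(float), sigma=sigma)
    corrected = image - bias_field        # the bias field is subtracted
    corrected -= corrected.min()          # shift back to non-negative values
    return corrected
```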
  • the method may include post-processing the image by removing a pelvic bone segment and/or a spine segment after segmenting the image of the abdomen of the human into the image segments corresponding to the fat compartments.
  • the method may include receiving training data comprising a plurality of images of abdomens of humans, each image of the plurality of images of abdomens of humans comprising image segments corresponding to the visceral adipose tissue (VAT) segment, the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
  • the plurality of images in the training data may be derived from images segmented in accordance with the method above.
  • the training data may be manually derived by a human; for example, the training data may be a plurality of ground truth images constructed by an expert radiologist using manual tracing of the boundaries between DSAT, SSAT and VAT.
  • the method may include using the training data to create a trained computer model for segmenting the image segments corresponding to the fat compartments of the abdomen.
  • the method may include using the trained computer model to segment a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment of an abdomen in a target image.
  • the image and/or the target image may be acquired from a location of at least one of vertebral discs L1-L5 of a lumbar spine of the human.
  • a system for performing the method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments may include a memory.
  • the system may include at least one processor communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments.
  • the system may include one or a plurality of processors communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments.
  • the system may include an input module communicatively coupled to the processor or to at least one of the plurality of processors, wherein the input module is configured to receive the image of the abdomen of the human.
  • a memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).
  • the user interface may include a load panel, a segment panel and a report panel.
  • the system may be configured to load the image into the memory and display the image in a display window when the load panel receives instructions from the user.
  • the system may be configured to interactively correct the image after the image has been segmented when the segment panel receives instructions from the user.
  • the system may be configured to quantitate the segmented fat compartments when the report panel receives instructions from the user to do so.
  • the separated fat components may be quantitated for at least one, or for each, of VAT, SAT, SSAT and DSAT.
  • the separated fat components may be quantitated for a total fat computation, wherein total fat may mean the combination of the SAT and VAT fat compartments.
  • the separated fat components may be quantitated for each of VAT, SAT, SSAT and DSAT at each lumbar position (L1-L5).
  • a 3D rendered visualization of each fat compartment may be provided.
  • the system may be configured to receive a plurality of images of the abdomen of the human and batch process the plurality of images, wherein each image of the plurality of images is configured to be segmented into image segments corresponding to the fat compartments.
  • a method of semantically segmenting an image of an abdomen of a human into image segments corresponding to fat compartments using deep learning may include: receiving training data including a plurality of images of abdomens of humans, wherein each image of the plurality of images of abdomens of humans includes image segments corresponding to a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment, and a deep subcutaneous adipose tissue (DSAT) segment.
  • the method may include using the trained computer model to semantically segment a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment of an abdomen in a target image.
  • a computer program may include instructions executable by at least one processor according to the present disclosure.
  • FIG. 1 shows a flowchart of the method 100 according to various embodiments.
  • a first step 110 may include identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image, wherein the SAT segment comprises a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment.
  • a second step 120 may include detecting edges of the SSAT segment and the DSAT segment in the SAT segment to obtain edge coordinates in Cartesian coordinates of the edges of the SSAT segment and the DSAT segment.
  • a third step 130 may include transforming the edge coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the SSAT segment and the DSAT segment.
  • a fourth step 140 may include determining a continuous boundary curve of the SSAT segment and the DSAT segment, wherein determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve.
  • a fifth step 150 may include transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image.
  • a sixth step 160 may include identifying the SSAT segment and the DSAT segment in the SAT segment, wherein the transformed boundary curve separates the SSAT segment and the DSAT segment in the image, and wherein the SSAT segment is between the abdominal wall segment and the transformed boundary curve, and the DSAT segment is between the transformed boundary curve and the VAT segment.
  • FIG. 2A is a schematic illustration of a system 200 according to various embodiments.
  • the system 200 may include a memory 210.
  • the system 200 may include at least one processor 220 which may be communicatively coupled to the memory 210 and may be configured to receive the image of the abdomen of the human and may be configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments.
  • the system 200 may include a user interface 230 which may be communicatively coupled to the processor 220, wherein the user interface 230 may be configured to receive instructions from a user and may be configured to communicate the instructions to the processor 220 for execution.
  • the user interface may include at least one I/O device such as a monitor or a keyboard or a touchscreen device.
  • the user interface may be configured to receive instructions from the user through the at least one I/O device.
  • the I/O device in the user interface 230 may be configured to communicate the instructions received from the user to the processor for execution.
  • the user interface may include a graphic user interface displayed on an I/O device, for example a monitor.
  • the system 200 may be configured to receive a plurality of images of the abdomen of the human and batch process the plurality of images, wherein each image of the plurality of images is configured to be segmented into image segments corresponding to the fat compartments.
  • FIG. 2B is a schematic illustration of a user interface 230 according to various embodiments.
  • the user interface 230 may include a load panel 240, a segment panel 250 and a report panel 260.
  • the system 200 may be configured to load the image into the memory and display the image in a display window when the load panel 240 receives instructions from the user.
  • the system 200 may be configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments when the segment panel 250 receives instructions from the user.
  • the system 200 may be configured to pre-process the image prior to segmenting the image when the segment panel 250 receives instructions from the user.
  • the system 200 may be configured to post-process the image after segmenting the image when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to interactively correct the image after the image has been segmented when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to quantitate the segmented fat compartments when the report panel 260 receives instructions from the user to do so.
  • abdominal MR images at discs L1-L5 of the lumbar spine from 20 normal and overweight healthy male volunteers may be acquired using an image acquisition device, for example, a 3T Siemens Tim Trio platform.
  • Each image stack consists of 80 axial slices of 3 mm slice thickness, 0.6 mm inter-slice gap and 1.25 mm × 1.25 mm in-plane resolution.
  • 2-point Dixon data from the L1-L5 vertebrae were acquired with TR/TE1/TE2/FA of 5.28 ms / 2.45 ms / 3.68 ms / 90 degrees, respectively.
  • the repetition time (TR) and the echo time (TE) may be basic pulse sequence parameters measured in milliseconds.
  • the echo time (TE) may represent the time from the center of the RF pulse to the center of the echo.
  • the repetition time (TR) may be the length of time between corresponding consecutive points on a repeating series of pulses and echoes.
  • FA is the flip angle of the RF pulse measured in degrees.
  • FIG. 3A shows an example of a load panel 240 of a user interface 230 according to various embodiments.
  • Load panel 240 may include a selection window 310 for selecting images to be processed by the processor 220.
  • the user may select an image or a plurality of images to be processed by the processor 220.
  • the user may use information of the data set, such as the size, voxel size, and number of grey scales, to load multiple volume data into the system 200.
  • the loaded data list box 320 may show the image(s) selected by the user to be processed.
  • the image display window 330 may show the image selected by the user.
  • FIG. 3B shows an example of removing an extraneous region segment in the image according to various embodiments.
  • a pre-processing step may include removing at least one extraneous region segment in the image.
  • a plurality of extraneous region segments in the image may be removed.
  • the extraneous region may be, for example, a limb of the human.
  • the extraneous region is an arm.
  • FIG. 3B illustrates the example of removing extraneous region segments, with image 340 being the original image with the arms (I°), graph 350 showing projection values (y) on the right half of I°, and image 360 showing the result after the arm removal step (I).
  • the position of the arms on the left half and right half of the image may be determined, for example, by taking a column-wise projection of the thickness of the object in the image, as sketched below.
  • $P_j = \{\, i : I^{\circ}_{ij} > 0 \,\}$: the set of row indices of every non-zero pixel in column $j$ of the image slice $I^{\circ}$.
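  • A minimal sketch of the projection-based arm removal, assuming a binary slice as input; the cut rule (the thinnest non-empty column on each half) is an assumption, since the disclosure only specifies the projection $P_j$.

```python
import numpy as np

def column_thickness(binary_slice: np.ndarray) -> np.ndarray:
    """|P_j|: the number of non-zero pixels in each column j of the slice."""
    return np.count_nonzero(binary_slice, axis=0)

def remove_arms(binary_slice: np.ndarray) -> np.ndarray:
    """Zero out columns outside the assumed torso-arm valleys."""
    y = column_thickness(binary_slice).astype(float)
    mid = binary_slice.shape[1] // 2
    # Assumed rule: cut at the thinnest non-empty column on each half,
    # where the projection dips between the torso and an arm.
    left = np.where(y[:mid] > 0, y[:mid], np.inf)
    right = np.where(y[mid:] > 0, y[mid:], np.inf)
    cut_left = int(np.argmin(left))
    cut_right = mid + int(np.argmin(right))
    cleaned = binary_slice.copy()
    cleaned[:, :cut_left + 1] = 0
    cleaned[:, cut_right:] = 0
    return cleaned
```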
  • FIG. 3C shows an example of correcting inhomogeneity intensity of the image according to various embodiments.
  • a pre-processing step may include correcting inhomogeneity intensity of the image.
  • Intensity inhomogeneities may be caused by several factors, for example the static magnetic field (B0), components of the radio-frequency field (B1), gradient field irregularities, susceptibility effects arising from tissue interactions with the magnetic field, and/or attenuation of the signal by tissues and receiver coil sensitivity.
  • correcting inhomogeneity intensity of the image may include estimating a bias field of the image 370 and applying the bias field to the image to obtain a corrected image 380.
  • the bias field of the image may be estimated using a biased fuzzy c-means algorithm. The bias field may be subtracted from the original image 370 to obtain a uniform-intensity image 380.
  • segmentation of the abdominal fat compartments into SAT and VAT, and further of SAT into SSAT and DSAT, may be achieved by identifying the fascia superficialis.
  • the abdominal fat compartments may be identified by a low computation image processing based method or a deep learning method.
  • the SAT may appear as a single contiguous compartment and may be separated from the VAT by an abdominal wall.
  • Identification of an abdominal wall boundary may delineate the subcutaneous and visceral fat regions. In various embodiments, delineate may mean indicating a position of a border or boundary. In various embodiments, identification of the abdominal wall boundary may indicate the position of the border or boundary of the subcutaneous and visceral fat regions.
  • FIG. 4A shows an example of an image transformation from Cartesian coordinates to polar coordinates in an exemplary method of segmenting the abdominal fat compartments into SAT and VAT, according to various embodiments.
  • the image 410 may be a 2D image slice and may be transformed from a Cartesian coordinate system into a polar coordinate system to obtain a polar image 420.
  • the boundary of the SAT may be identified before SAT is segmented from the abdomen wall.
  • In an exemplary Cartesian-polar image transformation, let $I_p$ be the abdomen image 420 in the polar system, obtained as $I_p \leftarrow I$, with $I$ being the abdomen image 410 in Cartesian coordinates.
  • Image pixels that do not fall exactly on the grid of the Cartesian image 410 may be predicted using interpolation during the transformation.
  • the interpolation may be bilinear interpolation, as sketched below.
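  • A minimal sketch of the Cartesian-to-polar image warp $I_p \leftarrow I$ with bilinear interpolation; the output size (n_r × n_theta) is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image: np.ndarray, n_r: int = 256, n_theta: int = 360) -> np.ndarray:
    """Resample a 2D image onto an (r, theta) grid about the image center."""
    rows, cols = image.shape
    cy, cx = rows / 2.0, cols / 2.0
    r = np.linspace(0, min(cy, cx) - 1, n_r)
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # Sample source pixels; order=1 gives the bilinear interpolation
    # mentioned in the text for off-grid positions.
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(image, coords, order=1, mode="nearest")
```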
  • FIG. 4B shows an example of an edge map of an image of an abdomen, according to various embodiments.
  • FIG. 4C shows an example of an interpolated SAT boundary according to various embodiments.
  • FIG. 4D shows a clearer image of the interpolated SAT boundary of FIG. 4C according to various embodiments.
  • FIG. 4D is shown for clarity purposes so that the boundary 430 (thick line) can be easily seen. The boundary line could have a color contrast with the remainder of the image for ease of reading.
  • An exemplary edge detection technique may be as follows: a Sobel edge map is computed from the polar image, and a discontinuity may occur in the resulting boundary curve $M_2$, which may be recognized as:
  • $j_x$: the column indices of the areas where SAT and VAT are connected, defined by formula (*);
  • $j_1$ and $j_2$: the column indices of the pixels directly before and after the connected part. That is, the discontinuous SAT-VAT boundary line in the polar coordinate system may be predicted and interpolated.
  • the interpolated SAT boundary 430 is shown in FIGS. 4C and 4D.
  • the interpolated SAT boundary 430 is marked with a thick line.
  • FIG. 5A shows an intensity corrected SAT image before filtering, according to various embodiments.
  • FIG. 5B shows an intensity corrected SAT image after coherence filtering, according to various embodiments.
  • coherence filtering may be applied to the SAT image to eliminate the unwanted, discontinuous edges before edge detection is applied to create an edge map.
  • FIG. 5C shows a Sobel edge map of FIG. 5A, according to various embodiments.
  • FIG. 5D shows a Sobel edge map of FIG. 5B, according to various embodiments. Both FIG. 5C and 5D are obtained by edge detection under the same threshold.
  • A Canny edge operation may be used on the image segmented through the Sobel operation above.
  • In an exemplary operation on the coherence-filtered SAT image, let $I_{SAT}$ be the isolated SAT image after coherence filtering and $I^{e}_{SAT}$ be the Canny edge map of $I_{SAT}$.
  • an edge detector, for example a Canny edge detector, which is sensitive to less prominent edges, may be used.
  • the Canny edge detector is shown as an example; however, the present disclosure is not limited thereto, and any suitable edge detection method may be used as long as the edges are detected.
  • the coordinates of the edges may be converted to the polar system and may be fitted to a single line, with the missing points being interpolated.
  • the single line, also referred to as the transformed boundary curve, substantially overlaps with the fascia superficialis; thus the single line may be used as a representation of the fascia superficialis.
  • the interpolation may be piecewise cubic Hermite interpolation.
  • interpolating missing points may be done using spline interpolation.
  • the DSAT and SSAT may be separated based on this line in Cartesian coordinates.
  • FIG. 6 shows edge positions from an edge map converted into polar coordinate system, according to various embodiments.
  • Graph 610 shows the edge positions of the abdominal wall boundary 620 and the SAT-VAT boundary 630.
  • The coordinates of detected edges may be transformed from Cartesian to polar coordinates. In an example, let $I_{SAT}$ be the isolated SAT image after coherence filtering and $I^{e}_{SAT}$ be the Canny edge map.
  • $N = \{\,(x, y) : I^{e}_{SAT}(y, x) > 0\,\}$
  • $N$ is transformed to the polar coordinate system, $N_p(r, \theta) \leftarrow N$, and plotted on graph 610.
  • detected edges 640 may be a discontinuous curve.
  • FIG. 8 shows an exemplary deep learning architecture to semantically segment the image into fat compartments according to some embodiments.
  • the semantic segmentation may be done using any deep learning architecture, for example a semantic segmentation network such as a convolutional neural network, for example the U-Net architecture, which will be used hereinafter for illustration purposes; however, the present disclosure is not limited thereto.
  • semantic segmentation may describe the process of associating each pixel of an image with a class label.
  • The U-Net architecture may incorporate both local and global features due to the encoding-decoding nature of its framework. In recent times, many variants of U-Net have been developed, and the U-Net architecture has achieved improved performance on many image segmentation and reconstruction tasks. In addition, U-Net has been extended from 2D to 3D for volumetric biomedical image segmentation tasks. In this example, the 3D U-Net was adopted, and a self-attention mechanism was utilized for down-sampling and up-sampling as well as for global information fusion at the end of the encoding step.
  • the input 3D patch first goes through an input block with 64 initial feature maps, using two 3×3×3 convolutions with stride 1. Batch normalization and ReLU may be performed before each convolution. ReLU stands for rectified linear unit, a type of nonlinear activation function. Down-sampling blocks with a self-attention layer may be used in this framework to reduce dimensionality.
  • the latent space at the end of the encoder network includes a global aggregation block, or bottom block, for improved global information fusion for each output location.
  • the decoder network may use three up-sampling blocks followed by an output block which has a 1×1×1 convolution with stride 1.
  • the framework may utilize dropout with a rate of 0.5 at the output before the final 1×1×1 convolution. A weight decay of 2×10⁻⁶ may also be utilized.
  • the feature maps of the encoder network are added to those of the decoder network instead of being concatenated. In various embodiments, any suitable training parameters may be used; a minimal sketch of the input block appears below.
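  • A minimal PyTorch sketch of the input block described above (64 feature maps, two 3×3×3 convolutions with stride 1, batch normalization and ReLU before each convolution); the module layout and names are illustrative, and the full encoder-decoder with self-attention is not reproduced here.

```python
import torch
import torch.nn as nn

class PreActConv3d(nn.Module):
    """BN -> ReLU -> 3x3x3 convolution, in the order the text describes."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.bn = nn.BatchNorm3d(in_ch)
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(torch.relu(self.bn(x)))

class InputBlock(nn.Module):
    """Input block with 64 initial feature maps."""
    def __init__(self, in_ch: int = 1, features: int = 64):
        super().__init__()
        self.block = nn.Sequential(
            PreActConv3d(in_ch, features),
            PreActConv3d(features, features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Example: a batch of 16 single-channel 32x32x32 patches, as in training.
# out = InputBlock()(torch.randn(16, 1, 32, 32, 32))  # -> (16, 64, 32, 32, 32)
```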
  • Training: In the training phase, 16 randomly cropped 3D blocks of size 32×32×32 are extracted from a shuffled dataset of 8 input image volumes, each of size 320×240×88. The batch size is set at 16. The model is trained on 8 FAT DIXON MRI image volumes. A similar training scheme may be extended to include other MRI modalities (such as water Dixon images or fat fraction images) as input volumes, with patches simultaneously extracted from the same random locations in all input modalities. The Adam optimizer with a learning rate of 0.001 is employed for gradient descent. The number of training epochs and training performance are shown in FIGS. 9A-C.
  • Inference: In the testing phase, a similar patch size of 32×32×32 is used to extract patches with an overlapping step size of 8 to cover the whole test volume. Prediction probabilities are obtained for each of the 3D blocks for segmentation into three classes (SSAT, DSAT and VAT) using the trained model. Inference is performed on two subjects with 2 FAT DIXON MRI input image volumes respectively, and the resultant segmentation maps for the 3 classes of SSAT, DSAT and VAT are analysed for evaluation metrics. Testing or evaluation performance is indicated as test accuracy vs. number of iterations in the figures.
  • the Dice similarity index (DSI) is employed as the evaluation metric.
  • These metrics evaluate accuracy by comparing binary images of the prediction and its corresponding ground truth, so the 3-class segmentation task must be transformed into 3 binary segmentation tasks for evaluation. That is, a 3D binary segmentation map may be constructed for each class, where 1 denotes a voxel that belongs to the foreground class and 0 denotes a voxel that belongs to the background.
  • the evaluation is performed on binary segmentation maps derived from the 3-class predictions for SSAT, DSAT and VAT, as sketched below.
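  • A minimal sketch of the per-class Dice evaluation, assuming integer label maps; the class numbering standing in for SSAT, DSAT and VAT is illustrative.

```python
import numpy as np

def dice_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSI = 2|A ∩ B| / (|A| + |B|) for binary volumes A (pred) and B (truth)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def dice_per_class(pred_labels: np.ndarray, truth_labels: np.ndarray,
                   classes=(1, 2, 3)) -> dict:
    # Classes 1, 2, 3 standing in for SSAT, DSAT and VAT, respectively.
    return {c: dice_index(pred_labels == c, truth_labels == c) for c in classes}
```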
  • the ground truth image may be an image constructed by an expert radiologist using manual tracing of the boundaries between DSAT, SSAT and VAT.
  • segmentation accuracy may be evaluated by statistical measures such as sensitivity, specificity, Dice statistical index, Cohen’s kappa coefficient and Bland-Altman analysis.
  • FIG. 9A shows a plot of cross entropy loss according to various embodiments.
  • FIG. 9B shows a plot of training accuracy according to various embodiments.
  • FIG. 9C shows a plot of test accuracy according to various embodiments.
  • FIG. 9C represents test accuracy on Y-axis and number of iterations/epochs of training on X-axis.
  • test accuracy curve 910 represents the test accuracy at different iterations, and test accuracy curve 920 is a smoothed version of the same curve.
  • the plots show the results of training and testing, which illustrate the reduction in error and the improvement in accuracy over the iterations.
  • FIG. 10A and FIG. 10B illustrate the results of segmentation of abdominal fat pictorially of test subject 1 and test subject 2 respectively, according to various embodiments. As illustrated by comparing the prediction with the ground truth, accuracy is high on the images with high and medium subcutaneous fat.
  • FIG. 10C illustrates the results of segmentation of abdominal fat pictorially of test subject 3 according to various embodiments.
  • axial, coronal and sagittal images are shown.
  • the results of segmentation of the abdominal fat may be displayed in three orthogonal planes namely axial, coronal and sagittal. These data may be used in clinical studies or radiology.
  • post-processing may include removal of the pelvic bone. It is observed that the pelvic bone may show up in the image slices, for example in the L5 region of the lumbar spine, as this is the natural location of the pelvis. Therefore, a first step in removing the pelvic bone structure from the VAT image may be to identify the L5 region in the image. The location of L5 may be determined by taking a row-wise projection of the sagittal plane which crosses the center of the dataset.
  • FIG. 11A shows a sagittal image S° crossing body symmetrical axis, according to various embodiments.
  • FIG. 11B shows row-wise projection of S°, according to various embodiments.
  • A binary map of $L^{\circ}$ may be used to handle the image as a logical array instead of a numerical array, as this may reduce computational effort.
  • $i$: the lowest row index in $P^{s}$, where $P^{s}$ is the row-wise projection of $S^{\circ}$.
  • the next step may be to identify the pelvic bone area in the image slice.
  • Because the pelvic bone has pixel intensity similar to adipose tissue, threshold suppression usually fails to separate it from the VAT. Therefore, morphological separation may be a more applicable approach. From observations, adipose tissue gets attached to the bones, making the separation between them vaguer.
  • FIG. 12A shows a map of VAT without removal of bone structures, according to various embodiments.
  • FIG. 12B shows a separated pelvic bone, according to various embodiments.
  • FIGS. 12C and 12D show a right half and left half of the map of VAT, respectively, after diagonal image opening, according to various embodiments.
  • FIGS. 12E and 12F show structuring elements for diagonal image opening, according to various embodiments.
  • the exemplary method may detach the pelvic bones from the VAT using image opening with diagonal structuring elements; a sketch follows the definitions below.
  • Let $H_L$ and $H_R$ be the structuring elements defined as follows:
  • $CV$: the convex hull map of $L^{\circ}$, which covers the VAT and pelvic bones;
  • $n$: ¼ of the width of $CV$;
  • Let $G_L$ and $G_R$ be the largest 8-connected areas in $M_L$ and $M_R$, respectively.
  • $BW^{\circ}$ represents the logical binary image of $I^{\circ}$. This allows performing logical operations instead of arithmetic operations, which may be computationally less expensive.
  • 'L' and 'R' represent the left and right sides of the data or subject. Pelvic bone separation may be done by processing the left and right sides similarly and combining the two results at the end.
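  • A minimal sketch of the diagonal image opening on one half of the VAT map, assuming SciPy's morphology routines; the element length `n` is passed in as given, since its derivation from the convex hull is only summarized above.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def diagonal_open(vat_half: np.ndarray, n: int, left: bool) -> np.ndarray:
    """Open one half of the binary VAT map with a diagonal line element and
    keep the largest 8-connected area (G_L or G_R in the text)."""
    h = np.eye(n, dtype=bool)          # H_L: main-diagonal structuring element
    if not left:
        h = np.fliplr(h)               # H_R: mirrored (anti-)diagonal
    opened = binary_opening(vat_half.astype(bool), structure=h)
    labels, count = label(opened, structure=np.ones((3, 3), dtype=int))
    if count == 0:
        return opened
    sizes = np.bincount(labels.ravel())[1:]   # skip the background label 0
    return labels == (int(np.argmax(sizes)) + 1)
```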
  • post-processing may include removal of the spine. 2D coordinates of the spine area may be identified using column-wise projections on the sagittal and axial planes crossing through the centre of the image volume. Pixels within this area may then be suppressed to eliminate non-fat spine pixels from the VAT. In various embodiments, the suppression may be done by Otsu thresholding.
  • FIG. 13A shows a sagittal image S° crossing body symmetrical axis, according to various embodiments.
  • FIG. 13B shows an axial image crossing spine region, according to various embodiments.
  • FIG. 13C shows column-wise projection of S°, according to various embodiments.
  • According to various embodiments, the spine may be modelled as a cylinder throughout the image volume, with the center and radius of the cylinder defined from these projections; a sketch of the suppression step is given below.
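  • A minimal sketch of the Otsu-based suppression within the modelled cylinder; the center and radius are taken as given, since their defining expressions are not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu

def suppress_spine(slice_img: np.ndarray, center: tuple, radius: float) -> np.ndarray:
    """Zero out low-intensity (non-fat) pixels inside the spine cylinder."""
    rows, cols = slice_img.shape
    yy, xx = np.mgrid[:rows, :cols]
    in_cylinder = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    t = threshold_otsu(slice_img[in_cylinder])  # Otsu threshold inside the area
    out = slice_img.copy()
    out[in_cylinder & (slice_img < t)] = 0      # suppress non-fat spine pixels
    return out
```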
  • FIG. 14 shows a flowchart of an exemplary framework 1400 of a segmentation process according to various embodiments.
  • a first step 1410 may be a preprocessing step.
  • the pre-processing step 1410 may include loading multiple data sets as described in FIG. 3A and/or removing extraneous regions as described in FIG. 3B and/or correcting intensity inhomogeneity as described in FIG. 3C.
  • extraneous features in the images, for example arm regions, may be removed, followed by noise filtering and intensity inhomogeneity correction. Any noise removal algorithm may be used to enhance the signal-to-noise ratio of the image.
  • a second step 1420 may be a segmentation step.
  • the segmentation step 1420 may include SAT-VAT segmentation and SSAT-DSAT segmentation. Segmentation step 1420 may be performed using a low computation image processing based method as described in FIGS. 3-7 and/or a deep learning based method as described in FIG. 8.
  • SSAT, DSAT and VAT may be separated by transforming abdominal image to polar coordinate system, followed by modelling the boundaries (SAT-VAT and DSAT-SSAT) using interpolation, for example spline interpolation, based on the detected edges from the image. Interpolation results, such as spline fitting, may be colour coded to represent the actual and approximated regions of the curve.
  • a third step may be a post-processing step.
  • the post-processing step may include, for example, removal of the pelvic bone as described in FIGS. 11-12 and/or removal of the spine as described in FIG. 13 from the VAT region.
  • the system which executes the algorithm may also allow user interaction to correct the contours and save the results for each slice as well as for the whole volume.
  • the user interface may include three main components:
  • Load panel: To read an image volume from a local drive and load it into the workspace. Multiple datasets may be simultaneously imported for batch processing. Upon loading the images, they may be viewed in the Image Display Window.
  • Segment panel: To carry out the pre-processing, segmentation and post-processing work. The user may first select the slices of interest, the option for intensity correction, and single-image or batch processing, before proceeding to run the segmentation. Upon completion, the isolated VAT, SAT, DSAT and SSAT may be visualized in the Segment window. Simultaneously, the SAT-VAT boundary and the DSAT-SSAT boundary may be highlighted in the original Image Display Window. A correction option may be enabled at this step to allow interactive correction from the user.
  • Report panel: To perform quantitation of the separated fat compartments; this may be enabled after completion of the segmentation step.
• the program may save the quantitation report as well as the segmentation image result, for example in the memory, once signified by the user (by hitting the “Save” button).
  • FIG. 15 A shows an example of image selection in a load panel, according to various embodiments.
  • FIG. 15B shows an example of the segmented fat compartments in a segmentation panel, according to various embodiments.
  • FIG. 15C shows an example of correcting the segmented fat compartments in a segmentation panel, according to various embodiments.
  • FIG. 15D shows an example of quantitation of separated fat compartments in a report panel, according to various embodiments.
  • FIG. 16 shows segmented abdominal fat compartments in 3D volume, according to various embodiments.
  • the first panel 1610 shows a total fat volume in 3D.
  • the second panel 1620 shows a total SAT volume in 3D.
  • the third panel 1630 shows a total SSAT volume in 3D.
  • the fourth panel 1640 shows a total DSAT volume in 3D.
  • the fifth panel 1650 shows a total VAT volume in 3D.
• accuracy of segmentation by the proposed framework may be evaluated by the Dice Similarity Index; the tables below summarize the evaluation of the segmentation results against ground truth data.
• the data is categorized into three groups, namely low, medium and high fat (L, M and H in the table below), based on the total fat volume measured in each slice of the data.
• low fat slices contain less than 800 cm³ of fat,
• medium fat slices contain 800 to 1300 cm³ of fat, and
• high fat slices contain more than 1300 cm³ of fat.
• subject scans were divided into low, medium and high categories based on the amount of total abdominal fat.
• the mean Dice similarity indices of the low computation image processing method for VAT, DSAT and SSAT segmentations were 0.88, 0.67 and 0.88 for the low, 0.9, 0.8 and 0.88 for the medium, and 0.93, 0.84 and 0.89 for the high total fat categories, respectively.
  • Table 1 shows results for low computation image processing method.
  • FIG. 17A shows Bland Altman plots for a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments according to various embodiments.
  • FIG. 17B shows Bland Altman plots for a deep learning method, according to various embodiments.
• Bland-Altman plots may be used to analyse the overestimation and underestimation between the segmentation and the ground truth on the whole dataset. The plot may be constructed by plotting the difference between the automated result and the ground truth against their mean; a sketch of this construction is given after this list.
• the new segmentation technique has been developed with reduced computational complexity and preserved accuracy. It can therefore be used in specialized medical fields such as radiology and endocrinology to monitor changes in abdominal obesity, as well as in other lumbar-position-based analyses.
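As a rough illustration of the Bland-Altman construction described above, the following Python sketch plots the difference between automated and ground-truth fat volumes against their mean; the bias line and the ±1.96 SD limits of agreement are conventional additions, and the function name is an illustrative assumption, not taken from the source.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(auto, truth, ax=None):
    """Plot (automated - ground truth) differences against their mean."""
    auto, truth = np.asarray(auto, float), np.asarray(truth, float)
    mean, diff = (auto + truth) / 2.0, auto - truth
    bias, sd = diff.mean(), diff.std(ddof=1)
    ax = ax or plt.gca()
    ax.scatter(mean, diff, s=12)
    # Bias line plus conventional 95% limits of agreement.
    for y, style in ((bias, "-"), (bias + 1.96 * sd, "--"), (bias - 1.96 * sd, "--")):
        ax.axhline(y, linestyle=style, color="gray")
    ax.set_xlabel("Mean of automated and ground truth")
    ax.set_ylabel("Difference (automated - ground truth)")
    return bias, sd
```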

Abstract

Method and system of segmenting an image of an abdomen of a human into segments corresponding to fat compartments, comprising: identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment; detecting edges of the superficial SAT (SSAT) segment and the deep SAT (DSAT) segment to obtain edges coordinates in Cartesian coordinates; transforming the edges coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve; determining a continuous boundary curve by interpolating missing points in the discontinuous boundary curve; transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve; identifying the SSAT segment and the DSAT segment, wherein the SSAT segment is between the abdominal wall segment and the transformed boundary curve, and the DSAT segment is between the transformed boundary curve and the VAT segment. Also disclosed is a method of segmentation using deep learning.

Description

METHOD AND SYSTEM OF SEGMENTING IMAGE OF ABDOMEN OF HUMAN INTO IMAGE SEGMENTS CORRESPONDING TO FAT COMPARTMENTS
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of priority of Singapore patent application No. 10201802349V filed on 22 March 2018, the contents of it being hereby incorporated by reference in its entirety for all purposes.
TECHNICAL FIELD
[0002] Various aspects of this disclosure relate to methods of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments. Various aspects of this disclosure relate to a system for performing the method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments. Various aspects of this disclosure relate to a computer program.
BACKGROUND
[0003] Obesity, and more specifically accumulation of adipose tissue in the abdomen, is associated with health risks such as type II diabetes and cardiovascular diseases, which rank number 8 and number 1, respectively, in terms of worldwide mortality. Modern advanced imaging modalities such as Computed Tomography (CT) and Magnetic Resonance (MR) can clearly visualize body fat volume in different depots, enabling non-invasive and accurate methods for quantitation of these fat depots. The abdominal fat compartments can be broadly classified as Subcutaneous Adipose Tissue (SAT) and Visceral Adipose Tissue (VAT). The abdominal SAT can be further divided into superficial and deep SAT (SSAT and DSAT), which are separated by a fascial plane called the fascia superficialis. The fat depots differ in their anatomical locations, and also have different influence on metabolic activities. For example, VAT shows a stronger correlation to adverse metabolic risk profiles, such as cardio-metabolic risk factors and insulin resistance level, than SAT. In comparison, DSAT contains more saturated fat and lipolytically active adipocytes than SSAT. DSAT is also an indicator of metabolic syndrome, which belongs to the risk factor cluster of type II diabetes and cardiovascular diseases. Meanwhile, SSAT inversely correlates to these risks, and a higher distribution of SSAT in the abdominal region signifies beneficial cardio-metabolic effects in type II diabetes patients. Therefore, accurate quantification of the different fat compartments is crucial to evaluate the effectiveness of obesity management and interventional studies. In particular, accurate quantification of abdominal VAT, DSAT and SSAT will aid studies exploring the relationship between these fat depots and metabolic factors.
[0004] Manual segmentation and quantification of VAT, DSAT and SSAT are relatively accurate, but the time and cost make them infeasible to apply in large cohort studies. Therefore, semi-automated and automated techniques to separate the different fat compartments may be used to process high volumes of data. Automated segmentation algorithms for abdominal SAT and VAT in human subjects are known. These include fuzzy clustering based methods, morphology based methods, registration based methods, deformable model based methods, and graph cut based methods. Detection of DSAT and SSAT is more challenging and less addressed because the fascia superficialis dividing them is discontinuous and not visible in all the slices of the abdominal MR scan.
[0005] Some of the algorithms for SSAT-DSAT and even SAT-VAT segmentation are computationally expensive, and some take a long processing time. Therefore, there is a need to introduce a new technique with reduced computational complexity. Although significant efforts have been put into developing robust algorithms to segment and quantify fat compartments in other studies, only a few published tools can be used by people without a programming background. In addition, most of the existing algorithms addressing human image data lack the SSAT-DSAT segmentation feature. Therefore, there is a need to develop a user-friendly, interactive software platform which can perform VAT-DSAT-SSAT segmentation on human data. There is also a need for manual correction of the segmentation result to improve the accuracy and provide more user control over the segmentation and quantification process. There is also a need for an algorithm and interface which may permit automatic, interactive, and accurate quantification of visceral, superficial subcutaneous and deep subcutaneous adipose tissue (VAT, SSAT and DSAT) in the abdominal region from MRI data, to support large cohort studies which explore the association between different fat depots and metabolic factors.
SUMMARY
[0006] Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments. The method may include identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image, wherein the subcutaneous adipose tissue (SAT) segment includes a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment. The method may include detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment to obtain edges coordinates in Cartesian coordinates of the edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. The method may include transforming the edges coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. The method may include determining a continuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment, wherein determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve. The method may include transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image. The method may include identifying the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment, wherein the transformed boundary curve may separate the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the image, and wherein the superficial subcutaneous adipose tissue (SSAT) segment is between the abdominal wall segment and the transformed boundary curve, and the deep subcutaneous adipose tissue (DSAT) segment is between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
[0007] Various embodiments may provide a system for performing the method according to the present disclosure. The system may include a memory and at least one processor communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments. The system may include an input module communicatively coupled to the processor, wherein the input module is configured to receive the image of the abdomen of the human. The system may include a user interface communicatively coupled to the processor, wherein the user interface is configured to receive instructions from a user and configured to communicate the instructions to the processor for execution.
[0008] Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments. The method may include receiving training data comprising a plurality of images of abdomens of humans, each image of the plurality of images of abdomens of humans comprising image segments corresponding to a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment. The method may include using the training data to create a trained computer model for segmenting the image segments corresponding to the fat compartments of the abdomen.
[0009] Various embodiments may provide a computer program may include instructions executable by at least one processor according to the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The invention will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:
FIG. 1 shows a flowchart of a method 100 according to various embodiments;
FIG. 2A is a schematic illustration of a system 200 according to various embodiments;
FIG. 2B is a schematic illustration of a user interface 230 according to various embodiments; FIG. 3A shows an example of a load panel 240 of a user interface 230 according to various embodiments;
FIG. 3B shows an example of removing an extraneous region segment in the image according to various embodiments;
FIG. 3C shows an example of correcting inhomogeneity intensity of the image according to various embodiments;
FIG. 4A shows an example of an image transformation from Cartesian coordinates to polar coordinates in an exemplary method of segmenting of the abdominal fat compartments into SAT and VAT, according to various embodiments;
FIG. 4B shows an example of an edge map of an abdomen image, according to various embodiments; FIG. 4C shows an example of an interpolated SAT boundary according to various embodiments;
FIG. 4D shows a clearer image of the interpolated SAT boundary of FIG. 4C according to various embodiments;
FIG. 5A shows an intensity corrected SAT image before filtering, according to various embodiments;
FIG. 5B shows an intensity corrected SAT image after coherence filtering, according to various embodiments.
FIG. 5C shows a Sobel edge map of FIG. 5A, according to various embodiments;
FIG. 5D shows a Sobel edge map of FIG. 5B, according to various embodiments;
FIG. 6 shows edge positions from an edge map converted into polar coordinate system, according to various embodiments;
FIG. 7 shows the results of segmentation, according to various embodiments;
FIG. 8 shows an exemplary deep learning architecture to segment the image into fat compartments according to various embodiments;
FIG. 9A shows a plot of cross entropy loss according to various embodiments;
FIG. 9B shows a plot of training accuracy according to various embodiments;
FIG. 9C shows a plot of test accuracy according to various embodiments;
FIG. 10A and FIG. 10B illustrate the results of segmentation of abdominal fat pictorially of a test subject 1 and a test subject 2, according to various embodiments;
FIG. 10C illustrates the results of segmentation of abdominal fat pictorially of a test subject 3 according to various embodiments:
FIG. 11A shows a sagittal image S° crossing body symmetrical axis, according to various embodiments;
FIG. 11B shows row-wise projection of S°, according to various embodiments;
FIG. 12A shows a map of VAT without removal of bone structures, according to various embodiments;
FIG. 12B shows a separated pelvic bone, according to various embodiments;
FIGS. 12C and 12D show a right half and left half of VAT map, respectively, after diagonal image opening, according to various embodiments;
FIGS. 12E and 12F show structuring elements for diagonal image opening, according to various embodiments; FIG. 13A shows a sagittal image S° crossing body symmetrical axis, according to various embodiments;
FIG. 13B shows an axial image crossing spine region, according to various embodiments; FIG. 13C shows column-wise projection of S°, according to various embodiments;
FIG. 14 shows a flowchart of an exemplary framework of a segmentation process according to various embodiments;
FIG. 15A shows an example of image selection in a load panel, according to various embodiments;
FIG. 15B shows an example of the segmented fat compartments in a segmentation panel, according to various embodiments;
FIG. 15C shows an example of correcting the segmented fat compartments in a segmentation panel, according to various embodiments;
FIG. 15D shows an example of quantitation of separated fat compartments in a report panel, according to various embodiments;
FIG. 16 shows segmented abdominal fat compartments in 3D volume, according to various embodiments;
FIG. 17A shows Bland Altman plots for a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments;
FIG. 17B shows Bland Altman plots for a deep learning method, according to various embodiments.
DETAILED DESCRIPTION
[0011] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and structural and logical changes may be made without departing from the scope of the invention. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. [0012] Embodiments described in the context of a method may be analogously valid for the system and vice-versa. Similarly, embodiments described in the context of a system may be analogously valid for a computer program, and vice-versa. Also, embodiments described in the context of a method may be analogously valid for a computer program, and vice-versa.
[0013] Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.
[0014] In the context of various embodiments, the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.
[0015] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0016] Various embodiments may provide a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments. According to various embodiments, the image may be a Magnetic Resonance (MR) image or a Computed Tomography (CT) image. According to various embodiments, the image may be acquired from any specified region of the abdomen. According to various embodiments, the image may be acquired from a location of at least one of vertebrae discs L1-L5 of a lumbar spine of the human. The MR image scanner may be set up for the required image acquisition. The acquisition bandwidth chosen may separate fat and water voxels; for example, the acquisition bandwidth may be between 400 and 800 Hz/pixel. The image acquisition may be performed, for example, when the human holds his breath for a pre-determined time period. Also, the time period for acquiring each image may be between 10 and 40 seconds, for example 20 seconds. The image acquisition may be performed, for example, when the human breathes freely. The data processing technique is not limited to the type of acquisition or imaging sequence. The data processing technique may be used for a Dixon sequence, a water suppressed fat imaging sequence or any other fat imaging sequence.
[0017] According to various embodiments, water-only and fat-only images may be generated by linear combination of in-phase and out-of-phase images. According to various embodiments, distortions in the acquired images may be corrected during reconstruction processes. The reconstruction process may be carried out during the process of image acquisition. The distortions in the images may be filtered during the reconstruction process. The reconstruction process may be conducted using any suitable correction technique, for example Cartesian, spiral or radial techniques.
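By way of a rough illustration of the linear combination mentioned above, the sketch below forms water-only and fat-only images from co-registered in-phase and out-of-phase magnitude images using the standard 2-point Dixon combination; the function name is an illustrative assumption, not from the source.

```python
import numpy as np

def dixon_water_fat(in_phase: np.ndarray, out_phase: np.ndarray):
    """Standard 2-point Dixon combination of co-registered echoes:
    water = (IP + OP) / 2, fat = (IP - OP) / 2."""
    water = 0.5 * (in_phase + out_phase)
    fat = 0.5 * (in_phase - out_phase)
    return water, fat
```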
[0018] According to various embodiments, the fat compartments may be a visceral adipose tissue (VAT), a subcutaneous adipose tissue (SAT), a superficial subcutaneous adipose tissue (SSAT), and a deep subcutaneous adipose tissue (DSAT).
[0019] According to various embodiments, the method may include: identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image. According to various embodiments, the subcutaneous adipose tissue (SAT) segment may include a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment. According to various embodiments, the visceral adipose tissue (VAT) segment is a segment on the image of the abdomen corresponding to the visceral adipose tissue (VAT). According to various embodiments, the subcutaneous adipose tissue (SAT) segment is a segment on the image of the abdomen corresponding to the subcutaneous adipose tissue (SAT). According to various embodiments, the superficial subcutaneous adipose tissue (SSAT) segment is a segment on the image of the abdomen corresponding to the superficial subcutaneous adipose tissue (SSAT). According to various embodiments, the deep subcutaneous adipose tissue (DSAT) segment is a segment on the image of the abdomen corresponding to the deep subcutaneous adipose tissue (DSAT).
[0020] According to various embodiments, the method may include: detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment to obtain edges coordinates in Cartesian coordinates of the edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. In various embodiments, the edges may be detected, for example, using Sobel edge detection or Canny edge detection.
[0021] According to various embodiments, the method may include: transforming the edges coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. The Cartesian coordinates may refer to Cartesian coordinates of a Cartesian coordinate system, for example applied to a CT image. The polar coordinates may refer to polar coordinates of a polar coordinate system. The transformation of the edges may use a center of the image in Cartesian coordinates as reference, for example on the major human symmetry axis. The center may be determined using the center of the row and the center of the column of the image; for example, the center may be [rows/2, cols/2] for an image in Cartesian coordinates.
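As an informal sketch of this coordinate transformation, the snippet below converts the Cartesian coordinates of detected edge pixels to polar coordinates about the image center [rows/2, cols/2]; the function name is an illustrative assumption, not taken from the source.

```python
import numpy as np

def edges_to_polar(edge_map: np.ndarray):
    """Convert Cartesian edge-pixel coordinates to polar (r, theta)
    about the image center [rows/2, cols/2] used as reference."""
    rows, cols = edge_map.shape
    cy, cx = rows / 2.0, cols / 2.0
    ys, xs = np.nonzero(edge_map)          # Cartesian coordinates of edge pixels
    r = np.hypot(ys - cy, xs - cx)         # radius from the center
    theta = np.arctan2(ys - cy, xs - cx)   # angle in radians, in (-pi, pi]
    return r, theta
```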
[0022] According to various embodiments, the method may include: determining a continuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. The determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve. In various embodiments, interpolating missing points may be done using spline interpolation. In various embodiments, a shape preserving interpolation technique may be used to interpolate the missing points. The shape preserving interpolation technique may be piecewise cubic Hermite interpolation. In various embodiments, the continuous boundary curve may undergo data smoothing. The data smoothing may be conducted by using locally weighted linear regression.
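The shape-preserving interpolation mentioned here can be sketched with SciPy's PchipInterpolator (piecewise cubic Hermite interpolation). The helper below, whose name and duplicate-angle handling are illustrative assumptions, fits a one-to-one boundary r(θ) over an angular grid; the subsequent locally weighted smoothing step is omitted for brevity.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def continuous_boundary(theta: np.ndarray, r: np.ndarray, n_samples: int = 360):
    """Fit a continuous, one-to-one boundary r(theta) through gappy edge
    points using shape-preserving piecewise cubic Hermite interpolation."""
    order = np.argsort(theta)
    theta_s, r_s = theta[order], r[order]
    # Collapse duplicate angles to one radius each (median is robust to outliers).
    theta_u, inverse = np.unique(theta_s, return_inverse=True)
    r_u = np.array([np.median(r_s[inverse == k]) for k in range(theta_u.size)])
    pchip = PchipInterpolator(theta_u, r_u)        # missing points interpolated
    theta_grid = np.linspace(theta_u.min(), theta_u.max(), n_samples)
    return theta_grid, pchip(theta_grid)
```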
[0023] According to various embodiments, the method may include: transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image. In various embodiments, the DSAT and SSAT may be separated based on the transformed boundary curve. For example, the transformed boundary curve may substantially overlap with the fascia superficialis.
[0024] According to various embodiments, the method may include: identifying the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment, wherein the transformed boundary curve separates the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the image, and wherein the superficial subcutaneous adipose tissue (SSAT) segment is between the abdominal wall segment and the transformed boundary curve, and the deep subcutaneous adipose tissue (DSAT) segment is between the transformed boundary curve and the visceral adipose tissue (VAT) segment. For example, the superficial subcutaneous adipose tissue (SSAT) segment may be identified as the fat regions between the abdominal wall segment and the transformed boundary curve. The deep subcutaneous adipose tissue (DSAT) segment may be identified as the fat regions between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
[0025] In various embodiments, the segmented fat compartments may be presented with different properties; for example, any known presentation technique may be used. Each fat compartment of an image could be color coded, shaded, have its boundaries marked, or a combination thereof. In various embodiments, the segmented fat compartments may be presented in separate images, with each segmented fat compartment in its own image.
[0026] According to various embodiments, the method of segmenting the image may include: conducting coherence filtering on the subcutaneous adipose tissue (SAT) segment of the image to remove unwanted edges before detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment.
[0027] According to various embodiments, identifying the visceral adipose tissue (VAT) segment and the subcutaneous adipose tissue (SAT) segment may include: transforming the image from Cartesian coordinates to polar coordinates. The Cartesian coordinates may refer to Cartesian coordinates of a Cartesian coordinate system, for example applied to a CT image. The polar coordinates may refer to polar coordinates of a polar coordinate system. The transformation of the edges may use a center of the image in Cartesian coordinates as reference, for example on the major human symmetry axis. Identifying the VAT and SAT may further include detecting edges of an abdominal wall segment and a SAT-VAT boundary to obtain edges coordinates in polar coordinates of the edges of the abdominal wall segment and the SAT-VAT boundary, wherein the SAT-VAT boundary in polar coordinates is discontinuous. In various embodiments, the edge map may be obtained using an edge detection method, for example Sobel edge detection. According to various embodiments, the method of segmenting the image may include interpolating missing points of the SAT-VAT boundary to obtain a connected SAT-VAT boundary. According to various embodiments, the method may include identifying the visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image from the abdominal wall segment and the connected SAT-VAT boundary.
[0028] According to various embodiments, the method may include pre-processing the image of the abdomen of the human before segmenting the image into image segments corresponding to the fat compartments. [0029] According to various embodiments, pre-processing the image may include removing at least one extraneous region segment in the image. According to some embodiments, pre-processing the image may include removing a plurality of extraneous region segments in the image. For example, the extraneous region segment may be a limb segment of the human, such as an arm segment.
[0030] According to various embodiments, pre-processing the image may include correcting an inhomogeneity intensity of the image.
[0031] According to various embodiments, correcting the inhomogeneity intensity of the image may include estimating a bias field of the image and applying the bias field to the image to obtain a corrected image. In various embodiments, the bias field of the image may be estimated using a biased fuzzy c-means algorithm. In various embodiments, the bias field may be subtracted from the image to obtain the corrected image.
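The document estimates the bias field with a biased fuzzy c-means algorithm; as a crude, hedged stand-in, the sketch below approximates the slowly varying bias with a heavy Gaussian low-pass filter and subtracts it. The function name and the choice of filter are illustrative assumptions, not the source's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias_field(image: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """Estimate a slowly varying bias field and subtract it from the image.
    A heavy Gaussian low-pass stands in here for the biased fuzzy c-means
    estimate used in the document."""
    bias = gaussian_filter(image.astype(float), sigma=sigma)
    corrected = image.astype(float) - bias
    return corrected - corrected.min()   # shift back to a non-negative range
```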
[0032] According to various embodiments, the method may include post-processing the image by removing a pelvic bone segment and/or a spine segment after segmenting the image of the abdomen of the human into the image segments corresponding to the fat compartments.
[0033] According to various embodiments, the method may include receiving training data comprising a plurality of images of abdomens of humans, each image of the plurality of images of abdomens of humans comprising image segments corresponding to the visceral adipose tissue (VAT) segment, the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment. In various embodiments, the plurality of images in the training data may be derived from images segmented in accordance with the method above. In various embodiments, the training data may be manually derived by a human; for example, the training data may be a plurality of ground truth images which may be constructed by an expert radiologist using manual tracing of the boundaries between DSAT, SSAT and VAT.
[0034] According to various embodiments, the method may include using the training data to create a trained computer model for segmenting the image segments corresponding to the fat compartments of the abdomen.
[0035] According to various embodiments, the method may include using the trained computer model to segment a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment of an abdomen in a target image. [0036] According to various embodiments, the image and/or the target image may be acquired from a location of at least one of vertebrae discs L1-L5 of a lumbar spine of the human.
[0037] According to various embodiments, a system for performing the method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments may include a memory. According to various embodiments, the system may include at least one processor communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments. According to various embodiments, the system may include one or a plurality of processors communicatively coupled to the memory and configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments. According to various embodiments, the system may include an input module communicatively coupled to the processor or to at least one of the plurality of processors, wherein the input module is configured to receive the image of the abdomen of the human. According to various embodiments, the input module may be configured to receive a plurality of images of the abdomen of the human. According to various embodiments, the system may include a user interface communicatively coupled to the processor, wherein the user interface is configured to receive instructions from a user and configured to communicate the instructions to the processor for execution. According to various embodiments, the user interface may include at least one I/O device such as a monitor or a keyboard or a touchscreen device. In some embodiments, the user interface may be configured to receive instructions from the user through the at least one I/O device. In some embodiments, the I/O device in the user interface may be configured to communicate the instructions received from the user to the processor for execution.
[0038] A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).
[0039] According to various embodiments, the user interface may include a load panel, a segment panel and a report panel. [0040] According to various embodiments, the system may be configured to load the image into the memory and display the image in a display window when the load panel receives instructions from the user.
[0041] According to various embodiments, the system may be configured to carry out at least one of: segment the image of the abdomen of the human into image segments corresponding to the fat compartments, pre-process the image prior to segmenting the image, and post-process the image after segmenting the image, when the segment panel receives instructions from the user.
According to various embodiments, the system may be configured to interactively correct the image after the image has been segmented when the segment panel receives instructions from the user.
[0042] According to various embodiments, the system may be configured to quantitate the segmented fat compartments when the report panel receives instructions from the user to quantitate the segmented fat compartments. According to various embodiments, the separated fat compartments may be quantitated for at least one, or for each, of VAT, SAT, SSAT and DSAT. In various embodiments, the separated fat compartments may be quantitated for a total fat computation, wherein total fat may mean the combination of the total fat compartments of SAT and VAT. According to various embodiments, the separated fat compartments may be quantitated for each of VAT, SAT, SSAT and DSAT at each lumbar position (L1-L5). According to various embodiments, a 3D rendered visualization of each fat compartment may be provided.
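A minimal sketch of the quantitation step, assuming a 3D label map with hypothetical label values (1=SSAT, 2=DSAT, 3=VAT) and voxel dimensions consistent with the acquisition described later (1.25 mm in-plane, 3 mm slices plus a 0.6 mm inter-slice gap); the function name and label assignments are assumptions, not taken from the source.

```python
import numpy as np

def fat_volumes_cm3(label_map: np.ndarray,
                    voxel_dims_mm=(1.25, 1.25, 3.6)) -> dict:
    """Quantitate fat volumes from a 3D label map.

    Hypothetical labels: 1=SSAT, 2=DSAT, 3=VAT. The voxel depth assumes a
    3 mm slice plus a 0.6 mm inter-slice gap."""
    voxel_cm3 = float(np.prod(voxel_dims_mm)) / 1000.0   # mm^3 -> cm^3
    vols = {name: np.count_nonzero(label_map == lbl) * voxel_cm3
            for name, lbl in (("SSAT", 1), ("DSAT", 2), ("VAT", 3))}
    vols["SAT"] = vols["SSAT"] + vols["DSAT"]            # SAT = SSAT + DSAT
    vols["total fat"] = vols["SAT"] + vols["VAT"]        # total = SAT + VAT
    return vols
```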
[0043] According to various embodiments, the system may be configured to receive a plurality of images of the abdomen of the human and batch process the plurality of images, wherein each image of the plurality of images is configured to be segmented into image segments corresponding to the fat compartments.
[0044] According to various embodiments, a method of semantically segmenting an image of an abdomen of a human into image segments corresponding to fat compartments using deep learning may include: receiving training data including a plurality of images of abdomens of humans, wherein each image of the plurality of images of abdomens of humans includes image segments corresponding to a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment, and a deep subcutaneous adipose tissue (DSAT) segment. According to various embodiments, semantic segmentation may describe the process of associating each pixel of an image with a class label.
[0045] According to various embodiments, the method may include using the training data to create a trained computer model for semantically segmenting the image segments corresponding to the fat compartments of the abdomen. In various embodiments, the training data may be manually derived by a human; for example, the training data may be a plurality of ground truth images which may be constructed by an expert radiologist using manual tracing of the boundaries between DSAT, SSAT and VAT.
[0046] According to various embodiments, the method may include using the trained computer model to semantically segment a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment of an abdomen in a target image.
[0047] According to various embodiments, a computer program may include instructions executable by at least one processor according to the present disclosure.
[0048] FIG. 1 shows a flowchart of the method 100 according to various embodiments. A first step 110 may include identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image, wherein the SAT segment comprises a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment. A second step 120 may include detecting edges of the SSAT segment and the DSAT segment in the SAT segment to obtain edges coordinates in Cartesian coordinates of the edges of the SSAT segment and the DSAT segment. A third step 130 may include transforming the edges coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the SSAT segment and the DSAT segment. A fourth step 140 may include determining a continuous boundary curve of the SSAT segment and the DSAT segment, wherein determining the continuous boundary curve may include interpolating missing points in the discontinuous boundary curve. A fifth step 150 may include transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image. A sixth step 160 may include identifying the SSAT segment and the DSAT segment in the SAT segment, wherein the transformed boundary curve separates the SSAT segment and the DSAT segment in the image, and wherein the SSAT segment is between the abdominal wall segment and the transformed boundary curve, and the DSAT segment is between the transformed boundary curve and the VAT segment.
[0049] FIG. 2A is a schematic illustration of a system 200 according to various embodiments. The system 200 may include a memory 210. The system 200 may include at least one processor 220 which may be communicatively coupled to the memory 210 and may be configured to receive the image of the abdomen of the human and may be configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments. The system 200 may include a user interface 230 which may be communicatively coupled to the processor 220, wherein the user interface 230 may be configured to receive instructions from a user and may be configured to communicate the instructions to the processor 220 for execution. According to various embodiments, the user interface may include at least one I/O device such as a monitor, a keyboard or a touchscreen device. In some embodiments, the user interface may be configured to receive instructions from the user through the at least one I/O device. In some embodiments, the I/O device in the user interface 230 may be configured to communicate the instructions received from the user to the processor for execution. According to various embodiments, the user interface may include a graphic user interface displayed on an I/O device, for example a monitor. According to various embodiments, the system 200 may be configured to receive a plurality of images of the abdomen of the human and batch process the plurality of images, wherein each image of the plurality of images is configured to be segmented into image segments corresponding to the fat compartments.
[0050] FIG. 2B is a schematic illustration of a user interface 230 according to various embodiments. The user interface 230 may include a load panel 240, a segment panel 250 and a report panel 260. According to various embodiments, the system 200 may be configured to load the image into the memory and display the image in a display window when the load panel 240 receives instructions from the user. According to various embodiments, the system 200 may be configured to segment the image of the abdomen of the human into image segments corresponding to the fat compartments when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to pre-process the image prior to segmenting the image when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to post-process the image after segmenting the image when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to interactively correct the image after the image has been segmented when the segment panel 250 receives instructions from the user. According to various embodiments, the system 200 may be configured to quantitate the segmented fat compartments when the report panel 260 receives instructions from the user.
[0051] According to an exemplary method, abdominal MR images at discs L1-L5 of the lumbar spine from 20 normal and overweight healthy male volunteers may be acquired using an image acquisition device, for example a 3T Siemens Tim Trio platform. Each image stack consists of 80 axial slices of 3 mm slice thickness, 0.6 mm inter-slice gap and 1.25 mm x 1.25 mm in-plane resolution. 2-point Dixon data from the L1-L5 vertebrae were acquired with TR / TE1 / TE2 / FA of 5.28 ms / 2.45 ms / 3.68 ms / 90 degrees, respectively. The repetition time (TR) and the echo time (TE) may be basic pulse sequence parameters measured in milliseconds. The echo time (TE) may represent the time from the center of the RF pulse to the center of the echo. The repetition time (TR) may be the length of time between corresponding consecutive points on a repeating series of pulses and echoes. FA is the flip angle of the RF pulse measured in degrees. These parameters may vary based on the type of sequence used for the data acquisition, as long as it preserves the anatomical/contrast information of the abdomen.
Pre-processing
[0052] FIG. 3A shows an example of a load panel 240 of a user interface 230 according to various embodiments. Load panel 240 may include a selection window 310 for selecting images to be processed by the processor 220. The user may select an image or a plurality of images to be processed by the processor 220. In various embodiments, the user may use information of the data set, such as size, voxel size and number of grey scales, to load multiple volume data into the system 200. In various embodiments, the loaded data list box 320 may show the image(s) selected by the user to be processed. In various embodiments, the image display window 330 may show the image selected by the user.
[0053] FIG. 3B shows an example of removing an extraneous region segment in the image according to various embodiments. In various embodiments, a pre-processing step may include removing at least one extraneous region segment in the image. In various embodiments, a plurality of extraneous region segments in the image may be removed. In various embodiments, the extraneous region may be, for example, a limb of the human. In the example in FIG. 3B, the extraneous region is an arm. FIG. 3B illustrates the example of removing extraneous region segments, with image 340 being the original image with the arms (I°), graph 350 showing the projection values (y) on the right half of I°, and image 360 showing the result after the arm removal step (I).
[0054] In various embodiments, the position of the arms on the left half and right half of the image may be determined, for example, by taking a column-wise projection of the thickness of the object in the image. In various embodiments, consider the image slice I° with the background suppressed to 0, and let y be the projection value defined as:

y = f(j), where f(j) = max(Pj) − min(Pj) and Pj = {i : I°(i,j) > 0} is the set of row indices of every non-zero pixel in column j of image slice I°.

Since y represents the thickness of the arm or abdomen in the image slice, column positions where the y values equal 0 or reach a local minimum represent the gap between the arm and abdomen. In various embodiments, replacing the column positions of the arm region with zero values gives the image slice without arms (I).
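A rough Python rendering of this arm-removal rule, under the simplifying assumption that the gap between arm and abdomen reaches zero thickness (the local-minima case is omitted for brevity); the function name is an illustrative assumption.

```python
import numpy as np

def remove_arms(slice_img: np.ndarray) -> np.ndarray:
    """Zero out arm columns using the thickness projection
    y(j) = max(P_j) - min(P_j) described above (background assumed 0)."""
    img = slice_img.copy()
    n_cols = img.shape[1]
    y = np.zeros(n_cols, dtype=int)
    for j in range(n_cols):
        rows = np.nonzero(img[:, j])[0]        # P_j
        if rows.size:
            y[j] = rows.max() - rows.min()
    gaps = np.where(y == 0)[0]                 # zero-thickness columns
    center = n_cols // 2
    left, right = gaps[gaps < center], gaps[gaps >= center]
    if left.size:
        img[:, :left.max() + 1] = 0            # arm region on the left half
    if right.size:
        img[:, right.min():] = 0               # arm region on the right half
    return img
```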
[0055] FIG. 3C shows an example of correcting inhomogeneity intensity of the image according to various embodiments. In various embodiments, a pre-processing step may include correcting the inhomogeneity intensity of the image. Intensity inhomogeneities may be caused by several factors, such as the static magnetic field (B0), the radio-frequency field (B1), gradient field irregularities, susceptibility effects arising from tissue interactions with the magnetic field, and/or attenuation of the signal by tissues and receiver coil sensitivity. In various embodiments, correcting the inhomogeneity intensity of the image may include estimating a bias field of the image 370 and applying the bias field to the image to obtain a corrected image 380. In various embodiments, the bias field of the image may be estimated using a biased fuzzy c-means algorithm. The bias field may be subtracted from the original image 370 to get a uniform intensity data image 380.
Segmentation
[0056] According to various embodiments, segmentation of abdominal fat compartments into SAT and VAT and further SAT into SSAT and DSAT, may be achieved by identifying the fascia superficialis. In various embodiments, the abdominal fat compartments may be identified by a low computation image processing based method or a deep learning method.
[0057] According to various embodiments, the SAT may appear as a single contiguous compartment and may be separated from the VAT by an abdominal wall. Identification of an abdominal wall boundary may delineate the subcutaneous and visceral fat regions. In various embodiments, delineate may mean indicating a position of a border or boundary. In various embodiments, identification of the abdominal wall boundary may indicate the position of the border or boundary of the subcutaneous and visceral fat regions.
[0058] FIG. 4 A shows an example of an image transformation from Cartesian coordinates to polar coordinates in an exemplary method of segmenting of the abdominal fat compartments into SAT and VAT, according to various embodiments. The image 410 may be a 2D image slice and may be transformed from a Cartesian coordinate system into a polar coordinate system to obtain a polar image 420. The boundary of the SAT may be identified before SAT is segmented from the abdomen wall. In an exemplary Cartesian-Polar image transformation:
Let Ip be the abdomen image 420 in the polar system, obtained by the polar transform Ip ← I, with I being the abdomen image 410 in Cartesian coordinates. Image pixels that do not fall exactly on the grid of the Cartesian image 410 may be predicted using interpolation during the transformation. The interpolation may be bilinear interpolation.
[0059] FIG. 4B shows an example of an edge map of an image of an abdomen, according to various embodiments. FIG. 4C shows an example of an interpolated SAT boundary according to various embodiments. FIG. 4D shows a clearer image of the interpolated SAT boundary of FIG. 4C according to various embodiments. FIG. 4D is shown for clarity purposes so that the boundary “430” (thick line) can be easily seen. The boundary line could have a color contrast with the rest of the image for ease of reading.
[0060] In various embodiments, the abdominal wall and the SAT-VAT boundary may be identified from an edge map of the polar image. In various embodiments, the edge map may be obtained using an edge detection method, for example Sobel edge detection. As the abdominal wall and the outer boundary of the SAT region may have prominent edges, a less sensitive edge detection technique may suffice.
[0061] According to various embodiments, an exemplary edge detection technique may be as follows:
Let Ie be the edge map of Ip obtained using Sobel edge detection: Ie ← sobel(Ip).
For each column j of Ie:
Qj = {i : Ie(i,j) > 0}: the set of row indices of non-zero pixels in column j of Ie;
i1 = max(Qj); Ie(i1,j) is the pixel of the abdominal wall at column j of Ie;
i2 = max(Qj \ {i1}); Ie(i2,j) is the pixel of the SAT-VAT boundary at column j of Ie, wherein “\” denotes set difference.
Let M1 and M2 be the sets of Ie(i1,j) and Ie(i2,j) identified by applying the above operation on every column of Ie.
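A hedged sketch of this column-wise rule: for each column of the polar edge map, the largest row index of a non-zero pixel is taken as the abdominal wall (M1) and the second largest as the SAT-VAT boundary (M2); columns with too few edge pixels are filled linearly afterwards, anticipating the interpolation described in the next paragraph. Function and variable names are illustrative assumptions.

```python
import numpy as np

def trace_sat_boundaries(edge_map_polar: np.ndarray):
    """Per column j: largest edge row -> abdominal wall (M1),
    second largest -> SAT-VAT boundary (M2); gaps filled linearly."""
    n_cols = edge_map_polar.shape[1]
    m1 = np.full(n_cols, np.nan)
    m2 = np.full(n_cols, np.nan)
    for j in range(n_cols):
        q = np.nonzero(edge_map_polar[:, j])[0]   # Q_j, sorted ascending
        if q.size:
            m1[j] = q[-1]                         # i1 = max(Q_j)
        if q.size >= 2:
            m2[j] = q[-2]                         # i2 = max(Q_j \ {i1})
    # Linear interpolation over discontinuous columns (where M2 is missing).
    valid = ~np.isnan(m2)
    if valid.any():
        m2 = np.interp(np.arange(n_cols), np.flatnonzero(valid), m2[valid])
    return m1, m2
```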
[0062] In the areas where SAT and VAT are connected, a discontinuity may occur in M2, which may be recognized as:
|M2(j) − M2(j−1)| > 1 (*)
The discontinuous areas, e.g. pixels, may be replaced by interpolation, such as linear interpolation:
M2(jx) = M2(j1) + (M2(j2) − M2(j1)) · (jx − j1) / (j2 − j1),
with jx being the column indices of the areas where SAT and VAT are connected, as defined by formula (*), and j1 and j2 being the column indices of the pixels directly before and after the connected part. That is, the discontinuous SAT-VAT boundary line in the polar coordinate system may be predicted and interpolated.
[0064] The interpolated SAT boundary 430 is shown in FIGS. 4C and 4D. In FIG. 4D, the interpolated SAT boundary 430 is marked with a thick line.
[0065] FIG. 5A shows an intensity corrected SAT image before filtering, according to various embodiments. FIG. 5B shows an intensity corrected SAT image after coherence filtering, according to various embodiments. According to various embodiments, coherence filtering may be applied to the SAT image to eliminate the unwanted, discontinuous edges before edge detection is applied to create an edge map. FIG. 5C shows a Sobel edge map of FIG. 5A, according to various embodiments. FIG. 5D shows a Sobel edge map of FIG. 5B, according to various embodiments. Both FIG. 5C and 5D are obtained by edge detection under the same threshold.
[0066] A Canny edge operation may be used on the SAT image segmented through the Sobel operation. In an exemplary operation on the coherence-filtered SAT image, let ISAT be the isolated SAT image after coherence filtering and IeSAT be the Canny edge map of ISAT: IeSAT ← canny(ISAT).
[0067] In order to identify the subtle edges due to the fascia superficialis under iso-intense conditions within the SAT region, and to divide it into SSAT and DSAT, an edge detector which is sensitive to less prominent edges, for example a Canny edge detector, may be used. The Canny edge detector is shown as an example; however, the present disclosure is not limited thereto, and any suitable edge detection method may be used as long as the edges are detected.
[0068] In various embodiments, the coordinates of the edges may be converted to the polar system and may be fitted to a single line, with the missing points being interpolated. The single line, also named the transformed boundary curve, substantially overlaps with the fascia superficialis; thus the single line may be used as a representation of the fascia superficialis. The interpolation may be piecewise cubic Hermite interpolation. In various embodiments, interpolating missing points may be done using spline interpolation. The DSAT and SSAT may be separated based on this line in Cartesian coordinates.
[0069] FIG. 6 shows edge positions from an edge map converted into the polar coordinate system, according to various embodiments. Graph 610 shows the edge positions of the abdominal wall boundary 620 and the SAT-VAT boundary 630. In various embodiments, the coordinates of the detected edges may be transformed from Cartesian to polar coordinates. In an example, let ISAT be the isolated SAT image after coherence filtering and IeSAT be the Canny edge map: IeSAT ← canny(ISAT). The set of all non-zero pixels is first defined: N = {(x, y) : IeSAT(y, x) > 0}. Then N is transformed to the polar coordinate system, Np(r, θ) ← N, and plotted on graph 610 as detected edges 640. As shown in FIG. 6, the detected edges 640 may form a discontinuous curve.
[0070] In various embodiments, the discontinuous boundary curve 640 in the polar coordinate system may be predicted. The missing points in the discontinuous curve 640 may be interpolated by a shape-preserving interpolation technique, for example by piecewise cubic Hermite interpolation. In various embodiments, interpolating missing points may be done using spline interpolation. As shown in graph 650, the inner edges may be fitted into a fascia plane. The resulting one-to-one curve may undergo data smoothing, for example using locally weighted linear regression, resulting in a smooth and continuous curve 660 as shown in graph 650. [0071] In various embodiments, after segmentation, separated figures of SSAT, DSAT and VAT may be obtained. FIG. 7 shows the results of segmentation, according to various embodiments. In FIG. 7, separated figures of SSAT 710, DSAT 720 and VAT 730 may be obtained. In various embodiments, fat boundaries displayed on the original image 740 may also be obtained.
[0072] FIG. 8 shows an exemplary deep learning architecture to semantically segment the image into fat compartments according to some embodiments. The semantic segmentation may be done using any deep learning architecture, for example a semantic segmentation network such as a convolutional neural network, for example using the U-Net architecture, which will be used hereinafter for illustration purposes; however, the present disclosure is not limited thereto. In various embodiments, semantic segmentation may describe the process of associating each pixel of an image with a class label.
[0073] In an example, data volumes manually segmented by experts were used as training data. The details of an exemplary implementation and results are as follows.
[0074] The technical aspects of semantic segmentation using deep learning are described using the example of a U-Net architecture for illustration purposes; however, the present disclosure is not limited thereto, and other suitable training parameters may be used for semantic segmentation. The U-Net architecture may incorporate both local and global features due to the encoding-decoding nature of its framework. In recent times, many variants of U-Net have been developed, and the U-Net architecture has achieved improved performance on many image segmentation and reconstruction tasks. In addition, U-Net was extended from 2D to 3D for volumetric biomedical image segmentation tasks. In this example, the 3D U-Net was adopted, and the scheme of a self-attention mechanism for performing down-sampling and up-sampling, as well as global information fusion at the end of the encoding step, was utilized. The input 3D patch first goes through an input block with an initial 64 feature maps and two 3x3x3 convolutions with stride 1. Batch normalization and ReLU may be performed before each convolution. ReLU stands for rectified linear unit, and is a type of nonlinear activation function. Down-sampling blocks with a self-attention layer may be used in this framework to reduce dimensionality. The latent space at the end of the encoder network includes a global aggregation block, or bottom block, for improved global information fusion at each output location. The decoder network may use three up-sampling blocks followed by an output block which has a 1x1x1 convolution with stride 1. The framework may utilize Dropout with a rate of 0.5 at the output before the final 1x1x1 convolution. A weight decay of 2×10⁻⁶ may also be utilized. The feature maps of the encoder network are added to those of the decoder network instead of being concatenated. In various embodiments, any suitable training parameters may be used.
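For orientation only, the following PyTorch sketch shows a two-level 3D U-Net in the spirit described above (BatchNorm and ReLU before each 3x3x3 convolution, additive skip connections, Dropout of 0.5 before a final 1x1x1 convolution). It deliberately omits the self-attention down-sampling and the global aggregation block, and the framework choice, class count (three fat classes plus background) and all module names are assumptions, not taken from the source.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # BatchNorm and ReLU before each 3x3x3 convolution, as described above.
    return nn.Sequential(
        nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
    )

class TinyUNet3D(nn.Module):
    """Two-level 3D U-Net sketch: encoder-decoder with additive skips."""
    def __init__(self, in_ch=1, n_classes=4, base=64):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.down = nn.Conv3d(base, base * 2, kernel_size=2, stride=2)
        self.bottom = conv_block(base * 2, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base, base)
        self.drop = nn.Dropout3d(p=0.5)          # dropout before the output conv
        self.out = nn.Conv3d(base, n_classes, kernel_size=1)  # 1x1x1 conv

    def forward(self, x):
        e1 = self.enc1(x)
        b = self.bottom(self.down(e1))
        d1 = self.dec1(self.up(b) + e1)          # addition instead of concatenation
        return self.out(self.drop(d1))

# model = TinyUNet3D()
# logits = model(torch.randn(16, 1, 32, 32, 32))   # 16 patches of size 32^3
```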
[0075] Training: In the training phase, 16 randomly cropped 3D blocks of size 32x32x32 are extracted from a shuffled dataset of 8 input image volumes, each of size 320x240x88. The batch size is set at 16. The model is trained on 8 FAT DIXON MRI image volumes. A similar training scheme may be extended to include other MRI modalities (such as water Dixon images or fat fraction images) as input volumes, with patches simultaneously extracted from similar random locations in all input modalities. The Adam optimizer with a learning rate of 0.001 is employed for gradient descent. The number of training epochs and the training performance are shown in FIGS. 9A-C.
[0076] Inference: In the testing phase, the same patch size of 32x32x32 is used to extract patches with an overlapping step size of 8 to cover the whole test volume. Prediction probabilities are obtained for each of the 3D blocks for segmentation into three classes (SSAT, DSAT and VAT) using the trained model. Inference is performed on two subjects with 2 FAT DIXON MRI input image volumes respectively, and the resultant segmentation maps for the 3 classes SSAT, DSAT and VAT are analysed for evaluation metrics. Testing or evaluation performance is indicated as test accuracy versus number of iterations in the figures.
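The sliding-window inference above may be sketched as follows, with overlapping 32x32x32 patches at step 8 and averaged class probabilities; model_predict is an assumed callable returning a (32, 32, 32, 3) probability block, and edge handling for volume sizes not aligned to the step is omitted.

import numpy as np

def predict_volume(volume, model_predict, patch=32, step=8, n_classes=3):
    xs, ys, zs = volume.shape
    probs = np.zeros((xs, ys, zs, n_classes))
    counts = np.zeros((xs, ys, zs, 1))
    for i in range(0, xs - patch + 1, step):
        for j in range(0, ys - patch + 1, step):
            for k in range(0, zs - patch + 1, step):
                p = model_predict(volume[i:i+patch, j:j+patch, k:k+patch])
                probs[i:i+patch, j:j+patch, k:k+patch] += p
                counts[i:i+patch, j:j+patch, k:k+patch] += 1
    probs /= np.maximum(counts, 1)      # average over overlapping predictions
    return probs.argmax(axis=-1)        # class order (SSAT, DSAT, VAT) is assumed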
[0077] In the experiments, the dice similarity index (DSI), defined as DSI = 2|A ∩ B| / (|A| + |B|) for a predicted region A and its ground truth B, is employed as the evaluation metric. The accuracy is evaluated by comparing binary images of the prediction and the corresponding ground truth, so the 3-class segmentation task is transformed into 3 binary segmentation tasks for evaluation. That is, a 3D binary segmentation map may be constructed for each class, where 1 denotes a voxel that belongs to the foreground class and 0 denotes a voxel that belongs to the background. The evaluation is performed on binary segmentation maps derived from the 3-class predictions for SSAT, DSAT and VAT. The ground truth image may be an image constructed by an expert radiologist by manually tracing the boundaries between DSAT, SSAT and VAT.
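A minimal sketch of this per-class evaluation: the 3-class prediction is turned into three binary maps and the dice similarity index is computed for each (the label values are assumed).

import numpy as np

def dice(pred_bin, gt_bin):
    # DSI = 2|A ∩ B| / (|A| + |B|) on binary foreground maps.
    inter = np.logical_and(pred_bin, gt_bin).sum()
    denom = pred_bin.sum() + gt_bin.sum()
    return 2.0 * inter / denom if denom else 1.0

def dice_per_class(pred, gt, classes=(0, 1, 2)):  # e.g. SSAT, DSAT, VAT labels
    return {c: dice(pred == c, gt == c) for c in classes}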
[0078] According to various embodiments, segmentation accuracy may be evaluated by statistical measures such as sensitivity, specificity, Dice statistical index, Cohen’s kappa coefficient and Bland-Altman analysis.
[0079] FIG. 9A shows a plot of cross-entropy loss according to various embodiments. FIG. 9B shows a plot of training accuracy according to various embodiments. FIG. 9C shows a plot of test accuracy according to various embodiments, with test accuracy on the Y-axis and the number of iterations/epochs of training on the X-axis. In FIG. 9C, curve 910 represents the test accuracy at different iterations, and curve 920 is its smoothed version. The plots show the results of training and testing, illustrating the reduction in error and the improvement in accuracy over the iterations.
[0080] The results of the various metrics for the test subjects are listed in the tables below.
[0081] Test subject 1
[Table: evaluation metrics for test subject 1]
[0082] Test subject 2
[Table: evaluation metrics for test subject 2]
[0083] FIG. 10A and FIG. 10B pictorially illustrate the results of segmentation of abdominal fat for test subject 1 and test subject 2 respectively, according to various embodiments. As illustrated, comparing the prediction with the ground truth shows high accuracy on images with high and medium amounts of subcutaneous fat.
[0084] FIG. 10C pictorially illustrates the results of segmentation of abdominal fat for test subject 1 according to various embodiments. In FIG. 10C, axial, coronal and sagittal images are shown. In various embodiments, the results of segmentation of the abdominal fat may be displayed in three orthogonal planes, namely axial, coronal and sagittal. These data may be used in clinical studies or radiology.
Post-processing
[0085] Bone structures appearing in abdomen data, such as the pelvic bone and spine, may contain high-intensity pixels and may easily be misclassified as fat by a conventional intensity thresholding operation.
[0086] In various embodiments, post-processing may include removal of the pelvic bone. It is observed that the pelvic bone may show up in the image slices, for example of the L5 region in the lumbar spine, as this is the natural location of the pelvis. Therefore, a first step in removing the pelvic bone structure from the VAT image may be to identify the L5 region in the image. The location of L5 may be determined by taking the row-wise projection of the sagittal plane which crosses the center of the dataset (a sketch of this projection follows the definitions below).
[0087] FIG. 11A shows a sagittal image S° crossing the body symmetrical axis, according to various embodiments. FIG. 11B shows the row-wise projection of S°, according to various embodiments.
[0088] In an example, let S° be the sagittal slice of the VAT image at the plane crossing the body symmetrical axis, and let z_s be the row-wise projection of the binary map of S°:
L° = logical(S°), the binary map of S°
z_s(i) = Σ_{j=1}^{N} L_ij, where L_ij is the pixel value of L° at row i, column j, and N is the number of columns in L°.
The binary map L° allows the image to be handled as a logical array instead of a numerical array, which may reduce computational effort.
Let P be the collection of local maxima in z_s. These local maxima correspond to the vertebrae: P = {p_i | p_i = local maximum of z_s}.
The lumbar 5 (L5) position is identified as the local maximum with the lowest row index: L5 = {p_i | i = lowest row index in P}.
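A sketch of the L5 localization defined above, assuming sagittal is the mid-sagittal VAT slice S° as a 2D array with at least one projection peak:

import numpy as np
from scipy.signal import find_peaks

def locate_l5(sagittal):
    binary = sagittal > 0               # L° = logical(S°)
    z_s = binary.sum(axis=1)            # row-wise projection over the N columns
    peaks, _ = find_peaks(z_s)          # local maxima correspond to vertebrae
    return peaks.min()                  # L5 = local maximum with the lowest row index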
[0089] The next step may be to identify the pelvic bone area in the image slice. As the pelvic bone has a pixel intensity similar to adipose tissue, threshold suppression usually fails to separate it from the VAT. Therefore, morphological separation may be a more applicable approach. From observations, adipose tissue tends to attach to the bones, making the separation between them vaguer.
[0090] FIG. 12A shows a map of VAT without removal of bone structures, according to various embodiments. FIG. 12B shows a separated pelvic bone, according to various embodiments. FIGS. 12C and 12D show a right half and left half of the map of VAT, respectively, after diagonal image opening, according to various embodiments. FIGS. 12E and 12F show structuring elements for diagonal image opening, according to various embodiments.
[0091] In various embodiments, the exemplary method may detach the pelvic bones from the VAT using image opening with diagonal structuring elements. Consider an image slice I° in the axial plane within the pelvic region:
BW° = logical(I°), the binary map of I°
Let BW_L and BW_R be the left half and right half of BW°. Let H_L and H_R be the structuring elements defined as follows:
H_L = I_n and H_R = J_n, with I_n being the identity matrix of size n and J_n being the exchange matrix of size n
CV = the convex hull map of BW°, which covers the VAT and pelvic bones
n = ¼ × width of CV
M_L = BW_L ∘ H_L and M_R = BW_R ∘ H_R (image opening with structuring element H)
Let G_L and G_R be the largest 8-connected areas in M_L and M_R. The binary map of the pelvic bone area is defined as P = (M_L ∩ G_L) ∪ (M_R ∩ G_R). Subsequently, pixels in the pelvic bone area are suppressed by Otsu thresholding.
BW° represents the logical binary image of I°. This allows logical operations to be performed instead of arithmetic operations, which may be computationally less expensive. 'L' and 'R' represent the left and right sides of the data or subject. Pelvic bone separation may be done by processing the left and right sides similarly and combining the two results at the end.
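A hedged sketch of the diagonal-opening step using scikit-image: each half of the binary map is opened with a diagonal structuring element, the largest 8-connected region is kept, and the two halves are combined; the structuring-element size n is passed in as an assumed parameter rather than derived from the convex-hull width as in the text.

import numpy as np
from skimage.morphology import binary_opening
from skimage.measure import label

def largest_region(mask):
    lab = label(mask, connectivity=2)            # 8-connected regions in 2D
    if lab.max() == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = np.bincount(lab.ravel())[1:]         # region sizes, skipping background
    return lab == (1 + sizes.argmax())

def pelvic_bone_mask(axial, n=15):
    bw = axial > 0                               # BW° = logical(I°)
    mid = bw.shape[1] // 2
    h_l = np.eye(n, dtype=bool)                  # H_L = I_n, the "\" diagonal
    h_r = np.fliplr(h_l)                         # H_R = J_n, the "/" diagonal
    m_l = binary_opening(bw[:, :mid], h_l)       # M_L = BW_L ∘ H_L
    m_r = binary_opening(bw[:, mid:], h_r)       # M_R = BW_R ∘ H_R
    g_l, g_r = largest_region(m_l), largest_region(m_r)
    bone = np.zeros_like(bw)
    bone[:, :mid] = np.logical_and(m_l, g_l)     # (M_L ∩ G_L)
    bone[:, mid:] = np.logical_and(m_r, g_r)     # ∪ (M_R ∩ G_R)
    return bone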
[0092] In various embodiments, post-processing may include removal of the spine. The 2D coordinates of the spine area may be identified using column-wise projections on the sagittal and axial planes crossing through the centre of the image volume. Pixels within this area may then be suppressed to eliminate non-fat spine pixels from the VAT. In various embodiments, the suppression may be done by Otsu thresholding.
[0093] FIG. 13A shows a sagittal image S° crossing the body symmetrical axis, according to various embodiments. FIG. 13B shows an axial image crossing the spine region, according to various embodiments. FIG. 13C shows the column-wise projection of S°, according to various embodiments. According to various embodiments:
Let S° be the sagittal slice of the VAT image at the plane crossing the body symmetrical axis, and let z be the column-wise projection of the binary map of S°:
L° = logical(S°), the binary map of S°
z(j) = Σ_{i=1}^{M} L_ij, where L_ij is the pixel value of L° at row i, column j, and M is the number of rows in L°.
V = {V_i | V_i = local minimum of z}
There are 4 minima corresponding to the SAT-VAT boundary and the VAT-spine boundary (see FIG. 13C), denoted V1, V2, V3, V4 such that x_V1 < x_V2 < x_V3 < x_V4, with x_Vi being the column index of V_i in S° (i = 1, 2, 3, 4).
[0094] The spine may be modelled as a cylinder throughout the image volume. The centre of the cylinder may be defined as:
x_c = ½ × M_A, with M_A being the number of columns in the axial plane image slice
[Equation for the second centre coordinate: not recoverable from the source]
The radius of the cylinder may be defined as:
[Equation for the cylinder radius: not recoverable from the source]
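Since the exact centre and radius equations did not survive extraction, the following heavily hedged sketch fills them with plausible assumptions (the centre column of the axial slice, and the midpoint and half-width of the V2-V3 gap); only the projection and minima detection follow the text directly.

import numpy as np
from scipy.signal import find_peaks

def spine_cylinder(sagittal, axial_cols):
    z = (sagittal > 0).sum(axis=0)         # column-wise projection of L°
    minima, _ = find_peaks(-z)             # local minima of z
    v1, v2, v3, v4 = minima[:4]            # assumed: the four boundary minima, left to right
    x_c = axial_cols / 2.0                 # x_c = ½ × M_A (centre column of axial slice)
    y_c = (v2 + v3) / 2.0                  # assumption: midway between V2 and V3
    radius = (v3 - v2) / 2.0               # assumption: half the V2-V3 gap
    return (x_c, y_c), radius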
[0095] FIG. 14 shows a flowchart of an exemplary framework 1400 of a segmentation process according to various embodiments. A first step 1410 may be a pre-processing step. The pre-processing step 1410 may include loading multiple data sets as described in FIG. 3A, and/or removing extraneous regions as described in FIG. 3B, and/or correcting intensity inhomogeneity as described in FIG. 3C. In the first step 1410, extraneous features in the images, for example arm regions, may be removed, followed by noise filtering and intensity inhomogeneity correction. Any noise removal algorithm may be used to enhance the signal-to-noise ratio of the image. The images may be corrupted by at least one of background noise, B0 inhomogeneity, B1 inhomogeneity, motion artefacts, partial volume effects, sequence-dependent artefacts and noise, or the presence of other anatomical structures. The removal of extraneous features and/or noise may be handled during pre-processing. Any pre-processing technique may be used, as long as it preserves the anatomical and contrast information of the abdomen. A second step 1420 may be a segmentation step. The segmentation step 1420 may include SAT-VAT segmentation and SSAT-DSAT segmentation. Segmentation step 1420 may be performed using a low-computation image processing based method as described in FIGS. 3-7 and/or a deep learning based method as described in FIG. 8. For example, SSAT, DSAT and VAT may be separated by transforming the abdominal image to the polar coordinate system, followed by modelling the boundaries (SAT-VAT and DSAT-SSAT) using interpolation, for example spline interpolation, based on the edges detected in the image. Interpolation results, such as spline fits, may be colour coded to represent the actual and approximated regions of the curve. A third step may be a post-processing step. The post-processing step may include, for example, removing the pelvic bone as described in FIGS. 11-12 and/or removing the spine as described in FIG. 13 from the VAT region. The system which executes the algorithm may also allow user interaction to correct the contours and save the results for each slice as well as for the whole volume.
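As a structural illustration only, the three-step flow of framework 1400 may be composed as below; the three callables are placeholders for the pre-processing, segmentation and post-processing techniques described in this disclosure, not a published API.

def run_framework_1400(volume, preprocess, segment, postprocess):
    # Step 1410: pre-processing (arm removal, noise filtering, bias correction).
    volume = preprocess(volume)
    # Step 1420: segmentation (SAT-VAT split, then SSAT-DSAT split).
    ssat, dsat, vat = segment(volume)
    # Third step: post-processing of the VAT map (pelvic bone and spine removal).
    vat = postprocess(vat)
    return ssat, dsat, vat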
[0096] According to various embodiments, fat compartments may be computed by a processor. The following are examples of possible computations (a sketch of these computations follows the list):
- Total fat = SAT + VAT;
- SAT = Superficial SAT + Deep SAT;
- VAT;
- SAT, VAT, SSAT and DSAT quantitative analysis at each lumbar position (L1-L5);
- 3D rendered visualization of each fat compartment.
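A minimal sketch of these computations, converting voxel counts of the binary compartment maps into volumes; the voxel-volume argument is an assumption about the available scan metadata.

import numpy as np

def fat_volumes(ssat, dsat, vat, voxel_mm3):
    # ssat, dsat, vat: binary 3D maps; voxel_mm3: voxel volume in cubic millimetres.
    to_cm3 = lambda m: m.sum() * voxel_mm3 / 1000.0
    ssat_v, dsat_v, vat_v = to_cm3(ssat), to_cm3(dsat), to_cm3(vat)
    sat_v = ssat_v + dsat_v                      # SAT = Superficial SAT + Deep SAT
    return {"SSAT": ssat_v, "DSAT": dsat_v, "VAT": vat_v,
            "SAT": sat_v, "Total fat": sat_v + vat_v}  # Total fat = SAT + VAT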
[0097] An exemplary process flow of the user interface, according to various embodiments, is described below. The user interface may include three main components:
[0098] Load panel: reads an image volume from a local drive and loads it into the workspace. Multiple datasets may be simultaneously imported for batch processing. Upon loading, the images may be viewed in the Image Display Window.
[0099] Segment panel: carries out the pre-processing, segmentation and post-processing work. The user may first select the slices of interest, the option for intensity correction, and single-image or batch processing, before proceeding to run the segmentation. Upon completion, the isolated VAT, SAT, DSAT and SSAT may be visualized in the Segment window. Simultaneously, the SAT-VAT boundary and the DSAT-SSAT boundary may be highlighted on the original Image Display Window. A correction option may be enabled at this step to allow interactive correction by the user.
[00100] Report panel: performs quantitation of the separated fat compartments, and may be enabled after completion of the segmentation step. The program may save the quantitation report as well as the segmentation image result, for example in the memory, once signified by the user (by hitting the "Save" button).
[00101] FIG. 15A shows an example of image selection in a load panel, according to various embodiments. FIG. 15B shows an example of the segmented fat compartments in a segmentation panel, according to various embodiments. FIG. 15C shows an example of correcting the segmented fat compartments in a segmentation panel, according to various embodiments. FIG. 15D shows an example of quantitation of separated fat compartments in a report panel, according to various embodiments.
[00102] FIG. 16 shows segmented abdominal fat compartments in 3D volume, according to various embodiments. The first panel 1610 shows the total fat volume in 3D. The second panel 1620 shows the total SAT volume in 3D. The third panel 1630 shows the total SSAT volume in 3D. The fourth panel 1640 shows the total DSAT volume in 3D. The fifth panel 1650 shows the total VAT volume in 3D.
[00103] According to various embodiments, the accuracy of segmentation by the proposed framework may be evaluated by the Dice Similarity Index, which is summarized in the following tables. The tables below sum up the evaluation of the segmentation results against ground truth data. The data is categorized into three groups, low, medium and high fat (L, M and H in the tables below), based on the total fat volume measured in each slice of the data. Low fat slices contain less than 800 cm3 of fat, medium fat slices contain 800 to 1300 cm3 of fat, and high fat slices contain more than 1300 cm3 of fat.
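The grouping in paragraph [00103] amounts to a simple thresholding of the per-slice total fat volume, sketched below.

def fat_category(total_fat_cm3):
    # Thresholds from paragraph [00103]: < 800 cm3 low, 800-1300 cm3 medium, > 1300 cm3 high.
    if total_fat_cm3 < 800:
        return "L"
    if total_fat_cm3 <= 1300:
        return "M"
    return "H"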
[00104] In this example, subject scans were divided into low, medium and high category based on the amount of total abdominal fat. The mean Dice similarity indices for the low computation image processing method for VAT, DSAT and SSAT segmentations were 0.88, 0.67, and 0.88 for low, 0.9, 0.8, and 0.88 for medium and 0.93, 0.84, and 0.89 for high total fat categories respectively.
[Table 1: Dice similarity indices for the low computation image processing method]
Table 1 shows results for the low computation image processing method.
[Table 2: Dice similarity indices for the deep learning method]
Table 2 shows results for the deep learning method.
[00105] FIG. 17A shows Bland-Altman plots for a method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments, according to various embodiments. FIG. 17B shows Bland-Altman plots for a deep learning method, according to various embodiments. In various embodiments, Bland-Altman plots may be used to analyse the overestimation and underestimation between the segmentation and the ground truth on the whole dataset. The plot may be constructed by plotting the difference between the automated result and the ground truth against their mean.
[00106] According to various embodiments, the new segmentation technique has been developed with reduced computational complexity and preserved accuracy. Therefore, it can be used in medical fields such as radiology and endocrinology to monitor changes in abdominal obesity, as well as for other lumbar-position-based analyses.
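Referring back to the Bland-Altman construction in paragraph [00105], a minimal matplotlib sketch follows; the bias line and 1.96-sigma limits of agreement are the conventional additions to such plots and are assumptions beyond the text.

import numpy as np
import matplotlib.pyplot as plt

def bland_altman(automated, ground_truth):
    automated = np.asarray(automated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    mean = (automated + ground_truth) / 2.0
    diff = automated - ground_truth              # difference vs. mean, as in the text
    bias, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff)
    for y in (bias, bias + 1.96 * sd, bias - 1.96 * sd):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of automated result and ground truth")
    plt.ylabel("Automated result minus ground truth")
    plt.show()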
[00107] While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.

Claims

1. A method of segmenting an image of an abdomen of a human into image segments corresponding to fat compartments, the method comprising:
(a) identifying an abdominal wall segment, a visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image, wherein the subcutaneous adipose tissue (SAT) segment comprises a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment;
(b) detecting edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment to obtain edges coordinates in Cartesian coordinates of the edges of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment;
(c) transforming the edges coordinates from Cartesian coordinates to polar coordinates to obtain a discontinuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment;
(d) determining a continuous boundary curve of the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment, wherein determining the continuous boundary curve comprises interpolating missing points in the discontinuous boundary curve;
(e) transforming the continuous boundary curve from polar coordinates to Cartesian coordinates to obtain a transformed boundary curve of the image;
(f) identifying the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the subcutaneous adipose tissue (SAT) segment, wherein the transformed boundary curve separates the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment in the image, and wherein the superficial subcutaneous adipose tissue (SSAT) segment is between the abdominal wall segment and the transformed boundary curve, and the deep subcutaneous adipose tissue (DSAT) segment is between the transformed boundary curve and the visceral adipose tissue (VAT) segment.
2. The method of claim 1, further comprising:
conducting coherence filtering on the subcutaneous adipose tissue (SAT) segment of the image to remove unwanted edges before step (b).
3. The method of claim 1 or claim 2, wherein step (a) further comprises:
transforming the image from Cartesian coordinates to polar coordinates and detecting edges of the abdominal wall segment and a SAT-VAT boundary to obtain edges coordinates in polar coordinates of the edges of the abdominal wall segment and the SAT-VAT boundary wherein the SAT-VAT boundary in polar coordinates is discontinuous;
interpolating missing points of the SAT-VAT boundary to obtain a connected SAT-VAT boundary; and
identifying the visceral adipose tissue (VAT) segment and a subcutaneous adipose tissue (SAT) segment of the image from the abdominal wall segment and the connected SAT-VAT boundary.
4. The method of any of claims 1 to 3, further comprising pre-processing the image of the abdomen of the human before steps (a)-(f).
5. The method of claim 4, wherein pre-processing the image comprises removing at least one extraneous region segment in the image.
6. The method of claim 5, wherein the extraneous region segment is a limb segment of the human in the image.
7. The method of any of claims 4 to 6, wherein pre-processing the image comprises correcting inhomogeneity intensity of the image.
8. The method of claim 7, wherein correcting inhomogeneity intensity of the image comprises estimating a bias field of the image and applying the bias field to the image to obtain a corrected image.
9. The method of any of claims 1 to 8, further comprising post-processing the image by removing a pelvic bone segment and/or a spine segment after step (f).
10. The method of any of claims 1 to 9, further comprising:
receiving training data comprising a plurality of images of abdomens of humans, each image of the plurality of images of abdomens of humans comprising image segments corresponding to the visceral adipose tissue (VAT) segment, the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment; using the training data to create a trained computer model for segmenting the image segments corresponding to the fat compartments of the abdomen.
11. The method of claim 10, further comprising using the trained computer model to segment a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment of an abdomen in a target image.
12. The method of any one of claims 1 to 11, wherein the image and/or the target image is acquired from a location of at least one of vertebrae discs L1-L5 of a lumbar spine of the human.
13. A system for performing the method according to any of claims 1 to 12, comprising:
a memory;
at least one processor communicatively coupled to the memory and configured to receive the image of the abdomen of the human and segment the image of the abdomen of the human into image segments corresponding to the fat compartments; and
a user interface communicatively coupled to the processor, wherein the user interface is configured to receive instructions from a user and is further configured to communicate the instructions to the processor for execution.
14. The system of claim 13, wherein the user interface comprises a load panel, a segment panel and a report panel.
15. The system of claim 14, further configured to load the image into the memory and display the image in a display window when the load panel receives instructions from the user.
16. The system of claim 14 or claim 15, further configured to carry out at least one of:
segment the image of the abdomen of the human into image segments corresponding to the fat compartments;
pre-process the image prior to segmenting the image; and
post-process the image after segmenting the image;
when the segment panel receives instructions from the user.
17. The system of any of claims 14 to 16, further configured to interactively correct the image after the image has been segmented when the segment panel receives instructions from the user.
18. The system of any of claims 14 to 17, further configured to receive instructions from the user to quantitate the segmented fat compartments when the report panel receives instructions from the user to quantitate the segmented fat compartments.
19. The system of claim 13, wherein the system is configured to receive a plurality of images of the abdomen of the human and batch process the plurality of images, wherein each image of the plurality of images is configured to be segmented into image segments corresponding to the fat compartments.
20. A method of semantically segmenting an image of an abdomen of a human into image segments corresponding to fat compartments using deep learning comprising:
receiving training data comprising a plurality of images of abdomens of humans, wherein each image of the plurality of images of abdomens of humans comprises image segments corresponding to a visceral adipose tissue (VAT) segment, a superficial subcutaneous adipose tissue (SSAT) segment and a deep subcutaneous adipose tissue (DSAT) segment;
using the training data to create a trained computer model for semantically segmenting the image segments corresponding to the fat compartments of the abdomen.
21. The method of claim 20, further comprising using the trained computer model to semantically segment the visceral adipose tissue (VAT) segment, the superficial subcutaneous adipose tissue (SSAT) segment and the deep subcutaneous adipose tissue (DSAT) segment of the abdomen in the image.
22. A computer program comprising instructions executable by the at least one processor according to any of claims 1 to 21.




