
CN112669399A - Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning

Info

Publication number
CN112669399A
Authority
CN
China
Prior art keywords
blood
image
bright
black
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011322252.8A
Other languages
Chinese (zh)
Inventor
贾艳楠
王文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Cresun Innovation Technology Co Ltd
Original Assignee
Xian Cresun Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Cresun Innovation Technology Co Ltd filed Critical Xian Cresun Innovation Technology Co Ltd
Priority claimed from CN202011322252.8A
Publication of CN112669399A
Priority claimed from CN202111381543.9A
Legal status: Withdrawn (current)

Classifications

    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/344 Image registration using feature-based methods involving models
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T 2207/20024 Filtering details
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30101 Blood vessel; artery; vein; vascular
    • G06T 2211/421 Filtered back projection [FBP]
    • G06T 2211/424 Iterative

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method for establishing an intracranial vascular enhancement three-dimensional model based on transfer learning, which comprises the following steps: acquiring a bright blood image group, a black blood image group and an enhanced black blood image group of an intracranial vascular site; registering each bright blood image, with the corresponding enhanced black blood image as reference, by a registration method based on mutual information and an image pyramid to obtain a registered bright blood image group; performing a flow-space artifact removal operation on the enhanced black blood images in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group; subtracting the corresponding black blood images from the images of the artifact-removed enhanced black blood image group to obtain K contrast enhanced images; establishing a blood three-dimensional model from the registered bright blood image group by a transfer learning method; establishing a blood vessel three-dimensional model with an expanded blood boundary from the registered bright blood image group; establishing a contrast enhanced three-dimensional model from the K contrast enhanced images; and obtaining the intracranial vascular enhancement three-dimensional model based on the blood three-dimensional model, the blood vessel three-dimensional model and the contrast enhanced three-dimensional model. With the method of the invention, the overall state of the intracranial blood vessels can be obtained simply, quickly and intuitively in clinical practice for the analysis of intracranial vascular lesions.

Description

Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to a method for establishing an intracranial vascular enhancement three-dimensional model based on transfer learning.
Background
According to recent medical data, vascular diseases seriously affect people's health and have become one of the diseases with the highest fatality rates; examples include atherosclerosis, inflammatory vascular disease and true vascular neoplastic disease. Common causes of vascular disease are stenosis, occlusion, rupture and plaque. Currently, in clinical applications, methods based on lumen imaging, such as Digital Subtraction Angiography (DSA), CT Angiography (CTA), Magnetic Resonance Angiography (MRA) and High-Resolution Magnetic Resonance Angiography (HRMRA), are commonly used to assess the degree of vascular lesion and vascular stenosis.
Magnetic resonance angiography (MRA or HRMRA) is a non-invasive imaging method for the patient that can clearly detect and analyze the vessel-wall structure. The magnetic resonance images obtained by scanning have high soft-tissue resolution, no bone artifacts and good image quality, and multi-sequence scanning can capture tissue structures with different imaging characteristics, so this technique has obvious advantages for displaying blood vessels.
Because the images corresponding to the bright blood sequence and the black blood sequence obtained by magnetic resonance angiography are two-dimensional, clinicians must rely on experience to combine the information of the two kinds of images in order to obtain a comprehensive picture of the blood vessels and analyze vascular lesions. Two-dimensional images, however, have inherent limitations and do not allow the real state of the blood vessels to be obtained simply and quickly.
Disclosure of Invention
In order to obtain the real state of blood vessels simply, conveniently and quickly in clinical applications, so that vascular lesions can be analyzed, an embodiment of the invention provides a method for establishing an intracranial vascular enhancement three-dimensional model based on transfer learning. The method comprises the following steps:
acquiring a bright blood image group, a black blood image group and an enhanced black blood image group of an intracranial vascular site; the bright blood image group, the black blood image group and the enhanced black blood image group respectively comprise K bright blood images, black blood images and enhanced black blood images; the images in the bright blood image group, the black blood image group and the enhanced black blood image group are in one-to-one correspondence; k is a natural number greater than 2;
aiming at each bright blood image in the bright blood image group, carrying out image registration by using a registration method based on mutual information and an image pyramid by taking a corresponding enhanced black blood image in the enhanced black blood image group as a reference to obtain a registered bright blood image group comprising K registered bright blood images;
performing flow-space artifact removing operation on the enhanced black blood images in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group comprising K target enhanced black blood images;
subtracting the corresponding black blood image in the black blood image group from each image in the artifact-removed enhanced black blood image group to obtain K contrast enhanced images;
establishing a blood three-dimensional model by using the registered bright blood image group and adopting a transfer learning method;
establishing a blood vessel three-dimensional model of blood boundary expansion by using the registered bright blood image group;
establishing a contrast enhanced three-dimensional model by using the K contrast enhanced images;
and obtaining an intracranial vascular enhancement three-dimensional model based on the blood three-dimensional model, the vascular three-dimensional model and the contrast enhancement three-dimensional model.
In the scheme provided by the embodiment of the invention, the bright blood images and the enhanced black blood images obtained by magnetic resonance angiography scanning are first registered with a registration method based on mutual information and an image pyramid, which improves registration efficiency and refines registration accuracy layer by layer from low resolution to high resolution; the registration unifies the bright blood images and the enhanced black blood images in the same coordinate system. Second, the registered bright blood images are used to remove the flow-space artifacts from the enhanced black blood images, so that more accurate and comprehensive vessel information can be displayed. The scheme removes the flow-space artifacts from the perspective of image post-processing, without requiring a new imaging technique, imaging mode or pulse sequence, so the artifacts can be removed simply, accurately and quickly, and the scheme can be readily popularized in clinical applications. Third, a blood three-dimensional model is established from the registered bright blood images by a transfer learning method, a blood vessel three-dimensional model with an expanded blood boundary is established from the registered bright blood images, and the artifact-removed enhanced black blood images and the black blood images are subtracted to obtain a contrast enhanced three-dimensional model with a contrast enhancement effect. Finally, an intracranial vascular enhancement three-dimensional model, corresponding to the vessel wall with a contrast enhancement effect, is obtained from the blood three-dimensional model, the blood vessel three-dimensional model and the contrast enhanced three-dimensional model. The intracranial vascular enhancement three-dimensional model simulates the three-dimensional morphology of the intracranial blood vessels and realizes their three-dimensional visualization, so that doctors no longer need to reconstruct the tissue structure and disease characteristics of the intracranial blood vessels in their imagination; they can observe and analyze the vascular morphology from any angle and slice of interest, obtain vivid three-dimensional spatial information of the vessels, make intuitive observations, and conveniently locate and display lesion regions. The overall state of the intracranial blood vessels can thus be obtained simply, quickly and intuitively in clinical practice for intracranial vascular lesion analysis.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a method for establishing an enhanced three-dimensional model of an intracranial blood vessel based on transfer learning according to an embodiment of the present invention;
fig. 2 is an exemplary MIP diagram of an embodiment of the present invention;
FIG. 3 shows the inversion map of a MIP map and the characteristic MIP map corresponding to that MIP map;
FIG. 4 is an effect diagram of a three-dimensional model of an intracranial vascular simulation in accordance with an embodiment of the invention;
FIG. 5 is a graph of pre-registered results of intracranial vascular magnetic resonance images according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a region to be registered of an intracranial vascular magnetic resonance image in accordance with an embodiment of the invention;
fig. 7(a) is a bright blood gaussian pyramid and a black blood gaussian pyramid of an intracranial vascular magnetic resonance image according to an embodiment of the invention; fig. 7(b) is a bright blood laplacian pyramid and a black blood laplacian pyramid of an intracranial vascular magnetic resonance image according to an embodiment of the present invention;
FIG. 8 is a result of registration of Laplacian pyramid images of intracranial vascular magnetic resonance images according to an embodiment of the invention;
fig. 9 is a schematic diagram of a gaussian pyramid image registration step based on mutual information for an intracranial vascular magnetic resonance image according to an embodiment of the present invention;
FIG. 10 is a normalized mutual information for different iterations according to an embodiment of the present invention;
FIG. 11 is a registration result of intracranial vascular magnetic resonance images of multiple registration methods;
FIG. 12 is a graph showing the result of linear gray scale transformation according to an embodiment of the present invention;
FIG. 13 is a diagram of an image binarization result according to an embodiment of the present invention;
FIG. 14 shows the flow-space artifact removal result for intracranial vessels according to an embodiment of the present invention;
FIG. 15 is a diagram of the effect of enhancing a three-dimensional model for an intracranial blood vessel according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
In order to be used in clinical application, real information of blood vessels is simply, conveniently and quickly obtained so as to analyze the pathological changes of the blood vessels. The embodiment of the invention provides a method for establishing an intracranial vascular enhancement three-dimensional model based on transfer learning.
It should be noted that the implementation subject of the transfer-learning-based intracranial vascular enhancement three-dimensional model building method provided by the embodiment of the present invention may be an apparatus for building a vascular enhancement three-dimensional model, and this apparatus may run in an electronic device. The electronic device may be a blood vessel imaging device or an image processing device, but is not limited thereto.
As shown in fig. 1, fig. 1 is a schematic flow chart of a method for establishing an enhanced three-dimensional model of an intracranial blood vessel based on transfer learning according to an embodiment of the present invention, which may include the following steps:
s1, acquiring a bright blood image group, a black blood image group and an enhanced black blood image group of the intracranial vascular site;
the bright blood image group, the black blood image group and the enhanced black blood image group respectively comprise K bright blood images, black blood images and enhanced black blood images; the images in the bright blood image group, the black blood image group and the enhanced black blood image group are in one-to-one correspondence; k is a natural number greater than 2;
in the embodiment of the present invention, the blood vessel may be a blood vessel of a tissue portion such as an intracranial blood vessel, a cardiovascular blood vessel, an ocular fundus blood vessel, and the like, and the blood vessel portion in the embodiment of the present invention is not limited herein.
In an embodiment of the invention, the magnetic resonance angiography technique is preferably HRMRA.
The K images in the group of bright blood images, the group of black blood images and the group of enhanced black blood images are in one-to-one correspondence in such a way that the images formed according to the scanning time are in the same order.
S2, aiming at each bright blood image in the bright blood image group, carrying out image registration by using a registration method based on mutual information and an image pyramid by taking a corresponding enhanced black blood image in the enhanced black blood image group as a reference to obtain a registered bright blood image group comprising K registered bright blood images;
the step is to actually complete the image registration of each bright blood image, that is, to use the bright blood image to be registered as a floating image, use the enhanced black blood image corresponding to the bright blood image as a reference image, and perform the image registration by using the similarity measurement based on mutual information and introducing an image pyramid method.
In an alternative embodiment, S2 may include S21-S27:
s21, preprocessing each bright blood image and the corresponding enhanced black blood image to obtain a first bright blood image and a first black blood image;
in an alternative embodiment, S21 may include S211 and S212:
s211, aiming at each bright blood image, taking the corresponding enhanced black blood image as a reference, carrying out coordinate transformation and image interpolation on the bright blood image, and obtaining a pre-registered first bright blood image by using similarity measurement based on mutual information and a preset search strategy;
the step S211 is actually image pre-registration of the bright blood image with reference to the enhanced black blood image.
The enhanced black blood image is imaged by coronal plane scanning, while the bright blood image is imaged by axial plane scanning, and the difference of the sequence scanning direction causes the difference of the two final magnetic resonance imaging layers, so that the magnetic resonance images of different imaging layers need to be observed under a standard reference coordinate system through coordinate transformation.
For the blood vessel image, the coordinate transformation of the image can be realized by using the direction information in the DICOM (Digital Imaging and Communications in Medicine) file. The DICOM file is an image storage format for medical devices such as CT and nuclear magnetic resonance, and the contents stored in the DICOM standard include personal data of a patient, an image layer thickness, a time stamp, medical device information, and the like, in addition to image information. The DICOM3.0 format image file contains orientation label information related to the imaging direction, which briefly introduces the orientation relationship between the patient and the imaging instrument, and the accurate position information of each pixel in the image can be obtained through the data in the orientation label information.
Specifically, the enhanced black blood image and the bright blood image are to-be-registered images, and the enhanced black blood image is used as a reference image, the bright blood image is used as a floating image, and the bright blood image is subjected to coordinate transformation according to the orientation tag information in the DICOM file of the bright blood image, so that the purpose of rotating the bright blood image to the same coordinate system as the enhanced black blood image is achieved, and the scanning direction of the rotated bright blood image is also changed into a coronal plane.
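As a minimal illustration of reading this orientation tag information, the pydicom sketch below maps a pixel index to patient-space coordinates using the standard DICOM attributes; the file name is a placeholder.

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("bright_blood_slice.dcm")   # placeholder file name

# Direction cosines of the image rows and columns in patient coordinates.
row_dir = np.array(ds.ImageOrientationPatient[:3], dtype=float)
col_dir = np.array(ds.ImageOrientationPatient[3:], dtype=float)
origin = np.array(ds.ImagePositionPatient, dtype=float)   # upper-left voxel
spacing = np.array(ds.PixelSpacing, dtype=float)          # [row spacing, column spacing] in mm

def pixel_to_patient(r, c):
    """Map a pixel index (row r, column c) to patient-space coordinates."""
    return origin + r * spacing[0] * col_dir + c * spacing[1] * row_dir
```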
Image registration is essentially a multi-parameter optimization problem: the images undergo spatial coordinate transformations under a certain search strategy until the similarity measure between the two images is optimal, with the search strategy and the coordinate transformation alternating during the actual calculation. The idea of the algorithm is to compute the similarity measure between the two images in each iteration, adjust the floating image by coordinate transformations such as translation or rotation, and interpolate the image at the same time, until the similarity measure of the two images reaches its maximum. Commonly used search strategies include gradient-descent optimizers and the (1+1)-ES optimizer based on an Evolution Strategy (ES); the predetermined search strategy in the embodiment of the present invention may be selected as needed.
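As a concrete illustration of the mutual-information similarity measure mentioned above (a minimal numpy sketch, not the patent's own code), the normalized mutual information of two equally sized gray-scale images can be computed from their joint histogram:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of A
    py = pxy.sum(axis=0)               # marginal of B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

During registration, the search strategy adjusts the transform parameters of the floating image so that this value increases: a value close to 2 indicates nearly identical intensity structure, while a value near 1 indicates statistically independent images.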
Through this pre-registration, the magnetic resonance images of the same scanning slice can already be compared in the same coordinate system. However, because the bright blood sequence and the black blood sequence are scanned at different times and the patient may move slightly between the scans, this operation is only a rough coordinate transformation, and pre-registration alone cannot fully register the multi-modal magnetic resonance images. Nevertheless, this step spares the subsequent fine-registration stage unnecessary processing and improves the processing speed.
S212, the same area content as the scanning range of the first bright blood image is extracted from the corresponding enhanced black blood image, and a first black blood image is formed.
Optionally, S212 may include the following steps:
1. obtaining edge contour information of a blood vessel in the first bright blood image;
specifically, the edge contour information may be obtained by using a Sobel edge detection method or the like. The edge profile information contains coordinate values of the respective edge points.
2. Extracting the minimum value and the maximum value of the abscissa and the ordinate from the edge profile information, and determining an initial extraction frame based on the obtained four coordinate values;
In other words, the minimum abscissa, the maximum abscissa, the minimum ordinate and the maximum ordinate are extracted from the edge contour information, and these four coordinate values determine the four vertices of a rectangular box, which gives the initial extraction frame;
3. in the size range of the first bright blood image, the size of the initial extraction frame is respectively enlarged by a preset number of pixels along four directions to obtain a final extraction frame;
wherein, the four directions are respectively the positive and negative directions of the horizontal and vertical coordinates; the preset number is reasonably selected according to the type of the blood vessel image, so as to ensure that the expanded final extraction frame does not exceed the size range of the first bright blood image, for example, the preset number may be 20.
4. And extracting the corresponding area content in the final extracted frame from the enhanced black blood image to form a first black blood image.
And extracting the content of the corresponding area in the enhanced black blood image according to the coordinate range defined by the final extraction frame, and forming the extracted content into a first black blood image. The step obtains the common scanning range of the magnetic resonance images under the two modes by extracting the region to be registered, thereby being beneficial to subsequent rapid registration.
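A minimal OpenCV/numpy sketch of steps 1-4 above is given below; the edge threshold and the 20-pixel margin are assumptions for illustration.

```python
import cv2
import numpy as np

def extract_registration_region(first_bright, enh_black, margin=20):
    """Crop from the enhanced black blood image the area covered by the
    pre-registered bright blood image, enlarged by `margin` pixels."""
    # Step 1: vessel edge contour via Sobel gradients.
    gx = cv2.Sobel(first_bright, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(first_bright, cv2.CV_64F, 0, 1, ksize=3)
    edges = np.hypot(gx, gy)
    ys, xs = np.nonzero(edges > 0.1 * edges.max())   # assumed edge threshold

    # Step 2: initial box from the extreme edge coordinates.
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()

    # Step 3: enlarge the box by `margin` pixels in all four directions,
    # clipped to the size range of the first bright blood image.
    h, w = first_bright.shape[:2]
    x0, y0 = max(x0 - margin, 0), max(y0 - margin, 0)
    x1, y1 = min(x1 + margin, w - 1), min(y1 + margin, h - 1)

    # Step 4: extract the corresponding area from the enhanced black blood image.
    return enh_black[y0:y1 + 1, x0:x1 + 1]
```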
The two-step preprocessing process of the embodiment of the invention plays a very important role, the preprocessed image can pay more attention to useful information and exclude irrelevant information, and in actual use, the image preprocessing can be used for improving the reliability of image registration and identification.
In the embodiment of the invention, in order to improve the accuracy of image registration and to prevent the registration from converging to a local maximum, a multi-resolution strategy is selected to deal with local extrema; at the same time, this strategy increases the execution speed and robustness of the algorithm while still meeting the required registration accuracy. An image pyramid approach is therefore adopted. Increasing the complexity of the model in this way is an effective means of improving registration accuracy and speed: the registration proceeds from coarse to fine, being performed first on the low-resolution images and then, once the low-resolution registration is complete, on the high-resolution images. Optionally, the following steps may be employed:
s22, obtaining a bright blood Gaussian pyramid from the first bright blood image and obtaining a black blood Gaussian pyramid from the first black blood image based on downsampling processing; the bright blood Gaussian pyramid and the black blood Gaussian pyramid comprise m images with resolution ratios which are sequentially reduced from bottom to top; m is a natural number greater than 3;
in an alternative embodiment, S22 may include the following steps:
Obtain the input image of the i-th layer, filter it with a Gaussian kernel, and delete the even rows and even columns of the filtered image to obtain the i-th layer image G_i of the Gaussian pyramid; the i-th layer image G_i then serves as the input image of the (i+1)-th layer, from which the (i+1)-th layer image G_{i+1} of the Gaussian pyramid is obtained;
wherein i = 1, 2, …, m-1; when the Gaussian pyramid is the bright blood Gaussian pyramid, the input image of the 1st layer is the first bright blood image, and when the Gaussian pyramid is the black blood Gaussian pyramid, the input image of the 1st layer is the first black blood image.
Specifically, the multiple images in the gaussian pyramid are corresponding to the same original image with different resolutions. The Gaussian pyramid acquires an image through Gaussian filtering and downsampling, and each layer of construction steps can be divided into two steps: firstly, smoothing filtering is carried out on an image by using Gaussian filtering, namely filtering is carried out by using a Gaussian kernel; and then deleting even rows and even columns of the filtered image, namely reducing the width and height of the lower layer image by half to obtain the current layer image, so that the current layer image is one fourth of the size of the lower layer image, and finally obtaining the Gaussian pyramid by continuously iterating the steps.
In this step, the first bright blood image and the first black blood image after the preprocessing are subjected to the processing, so that a bright blood gaussian pyramid and a black blood gaussian pyramid can be obtained. Wherein the number of picture layers m may be 4.
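A minimal OpenCV sketch of this Gaussian pyramid construction is shown below; cv2.pyrDown performs exactly the Gaussian filtering followed by removal of even rows and columns described above, and the 4-layer setting follows the text.

```python
import cv2

def gaussian_pyramid(image, levels=4):
    """Build an m-level Gaussian pyramid; level 0 is the input image
    (the first bright blood or first black blood image), and each further
    level is Gaussian-filtered and halved in width and height."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))   # blur + drop even rows/columns
    return pyramid

# Usage sketch (the arrays are assumed to be loaded 2-D gray-scale images):
# bright_gp = gaussian_pyramid(first_bright_image, levels=4)
# black_gp  = gaussian_pyramid(first_black_image,  levels=4)
```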
Since the gaussian pyramid is downsampled, i.e., the image is reduced, a portion of the data of the image is lost. Therefore, in order to avoid data loss of the image in the zooming process and recover detailed data, the Laplacian pyramid is used in the subsequent steps, image reconstruction is realized by matching with the Gaussian pyramid, and details are highlighted on the basis of the Gaussian pyramid image.
S23, based on the upsampling processing, utilizing the bright blood Gaussian pyramid to obtain a bright blood Laplacian pyramid, and utilizing the black blood Gaussian pyramid to obtain a black blood Laplacian pyramid; wherein the bright blood Laplacian pyramid and the black blood Laplacian pyramid comprise m-1 images with resolution which is sequentially reduced from bottom to top;
in an alternative embodiment, S23 may include the following steps:
For the (i+1)-th layer image G_{i+1} of the Gaussian pyramid, perform upsampling and fill the newly added rows and columns with zeros to obtain a padded image;
convolve the padded image with the Gaussian kernel to obtain approximate values for the filled pixels, giving the enlarged image;
subtract the enlarged image from the i-th layer image G_i of the Gaussian pyramid to obtain the i-th layer image L_i of the Laplacian pyramid;
when the Gaussian pyramid is the bright blood Gaussian pyramid, the resulting Laplacian pyramid is the bright blood Laplacian pyramid, and when the Gaussian pyramid is the black blood Gaussian pyramid, the resulting Laplacian pyramid is the black blood Laplacian pyramid.
Since each layer of the Laplacian pyramid is the residual between a Gaussian-pyramid image and its downsampled-then-upsampled version, the Laplacian pyramid, compared layer by layer from bottom to top, has one layer fewer than the Gaussian pyramid.
Specifically, the mathematical formula for generating the Laplacian pyramid is shown in (1), where L_i denotes the i-th layer of the Laplacian pyramid (bright blood or black blood Laplacian pyramid), G_i denotes the i-th layer of the Gaussian pyramid (bright blood or black blood Gaussian pyramid), UP(·) denotes the upsampling operation that enlarges the image, ⊗ is the convolution operator, and g is the Gaussian kernel used in constructing the Gaussian pyramid:

L_i = G_i - UP(G_{i+1}) ⊗ g  (1)

The formula shows that the Laplacian pyramid is essentially the residual between the original image and the image that has been reduced and then enlarged, i.e. a residual prediction pyramid. Because part of the information lost in the preceding downsampling operation cannot be completely restored by upsampling (downsampling is irreversible), an image that has been downsampled and then upsampled appears blurred compared with the original. By storing the residual between the original image and the downsampled-then-upsampled image, details can be added to the images of the different frequency layers on the basis of the Gaussian pyramid images, so that details are highlighted.
Corresponding to the 4-layer Gaussian pyramid, this step yields a bright blood Laplacian pyramid and a black blood Laplacian pyramid each with 3 image layers.
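A corresponding sketch of the Laplacian pyramid of formula (1), built from a Gaussian pyramid with OpenCV, is shown below; it assumes the gaussian_pyramid helper sketched above and uses float arithmetic so that negative residuals are preserved.

```python
import cv2

def laplacian_pyramid(gaussian_pyr):
    """L_i = G_i - UP(G_{i+1}) (convolved with the Gaussian kernel): the residual
    between each Gaussian level and the upsampled next (coarser) level.
    The result has one level fewer than the input Gaussian pyramid."""
    laplacian = []
    for i in range(len(gaussian_pyr) - 1):
        size = (gaussian_pyr[i].shape[1], gaussian_pyr[i].shape[0])
        # cv2.pyrUp inserts zero rows/columns and convolves with the Gaussian kernel.
        up = cv2.pyrUp(gaussian_pyr[i + 1], dstsize=size).astype("float32")
        laplacian.append(gaussian_pyr[i].astype("float32") - up)
    return laplacian
```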
S24, registering images of corresponding layers in the bright blood Laplacian pyramid and the black blood Laplacian pyramid to obtain a registered bright blood Laplacian pyramid;
in an alternative embodiment, S24 may include the following steps:
aiming at each layer of the bright blood Laplacian pyramid and the black blood Laplacian pyramid, taking the corresponding black blood Laplacian image of the layer as a reference image, taking the corresponding bright blood Laplacian image of the layer as a floating image, and realizing image registration by using a similarity measure based on mutual information and a preset search strategy to obtain the registered bright blood Laplacian image of the layer;
forming a registered Laplacian pyramid of the bright blood from bottom to top according to the sequence of the sequential reduction of the resolution by the registered multilayer Laplacian images of the bright blood;
the black blood laplacian image is an image in the black blood laplacian pyramid, and the bright blood laplacian image is an image in the bright blood laplacian pyramid.
The registration process in this step is similar to the pre-registration process, and the registered bright blood laplacian image can be obtained by performing coordinate transformation and image interpolation on the bright blood laplacian image, and using the similarity measurement based on mutual information and a predetermined search strategy to realize image registration.
S25, registering images of each layer in the bright blood Gaussian pyramid and the black blood Gaussian pyramid from top to bottom by using the registered bright blood Laplacian pyramid as superposition information to obtain a registered bright blood Gaussian pyramid;
In S25, the registered bright blood Laplacian pyramid is used as superposition information to register the images of each layer of the bright blood Gaussian pyramid and the black blood Gaussian pyramid from top to bottom. Images of different resolutions in the Gaussian pyramid need to be registered; because registration of low-resolution images captures the essential features of the images more easily, the embodiment of the present invention registers the high-resolution images on the basis of the low-resolution registration, i.e. the Gaussian pyramid images are registered from top to bottom and the registration result of the previous layer is used as the input for the registration of the next layer.
In an alternative embodiment, S25 may include the following steps:
for the j-th layer (counted from top to bottom) of the bright blood Gaussian pyramid and the black blood Gaussian pyramid, take the black blood Gaussian image of that layer as the reference image and the bright blood Gaussian image of that layer as the floating image, and perform image registration using the mutual-information-based similarity measure and the predetermined search strategy to obtain the registered j-th layer bright blood Gaussian image;
upsample the registered j-th layer bright blood Gaussian image, add it to the corresponding layer of the registered bright blood Laplacian pyramid, and use the resulting image to replace the (j+1)-th layer bright blood Gaussian image in the bright blood Gaussian pyramid;
take the (j+1)-th layer black blood Gaussian image as the reference image and the replaced (j+1)-th layer bright blood Gaussian image as the floating image, and perform image registration using the predetermined similarity measure and search strategy to obtain the registered (j+1)-th layer bright blood Gaussian image; where j = 1, 2, …, m-1, the black blood Gaussian images are images in the black blood Gaussian pyramid, and the bright blood Gaussian images are images in the bright blood Gaussian pyramid.
The above operations are repeated until the high-resolution registration of the bottom-layer Gaussian pyramid images is completed, giving the registered bright blood Gaussian pyramid. The coordinate system of the bright blood images is then consistent with that of the black blood images, and the images have high similarity. The registration of each layer is similar to the pre-registration process described above and is not described again.
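To make the coarse-to-fine idea of S24/S25 concrete, the sketch below performs a translation-only search that maximizes normalized mutual information, starting at the coarsest pyramid level and refining the offset at each finer level; it is a deliberately simplified stand-in (translation only, exhaustive search, no Laplacian-detail replacement) for the full transform and search strategy of the embodiment.

```python
import numpy as np

def nmi(a, b, bins=64):
    """Compact normalized mutual information (same measure as sketched earlier)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(1), p.sum(0)
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return (h(px) + h(py)) / h(p)

def coarse_to_fine_translation(black_gp, bright_gp, search=4):
    """black_gp / bright_gp: Gaussian pyramids ordered fine -> coarse.
    Returns the (dy, dx) shift of the bright blood image at full resolution."""
    dy, dx = 0, 0
    for level in reversed(range(len(black_gp))):   # coarsest level first
        dy, dx = 2 * dy, 2 * dx                    # propagate result to the finer level
        ref, mov = black_gp[level], bright_gp[level]
        best, best_shift = -np.inf, (dy, dx)
        for ddy in range(-search, search + 1):
            for ddx in range(-search, search + 1):
                shifted = np.roll(mov, (dy + ddy, dx + ddx), axis=(0, 1))
                score = nmi(ref, shifted)
                if score > best:
                    best, best_shift = score, (dy + ddy, dx + ddx)
        dy, dx = best_shift
    return dy, dx
```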
S26, obtaining a registered bright blood image corresponding to the bright blood image based on the registered bright blood Gaussian pyramid;
in the step, the bottom layer image in the registered bright blood Gaussian pyramid is obtained to be used as the bright blood image after registration.
And S27, obtaining a group of registered bright blood images by the registered bright blood images corresponding to the K bright blood images respectively.
After all the bright blood images are registered, K registered bright blood images can be used for obtaining a registered bright blood image group. Each post-registration bright blood image and the corresponding enhanced black blood image may be a post-registration image pair.
Through the above steps, image registration of the bright blood images and the enhanced black blood images is achieved. In the registration scheme provided by the embodiment of the invention, using mutual information as the similarity measure improves registration accuracy. At the same time, the image pyramid algorithm is introduced: increasing the complexity of the model in this way is an effective means of improving registration accuracy and speed, since coarse registration is first performed on the lower-resolution images and fine registration is then performed on the higher-resolution images on that basis. Decomposing and reconstructing the vessel images with the Gaussian pyramid and the Laplacian pyramid simulates the effect of the human eye observing an image at different distances, so the essential features of the vessel images are captured more easily. Registering the magnetic resonance bright blood and black blood images of the vessel site with the pyramid algorithm therefore improves registration efficiency and refines registration accuracy layer by layer from low to high resolution. The registration unifies the bright blood images and the enhanced black blood images in the same coordinate system, which helps doctors relate the vessel images of the black blood and bright blood sequences, obtain the comprehensive information required for diagnosis simply and quickly, and provides accurate and reliable reference information for subsequent medical diagnosis, surgical planning, radiotherapy planning and the like. The registration scheme of the embodiment of the invention can also serve as a reference for the registration of other medical images and has great clinical application value. Moreover, the image registration of the embodiment of the invention is an important basis for the subsequent removal of flow-space artifacts.
After image registration, the flow-space (flow-void) artifacts in the registered enhanced black blood images can be removed. These artifacts occur because the vessels are very small, blood flow is slow at tortuous segments, and the signals of surrounding blood and tissue fluid may contaminate the imaging of the vessel wall; as a result, blood information that should appear black in the image obtained from the black blood sequence appears bright instead, which mimics wall thickening or plaque in normal individuals and exaggerates the degree of vascular stenosis. The embodiment of the invention therefore uses the blood information of the registered bright blood images to correct the incorrectly displayed blood signal in the registered enhanced black blood images, embedding the blood information of the registered bright blood images into the registered enhanced black blood images to achieve an image fusion effect. This can be realized by the following steps:
s3, carrying out flow-space artifact removing operation on the enhanced black blood image in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group comprising K target enhanced black blood images;
in an alternative embodiment, S3 may include S31-S34:
s31, aiming at each post-registration bright blood image, improving the contrast of the post-registration bright blood image to obtain a contrast enhanced bright blood image;
In an optional implementation of this step, according to the characteristic that blood gives a high signal and the surrounding tissue a low signal in the bright blood image, a linear gray-scale transformation is applied to the registered bright blood image to adjust its gray-scale range, thereby improving the image contrast.
The specific process of the gray scale linear transformation can be referred to in the related art, and is not described in detail herein.
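A minimal numpy sketch of one such linear gray-scale (contrast-stretching) transformation is shown below; the percentile bounds and 8-bit output are assumptions for illustration.

```python
import numpy as np

def linear_gray_transform(img, low_pct=1, high_pct=99):
    """Stretch the gray range between two percentiles to the full 0-255 range,
    so that high-signal blood stands out against low-signal tissue."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype("float32") - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(stretched, 0, 255).astype("uint8")
```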
S32, extracting blood information from the contrast enhanced bright blood image to obtain a bright blood characteristic diagram;
in an alternative embodiment, S32 may include the following steps:
s321, determining a first threshold value by using a preset image binarization method;
s322, extracting blood information from the contrast enhanced bright blood image by using a first threshold value;
the method used in this step is called threshold segmentation.
S323, a bright blood feature map is obtained from the extracted blood information.
The preset image binarization method, i.e. binarization of the image, sets the gray value of each point in the image to 0 or 255, so that the whole image shows an obvious black-and-white effect: a suitable threshold is chosen for the gray-scale image with 256 brightness levels so as to obtain a binary image that still reflects the overall and local features of the image. Through the preset image binarization method, the blood information in the contrast enhanced bright blood image can be highlighted as white and the irrelevant information displayed as black, which facilitates extraction of the bright blood feature map corresponding to the blood information. The preset image binarization method in the embodiment of the invention may include the maximum between-class variance method (Otsu), the Kittler minimum-error method and the like.
The formula for extracting blood information is shown in (2), where T(x, y) is the gray value of the contrast enhanced bright blood image, F(x, y) is the gray value of the bright blood feature map, and t is the first threshold:

F(x, y) = 255 if T(x, y) ≥ t, and F(x, y) = 0 otherwise.  (2)
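A sketch of the thresholding of formula (2) using Otsu's method in OpenCV (one possible choice of the preset binarization method) is given below; the input is assumed to be an 8-bit single-channel image.

```python
import cv2

def bright_blood_feature_map(contrast_enhanced_bright):
    """Binarize the contrast enhanced bright blood image: blood -> 255 (white),
    everything else -> 0 (black). Otsu's method supplies the first threshold t."""
    t, feature_map = cv2.threshold(contrast_enhanced_bright, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t, feature_map
```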
S33, carrying out image fusion on the bright blood characteristic image and the enhanced black blood image corresponding to the bright blood image after registration according to a preset fusion formula to obtain a target enhanced black blood image with the flow space artifact eliminated corresponding to the enhanced black blood image;
In this step, a spatial mapping relationship between the bright blood feature map and the corresponding enhanced black blood image is first established, the bright blood feature map is mapped into the corresponding enhanced black blood image, and image fusion is performed according to the preset fusion formula (3), where F(x, y) is the gray value of the bright blood feature map, R(x, y) is the gray value of the corresponding enhanced black blood image, and g(x, y) is the gray value of the fused target enhanced black blood image:

g(x, y) = 0 if F(x, y) = 255, and g(x, y) = R(x, y) otherwise.  (3)
Through the above operations, the gray value of the flow-space artifact which is supposed to be black but appears as bright color in the corresponding enhanced black blood image can be changed into black, so that the purpose of eliminating the flow-space artifact is achieved.
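Formula (3) amounts to a masked copy, for which a minimal numpy sketch is:

```python
import numpy as np

def remove_flow_space_artifact(enh_black, bright_feature_map):
    """Apply formula (3): wherever the bright blood feature map marks blood (255),
    force the enhanced black blood image to black (0); elsewhere keep the
    original gray value R(x, y)."""
    return np.where(bright_feature_map == 255, 0, enh_black).astype(enh_black.dtype)
```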
And S34, obtaining an artifact-eliminated enhanced black blood image group according to the target enhanced black blood images corresponding to the K enhanced black blood images.
After all the enhanced black blood images are subjected to the flow-space artifact elimination, an artifact eliminated enhanced black blood image group can be obtained.
S4, subtracting the corresponding black blood images in the black blood image group from the images in the artifact-removed enhanced black blood image group to obtain K contrast enhanced images;
Subtracting the corresponding black blood image from each target enhanced black blood image yields a contrast enhanced image with a contrast enhancement effect; performing this subtraction for all target enhanced black blood images yields the K contrast enhanced images.
S5, establishing a blood three-dimensional model by using the registered bright blood image group and adopting a transfer learning method;
in an alternative embodiment, S5 may include the following steps:
s51, projecting the registered bright blood image group in three preset directions by using a maximum intensity projection method to obtain MIP (maximum intensity projection) images in all directions;
the Maximum Intensity Projection (MIP) is one of the CT three-dimensional image reconstruction techniques, and is referred to as MIP. Which traverses a volume data series along a preselected viewing angle using a set of projection lines, the highest CT value on each projection line being encoded to form a two-dimensional projection image. Is a method of generating a two-dimensional image by calculating the maximum density of pixels encountered along each ray of the scanned object. Specifically, when the fiber bundle passes through an original image of a section of tissue, the pixels with the highest density in the image are retained and projected onto a two-dimensional plane, thereby forming an MIP reconstruction image (referred to as an MIP map in the embodiment of the present invention). The MIP can reflect the X-ray attenuation value of the corresponding pixel, small density change can be displayed on the MIP image, and stenosis, expansion and filling defects of the blood vessel can be well displayed, and calcification on the blood vessel wall and contrast agents in the blood vessel cavity can be well distinguished.
It will be understood by those skilled in the art that the group of registered bright blood images actually forms three-dimensional volume data, and that this volume data can be projected in three predetermined directions by the MIP method described above to obtain a two-dimensional MIP map in each direction, where the three predetermined directions are the axial, coronal and sagittal directions.
For details of the MIP method, reference may be made to the prior art, which is not repeated here. Referring to fig. 2, fig. 2 is an exemplary MIP map according to an embodiment of the present invention.
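A minimal numpy sketch of maximum intensity projection of the registered bright blood volume along the three predetermined directions is shown below; it assumes the K registered slices are stacked into a 3-D array with axes ordered (slice, row, column), and which axis corresponds to which anatomical direction depends on how the slices were acquired.

```python
import numpy as np

def mip_three_directions(volume):
    """volume: 3-D array stacked as (slice, row, column).
    Returns the three maximum intensity projections."""
    mip_axial    = volume.max(axis=0)   # project along the slice direction
    mip_coronal  = volume.max(axis=1)   # project along the row direction
    mip_sagittal = volume.max(axis=2)   # project along the column direction
    return mip_axial, mip_coronal, mip_sagittal

# Usage sketch: volume = np.stack(list_of_registered_bright_images, axis=0)
```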
S52, taking the MIP maps in all directions as the target domain and the fundus blood vessel maps as the source domain, and obtaining the two-dimensional blood vessel segmentation maps corresponding to the MIP maps in all directions by a transfer learning method.
The inventors have found through research that the MIP map of the intracranial vascular bright blood sequence has a vascular-tree distribution similar to that of the fundus blood vessels. The inventors therefore migrate a model pre-trained on the fundus blood vessel (source domain) segmentation task into the intracranial blood vessel segmentation task by means of transfer learning, specifically feature-based transfer learning. Feature-based transfer learning assumes that the source domain and the target domain share some common cross features; the features of both domains are transformed by a feature transformation into the same space, in which the source-domain data and the target-domain data follow the same distribution, and conventional machine learning is then performed.
For S52, an optional implementation may include S521 to S523:
s521, obtaining a pre-trained target neural network aiming at the fundus blood vessel map segmentation task;
the target neural network is obtained by pre-training according to the fundus blood vessel map data set and the improved U-net network model.
As described above, the embodiment of the present invention migrates a model pre-trained on the fundus blood vessel (source domain) segmentation task into the intracranial blood vessel segmentation task by means of feature-based transfer learning. A mature network model for vessel segmentation of fundus blood vessel maps is therefore needed. Specifically, the target neural network may be obtained through the following steps:
step 1, obtaining an original network model;
in the embodiment of the invention, the structure of the existing U-net network model can be improved, and each sub-module of the U-net network model is respectively replaced by a residual module with a residual connection form, so that the improved U-net network model is obtained. According to the embodiment of the invention, the residual error module is introduced into the U-net network model, so that the problem that the training error does not decrease or inversely increase due to the disappearance of the gradient caused by the deepening of the layer number of the neural network can be effectively solved.
Step 2, obtaining sample data of the fundus blood vessel map;
embodiments of the present invention acquire a fundus angiogram dataset, the DRIVE dataset, which is a dataset that has been labeled.
And 3, training the original network model by using the sample data of the fundus blood vessel map to obtain the trained target neural network.
The following summary describes some parameter characteristics of the target neural network of embodiments of the present invention:
The improved U-net network model in the embodiment of the invention has 5 levels and forms a ladder network with 2.5M parameters. Each residual module uses a dropout rate of 0.25 (dropout means that neural-network units are temporarily dropped from the network with a certain probability during training of the deep learning network; generally the dropout rate may be set to 0.3-0.5). Batch Normalization (BN) is used: the variance and the mean position are adjusted by optimization so that the new distribution better fits the real distribution of the data, preserving the nonlinear expressive power of the model. The activation function is LeakyReLU, and the last layer of the network model is activated with Softmax.
Moreover, because medical image samples suffer from an uneven distribution of foreground and background, the loss function uses the Dice-coefficient loss commonly employed for medical image segmentation, and specifically an improved Dice loss, in order to overcome the instability of training with the plain Dice loss.
For neural network optimization, the Adam optimization algorithm with default parameters is adopted, and the batch size is 256. Training uses a "reduced learning rate" strategy: the learning rate is set to 0.01, 0.001, and 0.0001 starting at epochs 0, 20, and 150, respectively, and the total number of epochs is 250. Data augmentation is performed by random cropping, expanding the training samples of the DRIVE dataset by a factor of 20000.
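A minimal sketch of this optimization setup, assuming PyTorch: Adam with default parameters and a MultiStepLR scheduler that multiplies the learning rate by 0.1 at epochs 20 and 150, reproducing the 0.01 / 0.001 / 0.0001 schedule; the single convolution layer merely stands in for the full improved U-net.

```python
import torch

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in for the improved U-net
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # Adam with default parameters
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 150], gamma=0.1)

for epoch in range(250):  # 250 epochs in total
    # ... one training pass over the randomly cropped DRIVE patches (batch size 256) ...
    scheduler.step()      # lr: 0.01 -> 0.001 at epoch 20 -> 0.0001 at epoch 150
```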
The above briefly introduces the process of obtaining the target neural network; the trained target neural network can perform blood vessel segmentation on a fundus blood vessel map to obtain the corresponding two-dimensional blood vessel segmentation map.
S522, respectively carrying out gray inversion processing and contrast enhancement processing on the MIP images in all directions to obtain corresponding characteristic MIP images;
Feature-based transfer learning requires that the source domain (the fundus blood vessel image) and the target domain (the MIP image of the intracranial blood vessel bright blood sequence) be highly similar and share the same data distribution.
Therefore, in step S522, the MIP map is subjected to grayscale inversion processing and contrast enhancement processing to obtain the characteristic MIP map, which is closer to the fundus blood vessel image.
In an alternative embodiment, S522 may include S5221 and S5222:
S5221, carrying out pixel transformation on the MIP map by using a grayscale inversion formula to obtain an inversion map; wherein the grayscale inversion formula is t(x) = 255 - x, x is a pixel value in the MIP map, and t(x) is the corresponding pixel value in the inversion map;
This step can be understood simply as grayscale inversion. Since the pixel range of the MIP map is between 0 and 255, this step darkens the originally brighter regions and brightens the originally darker regions; specifically, the pixel transformation is performed with the grayscale inversion formula above. For the resulting inversion map, please refer to the left image in fig. 3, which is the inversion map corresponding to the MIP map in the embodiment of the present invention.
S5222, contrast of the inversion graph is enhanced by using a contrast-limited adaptive histogram equalization method, and a characteristic MIP graph is obtained.
The main purpose of this step is to enhance the contrast of the inversion map so as to show a clearer vascular structure. Any existing technique may be used to enhance the contrast; in an alternative embodiment, this step may employ Contrast Limited Adaptive Histogram Equalization (CLAHE). The CLAHE method itself can be understood with reference to the prior art and is not described further here. For the resulting characteristic MIP map, refer to the right image in fig. 3, which is the characteristic MIP map corresponding to the MIP map of the embodiment of the present invention. It can be seen that, compared with the inversion map, the contrast of the characteristic MIP map is significantly enhanced and the blood vessels are clearer.
After S5222, corresponding characteristic MIP maps can be obtained for the MIP maps in each direction.
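As a concrete illustration of S5221 and S5222, the sketch below inverts an 8-bit MIP image and then applies CLAHE with OpenCV; the clipLimit and tileGridSize values are illustrative assumptions rather than values taken from the embodiment.

```python
import cv2
import numpy as np

def to_feature_mip(mip):
    """Grayscale inversion t(x) = 255 - x followed by CLAHE contrast enhancement."""
    inverted = 255 - mip.astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters
    return clahe.apply(inverted)
```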
The embodiment of the invention exploits the cross features shared by the intracranial blood vessel bright blood sequence MIP map and the fundus blood vessel image; the MIP image features are therefore mapped toward the fundus blood vessel image by the feature-based transfer learning method, so that the intracranial blood vessel input samples and the fundus blood vessel input samples of the target neural network have the same sample distribution. S521 and S522 may be performed in either order.
S523, respectively inputting the feature MIP images in all directions into a target neural network to obtain corresponding two-dimensional blood vessel segmentation images;
and respectively inputting the characteristic MIP images of all directions into a target neural network to obtain a two-dimensional blood vessel segmentation image corresponding to each direction, wherein the obtained two-dimensional blood vessel segmentation image is a binary image, namely pixels are only 0 and 255, white represents a blood vessel, and black represents a background.
S53, synthesizing the two-dimensional vessel segmentation maps in the three directions by using a back projection method to obtain first three-dimensional vessel volume data;
the principle of the back projection method is to evenly distribute measured projection values to each passing point according to the original projection path, back-project the projection values in all directions, and accumulate the back-projected images at all angles to estimate the original image. By synthesizing the two-dimensional vessel segmentation maps in the three directions by using a back projection method, three-dimensional volume data can be obtained, which is referred to as first three-dimensional vessel volume data in the embodiment of the invention. The back projection method in the embodiment of the present invention may be a direct back projection method, a filtered back projection method, a convolution back projection method, and the like, which is not limited herein.
In the embodiment of the present invention, through pixel control in the back projection method, the voxel value of the blood vessel portion in the obtained first three-dimensional blood vessel volume data is 0, and the voxel value of the non-blood-vessel portion is minus infinity.
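As an illustration of this synthesis step, the sketch below performs a simple direct back projection: each binary segmentation map is smeared along its own projection axis over the volume, the three smears are intersected, and the stated voxel convention (0 for vessel, minus infinity elsewhere) is applied. The intersection rule and the axis/orientation mapping are assumptions made for illustration and are not the only way to accumulate the back projections.

```python
import numpy as np

def backproject_three_views(seg_xy, seg_xz, seg_yz, shape):
    """Direct back projection of three orthogonal binary segmentation maps
    (255 = vessel) into a volume of size shape = (X, Y, Z)."""
    X, Y, Z = shape
    v_xy = np.repeat((seg_xy > 0)[:, :, None], Z, axis=2)   # view projected along z
    v_xz = np.repeat((seg_xz > 0)[:, None, :], Y, axis=1)   # view projected along y
    v_yz = np.repeat((seg_yz > 0)[None, :, :], X, axis=0)   # view projected along x
    vessel = v_xy & v_xz & v_yz                              # agreement of all three views
    return np.where(vessel, 0.0, -np.inf)                    # 0 for vessel, -inf otherwise
```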
And S54, obtaining an intracranial blood vessel simulation three-dimensional model based on the first three-dimensional blood vessel volume data and the second three-dimensional blood vessel volume data corresponding to the registered bright blood image group.
In an alternative embodiment, S54 may include S541 and S542:
S541, adding the first three-dimensional blood vessel volume data and the second three-dimensional blood vessel volume data to obtain third three-dimensional blood vessel volume data;
In this step, each voxel value in the first three-dimensional blood vessel volume data may be directly added to the corresponding voxel value in the second three-dimensional blood vessel volume data to obtain the third three-dimensional blood vessel volume data; cerebrospinal fluid and fat signals whose intensity is the same as that of the intracranial blood vessel signal can be eliminated through this step.
And S542, processing the third three-dimensional blood vessel volume data by using a threshold segmentation method to obtain an intracranial blood vessel simulation three-dimensional model.
The threshold segmentation method is an image segmentation technology based on regions, and the principle is to divide image pixel points into a plurality of classes. The purpose of image thresholding is to divide the set of pixels by gray level, each resulting subset forming a region corresponding to the real scene, each region having consistent properties within it, while adjacent regions do not have such consistent properties. Threshold segmentation is a method for processing an image into a high-contrast, easily recognizable image with a proper pixel value as a boundary.
The threshold segmentation methods that may be adopted by the embodiment of the invention include the maximum inter-class variance method, maximum entropy, the iteration method, adaptive thresholding, manual selection, the basic global threshold method, and the like. In an alternative implementation manner, the embodiment of the present invention may adopt the maximum inter-class variance method.
The maximum inter-class variance method (referred to as OTSU for short) automatically calculates a threshold and is suitable for bimodal histograms. Performing S542 by using OTSU may include the following steps:
firstly, calculating a first threshold corresponding to centered fourth three-dimensional blood vessel volume data in third three-dimensional blood vessel volume data by using the OTSU;
in this step, one threshold corresponding to a plurality of images in one small cube (referred to as fourth three-dimensional blood vessel volume data) located near the middle of the large three-dimensional cube of the third three-dimensional blood vessel volume data is determined as a first threshold by using the OTSU method. Because the blood information is substantially concentrated in the middle of the image in the third three-dimensional blood vessel volume data, the small cube data (fourth three-dimensional blood vessel volume data) in the middle is selected to determine the first threshold value in the third three-dimensional blood vessel volume data, so that the calculation amount of the threshold value can be reduced, the calculation speed can be improved, and the first threshold value can be accurately applied to all the blood information in the third three-dimensional blood vessel volume data.
Regarding the size of the fourth three-dimensional blood vessel volume data, the central point of the third three-dimensional blood vessel volume data can be determined first, and a preset side length is then extended from it in the six directions of the cube to determine the extent of the fourth three-dimensional blood vessel volume data; the preset side length may be determined according to an empirical value that ensures the circle of Willis is included, such as 1/4 of the side length of the cube of the third three-dimensional blood vessel volume data. The circle of Willis is the most important collateral circulation pathway in the cranium, linking the bilateral hemispheres with the anterior and posterior circulations.
And then, threshold segmentation of the third three-dimensional blood vessel volume data is realized by utilizing the first threshold, and an intracranial blood vessel simulation three-dimensional model is obtained.
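A minimal sketch of this thresholding step, assuming the third three-dimensional blood vessel volume data is held as a NumPy array and that scikit-image is available; the central sub-cube uses half-extents of 1/8 of each dimension (a side of 1/4), following the empirical value mentioned above.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_with_central_otsu(volume3):
    """Estimate the threshold with Otsu's method on a central sub-cube, then
    threshold the whole volume to a 0/255 black-and-white result."""
    finite = np.where(np.isfinite(volume3), volume3, 0.0)       # treat -inf as background
    cx, cy, cz = (s // 2 for s in finite.shape)
    hx, hy, hz = (max(s // 8, 1) for s in finite.shape)         # half of a 1/4 side length
    sub = finite[cx - hx:cx + hx, cy - hy:cy + hy, cz - hz:cz + hz]
    t = threshold_otsu(sub)                                     # threshold from the sub-cube
    return np.where(finite > t, 255, 0).astype(np.uint8)
```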
It can be understood by those skilled in the art that, by threshold segmentation, the gray-scale value of each point in the image corresponding to the third three-dimensional blood vessel volume data can be set to 0 or 255, so that the whole image exhibits a distinct black-and-white effect: the blood information is highlighted as white and the irrelevant information is displayed as black. For the processing procedure of threshold segmentation, please refer to the prior art; it is not described here. The intracranial blood vessel simulation three-dimensional model is finally obtained. Referring to fig. 4, fig. 4 is a diagram illustrating the effect of the intracranial blood vessel simulation three-dimensional model according to the embodiment of the invention. The figure is gray-scale processed and the colors are not shown; in practice the vessel regions may be displayed in color, such as red.
The embodiment of the invention applies the research idea of transfer learning to the field of intracranial blood vessel segmentation, which can yield a more accurate blood vessel segmentation effect. The first three-dimensional blood vessel volume data is then obtained by the back projection method, and the intracranial blood vessel simulation three-dimensional model is realized together with the second three-dimensional blood vessel volume data corresponding to the registered bright blood image group. The intracranial blood vessel simulation three-dimensional model can simulate the intracranial three-dimensional blood vessel morphology and realizes three-dimensional visualization of the intracranial blood vessels, so that a doctor does not need to restore the vascular tissue structure, disease characteristics, and the like through imagination; it allows the doctor to observe and analyze the morphological characteristics of the intracranial blood vessels from any angle and level of interest, provides vivid three-dimensional spatial information of the intracranial blood vessels, facilitates intuitive observation, and facilitates locating and displaying a lesion area. The overall state of the intracranial blood vessels can thus be obtained simply, quickly, and intuitively in the clinic for analysis of intracranial vascular lesions.
S6, establishing a blood vessel three-dimensional model of blood boundary expansion by using the registered bright blood image group;
The blood three-dimensional model obtained in step S5 actually represents the flow direction and regional distribution of blood; since in practice the blood is surrounded by a blood vessel wall, the blood three-dimensional model cannot fully represent the real blood vessel situation.
Therefore, in step S6, the blood boundary in the registered bright blood image may be expanded to cover the range of the blood vessel wall, so as to form the effect of a hollow tube, and then a three-dimensional model is generated by using a three-dimensional reconstruction method on the two-dimensional image after expanding the blood boundary, so as to obtain a three-dimensional model of the blood vessel closer to the real blood vessel condition than the three-dimensional model of the blood in step S5.
The expansion of the blood boundary can be realized by detecting the blood boundary pixel points in the registered bright blood image and expanding the detected pixel points outward by a preset number of pixels in a preset direction, where the preset number of pixels may be selected according to empirical values obtained from a large amount of data on blood vessel diameter and blood vessel wall thickness. Of course, the manner of expanding the blood boundary in the embodiment of the present invention is not limited thereto.
In an alternative embodiment, S6 may include S61-S65:
S61, obtaining K bright blood characteristic graphs;
That is, the K bright blood feature maps obtained in step S32 are used here.
S62, expanding the boundary of the blood in each bright blood characteristic map by utilizing an expansion operation to obtain an expanded bright blood characteristic map corresponding to the bright blood characteristic map;
The dilation operation is one of the morphological operations. The basic idea of morphological operations is to use structuring elements to extract the image data of interest from an original image, remove irrelevant information, and retain the essential characteristics of the region of interest. Morphological operations are generally applied to binary images, are typically used for extracting connected regions or eliminating noise, and are widely applied in image processing. Common morphological operations are erosion, dilation, opening, and closing.
The dilation operation can fill holes in the image and expand the protruding points at the edge of an object outward, so that the dilated object has a larger area than the original object. The dilation operation may be denoted A ⊕ B and defined as A ⊕ B = { x | (B)_x ∩ A ≠ ∅ }, where B is the structuring element, A is the original image, and (B)_x denotes B translated to position x. The original image A here is a bright blood feature map, in which only two pixel values, 0 and 255, are present, 0 corresponding to black and 255 corresponding to white.
The structuring element is also called the kernel, and the kernel can be regarded as a convolution kernel. The dilation operation slides the kernel B over the original image A and takes the local maximum; the kernel B usually has an anchor point, which is usually located at its center. As the kernel scans the original image A, the maximum pixel value of the covered area is calculated and assigned to the anchor position. This maximization causes the bright areas in the picture to grow, hence the name dilation. Simply speaking, the kernel is translated over the original image from left to right and from top to bottom, and if any pixel covered by the kernel is white, the pixel at the anchor position becomes white.
The kernel can be rectangular, elliptical, or circular. Specifically, in the OpenCV function cv2.getStructuringElement(), the required kernel can be obtained by passing in the shape and size of the kernel.
In an alternative embodiment, the bright blood feature map may be dilated in multiple small steps with a circular kernel of radius 1 until the position of maximum gradient is reached, so as to determine the boundary of the outer vessel wall, realize segmentation of the vessel wall, and obtain the expanded bright blood feature map corresponding to the bright blood feature map. Since the blood vessel wall is tightly attached to the blood and is extremely thin, the expanded range is taken as the range of the blood vessel wall; this operation thus includes the vessel wall region adjacent to the blood as the search range for the contrast enhancement characteristics of the vessel wall.
The specific implementation process of the expansion operation can be referred to in the related art, and is not described herein.
S63, obtaining a difference feature map corresponding to the bright blood feature map by subtracting the bright blood feature map from its corresponding expanded bright blood feature map;
The difference feature map obtained by this step for each bright blood feature map is a two-dimensional plane image resembling a hollow blood vessel. Likewise, the pixel values of the difference feature map are only 0 and 255.
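The following sketch combines S62 and S63 with OpenCV: the binary bright blood feature map is dilated with a 3x3 elliptical (radius-1) kernel over a few iterations, and the original map is then subtracted from the dilated one, leaving the ring-shaped wall region. The fixed iteration count is an assumption made for illustration; the embodiment dilates step by step until the maximum-gradient position is reached.

```python
import cv2
import numpy as np

def difference_feature_map(bright_blood_map, steps=2):
    """Dilate the binary bright blood feature map and subtract the original,
    keeping only the assumed vessel-wall ring around the blood."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))   # approx. radius-1 disc
    dilated = cv2.dilate(bright_blood_map, kernel, iterations=steps)
    return cv2.subtract(dilated, bright_blood_map)                  # hollow-vessel map (0/255)
```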
S64, determining a third threshold;
In this step, a pixel value may be selected as the third threshold for all the difference feature maps according to empirical values; for example, any value between 100 and 200, such as 128, may be chosen.
And S65, taking the third threshold as the input threshold of the marching cubes method, and processing the K difference feature maps with the marching cubes method to obtain the blood vessel three-dimensional model with the expanded blood boundary.
Using the third threshold as the input threshold, the marching cubes method can reconstruct the blood vessel three-dimensional model with the expanded blood boundary from the K difference feature maps. The specific implementation of the marching cubes method is not described here.
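A minimal sketch of S65, assuming scikit-image is available: the K difference feature maps are stacked into a volume and the vessel-wall surface is extracted with marching cubes, using the third threshold (for example 128) as the iso-level.

```python
import numpy as np
from skimage import measure

def wall_surface_from_difference_maps(diff_maps, level=128):
    """Stack the K difference feature maps (values 0/255) and extract the
    iso-surface at the third threshold with marching cubes."""
    volume = np.stack(diff_maps, axis=0).astype(np.float32)    # shape (K, H, W)
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces
```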
S7, establishing a contrast enhanced three-dimensional model by using the K contrast enhanced images;
This step can be implemented with the marching cubes method; see S5 and S6, which are not repeated here.
And S8, obtaining an intracranial vascular enhancement three-dimensional model based on the blood three-dimensional model, the vascular three-dimensional model and the contrast enhancement three-dimensional model.
In an alternative embodiment, S8 may include the following steps:
S81, reserving the overlapped part of the contrast enhanced three-dimensional model and the blood vessel three-dimensional model to obtain a reserved contrast enhanced three-dimensional model;
The contrast-enhanced three-dimensional model obtained in S7 does not contain only the contrast enhancement of blood vessels, so the enhancement characteristics of unrelated tissues must be excluded. The search range of the vessel-wall contrast enhancement characteristics in the blood vessel three-dimensional model obtained in S6 is therefore used to judge whether the contrast-enhanced three-dimensional model obtained in S7 lies in the vessel wall region adjacent to the blood, that is, whether the contrast-enhanced three-dimensional model has a portion overlapping the blood vessel three-dimensional model. If so, the overlapping portion lies within the search range and needs to be retained, yielding the reserved contrast-enhanced three-dimensional model.
And S82, fusing the reserved contrast enhanced three-dimensional model with the blood three-dimensional model to obtain the intracranial vascular enhanced three-dimensional model.
The reserved contrast-enhanced three-dimensional model, which represents contrast enhancement, is fused with the blood three-dimensional model, which represents the blood information. The vessel wall with obvious contrast enhancement can thus be displayed intuitively, and it can be clearly seen in which part of the blood vessel the contrast enhancement effect is most obvious; atherosclerosis or vulnerable plaque may be present in that region.
In an optional embodiment, a contrast-enhanced quantitative analysis may be obtained in the angiography-enhanced three-dimensional model, and specifically, a plaque enhancement index CE may be obtained for any one point on a blood vessel wall in the angiography-enhanced three-dimensional model, where CE is defined as:
CE = (S_postBBMR - S_preBBMR) / S_preBBMR
wherein S_preBBMR and S_postBBMR are the signal intensities in the black blood image and in the contrast-enhanced black blood image, respectively.
As will be understood by those skilled in the art, S_preBBMR and S_postBBMR are carried in the black blood image and the contrast-enhanced black blood image acquired earlier, respectively. The plaque enhancement index CE of each point on the edge of the vessel wall is obtained from this information and embodied in the angiography-enhanced three-dimensional model, which allows a doctor to obtain more detailed vessel information; in particular, a CE greater than a plaque threshold, such as 0.5, indicates that plaque appears on the vessel wall. Measuring the plaque enhancement index CE of the vessel wall region therefore helps identify the culprit arterial plaque and the like, and can provide valuable auxiliary diagnostic information.
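A small sketch of this quantitative analysis, assuming the commonly used enhancement-index definition given above and that the pre- and post-contrast black blood signal intensities are sampled at the same vessel-wall points:

```python
import numpy as np

def plaque_enhancement_index(s_pre, s_post, plaque_threshold=0.5):
    """Per-point plaque enhancement index CE = (S_post - S_pre) / S_pre and a
    mask of points exceeding the plaque threshold (0.5 in the text above)."""
    s_pre = s_pre.astype(np.float64)
    s_post = s_post.astype(np.float64)
    ce = np.divide(s_post - s_pre, s_pre, out=np.zeros_like(s_pre), where=s_pre > 0)
    return ce, ce > plaque_threshold
```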
The fusion technique of the two three-dimensional models can be implemented by using the prior art, and is not described herein.
In the scheme provided by the embodiment of the invention, first, the bright blood images and the enhanced black blood images obtained by magnetic resonance angiography scanning are registered with a registration method based on mutual information and an image pyramid, which improves the registration efficiency and raises the registration accuracy layer by layer from low resolution to high resolution; through this registration, the bright blood image and the enhanced black blood image are unified in the same coordinate system. Second, the registered bright blood images are used to remove the flow-space artifact from the enhanced black blood images, so that more accurate and comprehensive blood vessel information can be displayed. The scheme of the embodiment eliminates the flow-space artifact from the angle of image post-processing, without resorting to a new imaging technology, imaging mode, or pulse sequence; the artifact can therefore be removed simply, accurately, and quickly, and the scheme can be well popularized in clinical application. Third, a blood three-dimensional model is established from the registered bright blood images by a transfer learning method, a blood vessel three-dimensional model with an expanded blood boundary is established from the registered bright blood images, and a contrast-enhanced three-dimensional model reflecting the contrast enhancement effect is obtained by subtracting the black blood images from the artifact-removed enhanced black blood images. Finally, the angiography-enhanced three-dimensional model of the vessel wall with the contrast enhancement effect is obtained from the blood three-dimensional model, the blood vessel three-dimensional model, and the contrast-enhanced three-dimensional model. The angiography-enhanced three-dimensional model realizes three-dimensional visualization of the blood vessels; a doctor no longer needs to restore the vascular tissue structure, disease characteristics, and the like through imagination, can observe and analyze the morphological characteristics of the blood vessels from any angle and level of interest, is provided with vivid three-dimensional spatial information of the blood vessels, can intuitively view the vessel walls with obvious contrast enhancement, and can conveniently locate and display a lesion area. Real blood vessel information can thus be obtained simply, conveniently, and quickly in clinical application for analysis of vascular lesions.
The implementation process and the implementation effect of the method for establishing the enhanced three-dimensional model of the intracranial blood vessel provided by the embodiment of the invention are described in detail below. The implementation process can comprise the following steps:
acquiring a bright blood image group, a black blood image group and an enhanced black blood image group of an intracranial vascular part;
secondly, aiming at each bright blood image in the bright blood image group, carrying out image registration by using a registration method based on mutual information and an image pyramid by taking a corresponding enhanced black blood image in the enhanced black blood image group as a reference to obtain a registered bright blood image group comprising K registered bright blood images;
the step may include:
Preprocessing each bright blood image and the corresponding enhanced black blood image to obtain a first bright blood image and a first black blood image; the preprocessing can be divided into two main steps:
(1) pre-registration:
Because the intracranial blood vessels can be regarded as a rigid body, rigid-body transformation is selected as the coordinate transformation method in this step. For the specific pre-registration process, see step S211, which is not repeated here.
The embodiment of the invention carries out simulation experiment on the image interpolation method of the bright blood image, reduces the original image by 50%, then obtains an effect image with the same size as the original image by using different interpolation algorithms, and compares the effect image with the original image. The data shown in table 1 is the average value of the results of repeating interpolation operation for 100 times, and 5 evaluation indexes, namely root mean square error RMSE, peak signal-to-noise ratio PSNR, normalized cross-correlation coefficient NCC, normalized mutual information NMI and Time consumption Time, are set in the experiment, wherein the smaller the RMSE, the more accurate the registration, and the higher the PSNR, NCC and NMI values, the more accurate the registration. From the whole experimental data, the precision of bicubic interpolation is obviously better than that of nearest neighbor interpolation and bilinear interpolation, although the interpolation time of bicubic interpolation is slower than that of the former two methods, the interpolation operation of 100 times is only 0.1 second more than that of the fastest nearest neighbor interpolation, namely, each operation is only 0.001 second slower. Therefore, in a trade-off, embodiments of the present invention employ bicubic interpolation with higher image quality.
TABLE 1 analysis of image interpolation results
In the embodiment of the invention, the intracranial blood vessels can be regarded as a rigid body that hardly deforms, whereas organs such as the heart or lungs move with respiration and other body motion; compared with other types of blood vessels, the intracranial blood vessels are therefore particularly well suited to using mutual information as the similarity measure to achieve a more accurate registration effect.
In the experiment, the registration result of the image registered with the (1+1)-ES optimizer is accurate, and the misaligned shadowed part in the image completely disappears. The data shown in Table 2 are three evaluation indexes of the registration result, namely normalized mutual information NMI, normalized cross-correlation coefficient NCC, and algorithm time consumption Time. From the experimental result images, the registered image obtained with (1+1)-ES is displayed more clearly and is better than that of the gradient descent optimizer; from the experimental data, all three evaluation indexes reflect the good performance of the (1+1)-ES optimizer, so the embodiment of the invention uses (1+1)-ES as the search strategy.
TABLE 2 analysis of results under different search strategies
a. The values in the table are the mean ± mean square error of the evaluation indexes over the registration of 160 bright blood images and 160 enhanced black blood images.
Referring to fig. 5, fig. 5 is a diagram illustrating the result of pre-registering the intracranial vascular magnetic resonance image according to the embodiment of the invention. The left image is a pre-registered first bright blood image, wherein the interpolation method adopts bicubic interpolation; the middle image is an enhanced black blood image, both images are coronal planes, the right image is an effect image obtained by directly superimposing the two images, and the right image shows that although the bright blood image and the enhanced black blood image under the current imaging layer can be observed under the same coronal plane after pre-registration, the bright blood image and the enhanced black blood image are still misaligned, so that subsequent image fine registration is required.
(2) Unified scanning area:
the same area content as the scanning range of the first bright blood image is extracted from the enhanced black blood image to form a first black blood image. For details, refer to step S212, which is not described herein.
Referring to fig. 6, fig. 6 is a schematic diagram of a region to be registered of an intracranial vascular magnetic resonance image according to an embodiment of the invention; the left image is a first bright blood image after pre-registration, the right image is an enhanced black blood image, and the square frame is an area to be extracted in the enhanced black blood image. The region contains the common scanning range of a bright blood sequence and a black blood sequence in an intracranial vascular magnetic resonance image, and useful information can be focused more quickly by determining the region to be extracted.
(II) after the preprocessing, performing image registration on the first bright blood image and the first black blood image by using a registration method based on mutual information and an image pyramid, as described in the foregoing in relation to steps S22-S27. The method specifically comprises the following steps:
obtaining a bright blood Gaussian pyramid from the first bright blood image based on downsampling processing, and obtaining a black blood Gaussian pyramid from the first black blood image;
the bright blood Gaussian pyramid and the black blood Gaussian pyramid comprise 4 images with resolution becoming smaller from bottom to top in sequence; the generation process of the bright blood gaussian pyramid and the black blood gaussian pyramid is referred to in the foregoing S22, and is not described herein again. As shown in fig. 7(a), fig. 7(a) is a bright blood gaussian pyramid and a black blood gaussian pyramid of an intracranial vascular magnetic resonance image according to an embodiment of the present invention.
A set of images derived from the same image at gradually decreasing resolutions, arranged from bottom to top, resembles a pyramid and is therefore referred to as an image pyramid; the highest-resolution image is located at the bottom of the pyramid and the lowest-resolution image at the top. For image information processing, such multi-resolution images make it easier to capture the essential characteristics of an image than a traditional single-resolution image.
Based on the upsampling processing, utilizing the bright blood Gaussian pyramid to obtain a bright blood Laplacian pyramid, and utilizing the black blood Gaussian pyramid to obtain a black blood Laplacian pyramid;
the bright blood Laplacian pyramid and the black blood Laplacian pyramid comprise 3 images of which the resolutions are sequentially reduced from bottom to top; the generation process of the bright blood laplacian pyramid and the black blood laplacian pyramid is referred to as S23, and is not described herein again. As shown in fig. 7(b), fig. 7(b) is a bright blood laplacian pyramid and a black blood laplacian pyramid of an intracranial vascular magnetic resonance image according to an embodiment of the present invention. The image display uses gamma correction to achieve a clearer effect, and the gamma value is 0.5.
Registering images of corresponding layers in the bright blood Laplacian pyramid and the black blood Laplacian pyramid to obtain a registered bright blood Laplacian pyramid;
in the step, the image in the black blood laplacian pyramid is used as a reference image, the image in the bright blood laplacian pyramid is used as a floating image, image registration is respectively carried out on the enhanced black blood image of each layer and the bright blood image of the corresponding layer, mutual information is used as similarity measurement of the two images, a (1+1) -ES is selected as a search strategy, after coordinate transformation is carried out on each image registration, the mutual information of the two images is circularly and iteratively calculated until the mutual information reaches the maximum, and the image registration is completed. See the foregoing S24 for details, which are not described herein.
As shown in fig. 8, fig. 8 is the registration result of the Laplacian pyramid images of an intracranial vascular magnetic resonance image according to an embodiment of the present invention: the left image is the reference image in the black blood Laplacian pyramid, the middle image is the registered image in the bright blood Laplacian pyramid, and the right image is the effect image obtained by directly superimposing the left and middle images. The superimposed image is displayed as a montage, and the enhanced black blood image and the bright blood image are shown with pseudo-color transparency processing, where purple is the enhanced black blood Laplacian pyramid image and green is the bright blood Laplacian pyramid image (the figure is a gray-scale rendering of the original, so the colors are not shown).
Fourthly, registering the images of each layer in the bright blood Gaussian pyramid and the black blood Gaussian pyramid from top to bottom by using the registered bright blood Laplacian pyramid as superposition information to obtain a registered bright blood Gaussian pyramid;
referring to the foregoing step S25, the specific steps of mutual information based gaussian pyramid image registration are shown in fig. 9, and fig. 9 is a schematic diagram of mutual information based gaussian pyramid image registration steps of an intracranial vascular magnetic resonance image according to an embodiment of the present invention. Firstly, registering the low-resolution black blood Gaussian image of the top layer and the low-resolution bright blood Gaussian image of the top layer based on mutual information; then, performing up-sampling operation on the registered bright blood Gaussian image, and adding the up-sampled bright blood Gaussian image and the bright blood Laplacian image of the corresponding layer which retains high-frequency information and is registered according to the operation to be used as a next layer of bright blood Gaussian image; and then, taking the bright blood Gaussian image obtained by the operation as an input image, registering the input image with the black blood Gaussian image of the corresponding layer, and repeating the operation until the high-resolution registration of the bottom layer Gaussian pyramid image is completed.
In the registration of Gaussian pyramid images based on mutual information, the registration of each layer of bright blood Gaussian image and black blood Gaussian image is carried out by taking normalized mutual information as similarity measurement, and the NMI of the two images is calculated through loop iteration until the NMI reaches the maximum. Fig. 10 is normalized mutual information under different iteration times of the embodiment of the present invention, and when the registration of the first-layer image, that is, the bottom-layer image with the highest resolution in the gaussian pyramid reaches the maximum NMI value and the data is stable, the iteration is stopped.
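The normalized mutual information used as the similarity measure can be computed from the joint gray-level histogram of the two images; the sketch below uses the common definition NMI = (H(A) + H(B)) / H(A, B), with the bin count chosen as an illustrative assumption.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI from the joint histogram: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    eps = 1e-12                                   # avoid log(0)
    h_a = -np.sum(px * np.log(px + eps))
    h_b = -np.sum(py * np.log(py + eps))
    h_ab = -np.sum(pxy * np.log(pxy + eps))
    return (h_a + h_b) / h_ab
```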
In addition, in order to verify the effectiveness and practicability of the image registration method based on mutual information and an image pyramid, a comparison experiment was also carried out using intracranial vascular magnetic resonance images of five patients: patients A, B, C, and D each have 160 enhanced black blood images and 160 bright blood images, and patient E has 150 of each. An algorithm that uses only the DICOM image orientation label information for registration and a registration algorithm based on the mutual information metric were selected for comparison with the registration method based on mutual information and an image pyramid; the algorithm based on the mutual information metric searches for the optimal transformation between the reference image and the floating image by a multi-parameter optimization method so that the mutual information of the two images is maximized, but does not use the image pyramid algorithm.
The experimental platform was Matlab R2016b. The image registration results of the experiment were analyzed both qualitatively and quantitatively. For the qualitative analysis, because there is a large gray-level difference between multi-modal medical images, a difference image obtained by subtracting the registered image from the reference image cannot effectively reflect the registration result; the embodiment of the present invention therefore overlays the registered image on the reference image to obtain a color overlay image that reflects their degree of alignment, and qualitatively analyzes the registration effect of each multi-modal registration algorithm through this overlay. Fig. 11 shows the registration results of the intracranial vascular magnetic resonance images for the different registration methods, wherein (a) is the reference image; (b) is the floating image; (c) is the overlay image based on image orientation label information; (d) is the overlay image based on the mutual information metric; and (e) is the overlay image of the image registration method based on mutual information and an image pyramid according to the invention. The figures are gray-scale renderings of the originals, so the colors are not shown. For the quantitative analysis, since the evaluation indexes root mean square error RMSE and peak signal-to-noise ratio PSNR are not suitable for evaluating images with large gray-level changes, the normalized cross-correlation coefficient NCC and the normalized mutual information NMI are adopted as evaluation indexes in order to better evaluate the registration result of multi-modal medical images; the larger the values of NCC and NMI, the higher the image registration accuracy. Table 3 gives the analysis of the evaluation index results of the different registration algorithms.
TABLE 3 analysis of the results of different registration methods
a. The values in the table are the mean ± mean square error of the evaluation indexes based on the registration of multiple images of each patient.
Qualitative analysis: as is apparent from the overlay images of fig. 11, the method based only on the mutual information metric shows a large registration shift; a possible reason is that using the mutual information metric alone easily falls into a local rather than a global optimum. The registration effect based on the image orientation label information is not good enough, and parts of the images do not overlap. The registration method based on mutual information and an image pyramid gives a good image effect: the image is displayed more clearly and the images almost completely overlap.
Quantitative analysis: as can be seen from table 3, from the two evaluation indexes NCC and NMI, compared with the registration algorithm using only the orientation tag information of the DICOM image and the registration algorithm based on the mutual information metric, the registration method based on the mutual information and the image pyramid provided by the embodiment of the present invention has improved registration accuracy, and can well process the registration of the multi-modal intracranial vascular magnetic resonance image.
Obtaining a registered bright blood image corresponding to the bright blood image based on the registered bright blood Gaussian pyramid;
and acquiring a bottom layer image in the registered bright blood Gaussian pyramid as a registered bright blood image, and taking the registered bright blood image and the corresponding enhanced black blood image as a registered image pair.
Sixth, the registered bright blood images corresponding to the K bright blood images respectively form the registered bright blood image group.
In the embodiment of the invention, an image registration method based on mutual information and an image pyramid is used for registering the magnetic resonance bright blood image and the enhanced black blood image, the correlation of gray information is considered in the registration process, the registration efficiency is improved by using the Gaussian pyramid, the image is from low resolution to high resolution, and the registration accuracy is improved layer by layer.
Thirdly, performing flow-space artifact removing operation on the enhanced black blood image in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group comprising K target enhanced black blood images; see in detail the previous step S3.
Firstly, aiming at each post-registration bright blood image, the contrast of the post-registration bright blood image is improved by utilizing gray scale linear transformation to obtain a contrast enhanced bright blood image. As shown in fig. 12, fig. 12 is a graph of the result of the gray scale linear transformation according to the embodiment of the present invention. The left image is the bright blood image after registration, the right image is the result image after gray scale linear transformation, and it can be seen that the contrast of the blood part in the right image is obviously enhanced compared with the surrounding pixels.
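A minimal sketch of such a gray-scale linear transformation, assuming a percentile-based stretch to the full 0-255 range; the exact linear mapping used in the embodiment is not reproduced here.

```python
import numpy as np

def linear_contrast_stretch(image, low_pct=1, high_pct=99):
    """Stretch the chosen percentile range of the registered bright blood image
    linearly onto 0-255; percentile choices are illustrative assumptions."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image.astype(np.float32) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```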
Secondly, extracting blood information from the contrast enhanced bright blood image to obtain a bright blood characteristic diagram;
the step adopts the maximum inter-class variance method OTSU, and the result is shown in FIG. 13, FIG. 13 is an image binarization result diagram of the embodiment of the invention; and the left image is a contrast enhanced bright blood image, and the right image is blood information after the contrast enhanced bright blood image is subjected to threshold extraction. It can be seen that the portion of the right image that appears bright is only blood related information.
And thirdly, carrying out image fusion on the bright blood characteristic image and the enhanced black blood image corresponding to the bright blood image after registration according to a preset fusion formula to obtain a target enhanced black blood image with the flow space artifact eliminated corresponding to the enhanced black blood image.
The specific steps are not repeated; the comparison result can be seen in fig. 14, which is the flow-space artifact removal result for an intracranial blood vessel according to an embodiment of the present invention. The left image is the original enhanced black blood image, and the right image is the enhanced black blood image after the flow-space artifact is eliminated; the flow-space artifact appears at the position indicated by the arrow, and compared with the left image, the artifact is clearly eliminated in the right image.
Finally, the target enhanced black blood images corresponding to the K enhanced black blood images form the artifact-removed enhanced black blood image group.
Subtracting the corresponding images in the artifact-removed enhanced black blood image group and the black blood image group to obtain K contrast enhanced images;
fifthly, establishing a blood three-dimensional model by using the registered bright blood image group and adopting a transfer learning method;
step six, establishing a blood vessel three-dimensional model of blood boundary expansion by using the registered bright blood image group;
establishing a contrast enhanced three-dimensional model by using the K contrast enhanced images;
and step eight, obtaining an intracranial vascular enhancement three-dimensional model based on the blood three-dimensional model, the vascular three-dimensional model and the contrast enhancement three-dimensional model.
The detailed process of step four to step eight is not described again.
FIG. 15 is a diagram of the effect of the intracranial vascular enhancement three-dimensional model according to an embodiment of the invention. In fig. 15, the bright portion inside the white circle is an intracranial vascular region where contrast enhancement occurs, that is, where the diseased state of intracranial atherosclerosis or vulnerable plaque may be present, and the remaining portion is the vascular region without contrast enhancement. In practice, the two may be distinguished by different colors, for example blue for the vascular region without contrast enhancement and red for the vascular region with contrast enhancement. The angiography-enhanced three-dimensional model also supports basic functions such as rotation, zooming in, and zooming out, thereby assisting the doctor in locating the lesion area and making a more accurate judgment.
In the scheme provided by the embodiment of the invention, three-dimensional visualization of the intracranial blood vessels is realized; a doctor does not need to restore the vascular tissue structure, disease characteristics, and the like through imagination, can observe and analyze the morphological characteristics of the blood vessels from any angle and level of interest, is provided with vivid three-dimensional spatial information of the blood vessels, can intuitively view the vessel walls with obvious contrast enhancement, and can conveniently locate and display a lesion area. Real blood vessel information can thus be obtained simply, conveniently, and quickly in clinical application for analysis of vascular lesions.

Claims (10)

1. A method for establishing an intracranial vascular enhancement three-dimensional model based on transfer learning is characterized by comprising the following steps:
acquiring a bright blood image group, a black blood image group and an enhanced black blood image group of an intracranial vascular site; the bright blood image group, the black blood image group and the enhanced black blood image group respectively comprise K bright blood images, black blood images and enhanced black blood images; the images in the bright blood image group, the black blood image group and the enhanced black blood image group are in one-to-one correspondence; k is a natural number greater than 2;
aiming at each bright blood image in the bright blood image group, carrying out image registration by using a registration method based on mutual information and an image pyramid by taking a corresponding enhanced black blood image in the enhanced black blood image group as a reference to obtain a registered bright blood image group comprising K registered bright blood images;
performing flow-space artifact removing operation on the enhanced black blood images in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group comprising K target enhanced black blood images;
subtracting the corresponding image in the artifact removal enhanced black blood image group from the corresponding image in the black blood image group to obtain K contrast enhanced images;
establishing a blood three-dimensional model by using the registered bright blood image group and adopting a transfer learning method;
establishing a blood vessel three-dimensional model of blood boundary expansion by using the registered bright blood image group;
establishing a contrast enhanced three-dimensional model by using the K contrast enhanced images;
and obtaining an intracranial vascular enhancement three-dimensional model based on the blood three-dimensional model, the vascular three-dimensional model and the contrast enhancement three-dimensional model.
2. The method according to claim 1, wherein the performing image registration for each of the group of bright blood images by using a registration method based on mutual information and an image pyramid with reference to a corresponding enhanced black blood image in the group of enhanced black blood images to obtain a group of registered bright blood images including K registered bright blood images comprises:
preprocessing each bright blood image and the corresponding enhanced black blood image to obtain a first bright blood image and a first black blood image;
based on downsampling processing, obtaining a bright blood Gaussian pyramid from the first bright blood image, and obtaining a black blood Gaussian pyramid from the first black blood image; the bright blood Gaussian pyramid and the black blood Gaussian pyramid comprise m images with resolution becoming smaller in sequence from bottom to top; m is a natural number greater than 3;
based on the upsampling processing, obtaining a bright blood Laplacian pyramid by using the bright blood Gaussian pyramid, and obtaining a black blood Laplacian pyramid by using the black blood Gaussian pyramid; the bright blood Laplacian pyramid and the black blood Laplacian pyramid comprise m-1 images with resolution which is sequentially reduced from bottom to top;
registering images of corresponding layers in the bright blood Laplacian pyramid and the black blood Laplacian pyramid to obtain a registered bright blood Laplacian pyramid;
registering the images of all layers in the bright blood Gaussian pyramid and the black blood Gaussian pyramid from top to bottom by using the registered bright blood Laplacian pyramid as superposition information to obtain a registered bright blood Gaussian pyramid;
obtaining a registered bright blood image corresponding to the bright blood image based on the registered bright blood Gaussian pyramid;
and obtaining a group of registered bright blood images by the registered bright blood images corresponding to the K bright blood images respectively.
3. The method according to claim 1 or 2, wherein the preprocessing each bright blood image and the corresponding enhanced black blood image to obtain a first bright blood image and a first black blood image comprises:
for each bright blood image, taking the corresponding enhanced black blood image as a reference, performing coordinate transformation and image interpolation on the bright blood image, and obtaining a pre-registered first bright blood image by using a similarity measurement based on mutual information and a preset search strategy;
and extracting the same area content as the scanning range of the first bright blood image from the corresponding enhanced black blood image to form a first black blood image.
4. The method of claim 3, wherein the registering images of corresponding layers of the Laplacian pyramid with bright blood and the Laplacian pyramid with black blood to obtain a registered Laplacian pyramid with bright blood comprises:
aiming at each layer of the bright blood Laplacian pyramid and the black blood Laplacian pyramid, taking a corresponding black blood Laplacian image of the layer as a reference image, taking a corresponding bright blood Laplacian image of the layer as a floating image, and realizing image registration by using a similarity measure based on mutual information and a preset search strategy to obtain a registered bright blood Laplacian image of the layer;
forming a registered Laplacian pyramid of the bright blood from bottom to top according to the sequence of the sequential reduction of the resolution by the registered multilayer Laplacian images of the bright blood;
the black blood laplacian image is an image in the black blood laplacian pyramid, and the bright blood laplacian image is an image in the bright blood laplacian pyramid.
5. The method according to claim 4, wherein the registering the images of all layers in the bright blood Gaussian pyramid and the black blood Gaussian pyramid from top to bottom by using the registered bright blood Laplacian pyramid as superposition information to obtain the registered bright blood Gaussian pyramid comprises:
for the j-th layer from top to bottom in the bright blood Gaussian pyramid and the black blood Gaussian pyramid, taking the black blood Gaussian image corresponding to the layer as a reference image, taking the bright blood Gaussian image corresponding to the layer as a floating image, and using similarity measurement based on mutual information and a preset search strategy to realize image registration to obtain a registered j-th layer bright blood Gaussian image;
performing upsampling operation on the registered jth layer of bright blood Gaussian image, adding the upsampled operation to the registered corresponding layer of bright blood Laplacian image, and replacing the jth +1 layer of bright blood Gaussian image in the bright blood Gaussian pyramid by using the added image;
taking the black blood Gaussian image of the j +1 th layer as a reference image, taking the replaced bright blood Gaussian image of the j +1 th layer as a floating image, and using a preset similarity measure and a preset search strategy to realize image registration to obtain a registered bright blood Gaussian image of the j +1 th layer;
wherein j is 1, 2, …, m-1, the black blood gaussian image is an image in the black blood gaussian pyramid, and the bright blood gaussian image is an image in the bright blood gaussian pyramid.
6. The method according to claim 1 or 5, wherein the performing a flow-space artifact removing operation on the enhanced black blood images in the enhanced black blood image group by using the registered bright blood image group to obtain an artifact-removed enhanced black blood image group comprising K target enhanced black blood images comprises:
for each post-registration bright blood image, improving the contrast of the post-registration bright blood image to obtain a contrast enhanced bright blood image;
extracting blood information from the contrast enhanced bright blood image to obtain a bright blood characteristic diagram;
carrying out image fusion on the bright blood characteristic graph and the enhanced black blood image corresponding to the registered bright blood image according to a preset fusion formula to obtain a target enhanced black blood image with the flow-space artifact removed corresponding to the enhanced black blood image;
and enhancing the black blood image by using the targets corresponding to the K enhanced black blood images to obtain an artifact-eliminated enhanced black blood image group.
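A minimal sketch of claim 6's pipeline for one image pair is given below. The patent's preset fusion formula is not disclosed in these claims, so the weighted combination in the last step is purely an assumed placeholder; blood_map_fn stands for the claim-7 extraction step, and all names are illustrative.

```python
import numpy as np

def remove_air_artifacts(bright_reg, black_enh, blood_map_fn, alpha=0.7):
    """bright_reg: registered bright blood image; black_enh: matching enhanced black blood image.
    blood_map_fn: claim-7 step returning a bright blood feature map with values in [0, 1]."""
    # 1) contrast enhancement of the registered bright blood image (simple percentile stretch)
    lo, hi = np.percentile(bright_reg, (1, 99))
    bright_ce = np.clip((bright_reg - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    # 2) bright blood feature map extracted from the contrast-enhanced image
    blood_map = blood_map_fn(bright_ce)
    # 3) placeholder fusion (NOT the patent's preset formula, which these claims do not give):
    #    raise black blood intensities in proportion to how strongly the feature map marks blood
    black = black_enh.astype(np.float32)
    return (1.0 - alpha * blood_map) * black + alpha * blood_map * black.max()
```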
7. The method of claim 6, wherein extracting blood information from the contrast-enhanced bright blood image to obtain a bright blood feature map comprises:
determining a first threshold by using a preset image binarization method;
extracting blood information from the contrast-enhanced bright blood image by using the first threshold;
and obtaining the bright blood feature map from the extracted blood information.
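Claim 7's extraction step can be sketched as below, assuming Otsu's method as the preset image binarization method (the claim leaves the method open); the returned map keeps the contrast-enhanced intensities inside the thresholded blood region. Names are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def extract_bright_blood_map(bright_ce):
    """bright_ce: contrast-enhanced bright blood image with float values in [0, 1]."""
    t = threshold_otsu(bright_ce)                       # first threshold from a binarization method
    blood_mask = bright_ce > t                          # blood appears bright in bright blood imaging
    feature_map = np.where(blood_mask, bright_ce, 0.0)  # keep intensities only inside blood regions
    return feature_map.astype(np.float32)
```

A function of this shape can also serve as the blood_map_fn argument of the claim-6 sketch.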
8. The method of claim 1 or 7, wherein establishing the blood three-dimensional model from the registered bright blood image group by using a transfer learning method comprises:
projecting the registered bright blood image group in three preset directions by maximum intensity projection to obtain a maximum intensity projection (MIP) image for each direction;
taking the MIP images of the respective directions as target domains and a fundus blood vessel image as a source domain, and obtaining, by a transfer learning method, a two-dimensional blood vessel segmentation map corresponding to the MIP image of each direction;
synthesizing the two-dimensional blood vessel segmentation maps of the three directions by back projection to obtain first three-dimensional blood vessel volume data, wherein voxels of the blood vessel part of the first three-dimensional blood vessel volume data have a value of 0 and voxels of the non-vessel part have a value of minus infinity;
and obtaining the blood three-dimensional model based on the first three-dimensional blood vessel volume data and second three-dimensional blood vessel volume data corresponding to the registered bright blood image group.
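The projection steps of claim 8 can be sketched as follows, assuming the three preset directions are the volume's coordinate axes; the second three-dimensional blood vessel volume data and the final combination into the blood three-dimensional model are not shown. segment_fn stands for the transfer-learning segmentation of claim 9, and all names are illustrative.

```python
import numpy as np

def build_first_volume(bright_volume, segment_fn):
    """bright_volume: 3-D array (z, y, x) stacked from the registered bright blood images.
    segment_fn: returns a binary 2-D vessel segmentation map for one MIP image (claim 9)."""
    mips = [bright_volume.max(axis=a) for a in range(3)]      # MIPs along the z, y and x axes
    seg = [segment_fn(m).astype(bool) for m in mips]
    # back projection: a voxel counts as vessel only if every projection maps it to a vessel pixel
    vessel = (seg[0][None, :, :]      # z-projection, shape (y, x)
              & seg[1][:, None, :]    # y-projection, shape (z, x)
              & seg[2][:, :, None])   # x-projection, shape (z, y)
    return np.where(vessel, 0.0, -np.inf)   # vessel voxels 0, non-vessel voxels minus infinity
```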
9. The method of claim 8, wherein taking the MIP images of the respective directions as target domains and the fundus blood vessel image as the source domain, and obtaining, by the transfer learning method, the two-dimensional blood vessel segmentation map corresponding to the MIP image of each direction comprises:
obtaining a target neural network pre-trained for the fundus blood vessel map segmentation task, the target neural network being pre-trained on a fundus blood vessel map data set with an improved U-net network model;
performing gray-level inversion and contrast enhancement on the MIP image of each direction to obtain a corresponding characteristic MIP image, wherein the characteristic MIP image has the same sample distribution as the fundus blood vessel map;
and inputting the characteristic MIP image of each direction into the target neural network to obtain the corresponding two-dimensional blood vessel segmentation map.
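Claim 9's inference path can be sketched as below: gray-level inversion and a simple contrast stretch bring the MIP image closer to the appearance of fundus vessel images, after which a network pre-trained on a fundus vessel data set is applied. How the improved U-net is built, trained and loaded is not shown and is assumed to happen elsewhere; names and the 0.5 decision threshold are illustrative.

```python
import numpy as np
import torch

def make_feature_mip(mip):
    """Gray-level inversion followed by a percentile contrast stretch to [0, 1]."""
    inv = mip.max() - mip                           # bright vessels become dark, like fundus vessels
    lo, hi = np.percentile(inv, (1, 99))
    return np.clip((inv - lo) / max(hi - lo, 1e-6), 0.0, 1.0).astype(np.float32)

def segment_mip(mip, model):
    """Run a pre-trained (fundus-domain) segmentation network on one characteristic MIP image."""
    feat = make_feature_mip(mip)
    x = torch.from_numpy(feat)[None, None, :, :]    # batch and channel dimensions: (1, 1, H, W)
    model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()
    return (prob > 0.5).astype(np.uint8)            # binary 2-D vessel segmentation map
```

With the model bound, for example via `lambda m: segment_mip(m, model)`, this function can also serve as the segment_fn argument of the claim-8 sketch.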
10. The method of claim 1 or 9, wherein obtaining the intracranial vascular enhanced three-dimensional model based on the blood three-dimensional model, the blood vessel three-dimensional model and the contrast enhanced three-dimensional model comprises:
retaining the part of the contrast enhanced three-dimensional model that overlaps the blood vessel three-dimensional model to obtain a retained contrast enhanced three-dimensional model;
and fusing the retained contrast enhanced three-dimensional model with the blood three-dimensional model to obtain the intracranial vascular enhanced three-dimensional model.
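Claim 10's final combination can be sketched as follows, treating each three-dimensional model as a voxel volume on a common grid. Representing the models as volumes and merging them by a voxel-wise maximum are assumptions made only for illustration; the claims do not fix the representation or the fusion rule.

```python
import numpy as np

def build_enhanced_model(blood_vol, vessel_vol, contrast_vol):
    """blood_vol, vessel_vol, contrast_vol: 3-D arrays sampled on the same voxel grid."""
    vessel_mask = vessel_vol > 0                                  # voxels of the vessel model
    retained_contrast = np.where(vessel_mask, contrast_vol, 0.0)  # keep only the overlapping part
    enhanced = np.maximum(blood_vol, retained_contrast)           # assumed merge rule: voxel-wise max
    return enhanced
```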
CN202011322252.8A 2020-11-23 2020-11-23 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning Withdrawn CN112669399A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011322252.8A CN112669399A (en) 2020-11-23 2020-11-23 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning
CN202111381543.9A CN114170337A (en) 2020-11-23 2021-11-22 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011322252.8A CN112669399A (en) 2020-11-23 2020-11-23 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning

Publications (1)

Publication Number Publication Date
CN112669399A true CN112669399A (en) 2021-04-16

Family

ID=75403527

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011322252.8A Withdrawn CN112669399A (en) 2020-11-23 2020-11-23 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning
CN202111381543.9A Pending CN114170337A (en) 2020-11-23 2021-11-22 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111381543.9A Pending CN114170337A (en) 2020-11-23 2021-11-22 Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning

Country Status (1)

Country Link
CN (2) CN112669399A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669439A (en) * 2020-11-23 2021-04-16 西安电子科技大学 Method for establishing intracranial angiography enhanced three-dimensional model based on transfer learning
CN112669439B (en) * 2020-11-23 2024-03-19 西安电子科技大学 Method for establishing intracranial angiography enhanced three-dimensional model based on transfer learning

Also Published As

Publication number Publication date
CN114170337A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
Ansari et al. Dense-PSP-UNet: A neural network for fast inference liver ultrasound segmentation
US11830193B2 (en) Recognition method of intracranial vascular lesions based on transfer learning
WO2022105647A1 (en) Method for establishing enhanced three-dimensional model of intracranial angiography
WO2022105623A1 (en) Intracranial vascular focus recognition method based on transfer learning
CN112598619A (en) Method for establishing intracranial vascular simulation three-dimensional narrowing model based on transfer learning
CN110648338B (en) Image segmentation method, readable storage medium, and image processing apparatus
CN114187238A (en) Medical image segmentation and display method based on intelligent medical treatment
CN112509075A (en) Intracranial vascular lesion marking and three-dimensional display method based on intelligent medical treatment
CN114170151A (en) Intracranial vascular lesion identification method based on transfer learning
CN112562058B (en) Method for quickly establishing intracranial vascular simulation three-dimensional model based on transfer learning
US12112489B2 (en) Method of establishing an enhanced three-dimensional model of intracranial angiography
CN112509079A (en) Method for establishing intracranial angiography enhanced three-dimensional narrowing analysis model
CN112508873A (en) Method for establishing intracranial vascular simulation three-dimensional narrowing model based on transfer learning
CN114170152A (en) Method for establishing simulated three-dimensional intracranial vascular stenosis analysis model
CN114708280A (en) A Multimodal Cerebral Vessel Segmentation Algorithm
CN112509077A (en) Intracranial blood vessel image segmentation and display method based on intelligent medical treatment
CN112669439B (en) Method for establishing intracranial angiography enhanced three-dimensional model based on transfer learning
CN112669256B (en) Medical image segmentation and display method based on transfer learning
CN112508868A (en) Intracranial blood vessel comprehensive image generation method
CN112509080A (en) Method for establishing intracranial vascular simulation three-dimensional model based on transfer learning
CN112509076A (en) Intracranial vascular lesion marking and three-dimensional display system based on intelligent medical treatment
CN114240841A (en) Establishment method of simulated three-dimensional vascular stenosis analysis model
CN112634386A (en) Method for establishing angiography enhanced three-dimensional narrowing analysis model
CN114170337A (en) Method for establishing intracranial vascular enhancement three-dimensional model based on transfer learning
CN112508881A (en) Intracranial blood vessel image registration method

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication - Application publication date: 20210416