CN117291877A - Image processing method, device and image processing equipment - Google Patents
Image processing method, device and image processing equipment
- Publication number
- CN117291877A (application CN202311174406.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- sequence
- image sequence
- volume
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/481—Diagnostic techniques involving the use of contrast agents
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
- A61B6/504—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of blood vessels, e.g. by angiography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/38—Registration of image sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
- G06T2207/10121—Fluoroscopy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
The present application discloses an image processing method, an image processing apparatus, and image processing equipment, and relates to the technical field of image processing. After the image processing equipment obtains the mask image sequence and the contrast image sequence of a target object, it can first perform subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence, and then obtain a volume image based on the subtraction image sequence. The image processing equipment provided by the present application therefore needs only one reconstruction to obtain the volume image, which effectively improves the efficiency of acquiring the volume image.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, and an image processing apparatus.
Background
Digital subtraction angiography (DSA) can clearly visualize blood vessels overlapped by bone or dense soft tissue and is therefore widely used in the diagnosis and treatment of vascular diseases. The core principle of DSA is to remove the background image superimposed on the blood vessel image so as to obtain a volume image on which a doctor can base a diagnosis. Acquiring the volume image is therefore particularly important.
Disclosure of Invention
The present application provides an image processing method, an image processing apparatus, and image processing equipment. The technical solutions are as follows:
in one aspect, there is provided an image processing method, the method including:
acquiring a mask image sequence and a contrast image sequence of a target area of a target object;
performing subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence;
reconstructing the subtraction image sequence to obtain a volume image.
Optionally, performing subtraction between the contrast image sequence and the mask image sequence to obtain the subtraction image sequence includes:
registering the mask image sequence and the contrast image sequence;
and performing subtraction between the registered contrast image sequence and the registered mask image sequence to obtain the subtraction image sequence.
Optionally, the registering the mask image sequence and the contrast image sequence includes:
registering the mask image sequence and the contrast image sequence using an elastic registration method.
Optionally, reconstructing the subtracted image sequence to obtain a volumetric image includes:
performing artifact correction on the subtracted image sequence;
reconstructing the subtraction image sequence after artifact correction to obtain a volume image.
Optionally, the volume image is a volume image sequence, and after obtaining the volume image, the method further includes:
acquiring a gray average value, a voxel volume and a tissue weight of a target tissue of each image in the volume image sequence;
determining a blood volume of each image based on a gray average value, voxel volume, and tissue weight of the target tissue of the image;
wherein the target tissue is a tissue of a target region of the target object.
Optionally, before said determining the blood volume of each image based on the gray average value, voxel volume and tissue weight of the target tissue of said image, the method further comprises, for each image of said sequence of volumetric images:
acquiring a gray average value of a main blood vessel of a target tissue in the image;
and normalizing the gray average value of the target tissue based on the gray average value of the main blood vessel of the target tissue.
Optionally, before the acquiring the gray average value, voxel volume, and tissue weight of the target tissue of each image in the sequence of volumetric images, the method further comprises:
reconstructing the mask image sequence to obtain a mask volume image sequence;
identifying the position of the interfering tissue in the mask volume image sequence;
and marking the position of the interfering tissue in the corresponding target volume image sequence.
Optionally, the method further comprises: color-coding the blood volume of each image and displaying it in the corresponding image of the target volume image sequence.
In another aspect, there is provided an image processing apparatus including:
the acquisition module is used for acquiring a mask image sequence and a contrast image sequence of a target area of a target object;
the first generation module is used for performing subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence;
and the second generation module is used for reconstructing the subtraction image sequence to obtain a volume image.
Optionally, the first generating module is configured to:
registering the mask image sequence and the contrast image sequence;
and performing subtraction between the registered contrast image sequence and the registered mask image sequence to obtain the subtraction image sequence.
Optionally, the first generating module is configured to:
registering the mask image sequence and the contrast image sequence using an elastic registration method.
In still another aspect, there is provided an image processing apparatus including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method as described in the above aspect when executing the computer program.
In yet another aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, implements the image processing method as described in the above aspect.
The technical solutions provided by the present application bring at least the following beneficial effects:
the embodiment of the application provides an image processing method, an image processing device and image processing equipment, wherein after the image processing equipment obtains a mask image sequence and a contrast image sequence of a target object, the contrast image sequence and the mask image sequence can be subtracted to obtain a subtraction image sequence, and then a volume image is obtained based on the subtraction image sequence. Therefore, the image processing device provided by the application can obtain the volume image only by one reconstruction, so that the acquisition efficiency of the volume image is effectively improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another image processing method provided in an embodiment of the present application;
fig. 3 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural view of another image processing apparatus provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
DSA technology can clearly show blood vessels in bones or dense soft tissues, and thus is widely used for diagnosis and treatment of vascular diseases. The main principle of DSA technology is to remove a background image overlapped with a blood vessel image to obtain a volume image so that a doctor can diagnose based on the volume image. It follows that acquiring a volumetric image is particularly important.
In the related art, an image processing device may acquire a plurality of mask images and a plurality of contrast images, reconstruct the plurality of mask images, and reconstruct the plurality of contrast images. Then, the image processing device may subtract the reconstructed mask image from the reconstructed contrast image to obtain a volumetric image.
However, since image reconstruction generally takes a long time and the related art performs two reconstructions, the efficiency of acquiring a volume image in the related art is low.
The embodiment of the application provides an image processing method, which is applied to an image processing device, such as an image processing device in a DSA system. Referring to fig. 1, the method includes:
step 101, acquiring a mask image sequence and a contrast image sequence of a target area of a target object.
In embodiments of the present application, the mask image sequence (i.e., the projection data sequence without contrast agent) and the contrast image sequence (i.e., the projection data sequence with contrast agent) may be pre-stored by the image processing device. Alternatively, the DSA system may include a DSA body connected to the image processing apparatus. The DSA body may scan a target region of a target object to obtain a mask image sequence and a contrast image sequence of the target region, and send the mask image sequence and the contrast image sequence to an image processing device. Correspondingly, the image processing device can acquire the mask image sequence and the contrast image sequence.
The mask image sequence may include multiple frames of mask images, each obtained by the DSA body scanning the target region of the target object before a contrast agent is injected into the target region. The contrast image sequence may include multiple frames of contrast images in one-to-one correspondence with the mask images, each obtained by the DSA body scanning the target region after a certain dose of contrast agent has been injected into the target region by a contrast agent injector.
In an embodiment of the present application, the DSA body may include a robotic arm, an X-ray emitter, and an X-ray detector, and the X-ray detector may be connected with the image processing device. The X-ray emitter emits X-rays, and the X-ray detector converts the detected X-rays into an image (a mask image or a contrast image as described above) and outputs the image to the image processing device.
Optionally, the robotic arm may be a C-arm. The target object may be a human body or a phantom, and the target region may be a brain region, a neck region, or a heart region. For example, the target region may be a brain region.
Step 102, performing subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence.
In an embodiment of the present application, for each frame of contrast image in the contrast image sequence, the image processing device performs subtraction between the contrast image and its corresponding mask image, thereby obtaining the subtraction image sequence. The subtraction image sequence may include multiple frames of subtraction images.
It will be appreciated that the pixels of each frame of contrast image correspond one-to-one to the pixels of the corresponding mask image. Subtracting the two images means determining the difference between the gray values of each pair of corresponding pixels.
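As an illustration of this step, a minimal sketch of the frame-by-frame subtraction is given below. Python/NumPy, the array names, and the sign convention (contrast minus mask) are assumptions for the example and are not prescribed by the patent.

```python
import numpy as np

def subtract_sequences(contrast_seq: np.ndarray, mask_seq: np.ndarray) -> np.ndarray:
    """Frame-by-frame subtraction of two image sequences of shape (frames, height, width).

    Each contrast frame is paired with its corresponding mask frame, and the gray-value
    difference of every pair of corresponding pixels is computed.
    """
    if contrast_seq.shape != mask_seq.shape:
        raise ValueError("contrast and mask sequences must have matching shapes")
    return contrast_seq.astype(np.float32) - mask_seq.astype(np.float32)

# Example: subtraction_seq = subtract_sequences(contrast_seq, mask_seq)
```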
And 103, reconstructing the subtraction image sequence to obtain a volume image.
After the image processing device obtains the subtraction image sequence, it can reconstruct the subtraction image sequence to obtain a volume image. The reconstruction of the subtraction image sequence may be a three-dimensional reconstruction. The volume image may be a tomographic image, a three-dimensional image, or a volume image sequence composed of multiple tomographic images (also referred to as slice images). The number of images in the volume image sequence depends on the thickness of the tomographic images.
In summary, the embodiment of the present application provides an image processing method in which, after the image processing device obtains the mask image sequence and the contrast image sequence of a target object, it first performs subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence, and then obtains a volume image based on the subtraction image sequence. The image processing device provided by the embodiment of the present application therefore needs only one reconstruction to obtain the volume image, which effectively improves the efficiency of acquiring the volume image.
Fig. 2 is a flowchart of another image processing method provided in an embodiment of the present application, which can be applied to an image processing apparatus. Referring to fig. 2, the method may include:
step 201, acquiring a mask image sequence and a contrast image sequence of a target area of a target object.
In the embodiment of the present application, the mask image sequence and the contrast image sequence may be stored in advance by the image processing apparatus. Alternatively, the DSA system may include a DSA body connected to the image processing apparatus. The DSA body may scan a target region of a target object to obtain a mask image sequence and a contrast image sequence of the target region, and send the mask image sequence and the contrast image sequence to an image processing device. Correspondingly, the image processing device can acquire the mask image sequence and the contrast image sequence.
The mask image sequence may include multiple frames of mask images, each obtained by the DSA body scanning the target region of the target object at a plurality of different angles before a contrast agent is injected into the target region. The contrast image sequence may include multiple frames of contrast images in one-to-one correspondence with the mask images, each obtained by the DSA body scanning the target region at a plurality of different angles after a certain dose of contrast agent has been injected into the target region by a contrast agent injector. Each frame of contrast image may also be referred to as a live image.
For example, the DSA body may acquire a perfusion scan protocol and perform a first scan of the target region based on the perfusion scan protocol to obtain a sequence of mask images. Then, after a dose of contrast agent is injected into the target region and reaches a steady state, the DSA body may perform a second scan of the target region based on the perfusion scan protocol to obtain a contrast image sequence.
In an embodiment of the present application, the DSA body may include a robotic arm, an X-ray emitter, and an X-ray detector, and the X-ray detector may be connected with the image processing device. The X-ray emitter emits X-rays, and the X-ray detector converts the detected X-rays into an image (a mask image or a contrast image as described above) and outputs the image to the image processing device.
Optionally, the robotic arm may be a C-arm. The target object may be a human body or a phantom, and the target region may be a brain region, a neck region, or a heart region. For example, the target region may be a brain region.
And 202, registering the mask image sequence and the contrast image sequence.
Because respiration or movement of the target object may change the position of the target region relative to the DSA body, the image processing device may register the acquired mask image sequence and contrast image sequence so that each mask image matches the feature points of its corresponding contrast image, thereby spatially aligning the two sequences. In this way, heavy motion artifacts in the resulting subtraction image sequence can be avoided, ensuring that the generated volume image has higher contrast and better quality.
Optionally, the image processing device may process the mask image sequence and the contrast image sequence with an elastic registration algorithm to register the two sequences. Because elastic registration of the mask image sequence and the contrast image sequence is fast, the efficiency of acquiring the volume image can be further improved.
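One possible realization of the elastic registration mentioned above is a B-spline (free-form deformation) registration. The sketch below uses SimpleITK and is only an assumption about how such a step could be implemented; the patent does not prescribe a specific library, metric, or parameter set, and the mesh size, metric, and optimizer settings here are placeholders.

```python
import SimpleITK as sitk

def elastic_register(mask_frame, contrast_frame, mesh_size=(8, 8)):
    """Deformably register one mask frame to its contrast frame with a B-spline transform."""
    fixed = sitk.GetImageFromArray(contrast_frame.astype("float32"))
    moving = sitk.GetImageFromArray(mask_frame.astype("float32"))

    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(tx, True)
    reg.SetInterpolator(sitk.sitkLinear)
    final_tx = reg.Execute(fixed, moving)

    # Warp the mask frame into the geometry of the contrast frame.
    warped = sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
    return sitk.GetArrayFromImage(warped)
```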
In the embodiment of the present application, for each mask image in the sequence of mask images, the image processing apparatus may perform feature detection on the mask image to obtain a plurality of feature points. The image processing device may then perform feature matching on the corresponding contrast image of the mask image based on the plurality of feature points, thereby registering the mask image with the corresponding contrast image.
Optionally, each feature point may be a key point of the target region or a boundary point of a bone in the target region. The feature matching of the contrast image corresponding to the mask image based on the plurality of feature points may proceed as follows: the image processing device selects several (e.g., 2) sub-images near the feature points as template sub-images, and processes the template sub-images and the contrast image with a feature matching algorithm, thereby registering the mask image with the corresponding contrast image.
It will be appreciated that reducing the size of the template sub-images, while still ensuring registration accuracy, can greatly reduce the time the image processing device needs to register the mask image sequence and the contrast image sequence.
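A minimal sketch of the feature-point/template-sub-image matching described above is given below. OpenCV is assumed; the patch size, the reduction of the per-point matches to a single global shift, and all function and variable names are illustrative assumptions rather than the patent's prescribed implementation.

```python
import cv2
import numpy as np

def register_by_templates(mask_img, contrast_img, feature_points, half_size=32):
    """Estimate a translation between a mask frame and its contrast frame from small
    template sub-images taken around feature points (e.g. bone boundary points)."""
    h, w = mask_img.shape
    shifts = []
    for y, x in feature_points:
        y0, y1 = max(0, y - half_size), min(h, y + half_size)
        x0, x1 = max(0, x - half_size), min(w, x + half_size)
        template = mask_img[y0:y1, x0:x1].astype(np.float32)
        result = cv2.matchTemplate(contrast_img.astype(np.float32), template,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(result)       # best = (x, y) of the best match
        shifts.append((best[0] - x0, best[1] - y0))
    dx, dy = np.median(np.asarray(shifts, dtype=np.float32), axis=0)
    warp = np.float32([[1, 0, dx], [0, 1, dy]])      # translate the mask onto the contrast frame
    return cv2.warpAffine(mask_img, warp, (w, h))
```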
Step 203, performing subtraction between the registered contrast image sequence and the registered mask image sequence to obtain a subtraction image sequence.
After the registration processing, for each frame of contrast image in the registered contrast image sequence, the image processing device can perform subtraction between the contrast image and the corresponding mask image in the registered mask image sequence, thereby obtaining the subtraction image sequence of the target region of the target object.
The subtraction image sequence includes multiple frames of subtraction images, each of which should in principle contain only blood vessels; in other words, the subtraction image sequence is a cleaner blood vessel image sequence of the target region.
And 204, reconstructing the subtraction image sequence to obtain a volume image.
In the embodiment of the present application, the image processing device may perform artifact correction on the subtraction image sequence and reconstruct the artifact-corrected subtraction image sequence to obtain the volume image. In this way, artifacts in the generated volume image can be effectively avoided, further ensuring the quality of the reconstructed volume image.
Optionally, the image processing device may apply an artifact correction algorithm to the subtraction image sequence. The artifact correction algorithm may be at least one of the following: a scatter artifact correction algorithm, a truncation artifact correction algorithm, a ring artifact correction algorithm, and a streak artifact correction algorithm.
The scatter artifact correction algorithm may correct artifacts based on a double-Gaussian-kernel scatter signal model derived from Monte Carlo simulation. The truncation artifact correction algorithm may use B-spline interpolation to estimate the truncated edges for correction.
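For illustration, a minimal sketch of a double-Gaussian-kernel scatter correction is shown below. In practice the kernel weights and widths would be fitted from Monte Carlo simulation; the numeric values, parameter names, and clipping behavior here are placeholders, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_scatter(projection, a_narrow=0.10, sigma_narrow=15.0,
                    a_broad=0.05, sigma_broad=60.0):
    """Estimate the scatter signal as a weighted sum of a narrow and a broad Gaussian
    blur of the projection, then subtract it (placeholder parameters)."""
    scatter = (a_narrow * gaussian_filter(projection, sigma_narrow)
               + a_broad * gaussian_filter(projection, sigma_broad))
    return np.clip(projection - scatter, 0.0, None)
```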
Step 205, determining the blood volume based on the volumetric image.
In the embodiment of the present application, in the case where the blood volume in the blood vessel is 100 milliliters (ml)/100 grams (g), that is, in the case where the blood vessel is filled with blood, the gray average value of the target tissue image in the volume image is linearly related to the concentration of the contrast agent in the blood vessel included in the target tissue. Thus, the image processing apparatus can calculate the blood volume of the target tissue using the ratio of the gray average value of the target tissue image to the gray average value of the blood vessel image in the volume image. Wherein the blood volume is positively correlated with the ratio. The target tissue is the tissue of the target region of the target object. The target tissue image is a sub-image of the target tissue in the volume image and the vessel image is a sub-image of the vessel in the volume image.
It is understood that the volume image may be a volume image sequence. The process by which the image processing device determines the blood volume based on the volume image may include: the image processing device obtains the gray average value, voxel volume, and tissue weight of the brain tissue in each image of the volume image sequence, and then determines the blood volume of each image based on these values, that is, the blood volume of the target tissue in the image.
Further, since the image processing device calculates CBV on the premise that the CBV of the main blood vessel of the target tissue is 100 ml/100 g, i.e., the relative CBV of the blood vessel is 1, the image processing device needs to normalize the gray average value of the target tissue in the volume image sequence by the gray average value of the blood vessel measured from the volume image sequence. Specifically, before determining the blood volume of each image based on the gray average value, voxel volume, and tissue weight of the target tissue of the image, the image processing device may, for each image of the volume image sequence, acquire the gray average value of the main blood vessel of the target tissue in the image and normalize the gray average value of the target tissue based on it.
It will be appreciated that if the target region is a brain region, the target tissue is brain tissue and the main blood vessel is the large vessel of the brain tissue, which connects the collateral vessels and is also the vessel into which the contrast agent flows first. Correspondingly, the blood volume is the cerebral blood volume (CBV). For example, the CBV may satisfy:
CBV = (brain tissue GL / large vessel GL × voxel volume) / tissue weight × 100.
The unit of CBV is ml/100 g, indicating how many milliliters of blood are contained per 100 grams of brain tissue. Brain tissue GL refers to the gray average of the brain tissue in the volume image sequence, and large vessel GL refers to the gray average of the large vessel of the brain tissue in the volume image sequence.
The voxel volume is in ml and is positively correlated with the pixel area in the volume image sequence. Tissue weight refers to the weight of the brain tissue and may be equal to the product of the pixel volume of the brain tissue and the tissue density. The pixel volume, in cubic centimeters (cm³), may be equal to the product of the pixel area and the slice thickness in the volume image sequence. Tissue density, in grams per cubic centimeter (g/cm³), is the mass per unit volume of tissue and differs between tissues; for example, the tissue density of brain tissue is about 1.05 g/cm³.
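Under one reading of the formula above, a per-volume CBV computation could look like the sketch below. NumPy is assumed; the mask arrays, the per-voxel volume, the aggregation over all tissue voxels, and the default density value are inputs and assumptions for the example, with the large-vessel gray mean used directly as the normalization reference.

```python
import numpy as np

def compute_cbv(volume, tissue_mask, vessel_mask, voxel_volume_ml,
                tissue_density_g_per_cm3=1.05):
    """Cerebral blood volume in ml/100 g for one volume image of the sequence.

    volume:           reconstructed subtraction volume (gray value ~ contrast concentration)
    tissue_mask:      boolean mask of the brain tissue (interfering air/bone regions excluded)
    vessel_mask:      boolean mask of the large vessel, taken as fully filled (CBV = 100 ml/100 g)
    voxel_volume_ml:  volume of a single voxel in ml (pixel area x slice thickness)
    """
    tissue_gl = float(volume[tissue_mask].mean())   # brain tissue GL
    vessel_gl = float(volume[vessel_mask].mean())   # large vessel GL (normalization reference)
    n_voxels = int(tissue_mask.sum())

    tissue_volume_ml = n_voxels * voxel_volume_ml    # 1 ml == 1 cm^3
    tissue_weight_g = tissue_volume_ml * tissue_density_g_per_cm3
    blood_volume_ml = (tissue_gl / vessel_gl) * tissue_volume_ml

    return blood_volume_ml / tissue_weight_g * 100.0  # ml of blood per 100 g of tissue
```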
In the embodiment of the present application, after obtaining the volume image sequence, the image processing device may preprocess it and then determine the blood volume based on the preprocessed sequence. The preprocessing may include denoising the volume image sequence and then performing image segmentation on the denoised sequence, so that bone and other structures in the target tissue do not affect the accuracy of the blood volume calculation.
Optionally, the image processing device may smooth the volume image sequence to achieve the denoising.
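A minimal smoothing-based denoising sketch is shown below; Gaussian filtering of each slice and the kernel width are illustrative assumptions, since the patent does not specify the smoothing method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_volume_sequence(volume_seq, sigma=1.0):
    """Smooth every slice of a volume image sequence (frames, height, width) to reduce noise."""
    return np.stack([gaussian_filter(img, sigma) for img in volume_seq])
```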
In the embodiment of the present application, since the gray values of the air and bone regions may differ between the mask image sequence and the contrast image sequence, residues may remain in the air and bone regions of the acquired subtraction image sequence, which in turn affects the accuracy of the blood volume determination. For this reason, before acquiring the gray average value, voxel volume, and tissue weight of the brain tissue of each image in the volume image sequence, the image processing device may further: reconstruct the mask image sequence to obtain a mask volume image sequence; identify the positions of the interfering tissue in the mask volume image sequence; and mark the positions of the interfering tissue in the corresponding target volume image sequence. The interfering tissue may include air regions and bone regions. In this way, the image processing device can segment the volume image sequence using a threshold-based segmentation method combined with the position markers of the interfering tissue. Alternatively, during the blood volume calculation, the regions marked as interfering tissue can be excluded from the statistics, thereby improving the accuracy of the determined blood volume.
It will be appreciated that the image processing device may identify the position of the interfering tissue in each mask volume image of the mask volume image sequence and mark that position in the tomographic image of the volume image sequence corresponding to the mask volume image.
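A thresholding sketch for marking interfering tissue (air and bone) in the reconstructed mask volume is given below. The threshold values are illustrative and depend on the gray-value calibration of the reconstruction; the label scheme and function name are assumptions for the example.

```python
import numpy as np

AIR, BONE = 1, 2

def mark_interfering_tissue(mask_volume, air_threshold=-200.0, bone_threshold=300.0):
    """Label air and bone voxels in a reconstructed mask volume by simple thresholding."""
    labels = np.zeros(mask_volume.shape, dtype=np.uint8)
    labels[mask_volume < air_threshold] = AIR
    labels[mask_volume > bone_threshold] = BONE
    return labels

# Voxels with labels == 0 are kept when computing the blood-volume statistics.
```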
It will also be appreciated that the image processing device may perform pseudo-color processing on the volume image sequence to obtain pseudo-color images. The image processing device may further include a display screen on which it can display the pseudo-color images to assist the doctor's diagnosis. Different colors in a pseudo-color image represent different blood flow distributions. Specifically, the image processing device can color-code the blood volume of each image and display it in the corresponding image of the target volume image sequence.
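A minimal pseudo-color (color-coding) sketch with OpenCV follows; the upper display limit of 6 ml/100 g and the choice of colormap are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def color_code_cbv(cbv_slice, vmax=6.0):
    """Map a per-pixel CBV slice (ml/100 g) to a pseudo-color (BGR) image for display."""
    scaled = np.clip(cbv_slice / vmax, 0.0, 1.0) * 255.0
    return cv2.applyColorMap(scaled.astype(np.uint8), cv2.COLORMAP_JET)
```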
It is also understood that the sequence of the steps of the image processing method provided in the embodiment of the present application may be appropriately adjusted, and the steps may also be increased or decreased accordingly according to the situation. For example, step 202 may be deleted as appropriate; alternatively, step 205 may be deleted as appropriate. Any method that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered in the protection scope of the present application, and thus will not be repeated.
In summary, the embodiment of the present application provides an image processing method in which, after the image processing device obtains the mask image sequence and the contrast image sequence of a target object, it first performs subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence, and then obtains a volume image based on the subtraction image sequence. The image processing device provided by the embodiment of the present application therefore needs only one reconstruction to obtain the volume image, which effectively improves the efficiency of acquiring the volume image.
The embodiment of the application provides an image processing device, which can execute the image processing method provided by the embodiment of the method. Referring to fig. 3, the image processing apparatus 300 includes:
the acquiring module 301 is configured to acquire a mask image sequence and a contrast image sequence of a target area of a target object.
The first generation module 302 is configured to perform subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence.
A second generating module 303 is configured to reconstruct the subtracted image sequence to obtain a volumetric image.
Alternatively, the first generating module 302 may be configured to:
registering the mask image sequence and the contrast image sequence;
and performing subtraction between the registered contrast image sequence and the registered mask image sequence to obtain the subtraction image sequence.
Alternatively, the first generating module 302 may be configured to:
registering the mask image sequence and the contrast image sequence using an elastic registration method.
Alternatively, the second generating module 303 may be configured to:
performing artifact correction on the subtracted image sequence;
reconstructing the artifact corrected subtraction image sequence to obtain a volume image.
Optionally, referring to fig. 4, the image processing apparatus may further include: a determination module 304. The determining module 304 is configured to determine a blood volume based on the volumetric image.
Alternatively, the determining module 304 may be configured to:
acquiring a gray average value, a voxel volume and a tissue weight of a target tissue of each image in a volume image sequence;
determining a blood volume of the image based on the gray average value, voxel volume, and tissue weight of the target tissue of each image;
wherein the target tissue is the tissue of the target area of the target object.
Optionally, in some embodiments, the apparatus may further comprise a normalization module. The normalization module is used for: for each image of the sequence of volumetric images, obtaining a gray average value of a main blood vessel of a target tissue in the image; and carrying out normalization processing on the gray average value of the target tissue based on the gray average value of the main blood vessel of the target tissue.
Optionally, in some embodiments, the apparatus may further comprise a marking module. The marking module is configured to: reconstruct the mask image sequence to obtain a mask volume image sequence; identify the positions of the interfering tissue in the mask volume image sequence; and mark the positions of the interfering tissue in the corresponding target volume image sequence.
Optionally, in some embodiments, the apparatus may further comprise an encoding module. The encoding module is configured to color-code the blood volume of each image and display it in the corresponding image of the target volume image sequence.
In summary, the embodiments of the present application provide an image processing apparatus which, after obtaining the mask image sequence and the contrast image sequence of a target object, can first perform subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence, and then obtain a volume image based on the subtraction image sequence. The image processing apparatus provided by the embodiments of the present application therefore needs only one reconstruction to obtain the volume image, which effectively improves the efficiency of acquiring the volume image.
An embodiment of the present application provides an image processing device. Referring to fig. 5, the image processing device 400 includes a processor 401 and a memory 403. The processor 401 is connected to the memory 403, for example via a bus 402. Optionally, the image processing device 400 may also include a transceiver 404. It should be noted that, in practical applications, the number of transceivers 404 is not limited to one, and the structure of the image processing device 400 does not limit the embodiments of the present application.
The processor 401 may be a CPU (central processing unit), a general-purpose processor, a DSP (digital signal processor), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor 401 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 402 may include a path for transferring information between the components. The bus 402 may be a PCI (peripheral component interconnect) bus, an EISA (extended industry standard architecture) bus, or the like. The bus 402 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory 403 is used to store a computer program corresponding to the image processing method of the above embodiments of the present application, and the computer program is controlled and executed by the processor 401. The processor 401 executes the computer program stored in the memory 403 to implement what is shown in the foregoing method embodiments.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method provided in the above-described method embodiments. Such as the method shown in fig. 1 or fig. 2.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps or methods may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.
Claims (10)
1. An image processing method, the method comprising:
acquiring a mask image sequence and a contrast image sequence of a target area of a target object;
performing subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence;
reconstructing the subtraction image sequence to obtain a volume image.
2. The method of claim 1, wherein performing subtraction between the contrast image sequence and the mask image sequence to obtain the subtraction image sequence comprises:
registering the mask image sequence and the contrast image sequence;
and performing subtraction between the registered contrast image sequence and the registered mask image sequence to obtain the subtraction image sequence.
3. The method of claim 2, wherein the registering the mask image sequence and the contrast image sequence comprises:
registering the mask image sequence and the contrast image sequence using an elastic registration method.
4. The method according to any one of claims 1 to 3, wherein reconstructing the subtraction image sequence to obtain the volume image comprises:
performing artifact correction on the subtracted image sequence;
reconstructing the subtraction image sequence after artifact correction to obtain a volume image.
5. A method according to any one of claims 1 to 3, wherein the volumetric image is a sequence of volumetric images, the method further comprising, after the obtaining of the volumetric image:
acquiring a gray average value, a voxel volume and a tissue weight of a target tissue of each image in the volume image sequence;
determining a blood volume of each image based on a gray average value, voxel volume, and tissue weight of the target tissue of the image;
wherein the target tissue is a tissue of a target region of the target object.
6. The method of claim 5, wherein prior to said determining the blood volume of each image based on the gray average value, voxel volume, and tissue weight of the target tissue for that image, the method further comprises, for each image of the sequence of volumetric images:
acquiring a gray average value of a main blood vessel of a target tissue in the image;
and normalizing the gray average value of the target tissue based on the gray average value of the main blood vessel of the target tissue.
7. The method of claim 5, wherein prior to the acquiring the gray average value, voxel volume, and tissue weight of the target tissue for each image in the sequence of volumetric images, the method further comprises:
reconstructing the mask image sequence to obtain a mask volume image sequence;
identifying the position of the interfering tissue in the mask volume image sequence;
and marking the position of the interfering tissue in the corresponding target volume image sequence.
8. The method of claim 5, wherein the method further comprises:
color-coding the blood volume of each image and displaying it in the corresponding image of the target volume image sequence.
9. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a mask image sequence and a contrast image sequence of a target area of a target object;
the first generation module is used for performing subtraction between the contrast image sequence and the mask image sequence to obtain a subtraction image sequence;
and the second generation module is used for reconstructing the subtraction image sequence to obtain a volume image.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311174406.7A CN117291877A (en) | 2023-09-12 | 2023-09-12 | Image processing method, device and image processing equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311174406.7A CN117291877A (en) | 2023-09-12 | 2023-09-12 | Image processing method, device and image processing equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117291877A true CN117291877A (en) | 2023-12-26 |
Family
ID=89250988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311174406.7A Pending CN117291877A (en) | 2023-09-12 | 2023-09-12 | Image processing method, device and image processing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117291877A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080247503A1 (en) * | 2007-04-06 | 2008-10-09 | Guenter Lauritsch | Measuring blood volume with c-arm computed tomography |
CN102013116A (en) * | 2010-11-30 | 2011-04-13 | 北京万东医疗装备股份有限公司 | Cerebral rotational angiography-based three-dimensional reconstruction method |
CN103619237A (en) * | 2011-06-15 | 2014-03-05 | 米斯特雷塔医疗有限公司 | System and method for four dimensional angiography and fluoroscopy |
CN111091563A (en) * | 2019-12-24 | 2020-05-01 | 强联智创(北京)科技有限公司 | Method and system for extracting target region based on brain image data |
CN112862916A (en) * | 2021-03-11 | 2021-05-28 | 首都医科大学附属北京天坛医院 | CT perfusion function map quantitative parameter processing equipment and method |
CN113989172A (en) * | 2021-09-06 | 2022-01-28 | 北京东软医疗设备有限公司 | Subtraction map generation method, subtraction map generation device, storage medium, and computer apparatus |
CN114533096A (en) * | 2022-02-21 | 2022-05-27 | 郑州市中心医院 | Artifact removing method and artifact removing system in cerebrovascular angiography |
CN115018803A (en) * | 2022-06-20 | 2022-09-06 | 上海联影医疗科技股份有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
MARCO BOEGEL et al.: "A fully-automatic locally adaptive thresholding algorithm for blood vessel segmentation in 3D digital subtraction angiography", 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 5 November 2015 (2015-11-05) *
LIU NENGGANG: "Research on extracting hemodynamic information from hepatic DSA image sequences", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2013 (2013-02-15) *
SHI WEILI; NI HONG: "Digital subtraction image registration algorithm based on wavelet transform", Journal of Changchun University of Science and Technology (Natural Science Edition), no. 04, 15 December 2008 (2008-12-15) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fahrig et al. | Use of a C-arm system to generate true three-dimensional computed rotational angiograms: preliminary in vitro and in vivo results. | |
US20020114503A1 (en) | Method and apparatus for processing a computed tomography image of a lung obtained using contrast agent | |
CN111524200B (en) | Method, apparatus and medium for segmenting a metal object in a projection image | |
KR20080031358A (en) | 3d-2d adaptive shape model supported motion compensated reconstruction | |
US6574500B2 (en) | Imaging methods and apparatus particularly useful for two and three-dimensional angiography | |
CN111915696A (en) | Three-dimensional image data-assisted low-dose scanning data reconstruction method and electronic medium | |
US9424680B2 (en) | Image data reformatting | |
EP3084726B1 (en) | Moving structure motion compensation in imaging | |
JP2004320771A (en) | Method for performing digital subtraction angiography | |
CN109377481B (en) | Image quality evaluation method, image quality evaluation device, computer equipment and storage medium | |
Garden et al. | 3-D reconstruction of the heart from few projections: A practical implementation of the McKinnon-Bates algorithm | |
CN110473269A (en) | A kind of image rebuilding method, system, equipment and storage medium | |
CN111150419B (en) | Method and device for reconstructing image by spiral CT scanning | |
JP7267329B2 (en) | Method and system for digital mammography imaging | |
CN100361632C (en) | X-ray computerised tomograph capable of automatic eliminating black false image | |
Kuntz et al. | Fully automated intrinsic respiratory and cardiac gating for small animal CT | |
US11257262B2 (en) | Model regularized motion compensated medical image reconstruction | |
US11602320B2 (en) | Method for creating a three-dimensional digital subtraction angiography image and a C-arm X-ray device | |
CN117291877A (en) | Image processing method, device and image processing equipment | |
CN113313649A (en) | Image reconstruction method and device | |
US8873823B2 (en) | Motion compensation with tissue density retention | |
CN111462266A (en) | Image reconstruction method and device, CT (computed tomography) equipment and CT system | |
Haaker et al. | First clinical results with digital flashing tomosynthesis in coronary angiography | |
CN112513925A (en) | Method for providing automatic self-adaptive energy setting for CT virtual monochrome | |
EP4312772B1 (en) | Subtraction imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||