CN116524158A - Interventional navigation method, device, equipment and medium based on image registration - Google Patents
Interventional navigation method, device, equipment and medium based on image registration
- Publication number
- CN116524158A (application number CN202310547584.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- end position
- head end
- dsa
- loss function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 19/003 — Navigation within 3D models or images
- G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
- A61B 34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B 2034/2046 — Tracking techniques
- A61B 2034/2065 — Tracking using image or pattern recognition
- G06T 2207/30004 — Biomedical image processing
- G06T 2207/30101 — Blood vessel; artery; vein; vascular
- Y02T 10/40 — Engine management systems
Abstract
The application belongs to the technical field of medical image processing and discloses an interventional navigation method, device, equipment and medium based on image registration. The method comprises the following steps: extracting, in real time, a first head end position of an interventional structure in an original DSA image and marking the first head end position to obtain a marked DSA image; extracting a three-dimensional blood vessel model image and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image; searching the registered image for a second head end position corresponding to the interventional structure in the marked DSA image; and sending the second head end position to a display end for display. The mapping of the registration method is smooth and invertible, so the marked DSA image can be registered to the three-dimensional blood vessel model image with high accuracy, and the second head end position of the interventional structure can therefore be displayed accurately in real time.
Description
Technical Field
The present application relates to the technical field of medical image processing, for example, to an interventional navigation method, apparatus, device and medium based on image registration.
Background
Interventional techniques are increasingly widely used in the medical field. In existing practice, doctors judge the real-time position of an interventional structure such as a catheter or guide wire inside the three-dimensional human body from DSA images, and because different doctors judge differently there is no uniform positioning standard. Prior art such as CN111681254a feeds a video sequence containing the catheter region into a trained deep-learning encoder–decoder, generates the binarized mask sequence corresponding to the catheter, and derives the position of the catheter in the video sequence from the mask sequence. This prior art performs pixel-level target prediction and image segmentation of the catheter in the video sequence and can only locate the catheter in the X-ray projection sequence. A three-dimensional blood vessel model image, however, is more complex than an X-ray image and contains more information, so pixel-level target prediction and image segmentation of the catheter or guide wire in the three-dimensional blood vessel model image with an encoder–decoder structure works poorly, and the accuracy of positioning the catheter or guide wire in the three-dimensional blood vessel model image is low.
In summary, the prior art has the problem of low accuracy in positioning an interventional structure, such as a catheter or a guidewire, in a three-dimensional vessel model image.
Disclosure of Invention
The purpose of the present application is to provide an interventional navigation method, device, equipment and medium based on image registration that solve the prior-art problem of low accuracy when positioning an interventional structure, such as a catheter or a guide wire, in a three-dimensional blood vessel model image.
In order to achieve the above object, the present application provides an interventional navigation method based on image registration, which is applied to a data processing end of an interventional navigation system, wherein the data processing end is connected with a display end, and the method includes the following steps executed by the data processing end:
acquiring an original digital subtraction angiography image DSA, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel;
extracting a first head end position of the intervention structure in the original DSA image in real time, and marking the first head end position to obtain a marked DSA image;
extracting a three-dimensional blood vessel model image, and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image;
searching a second head end position corresponding to the intervention structure in the marked DSA image from the registered image;
and sending the second head end position to the display end so as to display the second head end position.
Preferably, the registering the marked DSA image to the three-dimensional blood vessel model image, to obtain a registered image, includes:
constructing a first loss function transformed from the labeled DSA image to the three-dimensional vessel model image;
constructing a second loss function transformed from the three-dimensional vessel model image to the labeled DSA image;
adding the first loss function and the second loss function to obtain a final loss function;
optimizing the final loss function to obtain a final diffeomorphic deformation field;
and carrying out an interpolation operation on the marked DSA image according to the final diffeomorphic deformation field to obtain the registered image.
Preferably, the constructing a first loss function transformed from the labeled DSA image to the three-dimensional vessel model image comprises:
the first loss function is constructed by the following formula:
$$E_1 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(F,\ M \circ (s+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s)$$
wherein $E_1$ is the first loss function, $\sigma_i$ is the noise parameter, $\sigma_x$ is the spatial-uncertainty parameter, $\sigma_T$ is the constraint parameter, $\mathrm{Sim}$ is the similarity measure function, $F$ is the three-dimensional vessel model image, $M$ is the labeled DSA image, $s$ is the diffeomorphic deformation field, $u$ is the update deformation field, $\mathrm{Reg}$ is the constraint function, $\lVert u \rVert^2$ is the squared two-norm of the update deformation field, and $M \circ (s+u)$ is the result of registering the labeled DSA image with the sum of the diffeomorphic deformation field and the update deformation field.
Preferably, the constructing a second loss function transformed from the three-dimensional vessel model image to the labeled DSA image comprises:
constructing the second loss function by the formula:
$$E_2 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(M,\ F \circ (s^{-1}+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s^{-1})$$
wherein $s^{-1}$ is the inverse of the diffeomorphic deformation field, $E_2$ is the second loss function, and the remaining symbols are as defined for the first loss function.
Preferably, the optimizing the final loss function to obtain a final diffeomorphic deformation field includes:
optimizing the final loss function using Newton's method to obtain the minimum function value of the final loss function;
and taking the diffeomorphic deformation field corresponding to the minimum function value as the final diffeomorphic deformation field.
Preferably, the searching out the second head end position corresponding to the intervention structure in the marked DSA image from the registered image includes:
detecting whether each pixel point in the registered image has a mark, if so, taking the corresponding pixel point as a target pixel point;
and determining the second head end position according to all the target pixel points.
Preferably, said extracting in real time a first head end position of said interventional structure in said original DSA image comprises:
the first head end position is extracted using a neural network model-based target recognition method, or the first head end position is extracted using an image threshold-based target recognition method.
The application provides an intervention navigation device based on image registration, is applied to the data processing end of intervention navigation system, the data processing end is connected with the display end, the device includes:
the image acquisition module is used for acquiring an original digital subtraction angiography image DSA, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel;
the head end position marking module is used for extracting a first head end position of the intervention structure in the original DSA image in real time, marking the first head end position and obtaining a marked DSA image;
the registration module is used for extracting a three-dimensional blood vessel model image, registering the marked DSA image to the three-dimensional blood vessel model image, and obtaining a registered image;
a head end position searching module, configured to search out a second head end position corresponding to the intervention structure in the labeled DSA image from the registered image;
and the head end position display module is used for sending the second head end position to the display end so as to display the second head end position.
The application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the interventional navigation method based on image registration described above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the interventional navigation method based on image registration as described in any of the above.
The interventional navigation method based on image registration is applied to a data processing end of an interventional navigation system, the data processing end is connected with a display end, and the method comprises the following steps executed by the data processing end: an original digital subtraction angiography (DSA) image is acquired, the original DSA image being an image reflecting the position of the interventional structure in a human blood vessel; a first head end position of the interventional structure in the original DSA image is extracted in real time and marked to obtain a marked DSA image; a three-dimensional blood vessel model image is extracted, and the marked DSA image is registered to the three-dimensional blood vessel model image to obtain a registered image; a second head end position corresponding to the interventional structure in the marked DSA image is searched out from the registered image; and the second head end position is sent to the display end so as to display the second head end position. The marked DSA image is a two-dimensional image containing the first head end position of the interventional structure, e.g. a catheter or a guide wire. The mapping of the above registration method is smooth and invertible, so the marked DSA image can be registered to the three-dimensional blood vessel model image with high accuracy, and the second head end position of the interventional structure can therefore be displayed accurately in real time in the three-dimensional registered image.
Drawings
FIG. 1 is a flow chart of an interventional navigation method based on image registration according to an embodiment;
FIG. 2 is a flow diagram of registering marked DSA images according to an embodiment;
FIG. 3 is a flow chart illustrating the calculation of the final diffeomorphic deformation field according to an embodiment;
FIG. 4 is a flowchart illustrating a second headend location searching process according to an embodiment;
FIG. 5 is a block diagram of a schematic structure of an interventional navigation device based on image registration according to an embodiment;
fig. 6 is a block diagram of a computer device of an embodiment.
The implementation, functional characteristics and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, modules, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, modules, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In one embodiment, referring to fig. 1, a flowchart of an interventional navigation method based on image registration disclosed in the present application is shown, where the method is applied to a data processing end of an interventional navigation system, the data processing end is connected to a display end, and the method includes the following steps performed by the data processing end:
S1: An original digital subtraction angiography (DSA) image is acquired, the original DSA image being an image reflecting the position of the interventional structure in the human blood vessel.
DSA (digital subtraction angiography) is a digital subtraction technique based on sequential images: a first frame and a second frame of the same part of the human body are subtracted to obtain the difference between them, which removes bone and soft-tissue structures, so that the vessels filled with contrast agent in the second frame stand out in the subtraction image with enhanced contrast. The original DSA image is an unprocessed DSA image.
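As an informal illustration of this subtraction step (not part of the patent; the library choice and array names are assumptions), a pre-contrast mask frame can be subtracted from a contrast-filled frame with NumPy:

```python
import numpy as np

def digital_subtraction(mask_frame: np.ndarray, contrast_frame: np.ndarray) -> np.ndarray:
    """Subtract the pre-contrast (mask) frame from the contrast-filled frame.

    Static structures such as bone and soft tissue cancel out, leaving the
    contrast-filled vessels; the result is rescaled to 8 bits for display.
    """
    diff = contrast_frame.astype(np.float32) - mask_frame.astype(np.float32)
    diff -= diff.min()
    if diff.max() > 0:
        diff = diff / diff.max() * 255.0    # stretch contrast for display
    return diff.astype(np.uint8)

# dsa_image = digital_subtraction(frame_1, frame_2)   # frames loaded elsewhere as 2-D arrays
```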
The interventional structure may be an interventional catheter or an interventional guidewire, and is not limited herein.
S2: and extracting a first head end position of the intervention structure in the original DSA image in real time, and marking the first head end position to obtain a marked DSA image.
The first head end position is extracted using a neural network model-based target recognition method, or the first head end position is extracted using an image threshold-based target recognition method.
As an example, the first head end position may be extracted using the YoloV5 neural network model, or may be extracted using the Fast RCNN neural network model.
As an example, the first head end position may be extracted using an adaptive image-threshold method such as a PCA-based threshold, or using the histogram bimodal (double-peak) method.
The outline of the first head end position in the DSA image may be set to a preset color, or a set of coordinates of the first head end position may be recorded, and a corresponding label may be set to the first head end position.
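A minimal sketch of the image-threshold branch is given below, assuming an 8-bit grayscale frame in which the interventional structure appears darker than the background; the threshold value, the "deepest marked pixel" tip heuristic and the square marker are illustrative assumptions, not the patent's specific algorithm.

```python
import numpy as np

def extract_and_mark_tip(dsa: np.ndarray, threshold: int = 60):
    """Threshold-based head-end extraction followed by marking.

    Returns an assumed tip point (here: the lowest structure pixel, as a crude
    proxy for the advancing head end) and a copy of the image with a marker.
    """
    mask = dsa < threshold                     # interventional structure darker than background
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, dsa.copy()
    tip_idx = int(np.argmax(ys))               # heuristic: deepest structure pixel
    tip = (int(ys[tip_idx]), int(xs[tip_idx]))

    marked = dsa.copy()
    r, c = tip
    marked[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] = 255   # small square marker
    return tip, marked
```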
S3: and extracting a three-dimensional blood vessel model image, and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image.
Constructing a first loss function transformed from the labeled DSA image to the three-dimensional vessel model image;
constructing a second loss function transformed from the three-dimensional vessel model image to the labeled DSA image;
adding the first loss function and the second loss function to obtain a final loss function;
optimizing the final loss function to obtain a final diffeomorphic deformation field;
and carrying out an interpolation operation on the marked DSA image according to the final diffeomorphic deformation field to obtain the registered image.
The three-dimensional blood vessel model image is extracted based on a CTA (computed tomography angiography) image of the blood vessels.
A diffeomorphism is a concept from the theory of differentiable manifolds. The marked DSA image and the three-dimensional blood vessel model image can each be regarded as such a manifold, and with a symmetric diffeomorphic registration method the marked DSA image can be mapped smoothly and invertibly into the coordinate system of the three-dimensional blood vessel model image.
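For intuition only, the sketch below uses an off-the-shelf diffeomorphic demons registration from SimpleITK; it is an analogue of, not the patent's own symmetric formulation, and it assumes both images have already been brought onto a common grid (the 2-D DSA to 3-D model correspondence requires additional projection handling that is omitted here).

```python
import SimpleITK as sitk

def diffeomorphic_register(fixed_path: str, moving_path: str) -> sitk.Image:
    """Register a moving image to a fixed image with a diffeomorphic demons filter."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)      # Gaussian smoothing of the update field

    displacement_field = demons.Execute(fixed, moving)
    transform = sitk.DisplacementFieldTransform(displacement_field)

    # Warp the moving (marked DSA) image into the fixed (vessel model) space.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear,
                         0.0, moving.GetPixelID())
```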
S4: and searching a second head end position corresponding to the intervention structure in the marked DSA image from the registered image.
Detecting whether each pixel point in the registered image has a mark, if so, taking the corresponding pixel point as a target pixel point;
and determining the second head end position according to all the target pixel points.
If the contour of the first head end position in the DSA image is set to a preset color, the color of the position after mapping is different from the color of other positions in the registered image. The second head end position in the registered image can be found out according to the color.
If the coordinate set of the first head end position is recorded, the coordinate set is mapped using the symmetric diffeomorphic registration method to obtain another coordinate set, and the second head end position corresponding to the interventional structure in the registered image is determined from that coordinate set.
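A minimal sketch of mapping the recorded coordinate set through a dense displacement field is shown below; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def map_marked_coordinates(coords: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Map marked (row, col) coordinates through a dense displacement field.

    coords has shape (N, 2); displacement has shape (H, W, 2) and stores, for each
    pixel of the marked DSA image, its offset in the registered coordinate system.
    """
    rows = coords[:, 0].astype(int)
    cols = coords[:, 1].astype(int)
    return coords.astype(np.float32) + displacement[rows, cols]

# second_head_coords = map_marked_coordinates(first_head_coords, final_deformation_field)
```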
S5: and sending the second head end position to the display end so as to display the second head end position.
The display end may be a liquid crystal display or other display devices, which is not limited herein.
Taking the second head end position as the center, a background region around the second head end position may be attached to obtain an image to be sent; the image is then sent to the display end and the second head end position is highlighted. Alternatively, the entire registered image containing the second head end position may be sent to the display end to display the second head end position.
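A small sketch of the cropping step, assuming the registered image is an array and the tip position is given per axis; the patch size is an arbitrary choice.

```python
import numpy as np

def crop_around_tip(registered: np.ndarray, tip, half_size: int = 64) -> np.ndarray:
    """Cut a patch of surrounding background centred on the second head end position."""
    slices = tuple(slice(max(int(c) - half_size, 0), int(c) + half_size) for c in tip)
    return registered[slices]
```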
The interventional navigation method based on image registration is applied to a data processing end of an interventional navigation system, the data processing end is connected with a display end, and the method comprises the following steps executed by the data processing end: an original digital subtraction angiography (DSA) image is acquired, the original DSA image being an image reflecting the position of the interventional structure in a human blood vessel. A first head end position of the interventional structure in the original DSA image is extracted in real time, and the first head end position is marked to obtain a marked DSA image. A three-dimensional blood vessel model image is extracted, and the marked DSA image is registered to the three-dimensional blood vessel model image to obtain a registered image. A second head end position corresponding to the interventional structure in the marked DSA image is searched out from the registered image. The second head end position is sent to the display end so as to display the second head end position. The marked DSA image is a two-dimensional image containing the first head end position of an interventional structure, such as a catheter or a guide wire, and the mapping of the symmetric diffeomorphic registration method is smooth and invertible, so the marked DSA image can be registered to the three-dimensional blood vessel model image with high accuracy and the second head end position of the interventional structure can be displayed accurately in real time in the three-dimensional registered image.
In one embodiment, referring to fig. 2, the registering the labeled DSA image to the three-dimensional blood vessel model image to obtain a registered image includes:
S32: A first loss function is constructed that is transformed from the labeled DSA image to the three-dimensional vessel model image.
Before step S32, step S31 is further included: and extracting a three-dimensional blood vessel model image.
The first loss function is constructed by the following formula:
$$E_1 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(F,\ M \circ (s+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s)$$
wherein $E_1$ is the first loss function, $\sigma_i$ is the noise parameter, $\sigma_x$ is the spatial-uncertainty parameter, $\sigma_T$ is the constraint parameter, $\mathrm{Sim}$ is the similarity measure function, $F$ is the three-dimensional vessel model image, $M$ is the labeled DSA image, $s$ is the diffeomorphic deformation field, $u$ is the update deformation field, $\mathrm{Reg}$ is the constraint function, $\lVert u \rVert^2$ is the squared two-norm of the update deformation field, and $M \circ (s+u)$ is the result of registering the labeled DSA image with the sum of the diffeomorphic deformation field and the update deformation field.
$\mathrm{Sim}(F, M \circ (s+u))$ measures the degree of difference between the registration result of the labeled DSA image and the three-dimensional vessel model image, $\lVert u \rVert^2 / \sigma_x^2$ constrains the update deformation field, and $\mathrm{Reg}(s)$ imposes a manifold constraint on the diffeomorphic deformation field.
The first loss function is the loss function for diffeomorphically registering the marked DSA image to the three-dimensional blood vessel model image: the smaller the first loss function, the higher the accuracy of transforming the marked DSA image into the coordinate system of the three-dimensional blood vessel model image, and the higher the similarity between the registered image and the three-dimensional blood vessel model image.
S33: a second loss function is constructed that is transformed from the three-dimensional vessel model image to the labeled DSA image.
Constructing the second loss function by the formula:
$$E_2 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(M,\ F \circ (s^{-1}+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s^{-1})$$
wherein $s^{-1}$ is the inverse of the diffeomorphic deformation field and $E_2$ is the second loss function.
$\mathrm{Sim}(M, F \circ (s^{-1}+u))$ measures the degree of difference between the registration result of the three-dimensional vessel model image and the labeled DSA image, $\lVert u \rVert^2 / \sigma_x^2$ constrains the update deformation field, and $\mathrm{Reg}(s^{-1})$ imposes a manifold constraint on the inverse of the diffeomorphic deformation field.
The second loss function is a loss function for registering the three-dimensional blood vessel model image to the marked DSA image, and the smaller the second loss function is, the higher the accuracy of transforming the three-dimensional blood vessel model image to the coordinate system of the marked DSA image is, and the higher the similarity between the registered blood vessel model image and the marked DSA image is.
S34: and adding the first loss function and the second loss function to obtain a final loss function.
$$E_{\mathrm{final}} = E_1 + E_2$$
wherein $E_{\mathrm{final}}$ is the final loss function, $E_1$ is the first loss function, and $E_2$ is the second loss function.
The smaller the final loss function, the better the effect can be obtained in both the process of transforming the marked DSA image into the three-dimensional blood vessel model image and the process of transforming the three-dimensional blood vessel model image into the marked DSA image.
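To make the structure of this bidirectional energy concrete, the NumPy sketch below evaluates $E_1$, $E_2$ and their sum for two same-sized 2-D images; the sum-of-squared-differences similarity, the gradient-norm stand-in for Reg, the field layout and the unit σ weights are illustrative assumptions rather than the patent's exact choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image: np.ndarray, field: np.ndarray) -> np.ndarray:
    """Backward-warp a 2-D image with a displacement field of shape (2, H, W)."""
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(np.float32)
    return map_coordinates(image, grid + field, order=1, mode="nearest")

def smoothness(field: np.ndarray) -> float:
    """Stand-in for Reg(.): squared gradient norm of the displacement field."""
    return float(sum(np.sum(g ** 2) for comp in field for g in np.gradient(comp)))

def final_energy(F, M, s, u, s_inv, sigma_i=1.0, sigma_x=1.0, sigma_t=1.0) -> float:
    """E_final = E_1 + E_2 with a sum-of-squared-differences similarity term."""
    e1 = (np.sum((F - warp(M, s + u)) ** 2) / sigma_i ** 2
          + np.sum(u ** 2) / sigma_x ** 2
          + smoothness(s) / sigma_t ** 2)
    e2 = (np.sum((M - warp(F, s_inv + u)) ** 2) / sigma_i ** 2
          + np.sum(u ** 2) / sigma_x ** 2
          + smoothness(s_inv) / sigma_t ** 2)
    return float(e1 + e2)
```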
S35: optimizing the final loss function to obtain a final diffeomorphic deformation field.
Referring to fig. 3, step S35 includes steps S351-S352.
S351: optimizing the final loss function using Newton's method to obtain the minimum function value of the final loss function.
S352: taking the diffeomorphic deformation field corresponding to the minimum function value as the final diffeomorphic deformation field.
While the final loss function is being optimized, its value becomes smaller and smaller. When the value of the final loss function falls below a loss-function threshold, or when the number of optimization iterations exceeds an iteration threshold, the optimization is stopped; the minimum function value of the final loss function is obtained, and the diffeomorphic deformation field corresponding to that minimum value is taken as the final diffeomorphic deformation field.
After the optimization finishes, the final loss function value is small, so a good registration effect is obtained in forward registration and reverse registration at the same time; because both forward and reverse registration can be performed, the transformation from the marked DSA image to the three-dimensional blood vessel model image is further guaranteed to be invertible. Forward registration is the transformation from the marked DSA image to the three-dimensional blood vessel model image, and reverse registration is the transformation from the three-dimensional blood vessel model image to the marked DSA image.
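A minimal sketch of the stopping logic only: `loss_fn` and `step_fn` are placeholder callables, and the patent's Newton update would be supplied as `step_fn` — the loop merely illustrates the loss-threshold and iteration-count criteria described above.

```python
def optimize_deformation(loss_fn, field, step_fn,
                         loss_threshold=1e-3, max_iterations=200):
    """Stop when the loss drops below a threshold or the iteration cap is reached.

    loss_fn(field) returns the current value of E_final; step_fn(field) returns
    an updated deformation field (e.g. from a Newton step on the final loss).
    """
    best_field, best_loss = field, loss_fn(field)
    for _ in range(max_iterations):
        if best_loss < loss_threshold:
            break                      # loss-threshold stopping criterion
        field = step_fn(field)
        loss = loss_fn(field)
        if loss < best_loss:           # keep the best field seen so far
            best_field, best_loss = field, loss
    return best_field, best_loss
```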
S36: carrying out an interpolation operation on the marked DSA image according to the final diffeomorphic deformation field to obtain the registered image.
The coordinates of each pixel point of the marked DSA image are added to the displacement of the corresponding point in the final diffeomorphic deformation field to obtain post-registration coordinates; the post-registration pixel value corresponding to each post-registration coordinate is then calculated by interpolation, and the post-registration coordinates and pixel values together form the registered image.
The interpolation may use nearest-neighbour, bilinear or bicubic interpolation: nearest-neighbour interpolation is the fastest but gives the worst interpolation quality; bicubic interpolation is the slowest but gives the best quality; bilinear interpolation lies between the two in both speed and quality.
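Assuming the final deformation field is stored as a dense per-pixel displacement array, the three interpolation choices correspond to spline orders 0, 1 and 3 in `scipy.ndimage.map_coordinates`; the field layout and the function name below are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_marked_dsa(marked_dsa: np.ndarray, field: np.ndarray,
                    method: str = "bilinear") -> np.ndarray:
    """Resample the marked DSA image through a dense deformation field.

    field has shape (2, H, W): per-pixel displacements added to the identity grid.
    """
    order = {"nearest": 0, "bilinear": 1, "bicubic": 3}[method]
    grid = np.mgrid[0:marked_dsa.shape[0], 0:marked_dsa.shape[1]].astype(np.float32)
    return map_coordinates(marked_dsa, grid + field, order=order, mode="nearest")
```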
The registered image is a three-dimensional image, and the second head end position of the interventional structure can be accurately displayed in real time in the registered image.
As described above, by continuously optimizing the final loss function so that both the first loss function and the second loss function are smaller, it is ensured that both the forward registration and the reverse registration have a good effect, and it can be further ensured that the process of transforming from the labeled DSA image to the three-dimensional blood vessel model image is reversible.
In one embodiment, referring to fig. 4, the searching out the second head end position corresponding to the intervention structure in the marked DSA image from the registered image includes:
S41: Detecting whether each pixel point in the registered image has a mark, and if so, taking the corresponding pixel point as a target pixel point.
Detecting whether each pixel point in the registered image has a color mark or a label mark, for example, detecting whether each pixel point has a red mark or a label corresponding to the first head end position, and if so, taking the pixel point as a target pixel point.
S42: and determining the second head end position according to all the target pixel points.
The outline of the interventional structure in the registered image can be determined according to all target pixel points, and the central coordinate of the outline can be used as the second head end position. After the second head end position is determined, the second head end position can be displayed in real time in the registered image so as to better show the change condition of the second head end position of the interventional structure.
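A minimal sketch of this search, assuming the mark survives registration as a distinctive pixel value; the marker value and the centroid rule are illustrative assumptions.

```python
import numpy as np

def second_head_position(registered: np.ndarray, marker_value: int = 255):
    """Return the centroid of all marked pixels/voxels, or None if no mark is found."""
    marked = np.nonzero(registered == marker_value)
    if marked[0].size == 0:
        return None
    return tuple(float(axis.mean()) for axis in marked)   # works for 2-D and 3-D images
```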
As described above, the searching the second head end position corresponding to the intervention structure in the marked DSA image from the registered image includes detecting whether each pixel in the registered image has a mark, if so, taking the corresponding pixel as a target pixel. And determining the second head end position according to all the target pixel points. After the second head end position is determined, the second head end position can be displayed in real time in the registered image so as to better show the change condition of the second head end position of the interventional structure.
Referring to fig. 5, which is a schematic block diagram of an interventional navigation device based on image registration disclosed in the present application, the interventional navigation device based on image registration is applied to a data processing end of an interventional navigation system, the data processing end is connected with a display end, and the device includes:
an image acquisition module 10, configured to acquire an original digital subtraction angiography image DSA, where the original DSA image is an image reflecting a position of an interventional structure in a blood vessel of a human body;
a head end position marking module 20, configured to extract a first head end position of the intervention structure in the original DSA image in real time, and mark the first head end position to obtain a marked DSA image;
a registration module 30, configured to extract a three-dimensional blood vessel model image, register the marked DSA image to the three-dimensional blood vessel model image, and obtain a registered image;
a head end position searching module 40, configured to search out a second head end position corresponding to the intervention structure in the labeled DSA image from the registered image;
and the head end position display module 50 is configured to send the second head end position to the display end, so as to display the second head end position.
As described above, the image registration-based interventional navigation device can implement an image registration-based interventional navigation method.
In one embodiment, the registration module 30 further comprises:
a first loss function construction unit for constructing a first loss function transformed from the labeled DSA image to the three-dimensional blood vessel model image;
a second loss function construction unit for constructing a second loss function transformed from the three-dimensional blood vessel model image to the labeled DSA image;
a final loss function calculation unit, configured to add the first loss function and the second loss function to obtain a final loss function;
the final loss function optimizing unit is used for optimizing the final loss function to obtain a final diffeomorphic deformation field;
and the interpolation operation unit is used for carrying out an interpolation operation on the marked DSA image according to the final diffeomorphic deformation field to obtain the registered image.
In one embodiment, the first loss function construction unit further includes:
a first loss function construction subunit for constructing the first loss function by the following formula:
$$E_1 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(F,\ M \circ (s+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s)$$
wherein $E_1$ is the first loss function, $\sigma_i$ is the noise parameter, $\sigma_x$ is the spatial-uncertainty parameter, $\sigma_T$ is the constraint parameter, $\mathrm{Sim}$ is the similarity measure function, $F$ is the three-dimensional vessel model image, $M$ is the labeled DSA image, $s$ is the diffeomorphic deformation field, $u$ is the update deformation field, $\mathrm{Reg}$ is the constraint function, $\lVert u \rVert^2$ is the squared two-norm of the update deformation field, and $M \circ (s+u)$ is the result of registering the labeled DSA image with the sum of the diffeomorphic deformation field and the update deformation field.
In one embodiment, the second loss function construction unit further includes:
a second loss function construction subunit for constructing the second loss function by the following formula:
$$E_2 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(M,\ F \circ (s^{-1}+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s^{-1})$$
wherein $s^{-1}$ is the inverse of the diffeomorphic deformation field and $E_2$ is the second loss function.
In one embodiment, the final loss function optimization unit further comprises:
a final loss function optimizing subunit, configured to optimize the final loss function by using newton method, to obtain a minimum function value of the final loss function;
and the final diffeomorphic deformation field definition subunit is used for taking the diffeomorphic deformation field corresponding to the minimum function value as the final diffeomorphic deformation field.
In one embodiment, the headend location search module 40 further includes:
a target pixel point defining unit, configured to detect whether each pixel point in the registered image has a mark, and if so, take the corresponding pixel point as a target pixel point;
and the second head end position determining unit is used for determining the second head end position according to all the target pixel points.
In one embodiment, the headend location marking module 20 further includes:
a first head-end position extraction unit configured to extract the first head-end position using a target recognition method based on a neural network model, or extract the first head-end position using a target recognition method based on an image threshold.
Referring to fig. 6, a computer device is further provided in the embodiment of the present application, and the internal structure of the computer device may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the first head end position, the second head end position, etc. The network interface of the computer device is used for communicating with an external terminal through a network connection. Further, the computer device may also be provided with an input device, a display screen, and the like. The computer program, when executed by the processor, implements an interventional navigation method based on image registration; the method is applied to a data processing end of an interventional navigation system, the data processing end is connected with a display end, and the method includes the following steps executed by the data processing end: acquiring an original digital subtraction angiography (DSA) image, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel; extracting a first head end position of the interventional structure in the original DSA image in real time, and marking the first head end position to obtain a marked DSA image; extracting a three-dimensional blood vessel model image, and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image; searching out a second head end position corresponding to the interventional structure in the marked DSA image from the registered image; and sending the second head end position to the display end so as to display the second head end position. Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of a portion of the architecture related to the present application and does not limit the computer device to which the present application is applied.
An embodiment of the present application further provides a computer readable storage medium, on which a computer program is stored, where the computer program when executed by a processor implements an interventional navigation method based on image registration, where the method is applied to a data processing end of an interventional navigation system, where the data processing end is connected to a display end, and where the method includes the following steps performed by the data processing end: acquiring an original digital subtraction angiography image DSA, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel; extracting a first head end position of the intervention structure in the original DSA image in real time, and marking the first head end position to obtain a marked DSA image; extracting a three-dimensional blood vessel model image, and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image; searching a second head end position corresponding to the intervention structure in the marked DSA image from the registered image; and sending the second head end position to the display end so as to display the second head end position.
It is understood that the computer readable storage medium in this embodiment may be a volatile readable storage medium or a nonvolatile readable storage medium.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided herein and used in embodiments may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the invention.
Claims (10)
1. An interventional navigation method based on image registration, which is characterized by being applied to a data processing end of an interventional navigation system, wherein the data processing end is connected with a display end, and the method comprises the following steps executed by the data processing end:
acquiring an original digital subtraction angiography image DSA, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel;
extracting a first head end position of the intervention structure in the original DSA image in real time, and marking the first head end position to obtain a marked DSA image;
extracting a three-dimensional blood vessel model image, and registering the marked DSA image to the three-dimensional blood vessel model image to obtain a registered image;
searching a second head end position corresponding to the intervention structure in the marked DSA image from the registered image;
and sending the second head end position to the display end so as to display the second head end position.
2. The interventional navigation method based on image registration of claim 1, wherein said registering the labeled DSA image to the three-dimensional vessel model image results in a registered image comprising:
constructing a first loss function transformed from the labeled DSA image to the three-dimensional vessel model image;
constructing a second loss function transformed from the three-dimensional vessel model image to the labeled DSA image;
adding the first loss function and the second loss function to obtain a final loss function;
optimizing the final loss function to obtain a final diffeomorphic deformation field;
and carrying out an interpolation operation on the marked DSA image according to the final diffeomorphic deformation field to obtain the registered image.
3. The image registration-based interventional navigation method of claim 2, wherein the constructing a first loss function transformed from the labeled DSA image to the three-dimensional vessel model image comprises:
the first loss function is constructed by the following formula:
$$E_1 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(F,\ M \circ (s+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s)$$
wherein $E_1$ is the first loss function, $\sigma_i$ is the noise parameter, $\sigma_x$ is the spatial-uncertainty parameter, $\sigma_T$ is the constraint parameter, $\mathrm{Sim}$ is the similarity measure function, $F$ is the three-dimensional vessel model image, $M$ is the labeled DSA image, $s$ is the diffeomorphic deformation field, $u$ is the update deformation field, $\mathrm{Reg}$ is the constraint function, $\lVert u \rVert^2$ is the squared two-norm of the update deformation field, and $M \circ (s+u)$ is the result of registering the labeled DSA image with the sum of the diffeomorphic deformation field and the update deformation field.
4. The image registration-based interventional navigation method of claim 3, wherein the constructing a second loss function transformed from the three-dimensional vessel model image to the labeled DSA image comprises:
constructing the second loss function by the formula:
$$E_2 = \frac{1}{\sigma_i^2}\,\mathrm{Sim}\big(M,\ F \circ (s^{-1}+u)\big) + \frac{1}{\sigma_x^2}\,\lVert u \rVert^2 + \frac{1}{\sigma_T^2}\,\mathrm{Reg}(s^{-1})$$
wherein $s^{-1}$ is the inverse of the diffeomorphic deformation field and $E_2$ is the second loss function.
5. An interventional navigation method based on image registration according to claim 3, wherein said optimizing said final loss function to obtain a final diffeomorphic deformation field comprises:
optimizing the final loss function using Newton's method to obtain a minimum function value of the final loss function;
and taking the diffeomorphic deformation field corresponding to the minimum function value as the final diffeomorphic deformation field.
6. The interventional navigation method based on image registration of claim 1, wherein the searching out of the registered images a second head end position corresponding to the interventional structure in the marked DSA image comprises:
detecting whether each pixel point in the registered image has a mark, if so, taking the corresponding pixel point as a target pixel point;
and determining the second head end position according to all the target pixel points.
7. The interventional navigation method based on image registration of claim 1, wherein the extracting in real time a first head end position of the interventional structure in the original DSA image comprises:
the first head end position is extracted using a neural network model-based target recognition method, or the first head end position is extracted using an image threshold-based target recognition method.
8. An interventional navigation device based on image registration, characterized by being applied to a data processing end of an interventional navigation system, the data processing end being connected with a display end, the device comprising:
the image acquisition module is used for acquiring an original digital subtraction angiography image DSA, wherein the original DSA image is an image reflecting the position of an interventional structure in a human blood vessel;
the head end position marking module is used for extracting a first head end position of the intervention structure in the original DSA image in real time, marking the first head end position and obtaining a marked DSA image;
the registration module is used for extracting a three-dimensional blood vessel model image, registering the marked DSA image to the three-dimensional blood vessel model image, and obtaining a registered image;
a head end position searching module, configured to search out a second head end position corresponding to the intervention structure in the labeled DSA image from the registered image;
and the head end position display module is used for sending the second head end position to the display end so as to display the second head end position.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the image registration based interventional navigation method as defined in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the interventional navigation method based on image registration as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310547584.3A CN116524158A (en) | 2023-05-15 | 2023-05-15 | Interventional navigation method, device, equipment and medium based on image registration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310547584.3A CN116524158A (en) | 2023-05-15 | 2023-05-15 | Interventional navigation method, device, equipment and medium based on image registration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116524158A true CN116524158A (en) | 2023-08-01 |
Family
ID=87390123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310547584.3A Pending CN116524158A (en) | 2023-05-15 | 2023-05-15 | Interventional navigation method, device, equipment and medium based on image registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116524158A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116778782A (en) * | 2023-08-25 | 2023-09-19 | 北京唯迈医疗设备有限公司 | Intervention operation in-vitro simulation training system and control method thereof |
CN116778782B (en) * | 2023-08-25 | 2023-11-17 | 北京唯迈医疗设备有限公司 | Intervention operation in-vitro simulation training system and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||