CN116883471B - Line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture - Google Patents
Line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture
- Publication number
- CN116883471B CN116883471B CN202310975574.XA CN202310975574A CN116883471B CN 116883471 B CN116883471 B CN 116883471B CN 202310975574 A CN202310975574 A CN 202310975574A CN 116883471 B CN116883471 B CN 116883471B
- Authority
- CN
- China
- Prior art keywords
- chest
- abdomen
- dimensional
- line structure
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/344 — Image registration using feature-based methods involving models
- G06T7/0012 — Biomedical image inspection
- G06T7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30244 — Camera pose
Abstract
The invention discloses a line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture, comprising the steps of: building a registration system; scanning the chest and abdomen target area; extracting the line structured light from the original scanned images; converting the two-dimensional line structured light information in the extracted images into three dimensions to complete depth reconstruction based on stereoscopic vision; collecting, screening, and assembling the three-dimensional coordinate information into a point cloud set, from which the physical surface point cloud of the chest and abdomen target area is constructed for registration; and registering the physical surface point cloud of the chest and abdomen target area with the three-dimensional chest and abdomen medical image. The disclosed registration method never touches the chest and abdomen surface, attaches no additional markers, and does not interfere with the original surgical workflow. It thereby achieves efficient, accurate registration that is contact-free, fully automatic, deformation-free, and independent of any marker, reducing the complexity of registration and greatly improving its precision and efficiency.
Description
Technical Field
The invention relates to the field of medical image navigation, in particular to a line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture.
Background
In recent years, driven by the urgent demand for tumor treatment, the growing range of image-guided minimally invasive therapies, and accumulated clinical experience, image-guided percutaneous minimally invasive procedures of the chest and abdomen — such as tumor biopsy and short-range particle-implantation radiotherapy — have become widely valued and accepted means of detection and local treatment. Biopsy is the most important step of pathological diagnosis under medical image navigation; because lesion volumes are small, tissue extraction is strongly affected by complex anatomical structures, natural respiration, and the heartbeat, so the puncture can easily fail and endanger patient safety. At present, the success rate of a single needle puncture is only about 70%.
In order to improve the accuracy of percutaneous thoracoabdominal puncture, computer image information is used to help locate the lesion in the chest and abdomen region, which requires registration between the three-dimensional medical image of the chest and abdomen and the chest and abdomen in real space. That is, the chest-and-abdomen coordinate system in real space must be registered with the coordinate system of the computer medical image to enable image-guided percutaneous puncture, and the quality and efficiency of this registration determine the quality of the overall modeling. The registration method is therefore a crucial technical point. Existing research, at home and abroad, on image registration for navigated chest and abdomen percutaneous puncture mainly falls into the following categories:
(1) Registration using intrinsic feature points of the physical object: geometric landmarks with sharp features on the object, such as corner points and intersection points, are picked up with an optical probe tracked by an optical locator; the corresponding landmarks are selected in computer software, and the optimal transformation matrix is computed by least squares. This method is currently widely used in neurosurgical medical image navigation, registering with facial feature points such as the eye corners and nose tip. Its landmarks are easy to obtain, but the probe needle must touch the chest and abdomen surface when picking points, which deforms the soft tissue and severely degrades registration accuracy.
(2) Artificial marker method: several markers visible in the scan are attached to the object surface, a scanned image of the object is acquired, the attached markers are selected in the image in computer software, their spatial coordinates are obtained with an optical locator, and corresponding-point registration by least squares yields the optimal transformation matrix between the physical space of the object and the medical image space. This method is relatively accurate but cumbersome to operate; moreover, attaching markers requires direct contact with the object and leaves traces on the chest and abdomen surface, so it cannot meet the requirements of percutaneous thoracoabdominal puncture.
Therefore, there is a need to develop a registration method for soft-tissue regions such as the chest and abdomen that is simple to operate, highly automated, and contact-free.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention aims to provide a line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture.
The technical scheme adopted to solve this technical problem is as follows: the invention provides a line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture, characterized by comprising the following steps:
step 1, constructing a line-structured-light non-contact point cloud registration system for chest and abdomen percutaneous puncture: the system comprises a binocular camera, a cradle head, a line structured light emission source, a motion control board and a computer;
the line structure light emission source is fixed on the cradle head; the motion control board is used for controlling the motion of the cradle head; line structure light emitted by the line structure light emitting source can be projected to a chest and abdomen target area; the binocular camera is used for capturing a chest and abdomen target area with line structured light; the computer is used for receiving the original scanning image of the binocular camera, processing the original scanning image and calculating a registration result;
step 2, scanning a chest and abdomen target area: firstly, starting a binocular camera and judging a chest and abdomen target area; then, the line structure light emitting source emits line structure light to the boundary of the chest and abdomen target area to obtain the limit position of the chest and abdomen target area; determining the movement range and the movement rotation angle of the cradle head according to the limit position; then the cradle head rotates, the line structure light emission source modulates and emits line structure light, and the line structure light is clearly projected to a chest and abdomen target area; finally, optical parameters of the binocular camera are adjusted, and after the self-adaptive surrounding environment achieves a shooting state capable of clearly distinguishing the ambient light and line structured light, the binocular camera shoots to obtain an original scanning image;
step 3, carrying out line structure light extraction on the original scanning image: transmitting the original scanning image to a computer, performing preliminary processing on the original scanning image, filtering redundant image information, and further increasing the distinction between ambient light and line structure light to obtain a line structure light extraction image; the line structured light extraction image comprises a left eye image and a right eye image;
and 4, realizing three-dimensional of two-dimensional line structure light information in the line structure light extraction image, and completing depth reconstruction based on stereoscopic vision: firstly, self-calibration of a binocular camera is completed, and the best stereoscopic vision state is obtained in a self-adaptive mode; acquiring pixel coordinate values of the same two-dimensional point in the left-eye image and the right-eye image in the line structure light information, establishing a mapping relation, and further mapping all the two-dimensional points in the line structure light information one to obtain a plurality of two-dimensional matching point pairs; finally, carrying out three-dimensional treatment on all the two-dimensional matching point pairs to obtain corresponding three-dimensional points of each two-dimensional matching point pair; acquiring three-dimensional coordinates of the three-dimensional points in a world coordinate system to obtain three-dimensional coordinate information, and completing depth reconstruction based on stereoscopic vision;
step 5, the three-dimensional coordinate information is collected, screened, and assembled into a point cloud set, from which the physical surface point cloud of the chest and abdomen target area is constructed for registration: firstly, the three-dimensional coordinate information is screened, filtering out erroneous results and environmental noise points, to obtain the three-dimensional coordinates that meet the accuracy condition and form the point cloud set; then, according to the actual clinical application and the required algorithm execution speed, the point cloud set is downsampled on a grid to obtain a uniform physical surface point cloud best suited for registration;
step 6, registering the physical surface point cloud of the chest and abdomen target area with the three-dimensional chest and abdomen medical image: firstly, all spatial pose states that the three-dimensional chest and abdomen medical image may take in the computer medical image coordinate system are enumerated, and the initial transformation matrix T_init^i of each spatial pose state relative to the initial pose is recorded; the centroid coordinates of the physical surface point cloud and of the three-dimensional chest and abdomen medical image in each spatial pose state are calculated; the three-dimensional chest and abdomen medical image in every spatial pose state is translated until its centroid is aligned with the centroid of the physical surface point cloud, and all centroid transformation matrices T_cent^i are recorded; all translated spatial pose states are traversed and ICP registration is performed, giving all ICP transformation matrices T_icp^i; the ICP registration error RMSE_i after ICP registration is calculated for every spatial pose state; all ICP registration errors RMSE_i are then compared to obtain the minimum ICP registration error RMSE_k and the spatial pose state corresponding to that minimum; finally, the total iterative transformation matrix of the minimum-error pose state — the composition of T_init^k, T_cent^k and T_icp^k — is computed, yielding the registration result.
Compared with the prior art, the invention has the beneficial effects that:
(1) The registration method disclosed by the invention does not contact the surface of the chest and abdomen, does not additionally adhere a marker and does not interfere with the original workflow of an operation, so that the high-efficiency and accurate registration which is free of contact, full-automatic, deformation-free and independent of any marker is realized, the complexity of registration is reduced, and the precision and efficiency of registration are greatly improved.
(2) The invention only needs to carry out the fine registration once, avoids the complicated coarse registration, reduces the complexity of the registration, has high execution speed of the whole process and improves the registration efficiency.
(3) The invention acquires the surface point cloud of the chest and abdomen area by means of line structured light without contacting with the chest and abdomen.
(4) The invention has high automation degree, does not need manual operation, and reduces the operation load.
(5) The invention can rapidly, accurately and efficiently complete registration of the chest and abdomen coordinate system and the chest and abdomen computer medical image coordinate system in real space, is assisted by the chest and abdomen medical image navigation software, provides a key basis for soft tissue percutaneous puncture operations of chest and abdomen and the like under subsequent image navigation, guides doctors to perform high-precision and high-efficiency chest and abdomen percutaneous puncture operations, reduces the operation burden of doctors and contacts with the chest and abdomen, improves the operation precision, ensures the operation safety, and has certain social value and economic benefit.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a block diagram of the registration system of the present invention;
FIG. 3 is a line structured light extraction image of step 3 of the present invention;
FIG. 4 is an effect diagram of the stereoscopic based depth reconstruction of step 4 of the present invention;
FIG. 5 is a physical surface point cloud of the chest and abdomen target area for registration of step 5 of the present invention;
fig. 6 is a diagram of all spatial pose states that the three-dimensional chest and abdomen medical image of step 6 of the present invention may be in a computer medical image coordinate system.
In the figures: 1 binocular camera, 2 cradle head, 3 line structured light emission source, 4 motion control board, 5 computer, 6 chest and abdomen target area, 7 physical surface point cloud, 8 three-dimensional chest and abdomen medical image.
Detailed Description
Specific examples of the present invention are given below. They are provided only to illustrate the present invention in further detail and do not limit the scope of the claims.
The invention provides a line-structured-light non-contact point cloud registration method (the method for short) for chest and abdomen percutaneous puncture, characterized by comprising the following steps:
Step 1, constructing a line-structured-light non-contact point cloud registration system (the system for short) for chest and abdomen percutaneous puncture: the system comprises a binocular camera 1, a cradle head 2, a line structured light emission source 3, a motion control board 4 and a computer 5;
the line structure light emission source 3 is fixed on the cradle head 2; the motion control board 4 transmits motor motion data to a motor of the cradle head 2 for controlling the motion of the cradle head 2; the line structure light emitted from the line structure light emitting source 3 can be projected to the chest and abdomen target area 6; the binocular camera 1 is used for capturing a chest and abdomen target area 6 with line structured light; the computer 5 is used for receiving the original scanning image of the binocular camera 1, processing the original scanning image and calculating a registration result;
preferably, in step 1, the pan-tilt 2 has two degrees of freedom of rotation, and intelligent motion control of the pan-tilt 2 is achieved by using the motion control board 4. The motion control board 4 is an Arduino board, preferably an UNO type Arduino board.
Step 2, scanning the chest and abdomen target area 6: firstly, starting the binocular camera 1 and judging a chest and abdomen target area 6; then the line structure light emitting source 3 emits line structure light to the boundary of the chest and abdomen target area 6 to obtain the limit position of the chest and abdomen target area 6; determining the movement range and the movement rotation angle of the cradle head 2 fixedly connected with the line structure light emission source 3 according to the limit position; then the cradle head 2 rotates, the line structure light emitting source 3 modulates and emits line structure light, and the line structure light is clearly projected to the chest and abdomen target area 6; finally, optical parameters of the binocular camera 1 are adjusted, after the self-adaptive surrounding environment achieves a shooting state capable of clearly distinguishing the ambient light and line structure light, the binocular camera 1 performs high-frequency exposure shooting to obtain an original scanning image;
preferably, in step 2, the optical parameters of the binocular camera 1 include exposure, brightness and gain of the camera.
Step 3, carrying out line structure light extraction on the original scanning image: transmitting the original scanning image to a computer 5, performing preliminary processing on the original scanning image in the computer 5, filtering redundant image information to reduce subsequent calculation load, and further increasing the distinction between ambient light and line structure light to obtain a line structure light extraction image (shown in fig. 3); the line structured light extraction image comprises a left eye image and a right eye image;
preferably, in step 3, the preliminary processing includes graying, noise reduction, smoothing, and cutting.
Step 4, converting the two-dimensional line structured light information in the line structured light extraction image into three dimensions and completing depth reconstruction based on stereoscopic vision: firstly, self-calibration of the binocular camera 1 is completed, adaptively reaching the best stereoscopic vision state; then the pixel coordinate values of the same two-dimensional point in the left-eye and right-eye images of the line structured light information are acquired and a mapping relation is established, and all two-dimensional points in the line structured light information are matched one by one to obtain a set of two-dimensional matching point pairs; finally, all two-dimensional matching point pairs are lifted to three dimensions, giving the three-dimensional point corresponding to each matching point pair; the three-dimensional coordinates of these points in the world coordinate system are acquired as the three-dimensional coordinate information, completing the stereoscopic-vision-based depth reconstruction (shown in fig. 4);
preferably, in step 4, the self-calibration of the binocular camera 1 includes calibration of the internal reference matrix of the binocular camera 1, distortion correction of the image and binocular polar correction vertical coordinate alignment.
Preferably, in step 4, the specific steps for obtaining a two-dimensional matching point pair are: take the i-th two-dimensional point L_i in the left-eye line structured light information of the left-eye image, whose pixel height is h_i^L; then, using L_i as a reference point, search the right-eye line structured light information for the pixel point R_i whose pixel height h_i^R is closest to h_i^L; when L_i and R_i satisfy |h_i^L − h_i^R| ≤ ε, L_i and R_i form a two-dimensional matching point pair; ε is a preset threshold, set according to the error of the epipolar rectification.
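Assuming the images are epipolar-rectified so that matching points share (nearly) the same row, the height-matching rule above reduces to a closest-row search with a threshold. A sketch with illustrative names:

```python
import numpy as np

def match_line_points(left_heights, right_heights, eps):
    """Pair each left-image line point with the right-image line point of
    closest pixel height; accept the pair only when |hL - hR| <= eps.

    left_heights, right_heights : 1-D sequences of row coordinates of the
    extracted line pixels in the rectified left/right images.
    Returns a list of (left_index, right_index) pairs.
    """
    right = np.asarray(right_heights, dtype=float)
    pairs = []
    for i, h in enumerate(left_heights):
        j = int(np.argmin(np.abs(right - h)))   # closest-height candidate
        if abs(right[j] - h) <= eps:            # epipolar consistency check
            pairs.append((i, j))
        # else: no match within the rectification-error budget; drop the point
    return pairs
```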
Preferably, in step 4, the three-dimensionalization is specifically: the depth information of each two-dimensional matching point pair is reconstructed with a steady-state triangulation algorithm, giving the three-dimensional point corresponding to each pair; the steady-state triangulation constraint is shown in formula (1):
d_2 · K_2^(−1) · (u_2, v_2, 1)^T = R · d_1 · K_1^(−1) · (u_1, v_1, 1)^T + t        (1)
In formula (1), (u_1, v_1) is the pixel coordinate of a three-dimensional point in the left camera and d_1 its depth in the left camera; (u_2, v_2) is its pixel coordinate in the right camera and d_2 its depth in the right camera; K_1 is the left-camera intrinsic matrix and K_2 the right-camera intrinsic matrix; R is the rotation matrix from the left camera to the right camera; t is the translation from the left camera to the right camera.
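The two-view constraint is linear in the two unknown depths d_1 and d_2, so they can be recovered by linear least squares — one stable way to handle noisy, slightly inconsistent rays. The sketch below is illustrative; the patent's steady-state algorithm may differ in detail:

```python
import numpy as np

def triangulate(uv1, uv2, K1, K2, R, t):
    """Recover the 3-D point (in the left-camera frame) from one
    two-dimensional matching point pair using the constraint
        d2 * K2^-1 * p2 = R * (d1 * K1^-1 * p1) + t.
    Solves for the depths [d1, d2] by linear least squares."""
    r1 = np.linalg.inv(K1) @ np.array([uv1[0], uv1[1], 1.0])  # left ray
    r2 = np.linalg.inv(K2) @ np.array([uv2[0], uv2[1], 1.0])  # right ray
    A = np.column_stack((-R @ r1, r2))           # unknowns: [d1, d2]
    depths, *_ = np.linalg.lstsq(A, np.asarray(t, float), rcond=None)
    d1 = depths[0]
    return d1 * r1                               # 3-D point, left-camera frame
```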
Step 5, the three-dimensional coordinate information is collected, screened, and assembled into a point cloud set, from which the physical surface point cloud 7 of the chest and abdomen target area 6 is constructed for registration: firstly, the three-dimensional coordinate information is screened, filtering out erroneous results and environmental noise points, to obtain the three-dimensional coordinates that meet the accuracy condition and form the point cloud set; then, according to the actual clinical application and the required algorithm execution speed, the point cloud set is downsampled on a grid to obtain a uniform physical surface point cloud 7 best suited for registration (shown in fig. 5);
preferably, in step 5, the error result is filtered from the ambient noise using a differential criterionDifferential criterionDifference criterion->To screen three-dimensional coordinates meeting the accuracy condition.
Preferably 1, in step 1 and step 5, the difference criterion indicates acquisition of two adjacent points P i And P i-1 The state of change of Z coordinate value along Y direction 1 in computer medical image coordinate system when the difference value is greater than threshold epsilon 1 Reject Point P i . In this embodiment, according to the actual test result ε 1 Take a value of 4.5.
Preferably 1, in step 1, step 5, the differentiation criterion indicates acquisition of two adjacent points P i And P i-1 The ratio of the Z-direction coordinate 1 differences in the computer medical image coordinate system when the differential value is greater than the threshold epsilon 2 Reject Point P i . In this embodiment, according to the actual test result ε 2 Take a value of 0.1.
Preferably 1, in step 1, step 5, the difference criterion indicates acquisition of two adjacent points P i And P i-1 The difference in Z direction in the coordinate system of the computer medical image when the difference is greater than a threshold epsilon 3 Reject Point P i . In this embodiment, according to the actual test result ε 3 And the value is 15.
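These adjacent-point screening rules can be expressed as a small filter. The sketch below is an interpretation, not the patent's exact code: it implements the Z-change-along-Y test (threshold ε1) and the plain Z-difference test (ε3), compares each point against the last kept point, and omits the ratio-based criterion:

```python
import numpy as np

def screen_points(points, eps1=4.5, eps3=15.0):
    """Keep points whose Z coordinate behaves smoothly along the scan.

    points : (N, 3) array-like ordered along the scan line (X, Y, Z columns).
    A point is rejected when, relative to the previous kept point, the change
    of Z per unit Y exceeds eps1 or the absolute Z difference exceeds eps3.
    The first point is always kept.
    """
    pts = np.asarray(points, dtype=float)
    keep = [pts[0]]
    for p in pts[1:]:
        prev = keep[-1]
        dy = p[1] - prev[1]
        dz = p[2] - prev[2]
        dz_dy = abs(dz / dy) if dy != 0 else abs(dz)  # Z change along Y
        if dz_dy > eps1 or abs(dz) > eps3:
            continue                                   # reject noisy point
        keep.append(p)
    return np.array(keep)
```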
Preferably, in step 5, the gridded downsampling is implemented with the VoxelGrid filter in the PCL library. The algorithm replaces all points inside each voxel with the voxel centroid, completing the filtering of the point cloud.
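The VoxelGrid behaviour described above — one centroid per occupied voxel — is easy to reproduce without PCL (an illustrative equivalent, not the PCL implementation):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Grid space into cubes of edge `voxel_size` and return one point per
    occupied voxel: the centroid of the points that fall inside it."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel_size).astype(np.int64)   # voxel index per point
    buckets = {}
    for key, p in zip(map(tuple, keys), pts):
        buckets.setdefault(key, []).append(p)            # group by voxel
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```

The result is a uniform, sparser cloud whose density is controlled by a single parameter, which is what makes the step suited to trading accuracy against ICP execution speed.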
Step 6, registering the physical surface point cloud 7 of the chest and abdomen target area 6 with the three-dimensional chest and abdomen medical image 8: firstly, all spatial pose states that the three-dimensional chest and abdomen medical image 8 may take in the computer medical image coordinate system are enumerated (as shown in fig. 6), and the initial transformation matrix T_init^i of each spatial pose state relative to the initial pose is recorded; the centroid coordinates of the physical surface point cloud 7 and of the three-dimensional chest and abdomen medical image 8 in each spatial pose state are calculated; the three-dimensional chest and abdomen medical image 8 in every spatial pose state is translated until its centroid is aligned with the centroid of the physical surface point cloud 7, and all centroid transformation matrices T_cent^i are recorded; all translated spatial pose states are traversed and ICP registration is performed, giving all ICP transformation matrices T_icp^i; the ICP registration error RMSE_i after ICP registration is calculated for every spatial pose state; all ICP registration errors RMSE_i are then compared to obtain the minimum ICP registration error RMSE_k and the spatial pose state corresponding to that minimum; finally, the total iterative transformation matrix of the minimum-error pose state — the composition of T_init^k, T_cent^k and T_icp^k — is computed, yielding the registration result.
Preferably, the three-dimensional chest and abdomen medical image 8 is obtained by three-dimensional reconstruction of the chest and abdomen medical image.
Preferably, in step 6, the enumeration is specifically: the three-dimensional chest and abdomen medical image 8 is cycled through rotations about the X, Y and Z axes of the computer medical image coordinate system with angular step α, giving (360/α)^3 spatial pose states in total.
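Cycling the three axes with step α to generate the candidate initial rotations might look as follows (the Euler composition order X·Y·Z is an assumption, and any valid per-axis sign convention works for enumeration purposes):

```python
from itertools import product

import numpy as np

def rot(axis, deg):
    """Rotation matrix about one coordinate axis (0 = X, 1 = Y, 2 = Z)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(3)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    m[i, i], m[i, j], m[j, i], m[j, j] = c, -s, s, c
    return m

def enumerate_poses(alpha):
    """All initial rotations obtained by stepping each axis through
    0..360 degrees with step alpha: (360/alpha)**3 pose states."""
    angles = np.arange(0, 360, alpha)
    return [rot(0, ax) @ rot(1, ay) @ rot(2, az)
            for ax, ay, az in product(angles, angles, angles)]
```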
Preferably, in step 6, the ICP registration error RMSE_i is computed as in formula (2):
RMSE_i = sqrt( (1/N_p) · Σ_{j=1}^{N_p} ‖ R·p_j + t − q_j^i ‖² )        (2)
In formula (2), N_p is the number of points in the physical surface point cloud; p_j is the j-th point of the physical surface point cloud; q_j^i is its corresponding point on the three-dimensional chest and abdomen medical image (8) in the i-th spatial pose state; R and t are the rotation matrix and translation vector of the ICP transformation matrix T_icp^i.
Matters not described in detail herein follow the prior art.
Claims (10)
1. A line-structured-light non-contact point cloud registration method for chest and abdomen percutaneous puncture, characterized by comprising the following steps:
step 1, constructing a line-structured-light non-contact point cloud registration system for chest and abdomen percutaneous puncture: the system comprises a binocular camera (1), a cradle head (2), a line structured light emission source (3), a motion control board (4) and a computer (5);
the line structure light emission source (3) is fixed on the cradle head (2); the motion control board (4) is used for controlling the motion of the cradle head (2); the line structure light emitted by the line structure light emitting source (3) can be projected to a chest and abdomen target area (6); the binocular camera (1) is used for capturing a chest and abdomen target area (6) with line structured light; the computer (5) is used for receiving the original scanning image of the binocular camera (1), processing the original scanning image and calculating a registration result;
step 2, scanning the chest and abdomen target area (6): firstly, the binocular camera (1) is started and the chest and abdomen target area (6) is identified; then, the line structure light emitting source (3) emits line structure light to the boundary of the chest and abdomen target area (6) to obtain the limit positions of the chest and abdomen target area (6); the movement range and rotation angle of the cradle head (2) are determined according to the limit positions; the cradle head (2) then rotates while the line structure light emitting source (3) modulates the emitted line structure light so that it is clearly projected onto the chest and abdomen target area (6); finally, the optical parameters of the binocular camera (1) are adjusted to adapt to the surrounding environment until a shooting state is reached in which ambient light and line structure light can be clearly distinguished, and the binocular camera (1) shoots to obtain an original scanning image;
step 3, carrying out line structure light extraction on the original scanning image: transmitting the original scanning image to a computer (5), performing preliminary processing on the original scanning image, filtering redundant image information, and further increasing the distinction between ambient light and line structure light to obtain a line structure light extraction image; the line structured light extraction image comprises a left eye image and a right eye image;
step 4, three-dimensionalizing the two-dimensional line structure light information in the line structure light extraction image, and completing depth reconstruction based on stereoscopic vision: firstly, self-calibration of the binocular camera (1) is completed, and the optimal stereoscopic vision state is obtained adaptively; then, pixel coordinate values of the same two-dimensional point of the line structure light information in the left-eye image and the right-eye image are acquired and a mapping relation is established, and all two-dimensional points in the line structure light information are further mapped one to one to obtain a plurality of two-dimensional matching point pairs; finally, all the two-dimensional matching point pairs are three-dimensionalized to obtain the corresponding three-dimensional point of each two-dimensional matching point pair; three-dimensional coordinates of the three-dimensional points in a world coordinate system are acquired to obtain three-dimensional coordinate information, completing the depth reconstruction based on stereoscopic vision;
step 5, three-dimensional coordinate information is induced, screened and acquired to form a point cloud set, and a real object surface point cloud (7) for registering the chest and abdomen target area (6) is constructed: firstly, screening three-dimensional coordinate information, filtering error results and environmental noise points to obtain three-dimensional coordinates meeting accuracy conditions, and obtaining a point cloud set; then, according to the actual clinical application and the algorithm execution speed requirement, the point cloud set is subjected to gridding downsampling to obtain uniform object surface point cloud (7) which is most suitable for registration;
step 6, registering the real object surface point cloud (7) of the chest and abdomen target area (6) with the three-dimensional chest and abdomen medical image (8): firstly, enumerate all spatial pose states that the three-dimensional chest and abdomen medical image (8) may assume in the computer medical image coordinate system, and record the initial transformation matrix of each spatial pose state relative to the initial pose; calculate the centroid coordinates of the object surface point cloud (7) and the centroid coordinates of the three-dimensional chest and abdomen medical image (8) in each spatial pose state; translate the three-dimensional chest and abdomen medical image (8) in every spatial pose state until its centroid is aligned with the centroid of the object surface point cloud (7), and record all centroid transformation matrices; traverse all translated spatial pose states and perform ICP registration to obtain all ICP transformation matrices; calculate the ICP registration error RMSE_i of the ICP registration in every spatial pose state; then compare all ICP registration errors RMSE_i to obtain the minimum ICP registration error RMSE_k and the spatial pose state corresponding to the minimum error; finally, calculate the iterative transformation matrix in the spatial pose state corresponding to the minimum ICP registration error RMSE_k, and obtain the registration result.
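The gridded downsampling of step 5 in claim 1 can be sketched as voxel-grid averaging; a minimal numpy version follows, where the voxel edge length is an assumed parameter chosen by the user:

```python
import numpy as np

def voxel_downsample(points, voxel=2.0):
    """Grid (voxel) downsampling: replace the points falling in each voxel
    of edge length `voxel` by their centroid, yielding a uniform cloud."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((int(inv.max()) + 1, points.shape[1]))
    for d in range(points.shape[1]):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

Production pipelines would typically call a point cloud library's voxel filter instead, but the effect is the same.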
2. The line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 1, the cradle head (2) has two rotational degrees of freedom; the motion control board (4) adopts an Arduino board.
3. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1 wherein in step 2, the optical parameters of the binocular camera (1) include exposure, brightness and gain of the camera.
4. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 3, the preliminary processing includes graying, noise reduction, smoothing and cutting.
5. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 4, the self-calibration of the binocular camera (1) includes calibration of the internal reference matrix of the binocular camera (1), distortion correction of the images, and binocular epipolar rectification for vertical coordinate alignment.
6. The line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 4, the specific step of obtaining a two-dimensional matching point pair is: take the i-th two-dimensional point L_i in the left-eye line structure light information of the left-eye image, whose pixel height is h_i^L; then, with L_i as the reference point, find in the right-eye line structure light information the pixel point R_i whose pixel height h_i^R is closest to h_i^L; when L_i and R_i satisfy |h_i^L − h_i^R| ≤ ε, L_i and R_i form a two-dimensional matching point pair; ε represents a preset threshold value set according to the error of the epipolar correction.
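The height-based matching in claim 6 can be illustrated as below. This is a simplified sketch: the points are assumed to be (u, v) pixel coordinates after epipolar rectification, and the threshold value is illustrative rather than taken from the patent.

```python
import numpy as np

def match_rows(left_pts, right_pts, eps=1.0):
    """Pair each left laser-line point with the right-image point of nearest
    pixel height (v coordinate), accepting pairs with |v_L - v_R| <= eps."""
    pairs = []
    rv = right_pts[:, 1]                    # pixel heights of right-eye points
    for L in left_pts:
        k = int(np.argmin(np.abs(rv - L[1])))
        if abs(rv[k] - L[1]) <= eps:        # threshold test from claim 6
            pairs.append((L, right_pts[k]))
    return pairs
```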
7. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 4, the three-dimensionalization specifically is: reconstructing depth information of the two-dimensional matching point pairs by adopting a steady-state triangulation algorithm, and further obtaining the corresponding three-dimensional point of each two-dimensional matching point pair; the steady-state triangulation algorithm is shown in formula (1):

d_2 · K_2^{-1} · [u_2, v_2, 1]^T = R · d_1 · K_1^{-1} · [u_1, v_1, 1]^T + t    (1)

In formula (1), (u_1, v_1) is the pixel coordinate of a three-dimensional point in the left camera and d_1 is its depth in the left camera; (u_2, v_2) is the pixel coordinate of the three-dimensional point in the right camera and d_2 is its depth in the right camera; K_1 is the internal reference matrix of the left camera and K_2 is the internal reference matrix of the right camera; R is the rotation matrix from the left camera to the right camera; t is the translation matrix from the left camera to the right camera.
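Formula (1) is linear in the two unknown depths d_1 and d_2, so a least-squares solve recovers them directly. The sketch below (with illustrative camera parameters, not values from the patent) shows one way to do this:

```python
import numpy as np

def triangulate(uv1, uv2, K1, K2, R, t):
    """Solve formula (1), d2*K2^-1*[u2,v2,1]^T = R*d1*K1^-1*[u1,v1,1]^T + t,
    for the depths d1, d2 in the least-squares sense."""
    x1 = np.linalg.solve(K1, np.array([uv1[0], uv1[1], 1.0]))  # normalized left ray
    x2 = np.linalg.solve(K2, np.array([uv2[0], uv2[1], 1.0]))  # normalized right ray
    A = np.stack([R @ x1, -x2], axis=1)                        # 3x2 system in (d1, d2)
    d, *_ = np.linalg.lstsq(A, -np.asarray(t, float), rcond=None)
    d1, d2 = d
    return d1 * x1, d1, d2    # 3-D point in the left-camera frame, plus both depths
```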
8. The line structured light contact-free cloud registration method for percutaneous puncture of chest and abdomen according to claim 1, wherein in step 5, the filtering of error results and environmental noise points is completed by a differential criterion, a ratio criterion and a difference criterion;

the differential criterion acquires the change state of the Z coordinate value of two adjacent points P_i and P_{i-1} along the Y direction in the computer medical image coordinate system;

the ratio criterion acquires the ratio of the Z-direction coordinate differences of two adjacent points P_i and P_{i-1} in the computer medical image coordinate system;

the difference criterion acquires the difference of two adjacent points P_i and P_{i-1} in the Z direction in the computer medical image coordinate system.
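One plausible reading of the three criteria of claim 8 is sketched below. The thresholds and the way the criteria are combined are assumptions, since the patent does not give numeric values or combination logic:

```python
import numpy as np

def filter_by_criteria(points, slope_max=5.0, ratio_max=50.0, diff_max=3.0):
    """Illustrative filter over consecutive points P_{i-1}, P_i (sorted by Y):
      * differential criterion: |dZ/dY| (change of Z along the Y direction)
      * ratio criterion: ratio of successive Z differences
      * difference criterion: absolute Z-direction difference
    Threshold values are assumed, not taken from the patent."""
    y, z = points[:, 1], points[:, 2]
    dy, dz = np.diff(y), np.diff(z)
    slope = np.abs(np.divide(dz, dy, out=np.full_like(dz, np.inf), where=dy != 0))
    ratio = np.abs(np.divide(dz[1:], dz[:-1],
                             out=np.full_like(dz[1:], np.inf), where=dz[:-1] != 0))
    keep = np.ones(len(points), dtype=bool)
    keep[1:] &= (slope <= slope_max) & (np.abs(dz) <= diff_max)
    keep[2:] &= ratio <= ratio_max
    return points[keep]
```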
9. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 6, the specific operation of the enumeration is: cycle through the X, Y and Z axes in the computer medical image coordinate system with angular step length α, obtaining in total (360/α)³ spatial pose states of the three-dimensional chest and abdomen medical image (8).
10. The line structured light contact-free cloud registration method for chest and abdomen percutaneous puncture according to claim 1, wherein in step 6, the ICP registration error RMSE_i is calculated as shown in formula (2):

RMSE_i = sqrt( (1/N_p) · Σ_{j=1}^{N_p} || q_j^i − (R·p_j + t) ||² )    (2)

In formula (2), N_p is the number of points in the spatial surface point cloud, p_j is the j-th point in the spatial surface point cloud, q_j^i is the spatial surface point in the three-dimensional chest and abdomen medical image (8) corresponding to p_j under the i-th spatial pose state, and R and t are the rotation matrix and translation matrix of the ICP transformation matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310975574.XA CN116883471B (en) | 2023-08-04 | 2023-08-04 | Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116883471A CN116883471A (en) | 2023-10-13 |
CN116883471B true CN116883471B (en) | 2024-03-15 |
Family
ID=88264466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310975574.XA Active CN116883471B (en) | 2023-08-04 | 2023-08-04 | Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116883471B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117224233B (en) * | 2023-11-09 | 2024-02-20 | 杭州微引科技有限公司 | Integrated perspective CT and interventional operation robot system and use method thereof |
CN118229930B (en) * | 2024-04-03 | 2024-09-10 | 艾瑞迈迪医疗科技(北京)有限公司 | Near infrared optical tracking method and system |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108154552A (en) * | 2017-12-26 | 2018-06-12 | 中国科学院深圳先进技术研究院 | A kind of stereo laparoscope method for reconstructing three-dimensional model and device |
CN109272537A (en) * | 2018-08-16 | 2019-01-25 | 清华大学 | A kind of panorama point cloud registration method based on structure light |
CN110070598A (en) * | 2018-01-22 | 2019-07-30 | 宁波盈芯信息科技有限公司 | Mobile terminal and its progress 3D scan rebuilding method for 3D scan rebuilding |
CN110415342A (en) * | 2019-08-02 | 2019-11-05 | 深圳市唯特视科技有限公司 | A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors |
CN110443840A (en) * | 2019-08-07 | 2019-11-12 | 山东理工大学 | The optimization method of sampling point set initial registration in surface in kind |
CN110731817A (en) * | 2019-10-11 | 2020-01-31 | 浙江大学 | radiationless percutaneous spine positioning method based on optical scanning automatic contour segmentation matching |
CN111053598A (en) * | 2019-12-03 | 2020-04-24 | 天津大学 | Augmented reality system platform based on projector |
CN112053432A (en) * | 2020-09-15 | 2020-12-08 | 成都贝施美医疗科技股份有限公司 | Binocular vision three-dimensional reconstruction method based on structured light and polarization |
WO2021088481A1 (en) * | 2019-11-08 | 2021-05-14 | 南京理工大学 | High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection |
CN113643427A (en) * | 2021-08-09 | 2021-11-12 | 重庆亲禾智千科技有限公司 | Binocular ranging and three-dimensional reconstruction method |
CN114937139A (en) * | 2022-06-01 | 2022-08-23 | 天津大学 | Endoscope augmented reality system and method based on video stream fusion |
CN115222893A (en) * | 2022-08-09 | 2022-10-21 | 沈阳度维科技开发有限公司 | Three-dimensional reconstruction splicing method for large-size components based on structured light measurement |
CN115546289A (en) * | 2022-10-27 | 2022-12-30 | 电子科技大学 | Robot-based three-dimensional shape measurement method for complex structural part |
CN115830217A (en) * | 2022-07-11 | 2023-03-21 | 深圳大学 | Method, device and system for generating point cloud of three-dimensional model of object to be modeled |
WO2023045455A1 (en) * | 2021-09-21 | 2023-03-30 | 西北工业大学 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
CN116421313A (en) * | 2023-04-14 | 2023-07-14 | 郑州大学第一附属医院 | Augmented reality fusion method in navigation of lung tumor resection operation under thoracoscope |
Non-Patent Citations (7)
Title |
---|
Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net; Shan Jiang; Computer Methods and Programs in Biomedicine (No. 211); full text *
Research on a fast three-dimensional model reconstruction method based on non-metric camera images; Huang Tengda; Li Youpeng; Lyu Yalei; Liu Yangyang; Journal of Henan University of Urban Construction (No. 01); full text *
Research on registration of adjacent scattered point clouds from a TOF three-dimensional camera; Zhang Xudong; Wu Guosong; Hu Liangmei; Wang Zhumeng; Journal of Mechanical Engineering (No. 12); full text *
Research on a robot binocular vision three-dimensional stitching method based on double registration; Ai Qinglin; Liu Sai; Shen Zhihui; Journal of Mechanical & Electrical Engineering (No. 10); full text *
Research on unsupervised-learning-based three-dimensional lung CT image registration; Jiang Shan; Journal of Tianjin University (Science and Technology); Vol. 55 (No. 3); full text *
Initial point cloud registration method constrained by image sequences; Sun Dianzhu; Shen Jianghua; Li Yanrui; Lin Wei; Journal of Mechanical Engineering (No. 09); full text *
Initial point cloud registration using camera pose estimation; Guo Qingda; Quan Yanming; Jiang Changcheng; Chen Jianwu; Optics and Precision Engineering (No. 06); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109464196B (en) | Surgical navigation system adopting structured light image registration and registration signal acquisition method | |
CN116883471B (en) | Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture | |
US11123144B2 (en) | Registration of frames of reference | |
CN111627521B (en) | Enhanced utility in radiotherapy | |
US20190142359A1 (en) | Surgical positioning system and positioning method | |
Yang et al. | Automatic 3-D imaging and measurement of human spines with a robotic ultrasound system | |
JP2009501609A (en) | Method and system for mapping a virtual model of an object to the object | |
CN112509055B (en) | Acupuncture point positioning system and method based on combination of binocular vision and coded structured light | |
CN112215871A (en) | Moving target tracking method and device based on robot vision | |
CN112168357A (en) | System and method for constructing spatial positioning model of C-arm machine | |
US20230030343A1 (en) | Methods and systems for using multi view pose estimation | |
CN114404041B (en) | C-arm imaging parameter calibration system and method | |
CN113100941B (en) | Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system | |
CN114463482A (en) | Calibration model and method of optical tracking three-dimensional scanner and surgical navigation system thereof | |
CN117122414A (en) | Active tracking type operation navigation system | |
Sun et al. | Using cortical vessels for patient registration during image-guided neurosurgery: a phantom study | |
CN114886558A (en) | Endoscope projection method and system based on augmented reality | |
CN114283179A (en) | Real-time fracture far-near end space pose acquisition and registration system based on ultrasonic images | |
CN111743628A (en) | Automatic puncture mechanical arm path planning method based on computer vision | |
Wang et al. | Towards video guidance for ultrasound, using a prior high-resolution 3D surface map of the external anatomy | |
CN113100967B (en) | Wearable surgical tool positioning device and positioning method | |
CN112085698A (en) | Method and device for automatically analyzing left and right breast ultrasonic images | |
CN110432919A (en) | A kind of C arm X-ray film ray machine real time imagery control methods based on 3D model | |
Yang et al. | A novel neurosurgery registration pipeline based on heat maps and anatomic facial feature points | |
CN115880469B (en) | Registration method of surface point cloud data and three-dimensional image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||