CN116452992A - Method for extracting the centerline of a tubular structure via minimum paths
- Publication number: CN116452992A
- Application number: CN202310557810.6A
- Authority
- CN
- China
- Prior art keywords
- vertex
- minimum path
- tubular structure
- attention
- feature point
- Legal status: Granted
Classifications
- G06V 20/13: Satellite images
- G06V 20/182: Network patterns, e.g. roads or rivers
- G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V 10/82: Image or video recognition or understanding using neural networks
- G06N 3/0464: Convolutional networks [CNN, ConvNet]
- G06N 3/08: Learning methods
Abstract
A method for extracting the centerline of a tubular structure via minimum paths: a tubular structure image data set is fed into an Attention U-Net network to train the network, and the optimal weights and optimal biases are stored once training is complete. Feature points are detected with a Shi-Tomasi feature point detector to generate a feature point map, the feature point map is then fed into a minimum path algorithm, and the minimum path between the feature points is calculated by a minimum path algorithm fused with the Attention U-Net network, thereby completing the extraction of the centerline of the tubular structure. Fusing the Attention U-Net network into the minimum path algorithm lets the minimum path algorithm overcome the shortcut problem and increases the accuracy of the minimum path. At the same time, the minimum path algorithm provides a strong geometric prior for the Attention U-Net network, greatly improving its performance. The minimum path algorithm and the Attention U-Net network are mutually fused, executed alternately, and complement each other's strengths, so the centerline of a tubular structure can be extracted rapidly and accurately from a complex tubular structure image.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to a method for extracting the centerline of a tubular structure via minimum paths.
Background
Segmentation of the centerline of tubular structures is an important problem in the field of computer vision, with applications including medical image processing, robotic navigation, and three-dimensional modeling. Over the past decades, researchers have proposed many segmentation methods for the centerline of tubular structures, which can be categorized into graph theory-based methods, filtering-based methods, surface evolution-based methods, and deep learning-based methods.
Graph theory-based methods represent the tubular structure in the form of a graph and find the centerline through a graph-theoretic algorithm. One classical approach is minimum-cut tracing, which represents the tubular structure as a weighted graph and searches for the centerline with a minimum-cut algorithm. Shi et al. (ref: Jianbo Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, Aug. 2000, doi: 10.1109/34.868688) achieved good results in image segmentation with this approach. Such methods have high accuracy and robustness, but their computational complexity is high because a complex graph model must be built. Filtering-based methods extract the tubular structure by passing the image through a series of filters; common choices include Gaussian filtering and Hessian-matrix-based filtering. These methods are computationally efficient, but the filter parameters must be tuned manually, which limits them for tubular structures of different shapes and sizes. Surface evolution-based methods represent the tubular structure as a surface and search for the centerline through a surface evolution process; common methods include Level Set models and Active Contour models. This approach adapts to tubular structures of different shapes and sizes, but it has high computational complexity and requires long computation times. As deep learning has developed, more and more researchers have tried to apply it to the segmentation of tubular structure centerlines; common methods are based on convolutional neural networks (CNN) or recurrent neural networks (RNN). These methods achieve high accuracy and robustness, but they require large amounts of training data and can suffer from overfitting.
In summary, segmentation of the centerline of tubular structures is an important problem in the field of computer vision, and each of the existing techniques and methods has its own advantages and disadvantages.
Disclosure of Invention
To overcome the above shortcomings of the prior art, the invention provides a method capable of rapidly and accurately extracting the centerline of a tubular structure from a complex tubular structure image.
The technical scheme adopted for overcoming the technical problems is as follows:
the extraction method of the center line of the tubular structure of the minimum path comprises the following steps:
a) Acquiring m suburban road images containing annotation information of suburban road centerlines to form a training set D'_1, D'_1 = {T_1, T_2, ..., T_i, ..., T_m}, where T_i is the i-th suburban road image, i ∈ {1, 2, ..., m}; acquiring n suburban road images containing annotation information of suburban road centerlines to form a test set D'_2, D'_2 = {T_1, T_2, ..., T_j, ..., T_n}, where T_j is the j-th suburban road image, j ∈ {1, 2, ..., n};
b) Splitting the suburban road images of the test set D'_2, correcting one quarter of the total number of the split images so that the road centerline in each image lies at the center of the image and is perpendicular to the image edges, forming a positive sample set D'_2+ from the corrected images and a negative sample set D'_2- from the remaining three quarters of the split images, adding the positive sample set D'_2+ and the negative sample set D'_2- to obtain a data set D, and dividing the data set D into a training set train and a test set test;
c) Inputting images in a training set train into an Attention U-Net network, training the Attention U-Net network, and obtaining a trained Attention U-Net network;
d) Inputting the j-th suburban road image T_j of the test set D'_2 into a feature point extraction layer to obtain a feature point map H_t and a feature point set Q_t, defining a point set S_t disjoint from the feature point set Q_t, Q_t ∩ S_t = φ, and initializing the point set S_t as an empty set;
e) Inputting the feature point map H_t into a minimum path calculation layer to obtain the centerline of the tubular structure in the j-th suburban road image T_j.
Further, in step a), m suburban road images including annotation information for suburban road centerlines are acquired from the satellite image dataset EPFL, and n suburban road images including annotation information for suburban road centerlines are acquired.
Further, in step b), the PIL library is imported in Python, the Image package is imported from the PIL library, and the crop() function in the Image package is used to split the suburban road images of the test set D'_2.
Preferably, in step b) the data set D is divided into training and test sets in a ratio of 7:3.
Further, step d) comprises the steps of:
d-1) Inputting the j-th suburban road image T_j of the test set D'_2 into the feature point extraction layer and detecting feature points with a Shi-Tomasi feature point detector to obtain a feature point map H_t;
d-2) Generating the feature point set Q_t from the feature points in the feature point map H_t.
Further, step e) comprises the steps of:
e-1) Inputting the feature point map H_t into the trained Attention U-Net network and outputting an attention weight coefficient map H_att;
e-2) Building an undirected graph G_t from the feature point map H_t, G_t = (V_t, E_t), where V_t is the vertex set of the undirected graph G_t, v_i is the i-th vertex of G_t, v_j is the j-th vertex of G_t, v_u is the u-th vertex of G_t, i ∈ {1, 2, ..., o}, j ∈ {1, 2, ..., o}, u ∈ {1, 2, ..., o}, o is the total number of vertices in G_t, the feature point set Q_t is a proper subset of the set V_t, E_t is the edge set of the undirected graph G_t, and e_{i,j} ∈ E_t is the edge between the i-th vertex v_i and the j-th vertex v_j;
e-3) Taking the i-th vertex v_i as the starting vertex and the u-th vertex v_u as the ending vertex, removing the starting vertex v_i from the feature point set Q_t and adding it to the point set S_t;
e-4) Calculating the minimum path between the starting vertex v_i and the ending vertex v_u, and taking this minimum path as the centerline of the tubular structure between the starting vertex v_i and the ending vertex v_u;
e-5) Repeating steps e-3) to e-4) until all vertices in the feature point set Q_t have been added to the point set S_t, obtaining the centerline of the tubular structure in the j-th suburban road image T_j.
Further, step e-4) comprises the steps of:
e-4.1) Dividing the set V_t into two disjoint sets Q_1 and S_1, Q_1 ∩ S_1 = φ, where the set Q_1 contains all vertices of the set V_t and the set S_1 is an empty set;
e-4.2) Calculating the weight of the edge between the i-th vertex v_i and the j-th vertex v_j from the Frangi filter tube measure and the Euclidean distance between the i-th vertex v_i and the j-th vertex v_j, where both ε and λ are constants, to obtain the initial weight set of the edges in E_t;
e-4.3) From the initial weight set of the edges, selecting, among the edges connected to the i-th vertex v_i, the vertex v_j with the minimum edge weight, setting the i-th vertex v_i as the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, removing the vertex v_j from the set Q_1 and adding it to the set S_1;
e-4.4) Obtaining the attention coefficient of the vertex v_j from the attention weight coefficient map H_att. If the attention coefficient is greater than or equal to the threshold Th, selecting an adjacent vertex from the adjacent vertex set of the vertex v_j and setting the i-th vertex v_i as the predecessor node of that adjacent vertex through the π function of the Dijkstra algorithm. If the attention coefficient is less than the threshold Th, finding the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, calculating the Euclidean distance L_j between the vertex v_j and its predecessor node, defining a local path of length L_j between the vertex v_j and its predecessor node, building a square tubular image block P_j with side length L_j whose centerline is the local path, and rotating the tubular image block P_j along the centerline with the warpPerspective() function in the OpenCV library to obtain a regularized image block. e-4.5) Inputting the regularized image block into the classifier of the trained Attention U-Net network and judging whether the regularized image block is foreground;
e-4.6) Denoting the sum of the weights of all edges on the path between the vertex v_j and the starting vertex v_i as the path weight sum of the vertex v_j. If the regularized image block is foreground, the weight of the edge between the vertex v_j and the adjacent vertex remains unchanged, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm; e-4.7) If the regularized image block is background, the weight of the edge between the vertex v_j and the adjacent vertex is recalculated to obtain an updated weight, the updated weight is substituted into the Dijkstra algorithm, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm;
e-4.8) Repeating steps e-4.3) to e-4.7) until the ending vertex v_u has been added to the set S_1;
e-4.9) Querying the predecessor node of the ending vertex v_u in the set S_1 through the π function of the Dijkstra algorithm;
e-4.10) Substituting the predecessor node for the ending vertex in step e-4.9) and repeating step e-4.10) until the minimum path between the starting vertex v_i and the ending vertex v_u is obtained.
Further, the value of ε is 0.01 and λ ∈ (0, 1).
Preferably, the threshold Th ∈ (0, 1).
The beneficial effects of the invention are as follows: the tubular structure image data set is fed into the Attention U-Net network to train the network, and the optimal weights and optimal biases are stored once training is complete. Feature points are detected with a Shi-Tomasi feature point detector to generate a feature point map, the feature point map is then fed into a minimum path algorithm, and the minimum path between the feature points is calculated by a minimum path algorithm fused with the Attention U-Net network, thereby completing the extraction of the centerline of the tubular structure. Fusing the Attention U-Net network into the minimum path algorithm lets the minimum path algorithm overcome the shortcut problem and increases the accuracy of the minimum path. At the same time, the minimum path algorithm provides a strong geometric prior for the Attention U-Net network, greatly improving its performance. The minimum path algorithm and the Attention U-Net network are mutually fused, executed alternately, and complement each other's strengths, so the centerline of a tubular structure can be extracted rapidly and accurately from a complex tubular structure image.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described with reference to fig. 1.
The minimum-path method for extracting the centerline of a tubular structure comprises the following steps:
a) Acquiring m suburban road images containing annotation information of suburban road centerlines to form a training set D'_1, D'_1 = {T_1, T_2, ..., T_i, ..., T_m}, where T_i is the i-th suburban road image, i ∈ {1, 2, ..., m}; acquiring n suburban road images containing annotation information of suburban road centerlines to form a test set D'_2, D'_2 = {T_1, T_2, ..., T_j, ..., T_n}, where T_j is the j-th suburban road image, j ∈ {1, 2, ..., n}. In one embodiment of the present invention, the value of m may be 5 and the value of n may be 9.
b) Splitting the suburban road images of the test set D'_2, correcting one quarter of the total number of the split images so that the road centerline in each image lies at the center of the image and is perpendicular to the image edges, forming a positive sample set D'_2+ from the corrected images and a negative sample set D'_2- from the remaining three quarters of the split images, adding the positive sample set D'_2+ and the negative sample set D'_2- to obtain a data set D, and dividing the data set D into a training set train and a test set test.
c) Inputting the images in the training set train into the Attention U-Net network and training the Attention U-Net network to obtain the trained Attention U-Net network.
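A minimal training sketch for step c) follows; the AttentionUNet and RoadPatchDataset classes, the loss function, and the hyper-parameters are illustrative assumptions, since the patent does not specify the training configuration.

```python
# Hedged sketch of step c): training the Attention U-Net (assumed PyTorch setup).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

from attention_unet import AttentionUNet      # hypothetical model definition
from road_dataset import RoadPatchDataset     # hypothetical dataset wrapper for the train set

def train_attention_unet(train_dir, epochs=50, lr=1e-3, device="cuda"):
    model = AttentionUNet(in_channels=3, out_channels=1).to(device)
    loader = DataLoader(RoadPatchDataset(train_dir), batch_size=8, shuffle=True)
    criterion = nn.BCEWithLogitsLoss()         # binary foreground/background supervision
    optimizer = optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

    # Store the optimal weights and biases once training is complete.
    torch.save(model.state_dict(), "attention_unet_best.pt")
    return model
```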
d) Inputting the j-th suburban road image T_j of the test set D'_2 into the feature point extraction layer to obtain a feature point map H_t and a feature point set Q_t, defining a point set S_t disjoint from the feature point set Q_t, Q_t ∩ S_t = φ, and initializing the point set S_t as an empty set.
e) Inputting the feature point map H_t into the minimum path calculation layer to obtain the centerline of the tubular structure in the j-th suburban road image T_j.
The method extracts the centerline of a tubular structure by embedding a progressive minimum path method into an Attention U-Net network, and aims to extract the centerline from high-complexity, poor-quality tubular images. The method has two stages. In the first stage, the Attention U-Net network is trained. In the second stage, the Attention U-Net network trained in the first stage is fused into the Dijkstra algorithm to complete the extraction of the centerline of the tubular structure. Fusing the Attention U-Net network into the minimum path algorithm lets the minimum path algorithm overcome the shortcut problem and increases the accuracy of the minimum path. At the same time, the minimum path algorithm provides a strong geometric prior for the Attention U-Net network, greatly improving its performance. The minimum path algorithm and the Attention U-Net network are mutually fused, executed alternately, and complement each other's strengths, so the centerline of a tubular structure can be extracted rapidly and accurately from a complex tubular structure image.
Example 1:
in step a), m suburban road images containing annotation information of suburban road centerlines are acquired from the satellite image data set EPFL, and n suburban road images containing annotation information of suburban road centerlines are acquired.
Example 2:
In step b), the PIL library is imported in Python, the Image package is imported from the PIL library, and the crop() function in the Image package is used to split the suburban road images of the test set D'_2.
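A sketch of the splitting operation, assuming non-overlapping square tiles of a fixed size; the tile size is an illustrative assumption.

```python
# Hedged sketch of the splitting in step b) using PIL's crop().
from PIL import Image

def split_image(path, tile=256):
    """Split one suburban road image into non-overlapping tile x tile patches."""
    img = Image.open(path)
    w, h = img.size
    patches = []
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            # crop() takes a (left, upper, right, lower) box
            patches.append(img.crop((left, top, left + tile, top + tile)))
    return patches
```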
Example 3:
in step b), the data set D is divided into a training set and a testing set according to the ratio of 7:3.
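The 7:3 division of the data set D could be implemented as below; the shuffling and the random seed are assumptions.

```python
# Hedged sketch: dividing the data set D into a training set and a test set at a 7:3 ratio.
import random

def split_dataset(samples, train_ratio=0.7, seed=0):
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]   # (train, test)
```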
Example 4:
step d) comprises the steps of:
d-1) Inputting the j-th suburban road image T_j of the test set D'_2 into the feature point extraction layer and detecting feature points with a Shi-Tomasi feature point detector to obtain a feature point map H_t.
d-2) Generating the feature point set Q_t from the feature points in the feature point map H_t.
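OpenCV's goodFeaturesToTrack() implements the Shi-Tomasi detector; the sketch below illustrates steps d-1) and d-2), with the detector parameters chosen as assumptions.

```python
# Hedged sketch of steps d-1)/d-2): Shi-Tomasi feature points, feature point map H_t and set Q_t.
import cv2
import numpy as np

def detect_feature_points(image_path, max_corners=200, quality=0.01, min_dist=10):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(gray, max_corners, quality, min_dist)  # Shi-Tomasi
    q_t = [] if corners is None else [tuple(map(int, c.ravel())) for c in corners]
    h_t = np.zeros_like(gray)
    for x, y in q_t:
        h_t[y, x] = 255              # mark each feature point in the feature point map H_t
    return h_t, q_t
```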
Example 5:
step e) comprises the steps of:
e-1) Inputting the feature point map H_t into the trained Attention U-Net network and outputting an attention weight coefficient map H_att.
e-2) Building an undirected graph G_t from the feature point map H_t, G_t = (V_t, E_t), where V_t is the vertex set of the undirected graph G_t, v_i is the i-th vertex of G_t, v_j is the j-th vertex of G_t, v_u is the u-th vertex of G_t, i ∈ {1, 2, ..., o}, j ∈ {1, 2, ..., o}, u ∈ {1, 2, ..., o}, o is the total number of vertices in G_t, the feature point set Q_t is a proper subset of the set V_t, E_t is the edge set of the undirected graph G_t, and e_{i,j} ∈ E_t is the edge between the i-th vertex v_i and the j-th vertex v_j. Each vertex in the set V_t corresponds to a pixel of the feature point map H_t, and the pixels, taken as vertices, are connected to one another by the edges in the set E_t.
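One plausible reading of step e-2) is an 8-connected pixel grid graph; the sketch below builds such a graph with networkx, where the 8-connectivity is an assumption not stated in the patent.

```python
# Hedged sketch of step e-2): an undirected graph G_t over pixels with 8-connected edges.
import networkx as nx

def build_graph(height, width):
    g = nx.Graph()
    for y in range(height):
        for x in range(width):
            g.add_node((x, y))
            # link only to already-created neighbours so each edge is added once
            for dx, dy in ((-1, -1), (0, -1), (1, -1), (-1, 0)):
                px, py = x + dx, y + dy
                if 0 <= px < width and 0 <= py < height:
                    g.add_edge((x, y), (px, py))
    return g
```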
e-3) Taking the i-th vertex v_i as the starting vertex and the u-th vertex v_u as the ending vertex, removing the starting vertex v_i from the feature point set Q_t and adding it to the point set S_t.
e-4) Calculating the minimum path between the starting vertex v_i and the ending vertex v_u, and taking this minimum path as the centerline of the tubular structure between the starting vertex v_i and the ending vertex v_u.
e-5) Repeating steps e-3) to e-4) until all vertices in the feature point set Q_t have been added to the point set S_t, obtaining the centerline of the tubular structure in the j-th suburban road image T_j.
Example 6:
step e-4) comprises the steps of:
e-4.1) Dividing the set V_t into two disjoint sets Q_1 and S_1, Q_1 ∩ S_1 = φ, where the set Q_1 contains all vertices of the set V_t and the set S_1 is an empty set.
e-4.2) Calculating the weight of the edge between the i-th vertex v_i and the j-th vertex v_j from the Frangi filter tube measure and the Euclidean distance between the i-th vertex v_i and the j-th vertex v_j, where both ε and λ are constants, to obtain the initial weight set of the edges in E_t.
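The exact weight formula of step e-4.2) appears only as an image in the original filing; the sketch below shows one plausible weighting that combines the Frangi tube measure with the Euclidean distance, so the functional form and the roles of ε and λ here are assumptions.

```python
# Hedged sketch of step e-4.2): edge weights from a Frangi tube measure and Euclidean distance.
# The exact combination used in the patent is not reproduced; this form is an assumption.
import numpy as np
from skimage.filters import frangi

def assign_edge_weights(gray, graph, eps=0.01, lam=0.5):
    vesselness = frangi(gray.astype(float))        # Frangi filter tube (vesselness) measure
    for (x1, y1), (x2, y2), data in graph.edges(data=True):
        dist = np.hypot(x1 - x2, y1 - y2)          # Euclidean distance between the two vertices
        tube = 0.5 * (vesselness[y1, x1] + vesselness[y2, x2])
        # Strong tube responses give low weights, scaled by the distance (assumed form).
        data["weight"] = dist * (eps + np.exp(-lam * tube))
    return graph
```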
e-4.3) From the initial weight set of the edges, selecting, among the edges connected to the i-th vertex v_i, the vertex v_j with the minimum edge weight, setting the i-th vertex v_i as the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, removing the vertex v_j from the set Q_1 and adding it to the set S_1.
e-4.4) Obtaining the attention coefficient of the vertex v_j from the attention weight coefficient map H_att. If the attention coefficient is greater than or equal to the threshold Th, selecting an adjacent vertex from the adjacent vertex set of the vertex v_j and setting the i-th vertex v_i as the predecessor node of that adjacent vertex through the π function of the Dijkstra algorithm. If the attention coefficient is less than the threshold Th, finding the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, calculating the Euclidean distance L_j between the vertex v_j and its predecessor node, defining a local path of length L_j between the vertex v_j and its predecessor node, building a square tubular image block P_j with side length L_j whose centerline is the local path, and rotating the tubular image block P_j along the centerline with the warpPerspective() function in the OpenCV library to obtain a regularized image block. e-4.5) Inputting the regularized image block into the classifier of the trained Attention U-Net network and judging whether the regularized image block is foreground.
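The patch regularization in step e-4.4) can be done by mapping the oriented square around the local path onto an upright square; the sketch below uses cv2.getPerspectiveTransform() and cv2.warpPerspective(), and the corner ordering and the minimum-length guard are assumptions.

```python
# Hedged sketch of step e-4.4): extract the square tubular image block P_j around a local
# path and rotate it upright with cv2.warpPerspective() to get the regularized block.
import cv2
import numpy as np

def regularize_patch(image, vertex_xy, pred_xy):
    """vertex_xy, pred_xy: (x, y) endpoints of the local path between v_j and its predecessor."""
    p, q = np.float32(vertex_xy), np.float32(pred_xy)
    d = p - q
    length = float(np.hypot(d[0], d[1]))           # Euclidean distance L_j
    if length < 2.0:                               # too short to form a patch (assumed guard)
        return None
    u = d / length                                 # unit vector along the local path
    n = np.float32([-u[1], u[0]])                  # unit normal to the local path
    c = (p + q) / 2.0                              # patch centre on the local path
    half = length / 2.0
    # Corners of the oriented square whose centreline is the local path.
    src = np.float32([c - half * u - half * n, c + half * u - half * n,
                      c + half * u + half * n, c - half * u + half * n])
    size = int(round(length))
    dst = np.float32([[0, 0], [size - 1, 0], [size - 1, size - 1], [0, size - 1]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, m, (size, size))
```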
e-4.6) Denoting the sum of the weights of all edges on the path between the vertex v_j and the starting vertex v_i as the path weight sum of the vertex v_j. If the regularized image block is foreground, the weight of the edge between the vertex v_j and the adjacent vertex remains unchanged, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm. e-4.7) If the regularized image block is background, the weight of the edge between the vertex v_j and the adjacent vertex is recalculated to obtain an updated weight, the updated weight is substituted into the Dijkstra algorithm, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm.
e-4.8) Repeating steps e-4.3) to e-4.7) until the ending vertex v_u has been added to the set S_1.
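A compressed sketch of the attention-gated Dijkstra loop of steps e-4.3) to e-4.8) follows; attention_at() and is_foreground() stand in for the Attention U-Net queries, and the background penalty factor replaces the unspecified weight-update formula, so both are assumptions.

```python
# Hedged sketch of steps e-4.3) to e-4.8): Dijkstra with attention-gated edge weights.
import heapq

def attention_dijkstra(graph, start, end, attention_at, is_foreground, th=0.5, penalty=10.0):
    dist = {start: 0.0}
    prev = {}                          # the pi function: predecessor node of each vertex
    heap = [(0.0, start)]
    settled = set()                    # the set S_1 of settled vertices
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == end:                   # stop once the ending vertex is settled
            break
        for v in graph.neighbors(u):
            w = graph.edges[u, v]["weight"]
            if attention_at(v) < th and not is_foreground(u, v):
                w *= penalty           # penalize edges whose regularized patch is background
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w        # update the path weight sum to the starting vertex
                prev[v] = u            # set u as the predecessor node of v
                heapq.heappush(heap, (dist[v], v))
    return dist, prev
```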
e-4.9) Querying the predecessor node of the ending vertex v_u in the set S_1 through the π function of the Dijkstra algorithm.
e-4.10) Substituting the predecessor node for the ending vertex in step e-4.9) and repeating step e-4.10) until the minimum path between the starting vertex v_i and the ending vertex v_u is obtained.
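Steps e-4.9) and e-4.10) walk back through the predecessor (π) table; a sketch, assuming the prev dictionary produced by a Dijkstra run such as the one above.

```python
# Hedged sketch of steps e-4.9)/e-4.10): backtracking predecessors to recover the minimum path.
def backtrack_path(prev, start, end):
    path = [end]
    node = end
    while node != start:
        node = prev[node]   # query the predecessor node via the pi function
        path.append(node)
    path.reverse()          # the path now runs from the starting vertex to the ending vertex
    return path
```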
Example 7:
preferably, the value of E is 0.01, and lambda E is (0, 1). Threshold Th e (0, 1).
Finally, it should be noted that the foregoing description is only a preferred embodiment of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (9)
1. A minimum-path method for extracting the centerline of a tubular structure, characterized by comprising the following steps:
a) Acquiring m suburban road images containing annotation information of suburban road centerlines to form a training set D'_1, D'_1 = {T_1, T_2, ..., T_i, ..., T_m}, where T_i is the i-th suburban road image, i ∈ {1, 2, ..., m}; acquiring n suburban road images containing annotation information of suburban road centerlines to form a test set D'_2, D'_2 = {T_1, T_2, ..., T_j, ..., T_n}, where T_j is the j-th suburban road image, j ∈ {1, 2, ..., n};
b) Splitting the suburban road images of the test set D'_2, correcting one quarter of the total number of the split images so that the road centerline in each image lies at the center of the image and is perpendicular to the image edges, forming a positive sample set D'_2+ from the corrected images and a negative sample set D'_2- from the remaining three quarters of the split images, adding the positive sample set D'_2+ and the negative sample set D'_2- to obtain a data set D, and dividing the data set D into a training set train and a test set test;
c) Inputting images in a training set train into an Attention U-Net network, training the Attention U-Net network, and obtaining a trained Attention U-Net network;
d) Inputting the j-th suburban road image T_j of the test set D'_2 into a feature point extraction layer to obtain a feature point map H_t and a feature point set Q_t, defining a point set S_t disjoint from the feature point set Q_t, Q_t ∩ S_t = φ, and initializing the point set S_t as an empty set;
e) Inputting the feature point map H_t into a minimum path calculation layer to obtain the centerline of the tubular structure in the j-th suburban road image T_j.
2. The minimum-path method for extracting the centerline of a tubular structure according to claim 1, wherein: in step a), m suburban road images containing annotation information of suburban road centerlines are acquired from the satellite image data set EPFL, and n suburban road images containing annotation information of suburban road centerlines are acquired.
3. The minimum-path method for extracting the centerline of a tubular structure according to claim 1, wherein: in step b), the PIL library is imported in Python, the Image package is imported from the PIL library, and the crop() function in the Image package is used to split the suburban road images of the test set D'_2.
4. The minimum-path method for extracting the centerline of a tubular structure according to claim 1, wherein: in step b), the data set D is divided into a training set and a test set at a ratio of 7:3.
5. The minimum-path method for extracting the centerline of a tubular structure according to claim 1, wherein step d) comprises the following steps:
d-1) Inputting the j-th suburban road image T_j of the test set D'_2 into the feature point extraction layer and detecting feature points with a Shi-Tomasi feature point detector to obtain a feature point map H_t;
d-2) Generating the feature point set Q_t from the feature points in the feature point map H_t.
6. The minimum-path method for extracting the centerline of a tubular structure according to claim 1, wherein step e) comprises the following steps:
e-1) Inputting the feature point map H_t into the trained Attention U-Net network and outputting an attention weight coefficient map H_att;
e-2) Building an undirected graph G_t from the feature point map H_t, G_t = (V_t, E_t), where V_t is the vertex set of the undirected graph G_t, v_i is the i-th vertex of G_t, v_j is the j-th vertex of G_t, v_u is the u-th vertex of G_t, i ∈ {1, 2, ..., o}, j ∈ {1, 2, ..., o}, u ∈ {1, 2, ..., o}, o is the total number of vertices in G_t, the feature point set Q_t is a proper subset of the set V_t, E_t is the edge set of the undirected graph G_t, and e_{i,j} ∈ E_t is the edge between the i-th vertex v_i and the j-th vertex v_j;
e-3) Taking the i-th vertex v_i as the starting vertex and the u-th vertex v_u as the ending vertex, removing the starting vertex v_i from the feature point set Q_t and adding it to the point set S_t;
e-4) Calculating the minimum path between the starting vertex v_i and the ending vertex v_u, and taking this minimum path as the centerline of the tubular structure between the starting vertex v_i and the ending vertex v_u;
e-5) Repeating steps e-3) to e-4) until all vertices in the feature point set Q_t have been added to the point set S_t, obtaining the centerline of the tubular structure in the j-th suburban road image T_j.
7. The minimum-path method for extracting the centerline of a tubular structure according to claim 6, wherein step e-4) comprises the following steps:
e-4.1) Dividing the set V_t into two disjoint sets Q_1 and S_1, Q_1 ∩ S_1 = φ, where the set Q_1 contains all vertices of the set V_t and the set S_1 is an empty set;
e-4.2) Calculating the weight of the edge between the i-th vertex v_i and the j-th vertex v_j from the Frangi filter tube measure and the Euclidean distance between the i-th vertex v_i and the j-th vertex v_j, where both ε and λ are constants, to obtain the initial weight set of the edges in E_t;
e-4.3) From the initial weight set of the edges, selecting, among the edges connected to the i-th vertex v_i, the vertex v_j with the minimum edge weight, setting the i-th vertex v_i as the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, removing the vertex v_j from the set Q_1 and adding it to the set S_1;
e-4.4) Obtaining the attention coefficient of the vertex v_j from the attention weight coefficient map H_att. If the attention coefficient is greater than or equal to the threshold Th, selecting an adjacent vertex from the adjacent vertex set of the vertex v_j and setting the i-th vertex v_i as the predecessor node of that adjacent vertex through the π function of the Dijkstra algorithm. If the attention coefficient is less than the threshold Th, finding the predecessor node of the vertex v_j through the π function of the Dijkstra algorithm, calculating the Euclidean distance L_j between the vertex v_j and its predecessor node, defining a local path of length L_j between the vertex v_j and its predecessor node, building a square tubular image block P_j with side length L_j whose centerline is the local path, and rotating the tubular image block P_j along the centerline with the warpPerspective() function in the OpenCV library to obtain a regularized image block;
e-4.5) Inputting the regularized image block into the classifier of the trained Attention U-Net network and judging whether the regularized image block is foreground;
e-4.6) Denoting the sum of the weights of all edges on the path between the vertex v_j and the starting vertex v_i as the path weight sum of the vertex v_j. If the regularized image block is foreground, the weight of the edge between the vertex v_j and the adjacent vertex remains unchanged, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm; e-4.7) If the regularized image block is background, the weight of the edge between the vertex v_j and the adjacent vertex is recalculated to obtain an updated weight, the updated weight is substituted into the Dijkstra algorithm, and the Dijkstra algorithm judges whether to update the path weight sum between the vertex v_j and the starting vertex v_i; if it is updated, the path weight sum between the vertex v_j and the starting vertex v_i is replaced with the updated value, and the vertex v_j is set as the predecessor node of the adjacent vertex through the Dijkstra algorithm;
e-4.8) Repeating steps e-4.3) to e-4.7) until the ending vertex v_u has been added to the set S_1;
e-4.9) Querying the predecessor node of the ending vertex v_u in the set S_1 through the π function of the Dijkstra algorithm;
e-4.10) Substituting the predecessor node for the ending vertex in step e-4.9) and repeating step e-4.10) until the minimum path between the starting vertex v_i and the ending vertex v_u is obtained.
8. The minimum-path method for extracting the centerline of a tubular structure according to claim 7, wherein: the value of ε is 0.01 and λ ∈ (0, 1).
9. The minimum-path method for extracting the centerline of a tubular structure according to claim 7, wherein: the threshold Th ∈ (0, 1).
Priority Applications (1)
- CN202310557810.6A (granted as CN116452992B), priority and filing date 2023-05-18: Method for extracting the centerline of a tubular structure via minimum paths
Publications (2)
- CN116452992A, publication date 2023-07-18
- CN116452992B, grant publication date 2024-02-02
Status: Active
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant