
CN104166976B - Method for segmenting a foreground in a three-dimensional image - Google Patents

Method for segmenting a foreground in a three-dimensional image

Info

Publication number
CN104166976B
CN104166976B · CN201310182718.2A · CN201310182718A
Authority
CN
China
Prior art keywords
foreground
image
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310182718.2A
Other languages
Chinese (zh)
Other versions
CN104166976A (en)
Inventor
刘靖
李强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201310182718.2A priority Critical patent/CN104166976B/en
Publication of CN104166976A publication Critical patent/CN104166976A/en
Application granted granted Critical
Publication of CN104166976B publication Critical patent/CN104166976B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for segmenting the foreground in a three-dimensional image, i.e., a method for obtaining a foreground mask from a three-dimensional image. The method comprises the following steps: (1) input the three-dimensional image and an initial foreground mask M^(1); (2) calculate the image-constrained foreground probability of each pixel in the three-dimensional image; (3) based on the current foreground mask M^(k), calculate the shape-constrained foreground probability of each pixel in the three-dimensional image, where k is the iteration count; (4) based on the image-constrained and shape-constrained foreground probabilities, obtain the next foreground mask M^(k+1); (5) if the change between the current foreground mask M^(k) and the next foreground mask M^(k+1) is less than a predetermined value, or the iteration count k equals a predetermined maximum number of iterations, then the current foreground mask M^(k) is the foreground mask produced by the method; otherwise set M^(k) = M^(k+1), k = k + 1, and return to step (3). The technical scheme is simple to implement and can quickly and stably extract the foreground in three-dimensional medical images.

Description

Method for segmenting foreground in three-dimensional image
Technical Field
The invention relates to the field of image processing, in particular to a method for segmenting a foreground in a three-dimensional image.
Background
Currently, for clinical medical images, common automatic foreground extraction methods include active contour models and region growing.
The active contour model evolves the boundary contour of an initial mask. It uses features of interest in the image, stretches and deforms the contour of the initial mask as a whole curve, and searches for the boundary of the region of interest. The method is therefore insensitive to noise in the image and can overcome segmentation leakage and mis-segmentation caused by locally unclear parts of the boundary of interest. However, because the mask boundary is evolved by optimizing an energy function that combines the image itself with the mask boundary curve, the method is generally slow.
Region growing expands outward from the boundary of the initial mask using an iterative method, gradually merging into the mask pixels that are adjacent to the mask boundary and have the features of interest, until the boundary of the region of interest is found. The algorithm is simple and efficient to implement, but it is very sensitive to noise in the image and considers only the influence of the area adjacent to the mask.
Disclosure of Invention
The invention provides a method for segmenting the foreground in a three-dimensional image that is simple to implement and can quickly and stably extract the foreground in a three-dimensional medical image.
In order to solve the above problems, the present invention provides a method for segmenting a foreground in a three-dimensional image, comprising the following steps:
(1) input the three-dimensional image and calculate the image-constrained foreground probability of each pixel in the three-dimensional image; denote the foreground mask of the three-dimensional image as M, with M^(1) the initial foreground mask;
(2) based on the current foreground mask M^(k), calculate the shape-constrained foreground probability p2(X)^(k+1) of each pixel in the three-dimensional image, where k is the iteration count and k ≥ 1;
(3) based on the image-constrained and shape-constrained foreground probabilities, obtain the next foreground mask M^(k+1);
(4) if the change between the current foreground mask M^(k) and the next foreground mask M^(k+1) is less than a predetermined value, or the iteration count k equals a predetermined maximum number of iterations, end the iteration; the current foreground mask M^(k) is the foreground to be segmented; otherwise, return to step (2) and increase k by 1.
In the method for segmenting the foreground in the three-dimensional image, calculating the image-constrained foreground probability p1(X) of each pixel in the three-dimensional image comprises: 1) binarizing the three-dimensional image to obtain a binarized image; 2) spatially filtering and transforming the binarized image to obtain the image-constrained foreground probability p1(X).
In the method for segmenting the foreground in the three-dimensional image, the formula for obtaining the image-constrained foreground probability p1(X) is:

p1(X) = g1(G_σ1 ⊗ I_b)

where X represents the spatial position of any pixel in the three-dimensional image; g1 is a transformation function; G_σ1 is a spatial filter operator; ⊗ represents convolution; and I_b is the binarized image.
In the method for segmenting the foreground in the three-dimensional image, the formula for computing the shape-constrained foreground probability p2(X)^(k+1) is:

p2(X)^(k+1) = g2(G_σ2 ⊗ M^(k))

where g2 is a transformation function; G_σ2 is a spatial filter operator; ⊗ represents convolution; k is the iteration count; and X represents the spatial position of any pixel in the three-dimensional image.
In the method for segmenting the foreground in the three-dimensional image, obtaining the next foreground mask M^(k+1) comprises: 1) combining the image-constrained foreground probability p1(X) and the shape-constrained foreground probability p2(X)^(k+1) to obtain the probability p(X)^(k+1) that each pixel in the three-dimensional image belongs to the foreground; 2) binarizing the foreground probability p(X)^(k+1) to obtain a temporary foreground mask; 3) spatially smoothing and binarizing the temporary foreground mask to obtain the next foreground mask M^(k+1).
In the method for segmenting the foreground in the three-dimensional image, the combination may use a linear opinion pool, p(X)^(k+1) = w·p1(X) + (1 − w)·p2(X)^(k+1), where w is the weight of p1(X) in the foreground probability p(X)^(k+1), and 0 < w < 1.
In the method for segmenting the foreground in the three-dimensional image, the combination may alternatively use a logarithmic opinion pool:

p(X)^(k+1) = p1(X)^w · (p2(X)^(k+1))^(1−w) / [ p1(X)^w · (p2(X)^(k+1))^(1−w) + (1 − p1(X))^w · (1 − p2(X)^(k+1))^(1−w) ].
In the method for segmenting the foreground in the three-dimensional image, the formula for obtaining the next foreground mask M^(k+1) is:

M^(k+1) = I[ (G_σ3 ⊗ I[p(X)^(k+1) > t2]) > t3 ]

where G_σ3 is a spatial filter operator; ⊗ represents convolution; I is the indicator function, taking the value 1 when the inequality in its argument holds and 0 otherwise; and t2 and t3 are predetermined thresholds.
In the method for segmenting the foreground in the three-dimensional image, the predetermined value ranges from 0.000001 to 0.001.
In the method for segmenting the foreground in the three-dimensional image, the predetermined maximum number of iterations ranges from 10 to 20.
Compared with the prior art, the entire process of the invention can be completed with spatial filtering and simple arithmetic operations after the image is binarized; compared with the active contour model, it has low complexity and high computation speed.
Further, because it is based on shape constraints and region growing, it uses both the features of the image itself and the shape of the mask, and therefore achieves higher stability.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for segmenting a foreground in a three-dimensional image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of obtaining the next foreground mask M^(k+1) according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating the effect of the segmented foreground obtained by the method of the embodiment of the present invention and the segmented foreground obtained by the conventional region growing method.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
Next, the present invention is described in detail using schematic diagrams. When embodiments of the present invention are described in detail, the schematic diagrams are examples for convenience of description only and should not limit the scope of the present invention.
The following describes the method for segmenting the foreground in the three-dimensional image in detail with reference to the accompanying drawings and embodiments. As shown in fig. 1, the method according to the embodiment of the present invention first executes step S1: the three-dimensional image and the initial foreground mask M^(1) are input, where M denotes a foreground mask of the three-dimensional image. In this embodiment, the input three-dimensional image is a CT abdominal image, and the foreground to be segmented is the liver (or spleen) in the CT image.
Then, step S2 is executed to calculate the image-constrained foreground probability p1(X) of each pixel in the three-dimensional image, where p1(X) = p(O = 1 | X), i.e., the probability that the pixel at spatial position X belongs to the foreground. Specifically, calculating p1(X) for any pixel comprises: 1) binarizing the three-dimensional image, e.g., by thresholding or by a K-means clustering algorithm, to obtain a binarized image, i.e., extracting the pixels whose gray values lie in the gray-value interval of interest; 2) spatially filtering and transforming the binarized image to obtain the image-constrained foreground probability p1(X) via formula (1):

p1(X) = g1(G_σ1 ⊗ I_b)    (1)

where X represents the spatial position of any pixel in the three-dimensional image; g1 is a transformation function; G_σ1 is a spatial filter operator; ⊗ represents convolution; and I_b is the binarized image.
In this embodiment, the CT abdominal image is first binarized by thresholding: pixels whose gray values belong to the liver gray-value interval are extracted to obtain a binarized image. Then the binarized image is spatially filtered and transformed via formula (1) to obtain the image-constrained foreground probability of each pixel in the CT abdominal image, where G_σ1 is a moving-average filter with widths σ1 = [2, 2, 2] mm in the three dimensions, and the transformation function is g1(t) = t.
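As a non-authoritative sketch, this embodiment's computation of p1(X) (threshold binarization followed by a moving-average filter, with g1(t) = t) can be written with NumPy/SciPy. The gray-value interval, the window size in voxels (standing in for σ1 = [2, 2, 2] mm), and the function name are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def image_constrained_probability(volume, lo, hi, size=5):
    """Sketch of formula (1): p1(X) = g1(G_sigma1 (x) I_b) with g1(t) = t.

    volume : 3-D array of gray values
    lo, hi : gray-value interval of interest (e.g. a liver HU range) - assumed
    size   : moving-average window in voxels, standing in for sigma1
    """
    # 1) binarize: keep voxels whose gray value lies in the interval of interest
    binary = ((volume >= lo) & (volume <= hi)).astype(np.float64)
    # 2) spatial filtering (moving-average filter), then the identity transform
    return uniform_filter(binary, size=size)

# toy usage: a bright cube inside a dark volume
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 100.0
p1 = image_constrained_probability(vol, 50, 150)
```

With a voxel spacing of 1 mm, a 5-voxel window roughly matches the stated filter width; in practice the window would be derived from the image's actual spacing.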
Next, step S3 is executed: based on the current foreground mask M^(k), the shape-constrained foreground probability p2(X)^(k+1) of each pixel in the three-dimensional image is calculated, where k is the iteration count, k ≥ 1, and for k = 1 the foreground mask M^(1) is the initial foreground mask input in step S1. Specifically, the foreground mask M^(k) is spatially filtered and transformed via formula (2):

p2(X)^(k+1) = g2(G_σ2 ⊗ M^(k))    (2)

where g2 is a transformation function; G_σ2 is a spatial filter operator; ⊗ represents convolution; k is the iteration count; and X represents the spatial position of any pixel in the three-dimensional image.
In this embodiment, for the liver in the CT abdominal image, the input initial foreground mask M^(1) is spatially filtered and transformed according to formula (2), i.e., a first region growth (first iteration) is performed on the initial foreground mask to obtain the shape-constrained foreground probability of each pixel in the CT abdominal image for the first region growth; the transformation function is g2(t) = t².
Next, step S4 is executed: based on the image-constrained and shape-constrained foreground probabilities, the next foreground mask M^(k+1) is obtained, as shown in fig. 2. First, step S201 is executed: the image-constrained foreground probability p1(X) and the shape-constrained foreground probability p2(X)^(k+1) are combined into the probability p(X)^(k+1) that each pixel in the three-dimensional image belongs to the foreground. The combination may use a linear opinion pool, p(X)^(k+1) = w·p1(X) + (1 − w)·p2(X)^(k+1), where w is the weight of p1(X) and 0 < w < 1; or a logarithmic opinion pool, p(X)^(k+1) = p1(X)^w · (p2(X)^(k+1))^(1−w) / [ p1(X)^w · (p2(X)^(k+1))^(1−w) + (1 − p1(X))^w · (1 − p2(X)^(k+1))^(1−w) ]. In this embodiment, the logarithmic opinion pool combines the image-constrained foreground probability p1(X) obtained in step S2 with the shape-constrained foreground probability obtained in step S3, yielding the probability that each pixel in the first region growth of the CT abdominal image belongs to the liver.
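The two combination rules (linear and logarithmic opinion pools) can be sketched as follows; the input arrays, weight value, and the small `eps` guard are illustrative assumptions. Note that the logarithmic pool renormalizes by the sum of the foreground and background terms, so its output remains a probability in [0, 1].

```python
import numpy as np

def linear_pool(p1, p2, w=0.5):
    """Linear opinion pool: weighted average of the two probabilities."""
    return w * p1 + (1.0 - w) * p2

def log_pool(p1, p2, w=0.5, eps=1e-12):
    """Logarithmic opinion pool: geometric combination of the two
    probabilities, renormalized so foreground and background sum to 1."""
    fg = (p1 ** w) * (p2 ** (1.0 - w))
    bg = ((1.0 - p1) ** w) * ((1.0 - p2) ** (1.0 - w))
    return fg / (fg + bg + eps)

# toy usage on three pixels
p1 = np.array([0.9, 0.2, 0.5])
p2 = np.array([0.8, 0.1, 0.5])
p_lin = linear_pool(p1, p2)
p_log = log_pool(p1, p2)
```

The logarithmic pool is sharper than the linear one: agreement between p1 and p2 is reinforced, while a pixel both probabilities rate near 0.5 stays undecided.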
Then, step S202 is executed: the foreground probability p(X)^(k+1) is binarized to obtain a temporary foreground mask. Specifically, each foreground probability p(X)^(k+1) is compared with a predetermined threshold and assigned 1 if it is greater than the threshold and 0 otherwise. In this embodiment, the pixels whose foreground probability from step S201 exceeds 0.5 are assigned 1 and the remaining pixels are assigned 0, which yields the temporary foreground mask of the first region growth.
Next, step S203 is executed: the temporary foreground mask is spatially smoothed and binarized to obtain the next foreground mask M^(k+1), i.e.,

M^(k+1) = I[ (G_σ3 ⊗ I[p(X)^(k+1) > t2]) > t3 ]

where G_σ3 is a spatial filter operator; ⊗ represents convolution; I is the indicator function, taking the value 1 when the inequality in its argument holds and 0 otherwise; and t2 and t3 are predetermined thresholds.
In this embodiment, the temporary foreground mask obtained in step S202 is spatially smoothed, and the smoothed mask is binarized: pixel values of the smoothed temporary mask greater than 0.5 are assigned 1 and the rest 0. The binarized image is the foreground mask M^(2).
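The mask update of step S203 (threshold, smooth, threshold again) can be sketched in isolation; the filter size and thresholds t2 = t3 = 0.5 are illustrative assumptions within the ranges stated above. The second thresholding suppresses isolated voxels that survive the first one.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def next_mask(p, t2=0.5, t3=0.5, size=3):
    """Sketch of M(k+1) = I[(G_sigma3 (x) I[p > t2]) > t3]: binarize the
    probability, smooth the temporary mask, and binarize again."""
    temp = (p > t2).astype(np.float64)      # temporary foreground mask
    return uniform_filter(temp, size=size) > t3

# an isolated noisy voxel is removed while a solid block survives
p = np.zeros((9, 9, 9))
p[3:7, 3:7, 3:7] = 0.9   # solid foreground block
p[0, 0, 0] = 0.9         # isolated high-probability voxel (noise)
m = next_mask(p)
```

A lone voxel contributes too little to its smoothed neighborhood to clear t3, so it is dropped; the interior of the block remains above the threshold.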
Next, step S5 is executed: the foreground mask M^(k) is compared with the foreground mask M^(k+1), or the iteration count k is compared with the predetermined maximum number of iterations. If the change between the current foreground mask M^(k) and the next foreground mask M^(k+1) is less than a predetermined value, or k equals the predetermined maximum number of iterations, step S6 is executed: the iteration ends and the current foreground mask M^(k) is the foreground to be segmented. If the change is greater than or equal to the predetermined value and k is smaller than the predetermined maximum number of iterations, the process returns to step S3 and the iteration count k is increased by 1. The change between foreground masks may be taken as the absolute value of their relative volume change; the predetermined value ranges from 0.000001 to 0.001; and the predetermined maximum number of iterations ranges from 10 to 20.
In this embodiment, the volume of the foreground mask M^(2) obtained in step S203 is compared with the volume of the input initial liver mask M^(1). If the relative volume change is less than the predetermined value, the initial foreground mask M^(1) is the desired foreground. Otherwise, step S3 is executed again: the foreground mask M^(3) is obtained from M^(2) in the same way, the volumes of M^(2) and M^(3) are compared, and so on, until the change between two successive foreground masks is less than the predetermined value; the earlier of the two masks (e.g., M^(2) when the change between M^(2) and M^(3) is small enough) is then the foreground to be segmented. In addition, in this embodiment the predetermined maximum number of iterations is 12: if at the 12th iteration the change between the foreground masks M^(12) and M^(13) is still greater than or equal to the predetermined value, the iteration also ends, and the foreground mask M^(12) is the foreground to be segmented.
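Putting steps S2 through S5 together, the outer loop with the relative-volume-change stopping criterion can be sketched as below. The helper structure, filter sizes, thresholds (t2 = t3 = 0.5), tolerance 1e-4, and 12 iterations are illustrative assumptions consistent with the ranges stated above, not the patent's reference code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def segment_foreground(volume, mask0, lo, hi, w=0.5, t2=0.5, t3=0.5,
                       tol=1e-4, max_iter=12):
    """Sketch of the iterative segmentation loop (steps S2-S5)."""
    # S2: image-constrained probability p1, fixed across iterations
    binary = ((volume >= lo) & (volume <= hi)).astype(np.float64)
    p1 = uniform_filter(binary, size=5)                 # g1(t) = t

    mask = mask0.astype(np.float64)
    for _ in range(max_iter):
        # S3: shape-constrained probability p2 with g2(t) = t**2
        p2 = uniform_filter(mask, size=5) ** 2
        # S4/S201: logarithmic opinion pool
        fg = (p1 ** w) * (p2 ** (1 - w))
        bg = ((1 - p1) ** w) * ((1 - p2) ** (1 - w))
        p = fg / (fg + bg + 1e-12)
        # S202/S203: binarize, smooth, binarize -> candidate next mask
        temp = (p > t2).astype(np.float64)
        new_mask = (uniform_filter(temp, size=3) > t3).astype(np.float64)
        # S5: stop when the relative volume change drops below the tolerance;
        # the current mask is then the segmented foreground
        v_old, v_new = mask.sum(), new_mask.sum()
        if v_old > 0 and abs(v_new - v_old) / v_old < tol:
            break
        mask = new_mask
    return mask.astype(bool)

# toy usage: grow a small seed inside a bright cube
vol = np.zeros((20, 20, 20)); vol[5:15, 5:15, 5:15] = 100.0
seed = np.zeros((20, 20, 20)); seed[8:12, 8:12, 8:12] = 1.0
res = segment_foreground(vol, seed, lo=50, hi=150)
```

Each pass grows the mask by the support of the shape filter while the image term keeps it inside the bright region, so the seed expands toward the cube boundary and the volume change then stalls, triggering the stopping criterion.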
As shown in FIG. 3, panels (a) and (b) show the liver and spleen segmentations obtained by the present method, respectively, and panels (c) and (d) show the liver and spleen segmentations obtained by the conventional region growing method. With the same gray threshold and number of iterations, the conventional region growing method produces a large number of mis-segmented pixels, whereas the present method segments the liver and spleen in the CT abdominal image while preserving the shapes of the corresponding organs well.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above.

Claims (7)

1. A method for segmenting a foreground in a three-dimensional image is characterized by comprising the following steps:
(1) input the three-dimensional image and calculate the image-constrained foreground probability of each pixel in the three-dimensional image; denote the foreground mask of the three-dimensional image as M and the initial foreground mask as M^(1);
(2) based on the current foreground mask M^(k), calculate the shape-constrained foreground probability p2(X)^(k+1) of each pixel in the three-dimensional image, where k is the iteration count and k ≥ 1;
(3) based on the image-constrained and shape-constrained foreground probabilities, obtain the next foreground mask M^(k+1);
(4) if the change between the current foreground mask M^(k) and the next foreground mask M^(k+1) is less than a predetermined value, or the iteration count k equals a predetermined maximum number of iterations, end the iteration; the current foreground mask M^(k) is the foreground to be segmented; otherwise, return to step (2) and increase k by 1;
calculating the image-constrained foreground probability p1(X) of each pixel in the three-dimensional image comprises: 1) binarizing the three-dimensional image to obtain a binarized image; 2) spatially filtering and transforming the binarized image to obtain the image-constrained foreground probability p1(X);
the formula for calculating the shape-constrained foreground probability of each pixel based on the current foreground mask M^(k) is p2(X)^(k+1) = g2(G_σ2 ⊗ M^(k)), where g2 is a transformation function, G_σ2 is a spatial filter operator, ⊗ represents convolution, k is the iteration count, and X represents the spatial position of any pixel in the three-dimensional image;
obtaining the next foreground mask M^(k+1) based on the image-constrained and shape-constrained foreground probabilities comprises: 1) combining the image-constrained foreground probability p1(X) and the shape-constrained foreground probability p2(X)^(k+1) to obtain the probability p(X)^(k+1) that each pixel in the three-dimensional image belongs to the foreground; 2) binarizing the foreground probability p(X)^(k+1) to obtain a temporary foreground mask; 3) spatially smoothing and binarizing the temporary foreground mask to obtain the next foreground mask M^(k+1).
2. The method for segmenting a foreground in a three-dimensional image as claimed in claim 1, wherein the formula for obtaining the image-constrained foreground probability p1(X) is:

p1(X) = g1(G_σ1 ⊗ I_b)

where X represents the spatial position of any pixel in the three-dimensional image; g1 is a transformation function; G_σ1 is a spatial filter operator; ⊗ represents convolution; and I_b is the binarized image.
3. The method as claimed in claim 1, wherein the formula of the combination is p(X)^(k+1) = w·p1(X) + (1 − w)·p2(X)^(k+1), where w is the weight of p1(X) in the foreground probability p(X)^(k+1), and 0 < w < 1.
4. The method as claimed in claim 1, wherein the formula of the combination is

p(X)^(k+1) = p1(X)^w · (p2(X)^(k+1))^(1−w) / [ p1(X)^w · (p2(X)^(k+1))^(1−w) + (1 − p1(X))^w · (1 − p2(X)^(k+1))^(1−w) ].
5. The method as claimed in claim 1, wherein the formula for obtaining the next foreground mask M^(k+1) is:

M^(k+1) = I[ (G_σ3 ⊗ I[p(X)^(k+1) > t2]) > t3 ]

where G_σ3 is a spatial filter operator; ⊗ represents convolution; I is the indicator function, taking the value 1 when the inequality in its argument holds and 0 otherwise; and t2 and t3 are predetermined thresholds.
6. The method as claimed in claim 1, wherein the predetermined value is in a range of 0.000001 to 0.001.
7. The method as claimed in claim 1, wherein the predetermined maximum number of iterations ranges from 10 to 20.
CN201310182718.2A 2013-05-16 2013-05-16 Method for segmenting a foreground in a three-dimensional image Active CN104166976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310182718.2A CN104166976B (en) 2013-05-16 2013-05-16 Method for segmenting a foreground in a three-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310182718.2A CN104166976B (en) 2013-05-16 2013-05-16 Method for segmenting a foreground in a three-dimensional image

Publications (2)

Publication Number Publication Date
CN104166976A CN104166976A (en) 2014-11-26
CN104166976B true CN104166976B (en) 2015-12-02

Family

ID=51910767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310182718.2A Active CN104166976B (en) 2013-05-16 2013-05-16 Method for segmenting a foreground in a three-dimensional image

Country Status (1)

Country Link
CN (1) CN104166976B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509195B (en) * 2018-12-12 2020-04-17 北京达佳互联信息技术有限公司 Foreground processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100388314C (en) * 2003-08-14 2008-05-14 美国西门子医疗解决公司 System and method for locating compact objects in images
JP5932332B2 (en) * 2008-07-28 2016-06-08 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Using repair techniques for image correction
CN101567084B (en) * 2009-06-05 2011-04-06 西安电子科技大学 Method for detecting picture contour based on combination of level set and watershed
CN101599174A (en) * 2009-08-13 2009-12-09 哈尔滨工业大学 Method for outline extraction of level set medical ultrasonic image area based on edge and statistical nature

Also Published As

Publication number Publication date
CN104166976A (en) 2014-11-26

Similar Documents

Publication Publication Date Title
CN103390280B (en) Based on the Fast Threshold dividing method of Gray Level-Gradient two-dimensional symmetric Tsallis cross entropy
Wang et al. Multi-scale local region based level set method for image segmentation in the presence of intensity inhomogeneity
CN102135606B (en) KNN (K-Nearest Neighbor) sorting algorithm based method for correcting and segmenting grayscale nonuniformity of MR (Magnetic Resonance) image
CN102609917B (en) Image edge fitting B spline generating method based on clustering algorithm
CN105654453A (en) Robust FCM image segmentation method
CN103345731A (en) Anisotropy diffusion image noise reduction method based on McIlhagga edge detection operator
Balovsyak et al. Automatic determination of the gaussian noise level on digital images by high-pass filtering for regions of interest
CN101493933B (en) Partial structure self-adapted image diffusing and de-noising method
CN102074013B (en) Wavelet multi-scale Markov network model-based image segmentation method
CN103854281A (en) Hyperspectral remote sensing image vector C-V model segmentation method based on wave band selection
CN102930511B (en) Method for analyzing velocity vector of flow field of heart based on gray scale ultrasound image
CN105976364A (en) Simplified weighted-undirected graph-based statistical averaging model construction method
Huang et al. Variational level set method for image segmentation with simplex constraint of landmarks
CN103903227B (en) Method and device for noise reduction of image
CN103927730A (en) Image noise reduction method based on Primal Sketch correction and matrix filling
CN108921170B (en) Effective image noise detection and denoising method and system
CN105913402B (en) A kind of several remote sensing image fusion denoising methods based on DS evidence theory
CN111079208B (en) Particle swarm algorithm-based CAD model surface corresponding relation identification method
CN103065309A (en) Image segmentation method based on simplified local binary fitting (LBF) model
Liu et al. Segmenting lung parenchyma from CT images with gray correlation‐based clustering
CN104166976B (en) Method for segmenting a foreground in a three-dimensional image
CN110047085A (en) A kind of accurate restorative procedure in lung film coalescence knuckle areas for lung CT carrying out image threshold segmentation result
CN103413306B (en) A kind of Harris angular-point detection method of adaptive threshold
CN105374024B (en) The method of high-resolution satellite image on-water bridge extraction
CN110570450B (en) Target tracking method based on cascade context-aware framework

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 201815 No. 1180 Xingxian Road, Jiading Industrial Zone, Jiading District, Shanghai

Patentee after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201815 No. 1180 Xingxian Road, Jiading Industrial Zone, Jiading District, Shanghai

Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

CP01 Change in the name or title of a patent holder
CP02 Change in the address of a patent holder

Address after: 201807 2258 Chengbei Road, Jiading District, Shanghai

Patentee after: Shanghai Lianying Medical Technology Co.,Ltd.

Address before: 201815 No. 1180 Xingxian Road, Jiading Industrial Zone, Jiading District, Shanghai

Patentee before: Shanghai Lianying Medical Technology Co.,Ltd.

CP02 Change in the address of a patent holder