CN107330897B - Image segmentation method and system - Google Patents
- Publication number
- CN107330897B (application CN201710404301A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 7/11 (Image analysis; Segmentation; Region-based segmentation)
- G06T 7/136 (Image analysis; Segmentation; Edge detection involving thresholding)
- G06T 7/194 (Image analysis; Segmentation; Edge detection involving foreground-background segmentation)
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image segmentation method and system. The method comprises the following steps: detecting the saliency region of a target image to obtain an initialized boundary curve of the target region; generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model; evolving the initialized boundary curve according to the new energy general function and a preset number of iterations to obtain an evolved boundary curve; and carrying out image segmentation according to the evolved boundary curve. By performing saliency detection, the initial curve starts near the edge of the target region, which greatly reduces the evolution time and improves segmentation accuracy; a level set method combining local information and gradient information can effectively segment images with complex background information and weak boundaries.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image segmentation method and an image segmentation system.
Background
In the research and application of images, people are usually interested only in certain parts of an image. These parts of interest generally correspond to specific regions of the image with special properties (a single region or several regions) and are called the target or foreground, while the remaining part is called the background. To identify and analyze the target, it must be isolated from the image; this is the problem studied by image segmentation. Image segmentation is the technique and process of dividing an image into a number of specific regions with distinct properties and extracting the objects of interest. It is a crucial pre-processing step for image recognition and computer vision: without correct segmentation, correct recognition is impossible. However, the only cues available for segmentation are the brightness and color of the pixels, and when segmentation is handled automatically by a computer various difficulties arise. For example, segmentation errors often occur because of uneven illumination, noise, unclear portions of the image, shadows, and the like. Image segmentation therefore remains a technique requiring further study. Introducing human knowledge and artificial intelligence methods to correct segmentation errors is a promising approach, but it adds complexity to the problem. Image segmentation is a prerequisite for image understanding and recognition; as a basic step of image processing, it has long been a hot and difficult topic in image processing and computer vision. Active contour models implemented with level set methods have attracted the attention of many scholars in recent years.
There are three different approaches to image segmentation:
first, pixel clustering methods, i.e., region-based methods, in which each pixel is assigned to a corresponding object or region;
second, boundary methods, which achieve segmentation by directly determining the boundaries of regions;
third, edge-based methods, which detect edge pixels and connect them into boundaries that form the segmentation.
The level set method was first proposed by Osher and Sethian. It is based mainly on curve evolution and level set theory. The basic idea of the level set is to embed the evolving curve or surface of an image as the zero level set of a function one dimension higher, convert the evolution of the curve or surface into a partial differential equation for this higher-dimensional function, and obtain the final evolution curve by solving that partial differential equation. Mumford and Shah proposed the Mumford-Shah (MS) model, a variational energy formulation in which the evolving curve can split and merge quickly during evolution; however, the energy terms of the MS model are difficult to approximate numerically in a simple way, which has hindered its development.
Caselles et al. proposed a level-set-based image segmentation model, the geodesic active contour (GAC) model, which allows topological changes of the energy general function on the basis of the conventional model; however, the initial curve of the model must lie completely inside or outside the target to be segmented, otherwise topological changes cannot occur naturally.
In 2001, Chan et al. proposed the C-V model, which adds penalty terms involving curve length and local area to the energy function. It handles noisy images relatively well, but for images with non-uniform gray levels the C-V model, being based on global image information, often fails to produce a good segmentation.
In order to overcome the need for repeated re-initialization in conventional level set evolution, Li et al. proposed the distance regularized level set evolution (DRLSE) model, which requires no re-initialization. Its main idea is to add an internal energy penalty term to the energy function; this term ensures that the level set function does not deviate far from a signed distance function during evolution and always remains a signed distance function or an approximation of one, so the evolving curve does not need to be re-initialized. Although this method avoids re-initialization, when segmenting an image with complex background information or uneven gray levels the segmentation curve may deviate from the target region, producing an erroneous segmentation.
Saliency detection is currently a research hotspot in computer vision; its purpose is to select the relevant content (objects or regions) in a visual scene that attracts human visual attention. Salient region detection can serve as a pre-processing stage for images and plays an increasingly important role in image retrieval, classification, image editing, image segmentation, and so on. Saliency models can be divided into top-down and bottom-up methods: top-down methods use high-level information of the image to obtain its saliency values, whereas bottom-up methods use low-level image information such as color and distance. Existing salient region detection algorithms measure the saliency of each image subregion by computing its contrast with neighboring regions within a certain range; salient region detection is beneficial in fields such as image segmentation, object detection, and content-preserving image scaling.
Humans can quickly and accurately identify salient regions in their visual field. Reproducing this human ability on a machine is crucial to enabling machines to process visual content as humans do. Over the past few decades, many saliency detection methods have been published. Most of them aim to predict human eye fixation points. For example, the method proposed by Itti et al. extracts the color, orientation, and other features of an image based on a bottom-up visual attention model to obtain a single saliency map. Guo et al. proposed obtaining a saliency map by computing the phase spectrum of the quaternion Fourier transform of the image, where each quaternion contains a set of color, intensity, and motion-vector features.
Borji et al. classified salient region detection algorithms in 2014; in such algorithms the salient object is detected by delineating its outline. It can be seen that bottom-up models have been the trend in recent years, while top-down models remain complex because prior knowledge is often unavailable and visual attention provides little usable information.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image segmentation method and an image segmentation system that address the inaccurate segmentation, in the prior art, of images with complex background information and weak boundaries.
In order to solve the technical problems, the invention adopts the technical scheme that: an image segmentation method comprising:
detecting a saliency region of a target image to obtain an initialized boundary curve of the target region;
generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model;
according to the new energy general function and the preset iteration times, carrying out evolution on the initialized boundary curve to obtain an evolved boundary curve;
and carrying out image segmentation according to the boundary curve after evolution.
The invention also relates to an image segmentation system comprising:
the detection module is used for detecting a salient region of the target image to obtain an initialized boundary curve of the target region;
the generating module is used for generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model;
the evolution module is used for evolving the initialized boundary curve according to the new energy general function and the preset iteration times to obtain an evolved boundary curve;
and the segmentation module is used for carrying out image segmentation according to the evolved boundary curve.
The invention has the following beneficial effects: by performing saliency detection, the edge information of the image, i.e., the gradient information of the target region, can be obtained easily and the information of the background region can be largely excluded, so that the initial curve starts near the edge of the target region; this greatly reduces the evolution time, improves segmentation accuracy, and keeps the final curve well located on the target region. A level set method combining local information and gradient information guarantees segmentation precision and can effectively segment images with complex background information and weak boundaries. The method improves both the efficiency and the accuracy of image segmentation, is clearly superior to the DRLSE model in both running time and segmentation quality, and achieves high computational efficiency and accuracy for many types of images.
Drawings
Fig. 1 is a flowchart of an image segmentation method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an image segmentation system according to the present invention;
fig. 3 is a schematic structural diagram of a system according to a third embodiment of the present invention.
Description of reference numerals:
1. a detection module; 2. a generation module; 3. an evolution module; 4. a segmentation module;
101. a first division unit; 102. a second dividing unit; 103. a first building element; 104. a second building element; 105. a first calculation unit; 106. a third building element; 107. a normalization unit;
108. a fourth building element; 109. an optimization unit; 110. an evolution updating unit; 111. a first obtaining unit;
201. a generating unit; 202. a second obtaining unit; 203. a third obtaining unit; 204. and a fourth obtaining unit.
Detailed Description
In order to explain technical contents, objects and effects of the present invention in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
The key concepts of the invention are as follows: the saliency region is detected by a cellular automaton, and images with complex background information and weak boundaries are effectively segmented by a level set method combining local information and gradient information.
Explanation of terms:
Cellular automaton: a cellular automaton consists of a grid of cells, each holding one of a set of discrete states; at discrete time steps the cells update their own states according to corresponding rules. The current state of each cell is determined by its own state at the previous time and by the states of its neighboring cells at the previous time.
Superpixel: a small region composed of a series of pixels that are adjacent in position and similar in features such as color, brightness, and texture.
Referring to fig. 1, an image segmentation method includes:
detecting a saliency region of a target image to obtain an initialized boundary curve of the target region;
generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model;
according to the new energy general function and the preset iteration times, carrying out evolution on the initialized boundary curve to obtain an evolved boundary curve;
and carrying out image segmentation according to the boundary curve after evolution.
From the above description, the beneficial effects of the present invention are: the evolution time can be greatly saved, and the segmentation accuracy is improved; the image with complex background information and weak boundary can be effectively segmented.
Furthermore, a local information LIF method is added on the basis of the DRLSE method, the LIF method well uses detail information as an energy item, the DRLSE method adds gradient information into an energy equation as a penalty item, and therefore not only is the speed and the direction of evolution controlled, but also an evolution curve is well stopped on the boundary of a target area.
Further, the "detecting the salient region of the target image according to the cellular automaton to obtain the initialized boundary curve of the target region" specifically includes:
dividing a target image into N superpixels;
dividing the super pixels positioned at the edge of the target image into K classes according to a K mean algorithm;
constructing K global color difference graphs and corresponding significant value matrixes thereof according to the divided K-class superpixels;
constructing a weight matrix according to the spatial distance between each super pixel and the super pixel in the K-type super pixels;
calculating to obtain a saliency map according to the saliency value matrix and the weight matrix;
defining an influence factor of one super pixel on another super pixel according to the adjacent relation and the space distance between the super pixels, and constructing an influence factor matrix;
normalizing the influence factor matrix according to a preset degree matrix to obtain a normalized influence factor matrix;
calculating the confidence coefficient of the current state of each super pixel according to the influence factors in the influence factor matrix, and constructing a confidence coefficient matrix;
optimizing the confidence coefficient matrix to obtain an optimized confidence coefficient matrix;
according to the normalized influence factor matrix and the optimized confidence coefficient matrix, carrying out evolution updating on the saliency map to obtain a final saliency map of the target image;
and obtaining an initialization boundary curve of the target area according to the final saliency map.
According to the description, the cellular automaton is used for quickly and accurately finding out the saliency map of the target image as the initial evolution curve of the new model, the defect that the DRLSE model manually selects the initial segmentation curve is overcome, the time is greatly saved, and the segmentation precision is high.
Further, the "generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model" specifically includes:
generating a new energy general function E = ηE_LIF + ρE_DRLSE according to the energy general function of the LIF model and the energy general function of the DRLSE model, wherein E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model;
obtaining a level set evolution equation of the LIF model according to the energy general function of the LIF model;
obtaining a level set evolution equation of the DRLSE model according to the energy general function of the DRLSE model;
and obtaining a new level set evolution equation according to the new energy general function, the level set evolution equation of the LIF model and the level set evolution equation of the DRLSE model.
From the above description, it can be known that the local information LIF method is added on the basis of the DRLSE method, the LIF method uses the detail information as the energy item well, the DRLSE method adds the gradient information into the energy equation as the penalty item, not only controls the speed and direction of the evolution, but also stops the evolution curve on the boundary of the target region well.
Further, the "evolving the initialized boundary curve according to the new energy general function and the preset iteration number to obtain an evolved boundary curve" specifically includes:
and evolving the initialized boundary curve according to the new level set evolution equation and preset iteration times to obtain an evolved boundary curve.
From the above description, the accuracy of segmentation is well guaranteed by using the new energy function of the combination of the LIF model and the DRLSE model.
Referring to fig. 2, the present invention further provides an image segmentation system, including:
the detection module is used for detecting a salient region of the target image to obtain an initialized boundary curve of the target region;
the generating module 2 is used for generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model;
the evolution module is used for evolving the initialized boundary curve according to the new energy general function and the preset iteration times to obtain an evolved boundary curve;
and the segmentation module is used for carrying out image segmentation according to the evolved boundary curve.
Further, the detection module is specifically configured to detect the saliency region of the target image according to the cellular automaton to obtain an initialization boundary curve of the target region.
Further, the detection module includes:
a first dividing unit for dividing the target image into N superpixels;
the second dividing unit is used for dividing the super pixels positioned at the edge of the target image into K classes according to a K mean value algorithm;
the first construction unit is used for constructing K global color difference graphs and corresponding significant value matrixes thereof according to the divided K classes of super pixels;
the second construction unit is used for constructing a weight matrix according to the spatial distance between each super pixel and the super pixel in the K-type super pixels;
the first calculation unit is used for calculating to obtain a saliency map according to the saliency value matrix and the weight matrix;
the third construction unit is used for defining an influence factor of one super pixel on another super pixel according to the adjacent relation and the space distance between the super pixels and constructing an influence factor matrix;
the normalization unit is used for normalizing the influence factor matrix according to a preset degree matrix to obtain a normalized influence factor matrix;
the fourth construction unit is used for calculating the confidence coefficient of the current state of each super pixel according to the influence factors in the influence factor matrix and constructing a confidence coefficient matrix;
the optimization unit is used for optimizing the confidence coefficient matrix to obtain an optimized confidence coefficient matrix;
the evolution updating unit is used for carrying out evolution updating on the saliency map according to the normalized influence factor matrix and the optimized confidence coefficient matrix to obtain a final saliency map of the target image;
and the first obtaining unit is used for obtaining an initialization boundary curve of the target area according to the final saliency map.
Further, the generating module includes:
a generating unit, configured to generate a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model, where the new energy general function is E = ηE_LIF + ρE_DRLSE, E_LIF being the energy general function of the LIF model, E_DRLSE the energy general function of the DRLSE model, η the preset weight corresponding to the LIF model, and ρ the preset weight corresponding to the DRLSE model;
the second obtaining unit is used for obtaining a level set evolution equation of the LIF model according to the energy general function of the LIF model;
a third obtaining unit, configured to obtain a level set evolution equation of the DRLSE model according to the energy general function of the DRLSE model;
and the fourth obtaining unit is used for obtaining a new level set evolution equation according to the new energy general function, the level set evolution equation of the LIF model and the level set evolution equation of the DRLSE model.
Further, the evolution module is specifically configured to evolve the initialized boundary curve according to the new level set evolution equation and a preset iteration number to obtain an evolved boundary curve.
Example one
Referring to fig. 1, a first embodiment of the present invention is: an image segmentation method based on salient region detection and level set, comprising the steps of:
s1: detecting a saliency region of a target image to obtain an initialized boundary curve of the target region; the target area in this embodiment is a salient area in the target image, and the target area is also a target to be segmented.
S2: generating a new energy function according to the energy function of the LIF model and the energy function of the DRLSE model;
s3: according to the new energy function and the preset iteration times, carrying out evolution on the initialized boundary curve to obtain an evolved boundary curve;
s4: and carrying out image segmentation according to the boundary curve after evolution.
In step S1, the saliency areas are the pixels that attract the most visual attention in the picture, and the criterion of saliency detection is as follows:
a. the most prominent objects that protrude;
b. highlighting the entire salient object consistently;
c. accurately conforming to the boundary of the object;
d. higher noise immunity;
e. full resolution.
In the embodiment, a saliency detection algorithm of a cellular automaton is adopted to detect the saliency region of the obtained target image. Specifically, step S1 includes the steps of:
s101: dividing a target image into N super pixels according to a simple linear iterative clustering algorithm; the Simple Linear Iterative Clustering (SLIC) algorithm is simple in structure, needs few parameters, and can effectively divide an image into pixel blocks with different sizes and shapes. The boundaries of the super-pixels are largely close to the boundaries of the objects in the figure, and each super-pixel is a representative area which not only contains the color and direction information of the bottom layer, but also contains the structure information of the middle layer. The super-pixel is used as a basic calculation unit to ensure that the final significance calculation result is more accurate in representing the boundary of the object. Further, in the present embodiment, the size of each super pixel is 9 × 9 pixels. Each super pixel represents a cell.
S102: dividing the super pixels positioned at the edge of the target image into K classes according to a K mean algorithm; preferably, K is 3, i.e. the superpixels located at the edge of the target image are divided into three classes.
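As a rough illustration of steps S101 and S102, the sketch below (assuming an RGB input image, the scikit-image SLIC implementation, and scikit-learn's K-means; the file name and parameter values are hypothetical and only approximate the 9 × 9-pixel superpixel size and K = 3 stated above) divides the image into superpixels, computes each superpixel's mean CIELAB color and coordinate, and clusters the border superpixels into K classes:

```python
# Hypothetical sketch of steps S101-S102: SLIC superpixels and K-means
# clustering of the border superpixels. Parameter values are illustrative.
import numpy as np
from skimage import io, color
from skimage.segmentation import slic
from sklearn.cluster import KMeans

image = io.imread("target.jpg")                     # hypothetical input path (RGB assumed)
h, w = image.shape[:2]
n_segments = (h * w) // 81                          # roughly 9x9 pixels per superpixel
labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
n_sp = labels.max() + 1

# Mean CIELAB color and mean coordinate of every superpixel (each one is a cell).
lab = color.rgb2lab(image)
lab_mean = np.array([lab[labels == i].mean(axis=0) for i in range(n_sp)])
ys, xs = np.mgrid[0:h, 0:w]
coords = np.array([[ys[labels == i].mean(), xs[labels == i].mean()]
                   for i in range(n_sp)])

# Superpixels touching the image border are treated as background seeds
# and clustered into K classes by K-means on their CIELAB colors.
border = np.unique(np.concatenate([labels[0], labels[-1],
                                   labels[:, 0], labels[:, -1]]))
K = 3
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(lab_mean[border])
border_class = km.labels_                           # class of each border superpixel
```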
S103: constructing K global color difference maps (GCD) and the corresponding saliency value matrix M = [m_(k,i)]_(K×N) according to the divided K classes of superpixels, where m_(k,i), the saliency value of superpixel i in the k-th global color difference map, can be calculated according to a first formula;
where p_k is the total number of superpixels in the k-th class, k = 1, 2, ..., K; ||d_i, d_j|| is the Euclidean distance between superpixel i and superpixel j in CIELAB color space, i = 1, 2, ..., N; σ_1 and β are preset constants; in this embodiment, σ_1 = 0.2 and β = 10.
S104: and constructing a weight matrix W according to the space distance between each super pixel and the super pixel in the K-type super pixels.
K saliency value matrices M are obtained in step S103. The GCD maps obtained from edge clustering alone are not satisfactory, but each map contains some high-accuracy superpixels, and the superpixels to be optimized are highly similar; therefore, to optimize the edge-based GCD maps, a weight matrix W = [w_(k,i)]_(K×N) is constructed to balance the importance of the different GCD maps. Because the superpixels on the image edge are connected to one another, they are taken as background seeds. w_(k,i), the spatial distance between superpixel i and the k-th class of background seeds (i.e., the superpixels in the k-th class), can be calculated according to a second formula;
where r_i and r_j are the coordinates of superpixel i and superpixel j (a superpixel contains many pixels; the mean of all pixel coordinates in a superpixel is taken as its coordinate), ||r_i, r_j|| is the Euclidean distance between superpixel i and superpixel j, and σ_2 is a preset constant controlling the weight; in this embodiment, σ_2 = 1.3.
S105: calculating the saliency map M_bg = [M_1^bg, ..., M_N^bg]^T according to the saliency value matrix M and the weight matrix W, where M_i^bg is calculated according to a third formula.
the spatial distance is used for constraining the GCD graph, the contrast of a local area can be enhanced, and therefore the accuracy of the significance value is improved. By effectively utilizing the advantages of different GCD maps, the saliency maps obtained based on the background are more convincing.
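The first, second, and third formulas are not reproduced in this text, so the following sketch only illustrates a plausible data flow for steps S103-S105 under assumed functional forms (mean color contrast to each background class, a Gaussian spatial weight with σ_2 = 1.3, and a weighted combination of the K GCD maps); it is not the patented formulas themselves:

```python
# Hedged sketch of steps S103-S105. The functional forms used here are
# illustrative assumptions about the data flow only; sigma1 and beta from the
# first formula are omitted because that formula is not reproduced in the text.
import numpy as np

def background_saliency(lab_mean, coords, border, border_class, K=3, sigma2=1.3):
    """lab_mean: (N,3) CIELAB means; coords: (N,2) superpixel centers;
    border: indices of border superpixels; border_class: their K-means class."""
    N = len(lab_mean)
    M = np.zeros((K, N))                    # saliency value matrix (one GCD map per row)
    W = np.zeros((K, N))                    # weight matrix from spatial distances
    for k in range(K):
        seeds = border[border_class == k]
        color_d = np.linalg.norm(lab_mean[:, None] - lab_mean[seeds][None], axis=2)
        M[k] = color_d.mean(axis=1)         # assumed contrast to the k-th background class
        spatial_d = np.linalg.norm(coords[:, None] - coords[seeds][None], axis=2)
        W[k] = np.exp(-spatial_d.min(axis=1) ** 2 / sigma2 ** 2)
    W /= W.sum(axis=0, keepdims=True)       # balance the importance of the K GCD maps
    M_bg = (W * M).sum(axis=0)              # assumed combination into one saliency map
    return (M_bg - M_bg.min()) / (np.ptp(M_bg) + 1e-12)
```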
S106: defining the influence factor f_(i,j) of one superpixel on another according to the adjacency relation and the spatial distance between superpixels, and constructing an influence factor matrix F.
A neighbor cell of a cell is defined, including cells adjacent to it and cells sharing the same edge as the cells adjacent to it. Also we consider the superpixels at the edges of the image to be connected to each other, so they are all used as background seeds. The influence of the neighboring cells on the cells is not fixed. Intuitively, it is believed that if a neighboring cell has more similar color characteristics to a cell, the state of the cell at the next time will be more affected, and the similarity of any pair of cells is measured by the distance defined in the CIELAB color space.
Thus, by defining the influence factor f_(i,j) of superpixel i on superpixel j, an influence factor matrix F = [f_(i,j)]_(N×N) is constructed, where f_(i,j) can be calculated according to a fourth formula; in particular,
f_(i,j) = 0 otherwise,
where NB(i) is the set of neighboring superpixels (neighboring cells) of superpixel i (cell i); that is, when superpixel j is not a neighbor of superpixel i, the influence factor of superpixel i on superpixel j is 0. ||d_i, d_j|| is the Euclidean distance between superpixel i and superpixel j in CIELAB color space, and σ_3 is a preset parameter controlling the similarity measure; in this embodiment, σ_3 = 0.1. The size of F has no relation to the size of M or W.
S107: normalizing the influence factor matrix F according to a preset degree matrix D to obtain a normalized influence factor matrix F*. Define the degree matrix D = diag{d_1, d_2, ..., d_N}, where d_i = Σ_j f_(i,j); the influence factor matrix F is normalized according to a fifth formula;
The fifth formula: F* = D^(-1)·F
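A minimal sketch of steps S106-S107 follows. The row normalization implements the fifth formula F* = D^(-1)F; the exponential color-similarity form used for f_(i,j) is an assumption standing in for the fourth formula (only its zero branch survives in this text), with σ_3 = 0.1 as stated:

```python
# Sketch of steps S106-S107. The normalization follows the fifth formula; the
# exponential color-similarity form of f_ij is an assumed stand-in for the
# fourth formula.
import numpy as np

def influence_matrix(lab_mean, neighbors, sigma3=0.1):
    """lab_mean: (N,3) CIELAB means; neighbors: dict i -> set of neighbor ids
    (adjacent superpixels plus superpixels sharing an edge with them)."""
    N = len(lab_mean)
    F = np.zeros((N, N))
    for i in range(N):
        for j in neighbors[i]:
            color_d = np.linalg.norm(lab_mean[i] - lab_mean[j])
            F[i, j] = np.exp(-color_d / sigma3 ** 2)    # assumed similarity form
    d = F.sum(axis=1, keepdims=True)                    # degree d_i = sum_j f_ij
    F_star = F / np.maximum(d, 1e-12)                   # fifth formula: F* = D^{-1} F
    return F_star
```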
S108: and calculating the confidence coefficient of the current state of each super pixel according to the influence factors in the influence factor matrix, and constructing a confidence coefficient matrix C.
Since the state of each cell at the next moment is determined by its current state and by the state at the previous moment, these two determining factors need to be balanced. If a superpixel differs greatly from its neighboring superpixels in color space, the state of its next moment is mainly determined by its current state; otherwise it will probably be assimilated by the local environment. Therefore a confidence matrix C = diag{c_1, c_2, ..., c_N} is established to better drive the update and evolution of all cells, where the confidence c_i of each cell in its current state can be calculated according to a sixth formula.
S109: optimizing the confidence matrix C to obtain an optimized confidence matrix C*. To ensure that c_i lies in a predetermined interval [b, a + b], c_i is optimized according to a seventh formula to obtain c_i*, yielding the optimized confidence matrix C* = diag{c_1*, c_2*, ..., c_N*};
where j = 1, 2, ..., N, and a and b are preset constants; in this embodiment, a = 0.6 and b = 0.2.
The cells are then automatically updated to the next, more accurate and stable state by using the optimized confidence matrix C*.
S110: according to the normalized influence factor matrix F* and the optimized confidence matrix C*, the saliency map M_bg is updated by evolution to obtain the final saliency map of the target image;
The normalized influence factor matrix F* measures the degree to which a superpixel is influenced by its neighboring superpixels, and the optimized confidence matrix C* measures the influence of a cell's previous state on its next state; together, F* and C* measure the influence of the neighboring cells and of the previous cell state on the current cell.
Specifically, the updating is performed according to an eighth formula;
The eighth formula: M^(t+1) = C*·M^t + (I - C*)·F*·M^t
where M^t denotes the state at the current time and M^(t+1) the updated state at the next time; at t = 0 the state is the initial one, i.e., M^0 = M_bg. After a predetermined N_l + 1 time steps (N_l is determined by the characteristics of the image itself, i.e., by the number N of superpixels), the final saliency map, i.e., the saliency region, is obtained.
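The synchronous update of the eighth formula can be sketched as follows (the rescaling of the raw confidences into [b, a + b] is an assumed stand-in for the seventh formula, with a = 0.6 and b = 0.2 as in this embodiment, and the number of time steps shown is illustrative):

```python
# Minimal sketch of the cellular-automaton update of step S110 (eighth
# formula): M^{t+1} = C* M^t + (I - C*) F* M^t.
import numpy as np

def evolve_saliency(M_bg, F_star, c_raw, a=0.6, b=0.2, n_steps=20):
    """M_bg: (N,) initial saliency map; F_star: (N,N) normalized influence
    matrix; c_raw: (N,) raw confidences from the sixth formula; n_steps is the
    preset number of time steps N_l + 1 (value here is illustrative)."""
    c = a * (c_raw - c_raw.min()) / (np.ptp(c_raw) + 1e-12) + b   # assumed rescaling into [b, a+b]
    C_star = np.diag(c)
    I = np.eye(len(M_bg))
    M = M_bg.copy()
    for _ in range(n_steps):
        M = C_star @ M + (I - C_star) @ (F_star @ M)   # eighth formula
    return M                                            # final saliency map
```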
S111: and obtaining an initialization boundary curve of the target area according to the final saliency map.
Based on the essential characteristics of most images, superpixels belonging to the foreground generally have similar color features; by exploiting the intrinsic relationship between neighboring superpixels, a single-layer cellular automaton can strengthen the consistency of saliency values in similar regions and form a stable local environment. Moreover, a salient object and its surroundings differ greatly in color space, so a clear boundary between object and background emerges naturally through the influence between similar neighboring superpixels. Cellular automata can thus reinforce the foreground and suppress the background well; based on them, an intuitive update mechanism is designed that exploits the intrinsic connectivity of salient objects through interaction with neighbors. This context-based propagation can refine any given state-of-the-art result to a more accurate level, with higher precision and recall.
Since the DRLSE model manually selects the initial segmentation curve, the initial segmentation curve must be outside or inside the object to be segmented, which results in an increase in segmentation time. In the embodiment, the rough outline of the target to be segmented is found by using the significance model based on the cellular automata principle, and then the outline of the significant region is used as an initial segmentation curve, so that the time is greatly saved, and the algorithm efficiency is improved.
For step S2, the local image fitting (LIF) model constructs an energy function using the local gray-scale information of the image. Local information is extracted with a Gaussian kernel function, two locally adaptive functions are used to approximate the average gray values inside and outside the contour, and the energy function is minimized to obtain the segmentation result. Assume the target image is I: Ω → R^l (Ω denotes the image domain and l the dimension of I(x)). First, a local fitting term I_LIF is constructed at a point x; the local fitting energy term is then integrated over the domain Ω to obtain the energy general function of the LIF model, as shown in a ninth formula. The local image fitting formula is shown in a tenth formula, where m_1 and m_2 are calculated according to an eleventh formula.
The tenth formula: I_LIF = m_1·H_ω(φ) + m_2·(1 - H_ω(φ))
where * denotes the convolution operation; H_ω(φ) is the Heaviside function; m_1(x) and m_2(x) are the averages of the local rectangular regions inside and outside the zero level set curve, respectively; W_K(x) is a rectangular window, typically chosen as a truncated Gaussian window R_σ with standard deviation σ and size (4h + 1) × (4h + 1), where h is the largest integer less than σ.
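Since the eleventh formula is not reproduced here, the sketch below computes m_1 and m_2 as Gaussian-weighted local averages of a grayscale image inside and outside the zero level set, which is a common way to implement local fitting and is assumed purely for illustration (the smooth arctangent Heaviside is likewise an assumption):

```python
# Hedged sketch of the LIF fitting term (ninth/tenth formulas). m1 and m2 are
# computed as Gaussian-weighted local averages, an assumed stand-in for the
# eleventh formula.
import numpy as np
from scipy.ndimage import gaussian_filter

def heaviside(phi, eps=1.0):
    # smooth Heaviside approximation H_eps(phi) (assumed form)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def lif_fitting(image, phi, sigma=3.0):
    """Return I_LIF = m1*H(phi) + m2*(1 - H(phi)) for a grayscale float image."""
    H = heaviside(phi)
    k = lambda f: gaussian_filter(f, sigma)             # truncated Gaussian window
    m1 = k(H * image) / (k(H) + 1e-12)                  # local mean inside the curve
    m2 = k((1.0 - H) * image) / (k(1.0 - H) + 1e-12)    # local mean outside the curve
    return m1 * H + m2 * (1.0 - H)                      # tenth formula
```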
And minimizing the energy general function of the LIF model to obtain a level set evolution equation of the LIF model, wherein the level set evolution equation is shown as a twelfth formula.
The LIF model works markedly well on specific classes of images; for example, it segments images with weak boundaries well. Its drawback is that the segmentation result depends on the size, shape, and position of the initial contour and is prone to falling into local minima, so an improperly designed initial contour may lead to an erroneous result.
The basic idea of the active contour model is to express the target edge with a continuous curve and to define an energy functional whose arguments include this edge curve, so that the segmentation process becomes one of solving for the minimum of the energy functional. This is generally done by solving the corresponding Euler-Lagrange equation; the position of the curve when the energy reaches its minimum is the contour of the target. The level set is a typical active contour model. However, the traditional level set model requires repeated re-initialization operations, which increases the evolution time.
Therefore, the present embodiment further introduces a Distance Regularized Level Set Evolution (DRLSE) model.
The principle of the level set model is that an initial curve approaches the edge through continuous iteration and finally segments the image. In the parametric active contour model, curve evolution is realized by minimizing an energy function; the level set method instead embeds the curve as the zero level set of a 3-dimensional surface, evolves the surface in 3-dimensional space, and then extracts the zero level set from the evolved surface, which is the evolved curve. By adding a penalty term to the level set contour model, the level set function is prevented from deviating from a signed distance function during evolution, and a data term is also added so that the evolving curve evolves toward the target contour curve. This effectively avoids the need to re-initialize the level set function during evolution. Moreover, the initial level set function is no longer restricted to a signed distance function, and the iteration method used also improves computational efficiency. A serious remaining problem, however, is non-uniform image gray levels. To avoid re-initializing the curve, a variational level set model requiring no repeated initialization is adopted; it overcomes the problem of the diffusivity tending to infinity, adds a penalty term that corrects the deviation between the level set function and a signed distance function, and improves the speed of curve evolution. The distance regularized energy general function is shown in a thirteenth formula;
The thirteenth formula: E_DRLSE = μE_P + E_ext
where E_P is the distance regularization term, added to ensure that the evolving curve always remains, or stays close to, a signed distance function; its expression is given by a fourteenth formula. μ is a preset constant that balances the importance of the distance regularization. E_ext is the external energy function, which ensures that the evolving curve stops well at the target region.
In the fourteenth formula, the symbol "≜" denotes a definition, i.e., it defines E_P. Since a signed distance function must satisfy |∇φ| = 1, the energy general function of the DRLSE model is as shown in a fifteenth formula;
The fifteenth formula: E_DRLSE = μE_P(φ) + λL(φ) + αA(φ)
Wherein, λ is greater than 0, α is a preset real number, the two parameters determine the proportion occupied by L (φ) and A (φ), and the expression of L (φ) and A (φ) is shown as a sixteenth formula;
In the sixteenth formula, "≜" likewise denotes a definition, i.e., it defines L(φ) and A(φ). δ_ω(x) is a univariate Dirac impulse function obtained from a seventeenth formula, with δ_ω(x) = 0 for |x| > ω; H_ω(x) is a Heaviside function obtained from an eighteenth formula, with H_ω(x) = 1 for x > ω and H_ω(x) = 0 for x < -ω, where ω is a preset constant. g is an edge indicator function, defined as g ≜ 1 / (1 + |∇(R_σ * I)|^2), where R_σ is a Gaussian kernel with standard deviation σ and σ is a scale parameter. g reflects the properties of the image; L(φ) drives the curve toward the target region and stops it at the target, while A(φ) determines the rate of curve evolution.
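The formula image for the edge indicator is missing from this text; the sketch below uses the standard DRLSE form g = 1/(1 + |∇(R_σ * I)|^2), assumed here to match the description of R as a Gaussian kernel with standard deviation σ:

```python
# Sketch of the edge indicator assumed above (standard DRLSE form).
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    smoothed = gaussian_filter(image.astype(float), sigma)  # R_sigma * I
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)                  # small near strong edges
```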
Therefore, by combining the sixteenth, seventeenth and eighteenth formulas and taking the steepest-descent process as the gradient flow, the final evolution equation, namely the level set evolution equation of the DRLSE model, is obtained as shown in a nineteenth formula and a twentieth formula;
where Δ denotes the Laplacian operator. The greatest advantage of this model is that it solves the re-initialization problem of the traditional level set: the curve stays close to a signed distance function during evolution. Its evolution principle is based on gradient information, so image edge localization is good; however, it uses only the local gradient information of the image and ignores global information, so for images with blurred edges or little local information the segmentation is poor and convergence is slow. Therefore, combining the local information of the image, rather than considering only its gradient information, yields a segmentation model with a good segmentation effect.
Based on the analysis, local information and gradient information are combined, LIF model and DRLSE model factors are introduced, and a new energy general function is generated, as shown in a twenty-first formula;
The twenty-first formula: E = ηE_LIF + ρE_DRLSE
where E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model.
Obtaining a final evolution curve equation according to a ninth formula, a twelfth formula, a fifteenth formula, a nineteenth formula and a twenty-first formula, wherein the final evolution curve equation is shown as a twenty-second formula;
the twenty-second formula:
Discretizing the new level set evolution equation (the twenty-second formula) into a finite difference equation according to the above equations gives the twenty-third formula: φ^(k+1) = φ^k + Δt·L(φ^k),
where L(φ^k) represents the approximation of the right-hand side of the new level set evolution equation (the twenty-second formula), Δt represents the time step, φ^k denotes the evolution curve at the current time, and φ^(k+1) denotes the evolution curve at the next time. It can be seen that increasing the step size speeds up the evolution of the curve, while decreasing it slows the evolution down.
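A minimal sketch of this finite-difference iteration is given below; the force function L(φ) combining the LIF and DRLSE terms is deliberately left as a placeholder because the full twenty-second formula is not reproduced in this text:

```python
# Minimal sketch of the twenty-third formula: phi^{k+1} = phi^k + dt * L(phi^k).
import numpy as np

def evolve_level_set(phi0, force, dt=1.0, n_iter=11):
    """phi0: initial level set; force(phi) approximates the right-hand side
    L(phi) of the new evolution equation (placeholder, assumption)."""
    phi = phi0.copy()
    for _ in range(n_iter):
        phi = phi + dt * force(phi)     # twenty-third formula
    return phi
```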
For steps S3 and S4, the initialized boundary curve obtained in step S1, i.e., the saliency detection contour curve, is used as the initial contour curve, and the new evolution equation obtained in step S2 is used to guide the evolution of the curve. Finally, image segmentation is carried out according to the evolved curve.
In the embodiment, the rough outline of the target to be segmented is found by using the significance model based on the cellular automata principle, and then the outline of the significant region is used as an initial segmentation curve, so that the time is greatly saved, and the algorithm efficiency is improved. A local information LIF method is added on the basis of a DRLSE method, the LIF method well uses detail information as an energy item, the DRLSE method adds gradient information into an energy equation as a penalty item, and therefore not only are the speed and the direction of evolution controlled, but also an evolution curve is well stopped on the boundary of a target area. The segmentation precision is well ensured by a level set method combining local information and gradient information, and images with complex background information and weak boundaries can be effectively segmented.
Example two
The present embodiment is a specific application scenario of the foregoing embodiments.
First, the parameters of the new level set evolution equation are set: η = 0.1, ρ = 0.9, time step Δt = 1, μ = 0.2, λ = 5, α = 1.5, and the number of iterations is 11. The final saliency map M^(t+1), i.e., the saliency region, is obtained; its mean value M_mean is computed and used as a threshold to divide the target image into two parts, and the dividing curve is taken as the initial contour curve, i.e., the initialization boundary curve. According to the eleventh formula and the sixteenth formula, m_1 and m_2, and L(φ) and A(φ), are calculated respectively, and the level set function is then evolved at every Δt = 1 s according to the new level set evolution equation and its finite difference equation. If the number of evolutions does not reach the preset iteration number, the curve continues to evolve until it does, giving the final evolution curve, i.e., the final segmentation curve, and the image is segmented according to this curve.
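A hedged sketch of this embodiment's initialization is given below: the final saliency map (mapped back to the pixel grid) is thresholded at its mean, and the result is converted into a binary step level set function; the ±c_0 initialization is an assumption borrowed from common DRLSE practice rather than something stated in this text, while the listed parameter values follow the embodiment:

```python
# Hedged sketch of the second embodiment's initialization and parameters.
import numpy as np

def initial_level_set(saliency_pixel_map, c0=2.0):
    """saliency_pixel_map: per-pixel saliency obtained by mapping the final
    superpixel saliency back onto the image grid."""
    mean_thresh = saliency_pixel_map.mean()          # M_mean used as threshold
    inside = saliency_pixel_map >= mean_thresh       # rough target region
    phi0 = np.where(inside, -c0, c0)                 # assumed binary step initialization
    return phi0

# Parameter values stated in this embodiment (iteration count read from the text).
params = dict(eta=0.1, rho=0.9, dt=1.0, mu=0.2, lam=5.0, alpha=1.5, n_iter=11)
```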
EXAMPLE III
Referring to fig. 3, the present embodiment is an image segmentation system corresponding to the above embodiment, including:
the detection module 1 is used for detecting a salient region of a target image to obtain an initialized boundary curve of the target region;
the generating module 2 is used for generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model;
the evolution module 3 is used for evolving the initialized boundary curve according to the new energy general function and the preset iteration times to obtain an evolved boundary curve;
and the segmentation module 4 is used for carrying out image segmentation according to the evolved boundary curve.
Further, the detection module 1 is specifically configured to detect a salient region of the target image according to the cellular automaton, so as to obtain an initialized boundary curve of the target region.
Further, the detection module 1 comprises:
a first dividing unit 101 for dividing the target image into N super pixels;
the second dividing unit 102 is configured to divide the super pixels located at the edge of the target image into K classes according to a K-means algorithm;
the first construction unit 103 is configured to construct K global color difference maps and significant value matrices corresponding to the K global color difference maps according to the divided K classes of super pixels;
a second constructing unit 104, configured to construct a weight matrix according to a spatial distance between each super pixel and a super pixel in the K-class super pixels;
a first calculating unit 105, configured to calculate a saliency map according to the saliency value matrix and the weight matrix;
a third constructing unit 106, configured to define an influence factor of one super pixel on another super pixel according to an adjacent relationship and a spatial distance between the super pixels, and construct an influence factor matrix;
a normalization unit 107, configured to normalize the impact factor matrix according to a preset degree matrix, to obtain a normalized impact factor matrix;
a fourth constructing unit 108, configured to calculate a confidence of the current state of each super pixel according to the influence factor in the influence factor matrix, and construct a confidence matrix;
the optimizing unit 109 is configured to optimize the confidence matrix to obtain an optimized confidence matrix;
the evolution updating unit 110 is configured to perform evolution updating on the saliency map according to the normalized influence factor matrix and the optimized confidence matrix to obtain a final saliency map of the target image;
a first obtaining unit 111, configured to obtain an initialization boundary curve of the target area according to the final saliency map.
Further, the generating module 2 includes:
a generating unit 201, configured to generate a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model, where the new energy general function is E = ηE_LIF + ρE_DRLSE, E_LIF being the energy general function of the LIF model, E_DRLSE the energy general function of the DRLSE model, η the preset weight corresponding to the LIF model, and ρ the preset weight corresponding to the DRLSE model;
a second obtaining unit 202, configured to obtain a level set evolution equation of the LIF model according to an energy general function of the LIF model;
a third obtaining unit 203, configured to obtain a level set evolution equation of the DRLSE model according to the energy general function of the DRLSE model;
a fourth obtaining unit 204, configured to obtain a new level set evolution equation according to the new energy general function, the level set evolution equation of the LIF model, and the level set evolution equation of the DRLSE model.
Further, the evolution module 3 is specifically configured to evolve the initialized boundary curve according to the new level set evolution equation and a preset iteration number to obtain an evolved boundary curve.
In summary, according to the image segmentation method and the system thereof provided by the present invention, by performing saliency detection, edge information of an image, that is, gradient information of a target region, can be easily obtained, and information of a background region can be well excluded, so that an initial curve starts near the edge of the target region, thereby greatly saving evolution time, improving segmentation accuracy, and making a final curve well located in the target region; the rough outline of the target to be segmented is found through a significance model based on the cellular automata principle, and then the outline of the significant region is used as an initial segmentation curve, so that the time is greatly saved, and the algorithm efficiency is improved; a local information LIF method is added on the basis of a DRLSE method, the LIF method well uses detail information as an energy item, the DRLSE method adds gradient information into an energy equation as a penalty item, and therefore not only are the speed and the direction of evolution controlled, but also an evolution curve is well stopped on the boundary of a target area; the segmentation precision is well ensured by a level set method combining local information and gradient information, and images with complex background information and weak boundaries can be effectively segmented.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.
Claims (10)
1. An image segmentation method, comprising:
detecting a saliency region of a target image to obtain an initialized boundary curve of the target region;
generating a new energy general function E = ηE_LIF + ρE_DRLSE according to the energy general function of the LIF model and the energy general function of the DRLSE model, wherein E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model;
according to the new energy general function and the preset iteration times, carrying out evolution on the initialized boundary curve to obtain an evolved boundary curve;
and carrying out image segmentation according to the boundary curve after evolution.
2. The image segmentation method according to claim 1, wherein the "detecting the salient region of the target image and obtaining the initialized boundary curve of the target region" specifically includes:
and detecting the salient region of the target image according to the cellular automaton to obtain an initialized boundary curve of the target region.
3. The image segmentation method according to claim 2, wherein the "detecting the salient region of the target image according to the cellular automaton to obtain the initialized boundary curve of the target region" specifically comprises:
dividing a target image into N superpixels;
dividing the super pixels positioned at the edge of the target image into K classes according to a K mean algorithm;
constructing K global color difference graphs and corresponding significant value matrixes thereof according to the divided K-class superpixels;
constructing a weight matrix according to the spatial distance between each super pixel and the super pixel in the K-type super pixels;
calculating to obtain a saliency map according to the saliency value matrix and the weight matrix;
defining an influence factor of one super pixel on another super pixel according to the adjacent relation and the space distance between the super pixels, and constructing an influence factor matrix;
normalizing the influence factor matrix according to a preset degree matrix to obtain a normalized influence factor matrix;
calculating the confidence coefficient of the current state of each super pixel according to the influence factors in the influence factor matrix, and constructing a confidence coefficient matrix;
optimizing the confidence coefficient matrix to obtain an optimized confidence coefficient matrix;
according to the normalized influence factor matrix and the optimized confidence coefficient matrix, carrying out evolution updating on the saliency map to obtain a final saliency map of the target image;
and obtaining an initialization boundary curve of the target area according to the final saliency map.
4. The image segmentation method according to claim 1, wherein the "generating a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model" specifically includes:
generating a new energy general function E = ηE_LIF + ρE_DRLSE according to the energy general function of the LIF model and the energy general function of the DRLSE model, wherein E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model;
obtaining a level set evolution equation of the LIF model according to the energy general function of the LIF model;
obtaining a level set evolution equation of the DRLSE model according to the energy general function of the DRLSE model;
and obtaining a new level set evolution equation according to the new energy general function, the level set evolution equation of the LIF model and the level set evolution equation of the DRLSE model.
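For reference, using the commonly published forms of the two models (the DRLSE formulation of Li et al. listed among the non-patent citations below, and the LIF local-image-fitting model), the combined gradient flow implied by claim 4 can be reconstructed as follows; this is an assumed reading, not a quotation of the patent:

```latex
% Assumed standard forms of the two flows and their weighted combination.
\frac{\partial \phi}{\partial t}\Big|_{\mathrm{LIF}}
  = \big(I - I^{\mathrm{LIF}}\big)\,(m_1 - m_2)\,\delta_\varepsilon(\phi),
\qquad
I^{\mathrm{LIF}} = m_1 H_\varepsilon(\phi) + m_2\big(1 - H_\varepsilon(\phi)\big)

\frac{\partial \phi}{\partial t}\Big|_{\mathrm{DRLSE}}
  = \mu\,\mathrm{div}\!\big(d_p(|\nabla\phi|)\,\nabla\phi\big)
  + \lambda\,\delta_\varepsilon(\phi)\,\mathrm{div}\!\Big(g\,\frac{\nabla\phi}{|\nabla\phi|}\Big)
  + \alpha\, g\,\delta_\varepsilon(\phi)

\frac{\partial \phi}{\partial t}
  = \eta\,\frac{\partial \phi}{\partial t}\Big|_{\mathrm{LIF}}
  + \rho\,\frac{\partial \phi}{\partial t}\Big|_{\mathrm{DRLSE}}
```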
5. The image segmentation method according to claim 4, wherein the step of evolving the initialized boundary curve according to the new energy general function and the preset iteration number to obtain an evolved boundary curve specifically comprises:
evolving the initialized boundary curve according to the new level set evolution equation and a preset number of iterations to obtain the evolved boundary curve.
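The evolution of claim 5 amounts to explicit time-stepping of that combined flow for a preset number of iterations. Below is a minimal NumPy sketch: it initializes φ from a binary saliency mask and updates it under a weighted sum of a LIF-style fitting force and simplified DRLSE-style forces. The binary initialization, the step size dt, and the Laplacian approximation of the distance-regularization term are illustrative assumptions. Thresholding the returned φ at zero (e.g. mask = phi > 0) then yields the segmentation of claim 1's final step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def _heaviside(phi, eps=1.0):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def _dirac(phi, eps=1.0):
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def evolve_curve(img, init_mask, eta=1.0, rho=1.0, n_iter=300, dt=0.1,
                 sigma=3.0, mu=0.2, lam=5.0, alpha=1.5):
    """Evolve the initial boundary curve for a preset number of iterations
    under a weighted sum of LIF-style and DRLSE-style gradient flows."""
    # Binary-step initialisation of the level set function from the mask.
    phi = np.where(init_mask > 0, 2.0, -2.0)

    # Edge indicator used by the DRLSE terms.
    gy, gx = np.gradient(gaussian_filter(img, 1.5))
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)

    for _ in range(n_iter):
        h = _heaviside(phi)
        # LIF force: pull phi so the locally fitted image matches the input.
        m1 = gaussian_filter(img * h, sigma) / (gaussian_filter(h, sigma) + 1e-8)
        m2 = gaussian_filter(img * (1 - h), sigma) / (gaussian_filter(1 - h, sigma) + 1e-8)
        i_lif = m1 * h + m2 * (1 - h)
        f_lif = (img - i_lif) * (m1 - m2) * _dirac(phi)

        # DRLSE force: distance regularisation (approximated by a Laplacian),
        # edge-weighted curvature, and a weighted area (balloon) term.
        py, px = np.gradient(phi)
        norm = np.sqrt(px ** 2 + py ** 2) + 1e-8
        curv = np.gradient(g * px / norm, axis=1) + np.gradient(g * py / norm, axis=0)
        f_drlse = mu * laplace(phi) + lam * _dirac(phi) * curv + alpha * g * _dirac(phi)

        phi = phi + dt * (eta * f_lif + rho * f_drlse)

    return phi  # the evolved boundary curve is the zero level set of phi
```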
6. An image segmentation system, comprising:
the detection module is used for detecting a salient region of the target image to obtain an initialized boundary curve of the target region;
a generating module, configured to generate a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model, the new energy general function being E = η·E_LIF + ρ·E_DRLSE, wherein E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model;
the evolution module is used for evolving the initialized boundary curve according to the new energy general function and a preset number of iterations to obtain an evolved boundary curve;
and the segmentation module is used for carrying out image segmentation according to the evolved boundary curve.
7. The image segmentation system of claim 6, wherein the detection module is specifically configured to detect a salient region of the target image according to a cellular automaton to obtain an initialized boundary curve of the target region.
8. The image segmentation system of claim 7, wherein the detection module comprises:
a first dividing unit for dividing the target image into N superpixels;
the second dividing unit is used for dividing the superpixels located at the edge of the target image into K classes according to a K-means algorithm;
the first construction unit is used for constructing K global color difference maps and their corresponding saliency value matrices according to the K classes of divided superpixels;
the second construction unit is used for constructing a weight matrix according to the spatial distances between each superpixel and the superpixels in the K classes of superpixels;
the first calculation unit is used for calculating a saliency map according to the saliency value matrices and the weight matrix;
the third construction unit is used for defining an influence factor of one superpixel on another superpixel according to the adjacency relation and the spatial distance between superpixels, and constructing an influence factor matrix;
the normalization unit is used for normalizing the influence factor matrix according to a preset degree matrix to obtain a normalized influence factor matrix;
the fourth construction unit is used for calculating the confidence coefficient of the current state of each superpixel according to the influence factors in the influence factor matrix, and constructing a confidence coefficient matrix;
the optimization unit is used for optimizing the confidence coefficient matrix to obtain an optimized confidence coefficient matrix;
the evolution updating unit is used for performing evolution updating on the saliency map according to the normalized influence factor matrix and the optimized confidence coefficient matrix to obtain a final saliency map of the target image;
and the first obtaining unit is used for obtaining the initialized boundary curve of the target region according to the final saliency map.
9. The image segmentation system of claim 6, wherein the generation module comprises:
a generating unit, configured to generate a new energy general function according to the energy general function of the LIF model and the energy general function of the DRLSE model, the new energy general function being E = η·E_LIF + ρ·E_DRLSE, wherein E_LIF is the energy general function of the LIF model, E_DRLSE is the energy general function of the DRLSE model, η is the preset weight corresponding to the LIF model, and ρ is the preset weight corresponding to the DRLSE model;
the second obtaining unit is used for obtaining a level set evolution equation of the LIF model according to the energy general function of the LIF model;
a third obtaining unit, configured to obtain a level set evolution equation of the DRLSE model according to the energy general function of the DRLSE model;
and the fourth obtaining unit is used for obtaining a new level set evolution equation according to the new energy general function, the level set evolution equation of the LIF model and the level set evolution equation of the DRLSE model.
10. The image segmentation system according to claim 9, wherein the evolution module is specifically configured to evolve the initialized boundary curve according to the new level set evolution equation and a preset number of iterations to obtain an evolved boundary curve.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710404301.4A CN107330897B (en) | 2017-06-01 | 2017-06-01 | Image segmentation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107330897A CN107330897A (en) | 2017-11-07 |
CN107330897B true CN107330897B (en) | 2020-09-04 |
Family
ID=60194046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710404301.4A Active CN107330897B (en) | 2017-06-01 | 2017-06-01 | Image segmentation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107330897B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108363055B (en) * | 2018-01-17 | 2020-01-31 | 电子科技大学 | radar foresight imaging area segmentation method |
CN108898611B (en) * | 2018-06-19 | 2021-09-24 | 东华理工大学 | Fuzzy region active contour segmentation model based on significant perception prior |
WO2020029064A1 (en) * | 2018-08-07 | 2020-02-13 | 温州医科大学 | Optical coherence tomographic image processing method |
CN110176021B (en) * | 2019-05-21 | 2021-04-16 | 山东大学 | Level set image segmentation method and system for saliency information combined with brightness correction |
CN110390667B (en) * | 2019-06-18 | 2023-10-20 | 平安科技(深圳)有限公司 | Focus extraction method, device, equipment and storage medium based on fundus OCT image |
CN110288581B (en) * | 2019-06-26 | 2022-11-04 | 电子科技大学 | Segmentation method based on model for keeping shape convexity level set |
CN111640188A (en) * | 2020-05-29 | 2020-09-08 | 中国地质大学(武汉) | Anti-noise three-dimensional grid optimization method based on Mumford-Shah algorithm framework |
CN112070712B (en) * | 2020-06-05 | 2024-05-03 | 湖北金三峡印务有限公司 | Printing defect detection method based on self-encoder network |
CN111739047A (en) * | 2020-06-24 | 2020-10-02 | 山东财经大学 | Tongue image segmentation method and system based on bispectrum reconstruction |
CN112037109A (en) * | 2020-07-15 | 2020-12-04 | 北京神鹰城讯科技股份有限公司 | Improved image watermarking method and system based on saliency target detection |
CN112001933A (en) * | 2020-09-09 | 2020-11-27 | 成都市精卫鸟科技有限责任公司 | Image capturing method, device, equipment and medium |
CN112435263A (en) * | 2020-10-30 | 2021-03-02 | 苏州瑞派宁科技有限公司 | Medical image segmentation method, device, equipment, system and computer storage medium |
CN113313672A (en) * | 2021-04-28 | 2021-08-27 | 贵州电网有限责任公司 | Active contour model image segmentation method based on SLIC superpixel segmentation and saliency detection algorithm |
CN114445443B (en) * | 2022-01-24 | 2022-11-04 | 山东省人工智能研究院 | Interactive image segmentation method based on asymmetric geodesic distance |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831608A (en) * | 2012-08-06 | 2012-12-19 | 哈尔滨工业大学 | Unsteady measurement algorithm based image segmentation method of improved rule distance level set |
CN103093474A (en) * | 2013-01-28 | 2013-05-08 | 电子科技大学 | Three-dimensional mammary gland ultrasound image partition method based on homoplasmon and partial energy |
CN103700095A (en) * | 2013-12-10 | 2014-04-02 | 东北林业大学 | Log end surface image partitioning algorithm for improving active contour model based on circle constraint |
CN104867143A (en) * | 2015-05-15 | 2015-08-26 | 东华理工大学 | Level set image segmentation method based on local guide core-fitting energy model |
CN105825513A (en) * | 2016-03-21 | 2016-08-03 | 辽宁师范大学 | Image segmentation method based on adaptive fitting of global information and local information |
CN106204592A (en) * | 2016-07-12 | 2016-12-07 | 东北大学 | A kind of image level collection dividing method based on local gray level cluster feature |
CN106296649A (en) * | 2016-07-21 | 2017-01-04 | 北京理工大学 | A kind of texture image segmenting method based on Level Set Models |
CN106548478A (en) * | 2016-10-28 | 2017-03-29 | 中国科学院苏州生物医学工程技术研究所 | Active contour image partition method based on local fit image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9269187B2 (en) * | 2013-03-20 | 2016-02-23 | Siemens Product Lifecycle Management Software Inc. | Image-based 3D panorama |
2017-06-01 — CN application CN201710404301.4A, patent CN107330897B (en), Active
Non-Patent Citations (3)
Title |
---|
Distance Regularized Level Set Evolution and Its Application to Image Segmentation; Chunming Li et al.; IEEE Transactions on Image Processing; 2010-12-31; Vol. 19, No. 12, pp. 3243-3254 *
Research on Image Segmentation Technology Based on Partial Differential Equations; Yuan Jianjun; Doctoral Dissertation Database; 2012-12-31; pp. 1-146 *
Level Set Image Segmentation Based on Rough Sets and a New Energy Formula; Zhang Yingchun et al.; Acta Automatica Sinica; 2015-11-30; Vol. 41, No. 11, pp. 1913-1925 *
Also Published As
Publication number | Publication date |
---|---|
CN107330897A (en) | 2017-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107330897B (en) | Image segmentation method and system | |
Wang et al. | Active contours driven by edge entropy fitting energy for image segmentation | |
US10600247B2 (en) | Augmented reality occlusion | |
Papon et al. | Voxel cloud connectivity segmentation-supervoxels for point clouds | |
CN105513070A (en) | RGB-D salient object detection method based on foreground and background optimization | |
Wang et al. | Feature extraction of point clouds based on region clustering segmentation | |
CN108022244B (en) | Hypergraph optimization method for significant target detection based on foreground and background seeds | |
CN107369131A (en) | Conspicuousness detection method, device, storage medium and the processor of image | |
Wang et al. | Interactive multilabel image segmentation via robust multilayer graph constraints | |
CN111046868A (en) | Target significance detection method based on matrix low-rank sparse decomposition | |
CN110503113A (en) | A kind of saliency object detection method restored based on low-rank matrix | |
CN108846845B (en) | SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering | |
CN109345536B (en) | Image super-pixel segmentation method and device | |
Marriott et al. | Plane-extraction from depth-data using a Gaussian mixture regression model | |
CN107392211B (en) | Salient target detection method based on visual sparse cognition | |
Manhardt et al. | Cps: Class-level 6d pose and shape estimation from monocular images | |
CN110136143A (en) | Geneva based on ADMM algorithm multiresolution remote sensing image segmentation method off field | |
CN110348311B (en) | Deep learning-based road intersection identification system and method | |
Ghosh et al. | Robust simultaneous registration and segmentation with sparse error reconstruction | |
CN108765384B (en) | Significance detection method for joint manifold sequencing and improved convex hull | |
Brockers | Cooperative stereo matching with color-based adaptive local support | |
US20230386023A1 (en) | Method for detecting medical images, electronic device, and storage medium | |
CN106778903A (en) | Conspicuousness detection method based on Sugeno fuzzy integrals | |
CN107492101B (en) | Multi-modal nasopharyngeal tumor segmentation algorithm based on self-adaptive constructed optimal graph | |
CN102855624B (en) | A kind of image partition method based on broad sense data fields and Ncut algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TR01 | Transfer of patent right |
Effective date of registration: 2022-04-11
Address after: 350000 B505, 3/F, Building 10, Phase I, Innovation Park, No. 3 Keji East Road, High-tech Zone, Fuzhou, Fujian
Patentee after: Fujian Leji Technology Co., Ltd.
Address before: Science and Technology Office of Fujian Normal University
Patentee before: Fujian Normal University