CN115731214A - Medical image segmentation method and device based on artificial intelligence - Google Patents
Medical image segmentation method and device based on artificial intelligence
- Publication number: CN115731214A
- Application number: CN202211513888.XA
- Authority: CN (China)
- Prior art keywords: image, medical image, spine, medical, processing
- Prior art date: 2022-11-29
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract
The invention provides a spine image segmentation method and device based on artificial intelligence. The method comprises: acquiring a spine image dataset, wherein the spine image dataset comprises a plurality of first medical images and each first medical image comprises a feature sub-image and a non-feature sub-image; performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image; and inputting the second medical image into a trained deep learning model to obtain a mask image of the feature sub-image. By performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image, the method significantly improves the definition and contrast of the feature sub-image and thereby the segmentation precision of the spine image.
Description
Technical Field
The invention relates to the medical field, and in particular to a medical image segmentation method and device based on artificial intelligence.
Background
In recent years, segmenting vertebral bodies from spinal CT images has become critical for pathological diagnosis, surgical planning and post-operative assessment. However, automatic segmentation of spine CT images is difficult because of pathological anatomical changes. Owing to the highly repetitive vertebral structure, the variation among pathological forms such as fractures and implants, and differing fields of view, most vertebral body segmentation methods rely on traditional image processing and machine learning, and their segmentation precision is poor.
Therefore, a way to address the above problems is needed.
Disclosure of Invention
The invention provides an artificial-intelligence-based medical image segmentation method and device to address the above problems.
To achieve the above object, the present invention provides a spine image segmentation method based on artificial intelligence, comprising: acquiring a spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image; performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image; and inputting the second medical image into a trained deep learning model to obtain a mask image of the feature sub-image.
Optionally, after obtaining the mask image of the feature sub-image, the method further includes: performing three-dimensional reconstruction on the plurality of mask images to obtain a three-dimensional image of the spine.
Optionally, the deep learning model is trained based on an FC-DenseNet network structure, and the dense connection blocks (Dense blocks) of the FC-DenseNet structure adopt a multi-scale residual structure.
Optionally, the multi-scale residual structure is a 4-channel multi-scale residual structure.
Optionally, training the deep learning model includes: acquiring a spine image training set, wherein the spine image training set comprises a plurality of first sample medical images and each first sample medical image comprises a sample feature sub-image and a sample non-feature sub-image; performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first sample medical image to obtain a second sample medical image; and inputting the second sample medical image and the mask image of the corresponding sample feature sub-image into a deep learning model, and training the deep learning model.
Optionally, before performing the non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image, the method further includes: converting the data format of the first medical image into the png data format.
Another embodiment of the present invention provides an artificial intelligence based spine image segmentation apparatus, including: an acquisition module for acquiring a spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image; and a processing module for performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image; the processing module is further configured to input the second medical image into a trained deep learning model to obtain a mask image of the feature sub-image.
Yet another embodiment of the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the artificial intelligence based spine image segmentation method as described above when executing the program.
Another embodiment of the invention provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the artificial intelligence based spine image segmentation method as set forth above.
Yet another embodiment of the invention provides a computer program product comprising a computer program which, when executed by a processor, implements an artificial intelligence based spine image segmentation method as described above.
The technical solution of the invention has at least the following beneficial effects:
Applying non-local means denoising to each first medical image in the spine image dataset removes noise while preserving the detail features of the image to the greatest extent; applying a bottom-hat transform to the denoised image enhances it; and applying contrast-limited adaptive histogram equalization to the bottom-hat-transformed image makes the resulting second medical image clearer and the contrast between bone and its surroundings more pronounced, without excessively amplifying noise. When a second medical image obtained in this way is input into the deep learning model, the definition and contrast of the feature sub-image are significantly improved compared with the unprocessed image, thereby improving the segmentation precision of the spine image.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a spine image segmentation method based on artificial intelligence according to the present invention;
FIG. 2a is a schematic illustration of a first medical image provided by the present invention;
FIG. 2b is a schematic diagram of a first medical image after non-local means denoising according to the present invention;
FIG. 2c is a schematic diagram of a first medical image after bottom-hat transformation according to the present invention;
FIG. 2d is a schematic diagram of a first medical image after contrast-limited adaptive histogram equalization according to the present invention;
fig. 3 is a schematic diagram of the FC-DenseNet network structure provided by the present invention;
FIG. 4 is a schematic structural diagram of a Dense block module in the improved FC-DenseNet network provided by the present invention;
FIG. 5 is a schematic view of spine segmentation reconstruction provided by the present invention;
FIG. 6 is a block diagram of an artificial intelligence based spine image segmentation apparatus according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between associated objects and covers three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprising A, B and C" or "comprising A, B, C" means that all three of A, B and C are comprised; "comprising A, B or C" means comprising one of A, B and C; and "comprising A, B and/or C" means comprising any one, any two, or all three of A, B and C.
It should be understood that in the present invention, "B corresponding to a", "a corresponds to B", or "B corresponds to a" means that B is associated with a, and B can be determined from a. Determining B from a does not mean determining B from a alone, but may be determined from a and/or other information. And the matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
The technical means of the present invention will be described in detail with reference to specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
For convenience of understanding, terms or technical terms used in the present application are introduced.
Non-local means denoising: this method makes full use of redundant information in the image and preserves the detail features of the image to the greatest extent while removing noise. The basic idea is that the estimate of the current pixel is obtained as a weighted average of the pixels in the image whose neighborhood structure is similar to its own.
Bottom-hat transform: the result of subtracting the original image from its morphological closing is called the bottom-hat transform. The closing operation removes darker regions lying on a brighter background, so subtracting the original image from the closing result yields the darker gray regions of the original image; for this reason the bottom-hat transform is also called the black-hat transform.
Contrast-limited adaptive histogram equalization (CLAHE): the amplification of contrast is limited, thereby reducing the problem of noise amplification.
The spine image segmentation method based on artificial intelligence provided by the embodiment of the invention adopts a deep learning model to segment the feature sub-images of a spine image. The training process of the deep learning model comprises the following steps:
acquiring a spine image training set, wherein the spine image training set comprises a plurality of first sample medical images and each first sample medical image comprises a sample feature sub-image and a sample non-feature sub-image;
performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first sample medical image to obtain a second sample medical image;
and inputting the second sample medical image and the mask image of the corresponding sample feature sub-image into a deep learning model, and training the deep learning model.
In model training, the background label pixel value of the medical image is set to 0 and the spine pixel value is set to 1; the batch_size is set to 32 and the initial learning rate is set to 1e-4, decayed by a factor of 0.95 every 10000 iterations. The optimizer is the Adam optimizer and the loss function is the Focal loss function. The constructed network is trained on the partitioned training set, and every 1000 iterations a verification pass is run over the full training set and the full validation set. The stopping point of network training is determined by early stopping, yielding the final deep learning model.
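The following is a minimal PyTorch sketch of the training schedule described above (Adam, initial learning rate 1e-4, decay by 0.95 every 10000 iterations, validation every 1000 iterations). The `model`, `train_loader`, `focal_loss` and `validate` arguments are assumptions standing in for components not shown in the patent text.

```python
import torch

def train(model, train_loader, focal_loss, validate, max_epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Multiply the learning rate by 0.95 every 10000 optimisation steps, as stated above.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.95)
    step = 0
    for _ in range(max_epochs):
        for images, masks in train_loader:            # batch_size = 32 in the DataLoader
            loss = focal_loss(model(images), masks)   # background pixels = 0, spine pixels = 1
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
            step += 1
            if step % 1000 == 0:
                validate(model)                       # early stopping decides when training halts
    return model
```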
The expression of the Focal loss function is:
FL(p_t) = -α_t (1 - p_t)^γ · log(p_t)
where p_t is the probability (i.e. confidence) with which the model predicts the true class; α balances the numbers of positive and negative samples, with the more numerous class assigned a smaller α value and the less numerous class a larger α value; and γ adjusts the imbalance between easy and hard samples. Generally γ ≥ 1, and the power term reduces the loss of easily separable samples, which makes the model pay more attention to hard samples and helps improve its prediction capability. In other words, the goal of using the Focal loss function is to focus on hard cases by giving more weight to samples that are difficult to classify. For positive samples, the loss of samples with high prediction probability (easy samples) is reduced, while the loss of samples with low prediction probability (hard samples) is increased, thereby strengthening the attention paid to hard samples.
Referring next to fig. 1, the present invention provides an artificial intelligence based spine image segmentation method, including the following steps:
s11: a spine image dataset is acquired, the spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image.
The plurality of first medical images in the spine image dataset are CT images of the spine, arranged in a specific order. Each first medical image contains different regions: the image in the target region to be segmented is referred to as the feature sub-image, and the image in the non-target region is referred to as the non-feature sub-image.
S12: performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image.
It should be noted that the process of S12 is performed on each first medical image, so that a plurality of processed second medical images are obtained.
The order of the non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization mentioned in S12 is not limited. Preferably, the first medical image is processed in that order: non-local means denoising, then bottom-hat transformation, then contrast-limited adaptive histogram equalization. Optionally, the first medical image may instead be processed in the order of bottom-hat transformation, non-local means denoising and contrast-limited adaptive histogram equalization.
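Below is a minimal OpenCV sketch of S12 in the preferred order. The filter strength, structuring-element size, clip limit and tile grid, as well as the choice to add the bottom-hat response back onto the denoised image as the enhancement step, are all assumptions; the patent does not specify these parameters.

```python
import cv2
import numpy as np

def preprocess(first_image: np.ndarray) -> np.ndarray:
    """first_image: single-channel 8-bit spine CT slice (e.g. loaded from a png file)."""
    # 1) Non-local means denoising (h=10, template window 7, search window 21 are assumed values).
    denoised = cv2.fastNlMeansDenoising(first_image, None, 10, 7, 21)
    # 2) Bottom-hat (black-hat) transform: closing(image) - image.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    bottom_hat = cv2.morphologyEx(denoised, cv2.MORPH_BLACKHAT, kernel)
    enhanced = cv2.add(denoised, bottom_hat)   # assumed enhancement: add the bottom-hat response back
    # 3) Contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(enhanced)
```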
S13: and inputting the second medical image into a trained deep learning model to obtain a mask image of the characteristic sub-image.
The trained deep learning model is used to determine the mask image of the feature sub-image in the input medical image.
Inputting the second medical image into the trained deep learning model therefore segments the feature sub-image from the second medical image, i.e. yields the mask image of the feature sub-image.
According to the artificial-intelligence-based spine image segmentation method provided by the embodiment of the invention, applying non-local means denoising to each first medical image in the spine image dataset removes noise while preserving the detail features of the image to the greatest extent; applying a bottom-hat transform to the denoised image enhances it; and applying contrast-limited adaptive histogram equalization to the bottom-hat-transformed image makes the resulting second medical image clearer and the contrast between bone and its surroundings more pronounced, without excessively amplifying noise. When a second medical image obtained in this way is input into the deep learning model, the definition and contrast of the feature sub-image are significantly improved compared with the unprocessed image, thereby improving the segmentation precision of the spine image.
In the spine image segmentation method based on artificial intelligence provided by the embodiment of the present invention, after obtaining the mask image of the feature sub-image, the method further includes:
performing three-dimensional reconstruction on the plurality of mask images to obtain a three-dimensional image of the spine.
Because each mask image is obtained after non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization, its definition and contrast are enhanced, and so are the definition and contrast of the three-dimensional spine image reconstructed from the mask images. The reconstructed spine image can assist a doctor in surgical planning and improve surgical precision and success rate.
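As a hedged sketch of this reconstruction step, the per-slice mask images can be stacked into a volume and a surface mesh extracted with marching cubes; the use of scikit-image and the voxel spacing are assumptions, since the patent does not name a reconstruction algorithm.

```python
import numpy as np
from skimage import measure

def reconstruct_spine(mask_slices, spacing=(1.0, 1.0, 1.0)):
    """mask_slices: list of 2-D binary mask arrays in scan order; spacing: (z, y, x) voxel size."""
    volume = np.stack(mask_slices, axis=0).astype(np.uint8)   # stack slices into a 3-D volume
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5, spacing=spacing)
    return verts, faces   # surface mesh that can be rendered or exported for surgical planning
```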
As shown in fig. 3, in the spine image segmentation method based on artificial intelligence provided in the embodiment of the present invention, the deep learning model is trained based on the FC-DenseNet network structure;
the dense connection blocks (Dense blocks) of the FC-DenseNet structure adopt a multi-scale residual structure. It should be noted that the FC-DenseNet network structure is constructed from Dense Blocks and a UNet-style architecture. The FC-DenseNet network structure is composed of two down-sampling paths for downward transitions and two up-sampling paths for upward transitions. The FC-DenseNet network structure also comprises two horizontal skip connections, which concatenate feature maps from the down-sampling path with the corresponding feature maps in the up-sampling path. It should also be noted that the connection patterns in the up-sampling and down-sampling paths are not identical: in the down-sampling path there is a skip-concatenation path outside each Dense block, which results in a linear increase in the number of feature maps, whereas there is no such operation in the up-sampling path.
Like the ResNet structure, the FC-DenseNet structure also mitigates the vanishing-gradient problem. Compared with the UNet structure, the features of each layer propagate more readily through the FC-DenseNet structure, so the features in the feature maps are better exploited; when the dataset is small it resists overfitting well; and while reducing the number of parameters, the FC-DenseNet structure makes fuller use of them, so parameter utilization is high: although the network is deep, it has few parameters.
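For illustration, a minimal PyTorch dense block with the concatenation-based connectivity described above is sketched below; the growth rate and layer count are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous outputs (dense connectivity)."""

    def __init__(self, in_channels: int, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # FC-DenseNet keeps only the newly produced feature maps as the block output;
        # the skip-concatenation with the block input happens outside, in the down-sampling path.
        return torch.cat(features[1:], dim=1)
```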
As shown in fig. 4, in the spine image segmentation method based on artificial intelligence provided by the embodiment of the present invention, the multi-scale residual structure is a 4-channel multi-scale residual structure.
To extract more detailed information from CT images and alleviate under-segmentation of low-contrast edges, the Dense block module in the FC-DenseNet network is improved to reduce memory consumption: a multi-scale residual network structure is introduced to optimize the basic convolution-pooling module. This deepens the network, relieves the gradient-dispersion problem that easily arises when training deep networks, and provides multi-scale feature representation capability. The improved Dense block structure is shown in fig. 4.
The multi-scale structure improves prediction stability; relay supervision at different scales lets the model learn richer multi-level features; and the global residual connection between the input and output of each scale noticeably improves the model's performance.
The module improves the bottleneck residual structure: the full-channel 3×3 convolution kernel is replaced by a group of smaller filter groups, and the grouped filters are connected in a residual-like hierarchical manner to increase the number of scales that the output features can represent. The feature map output by the 1×1 convolution is divided into 4 blocks along the channel dimension and features are extracted hierarchically, with the feature computation as follows:
y_4 = x_4; y_3 = K_3(x_3); y_i = K_i(x_i + y_(i+1)) for i = 1, 2
where x_i is the i-th subset of the split features, K_i is the 3×3 convolution applied to that subset after the split, and y_i is the output of the feature subset. Through block convolution and residual fusion, feature outputs with different receptive field sizes are obtained: y_4 is identical to its input features, y_3 is a 3×3 convolution of its feature subset, y_2 corresponds to a 5×5 receptive field, and y_1 corresponds to feature maps obtained over a 7×7 receptive field. The convolution outputs in the module are connected across layers with the original feature map, which shortens the distance between earlier and later layers and further improves feature learning.
Concretely, a 1×1 convolution kernel changes the number of channels of the input features, which then pass through four paths: y_4 keeps its features unchanged; y_3 passes its block through a 3×3 convolution kernel, the block holding 1/4 of the channels; y_2 adds its input block to the features of y_3 and passes the sum through a 3×3 convolution kernel, again with 1/4 of the channels; and y_1 adds its input block to the features of y_2 and passes the sum through a 3×3 convolution kernel, with 1/4 of the channels. Finally, the features y_1, y_2, y_3 and y_4 are concatenated, passed through a 1×1 convolution kernel, combined with the original input features, and the final result is output.
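A hedged PyTorch sketch of this 4-channel multi-scale residual unit (a Res2Net-style split, hierarchical 3×3 convolution and fuse block) is given below; the channel counts are assumptions and the normalisation and activation layers of the patented module are not specified here.

```python
import torch
import torch.nn as nn

class MultiScaleResidual(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 4 == 0
        width = channels // 4
        self.reduce = nn.Conv2d(channels, channels, kernel_size=1)   # 1x1 conv before the split
        # One 3x3 convolution per hierarchical branch (y1, y2, y3); y4 passes through unchanged.
        self.convs = nn.ModuleList(
            [nn.Conv2d(width, width, kernel_size=3, padding=1) for _ in range(3)]
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)     # 1x1 conv after concatenation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2, x3, x4 = torch.chunk(self.reduce(x), 4, dim=1)
        y4 = x4                          # identity branch
        y3 = self.convs[2](x3)           # 3x3 receptive field
        y2 = self.convs[1](x2 + y3)      # effectively 5x5 receptive field
        y1 = self.convs[0](x1 + y2)      # effectively 7x7 receptive field
        out = self.fuse(torch.cat([y1, y2, y3, y4], dim=1))
        return out + x                   # global residual connection with the block input
```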
The residual structure adopts a split-then-fuse strategy: it enlarges the receptive field of each network layer while keeping the number of parameters under control, adds no extra computational burden to the network, and represents features at multiple scales. The multi-scale fusion helps extract more detailed information, reduces the interference caused by the low contrast between the spine target and the surrounding soft tissue, and improves the segmentation precision.
In the spine image segmentation method based on artificial intelligence provided by the embodiment of the invention, before the non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization are performed on the first medical image, the method further comprises the following step:
converting the data format of the first medical image into the png data format.
The data format of the first medical image may be a Dicom file or a Mask file. Fig. 5 is a schematic diagram of the segmentation reconstruction of Dicom data according to the present invention.
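A hedged sketch of this conversion step for Dicom input is shown below; the use of pydicom and the simple min-max normalisation to 8 bits are assumptions, as the patent does not specify a windowing strategy.

```python
import cv2
import numpy as np
import pydicom

def dicom_to_png(dicom_path: str, png_path: str) -> None:
    """Convert one Dicom slice to an 8-bit png suitable for the preprocessing above."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)
    pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)  # scale to [0, 1]
    cv2.imwrite(png_path, (pixels * 255).astype(np.uint8))
```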
As shown in fig. 6, based on the same technical concept as the spine image segmentation method based on artificial intelligence, another embodiment of the present invention provides a spine image segmentation apparatus based on artificial intelligence; the effects achieved by the segmentation apparatus are similar to those of the method and are not repeated here. The segmentation apparatus includes:
an acquisition module 61 configured to acquire a spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image;
a processing module 62 configured to perform non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image;
the processing module 62 is further configured to input the second medical image into the trained deep learning model, so as to obtain a mask image of the feature sub-image.
In the segmentation apparatus provided in the embodiment of the present invention, after obtaining the mask image of the feature sub-image, the processing module 62 is further configured to: and performing three-dimensional reconstruction on the plurality of mask images to obtain a three-dimensional image of the spine.
According to the segmentation apparatus provided by the embodiment of the invention, the deep learning model is trained based on the FC-DenseNet network structure;
the dense connection blocks (Dense blocks) of the FC-DenseNet network structure adopt a multi-scale residual structure.
In the segmentation apparatus provided in the embodiment of the present invention, the multi-scale residual structure is a 4-channel multi-scale residual structure.
In the segmentation apparatus provided in the embodiment of the present invention, when training the deep learning model, the obtaining module 61 is specifically configured to:
acquire a spine image training set, wherein the spine image training set comprises a plurality of first sample medical images and each first sample medical image comprises a sample feature sub-image and a sample non-feature sub-image;
the processing module 62 is configured to perform non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first sample medical image to obtain a second sample medical image;
the processing module 62 is further configured to input the second sample medical image and the mask image of the sample feature sub-image corresponding to the second sample medical image into a deep learning model, and train the deep learning model.
In the segmentation apparatus provided in the embodiment of the present invention, before performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image, the processing module 62 is further configured to: convert the data format of the first medical image into the png data format.
Referring next to fig. 7, fig. 7 illustrates a physical structure diagram of an electronic device, which may include: a processor (processor) 710, a communication Interface (Communications Interface) 720, a memory (memory) 730, and a communication bus 740, wherein the processor 710, the communication Interface 720, and the memory 730 communicate with each other via the communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform the artificial intelligence based spine image segmentation methods provided by the methods described above.
In addition, the logic instructions in the memory 730 can be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
Yet another embodiment of the present invention provides a computer program product comprising a computer program which, when executed by a processor, implements an artificial intelligence based spine image segmentation method as described above.
Another embodiment of the present invention provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the artificial intelligence based spine image segmentation method as described above.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as a punch card or an in-groove protruding structure with instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be interpreted as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through an electrical wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that, unless expressly stated otherwise, all the features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic series of equivalent or similar features. Where the terms "further", "preferably", "still further" and "more preferably" are used, they briefly introduce the description of another embodiment based on the foregoing embodiment; the content following such a term, combined with the foregoing embodiment, constitutes the complete construction of that other embodiment. Several such further, preferred, still further or more preferred features of the same embodiment may be combined in any combination to form a further embodiment.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the embodiments, and any variations or modifications may be made to the embodiments of the present invention without departing from the principles described.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.
Claims (10)
1. A spine image segmentation method based on artificial intelligence is characterized by comprising the following steps:
acquiring a spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image;
performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image;
and inputting the second medical image into a trained deep learning model to obtain a mask image of the feature sub-image.
2. The method of claim 1, wherein after obtaining the mask image of the feature sub-image, the method further comprises:
and performing three-dimensional reconstruction on the plurality of mask images to obtain a three-dimensional image of the spine.
3. The method of claim 1, wherein the deep learning model is trained based on an FC-DenseNet network structure;
and the dense connection blocks (Dense blocks) of the FC-DenseNet structure adopt a multi-scale residual structure.
4. The method of claim 3, wherein the multi-scale residual structure is a 4-channel multi-scale residual structure.
5. The method of claim 1, wherein training the deep learning model comprises:
acquiring a spine image training set, wherein the spine image training set comprises a plurality of first sample medical images, and each first sample medical image comprises a sample feature sub-image and a sample non-feature sub-image;
performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first sample medical image to obtain a second sample medical image;
and inputting the second sample medical image and the mask image of the sample feature sub-image corresponding to the second sample medical image into a deep learning model, and training the deep learning model.
6. The method of claim 1, wherein prior to performing the non-local means denoising, the bottom-hat transformation, and the contrast-limited adaptive histogram equalization on the first medical image, the method further comprises:
converting the data format of the first medical image into the png data format.
7. An artificial intelligence based spine image segmentation device, comprising:
an acquisition module for acquiring a spine image dataset comprising a plurality of first medical images, each first medical image comprising a feature sub-image and a non-feature sub-image;
a processing module for performing non-local means denoising, bottom-hat transformation and contrast-limited adaptive histogram equalization on the first medical image to obtain a second medical image;
the processing module is further configured to input the second medical image into a trained deep learning model to obtain a mask image of the feature sub-image.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the artificial intelligence based spine image segmentation method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the artificial intelligence based spine image segmentation method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements an artificial intelligence based spine image segmentation method as claimed in any one of claims 1 to 6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211513888.XA | 2022-11-29 | 2022-11-29 | Medical image segmentation method and device based on artificial intelligence |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211513888.XA | 2022-11-29 | 2022-11-29 | Medical image segmentation method and device based on artificial intelligence |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115731214A | 2023-03-03 |
Family
ID=85299158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211513888.XA Pending CN115731214A (en) | 2022-11-29 | 2022-11-29 | Medical image segmentation method and device based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115731214A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563218A (en) * | 2023-03-31 | 2023-08-08 | 北京长木谷医疗科技股份有限公司 | Spine image segmentation method and device based on deep learning and electronic equipment |
CN117036376A (en) * | 2023-10-10 | 2023-11-10 | 四川大学 | Lesion image segmentation method and device based on artificial intelligence and storage medium |
CN117036376B (en) * | 2023-10-10 | 2024-01-30 | 四川大学 | Lesion image segmentation method and device based on artificial intelligence and storage medium |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information |

CB02: Address after: 100176 2201, 22/F, Building 1, Yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing; Applicant after: Beijing Changmugu Medical Technology Co.,Ltd.; Zhang Yiling. Address before: 100176 2201, 22/F, Building 1, Yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing; Applicant before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.; Zhang Yiling.