CN112435237B - Skin lesion segmentation method based on data enhancement and deep network - Google Patents
Skin lesion segmentation method based on data enhancement and deep network
- Publication number: CN112435237B
- Application number: CN202011329333.0A
- Authority: CN (China)
- Prior art keywords: model, data, training, beta, enhancement
- Prior art date: 2020-11-24
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T7/90 — Image analysis; determination of colour characteristics
- G06T2207/20081 — Special algorithmic details; training, learning
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30088 — Subject of image; biomedical image processing; skin, dermal
- Y02T10/40 — Climate change mitigation technologies related to transportation; engine management systems
Abstract
The invention belongs to the technical field of image recognition and specifically relates to a skin lesion segmentation method based on data enhancement and a deep network, comprising the following steps: dataset partitioning, in which the dataset is divided into a training set, a validation set and a test set; data color enhancement; data morphology enhancement, in which the dataset is flipped and translated to reduce the network's learning of position features; model construction; model training; and model evaluation. By adjusting the color coefficients of each channel of the RGB dataset to simulate data captured under different natural conditions and by transforming the data morphology, the invention effectively expands the dataset, greatly improves the generalization ability of the model and reduces overfitting; model training adopts transfer learning, which greatly shortens training time. The invention is used for segmenting skin lesion images.
Description
Technical Field
The invention belongs to the technical field of image recognition, and specifically relates to a skin lesion segmentation method based on data enhancement and a deep network.
Background
Because lesion images are acquired under different conditions, the resulting images generally differ and carry different feature information. Existing approaches, however, suffer from insufficient data samples, which gives the network poor generalization, so lesion images acquired under different conditions cannot be classified effectively.
Problems or drawbacks of the prior art: existing skin lesion region segmentation methods have poor robustness and poor classification performance.
Disclosure of Invention
Aiming at the technical problems of poor robustness and poor classification performance in existing skin lesion region segmentation methods, the invention provides a skin lesion segmentation method based on data enhancement and a deep network that offers good classification performance, strong recognition and short training time.
In order to solve the technical problems, the invention adopts the following technical scheme:
a skin lesion segmentation method based on data enhancement and depth network comprises the following steps:
s1, data set segmentation: dividing the data set into a training set, a verification set and a test set;
S2, data color enhancement: the color value of each channel of the three-channel RGB image data is adjusted according to the threshold value based on the data set so as to simulate the data under different ambient light conditions, and the threshold values with different sizes are set for the modification of the colors of different channels;
S3, enhancing the data form: the data set is turned over and translated, so that the learning of the network on the position characteristics is reduced;
S4, constructing a model: the model adopts EFFICIENTNET-B7, and the initialization parameters adopt ImageNet parameters, so that the training time of network training is shortened;
S5, model training: inputting training data into a network for iterative training, stopping training when the model loss value is not reduced, and verifying by using a verification set to ensure that the model achieves the optimal recognition effect;
S6, evaluating a model: and carrying out classification prediction on the test set by using the model, and then evaluating the model according to classification results.
In S1, the dataset is divided into a training set, a validation set and a test set in a 7:1:2 ratio; the training set is used to train the model, the validation set is used to verify that the model parameters have reached their optimum, and the test set is used to test model performance.
The data color enhancement method in S2 is as follows: the dataset consists of RGB three-channel image data, with the i-th datum represented as D_i = {β_R, β_G, β_B}, where β_R, β_G and β_B are the color matrices of the R, G and B channels respectively. The color coefficients of the three channels are adjusted to simulate data under different lighting environments. During color enhancement, when a single channel is adjusted, the adjustment amplitude does not exceed 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude does not exceed 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when three channels are adjusted simultaneously, the amplitude does not exceed 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%, where β′ is the adjusted color matrix and θ, δ and φ are the adjustment parameters.
The flipping and translation of the dataset in S3 are as follows: flipping includes up-down and left-right flips, and translation is a 10% shift along the horizontal and vertical axes.
The model training method in S5 is as follows: model verification runs the trained model on the validation-set data; if the validation loss no longer decreases, the model is saved; if the loss is still decreasing, the model parameters are adjusted and training continues on the training set.
The model evaluation method in S6 is as follows: the model is evaluated using TOP-1 and TOP-3 accuracy; after training is complete, the test-set data are classified and recognized with the trained model, and the model is evaluated according to the recognition results.
Compared with the prior art, the invention has the following beneficial effects:
By adjusting the color coefficients of each channel of the RGB dataset to simulate data captured under different natural conditions and by transforming the data morphology, the invention effectively expands the dataset, greatly improves the generalization ability of the model and reduces overfitting; model training adopts transfer learning, which greatly shortens training time.
Drawings
FIG. 1 is a schematic diagram of the main steps of the present invention;
Fig. 2 is a logic block diagram of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
A skin lesion segmentation method based on data enhancement and a deep network comprises the following steps:
Step 1, dataset partitioning: dividing the dataset into a training set, a validation set and a test set. The model was trained on the HAM10000 dataset, which contains 10015 images covering 7 types of skin lesions (melanoma, basal cell carcinoma, melanocytic nevi, actinic keratosis, benign keratosis, dermatofibroma and vascular lesions).
Step 2, data color enhancement: based on the original data, adjusting the color value of each channel of the three-channel RGB image data within a threshold so as to simulate data under different ambient light conditions, with different thresholds set for modifying the colors of different channels;
Step 3, data morphology enhancement: based on the original data, flipping and translation are applied to reduce the network's learning of position features;
Step 4, model construction: the skin lesion classification network is built on EfficientNet-B7, and its initial parameters are transferred from the network's pre-training on the ImageNet training set, to speed up training while ensuring recognition performance.
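The patent does not name an implementation framework; as one possible sketch, the network in step 4 can be assembled from torchvision's EfficientNet-B7 with its ImageNet weights. The `build_model` helper and the 7-class head below are illustrative assumptions, not part of the disclosure.

```python
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int = 7) -> nn.Module:
    """EfficientNet-B7 classifier initialized from ImageNet pre-training.

    The backbone weights are transferred from ImageNet (step 4); only the
    final linear layer is replaced to match the skin-lesion classes.
    """
    weights = models.EfficientNet_B7_Weights.IMAGENET1K_V1
    model = models.efficientnet_b7(weights=weights)
    in_features = model.classifier[-1].in_features  # 2560 for B7
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model
```

This transfer-learning sketch only swaps the classification head; whether any backbone layers are frozen during fine-tuning is not specified in the patent.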
Step 5, model training: inputting the training data into the network for iterative training, stopping when the model loss no longer decreases, and checking against the validation set to ensure the model achieves its best recognition performance;
Step 6, model evaluation: performing classification prediction on the test set with the model, then evaluating the model according to the classification results.
further, in step 1, the dataset is represented by 7:1: the proportion of 2 is divided into a training set, a verification set and a test set; the training set is used for training the model, the verification set is used for verifying that the model parameters reach the optimal state, and the test set is used for testing the model effect.
Further, in step 2, the dataset consists of RGB three-channel image data, with the i-th datum represented as D_i = {β_R, β_G, β_B}, where β_R, β_G and β_B are the color matrices of the R, G and B channels respectively. The color coefficients of the three channels are adjusted to simulate data under different lighting environments. To keep the data reasonable, during color enhancement the adjustment amplitude of a single channel does not exceed 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude does not exceed 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when three channels are adjusted simultaneously, the amplitude does not exceed 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%. After the single-channel, two-channel and three-channel adjustments are each applied at least once, the dataset is expanded to at least 4 times the original data.
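The adjustment β′ = β(1 ± θ) might be implemented as below. The random choice of sign, of which channels to adjust, and the uniform sampling of the adjustment parameter are assumptions; the patent fixes only the amplitude caps (40% / 30% / 20% for one / two / three channels).

```python
import numpy as np

# Amplitude caps from the method: 1 channel <= 40%, 2 channels <= 30%, 3 channels <= 20%.
MAX_AMPLITUDE = {1: 0.40, 2: 0.30, 3: 0.20}

def color_enhance(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply beta' = beta * (1 +/- theta) to a random subset of RGB channels.

    `image` is an HxWx3 array with values in [0, 255]; a new array is returned.
    """
    k = int(rng.integers(1, 4))                      # adjust 1, 2 or 3 channels
    channels = rng.choice(3, size=k, replace=False)  # which channels to touch
    out = image.astype(np.float64)
    for c in channels:
        theta = rng.uniform(0.0, MAX_AMPLITUDE[k])   # amplitude within the cap
        sign = rng.choice([-1.0, 1.0])               # brighten or darken the channel
        out[..., c] *= 1.0 + sign * theta            # beta' = beta * (1 +/- theta)
    return np.clip(out, 0.0, 255.0)
```

Applying one single-channel, one two-channel and one three-channel adjustment per image, alongside the original, yields the at-least-4× expansion described above.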
Further, in step 3, the morphology transformation is as follows: the data are flipped and translated; flipping includes up-down and left-right flips, and translation is a 10% shift along the horizontal and vertical axes. After the morphological transformations, the dataset is expanded to 64 times the original.
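A sketch of the morphological enhancement, again under assumptions: the patent does not state how the region vacated by a translation is filled, so zero-filling is used here, and only the positive 10% shift is shown.

```python
import numpy as np

def flip_up_down(image: np.ndarray) -> np.ndarray:
    """Vertical (up-down) flip."""
    return image[::-1].copy()

def flip_left_right(image: np.ndarray) -> np.ndarray:
    """Horizontal (left-right) flip."""
    return image[:, ::-1].copy()

def translate(image: np.ndarray, dy_frac: float = 0.10, dx_frac: float = 0.10) -> np.ndarray:
    """Shift the image by 10% along the vertical and horizontal axes.

    The vacated border is zero-filled (an assumption; the patent is silent on it).
    """
    h, w = image.shape[:2]
    dy, dx = int(round(h * dy_frac)), int(round(w * dx_frac))
    out = np.zeros_like(image)
    out[dy:, dx:] = image[:h - dy, :w - dx]
    return out
```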
Further, the model training method in step 5 is as follows: model verification runs the trained model on the validation-set data; if the validation loss no longer decreases, the model is saved; if the loss is still decreasing, the model parameters are adjusted and training continues on the training set.
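One reading of this training rule — keep training while the validation loss still falls, save and stop once it does not — is sketched below with PyTorch. The optimizer, learning rate, loss function and epoch-level granularity are all assumptions; the patent specifies none of them.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, max_epochs=100, device="cuda"):
    """Iteratively train; save and stop once the validation loss no longer decreases."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    best_val_loss = float("inf")
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        # Validate after each epoch.
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                val_loss += criterion(model(images), labels).item() * labels.size(0)
                n += labels.size(0)
        val_loss /= n
        if val_loss < best_val_loss:
            best_val_loss = val_loss        # loss still decreasing: keep training
        else:
            # Loss no longer decreasing: save the model and stop (the patent's rule).
            torch.save(model.state_dict(), "best_model.pt")
            break
    return model
```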
Further, the model evaluation method in step 6 is as follows: the model is evaluated using TOP-1 and TOP-3 accuracy; after training is complete, the test-set data are classified and recognized with the trained model, and the model is evaluated according to the recognition results.
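TOP-1 and TOP-3 accuracy are the standard top-k metrics; a small sketch follows (the batch tensor layout is an assumption).

```python
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, ks=(1, 3)) -> dict:
    """TOP-k accuracies for logits of shape (N, num_classes) and labels of shape (N,)."""
    max_k = max(ks)
    _, pred = logits.topk(max_k, dim=1)    # (N, max_k) class indices, best first
    correct = pred.eq(labels.view(-1, 1))  # (N, max_k) hit matrix
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}
```

Calling `topk_accuracy(model(images), labels)` on the test set returns, for each k, the fraction of samples whose true class appears among the model's top k predictions.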
The preferred embodiments of the present invention have been described in detail, but the invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention, and all such changes fall within the scope of the invention.
Claims (4)
1. A skin lesion segmentation method based on data enhancement and a deep network, characterized by comprising the following steps:
S1, dataset partitioning: dividing the dataset into a training set, a validation set and a test set in a 7:1:2 ratio; the training set is used to train the model, the validation set is used to verify that the model parameters have reached their optimum, and the test set is used to test model performance;
S2, data color enhancement: adjusting the color value of each channel of the three-channel RGB image data within a threshold, based on the dataset, so as to simulate data under different ambient light conditions, with different thresholds set for modifying the colors of different channels; the data color enhancement method in S2 is as follows: the dataset consists of RGB three-channel image data, with the i-th datum represented as D_i = {β_R, β_G, β_B}, where β_R, β_G and β_B are the color matrices of the R, G and B channels respectively; the color coefficients of the three channels are adjusted to simulate data under different lighting environments; during color enhancement, when a single channel is adjusted, the adjustment amplitude does not exceed 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude does not exceed 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when three channels are adjusted simultaneously, the amplitude does not exceed 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%, where β′ is the adjusted color matrix and θ, δ and φ are the adjustment parameters;
S3, data morphology enhancement: flipping and translating the dataset to reduce the network's learning of position features;
S4, model construction: the model adopts EfficientNet-B7, with initialization parameters transferred from ImageNet pre-training, which shortens network training time;
S5, model training: inputting the training data into the network for iterative training, stopping when the model loss no longer decreases, and checking against the validation set to ensure the model achieves its best recognition performance;
S6, model evaluation: performing classification prediction on the test set with the model, then evaluating the model according to the classification results.
2. The skin lesion segmentation method based on data enhancement and a deep network according to claim 1, characterized in that the flipping and translation of the dataset in S3 are as follows: flipping includes up-down and left-right flips, and translation is a 10% shift along the horizontal and vertical axes.
3. The skin lesion segmentation method based on data enhancement and a deep network according to claim 1, characterized in that the model training method in S5 is as follows: model verification runs the trained model on the validation-set data; if the validation loss no longer decreases, the model is saved; if the loss is still decreasing, the model parameters are adjusted and training continues on the training set.
4. The skin lesion segmentation method based on data enhancement and a deep network according to claim 1, characterized in that the model evaluation method in S6 is as follows: the model is evaluated using TOP-1 and TOP-3 accuracy; after training is complete, the test-set data are classified and recognized with the trained model, and the model is evaluated according to the recognition results.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011329333.0A (CN112435237B) | 2020-11-24 | 2020-11-24 | Skin lesion segmentation method based on data enhancement and deep network |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112435237A | 2021-03-02 |
| CN112435237B | 2024-06-21 |
Family ID: 74693959

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011329333.0A | Skin lesion segmentation method based on data enhancement and deep network | 2020-11-24 | 2020-11-24 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN112435237B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116129199A * | 2023-04-13 | 2023-05-16 | 西南石油大学 | Method, device, medium and equipment for classifying skin cancer with interpretability |
| CN116757971B * | 2023-08-21 | 2024-05-14 | 深圳高迪数码有限公司 | Image automatic adjustment method based on ambient light |
Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110543906A * | 2019-08-29 | 2019-12-06 | 彭礼烨 | Skin type automatic identification method based on data enhancement and Mask R-CNN model |
Family Cites Families (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108256527A * | 2018-01-23 | 2018-07-06 | 深圳市唯特视科技有限公司 | A kind of cutaneous lesions multiclass semantic segmentation method based on end-to-end full convolutional network |
| CN108765408B * | 2018-05-31 | 2021-09-10 | 杭州同绘科技有限公司 | Method for constructing cancer pathological image virtual disease case library and multi-scale cancer detection system based on convolutional neural network |
| CN109508650A * | 2018-10-23 | 2019-03-22 | 浙江农林大学 | A kind of wood recognition method based on transfer learning |
| CN111652213A * | 2020-05-24 | 2020-09-11 | 浙江理工大学 | Ship water gauge reading identification method based on deep learning |
- 2020-11-24: CN application CN202011329333.0A filed; granted as CN112435237B (legal status: Active)
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN112435237A (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921092B (en) | Melanoma classification method based on convolution neural network model secondary integration | |
CN111753828B (en) | Natural scene horizontal character detection method based on deep convolutional neural network | |
CN108629338A (en) | A kind of face beauty prediction technique based on LBP and convolutional neural networks | |
CN108681692A (en) | Increase Building recognition method in a kind of remote sensing images based on deep learning newly | |
CN112435237B (en) | Skin lesion segmentation method based on data enhancement and deep network | |
CN106296695A (en) | Adaptive threshold natural target image based on significance segmentation extraction algorithm | |
CN109508634A (en) | Ship Types recognition methods and system based on transfer learning | |
CN112991493B (en) | Gray image coloring method based on VAE-GAN and mixed density network | |
CN106373096B (en) | A kind of shadow removing method of multiple features Weight number adaptively | |
CN116485785B (en) | Surface defect detection method for solar cell | |
CN109829507B (en) | Aerial high-voltage transmission line environment detection method | |
CN112906550B (en) | Static gesture recognition method based on watershed transformation | |
CN112037225B (en) | Marine ship image segmentation method based on convolution nerve | |
CN109886146B (en) | Flood information remote sensing intelligent acquisition method and device based on machine vision detection | |
CN111260645A (en) | Method and system for detecting tampered image based on block classification deep learning | |
CN108629762A (en) | A kind of stone age evaluation and test model reduces the image pre-processing method and system of interference characteristic | |
CN113554568B (en) | Unsupervised cyclic rain-removing network method based on self-supervision constraint and unpaired data | |
CN112541966B (en) | Face replacement method based on reconstruction and generation network | |
CN109145749B (en) | Cross-data-set facial expression recognition model construction and recognition method | |
CN112784880B (en) | Method for marking visibility grade of expressway in foggy days based on natural feature statistical method | |
CN112381176B (en) | Image classification method based on binocular feature fusion network | |
CN115861276A (en) | Method and device for detecting scratches on surface of graphite membrane | |
CN106570508B (en) | Music score spectral line detection and deletion method based on local binary mode | |
CN113034454A (en) | Underwater image quality evaluation method based on human visual sense | |
CN112270220A (en) | Sewing gesture recognition method based on deep learning |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |