CN112435237A - Skin lesion segmentation method based on data enhancement and depth network - Google Patents
Skin lesion segmentation method based on data enhancement and depth network
- Publication number
- CN112435237A (application number CN202011329333.0A)
- Authority
- CN
- China
- Prior art keywords
- model
- data
- training
- color
- skin lesion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications

- G06T7/0012—Biomedical image inspection
- G06T7/11—Region-based segmentation
- G06T7/90—Determination of colour characteristics
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30088—Skin; Dermal
- Y02T10/40—Engine management systems
Abstract
The invention belongs to the technical field of image recognition, and particularly relates to a skin lesion segmentation method based on data enhancement and a depth network, comprising the following steps: data set division, in which a data set is divided into a training set, a verification set and a test set; data color enhancement; data form enhancement, in which the data set is flipped and translated to reduce the network's learning of position features; model construction; model training; and model evaluation. By adjusting the color coefficients of the channels of the RGB data set, the method simulates data acquired under different natural conditions; changing the data form effectively expands the data set, which greatly improves the generalization ability of the model and reduces overfitting; and transfer learning is adopted for model training, which greatly shortens the training time. The method is used for segmenting skin lesion images.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a skin lesion segmentation method based on data enhancement and a depth network.
Background
Because the conditions under which lesion images are acquired differ, the acquired images generally show large differences and carry different characteristic information. With insufficient data samples, existing approaches have poor network generalization performance and cannot effectively classify lesion images acquired under different conditions.
Problems of the prior art: existing skin lesion region segmentation methods have poor robustness and poor classification performance.
Disclosure of Invention
Aiming at the technical problems of poor robustness and poor classification performance in existing skin lesion region segmentation methods, the invention provides a skin lesion segmentation method based on data enhancement and a depth network, which offers good classification performance, strong recognition ability and short training time.
In order to solve the technical problems, the invention adopts the technical scheme that:
a skin lesion segmentation method based on data enhancement and depth network comprises the following steps:
s1, data set segmentation: dividing a data set into a training set, a verification set and a test set;
s2, data color enhancement: adjusting the color value of each channel of the three-channel RGB image data according to a threshold value based on the data set so as to simulate data under different ambient light conditions, and setting different threshold values for the color modification of different numbers of channels;
s3, enhancing data form: the data set is turned and translated, so that the learning of the network on the position characteristics is reduced;
s4, model construction: the model adopts EfficientNet-B7, and the initialization parameters adopt ImageNet parameters, so that the network training time is reduced;
s5, model training: inputting training data into a network, performing iterative training, stopping training when the loss value of the model does not decrease, and verifying by using a verification set to ensure that the model achieves the optimal recognition effect;
s6, model evaluation: and carrying out classification prediction on the test set by using the model, and then evaluating the model according to a classification result.
In S1, the data set is divided into a training set, a verification set and a test set in a 7:1:2 ratio; the training set is used to train the model, the verification set to verify that the model parameters have reached the optimum, and the test set to test the model's performance.
The method for enhancing the data color in S2 includes: the data set is RGB three-channel image data, and each piece of data is denoted D_i = {β_R, β_G, β_B}, where D_i is the i-th piece of data and β_R, β_G, β_B are the color matrices of the R, G and B channels. The color coefficients of the three channels are adjusted to simulate data under different illumination environments. During data color enhancement, when a single channel is adjusted, the adjustment amplitude is not more than 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude is not more than 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when all three channels are adjusted simultaneously, the amplitude is not more than 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%. Here β is the original color matrix, β′ the adjusted matrix, and θ, δ and φ are the adjustment parameters.
The method for flipping and translating the data set in S3 includes: flipping comprises up-down flipping and left-right flipping, and translation is a 10% shift along the horizontal and vertical axes.
The method for training the model in S5 comprises the following steps: model verification uses the verification set data to test the trained model again; if the model loss no longer decreases, the model is saved; if the loss still decreases, the model parameters are adjusted and training continues with the training set.
The method for evaluating the model in S6 comprises the following steps: model evaluation uses TOP-1 accuracy and TOP-3 accuracy; after model training is finished, the model classifies and identifies the test set data, and model evaluation is performed according to the identification results.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the data under different natural conditions are simulated by adjusting the color coefficients of the channels of the RGB data set, the data form is changed, the data set is effectively expanded, the generalization capability of the model is greatly improved, overfitting of the model is reduced, a transfer learning method is adopted for model training, and the model training time is greatly shortened.
Drawings
FIG. 1 is a schematic diagram of the main steps of the present invention;
FIG. 2 is a logic diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A skin lesion segmentation method based on data enhancement and depth network comprises the following steps:
step 1, data set segmentation: dividing a data set into a training set, a verification set and a test set; the model was trained using HAM10000 datasets, HAM10000 contains 7 types of skin lesions (melanoma, basal cell carcinoma, melanocytic nevi, actinic keratosis, benign keratosis, dermatofibroma, vascular lesions) for a total of 1015 images.
Step 2, enhancing data color: based on original data, adjusting the color value of each channel of the three-channel RGB image data according to a threshold value to simulate data under different ambient light conditions, and setting different threshold values for the color modification of different numbers of channels;
step 3, enhancing data form: turning and translating the original data based on the original data to reduce the learning of the network on the position characteristics;
step 4, model construction: the skin lesion classification network is constructed on the basis of EfficientNet-B7, and the training parameters of the skin lesion classification network on the ImageNet training set are migrated by the initial parameters, so that the training speed is increased, and the recognition performance is ensured.
Step 5, model training: inputting training data into a network, performing iterative training, stopping training when the loss value of the model does not decrease, and verifying by using a verification set to ensure that the model achieves the optimal recognition effect;
step 6, model evaluation: classifying and predicting the test set by using the model, and then evaluating the model according to a classification result;
further, in step 1, the data set is updated by a method of 7: 1: 2, dividing the ratio into a training set, a verification set and a test set; the training set is used for training the model, the verification set is used for verifying that the model parameters reach the optimal state, and the test set is used for testing the model effect.
Further, in step 2, the data color is enhanced. The data set is RGB three-channel image data, and each piece of data is denoted D_i = {β_R, β_G, β_B}, where D_i is the i-th piece of data and β_R, β_G, β_B are the color matrices of the R, G and B channels. The color coefficients of the three channels are adjusted to simulate data under different illumination environments. To keep the data reasonable, when a single channel is adjusted, the adjustment amplitude is not more than 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude is not more than 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when all three channels are adjusted simultaneously, the amplitude is not more than 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%. After the original data set undergoes at least one single-channel, one two-channel and one three-channel adjustment, it is expanded to at least 4 times the original data.
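The adjustment rule can be sketched as follows, assuming NumPy uint8 images. Drawing the parameters uniformly and clipping the result to [0, 255] are assumptions; the patent only bounds the amplitudes:

```python
import numpy as np

# Amplitude caps from the description: 40% for one channel,
# 30% for two channels, 20% for all three.
MAX_SHIFT = {1: 0.40, 2: 0.30, 3: 0.20}

def color_enhance(image, rng=None):
    """Scale k randomly chosen RGB channels by (1 +/- theta) to simulate
    different illumination environments; `image` is an HxWx3 uint8 array."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(1, 4))                 # adjust 1, 2 or 3 channels
    channels = rng.choice(3, size=k, replace=False)
    out = image.astype(np.float32)
    for c in channels:
        theta = rng.uniform(0.0, MAX_SHIFT[k])  # amplitude within the cap
        sign = rng.choice([-1.0, 1.0])
        out[..., c] *= 1.0 + sign * theta       # beta' = beta * (1 +/- theta)
    return np.clip(out, 0, 255).astype(np.uint8)
```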
Further, in step 3, the form is converted: the data of the data set are flipped and translated; flipping comprises up-down flipping and left-right flipping, and translation is a 10% shift along the horizontal and vertical axes. After form conversion, the data set is expanded to 64 times the original data set.
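A sketch of the form conversion under the same NumPy assumption. Note that np.roll wraps pixels around the border; the patent does not say how the vacated strip is filled, so the wrap-around is an assumption:

```python
import numpy as np

def morph_enhance(image):
    """Produce the flipped and translated variants described in step 3."""
    h, w = image.shape[:2]
    dy, dx = int(0.10 * h), int(0.10 * w)       # 10% of each axis
    return [
        np.flipud(image),                       # up-down flip
        np.fliplr(image),                       # left-right flip
        np.roll(image, dy, axis=0),             # translation along the vertical axis
        np.roll(image, dx, axis=1),             # translation along the horizontal axis
    ]
```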
Further, the method for model training in step 5 comprises: model verification uses the verification set data to test the trained model again; if the model loss no longer decreases, the model is saved; if the loss still decreases, the model parameters are adjusted and training continues with the training set.
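A hedged sketch of this training-and-verification loop. The optimizer, learning rate and stopping patience are assumptions; the patent states only that training stops when the loss no longer decreases:

```python
import torch

def train(model, train_loader, val_loader, epochs=100, lr=1e-4, patience=3):
    """Step 5 sketch: iterate on the training set, verify on the verification
    set, and stop once the verification loss stops decreasing."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    best_loss, stale = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimiser.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimiser.step()
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                val_loss += criterion(model(x), y).item()
        if val_loss < best_loss:                 # loss still decreasing:
            best_loss, stale = val_loss, 0       # keep training, save checkpoint
            torch.save(model.state_dict(), "best_model.pt")
        else:
            stale += 1
            if stale >= patience:                # loss no longer decreasing
                break
    return model
```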
Further, the method for evaluating the model in step 6 is as follows: model evaluation uses TOP-1 accuracy and TOP-3 accuracy; after model training is finished, the model classifies and identifies the test set data, and model evaluation is performed according to the identification results.
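TOP-1 and TOP-3 accuracy can be computed as below, continuing the same PyTorch assumption; `test_loader` is assumed to yield (image, label) batches:

```python
import torch

@torch.no_grad()
def topk_accuracy(model, test_loader, ks=(1, 3), device="cpu"):
    """Step 6 sketch: fraction of test samples whose true class is among
    the model's top-k predictions, for k = 1 and k = 3."""
    model.eval().to(device)
    correct = {k: 0 for k in ks}
    total = 0
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)
        top = model(x).topk(max(ks), dim=1).indices     # top-3 class indices
        for k in ks:
            correct[k] += (top[:, :k] == y.unsqueeze(1)).any(dim=1).sum().item()
        total += y.size(0)
    return {f"TOP-{k}": correct[k] / total for k in ks}
```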
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.
Claims (6)
1. A skin lesion segmentation method based on data enhancement and a depth network, characterized by comprising the following steps:
s1, data set segmentation: dividing a data set into a training set, a verification set and a test set;
s2, data color enhancement: adjusting the color value of each channel of the three-channel RGB image data according to a threshold value based on the data set so as to simulate data under different ambient light conditions, and setting different threshold values for the color modification of different numbers of channels;
s3, enhancing data form: the data set is turned and translated, so that the learning of the network on the position characteristics is reduced;
s4, model construction: the model adopts EfficientNet-B7, and the initialization parameters adopt ImageNet parameters, so that the network training time is reduced;
s5, model training: inputting training data into a network, performing iterative training, stopping training when the loss value of the model does not decrease, and verifying by using a verification set to ensure that the model achieves the optimal recognition effect;
s6, model evaluation: and carrying out classification prediction on the test set by using the model, and then evaluating the model according to a classification result.
2. The method for skin lesion segmentation based on data enhancement and depth network as claimed in claim 1, wherein: in S1, the data set is divided into a training set, a verification set and a test set in a 7:1:2 ratio; the training set is used to train the model, the verification set to verify that the model parameters have reached the optimum, and the test set to test the model's performance.
3. The method for skin lesion segmentation based on data enhancement and depth network as claimed in claim 1, wherein: the method for enhancing the data color in S2 includes: the data set is RGB three-channel image data, and each piece of data is denoted D_i = {β_R, β_G, β_B}, where D_i is the i-th piece of data and β_R, β_G, β_B are the color matrices of the R, G and B channels; the color coefficients of the three channels are adjusted to simulate data under different illumination environments; during data color enhancement, when a single channel is adjusted, the adjustment amplitude is not more than 40%: β′ = β(1 ± θ), 0 < θ ≤ 40%; when two channels are adjusted, the amplitude is not more than 30%: β′ = β(1 ± δ), 0 < δ ≤ 30%; when three channels are adjusted simultaneously, the amplitude is not more than 20%: β′ = β(1 ± φ), 0 < φ ≤ 20%; here β is the original color matrix, β′ the adjusted matrix, and θ, δ and φ are the adjustment parameters.
4. The method for skin lesion segmentation based on data enhancement and depth network as claimed in claim 1, wherein: the method for flipping and translating the data set in S3 includes: flipping comprises up-down flipping and left-right flipping, and translation is a 10% shift along the horizontal and vertical axes.
5. The method for skin lesion segmentation based on data enhancement and depth network as claimed in claim 1, wherein: the method for training the model in S5 comprises: model verification uses the verification set data to test the trained model again; if the model loss no longer decreases, the model is saved; if the loss still decreases, the model parameters are adjusted and training continues with the training set.
6. The method for skin lesion segmentation based on data enhancement and depth network as claimed in claim 1, wherein: the method for evaluating the model in S6 comprises: model evaluation uses TOP-1 accuracy and TOP-3 accuracy; after model training is finished, the model classifies and identifies the test set data, and model evaluation is performed according to the identification results.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011329333.0A | 2020-11-24 | 2020-11-24 | Skin lesion segmentation method based on data enhancement and depth network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011329333.0A | 2020-11-24 | 2020-11-24 | Skin lesion segmentation method based on data enhancement and depth network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112435237A (en) | 2021-03-02 |
CN112435237B CN112435237B (en) | 2024-06-21 |
Family ID: 74693959
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011329333.0A (granted as CN112435237B, active) | Skin lesion segmentation method based on data enhancement and depth network | 2020-11-24 | 2020-11-24 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112435237B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108256527A (en) * | 2018-01-23 | 2018-07-06 | 深圳市唯特视科技有限公司 | A kind of cutaneous lesions multiclass semantic segmentation method based on end-to-end full convolutional network |
CN108765408A (en) * | 2018-05-31 | 2018-11-06 | 杭州同绘科技有限公司 | Build the method in cancer pathology image virtual case library and the multiple dimensioned cancer detection system based on convolutional neural networks |
CN109508650A (en) * | 2018-10-23 | 2019-03-22 | 浙江农林大学 | A kind of wood recognition method based on transfer learning |
CN110543906A (en) * | 2019-08-29 | 2019-12-06 | 彭礼烨 | Skin type automatic identification method based on data enhancement and Mask R-CNN model |
CN111652213A (en) * | 2020-05-24 | 2020-09-11 | 浙江理工大学 | Ship water gauge reading identification method based on deep learning |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116129199A (en) * | 2023-04-13 | 2023-05-16 | 西南石油大学 | Method, device, medium and equipment for classifying skin cancer with interpretability |
CN116757971A (en) * | 2023-08-21 | 2023-09-15 | 深圳高迪数码有限公司 | Image automatic adjustment method based on ambient light |
CN116757971B (en) * | 2023-08-21 | 2024-05-14 | 深圳高迪数码有限公司 | Image automatic adjustment method based on ambient light |
Also Published As
Publication number | Publication date |
---|---|
CN112435237B (en) | 2024-06-21 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |