
CN108305253B - Pathological image classification method based on multi-magnification deep learning - Google Patents

Pathological image classification method based on multi-magnification deep learning

Info

Publication number
CN108305253B
CN108305253B (application CN201810189171.1A)
Authority
CN
China
Prior art keywords
pathological
image
images
cnn
training
Prior art date
Legal status
Active
Application number
CN201810189171.1A
Other languages
Chinese (zh)
Other versions
CN108305253A (en)
Inventor
姜志国 (Jiang Zhiguo)
郑钰山 (Zheng Yushan)
Current Assignee
Motic Xiamen Medical Diagnostic Systems Co ltd
Original Assignee
Motic Xiamen Medical Diagnostic Systems Co ltd
Priority date
Filing date
Publication date
Application filed by Motic Xiamen Medical Diagnostic Systems Co ltd filed Critical Motic Xiamen Medical Diagnostic Systems Co ltd
Priority to CN201810189171.1A
Publication of CN108305253A
Application granted
Publication of CN108305253B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

A pathological image classification method based on multi-magnification deep learning: pathology slides are scanned into a computer and the pathology images are expressed in RGB color space to obtain RGB pathology images; the pathology images are annotated to build a multi-magnification pathology image training data set; a multi-magnification CNN pathological image classification model is established and trained by deep learning on the RGB pathology images; the trained CNN classification model is then used to detect cancerous regions in whole pathology slides. The method reduces the computational cost of existing multi-magnification cancerous-region detection algorithms while preserving the accuracy of the aided diagnosis.

Description

Pathological image classification method based on multi-magnification deep learning
Technical Field
The invention relates to the field of image processing and deep learning, and in particular to a pathological image classification method based on multi-magnification deep learning.
Background
A digital pathology whole-slide image is a large, high-magnification digital image produced by converting a pathology slide with a dedicated scanning imaging system; it can be displayed, transmitted, and processed by computer. As digital pathology imaging technology has matured, computer-aided diagnosis methods based on digital pathology images have developed rapidly. In recent years, deep learning methods, represented by the convolutional neural network (CNN), have become the mainstream approach to pathological image classification.
Existing research shows that CNN models built from high-magnification digital pathology images (e.g., 20×) and CNN models built from low-magnification images (e.g., 2×) are complementary for pathological image classification. To obtain higher classification accuracy, researchers have proposed building CNN models that detect cancerous regions at several magnifications separately and then fusing the per-magnification classification results into a final result. However, repeating the detection at several magnifications requires more computational resources; in particular, evaluating every region of a whole pathology slide at high magnification is extremely expensive.
Disclosure of Invention
The invention aims to provide a pathological image classification method based on multi-magnification deep learning that exploits the complementary strengths of CNN cancerous-region detection models built at different magnifications, reduces the computational cost of existing multi-magnification detection algorithms, and meets the requirements on both pathological image classification accuracy and speed.
To solve the above technical problems, the technical scheme of the invention is as follows: a pathological image classification method based on multi-magnification deep learning, comprising the following steps:
Step 1: scan the pathology slide into a computer and express the pathology image in RGB color space to obtain an RGB pathology image;
Step 2: annotate the pathology images and build a multi-magnification pathology image training data set, denoted by the set X_{20×};
Step 3: establish a multi-magnification CNN pathological image classification model and train it by deep learning on the RGB pathology images obtained in step 1;
Step 4: classify pathology images with the trained CNN pathological image classification model.
Further, step 3 comprises the following steps:
Step 3.1: establish a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_{2×}, CNN_{5×} and CNN_{20×}, and train them in order of magnification from low to high;
Step 3.2: downsample each pathology image in the multi-magnification training data set X_{20×} by a factor of 10 to generate a training data set of 2× pathology images, denoted X_{2×}, whose i-th sample is written (I_{i,2×}, y_i); use X_{2×} to train the classification network CNN_{2×};
Step 3.3: use CNN_{2×} to predict all the pathology images in X_{20×}; the prediction for the i-th pathology image is denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the i-th pathology image contains a cancerous region; store the predictions p_{i,2×};
Step 3.4: define a threshold T_{2×} ∈ (0,1) for screening the difficult samples in X_{2×}, and form the difficult samples into the 2× difficult-sample data set X̂_{2×}, formulated as:
X̂_{2×} = { (I_{i,2×}, y_i) | 1 − T_{2×} < p_{i,2×} < T_{2×} };
Step 3.5: from the 20× data set X_{20×}, take the 20× pathology images corresponding to the images in X̂_{2×} and downsample them by a factor of 4 to form the 5× training data set X_{5×}, whose i-th sample is written (I_{i,5×}, y_i);
Step 3.6: use X_{5×} to train the classification network CNN_{5×}; after training, use CNN_{5×} to predict all pathology images in X_{5×}; the prediction for the i-th pathology image is denoted p_{i,5×}, where p_{i,5×} ∈ [0,1]; store the predictions p_{i,5×};
Step 3.7: define a threshold T_{5×} ∈ (0,1) for screening the difficult samples in X_{5×}, and form the difficult samples into the 5× difficult-sample data set X̂_{5×}, formulated as:
X̂_{5×} = { (I_{i,5×}, y_i) | 1 − T_{5×} < p_{i,5×} < T_{5×} };
Step 3.8: from the 20× data set X_{20×}, take the 20× pathology images corresponding to the images in X̂_{5×} to form the 20× training data set X̂_{20×};
Step 3.9: use X̂_{20×} to train the classification network CNN_{20×}; when this training finishes, the deep learning training is complete.
Further, step 4 comprises the following steps:
Step 4.1: at 2× magnification, cut the whole pathology slide to be diagnosed, by a sliding-window method, into sub-images of the same size as those in the training set X_{2×}; these sub-images form the 2× pathology image test set, denoted Y_{2×} = { I_{k,2×} | k = 1, 2, ..., N_{2×} }, where N_{2×} is the number of sub-images in Y_{2×};
Step 4.2: use the trained CNN_{2×} model to predict the images in Y_{2×}, obtaining the prediction result set P_{2×} = { p_{k,2×} | k = 1, 2, ..., N_{2×} };
Step 4.3: use the threshold T_{2×} and the set P_{2×} to identify the difficult samples in Y_{2×}, and collect the 5× pathology images at the corresponding positions in the whole slide to form the 5× pathology image test set, formulated as:
Y_{5×} = { I_{k,5×} | 1 − T_{2×} < p_{k,2×} < T_{2×} };
use the trained CNN_{5×} model to predict the images in Y_{5×}, obtaining the prediction result set P_{5×} = { p_{k,5×} | k = 1, 2, ..., N_{5×} }, where N_{5×} is the number of pathology images in Y_{5×};
Step 4.4: use the threshold T_{5×} and the set P_{5×} to identify the difficult samples in Y_{5×}, and collect the 20× pathology images at the corresponding positions in the whole slide to form the 20× pathology image test set, formulated as:
Y_{20×} = { I_{k,20×} | 1 − T_{5×} < p_{k,5×} < T_{5×} };
use the trained CNN_{20×} model to predict the images in Y_{20×}, obtaining the prediction result set P_{20×} = { p_{k,20×} | k = 1, 2, ..., N_{20×} }, where N_{20×} is the number of pathology images in Y_{20×};
Step 4.5: set a threshold T_{20×} and judge whether I_{k,20×} contains a cancerous region by the rule: I_{k,20×} contains a cancerous region if p_{k,20×} ≥ T_{20×}, and does not otherwise;
Step 4.6: with the whole-slide cancerous-region detection finished, use the stored predictions to judge whether the sub-region at the position of the k-th sub-image contains a cancerous region, achieving pathological image classification; the decision rule is: the k-th sub-region is cancerous if p_{k,2×} ≥ T_{2×}, non-cancerous if p_{k,2×} ≤ 1 − T_{2×}; otherwise it is cancerous if p_{k,5×} ≥ T_{5×}, non-cancerous if p_{k,5×} ≤ 1 − T_{5×}; otherwise it is cancerous if p_{k,20×} ≥ T_{20×} and non-cancerous otherwise.
Further, step 1 specifically comprises: the pathology slide is scanned and stored in a computer by a slide scanner, and channel decomposition yields the pixel value of every pixel of the image in each of the R, G and B channels, giving the data information of the image, formulated as:
I(x, y) = [I_r(x, y), I_g(x, y), I_b(x, y)],
where I_r(x, y), I_g(x, y) and I_b(x, y) represent the red, green and blue color channels of I(x, y), respectively.
Further, for the multi-magnification pathology image training data set X_{20×} in step 2, a pathology expert screens cases suitable for developing the aided diagnosis method and annotates the lesion areas in the pathology slides contained in those cases; sub-region pathology images are then cut from the 20× pathology slides according to the expert annotation to form the data set.
With this method, cancerous regions are detected in order of whole-slide magnification from low to high: regions that receive a confident decision in the 2× whole-slide image take no further part in the computation, and regions that receive a confident decision in the 5× whole-slide image take no further part in the computation, so only a small fraction of the slide has to be judged from the 20× image. This greatly reduces the amount of computation and shortens the image classification time.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the training phase of the method of the present invention;
fig. 3 is a schematic diagram of the image classification phase of the method of the present invention.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
Fig. 1 shows a pathological image classification method based on multi-magnification deep learning, which specifically comprises the following steps:
Step 1: scan the pathology slide into a computer and express the pathology image in RGB color space to obtain an RGB pathology image;
Step 2: annotate the pathology images and build a multi-magnification pathology image training data set, denoted by the set X_{20×};
Step 3: establish a multi-magnification CNN pathological image classification model and train it by deep learning on the RGB pathology images obtained in step 1;
Step 4: classify pathology images with the trained CNN pathological image classification model.
Specifically, in step 1 the pathology slide is scanned and stored in a computer by a slide scanner, and channel decomposition yields the pixel value of every pixel of the image in each of the R, G and B channels, giving the data information of the image, formulated as:
I(x, y) = [I_r(x, y), I_g(x, y), I_b(x, y)],
where I_r(x, y), I_g(x, y) and I_b(x, y) represent the red, green and blue color channels of I(x, y), respectively. Obtaining the per-channel pixel values through channel decomposition provides the image data on which the aided diagnosis is based.
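As an illustration, a minimal sketch of this channel decomposition is given below (the use of OpenCV is an assumption; the patent names no particular software):

```python
# Sketch of step 1's channel decomposition; OpenCV is an illustrative choice.
import cv2

def decompose_rgb(image_path: str):
    """Return the R, G, B channel arrays Ir(x,y), Ig(x,y), Ib(x,y)
    of a scanned pathology image stored at image_path."""
    img_bgr = cv2.imread(image_path)                   # OpenCV loads in BGR order
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB) # reorder to RGB
    return img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
```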
For the multi-magnification pathology image training data set X_{20×} in step 2, a pathology expert screens cases suitable for developing the image classification method and annotates the lesion areas in the pathology slides contained in those cases; sub-region pathology images are then cut from the 20× pathology slides according to the expert annotation to form the data set. The i-th sample of X_{20×} is written (I_{i,20×}, y_i), where I_{i,20×} is a sub-region pathology image and y_i ∈ {0, 1} is the corresponding label of I_{i,20×}: y_i = 1 indicates that the pathology image contains a cancerous region, and y_i = 0 indicates that it does not.
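For concreteness, this sample structure could be held in memory as follows; this is a sketch, as the patent prescribes no particular data format:

```python
# Hypothetical in-memory form of X_{20x}: (I_{i,20x}, y_i) pairs, where
# label 1 marks a sub-region image that contains a cancerous region.
from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:
    image: np.ndarray  # I_{i,20x}: RGB sub-region cut from a 20x slide
    label: int         # y_i in {0, 1}

X_20x: list[Sample] = []  # populated from expert-annotated slide regions
```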
As shown in fig. 2, step 3 specifically includes the following steps:
Step 3.1: establish a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_{2×}, CNN_{5×} and CNN_{20×}, and train them in order of magnification from low to high;
Step 3.2: downsample each pathology image in the multi-magnification training data set X_{20×} by a factor of 10 to generate a training data set of 2× pathology images, denoted X_{2×}, whose i-th sample is written (I_{i,2×}, y_i); use X_{2×} to train the classification network CNN_{2×};
Step 3.3: use CNN_{2×} to predict all the pathology images in X_{20×}; the prediction for the i-th pathology image is denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the i-th pathology image contains a cancerous region; store the predictions p_{i,2×};
Step 3.4: define a threshold T_{2×} ∈ (0,1) for screening the difficult samples in X_{2×}, and form the difficult samples into the 2× difficult-sample data set X̂_{2×}, formulated as:
X̂_{2×} = { (I_{i,2×}, y_i) | 1 − T_{2×} < p_{i,2×} < T_{2×} };
Step 3.5: from the 20× data set X_{20×}, take the 20× pathology images corresponding to the images in X̂_{2×} and downsample them by a factor of 4 to form the 5× training data set X_{5×}, whose i-th sample is written (I_{i,5×}, y_i);
Step 3.6: use X_{5×} to train the classification network CNN_{5×}; after training, use CNN_{5×} to predict all pathology images in X_{5×}; the prediction for the i-th pathology image is denoted p_{i,5×}, where p_{i,5×} ∈ [0,1]; store the predictions p_{i,5×};
Step 3.7: define a threshold T_{5×} ∈ (0,1) for screening the difficult samples in X_{5×}, and form the difficult samples into the 5× difficult-sample data set X̂_{5×}, formulated as:
X̂_{5×} = { (I_{i,5×}, y_i) | 1 − T_{5×} < p_{i,5×} < T_{5×} };
Step 3.8: from the 20× data set X_{20×}, take the 20× pathology images corresponding to the images in X̂_{5×} to form the 20× training data set X̂_{20×};
Step 3.9: use X̂_{20×} to train the classification network CNN_{20×}; when this training finishes, the deep learning training is complete.
When training CNN_{5×}, the training set contains only the samples that were difficult for CNN_{2×} to classify, so during training the CNN_{5×} model concentrates on distinguishing the samples that are difficult for CNN_{2×}, undisturbed by easy samples, and therefore obtains better classification ability. In the same way, when training CNN_{20×}, the training set contains only the samples that were difficult for CNN_{5×} to classify, so during training the CNN_{20×} model concentrates on distinguishing the samples that are difficult for CNN_{5×}, undisturbed by easy samples, and therefore obtains better classification ability.
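The screening rule of steps 3.4 and 3.7 can be sketched as follows. The symmetric uncertainty band 1 − T < p < T is a reconstruction from the surrounding text, since the patent's original formula images are not reproduced in this extract:

```python
# Hard-sample screening between cascade stages (steps 3.4 / 3.7).
import numpy as np

def select_hard_samples(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Return indices of samples the current-stage CNN is unsure about.

    probs     -- predicted cancer probabilities p_i in [0, 1]
    threshold -- T in (0, 1): p >= T is confidently cancerous,
                 p <= 1 - T is confidently non-cancerous
    """
    return np.where((probs > 1.0 - threshold) & (probs < threshold))[0]

# With T_2x = 0.92, only the uncertain patches are escalated to the 5x stage:
p_2x = np.array([0.97, 0.40, 0.05, 0.88])
print(select_hard_samples(p_2x, 0.92))  # -> [1 3]
```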
As shown in fig. 3, step 4 includes the following steps:
Step 4.1: at 2× magnification, cut the whole pathology slide to be diagnosed, by a sliding-window method, into sub-images of the same size as those in the training set X_{2×}; these sub-images form the 2× pathology image test set, denoted Y_{2×} = { I_{k,2×} | k = 1, 2, ..., N_{2×} }, where N_{2×} is the number of sub-images in Y_{2×};
Step 4.2: use the trained CNN_{2×} model to predict the images in Y_{2×}, obtaining the prediction result set P_{2×} = { p_{k,2×} | k = 1, 2, ..., N_{2×} };
Step 4.3: use the threshold T_{2×} and the set P_{2×} to identify the difficult samples in Y_{2×}, and collect the 5× pathology images at the corresponding positions in the whole slide to form the 5× pathology image test set, formulated as:
Y_{5×} = { I_{k,5×} | 1 − T_{2×} < p_{k,2×} < T_{2×} };
use the trained CNN_{5×} model to predict the images in Y_{5×}, obtaining the prediction result set P_{5×} = { p_{k,5×} | k = 1, 2, ..., N_{5×} }, where N_{5×} is the number of pathology images in Y_{5×};
Step 4.4: use the threshold T_{5×} and the set P_{5×} to identify the difficult samples in Y_{5×}, and collect the 20× pathology images at the corresponding positions in the whole slide to form the 20× pathology image test set, formulated as:
Y_{20×} = { I_{k,20×} | 1 − T_{5×} < p_{k,5×} < T_{5×} };
use the trained CNN_{20×} model to predict the images in Y_{20×}, obtaining the prediction result set P_{20×} = { p_{k,20×} | k = 1, 2, ..., N_{20×} }, where N_{20×} is the number of pathology images in Y_{20×};
Step 4.5: set a threshold T_{20×} and judge whether I_{k,20×} contains a cancerous region by the rule: I_{k,20×} contains a cancerous region if p_{k,20×} ≥ T_{20×}, and does not otherwise;
Step 4.6: with the whole-slide cancerous-region detection finished, use the stored predictions to judge whether the sub-region at the position of the k-th sub-image contains a cancerous region, achieving image classification; the decision rule is: the k-th sub-region is cancerous if p_{k,2×} ≥ T_{2×}, non-cancerous if p_{k,2×} ≤ 1 − T_{2×}; otherwise it is cancerous if p_{k,5×} ≥ T_{5×}, non-cancerous if p_{k,5×} ≤ 1 − T_{5×}; otherwise it is cancerous if p_{k,20×} ≥ T_{20×} and non-cancerous otherwise.
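Putting steps 4.1 to 4.6 together, the test-time cascade might look like the sketch below; the aligned patch arrays and the predict_proba interface are illustrative assumptions, not the patent's prescribed implementation:

```python
# Cascaded whole-slide inference (steps 4.2-4.6). Index k addresses the same
# slide sub-region at every magnification.
import numpy as np

def classify_slide(patches_2x, patches_5x, patches_20x,
                   cnn_2x, cnn_5x, cnn_20x,
                   t2=0.92, t5=0.85, t20=0.5):
    n = len(patches_2x)
    labels = np.zeros(n, dtype=int)

    p2 = cnn_2x.predict_proba(patches_2x)
    labels[p2 >= t2] = 1                          # confident cancer at 2x
    decided = (p2 >= t2) | (p2 <= 1 - t2)         # confident either way

    hard = np.where(~decided)[0]                  # escalate only these to 5x
    p5 = cnn_5x.predict_proba(patches_5x[hard])
    labels[hard[p5 >= t5]] = 1
    decided[hard[(p5 >= t5) | (p5 <= 1 - t5)]] = True

    hard = np.where(~decided)[0]                  # the small remainder needs 20x
    p20 = cnn_20x.predict_proba(patches_20x[hard])
    labels[hard[p20 >= t20]] = 1                  # T_20x = 0.5 settles the rest
    return labels
```

Because each stage removes the regions it has already decided, only the residue of uncertain regions ever reaches the expensive 20× model, which is where the reported speed-up comes from.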
the method is used for verifying the mammary lymph metastasis canceration image, and a threshold value T is set in the verification=0.92,T=0.85,T20×0.5. The result is shown in table 1, in which the classification accuracy of the classification problem of "whether or not the slice sub-images include the cancerous region" is used as an index, and the average time for analyzing a digital pathology whole-slice image (about 50000 × 50000 pixels at 20 times) is used as a calculation complexity index.
Table 1 feature extraction network performance comparison
Figure RE-GDA0002455753980000102
The table shows that the method is roughly 10 times faster than classifying separately at each magnification (2×, 5× and 20×), while accuracy drops by only 0.67 percentage points.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A pathological image classification method based on multi-magnification deep learning, characterized by comprising the following steps:
Step 1: scanning the pathology slide into a computer and expressing the pathology image in RGB color space to obtain an RGB pathology image;
Step 2: annotating the pathology images and building a multi-magnification pathology image training data set, denoted by the set X_{20×};
Step 3: establishing a multi-magnification CNN pathological image classification model and training it by deep learning on the RGB pathology images obtained in step 1, wherein step 3 comprises the following steps:
Step 3.1: establishing a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_{2×}, CNN_{5×} and CNN_{20×}, and training them in order of magnification from low to high;
Step 3.2: downsampling each pathology image in the multi-magnification training data set X_{20×} by a factor of 10 to generate a training data set of 2× pathology images X_{2×}, whose i-th sample is written (I_{i,2×}, y_i), and using X_{2×} to train the classification network CNN_{2×};
Step 3.3: predicting all the pathology images in X_{20×} with CNN_{2×}, the prediction for the i-th pathology image in X_{20×} being denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the i-th pathology image contains a cancerous region, and storing the predictions p_{i,2×};
Step 3.4: defining a threshold T_{2×} ∈ (0,1) for screening the difficult samples in X_{2×} and forming the difficult samples into the 2× difficult-sample data set X̂_{2×}, formulated as:
X̂_{2×} = { (I_{i,2×}, y_i) | 1 − T_{2×} < p_{i,2×} < T_{2×} };
Step 3.5: taking from the 20× data set X_{20×} the 20× pathology images corresponding to the images in X̂_{2×} and downsampling them by a factor of 4 to form the 5× training data set X_{5×}, whose i-th sample is written (I_{i,5×}, y_i);
Step 3.6: using X_{5×} to train the classification network CNN_{5×}; after training, predicting all pathology images in X_{5×} with CNN_{5×}, the prediction for the i-th pathology image being denoted p_{i,5×}, where p_{i,5×} ∈ [0,1], and storing the predictions p_{i,5×};
Step 3.7: defining a threshold T_{5×} ∈ (0,1) for screening the difficult samples in X_{5×} and forming the difficult samples into the 5× difficult-sample data set X̂_{5×}, formulated as:
X̂_{5×} = { (I_{i,5×}, y_i) | 1 − T_{5×} < p_{i,5×} < T_{5×} };
Step 3.8: taking from the 20× data set X_{20×} the 20× pathology images corresponding to the images in X̂_{5×} to form the 20× training data set X̂_{20×};
Step 3.9: using X̂_{20×} to train the classification network CNN_{20×}, the deep learning training being complete when this training finishes;
Step 4: detecting cancerous regions in the whole pathology slide with the trained CNN pathological image classification model.
2. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein the step 4 comprises the following steps:
Step 4.1: at 2× magnification, cutting the whole pathology slide to be diagnosed, by a sliding-window method, into sub-images of the same size as those in the training set X_{2×}, the sub-images forming the 2× pathology image test set, denoted Y_{2×} = { I_{k,2×} | k = 1, 2, ..., N_{2×} }, where N_{2×} is the number of sub-images in Y_{2×};
Step 4.2: predicting the images in Y_{2×} with the trained CNN_{2×} model to obtain the prediction result set P_{2×} = { p_{k,2×} | k = 1, 2, ..., N_{2×} };
Step 4.3: using the threshold T_{2×} and the set P_{2×} to identify the difficult samples in Y_{2×}, and collecting the 5× pathology images at the corresponding positions in the whole slide to form the 5× pathology image test set, formulated as:
Y_{5×} = { I_{k,5×} | 1 − T_{2×} < p_{k,2×} < T_{2×} };
predicting the images in Y_{5×} with the trained CNN_{5×} model to obtain the prediction result set P_{5×} = { p_{k,5×} | k = 1, 2, ..., N_{5×} }, where N_{5×} is the number of pathology images in Y_{5×};
Step 4.4: using the threshold T_{5×} and the set P_{5×} to identify the difficult samples in Y_{5×}, and collecting the 20× pathology images at the corresponding positions in the whole slide to form the 20× pathology image test set, formulated as:
Y_{20×} = { I_{k,20×} | 1 − T_{5×} < p_{k,5×} < T_{5×} };
predicting the images in Y_{20×} with the trained CNN_{20×} model to obtain the prediction result set P_{20×} = { p_{k,20×} | k = 1, 2, ..., N_{20×} }, where N_{20×} is the number of pathology images in Y_{20×};
Step 4.5: setting a threshold T_{20×} and judging whether I_{k,20×} contains a cancerous region by the rule that I_{k,20×} contains a cancerous region if p_{k,20×} ≥ T_{20×} and does not otherwise;
Step 4.6: with the whole-slide cancerous-region detection finished, using the stored predictions to judge whether the sub-region at the position in the slide corresponding to the k-th sub-image contains a cancerous region, thereby achieving image classification, formulated as: the k-th sub-region is cancerous if p_{k,2×} ≥ T_{2×}, non-cancerous if p_{k,2×} ≤ 1 − T_{2×}; otherwise it is cancerous if p_{k,5×} ≥ T_{5×}, non-cancerous if p_{k,5×} ≤ 1 − T_{5×}; otherwise it is cancerous if p_{k,20×} ≥ T_{20×} and non-cancerous otherwise.
3. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein the step 1 specifically comprises: scanning the pathology slide into a computer with a slide scanner, and obtaining, by channel decomposition, the pixel value of every pixel of the image in each of the R, G and B channels, thereby obtaining the data information of the image, formulated as:
I(x, y) = [I_r(x, y), I_g(x, y), I_b(x, y)],
wherein I_r(x, y), I_g(x, y) and I_b(x, y) represent the red, green and blue color channels of I(x, y), respectively.
4. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein for the multi-magnification pathology image training data set X_{20×} in step 2, a pathology expert screens cases suitable for developing the image classification method and annotates the lesion areas in the pathology slides contained in the cases; sub-region pathology images are cut from the 20× pathology slides according to the expert annotation to form the data set.
CN201810189171.1A 2018-03-08 2018-03-08 Pathological image classification method based on multi-magnification deep learning Active CN108305253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810189171.1A CN108305253B (en) 2018-03-08 2018-03-08 Pathological image classification method based on multi-magnification deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810189171.1A CN108305253B (en) 2018-03-08 2018-03-08 Pathological image classification method based on multi-magnification deep learning

Publications (2)

Publication Number Publication Date
CN108305253A CN108305253A (en) 2018-07-20
CN108305253B true CN108305253B (en) 2021-04-06

Family

ID=62849654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810189171.1A Active CN108305253B (en) 2018-03-08 2018-03-08 Pathological image classification method based on multi-magnification deep learning

Country Status (1)

Country Link
CN (1) CN108305253B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3874447A4 (en) * 2018-10-30 2022-08-17 Perimeter Medical Imaging, Inc. Methods and systems for medical image processing using a convolutional neural network (cnn)
CN109903284B (en) * 2019-03-04 2022-07-05 武汉大学 HER2 immunohistochemical image automatic discrimination method and system
CN113053512B (en) * 2019-12-27 2024-04-09 无锡祥生医疗科技股份有限公司 Evolutionary learning method, system and storage medium suitable for ultrasonic diagnosis
CN111814893A (en) * 2020-07-17 2020-10-23 首都医科大学附属北京胸科医院 Lung full-scan image EGFR mutation prediction method and system based on deep learning
CN112070726B (en) * 2020-08-17 2024-09-17 清华大学 Deep learning-based grape embryo slice image processing method and device
CN113109327B (en) * 2021-03-09 2023-11-17 杭州市林业科学研究院 Method for predicting dry rot of hickory
CN113269747B (en) * 2021-05-24 2023-06-13 浙江大学医学院附属第一医院 Pathological image liver cancer diffusion detection method and system based on deep learning
CN114638292B (en) * 2022-03-10 2023-05-05 中国医学科学院北京协和医院 Artificial intelligence pathology auxiliary diagnosis system based on multi-scale analysis
CN114782753B (en) * 2022-05-18 2024-07-12 合肥工业大学 Lung cancer tissue pathology full-section classification method based on weak supervision learning and converter

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012118448A (en) * 2010-12-03 2012-06-21 Sony Corp Image processing method, image processing apparatus and image processing program
US9235891B2 (en) * 2011-01-10 2016-01-12 Rutgers, The State University Of New Jersey Boosted consensus classifier for large images using fields of view of various sizes
US9224106B2 (en) * 2012-12-21 2015-12-29 Nec Laboratories America, Inc. Computationally efficient whole tissue classifier for histology slides
US9412162B2 (en) * 2013-08-21 2016-08-09 Sectra Ab Methods, systems and circuits for generating magnification-dependent images suitable for whole slide images
CN105938243B (en) * 2016-06-29 2018-04-13 华南理工大学 More multiplying power microscope quick focusing methods in a kind of TFT LCD detections

Also Published As

Publication number Publication date
CN108305253A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN108305253B (en) Pathological image classification method based on multi-magnification deep learning
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN108229576B (en) Cross-magnification pathological image feature learning method
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
CN110472676A (en) Stomach morning cancerous tissue image classification system based on deep neural network
CN112017191A (en) Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN112288706A (en) Automatic chromosome karyotype analysis and abnormality detection method
CN106934418B (en) Insulator infrared diagnosis method based on convolution recursive network
CN106952269A (en) The reversible video foreground object sequence detection dividing method of neighbour and system
CN106803257B (en) Method for segmenting disease spots in crop disease leaf image
CN111784711A (en) Lung pathology image classification and segmentation method based on deep learning
CN110929762A (en) Method and system for detecting body language and analyzing behavior based on deep learning
CN115909006A (en) Mammary tissue image classification method and system based on convolution Transformer
CN115661459A (en) 2D mean teacher model using difference information
CN117557779A (en) YOLO-based multi-scale target detection method
CN111489369B (en) Helicobacter pylori positioning method and device and electronic equipment
CN117557557A (en) Thyroid pathological section cell detection method based on cell nucleus segmentation model
CN111104855A (en) Workflow identification method based on time sequence behavior detection
CN107145888A (en) Video caption real time translating method
CN113673534B (en) RGB-D image fruit detection method based on FASTER RCNN
CN114708229A (en) Pathological section digital image full-hierarchy analysis system
CN115393283A (en) Polyp image segmentation method based on shallow layer supervision and attention feedback
CN113077438B (en) Cell nucleus region extraction method and imaging method for multi-cell nucleus color image
CN114037684A (en) Defect detection method based on yolov5 and attention mechanism model
CN116596891B (en) Wood floor color classification and defect detection method based on semi-supervised multitasking detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant