CN108305253B - Pathological image classification method based on multi-magnification deep learning - Google Patents
Pathological image classification method based on multi-magnification deep learning
- Publication number
- CN108305253B (application CN201810189171.1A)
- Authority
- CN
- China
- Prior art keywords
- pathological
- image
- images
- cnn
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
A whole-slide pathology diagnosis method based on multi-magnification deep learning: pathological sections are scanned into a computer and the pathological images are represented in RGB color space to obtain RGB pathological images; the pathological images are labeled and a multi-magnification pathological image training data set is built; a multi-magnification CNN pathological image classification model is built and trained by deep learning on the RGB pathological images; and cancerous-region detection is performed on the pathological whole slide with the trained CNN pathological image classification model. The method reduces the computation of existing multi-magnification cancerous-region detection algorithms while maintaining aided-diagnosis accuracy.
Description
Technical Field
The invention relates to the fields of image processing and deep learning, and in particular to a pathological image classification method based on multi-magnification deep learning.
Background
A digital whole-slide pathology image is a large, high-magnification digital image produced from a pathological section by a dedicated scanning imaging system; it can be displayed, transmitted, and processed by computer. With the continuing maturation of digital pathology imaging technology, computer-aided diagnosis methods based on digital pathological images have developed rapidly. In recent years, deep learning methods represented by the convolutional neural network (CNN) have become the mainstream approach to pathological image classification.
According to existing research, a CNN model built from high-magnification digital pathological images (e.g., 20×) and a CNN model built from low-magnification digital pathological images (e.g., 2×) are complementary for pathological image classification. To obtain higher classification accuracy, some researchers have proposed building a CNN cancerous-region detection model at each magnification and then fusing the image classification results obtained at the individual magnifications into a final result. However, performing repeated detection at multiple magnifications requires more computational resources; in particular, judging every region of a whole slide at high magnification incurs a large computational cost.
Disclosure of Invention
The invention aims to provide a pathological image classification method based on multi-magnification deep learning, which exploits the complementary strengths of CNN cancerous-region detection models built at different magnifications, reduces the computation of existing multi-magnification cancerous-region detection algorithms, and meets the requirements on pathological image classification accuracy and speed.
To solve the above technical problems, the technical scheme of the invention is as follows: a pathological image classification method based on multi-magnification deep learning, characterized by comprising the following steps:
Step 1: scan the pathological section into a computer and represent the pathological image in RGB color space to obtain an RGB pathological image;
Step 2: label the pathological images and build a multi-magnification pathological image training data set, denoted by the set X_20×;
Step 3: build a multi-magnification CNN pathological image classification model and perform deep learning training on it with the RGB pathological images obtained in step 1;
Step 4: classify pathological images with the deep-learning-trained CNN pathological image classification model.
Further, the step 3 comprises the following steps:
Step 3.1: build a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_2×, CNN_5×, and CNN_20×, and train the networks in order of magnification from low to high.
Step 3.2: down-sample each pathological image in the multi-magnification training data set X_20× by a factor of 10 to generate a training data set X_2× of 2× pathological images; the ith sample in X_2× is denoted (I_{i,2×}, y_i). Train the classification network CNN_2× on X_2×.
Step 3.3: predict all pathological images in X_20× with CNN_2×; the prediction for the ith pathological image in X_20× is denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the ith pathological image contains a cancerous region. Store the predictions p_{i,2×}.
Step 3.4: define a threshold T_2× ∈ (0,1) for screening the hard samples in X_2×, i.e., the samples that CNN_2× cannot confidently resolve at threshold T_2×, and collect these hard samples into a 2× hard-sample data set H_2×.
Step 3.5: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_2×, down-sample them by a factor of 4, and form the 5× training data set X_5×; the ith sample in X_5× is denoted (I_{i,5×}, y_i).
Step 3.6: train the classification network CNN_5× on X_5×; after training, predict all pathological images in X_5× with CNN_5×. The prediction for the ith pathological image in X_5× is denoted p_{i,5×}, where p_{i,5×} ∈ [0,1]; store the predictions p_{i,5×}.
Step 3.7: define a threshold T_5× ∈ (0,1) for screening the hard samples in X_5×, and collect these hard samples into a 5× hard-sample data set H_5×.
Step 3.8: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_5× to form the 20× training data set X'_20×.
Step 3.9: train the classification network CNN_20× on X'_20×; when this training is finished, the deep learning training is complete.
Further, the step 4 comprises the following steps:
Step 4.1: at 2×, cut the pathological whole slide to be diagnosed into sub-images of the same size as the training images by a sliding-window method, and combine them into a 2× pathological image test set S_2×, where N_2× is the number of sub-images in S_2×.
Step 4.2: predict the images in S_2× with the trained CNN_2× model to obtain the prediction result set P_2× = {p_{k,2×} | k = 1, 2, ..., N_2×}.
Step 4.3: use the threshold T_2× and the set P_2× to identify the hard samples in S_2× (the sub-images with p_{k,2×} ≤ T_2×, which CNN_2× cannot confidently classify), and collect the 5× pathological images at the corresponding positions in the whole slide to form the 5× test set S_5×. Predict the images in S_5× with the trained CNN_5× model to obtain the prediction result set P_5× = {p_{k,5×} | k = 1, 2, ..., N_5×}, where N_5× is the number of pathological images in S_5×.
Step 4.4: likewise, use the threshold T_5× and the set P_5× to identify the hard samples in S_5×, and collect the 20× pathological images at the corresponding positions in the whole slide to form the 20× test set S_20×. Predict the images in S_20× with the trained CNN_20× model to obtain the prediction result set P_20× = {p_{k,20×} | k = 1, 2, ..., N_20×}, where N_20× is the number of pathological images in S_20×.
Step 4.5: set a threshold T_20× and judge whether I_{k,20×} contains a cancerous region: the kth sub-image is labelled cancerous if p_{k,20×} > T_20×, and non-cancerous otherwise.
Step 4.6: the whole-slide cancerous-region detection is now complete; each sub-region of the slide is judged to contain a cancerous region if it was confidently flagged at 2× or 5×, or if its 20× prediction exceeds T_20×, which achieves the pathological image classification.
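The test-time cascade of steps 4.1–4.6 can be sketched as below. The dictionary-based `predict` stand-ins are hypothetical, and the rule "score above the stage threshold ⇒ cancerous, otherwise re-examine at the next magnification, with the 20× stage deciding both ways" is an assumption consistent with the thresholds reported in the verification (T_2× = 0.92, T_5× = 0.85, T_20× = 0.5).

```python
def cascade_classify(patch_ids, stages):
    """Coarse-to-fine whole-slide classification.

    patch_ids: identifiers of the sub-images cut at 2x (step 4.1)
    stages:    (predict_fn, threshold) pairs ordered from low to high
               magnification; predict_fn returns p in [0, 1]
    Returns a dict mapping patch id -> 1 (cancerous) or 0 (benign).
    """
    labels = {}
    pending = list(patch_ids)
    for i, (predict, t) in enumerate(stages):
        final_stage = (i == len(stages) - 1)
        nxt = []
        for k in pending:
            p = predict(k)
            if p > t:
                labels[k] = 1        # confidently cancerous: stop here
            elif final_stage:
                labels[k] = 0        # the 20x stage decides both ways
            else:
                nxt.append(k)        # hard sample: go up one magnification
        pending = nxt
    return labels

# Hypothetical per-magnification scores for three patches.
p2 = {"k1": 0.95, "k2": 0.50, "k3": 0.20}
p5 = {"k2": 0.90, "k3": 0.30}
p20 = {"k3": 0.40}
stages = [(p2.get, 0.92), (p5.get, 0.85), (p20.get, 0.5)]
print(cascade_classify(["k1", "k2", "k3"], stages))  # {'k1': 1, 'k2': 1, 'k3': 0}
```

Only "k3" ever reaches the 20× model, which is where the method's computational saving comes from.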
further, the step 1 specifically comprises: pathological sections are scanned and stored in a computer by a section scanner, pixel values of each pixel point of an image in R, G, B three channels are obtained through channel decomposition, and therefore data information of the image is obtained, and a formula is used as follows:
wherein Ir(x,y)、Ig(x,y)、IbAnd (x, y) respectively represent red, green and blue three color channels of I (x, y).
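The channel decomposition of step 1 amounts to splitting an RGB array into its three planes. A minimal numpy sketch follows; the H × W × 3 layout with channel order (R, G, B) is an assumption about how the scanner output is stored.

```python
import numpy as np

def decompose_rgb(image):
    """Split an H x W x 3 RGB pathology image into its three colour
    channels I_r(x, y), I_g(x, y), I_b(x, y) by channel decomposition."""
    assert image.ndim == 3 and image.shape[2] == 3
    return image[..., 0], image[..., 1], image[..., 2]

# Toy 2 x 2 "slide" in which every pixel is pure red.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 255
i_r, i_g, i_b = decompose_rgb(img)
print(int(i_r[0, 0]), int(i_g[0, 0]), int(i_b[0, 0]))  # 255 0 0
```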
Further, the multi-magnification pathological image training data set X_20× in step 2 is built as follows: a pathology expert screens cases suitable for developing the aided-diagnosis method and labels the lesion areas in the pathological sections contained in those cases; sub-region pathological images are then cut from the 20× pathological sections according to the expert's labels to form the data set.
In this method, cancerous regions are detected in order of magnification from low to high: regions already judged to contain cancer in the 2× whole-slide image no longer participate in subsequent computation, and likewise for regions judged to contain cancer in the 5× whole-slide image. As a result, only a small fraction of the regions in the slide must be judged on the 20× image, which greatly reduces the amount of computation and shortens the image classification time.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the training phase of the method of the present invention;
fig. 3 is a schematic diagram of the image classification phase of the method of the present invention.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
Fig. 1 shows a pathological image classification method based on multi-magnification deep learning, which specifically includes the following steps:
Step 1: scan the pathological section into a computer and represent the pathological image in RGB color space to obtain an RGB pathological image.
Step 2: label the pathological images and build a multi-magnification pathological image training data set, denoted by the set X_20×.
Step 3: build a multi-magnification CNN pathological image classification model and perform deep learning training on it with the RGB pathological images obtained in step 1.
Step 4: classify pathological images with the deep-learning-trained CNN pathological image classification model.
Specifically, in step 1 the pathological section is scanned and stored in a computer by a slide scanner, and the pixel value of each pixel of the image in the R, G, and B channels is obtained by channel decomposition: I(x, y) = (I_r(x, y), I_g(x, y), I_b(x, y)), where I_r(x, y), I_g(x, y), and I_b(x, y) denote the red, green, and blue color channels of I(x, y), respectively. This channel decomposition yields the data information of the image and provides the basis for aided diagnosis.
The multi-magnification pathological image training data set X_20× in step 2 is built as follows: a pathology expert screens cases suitable for developing the image classification method and labels the lesion areas in the pathological sections contained in those cases; sub-region pathological images are then cut from the 20× pathological sections according to the expert's labels to form the data set. The ith sample in X_20× is denoted (I_{i,20×}, y_i), where I_{i,20×} is the pathological image of a sub-region and y_i ∈ {0,1} is its label. Specifically, y_i = 1 means the pathological image I_{i,20×} contains a cancerous region, and y_i = 0 means it does not.
As shown in fig. 2, step 3 specifically includes the following steps:
Step 3.1: build a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_2×, CNN_5×, and CNN_20×, and train the networks in order of magnification from low to high.
Step 3.2: down-sample each pathological image in the multi-magnification training data set X_20× by a factor of 10 to generate a training data set X_2× of 2× pathological images; the ith sample in X_2× is denoted (I_{i,2×}, y_i). Train the classification network CNN_2× on X_2×.
Step 3.3: predict all pathological images in X_20× with CNN_2×; the prediction for the ith pathological image in X_20× is denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the ith pathological image contains a cancerous region. Store the predictions p_{i,2×}.
Step 3.4: define a threshold T_2× ∈ (0,1) for screening the hard samples in X_2×, i.e., the samples that CNN_2× cannot confidently resolve at threshold T_2×, and collect these hard samples into a 2× hard-sample data set H_2×.
Step 3.5: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_2×, down-sample them by a factor of 4, and form the 5× training data set X_5×; the ith sample in X_5× is denoted (I_{i,5×}, y_i).
Step 3.6: train the classification network CNN_5× on X_5×; after training, predict all pathological images in X_5× with CNN_5×. The prediction for the ith pathological image in X_5× is denoted p_{i,5×}, where p_{i,5×} ∈ [0,1]; store the predictions p_{i,5×}.
Step 3.7: define a threshold T_5× ∈ (0,1) for screening the hard samples in X_5×, and collect these hard samples into a 5× hard-sample data set H_5×.
Step 3.8: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_5× to form the 20× training data set X'_20×.
Step 3.9: train the classification network CNN_20× on X'_20×; when this training is finished, the deep learning training is complete.
When CNN_5× is trained, its training set contains only the samples that CNN_2× finds difficult to classify, so during training the CNN_5× model concentrates on distinguishing the samples that are hard for CNN_2× and is not distracted by easy samples, thereby obtaining better classification ability. Likewise, when CNN_20× is trained, its training set contains only the samples that CNN_5× finds difficult to classify, so the CNN_20× model concentrates on distinguishing the samples that are hard for CNN_5× and is not distracted by easy samples.
As shown in fig. 3, step 4 includes the following steps:
Step 4.1: at 2×, cut the pathological whole slide to be diagnosed into sub-images of the same size as the training images by a sliding-window method, and combine them into a 2× pathological image test set S_2×, where N_2× is the number of sub-images in S_2×.
Step 4.2: predict the images in S_2× with the trained CNN_2× model to obtain the prediction result set P_2× = {p_{k,2×} | k = 1, 2, ..., N_2×}.
Step 4.3: use the threshold T_2× and the set P_2× to identify the hard samples in S_2× (the sub-images with p_{k,2×} ≤ T_2×, which CNN_2× cannot confidently classify), and collect the 5× pathological images at the corresponding positions in the whole slide to form the 5× test set S_5×. Predict the images in S_5× with the trained CNN_5× model to obtain the prediction result set P_5× = {p_{k,5×} | k = 1, 2, ..., N_5×}, where N_5× is the number of pathological images in S_5×.
Step 4.4: likewise, use the threshold T_5× and the set P_5× to identify the hard samples in S_5×, and collect the 20× pathological images at the corresponding positions in the whole slide to form the 20× test set S_20×. Predict the images in S_20× with the trained CNN_20× model to obtain the prediction result set P_20× = {p_{k,20×} | k = 1, 2, ..., N_20×}, where N_20× is the number of pathological images in S_20×.
Step 4.5: set a threshold T_20× and judge whether I_{k,20×} contains a cancerous region: the kth sub-image is labelled cancerous if p_{k,20×} > T_20×, and non-cancerous otherwise.
Step 4.6: the whole-slide cancerous-region detection is now complete; each sub-region of the slide is judged to contain a cancerous region if it was confidently flagged at 2× or 5×, or if its 20× prediction exceeds T_20×, which achieves the image classification.
the method is used for verifying the mammary lymph metastasis canceration image, and a threshold value T is set in the verification2×=0.92,T5×=0.85,T20×0.5. The result is shown in table 1, in which the classification accuracy of the classification problem of "whether or not the slice sub-images include the cancerous region" is used as an index, and the average time for analyzing a digital pathology whole-slice image (about 50000 × 50000 pixels at 20 times) is used as a calculation complexity index.
Table 1 feature extraction network performance comparison
It can be seen that, compared with classifying separately at each magnification (2×, 5×, and 20×), the speed of the method is improved by about 10 times while the accuracy drops by only 0.67 percentage points.
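The order of magnitude of this saving can be checked with simple patch-count arithmetic. Only the slide size (about 50000 × 50000 pixels at 20×) comes from the text; the non-overlapping 256 × 256 window size is a hypothetical choice, since the patent does not state the window size.

```python
# Patch counts for one whole slide at each magnification, assuming
# hypothetical non-overlapping 256 x 256 sliding windows.
SIDE_20X = 50000   # slide side length in pixels at 20x (from the text)
PATCH = 256        # assumed window size

def n_patches(side):
    return (side // PATCH) ** 2

n_2x = n_patches(SIDE_20X // 10)   # 2x image: 10x smaller per side
n_5x = n_patches(SIDE_20X // 4)    # 5x image: 4x smaller per side
n_20x = n_patches(SIDE_20X)        # exhaustive scan at 20x
print(n_2x, n_5x, n_20x)  # 361 2304 38025
```

An exhaustive 2× pass touches roughly 100× fewer patches than a 20× pass, so the cascade only pays the 20× cost on the small fraction of regions that survive the earlier screening.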
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A pathological image classification method based on multi-magnification deep learning, characterized by comprising the following steps:
Step 1: scan the pathological section into a computer and represent the pathological image in RGB color space to obtain an RGB pathological image;
Step 2: label the pathological images and build a multi-magnification pathological image training data set, denoted by the set X_20×;
Step 3: build a multi-magnification CNN pathological image classification model and perform deep learning training on it with the RGB pathological images obtained in step 1, wherein step 3 comprises the following steps:
Step 3.1: build a multi-magnification CNN pathological image classification model comprising several CNN networks, denoted CNN_2×, CNN_5×, and CNN_20×, and train the networks in order of magnification from low to high;
Step 3.2: down-sample each pathological image in the multi-magnification training data set X_20× by a factor of 10 to generate a training data set X_2× of 2× pathological images; the ith sample in X_2× is denoted (I_{i,2×}, y_i); train the classification network CNN_2× on X_2×;
Step 3.3: predict all pathological images in X_20× with CNN_2×; the prediction for the ith pathological image in X_20× is denoted p_{i,2×}, where p_{i,2×} ∈ [0,1] is the probability that the ith pathological image contains a cancerous region; store the predictions p_{i,2×};
Step 3.4: define a threshold T_2× ∈ (0,1) for screening the hard samples in X_2×, and collect the hard samples into a 2× hard-sample data set H_2×;
Step 3.5: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_2×, down-sample them by a factor of 4, and form the 5× training data set X_5×; the ith sample in X_5× is denoted (I_{i,5×}, y_i);
Step 3.6: train the classification network CNN_5× on X_5×; after training, predict all pathological images in X_5× with CNN_5×; the prediction for the ith pathological image in X_5× is denoted p_{i,5×}, where p_{i,5×} ∈ [0,1]; store the predictions p_{i,5×};
Step 3.7: define a threshold T_5× ∈ (0,1) for screening the hard samples in X_5×, and collect the hard samples into a 5× hard-sample data set H_5×;
Step 3.8: take from the 20× data set X_20× the 20× pathological images corresponding to the images in H_5× to form the 20× training data set X'_20×;
Step 3.9: train the classification network CNN_20× on X'_20×; when this training is finished, the deep learning training is complete;
Step 4: perform cancerous-region detection on the pathological whole slide with the deep-learning-trained CNN pathological image classification model.
2. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein the step 4 comprises the following steps:
Step 4.1: at 2×, cut the pathological whole slide to be diagnosed into sub-images of the same size as the training images by a sliding-window method, and combine them into a 2× pathological image test set S_2×, where N_2× is the number of sub-images in S_2×;
Step 4.2: predict the images in S_2× with the trained CNN_2× model to obtain the prediction result set P_2× = {p_{k,2×} | k = 1, 2, ..., N_2×};
Step 4.3: use the threshold T_2× and the set P_2× to identify the hard samples in S_2×, and collect the 5× pathological images at the corresponding positions in the whole slide to form the 5× test set S_5×; predict the images in S_5× with the trained CNN_5× model to obtain the prediction result set P_5× = {p_{k,5×} | k = 1, 2, ..., N_5×}, where N_5× is the number of pathological images in S_5×;
Step 4.4: use the threshold T_5× and the set P_5× to identify the hard samples in S_5×, and collect the 20× pathological images at the corresponding positions in the whole slide to form the 20× test set S_20×; predict the images in S_20× with the trained CNN_20× model to obtain the prediction result set P_20× = {p_{k,20×} | k = 1, 2, ..., N_20×}, where N_20× is the number of pathological images in S_20×;
Step 4.5: set a threshold T_20× and judge whether I_{k,20×} contains a cancerous region: the kth sub-image is labelled cancerous if p_{k,20×} > T_20×, and non-cancerous otherwise;
Step 4.6: after the whole-slide cancerous-region detection is complete, use the prediction results to judge whether the sub-region of the slide corresponding to the position of the kth sub-image contains a cancerous region, thereby achieving the image classification.
3. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein the step 1 specifically comprises: scanning and storing the pathological section in a computer with a slide scanner, and obtaining the pixel value of each pixel of the image in the R, G, and B channels by channel decomposition, giving the data information of the image: I(x, y) = (I_r(x, y), I_g(x, y), I_b(x, y)), where I_r(x, y), I_g(x, y), and I_b(x, y) denote the red, green, and blue color channels of I(x, y), respectively.
4. The pathological image classification method based on multi-magnification deep learning according to claim 1, wherein the multi-magnification pathological image training data set X_20× in step 2 is built by having a pathology expert screen cases suitable for developing the image classification method and label the lesion areas in the pathological sections contained in those cases; sub-region pathological images are then cut from the 20× pathological sections according to the expert's labels to form the data set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810189171.1A CN108305253B (en) | 2018-03-08 | 2018-03-08 | Pathological image classification method based on multiple-time rate deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108305253A CN108305253A (en) | 2018-07-20 |
CN108305253B true CN108305253B (en) | 2021-04-06 |
Family
ID=62849654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810189171.1A Active CN108305253B (en) | 2018-03-08 | 2018-03-08 | Pathological image classification method based on multiple-time rate deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108305253B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3874447A4 (en) * | 2018-10-30 | 2022-08-17 | Perimeter Medical Imaging, Inc. | Methods and systems for medical image processing using a convolutional neural network (cnn) |
CN109903284B (en) * | 2019-03-04 | 2022-07-05 | 武汉大学 | HER2 immunohistochemical image automatic discrimination method and system |
CN113053512B (en) * | 2019-12-27 | 2024-04-09 | 无锡祥生医疗科技股份有限公司 | Evolutionary learning method, system and storage medium suitable for ultrasonic diagnosis |
CN111814893A (en) * | 2020-07-17 | 2020-10-23 | 首都医科大学附属北京胸科医院 | Lung full-scan image EGFR mutation prediction method and system based on deep learning |
CN112070726B (en) * | 2020-08-17 | 2024-09-17 | 清华大学 | Deep learning-based grape embryo slice image processing method and device |
CN113109327B (en) * | 2021-03-09 | 2023-11-17 | 杭州市林业科学研究院 | Method for predicting dry rot of hickory |
CN113269747B (en) * | 2021-05-24 | 2023-06-13 | 浙江大学医学院附属第一医院 | Pathological image liver cancer diffusion detection method and system based on deep learning |
CN114638292B (en) * | 2022-03-10 | 2023-05-05 | 中国医学科学院北京协和医院 | Artificial intelligence pathology auxiliary diagnosis system based on multi-scale analysis |
CN114782753B (en) * | 2022-05-18 | 2024-07-12 | 合肥工业大学 | Lung cancer tissue pathology full-section classification method based on weak supervision learning and converter |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012118448A (en) * | 2010-12-03 | 2012-06-21 | Sony Corp | Image processing method, image processing apparatus and image processing program |
US9235891B2 (en) * | 2011-01-10 | 2016-01-12 | Rutgers, The State University Of New Jersey | Boosted consensus classifier for large images using fields of view of various sizes |
US9224106B2 (en) * | 2012-12-21 | 2015-12-29 | Nec Laboratories America, Inc. | Computationally efficient whole tissue classifier for histology slides |
US9412162B2 (en) * | 2013-08-21 | 2016-08-09 | Sectra Ab | Methods, systems and circuits for generating magnification-dependent images suitable for whole slide images |
CN105938243B (en) * | 2016-06-29 | 2018-04-13 | 华南理工大学 | More multiplying power microscope quick focusing methods in a kind of TFT LCD detections |
- 2018-03-08: application CN201810189171.1A filed (CN); patent CN108305253B active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |