CN116205868A - Intelligent pathological section image detection method and system - Google Patents
- Publication number: CN116205868A
- Application number: CN202310122248.4A
- Authority
- CN
- China
- Prior art keywords
- organ
- image
- classification
- model
- slice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30056—Liver; Hepatic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30084—Kidney; Renal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of pathological section image detection, and in particular relates to an intelligent pathological section image detection method and system. The method comprises the following steps: collecting organ slice electron microscope images; processing the acquired images to construct a data set; establishing a classification model; inputting the organ slice images to be detected into the trained classification model and performing first-type and second-type classification; correcting the final prediction result; detecting the different cells and regions in the pathological section image according to the organ prediction result obtained in the previous step; and integrating the information and outputting result statistics. By improving the YOLO network model and optimizing the activation function, the invention improves the accuracy of organ classification, and can accurately and efficiently detect organ slice microscope images to obtain the proportion of each cell type and region in the image, avoiding errors caused by subjective human factors and producing an objective detection result.
Description
Technical Field
The invention belongs to the technical field of pathological section image detection, and in particular relates to an intelligent pathological section image detection method and system.
Background
Organ slice images are generally diagnosed by macroscopic observation: the slice to be examined is first held up and observed with the naked eye to determine the tissue or organ to which it belongs, such as liver, spleen, kidney, lung or tumor, and whether the site is lesioned is judged from the uniformity of the slice's texture, color and other properties. Distinguishing the organ types of liver, spleen, kidney, lung and tumor, and counting the proportions of the different regions and cells in each slice image, therefore provides an effective way to understand the current state of the organ.
The patent with publication number CN109376802A discloses a gastroscope organ classification method based on dictionary learning. The method preprocesses the original image, extracts color and texture features from the image data and fuses them, constructs a test set and a multi-class training set, establishes a K-SVD (K-times singular value decomposition) dictionary learning model, feeds the multi-class training set matrices into the model, and iteratively updates it to train a dictionary for each class; sparse coefficients under the multi-class dictionaries are obtained by the orthogonal matching pursuit algorithm, from which the test set reconstructed under each class dictionary is computed; finally, a mean-square-error classifier is constructed, and multiple organs are classified by comparing the mean square error between the reconstructed and original test sets.
The patent with publication number CN114463290A discloses an intelligent identification method and system for organoid type based on microscopic images. The method comprises the following steps: acquiring an alveolar organoid microscopic image; framing and extracting each alveolar organoid separately, with x_i denoting one of the alveolar organoid images; inputting each alveolar organoid image into three types of intelligent judgment and identification models to obtain three prediction results; after all alveolar organoids are processed, computing the accuracy and recall of the three types of models, evaluating the three models with these results, and selecting the model with the best evaluation result as the final classification model, which is used to identify and judge the alveolar organoid type.
The prior art has at least the following disadvantages:
1. Existing organ type detection classifies and identifies whole organs; it does not classify and identify slice images of the organs.
2. In organ slice images, the distribution of cells and regions is judged manually rather than being intelligently distinguished and counted.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides an intelligent pathological section image detection method and system which, on the basis of judging the organ category, further counts the proportions of the different cells and regions in the organ slice image, reduces the influence of subjective manual judgment, and supports batch processing.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
An intelligent pathological section image detection method comprises the following steps:
step S1, collecting organ slice electron microscope images;
step S2, processing the acquired electron microscope images of all organ slices to construct a data set;
step S3, establishing a classification model based on the neural network of the YOLO network model, using the Meta-Acon function as the activation function, and inputting the processed pathological section image set for classification model training to obtain a trained classification model;
step S4, inputting the organ slice image to be detected into the trained classification model, and performing first-type and second-type classification identification;
step S5, correcting the final prediction result according to the first-type and second-type prediction categories obtained in step S4;
step S6, detecting the different cells and regions in the pathological section image according to the organ prediction result obtained in step S5;
and step S7, integrating the information and outputting result statistics.
Preferably, the organ slice electron microscope images in step S1 are slice images of normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney and tumor, acquired with an electron microscope at different magnifications.
Preferably, step S2 includes:
S201, magnification unification: processing the organ slice electron microscope images taken at different magnifications into images of the same magnification;
S202, image data enhancement: applying data enhancement to the images to simulate organ slice images under various conditions;
S203, image size unification: unifying the image sizes according to the input requirements of the neural network;
S204, data set creation: creating a data set organized by organ category and storing the pathological organ slice images in it; for each organ category, randomly dividing the images into training, test and verification sets at a ratio of 7:2:1, with the images stored in folders named by category under the folder of each set.
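As an illustration, the 7:2:1 split described in S204 can be sketched as follows. The folder layout and file handling are assumptions for this sketch, not the patent's actual implementation:

```python
import random
import shutil
from pathlib import Path

def split_dataset(src_dir, dst_dir, ratios=(0.7, 0.2, 0.1), seed=0):
    """Randomly split each class folder into train/test/val at 7:2:1,
    copying images into <set>/<category> folders as described in S204."""
    random.seed(seed)
    splits = ("train", "test", "val")
    for class_dir in sorted(Path(src_dir).iterdir()):
        if not class_dir.is_dir():
            continue
        images = sorted(p for p in class_dir.iterdir() if p.is_file())
        random.shuffle(images)
        n = len(images)
        n_train = round(n * ratios[0])          # 70% for training
        n_test = round(n * ratios[1])           # 20% for testing
        bounds = (0, n_train, n_train + n_test, n)
        for split, lo, hi in zip(splits, bounds, bounds[1:]):
            out = Path(dst_dir) / split / class_dir.name
            out.mkdir(parents=True, exist_ok=True)
            for img in images[lo:hi]:
                shutil.copy(str(img), str(out / img.name))
```

Rounding the per-set counts (rather than truncating floating-point products) keeps a 10-image class split exactly 7/2/1.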
Preferably, in step S5, the first-type classification comprises the 5 categories lung, kidney, liver, spleen and tumor, and the second-type classification comprises the 9 categories normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney and tumor.
Preferably, the training of the classification model in step S3 includes: selecting and improving a YOLO network model, loading pre-trained network parameters by transfer learning, training the model, and selecting the network model with the highest accuracy on the pathological organ slice image verification set as the pathological organ slice image detection network model.
More preferably, the YOLO network model is a YOLOv5 model.
- More preferably, in the network structure diagram of the pathological organ slice image detection network model, Meta-Acon is the activation function, Conv is a convolution layer, AdaptiveAvgPool2d is adaptive average pooling, Bottleneck is a bottleneck layer, and BatchNorm2d is a data normalization layer.
Preferably, step S5 includes:
S501, reading the first-type and second-type classification identification results output in step S4;
S502, merging the two identification results and removing duplicate information;
S503, obtaining the final prediction result from the first-type and second-type classification identification results, determined as follows:
when an organ slice image is predicted as organ A by the first-type classification and as a subclass of organ A by the second-type classification, the final prediction result is judged to be that subclass of organ A; when an organ slice image is predicted as organ A by the first-type classification but as a subclass of a different organ B by the second-type classification, the final prediction result is judged to be organ A.
Preferably, step S6 includes:
S601, reading the final prediction category from step S5;
S602, reading the length and width of the image to be detected to obtain its pixel area;
S603, loading the parameter information corresponding to the predicted organ type, and detecting the proportions of blue cells, white areas, red cells, pink cells, brown cells, blue solid cells and blue hollow cells in the organ slice image;
S604, writing the detection results to a file.
More preferably, the blue solid cell detection in S603 proceeds as follows: screening out the blue parts by thresholding; setting the target region's area, roundness, convexity and other parameters; performing blob detection with the set parameters; drawing a circle from the center position of each detected target; making a mask from the drawn circles; combining the mask with the original image pixel by pixel to obtain a new image; extracting the blue region in the new image to obtain the area of the blue solid cell region; and calculating the proportion of blue solid cells.
The calculation is: area1 = S1 / S0, where area1 is the proportion of blue solid cells, S0 is the pixel area of the image to be detected obtained in S602, and S1 is the extracted area of the blue solid cell region.
More preferably, the blue hollow cell detection in S603 is: screening out the blue parts by thresholding; removing the blue solid cell region by image subtraction to obtain the blue hollow cell region; and calculating the proportion of blue hollow cells.
The calculation is: area2 = (S2 - S1) / S0, where area2 is the proportion of blue hollow cells, S2 is the blue area screened out of the image to be detected by thresholding, S1 is the area of the blue solid cell region, and S0 is the pixel area of the image to be detected.
Preferably, the data integration and output in step S7 specifically includes the following steps:
S701, creating a table file, reading the organ slice microscope images, and writing the organ slice electron microscope image information to the file;
S702, writing the final prediction result and probability information obtained in step S5 to the file;
S703, writing the proportion information of the different cells and regions detected in step S6 to the file.
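A minimal sketch of the table-file output in S701-S703, assuming a CSV file; the column names are illustrative, since the patent does not specify a file format or schema:

```python
import csv

# Field names are illustrative assumptions, not the patent's actual schema.
RESULT_FIELDS = ["image", "width", "height", "pred_class", "probability",
                 "blue_cells", "white_area", "red_cells", "pink_cells",
                 "brown_cells", "blue_solid", "blue_hollow"]

def write_result_table(path, records):
    """S701-S703: one row per slice image holding the image information,
    the final prediction with its probability, and the detected proportions."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RESULT_FIELDS)
        writer.writeheader()
        writer.writerows(records)
```

Writing all rows into one table is what allows the results to be reviewed in batches, as described in the beneficial effects.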
The invention also provides an intelligent pathological section image detection system that applies the above intelligent pathological section image detection method, comprising:
an organ slice image acquisition module, which collects organ slice electron microscope images: multi-category organ slice microscope images at different angles and under different conditions;
an organ slice type prediction module, which performs image processing on the multiple organ slice electron microscope images collected by the acquisition module at different angles and under different conditions, constructs a data set, establishes a classification model based on the YOLO network model with the Meta-Acon function as the activation function, inputs the processed pathological slice image set for classification model training to obtain trained first-type and second-type classification models, predicts the category of an input organ slice image, and corrects the final prediction result according to the result information of the first-type and second-type classifications;
a cell and region proportion statistics module for each cell and region of the organ slice image, which loads the parameter information corresponding to the predicted organ type and detects the proportions of blue cells, white areas, red cells, pink cells, brown cells, blue solid cells and blue hollow cells in the organ slice image;
and a data integration and output module, which creates a table file, writes the collected organ slice electron microscope image information to the file, writes the probability and prediction result information obtained by the organ slice type prediction module to the file, and writes the cell and region proportion statistics of the organ slice image to the file to obtain the final result file.
Compared with the prior art, the invention has the following beneficial effects:
By improving the YOLO network model and optimizing the activation function, the invention improves the accuracy of organ classification.
The invention corrects the final prediction result through the detection of the first-type and second-type classification models.
The invention can accurately and efficiently detect organ slice microscope images to obtain the proportion of each cell type and region in the image, avoiding errors caused by subjective human factors and producing an objective detection result.
Through persistent data storage, the invention integrates the detection results of the organ slice type prediction algorithm and of the image cell and region proportion detection algorithm, so that image information can be viewed in a table, the organ category probabilities and the cell and region proportions of different images can be displayed in batches, and the detection efficiency of pathological organ slice images is improved.
Drawings
FIG. 1 is a flow chart of the detection method of the present invention.
FIG. 2 is a network structure diagram of a pathological organ slice image detection network model in the invention.
FIG. 3 is a test result of a test set of the first classification model.
Fig. 4 is a test result of a test set of the second classification model.
FIG. 5 is a flow chart of the detection of blue solid cells.
Detailed Description
The technical solutions of the present invention will now be described clearly with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present invention.
It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments should not be construed as limiting the scope of the present invention unless it is specifically stated otherwise. Furthermore, it should be understood that the dimensions of the various elements shown in the figures are not necessarily drawn to actual scale, e.g., the thickness, width, length, or distance of some elements may be exaggerated relative to other structures for ease of description.
The following description of the exemplary embodiment(s) is merely illustrative, and is in no way intended to limit the invention, its application, or uses. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail herein, but where applicable, should be considered part of the present specification.
As shown in fig. 1, the invention provides an intelligent pathological section image detection method, comprising the following steps:
step S1, collecting organ slice electron microscope images, which are obtained by acquiring slices of normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney and tumor at different magnifications with an electron microscope;
the images are grouped by organ category: the first-type classification comprises the 5 categories lung, kidney, liver, spleen and tumor, where lung comprises normal lung and model lung, kidney comprises normal kidney and model kidney, liver comprises normal liver and model liver, and spleen comprises normal spleen and model spleen; the second-type classification comprises the 9 categories normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney and tumor.
step S2, processing the acquired electron microscope images of all organ slices to construct a data set;
step S3, establishing a classification model based on the neural network of the YOLO network model, namely its Backbone part, using the Meta-Acon function as the activation function, and inputting the processed pathological section image set for classification model training to obtain a trained classification model;
step S4, inputting the organ slice image to be detected into the trained classification model, and performing first-type and second-type classification identification;
step S5, correcting the final prediction result according to the first-type and second-type prediction categories obtained in step S4;
step S6, detecting the different cells and regions in the pathological section image according to the organ prediction result obtained in step S5;
and step S7, integrating the information and outputting result statistics.
Preferably, step S2 includes:
S201, magnification unification: processing the organ slice electron microscope images taken at different magnifications into images of the same magnification. Taking the conversion of a 100x image with a resolution of 2880x2048 into 200x images with a resolution of 225x225 as an example: the image is first divided into four quadrants of 1440x1024, and each 1440x1024 image is then scaled down to 225x225;
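The quartering-and-scaling example above can be sketched as follows; nearest-neighbour sampling here stands in for whatever interpolation the actual implementation uses:

```python
import numpy as np

def quarter_and_resize(img, out_hw=(225, 225)):
    """Split a 100x image into four quadrants (each covering a 200x field of
    view) and resize each to the network input size by nearest-neighbour
    sampling. `img` is an (H, W, C) array, e.g. 2048 x 2880 x 3."""
    h, w = img.shape[:2]
    quadrants = [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
                 img[h // 2:, :w // 2], img[h // 2:, w // 2:]]
    oh, ow = out_hw
    out = []
    for q in quadrants:
        qh, qw = q.shape[:2]
        rows = np.arange(oh) * qh // oh      # nearest source row per output row
        cols = np.arange(ow) * qw // ow      # nearest source column per output column
        out.append(q[rows][:, cols])
    return out
```

Each quadrant of the 100x image covers half the linear field of view, which is why it is treated as a 200x image before scaling.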
s202, enhancing image data: carrying out data enhancement processing on the image, and simulating organ slice images in various environments; in order to expand training data and improve generalization capability, noise and manpower are used to increase the size of the training set, so that the problem of unbalanced sample size is solved. Organ slice images under various circumstances are simulated by random rotation, contrast adjustment, brightness transformation, etc.
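A minimal sketch of the random flip, rotation and brightness/contrast augmentations mentioned above; the probability, factor and offset ranges are illustrative assumptions:

```python
import numpy as np

def augment(img, rng=None):
    """Randomly flip, rotate (by multiples of 90 degrees) and jitter the
    brightness/contrast of a uint8 image, as a simplified stand-in for the
    augmentations described in S202."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # random 90-degree rotation
    alpha = rng.uniform(0.8, 1.2)                 # contrast factor (assumed range)
    beta = rng.uniform(-20, 20)                   # brightness offset (assumed range)
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)
```

Arbitrary-angle rotation would need interpolation; 90-degree steps keep the sketch dependency-free while still illustrating the idea.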
S203, image size unification: unifying the image sizes according to the input requirements of the neural network;
S204, data set creation: creating a data set organized by organ category and storing the pathological organ slice images in it; for each organ category, randomly dividing the images into training, test and verification sets at a ratio of 7:2:1, with the images stored in folders named by category under the folder of each set.
The training of the classification model in step S3 includes: selecting and improving a YOLO network model, loading pre-trained network parameters by transfer learning, training the model, and selecting the network model with the highest accuracy on the pathological organ slice image verification set as the pathological organ slice image detection network model. The YOLO network model is the YOLOv5 model.
The network structure diagram of the pathological organ slice image detection network model is shown in fig. 2, where Meta-Acon is the activation function, Conv is a convolution layer, AdaptiveAvgPool2d is adaptive average pooling, Bottleneck is a bottleneck layer, and BatchNorm2d is a data normalization layer.
To improve the nonlinear expression capacity of the model, the Meta-Acon activation function is introduced into the neural network; the relevant explanation is as follows:
The Swish activation function is:

Swish(x) = x · σ(β_c · x)

where σ denotes the Sigmoid function and β_c is a constant.
ACON-C, the most general smooth form in the ACON (Activate Or Not) activation function family, is defined as:

ACON-C(x) = (p1 - p2) · x · σ[β_c · (p1 - p2) · x] + p2 · x

where p1 and p2 are learnable parameters controlling the upper and lower bounds of the function. In general, β_c = p1 = 1 and p2 = 0.
Meta-Acon has the same form as the ACON-C activation function, except that in Meta-Acon β_c controls whether each neuron is activated. The adaptive function for β_c in Meta-Acon is:

β_c = σ(W1 · W2 · Σ_h Σ_w x_{c,h,w})

where β_c is the parameter controlling whether the neuron is activated (β_c = 0 is considered inactive); β_c is therefore computed by convolution and similar operations, realizing adaptive activation. The specific operation is as follows: assuming the input feature size is C×H×W, the input is first averaged over the H and W dimensions, then passed through two 1×1 convolution layers (with weights W1 and W2), and finally mapped into the range (0, 1) by the Sigmoid function to obtain the computed β_c value, which determines whether to activate.
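The β_c computation and the ACON-C formula above can be sketched together as follows. This is a NumPy stand-in: on a spatially averaged input the two 1x1 convolutions reduce to matrix multiplications over the channel axis, and the weight shapes are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def meta_acon(x, w1, w2, p1=1.0, p2=0.0):
    """Meta-Acon on a (C, H, W) feature map.

    beta = sigmoid(W1 @ W2 @ mean_{h,w}(x)) gives a per-channel switch in
    (0, 1), which is then plugged into ACON-C:
        (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x
    """
    pooled = x.mean(axis=(1, 2))          # (C,): average over the H and W dims
    beta = sigmoid(w1 @ (w2 @ pooled))    # (C,): the two "1x1 convolutions"
    beta = beta[:, None, None]            # broadcast per channel over H, W
    dp = p1 - p2
    return dp * x * sigmoid(beta * dp * x) + p2 * x
```

Note that with p1 = p2 the first term vanishes and the function degenerates to a linear map, which is what lets β_c interpolate between activated and non-activated behaviour.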
Preferably, step S5 includes:
S501, reading the first-type and second-type classification identification results output in step S4;
S502, merging the two identification results and removing duplicate information;
S503, obtaining the final prediction result from the first-type and second-type classification identification results, determined as follows:
when an organ slice image is predicted as organ A by the first-type classification and as a subclass of organ A by the second-type classification, the final prediction result is judged to be that subclass of organ A; when an organ slice image is predicted as organ A by the first-type classification but as a subclass of a different organ B by the second-type classification, the final prediction result is judged to be organ A.
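The correction rule in S503 can be sketched as follows; the subclass-to-organ mapping and the category names are illustrative:

```python
def correct_prediction(first_pred, second_pred, organ_of):
    """Merge the 5-class (organ) and 9-class (subclass) predictions.

    `organ_of` maps each subclass name (e.g. 'model_lung') to its parent
    organ (e.g. 'lung'); the names are illustrative. If the predicted
    subclass belongs to the predicted organ, keep the finer subclass;
    otherwise fall back to the coarser organ prediction."""
    if organ_of.get(second_pred) == first_pred:
        return second_pred
    return first_pred
```

The asymmetry is deliberate: the coarser first-type prediction acts as a sanity check on the finer second-type one, so a disagreement is resolved in favour of the organ-level result.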
Preferably, step S6 includes:
S601, reading the final prediction category from step S5;
S602, reading the length and width of the image to be detected to obtain its pixel area;
S603, loading the parameter information corresponding to the predicted organ type, and detecting the proportions of blue cells, white areas, red cells, pink cells, brown cells, blue solid cells and blue hollow cells in the organ slice image;
S604, writing the detection results to a file.
More preferably, the blue solid cell detection in S603 proceeds as follows: as shown in fig. 5, the original pathological section image is image a; the blue parts are screened out by thresholding; the target region's area, roundness, convexity and other parameters are set; blob detection is performed with the set parameters; a circle is drawn from the center position of each detected target, with b showing the drawing result; the drawn circles are extracted to make a mask, which is combined with the original image pixel by pixel to obtain image c; the blue region in image c is extracted to obtain the blue solid cell region, giving image d; image d is binarized to obtain image e; and the area of the white region in image e is extracted and used to calculate the proportion of blue solid cells.
The calculation is:

area1 = S1 / S0

where area1 is the proportion of blue solid cells, S0 is the pixel area of the image to be detected obtained in S602, and S1 is the extracted area of the blue solid cell region.
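The circle-mask and ratio steps can be sketched as follows. The plain blue-channel threshold stands in for the blob detection with area/roundness/convexity filtering described above, and the fixed circle radius is an assumption:

```python
import numpy as np

def circle_mask(shape, centers, radius):
    """Boolean mask of filled circles drawn at the detected blob centers
    ('draw a circle from the center position ... make a mask')."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    mask = np.zeros((h, w), dtype=bool)
    for cy, cx in centers:
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mask

def solid_cell_ratio(img, centers, radius, blue_thresh=120):
    """area1 = S1 / S0: blue pixels inside the drawn circles over total
    pixels. `img` is an RGB uint8 array; the blue screen is a simple
    channel threshold for illustration."""
    h, w = img.shape[:2]
    blue = img[..., 2] > blue_thresh               # illustrative blue screen
    s1 = int((blue & circle_mask((h, w), centers, radius)).sum())
    return s1 / (h * w)                            # S1 / S0
```

In a full implementation the centers would come from a blob detector configured with the area, roundness and convexity parameters mentioned in the text.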
More preferably, the blue hollow cell detection in S603 is: the blue parts are screened out by thresholding; the blue solid cell region is removed by image subtraction to obtain the blue hollow cell region; and the proportion of blue hollow cells is calculated.
The calculation is:

area2 = (S2 - S1) / S0

where area2 is the proportion of blue hollow cells, S2 is the blue area screened out of the image to be detected by thresholding, S1 is the area of the blue solid cell region, and S0 is the pixel area of the image to be detected.
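The subtraction step for blue hollow cells can be sketched as follows, with the solid-cell regions passed in as a boolean mask (obtained e.g. from the earlier blob detection) and the blue screen again a plain channel threshold for illustration:

```python
import numpy as np

def hollow_cell_ratio(img, solid_mask, blue_thresh=120):
    """area2 = (S2 - S1) / S0 for an RGB uint8 image.

    S2: all blue pixels from the threshold screen; S1: blue pixels inside
    the previously detected solid-cell regions; subtracting the solid part
    from the total blue area leaves the hollow cells."""
    h, w = img.shape[:2]
    blue = img[..., 2] > blue_thresh      # illustrative blue screen
    s2 = int(blue.sum())                  # total blue area
    s1 = int((blue & solid_mask).sum())   # blue area inside solid regions
    return (s2 - s1) / (h * w)            # (S2 - S1) / S0
```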
Preferably, the data integration and output in step S7 specifically includes the following steps:
S701, creating a table file, reading the organ slice microscope images, and writing the organ slice electron microscope image information to the file;
S702, writing the final prediction result and probability information obtained in step S5 to the file;
S703, writing the proportion information of the different cells and regions detected in step S6 to the file.
The invention also provides an intelligent pathological section image detection system that applies the above intelligent pathological section image detection method, comprising:
an organ slice image acquisition module, which collects organ slice electron microscope images: multi-category organ slice microscope images at different angles and under different conditions;
an organ slice type prediction module, which performs image processing on the multiple organ slice electron microscope images collected by the acquisition module at different angles and under different conditions, constructs a data set, establishes a classification model based on the YOLO network model with the Meta-Acon function as the activation function, inputs the processed pathological slice image set for classification model training to obtain trained first-type and second-type classification models, predicts the category of an input organ slice image, and corrects the final prediction result according to the result information of the first-type and second-type classifications;
a cell and region proportion statistics module for each cell and region of the organ slice image, which loads the parameter information corresponding to the predicted organ type and detects the proportions of blue cells, white areas, red cells, pink cells, brown cells, blue solid cells and blue hollow cells in the organ slice image;
and a data integration and output module, which creates a table file, writes the collected organ slice electron microscope image information to the file, writes the probability and prediction result information obtained by the organ slice type prediction module to the file, and writes the cell and region proportion statistics of the organ slice image to the file to obtain the final result file.
Examples
The intelligent pathological section image detection method described above is used for model training and testing. The YOLOv5 network is selected for improvement: a classification model is built on the backbone of the YOLOv5 network model, the Meta-Acon function is adopted as the activation function, and the processed pathological organ slice image set is input for classification model training to obtain trained classification models, namely a pathological section image five-classification model and a pathological section image nine-classification model.
The network structure diagram is shown in fig. 2, wherein Meta-Acon is the activation function, Conv is a convolution layer, AdaptiveAvgPool2d is adaptive average pooling, Bottleneck is a bottleneck layer, and BatchNorm2d is data normalization.
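For reference, the Meta-Acon activation named above is commonly implemented following its published formulation (Ma et al., "Activate or Not: Learning Customized Activation", 2021) as f(x) = (p1 − p2)·x·σ(β(p1 − p2)·x) + p2·x, where the switching factor β is generated per channel by a small squeeze network. The PyTorch sketch below follows that formulation and is not code taken from the patent:

```python
import torch
import torch.nn as nn

class MetaAconC(nn.Module):
    """Meta-ACON: f(x) = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x,
    with beta produced per channel by two 1x1 convolutions over the
    spatially averaged input."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        hidden = max(r, channels // r)
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.fc1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.fc2 = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze spatial dims, then generate the per-channel switching factor.
        beta = torch.sigmoid(self.fc2(self.fc1(x.mean(dim=(2, 3), keepdim=True))))
        dp = (self.p1 - self.p2) * x
        return dp * torch.sigmoid(beta * dp) + self.p2 * x

act = MetaAconC(channels=32)
y = act(torch.randn(2, 32, 8, 8))  # output keeps the input shape
```

When β is large the function approaches ReLU-like behavior; when β is small it approaches a linear map, which is what makes the activation "learn to activate or not" per channel.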
Model training: before training, parameters such as the learning rate and the number of training iterations are set, data enhancement is enabled, and images are randomly flipped, rotated and brightness-adjusted according to probability functions. The training code is executed, the configuration file is loaded, and the model training results obtained after training are shown in table 1. In the table, top1_acc is the accuracy with which the top-ranked predicted category matches the actual result, and top5_acc is the accuracy with which the top five predicted categories contain the actual result. train_loss is the loss on the training data and measures the fitting ability of the model on the training set; val_loss is the loss on the validation set and measures the fitting ability on unseen data. The data in table 1 show that the improved pathological section image models achieve lower loss and higher accuracy.
Table 1 Performance of pathological section image models
Network model | train_loss | val_loss | top1_acc | top5_acc |
---|---|---|---|---|
vgg16 five-classification model | 0.056 | 0.068 | 0.78 | 1 |
Pathological section image five-classification model | 0.00137 | 0.000101 | 1 | 1 |
yolov5 five-classification model | 0.0054 | 0.0071 | 1 | 1 |
Pathological section image nine-classification model | 0.178 | 0.368 | 0.867 | 1 |
yolov5 nine-classification model | 0.513 | 0.391 | 0.809 | 1 |
The test set is tested with the pathological section image nine-classification model; 1104 test set images are used in this embodiment. The test results are shown in table 2 and partial test results in fig. 4, where the title above each picture gives the actual label of the image and the predicted label. The test set is also tested with the pathological section image five-classification model; 242 test set images are used in total. The test results are shown in table 3 and partial test results in fig. 3, with the same labeling convention.
Table 2 Detection results of the pathological section image nine-classification model on the test set
Table 3 Detection results of the pathological section image five-classification model on the test set
Category | Number of images | top1_acc | top5_acc |
---|---|---|---|
Total | 242 | 1 | 1 |
Liver | 49 | 1 | 1 |
Lung | 54 | 1 | 1 |
Kidney | 52 | 1 | 1 |
Spleen | 54 | 1 | 1 |
Tumor | 33 | 1 | 1 |
The above embodiments are provided only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to examples, those skilled in the art should understand that modifications and equivalents may be made without departing from the scope of the technical solution of the present invention, all of which are intended to be covered by the claims of the present invention.
Claims (12)
1. An intelligent detection method for pathological section images is characterized by comprising the following steps:
s1, collecting an organ slice electron microscopic image;
s2, processing the acquired electron microscopic images of all organ sections to manufacture a data set;
S3, establishing a classification model based on the neural network part of a YOLO network model, adopting the Meta-Acon function as the activation function, and inputting the processed pathological section image set for classification model training to obtain a trained classification model;
s4, inputting the organ slice images to be detected into a trained classification model, and identifying the first classification and the second classification;
s5, correcting a final prediction result according to the prediction categories of the first classification and the second classification obtained in the step S4;
step S6, detecting different cells and areas in the pathological section image according to the organ prediction result obtained in the step S5;
and S7, integrating the information and outputting result statistics.
2. The method according to claim 1, wherein the organ slice electron microscopic images in step S1 are slice images of normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney and tumor acquired by electron microscope at different times.
3. The method for intelligently detecting pathological section images according to claim 1, wherein the step S2 comprises:
S201, magnification unification: processing the organ slice electron microscopic images with different magnification factors into images with the same magnification factor;
s202, enhancing image data: carrying out data enhancement processing on the image, and simulating organ slice images in various environments;
s203, image size unification processing: unifying the sizes according to the input requirements of the neural network;
S204, creating a data set: creating a data set according to organ category, and storing the pathological organ slice images into the data set; for each organ category, randomly dividing the images 7:2:1 into a training set, a test set and a validation set, naming the folder storing the pictures by the category name and placing it under the respective set folder.
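A minimal sketch of the 7:2:1 random split in S204, assuming the images for one organ category are given as a list of file names (folder creation is omitted):

```python
import random

def split_dataset(image_names, seed=0):
    """Randomly split a list of image file names 7:2:1 into
    training, test and validation subsets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    names = list(image_names)
    rng.shuffle(names)
    n = len(names)
    n_train = int(n * 0.7)
    n_test = int(n * 0.2)
    return {
        "train": names[:n_train],
        "test": names[n_train:n_train + n_test],
        "val": names[n_train + n_test:],   # remainder goes to validation
    }

# Hypothetical file names for one organ category.
splits = split_dataset([f"liver_{i:03d}.png" for i in range(100)])
```

Each subset would then be copied into a folder named after the category under the respective set folder, as the claim describes.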
4. The method according to claim 1, wherein in step S5, the first classification includes 5 categories of lung, kidney, liver, spleen, and tumor, and the second classification includes 9 categories of normal lung, normal liver, normal spleen, normal kidney, model lung, model liver, model spleen, model kidney, and tumor.
5. The method for intelligent detection of pathological section images according to claim 1, wherein the training of the classification model in step S3 comprises: selecting a YOLO network model for improvement, loading pre-trained network parameters by a transfer learning method, performing model training, and selecting the network model with the highest precision on the pathological organ slice image validation set as the pathological organ slice image detection network model.
6. The method for intelligently detecting pathological section images according to claim 5, wherein, in the network structure diagram of the pathological organ slice image detection network model, Meta-Acon is the activation function, Conv is a convolution layer, AdaptiveAvgPool2d is adaptive average pooling, Bottleneck is a bottleneck layer, and BatchNorm2d is a data normalization layer.
7. The method for intelligently detecting pathological section images according to claim 1, wherein step S5 comprises:
s501, reading the first type classification recognition result and the second type classification recognition result output in the step S4;
s502, merging the two recognition results, and removing repeated information;
s503, obtaining a final prediction result according to the first classification recognition result and the second classification recognition result; the final prediction result is determined as follows:
when an organ slice image is predicted as organ A in the first-level classification and as a subclass of organ A in the second-level classification, the final prediction result is determined to be that subclass of organ A; when an organ slice image is predicted as organ A in the first-level classification but as a subclass of organ B in the second-level classification, the final prediction result is determined to be organ A.
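The correction rule above amounts to checking whether the fine-grained (second-level) label belongs to the coarse (first-level) organ; a minimal sketch, with illustrative label names:

```python
# Parent organ for each second-level (nine-class) label.
PARENT = {
    "normal lung": "lung", "model lung": "lung",
    "normal liver": "liver", "model liver": "liver",
    "normal spleen": "spleen", "model spleen": "spleen",
    "normal kidney": "kidney", "model kidney": "kidney",
    "tumor": "tumor",
}

def correct_prediction(first_class: str, second_class: str) -> str:
    """Keep the fine-grained label only when both classifiers agree on
    the organ; otherwise fall back to the coarse five-class prediction."""
    if PARENT[second_class] == first_class:
        return second_class
    return first_class
```

For example, ("liver", "model liver") yields "model liver", while a disagreement such as ("liver", "normal lung") falls back to "liver".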
8. The method for intelligently detecting pathological section images according to claim 1, wherein step S6 includes:
s601, reading the final prediction category of the step S5;
s602, reading image length and width information of an image to be detected to obtain the pixel area of the image;
S603, loading corresponding parameter information according to the predicted organ type to detect the blue cell ratio, white region ratio, red cell ratio, pink cell ratio, brown cell ratio, blue solid cell ratio and blue hollow cell ratio in the organ slice image;
s604, writing the detection result into the file.
9. The method for intelligently detecting pathological section images according to claim 8, wherein the steps of detecting blue solid cells in S603 are as follows: screening out the blue part through a threshold value; setting information such as the area size, roundness and convexity of the target region; performing blob detection according to the set information; drawing a circle from the center position information of each detected target; using the drawn circles as a mask; combining the mask with the original image pixel by pixel to obtain a new image; extracting the blue region in the new image to obtain the blue solid cell region; and calculating the ratio of blue solid cells;
the calculation method is as follows:
area1 = S1 / S0
wherein area1 is the ratio of blue solid cells, S0 is the pixel area of the image to be detected obtained in S602, and S1 is the area of the extracted blue solid cell region.
10. The method for intelligently detecting pathological section images according to claim 9, wherein the steps of detecting blue hollow cells in S603 are as follows: screening out the blue part through a threshold value; removing the blue solid cell region by image subtraction to obtain the blue hollow cell region; and calculating the ratio of blue hollow cells.
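The subtraction step in claim 10 can be sketched as a boolean mask difference, assuming the blue-thresholded mask and the solid cell mask from claim 9 are available as arrays of equal shape:

```python
import numpy as np

def blue_hollow_cell_ratio(blue_mask: np.ndarray, solid_mask: np.ndarray) -> float:
    """Remove the blue solid cell region from the blue-thresholded mask;
    the remainder is taken as the blue hollow cell region, and its share
    of the whole image area is returned."""
    blue = blue_mask.astype(bool)
    solid = solid_mask.astype(bool)
    hollow = blue & ~solid                 # blue region minus solid cells
    return hollow.sum() / blue_mask.size   # hollow area over total pixel area
```

For instance, a 16-pixel blue region with a 4-pixel solid sub-region in a 10×10 image yields a hollow cell ratio of 12/100 = 0.12.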
11. The method for intelligently detecting pathological section images according to claim 1, wherein the data integration and output in step S7 specifically comprises the following steps:
S701, creating a table file, reading the organ slice electron microscopic images, and writing the organ slice electron microscopic image information into the file;
S702, writing the final prediction result and probability information obtained in step S5 into the file;
S703, writing the ratio information of the different cells and regions detected in step S6 into the file.
12. A pathological section image intelligent detection system, adopting the pathological section image intelligent detection method according to any one of claims 1-11, and comprising:
the organ slice image acquisition module, used for acquiring multi-category organ slice electron microscopic images at different angles and under different environments;
the organ slice type prediction module, used for performing image processing on the organ slice electron microscopic images acquired at different angles and under different environments by the organ slice image acquisition module, producing a data set, establishing a classification model based on the YOLO network model with the Meta-Acon function as the activation function, inputting the processed pathological section image set for classification model training to obtain a trained first-level classification model and second-level classification model, predicting the class of an input organ slice image, and correcting the final prediction result according to the result information of the first-level and second-level classifications;
the cell and region ratio statistics module, used for detecting, by loading the corresponding parameter information according to the predicted organ type, the blue cell ratio, white region ratio, red cell ratio, pink cell ratio, brown cell ratio, blue solid cell ratio and blue hollow cell ratio in the organ slice image;
and the data integration and output module, which creates a table file, writes the acquired organ slice electron microscopic image information into the file, writes the prediction result and probability information obtained by the organ slice type prediction module into the file, and writes the cell and region ratio statistics of the organ slice image into the file to obtain the final result file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310122248.4A CN116205868A (en) | 2023-01-19 | 2023-01-19 | Intelligent pathological section image detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116205868A true CN116205868A (en) | 2023-06-02 |
Family
ID=86518730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310122248.4A Pending CN116205868A (en) | 2023-01-19 | 2023-01-19 | Intelligent pathological section image detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116205868A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118172774A (en) * | 2024-05-13 | 2024-06-11 | 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) | Low-magnification image analysis method and device for automatically identifying region of interest |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||