
CN113689412A - Thyroid image processing method and device, electronic equipment and storage medium - Google Patents

Thyroid image processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113689412A
Authority
CN
China
Prior art keywords
image
detected
thyroid
learning model
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110995168.0A
Other languages
Chinese (zh)
Inventor
王庆军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
6th Medical Center of PLA General Hospital
Original Assignee
6th Medical Center of PLA General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 6th Medical Center of PLA General Hospital filed Critical 6th Medical Center of PLA General Hospital
Priority to CN202110995168.0A priority Critical patent/CN113689412A/en
Publication of CN113689412A publication Critical patent/CN113689412A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a thyroid image processing method, a thyroid image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image to be detected, wherein the image to be detected comprises an image obtained by imaging a thyroid region; inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model for the image to be detected; and, when the first detection result indicates that a thyroid nodule exists in the image to be detected, inputting the lesion region that the first detection result indicates as containing a thyroid nodule into a tested machine learning model to obtain a second detection result of the machine learning model for the lesion region. In this scheme, the image to be detected is analyzed by the preset deep learning model and the machine learning model in combination; because the machine learning model does not need to process images without thyroid nodules or regions outside the lesion region, the amount of computation is reduced, the computational efficiency is improved, and the accuracy of thyroid nodule detection is also improved.

Description

Thyroid image processing method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of computer image processing, in particular to a thyroid image processing method and device, an electronic device and a storage medium.
Background
At present, to detect thyroid nodules in a human body, an image of the thyroid region is typically obtained by ultrasonic imaging, and thyroid nodules are then identified in the image either manually or by machine. Manual identification demands considerable professional knowledge from the operator and is inefficient. Machine identification is limited by the processing methods of current network models, and its accuracy and efficiency still need to be improved.
Disclosure of Invention
An object of the embodiments of the present application is to provide a thyroid image processing method, a thyroid image processing apparatus, an electronic device, and a storage medium, which can improve accuracy and efficiency of thyroid nodule detection.
In order to achieve the above object, embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a thyroid image processing method, including: acquiring an image to be detected, wherein the image to be detected comprises an image obtained by imaging a thyroid region; inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model for the image to be detected; and, when the first detection result indicates that a thyroid nodule exists in the image to be detected, inputting the lesion region that the first detection result indicates as containing a thyroid nodule into a tested machine learning model, so as to obtain a second detection result of the machine learning model for the lesion region, wherein the second detection result indicates the severity of the thyroid nodule in the lesion region.
In the above embodiment, the preset deep learning model is used to detect whether a thyroid nodule exists in the image to be detected; when a nodule exists, only the lesion region of the image is input into the machine learning model, so the machine learning model does not need to process images without thyroid nodules or regions outside the lesion region, which reduces the amount of computation and improves computational efficiency. In addition, combining the preset deep learning model with the machine learning model helps improve the accuracy of thyroid nodule detection.
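The two-stage flow described above can be illustrated with a short sketch. The arguments detect_nodule_regions and classify_severity are hypothetical callables standing in for the preset deep learning model and the tested machine learning model; they are not interfaces defined by this application.

```python
def process_thyroid_image(image, detect_nodule_regions, classify_severity):
    """Minimal sketch of the two-stage detection flow (assumed interfaces)."""
    # Stage 1: the preset deep learning model looks for thyroid nodules and
    # returns the segmented lesion regions (empty list if none are found).
    lesion_regions = detect_nodule_regions(image)
    if not lesion_regions:
        # No nodule: the machine learning model is never invoked,
        # which is where the computational saving comes from.
        return {"nodule": False, "severity": None}
    # Stage 2: only the lesion regions are graded by the machine learning model.
    severities = [classify_severity(region) for region in lesion_regions]
    return {"nodule": True, "severity": severities}
```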
With reference to the first aspect, in some optional embodiments, the preset deep learning model includes a residual error network and a candidate area network, and before acquiring the image to be detected, the method further includes:
acquiring a training data set and a testing data set, wherein each image in the training data set and the testing data set corresponds to at least one label, and the labels are used for marking a region with thyroid nodules or a region without thyroid nodules;
inputting the training data set into the residual error network to obtain a training image with thyroid nodules in the training data set;
inputting the training image region with the thyroid nodule into the candidate region network to obtain a candidate region, wherein the candidate region is a focus map region which is segmented from the training image and represents the thyroid nodule;
training a gcForest model by using the candidate region to obtain a trained gcForest model;
and testing the trained gcForest model according to the test data set to obtain the tested machine learning model.
In the above embodiment, the residual error network, the candidate area network, and the gcForest model are trained by the training data set, and the gcForest model is tested by the test data set, so that the accuracy of model detection can be improved.
With reference to the first aspect, in some optional embodiments, training a gcForest model by using the candidate region to obtain a trained gcForest model includes:
extracting features of the candidate region through sliding windows with different specified window sizes to obtain feature maps corresponding to the different specified window sizes;
inputting the characteristic diagram into a random forest in the gcForest model, and training the gcForest model to obtain the trained gcForest model.
In the above embodiment, features are extracted from the candidate region through multi-size sliding windows to train the gcForest model, which reduces the number of training samples required while still achieving the purpose of model training.
With reference to the first aspect, in some optional embodiments, acquiring an image to be detected includes:
and selecting an interface corresponding to the image type to acquire the image to be detected according to the image type of the image to be detected, wherein the image type comprises at least one of PNG, JPG, BMP and DICOM.
In the above embodiment, the image to be detected is acquired through the interface corresponding to its image type, so that images to be detected of different types can be processed, which broadens the range of images to which the method applies.
With reference to the first aspect, in some optional embodiments, inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected includes:
inputting the image to be detected into a residual error network in the preset deep learning model to obtain an intermediate detection result of whether thyroid nodules exist in the image to be detected;
and when the intermediate detection result shows that thyroid nodules exist in the image to be detected, inputting the image to be detected into the candidate area network in the deep learning model to obtain a first detection result, wherein the first detection result comprises the focus map area which is segmented from the image to be detected and shows that thyroid nodules exist.
With reference to the first aspect, in some optional embodiments, the image to be detected input into the preset deep learning model is an image obtained by preprocessing the image to be detected, where the preprocessing includes at least one of image denoising and image enhancement.
With reference to the first aspect, in some optional embodiments, the image to be tested includes an image of a thyroid site obtained by magnetic resonance imaging.
In a second aspect, the present application also provides a thyroid image processing apparatus, the apparatus comprising:
the device comprises an acquisition unit, a detection unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be detected, and the image to be detected comprises an image obtained by detecting a thyroid part;
the first detection unit is used for inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected;
and a second detection unit, configured to, when the first detection result indicates that a thyroid nodule exists in the image to be detected, input a lesion map region indicating that a thyroid nodule exists in the image to be detected in the first detection result into a tested machine learning model, to obtain a second detection result of the machine learning model for the lesion map region, where the second detection result indicates a severity of the thyroid nodule in the lesion map region.
In a third aspect, the present application further provides an electronic device, which includes a processor and a memory coupled to each other, wherein the memory stores a computer program, and when the computer program is executed by the processor, the electronic device is caused to perform the method described above.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the method described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a thyroid image processing method according to an embodiment of the present application.
Fig. 2 is a schematic network structure diagram of a deep learning model according to an embodiment of the present disclosure.
Fig. 3 is a schematic network structure diagram of a machine learning model according to an embodiment of the present disclosure.
Fig. 4 is a block diagram of a thyroid image processing apparatus according to an embodiment of the present application.
Reference numerals: 200 - thyroid image processing apparatus; 210 - obtaining unit; 220 - first detection unit; 230 - second detection unit.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that the terms "first," "second," and the like are used merely to distinguish one description from another, and are not intended to indicate or imply relative importance. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The application provides an electronic device that can be used to analyze and detect captured thyroid images, so as to assist a user in detecting thyroid nodules.
The electronic device may include a processing module and a memory module. The storage module stores therein a computer program that, when executed by the processing module, enables the electronic device to execute each step in the thyroid image processing method described below.
In this embodiment, the electronic device may further include other modules, for example, the electronic device may further include a communication module for establishing a communication connection with other devices. The processing module, the storage module and the communication module are electrically connected directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The electronic device may be, but is not limited to, a personal computer, a server, etc.
Referring to fig. 1, the present application provides a thyroid image processing method, which can be applied to the electronic device, where the electronic device executes or implements the steps of the method, and the method includes the following steps:
step S110, acquiring an image to be detected, wherein the image to be detected comprises an image obtained by detecting a thyroid part;
step S120, inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected;
step S130, when the first detection result indicates that a thyroid nodule exists in the image to be detected, inputting the lesion region that the first detection result indicates as containing a thyroid nodule into the tested machine learning model, to obtain a second detection result of the machine learning model for the lesion region, where the second detection result indicates the severity of the thyroid nodule in the lesion region.
In the above embodiment, the preset deep learning model is used to detect whether a thyroid nodule exists in the image to be detected; when a nodule exists, only the lesion region of the image is input into the machine learning model, so the machine learning model does not need to process images without thyroid nodules or regions outside the lesion region, which reduces the amount of computation and improves computational efficiency. In addition, combining the preset deep learning model with the machine learning model helps improve the accuracy of thyroid nodule detection.
The individual steps of the process are explained in detail below, as follows:
before step S110, the method includes the steps of model training and testing. For example, referring to fig. 2 and fig. 3, the predetermined deep learning model may be Mask R-CNN. The Mask R-CNN is a branch added with a prediction segmentation Mask (which can be interpreted as a 'Mask') on the basis of the fast R-CNN (Region Convolutional Neural Network). Mask R-CNN includes a residual network and a candidate area network (RPN), and before step S110, the method may include steps S101 to S105 as follows:
step S101, a training data set and a testing data set are obtained, each image in the training data set and the testing data set corresponds to at least one label, and the labels are used for marking the regions with thyroid nodules or marking the regions without thyroid nodules;
step S102, inputting the training data set into the residual error network to obtain a training image with thyroid nodules in the training data set;
step S103, inputting the training image region with thyroid nodules into the candidate region network to obtain a candidate region, wherein the candidate region is a focus map region which is segmented from the training image and represents the thyroid nodules;
step S104, training a gcForest model by using the candidate region to obtain the trained gcForest model;
and S105, testing the trained gcForest model according to the test data set to obtain the tested machine learning model.
Understandably, the training data set and the testing data set are image data prepared in advance by the user. Both data sets include a plurality of images, and the number of images they contain can be determined flexibly according to the actual situation and is not specifically limited here. The training data set is used to train the models, and the testing data set is used to test them, so as to improve the accuracy of model detection.
The training data set and the testing data set are images obtained by imaging the thyroid region of a human body, and corresponding labels can be set manually for each image. The label content can be chosen according to the actual situation; for example, a label indicates whether the thyroid region in the image contains a nodule and, when a thyroid nodule exists, marks the region where the nodule is located. The images may be, but are not limited to, MRI (Magnetic Resonance Imaging) images obtained by magnetic resonance imaging of the thyroid region.
When the images in the training data set and the testing data set are used for region-level labeling of thyroid nodules, the user can label and segment flexibly according to the actual situation. For example, the user can draw a rectangle or an ellipse directly over a thyroid nodule region on the image using the QPair tool in the PyQt5.QtGui package and then call the rectangle and ellipse interfaces in the OpenCV package to segment the drawn region directly; or select points with mouse clicks using QPair in the PyQt5.QtGui package, connect the points into a polygon, and finally call the polygon interface in the OpenCV package to segment the delineated region directly; or use the Livewire algorithm in the OpenCV package to realize a magnetic-lasso style region segmentation; or, based on a preset threshold range for the gray values of pixels in the image, generate a threshold segmentation result after customizing the threshold with the threshold tool in the OpenCV (Open Source Computer Vision library) package, the preset threshold range being the gray-value range corresponding to the presence of thyroid nodules. Note that the software tool packages used for thyroid nodule region labeling (such as the PyQt5.QtGui package, the QPair tool, the OpenCV package, etc.) are well known to those skilled in the art.
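A rough sketch of how such a drawn rectangle or clicked polygon can be turned into a segmented region with OpenCV is given below; the file name and vertex coordinates are placeholders, and the annotation UI itself is omitted.

```python
import cv2
import numpy as np

def cut_out_polygon(image, points):
    """Segment a delineated nodule region given the annotator's clicked vertices."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(points, dtype=np.int32)], 255)
    return cv2.bitwise_and(image, image, mask=mask)

def cut_out_rectangle(image, top_left, bottom_right):
    """Segment a drawn rectangular nodule region."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.rectangle(mask, top_left, bottom_right, 255, thickness=-1)
    return cv2.bitwise_and(image, image, mask=mask)

# Hypothetical usage on a labeled training image:
# img = cv2.imread("thyroid_mri_slice.png")
# nodule = cut_out_polygon(img, [(120, 80), (160, 85), (155, 130), (118, 125)])
```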
In this embodiment, compared with ultrasonic imaging, an MRI image has better tissue resolution and contrast, can simultaneously scan the thyroid gland and surrounding tissues in multiple directions and layers, and can provide information on morphology, tumor function and biological characteristics, so that the MRI image is used for detecting thyroid nodules, which is beneficial to improving the accuracy of detection.
In this embodiment, the residual network can be chosen flexibly according to the actual situation. For example, the residual network may be a ResNet101 convolutional neural network. The residual network is used to extract image features of nodule or micro-nodule regions in the image; the multi-scale variation encountered in thyroid nodule detection can then be handled by a Feature Pyramid Network (FPN) at very little extra computational cost, so that more thyroid nodule features are extracted.
The candidate area network is a lightweight neural network that scans the feature map with a sliding window and performs convolution, generating Anchors by setting different window sizes and aspect ratios; the Anchors define relative position reference points in the image.
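A small sketch of how anchors can be enumerated from a set of window sizes and aspect ratios at one sliding-window position is shown below; the exact parameterization used by the application is not specified here, so the scale/ratio convention is an assumption.

```python
import numpy as np

def make_anchors(cx, cy, scales=(4, 8, 16), ratios=(0.5, 1.0, 2.0)):
    """Return (x1, y1, x2, y2) anchor boxes centered at one sliding-window
    position, one box per (scale, aspect-ratio) pair."""
    boxes = []
    for s in scales:
        for r in ratios:
            h = s * np.sqrt(r)        # assumed convention: ratio = height / width
            w = s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

# e.g. make_anchors(64, 64) yields 9 anchors around the point (64, 64)
```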
Referring again to fig. 2, the candidate area network may perform convolution on the image output by the residual error network together with the optimized ADC (Apparent Diffusion Coefficient) map to obtain an intermediate feature map. The ADC-map optimization method is well known to those skilled in the art. The ADC is a parameter describing the speed and range of diffusion of water molecules in magnetic resonance diffusion-weighted imaging; the ADC map is acquired in advance and is well known to those skilled in the art.
The candidate area network may output two kinds of information for each Anchor. First, a foreground/background classification of the Anchor: the foreground class indicates that one or more target categories are present in the Anchor with a certain probability, while the background class refers to objects other than the targets to be detected, which can be filtered out subsequently. Second, a refinement of the preset bounding box: when the center of the target does not coincide exactly with the center of a foreground Anchor, i.e. when there is an offset, the network outputs percentage changes of the position information (x, y, w, h) so that the Anchor position can be adjusted precisely and fits the target position more accurately. Here x and y are the horizontal and vertical coordinates of the target center, w is the width of the preset box, and h is its height. Foreground Anchors overlap each other; Anchors with low foreground scores are filtered out by non-maximum suppression, the highest-scoring Anchors are retained, and candidate regions (for example, the square region in the middle image of fig. 2), i.e. Region Proposals, are finally obtained.
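The non-maximum suppression step mentioned above can be sketched as follows; the IoU threshold is an assumed value, not one specified by the application.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.7):
    """Keep the highest-scoring foreground anchors and drop heavily overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        # intersection of the best box with the remaining boxes
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_b + area_r - inter + 1e-9)
        order = rest[iou < iou_threshold]
    return keep
```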
Referring again to fig. 2, the candidate area network inputs the obtained Region Proposals into an ROI Align layer, which maps each generated region proposal onto the feature map and outputs regions of interest (ROIs) of the same size for subsequent classification, regression, and mask generation. Here, FC (Fully Connected layers) is used for bounding-box regression, result classification, and the like, and FCN (Fully Convolutional Network) is used for image segmentation.
Mask R-CNN generates the mask used for detecting thyroid nodules by adding a mask branch at the final output (i.e., the last fully connected layers of a conventional convolutional neural network are replaced with convolutional layers).
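For orientation only, a stock Mask R-CNN can be instantiated with torchvision as shown below (assuming torchvision ≥ 0.13). The off-the-shelf model uses a ResNet-50 + FPN backbone rather than the ResNet101 backbone described here, and the two-class layout is an assumption, so this is an approximation of the architecture rather than the application's exact model.

```python
import torch
import torchvision

# Two classes assumed: background and "thyroid nodule".
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

with torch.no_grad():
    dummy = [torch.rand(3, 512, 512)]   # one RGB image, values in [0, 1]
    out = model(dummy)[0]               # dict with 'boxes', 'labels', 'scores', 'masks'
print(out["boxes"].shape, out["masks"].shape)
```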
In this embodiment, the gcForest model can optimize its diagnostic ability according to the size characteristics of the thyroid nodule region. That is, the gcForest model extracts features from the lesion region containing thyroid nodules through windows of different sizes, so that model training and testing can be achieved with fewer training samples while improving detection accuracy. Exemplarily, step S104 may include:
extracting features of the candidate region through sliding windows with different specified window sizes to obtain feature maps corresponding to the different specified window sizes;
inputting the characteristic diagram into a random forest in the gcForest model, and training the gcForest model to obtain the trained gcForest model.
In this embodiment, the gcForest model may include a Multi-Grained Scanning unit and a Cascade Forest unit. The multi-grained scanning unit generates feature maps of different sizes by setting different scanning window sizes and feeds them into two types of preset random forests, which generate the input vectors of the cascade forest unit. The two types of preset random forests are a random forest and a completely random forest; they differ in how their leaf nodes are generated and are used to produce the feature maps that serve as input to the cascade unit. The cascade forest unit learns from the feature maps generated by the multi-grained scanning unit and produces a detection result. The detection result includes content indicating the severity of the thyroid nodules; for example, it may include the size of the thyroid nodules and their severity, such as whether they are benign or malignant.
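A rough sklearn-based sketch of one granularity of the multi-grained scanning stage is given below. The window size, forest sizes, and the use of ExtraTreesClassifier as a stand-in for the "completely random forest" are assumptions made for illustration; the cascade stage is omitted.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

def multi_grained_scan(patches, labels, window=3):
    """Slide a window over each lesion patch, fit a random forest and a
    (completely) random forest on the flattened windows, and concatenate their
    class-probability outputs into one transformed feature vector per patch."""
    n, h, w = patches.shape
    slices, slice_labels = [], []
    for patch, y in zip(patches, labels):
        for i in range(h - window + 1):
            for j in range(w - window + 1):
                slices.append(patch[i:i + window, j:j + window].ravel())
                slice_labels.append(y)
    X = np.asarray(slices)
    y_win = np.asarray(slice_labels)
    rf = RandomForestClassifier(n_estimators=50).fit(X, y_win)
    crf = ExtraTreesClassifier(n_estimators=50, max_features=1).fit(X, y_win)
    probs = np.hstack([rf.predict_proba(X), crf.predict_proba(X)])
    n_windows = (h - window + 1) * (w - window + 1)
    return probs.reshape(n, n_windows * probs.shape[1])  # one feature vector per patch
```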
In this embodiment, the images input into the residual error network, as well as the training data set and the test data set of the gcForest model, may be preprocessed images. For example, data denoising and data enhancement are performed on the MRI images: two-dimensional wavelet transforms with different wavelet basis functions are applied to the thyroid MRI image to complete denoising, and a Laplacian of Gaussian operator is used to smooth the MRI image so as to strengthen its texture features. The preprocessed images are then input into the Mask R-CNN network model to train the residual error network, the candidate area network, the gcForest model, and the other models, which improves the accuracy of model recognition and detection.
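A sketch of this preprocessing, using the PyWavelets and OpenCV packages, is shown below; the wavelet basis ("db4"), the threshold rule, and the sharpening weight are assumptions chosen for illustration rather than values taken from the application.

```python
import cv2
import numpy as np
import pywt

def preprocess_mri(img):
    """Two-dimensional wavelet denoising followed by Laplacian-of-Gaussian
    sharpening of a thyroid MRI slice (grayscale uint8 array)."""
    f = img.astype(np.float64)
    # wavelet decomposition; soft-threshold the detail coefficients
    coeffs = pywt.wavedec2(f, "db4", level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(f.size))
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in level) for level in coeffs[1:]
    ]
    denoised = pywt.waverec2(coeffs, "db4")[: f.shape[0], : f.shape[1]]
    # Laplacian of Gaussian, subtracted to emphasise texture
    log = cv2.Laplacian(cv2.GaussianBlur(denoised, (5, 5), 0), cv2.CV_64F)
    return np.clip(denoised - 0.5 * log, 0, 255).astype(np.uint8)
```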
Since the maximum diameter of a thyroid nodule lesion is usually less than or equal to 1 cm and the aspect ratio of the lesion region is between 0.5 and 2.0, the sliding-window sizes in the candidate area network may be set to [4, 8, 16] (the numbers 4, 8 and 16 are the side lengths, in pixels, of three sliding windows of different sizes) and the window aspect ratios may be set to [0.5, 1, 2] (the numbers 0.5, 1 and 2 are the aspect ratios of the three sliding windows), so as to optimize the Region Proposals generated by the candidate area network and produce the final automatic thyroid nodule detection and segmentation model, as follows:
L = L_cls + λ1·L_box + λ2·L_mask    (1)
In equation (1), L is the total loss function, L_cls is the classification loss function, L_box is the detection (bounding-box) loss function, L_mask is the segmentation loss function, and λ1 and λ2 are weight parameters used for balancing. Because radiomics feature values of the lesion region need to be extracted further, the segmentation branch of the Mask R-CNN model can use an average binary cross-entropy loss function to extract the feature values, as follows:
L_mask = -(1/m²) · Σ_{1≤i,j≤m} [ y_ij · log(ŷ_ij^k) + (1 - y_ij) · log(1 - ŷ_ij^k) ]    (2)
In formula (2), m is the dimension of the two-dimensional mask, i and j are loop variables, k denotes the class, and y_ij and ŷ_ij^k denote the label value and the predicted value, respectively.
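A minimal PyTorch sketch of the combined loss in equations (1) and (2) follows; the particular losses used for the classification and box terms (cross-entropy and smooth-L1) and the default weights are assumptions, since only the overall form is specified above.

```python
import torch.nn.functional as F

def total_loss(cls_logits, cls_targets, box_pred, box_target,
               mask_logits, mask_target, lam1=1.0, lam2=1.0):
    """L = L_cls + λ1·L_box + λ2·L_mask, with L_mask the average binary
    cross-entropy over the predicted m×m mask of the target class."""
    l_cls = F.cross_entropy(cls_logits, cls_targets)
    l_box = F.smooth_l1_loss(box_pred, box_target)
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_target)
    return l_cls + lam1 * l_box + lam2 * l_mask
```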
In this embodiment, the electronic device can use Mask R-CNN to locate the thyroid nodule lesion region, but Mask R-CNN cannot classify it reliably (e.g., the severity of the thyroid nodule remains unclear). The gcForest model, by contrast, can be used to classify the severity of thyroid nodules, so the severity of thyroid nodules is determined by training the gcForest model.
Referring again to fig. 3, the inputs passed from Mask R-CNN to the gcForest model can include two types of maps; for example, input A is a diffusion-weighted image and input B is an apparent diffusion coefficient map. The administrator can set the sliding-window sizes of the multi-scale scanning stage of the gcForest model to different specified sizes according to the size characteristics of thyroid nodules, for example 3 × 3 and 5 × 5, where 3 × 3 means the window is 3 pixels wide and 3 pixels high, and 5 × 5 means it is 5 pixels wide and 5 pixels high.
Referring again to fig. 3, to further improve the recognition accuracy of the gcForest model, the electronic device can fuse the extracted radiomics features into the cascade process. For example, the electronic device concatenates the extracted micro-nodule radiomics features with the output values and input vectors of the random forests of the previous layer to form the input of the next layer, and repeats this process until model validation converges. The "120div" shown in fig. 3 refers to a feature vector composed of 120 extracted radiomics features.
In this embodiment, when the gcForest model is tested, the labels of the images in the test data set indicate the thyroid nodule severity level. The gcForest model is tested on the test data set until the model converges, which improves the accuracy with which the gcForest model classifies the severity of thyroid nodules.
In step S110, the manner in which the electronic device acquires the image to be detected can be determined flexibly according to the actual situation. For example, the electronic device may obtain the image to be detected from a USB drive, a server, or another device on which the image to be detected is stored.
In addition, the electronic device can select the corresponding interface to acquire the image to be detected according to the image type. For example, step S110 may include: according to the image type of the image to be detected, selecting the interface corresponding to that image type to acquire the image, where the image type includes, but is not limited to, at least one of PNG (Portable Network Graphics), JPG (Joint Photographic Experts Group), BMP (Bitmap), and DICOM (Digital Imaging and Communications in Medicine). PNG, JPG, BMP, and DICOM are image data formats well known to those skilled in the art.
Exemplarily, if the image type of the image to be detected is PNG, JPG, or BMP, the image to be detected is read using the OpenCV interface in the electronic device, the basic information of the image is acquired, the image is converted from the BGR (Blue Green Red) color space into the RGB (Red Green Blue) color space, and finally the image to be detected is displayed using the QImage interface in PyQt5. PyQt5 is a set of Python bindings for Digia's Qt5 application framework, used here as a Python module; it is a toolkit well known to those skilled in the art.
If the image type of the image to be detected is DICOM, the image is read using the Pydicom interface, its basic information is acquired, it is converted into JPEG format and then from the BGR color space into the RGB color space, and finally it is displayed using the QImage interface in PyQt5. The OpenCV interface, the Pydicom interface, and the QImage interface are interfaces for acquiring and displaying images on the electronic device and are well known to those skilled in the art.
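Put together, the type-dependent loading described in the two paragraphs above might look roughly like the following; file-extension checks stand in for whatever type detection the application actually uses, and the display step via QImage is omitted.

```python
import cv2
import numpy as np
import pydicom

def load_image(path):
    """Read a PNG/JPG/BMP image with OpenCV or a DICOM file with pydicom,
    returning an RGB uint8 array."""
    lower = path.lower()
    if lower.endswith((".png", ".jpg", ".jpeg", ".bmp")):
        bgr = cv2.imread(path, cv2.IMREAD_COLOR)
        return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    if lower.endswith(".dcm"):
        arr = pydicom.dcmread(path).pixel_array.astype(np.float32)
        arr = (255 * (arr - arr.min()) / max(float(arr.ptp()), 1e-6)).astype(np.uint8)
        return cv2.cvtColor(arr, cv2.COLOR_GRAY2RGB) if arr.ndim == 2 else arr
    raise ValueError(f"unsupported image type: {path}")
```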
In this embodiment, the image to be detected input into the preset deep learning model is an image obtained by preprocessing the image to be detected, where the preprocessing includes at least one of image denoising and image enhancement. That is, the image to be measured needs to be preprocessed before step S120.
Image denoising includes, but is not limited to, Gaussian filtering, median filtering, P-M (Perona-Malik) equation denoising (the P-M equation algorithm uses the principle of anisotropic nonlinear diffusion to filter the image; the name combines those of its two authors), and TV (Total Variation) denoising.
The gaussian filtering denoising method may be: and calling a GaussianBlur interface in an OpenCV toolkit which is pre-installed in the electronic equipment, and setting filter parameters to realize filtering and denoising, wherein the filter parameters can be flexibly determined according to actual conditions.
The median filtering denoising method may be: calling the medianBlur interface in the OpenCV toolkit pre-installed on the electronic device and setting the filtering parameters to realize filtering and denoising, where the filtering parameters can be determined flexibly according to the actual situation.
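Both filters amount to a single OpenCV call; the kernel sizes and the file name below are placeholder values, not parameters prescribed by the application.

```python
import cv2

img = cv2.imread("thyroid_mri_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
gaussian_denoised = cv2.GaussianBlur(img, (5, 5), 0)   # Gaussian filtering
median_denoised = cv2.medianBlur(img, 5)               # median filtering
```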
The P-M equation denoising method may be: realizing image denoising according to the nonlinear anisotropic diffusion equation (the P-M diffusion equation).
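An explicit-iteration sketch of P-M (Perona-Malik) diffusion is shown below; the iteration count, conduction parameter kappa, and step size are assumed values.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, step=0.2):
    """A few explicit iterations of Perona-Malik anisotropic diffusion:
    smooth strongly in flat regions, weakly across strong edges."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # finite differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        cn = np.exp(-(dn / kappa) ** 2)   # conduction coefficients (edge-stopping)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```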
The TV denoising method may be: an anisotropic model smooths the image along a gradient descent flow, smoothing as much as possible inside the image (where differences between adjacent pixels are small) while smoothing as little as possible at the edges of the image (the image contours).
The image enhancement may be: calling the ImageEnhance interface in the PIL package pre-installed on the electronic device to realize automatic image enhancement. The PIL package is a commonly used Python imaging package that can be used to enhance images and strengthen their texture features; it is a software tool well known to those skilled in the art.
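A minimal Pillow (PIL) sketch of this enhancement step follows; the enhancement factors and file names are placeholders.

```python
from PIL import Image, ImageEnhance

img = Image.open("thyroid_mri_slice.png")        # hypothetical file
img = ImageEnhance.Contrast(img).enhance(1.5)    # strengthen contrast
img = ImageEnhance.Sharpness(img).enhance(2.0)   # strengthen texture detail
img.save("thyroid_mri_enhanced.png")
```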
Through this image preprocessing, the electronic device can filter out some of the interference factors in the image and enhance the corresponding image features, which helps improve the accuracy with which the preset deep learning model and the machine learning model detect the image to be detected.
In step S120, the image to be detected input into the preset deep learning model is an image obtained by preprocessing the image to be detected, where the preprocessing includes at least one of the image denoising and the image enhancement.
As an alternative implementation, step S120 may include:
inputting the image to be detected into a residual error network in the preset deep learning model to obtain an intermediate detection result of whether thyroid nodules exist in the image to be detected;
and when the intermediate detection result shows that thyroid nodules exist in the image to be detected, inputting the image to be detected into the candidate area network in the deep learning model to obtain a first detection result, wherein the first detection result comprises the focus map area which is segmented from the image to be detected and shows that thyroid nodules exist.
Understandably, the residual network can be used to detect whether a thyroid nodule exists in the image to be detected. For an image to be detected in which thyroid nodules exist, the candidate area network can locate the region of the thyroid nodules, take it as the lesion region, and segment it from the image to be detected. The segmented lesion region can be taken as the first detection result and input into the gcForest model, so that the gcForest model can subsequently perform targeted severity detection on the lesion region only, rather than on the whole image to be detected, which reduces the amount of computation and improves detection accuracy.
In step S130, the machine learning model includes, but is not limited to, a gcForest model used to detect the severity of thyroid nodules in the lesion region. Severity can be classified according to the actual situation; for example, the severity may include a classification characterizing the thyroid nodule as benign or malignant. In this way, by detecting the image to be detected, the electronic device can determine whether a thyroid nodule exists and how severe it is, thereby providing a basis for diagnosis, helping staff complete thyroid nodule detection, and improving the efficiency and accuracy of detection.
Referring to fig. 4, an embodiment of the present invention further provides a thyroid image processing apparatus 200, which can be applied to the electronic device described above for executing the steps of the method. The thyroid image processing apparatus 200 includes at least one software functional module which may be stored in a memory module in the form of software or Firmware (Firmware) or solidified in an Operating System (OS) of an electronic device. The processing module is used for executing executable modules stored in the storage module, such as a software function module and a computer program included in the thyroid image processing apparatus 200.
The thyroid image processing apparatus 200 may include an acquisition unit 210, a first detection unit 220, and a second detection unit 230, and may perform the following operation steps:
an obtaining unit 210, configured to obtain an image to be detected, where the image to be detected includes an image obtained by detecting a thyroid gland part;
the first detection unit 220 is configured to input the image to be detected into a preset deep learning model, so as to obtain a first detection result of the preset deep learning model on the image to be detected;
a second detecting unit 230, configured to, when the first detection result indicates that a thyroid nodule exists in the image to be detected, input a lesion map region indicating that a thyroid nodule exists in the image to be detected in the first detection result into the machine learning model after the test, to obtain a second detection result of the machine learning model for the lesion map region, where the second detection result indicates a severity of the thyroid nodule in the lesion map region.
Optionally, the thyroid image processing apparatus 200 may further include a model training unit and a model testing unit. Before step S110 is executed, the obtaining unit 210 may be further configured to obtain a training data set and a testing data set, where each image in the training data set and the testing data set corresponds to at least one label, and the label is used to label a region where a thyroid nodule exists or label a region where a thyroid nodule does not exist. The model training unit is used for inputting the training data set into the residual error network to obtain a training image with thyroid nodules in the training data set, inputting the training image region with the thyroid nodules into the candidate region network to obtain a candidate region, wherein the candidate region is a focus map region which is segmented from the training image and represents the presence of the thyroid nodules, and the candidate region is used for training a gcForest model to obtain the trained gcForest model. And the model testing unit is used for testing the trained gcForest model according to the test data set to obtain the tested machine learning model.
Optionally, the model training unit may be further configured to: extracting features of the candidate region through sliding windows with different specified window sizes to obtain feature maps corresponding to the different specified window sizes; inputting the characteristic diagram into a random forest in the gcForest model, and training the gcForest model to obtain the trained gcForest model.
Optionally, the obtaining unit 210 may be further configured to: and selecting an interface corresponding to the image type to acquire the image to be detected according to the image type of the image to be detected, wherein the image type comprises at least one of PNG, JPG, BMP and DICOM.
Optionally, the first detecting unit 220 may be further configured to: inputting the image to be detected into a residual error network in the preset deep learning model to obtain an intermediate detection result of whether thyroid nodules exist in the image to be detected; and when the intermediate detection result shows that thyroid nodules exist in the image to be detected, inputting the image to be detected into the candidate area network in the deep learning model to obtain a first detection result, wherein the first detection result comprises the focus map area which is segmented from the image to be detected and shows that thyroid nodules exist.
In this embodiment, the processing module may be an integrated circuit chip having signal processing capability. The processing module may be a general purpose processor. For example, the Processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application.
The memory module may be, but is not limited to, a random access memory, a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, and the like. In this embodiment, the storage module may be configured to store an image to be tested, a preset deep learning model, a machine learning model, and the like. Of course, the storage module may also be used to store a program, and the processing module executes the program after receiving the execution instruction.
The communication module is used for establishing communication connection between the electronic equipment and other equipment (such as a server) through a network and receiving and transmitting data through the network.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working process of the electronic device described above may refer to the corresponding process of each step in the foregoing method, and will not be described in detail herein.
The embodiment of the application also provides a computer readable storage medium. The computer-readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to execute the thyroid image processing method as described in the above embodiments.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by hardware, or by software plus a necessary general hardware platform, and based on such understanding, the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions to enable a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments of the present application.
In summary, in the present solution, a preset deep learning model is used to detect whether a thyroid nodule exists in an image to be detected, and when a thyroid nodule exists, a focus map area in the image to be detected is input into the machine learning model, so that the machine learning model does not need to detect an image without a thyroid nodule and an area other than the focus map area, thereby reducing the amount of computation and improving the computation efficiency. In addition, the preset deep learning model and the machine learning model are combined, so that the thyroid nodule detection accuracy is improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system, and method may be implemented in other ways. The apparatus, system, and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A thyroid image processing method, comprising:
acquiring an image to be detected, wherein the image to be detected comprises an image obtained by detecting a thyroid part;
inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected;
and when the first detection result shows that thyroid nodules exist in the image to be detected, inputting a focus map region which shows that thyroid nodules exist in the image to be detected in the first detection result into a tested machine learning model to obtain a second detection result of the machine learning model on the focus map region, wherein the second detection result shows the severity of the thyroid nodules in the focus map region.
2. The method of claim 1, wherein the pre-set deep learning model comprises a residual network and a candidate area network, and before acquiring the image to be measured, the method further comprises:
acquiring a training data set and a testing data set, wherein each image in the training data set and the testing data set corresponds to at least one label, and the labels are used for marking a region with thyroid nodules or a region without thyroid nodules;
inputting the training data set into the residual error network to obtain a training image with thyroid nodules in the training data set;
inputting the training image region with the thyroid nodule into the candidate region network to obtain a candidate region, wherein the candidate region is a focus map region which is segmented from the training image and represents the thyroid nodule;
training a gcForest model by using the candidate region to obtain a trained gcForest model;
and testing the trained gcForest model according to the test data set to obtain the tested machine learning model.
3. The method of claim 2, wherein training a gcForest model using the candidate region to obtain the trained gcForest model comprises:
extracting features of the candidate region through sliding windows with different specified window sizes to obtain feature maps corresponding to the different specified window sizes;
inputting the characteristic diagram into a random forest in the gcForest model, and training the gcForest model to obtain the trained gcForest model.
4. The method of claim 1, wherein acquiring an image under test comprises:
and selecting an interface corresponding to the image type to acquire the image to be detected according to the image type of the image to be detected, wherein the image type comprises at least one of PNG, JPG, BMP and DICOM.
5. The method of claim 1, wherein inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected comprises:
inputting the image to be detected into a residual error network in the preset deep learning model to obtain an intermediate detection result of whether thyroid nodules exist in the image to be detected;
and when the intermediate detection result shows that thyroid nodules exist in the image to be detected, inputting the image to be detected into the candidate area network in the deep learning model to obtain a first detection result, wherein the first detection result comprises the focus map area which is segmented from the image to be detected and shows that thyroid nodules exist.
6. The method according to claim 1, wherein the image to be detected input into the preset deep learning model is an image obtained by preprocessing the image to be detected, and the preprocessing includes at least one of image denoising and image enhancement.
7. The method of claim 1, wherein the image to be tested comprises an image of a thyroid site obtained by magnetic resonance imaging.
8. A thyroid image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a detection unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be detected, and the image to be detected comprises an image obtained by detecting a thyroid part;
the first detection unit is used for inputting the image to be detected into a preset deep learning model to obtain a first detection result of the preset deep learning model on the image to be detected;
and a second detection unit, configured to, when the first detection result indicates that a thyroid nodule exists in the image to be detected, input a lesion map region indicating that a thyroid nodule exists in the image to be detected in the first detection result into a tested machine learning model, to obtain a second detection result of the machine learning model for the lesion map region, where the second detection result indicates a severity of the thyroid nodule in the lesion map region.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory coupled to each other, the memory storing a computer program which, when executed by the processor, causes the electronic device to perform the method according to any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1 to 7.
CN202110995168.0A 2021-08-27 2021-08-27 Thyroid image processing method and device, electronic equipment and storage medium Pending CN113689412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110995168.0A CN113689412A (en) 2021-08-27 2021-08-27 Thyroid image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110995168.0A CN113689412A (en) 2021-08-27 2021-08-27 Thyroid image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113689412A true CN113689412A (en) 2021-11-23

Family

ID=78583402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110995168.0A Pending CN113689412A (en) 2021-08-27 2021-08-27 Thyroid image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689412A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820591A (en) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN115393323A (en) * 2022-08-26 2022-11-25 数坤(上海)医疗科技有限公司 Target area obtaining method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120172243A1 (en) * 2009-04-29 2012-07-05 Elai Davicioni Systems and methods for expression-based classification of thyroid tissue
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN108229550A (en) * 2017-12-28 2018-06-29 南京信息工程大学 A kind of cloud atlas sorting technique that network of forests network is cascaded based on more granularities
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120172243A1 (en) * 2009-04-29 2012-07-05 Elai Davicioni Systems and methods for expression-based classification of thyroid tissue
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN108229550A (en) * 2017-12-28 2018-06-29 南京信息工程大学 A kind of cloud atlas sorting technique that network of forests network is cascaded based on more granularities
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ahmed Naglah et al.: "Novel MRI-Based CAD System for Early Detection of Thyroid Cancer Using Multi-Input CNN", Sensors, pages 2-10 *
Fatemeh Abdolali et al.: "Automated thyroid nodule detection from ultrasound imaging using deep convolutional neural networks", Computers in Biology and Medicine, pages 3-4 *
Hongbo Zhu et al.: "MR-Forest: A Deep Decision Framework for False Positive Reduction in Pulmonary Nodule Detection", IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 6, pages 1654-1659 *
Huang Shan: "Research on GGO Sign Detection Methods in 3D Lung CT Images", China Master's Theses Full-text Database, Medicine and Health Sciences, no. 07, 15 July 2021 (2021-07-15), pages 26-29 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820591A (en) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN114820591B (en) * 2022-06-06 2023-02-21 北京医准智能科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN115393323A (en) * 2022-08-26 2022-11-25 数坤(上海)医疗科技有限公司 Target area obtaining method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112106107B (en) Focus weighted machine learning classifier error prediction for microscopic slice images
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN108230292B (en) Object detection method, neural network training method, device and electronic equipment
JP6798854B2 (en) Target number estimation device, target number estimation method and program
US20100008576A1 (en) System and method for segmentation of an image into tuned multi-scaled regions
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
US20090252429A1 (en) System and method for displaying results of an image processing system that has multiple results to allow selection for subsequent image processing
US9311567B2 (en) Manifold learning and matting
CN113362331A (en) Image segmentation method and device, electronic equipment and computer storage medium
DE112008001052T5 (en) Image segmentation and enhancement
CN111931751A (en) Deep learning training method, target object identification method, system and storage medium
CN112348082B (en) Deep learning model construction method, image processing method and readable storage medium
CN109241867B (en) Method and device for recognizing digital rock core image by adopting artificial intelligence algorithm
CN114581709B (en) Model training, method, apparatus and medium for identifying objects in medical images
CN111160114A (en) Gesture recognition method, device, equipment and computer readable storage medium
US20170178341A1 (en) Single Parameter Segmentation of Images
CN116645592B (en) Crack detection method based on image processing and storage medium
CN113689412A (en) Thyroid image processing method and device, electronic equipment and storage medium
JPWO2020066257A1 (en) Classification device, classification method, program, and information recording medium
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
CN111368698A (en) Subject recognition method, subject recognition device, electronic device, and medium
CN108765399B (en) Lesion site recognition device, computer device, and readable storage medium
CN117893744A (en) Remote sensing image segmentation method based on improved boundary guide context aggregation network
CN111062909A (en) Method and equipment for judging benign and malignant breast tumor
CN116959712A (en) Lung adenocarcinoma prognosis method, system, equipment and storage medium based on pathological image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination