CN116091453B - Lesion detection method for breast cancer - Google Patents
Lesion detection method for breast cancer
- Publication number
- CN116091453B (application CN202310038015.6A)
- Authority
- CN
- China
- Prior art keywords
- breast cancer
- data set
- breast
- image recognition
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012—Biomedical image inspection
- G06N3/08—Learning methods (neural networks)
- G06V10/40—Extraction of image or video features
- G06V10/764—Recognition using machine-learning classification
- G06V10/82—Recognition using neural networks
- G06T2207/30068—Mammography; Breast
- G06T2207/30096—Tumor; Lesion
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules
- G06V2201/07—Target detection
Abstract
The invention belongs to the technical field of target detection, and particularly relates to a lesion detection method for breast cancer, comprising the following steps: acquiring an image dataset related to breast cancer; establishing a breast cancer image recognition model based on the Faster-RCNN network, and adding a dynamic weight adjustment branch to the RPN (Region Proposal Network) of the Faster-RCNN network to form the constructed breast cancer image recognition model; training the constructed breast cancer image recognition model on the acquired image dataset to obtain a trained breast cancer image recognition model; and inputting a breast cancer image obtained in practice into the trained breast cancer image recognition model to obtain a recognition result. The invention adjusts the FL (Focal Loss) function weight parameters through the dynamic weight adjustment branch, thereby realizing dynamic adjustment of the FL function weight parameters and making them sensitive to the class distribution of each training batch.
Description
Technical Field
The invention belongs to the technical field of target detection, and particularly relates to a lesion detection method for breast cancer.
Background
The incidence of breast cancer (BC) among women at home and abroad is increasing year by year, and it is one of the three major malignant tumors in women. In China, a lack of related health awareness puts women at high risk of breast cancer. Early-stage breast cancer has a good prognosis after treatment, whereas middle- and late-stage breast cancer is difficult to treat and brings serious inconvenience and mental stress to patients' lives. Therefore, early prediction of breast cancer can reduce a patient's risk and enable targeted therapy. Breast cancer lesions fall into two major categories: breast nodules and calcifications. Breast nodules are classified as benign or malignant; benign nodules usually have no significant impact on life but carry the risk of becoming malignant. Calcifications are usually benign, and malignant calcifications are usually accompanied by malignant nodules. Breast cancer is less prevalent in women under 20 and after menopause, with the highest prevalence between 40 and 55 years of age. For women of child-bearing age between 20 and 40, it is therefore particularly important to reduce the risk of breast cancer.
Traditional breast cancer screening and diagnosis require a great deal of experience from clinicians and imaging doctors, and different doctors have different levels of clinical diagnostic experience. There is therefore a risk of misdiagnosis in breast cancer diagnosis. In addition, it is difficult to obtain results quickly by inspecting medical images with the human eye: breast nodules and the surrounding tissue show low variability in Hounsfield Unit (HU) values, and the shape, distribution, and texture of breast calcifications are unpredictable, which places high demands on doctors.
Based on the above points, it is common in existing research to use the strong generalization capability of neural networks to extract high-order mammography features that cannot be identified by the human eye, thereby locating and classifying target lesions, and this approach has proved feasible in practical applications.
Lesion screening and diagnosis belong to target detection and recognition. Existing neural-network-based algorithms include the YOLO series and the RCNN series; classical algorithms include YOLOv3, YOLOv5, Faster-RCNN, RetinaNet, and so on. Regarding accuracy: compared with RGB images, it is difficult to extract rich high-order features from medical DICOM images, and Faster-RCNN offers higher accuracy. Therefore the YOLO series is more suitable for target detection and recognition on RGB images, while Faster-RCNN is more suitable for DICOM images; although it sacrifices time complexity, its accuracy is higher. However, its drawback is also obvious: in the existing Faster-RCNN-based target recognition method, the weight parameter of the FL (Focal Loss) function is fixed, so during each mini-batch of training the FL weight cannot be adjusted according to the distribution of labeled prior boxes over the target classes in the sample. The FL function weight parameter is therefore not sensitive to the class distribution.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention provides a lesion detection method for breast cancer, aiming to solve the problem that the FL function weight parameter in the prior art is not sensitive to the class distribution.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for detecting a lesion of breast cancer, comprising the steps of:
acquiring an image dataset related to breast cancer;
establishing a breast cancer image recognition model based on the Faster-RCNN network, and adding a dynamic weight adjustment branch into the RPN (Region Proposal Network) of the Faster-RCNN network to form the constructed breast cancer image recognition model;
training the constructed breast cancer image recognition model based on the acquired image data set to obtain a trained breast cancer image recognition model;
and inputting the breast cancer image obtained in practice into a trained breast cancer image recognition model to obtain a recognition result.
The invention adjusts the FL function weight parameters based on the dynamic weight adjustment branches, thereby realizing the dynamic adjustment of the FL function weight parameters and leading the FL function weight parameters to have sensibility.
Preferably, the RPN network includes a background judgment branch for distinguishing foreground from background, a position parameter adjustment branch for adjusting position parameters, and the dynamic weight adjustment branch for performing weight adjustment.
Preferably, the dynamic weight adjustment branch is used for calculating the number of prior labeling frames of each target class, calculating the number distribution of prior labeling frames of all target classes, and modifying the weight of the Focal Loss function based on the number distribution.
Preferably, the steps of modifying the weights of the Focal Loss function by the dynamic weight adjustment branch are as follows:
A. the dynamic weight adjustment branch first applies a 1×1 convolution module whose number of output channels equals the number of label categories, achieving feature integration;
B. a softmax module following the 1×1 convolution module computes the score of every anchor box for every class, and the class with the highest score is the label class of that anchor box;
C. obtaining candidate frames based on a non-maximum suppression strategy;
D. the number of candidate boxes of every category is counted to form the weight_ratio_info learning parameter, which is combined into the loss function of the classifier to modify the weight of the Focal Loss function.
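The four steps above can be sketched in NumPy as follows (with the NMS of step C omitted for brevity). The feature-map size, random weights, and class count are illustrative assumptions, not the patent's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 4     # feature-map size (tiny, for illustration only)
K = 3           # target classes (e.g. DDSM: malignant nodule, benign nodule, calcification)
A = 9           # anchor boxes generated per feature-map pixel
C_in = 16       # channels of the shared RPN feature map (assumed)

feat = rng.standard_normal((C_in, H, W))

# Step A: a 1x1 convolution is a per-pixel linear map; output channels = A*(K+1)
w_conv = rng.standard_normal((A * (K + 1), C_in))
logits = np.einsum('oc,chw->ohw', w_conv, feat)        # (A*(K+1), H, W)

# Step B: softmax over the class axis for every anchor, then take the arg-max class
logits = logits.reshape(A, K + 1, H, W)
e = np.exp(logits - logits.max(axis=1, keepdims=True))
scores = e / e.sum(axis=1, keepdims=True)              # per-anchor class probabilities
labels = scores.argmax(axis=1)                         # (A, H, W): label class per anchor

# Step D (counting): anchors per class form the weight_ratio_info statistic
counts = np.bincount(labels.ravel(), minlength=K + 1)
print(counts)  # anchors assigned to background and to each of the K classes
```

In the real network the 1×1 convolution weights are learned and step C filters the anchors before counting; this sketch only shows the data flow.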
Preferably, the step D includes the steps of:
D1. the anchor boxes remaining after non-maximum suppression are regrouped, and in each batch of training data the total number of anchor boxes of each category and the distribution of anchor-box counts over all categories in that batch are calculated;
D2. the dynamic weights are calculated based on the following formulas:

σ(x) = 1/(1+e^(−x));

C_i = max(NMS(σ(R×I×9), nms));

w_i = C_i / Σ C_n (summing n from 1 to N);

wherein: σ() represents the softmax function and x represents its input; nms represents the nms value in the Faster-RCNN network and is a hyperparameter; NMS is the non-maximum suppression function; C_i represents the candidate-box count of category i, where i is the category index; N represents the number of categories; w_i represents the dynamic weight of any one category; and 1−w_i represents the dynamic learning weight of the category with index i, which serves as the penalty parameter of the loss function.
Preferably, the dataset comprises a breast DDSM dataset and a VOC2007 dataset;
the VOC2007 data set is used for verifying the validity of the breast cancer image identification model;
the DDSM dataset was used to verify the lesion detection of the algorithm on three specific categories of malignant breast nodules, benign breast nodules, and breast calcifications.
Preferably, the VOC2007 dataset is an RGB dataset with three channels, and the DDSM dataset is a mammography (breast X-ray) dataset with a single channel; and the DDSM dataset is preprocessed.
Preferably, the preprocessing includes removing markers and noise from the data and extracting the largest contour; the DDSM data use contour labeling, which is converted into rectangular-box labeling to form the same training data format as the VOC2007 training dataset.
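The contour-to-rectangle conversion reduces to taking the axis-aligned bounding box of the contour points. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

# A DDSM-style lesion contour: (x, y) boundary points (hypothetical values)
contour = np.array([(34, 50), (40, 44), (52, 47), (55, 60), (45, 68), (36, 62)])

# VOC-style rectangular annotation: the axis-aligned bounding box of the contour
xmin, ymin = contour.min(axis=0)
xmax, ymax = contour.max(axis=0)
box = (int(xmin), int(ymin), int(xmax), int(ymax))
print(box)  # (34, 44, 55, 68)
```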
Preferably, the breast DDSM data set and the VOC2007 data set are divided into a training set, a test set and a verification set, which are used for training, verifying and evaluating the breast cancer image recognition model, respectively.
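A simple sketch of such a three-way split; the 70/15/15 ratio and fixed seed are assumptions for illustration, as the patent does not specify its split proportions:

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=7):
    """Shuffle a sample list and cut it into train / validation / test parts."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```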
The beneficial effects of the invention include:
the invention adjusts the FL function weight parameters based on the dynamic weight adjustment branches, thereby realizing the dynamic adjustment of the FL function weight parameters and leading the FL function weight parameters to have sensibility; in addition, the invention improves the network structure of the fast-RCNN, increases the calculation of the prior frame quantity distribution of all marked classes in each training batch, is beneficial to the study of difficult samples (classes with fewer prior frames), and is beneficial to the study among all sample classes to keep a good balance.
Drawings
Fig. 1 is a schematic structural diagram of the breast cancer image recognition model according to the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The invention is described in further detail below with reference to fig. 1:
the original Faster-RCNN algorithm adopts a cross entropy Loss function, and a Retinonet network is realized by using a (Focal Loss, FL) function; wherein the a-parameter of the FL function can realize the evolution of the Faster-RCNN network to the Retinonet network, and simultaneously concentrate on learning classes with fewer samples in the sample set. Based on the above, in order to keep a good learning balance between the easy sample (the category with more prior frames) and the difficult sample, the structure of the fast-RCNN is changed so as to better learn the pertinence of each target belonging to different categories;
see fig. 1: a breast cancer image recognition model is established based on the Faster-RCNN network; a dynamic weight adjustment branch is added to the RPN (Region Proposal Network) of the Faster-RCNN network and the weight_ratio_info parameter is calculated;
after a dynamic weight adjustment branch is newly added, the RPN network comprises three branches in total;
background judgment branch: used to distinguish foreground from background; its parameter is 18, representing 18 channels, computed as 2×9, i.e. 9 candidate boxes each for the foreground and the background;
position parameter adjustment branch: adjusts the position parameters; its parameter is 36, representing 36 channels, computed as 4×9, where 4 is the number of bounding-box parameters;
dynamic weight adjustment branch: its parameter is 9 × (number of target classes + 1). For example, VOC2007 has 20 target classes plus the background, 21 in total, so the total number of channels is 9×21 = 189; the DDSM breast dataset has 3 target classes plus the background, 4 in total, so the total number of channels is 9×4 = 36. In the dynamic weight adjustment branch, softmax scores are computed over all channels, candidate boxes are obtained through the non-maximum suppression strategy, the categories of all candidate boxes are counted to form the weight_ratio_info learning parameter, and finally this parameter is combined into the loss function of the classifier.
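The channel arithmetic for the dynamic weight adjustment branch can be checked directly:

```python
A = 9  # anchor boxes generated per feature-map pixel

def weight_branch_channels(num_classes):
    # dynamic weight adjustment branch: 9 x (target classes + 1 background class)
    return A * (num_classes + 1)

print(weight_branch_channels(20))  # VOC2007: 189
print(weight_branch_channels(3))   # DDSM:    36
```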
The cross-entropy loss function and the FL loss function are shown below:

loss = −y log y′ − (1−y) log(1−y′);

loss_fl = −α (1−y′)^γ y log y′ − (1−α) (y′)^γ (1−y) log(1−y′);

wherein: loss is the cross-entropy loss function and loss_fl is the FL loss function; the α parameter is a penalty parameter that balances learning between positive and negative samples to improve the generalization ability of the network. However, α is a hyperparameter; in order to learn each target class specifically, α needs to be made a dynamic, non-hyper parameter.
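A small numeric comparison of the two losses illustrates the behavior; this assumes the standard binary Focal Loss with α = 0.25 and the usual focusing factor γ = 2, values the patent text does not itself fix:

```python
import math

def cross_entropy(y, p):
    """Binary cross-entropy for label y in {0,1} and prediction p in (0,1)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def focal_loss(y, p, alpha=0.25, gamma=2.0):
    # FL down-weights well-classified examples via the (1-p)^gamma factor
    return -(alpha * (1 - p) ** gamma * y * math.log(p)
             + (1 - alpha) * p ** gamma * (1 - y) * math.log(1 - p))

# An easy positive (p = 0.9) versus a hard positive (p = 0.1):
for p in (0.9, 0.1):
    print(f"p={p}: CE={cross_entropy(1, p):.4f}  FL={focal_loss(1, p):.4f}")
```

The easy example's loss is suppressed far more strongly than the hard example's, which is what lets FL concentrate learning on difficult samples.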
MDDW loss function:

σ(x) = 1/(1+e^(−x));

C_i = max(NMS(σ(R×I×9), nms));

w_i = C_i / Σ C_n (summing n from 1 to N);

wherein: σ() represents the softmax function and x represents its input; nms represents the nms value in the Faster-RCNN network and is a hyperparameter; NMS is the non-maximum suppression function: each pixel of the feature map generates 9 candidate boxes of different sizes, and NMS screens them to obtain the candidate boxes that meet the conditions; C_i represents the candidate-box count of category i, where i is the category index; N represents the number of categories; w_i represents the dynamic weight of any one category; and 1−w_i represents the dynamic learning weight of the category with index i, which serves as the penalty parameter. Using 1−w_i prevents the network from over-learning the majority classes and avoids the loss fluctuations that can occur while the network learns the breast dataset.
The method for breast cancer identification based on the breast cancer image identification model is as follows:
step one, download breast DDSM dataset and VOC2007 dataset. The VOC2007 dataset is used to verify the effectiveness of the algorithm, and the DDSM dataset is used to verify the lesion detection of the algorithm on three specific categories of malignant breast nodules, benign breast nodules, breast calcification, etc. The VOC2007 dataset is an RGB dataset and the DDSM is a mammogram dataset. The VOC2007 data set does not need to be processed, and the DDSM data set needs a preprocessing process of removing marks and noise in the data and extracting the largest contour. Second, because DDSM data adopts contour labeling, it needs to be transformed into labeling of rectangular boxes and form the same data format as the VOC2007 training dataset.
(1) VOC2007 data download address:
http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar;
http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar;
http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar.
(2) DDSM data download address: https://github.com/multinormal/ddsm.
At the same time, the VOC2007 and DDSM data sets are divided into three parts of training, verification and test set for verification and evaluation of the network model.
Step two, download the standard Faster-RCNN network source code from https://github.com/bubbliiiing/faster-rcnn-pytorch. Following the principle described in the disclosure above, the standard Faster-RCNN network model is modified: a new branch is added to the RPN network that calculates the number of prior labeling boxes of each target class and then the distribution of prior labeling boxes over all target classes, which is used to modify the FL function weights, finally forming the Faster-RCNN-Mutant (breast cancer image recognition model) network.
As shown in fig. 1, a new branch is added to the RPN network, which likewise first integrates the feature maps with a 1×1 convolution. For the VOC2007 dataset the number of output channels is 189, equal to (1 background class + 20 target classes) × (9 anchor boxes per feature-map pixel); for the DDSM dataset it is 36, equal to (1 background class + 3 target classes) × 9. The 1×1 convolution module is followed by a softmax module that assigns each anchor box to a specific class. Since many anchor boxes are generated for each pixel of the feature map in the RPN network, the anchor boxes must be screened; as in Faster-RCNN, the invention uses a non-maximum suppression strategy. Faster-RCNN-Mutant then regroups the anchor boxes remaining after non-maximum suppression, calculates in each mini-batch the total number of anchor boxes of each category and the distribution of anchor-box counts over all categories, and computes the corresponding dynamic weights with the MDDW loss function.
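Since non-maximum suppression is central to the candidate-box screening described above, a minimal greedy NMS sketch may help. The toy boxes, scores, and the 0.7 IoU threshold are illustrative, not the patent's tuned nms hyperparameter:

```python
import numpy as np

def nms(boxes, scores, thresh=0.7):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) rows."""
    order = scores.argsort()[::-1]          # process boxes from best to worst score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= thresh]    # drop boxes that overlap too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the overlapping lower-score box is suppressed
```

In practice the actual implementation would use the batched `torchvision.ops.nms`; this sketch only shows the screening logic whose surviving boxes Faster-RCNN-Mutant counts per category.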
Step three, the training and validation sets of VOC2007 and DDSM are input into the Faster-RCNN-Mutant (breast cancer image recognition model) network to train the breast cancer image recognition model, and the test set is used to evaluate its performance. The test set is evaluated with the get_map.py file, while predict.py is used to test the final effect. The final network hyperparameter settings are shown in Table 1.
TABLE 1 experimental parameters
The foregoing examples merely represent specific embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that, for those skilled in the art, several variations and modifications can be made without departing from the technical solution of the present application, which fall within the protection scope of the present application.
Claims (8)
1. A method for detecting a lesion of breast cancer, comprising the steps of:
acquiring an image dataset related to breast cancer;
establishing a breast cancer image recognition model based on the Faster-RCNN network, and adding a dynamic weight adjustment branch into the RPN (Region Proposal Network) of the Faster-RCNN network to form the constructed breast cancer image recognition model;
training the constructed breast cancer image recognition model based on the acquired image data set to obtain a trained breast cancer image recognition model;
inputting the breast cancer image obtained in practice into a trained breast cancer image recognition model to obtain a recognition result;
the dynamic weight adjustment branch is used for calculating the number of priori marking frames of each target class, calculating the number distribution of the priori marking frames of all the target classes, and modifying the weight of the Focal Loss function based on the number distribution.
2. The method according to claim 1, wherein the RPN network includes a background judgment branch for distinguishing foreground from background, a position parameter adjustment branch for adjusting position parameters, and the dynamic weight adjustment branch for performing weight adjustment.
3. The method of claim 1, wherein the step of modifying the weights of the Focal Loss function by the dynamic weight adjustment branch is as follows:
A. the dynamic weight adjustment branch first applies a 1×1 convolution module whose number of output channels equals the number of label categories, achieving feature integration;
B. a softmax module following the 1×1 convolution module computes the score of every anchor box for every class, and the class with the highest score is the label class of that anchor box;
C. obtaining candidate frames based on a non-maximum suppression strategy;
D. the number of candidate boxes of every category is counted to form the weight_ratio_info learning parameter, which is combined into the loss function of the classifier to modify the weight of the Focal Loss function.
4. A method for detecting a breast cancer lesion according to claim 3, wherein said step D comprises the steps of:
D1. recombining the anchor boxes remaining after non-maximum suppression, and calculating, in each batch of training data, the total number of anchor boxes of each category and the distribution of anchor-box counts over all categories in that batch;
D2. the dynamic weights are calculated based on the following formulas:

σ(x) = 1/(1+e^(−x));

C_i = max(NMS(σ(R×I×9), nms));

w_i = C_i / Σ C_n (summing n from 1 to N);

wherein: σ() represents the softmax function and x represents its input; nms represents the nms value in the Faster-RCNN network and is a hyperparameter; NMS is the non-maximum suppression function; C_i represents the candidate-box count of category i, where i is the category index; N represents the number of categories; w_i represents the dynamic weight of any one category; and 1−w_i represents the dynamic learning weight of the category with index i, which serves as the penalty parameter of the loss function.
5. The method of claim 1, wherein the data set comprises a breast DDSM data set and a VOC2007 data set;
the VOC2007 data set is used for verifying the validity of the breast cancer image identification model;
the DDSM dataset was used to verify the lesion detection of the algorithm on three specific categories of malignant breast nodules, benign breast nodules, and breast calcifications.
6. The method of claim 5, wherein the VOC2007 dataset is an RGB dataset with three channels, and the DDSM dataset is a mammography dataset with a single channel; and the DDSM dataset is preprocessed.
7. The method of claim 6, wherein the preprocessing includes removing markers and noise from the data and extracting a maximum profile; the DDSM data adopts contour labeling, converts the contour labeling into labeling of rectangular frames, and forms a training data format identical to that of the VOC2007 training data set.
8. The method of claim 5, wherein the breast DDSM dataset and VOC2007 dataset are divided into a training set, a test set and a validation set for training, validating and evaluating breast cancer image recognition models, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310038015.6A CN116091453B (en) | 2023-01-07 | 2023-01-07 | Lesion detection method for breast cancer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310038015.6A CN116091453B (en) | 2023-01-07 | 2023-01-07 | Lesion detection method for breast cancer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116091453A CN116091453A (en) | 2023-05-09 |
CN116091453B (en) | 2024-03-26
Family
ID=86204012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310038015.6A Active CN116091453B (en) | 2023-01-07 | 2023-01-07 | Lesion detection method for breast cancer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116091453B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118037644A (en) * | 2024-01-19 | 2024-05-14 | 成都成电金盘健康数据技术有限公司 | Detection method for breast cancer focus area in breast image based on convolution network |
CN118762238B (en) * | 2024-09-05 | 2024-11-08 | 杭州电子科技大学 | Different-body-position focus image generation method based on mammography image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028224A (en) * | 2019-12-12 | 2020-04-17 | 广西医准智能科技有限公司 | Data labeling method, model training device, image processing method, image processing device and storage medium |
CN112634261A (en) * | 2020-12-30 | 2021-04-09 | 上海交通大学医学院附属瑞金医院 | Stomach cancer focus detection method and device based on convolutional neural network |
CN113673510A (en) * | 2021-07-29 | 2021-11-19 | 复旦大学 | Target detection algorithm combining feature point and anchor frame joint prediction and regression |
CN113988222A (en) * | 2021-11-29 | 2022-01-28 | 东北林业大学 | Forest fire detection and identification method based on fast-RCNN |
KR102378887B1 (en) * | 2021-02-15 | 2022-03-25 | 인하대학교 산학협력단 | Method and Apparatus of Bounding Box Regression by a Perimeter-based IoU Loss Function in Object Detection |
2023
- 2023-01-07: CN application CN202310038015.6A filed, patent CN116091453B, status Active
Non-Patent Citations (1)
Title |
---|
Breast cancer medical image detection method based on improved deep learning; Chen Tong; Modern Computer (14); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110600122B (en) | Digestive tract image processing method and device and medical system | |
CN108898595B (en) | Construction method and application of positioning model of focus region in chest image | |
CN116091453B (en) | Lesion detection method for breast cancer | |
Antal et al. | Improving microaneurysm detection using an optimally selected subset of candidate extractors and preprocessing methods | |
WO2019184851A1 (en) | Image processing method and apparatus, and training method for neural network model | |
CN112287970A (en) | Mammary gland energy spectrum image classification system, equipment and medium based on multi-view multi-mode | |
CN104751160B (en) | Galactophore image processing method based on sparse autocoding depth network | |
Gabriella et al. | Early detection of tuberculosis using chest X-Ray (CXR) with computer-aided diagnosis | |
CN113284136A (en) | Medical image classification method of residual error network and XGboost of double-loss function training | |
CN113012093B (en) | Training method and training system for glaucoma image feature extraction | |
CN114549452A (en) | New coronary pneumonia CT image analysis method based on semi-supervised deep learning | |
CN110135506A (en) | A web-based method for detecting seven types of cutaneous neoplasms | |
CN112071418B (en) | Gastric cancer peritoneal metastasis prediction system and method based on enhanced CT image histology | |
Lalli et al. | A development of knowledge-based inferences system for detection of breast cancer on thermogram images | |
Abdulrazzak et al. | Computer-Aid System for Automated Jaundice Detection | |
Sarosa et al. | Breast cancer classification using GLCM and BPNN | |
Xu et al. | A dark and bright channel prior guided deep network for retinal image quality assessment | |
CN111242168B (en) | Human skin image lesion classification method based on multi-scale attention features | |
CN108573485A (en) | Identification device, recognition methods and program recorded medium | |
CN109948706B (en) | Micro-calcification cluster detection method combining deep learning and feature multi-scale fusion | |
Antal et al. | Evaluation of the grading performance of an ensemble-based microaneurysm detector | |
CN112614096A (en) | Ordinal number regression-based breast molybdenum target lesion benign and malignant analysis method | |
Wang et al. | A r-cnn based approach for microaneurysm detection in retinal fundus images | |
Rehman et al. | Dermoscopy cancer detection and classification using geometric feature based on resource constraints device (Jetson Nano) | |
Adepoju et al. | Detection of tumour based on breast tissue categorization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||