CN113313695A - Automatic deep learning defect detection and identification method based on small sample aeroengine blade CT image - Google Patents
- Publication number
- CN113313695A (application CN202110627897.0A)
- Authority
- CN
- China
- Prior art keywords
- defect
- blade
- image
- deep learning
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N20/00 — Machine learning
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T2207/10081 — Image acquisition modality; tomographic images; computed X-ray tomography [CT]
Abstract
The invention discloses a method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images, which comprises the following steps: digitizing the blade CT films; manually calibrating the type and position of each defect to establish a defect sample label set; cropping local defect-region images from the blades and performing data expansion, with corresponding correction and expansion of the labels, to establish a deep learning model training sample set; constructing a deep learning aero-engine blade defect detection and identification network; training the network; establishing an automatic detection and identification model from the network and the final training parameters; and inputting CT images into the defect detection and identification model to automatically detect, identify and locate blade defects. The method solves the problem of the small number of defective-blade samples, overcomes the influence of human factors on radiographic inspection of aero-engine blades, and greatly improves detection efficiency and the detection precision for tiny defects.
Description
Technical Field
The invention belongs to the field of machining, manufacturing and quality inspection of aero-engine blades, and particularly relates to a deep learning defect automatic detection and identification method based on a small-sample aero-engine blade CT image.
Background
The blades of an aero-engine are the main load-bearing parts during engine operation, and their quality is closely related to the safe operation of the engine. Current nondestructive testing technologies (radiographic testing, eddy current testing, magnetic particle testing, penetrant testing and the like) are widely applied in the field of aero-engine blade inspection. With the continuous development of the aviation industry and the continuous improvement of aircraft performance, higher requirements are placed on the reliability of aero-engine blades, and quality inspection requirements are increasingly strict.
Limited by the casting process, defects such as cracks, cold shut, gas pores, slag inclusions, porosity, broken cores and excess material are difficult to avoid during the casting of engine blades. One commonly used method for detecting these defects is to perform a computed tomography (CT) scan of the blade and then manually evaluate the resulting blade CT images. Owing to human factors such as differences in experience, interpretation of standards and eye fatigue, missed detections and false detections occur easily during blade defect inspection, and a small oversight can cause huge economic loss or even an aviation accident. In recent years automatic identification of turbine blade defects has been studied, but the work is based on conventional image feature extraction methods such as image segmentation and morphological calculation. These conventional defect detection methods still have high miss and false-detection rates and cannot effectively improve the delivery quality of turbine blades.
With the arrival of the artificial intelligence era, deep learning technology has gradually been applied to defect detection, mainly for defects with obvious morphological characteristics such as weld seams and holes; for most other defect inspection tasks, traditional nondestructive testing is still used. The casting process of turbine blades is complex, the typical defects are varied, and their inspection has long relied on manual experience. Applying deep learning to aero-engine defect detection, exploiting its advantages to the fullest, and finding an efficient and intelligent means of blade defect inspection is therefore of great significance.
Disclosure of Invention
The technical problem solved by the invention is as follows: aiming at the problems and shortcomings in the prior art, the invention aims to provide a method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images.
The technical scheme of the invention is as follows: a deep learning defect automatic detection and identification method based on a small sample aeroengine blade CT image comprises the following steps:
Step 1: scanning aero-engine blade CT films to obtain digital images, and establishing a CT image sample database of defective aero-engine blades;
Step 2: manually calibrating the type and position of the defects in the digital image of each defective blade in turn, generating corresponding defect label data files, and establishing a defect sample label set;
Step 3: cropping images of the defective parts of the aero-engine blades and performing data expansion to construct a training sample set for the aero-engine blade defect image deep learning model;
Step 4: constructing a deep learning aero-engine blade defect detection and identification network;
Step 5: training the deep learning aero-engine blade defect detection and identification network with the defective-blade sample image data set;
Step 6: based on steps 4 and 5, establishing an automatic detection and identification model for deep learning aero-engine blade defects;
Step 7: inputting the CT image of the aero-engine blade to be inspected into the automatic detection and identification model, which automatically detects and identifies defects and outputs their position, type and confidence information.
The further technical scheme of the invention is as follows: the specific operation of establishing the CT image sample database of defective aero-engine blades in step 1 is: CT films with crack, cold shut, gas pore, slag inclusion, porosity, excess material and broken core defects are manually selected from conventional films of aero-engine blades of different models and batches; a scanner is used to perform DS-level scanning of the conventional films within the blackness (optical density) range 0.5–4.5 D; single-blade images containing defects are manually cropped from each large-size, high-resolution scanned digital image, ensuring that single-blade images of the same blade model have the same size; and the images are named and stored in sequence.
The further technical scheme of the invention is as follows: the specific operation of generating the corresponding defect label data file in step 2 is: opening the CT image of a defective blade with deep learning annotation software, framing the defects on each image in turn, determining the position coordinates (x1, y1, x2, y2) of each defect bounding box in the image, then marking the defect type, and saving and outputting an xml label data file containing the image size, defect type, defect position coordinates and other information.
The further technical scheme of the invention is as follows: the specific steps of cutting out the image of the defect part of the blade of the aircraft engine and performing data expansion in the step 3 are as follows:
Step 1: counting the defect position coordinates obtained in step 2 to obtain the maximum defect size n;
Step 2: cropping each defect 9 times from the defective-blade image with a frame of size m × m (where m > n and m is an integer multiple of 32); during each crop the center of the m × m frame is taken as the coordinate origin, so that the defect center falls in turn on the nine coordinates (−m/4, m/4), (0, m/4), (m/4, m/4), (−m/4, 0), (0, 0), (m/4, 0), (−m/4, −m/4), (0, −m/4), (m/4, −m/4);
Step 3: for each defect, applying random horizontal flipping to the 9 cropped m × m local defect pictures;
Step 4: and/or, for each defect, applying random vertical flipping to the 9 cropped m × m local defect pictures;
Step 5: and/or, for each defect, applying random counterclockwise rotation about the center to the 9 cropped m × m local defect pictures;
Step 6: and/or, for each defect, applying random brightness increase or decrease to the 9 cropped m × m local defect pictures;
Step 7: correcting the corresponding defect label data files according to the cropped picture size, the defect coordinate information and the flipping, rotation and other operations.
The further technical scheme of the invention is as follows: the specific steps of constructing the deep learning aero-engine blade defect detection and identification network in step 4 are: building a basic deep learning blade defect detection network framework based on the YOLOv4 object detection algorithm, in which the backbone feature extraction network adopts an improved CSPDarknet53 network; three backbone feature maps are output after the 4th, 8th and 16th Resblock_body modules respectively, and an SPPNet structure attached to the end of the CSPDarknet53 network outputs a fourth backbone feature map, the sizes of the four backbone feature maps being 1/4, 1/8, 1/16 and 1/32 of the original input image size in turn; an additional group of up-sampling and down-sampling is added to the PANet network; and four Yolo heads are finally output.
The further technical scheme of the invention is as follows: the specific training process of the blade defect detection and identification network in step 5 is: randomly shuffling the training samples obtained in step 3; each time, selecting a given number of defect samples and inputting them into the deep learning aero-engine blade defect detection and identification network together with the label data corresponding to the defects; constructing a cross-entropy loss function from the true defect labels and the results computed by the defect detection network; optimizing the internal network parameters with stochastic gradient descent until the network converges; and outputting the finally optimized network parameters.
The further technical scheme of the invention is as follows: the steps of establishing the automatic detection and identification model for deep learning aero-engine blade defects in step 6 are: loading the final optimized parameters obtained in step 5 into the deep learning aero-engine blade defect detection and identification network, then applying fast non-maximum suppression to the detection results produced by the 4 Yolo heads to obtain the defect output with maximum probability, thereby establishing the blade defect detection main-body model; adding a picture division operation before the main-body model, i.e. uniformly dividing a blade CT image of arbitrary size into N pictures of size m × m; and finally adding a picture merging operation after the main-body model, i.e. merging the N divided m × m pictures back into an image of the original size by the inverse of the division operation, and modifying the defect position coordinates so that the defect coordinates on the m × m pictures are mapped onto the merged complete blade image.
Effects of the invention
Compared with the prior art, the invention has the following beneficial effects:
(1) The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images uses a deep convolutional neural network to extract defect features, reducing the incompleteness of traditional hand-crafted feature extraction; at the same time, the deep learning network adjusts and optimizes its parameters by the back-propagation algorithm and can automatically learn the characteristics of different defect types, giving better defect detection and identification performance than traditional image feature extraction methods.
(2) Cropping local defect-region images from the aero-engine blade images and performing data expansion increases the number of training samples to 9 times the number of original defective-blade CT image samples, and local cropping increases the proportion of a tiny defect within the whole image, greatly improving the generalization ability of the model and the detection precision for tiny defects.
(3) The method effectively overcomes the influence of human factors such as differences in experience, eye fatigue during manual evaluation and differing interpretation of standards, realizes efficient and intelligent radiographic inspection of aero-engine blades, and greatly improves detection efficiency and detection precision.
Drawings
FIG. 1 is a flow chart of the invention for automatic detection and identification of deep learning defects based on CT images of small sample aircraft engine blades.
FIG. 2 is an exemplary diagram of local defect picture cropping.
FIG. 3 is a flow chart of a training sample set for constructing an aircraft engine blade defect image deep learning model.
Fig. 4 is a diagram of a modified CSPDarknet53 network architecture.
Fig. 5 is a diagram of an improved PANet network architecture.
FIG. 6 is a diagram of the deep learning aero-engine blade defect detection and identification main-body model.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Referring to fig. 1 to 6, in order to achieve the above object, the technical solution adopted by the present invention is as follows:
the invention relates to a deep learning defect automatic detection and identification method based on a small sample aeroengine blade CT image, which comprises the following steps:
(1) Scanning aero-engine blade CT films with a film digitizer to obtain digital images, and establishing a CT image sample database of defective aero-engine blades;
(2) manually calibrating the type and position of the defects in the digital image of each defective blade in turn, generating corresponding defect label data files, and establishing a defect sample label set;
(3) cropping images of the defective parts of the aero-engine blades and performing data expansion to construct a training sample set for the aero-engine blade defect image deep learning model;
(4) constructing a deep learning aero-engine blade defect detection and identification network;
(5) training the deep learning aero-engine blade defect detection and identification network with the defective-blade sample image data set;
(6) establishing an automatic detection and identification model for deep learning aero-engine blade defects based on steps (4) and (5);
(7) inputting the CT image of the aero-engine blade to be inspected into the automatic detection and identification model, which automatically detects and identifies defects and outputs their position, type and confidence information.
According to the method for automatically detecting and identifying the deep learning defect, preferably, the specific operation of establishing the CT image sample database of the defective blade of the aircraft engine in the step (1) is as follows:
CT films with crack, cold shut, gas pore, slag inclusion, porosity, excess material and broken core defects are manually selected from conventional films of aero-engine blades of different models and batches; a VIDAR NDT PRO industrial film scanner is used to scan the conventional films at DS level (the highest level) within the blackness range 0.5–4.5 D, with the scanning resolution set to the maximum of 7980 × 9690 pixels at 570 DPI; single-blade images containing defects are manually cropped from each large-size, high-resolution scanned digital image, ensuring that single-blade images of the same blade model have the same size; and the images are named and stored in sequence.
According to the above method for automatically detecting and identifying deep learning defects, preferably, the specific operations of generating the corresponding defect label data file in step (2) are as follows:
Opening the CT image of a defective blade with deep learning annotation software, framing the defects on each image in turn, and determining the position coordinates (x1, y1, x2, y2) of each defect bounding box in the image, where (x1, y1) is the top-left vertex of the bounding box and (x2, y2) is the bottom-right vertex; then marking the defect type (crack, cold shut, gas pore, slag inclusion, porosity, excess material or broken core); and after saving, outputting an xml label data file containing the image size, defect type, defect position coordinates and other information.
According to the above method for automatically detecting and identifying deep learning defects, preferably, the step (3) of cutting out an image of a defective portion of an aircraft engine blade and performing data expansion comprises the specific steps of:
a) counting the defect position coordinates obtained in the step (2) to obtain the maximum defect size n;
b) cropping each defect 9 times from the defective-blade image with a frame of size m × m (where m > n and m is an integer multiple of 32); during each crop the center of the m × m frame is taken as the coordinate origin, so that the defect center falls in turn on the nine coordinates (−m/4, m/4), (0, m/4), (m/4, m/4), (−m/4, 0), (0, 0), (m/4, 0), (−m/4, −m/4), (0, −m/4), (m/4, −m/4);
c) for each defect, applying random horizontal flipping to the 9 cropped m × m local defect pictures;
d) and/or, for each defect, applying random vertical flipping to the 9 cropped m × m local defect pictures;
e) and/or, for each defect, applying random counterclockwise rotation about the center to the 9 cropped m × m local defect pictures;
f) and/or, for each defect, applying random brightness increase or decrease to the 9 cropped m × m local defect pictures;
g) correcting the corresponding defect label data files according to the cropped picture size, the defect coordinate information and the flipping, rotation and other operations.
According to the above method for automatically detecting and identifying deep learning defects, preferably, the specific steps of constructing the deep learning aero-engine blade defect detection and identification network in step (4) are as follows: building a basic deep learning blade defect detection network framework based on the YOLOv4 object detection algorithm, in which the backbone feature extraction network adopts an improved CSPDarknet53 network; three backbone feature maps are output after the 4th, 8th and 16th Resblock_body modules respectively, and an SPPNet structure attached to the end of the CSPDarknet53 network outputs a fourth backbone feature map, the sizes of the four backbone feature maps being 1/4, 1/8, 1/16 and 1/32 of the original input image size in turn; an additional group of up-sampling and down-sampling is added to the PANet network; and four Yolo heads are finally output.
According to the above method for automatically detecting and identifying deep learning defects, preferably, the specific training process of the blade defect detection and identification network in step (5) is as follows: randomly shuffling the training samples obtained in step (3); each time, selecting a given number of defect samples and inputting them into the deep learning aero-engine blade defect detection and identification network together with the label data corresponding to the defects; constructing a cross-entropy loss function from the true defect labels and the results computed by the defect detection network; optimizing the internal network parameters with stochastic gradient descent until the network converges; and outputting the finally optimized network parameters.
According to the above method for automatically detecting and identifying deep learning defects, preferably, the steps of establishing the automatic detection and identification model for deep learning aero-engine blade defects in step (6) are as follows: loading the final optimized parameters obtained in step (5) into the deep learning aero-engine blade defect detection and identification network, then applying fast non-maximum suppression to the detection results produced by the 4 Yolo heads to obtain the defect output with maximum probability, thereby establishing the blade defect detection main-body model; adding a picture division operation before the main-body model, i.e. uniformly dividing a blade CT image of arbitrary size into N pictures of size m × m; and finally adding a picture merging operation after the main-body model, i.e. merging the N divided m × m pictures back into an image of the original size by the inverse of the division operation, and modifying the defect position coordinates so that the defect coordinates on the m × m pictures are mapped onto the merged complete blade image.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments, which are not intended to limit the scope of the present invention.
As shown in fig. 1, a method for automatically detecting and identifying a deep learning defect based on a small sample CT image of an aircraft engine blade includes:
(1) Scanning aero-engine blade CT films with a film digitizer to obtain digital images, and establishing a CT image sample database of defective aero-engine blades.
For example, 300 CT films containing the 7 defect types of cracks, cold shut, gas pores, slag inclusions, porosity, excess material and broken cores are manually selected from existing conventional films of aero-engine blades of 5 models and 46 batches. A VIDAR NDT PRO industrial film scanner is used to perform DS-level (highest-level) scanning of the 300 defective conventional films within the blackness range 0.5–4.5 D, with the scanning resolution set to the maximum of 7980 × 9690 pixels at 570 DPI. Since each film carries several blade images, suitable crop frames are preset according to the blade image sizes of the different models; single-blade images containing defects are manually cropped from each large-size, high-resolution scanned digital image, ensuring that single-blade images of the same model have the same size; and the images are named and stored in sequence.
It should be noted that 300 films were scanned and 1482 defective-blade images were cropped out; this number affects the model training result.
(2) Manually calibrating the type and position of the defects in the digital image of each defective blade in turn, generating corresponding defect label data files, and establishing the defect sample label set.
The label set is the answer set for the deep network training process. Since the deep learning here is supervised, the defect labels are needed to compute the loss function, and loss back-propagation is then used to correct the network parameters, so the accuracy of the labels has a large impact on the accuracy of model detection. In this step the defect label files are produced by manual defect calibration; the procedure is somewhat tedious but meticulous. First, the CT image of a defective blade is opened with the deep learning annotation software LabelImg, the position and size of each defect are judged from expert experience, the defects are framed on each image in turn, and the position coordinates (x1, y1, x2, y2) of the defect bounding box in the image are determined, where (x1, y1) is the top-left vertex of the bounding box and (x2, y2) is the bottom-right vertex. The defect type is then judged from experience (crack, cold shut, gas pore, slag inclusion, porosity, excess material or broken core) and its name is entered; after saving, an xml label data file containing the image size, defect type, defect position coordinates and other information is output.
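By way of illustration only (not part of the claims), the following minimal Python sketch shows how such an xml label file could be written in the Pascal VOC style produced by annotation tools such as LabelImg; the file name, image size and box values are hypothetical examples.

```python
# Minimal sketch: write a Pascal VOC-style XML label for one defective-blade image.
# Field names follow the VOC convention used by LabelImg; values are illustrative.
import xml.etree.ElementTree as ET

def write_defect_label(xml_path, image_name, img_w, img_h, boxes):
    """boxes: list of (defect_type, x1, y1, x2, y2) in pixel coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(img_w)
    ET.SubElement(size, "height").text = str(img_h)
    ET.SubElement(size, "depth").text = "1"            # grayscale CT scan
    for name, x1, y1, x2, y2 in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name          # e.g. "crack", "cold_shut"
        box = ET.SubElement(obj, "bndbox")
        ET.SubElement(box, "xmin").text = str(x1)
        ET.SubElement(box, "ymin").text = str(y1)
        ET.SubElement(box, "xmax").text = str(x2)
        ET.SubElement(box, "ymax").text = str(y2)
    ET.ElementTree(root).write(xml_path)

# Hypothetical usage for one blade image containing a single crack
write_defect_label("blade_0001.xml", "blade_0001.png", 1200, 3600,
                   [("crack", 412, 955, 438, 1010)])
```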
(3) Cropping images of the defective parts of the aero-engine blades and performing data expansion to construct the training sample set for the aero-engine blade defect image deep learning model.
The defect sizes of the 1482 defective-blade images labelled in step (2) are counted, giving a maximum size n of 239; m is taken as 256 according to the rule that m is greater than n and is an integer multiple of 32. Each defect on a defective-blade image is cropped 9 times in turn with a 256 × 256 frame; during each crop the center of the 256 × 256 frame is taken as the coordinate origin, so that the defect center falls in turn on the coordinates (−64, 64), (0, 64), (64, 64), (−64, 0), (0, 0), (64, 0), (−64, −64), (0, −64), (64, −64); an example of defect cropping is shown in FIG. 2. The 1482 × 9 pictures are then randomly subjected to horizontal flipping, vertical flipping, 90° rotation, 180° rotation and brightness increase/decrease, further expanding the picture quantity by a factor of 10.
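As an illustrative sketch only, the nine-position cropping described above can be expressed as follows; it assumes a grayscale NumPy image already padded so that every 256 × 256 window stays inside the array, and the subsequent random flips, rotations and brightness changes are applied to the returned crops in the same spirit.

```python
# Sketch of the nine local crops per defect (m = 256, offsets 0 and ±m/4 = ±64).
# Assumes a grayscale NumPy array padded so every window lies within bounds.
import numpy as np

M = 256
OFFSETS = [-M // 4, 0, M // 4]          # -64, 0, +64

def nine_crops(image, cx, cy):
    """Return nine M x M crops; (cx, cy) is the defect centre in image coordinates,
    and it lands at the nine offset positions relative to each window centre."""
    crops = []
    for dy in OFFSETS:
        for dx in OFFSETS:
            wx, wy = cx - dx, cy - dy          # window centre for this offset
            x0, y0 = wx - M // 2, wy - M // 2  # window top-left corner
            crops.append(image[y0:y0 + M, x0:x0 + M].copy())
    return crops
```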
Because the original label data correspond one-to-one to the original 1482 defective-blade images and the defect coordinates are computed with the top-left vertex of the original defective-blade image as origin, after local defect-region extraction and image expansion such as random flipping and rotation, the labels must be corrected one by one and expanded correspondingly. The corrected defect coordinate positions are derived inversely from the nine extraction positions and are computed with the top-left vertex of the currently extracted 256 × 256 image as origin. Defect coordinates that fall out of range are clamped to the outer boundary of the 256 × 256 picture.
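A minimal sketch of this label correction, assuming a crop whose top-left corner in the full image is (ox, oy), is shown below; coordinate values outside the crop are clamped to its boundary as described.

```python
# Sketch: translate one defect box into the coordinates of an M x M crop and
# clamp out-of-range values to the crop boundary.
def correct_box(x1, y1, x2, y2, ox, oy, m=256):
    nx1 = min(max(x1 - ox, 0), m)
    ny1 = min(max(y1 - oy, 0), m)
    nx2 = min(max(x2 - ox, 0), m)
    ny2 = min(max(y2 - oy, 0), m)
    return nx1, ny1, nx2, ny2
```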
It should be understood that in this embodiment, starting from the preset 1482 defective-blade pictures, the defect regions are cropped and expanded into 1482 × 9 fixed-size (256 × 256) defect pictures, and each picture is then subjected to image processing such as flipping, rotation and brightness adjustment, giving a tenfold expansion of the image count, i.e. 14820, from which the training sample set is established. The number of training sample pictures is related to the model training result, and those skilled in the art can derive the required number of training samples from experimental results, which is not repeated here.
(4) Constructing the deep learning aero-engine blade defect detection and identification network.
The specific steps are as follows: first, a YOLOv4 object detection network framework is built and the input picture size is defined as 256 × 256; the number of layers of the YOLOv4 backbone feature extraction network CSPDarknet53 is adjusted so that 2, 4, 8 and 4 Resblock_body modules are connected in sequence after the DarknetConv2D_BN_Mish module, the last layer of each group of Resblock_body modules performing down-sampling that halves the picture size, as shown in FIG. 4;
then, three backbone feature maps are output after the 4th, 8th and 16th Resblock_body modules respectively, and an SPPNet structure attached to the end of the CSPDarknet53 network outputs a fourth backbone feature map, the sizes of the four backbone feature maps being 64 × 64, 32 × 32, 16 × 16 and 8 × 8 in turn;
finally, the PANet network of YOLOv4 is modified: every Concat + Conv × 5 structure in the PANet is changed to a Concat + Conv × 3 structure, an additional group of up-sampling and down-sampling is added on top of the PANet, and four Yolo heads are finally output; the structure of the improved PANet network is shown in FIG. 5.
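For orientation only, the toy PyTorch sketch below reproduces the output strides of the backbone described above (feature maps at 1/4, 1/8, 1/16 and 1/32 of a 256 × 256 input, i.e. 64, 32, 16 and 8); it is not the actual improved CSPDarknet53, whose Resblock_body modules, SPP block and PANet are far deeper.

```python
# Schematic only: a toy backbone with the same four output strides as the
# modified CSPDarknet53 described above (NOT the real network).
import torch
import torch.nn as nn

def down(cin, cout):
    # one stride-2 convolution block standing in for a whole Resblock_body group
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.1))

class ToyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, padding=1)              # grayscale CT input
        self.s2, self.s4 = down(32, 64), down(64, 128)          # strides 2 and 4
        self.s8, self.s16, self.s32 = down(128, 256), down(256, 512), down(512, 1024)

    def forward(self, x):
        p4 = self.s4(self.s2(self.stem(x)))   # 1/4 of the input size
        p8 = self.s8(p4)                      # 1/8
        p16 = self.s16(p8)                    # 1/16
        p32 = self.s32(p16)                   # 1/32 (SPP would attach here)
        return p4, p8, p16, p32

feats = ToyBackbone()(torch.zeros(1, 1, 256, 256))
print([f.shape[-1] for f in feats])           # [64, 32, 16, 8]
```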
(5) The deep learning aero-engine blade defect detection and identification network is trained with the defective-blade sample image data set. The training process is as follows: the 14820 defect training samples obtained in step (3) are randomly shuffled; each time, 16 defect samples are selected and input into the deep learning aero-engine blade defect detection and identification network constructed in step (4), together with the label data corresponding to the defects; a cross-entropy loss function is built from the true defect labels and the results computed by the defect detection network; the internal network parameters are optimized with stochastic gradient descent until the network converges; and the finally optimized network parameters are output.
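A schematic training loop under these settings (shuffled samples, batches of 16, stochastic gradient descent) might look as follows; `model`, `yolo_loss` and `train_set` are placeholders for the actual four-head defect detection network, its loss and the 14820-sample set, and the epoch count and learning rate are illustrative.

```python
# Schematic sketch of the training procedure described above; the network,
# loss and dataset objects are placeholders, not the patented implementation.
import torch
from torch.utils.data import DataLoader

def train(model, train_set, yolo_loss, epochs=100, lr=1e-3):
    loader = DataLoader(train_set, batch_size=16, shuffle=True)   # random ordering
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            preds = model(images)              # outputs of the four Yolo heads
            loss = yolo_loss(preds, labels)    # loss from true labels vs. predictions
            loss.backward()                    # back-propagate to adjust parameters
            opt.step()
    torch.save(model.state_dict(), "blade_defect_final.pth")
```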
(6) The automatic detection and identification model for deep learning aero-engine blade defects is established based on steps (4) and (5). The specific steps are as follows: the final optimized parameters obtained in step (5) are loaded into the deep learning aero-engine blade defect detection and identification network, and fast non-maximum suppression is then applied to the detection results produced by the 4 Yolo heads to obtain the defect output with maximum probability, thereby establishing the deep learning aero-engine blade defect detection main-body model, as shown in FIG. 6. A picture division operation is added before the main-body model: a blade CT image of arbitrary size is uniformly divided into several 256 × 256 pictures; if the length or width of the blade CT image is not an integer multiple of 256, the picture is padded around its borders with all-black pixels of value (0, 0, 0) so that the added black margins are symmetric top/bottom and left/right and the padded size is exactly an integer multiple of 256. Finally, a picture merging operation is added after the main-body model: the divided 256 × 256 pictures are merged back into an image of the original size by the inverse of the division operation, and the defect position coordinates are modified so that the defect coordinates on the 256 × 256 pictures are mapped onto the merged complete blade image, completing the construction of the complete automatic detection and identification model for deep learning aero-engine blade defects.
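The tiling wrapper around the main-body model can be sketched as below; `detect_tile` stands in for the trained detector (four Yolo heads plus non-maximum suppression), the image is assumed to be a grayscale NumPy array, and the padding and coordinate remapping mirror the description above.

```python
# Sketch: pad a blade CT image symmetrically with black pixels to a multiple of
# 256, detect on each 256 x 256 tile, and map boxes back to full-image coordinates.
import numpy as np

def detect_full_image(image, detect_tile, m=256):
    h, w = image.shape[:2]
    ph, pw = (-h) % m, (-w) % m                  # extra rows/columns needed
    top, left = ph // 2, pw // 2                 # symmetric black margins
    padded = np.pad(image, ((top, ph - top), (left, pw - left)), constant_values=0)
    results = []
    for y0 in range(0, padded.shape[0], m):
        for x0 in range(0, padded.shape[1], m):
            tile = padded[y0:y0 + m, x0:x0 + m]
            for cls, conf, x1, y1, x2, y2 in detect_tile(tile):
                # map tile coordinates back onto the original (unpadded) image
                results.append((cls, conf,
                                x1 + x0 - left, y1 + y0 - top,
                                x2 + x0 - left, y2 + y0 - top))
    return results
```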
(7) The CT image of the aero-engine blade to be inspected is input into the automatic detection and identification model for deep learning aero-engine blade defects, which automatically detects and identifies defects and outputs their position, type and confidence information.
The above embodiments express only one implementation of the present application; the invention is not limited to the examples described, and its scope of protection is defined by the claims. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included in its protection scope.
Claims (7)
1. A deep learning defect automatic detection and identification method based on a small sample aeroengine blade CT image is characterized by comprising the following steps:
Step 1: scanning aero-engine blade CT films to obtain digital images, and establishing a CT image sample database of defective aero-engine blades;
Step 2: manually calibrating the type and position of the defects in the digital image of each defective blade in turn, generating corresponding defect label data files, and establishing a defect sample label set;
Step 3: cropping images of the defective parts of the aero-engine blades and performing data expansion to construct a training sample set for the aero-engine blade defect image deep learning model;
Step 4: constructing a deep learning aero-engine blade defect detection and identification network;
Step 5: training the deep learning aero-engine blade defect detection and identification network with the defective-blade sample image data set;
Step 6: based on steps 4 and 5, establishing an automatic detection and identification model for deep learning aero-engine blade defects;
Step 7: inputting the CT image of the aero-engine blade to be inspected into the automatic detection and identification model, which automatically detects and identifies defects and outputs their position, type and confidence information.
2. The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images according to claim 1, wherein the specific operation of establishing the CT image sample database of defective aero-engine blades in step 1 is: CT films with crack, cold shut, gas pore, slag inclusion, porosity, excess material and broken core defects are manually selected from conventional films of aero-engine blades of different models and batches; a scanner is used to perform DS-level scanning of the conventional films within the blackness range 0.5–4.5 D; single-blade images containing defects are manually cropped from each large-size, high-resolution scanned digital image, ensuring that single-blade images of the same blade model have the same size; and the images are named and stored in sequence.
3. The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images according to claim 1, wherein the specific operation of generating the corresponding defect label data file in step 2 is: opening the CT image of a defective blade with deep learning annotation software, framing the defects on each image in turn, determining the position coordinates (x1, y1, x2, y2) of each defect bounding box in the image, then marking the defect type, and saving and outputting an xml label data file containing the image size, defect type, defect position coordinates and other information.
4. The method for automatically detecting and identifying the deep learning defect based on the small sample CT image of the blade of the aircraft engine as claimed in claim 1, wherein the specific steps of cutting the image of the defective part of the blade of the aircraft engine and performing data expansion in the step 3 are as follows:
Step 1: counting the defect position coordinates obtained in step 2 to obtain the maximum defect size n;
Step 2: cropping each defect 9 times from the defective-blade image with a frame of size m × m (where m > n and m is an integer multiple of 32); during each crop the center of the m × m frame is taken as the coordinate origin, so that the defect center falls in turn on the nine coordinates (−m/4, m/4), (0, m/4), (m/4, m/4), (−m/4, 0), (0, 0), (m/4, 0), (−m/4, −m/4), (0, −m/4), (m/4, −m/4);
Step 3: for each defect, applying random horizontal flipping to the 9 cropped m × m local defect pictures;
Step 4: and/or, for each defect, applying random vertical flipping to the 9 cropped m × m local defect pictures;
Step 5: and/or, for each defect, applying random counterclockwise rotation about the center to the 9 cropped m × m local defect pictures;
Step 6: and/or, for each defect, applying random brightness increase or decrease to the 9 cropped m × m local defect pictures;
Step 7: correcting the corresponding defect label data files according to the cropped picture size, the defect coordinate information and the flipping, rotation and other operations.
5. The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images according to claim 1, wherein the specific steps of constructing the deep learning aero-engine blade defect detection and identification network in step 4 are: building a basic deep learning blade defect detection network framework based on the YOLOv4 object detection algorithm, in which the backbone feature extraction network adopts an improved CSPDarknet53 network; three backbone feature maps are output after the 4th, 8th and 16th Resblock_body modules respectively, and an SPPNet structure attached to the end of the CSPDarknet53 network outputs a fourth backbone feature map, the sizes of the four backbone feature maps being 1/4, 1/8, 1/16 and 1/32 of the original input image size in turn; an additional group of up-sampling and down-sampling is added to the PANet network; and four Yolo heads are finally output.
6. The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images according to claim 1, wherein the specific training process of the blade defect detection and identification network in step 5 is: randomly shuffling the training samples obtained in step 3; each time, selecting a given number of defect samples and inputting them into the deep learning aero-engine blade defect detection and identification network together with the label data corresponding to the defects; constructing a cross-entropy loss function from the true defect labels and the results computed by the defect detection network; optimizing the internal network parameters with stochastic gradient descent until the network converges; and outputting the finally optimized network parameters.
7. The method for automatic deep learning defect detection and identification based on small-sample aero-engine blade CT images according to claim 1, wherein the steps of establishing the automatic detection and identification model for deep learning aero-engine blade defects in step 6 are: loading the final optimized parameters obtained in step 5 into the deep learning aero-engine blade defect detection and identification network, then applying fast non-maximum suppression to the detection results produced by the 4 Yolo heads to obtain the defect output with maximum probability, thereby establishing the blade defect detection main-body model; adding a picture division operation before the main-body model, i.e. uniformly dividing a blade CT image of arbitrary size into N pictures of size m × m; and finally adding a picture merging operation after the main-body model, i.e. merging the N divided m × m pictures back into an image of the original size by the inverse of the division operation, and modifying the defect position coordinates so that the defect coordinates on the m × m pictures are mapped onto the merged complete blade image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110627897.0A CN113313695A (en) | 2021-06-05 | 2021-06-05 | Automatic deep learning defect detection and identification method based on small sample aeroengine blade CT image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110627897.0A CN113313695A (en) | 2021-06-05 | 2021-06-05 | Automatic deep learning defect detection and identification method based on small sample aeroengine blade CT image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113313695A true CN113313695A (en) | 2021-08-27 |
Family
ID=77377409
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110627897.0A Pending CN113313695A (en) | 2021-06-05 | 2021-06-05 | Automatic deep learning defect detection and identification method based on small sample aeroengine blade CT image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113313695A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113688777A (en) * | 2021-09-07 | 2021-11-23 | 西北工业大学 | Airport pavement airplane real-time detection method based on embedded CPU |
CN113838013A (en) * | 2021-09-13 | 2021-12-24 | 中国民航大学 | Blade crack real-time detection method and device in aero-engine operation and maintenance based on YOLOv5 |
CN116777292A (en) * | 2023-06-30 | 2023-09-19 | 北京京航计算通讯研究所 | Defect rate index correction method based on multi-batch small sample space product |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636772A (en) * | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning |
CN111833328A (en) * | 2020-07-14 | 2020-10-27 | 汪俊 | Aircraft engine blade surface defect detection method based on deep learning |
CN111968084A (en) * | 2020-08-08 | 2020-11-20 | 西北工业大学 | Method for quickly and accurately identifying defects of aero-engine blade based on artificial intelligence |
CN112102229A (en) * | 2020-07-23 | 2020-12-18 | 西安交通大学 | Intelligent industrial CT detection defect identification method based on deep learning |
CN112215208A (en) * | 2020-11-10 | 2021-01-12 | 中国人民解放军战略支援部队信息工程大学 | Remote sensing image bridge target detection algorithm based on improved YOLOv4 |
CN112288043A (en) * | 2020-12-23 | 2021-01-29 | 飞础科智慧科技(上海)有限公司 | Kiln surface defect detection method, system and medium |
- 2021: 2021-06-05 CN CN202110627897.0A patent/CN113313695A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636772A (en) * | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning |
CN111833328A (en) * | 2020-07-14 | 2020-10-27 | 汪俊 | Aircraft engine blade surface defect detection method based on deep learning |
CN112102229A (en) * | 2020-07-23 | 2020-12-18 | 西安交通大学 | Intelligent industrial CT detection defect identification method based on deep learning |
CN111968084A (en) * | 2020-08-08 | 2020-11-20 | 西北工业大学 | Method for quickly and accurately identifying defects of aero-engine blade based on artificial intelligence |
CN112215208A (en) * | 2020-11-10 | 2021-01-12 | 中国人民解放军战略支援部队信息工程大学 | Remote sensing image bridge target detection algorithm based on improved YOLOv4 |
CN112288043A (en) * | 2020-12-23 | 2021-01-29 | 飞础科智慧科技(上海)有限公司 | Kiln surface defect detection method, system and medium |
Non-Patent Citations (1)
Title |
---|
LI, Bin et al.: "Surface defect detection of aero-engine components based on an improved YOLOv4 algorithm", Laser & Optoelectronics Progress *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113688777A (en) * | 2021-09-07 | 2021-11-23 | 西北工业大学 | Airport pavement airplane real-time detection method based on embedded CPU |
CN113838013A (en) * | 2021-09-13 | 2021-12-24 | 中国民航大学 | Blade crack real-time detection method and device in aero-engine operation and maintenance based on YOLOv5 |
CN116777292A (en) * | 2023-06-30 | 2023-09-19 | 北京京航计算通讯研究所 | Defect rate index correction method based on multi-batch small sample space product |
CN116777292B (en) * | 2023-06-30 | 2024-04-16 | 北京京航计算通讯研究所 | Defect rate index correction method based on multi-batch small sample space product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113313695A (en) | Automatic deep learning defect detection and identification method based on small sample aeroengine blade CT image | |
CN111968084B (en) | Rapid and accurate identification method for defects of aero-engine blade based on artificial intelligence | |
JP7004145B2 (en) | Defect inspection equipment, defect inspection methods, and their programs | |
CN114549997B (en) | X-ray image defect detection method and device based on regional feature extraction | |
TW202101377A (en) | Method, device, and apparatus for target detection and training target detection network, storage medium | |
CN111814867A (en) | Defect detection model training method, defect detection method and related device | |
CN110599445A (en) | Target robust detection and defect identification method and device for power grid nut and pin | |
CN116468725B (en) | Industrial defect detection method, device and storage medium based on pre-training model | |
CN112465746A (en) | Method for detecting small defects in radiographic film | |
CN111027511A (en) | Remote sensing image ship detection method based on region of interest block extraction | |
CN116485709A (en) | Bridge concrete crack detection method based on YOLOv5 improved algorithm | |
CN112304960B (en) | High-resolution image object surface defect detection method based on deep learning | |
CN111951284B (en) | Optical remote sensing satellite image refined cloud detection method based on deep learning | |
CN115588024B (en) | Complex industrial image edge extraction method and device based on artificial intelligence | |
CN113870236B (en) | Composite material defect nondestructive inspection method based on deep learning algorithm | |
CN112906689B (en) | Image detection method based on defect detection and segmentation depth convolutional neural network | |
Zou et al. | Automatic segmentation, inpainting, and classification of defective patterns on ancient architecture using multiple deep learning algorithms | |
CN115797314B (en) | Method, system, equipment and storage medium for detecting surface defects of parts | |
CN113537017A (en) | Optical remote sensing image airplane detection method and device based on cascade regression correction | |
JP2000113198A (en) | Method for automatically inspecting print quality using elastic model | |
CN116630323A (en) | Automatic calculation method, system, medium and equipment for corrosion depth of dense metal | |
CN112508935A (en) | Product packaging detection method and system based on deep learning and product packaging sorting system | |
CN115713622A (en) | Casting defect detection method and system based on three-dimensional model and flaw detection image | |
CN112381794B (en) | Printing defect detection method based on deep convolution generation network | |
CN117197146A (en) | Automatic identification method for internal defects of castings |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20210827 |