CN117557565A - Detection method and device for lithium battery pole piece
- Publication number: CN117557565A
- Application number: CN202410042138.1A
- Authority: CN (China)
- Prior art keywords: image, pole piece, module, curve, deflection
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02E—REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
- Y02E60/00—Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
- Y02E60/10—Energy storage using batteries
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a detection method and a detection device for a lithium battery pole piece. The detection method comprises the following steps: acquiring an X-ray image of a lithium battery pole piece, and preprocessing the X-ray image to obtain a preprocessed image; inputting the preprocessed image into a trained target detection model, and predicting the preprocessed image by the target detection model to obtain a corresponding prediction result; the lithium battery pole piece comprises a cathode pole piece and an anode pole piece; the prediction result comprises: the ROI region where the sharp point of the cathode pole piece is located and the ROI region where the sharp point of the anode pole piece is located; and processing the ROI regions to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece. The detection device can implement the method. In addition, the detection method and the detection device can also measure the overhang margin of the lithium battery and recognize and classify the shape of the anode top of the lithium battery.
Description
Technical Field
The invention relates to the technical field of image detection and identification, in particular to a detection method and a detection device for a lithium battery pole piece.
Background
In the modern lithium battery manufacturing process, detection and identification of pole piece defects are critical to ensuring battery quality, improving battery performance and extending battery life. The pole piece is one of the key components of the battery; defects such as turnover, misalignment and bending of the pole piece can reduce battery performance, shorten battery service life and even cause serious safety problems. To ensure battery quality and safety, microstructure information of laminated battery pole pieces is currently obtained by large-scale X-ray imaging, and pole piece defects are automatically detected and identified from this information.
X-ray imaging is a common nondestructive testing method, but because of complex noise and interference in the images, traditional image processing methods struggle to detect and identify defects accurately and efficiently, so in actual production inspection is still mainly performed by manual visual examination. Manual inspection, however, suffers from low efficiency, a high false detection rate and strong subjectivity, and therefore cannot meet the requirements of mass production.
In recent years, neural network models have achieved significant results in the field of target detection. A neural network model is a computational model based on artificial neural networks that can recognize and process complex patterns and data through learning and training. Owing to differences in actual hardware environments and in the electrode materials of lithium batteries, pole piece X-ray images exhibit varied morphological characteristics, complex and changeable noise interference, and high inter-class similarity, which poses challenges to the detection accuracy of general-purpose target detection models.
Accordingly, there is a need for improvements in light of the deficiencies of the prior art.
Disclosure of Invention
The invention mainly solves the technical problem of providing a detection method and a detection device for a lithium battery pole piece, wherein the detection method and the detection device utilize an improved YOLO neural network model to automatically detect and identify X-ray images of the lithium battery, and can accurately identify sharp points of cathode and anode pole pieces of the lithium battery.
According to a first aspect, in one embodiment, a method for detecting a lithium battery pole piece is provided. The detection method comprises the following steps:
preprocessing: acquiring an X-ray image of a lithium battery pole piece, and preprocessing the X-ray image to obtain a preprocessed image;
predicting: inputting the preprocessed image into a trained target detection model, and predicting the preprocessed image by the target detection model to obtain a corresponding prediction result; the lithium battery pole piece comprises a cathode pole piece and an anode pole piece; the prediction result comprises: the ROI region where the sharp point of the cathode pole piece is located and the ROI region where the sharp point of the anode pole piece is located;
wherein the sharp point is an upper end point and a lower end point of the coating range of the anode pole piece or the cathode pole piece in the Y direction; the Y direction is a direction perpendicular to the horizontal direction in the preprocessed image;
identification: processing the ROI regions to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece.
In one embodiment, the identification step S300 further includes:
taking the difference in the Y direction between the sharp point of the cathode pole piece and the sharp point of the anode pole piece as the overhang margin of the lithium battery pole piece.
in one embodiment, in the identifying step: processing the ROI area to determine a point of the cathode pole piece and a point of the anode pole piece, comprising:
acquiring the ROI area, and sequentially performing median filtering and erosion treatment on the ROI area to obtain the eroded ROI area;
acquiring the minimum circumscribed outline of the ROI area after the corrosion treatment;
taking the index of each row of pixels in the minimum circumscribed contour as a first independent variable, and taking the mean value of the pixels in each row as the corresponding first dependent variable, so as to obtain a first relation curve between the first independent variable and the first dependent variable;
performing a first-derivative calculation on the first relation curve to obtain a corresponding derivative curve;
and obtaining the maximum value point of the derivative curve, and taking the index corresponding to the maximum value point as the sharp point of the cathode pole piece or the anode pole piece.
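As a concrete illustration of this identification flow, the following is a minimal NumPy/OpenCV sketch; the kernel sizes and the Otsu threshold used to obtain a binary mask for contour extraction are assumptions, not values specified by the embodiment.

```python
import cv2
import numpy as np

def find_sharp_point(roi: np.ndarray) -> int:
    """Sketch of the sharp-point extraction above; `roi` is an 8-bit
    single-channel image of the ROI region output by the detection model."""
    roi = cv2.medianBlur(roi, 5)                          # median filtering (kernel size assumed)
    roi = cv2.erode(roi, np.ones((3, 3), np.uint8))       # erosion treatment
    # Minimum circumscribed contour of the eroded ROI; an Otsu threshold is
    # assumed here to obtain a binary mask for contour extraction.
    _, bw = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(bw)
    cv2.drawContours(mask, [max(contours, key=cv2.contourArea)], -1, 255, cv2.FILLED)
    # First relation curve: row index (first independent variable) versus the
    # mean of that row's pixels inside the contour (first dependent variable).
    inside = mask > 0
    row_means = np.array([roi[y, inside[y]].mean() if inside[y].any() else 0.0
                          for y in range(roi.shape[0])])
    # The index of the maximum of the first derivative is the sharp point.
    return int(np.argmax(np.gradient(row_means)))
```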
In one embodiment, the step of identifying further comprises:
obtaining a skeleton curve of the anode pole piece by utilizing the ROI region;
determining the deflection angle of the anode pole piece according to the skeleton curve;
determining a deflection site of the anode pole piece and a deflection direction of the deflection site by utilizing the skeleton curve;
determining the posture type of the anode pole piece according to the deflection site and the deflection direction;
wherein obtaining the skeleton curve of the anode pole piece by utilizing the ROI region comprises the following steps:
sequentially performing dilation treatment, box filtering and binarization on the ROI region to obtain the binarized ROI region, and processing the binarized ROI region with a skeletonization algorithm to obtain the skeleton curve of the anode pole piece;
wherein determining the deflection angle of the anode pole piece according to the skeleton curve comprises:
performing straight line fitting on the skeleton curve by using a RANSAC algorithm to obtain a corresponding fitting straight line;
calculating an included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece;
and taking an included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece as the deflection angle.
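The skeletonization and RANSAC fitting described above can be sketched as follows; the morphological kernel sizes, the RANSAC parameters, and the choice of the vertical Y axis as the pole piece length direction are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def deflection_angle(roi: np.ndarray, n_iter: int = 200, tol: float = 2.0) -> float:
    """Sketch of the skeleton-curve and deflection-angle steps above."""
    img = cv2.dilate(roi, np.ones((3, 3), np.uint8))            # dilation treatment
    img = cv2.boxFilter(img, -1, (5, 5))                        # box filtering
    _, binary = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    skeleton = skeletonize(binary.astype(bool))                 # skeletonization algorithm
    ys, xs = np.nonzero(skeleton)
    pts = np.stack([xs, ys], axis=1).astype(float)

    # Minimal RANSAC straight-line fit on the skeleton points.
    rng, best = np.random.default_rng(0), np.empty((0, 2))
    for _ in range(n_iter):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        d = p2 - p1
        norm = float(np.hypot(d[0], d[1]))
        if norm < 1e-6:
            continue
        # Perpendicular distance of every skeleton point to the candidate line.
        dist = np.abs(d[0] * (pts[:, 1] - p1[1]) - d[1] * (pts[:, 0] - p1[0])) / norm
        inliers = pts[dist < tol]
        if len(inliers) > len(best):
            best = inliers
    m, _ = np.polyfit(best[:, 1], best[:, 0], 1)                # refit x = m*y + c on inliers
    return float(np.degrees(np.arctan(m)))                      # angle to the length (Y) direction
```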
In one embodiment, in the identification step, determining the deflection site of the anode pole piece and the deflection direction of the deflection site by using the skeleton curve comprises the following steps:
taking the ordinate of each effective pixel site in the skeleton curve as a second independent variable and the abscissa of each effective pixel site in the skeleton curve as a corresponding second dependent variable to obtain a second relation curve between the second independent variable and the second dependent variable; the effective pixel sites are pixel points with the pixel value of 1 in the pixel points corresponding to the skeleton curve;
performing first derivative calculation on the second relation curve to obtain a corresponding first derivative curve; the abscissa of the first derivative curve is the abscissa of the corresponding effective pixel site, and the ordinate of the first derivative curve is the corresponding first derivative;
obtaining the maximum point of the first derivative curve, and taking the index corresponding to that maximum point as the deflection site of the anode pole piece;
determining the deflection direction of the deflection site according to whether the corresponding first derivative is greater than 0: if the corresponding first derivative is greater than 0, the deflection direction is to the right; if the corresponding first derivative is less than 0, the deflection direction is to the left.
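A minimal sketch of this deflection-site logic follows; selecting the derivative extremum of largest magnitude (so that both left and right deflections are found) is an assumption consistent with the left/right rule above.

```python
import numpy as np

def deflection_site(skeleton: np.ndarray):
    """Sketch of the deflection-site steps above; `skeleton` is the binary
    skeleton curve (effective pixel sites have pixel value 1)."""
    ys, xs = np.nonzero(skeleton)
    order = np.argsort(ys)
    ys, xs = ys[order], xs[order]
    # Keep one effective pixel site per row so that the second relation curve
    # (ordinate -> abscissa) is single-valued.
    ys_u, idx = np.unique(ys, return_index=True)
    xs_u = xs[idx].astype(float)
    # First derivative of the second relation curve.
    derivative = np.gradient(xs_u, ys_u.astype(float))
    i = int(np.argmax(np.abs(derivative)))                 # extremum of largest magnitude
    site = (int(xs_u[i]), int(ys_u[i]))                    # deflection site (x, y)
    direction = "right" if derivative[i] > 0 else "left"   # sign gives the direction
    return site, direction
```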
In one embodiment, the trained object detection model comprises: the system comprises a first CBL module, a second CBL module, a first C3 module, a third CBL module, a second C3 module, a fourth CBL module, a third C3 module, a fifth CBL module, a fourth C3 module, a global perception module, a multi-scale fusion module, a sixth CBL module, a first upsampling module, a first fusion module, a fifth C3 module, a seventh CBL module, a second upsampling module, a second fusion module, a sixth C3 module, an eighth CBL module, a third upsampling module, a third fusion module, a seventh C3 module and a first convolution module which are sequentially connected;
the global perception module is used for receiving the image result input by the fourth C3 module and capturing global information of the image result.
In one embodiment, the global perception module comprises: a plurality of encoder layers, the plurality of encoder layers being connected in series with one another, each of the encoder layers comprising: a first sub-layer connection structure and a second sub-layer connection structure;
wherein the first sublayer connection structure comprises a multi-head self-attention sublayer, a first normalization layer and a first residual connection,
The second sublayer connecting structure comprises a feedforward full-connection sublayer, a second normalization layer and a second residual connection;
the image result input to the global perception module by the fourth C3 module is used as the input of the first encoder layer, and the output of each encoder layer is used as the input of the next encoder layer; the output of the last encoder layer in the global perception module is taken as the output of the global perception module;
the data processing flow inside the encoder layer comprises the following steps:
the input of the encoder layer is fed into the multi-head self-attention sublayer; the output of the multi-head self-attention sublayer is fed into the first normalization layer, and the encoder layer's input is also fed into the first normalization layer through the first residual connection; the output of the first normalization layer is fed into the feedforward fully-connected sublayer; the output of the feedforward fully-connected sublayer is fed into the second normalization layer, and the output of the first normalization layer is also fed into the second normalization layer through the second residual connection; the output of the second normalization layer is taken as the output of the corresponding encoder layer.
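The encoder layer just described matches the standard post-norm Transformer encoder; a minimal PyTorch sketch is given below, in which the embedding width, head count, feedforward width and layer count are illustrative assumptions rather than values given by the embodiment.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Sketch of one encoder layer of the global perception module,
    following the data flow described above."""
    def __init__(self, dim: int = 256, num_heads: int = 8, ffn_dim: int = 1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)   # first normalization layer
        self.ffn = nn.Sequential(        # feedforward fully-connected sublayer
            nn.Linear(dim, ffn_dim), nn.ReLU(), nn.Linear(ffn_dim, dim))
        self.norm2 = nn.LayerNorm(dim)   # second normalization layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First sublayer connection: multi-head self-attention + residual + norm.
        y = self.norm1(x + self.attn(x, x, x, need_weights=False)[0])
        # Second sublayer connection: feedforward + residual + norm.
        return self.norm2(y + self.ffn(y))

# The global perception module stacks several encoder layers in series; the
# fourth C3 module's feature map, flattened to a patch sequence, is its input.
# A stack of 2 layers is an assumption for this sketch.
global_perception = nn.Sequential(*[EncoderLayer() for _ in range(2)])
```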
In an embodiment, in the predicting step, the target detection model predicts the preprocessed image to obtain a corresponding prediction result, including:
the first convolution module outputs a corresponding first feature map, and the first feature map is predicted to obtain a preliminary prediction result;
the preliminary prediction result is processed with a non-maximum suppression function applicable to rotated rectangles to obtain the corresponding prediction result;
the trained target detection model is obtained by training with a loss function for Gaussian bounding boxes;
the calculation flow of the intersection-over-union ratio utilized by the non-maximum suppression function is as follows:
determining the intersection points of the rotated rectangle corresponding to the preliminary prediction result and the corresponding other rotated rectangle;
determining which corner points of each rotated rectangle lie inside the other rotated rectangle;
and forming a polygon from the intersection points and the corner points that lie inside both rotated rectangles, taking the area of the polygon as the intersection of the two rotated rectangles, taking the sum of the areas of the two rotated rectangles minus the intersection as their union, and calculating the intersection-over-union ratio from the intersection and the union.
In one embodiment, the preprocessing step includes: acquiring an X-ray image of a lithium battery pole piece, preprocessing the X-ray image to obtain a preprocessed image, and comprising the following steps:
performing binarization processing on the X-ray image to obtain a binarized image; performing binary image inversion on the binarized image to obtain an inverted binary image; obtaining the minimum circumscribed rectangle corresponding to the maximum external contour of the inverted binary image; and cutting out the minimum circumscribed rectangle according to preset parameters to obtain the ROI region;
enhancing the contrast of the dark region in the ROI region to obtain an enhanced contrast image;
processing the contrast-enhanced image by adopting a multi-scale retina image enhancement algorithm to obtain a texture feature enhanced image;
filtering the texture feature enhanced image to obtain a filtered image;
carrying out nonlinear brightness enhancement on the filtered image to obtain a nonlinear brightness enhanced image;
and carrying out normalization processing on the nonlinear brightness enhancement image to obtain a normalized image, and taking the normalized image as the preprocessed image.
According to a second aspect, in one embodiment, a detection device for a lithium battery pole piece is provided. The detection device comprises:
a memory for storing a program for constructing an object detection model as described in any of the embodiments of the present application;
a processor, configured to implement the detection method according to any one of the embodiments of the present application by executing the program stored in the memory.
According to a third aspect, a computer-readable storage medium is provided in one embodiment. The computer-readable storage medium includes a program. The program can be executed by a processor to implement the detection method as described in any of the embodiments herein.
The beneficial effects of this application are:
the detection method of the lithium battery pole piece provided by the application comprises the following steps: acquiring an X-ray image of a lithium battery pole piece, and preprocessing the X-ray image to obtain a preprocessed image; inputting the preprocessed image into a trained target detection model, and predicting the preprocessed image by the target detection model to obtain a corresponding prediction result; the lithium battery pole piece comprises a cathode pole piece and an anode pole piece; the prediction result comprises: the ROI area where the sharp point of the cathode plate is located and the ROI area where the sharp point of the anode plate is located; and processing the ROI area to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece. The device provided by the application can realize the method.
Drawings
Fig. 1 is a flow chart of a method for detecting a lithium battery pole piece according to an embodiment;
FIG. 2 is a flow chart of preprocessing an X-ray image to obtain a preprocessed image according to an embodiment;
FIG. 3 is a block diagram of a target detection model according to an embodiment;
FIG. 4 is a schematic diagram of a first sub-layer connection structure and a second sub-layer connection structure of an embodiment;
FIG. 5 is a schematic view of a rotating rectangle of an embodiment;
FIG. 6 is a schematic diagram of the cross-over ratio of a rotating rectangle according to one embodiment;
FIG. 7 is a schematic flow chart of determining the sharp point of the cathode sheet and the sharp point of the anode sheet according to one embodiment;
FIG. 8 is a flow chart of the steps of identification of one embodiment;
fig. 9 is a schematic block diagram of a detection device for a lithium battery pole piece according to an embodiment.
Detailed Description
The invention will be described in further detail below with reference to the drawings by means of specific embodiments, wherein like elements in different embodiments are given like reference numerals. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of the features may be omitted, or replaced by other elements, materials, or methods, in different situations. In some instances, certain operations related to the present application are not shown or described in the specification in order to avoid obscuring the core of the present application; a detailed description of these operations is unnecessary, since a person skilled in the art can fully understand them from the description herein together with general knowledge in the art.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of components herein, e.g. "first", "second", etc., is used merely to distinguish the described objects and does not carry any sequential or technical meaning. The terms "coupled" and "connected", as used herein, encompass both direct and indirect connection, unless otherwise indicated.
The technical scheme of the present application will be described in detail with reference to examples.
Referring to fig. 1, the present application provides a method for detecting a lithium battery pole piece, which includes:
step S100 of preprocessing: acquiring an X-ray image of a lithium battery pole piece, and preprocessing the X-ray image to obtain a preprocessed image;
step S200 of prediction: inputting the preprocessed image into a trained target detection model, and predicting the preprocessed image by the target detection model to obtain a corresponding prediction result; the lithium battery pole piece comprises a cathode pole piece and an anode pole piece; the prediction result comprises: a ROI region where the sharp point of the cathode pole piece is located and a ROI region where the sharp point of the anode pole piece is located;
The sharp point is an upper end point and a lower end point of a coating range of the anode pole piece or the cathode pole piece in the Y direction; the Y direction is a direction perpendicular to the horizontal direction in the preprocessed image;
step S300 of identification: the ROI region is processed to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece.
A lithium battery pole piece generally consists of a substrate, a coating and a separator. The required coating is applied onto the substrate, and the upper and lower ends of the coating range are the pole piece sharp points.
It should be noted that, in the step S100 of preprocessing, the "obtaining the X-ray image of the lithium battery pole piece" belongs to the prior art in the field, and therefore, a description thereof is not repeated here.
In some embodiments, please refer to fig. 2, in the preprocessing step S100: acquiring an X-ray image of a lithium battery pole piece, preprocessing the X-ray image to obtain a preprocessed image, and comprising the following steps:
step S110: binarizing the X-ray image to obtain a binarized image,
performing binary image inversion processing on the binarized image to obtain a binary image inversion processed image, obtaining a minimum circumscribed rectangle corresponding to the maximum circumscribed outline of the binary image inversion processed image, and intercepting the minimum circumscribed rectangle according to preset parameters to obtain an ROI (region of interest);
Step S120: enhancing the contrast of the dark region in the ROI region to obtain an enhanced contrast image;
step S130: processing the contrast-enhanced image by adopting a multi-scale retina image enhancement algorithm to obtain a texture feature enhanced image;
step S140: filtering the texture feature enhanced image to obtain a filtered image;
step S150: carrying out nonlinear brightness enhancement on the filtered image to obtain a nonlinear brightness enhanced image;
step S160: and carrying out normalization processing on the nonlinear brightness enhancement image to obtain a normalized image, and taking the normalized image as a preprocessed image.
In some embodiments, in step S110, the OTSU method may be used to binarize the X-ray image to obtain a binarized image; binary image inversion is then performed on the binarized image, and the minimum circumscribed rectangle corresponding to the maximum external contour of the inverted image is obtained. The Otsu method (OTSU) is an algorithm for determining the binarization segmentation threshold of an image. The minimum circumscribed rectangle is a rotated rectangle with an angle, but in practice, to improve algorithm performance, the minimum circumscribed upright (axis-aligned) rectangle can be used as an approximation. Binary image inversion changes black pixels to white and white pixels to black; it is one of the basic operations in image processing. The minimum circumscribed rectangle is cut out according to preset parameters to obtain the ROI region, where the preset parameters include: the pre-offsets in the X and Y directions, and the width and height of the ROI region. The pre-offset shifts the minimum circumscribed upright rectangle by a certain amount to avoid cutting out an incomplete ROI region; the fixed width and height ensure that the cut-out ROI regions have a uniform aspect ratio. ROI (Region of Interest) denotes a region of interest, i.e. a region to be processed that is outlined in the processed image as a rectangle, circle, ellipse, irregular polygon or other shape; ROI regions can be obtained through various operators and functions, and are widely used in fields such as heat maps, face recognition and image segmentation.
It should be noted that, in the step S110, the step of obtaining the minimum circumscribed rectangle corresponding to the maximum circumscribed contour of the image after the binary image inversion processing belongs to the prior art in the field, and therefore a description thereof is not repeated here.
It should be noted that, the preset parameters may be adjusted according to actual needs by those skilled in the art, and the preset parameters are not limited herein. The "obtaining the ROI area by intercepting the minimum bounding rectangle according to the preset parameters" belongs to the prior art in the field, so that a detailed description thereof is omitted here.
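A minimal OpenCV sketch of step S110 follows; the 8-bit conversion before Otsu thresholding and the clamping of the pre-offset at the image border are assumptions made for this illustration.

```python
import cv2
import numpy as np

def extract_roi(xray: np.ndarray, dx: int, dy: int, w: int, h: int) -> np.ndarray:
    """Sketch of preprocessing step S110. `dx`/`dy` are the preset X/Y
    pre-offsets and `w`/`h` the preset ROI width and height."""
    # Otsu binarization; the 16-bit detector image is mapped to 8 bits first,
    # which is an assumption, since cv2's Otsu threshold requires 8-bit input.
    img8 = cv2.normalize(xray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    inverted = cv2.bitwise_not(binary)                    # binary image inversion
    # Minimum circumscribed upright rectangle of the largest external contour
    # (the axis-aligned approximation mentioned above).
    contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, _, _ = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Apply the pre-offset and cut out a region of fixed width and height so
    # that every ROI has a uniform aspect ratio.
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    return xray[y0:y0 + h, x0:x0 + w]
```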
In some embodiments, in step S120, the contrast of the dark region in the ROI region may be enhanced using gamma transformation to obtain an enhanced contrast image. The gamma transformation (Gamma Transformation) is a commonly used nonlinear gray scale transformation method that can adjust the contrast of an image. The gamma transformation is based on a non-linear relationship between illumination intensity and human eye perception.
In some embodiments, in step S130, a multi-scale retinal image enhancement algorithm may be used to process the enhanced contrast image to obtain the texture feature enhanced image. The multi-scale retina image enhancement algorithm is an algorithm which can realize the compression of the dynamic range of the image and can better maintain the consistency of the color sense. The general procedure of the multi-scale retina image enhancement algorithm includes:
1) The image output by the flat panel detector is a 16-bit single-channel image, denoted I; I is converted into 32-bit space and a logarithmic transformation is applied to obtain LogI;
2) Gaussian blur is applied to the original image (e.g., the enhanced-contrast image) at three scales, 30, 150 and 300, each scale being given a weight of 1/3; after Gaussian blur at each scale, a logarithmic transformation is likewise applied, yielding LogI30, LogI150 and LogI300;
3) The blurred log-image at each scale is subtracted in turn from the original LogI according to the above weights, i.e., NewLogI = LogI − W30·LogI30 − W150·LogI150 − W300·LogI300;
4) The latest NewLogI is restored according to the set gain coefficient and bias to obtain the processing result (i.e., the texture feature enhanced image).
It should be noted that, the multi-scale retinal image enhancement algorithm belongs to the prior art in the field, so detailed procedures of the multi-scale retinal image enhancement algorithm are not described here.
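The four steps above can be sketched as follows; the gain coefficient and bias are left as parameters because the text does not fix their values, and the final rescaling back to 16 bits is an assumption.

```python
import cv2
import numpy as np

def msr(img16: np.ndarray, gain: float = 1.0, bias: float = 0.0) -> np.ndarray:
    """Sketch of the multi-scale retina image enhancement step above, for a
    16-bit single-channel detector image I."""
    i32 = img16.astype(np.float32) + 1.0                   # 32-bit space; +1 avoids log(0)
    log_i = np.log(i32)
    new_log = log_i.copy()
    for sigma, w in ((30, 1 / 3), (150, 1 / 3), (300, 1 / 3)):  # three scales, weight 1/3 each
        blurred = cv2.GaussianBlur(i32, (0, 0), sigma)     # Gaussian blur at this scale
        new_log -= w * np.log(blurred + 1.0)               # NewLogI = LogI - sum(W_i * LogI_i)
    out = gain * new_log + bias                            # restore with gain and bias
    return cv2.normalize(out, None, 0, 65535, cv2.NORM_MINMAX).astype(np.uint16)
```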
In some embodiments, in step S140, the texture feature enhanced image is filtered to obtain a filtered image. The general process of the filtering treatment is as follows:
1) A filter kernel of size 5×1 is constructed, with parameters {5, 0, 0, 0, −5}; the texture feature enhanced image is filtered a first time with this kernel to obtain a first filtered image fm1;
2) The filter kernel is multiplied by −1 to reverse its direction; meanwhile, the original image (i.e., the texture feature enhanced image) is flipped along the Y axis to obtain a Y-axis-flipped image, which is filtered a second time with the direction-reversed kernel to obtain a second filtered image fm2. Flipping along the Y axis means flipping the image horizontally about its central Y axis, i.e., the left part goes to the right and the right part goes to the left;
3) The second filtered image fm2 is shifted 3 pixels to the left to eliminate the positional deviation caused by filtering, and then added to the first filtered image fm1 to obtain the filtered image fm3;
4) Erosion is applied to the filtered image fm3 to remove noise interference, and box filtering is applied once more to make the resulting image (i.e., the filtered image) smoother and more uniform. Box filtering (i.e. block filtering) is a special case of mean filtering; its principle is similar to mean filtering, except that every pixel in the box filter has the same, equal weight.
Note that fm1 obtained by the first filtering and fm2 obtained by the second filtering are different in the direction of the filter kernel and in the direction of the horizontal shift.
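A sketch of this filtering flow is given below; treating the 5×1 kernel as horizontal (so that the 3-pixel shift is horizontal) and flipping fm2 back before the addition are assumptions made for this illustration.

```python
import cv2
import numpy as np

def edge_filter(img: np.ndarray) -> np.ndarray:
    """Sketch of the filtering step S140."""
    kernel = np.array([[5, 0, 0, 0, -5]], dtype=np.float32)   # parameters {5,0,0,0,-5}
    fm1 = cv2.filter2D(img, cv2.CV_32F, kernel)               # first filtering
    flipped = cv2.flip(img, 1)                                # flip along the Y axis
    fm2 = cv2.filter2D(flipped, cv2.CV_32F, -kernel)          # reversed kernel, second filtering
    fm2 = np.roll(cv2.flip(fm2, 1), -3, axis=1)               # assumption: undo flip, shift 3 px left
    fm3 = fm1 + fm2
    fm3 = cv2.erode(fm3, np.ones((3, 3), np.uint8))           # erosion to remove noise
    return cv2.boxFilter(fm3, -1, (5, 5))                     # box filtering for smoothness
```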
In some embodiments, in step S150, nonlinear brightness enhancement is performed on the filtered image with a window size of 100 and a step coefficient of 0.2 to obtain a nonlinear brightness-enhanced image, so as to avoid inconsistent light intensity in the X-ray image caused by material absorption. Specifically, the nonlinear brightness enhancement may be performed as follows: because different areas of the original image differ in brightness, different weights can be applied to areas of different brightness, so that bright areas of the original image become brighter after enhancement while dark areas become darker.
In some embodiments, in step S160, the nonlinear brightness-enhanced image may be normalized (e.g., Min-Max normalized) to obtain an 8-bit image (i.e., a normalized image). In some embodiments, the longest side of the 8-bit image may also be scaled to the long-side size accepted by the model, with the short side scaled proportionally and the leftover area filled with (127, 127, 127); stacking the image into 3 channels yields an 8-bit 3-channel image (i.e., a normalized image). Either the 8-bit image or the 8-bit 3-channel image may be used as the preprocessed image, completing the preprocessing. Min-Max normalization, also known as dispersion normalization, is a linear transformation of the raw data that maps the resulting values into [0, 1].
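A sketch of step S160 follows; the model input size of 640 is an illustrative assumption.

```python
import cv2
import numpy as np

def to_model_input(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Sketch of step S160: Min-Max normalization to an 8-bit image,
    followed by the letterbox scaling described above."""
    lo, hi = float(img.min()), float(img.max())
    img8 = ((img - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)   # Min-Max to [0, 255]
    scale = size / max(img8.shape[:2])                                 # scale the longest side
    h, w = (int(round(d * scale)) for d in img8.shape[:2])
    resized = cv2.resize(img8, (w, h))
    canvas = np.full((size, size), 127, dtype=np.uint8)                # fill excess with 127
    canvas[:h, :w] = resized
    return np.stack([canvas] * 3, axis=-1)                             # 8-bit, 3-channel image
```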
In some embodiments, in step S200 of predicting, the Y direction is the direction perpendicular to the horizontal direction in the preprocessed image. For example, the Y direction may be determined as follows: in a digital image (such as the preprocessed image), the Y direction is the vertical direction, i.e., from the top of the image to the bottom or vice versa; when the Y coordinate of a point increases, the point moves downward, and when it decreases, the point moves upward; the origin of the Y axis is typically located at the upper left corner of the digital image, i.e., the minimum Y coordinate corresponds to the top of the image.
It should be noted that the target detection model of the present application differs from the existing YOLOv5 model in two respects: first, the "global perception module" described above is added between the fourth C3 module and the multi-scale fusion module of the existing YOLOv5 model; second, a dimension for the angle information is added in the final decoding layer. The other module structures inside the target detection model are therefore not described in detail here. A person skilled in the art can determine the actual internal structure of the target detection model according to actual requirements, for example by adding a second and a third convolution module that function similarly to the first convolution module. The existing YOLOv5 model is a single-stage target detection algorithm whose network structure is divided into input, backbone, neck and head; it adds several improvements on the basis of YOLOv4, greatly increasing both the speed and the accuracy of the algorithm.
It should be noted that the present application uses the encoder of the existing Transformer model as the "global perception module", and the encoding result of the global perception module is used directly as the input of the next module (i.e. the multi-scale fusion module). The Transformer model is a neural network model based on a self-attention mechanism and is used for processing sequence data. Compared with conventional recurrent neural network models, the Transformer has better parallel performance and shorter training time. The core of the Transformer is the self-attention mechanism, which assigns a weight to each position in the input sequence and then outputs these weighted position vectors. The Transformer consists of an encoder and a decoder, each composed of multiple attention-mechanism modules and feedforward neural network modules.
In some embodiments, in the step S200 of prediction, please refer to fig. 3, the trained target detection model includes: the system comprises a first CBL module, a second CBL module, a first C3 module, a third CBL module, a second C3 module, a fourth CBL module, a third C3 module, a fifth CBL module, a fourth C3 module, a global perception module, a multi-scale fusion module, a sixth CBL module, a first upsampling module, a first fusion module, a fifth C3 module, a seventh CBL module, a second upsampling module, a second fusion module, a sixth C3 module, an eighth CBL module, a third upsampling module, a third fusion module, a seventh C3 module and a first convolution module which are sequentially connected; the global perception module is used for receiving the image result input by the fourth C3 module and capturing global information of the image result.
In some embodiments, the global perception module comprises a plurality of encoder layers connected in series with each other, each encoder layer comprising a first sub-layer connection structure and a second sub-layer connection structure. Referring to fig. 4, the first sub-layer connection structure includes a multi-head self-attention sublayer, a first normalization layer and a first residual connection, and the second sub-layer connection structure includes a feedforward fully-connected sublayer, a second normalization layer and a second residual connection. The image result input to the global perception module by the fourth C3 module is used as the input of the first encoder layer, and the output of each encoder layer is used as the input of the next encoder layer; the output of the last encoder layer in the global perception module is taken as the output of the global perception module. The data processing flow inside an encoder layer is as follows: the input of the encoder layer is fed into the multi-head self-attention sublayer; the output of the multi-head self-attention sublayer is fed into the first normalization layer, and the encoder layer's input is also fed into the first normalization layer through the first residual connection; the output of the first normalization layer is fed into the feedforward fully-connected sublayer; the output of the feedforward fully-connected sublayer is fed into the second normalization layer, and the output of the first normalization layer is also fed into the second normalization layer through the second residual connection; the output of the second normalization layer is taken as the output of the corresponding encoder layer.
In some embodiments, referring to fig. 3, the output end of the first CBL module is connected to the input end of the second CBL module; the output end of the second CBL module is connected to the input end of the first C3 module; the output end of the first C3 module is connected to the input end of the third CBL module; the output end of the third CBL module is connected to the input end of the second C3 module; the output end of the second C3 module is connected to the input end of the fourth CBL module; the output end of the fourth CBL module is connected to the input end of the third C3 module; the output end of the third C3 module is connected to the input end of the fifth CBL module; the output end of the fifth CBL module is connected to the input end of the fourth C3 module; the output end of the fourth C3 module is connected to the input end of the global perception module; the output end of the global perception module is connected to the input end of the multi-scale fusion module; the output end of the multi-scale fusion module is connected to the input end of the sixth CBL module; the output end of the sixth CBL module is connected to the input end of the first up-sampling module; the output end of the first up-sampling module is connected to the input end of the first fusion module; the output end of the first fusion module is connected to the input end of the fifth C3 module; the output end of the fifth C3 module is connected to the input end of the seventh CBL module; the output end of the seventh CBL module is connected to the input end of the second up-sampling module; the output end of the second up-sampling module is connected to the input end of the second fusion module; the output end of the second fusion module is connected to the input end of the sixth C3 module; the output end of the sixth C3 module is connected to the input end of the eighth CBL module; the output end of the eighth CBL module is connected to the input end of the third up-sampling module; the output end of the third up-sampling module is connected to the input end of the third fusion module; and the output end of the third fusion module is connected to the input end of the seventh C3 module. The output end of the first C3 module is also connected to the input end of the third fusion module, the output end of the second C3 module is also connected to the input end of the second fusion module, and the output end of the third C3 module is also connected to the input end of the first fusion module. The output end of the seventh C3 module is connected to the input end of the first convolution module, and the first convolution module is used for outputting the corresponding first feature map. Wherein,
The input end of the first CBL module is used for receiving target image data information of a target to be detected, and the second CBL module and the third CBL module are respectively used for increasing receptive fields; the first C3 module, the second C3 module, the third C3 module, the fourth C3 module, the fifth C3 module, the sixth C3 module and the seventh C3 module are respectively used for carrying out residual processing on the input characteristic information; the fourth CBL module, the fifth CBL module, the sixth CBL module, the seventh CBL module and the eighth CBL module are respectively used for extracting the characteristics of the input data information so as to output the corresponding characteristic information; the multiscale fusion module is used for multiscale fusion of the input characteristic information so as to increase the receptive field; the first up-sampling module, the second up-sampling module and the third up-sampling module are respectively used for up-sampling the input characteristic information so as to enlarge the resolution of the image; the first fusion module, the second fusion module and the third fusion module are respectively used for carrying out feature fusion on the input feature images so as to splice the input feature images on the channel; the first convolution module is used for outputting first characteristic information of target image data information.
In some embodiments, in the predicting step S200, the target detection model predicts the preprocessed image to obtain a corresponding prediction result, including:
The first convolution module outputs a corresponding first feature map, predicts the first feature map and obtains a preliminary prediction result;
processing the preliminary prediction result by adopting a non-maximum suppression function suitable for the rotating rectangle to obtain a corresponding prediction result;
the trained target detection model is obtained by training a loss function for a Gaussian bounding box;
the calculation flow of the intersection-over-union ratio utilized by the non-maximum suppression function is as follows:
determining the intersection points of the rotated rectangle corresponding to the preliminary prediction result and the corresponding other rotated rectangle;
determining which corner points of each rotated rectangle lie inside the other rotated rectangle;
and forming a polygon from the intersection points and the corner points that lie inside both rotated rectangles, taking the area of the polygon as the intersection of the two rotated rectangles, taking the sum of the areas of the two rotated rectangles minus the intersection as their union, and calculating the intersection-over-union ratio from the intersection and the union.
In step S200 of the above prediction, the first feature map may be predicted in an Anchor-Free manner (i.e., by an anchor-free detector) to obtain the preliminary prediction result. Existing deep-learning-based detectors can be roughly divided into anchor-free and anchor-based categories. An anchor-free detector directly learns object likelihood and bounding box coordinates without anchor references. Compared with anchor-based detectors, anchor-free detectors dispense with the hyper-parameters and complex calculations related to anchors, making the training process considerably simpler. Specifically, in the existing YOLOv5 model, the decoded outputs bx, by, bw and bh of each preset grid point are defined relative to the center point coordinates (x and y) of the preset anchor frame corresponding to that grid point, together with the offsets of the width and the length. The target detection model of the present application, built on the existing YOLOv5 model, instead defines the decoded outputs of each grid point as bx, by, ls, ss and θ, where bx is redefined as the offset relative to the grid point's center coordinate x, independent of any preset anchor frame's center point; by is redefined as the offset relative to the grid point's center coordinate y; ls and ss are redefined as the long side and short side of the prediction frame, respectively; and θ is the deflection angle of the prediction frame. In particular, referring to fig. 5, the prediction frame adopted by the target detection model of the present application is a rotated rectangle, where θ is the angle by which the lower-left corner of the rotated rectangle rotates counterclockwise from the x direction (i.e., the direction of the x axis). The value range of θ is [−π/2, π/2].
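The following sketch shows the meaning of the five decoded outputs for a single grid cell; the sigmoid-based activations mirror YOLOv5-style decoding and are assumptions, since the embodiment does not specify them.

```python
import numpy as np

def sigmoid(v: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-v))

def decode_cell(raw: np.ndarray, gx: int, gy: int, stride: int):
    """Sketch of the anchor-free decoding of one grid cell's five raw
    outputs into (bx, by, ls, ss, theta)."""
    bx = (2 * sigmoid(raw[0]) - 0.5 + gx) * stride   # offset from the grid point x
    by = (2 * sigmoid(raw[1]) - 0.5 + gy) * stride   # offset from the grid point y
    ls = (2 * sigmoid(raw[2])) ** 2 * stride         # long side of the prediction frame
    ss = (2 * sigmoid(raw[3])) ** 2 * stride         # short side of the prediction frame
    theta = (sigmoid(raw[4]) - 0.5) * np.pi          # deflection angle in [-pi/2, pi/2]
    return bx, by, ls, ss, theta
```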
In some embodiments, referring to fig. 3, each channel in the feature map output by the fourth C3 module may be regarded as a 20×20 patch (i.e., subset), and the set of patches may be input to the global awareness module for self-attention operation. The self-attention mechanism adopted by the global perception module can better capture global information of the image, so that higher-layer features obtain a larger receptive field, interference of random noise of the image on a detection result is reduced, and stability of the detection result is improved.
In some embodiments, in the predicting step S200, the preliminary prediction result is processed with a non-maximum suppression function applicable to rotated rectangles to obtain the corresponding prediction result. The existing YOLOv5 model adopts the conventional non-maximum suppression function (NMS), whereas the target detection model of the present application adopts a non-maximum suppression function applicable to rotated rectangles (NMS_Rotated). Specifically, the original axis-aligned rectangle intersection-over-union (IoU) is calculated as follows: determine which of the four corner points of each upright rectangle lie inside the other rectangle, compute the intersection from the new polygon formed by the qualifying candidate points, and compute the union from the original upright rectangles, thereby obtaining the corresponding intersection-over-union ratio. For the target detection model of the present application, fig. 6 illustrates three cases of calculating the intersection-over-union ratio of rotated rectangles.
The calculation flow of the intersection-over-union ratio used by the non-maximum suppression function adopted in the present application is as follows:
determining the intersection points of the rotated rectangle corresponding to the preliminary prediction result and the corresponding other rotated rectangle (for the left-hand case in fig. 6, i.e. rotated rectangles ABCD and EFGH on the left of fig. 6, the intersection points are N, H, M and B; for the middle case, rotated rectangles ABCD and EFGH in the middle of fig. 6, the intersection points are I, J, L and K; for the right-hand case, rotated rectangles ABCD and EFGH on the right of fig. 6, the intersection points are I, P, O, N, M, L, K and J);
determining which corner points of each rotated rectangle lie inside the other rotated rectangle;
and forming a polygon from the intersection points and the corner points that lie inside both rotated rectangles, taking the area of the polygon as the intersection of the two rotated rectangles, taking the sum of the areas of the two rotated rectangles minus the intersection as their union, and calculating the intersection-over-union ratio from the intersection and the union.
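A compact way to realize this calculation is OpenCV's rotated-rectangle intersection, as sketched below; an NMS_Rotated implementation would then sort candidate boxes by confidence and suppress any box whose rotated IoU with an already-kept box exceeds a threshold.

```python
import cv2

def rotated_iou(r1, r2) -> float:
    """Sketch of the rotated-rectangle intersection-over-union above. Each
    rectangle is given as ((cx, cy), (w, h), angle_in_degrees)."""
    status, pts = cv2.rotatedRectangleIntersection(r1, r2)
    if status == cv2.INTERSECT_NONE or pts is None:
        return 0.0
    # The intersection points form a polygon; its area is the intersection.
    inter = cv2.contourArea(cv2.convexHull(pts))
    # Union = sum of the two rectangle areas minus the intersection.
    union = r1[1][0] * r1[1][1] + r2[1][0] * r2[1][1] - inter
    return float(inter / union) if union > 0 else 0.0
```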
In some embodiments, in step S200 of predicting, the trained target detection model is trained using a loss function for Gaussian bounding boxes. The existing YOLOv5 model uses CIoU_loss for upright rectangles. CIoU_loss is a loss function for measuring the distance between two bounding boxes in target detection; it is an improvement on the intersection-over-union (IoU) and reflects the degree of similarity between two bounding boxes more accurately. The target detection model of the present application instead employs ProbIoU_loss (Probabilistic IoU) for rotated rectangles, a new way of computing target similarity. Specifically, ProbIoU_loss was originally a loss function for Gaussian bounding boxes (Gaussian Bounding Boxes), but its parameters are consistent with those of rotated bounding boxes (Oriented Bounding Boxes). Moreover, ProbIoU (an existing loss function), being based on the Hellinger distance (Hellinger Distance), has the following properties: it is differentiable with respect to all its parameters, the Hellinger distance satisfies all the axioms of a distance metric, and the loss function is invariant to object scale; these properties can accelerate network convergence and improve model accuracy. The details are as follows:
(1) Obtain the expression of the Gaussian bounding box. To determine a two-dimensional Gaussian distribution over a 2-dimensional region, the mean μ and covariance matrix Σ must be calculated, where μ = (x0, y0)ᵀ and the covariance matrix Σ is calculated as shown in step (3) below. In a regression task such as target detection, the Gaussian bounding box is parameterized as (x0, y0, a′, b′, θ);
(2) The conversion from rotated box to Gaussian box follows this assumption: the target region is a 2-dimensional binary region Ω obeying a uniform probability distribution, whose mean μ and covariance matrix Σ can be calculated as

$$\mu = \frac{1}{N}\iint_{\Omega} \begin{pmatrix} x \\ y \end{pmatrix} dx\,dy,\qquad \Sigma = \frac{1}{N}\iint_{\Omega} \begin{pmatrix} (x-x_0)^2 & (x-x_0)(y-y_0) \\ (x-x_0)(y-y_0) & (y-y_0)^2 \end{pmatrix} dx\,dy,$$

where N denotes the area of the region Ω;
(3) Converting a rotated bounding box (OBB) into a Gaussian bounding box (GBB) requires calculating (a′, b′, θ); the variances a′ and b′ can be obtained by converting the rotated box into a horizontal box (for a box of width w and height h, a′ = w²/12 and b′ = h²/12), and the covariance matrix can be calculated as

$$\Sigma = R\begin{pmatrix} a' & 0 \\ 0 & b' \end{pmatrix} R^{\mathsf T},\qquad R=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix};$$
(4) To calculate the similarity between different Gaussian bounding boxes (GBBs), the Bhattacharyya coefficient (Bhattacharyya Coefficient, BC) is used first; the Bhattacharyya coefficient between two probability density functions p(x) and q(x) is calculated as

$$BC(p,q)=\int \sqrt{p(x)\,q(x)}\,dx,$$

where BC(p, q) ∈ [0, 1], and BC(p, q) = 1 if and only if the two distributions are identical.
Based on the above BC (p, q), the pasteurization distance (Bhattacharyya Distance, BD) between the different distributions can be obtained, and BD between the two probability density functions p (x) and q (x) is calculated according to the following formula:
$$B_D(p, q) = -\ln BC(p, q);$$
When $p \sim \mathcal{N}(\mu_1, \Sigma_1)$ and $q \sim \mathcal{N}(\mu_2, \Sigma_2)$, as in the actual 2-dimensional vector-and-matrix setting of target detection, the Bhattacharyya distance can be calculated by the following formula:
$$B_D = \frac{1}{8}(\mu_2-\mu_1)^{T}\Sigma^{-1}(\mu_2-\mu_1) + \frac{1}{2}\ln\frac{\det\Sigma}{\sqrt{\det\Sigma_1\,\det\Sigma_2}},\qquad \Sigma=\frac{\Sigma_1+\Sigma_2}{2};$$
(5) Since the Bhattacharyya distance does not satisfy the triangle inequality, it is not a true distance; therefore, to represent a true distance, the Hellinger distance (Hellinger Distance, HD) is adopted, whose formula is as follows:
$$HD(p, q) = \sqrt{1 - BC(p, q)};$$
wherein $HD(p, q) \in [0, 1]$, and $HD(p, q) = 0$ if and only if the two distributions are identical;
(6) Based on the Hellinger distance, the specific calculation formula of the Gaussian-distribution similarity measure ProbIoU is as follows:
$$\mathrm{ProbIoU}(p, q) = 1 - HD(p, q);$$
(7) Finally, the localization loss functions; assuming that the predicted Gaussian bounding box GBB is $p = (x_1, y_1, a_1, b_1, c_1)$ and the ground-truth Gaussian bounding box GBB is $q = (x_2, y_2, a_2, b_2, c_2)$, the loss functions are as follows:
$$L_1(p, q) = HD(p, q) = \sqrt{1 - e^{-B_D(p,q)}},\qquad L_2(p, q) = -\ln\bigl(1 - HD(p, q)^{2}\bigr) = B_D(p, q);$$
When the predicted Gaussian bounding box GBB is far from the ground-truth Gaussian bounding box GBB, the value of the $L_1$ loss function approaches 1, the gradient generated during training is small, and convergence is slow; the $L_2$ loss function avoids this problem but has a weaker geometric relationship with the intersection-over-union (IoU). Therefore, the target detection model of the present application is trained with the above $L_2$ loss function in the initial stage of training and switches to the $L_1$ loss function after the model tends to converge, so as to improve the detection precision of the target detection model.
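A numerical sketch of steps (3)-(7) above is given below; it follows the published ProbIoU formulation, and the function names and the epsilon guard are assumptions rather than the patent's own implementation.

```python
import numpy as np

def obb_to_gbb(x, y, w, h, theta):
    # Step (3): rotated box -> Gaussian bounding box. The axis-aligned
    # variances of a uniform distribution over a w x h box are w^2/12 and
    # h^2/12; rotating by theta (radians) gives the covariance entries.
    a0, b0 = w * w / 12.0, h * h / 12.0
    c, s = np.cos(theta), np.sin(theta)
    return (x, y, a0 * c**2 + b0 * s**2, a0 * s**2 + b0 * c**2, (a0 - b0) * c * s)

def probiou_losses(p, q, eps=1e-7):
    # p, q: (x, y, a, b, c) with covariance Sigma = [[a, c], [c, b]].
    x1, y1, a1, b1, c1 = p
    x2, y2, a2, b2, c2 = q
    d = np.array([x1 - x2, y1 - y2])
    S1 = np.array([[a1, c1], [c1, b1]])
    S2 = np.array([[a2, c2], [c2, b2]])
    S = 0.5 * (S1 + S2)
    # Bhattacharyya distance between the two 2-D Gaussians (step 4).
    bd = d @ np.linalg.solve(S, d) / 8.0 + 0.5 * np.log(
        np.linalg.det(S) / (np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)) + eps) + eps)
    bc = np.exp(-bd)                       # Bhattacharyya coefficient
    hd = np.sqrt(max(1.0 - bc, 0.0))       # Hellinger distance (step 5)
    l1 = hd                                # L1: used once training converges
    l2 = -np.log(max(1.0 - hd * hd, eps))  # L2 (= B_D): used early in training
    return l1, l2

# Example: two boxes with the same centre, rotated 30 degrees apart.
print(probiou_losses(obb_to_gbb(50, 50, 40, 20, 0.0),
                     obb_to_gbb(50, 50, 40, 20, np.radians(30))))
```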
In some embodiments, in the step S200 of predicting, the trained target detection model is obtained through the following training process:
Step S1, marking the sharp point of the anode pole piece and the sharp point of the cathode pole piece in the X-ray image of the lithium battery pole piece by using an image labeling tool (such as the roLabelImg software) to obtain corresponding image labels (such as image labels in xml format); the label takes the form of a rotated rectangle and contains five dimensions, namely the center-point coordinates of the rotated rectangle (the abscissa x and the ordinate y), the long side (long_side) of the rotated rectangle, the short side (short_side) of the rotated rectangle, the deflection angle (theta) of the rotated rectangle, and the corresponding category (id); the roLabelImg software is an image labeling tool for rotated boxes;
Step S2, converting the image labels to obtain converted image labels, whose format is as follows: each row represents one target object, consisting of the center-point coordinates, long side, short side, deflection angle and category of the rotated rectangle; to facilitate convergence of the target detection model constructed in the present application, the long side and the short side of each target object are adjusted to the same size (a conversion sketch is given after step S7 below);
Step S3, applying data-enhancement preprocessing to the images in the data set (such as the images obtained by using the image labeling tool); the preprocessing here may include one or more of randomly translating, randomly rotating (e.g., between -30° and 30°), randomly scaling, and randomly transforming the hue/saturation/brightness of the image;
Step S4, dividing the data set into a training set, a verification set and a test set according to a preset proportion (such as 8:1:1);
Step S5, constructing the target detection model of the present application; the target detection model performs decoding prediction in an Anchor-Free manner, and a Transformer layer is introduced before the multi-scale fusion module (SPPF) to enhance global perception; the target detection model adopts the non-maximum suppression function NMS_Rotate applicable to rotated rectangles; the loss function of the target detection model adopts ProbIoU_loss applicable to rotated rectangles;
Step S6, inputting the training set, the verification set and the test set into the constructed target detection model, performing iterative training, and judging whether the target detection model has converged according to its performance indexes (such as the mean average precision (mAP), a commonly used index for evaluating the performance of target detection models);
Step S7, after the target detection model converges, the model can be exported (such as in a TensorRT format) and pruned and optimized using the trtexec tool; TensorRT is a high-performance deep-learning inference SDK.
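As an illustration of step S2, the sketch below converts a roLabelImg-style XML annotation into the one-target-per-row text format described above; the XML tag names follow roLabelImg's robndbox convention, and the class-name mapping is hypothetical.

```python
import xml.etree.ElementTree as ET

CLASS_IDS = {"cathode_tip": 0, "anode_tip": 1}  # hypothetical label names

def convert_label(xml_path: str, txt_path: str) -> None:
    root = ET.parse(xml_path).getroot()
    with open(txt_path, "w") as out:
        for obj in root.iter("object"):
            cls = CLASS_IDS[obj.findtext("name")]
            box = obj.find("robndbox")  # roLabelImg's rotated-box element
            cx, cy = float(box.findtext("cx")), float(box.findtext("cy"))
            w, h = float(box.findtext("w")), float(box.findtext("h"))
            theta = float(box.findtext("angle"))
            long_side, short_side = max(w, h), min(w, h)
            # One target per row: centre, long side, short side, angle, class.
            out.write(f"{cx} {cy} {long_side} {short_side} {theta} {cls}\n")
```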
In some embodiments, in the above step S1, the preprocessing step S100 may be used to preprocess the X-ray image of the lithium battery pole piece to reduce noise interference before the subsequent labeling work is carried out.
It should be noted that the training process of the target detection model of the present application belongs to the prior art in the field, so it is not described in detail here.
In some embodiments, after the object detection model of the present application is trained, the step S100 of preprocessing and the step S200 of predicting described above may be performed.
In some embodiments, the prediction results obtained by the target detection model may be pre-screened according to prior knowledge and statistical data from production: prediction results with errors such as overlapping, misplacement or mis-grabbing are screened out, and the screened prediction results are used as the corresponding prediction results in the step S200 of prediction. Alternatively, the preliminary prediction results may be used as the prediction results of the step S200 without the screening process.
In some embodiments, the identified step S300 includes only step S310 (i.e., the ROI area is processed to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece).
In some embodiments, referring to fig. 7, in step S310, the ROI area is processed to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece, including:
step S311: acquiring an ROI region, and sequentially performing median filtering and erosion treatment on the ROI region to obtain an eroded ROI region;
step S312: acquiring the minimum circumscribed contour of the eroded ROI region;
step S313: taking the index of each row of pixels in the minimum circumscribed contour as a first independent variable, and taking the average value of all the pixels in each row as the corresponding first dependent variable, to obtain a first relation curve between the first independent variable and the first dependent variable;
step S314: performing first derivative calculation on the first relation curve to obtain a corresponding derivative curve;
step S315: and obtaining a maximum value point of the derivative curve, and taking an index corresponding to the maximum value point as a sharp point of the cathode pole piece or the anode pole piece.
In some embodiments, in the step S300 of identifying, each prediction result may be traversed, and the ROI area is sequentially subjected to median filtering and erosion processing to obtain the eroded ROI area. The minimum circumscribed outline of the eroded ROI area can be obtained by adopting a related operator (such as a related operator in OpenCV); the index of each row of pixels in the minimum external contour can be used as a first independent variable, and the average value of all the pixels in each row is used as a corresponding first dependent variable, so that a first relation curve between the first independent variable and the first dependent variable is obtained; performing first derivative calculation on the first relation curve to obtain a corresponding derivative curve; and obtaining a maximum value point of the derivative curve, and taking an index corresponding to the maximum value point as a sharp point of the cathode pole piece or the anode pole piece. Wherein OpenCV is a cross-platform computer vision and machine learning software library.
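A compact sketch of steps S311-S315 is given below, using OpenCV and NumPy; the kernel sizes, the Otsu binarization used to extract the contour, and the function name are assumptions.

```python
import cv2
import numpy as np

def find_sharp_point(roi_gray: np.ndarray) -> int:
    # roi_gray: 8-bit grayscale ROI of one pole piece tip.
    roi = cv2.medianBlur(roi_gray, 5)                # step S311: median filtering
    roi = cv2.erode(roi, np.ones((3, 3), np.uint8))  # step S311: erosion
    # Step S312: largest external contour of the eroded ROI and its bounds.
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = roi[y:y + h, x:x + w]
    row_means = crop.mean(axis=1)   # step S313: mean of every pixel row
    deriv = np.gradient(row_means)  # step S314: first derivative of the curve
    # Step S315: the row index of the derivative maximum is the sharp point.
    return y + int(np.argmax(deriv))
```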
The Y direction generally refers to the direction perpendicular to the horizontal axis in a two-dimensional image (such as the preprocessed image). In the preprocessed image, a value in the Y direction represents a row number, i.e., a vertical pixel index, of the preprocessed image.
It should be noted that the median filtering and erosion treatment in the above step S311 belong to the prior art in the field. In step S312, the minimum circumscribed contour of the eroded ROI region may be obtained using the prior art (such as the relevant operators in OpenCV); a contour refers to the boundary of an object in an image.
The abscissa of the first relation curve obtained in the above step S313 (i.e., the first independent variable) may be understood as a position in the Y direction (typically a row number or vertical pixel index), and the ordinate of the first relation curve (i.e., the first dependent variable) may be understood as the corresponding average pixel value or a related index.
In the step S314, the purpose of obtaining the corresponding derivative curve by performing the first derivative calculation on the first relationship curve is to find the maximum point of the corresponding derivative curve.
In step S315, maximum points, i.e., local maxima, are found on the corresponding derivative curve. These maxima mark the positions where the first relation curve changes fastest and therefore most likely correspond to the sharp locations of the pole piece, i.e., the sharp point of the cathode pole piece or of the anode pole piece.
In some embodiments, referring to fig. 8, the identifying step S300 further includes: step S320: taking the difference in the Y direction between the sharp point of the cathode pole piece and the sharp point of the anode pole piece as the OverHang margin of the lithium battery pole piece.
In some embodiments, whether the calculated OverHang margin of the lithium battery pole piece is within a preset range can be judged. The process of obtaining the Y-direction difference between the sharp point of the cathode pole piece and the sharp point of the anode pole piece belongs to common knowledge in the art and is therefore not described here. OverHang refers to the portion of the negative electrode plate that extends beyond the positive electrode plate in the length and width directions.
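A trivial sketch of step S320 and the subsequent range check follows; the bounds are illustrative placeholders, not values from the patent.

```python
def overhang_margin(cathode_tip_y: float, anode_tip_y: float) -> float:
    # Step S320: Y-direction difference between the two sharp points.
    return cathode_tip_y - anode_tip_y

def overhang_ok(margin: float, low: float = 5.0, high: float = 40.0) -> bool:
    # Judge whether the margin lies within the preset (illustrative) range.
    return low <= margin <= high
```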
In some embodiments, the identifying step S300 further includes:
step S330: obtaining a skeleton curve of the anode plate by utilizing the ROI;
step S340: determining the deflection angle of the anode pole piece according to the skeleton curve;
step S350: determining a deflection site of the anode pole piece and a deflection direction of the deflection site by utilizing a skeleton curve;
step S360: and determining the gesture type of the anode plate according to the deflection site and the deflection direction.
The method for obtaining the skeleton curve of the anode pole piece by utilizing the ROI comprises the following steps:
Sequentially performing expansion treatment, box filtering and binarization treatment on the ROI region to obtain a binarized ROI region, and processing the binarized ROI region by adopting a skeletonization algorithm to obtain a skeleton curve of the anode pole piece;
wherein determining the deflection angle of the anode pole piece according to the skeleton curve includes:
performing straight line fitting on the skeleton curve by using a RANSAC algorithm to obtain a corresponding fitting straight line;
calculating an included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece;
and taking the included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece as a deflection angle.
It should be noted that, in the above "obtaining the skeleton curve of the anode pole piece by using the ROI region", the expansion (dilation) process, box filtering, binarization process and skeletonization algorithm all belong to the prior art in the field. The dilation process essentially expands pixels outward across object boundaries, widening the objects in the image. Box filtering (i.e., block filtering) is a special case of mean filtering and works on a similar principle. Binarization is an image processing technique whose main purpose is to convert an image into a pattern of only two colors: black and white. Common skeletonization algorithms include thinning algorithms, distance-transform algorithms, medial-axis-transform algorithms and the like; these algorithms gradually refine the edges of an object into a skeleton structure through iterated operations such as erosion and dilation.
It should be noted that the RANSAC algorithm in the above "determining the deflection angle of the anode pole piece according to the skeleton curve" belongs to the prior art in the field. The RANSAC (Random Sample Consensus) algorithm is an iterative algorithm based on random sampling for estimating the parameters of a mathematical model. Its inputs are a set of observations, a method of fitting a model to the observations, and some confidence parameters.
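A minimal sketch of steps S330-S340 follows, assuming scikit-image for skeletonization and scikit-learn's RANSAC regressor for the line fit; the kernel and filter sizes are illustrative, and the sign of the returned angle depends on the image coordinate convention.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize
from sklearn.linear_model import RANSACRegressor

def anode_deflection_angle(roi_gray: np.ndarray) -> float:
    roi = cv2.dilate(roi_gray, np.ones((3, 3), np.uint8))  # dilation
    roi = cv2.boxFilter(roi, -1, (5, 5))                   # box filtering
    _, binary = cv2.threshold(roi, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    skeleton = skeletonize(binary.astype(bool))            # width-1 skeleton
    ys, xs = np.nonzero(skeleton)                          # effective pixel sites
    # Fit x = f(y) with RANSAC; the slope relative to the vertical (Y)
    # direction gives the deflection angle in degrees.
    model = RANSACRegressor().fit(ys.reshape(-1, 1), xs)
    slope = float(model.estimator_.coef_[0])
    return float(np.degrees(np.arctan(slope)))
```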
It should be noted that the skeleton curve and the subsequent maximum points of the first derivative, the second derivative and the like are used only for the anode pole piece; the cathode pole piece does not need to be determined with the aid of these methods.
In some embodiments, in step S300 of identifying, step S350: determining a deflection site of the anode pole piece and a deflection direction of the deflection site by using a skeleton curve, wherein the method comprises the following steps of:
taking the ordinate of each effective pixel site in the skeleton curve as a second independent variable, and taking the abscissa of each effective pixel site in the skeleton curve as a corresponding second dependent variable, so as to obtain a second relation curve between the second independent variable and the second dependent variable; the effective pixel site is a pixel point with a pixel value of 1 in the pixel points corresponding to the skeleton curve;
Performing first derivative calculation on the second relation curve to obtain a corresponding first derivative curve; the abscissa of the first derivative curve is the abscissa of the corresponding effective pixel site, and the ordinate of the first derivative curve is the corresponding first derivative;
obtaining a maximum value point of a first derivative curve, and taking an index corresponding to the maximum value point of the first derivative curve as a deflection site of the anode plate;
determining the deflection direction of the deflection locus according to whether the corresponding first derivative is larger than 0; if the corresponding first derivative is greater than 0, the corresponding deflection direction is right deflection; if the corresponding first derivative is less than 0, the corresponding deflection direction is left-biased.
In some embodiments, a second derivative calculation may also be performed on the second relation curve to obtain a corresponding second derivative curve. The maximum points of the first derivative and of the second derivative may then be combined to determine the deflection site of the cathode pole piece or of the anode pole piece.
It should be noted that, in a binary image, a pixel value is either 0 or 1, and the pixel points with a value of 1 are the effective pixel sites. A straight line of width 1 (such as the skeleton curve) can be formed by a plurality of effective pixel sites and may be denoted $L_{x,y}$. Each effective pixel site is adjacent to at least one other effective pixel site. The second independent variable is the ordinate of the corresponding effective pixel site in the skeleton curve, and the second dependent variable is the abscissa of the corresponding effective pixel site in the skeleton curve.
In step S350, when calculating the deflection site of the anode or cathode, only the coordinates of each effective pixel point of the skeletonized curve (i.e., the curve obtained by thinning the pole piece to a width of 1) are considered; the pixel values of the effective pixel points are not used. The reference direction of the deflection angle is the Y direction (i.e., the direction perpendicular to the horizontal); the deflection angle is positive for clockwise rotation and negative for counterclockwise rotation.
It should be noted that a normal anode pole piece is vertical, with an included angle of 0 with the Y direction. When disturbed by external impact or other factors, the anode pole piece deflects at a point known as the deflection site.
The X-ray image of the lithium battery pole piece is a two-dimensional planar image obtained by projection, and therefore, the above-mentioned deflection direction may refer to clockwise deflection or counterclockwise deflection.
In the above step S360, whether the posture category of the pole piece is normal, left-biased, right-biased or S-bent may be determined according to the number of deflection sites and their deflection directions. For example, if the corresponding first derivative is greater than 0, the corresponding deflection direction is rightward; if two maximum points exist, the pole piece is judged to have deflected twice; if one maximum point and one minimum point exist, the pole piece is judged to have deflected once to the left and once to the right, i.e., its posture category is S-bend.
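The classification logic of steps S350-S360 can be sketched as follows, assuming one skeleton pixel per image row (strictly increasing ordinates) and an illustrative peak threshold; the names and thresholds are not the patent's.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_posture(ys: np.ndarray, xs: np.ndarray, peak_height: float = 0.2) -> str:
    # Second relation curve (step S350): ordinate of each effective pixel
    # site as the independent variable, abscissa as the dependent variable.
    order = np.argsort(ys)
    ys, xs = ys[order].astype(float), xs[order].astype(float)
    deriv = np.gradient(xs, ys)                         # first derivative curve
    maxima, _ = find_peaks(deriv, height=peak_height)   # derivative > 0: right
    minima, _ = find_peaks(-deriv, height=peak_height)  # derivative < 0: left
    if len(maxima) == 0 and len(minima) == 0:
        return "normal"          # step S360: no deflection site found
    if len(maxima) >= 1 and len(minima) >= 1:
        return "S-bend"          # one left and one right deflection
    return "right-biased" if len(maxima) else "left-biased"
```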
In some embodiments, after the above step S360, the detected information such as the OverHang margin, the anode deflection angle and the posture category may be formatted, output, and uploaded to a manufacturing execution system (MES).
It can be seen that, in some embodiments, preprocessing the lithium battery pole piece X-ray image through a specific image processing flow (such as the preprocessing step S100) effectively extracts the features of the detection object and reduces noise interference; meanwhile, the network structure of the existing YOLOv5 target detection model is specifically improved to obtain the target detection model of the present application, which improves the model's ability to detect and identify the image, so that its detection precision meets production requirements.
It can be seen that in some embodiments, according to the detection method for the lithium battery pole piece provided by the application, the improved YOLO neural network model is utilized to automatically detect and identify the X-ray image of the lithium battery, so that the sharp points of the cathode pole piece and the anode pole piece of the lithium battery can be accurately identified.
It can be seen that in some embodiments, the detection method for the lithium battery pole piece provided by the application can realize detection of the OverHang allowance of the lithium battery.
It can be seen that in some embodiments, the method for detecting a lithium battery pole piece provided by the application can also realize the shape recognition and classification functions of the top of the lithium battery anode.
The above is some description of the detection method for a lithium battery pole piece. Referring to fig. 9, some embodiments of the present application further disclose a detection device for a lithium battery pole piece, which comprises:
a memory 100 for storing a program for constructing an object detection model as in any of the embodiments of the present application;
the processor 200 is configured to implement the detection method according to any one of the embodiments of the present application by executing a program stored in the memory 100.
It should be noted that, the specific execution flow and technical effect of the detection device for the lithium battery pole piece of the present application are substantially the same as those of the detection method for the lithium battery pole piece of the present application, so that the description thereof is omitted here.
Some embodiments of the present application also disclose a computer-readable storage medium comprising a program executable by the processor 200 to implement the detection method of any of the embodiments of the present application.
Reference is made to various exemplary embodiments herein. However, those skilled in the art will recognize that changes and modifications may be made to the exemplary embodiments without departing from the scope herein. For example, the various operational steps and components used to perform the operational steps may be implemented in different ways (e.g., one or more steps may be deleted, modified, or combined into other steps) depending on the particular application or taking into account any number of cost functions associated with the operation of the system.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Additionally, as will be appreciated by one of skill in the art, the principles herein may be reflected in a computer program product on a computer-readable storage medium preloaded with computer-readable program code. Any tangible, non-transitory computer-readable storage medium may be used, including magnetic storage devices (hard disks, floppy disks, etc.), optical storage devices (CD-ROM, DVD, Blu-Ray disks, etc.), flash memory, and/or the like. These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the specified functions. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including means that implement the specified functions. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the specified functions.
While the principles herein have been shown in various embodiments, many modifications of structure, arrangement, proportions, elements, materials, and components, which are particularly adapted to specific environments and operative requirements, may be used without departing from the principles and scope of the present disclosure. The above modifications and other changes or modifications are intended to be included within the scope of this document.
The foregoing detailed description has been described with reference to various embodiments. However, those skilled in the art will recognize that various modifications and changes may be made without departing from the scope of the present disclosure. Accordingly, the present disclosure is to be considered illustrative and not restrictive in character, and all such modifications are intended to be included within its scope. Likewise, advantages, other advantages, and solutions to problems have been described above with regard to various embodiments; however, the benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features. The terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, system, article, or apparatus. Furthermore, the term "couple" and any other variants thereof are used herein to refer to physical, electrical, magnetic, optical, communication, functional, and/or any other connection.
Those skilled in the art will recognize that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. Accordingly, the scope of the invention should be determined only by the following claims.
Claims (11)
1. A detection method of a lithium battery pole piece, characterized by comprising the following steps:
preprocessing: acquiring an X-ray image of a lithium battery pole piece, and preprocessing the X-ray image to obtain a preprocessed image;
predicting: inputting the preprocessed image into a trained target detection model, and predicting the preprocessed image by the target detection model to obtain a corresponding prediction result; the lithium battery pole piece comprises a cathode pole piece and an anode pole piece; the prediction result comprises: the ROI area where the sharp point of the cathode plate is located and the ROI area where the sharp point of the anode plate is located;
wherein the sharp point is an upper end point and a lower end point of the coating range of the anode pole piece or the cathode pole piece in the Y direction; the Y direction is a direction perpendicular to the horizontal direction in the preprocessed image;
identifying: processing the ROI areas to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece.
2. The method of detecting as claimed in claim 1, wherein the step of identifying further comprises:
and taking the difference value between the sharp point of the cathode pole piece and the sharp point of the anode pole piece in the Y direction as the overHang allowance of the lithium battery pole piece.
3. The detection method according to claim 1 or 2, wherein, in the step of identifying, processing the ROI area to determine the sharp point of the cathode pole piece and the sharp point of the anode pole piece comprises:
acquiring the ROI area, and sequentially performing median filtering and erosion treatment on the ROI area to obtain the eroded ROI area;
acquiring the minimum circumscribed contour of the eroded ROI area;
taking the index of each row of pixels in the minimum circumscribed contour as a first independent variable, and taking the average value of all the pixels in each row as the corresponding first dependent variable to obtain a first relation curve between the first independent variable and the first dependent variable;
performing first derivative calculation on the first relation curve to obtain a corresponding derivative curve;
and obtaining a maximum value point of the derivative curve, and taking the index corresponding to the maximum value point as a sharp point of the cathode pole piece or the anode pole piece.
4. The method of detecting as claimed in claim 2, wherein the step of identifying further comprises:
obtaining a skeleton curve of the anode plate by utilizing the ROI;
determining the deflection angle of the anode pole piece according to the skeleton curve;
determining a deflection site of the anode pole piece and a deflection direction of the deflection site by utilizing the skeleton curve;
determining the gesture type of the anode plate according to the deflection locus and the deflection direction;
wherein, the obtaining the skeleton curve of the anode plate by using the ROI area comprises the following steps:
sequentially performing expansion treatment, box filtering and binarization treatment on the ROI region to obtain the binarized ROI region, and processing the binarized ROI region by adopting a skeletonization algorithm to obtain a skeleton curve of the anode pole piece;
wherein, the determining the deflection angle of the anode pole piece according to the skeleton curve comprises:
performing straight line fitting on the skeleton curve by using a RANSAC algorithm to obtain a corresponding fitting straight line;
calculating an included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece;
And taking an included angle between the fitting straight line and the length direction of the anode pole piece or the cathode pole piece as the deflection angle.
5. The method of detecting as claimed in claim 4, wherein in the step of identifying: determining a deflection site of the anode plate and a deflection direction of the deflection site by using the skeleton curve, wherein the method comprises the following steps of:
taking the ordinate of each effective pixel site in the skeleton curve as a second independent variable and the abscissa of each effective pixel site in the skeleton curve as a corresponding second dependent variable to obtain a second relation curve between the second independent variable and the second dependent variable; the effective pixel sites are pixel points with the pixel value of 1 in the pixel points corresponding to the skeleton curve;
performing first derivative calculation on the second relation curve to obtain a corresponding first derivative curve; the abscissa of the first derivative curve is the abscissa of the corresponding effective pixel site, and the ordinate of the first derivative curve is the corresponding first derivative;
obtaining a maximum value point of the first derivative curve, and taking an index corresponding to the maximum value point of the first derivative curve as a deflection site of the anode plate;
Determining a deflection direction of the deflection locus according to whether the corresponding first derivative is greater than 0; if the corresponding first derivative is greater than 0, the corresponding deflection direction is right deflection; and if the corresponding first derivative is smaller than 0, the corresponding deflection direction is left-biased.
6. The method of claim 1, wherein the trained object detection model comprises: the system comprises a first CBL module, a second CBL module, a first C3 module, a third CBL module, a second C3 module, a fourth CBL module, a third C3 module, a fifth CBL module, a global perception module, a multi-scale fusion module, a fourth C3 module, a sixth CBL module, a first upsampling module, a first fusion module, a fifth C3 module, a seventh CBL module, a second upsampling module, a second fusion module, a sixth C3 module, an eighth CBL module, a third upsampling module, a third fusion module, a seventh C3 module and a first convolution module which are sequentially connected; the global perception module is used for receiving the image result input by the fifth CBL module and capturing global information of the image result.
7. The detection method of claim 6, wherein the global perception module comprises: a plurality of encoder layers, the plurality of encoder layers being connected in series with one another, each of the encoder layers comprising: a first sub-layer connection structure and a second sub-layer connection structure;
The first sublayer connecting structure comprises a multi-head self-attention sublayer, a first normalization layer and a first residual connection, and the second sublayer connecting structure comprises a feedforward full-connection sublayer, a second normalization layer and a second residual connection;
the image result input to the global perception module by the fifth CBL module is used as an input result in the first encoder layer, and the output result of the corresponding encoder layer is used as an input result of the next encoder layer; taking the output result of the last encoder layer in the global perception module as the output result of the global perception module;
the data processing flow inside the encoder layer comprises the following steps:
the method comprises the steps of: inputting the input result of the encoder layer into the multi-head self-attention sublayer; inputting the output result of the multi-head self-attention sublayer into the first normalization layer, and inputting the input result into the first normalization layer through the first residual connection; inputting the output result of the first normalization layer into the feedforward full-connection sublayer; inputting the output result of the feedforward full-connection sublayer into the second normalization layer, and inputting the output result of the first normalization layer into the second normalization layer through the second residual connection; and taking the output result of the second normalization layer as the output result of the corresponding encoder layer.
8. The method of detecting according to claim 6, wherein in the step of predicting: the target detection model predicts the preprocessed image to obtain a corresponding prediction result, and the method comprises the following steps:
the first convolution module outputs a corresponding first feature map, and predicts the first feature map to obtain a preliminary prediction result;
processing the preliminary prediction result by adopting a non-maximum suppression function suitable for the rotation rectangle to obtain the corresponding prediction result;
the trained target detection model is obtained by training a loss function for a Gaussian bounding box;
the calculation flow of the cross ratio utilized by the non-maximum suppression function is as follows:
determining an intersection point of the rotation rectangle corresponding to the preliminary prediction result and another corresponding rotation rectangle;
determining whether the intersection points lie inside both rotating rectangles;
and forming a polygon from the intersection points and the portions of the sides lying inside both rotating rectangles, taking the area of the polygon as the intersection of the two rotating rectangles, taking the sum of the areas of the two rotating rectangles as the union of the two rotating rectangles, and calculating the intersection ratio by using the intersection and the union.
9. The method of detection according to claim 1, wherein in the step of preprocessing: acquiring an X-ray image of a lithium battery pole piece, preprocessing the X-ray image to obtain a preprocessed image, and comprising the following steps:
performing binarization processing on the X-ray image to obtain a binarized image, performing binarization image inversion processing on the binarized image to obtain a binarized image inversion processed image, obtaining a minimum circumscribed rectangle corresponding to the maximum circumscribed outline of the binarized image inversion processed image, and intercepting the minimum circumscribed rectangle according to preset parameters to obtain the ROI region;
enhancing the contrast of the dark region in the ROI region to obtain an enhanced contrast image;
processing the contrast-enhanced image by adopting a multi-scale Retinex image enhancement algorithm to obtain a texture-feature-enhanced image;
filtering the texture feature enhanced image to obtain a filtered image;
carrying out nonlinear brightness enhancement on the filtered image to obtain a nonlinear brightness enhanced image;
and carrying out normalization processing on the nonlinear brightness enhancement image to obtain a normalized image, and taking the normalized image as the preprocessed image.
10. A detection device of a lithium battery pole piece, characterized by comprising:
a memory for storing a program for constructing the object detection model according to any one of claims 1 to 9;
a processor for implementing the detection method according to any one of claims 1 to 9 by executing a program stored in the memory.
11. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the detection method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410042138.1A CN117557565B (en) | 2024-01-11 | 2024-01-11 | Detection method and device for lithium battery pole piece |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410042138.1A CN117557565B (en) | 2024-01-11 | 2024-01-11 | Detection method and device for lithium battery pole piece |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117557565A true CN117557565A (en) | 2024-02-13 |
CN117557565B CN117557565B (en) | 2024-05-03 |
Family
ID=89815154
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410042138.1A Active CN117557565B (en) | 2024-01-11 | 2024-01-11 | Detection method and device for lithium battery pole piece |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117557565B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023121299A1 (en) * | 2021-12-22 | 2023-06-29 | 주식회사 엘지에너지솔루션 | Battery cell test system and method |
CN114594114A (en) * | 2022-03-09 | 2022-06-07 | 广东兆众自动化设备有限公司 | Full-automatic online nondestructive detection method for lithium battery cell |
CN116503348A (en) * | 2023-04-23 | 2023-07-28 | 深圳市卓茂科技有限公司 | Method and equipment for detecting alignment degree of cathode and anode plates of battery core of coiled lithium battery |
CN116721055A (en) * | 2023-04-23 | 2023-09-08 | 深圳市卓茂科技有限公司 | A method and equipment for detecting the alignment of cathode and anode sheets of laminated lithium batteries |
CN117011281A (en) * | 2023-08-30 | 2023-11-07 | 上海大学 | Deep learning-based battery overlap value anomaly detection method |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117974632A (en) * | 2024-03-28 | 2024-05-03 | 大连理工大学 | A lithium battery CT cathode and anode alignment detection method based on segmentation large model |
CN117974632B (en) * | 2024-03-28 | 2024-06-07 | 大连理工大学 | Lithium battery CT cathode-anode alignment detection method based on segmentation large model |
CN119198807A (en) * | 2024-11-22 | 2024-12-27 | 宁德时代新能源科技股份有限公司 | Battery cell detection method, device, storage medium and computer program product |
Also Published As
Publication number | Publication date |
---|---|
CN117557565B (en) | 2024-05-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||