
CN115017931A - Method and system for extracting QR codes in batches in real time - Google Patents


Info

Publication number
CN115017931A
CN115017931A (application CN202210669471.6A; granted publication CN115017931B)
Authority
CN
China
Prior art keywords
module
image
code
data
rectangular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210669471.6A
Other languages
Chinese (zh)
Other versions
CN115017931B (en)
Inventor
陈荣军
黄宏兴
于永兴
马勇枝
任金昌
王磊军
吕巨建
赵慧民
李建波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Chuxiu Information Technology Co ltd
Guangdong Polytechnic Normal University
Original Assignee
Guangzhou Chuxiu Information Technology Co ltd
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Chuxiu Information Technology Co ltd, Guangdong Polytechnic Normal University filed Critical Guangzhou Chuxiu Information Technology Co ltd
Priority to CN202210669471.6A priority Critical patent/CN115017931B/en
Publication of CN115017931A publication Critical patent/CN115017931A/en
Application granted granted Critical
Publication of CN115017931B publication Critical patent/CN115017931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1447 Methods for optical code recognition including extracting optical codes from an image or text carrying said optical code
    • G06K7/1452 Methods for optical code recognition including detecting bar code edges
    • G06K7/146 Methods for optical code recognition, the method including quality enhancement steps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for extracting QR codes in batches in real time. The edge information of QR code patterns is used to detect them: a single-frame image containing multiple QR code patterns is processed through gray-level conversion, deblurring, edge detection, binarization, morphological processing, contour extraction, multi-stage step-by-step discrimination, and network-model discrimination. Particular attention is paid to the problem of image blur, and the image is deblurred by statistics over the gradient distribution of edge gray values. Traditional image processing is combined with a neural network model, which ensures the recognition accuracy of the method while improving its operating efficiency. The method can be rapidly deployed in actual embedded real-time scenarios and applied to the offline real-time detection and extraction of images containing large numbers of QR codes, and meets the real-time requirements of detecting and extracting multiple QR codes in scenarios such as batch medical test-tube registration and large-batch warehouse goods registration.

Description

Method and system for extracting QR codes in batches in real time
Technical Field
The invention relates to the technical field of information technology and Internet of things, in particular to a method and a system for extracting QR codes in batches in real time.
Background
With the rapid development of information technology and Internet of Things (IoT) technology, the QR Code (Quick Response Code), a two-dimensional code identification scheme with large information capacity, strong security, and low cost, is widely deployed at the sensing layer of the IoT in fields such as electronic commerce, warehouse logistics, and device management. In particular, on some workshop assembly lines, each article or item of goods carries a QR code label, and as the conveyor belt moves, QR code scanning equipment must read all QR codes in the scanned area promptly, quickly, and without omission. Similarly, in some biochemical laboratories, personnel need to record and retrieve samples by identifying the QR code labels on large numbers of sample tubes and vaccine reagents. The warehouse logistics industry likewise needs to identify QR code labels on bulk goods for registration and warehousing tracking. All of these scenarios require a technology that can detect large volumes of QR codes in real time.
A System on Chip (SoC) is a chip-level system integrating a processor, dedicated circuits, and peripheral controllers, which the user can customize freely; a Field Programmable Gate Array (FPGA) is a kind of semi-custom SoC chip in the field of Application Specific Integrated Circuits (ASICs). An FPGA adopts a Logic Cell Array (LCA) comprising Interconnects, Configurable Logic Blocks (CLBs), and Input/Output Blocks (IOBs), and contains a large number of input/output pins and flip-flops. Implementing an algorithm on an FPGA accelerates program execution at the hardware level, achieving real-time operation. To reduce the repetitive development of FPGA modules, Intellectual Property Cores (IP Cores) are widely used in the FPGA development stage. An IP core is a verified, reusable integrated circuit module with a determined function; it greatly reduces development time and cost and improves design efficiency.
The prior art discloses a patent for a batch QR code image extraction method and system, which comprises the following steps: first, preprocessing operations such as gray-level transformation and filtering/denoising are applied to an actually captured high-resolution image containing multiple QR codes; then an edge detection method extracts edge gradient values, and an edge image is obtained through truncation normalization; an initial block size is set and the optimal block size is determined by iterative search; blocking, feature calculation, and threshold segmentation are then performed again at the optimal block size, the marked blocks are clustered into a candidate rectangular-frame set, and frames that clearly do not match QR code region characteristics are screened out; finally, the image is compressed, a lightweight high-performance MobileNet-series classifier is trained to judge whether each candidate frame contains a QR code pattern, background frames are eliminated, and the corresponding regions are separated from the original image, completing the segmentation and extraction of the QR code patterns. However, the method disclosed in that patent has many calculation steps, involves a large amount of computation, places relatively high demands on hardware performance, and has extremely low detection accuracy when the input picture is blurred.
Disclosure of Invention
The invention provides a method for extracting QR codes in batches in real time, which can detect pictures containing large-batch QR codes in real time and extract the QR codes.
The invention further aims to provide a batch QR code real-time extraction system applying the method.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a batch QR code real-time extraction method comprises the following steps:
s1: acquiring QR code image data and caching;
s2: filtering and edge detection processing are carried out on the cached QR code image data, completing gray-level conversion, deblurring, edge detection, binarization, and morphological processing of the QR code image;
s3: contour extraction is carried out on the QR code image processed in step S2 to obtain QR code rectangular frames, and the IOU matrices among the rectangular frames are rapidly calculated in a pipelined, accelerated manner so that background regions among the rectangular frames are eliminated;
s4: using the QR code rectangular frame obtained in step S3, performing classification calculation using a convolutional neural network, and displaying the calculation result.
Further, in step S2, the process of performing gray scale conversion is:
converting QR code image data from an RGB color space to a gray color space, wherein the conversion formula is as follows (1):
I = 0.30×I_r + 0.59×I_g + 0.11×I_b (1)
wherein I denotes the converted grayscale image, and I_r, I_g, and I_b denote the data of the r, g, and b channels of the input image, respectively;
accelerating formula (1) with integer arithmetic gives formula (2):
I = (300×I_r + 590×I_g + 110×I_b + 500) >> 10 (2)
the gray scale value of the obtained image data pixel ranges from 0 to 255.
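As a sketch of this fixed-point acceleration (assuming 8-bit channel values), equation (2) can be mirrored in Python. Note that the 10-bit shift divides by 1024 while the weights are scaled by 1000, so the result slightly undershoots the floating-point value of equation (1):

```python
def gray_fixed_point(r: int, g: int, b: int) -> int:
    """Integer-only grayscale per Eq. (2): weights scaled by 1000,
    +500 for rounding, then a 10-bit right shift (divide by 1024)."""
    return (300 * r + 590 * g + 110 * b + 500) >> 10
```

Because 1000/1024 ≈ 0.977, pure white (255, 255, 255) maps to 249 rather than 255, which still lies within the stated 0 to 255 range.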
Further, in step S2, the process of performing deblurring is:
1) divide the QR code image into B×B image blocks, then calculate the vertical-direction DCT coefficients C_v(k) and horizontal-direction DCT coefficients C_h(k) of each block, as shown in formulas (3) and (4), where f(m, n) is the grayscale value of pixel (m, n):
[Equations (3) and (4): block DCT coefficients; rendered as images in the source and not reproduced here.]
2) judge the edge response of each image block from its DCT coefficients, and compute the edge response of the whole image via formula (5), where M is the number of edge directions greater than 0 and s_i is the edge response of image block i:
[Equation (5): whole-image edge response; rendered as an image in the source and not reproduced here.]
3) estimate the PSF coefficients from S(n), then deblur the grayscale image with a Wiener filter.
Further, in step S2, the process of performing edge detection is:
performing convolution operation on the Sobel operator template and image data so as to perform edge detection on the image:
defining the Sobel operator template as a constant type of data:
parameter h1=8'hff,h2=8'h00,h3=8'h01,h4=8'hfe,h5=8'h00,h6=8'h02,h7=8'hff,h8=8'h00,h9=8'h01;
parameter v1=8'h01,v2=8'h02,v3=8'h01,v4=8'h00,v5=8'h00,v6=8'h00,v7=8'hff,v8=8'hfe,v9=8'hff;
variables in the Sobel template are 8 bits wide; decimal -1 is represented by the two's complement 8'hff and decimal -2 by the two's complement 8'hfe, with the highest bit as the sign bit. After the horizontal-direction and vertical-direction gradient values of the image are obtained with the Sobel templates, their absolute values are summed to give the gray-level gradient of the whole image.
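A minimal pure-Python sketch of this Sobel step (an illustration of the computation, not the FPGA implementation): the two 3×3 templates defined above are applied at each interior pixel and the absolute gradients are summed:

```python
def sobel_magnitude(img):
    """Gradient per the templates above: |Gx| + |Gy| at interior pixels.
    img is a list of rows of grayscale values; borders are left at 0."""
    KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal template (h1..h9)
    KY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # vertical template (v1..v9)
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)   # sum of absolute gradients
    return out
```

A vertical step edge from 0 to 255 yields the maximal |Gx| response of 4×255 = 1020 along the edge.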
Further, in step S2, the binarization process is:
the binarization operation converts the gradient image into a black-and-white image; the conversion formula is formula (6):
[Equation (6): binarization of the gradient image; rendered as an image in the source and not reproduced here.]
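Since equation (6) itself is not reproduced, the sketch below assumes a simple fixed threshold T; the actual threshold choice in the patent may differ:

```python
def binarize(grad, T=128):
    """Map a gradient image to black/white. T is an assumed fixed
    threshold standing in for the unreproduced Eq. (6)."""
    return [[255 if v >= T else 0 for v in row] for row in grad]
```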
further, in step S2, the morphological processing eliminates noise, segments out independent QR code image elements, and finds maximum-value or minimum-value regions in the image; it comprises Erosion and Dilation operations, as shown in formula (7), where I(x, y) denotes the pixel value at position (x, y) in the QR code image data:
[Equation (7): erosion and dilation operations; rendered as an image in the source and not reproduced here.]
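An illustrative pure-Python sketch of erosion and dilation on a binary image; the 3×3 structuring element is an assumption, as formula (7) is not reproduced:

```python
def _window_op(img, op):
    """Apply op (min for erosion, max for dilation) over each pixel's
    3x3 neighborhood, clipped at the image border."""
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(H, y + 2))
                    for i in range(max(0, x - 1), min(W, x + 2))]
            out[y][x] = op(vals)
    return out

def erode(img):
    return _window_op(img, min)   # shrinks white regions, removes speckle noise

def dilate(img):
    return _window_op(img, max)   # grows white regions, fills small gaps
```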
further, in step S3, since the morphologically processed image data is stored in the data cache unit, the image data is first read from the cache, and the findContours() function of the OpenCV library is then called to extract the contour of each QR code foreground. The i-th contour is recorded as [x_i, y_i, w_i, h_i]; n denotes the number of rectangular-frame contours obtained from the current picture, x and y denote the abscissa and ordinate of the top-left vertex of a rectangular frame, and w and h denote its width and height. The squareness of each rectangular frame is then computed with equation (8), the QR rectangular frames are pre-screened by Algorithm 1, and the rectangular frames that may contain QR codes are added to the set Λ_out:
[Equation (8) (squareness of a rectangular frame) and Algorithm 1 (pre-screening); rendered as images in the source and not reproduced here.]
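Equation (8) and Algorithm 1 are not reproduced in this text, so the sketch below assumes squareness = min(w, h) / max(w, h) and a fixed acceptance threshold; both are illustrative guesses, not the patent's actual formula:

```python
def squareness(w, h):
    """Assumed form of Eq. (8): closeness of the box to a square
    (1.0 means exactly square). An illustrative guess."""
    return min(w, h) / max(w, h)

def prescreen(boxes, thresh=0.8):
    """Algorithm 1 sketch: keep boxes (x, y, w, h) that are square
    enough to plausibly bound a QR code."""
    return [b for b in boxes if squareness(b[2], b[3]) >= thresh]
```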
after pre-screening, the rectangular frames containing QR codes are screened out further: the QR code rectangular frames already obtained are used as prior information to screen again the set of candidate frames output by Algorithm 1. The specific process is Algorithm 2:
[Algorithm 2: second-stage screening with prior information; rendered as an image in the source and not reproduced here.]
the rectangular boxes filtered by Algorithm 2 are included in the set, and the IOU value of each rectangular box in the set is calculated with equation (9), where IOU(rect_i, rect_j) denotes the IOU value of rectangular frames rect_i and rect_j, Inter(rect_i, rect_j) is the area of the overlap of the two frames, and Union(rect_i, rect_j) is the area of their union:
IOU(rect_i, rect_j) = Inter(rect_i, rect_j) / Union(rect_i, rect_j) (9)
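Equation (9) as described can be sketched directly in Python, with boxes given as (x, y, w, h) tuples as defined above:

```python
def iou(a, b):
    """IOU of two boxes (x, y, w, h) per Eq. (9):
    intersection area divided by union area."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]   # bottom-right of a
    bx2, by2 = b[0] + b[2], b[1] + b[3]   # bottom-right of b
    iw = min(ax2, bx2) - max(a[0], b[0])  # overlap width
    ih = min(ay2, by2) - max(a[1], b[1])  # overlap height
    if iw <= 0 or ih <= 0:
        return 0.0                        # disjoint boxes
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union
```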
the further-screened rectangular frames are divided into independent rectangular frames and overlapping rectangular frames; the overlapping frames are in turn divided into intersecting frames and containing (nested) frames. The specific flow is Algorithm 3: after obtaining the screened rectangular frames, judge whether two frames overlap and add the non-overlapping frames to the independent set. For overlapping frames, compute their mutual overlap ratio: if it is less than 1, the two frames are judged to intersect; if it is greater than or equal to 1, the frames are added to the containing set, the redundant outer frame is removed, and the contained frame is kept in the independent set. The independent set is then used as prior information, and the frames containing QR codes are screened out of the intersecting set by computing the gray-scale distribution of each frame; these too are kept in the independent set:
[Algorithm 3: classification of screened rectangular frames into independent, intersecting, and containing sets; rendered as images in the source and not reproduced here.]
further, in step S4, the convolutional neural network comprises 3 convolutional layers, 3 pooling layers, two fully connected layers, and one softmax layer. The first, second, and third convolutional layers map the image feature information into 16-, 32-, and 64-dimensional feature spaces, respectively; the first, second, and third pooling layers use max pooling to down-sample the feature information. The output dimension of the first fully connected layer is 1×64 and that of the second is 1×2, representing the two probability values of background and QR code foreground; the softmax layer yields a score of dimension 1×1 representing the probability that the input QR code region of dimension 32×32×3 contains a QR code object.
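Under the assumptions of 'same'-padded convolutions and 2×2 max pooling with stride 2 (the patent does not state kernel sizes or strides), the feature-map shapes of the described network can be traced as follows:

```python
def trace_shapes(size=32, channels=(16, 32, 64)):
    """Trace feature-map shapes through the described network for a
    size x size x 3 input region. 'Same'-padded convolutions and 2x2
    stride-2 max pooling are assumptions; the channel widths and
    fully-connected sizes follow the text."""
    shapes = [(size, size, 3)]          # input QR-code candidate region
    h = size
    for ch in channels:
        shapes.append((h, h, ch))       # convolution maps features to ch dims
        h //= 2                         # 2x2 max pooling halves each side
        shapes.append((h, h, ch))
    shapes.append((1, 64))              # first fully connected layer: 1x64
    shapes.append((1, 2))               # second fully connected layer: bg / QR
    return shapes
```

Under these assumptions the last pooled feature map is 4×4×64, which flattens to 1024 values feeding the 1×64 fully connected layer.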
Further, in step S4, before the convolutional neural network classifies the rectangular frame region on the image, it is necessary to perform structuring processing on the trained network parameter weight values and bias values, and perform fixed-point quantization on the parameter values and bias values; the fixed point quantization mode used is shown in formulas (10) - (12), wherein r 'represents a network weight of a floating point real number, q represents a quantized fixed point number, and the data type is a signed 8-bit integer, r' max And r' min Are the maximum and minimum values of r', q, respectively max And q is min Q is the maximum and minimum values, S' is a scaling sparsity factor, Z represents the integer size when 0 in a floating-point real number maps to a number, round (·) represents rounding:
[Equations (10)-(12): fixed-point quantization formulas; rendered as images in the source and not reproduced here.]
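Formulas (10)-(12) are rendered as images in the source; the sketch below follows the standard affine fixed-point quantization scheme that matches the symbol definitions above. The exact formulas are therefore an assumption:

```python
def quantize_params(r_max, r_min, q_max=127, q_min=-128):
    """Standard affine quantization parameters (assumed form of
    Eqs. (10)-(11)): S scales the real range onto the signed 8-bit
    range; Z is the integer that real 0.0 maps to."""
    S = (r_max - r_min) / (q_max - q_min)
    Z = round(q_max - r_max / S)
    return S, Z

def quantize(r, S, Z, q_max=127, q_min=-128):
    """Assumed form of Eq. (12): quantize one real weight, clamped
    to the signed 8-bit range."""
    q = round(r / S + Z)
    return max(q_min, min(q_max, q))
```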
a batch QR code real-time extraction system comprises a camera module, an image acquisition module, a data cache module, a storage module, a bus configuration module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module;
the bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the data caching module, the filtering and edge detecting module, the contour extracting module, the multi-stage step-by-step parallel judging module, the network model classifying module, the image display module and the display screen module are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives an image signal transmitted by the camera module and then transmits the image signal to the data cache module;
the data cache module transmits the image data to the filtering and edge detection module, where gray-level conversion, deblurring, edge detection, binarization, and morphological processing are carried out;
rectangular frames of the QR codes are extracted by the contour extraction module and input to the multi-stage step-by-step parallel discrimination module; the IOU matrices among the rectangular frames are rapidly calculated in a pipelined, accelerated manner so that background regions are removed, and the rectangular frames that may contain QR codes are sent to the network model classification module;
the network model classification module loads the rectangular frame data, the network offset value data and the network weight data from the data cache module and the storage module at the same time, then carries out forward calculation, and transmits the result to the image display module, so that the display screen module displays an effect image.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the method utilizes the edge information of the QR code graph to detect the QR code graph, processes a single-frame image containing a plurality of QR code graphs through the steps of gray level conversion, deblurring, edge detection, binarization, morphological processing, contour extraction, grading multistep discrimination, network model discrimination and the like, considers the graph blurring problem in a repeated way, and deblurrs the image through counting the gradient distribution of the edge gray value. In addition, the method combines the traditional image processing mode and the neural network model, ensures the identification accuracy of the method and improves the operation efficiency. The method can be quickly deployed in an actual embedded real-time scene and applied to the problem of offline real-time detection and extraction of images containing a large number of QR codes, and meets the real-time requirements of detecting and extracting a plurality of QR codes in scenes such as batch medical test tube registration and large-batch warehouse goods inspection and recording.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of a Sobel operator template;
FIG. 3 is an intermediate output image and a resultant output image of the filtering and edge detection module;
FIG. 4 is a flow chart of a multi-stage step-by-step discrimination algorithm;
FIG. 5 is a diagram of a rectangular box type decision network model;
FIG. 6 is a flowchart of the convolution operation of the convolutional neural network;
FIG. 7 is a schematic diagram of an IP core structure of a rectangular box type decision network model;
FIG. 8 is a diagram of the effect of processing a data set picture;
FIG. 9 is a diagram illustrating the processing effect on a picture captured by the camera.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the present embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a batch QR code real-time extraction system includes a camera module, an image acquisition module, a data caching module, a storage module, a bus configuration module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module, and a display screen module;
the bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the data caching module, the filtering and edge detecting module, the contour extracting module, the multi-stage step-by-step parallel judging module, the network model classifying module, the image display module and the display screen module are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives an image signal transmitted by the camera module and then transmits the image signal to the data cache module;
the data cache module transmits the image data to the filtering and edge detection module, where gray-level conversion, deblurring, edge detection, binarization, and morphological processing are carried out;
rectangular frames of the QR codes are extracted by the contour extraction module and input to the multi-stage step-by-step parallel discrimination module; the IOU matrices among the rectangular frames are rapidly calculated in a pipelined, accelerated manner so that background regions are removed, and the rectangular frames that may contain QR codes are sent to the network model classification module;
the network model classification module loads the rectangular frame data, the network offset value data and the network weight data from the data cache module and the storage module at the same time, then carries out forward calculation, and transmits the result to the image display module, so that the display screen module displays an effect image.
Example 2
A batch QR code real-time extraction method comprises the following steps:
s1: acquiring QR code image data and caching;
s2: filtering and edge detection processing are carried out on the cached QR code image data, completing gray-level conversion, deblurring, edge detection, binarization, and morphological processing of the QR code image;
s3: contour extraction is carried out on the QR code image processed in step S2 to obtain QR code rectangular frames, and the IOU matrices among the rectangular frames are rapidly calculated in a pipelined, accelerated manner so that background regions among the rectangular frames are eliminated;
s4: using the QR code rectangular frame obtained in step S3, performing classification calculation using a convolutional neural network, and displaying the calculation result.
In step S2, the process of performing gradation conversion is:
transforming the data from the RGB color space to the gray color space, the transformation formula is as follows (1):
I = 0.30×I_r + 0.59×I_g + 0.11×I_b (1)
wherein I denotes the converted grayscale image, and I_r, I_g, and I_b denote the data of the r, g, and b channels of the input image, respectively;
accelerating formula (1) with integer arithmetic gives formula (2):
I = (300×I_r + 590×I_g + 110×I_b + 500) >> 10 (2)
the gray scale value of the obtained image data pixel ranges from 0 to 255.
In step S2, the process of performing deblurring is:
1) divide the image into B×B image blocks and calculate the vertical-direction DCT coefficients C_v(k) and horizontal-direction DCT coefficients C_h(k) of each block, as shown in formulas (3) and (4), where f(m, n) is the gray level of pixel (m, n):
[Equations (3) and (4): block DCT coefficients; rendered as images in the source and not reproduced here.]
2) judge the edge response of each image block from its DCT coefficients, and compute the edge response of the whole image via formula (5), where M is the number of edge directions greater than 0 and s_i is the edge response of image block i:
[Equation (5): whole-image edge response; rendered as an image in the source and not reproduced here.]
3) estimate the PSF coefficients from S(n), then deblur the grayscale image with a Wiener filter.
As shown in fig. 2, in step S2, the process of performing edge detection is:
performing convolution operation on the Sobel operator template and image data so as to perform edge detection on the image:
defining the Sobel operator template as a constant type of data:
parameter h1=8'hff,h2=8'h00,h3=8'h01,h4=8'hfe,h5=8'h00,h6=8'h02,h7=8'hff,h8=8'h00,h9=8'h01;
parameter v1=8'h01,v2=8'h02,v3=8'h01,v4=8'h00,v5=8'h00,v6=8'h00,v7=8'hff,v8=8'hfe,v9=8'hff;
variables in the Sobel template are 8 bits wide; decimal -1 is represented by the two's complement 8'hff and decimal -2 by the two's complement 8'hfe, with the highest bit as the sign bit. After the horizontal-direction and vertical-direction gradient values of the image are obtained with the Sobel templates, their absolute values are summed to give the gray-level gradient of the whole image.
in step S2, the process of binarization in the filtering and edge detection module is:
the binarization operation converts the gradient image into a black-and-white image; the conversion formula is formula (6):
[Equation (6): binarization of the gradient image; rendered as an image in the source and not reproduced here.]
in step S2, the morphological processing eliminates noise, segments out independent image elements, and finds maximum-value or minimum-value regions in the image; it comprises Erosion and Dilation operations, as shown in formula (7), where I(x, y) denotes the pixel value at position (x, y) in the image data:
[Equation (7): erosion and dilation operations; rendered as an image in the source and not reproduced here.]
FIG. 3(a) is the original image, FIG. 3(b) is an intermediate output image of the filtering and edge detection module, and FIG. 3(c) is the output image after morphological processing.
In step S3, contour extraction on the morphologically processed image data mainly calls the findContours() function of the OpenCV library ported to the ZYNQ platform. For the porting, a ZYNQ hardware system is built, a design constraint file is added, and the hardware configuration file is exported. An FSBL file and a BOOT.BIN boot file are then generated on the PC, and the Linux kernel, device tree, and file system are copied to produce an Ubuntu boot image, which is transferred via SD card to the ZYNQ platform with the configured hardware environment. An OpenCV static library is then compiled on the PC with an ARM cross-compilation tool and moved into the Ubuntu system on ZYNQ via SD card, completing the port of the OpenCV library to the ZYNQ platform. Since the image data produced by the final morphological processing in the filtering and edge detection module is stored in the data cache unit, the image data is first read from the cache, and findContours() is called to extract the contour of each QR code foreground. The i-th contour is recorded as [x_i, y_i, w_i, h_i]; n denotes the number of rectangular-frame contours obtained from the current picture, x and y denote the abscissa and ordinate of the top-left vertex of a rectangular frame, and w and h denote its width and height. The squareness of each rectangular frame is then computed with equation (8), the QR rectangular frames are pre-screened with Algorithm 1, and the frames that may contain QR codes are added to the set Λ_out:
[Formula (8) and Algorithm 1 are presented as images in the original document.]
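For illustration only, the contour-to-box conversion and pre-screening loop described above can be sketched in Python as a software model (not part of the claimed hardware). The squareness measure used here, min(w, h)/max(w, h), is an assumed stand-in, since formula (8) appears only as an image in the original:

```python
def bounding_boxes(contours):
    """Convert contours (lists of (x, y) points) to [x, y, w, h] boxes,
    mirroring what cv2.boundingRect yields on findContours() output."""
    boxes = []
    for pts in contours:
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        x, y = min(xs), min(ys)
        boxes.append([x, y, max(xs) - x + 1, max(ys) - y + 1])
    return boxes

def prescreen(boxes, tol=0.5):
    """Stand-in for Algorithm 1: keep only near-square boxes.
    Squareness is taken as min(w, h) / max(w, h) -- an assumption,
    since formula (8) is rendered as an image in the original."""
    return [b for b in boxes if min(b[2], b[3]) / max(b[2], b[3]) >= 1 - tol]
```

A near-square contour survives the filter, while a long thin contour (unlikely to be a QR code) is discarded.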
After pre-screening, the multi-stage step-by-step parallel discrimination module further screens out the rectangular frames containing QR codes. The QR code rectangular frames already obtained are used as prior information to re-screen the set of rectangular frames possibly containing QR codes output by Algorithm 1; the specific procedure is Algorithm 2:
[Algorithm 2 is presented as images in the original document.]
the rectangular frames screened by Algorithm 2 are placed in a set, and the IOU value of each pair of rectangular frames in the set is calculated using formula (9), where IOU(rect_i, rect_j) denotes the IOU value of rectangular frames rect_i and rect_j, Inter(rect_i, rect_j) is the area of the overlap of the two rectangular frames, and Union(rect_i, rect_j) is the area of their union:

IOU(rect_i, rect_j) = Inter(rect_i, rect_j) / Union(rect_i, rect_j)    (9)
the main Verilog code for calculating the rectangular box IOU values is as follows:
real left_col_max, right_col_min, up_row_max, down_row_min;
real s1, s2, cross, result;
initial begin
    left_col_max  <= (r1_x1 > r2_x1) ? r1_x1 : r2_x1; // left edge of overlap
    right_col_min <= (r1_x2 < r2_x2) ? r1_x2 : r2_x2; // right edge of overlap
    up_row_max    <= (r1_y1 > r2_y1) ? r1_y1 : r2_y1; // top edge of overlap
    down_row_min  <= (r1_y2 < r2_y2) ? r1_y2 : r2_y2; // bottom edge of overlap
    s1 <= 0.0; s2 <= 0.0; cross <= 0.0;
    result <= 0.0;
end
always @(*) begin
    if (left_col_max >= right_col_min || down_row_min <= up_row_max) begin
        result = 0.0; // rectangles do not overlap
    end
    else begin
        s1 = r1_w * r1_h; // area of rectangle 1
        s2 = r2_w * r2_h; // area of rectangle 2
        cross = (down_row_min - up_row_max) * (right_col_min - left_col_max); // overlap area
        result = cross / (s1 + s2 - cross); // IOU = intersection / union
    end
end
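The same IOU computation can be expressed as a short Python reference model, useful for checking the hardware result against software; box coordinates are (x1, y1, x2, y2), matching the r1_x1 .. r2_y2 signals above:

```python
def iou(r1, r2):
    """IOU of two boxes given as (x1, y1, x2, y2); mirrors the Verilog above."""
    left_col_max = max(r1[0], r2[0])    # left edge of overlap
    right_col_min = min(r1[2], r2[2])   # right edge of overlap
    up_row_max = max(r1[1], r2[1])      # top edge of overlap
    down_row_min = min(r1[3], r2[3])    # bottom edge of overlap
    if left_col_max >= right_col_min or down_row_min <= up_row_max:
        return 0.0                      # rectangles are disjoint
    cross = (down_row_min - up_row_max) * (right_col_min - left_col_max)
    s1 = (r1[2] - r1[0]) * (r1[3] - r1[1])
    s2 = (r2[2] - r2[0]) * (r2[3] - r2[1])
    return cross / (s1 + s2 - cross)    # intersection over union
```

For example, two 10x10 boxes offset by (5, 5) overlap in a 5x5 region, giving IOU = 25/175.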
The further-screened rectangular frames are divided into independent rectangular frames and overlapping rectangular frames, and the overlapping rectangular frames are in turn divided into intersecting rectangular frames and containing rectangular frames; the specific flow, shown in fig. 4, is Algorithm 3. After the screened rectangular frames are obtained according to Algorithm 3, it is judged whether each pair of rectangular frames overlaps. Non-overlapping rectangular frames are added to the independent rectangular frame set, while for overlapping rectangular frames the overlap ratio is calculated: if the overlap ratio is less than 1, the two rectangular frames are judged to intersect; if it is greater than or equal to 1, the pair is added to the containing rectangular frame set, the redundant outer frame is removed, and the contained rectangular frame is retained in the independent rectangular frame set. The independent rectangular frame set is then used as prior information, and the grayscale distribution of each rectangular frame in the intersecting rectangular frame set is calculated to screen out the rectangular frames containing QR codes, which are kept in the independent rectangular frame set:
[Algorithm 3 is presented as images in the original document.]
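The independent / intersecting / containing split described above can be sketched in Python as follows. Since Algorithm 3 appears only as an image, the overlap ratio is assumed here to be overlap area divided by the smaller box area, which is consistent with the stated rule (ratio >= 1 implies containment); boxes are (x1, y1, x2, y2):

```python
def classify_pair(r1, r2):
    """Classify a pair of boxes as 'independent', 'intersecting' or
    'containing', following the description: ratio < 1 -> intersecting,
    ratio >= 1 -> containing. The ratio definition (overlap area over the
    smaller box's area) is an assumption; Algorithm 3 is an image in the
    original document."""
    ix = max(0, min(r1[2], r2[2]) - max(r1[0], r2[0]))  # overlap width
    iy = max(0, min(r1[3], r2[3]) - max(r1[1], r2[1]))  # overlap height
    inter = ix * iy
    if inter == 0:
        return "independent"
    a1 = (r1[2] - r1[0]) * (r1[3] - r1[1])
    a2 = (r2[2] - r2[0]) * (r2[3] - r2[1])
    return "containing" if inter / min(a1, a2) >= 1 else "intersecting"
```

A box fully inside another yields "containing" (the inner box would then be kept and the outer frame discarded), while partially overlapping boxes yield "intersecting".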
As shown in fig. 5, in step S4 the network model of the convolutional neural network comprises 3 convolutional layers, 3 pooling layers, two fully-connected layers and one softmax layer. The first, second and third convolutional layers map the image feature information into 16-, 32- and 64-dimensional high-dimensional spaces respectively, and the first, second and third pooling layers down-sample the feature information using max pooling. The output data dimension of the first fully-connected layer is 1×64 and that of the second fully-connected layer is 1×2, representing the two probability values of background and QR code foreground; a score value of data dimension 1×1 is then obtained through the softmax layer, representing the probability that the input 32×32×3 QR code region is judged to contain a QR code object.
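A minimal sketch of the tensor shapes through this model can make the layer arrangement concrete. The channel widths (16, 32, 64) and the fully-connected output dimensions (1×64, 1×2) are from the description; the 3×3 kernel with padding 1 and the 2×2 pooling stride are assumptions, chosen so that a 32×32 input halves at each stage:

```python
def cnn_shapes(h=32, w=32):
    """Trace feature-map shapes through the described model: three conv
    layers (16, 32, 64 channels; size-preserving convolutions assumed),
    each followed by 2x2 max pooling, then two fully-connected layers
    (64 and 2 units, background vs. QR foreground)."""
    shapes = []
    c = 3  # RGB input
    for ch in (16, 32, 64):
        h, w, c = h // 2, w // 2, ch  # conv keeps size, pool halves it
        shapes.append((h, w, c))
    flat = h * w * c                  # flattened input to the first FC layer
    shapes += [(1, 64), (1, 2)]       # fc1 output, fc2 output
    return flat, shapes
```

Under these assumptions the final feature map is 4×4×64, i.e. 1024 values feeding the first fully-connected layer.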
As shown in fig. 6, taking a single convolution kernel as an example, the present invention uses multiple FIFO buffer units and multiple registers to implement pipelined parallel acceleration of the convolution operation. Since the size of a single frame image is 32×32, FIFO buffer unit 1, FIFO buffer unit 2 and FIFO buffer unit 3 delay one another by 32 time units so that three rows of image data are input together. Then, using FIFO buffer unit 4, FIFO buffer unit 5 and 3 register groups, only three rows of image data need to be read at a time; that is, register group 1 outputs the image data v0 to v8 corresponding to the convolution window read at that moment. These data are multiply-accumulated with the 3×3 convolution kernel, where C0 to C8 denote the kernel weights; intermediate results are stored in registers reg9 to reg24, and register reg25 stores the final convolution output. As the architecture in the figure shows, while the convolution operation proceeds on the right side, image data are continuously read out on the left side each clock cycle and passed rightward through the FIFO buffer units and registers to await the next operation, realizing pipelined parallel processing and improving the operating efficiency of the network model.
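The line-buffer scheme of fig. 6 can be modeled in software: pixels stream in one per "clock", two row-length buffers delay the stream by one and two rows, and a 3×3 shift-register window (v0 to v8) feeds the multiply-accumulate. This is a behavioral sketch of the dataflow only, not the hardware itself; only valid (no-padding) outputs are produced:

```python
def conv3x3_stream(img, k):
    """Software model of the FIFO line-buffer convolution: the image is
    consumed pixel by pixel; line1/line2 play the role of the row FIFOs,
    win is the 3x3 register window (v0..v8), k is the 3x3 kernel."""
    h, w = len(img), len(img[0])
    line1 = [0] * w                      # delays the stream by one row
    line2 = [0] * w                      # delays the stream by two rows
    win = [[0] * 3 for _ in range(3)]    # shift-register window
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for r in range(h):
        for c in range(w):
            px = img[r][c]
            top, mid = line2[c], line1[c]        # pixels from rows r-2, r-1
            line2[c], line1[c] = line1[c], px    # shift the line buffers
            for row in win:                      # shift window one column left
                row[0], row[1] = row[1], row[2]
            win[0][2], win[1][2], win[2][2] = top, mid, px
            if r >= 2 and c >= 2:                # window now covers 3x3 pixels
                out[r - 2][c - 2] = sum(
                    win[i][j] * k[i][j] for i in range(3) for j in range(3))
    return out
```

Each output appears as soon as its window is complete, which is exactly the property that lets the hardware pipeline the multiply-accumulate behind the streaming reads.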
In step S4, before the network model of the convolutional neural network is called to classify the rectangular-frame regions on the image, the trained network parameter weights and bias values must first be structured; these parameter and bias values are fixed-point quantized and then loaded into the storage module. The fixed-point quantization scheme used is shown in formulas (10)-(12), where r' denotes a network weight as a floating-point real number, q denotes the quantized fixed-point number, whose data type is a signed 8-bit integer, r'_max and r'_min are the maximum and minimum values of r', q_max and q_min are the maximum and minimum values of q, S' is the scaling factor, Z denotes the integer to which 0 in the floating-point domain maps, and round(·) denotes rounding:
S' = (r'_max - r'_min) / (q_max - q_min)    (10)

Z = round(q_max - r'_max / S')    (11)

q = round(r' / S' + Z)    (12)
example 3
High-Level Synthesis (HLS) technology is used to rapidly deploy the network model on the FPGA platform, and fig. 7 is a schematic diagram of the IP core structure of the rectangular-frame classification network model. Before the network model module is called to classify the rectangular-frame regions on the image, the trained network parameter weights and bias values are first structured; these parameter and bias values are fixed-point quantized and then loaded into the storage module.
The fixed-point quantization scheme used is shown in formulas (10)-(12), where r' denotes a network weight as a floating-point real number, q denotes the quantized fixed-point number, whose data type is a signed 8-bit integer, r'_max and r'_min are the maximum and minimum values of r', q_max and q_min are the maximum and minimum values of q, S' is the scaling factor, Z denotes the integer to which 0 in the floating-point domain maps, and round(·) denotes rounding:
S' = (r'_max - r'_min) / (q_max - q_min)    (10)

Z = round(q_max - r'_max / S')    (11)

q = round(r' / S' + Z)    (12)
The global control module provides the control signals for each operation during network model execution, so that the network weights and image data can be loaded into the network operation unit. Specifically, after the network model classification module receives an operation-start enable signal from the global control module, the image data read from the FIFO buffer unit and the weight data from the storage module are transferred to on-chip memory for block storage; the network operation unit then fetches the data, performs the forward computation, and finally outputs and stores the computed score value.
A ZYNQ chip of the XC7Z020 series is adopted. The chip comprises a PS (Processing System) part and a PL (Programmable Logic) part; the PS side is a dual-core Cortex-A9 processor with a maximum frequency of 766 MHz, and the PL side contains 85K logic cells, 4.9 Mbit of block RAM, and so on. The data storage module on the hardware platform comprises 1 GB DDR3 and 8 GB eMMC, and the platform also provides external interfaces such as a VGA interface and an SD card interface. The algorithm is written in Verilog, compiled and synthesized with Xilinx Vivado 2017.4 software, and downloaded into the chip of the hardware platform to form the corresponding hardware circuit, and the Ubuntu system is ported to the ZYNQ platform. After the system starts, the bus configuration module configures the OV5640 camera module through the I2C bus protocol so that it outputs 24-bit RGB images, which are cached in the data cache unit via the image acquisition module. The QR code extraction IP core then loads preset parameters, weight data and the like from the storage module, reads and processes the image data in the cache unit, and finally the result image is shown on a display screen via the image display module. The image display module adopts the VGA protocol; its interface comprises a field sync signal line, a line sync signal line, R, G and B signal lines and two I2C communication lines, and it also contains an independent block RAM unit serving as display memory for image display caching. Fig. 8 and 9 show the QR code extraction effect of the hardware system of the present invention in actual application: fig. 8(a) is a picture from a QR code dataset stored in the eMMC memory module of the hardware platform and processed by directly calling the program, and fig. 8(b) is the resulting effect image; fig. 9(a) is a QR code image captured by the camera module on the hardware platform, and fig. 9(b) is the processing effect image of the system.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A batch QR code real-time extraction method is characterized by comprising the following steps:
s1: acquiring QR code image data and caching;
s2: filtering and edge detection processing are performed on the cached QR code image data, completing grayscale conversion, deblurring, edge detection, binarization and morphological processing of the QR code image;
s3: contour extraction is performed on the QR code image processed in step S2 to obtain QR code rectangular frames, and the IOU matrix between rectangular frames is rapidly calculated in a pipelined, accelerated manner so as to eliminate background regions among the QR code rectangular frames;
s4: using the QR code rectangular frame obtained in step S3, performing classification calculation using a convolutional neural network, and displaying the calculation result.
2. The batch QR code real-time extraction method according to claim 1, wherein in the step S2, the gray scale conversion process is:
converting QR code image data from an RGB color space to a gray color space, wherein the conversion formula is as follows (1):
I = 0.30×I_r + 0.59×I_g + 0.11×I_b    (1)
wherein I represents the converted grayscale image, and I_r, I_g and I_b represent the data of the r, g and b channels of the input image respectively;
carrying out accelerated treatment on the formula (1), as shown in a formula (2):
I = (300×I_r + 590×I_g + 110×I_b + 500) >> 10    (2)
the gray scale value of the obtained image data pixel ranges from 0 to 255.
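For illustration, formulas (1) and (2) can be compared directly in Python; this is a software check only. Note that the right shift by 10 divides by 1024 while the scaled coefficients sum to 1000, so formula (2) is a close integer-only approximation of formula (1) rather than an exact match:

```python
def gray_exact(r, g, b):
    """Formula (1): floating-point grayscale conversion."""
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_fast(r, g, b):
    """Formula (2): integer-only form. Coefficients are scaled by 1000,
    +500 rounds, and >> 10 divides by 1024 (a cheap shift in hardware)."""
    return (300 * r + 590 * g + 110 * b + 500) >> 10
```

For a mid-gray pixel (100, 100, 100) the exact value is 100 and the fast value is 98, an error of about 2.4% introduced by the 1000/1024 scale mismatch, which is acceptable for the subsequent thresholding steps.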
3. The method for extracting QR codes in batches in real time according to claim 2, wherein in step S2, the process of performing deblurring is:
1) dividing the QR code image into B×B image blocks, and then calculating the DCT coefficient C_v(k) of each block in the vertical direction and the DCT coefficient C_h(k) in the horizontal direction, as shown in formulas (3) and (4), where f(m, n) is the gray level of pixel (m, n):
[Formulas (3) and (4) are presented as images in the original document.]
2) judging the edge response of each image block from its DCT coefficients, and calculating the edge response of the whole image by formula (5), where M is the number of edge directions greater than 0 and s_i is the edge response of the ith image block:
[Formula (5) is presented as an image in the original document.]
3) estimating the PSF coefficients using S(n), and then deblurring the grayscale image with a Wiener filter.
4. The method for extracting QR codes in batch in real time according to claim 3, wherein in the step S2, the process of performing the edge detection is as follows:
performing convolution operation on the Sobel operator template and image data so as to perform edge detection on the image:
defining the Sobel operator template as a constant type of data:
parameter h1=8'hff,h2=8'h00,h3=8'h01,h4=8'hfe,h5=8'h00,h6=8'h02,h7=8'hff,h8=8'h00,h9=8'h01;
parameter v1=8'h01,v2=8'h02,v3=8'h01,v4=8'h00,v5=8'h00,v6=8'h00,v7=8'hff,v8=8'hfe,v9=8'hff;
the bit width of the variables in the Sobel templates is 8; decimal -1 is represented by the two's complement 8'hff and decimal -2 by the two's complement 8'hfe, with the highest bit as the sign bit. After the horizontal and vertical gradient values of the image are obtained with the Sobel templates, their absolute values are added to obtain the gray gradient of the whole image.
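For illustration, the two's-complement encoding of the template constants and the |Gx| + |Gy| gradient of a single window can be checked in Python; this is a software model of the claimed hardware operation:

```python
def to_signed8(x):
    """Interpret an 8-bit value as two's complement (8'hff -> -1, 8'hfe -> -2)."""
    return x - 256 if x >= 128 else x

# Horizontal template h1..h9 and vertical template v1..v9 from the claim
H = [to_signed8(v) for v in (0xff, 0x00, 0x01, 0xfe, 0x00, 0x02, 0xff, 0x00, 0x01)]
V = [to_signed8(v) for v in (0x01, 0x02, 0x01, 0x00, 0x00, 0x00, 0xff, 0xfe, 0xff)]

def sobel_mag(win):
    """Gray gradient |Gx| + |Gy| over one 3x3 window (row-major, 9 pixels)."""
    gx = sum(w * c for w, c in zip(win, H))
    gy = sum(w * c for w, c in zip(win, V))
    return abs(gx) + abs(gy)
```

A flat window yields gradient 0, while a window with a vertical intensity step produces a strong horizontal-gradient response.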
5. The batch QR code real-time extraction method according to claim 4, wherein in the step S2, the binarization process is as follows:
the binarization operation converts the gradient image into a black-and-white image; the conversion formula is formula (6):
[Formula (6) is presented as an image in the original document.]
6. The method for extracting QR codes in batches in real time according to claim 5, wherein the morphological processing in step S2 is as follows: the morphological processing eliminates noise, segments out independent QR code image elements, and finds maximum-value or minimum-value regions in the image; it comprises erosion (Erosion) and dilation (Dilation) operations, as shown in formula (7), where I(x, y) represents the pixel value at position (x, y) in the QR code image data:
[Formula (7) is presented as an image in the original document.]
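Since formula (7) appears only as an image, the erosion and dilation operations can be sketched with their standard grayscale definitions (local minimum and local maximum over a 3×3 neighborhood); the 3×3 structuring element and the border handling here are illustrative assumptions:

```python
def morph(img, op):
    """3x3 grayscale erosion ('erode' -> local min) or dilation
    ('dilate' -> local max); border pixels are left unchanged for
    simplicity. img is a list of equal-length rows."""
    h, w = len(img), len(img[0])
    f = min if op == "erode" else max
    out = [row[:] for row in img]              # copy; input is not mutated
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = f(img[r + i][c + j]
                          for i in (-1, 0, 1) for j in (-1, 0, 1))
    return out
```

Erosion suppresses isolated bright noise pixels (a lone bright pixel disappears), while dilation grows bright regions, which is how the method separates and solidifies the individual QR code foregrounds.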
7. The method as claimed in claim 6, wherein in step S3, since the image data after morphological processing is stored in the data cache unit, the image data in the data cache unit is read first, and the findContours() function in the OpenCV library is then called to extract the contour of each QR code foreground; the ith contour is denoted [x_i, y_i, w_i, h_i], where n denotes the number of rectangular-frame contours obtained from the current picture, x and y denote the abscissa and ordinate of the top-left vertex of the rectangular frame, and w and h denote its width and height; the squareness of each rectangular frame is then calculated using formula (8), the QR rectangular frames are pre-screened by Algorithm 1, and the rectangular frames possibly containing QR codes are added to the set Λ_out:
[Formula (8) and Algorithm 1 are presented as images in the original document.]
after pre-screening, the rectangular frames containing QR codes are further screened out, and the QR code rectangular frames already obtained are used as prior information to re-screen the set of rectangular frames possibly containing QR codes output by Algorithm 1; the specific procedure is Algorithm 2:
[Algorithm 2 is presented as images in the original document.]
the rectangular frames screened by Algorithm 2 are placed in a set, and the IOU value of each pair of rectangular frames in the set is calculated using formula (9), where IOU(rect_i, rect_j) denotes the IOU value of rectangular frames rect_i and rect_j, Inter(rect_i, rect_j) is the area of the overlap of the two rectangular frames, and Union(rect_i, rect_j) is the area of their union:

IOU(rect_i, rect_j) = Inter(rect_i, rect_j) / Union(rect_i, rect_j)    (9)
the further-screened rectangular frames are divided into independent rectangular frames and overlapping rectangular frames, and the overlapping rectangular frames are in turn divided into intersecting rectangular frames and containing rectangular frames; the specific flow is Algorithm 3. After the screened rectangular frames are obtained according to Algorithm 3, it is judged whether each pair of rectangular frames overlaps; non-overlapping rectangular frames are added to the independent rectangular frame set, while for overlapping rectangular frames the mutual overlap ratio is calculated: if the overlap ratio is less than 1, the two rectangular frames are judged to intersect; if it is greater than or equal to 1, the pair is added to the containing rectangular frame set, the redundant outer frame is removed, and the contained rectangular frame is retained in the independent rectangular frame set. The independent rectangular frame set is then used as prior information, and the grayscale distribution of each rectangular frame in the intersecting rectangular frame set is calculated to screen out the rectangular frames containing QR codes, which are kept in the independent rectangular frame set:
[Algorithm 3 is presented as images in the original document.]
8. The batch QR code real-time extraction method according to claim 7, wherein in step S4 the convolutional neural network comprises 3 convolutional layers, 3 pooling layers, two fully-connected layers and one softmax layer; the first, second and third convolutional layers map the image feature information into 16-, 32- and 64-dimensional high-dimensional spaces respectively, and the first, second and third pooling layers down-sample the feature information using max pooling; the output data dimension of the first fully-connected layer is 1×64 and that of the second fully-connected layer is 1×2, representing the two probability values of background and QR code foreground; a score value of data dimension 1×1 is then obtained through the softmax layer, representing the probability that the input 32×32×3 QR code region is judged to contain a QR code object.
9. The method for extracting QR codes in batches in real time according to claim 8, wherein in step S4, before the convolutional neural network classifies the rectangular-frame regions on the image, the trained network parameter weights and bias values must be structured, and these parameter and bias values are fixed-point quantized; the fixed-point quantization scheme used is shown in formulas (10)-(12), where r' denotes a network weight as a floating-point real number, q denotes the quantized fixed-point number, whose data type is a signed 8-bit integer, r'_max and r'_min are the maximum and minimum values of r', q_max and q_min are the maximum and minimum values of q, S' is the scaling factor, Z denotes the integer to which 0 in the floating-point domain maps, and round(·) denotes rounding:
S' = (r'_max - r'_min) / (q_max - q_min)    (10)

Z = round(q_max - r'_max / S')    (11)

q = round(r' / S' + Z)    (12)
10. the system for applying the batch QR code real-time extraction method of claim 9 is characterized by comprising a camera module, an image acquisition module, a data caching module, a storage module, a bus configuration module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module;
the bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the data caching module, the filtering and edge detecting module, the contour extracting module, the multi-stage step-by-step parallel judging module, the network model classifying module, the image display module and the display screen module are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives an image signal transmitted by the camera module and then transmits the image signal to the data cache module;
the data cache module transmits the image data to the filtering and edge detection module, where grayscale conversion, deblurring, edge detection, binarization and morphological processing are performed;
rectangular frames of the QR code are extracted by the contour extraction module and input to the multi-stage step-by-step parallel discrimination module, where the IOU matrix between rectangular frames is rapidly calculated in a pipelined, accelerated manner to remove background regions, and the rectangular frames possibly containing QR codes are sent to the network model classification module;
the network model classification module loads the rectangular frame data, the network offset value data and the network weight data from the data cache module and the storage module at the same time, then carries out forward calculation, and transmits the result to the image display module, so that the display screen module displays an effect image.
CN202210669471.6A 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system Active CN115017931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669471.6A CN115017931B (en) 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210669471.6A CN115017931B (en) 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system

Publications (2)

Publication Number Publication Date
CN115017931A true CN115017931A (en) 2022-09-06
CN115017931B CN115017931B (en) 2024-06-14

Family

ID=83075806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669471.6A Active CN115017931B (en) 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system

Country Status (1)

Country Link
CN (1) CN115017931B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314951A (en) * 2023-11-20 2023-12-29 四川数盾科技有限公司 Two-dimensional code recognition preprocessing method and system
CN117573709A (en) * 2023-10-23 2024-02-20 昆易电子科技(上海)有限公司 Data processing system, electronic device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902806A (en) * 2019-02-26 2019-06-18 清华大学 Method is determined based on the noise image object boundary frame of convolutional neural networks
CN109961049A (en) * 2019-03-27 2019-07-02 东南大学 Cigarette brand recognition methods under a kind of complex scene
CN111597848A (en) * 2020-04-21 2020-08-28 中山大学 Batch QR code image extraction method and system
CN113450376A (en) * 2021-06-14 2021-09-28 石河子大学 Cotton plant edge detection method based on FPGA

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902806A (en) * 2019-02-26 2019-06-18 清华大学 Method is determined based on the noise image object boundary frame of convolutional neural networks
CN109961049A (en) * 2019-03-27 2019-07-02 东南大学 Cigarette brand recognition methods under a kind of complex scene
CN111597848A (en) * 2020-04-21 2020-08-28 中山大学 Batch QR code image extraction method and system
CN113450376A (en) * 2021-06-14 2021-09-28 石河子大学 Cotton plant edge detection method based on FPGA

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN, R. et al.: "Fast Restoration for Out-of-focus Blurred Images of QR Code with Edge Prior Information via Image Sensing", IEEE SENSORS JOURNAL [ONLINE], vol. 21, no. 6, 15 August 2021 (2021-08-15), pages 18222 - 18236 *
崔吉, 崔建国: "Research on Restoration Theory and Methods for Weak Image Signals" (《弱图像信号的复原理论与方法研究》), Shanghai Jiao Tong University Press, pages: 58 - 12 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117573709A (en) * 2023-10-23 2024-02-20 昆易电子科技(上海)有限公司 Data processing system, electronic device, and medium
CN117573709B (en) * 2023-10-23 2024-09-20 昆易电子科技(上海)有限公司 Data processing system, electronic device, and medium
CN117314951A (en) * 2023-11-20 2023-12-29 四川数盾科技有限公司 Two-dimensional code recognition preprocessing method and system
CN117314951B (en) * 2023-11-20 2024-01-26 四川数盾科技有限公司 Two-dimensional code recognition preprocessing method and system

Also Published As

Publication number Publication date
CN115017931B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN111681273B (en) Image segmentation method and device, electronic equipment and readable storage medium
CN109978839B (en) Method for detecting wafer low-texture defects
CN104025118B (en) Use the object detection of extension SURF features
CN111080660A (en) Image segmentation method and device, terminal equipment and storage medium
US20110211726A1 (en) System and method for processing image data relative to a focus of attention within the overall image
CN115017931B (en) Batch QR code real-time extraction method and system
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
Mukherjee et al. Enhancement of image resolution by binarization
CN114169381A (en) Image annotation method and device, terminal equipment and storage medium
CN112364873A (en) Character recognition method and device for curved text image and computer equipment
CN110335233B (en) Highway guardrail plate defect detection system and method based on image processing technology
CN111639704A (en) Target identification method, device and computer readable storage medium
CN113255555A (en) Method, system, processing equipment and storage medium for identifying Chinese traffic sign board
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN113516053A (en) Ship target refined detection method with rotation invariance
CN108960246B (en) Binarization processing device and method for image recognition
CN115147405A (en) Rapid nondestructive testing method for new energy battery
CN112396564A (en) Product packaging quality detection method and system based on deep learning
CN116309612B (en) Semiconductor silicon wafer detection method, device and medium based on frequency decoupling supervision
CN111291767A (en) Fine granularity identification method, terminal equipment and computer readable storage medium
CN115345895B (en) Image segmentation method and device for visual detection, computer equipment and medium
CN117132540A (en) PCB defect post-processing method based on segmentation model
CN116311290A (en) Handwriting and printing text detection method and device based on deep learning
CN115131355A (en) Intelligent method for detecting abnormality of waterproof cloth by using data of electronic equipment
CN113963004A (en) Sampling method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant