
CN115017931B - Batch QR code real-time extraction method and system - Google Patents


Info

Publication number
CN115017931B
CN115017931B (application CN202210669471.6A)
Authority
CN
China
Prior art keywords
module
image
code
data
real
Prior art date
Legal status
Active
Application number
CN202210669471.6A
Other languages
Chinese (zh)
Other versions
CN115017931A (en)
Inventor
陈荣军
黄宏兴
于永兴
马勇枝
任金昌
王磊军
吕巨建
赵慧民
李建波
Current Assignee
Guangzhou Chuxiu Information Technology Co ltd
Guangdong Polytechnic Normal University
Original Assignee
Guangzhou Chuxiu Information Technology Co ltd
Guangdong Polytechnic Normal University
Priority date
Filing date
Publication date
Application filed by Guangzhou Chuxiu Information Technology Co ltd, Guangdong Polytechnic Normal University filed Critical Guangzhou Chuxiu Information Technology Co ltd
Priority to CN202210669471.6A
Publication of CN115017931A
Application granted
Publication of CN115017931B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1447 Methods for optical code recognition including a method step for retrieval of the optical code extracting optical codes from image or text carrying said optical code
    • G06K7/1452 Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
    • G06K7/146 Methods for optical code recognition the method including quality enhancement steps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for extracting a batch of QR codes in real time. Edge information of the QR code patterns is used to detect them: a single-frame image containing multiple QR code patterns is processed through gray-level conversion, deblurring, edge detection, binarization, morphological processing, contour extraction, hierarchical multi-step discrimination and network-model discrimination. The problem of pattern blurring is addressed by deblurring the image based on statistics of the gradient distribution of edge gray values. Combining traditional image processing with a neural network model ensures the identification accuracy of the method while improving its operating efficiency. For offline real-time detection and extraction of images containing a large number of QR codes, the method can be rapidly deployed and applied in practical embedded real-time scenarios, meeting the real-time requirements of detecting and extracting multiple QR codes in scenes such as batch medical test-tube registration and batch warehouse goods inspection and recording.

Description

Batch QR code real-time extraction method and system
Technical Field
The invention relates to the technical fields of information technology and Internet of things, in particular to a batch QR code real-time extraction method and system.
Background
With the rapid development of information technology and Internet of Things technology, the QR code (Quick Response code), as a two-dimensional code identification scheme with large information capacity, high security and low cost, is widely deployed in the Internet of Things sensing layer across fields such as electronic commerce, warehouse logistics and equipment management. In particular, on some workshop assembly lines a QR code label is attached to every article, and as the conveyor belt moves, the QR code scanning device must promptly read all QR codes within the scanned area. Similarly, in some biochemical laboratories, personnel must register and retrieve samples by identifying the QR code identifiers on large numbers of sample tubes and vaccine reagents. The warehouse logistics industry likewise needs to identify QR code labels on large quantities of goods for registration and warehouse tracking. These scenarios all require a technique that can detect large quantities of QR codes in real time.
A System on Chip (SoC) is a chip-level system integrating a processor, dedicated circuits and peripheral controllers, which the user can customize freely. A field programmable gate array (FPGA, Field Programmable Gate Array) is a semi-custom SoC circuit chip belonging to the field of application-specific integrated circuits. FPGAs employ a Logic Cell Array (LCA) comprising internal interconnect, configurable logic blocks (CLBs, Configurable Logic Block) and input-output blocks (IOBs, Input Output Block), with a large number of input-output pins and flip-flops inside. Implementing an algorithm on an FPGA accelerates the program from the hardware level, achieving real-time operation. To reduce the development of repeated FPGA blocks, intellectual property cores (IP cores, Intellectual Property Core) are widely used in the FPGA development phase. An IP core is a verified, reusable integrated-circuit module with a defined function, which greatly reduces development time and cost and improves design efficiency.
The prior art discloses a method and a system for extracting batch QR code images. The method comprises the following steps: preprocessing operations such as gray-level conversion and filtering denoising are performed on an actually shot high-resolution image containing multiple QR codes; an edge detection method then extracts edge gradient values, and an edge image is obtained through truncation normalization; an initial block size is set and the optimal block size is determined after an iterative search; the image is re-blocked at the optimal block size, feature calculation and threshold segmentation are performed, the marked blocks are clustered to obtain a set of candidate rectangular frames, and rectangular frames that clearly do not match QR code region features are removed by screening; finally, a compressed, lightweight, high-performance MobileNet-series classifier is trained to judge whether the candidate frames contain QR code patterns, background rectangular frames are eliminated, and the corresponding regions are separated from the original image, completing the segmentation and extraction of the QR code patterns. However, that method involves a large number of calculation steps and a large amount of computation, places relatively high demands on hardware performance, and its detection accuracy drops sharply when the input picture is blurred.
Disclosure of Invention
The invention provides a real-time extraction method for a batch of QR codes, which can detect pictures containing a large number of the QR codes in real time and extract the QR codes.
The invention further aims at providing a batch QR code real-time extraction system applying the method.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a real-time extraction method for a batch of QR codes comprises the following steps:
s1: collecting and caching QR code image data;
S2: filtering and edge detection processing are carried out on the cached QR code image data, and gray conversion, deblurring, edge detection, binarization and morphological processing of the QR code image are completed;
S3: performing contour extraction on the QR code image processed in the step S2 to obtain rectangular frames of the QR code, and rapidly calculating IOU matrixes among the rectangular frames in a pipeline acceleration mode, so that a background area of the rectangular frames of the QR code is removed;
s4: and (3) classifying and calculating the rectangular frame of the QR code by using a convolutional neural network, and displaying the calculation result.
Further, in the step S2, the gray level conversion process is as follows:
Converting QR code image data from an RGB color space to a gray color space, the conversion formula being as in formula (1):
I = 0.30×I_r + 0.59×I_g + 0.11×I_b (1)
Wherein I represents the converted gray-scale image, and I_r, I_g and I_b represent the r-, g- and b-channel data of the input image;
Formula (1) is accelerated with fixed-point arithmetic as in formula (2):
I = (300×I_r + 590×I_g + 110×I_b + 500) >> 10 (2)
The gray value range of the image data pixel obtained at this time is 0 to 255.
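Formula (2) replaces the three floating-point multiplies of formula (1) with integer multiplies and a 10-bit right shift, which is cheap in hardware. A minimal Python sketch (function names are illustrative, not from the patent); note that the integer coefficients sum to 1000 while >>10 divides by 1024, so formula (2) is scaled by roughly 1000/1024 relative to formula (1):

```python
def gray_float(r, g, b):
    """Formula (1): floating-point grayscale conversion."""
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_fixed(r, g, b):
    """Formula (2): integer approximation; '>> 10' divides by 1024 and
    the +500 term rounds the sum before the shift."""
    return (300 * r + 590 * g + 110 * b + 500) >> 10

# White pixel: gray_float gives 255.0, while gray_fixed gives 249,
# since the coefficients sum to 1000 but the shift divides by 1024.
```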
Further, in the step S2, the deblurring process is performed as follows:
1) Dividing the QR code image into b×b image blocks, then calculating the vertical-direction DCT coefficients C_v(k) and horizontal-direction DCT coefficients C_h(k) of each block, as shown in formulas (3) and (4), where f(m, n) is the gray value of pixel (m, n):
2) Judging the edge response of each image block from its DCT coefficients, and calculating the edge response of the whole image through formula (5), where M is the number of blocks whose edge response is greater than 0 and s_i is the edge response of the i-th image block:
3) Estimating the PSF coefficients using S(n), and deblurring the gray image with a Wiener filter.
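Formulas (3) and (4) themselves are not reproduced in this text. Purely as an illustration, one plausible form of the per-block coefficients C_v(k)/C_h(k) is the common unnormalized 1-D DCT-II applied along a block's columns or rows (the patent's exact definition may differ):

```python
import math

def dct_1d(f):
    """Unnormalized DCT-II of a 1-D sequence f: one plausible form of the
    per-block coefficients C(k) referenced by formulas (3)-(4)."""
    n = len(f)
    return [sum(f[m] * math.cos(math.pi * k * (2 * m + 1) / (2 * n))
                for m in range(n))
            for k in range(n)]

# A perfectly flat block row has all its energy in C(0); the
# high-frequency coefficients, which a sharp edge would excite, vanish.
coeffs = dct_1d([1.0] * 8)
```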
Further, in the step S2, the edge detection process is as follows:
performing convolution operation on the image data through the Sobel operator template, so as to perform edge detection on the image:
defining the Sobel operator template as constant data:
parameter h1=8'hff,h2=8'h00,h3=8'h01,h4=8'hfe,h5=8'h00,h6=8'h02,h7=8'hff,h8=8'h00,h9=8'h01;
parameter v1=8'h01,v2=8'h02,v3=8'h01,v4=8'h00,v5=8'h00,v6=8'h00,v7=8'hff,v8=8'hfe,v9=8'hff;
The variables in the Sobel template are 8 bits wide: decimal −1 is represented by the two's complement 8'hff and decimal −2 by 8'hfe, the most significant bit being the sign bit. The Sobel templates yield the gradient values of the image in the horizontal and vertical directions, and the sum of their absolute values gives the gray gradient of the whole image.
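The parameter tables above encode the standard 3×3 Sobel kernels (8'hff = −1, 8'hfe = −2 in two's complement). A Python sketch of the same gradient computation at one interior pixel (helper names are illustrative):

```python
# Horizontal and vertical Sobel kernels, matching h1..h9 and v1..v9 above.
SOBEL_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_V = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

def gradient(img, y, x):
    """|Gx| + |Gy| at interior pixel (y, x) of a 2-D list-of-lists image."""
    gx = sum(SOBEL_H[i][j] * img[y - 1 + i][x - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(SOBEL_V[i][j] * img[y - 1 + i][x - 1 + j]
             for i in range(3) for j in range(3))
    return abs(gx) + abs(gy)

# A vertical step edge gives a strong response at the transition
# and zero response in the flat region.
img = [[0, 0, 0, 255, 255] for _ in range(5)]
```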
Further, in the step S2, the binarization process is:
The binarization operation mainly converts the gradient image into a black-and-white image, and the conversion formula is as shown in formula (6):
Further, in the step S2, the morphological processing eliminates noise, segments out independent QR code image elements, and finds the maximum-area or minimum-area regions in the image; it consists of erosion and dilation operations, as shown in formula (7), where I(x, y) represents the pixel value at position (x, y) in the QR code image data:
Further, in the step S3, since the image data produced by the morphological processing is stored in the data buffer unit, the image data is first read out of the buffer; the findContours() function in the OpenCV library is then called to extract the contour of each QR code foreground. The i-th contour is represented by [x_i, y_i, w_i, h_i], where n is the number of rectangular-frame contours obtained from the current picture, x and y are the abscissa and ordinate of the rectangle's top-left vertex, and w and h are its width and height. The squareness of each rectangular frame is then calculated by formula (8), the rectangular frames are pre-screened by Algorithm 1, and those that may contain a QR code are added to the set Λ_out:
After pre-screening, the rectangular frames containing QR codes are screened further: the QR code rectangular frames already obtained serve as prior information for re-screening the set of candidate frames output by Algorithm 1, as detailed in Algorithm 2:
The rectangular boxes screened by Algorithm 2 are collected in a set, and the IOU value of each pair of rectangular boxes in the set is calculated using formula (9), where IOU(rect_i, rect_j) denotes the IOU value of rectangular boxes rect_i and rect_j, Inter(rect_i, rect_j) is the area of their overlap, and Union(rect_i, rect_j) is the area of their union:
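In [x, y, w, h] box coordinates, formula (9) can be sketched as follows (function name illustrative):

```python
def iou(a, b):
    """IOU of two boxes given as (x, y, w, h), per formula (9):
    Inter / (area_a + area_b - Inter)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    left, right = max(ax, bx), min(ax + aw, bx + bw)
    top, bottom = max(ay, by), min(ay + ah, by + bh)
    if right <= left or bottom <= top:
        return 0.0                          # no overlap
    inter = (right - left) * (bottom - top)
    union = aw * ah + bw * bh - inter
    return inter / union
```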
The further-screened rectangular frames are divided into independent rectangular frames and overlapping rectangular frames, and the overlapping frames are in turn divided into intersecting frames and containing frames; the specific flow is Algorithm 3. After the screened rectangular frames are obtained, Algorithm 3 judges for every pair of frames whether they overlap, and adds mutually non-overlapping frames to the independent set. For overlapping frames, the mutual overlap ratio is calculated: if the ratio is smaller than 1, the two frames are judged to intersect; if it is greater than or equal to 1, the frames are added to the containing set, the redundant outer frame is removed, and the contained frame is kept in the independent set. The independent set is then used as prior information: rectangular frames containing QR codes are screened from the intersecting set by calculating the gray-scale distribution of each frame, and are retained in the independent set:
Further, in the step S4, the convolutional neural network comprises 3 convolutional layers, 3 pooling layers, two fully-connected layers and one softmax layer. The first, second and third convolutional layers map the image feature information into high-dimensional spaces of 16, 32 and 64 dimensions respectively; the first, second and third pooling layers down-sample the feature information using max pooling. The output dimension of the first fully-connected layer is 1×64 and that of the second fully-connected layer is 1×2, representing the probabilities of background and QR code foreground; the softmax layer then yields a score of dimension 1×1, representing the probability that the input 32×32×3 region contains a QR code object.
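Assuming 3×3 "same" convolutions and 2×2 max pooling (the patent does not state kernel or pooling sizes, so these are illustrative assumptions), the dimensions described above can be checked with a small sketch:

```python
def network_shapes(h=32, w=32):
    """Feature-map sizes for the 3-conv/3-pool classifier described above,
    assuming 'same' convolutions and 2x2 max pooling (assumed, not stated)."""
    shapes = []
    for channels in (16, 32, 64):        # conv layers map to 16/32/64 dims
        shapes.append((h, w, channels))  # padded conv keeps h x w
        h, w = h // 2, w // 2            # 2x2 max pooling halves each side
        shapes.append((h, w, channels))
    return shapes

shapes = network_shapes()
# The final pooled map, 4x4x64, flattens to 1024 features, which feed the
# fully-connected layers with output sizes 1x64 and 1x2.
flat = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]
```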
Further, in the step S4, before the convolutional neural network classifies the rectangular-frame regions of the image, the trained network weights and bias values need to be structured and fixed-point quantized. The fixed-point quantization is shown in formulas (10)-(12), where r' denotes a floating-point network weight, q denotes the quantized fixed-point number (a signed 8-bit integer), r'_max and r'_min are the maximum and minimum of r', q_max and q_min are the maximum and minimum of q, S' is the scaling factor, Z is the fixed-point integer that the floating-point value 0 maps to (the zero point), and round(·) denotes rounding:
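Formulas (10)-(12) describe standard asymmetric affine quantization to signed 8-bit integers. A hedged sketch (the patent's exact rounding and clamping conventions are not shown, so these are common choices):

```python
def make_quantizer(r_min, r_max, q_min=-128, q_max=127):
    """Affine int8 quantization: scale plays the role of S', zero that of Z
    (the integer that real 0.0 maps to)."""
    scale = (r_max - r_min) / (q_max - q_min)   # S' as in formula (10)
    zero = round(q_min - r_min / scale)         # Z as in formula (11)
    def quantize(r):
        q = round(r / scale + zero)             # formula (12)
        return max(q_min, min(q_max, q))        # clamp to the int8 range
    def dequantize(q):
        return (q - zero) * scale
    return quantize, dequantize

quantize, dequantize = make_quantizer(-1.0, 1.0)
```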
a batch QR code real-time extraction system comprises a camera module, an image acquisition module, a data cache module, a storage module, a bus configuration module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module;
The bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the device comprises a data caching module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module which are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives the image signals transmitted by the camera module and then transmits the image signals to the data buffer module;
The data buffer module transmits the image data to the filtering and edge detection module, and gray conversion, deblurring, edge detection, binarization and morphological processing are carried out in the filtering and edge detection module;
Rectangular frames of the QR codes are extracted by the contour extraction module and input into the multi-stage step-by-step parallel discrimination module, which rapidly calculates the IOU matrix among the rectangular frames in a pipelined manner, removes background regions, and sends the rectangular frames that may contain QR codes to the network model classification module;
The network model classification module loads rectangular frame data, network bias value data and network weight data from the data caching module and the storage module simultaneously, performs forward calculation, and transmits the result to the image display module so that the display screen module displays the effect image.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
According to the invention, the edge information of the QR code patterns is used to detect them: single-frame images containing multiple QR code patterns are processed through gray-level conversion, deblurring, edge detection, binarization, morphological processing, contour extraction, hierarchical multi-step discrimination and network-model discrimination, and the problem of pattern blurring is addressed by deblurring the images based on statistics of the gradient distribution of edge gray values. In addition, the method combines traditional image processing with a neural network model, ensuring its identification accuracy while improving its operating efficiency. For offline real-time detection and extraction of images containing a large number of QR codes, the invention can be rapidly deployed and applied in practical embedded real-time scenarios, thereby meeting the real-time requirements of detecting and extracting multiple QR codes in scenes such as batch medical test-tube registration and batch warehouse goods inspection and recording.
Drawings
FIG. 1 is a block diagram of a system of the present invention;
FIG. 2 is a schematic diagram of a Sobel operator template;
FIG. 3 is an intermediate output image and a resulting output image of the filtering and edge detection module;
FIG. 4 is a flow chart of a multi-stage step-by-step discrimination algorithm;
FIG. 5 is a diagram of a rectangular box class decision network model;
fig. 6 is a convolution operation flow chart of the convolution neural network.
FIG. 7 is a schematic diagram of an IP core structure of a rectangular box class decision network model;
FIG. 8 is a diagram of the processing effect of a dataset picture;
fig. 9 is a processing effect diagram of capturing a picture by the camera.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
For the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
It will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, the batch QR code real-time extraction system comprises a camera module, an image acquisition module, a data buffer module, a storage module, a bus configuration module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module;
The bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the device comprises a data caching module, a filtering and edge detection module, a contour extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module which are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives the image signals transmitted by the camera module and then transmits the image signals to the data buffer module;
The data buffer module transmits the image data to the filtering and edge detection module, and gray conversion, deblurring, edge detection, binarization and morphological processing are carried out in the filtering and edge detection module;
Rectangular frames of the QR codes are extracted by the contour extraction module and input into the multi-stage step-by-step parallel discrimination module, which rapidly calculates the IOU matrix among the rectangular frames in a pipelined manner, removes background regions, and sends the rectangular frames that may contain QR codes to the network model classification module;
The network model classification module loads rectangular frame data, network bias value data and network weight data from the data caching module and the storage module simultaneously, performs forward calculation, and transmits the result to the image display module so that the display screen module displays the effect image.
Example 2
A real-time extraction method for a batch of QR codes comprises the following steps:
S1: collecting and caching QR code image data;
S2: carrying out filtering and edge detection processing on the cached QR code image data, completing the gray conversion, deblurring, edge detection, binarization and morphological processing of the QR code image;
S3: performing contour extraction on the QR code image processed in step S2 to obtain the rectangular frames of the QR codes, and rapidly calculating the IOU matrix among the rectangular frames in a pipelined manner, thereby removing background rectangular frames;
S4: classifying the rectangular frames of the QR codes with a convolutional neural network, and displaying the calculation result.
In step S2, the gray conversion process is:
Converting the data from the RGB color space to the gray color space, the conversion formula being as in formula (1):
I = 0.30×I_r + 0.59×I_g + 0.11×I_b (1)
Wherein I represents the converted gray-scale image, and I_r, I_g and I_b represent the r-, g- and b-channel data of the input image;
Formula (1) is accelerated with fixed-point arithmetic as in formula (2):
I = (300×I_r + 590×I_g + 110×I_b + 500) >> 10 (2)
The gray value range of the image data pixel obtained at this time is 0 to 255.
In step S2, the process of deblurring is:
1) Dividing the image into b×b blocks, then calculating the vertical-direction DCT coefficients C_v(k) and horizontal-direction DCT coefficients C_h(k) of each block, as shown in formulas (3) and (4), where f(m, n) is the gray value of pixel (m, n):
2) Judging the edge response of each image block from its DCT coefficients, and calculating the edge response of the whole image through formula (5), where M is the number of blocks whose edge response is greater than 0 and s_i is the edge response of the i-th image block:
3) Estimating the PSF coefficients using S(n), and deblurring the gray image with a Wiener filter.
As shown in fig. 2, in step S2, the process of performing edge detection is:
performing convolution operation on the image data through the Sobel operator template, so as to perform edge detection on the image:
defining the Sobel operator template as constant data:
parameter h1=8'hff,h2=8'h00,h3=8'h01,h4=8'hfe,h5=8'h00,h6=8'h02,h7=8'hff,h8=8'h00,h9=8'h01;
parameter v1=8'h01,v2=8'h02,v3=8'h01,v4=8'h00,v5=8'h00,v6=8'h00,v7=8'hff,v8=8'hfe,v9=8'hff;
The variables in the Sobel template are 8 bits wide: decimal −1 is represented by the two's complement 8'hff and decimal −2 by 8'hfe, the most significant bit being the sign bit. The Sobel templates yield the gradient values of the image in the horizontal and vertical directions, and the sum of their absolute values gives the gray gradient of the whole image.
In step S2, the process of binarization in the filtering and edge detection module is:
The binarization operation mainly converts the gradient image into a black-and-white image, and the conversion formula is as shown in formula (6):
In step S2, the morphological processing eliminates noise, segments out individual image elements, and finds the maximum-area or minimum-area regions in the image; it consists of erosion and dilation operations, as shown in formula (7), where I(x, y) represents the pixel value at position (x, y) in the image data:
Fig. 3 (a) is an original image, fig. 3 (b) is an intermediate output image of the filtering and edge detection module, and fig. 3 (c) is a morphological processed output image.
In step S3, contour extraction is performed on the morphologically processed image data, mainly by calling the findContours() function of an OpenCV library ported to the ZYNQ platform. For the porting, a ZYNQ hardware system is built, a design constraint file is added, and the hardware configuration file is exported. The FSBL file and the BOOT.BIN boot file are then generated on a PC; the Linux kernel, device tree and file system are copied to produce an Ubuntu boot image, and the Ubuntu environment is transferred via SD card to the ZYNQ platform with the configured hardware environment. The OpenCV static library is compiled on the PC with an ARM cross-compilation toolchain and copied into the ZYNQ's Ubuntu system via the SD card, completing the port of the OpenCV library to the ZYNQ platform. Since the image data produced by the final morphological processing in the filtering and edge detection module is stored in the data buffer unit, the image data is first read out of the buffer; findContours() is then called to extract the contour of each QR code foreground. The i-th contour is represented by [x_i, y_i, w_i, h_i], where n is the number of rectangular-frame contours obtained from the current picture, x and y are the abscissa and ordinate of the rectangle's top-left vertex, and w and h are its width and height. The squareness of each rectangular frame is then calculated with formula (8), the rectangular frames are pre-screened by Algorithm 1, and those that may contain a QR code are added to the set Λ_out:
After pre-screening, the multi-stage step-by-step parallel discrimination module further screens the rectangular frames containing QR codes; the obtained QR code rectangular frames are used as prior information to screen the set of rectangular frames, output by Algorithm 1, that may contain QR codes. The specific process is given in Algorithm 2:
The rectangular frames screened by Algorithm 2 are collected in a set, and the IOU value of each pair of rectangular frames in the set is calculated using formula (9), where IOU(rect_i, rect_j) denotes the IOU value of rectangular frames rect_i and rect_j, inter(rect_i, rect_j) is the area of the region where the two rectangular frames overlap, and union(rect_i, rect_j) is the area of the region covered by the two rectangular frames together:
The main Verilog code for calculating the rectangular-frame IOU value is as follows:

real left_col_max, right_col_min, up_row_max, down_row_min;
real s1, s2, cross, result;

initial begin
    left_col_max <= r1_x1 > r2_x1 ? r1_x1 : r2_x1;
    right_col_min <= r1_x2 < r2_x2 ? r1_x2 : r2_x2;
    up_row_max <= r1_y1 > r2_y1 ? r1_y1 : r2_y1;
    down_row_min <= r1_y2 < r2_y2 ? r1_y2 : r2_y2;
    s1 <= 0.0; s2 <= 0.0; cross <= 0.0;
    result <= 0.0;
end

always @(*) begin
    if (left_col_max >= right_col_min || down_row_min <= up_row_max) begin
        result = 0.0;  // the two rectangles do not overlap
    end
    else begin
        s1 = r1_w * r1_h;
        s2 = r2_w * r2_h;
        cross = (down_row_min - up_row_max) * (right_col_min - left_col_max);
        result = cross / (s1 + s2 - cross);
    end
end
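For reference, the same IOU computation of formula (9) can be expressed in Python; this is a behavioural sketch equivalent to the Verilog datapath, using the same corner convention (x1, y1 top-left, x2, y2 bottom-right):

```python
def iou(r1, r2):
    """IOU of two boxes given as (x1, y1, x2, y2); mirrors the Verilog datapath."""
    left_col_max = max(r1[0], r2[0])    # left edge of the intersection
    right_col_min = min(r1[2], r2[2])   # right edge of the intersection
    up_row_max = max(r1[1], r2[1])      # top edge of the intersection
    down_row_min = min(r1[3], r2[3])    # bottom edge of the intersection
    if left_col_max >= right_col_min or down_row_min <= up_row_max:
        return 0.0  # the two rectangles do not overlap
    cross = (down_row_min - up_row_max) * (right_col_min - left_col_max)
    s1 = (r1[2] - r1[0]) * (r1[3] - r1[1])
    s2 = (r2[2] - r2[0]) * (r2[3] - r2[1])
    return cross / (s1 + s2 - cross)  # intersection over union
```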
The further-screened rectangular frames are divided into independent rectangular frames and overlapping rectangular frames, and the overlapping rectangular frames are in turn divided into intersecting rectangular frames and containing rectangular frames; the specific flow is shown as Algorithm 3 in fig. 4. After the screened rectangular frames are obtained, Algorithm 3 judges whether each pair of rectangular frames overlaps; rectangular frames that do not overlap one another are added to the independent rectangular frame set, and the mutual overlap ratio of the overlapping rectangular frames is calculated. If the overlap ratio is smaller than 1, the two rectangular frames are judged to intersect; if the overlap ratio is larger than or equal to 1, the rectangular frames are added to the containing rectangular frame set, the redundant outer frame is removed, and the contained rectangular frame is kept in the independent rectangular frame set. With the independent rectangular frame set as prior information, the rectangular frames containing QR codes are screened from the intersecting rectangular frame set by calculating the gray-scale distribution of each rectangular frame in that set, and these are also kept in the independent rectangular frame set:
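The partition performed by Algorithm 3 can be sketched as follows; the overlap ratio here is assumed to be the intersection area divided by the area of the smaller rectangle, so that containment yields a ratio of exactly 1 and partial intersection a ratio below 1 (the patent's precise measure is defined in fig. 4):

```python
def classify_pair(a, b):
    """Classify two boxes (x1, y1, x2, y2) as 'independent', 'intersecting' or 'containing'.

    Assumed overlap ratio: intersection area / min(area_a, area_b).
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    if ix1 >= ix2 or iy1 >= iy2:
        return "independent"  # no overlap at all
    inter = (ix2 - ix1) * (iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    ratio = inter / min(area_a, area_b)
    # ratio == 1 means the smaller box lies entirely inside the larger one
    return "containing" if ratio >= 1 else "intersecting"
```

In the containing case the redundant outer frame would be dropped and the inner frame kept, matching the flow described above.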
As shown in fig. 5, in step S4, the network model of the convolutional neural network includes 3 convolutional layers, 3 pooling layers, two fully-connected layers and one softmax layer; the first, second and third convolutional layers map the image feature information into 16-dimensional, 32-dimensional and 64-dimensional high-dimensional spaces respectively, and the first, second and third pooling layers down-sample the feature information using max pooling. The output data dimension of the first fully-connected layer is 1×64 and the output data dimension of the second fully-connected layer is 1×2, representing the probability values of the background and the QR code foreground; a score value with data dimension 1×1 is obtained through the softmax layer, and this score value represents the probability that the input 32×32×3 QR code region is judged to contain a QR code object.
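The layer dimensions above can be checked with a short shape calculation. This sketch assumes size-preserving 3×3 convolutions and 2×2 max pooling with stride 2 — assumptions, since the patent does not state kernel sizes — which is consistent with a 32×32×3 input reducing to a 4×4×64 feature map (1024 values) ahead of the 1×64 fully-connected layer:

```python
def feature_shapes(h=32, w=32, channels=(16, 32, 64)):
    """Trace (height, width, depth) through conv (size-preserving) + 2x2 pool stages."""
    shapes = []
    for depth in channels:
        # assumed 3x3 convolution with padding 1 keeps h and w; depth = kernel count
        # 2x2 max pooling with stride 2 halves h and w
        h, w = h // 2, w // 2
        shapes.append((h, w, depth))
    return shapes
```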
As shown in fig. 6, for the convolution operation, taking a single convolution kernel as an example, the present invention uses several FIFO buffer units and registers to implement pipelined parallel acceleration of the convolution operation. Because the size of a single frame image is 32×32, FIFO buffer unit 1, FIFO buffer unit 2 and FIFO buffer unit 3, each delayed by 32 time units relative to the next, can feed in three rows of the image; FIFO buffer unit 4, FIFO buffer unit 5 and 3 register groups are then used so that only three rows by three columns of image data are read at a time, i.e. register group 1 outputs the image data v0–v8 read at the current moment, corresponding to one convolution window. This data is then multiply-accumulated with a 3×3 convolution kernel whose weights are denoted C0–C8; the intermediate results are stored in registers reg9–reg24, and register reg25 stores the final convolution output. With this architecture, while the convolution operation proceeds on the right side, the left side continuously reads out image data every clock cycle and passes it through the FIFO buffer units and registers to wait for the next operation, thereby realizing pipelined parallel processing and improving the operation efficiency of the network model.
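The FIFO line-buffer scheme can be modelled in software: two row buffers delay the incoming pixel stream so that, once filled, each new pixel completes a full 3×3 window v0–v8 for the multiply-accumulate with weights C0–C8. The sketch below is a behavioural model of that dataflow, not the Verilog itself:

```python
from collections import deque

def stream_conv3x3(pixels, width, c):
    """Behavioural model of the line-buffered 3x3 convolution.

    pixels: flat row-major pixel stream; width: image width; c: the 9 weights C0..C8.
    Emits one multiply-accumulate result per complete window, in raster order.
    """
    row_prev2 = deque()  # models the FIFO holding row y-2
    row_prev1 = deque()  # models the FIFO holding row y-1
    cur = []             # pixels of the current row received so far
    out = []
    for i, p in enumerate(pixels):
        y, x = divmod(i, width)
        if x == 0 and y > 0:
            # a new row starts: shift the line buffers down by one row
            row_prev2, row_prev1, cur = row_prev1, deque(cur), []
        cur.append(p)
        if y >= 2 and x >= 2:
            # the window v0..v8 covers rows y-2..y, columns x-2..x
            v = [row_prev2[x - 2], row_prev2[x - 1], row_prev2[x],
                 row_prev1[x - 2], row_prev1[x - 1], row_prev1[x],
                 cur[x - 2], cur[x - 1], cur[x]]
            out.append(sum(vk * ck for vk, ck in zip(v, c)))
    return out
```

As in the hardware, each incoming pixel is consumed exactly once and one output can be produced per cycle once the buffers are primed; no window is ever re-read from memory.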
In step S4, before the network model of the convolutional neural network is called to classify the rectangular frame regions on the image, the trained network parameter weights and bias values are structured, fixed-point quantized, and then input into the storage module. The fixed-point quantization is shown in formulas (10)–(12), where r' represents a network weight as a floating-point real number, q represents the quantized fixed-point number whose data type is a signed 8-bit integer, r'_max and r'_min are respectively the maximum and minimum values of r', q_max and q_min are respectively the maximum and minimum values of q, S' is a scaling factor, Z represents the integer to which 0 among the floating-point real numbers is mapped, and round(·) represents rounding:
Example 3
The network model is rapidly deployed on the FPGA platform using high-level synthesis (HLS) technology; fig. 7 is a schematic diagram of the IP core structure of the rectangular-frame type judgment network model. Before the network model module is called to classify the rectangular frame regions on the image, the trained network parameter weights and bias values are structured, fixed-point quantized, and then input into the storage module.
The fixed-point quantization method used is shown in formulas (10)–(12), where r' represents a network weight as a floating-point real number, q represents the quantized fixed-point number whose data type is a signed 8-bit integer, r'_max and r'_min are respectively the maximum and minimum values of r', q_max and q_min are respectively the maximum and minimum values of q, S' is a scaling factor, Z represents the integer to which 0 among the floating-point real numbers is mapped, and round(·) represents rounding:
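Formulas (10)–(12) are given as images in the original. The description matches standard asymmetric uniform quantization to signed 8-bit integers, which can be sketched as follows (the patent's exact formulas may differ in detail, e.g. in how the zero point is rounded or clamped):

```python
def quantize_params(r_min, r_max, q_min=-128, q_max=127):
    """Compute scale S and zero point Z mapping floats [r_min, r_max] to int8 [q_min, q_max]."""
    s = (r_max - r_min) / (q_max - q_min)  # real-valued step per integer level
    z = round(q_max - r_max / s)           # integer that the real value 0.0 maps to
    return s, z

def quantize(r, s, z, q_min=-128, q_max=127):
    q = round(r / s + z)
    return max(q_min, min(q_max, q))  # clamp into the signed 8-bit range

def dequantize(q, s, z):
    return s * (q - z)  # recover an approximation of the original real value
```

The quantized weights and biases, together with S and Z, are what would be written into the storage module; the round-trip error of any in-range weight is at most one quantization step.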
The global control module mainly provides the control signals for the various operations during network model operation, so that the network weights and image data can be loaded into the network operation unit. Specifically, after the network model classification module receives the operation-start enable signal from the global control module, it transfers the image data read from the FIFO buffer unit and the weight data of the storage module to on-chip memory for block storage; the network operation unit then fetches the data, performs the forward computation, and finally outputs and stores the calculated score value.
A ZYNQ chip of the XC7Z020 series is adopted. The chip comprises a PS (Processing System) part and a PL (Programmable Logic) part; the PS side carries a dual-core Cortex-A9 processor with a maximum frequency of 766 MHz, and the PL side provides 85K logic cells, 4.9 Mbit of block RAM, and the like. The data storage on the hardware platform comprises 1 GB of DDR3 and 8 GB of EMMC; in addition, the platform provides peripheral interfaces such as a VGA interface and an SD card interface. After the algorithm is programmed in the Verilog language, it is compiled and synthesized with Xilinx Vivado 2017.4 software and then downloaded into the chip of the hardware platform to form the corresponding hardware circuit, and the Ubuntu system is ported onto the ZYNQ platform. After the system starts, the bus configuration module configures the OV5640 camera module through the I2C bus protocol so that it outputs 24-bit RGB images; the images are buffered in the data buffer unit through the image acquisition module, the QR code extraction IP core then loads the preset parameters, weight data and the like from the storage module and reads and processes the image data in the buffer unit, and finally the result images are displayed on the display screen through the image display module. The image display module adopts the VGA protocol; its interface includes a field synchronization signal line, a line synchronization signal line, R, G and B signal lines, and two I2C communication lines, and the module also includes an independent block RAM unit serving as display memory for image display buffering. Figs. 8 and 9 are effect diagrams of QR code extraction in actual application of the hardware system of the present invention. Fig. 8 (a) is a picture from a QR code data set, stored in the EMMC storage module of the hardware platform and processed by directly calling the program, and fig. 8 (b) is the resulting effect image. Fig. 9 (a) is a QR code image captured by the camera module on the hardware platform, and fig. 9 (b) is the processing effect image of the system.
The same or similar reference numerals correspond to the same or similar components;
The positional relationship depicted in the drawings is for illustrative purposes only and is not to be construed as limiting the present patent;
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (8)

1. The application method of the batch QR code real-time extraction system is characterized in that the batch QR code real-time extraction system comprises a camera module, an image acquisition module, a data buffer module, a storage module, a bus configuration module, a filtering and edge detection module, a profile extraction module, a multi-stage step-by-step parallel discrimination module, a network model classification module, an image display module and a display screen module;
The bus configuration module, the camera module, the image acquisition module and the data cache module are sequentially connected, and the storage module is connected to the data cache module; the data cache module, the filtering and edge detection module, the contour extraction module, the multi-stage step-by-step parallel discrimination module, the network model classification module, the image display module and the display screen module are sequentially connected;
the system configures the camera module through the bus configuration module, and the image acquisition module receives the image signals transmitted by the camera module and then transmits the image signals to the data buffer module;
The data buffer module transmits the image data to the filtering and edge detection module, and gray conversion, deblurring, edge detection, binarization and morphological processing are carried out in the filtering and edge detection module;
extracting rectangular frames of the QR codes through the contour extraction module, inputting the rectangular frames into the multi-stage step-by-step parallel discrimination module, rapidly calculating the IOU matrix between the rectangular frames in a pipelined acceleration mode, removing background areas, and sending the rectangular frames that may contain QR codes into the network model classification module;
The network model classification module loads rectangular frame data, network bias value data and network weight data from the data caching module and the storage module simultaneously, performs forward calculation, and transmits the result to the image display module so that the display screen module displays an effect image; wherein the method comprises the steps of:
s1: collecting and caching QR code image data;
S2: filtering and edge detection processing are carried out on the cached QR code image data, and gray conversion, deblurring, edge detection, binarization and morphological processing of the QR code image are completed;
S3: performing contour extraction on the QR code image processed in the step S2 to obtain rectangular frames of the QR code, and rapidly calculating IOU matrixes among the rectangular frames in a pipeline acceleration mode, so that a background area of the rectangular frames of the QR code is removed;
s4: and (3) classifying and calculating the rectangular frame of the QR code by using a convolutional neural network, and displaying the calculation result.
2. The application method of the batch QR code real-time extraction system according to claim 1, wherein in the step S2, the gray level conversion process is:
Converting QR code image data from an RGB color space to a gray color space, the conversion formula being as in formula (1):
I = 0.30 × I_r + 0.59 × I_g + 0.11 × I_b (1)
wherein I represents the converted gray-scale image, I_r represents the r-channel data information of the input image, I_g represents the g-channel data information of the input image, and I_b represents the b-channel data information of the input image;
Accelerating the formula (1) as in formula (2):
I = (300 × I_r + 590 × I_g + 110 × I_b + 500) >> 10 (2)
The gray value range of the image data pixel obtained at this time is 0 to 255.
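Formula (2) replaces the floating-point weights of formula (1) with integer multiplies and a 10-bit right shift, which is cheap in hardware; note the integer weights sum to 1000 rather than 1024, so the shift introduces a small systematic scale error of about 2.3%. A sketch comparing the two:

```python
def gray_float(r, g, b):
    # Formula (1): floating-point luminance weights
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_shift(r, g, b):
    # Formula (2): integer multiply-accumulate, +500 for rounding, then a 10-bit shift
    return (300 * r + 590 * g + 110 * b + 500) >> 10
```

The shift version needs no divider or floating-point unit, at the cost of a slight darkening (1000/1024 ≈ 0.977 of the float result).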
3. The application method of the batch QR code real-time extraction system according to claim 2, wherein in the step S2, the process of performing deblurring is:
1) The QR code image is divided into b×b image blocks, and then the vertical-direction DCT coefficients C_v(k) and horizontal-direction DCT coefficients C_h(k) of each block are calculated, as shown in formulas (3) and (4), where f(m, n) is the gray value of pixel (m, n):
2) The edge response of each image block is judged from its DCT coefficients, and the edge response of the whole image is calculated by formula (5), where M is the number of image blocks whose edge response is larger than 0 and s_i is the edge response of the i-th image block:
3) The PSF coefficient is estimated using S(n'), and the gray-scale image is deblurred with a Wiener filter.
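Formulas (3)–(5) are not reproduced above. As an illustration of how a block's directional frequency content can be measured, the sketch below uses the standard (unnormalized) DCT-II along each axis; the patent's exact coefficient definitions and edge-response rule may differ:

```python
import math

def dct2_1d(f):
    """Unnormalized 1-D DCT-II: C(k) = sum_n f(n) * cos(pi * (2n + 1) * k / (2N))."""
    n_len = len(f)
    return [sum(f[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * n_len))
                for n in range(n_len)) for k in range(n_len)]

def block_dct_energy(block):
    """AC energy of the row-wise (horizontal) and column-wise (vertical) DCT of a block.

    A sharp edge raises the AC energy in its direction; a defocused block
    concentrates its energy in the k = 0 (DC) coefficient.
    """
    h_energy = sum(c * c for row in block for c in dct2_1d(row)[1:])
    cols = [list(col) for col in zip(*block)]
    v_energy = sum(c * c for col in cols for c in dct2_1d(col)[1:])
    return h_energy, v_energy
```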
4. The application method of the batch QR code real-time extraction system according to claim 3, wherein in the step S2, the edge detection process is:
performing convolution operation on the image data through the Sobel operator template, so as to perform edge detection on the image:
defining the Sobel operator template as constant data:
parameter h1 = 8'hff, h2 = 8'h00, h3 = 8'h01, h4 = 8'hfe, h5 = 8'h00, h6 = 8'h02, h7 = 8'hff, h8 = 8'h00, h9 = 8'h01;
parameter v1 = 8'h01, v2 = 8'h02, v3 = 8'h01, v4 = 8'h00, v5 = 8'h00, v6 = 8'h00, v7 = 8'hff, v8 = 8'hfe, v9 = 8'hff;
The bit width of the variables in the Sobel templates is 8; decimal −1 is represented by the two's complement 8'hff and decimal −2 by 8'hfe, where the highest bit is the sign bit. The Sobel templates are used to obtain the gradient values of the image in the horizontal and vertical directions, and the absolute values are then added to obtain the gray-scale gradient of the whole image.
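The claimed gradient computation (horizontal and vertical Sobel convolutions, then sum of absolute values) can be sketched in software; the kernels below mirror the constant tables h1–h9 and v1–v9 above:

```python
# Sobel templates matching the constants h1..h9 and v1..v9 above
H = [-1, 0, 1, -2, 0, 2, -1, 0, 1]   # horizontal gradient kernel
V = [1, 2, 1, 0, 0, 0, -1, -2, -1]   # vertical gradient kernel

def sobel_gradient(img, y, x):
    """Gray gradient |Gx| + |Gy| at interior pixel (y, x) of a 2-D list image."""
    window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    gx = sum(w * h for w, h in zip(window, H))
    gy = sum(w * v for w, v in zip(window, V))
    return abs(gx) + abs(gy)
```

Using |Gx| + |Gy| instead of sqrt(Gx² + Gy²) avoids a square root in hardware, matching the description above.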
5. The application method of the batch QR code real-time extraction system according to claim 4, wherein in the step S2, the binarization process is:
The binarization operation is to convert the gradient image into a black-and-white image, and the conversion formula is as shown in formula (6):
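Formula (6) is not reproduced above; binarizing a gradient image is typically a simple threshold, sketched here with an assumed fixed threshold T (the patent's formula may use a different or adaptive threshold):

```python
def binarize(gradient_img, t=128):
    """Map a gray gradient image to black (0) / white (255) by thresholding at t."""
    return [[255 if p >= t else 0 for p in row] for row in gradient_img]
```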
6. The application method of the batch QR code real-time extraction system according to claim 5, wherein in the step S2, the morphological processing is performed as follows: the morphological processing eliminates noise, segments out independent QR code image elements and finds the maximum or minimum regions in the image, and includes erosion and dilation operations, as shown in formula (7), where I(x, y) represents the pixel value at position (x, y) in the QR code image data:
7. The application method of the batch QR code real-time extraction system according to claim 6, wherein in the step S4, the convolutional neural network includes 3 convolutional layers, 3 pooling layers, two fully-connected layers and one softmax layer; the first, second and third convolutional layers map the image feature information into 16-dimensional, 32-dimensional and 64-dimensional high-dimensional spaces respectively, the first, second and third pooling layers down-sample the feature information using max pooling, the output data dimension of the first fully-connected layer is 1×64, the output data dimension of the second fully-connected layer is 1×2, representing the probability values of the background and the QR code foreground, a score value with data dimension 1×1 is obtained through the softmax layer, and the score value represents the probability that the QR code region with input data dimension 32×32×3 is judged to contain a QR code object.
8. The application method of the batch QR code real-time extraction system according to claim 7, wherein in the step S4, before the convolutional neural network classifies the rectangular frame regions on the image, the trained network parameter weights and bias values need to be structured and fixed-point quantized; the fixed-point quantization is shown in formulas (10)–(12), where r' represents a network weight as a floating-point real number, q represents the quantized fixed-point number whose data type is a signed 8-bit integer, r'_max and r'_min are respectively the maximum and minimum values of r', q_max and q_min are respectively the maximum and minimum values of q, S' is a scaling factor, Z represents the integer to which 0 among the floating-point real numbers is mapped, and round(·) represents rounding:
CN202210669471.6A 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system Active CN115017931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669471.6A CN115017931B (en) 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system


Publications (2)

Publication Number Publication Date
CN115017931A CN115017931A (en) 2022-09-06
CN115017931B true CN115017931B (en) 2024-06-14

Family

ID=83075806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669471.6A Active CN115017931B (en) 2022-06-14 2022-06-14 Batch QR code real-time extraction method and system

Country Status (1)

Country Link
CN (1) CN115017931B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117573709B (en) * 2023-10-23 2024-09-20 昆易电子科技(上海)有限公司 Data processing system, electronic device, and medium
CN117314951B (en) * 2023-11-20 2024-01-26 四川数盾科技有限公司 Two-dimensional code recognition preprocessing method and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109902806A (en) * 2019-02-26 2019-06-18 清华大学 Method is determined based on the noise image object boundary frame of convolutional neural networks
CN111597848A (en) * 2020-04-21 2020-08-28 中山大学 Batch QR code image extraction method and system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN109961049B (en) * 2019-03-27 2022-04-26 东南大学 Cigarette brand identification method under complex scene
CN113450376A (en) * 2021-06-14 2021-09-28 石河子大学 Cotton plant edge detection method based on FPGA

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109902806A (en) * 2019-02-26 2019-06-18 清华大学 Method is determined based on the noise image object boundary frame of convolutional neural networks
CN111597848A (en) * 2020-04-21 2020-08-28 中山大学 Batch QR code image extraction method and system

Non-Patent Citations (2)

Title
余顺园. Research on Restoration Theory and Methods for Weak Image Signals (《弱图像信号的复原理论与方法研究》). China Textile Press, 2021, pp. 6–12. *
崔吉, 崔建国. A Practical Course in Industrial Vision (《工业视觉实用教程》). Shanghai Jiao Tong University Press, 2018, pp. 58–64. *


Similar Documents

Publication Publication Date Title
CN109190752B (en) Image semantic segmentation method based on global features and local features of deep learning
Liu et al. Understanding the effective receptive field in semantic image segmentation
Alidoost et al. A CNN-based approach for automatic building detection and recognition of roof types using a single aerial image
CN115017931B (en) Batch QR code real-time extraction method and system
CN108898065B (en) Deep network ship target detection method with candidate area rapid screening and scale self-adaption
WO2023193401A1 (en) Point cloud detection model training method and apparatus, electronic device, and storage medium
CN110363211B (en) Detection network model and target detection method
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN113255555A (en) Method, system, processing equipment and storage medium for identifying Chinese traffic sign board
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN113159064A (en) Method and device for detecting electronic element target based on simplified YOLOv3 circuit board
CN113516053A (en) Ship target refined detection method with rotation invariance
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN111145196A (en) Image segmentation method and device and server
CN114821554A (en) Image recognition method, electronic device, and storage medium
CN111161348A (en) Monocular camera-based object pose estimation method, device and equipment
CN111291767A (en) Fine granularity identification method, terminal equipment and computer readable storage medium
CN116309612B (en) Semiconductor silicon wafer detection method, device and medium based on frequency decoupling supervision
CN112396564A (en) Product packaging quality detection method and system based on deep learning
CN114913345B (en) Simplified image feature extraction method of SIFT algorithm based on FPGA
CN115619618A (en) Image processing method and system based on high-level comprehensive tool
CN112348823A (en) Object-oriented high-resolution remote sensing image segmentation algorithm
Scott Applied machine vision
CN113989938B (en) Behavior recognition method and device and electronic equipment
CN115049581B (en) Notebook screen defect detection method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant