CN106326802B - Two-dimensional code correction method, device and terminal device - Google Patents
Two-dimensional code correction method, device and terminal device
- Publication number
- CN106326802B CN106326802B CN201610692045.9A CN201610692045A CN106326802B CN 106326802 B CN106326802 B CN 106326802B CN 201610692045 A CN201610692045 A CN 201610692045A CN 106326802 B CN106326802 B CN 106326802B
- Authority
- CN
- China
- Prior art keywords
- dimensional code
- correction
- determining
- graph
- center point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
Abstract
This application provides a two-dimensional code correction method, including: extracting a two-dimensional code region; determining three detection patterns at three vertices of the two-dimensional code region; determining an intersection point of extension lines of detection-pattern boundaries that is used for determining the correction pattern; searching for the correction pattern starting from the intersection point; and using the at least three detection patterns and the correction pattern to determine, from the two-dimensional code region, a two-dimensional code graphic for decoding. The application also provides a two-dimensional code correction device and a terminal device. With the schemes provided by this application, the accuracy of two-dimensional code recognition can be improved.
Description
Technical Field
The present application relates to the field of information technologies, and in particular, to a two-dimensional code correction method, an apparatus, and a terminal device.
Background
Nowadays, with the development of the internet, and especially the mobile internet, two-dimensional codes have gradually entered people's field of vision. Owing to characteristics such as large information capacity, easy production, low cost, and durability, two-dimensional codes are now applied very widely, including information acquisition (business cards, maps, WiFi passwords), mobile e-commerce (a user scans a code and directly browses advertisements pushed by merchants), anti-counterfeiting and tracing (a user scans a code to check the place of production, while the final place of consumption can be seen in the background), member management (acquiring electronic membership information and VIP services on a user's mobile phone), mobile payment (scanning a commodity two-dimensional code and completing payment through a mobile-phone channel provided by a bank or a third-party payment service), and hospital services (reservation, registration, and diagnosis). Therefore, recognition of the two-dimensional code graphic, and in particular recognition of the positions of key points in the two-dimensional code, has become the key to decoding the two-dimensional code.
A two-dimensional bar code / two-dimensional code (2-dimensional bar code) records data symbol information with black-and-white graphics that are distributed on a plane (in two dimensions) according to a certain rule using specific geometric figures. The encoding cleverly uses the concept of the "0" and "1" bit streams that form the internal logical basis of computers, representing textual and numerical information with several geometric shapes corresponding to binary values, so that the code can be read automatically by an image input device or a photoelectric scanning device and the information can be processed automatically.
A two-dimensional code contains at least three detection patterns (finder patterns), and a correction pattern (alignment pattern) is introduced when the version number is greater than 2. As the version number increases, the two-dimensional code becomes more complex, and with the detection patterns alone it is difficult to recover the information of the two-dimensional code at positions far from the detection patterns; adding a correction pattern allows the two-dimensional code graphic to be positioned more accurately, so that the data area can be decoded to obtain information such as character values. The correction pattern is therefore a key to decoding the two-dimensional code. Fig. 1 shows a rectangular two-dimensional code. As shown in Fig. 1, the two-dimensional code includes three detection patterns and one correction pattern, and further includes a data area. However, the correction pattern is located starting from the detection patterns, and in the prior art the two-dimensional code graphic obtained by scanning often suffers from problems such as perspective deformation, so the correction pattern obtained in this way is not accurate enough.
Disclosure of Invention
The application provides a two-dimensional code correction method, which comprises the following steps: extracting a two-dimensional code region from a two-dimensional code image; determining at least three detection patterns in the two-dimensional code region; determining an intersection point of extension lines of boundaries of two detection patterns used for determining the correction pattern; searching for the correction pattern starting from the intersection point; and correcting the two-dimensional code region by using the at least three detection patterns and the correction pattern to obtain a two-dimensional code graphic for decoding.
The application also provides a two-dimensional code correction device, which includes: a scanning module that extracts a two-dimensional code region from a two-dimensional code image and determines at least three detection patterns in the two-dimensional code region; a positioning module that determines an intersection point of extension lines of boundaries of the two detection patterns used for determining the correction pattern, and searches for the correction pattern starting from the intersection point; and a correction module that corrects the two-dimensional code region by using the at least three detection patterns and the correction pattern to obtain a two-dimensional code graphic for decoding.
The application also provides a terminal device comprising the above device.
With the method, the device, and the terminal device provided by the application, the two-dimensional code recognition function is more adaptable, the correction pattern can be located accurately, and the accuracy of recognizing the two-dimensional code graphic is improved.
Drawings
To illustrate the technical solutions in the present application more clearly, the drawings needed in the description of the examples are briefly introduced below. The drawings in the following description are only examples of the present application, and those skilled in the art can obtain other drawings from them without inventive effort. Wherein,
FIG. 1 is a schematic diagram of an example rectangular two-dimensional code;
FIG. 2 is a schematic flow chart of a two-dimensional code correction method in an example of the present application;
FIG. 3 is a flow chart of a method of searching for a correction pattern in an example of the present application;
FIG. 4 is a schematic diagram illustrating an intersection point determined in a two-dimensional code region according to an example of the present application;
FIG. 5A is a schematic view of a correction pattern in an example of the present application;
FIG. 5B is a diagram illustrating a search for a correction pattern in a two-dimensional code region in an example of the present application;
FIG. 5C is a diagram of a missing correction pattern in a two-dimensional code region in an example of the present application;
FIG. 6 is a schematic structural diagram of a two-dimensional code correction device in an example of the present application; and
FIG. 7 is a schematic structural diagram of a computing device in an example of the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described examples are some, but not all examples of the present application. All other examples, which can be obtained by a person skilled in the art without making any inventive step based on the examples in this application, are within the scope of protection of this application.
Some examples of the present application provide a two-dimensional code correction method, which can be applied to a two-dimensional code scanning module (also referred to as a two-dimensional code scanning engine) on various electronic devices, where the two-dimensional code scanning module can be built into various application (APP) clients (e.g., microblog, WeChat, QQ, blog, BBS, PayPal, WeChat Pay, etc.). Here, the electronic device may be a portable handheld device such as a mobile phone, or a fixed device having a scanning function.
As shown in fig. 2, the method comprises the steps of:
step 201: and extracting a two-dimensional code area from the two-dimensional code image.
Here, the two-dimensional code scanning module may scan through an image acquisition device in the electronic device to obtain a two-dimensional code image, and extract an area with an obvious two-dimensional code characteristic to obtain a two-dimensional code area.
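As a concrete illustration only (the patent does not prescribe how the region is extracted or binarized), the following minimal Python/OpenCV sketch produces the kind of binarized 0/1 image that the later feature-matching steps assume; the function name and the use of Otsu thresholding are assumptions.

```python
import cv2

def binarize(image_bgr):
    """Illustrative preprocessing sketch: grayscale + Otsu threshold.

    Returns a 0/1 image (0 = black, 1 = white), the representation assumed
    by the correction-pattern feature matching described further below.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (bw // 255).astype("uint8")
```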
Step 202: and determining at least three detection graphs in the two-dimensional code area.
The two-dimensional code region is analyzed to determine at least three detection patterns; for example, there may be three detection patterns covering three vertices of the two-dimensional code region, respectively.
Step 203: an intersection of extension lines of the boundaries of the two detection patterns in which the correction pattern is determined.
In some examples, a boundary line that does not cover the boundary of the two-dimensional code may be taken from each of the two detection patterns, and one of the boundary lines is a transverse line and the other boundary line is a longitudinal line, and an intersection point of extension lines of the two taken boundary lines may be used to determine the correction pattern. Here, the two detection patterns for determining the correction pattern may cover any two opposite vertices of the two-dimensional code, respectively, that is, the two opposite vertices should be located at both ends of any diagonal line of the rectangular two-dimensional code.
Fig. 4 shows an example of a rectangular two-dimensional code, in which the two detection patterns 401 and 402 used for determining the correction pattern respectively cover two opposite vertices 41 and 42 of the two-dimensional code (the bottom-left and top-right vertices). An extension line 403 of the top boundary line of the bottom-left detection pattern 401 and an extension line 404 of the left boundary line of the top-right detection pattern 402 are drawn, and the intersection point 405 of the two extension lines 403 and 404 can be used to determine the correction pattern.
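To make this step concrete, here is a minimal sketch (Python with NumPy; not part of the patent) that computes the intersection of the two boundary extension lines, each given by two points on the corresponding detection-pattern boundary. In a standard symbol layout the correction pattern nearest the fourth vertex is centered within about half a module of both of these inner boundary lines, which is why their intersection is a useful starting point; the coordinates in the example call are hypothetical.

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1, p2 with the line through q1, q2.

    Solves the parametric line equations; returns None if the lines are
    (nearly) parallel.
    """
    p1, p2, q1, q2 = (np.asarray(v, dtype=float) for v in (p1, p2, q1, q2))
    d1, d2 = p2 - p1, q2 - q1
    denom = d1[0] * d2[1] - d1[1] * d2[0]          # cross product of directions
    if abs(denom) < 1e-9:
        return None                                # parallel or degenerate
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    return tuple(p1 + t * d1)

# Hypothetical pixel coordinates: two points on the top boundary of the
# bottom-left detection pattern, and two on the left boundary of the
# top-right detection pattern.
print(line_intersection((10, 180), (60, 178), (182, 10), (180, 60)))
```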
Step 204: and searching to obtain a correction graph by taking the intersection point as a starting point.
Step 205: and correcting the two-dimensional code area by using the at least three detection graphs and the correction graph to obtain a two-dimensional code graph for decoding.
In this case, a perspective transformation formula can be obtained by using the detection patterns and the correction pattern, and the two-dimensional code graphic can then be corrected based on the perspective transformation formula. Even if the two-dimensional code presented in the two-dimensional code region of the initially scanned image has perspective deformation (for example, the code is curled or not placed squarely) so that the information carried in the region is difficult to decode and recognize, the region can be corrected by using the perspective transformation formula and restored to a two-dimensional code graphic placed flat in a specified orientation for decoding and recognition.
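As an illustration of how such a correction could be carried out (the patent does not name a specific library or formula), the sketch below uses OpenCV to build the perspective transformation from the three detection-pattern centers plus the correction-pattern center and to warp the region onto a flat module grid. The version-2 geometry (25 modules, correction pattern centered at module coordinate 18.5) and all names and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def rectify_qr(image, finder_centers, alignment_center, modules=25, module_px=8):
    """Warp a skewed two-dimensional code region onto a flat, axis-aligned grid.

    finder_centers: centers of the top-left, top-right and bottom-left
    detection patterns (source-image pixels, in that order);
    alignment_center: center of the correction pattern near the fourth vertex.
    Detection-pattern centers sit 3.5 modules in from their corners; the
    version-2 correction pattern is centered at module (18.5, 18.5).
    """
    src = np.float32([*finder_centers, alignment_center])
    dst = np.float32([[3.5, 3.5], [modules - 3.5, 3.5],
                      [3.5, modules - 3.5], [18.5, 18.5]]) * module_px
    M = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
    size = modules * module_px
    return cv2.warpPerspective(image, M, (size, size))
```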
According to this technical scheme, determining the correction pattern through the boundary extension lines of the detection patterns is a relative measurement method: the correction pattern can be located accurately, and the accuracy of two-dimensional code recognition is improved.
Fig. 3 shows, in an example of the present application, a specific method for searching for the correction pattern in step 204, starting from the intersection point. As shown in Fig. 3, the method includes:
step 301: and taking the intersection point as an initial candidate center point of the correction graph.
Step 302: and comparing whether the graphic features of the area with the preset size and with the features of the corrected graphic, wherein the area with the current candidate center point as the center is matched with the features of the corrected graphic.
Step 303: if the candidate center point is matched with the center point of the correction graph, determining that the candidate center point is the center point of the correction graph; otherwise, determining a point around the current candidate center as the candidate center point according to a predetermined rule, and then repeating step 302.
Here, when the graphic features of the predetermined-size region around the current candidate center point fail to match the correction-pattern features, the scanning engine selects a point near the current candidate center point as the new candidate center point, for example a point at the upper left of the current candidate center point, with a search radius of one graphic basic unit (a "unit" for short, i.e., the width of the smallest square module in the two-dimensional code graphic). The unit of each two-dimensional code may differ, and the two-dimensional code scanning module (or two-dimensional code scanning engine) automatically determines the size of one unit after the two-dimensional code image is acquired.
Step 304: determining the correction pattern according to the determined center point of the correction pattern.
In some examples, the two-dimensional code image may be an image subjected to binarization processing, and the step 302 may specifically include the following processing:
1. Taking, in a predetermined manner, at least two straight lines passing through the current candidate center point within the region of the predetermined size.
2. Determining a pixel value sequence corresponding to each of the at least two straight lines, where the pixel value sequence contains two values representing black pixels and white pixels, respectively. In the binarized image, each pixel takes one of two values, for example 0 for black and 1 for white. Each straight line corresponds to a pixel value sequence that contains the pixel values of all the pixels the line passes through, arranged in the corresponding order.
3. Determining whether the arrangement and proportional relationship of the two values in each pixel value sequence represent the features of the correction pattern, and if so, determining that the region matches the correction pattern.
In some examples, step 3 may specifically include the following steps:
1) the correction pattern is characterized in that black pixels and white pixels are distributed in the following order and proportion:
black : white : black : white : black = 1 : 1 : 1 : 1 : 1.
2) determining whether the first value and the second value in the pixel value sequence are arranged according to the following order and proportional relationship:
first value : second value : first value : second value : first value = 1 : 1 : 1 : 1 : 1, where the first value represents a black pixel and the second value represents a white pixel.
In some examples, step 3 may specifically include the following steps:
1) the correction pattern is characterized in that black pixels and white pixels are distributed in the following order and proportion:
white : black : white = 1 : 1 : 1.
2) determining whether the first value and the second value in the pixel value sequence are arranged according to the following order and proportional relationship:
second value : first value : second value = 1 : 1 : 1,
where the first value represents a black pixel and the second value represents a white pixel.
In the above example, when searching for the correction pattern, only partial features of the correction pattern are matched, and a good result can still be obtained. Specifically, only the features of the middle white ring and the central black square of the correction pattern are matched (that is, the matched feature is that black and white pixels are distributed in the order and proportion white : black : white = 1 : 1 : 1). The amount of data in each pixel value sequence used for matching is therefore small: only three units' worth of pixel values in the two-dimensional code region are matched for each pixel value sequence. Tests show that this approach still locates the correction pattern accurately while clearly improving processing efficiency and reducing the amount of computation.
Fig. 5A shows an example of a correction pattern. The side lengths of the central black square, the middle white ring, and the outermost black ring are in the ratio 1 : 3 : 5. If a horizontal and a vertical straight line are drawn through the center of the correction pattern, with 0 representing a black pixel and 1 representing a white pixel, the pixel values in the pixel sequence corresponding to each straight line are arranged as follows: a first run of 0s, a second run of 1s, a third run of 0s, a fourth run of 1s, and a fifth run of 0s, where the five run lengths are almost the same (for example, 21 to 23 pixels each); that is, the proportional relationship of these runs is 1 : 1 : 1 : 1 : 1. The correction pattern can therefore be searched for by using this feature, namely the arrangement and proportional relationship of the pixel values in the pixel sequence corresponding to a straight line passing through its center.
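A minimal sketch of this feature check follows (illustrative, not the patent's code). It assumes the binarized 0/1 representation described above, extracts run lengths along the horizontal and vertical lines through the candidate point, and tests the reduced white : black : white = 1 : 1 : 1 feature; the segment length and tolerance are assumptions.

```python
import numpy as np

def run_lengths(values):
    """Collapse a 1-D array of 0/1 pixels into [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def matches_alignment_feature(binary, center, unit, tol=0.5):
    """Check the reduced correction-pattern feature (white : black : white =
    1 : 1 : 1) along the horizontal and vertical lines through `center`.

    binary holds 0 for black and 1 for white; the scanned segment spans just
    under three modules, so it should contain exactly three runs, each about
    one `unit` long (within `tol` units).
    """
    x, y = int(round(center[0])), int(round(center[1]))
    half = int(1.4 * unit)                 # just inside the outer black ring
    h, w = binary.shape
    if not (half <= x < w - half and half <= y < h - half):
        return False                       # candidate too close to the border
    for segment in (binary[y, x - half:x + half + 1],    # horizontal line
                    binary[y - half:y + half + 1, x]):   # vertical line
        runs = run_lengths(segment)
        if len(runs) != 3:
            return False
        (v0, n0), (v1, n1), (v2, n2) = runs
        if (v0, v1, v2) != (1, 0, 1):      # expect white, black, white
            return False
        if any(abs(n - unit) > tol * unit for n in (n0, n1, n2)):
            return False
    return True
```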
Fig. 5B shows one block of an image area within the two-dimensional code region. If the intersection point obtained by the above method is a, a horizontal and a vertical straight line passing through point a are taken, the two corresponding pixel sequences are determined, and it is judged whether both pixel sequences match the features of the correction pattern, that is, whether the arrangement and proportional relationship of the pixel values in the two sequences are consistent with the preset parameters representing the correction-pattern features (for example, parameters representing the arrangement order and proportional relationship of the pixel values in the correction pattern). If the intersection point a happens to lie exactly at the center of the correction pattern (the correction pattern shown with oblique hatching in Fig. 5B actually exists) and it is determined that point a matches the parameters representing the correction-pattern features, point a is determined to be the center of the correction pattern, and the correction pattern is found. If the intersection point a is not located at the center of the correction pattern and it is determined that point a does not match those parameters, a further point b is taken at the upper right of point a, a horizontal and a vertical straight line passing through point b are taken, and the above steps are repeated until the center of the correction pattern is found.
In some examples, when determining the center point of the correction pattern by the above method fails, the intersection point may be used as the center point of the correction pattern, and the correction pattern may be simulated. A condition for judging that determination of the center point of the correction pattern has failed may be preset, and the process of simulating the correction pattern is started when that condition is met. Because the correction pattern has a preset size, a region within which the correction pattern is searched for can be set; if the position of the pixel serving as the current candidate center point goes beyond the preset range, it can be judged that determining the center point of the correction pattern has failed, that is, the correction pattern cannot be found by searching.
With the above example, even if the correction pattern is missing from the scanned two-dimensional code image (as shown in Fig. 5C, a blank area 501 appears, so the correction pattern cannot be found by the above search), the correction pattern can still be simulated in the two-dimensional code area. As shown in Fig. 5C, the intersection point 502 obtained by the above method is used as the center point, a correction pattern is generated according to preset correction-pattern features (such as the features shown in Fig. 5A), and the generated correction pattern together with the detection patterns can then be used for the correction processing.
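A minimal sketch of this fallback (illustrative names and assumptions; it relies on the 1 : 3 : 5 ring structure of Fig. 5A and the binarized 0/1 image used above) simply paints a synthetic correction pattern around the intersection point so that the subsequent perspective correction still has a fourth reference point.

```python
def simulate_alignment_pattern(binary, center, unit):
    """Paint a synthetic correction pattern, centered at `center`, into the
    binarized image in place: a black 5x5-unit square, a white 3x3-unit
    square, and a black 1x1-unit center, as in Fig. 5A (0 = black, 1 = white).

    Assumes the painted area lies fully inside the image.
    """
    x, y = int(round(center[0])), int(round(center[1]))
    for half_units, value in ((2.5, 0), (1.5, 1), (0.5, 0)):
        half = int(round(half_units * unit))
        binary[y - half:y + half + 1, x - half:x + half + 1] = value
    return (x, y)  # usable as the fourth reference point for the correction
```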
In the above example, even if the correction pattern is missing, the two-dimensional code pattern available for decoding can still be obtained, so that the success rate of two-dimensional code identification can be obviously improved, and the performance of the two-dimensional code scanning module is improved.
Based on the above examples, the present application further provides a two-dimensional code correction device. In some examples, the two-dimensional code correction device may be implemented with the structure shown in Fig. 6 and includes three functional modules: a scanning module 601, a positioning module 602, and a correction module 603.
The scanning module 601 may extract a two-dimensional code region from the two-dimensional code image, and determine at least three detection patterns in the two-dimensional code region.
Here, the scanning module 601 scans the two-dimensional code image through the image acquisition device, and extracts the region with obvious two-dimensional code characteristics; at least three detection patterns are determined through analysis, for example, the detection patterns can comprise three detection patterns which are respectively positioned at three vertexes of the two-dimensional code area.
The positioning module 602 may determine an intersection point of extension lines of boundaries of two probe patterns used for determining the calibration pattern, and search for the calibration pattern with the intersection point as a starting point.
In some examples, the positioning module 602 may take one boundary line from each of the two detection patterns, which does not cover the boundary of the two-dimensional code, and one of the boundary lines is a transverse line and the other boundary line is a longitudinal line, and an intersection point of extension lines of the two taken boundary lines may be used to determine the correction pattern.
The correction module 603 may correct the two-dimensional code region using the at least three detection patterns and the correction pattern to obtain a two-dimensional code pattern for decoding.
In some examples, the two-dimensional code is rectangular; the at least three detection patterns determined by the scanning module 601 include: three detection graphs respectively covering three vertexes of the two-dimensional code; the corrected graph obtained by the positioning module 602 is located near the fourth vertex of the two-dimensional code.
In some examples, the two detection profiles used to determine the correction profile may include: and the two detection graphs respectively cover two opposite vertexes of the two-dimensional code.
The location module 602 may determine the intersection point by: respectively taking a boundary line which does not cover the boundary of the two-dimensional code from the two detection graphs, wherein one boundary line is transverse and the other boundary line is longitudinal; and determining the intersection point of the extension lines of the two taken boundary lines.
In some examples, the positioning module 602 may obtain the correction pattern as follows: taking the intersection point as the initial candidate center point of the correction pattern, determining whether the graphic features of a region of a predetermined size centered on the current candidate center point match the correction pattern; if they match, determining that the current candidate center point is the center point of the correction pattern; otherwise, determining a point around the current candidate center point as the new candidate center point according to a predetermined rule, and then repeating this step; and determining the correction pattern according to the determined center point of the correction pattern.
In some examples, the positioning module 602 may further use the intersection point as a center point of the calibration graph and simulate the calibration graph when determining the center point of the calibration graph fails.
The embodiment of the application also provides terminal equipment which at least comprises the device in any embodiment, so that the terminal equipment has the two-dimensional code correction function in any embodiment and has stronger two-dimensional code identification capability.
The specific implementation principle of the functions of the above modules has been described in the foregoing, and is not described herein again.
In addition, the two-dimensional code correction method and the two-dimensional code correction device in each example of the present application and each module thereof may be integrated in one processing unit, or each module may exist alone physically, or two or more devices or modules may be integrated in one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
In an embodiment, the two-dimensional code correction apparatus may be run in various computing devices capable of performing user information processing based on the internet, and loaded in a memory of the computing device.
Fig. 7 is a block diagram showing a configuration of a computing device in which the two-dimensional code correction apparatus is located. As shown in fig. 7, the computing device includes one or more processors (CPUs) 702, a communication module 704, a memory 706, a user interface 710, and a communication bus 708 for interconnecting these components.
The processor 702 may receive and transmit data via the communication module 704 to enable network communications and/or local communications.
User interface 710 includes one or more output devices 712, including one or more speakers and/or one or more visual displays. The user interface 710 also includes one or more input devices 714, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch screen display, a touch sensitive tablet, a gesture capture camera or other input buttons or controls, and the like.
The memory 706 may be a high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 706 stores a set of instructions executable by the processor 702, including:
an operating system 716 including programs for handling various basic system services and for performing hardware related tasks;
the two-dimensional code correction device 718, which includes various applications for two-dimensional code correction processing; these applications can implement the processing flows in the above examples. In some examples, the two-dimensional code correction device 718 can include the modules 601-603 shown in FIG. 6, and each of the modules 601-603 can store machine-executable instructions. The processor 702 can implement the functions of the modules 601-603 by executing the machine-executable instructions stored in the modules 601-603 in the memory 706.
In addition, each example of the present application can be implemented by a data processing program executed by a data processing apparatus such as a computer. Such a data processing program itself constitutes the present application. Further, the data processing program is usually stored in a storage medium and is executed by reading the program directly out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or a memory) of the data processing device; such a storage medium therefore also constitutes the present application. The storage medium may use any type of recording means, such as a paper storage medium (e.g., paper tape), a magnetic storage medium (e.g., a flexible disk, a hard disk, or flash memory), an optical storage medium (e.g., a CD-ROM), or a magneto-optical storage medium (e.g., an MO).
The present application thus also provides a non-volatile storage medium having stored therein a data processing program for executing any one of the examples of the method of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the scope of the present application.
Claims (12)
1. A two-dimensional code correction method is characterized by comprising the following steps:
extracting a two-dimensional code area from the two-dimensional code image;
determining at least three detection graphs in the two-dimensional code area;
taking a boundary line which does not cover the boundary of the two-dimensional code from two detection graphs for determining a correction graph respectively, wherein one boundary line is in a transverse direction and the other boundary line is in a longitudinal direction, and determining the intersection point of extension lines of the two taken boundary lines, wherein the two detection graphs for determining the correction graph are two detection graphs respectively covering two opposite vertexes of the two-dimensional code in the at least three detection graphs;
searching to obtain a corrected graph by taking the intersection point as a starting point; and
and obtaining a perspective transformation formula by using the at least three detection graphs and the correction graph, and correcting the two-dimensional code area based on the perspective transformation formula to obtain a two-dimensional code graph for decoding.
2. The method of claim 1, wherein the two-dimensional code is rectangular;
the at least three detection patterns include: three detection graphs respectively covering three vertexes of the two-dimensional code;
the correction pattern is located near a fourth vertex of the two-dimensional code.
3. The method of claim 1, wherein the searching for a correction pattern starting from the intersection point comprises:
taking the intersection point as an initial candidate center point of the correction graph;
determining whether a graphic feature of a region of a predetermined size centered on a current candidate center point matches the correction pattern; if they match, determining that the current candidate center point is the center point of the correction pattern; otherwise, determining a point around the current candidate center point as the new candidate center point according to a predetermined rule, and then repeatedly executing this step; and
determining the correction pattern according to the determined center point of the correction pattern.
4. The method of claim 3, further comprising:
and when the central point of the correction graph is determined to fail, taking the intersection point as the central point of the correction graph, and simulating the correction graph.
5. The method according to claim 3, wherein the two-dimensional code image is an image subjected to binarization processing;
the determining whether the graphic feature of the region of the predetermined size centered on the current candidate center point matches the corrected graphic includes:
taking at least two straight lines passing through the candidate center points in a region with a preset size according to a preset mode;
determining a pixel value sequence corresponding to each of the at least two straight lines; wherein the sequence of pixel values comprises two values representing a black pixel and a white pixel, respectively;
and determining whether the arrangement mode and the proportional relation of the two numerical values in each pixel value sequence represent the characteristics of the correction graph, and if so, determining that the two numerical values are matched with the correction graph.
6. The method of claim 5, wherein the correction pattern is characterized by black and white pixels distributed in the following order and proportion:
white : black : white = 1 : 1 : 1;
the determining whether the arrangement mode and the proportional relation of the two numerical values in each pixel value sequence represent the characteristics of the correction graph comprises the following steps:
determining whether the first value and the second value in the pixel value sequence are arranged according to the following order and proportional relationship:
second value : first value : second value = 1 : 1 : 1;
wherein the first value represents a black pixel and the second value represents a white pixel.
7. The method of claim 5, wherein the correction pattern is characterized by black and white pixels distributed in the following order and proportion:
black : white : black : white : black = 1 : 1 : 1 : 1 : 1;
the determining whether the arrangement mode and the proportional relation of the two numerical values in each pixel value sequence represent the characteristics of the correction graph comprises the following steps:
determining whether the first value and the second value in the pixel value sequence are arranged according to the following order and proportional relationship:
first value : second value : first value : second value : first value = 1 : 1 : 1 : 1 : 1;
wherein the first value represents a black pixel and the second value represents a white pixel.
8. A two-dimensional code correcting device is characterized by comprising:
the scanning module extracts a two-dimensional code area from the two-dimensional code image; determining at least three detection graphs in the two-dimensional code area;
the positioning module is used for taking a boundary line which does not cover the boundary of the two-dimensional code from two detection graphs for determining a correction graph respectively, wherein one boundary line is transverse, the other boundary line is longitudinal, and determining the intersection point of the extension lines of the two taken boundary lines, wherein the two detection graphs for determining the correction graph are two detection graphs respectively covering two opposite vertexes of the two-dimensional code in the at least three detection graphs;
searching to obtain a corrected graph by taking the intersection point as a starting point;
and the correcting module is used for obtaining a perspective transformation formula by utilizing the at least three detection graphs and the correction graph and correcting the two-dimensional code area based on the perspective transformation formula to obtain a two-dimensional code graph for decoding.
9. The apparatus of claim 8, wherein the two-dimensional code is rectangular;
the at least three detection patterns determined by the scanning module include: three detection graphs respectively covering three vertexes of the two-dimensional code;
the correction graph obtained by the positioning module is located near a fourth vertex of the two-dimensional code.
10. The apparatus of claim 8, wherein the positioning module obtains the correction pattern by:
taking the intersection point as an initial candidate center point of the correction graph;
determining whether a graphic feature of a region of a predetermined size centered on a current candidate center point matches the correction pattern; if they match, determining that the current candidate center point is the center point of the correction pattern; otherwise, determining a point around the current candidate center point as the new candidate center point according to a predetermined rule, and then repeatedly executing this step; and
determining the correction pattern according to the determined center point of the correction pattern.
11. The apparatus of claim 10, wherein the positioning module further uses the intersection point as a center point of the calibration pattern and simulates the calibration pattern when determining the center point of the calibration pattern fails.
12. A terminal device, characterized in that it comprises an apparatus according to any one of claims 8 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610692045.9A CN106326802B (en) | 2016-08-19 | 2016-08-19 | Two-dimensional code correction method, device and terminal device
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610692045.9A CN106326802B (en) | 2016-08-19 | 2016-08-19 | Two-dimensional code correction method, device and terminal device
Publications (2)
Publication Number | Publication Date |
---|---|
CN106326802A CN106326802A (en) | 2017-01-11 |
CN106326802B true CN106326802B (en) | 2018-07-27 |
Family
ID=57744560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610692045.9A Active CN106326802B (en) | 2016-08-19 | 2016-08-19 | Quick Response Code bearing calibration, device and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106326802B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729790B (en) * | 2017-09-27 | 2020-12-29 | 创新先进技术有限公司 | Two-dimensional code positioning method and device |
CN107729968B (en) * | 2017-09-30 | 2018-11-23 | 中联惠众信息技术(北京)有限公司 | It is a kind of two dimension code encoding method and decoding method |
CN107577980B (en) * | 2017-09-30 | 2018-10-09 | 中联惠众信息技术(北京)有限公司 | A kind of Quick Response Code error-correcting decoding method and its code translator |
CN109711223A (en) * | 2018-12-28 | 2019-05-03 | 福州符号信息科技有限公司 | A kind of promotion QR code decoding rate method and apparatus |
US11182759B2 (en) | 2019-04-15 | 2021-11-23 | Advanced New Technologies Co., Ltd. | Self-service checkout counter |
US11113680B2 (en) | 2019-04-16 | 2021-09-07 | Advanced New Technologies Co., Ltd. | Self-service checkout counter checkout |
CN110264645A (en) * | 2019-04-16 | 2019-09-20 | 阿里巴巴集团控股有限公司 | A kind of self-service cash method and equipment of commodity |
CN111178869B (en) * | 2019-10-25 | 2023-10-17 | 腾讯科技(深圳)有限公司 | Identification code display method and device, terminal equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103699869A (en) * | 2013-12-30 | 2014-04-02 | 优视科技有限公司 | Method and device for recognizing two-dimension codes |
CN103745221A (en) * | 2014-01-08 | 2014-04-23 | 杭州晟元芯片技术有限公司 | Two-dimensional code image correction method |
CN104598904A (en) * | 2014-11-14 | 2015-05-06 | 腾讯科技(深圳)有限公司 | Graphic code correction graphic center positioning method and device |
CN104766037A (en) * | 2015-03-20 | 2015-07-08 | 中国联合网络通信集团有限公司 | Two-dimension code recognition method and device |
- 2016-08-19: CN application CN201610692045.9A filed; granted as patent CN106326802B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106326802A (en) | 2017-01-11 |
Similar Documents
Publication | Title |
---|---|
CN106326802B (en) | Two-dimensional code correction method, device and terminal device | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
CN110046529B (en) | Two-dimensional code identification method, device and equipment | |
CN112183536B (en) | Custom functional patterns for optical bar codes | |
CN109815770B (en) | Two-dimensional code detection method, device and system | |
US8948445B2 (en) | Embedding visual information in a two-dimensional bar code | |
CN103714327B (en) | Method and system for correcting image direction | |
US9098888B1 (en) | Collaborative text detection and recognition | |
CN110232326B (en) | Three-dimensional object recognition method, device and storage medium | |
US20190122070A1 (en) | System for determining alignment of a user-marked document and method thereof | |
CN104798086A (en) | Detecting embossed characters on form factor | |
Dubská et al. | Real-time precise detection of regular grids and matrix codes | |
US11551388B2 (en) | Image modification using detected symmetry | |
WO2013112753A1 (en) | Rules for merging blocks of connected components in natural images | |
JPWO2004055713A1 (en) | Bar code recognition device | |
WO2018059365A1 (en) | Graphical code processing method and apparatus, and storage medium | |
CN110765795B (en) | Two-dimensional code identification method and device and electronic equipment | |
KR20200050091A (en) | Method and Electronic device for reading a barcode | |
US9501681B1 (en) | Decoding visual codes | |
KR101842535B1 (en) | Method for the optical detection of symbols | |
CN114092949A (en) | Method and device for training class prediction model and identifying interface element class | |
CN108108646B (en) | Bar code information identification method, terminal and computer readable storage medium | |
CN105631850B (en) | Aligned multi-view scanning | |
JP6011885B2 (en) | Code reading apparatus and code reading method | |
JP4163406B2 (en) | Bar code recognition device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |