
WO2021151319A1 - Card frame detection method, apparatus, device and readable storage medium - Google Patents

Card frame detection method, apparatus, device and readable storage medium

Info

Publication number
WO2021151319A1
WO2021151319A1 (application PCT/CN2020/122132, priority CN2020122132W)
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
card
frame
detection
Prior art date
Application number
PCT/CN2020/122132
Other languages
English (en)
French (fr)
Inventor
张国辉
雷晨雨
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021151319A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of image processing, and in particular to a method, device, equipment and readable storage medium for detecting a card frame.
  • the border (edge) detection algorithm of various cards is a very important part of the card recognition algorithm.
  • the existing card border detection algorithm mainly sets various conditions to filter out some edge information to obtain the card border.
  • the inventor realizes that this method is prone to misjudgment in the case of a complex background or blurred edges, resulting in frame detection errors and affecting the subsequent operation of other services such as card information extraction.
  • An embodiment of the present application provides a method for detecting a card frame, and the method for detecting a card frame includes:
  • the embodiment of the present application also provides a card frame detection device, the card frame detection device includes:
  • An image acquisition module for acquiring an original card image, and preprocessing the original card image to obtain an image to be recognized
  • a model input module configured to input the image to be recognized into a preset detection model, extract corresponding image feature information from the preset detection model, and calculate frame line parameters according to the image feature information;
  • An area interception module configured to obtain the detection frame line of the original card image according to the frame line parameter, and respectively intercept the edge area of each side of the card in the original card image according to the detection frame line;
  • the straight line detection module is used to cut the edge area of each side to obtain the cutting area of each side, and perform straight line detection on each cutting area to obtain the effective straight line of each cutting area;
  • the straight line fitting module is used to obtain the vertices of the edge area of the same side according to the effective straight lines of the same side, and to fit the vertices of the edge area of the same side to obtain the corresponding frame fitting straight line;
  • the straight line combination module is used to combine the borders corresponding to each side to fit a straight line to obtain the card border.
  • An embodiment of the present application also provides a card frame detection device.
  • the card frame detection device includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein when the computer program is executed by the processor, the following steps are implemented:
  • the embodiment of the present application also provides a readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the following steps are implemented:
  • FIG. 1 is a schematic diagram of the hardware structure of the card frame detection device involved in the solution of the embodiment of the application;
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for detecting a card frame according to this application;
  • FIG. 3 is a schematic diagram of the functional modules of the first embodiment of the card frame detection device according to the present application.
  • the card frame detection method involved in the embodiments of this application is mainly applied to a card frame detection device, which can be a mobile phone, a tablet computer, a personal computer (PC), a notebook computer, a server, or another device with data processing functions.
  • FIG. 1 is a schematic diagram of the hardware structure of the card frame detection device involved in the solution of the embodiment of the application.
  • the card frame detection device may include a processor 1001 (for example, a central processing unit (CPU)), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to realize the connection and communication between these components;
  • the user interface 1003 may include a display screen (Display), an input unit such as a keyboard (Keyboard);
  • the network interface 1004 may optionally include a standard wired interface or a wireless interface (such as a wireless fidelity (Wi-Fi) interface);
  • the memory 1005 can be a high-speed random access memory (RAM) or a stable non-volatile memory, such as disk storage; the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • the memory 1005 may be volatile or non-volatile.
  • the memory 1005 as a computer-readable storage medium in FIG. 1 may include an operating system, a network communication module, and a computer program.
  • the network communication module can be used to connect to a preset database and perform data communication with the database; and the processor 1001 can call a computer program stored in the memory 1005 and execute the card frame detection method provided in the embodiment of the present application.
  • the embodiment of the present application provides a method for detecting a card frame.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for detecting a card frame according to the present application.
  • the card frame detection method includes the following steps:
  • Step S10 acquiring an original card image, and preprocessing the original card image to obtain an image to be recognized;
  • the border (edge) detection algorithm of various cards is a very important part of the card recognition algorithm.
  • the existing card border detection algorithm mainly sets various conditions to filter out some edge information to obtain the card border. This method is prone to misjudgment in the case of a complex background or blurred edges, leading to border detection errors and affecting the subsequent operation of other services such as card information extraction.
  • this embodiment proposes a card border detection method. After the original card image is obtained, preprocessing is performed, then the border line parameters of the card are obtained through the detection model, and then segmented straight line detection and fitting are performed based on the border line parameters.
  • the card edge detection method in this embodiment is implemented by a card edge detection device.
  • the card frame detection device can be a mobile phone, a tablet computer, a personal computer, a server, and other devices.
  • a mobile phone is used as an example for description.
  • a network model for detecting the outline of the card from the image will first be obtained (built); this model may be called a detection model.
  • the detection model of this embodiment is improved based on the deeplab v3 model; the original deeplab v3 model is relatively large, and the process of data processing (from input to output) takes a certain amount of time and has high requirements for hardware performance.
  • the detection is performed through a mobile phone, so the deeplab v3 model is improved to obtain a detection model.
  • the detection model includes a codec and an arithmetic unit, and the codec includes an encoder and a decoder.
  • the mobile phone can first obtain the original card image to be detected.
  • the original card image can be obtained by calling the camera of the mobile phone, downloaded from the network, or transmitted from other terminals; of course, the original card image contains a card.
  • after the mobile phone obtains the original card image, it will first preprocess the original card image so that it meets the input requirements of the detection model, which is also beneficial for improving the processing efficiency of the detection model and reducing the performance pressure of the mobile phone.
  • step S10 includes:
  • Step S11 Obtain an RGB color original card image through a camera, and scale the RGB color original card image to obtain a scaled image of a preset size;
  • the original card image in this embodiment may be taken by a mobile phone through a camera, and the taken image may be an original card image in RGB color.
  • however, the user may not necessarily shoot according to the input size requirements of the detection model, so the RGB color original card image needs to be scaled to obtain a zoomed image of a preset size.
  • for example, the RGB color original card image is reduced (or enlarged) to a zoomed image of 128*256 size.
  • step S12 the pixel value of the zoomed image is reduced to obtain the image to be recognized.
  • after the zoomed image is obtained, its pixel values will be adjusted: the pixel values of the zoomed image are reduced to obtain the image to be recognized for later image processing. The specific reduction method can be to divide the pixel values of the zoomed image by 255 to get the new pixel values; of course, binarization processing can also be used.
  • in this way, the original card image can be obtained by shooting with a mobile phone and preprocessed by zooming and reducing the pixel values, so that the original card image meets the input requirements of the detection model; this facilitates subsequent image processing and reduces the performance pressure on the mobile phone.
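As a rough illustration (not the patent's actual code), the preprocessing described above, scaling the image to a preset size such as 128*256 and dividing pixel values by 255, could be sketched in Python with NumPy. The nearest-neighbour resize is an assumption; the patent does not specify the scaling method:

```python
import numpy as np

def preprocess(original: np.ndarray, size=(128, 256)) -> np.ndarray:
    """Resize `original` to `size` (height, width) by nearest-neighbour
    sampling, then reduce pixel values to [0, 1] by dividing by 255."""
    h, w = original.shape[:2]
    th, tw = size
    rows = np.arange(th) * h // th   # source row for each target row
    cols = np.arange(tw) * w // tw   # source column for each target column
    zoomed = original[rows][:, cols]
    return zoomed.astype(np.float32) / 255.0
```

A 64*128 camera capture would come out as a 128*256 float image with values in [0, 1], ready to feed to the detection model.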
  • Step S20 input the image to be recognized into a preset detection model, extract corresponding image feature information from the preset detection model, and calculate frame line parameters according to the image feature information;
  • when the image to be recognized is obtained, it can be input to the detection model, and the corresponding image feature information can be obtained through the detection model.
  • the process of feature extraction may include convolution, pooling and other processing of the image to be recognized, so as to extract image feature information that can represent the image features.
  • when the image feature information is obtained, the frame line parameters will be calculated according to the image feature information.
  • the frame line parameters are used to describe the outline information of the card frame in the image; for example, they can be the line expression of the card frame, the coordinates of the vertices of the card, the location information of the area where the card is located, etc.
  • the preset detection model includes a codec and an arithmetic unit
  • the step S20 includes:
  • Step S21 input the image to be recognized into a preset detection model, and perform feature extraction on the image to be recognized through a codec to obtain feature images of each side of the card;
  • the detection model in this embodiment includes a codec and an arithmetic unit, and the codec includes an encoder (Encoder) and a decoder (Decoder); the image feature information may be in the form of a feature image.
  • the Encoder can use the shufflenet_0.5 network.
  • Shufflenet is a lightweight network model that includes a grouped convolution part and a channel shuffle part to extract image features from images. Because this embodiment is applied to mobile phones, whose processing performance has certain limitations, the shufflenet is reduced in this embodiment in order to improve the processing efficiency and reduce the performance pressure of the mobile phone, and the width is set to 0.5 times the original.
  • the size of the primary feature image obtained is reduced compared with the image to be recognized.
  • for example, the size of the image to be recognized is 128*256, and the primary feature image is 32*64 in size.
  • the decoder is based on the deeplab v3 model; in the ASPP (core module) of the deeplab v3 model there are five branches, and the decoder in this embodiment can take only two branches to reduce the amount of data processing. After the primary feature image is obtained, it will be transformed by the Decoder (including convolution and pooling) to obtain the corresponding feature image (feature map); compared with the primary feature image, the feature map is enlarged to a certain extent in size. For example, the primary feature image is 32*64 in size, and the feature image is 128*256 in size.
  • step S22 a weighted least square operation is performed on the feature image by the arithmetic unit to obtain the corresponding frame line parameter.
  • after obtaining the feature map, a weighted least squares operation (Weighted_least_squares) can be performed on the feature map to analyze it and obtain the frame line parameters corresponding to the feature image; the frame line parameters are used to describe the outline information of the card frame in the image. Because the card has four borders, in order to improve the accuracy of the analysis, a corresponding feature map can be output for each border when extracting feature images, and then the weighted least squares operation is performed on each feature map to obtain the corresponding border line parameters.
  • W*[y_map, 1]*A = W*x_map;
  • where W represents the preset weight values, x_map is the eigenvalue of the x-axis coordinate, y_map is the eigenvalue of the y-axis coordinate, and A is the frame line parameter to be solved.
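Reading the expression above as a weighted least-squares system W*[y_map, 1]*A = W*x_map, the frame line parameter A (a slope and intercept expressing x as a function of y) could be solved as in the following sketch. This shows the general technique with NumPy, not the patent's exact operator:

```python
import numpy as np

def weighted_line_fit(x_map: np.ndarray, y_map: np.ndarray, w: np.ndarray):
    """Solve W*[y_map, 1]*A = W*x_map for A = (slope, intercept) in the
    weighted least-squares sense, i.e. fit x = slope*y + intercept."""
    design = np.column_stack([y_map, np.ones_like(y_map)]) * w[:, None]
    target = x_map * w
    slope, intercept = np.linalg.lstsq(design, target, rcond=None)[0]
    return slope, intercept
```

With equal weights and points lying exactly on x = 2y + 1, the fit recovers slope 2 and intercept 1.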
  • in this way, the mobile phone can analyze the image to be recognized to obtain the frame line parameters, roughly determine the position of the card frame according to the frame line parameters, and obtain the detection frame lines in the original card image; in addition, because the detection model is an end-to-end model, it is also conducive to improving the accuracy of detection.
  • Step S30 Obtain the detection frame line of the original card image according to the frame line parameter, and respectively intercept the edge area of each side of the card in the original card image according to the detection frame line;
  • when the frame line parameters are obtained, the mobile phone can roughly determine the position of the card frame according to the frame line parameters and obtain the detection frame lines in the original card image. It is worth noting that the detection frame lines in this embodiment are obtained according to the approximate position of the card frame; in order to further improve the accuracy of frame detection, this embodiment will perform further analysis based on the detection frame lines to locate the card border more accurately.
  • when the detection frame lines are obtained, the edge area of each side of the card can be respectively intercepted in the original card image based on the detection frame lines.
  • for example, the detection frame line is used as the baseline, and the line is translated a certain distance to both sides to obtain the edge area; it can be considered that the actual sides of the card are located within the edge area.
  • each detection frame line yields a corresponding edge area; a card is usually rectangular with four sides, that is, there are four detection frame lines, and thus four edge areas are obtained.
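For intuition, intercepting the edge area of a horizontal side could look like the following sketch, where the detection frame line at `row` is translated `margin` pixels to both sides; the function name and parameters are illustrative, not from the patent:

```python
import numpy as np

def crop_edge_area(image: np.ndarray, row: int, margin: int) -> np.ndarray:
    """Keep a horizontal band of `margin` pixels on each side of the
    detection frame line at `row`; the card side lies inside this band."""
    top = max(row - margin, 0)
    bottom = min(row + margin, image.shape[0])
    return image[top:bottom, :]
```

Vertical sides would be handled the same way with a column band; each of the four detection frame lines produces one such edge area.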
  • Step S40 cutting the edge area of each side to obtain the cutting area of each side, and performing straight line detection on each cutting area to obtain the effective straight line of each cutting area;
  • when the edge areas of each side are obtained, transverse cutting can be performed separately on each edge area to obtain several cutting areas. Since each side of the card is actually located in the edge area, cutting the edge area can also be considered as cutting the line where the card frame is located, obtaining several straight lines (line segments); then, straight line detection can be performed on these cutting areas separately to obtain the effective straight line in each cutting area.
  • for straight line detection, a suitable algorithm can be selected according to actual needs. For example, the detection can be based on the Hough transform; specifically, the points on the image (region) can be transformed into a set in a parameter space, which in this solution corresponds to the slope k and the constant b of the straight line in the preset coordinates, so as to detect the straight line through the points. Of course, the LSD fast line detection algorithm and other methods can also be used.
  • Step S50 obtaining the vertices of the edge area of the same side according to the effective straight lines of the same side, and fitting the vertices of the edge area of the same side to obtain the corresponding frame fitting straight line;
  • the intersections of the effective straight line of each cutting area with the edges of the area form two edge area vertices; if the edge area of one side is cut into x (x>1) cutting areas, the edge area of that side corresponds to 2x edge area vertices. For each edge area, the vertices of the edge area can be fitted separately to obtain the corresponding frame fitting straight line. Since there are 4 edge areas, 4 frame fitting straight lines will be obtained.
  • Step S60 Combine the borders corresponding to each side to fit a straight line to obtain a card border.
  • when the fitting straight line of each frame is obtained, the fitting straight lines of the frames can be combined to obtain a closed frame, which is the card frame.
  • the card image in the frame can be recognized based on the card frame, and then the card information can be obtained.
  • for example, the OCR (Optical Character Recognition) module can be used to recognize each single-character image, and the OCR module can classify and recognize the single-character images through CNN (Convolutional Neural Networks) and other technologies; in this way, the card information in the card is recognized.
  • the optical character recognition algorithm is used to recognize the card image in the card frame to obtain the character information in the card image, so that the user does not need to actively input the card information, which reduces the user's operations.
  • the frame position information corresponding to the original card image and/or the card frame may also be stored in a node of a blockchain.
  • the card information can also be stored in a node of a blockchain.
  • the blockchain referred to in this embodiment is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • an original card image is obtained, and the original card image is preprocessed to obtain an image to be recognized; the image to be recognized is input to a preset detection model, and corresponding image features are extracted by the preset detection model Information, and calculate the frame line parameters according to the image feature information; obtain the detection frame line of the original card image according to the frame line parameters, and capture each card in the original card image according to the detection frame line Edge area of the side; cut the edge area of each side to obtain the cutting area of each side, and perform straight line detection on each cutting area to obtain the effective straight line of each cutting area; obtain the same edge according to the effective straight line of the same side The area vertices are fitted to the edge area vertices of the same side to obtain the corresponding frame fitting straight line; the frame fitting straight lines corresponding to each side are combined to obtain the card frame.
  • in this way, this embodiment performs preprocessing after acquiring the original card image, then obtains the border line parameters of the card through the detection model, and then performs the post-processing of segmented straight line detection and fitting based on the border line parameters to obtain the card border in the image, so that the border of the card is detected by image edge detection. This helps to avoid misjudgment in the case of complex backgrounds or blurred edges, improves the accuracy of card border detection, and provides convenience for subsequent processing such as card information extraction.
  • step S40 includes:
  • Step S41 Perform edge detection on the edge area of each side respectively to obtain the edge contour of each side;
  • edge detection is performed on each edge area to obtain the edge contour of each side; the edge detection may be implemented based on the Sobel operator.
  • the Sobel operator includes two matrices, horizontal and vertical, which are convolved with the edge area (image) in the plane to obtain the horizontal and vertical brightness difference approximations respectively; the gradient at a point is then calculated from the brightness difference approximations, and if the gradient is greater than a certain threshold, the point is an edge point.
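A sketch of this Sobel-based edge detection, correlating the two 3x3 kernels with a grayscale region and thresholding the gradient magnitude (a generic implementation, assuming the threshold is chosen by the caller):

```python
import numpy as np

# Horizontal and vertical Sobel kernels
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Mark pixels whose Sobel gradient magnitude exceeds `threshold`."""
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(gray.astype(float), 1, mode="edge")
    for dy in range(3):                      # accumulate the 3x3 correlation
        for dx in range(3):
            win = padded[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return np.hypot(gx, gy) > threshold      # gradient magnitude test
```

On a step image (dark on the left, bright on the right), only the pixels adjacent to the step are marked as edge points.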
  • Step S42 cutting the edge area of each side to obtain the cutting area of each side;
  • for each edge area, transverse cutting can be performed separately to obtain several cutting areas. It is worth noting that in this embodiment, the edge detection is performed first and then the cutting; in practice, the cutting may also be performed first, followed by the edge detection.
  • Step S43 Determine the gray scale threshold of each cut area by Otsu method and the edge contour, and perform binarization processing on each cut area according to the gray threshold;
  • the gray threshold of each cut area will be determined by the Otsu method (OTSU) and the edge contour of each edge area.
  • the gray threshold can be regarded as the segmentation threshold between the foreground and the background. Specifically, for a cutting area, suppose the proportion of foreground pixels in the cutting area is w0 with average gray scale u0, the proportion of background pixels is w1 with average gray scale u1, and the between-class variance is g; then g = w0*w1*(u0-u1)^2. The traversal method can be used to vary the candidate threshold, updating w0, u0, w1 and u1 accordingly, and the threshold at which g is the largest is taken as the gray threshold.
  • then, the cutting area can be binarized according to the gray threshold, dividing the cutting area into white and gray (or black) regions.
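The Otsu computation described above, traversing candidate thresholds and maximizing the between-class variance g = w0*w1*(u0-u1)^2, can be sketched as follows (a generic implementation, not the patent's code):

```python
import numpy as np

def otsu_threshold(region: np.ndarray) -> int:
    """Return the gray threshold that maximizes the between-class
    variance g = w0 * w1 * (u0 - u1)**2 for an 8-bit region."""
    hist = np.bincount(region.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_g, best_t = -1.0, 0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # class-0 (foreground) proportion
        w1 = 1.0 - w0                        # class-1 (background) proportion
        if w0 == 0 or w1 == 0:               # skip degenerate splits
            continue
        u0 = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
        u1 = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

Binarization is then simply `region >= otsu_threshold(region)`, separating the region into the two classes described above.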
  • step S44 straight line detection is performed on each of the binarized cutting areas to obtain the effective straight line of each cutting area.
  • at this time, straight line detection can be performed on each cutting area respectively to obtain the effective straight line of each cutting area. It is worth noting that in some cutting areas, multiple straight lines may be detected, which may be caused by the complex background of the card or other interference factors; at this time, one of them needs to be selected as the effective straight line of the cutting area.
  • specifically, such a cutting area can be recorded as a target cutting area; for each detected line of the target cutting area, the gray region enclosed by the line and the edge of the target cutting area can first be determined, along with the area of each gray region. When the background or interference factors overlap with the card, the detected card area is generally larger than the actual card, so the gray region with the smallest area can be selected (that is, the line enclosing the smallest area is considered to be the actual edge of the card), and the line corresponding to that gray region is determined as the effective straight line of the area.
  • in this way, the present embodiment can obtain the effective straight line of each cutting area, thereby facilitating the subsequent fitting of the corresponding frame fitting line according to the effective straight lines of multiple areas, and improving the accuracy of the card frame detection.
  • the step S50 includes:
  • Step S51 randomly selecting two test points from the edge area vertices of the same side to construct a test line, and calculating the test distance from each edge area vertex of the same side to the test line;
  • in this embodiment, the intersections of the effective straight line of each cutting area with the edges of the area form two edge area vertices; if the edge area of one side is cut into x (x>1) cutting areas, the edge area of that side corresponds to a total of 2x edge area vertices.
  • the vertices of the edge area can be fitted separately to obtain the corresponding frame fitting straight line.
  • the straight line fitting in this embodiment uses the RANSAC straight line fitting method, which is beneficial for improving the anti-noise ability and reducing the adverse effect of the complex background of the card image or other interference factors on the frame recognition. Since the card has four edges, the vertices of each edge area are fitted separately.
  • the following takes one of the edges as an example for description.
  • two test points can be randomly selected to construct the corresponding test line, and then the test distances from the vertices of the other edge areas to the test line can be calculated respectively.
  • Step S52 Determine the edge area vertices whose test distance is less than the distance threshold, and obtain a test point set according to the edge area vertices whose test distance is less than the distance threshold;
  • at this time, the edge area vertices whose test distance is less than the distance threshold will be determined; these points form a test point set, and the number of test points included in the test point set is recorded.
  • Step S53 Repeat the above steps until n test point sets are obtained, where n is a positive integer greater than 1;
  • Step S54 Determine the number of test points in each test point set, and determine the test point set with the largest number of test points as the target point set;
  • when n test point sets are obtained, the number of test points in each test point set can be determined respectively, and the test point set with the largest number of test points is determined as the target point set.
  • step S55 a straight line fitting is performed according to the test points of the target point set to obtain a corresponding frame fitting straight line.
  • a straight line fitting can be performed based on the test points of the target point set to obtain the corresponding frame fitting straight line.
  • as for the vertices of the edge area that do not belong to the target point set, they can be considered as noise data.
  • This embodiment adopts the RANSAC straight-line fitting method, which helps improve noise resistance and reduces the adverse effect of a complex card-image background or other interference factors on frame recognition.
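As a rough illustration of steps S51–S55 (not the patent's actual implementation), a RANSAC line fit over edge-area vertices might look like the following; the vertex coordinates, iteration count, and 2.0-pixel distance threshold are assumed example values:

```python
import random
import numpy as np

def ransac_fit_line(vertices, n_iters=50, dist_thresh=2.0, seed=0):
    """Fit a border line to edge-area vertices while tolerating noise.

    vertices: iterable of (x, y) edge-area vertex coordinates.
    Returns (slope, intercept) fitted on the largest consensus set.
    """
    pts = np.asarray(vertices, dtype=float)
    rng = random.Random(seed)
    best_set = None
    for _ in range(n_iters):
        # Step S51: randomly pick two test points and build the test line.
        i, j = rng.sample(range(len(pts)), 2)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        # Line as a*x + b*y + c = 0 so point-line distance is simple.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        # Step S52: keep vertices whose distance to the line is below threshold.
        dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / norm
        inliers = pts[dist < dist_thresh]
        # Steps S53/S54: retain the largest test point set as the target set.
        if best_set is None or len(inliers) > len(best_set):
            best_set = inliers
    # Step S55: least-squares fit on the target set; the rest is noise data.
    slope, intercept = np.polyfit(best_set[:, 0], best_set[:, 1], 1)
    return slope, intercept

# Vertices roughly on y = 0.5*x + 3, plus two noise points.
verts = [(0, 3), (2, 4), (4, 5), (6, 6), (8, 7), (3, 20), (7, -5)]
k, b = ransac_fit_line(verts)
```

With the two outliers excluded by the consensus step, the fit recovers the underlying line despite the noise.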
  • the embodiment of the present application also provides a card frame detection device.
  • FIG. 3 is a schematic diagram of the functional modules of the first embodiment of the card frame detection device of the present application.
  • the card frame detection device includes:
  • the image acquisition module 10 is configured to acquire an original card image, and preprocess the original card image to obtain an image to be recognized;
  • the model input module 20 is configured to input the image to be recognized into a preset detection model, extract corresponding image feature information from the preset detection model, and calculate frame line parameters according to the image feature information;
  • the area interception module 30 is configured to obtain the detection frame line of the original card image according to the frame line parameter, and respectively intercept the edge area of each side of the card in the original card image according to the detection frame line;
  • the straight line detection module 40 is used for cutting the edge area of each side to obtain the cutting area of each side, and detecting the straight line of each cutting area to obtain the effective straight line of each cutting area;
  • the straight line fitting module 50 is used to obtain the vertices of the edge area of the same side according to the effective straight lines of the same side, and to fit the vertices of the edge area of the same side to obtain the corresponding frame fitting straight line;
  • the straight line combination module 60 is used to combine the frame corresponding to each side to fit a straight line to obtain the card frame.
  • Each virtual function module of the above card frame detection device is stored in the memory 1005 of the card frame detection device shown in FIG. 1 and is used to implement all the functions of the computer program; when the modules are executed by the processor 1001, the card frame detection function is realized.
  • the image acquisition module 10 includes:
  • the image scaling unit is used to obtain the RGB color original card image through the camera, and scale the RGB color original card image to obtain a scaled image of a preset size;
  • the pixel reduction unit is used to reduce the pixel value of the zoomed image to obtain the image to be recognized.
  • The preset detection model includes an encoder-decoder and an arithmetic unit.
  • the model input module 20 includes:
  • the feature extraction unit is configured to input the image to be recognized into the preset detection model, and perform feature extraction on the image to be recognized through the encoder-decoder to obtain feature images of each side of the card;
  • the weighting operation unit is used to perform a weighted least square operation on the feature image through an arithmetic unit to obtain the corresponding frame line parameter.
  • The weighting operation unit is specifically configured to establish a rectangular coordinate system according to the feature image, obtain the feature-point coordinates of each feature point in the feature image from that coordinate system, establish a system of equations from those coordinates, and solve the system to obtain the border line parameters of the original card image.
  • the straight line detection module 40 includes:
  • the edge detection unit is used to perform edge detection on the edge area of each side to obtain the edge contour of each side;
  • the edge cutting unit is used to cut the edge area of each side to obtain the cutting area of each side;
  • a binary processing unit configured to determine the gray-scale threshold of each cut area through the Otsu method and the edge contour, and perform binarization processing on each cut area according to the gray-scale threshold;
  • the straight line detection unit is used to perform straight line detection on each of the binarized cutting areas to obtain the effective straight line of each cutting area.
  • The straight line detection unit is specifically configured to perform straight line detection on each binarized cutting area to obtain the area straight lines of each cutting area; if there is a target cutting area whose number of area straight lines is greater than one, obtain the area of the gray region enclosed by each area straight line in the target cutting area; and determine the area straight line corresponding to the gray region with the smallest area as the effective straight line of the target cutting area.
  • the straight line fitting module 50 includes:
  • The point set acquisition unit is used to randomly select two test points from the edge area vertices of the same side to construct a test line, and calculate the test distance from each edge area vertex of the same side to the test line; determine the edge area vertices whose test distance is less than the distance threshold, and obtain a test point set from those vertices; and repeat the above steps until n test point sets are obtained, where n is a positive integer greater than 1;
  • the target determination unit is used to determine the number of test points in each test point set, and determine the test point set with the largest number of test points as the target point set;
  • the straight line fitting unit is used to perform straight line fitting according to the test points of the target point set to obtain the corresponding frame fitting straight line.
  • each module in the above-mentioned card frame detection device corresponds to each step in the embodiment of the above-mentioned card frame detection method, and the functions and realization processes thereof will not be repeated here.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium may be volatile or non-volatile.
  • a computer program is stored on the computer-readable storage medium of the present application, and when the computer program is executed by a processor, the steps of the method for detecting the card frame as described above are realized.
  • the method implemented when the computer program is executed can refer to the various embodiments of the card frame detection method of the present application, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to artificial intelligence and provides a card frame detection method, apparatus, device, and readable storage medium. The method includes: acquiring an original card image and preprocessing it to obtain an image to be recognized; obtaining the frame line parameters of the card through a detection model; and performing post-processing of segment-wise line detection and fitting based on the frame line parameters to obtain the card frame in the image. This application also relates to blockchain technology: the original card image and/or the card frame can be stored in a blockchain. Detecting the card frame through image edge detection helps avoid misjudgment against complex backgrounds or with blurred edges, improves the accuracy of card frame detection, and facilitates subsequent processing such as the extraction of card information.

Description

卡片边框检测方法、装置、设备及可读存储介质
本申请要求于2020年07月30日提交中国专利局、申请号为CN 202010756730.X、名称为“卡片边框检测方法、装置、设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理领域,尤其涉及一种卡片边框检测方法、装置、设备及可读存储介质。
背景技术
随着身份证、社保卡和银行卡等各种卡片大量的使用,相关的卡片识别服务也随之而来。其中各种卡片的边框(边缘)检测算法是卡片识别算法中很重要的一环。现有的卡片边框检测算法,主要是设置各种条件过滤掉一些边缘信息,得到卡片边框。
技术问题
发明人意识到,这种方法在复杂背景或者边缘模糊的情况下,容易出现误判,导致边框检测错误,影响后续对卡片信息的提取等其他服务的运行。
技术解决方案
本申请实施例提供一种卡片边框检测方法,所述卡片边框检测方法包括:
获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
组合各边对应的边框拟合直线,得到卡片边框。
本申请实施例还提供一种卡片边框检测装置,所述卡片边框检测装置包括:
图像获取模块,用于获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
模型输入模块,用于将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
区域截取模块,用于根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
直线检测模块,用于对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
直线拟合模块,用于根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
直线组合模块,用于组合各边对应的边框拟合直线,得到卡片边框。
本申请实施例还提供一种卡片边框检测设备,所述卡片边框检测设备包括处理器、存储器、以及存储在所述存储器上并可被所述处理器执行的计算机程序,其中所述计算机程序被所述处理器执行时,实现以下步骤:
获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
组合各边对应的边框拟合直线,得到卡片边框。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有计算机程序,其中所述计算机程序被处理器执行时,实现以下步骤:
获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
组合各边对应的边框拟合直线,得到卡片边框。
附图说明
图1为本申请实施例方案中涉及的卡片边框检测设备的硬件结构示意图;
图2为本申请卡片边框检测方法第一实施例的流程示意图;
图3为本申请卡片边框检测装置第一实施例的功能模块示意图。
如下具体实施方式将结合上述附图进一步说明本申请。
本发明的实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例涉及的卡片边框检测方法主要应用于卡片边框检测设备,该卡片边框检测设备可以是手机、平板电脑、个人计算机(personal computer,PC)、笔记本电脑、服务器等具有数据处理功能的设备。
参照图1,图1为本申请实施例方案中涉及的卡片边框检测设备的硬件结构示意图。本申请实施例中,该卡片边框检测设备可以包括处理器1001(例如中央处理器Central Processing Unit,CPU),通信总线1002,用户接口1003,网络接口1004,存储器1005。其中,通信总线1002用于实现这些组件之间的连接通信;用户接口1003可以包括显示屏(Display)、输入单元比如键盘(Keyboard);网络接口1004可选的可以包括标准的有线接口、无线接口(如无线保真WIreless-FIdelity,WI-FI接口);存储器1005可以是高速随机存取存储器(random access memory,RAM),也可以是稳定的存储器(non-volatile memory),例如磁盘存储器,存储器1005可选的还可以是独立于前述处理器1001的存储装置,存储器1005可以是易失性的,也可以是非易失性的。本领域技术人员可以理解,图1中示出的硬件结构并不构成对本申请的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
继续参照图1,图1中作为一种计算机可读存储介质的存储器1005可以包括操作系统、网络通信模块以及计算机程序。在图1中,网络通信模块可用于连接预设数据库,与数据库进行数据通信;而处理器1001可以调用存储器1005中存储的计算机程序,并执行本申请实施例提供的卡片边框检测方法。
基于上述的硬件架构,提出本申请卡片边框检测方法的各实施例。
本申请实施例提供了一种卡片边框检测方法。
参照图2,图2为本申请卡片边框检测方法第一实施例的流程示意图。
本实施例中,所述卡片边框检测方法包括以下步骤:
步骤S10,获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
随着身份证、社保卡和银行卡等各种卡片大量的使用，相关的卡片识别服务也随之而来。其中各种卡片的边框（边缘）检测算法是卡片识别算法中很重要的一环。现有的卡片边框检测算法，主要是设置各种条件过滤掉一些边缘信息，得到卡片边框，这种方法在复杂背景或者边缘模糊的情况下，容易出现误判，导致边框检测错误，影响后续对卡片信息的提取等其他服务的运行。对此，本实施例提出一种卡片边框检测方法，获取到原始卡片图像后进行预处理，然后通过检测模型获取卡片的边框直线参数，再基于边框直线参数进行分段直线检测和拟合的后处理，得到图像中的卡片边框，从而通过图像边缘检测的方式检测得到卡片的边框，有利于避免在复杂背景或边缘模糊情况下误判的情况，提高了卡片边框检测的准确性，为后续卡片信息的提取等处理提供了方便。
本实施例中的卡片边缘检测方法是由卡片边缘检测设备实现的，该卡片边框检测设备可以是手机、平板电脑、个人计算机、服务器等设备，本实施例中以手机为例进行说明。本实施例在进行检测前，首先将获取（构建）一个用以从图像中检测出卡片轮廓的网络模型，该模型可称为检测模型。本实施例的检测模型是基于deeplab v3模型改进而来；原始的deeplab v3模型比较大，对于数据处理（从输入到输出）的过程需要花费一定的时间、且对硬件性能具有较高要求，而本实施例是通过手机进行检测，因此对deeplab v3模型进行了改进，得到检测模型。检测模型包括编解码器和运算器，其中编解码器又包括编码器和解码器。当然，在构建检测模型时，需要通过一定数量的图像样本进行训练，以得到符合一定要求的检测模型。
在得到检测模型时,手机可先获取将要检测的原始卡片图像,该原始卡片图像,可以是手机调用摄像头拍摄得到,也可以是从网络上下载或从其它终端传输得到的;当然该原始卡片图像中包括有卡片。手机获取到原始卡片图像时,首先将会对原始卡片图像进行预处理,以使得原始卡片图像满足检测模型的输入要求,同时也有利于提高检测模型的处理效率,降低手机的性能压力。
进一步的,所述步骤S10包括:
步骤S11,通过摄像头获取RGB彩色原始卡片图像,并对RGB彩色原始卡片图像进行缩放,得到预设尺寸的缩放图像;
本实施例中的原始卡片图像可以是手机通过摄像头拍摄得到的，拍摄得到的图像可以是RGB彩色形式的原始卡片图像。对于该拍摄得到的RGB彩色原始卡片图像，用户在进行拍摄时不一定是按照检测模型要求的输入尺寸要求来拍摄的，因此需要对RGB彩色原始卡片图像进行缩放，得到预设尺寸的缩放图像。例如，将RGB彩色原始卡片图像缩小（或放大）为128*256尺寸的缩放图像。
步骤S12,降低所述缩放图像的像素值,得到待识别图像。
在得到缩放图像后,还将对缩放图像的像素值进行调整,降低缩放图像的像素值,得到待识别图像,以便后期进行图像处理,具体降低的方式可以是用缩放图像的像素值除以255,得到新的像素值;当然还可以是采用二值化的处理等。通过上述方式,可通过手机拍摄的方式获取到原始卡片图像,并对原始卡片图像进行缩放和降低像素的预处理,以使得原始卡片图像满足检测模型的输入要求,以便后期进行图像处理,降低手机的性能压力。
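As an illustrative sketch of the preprocessing described above (scale to a preset size, then divide the pixel values by 255), the following assumes the 128*256 example size and uses nearest-neighbour scaling in place of whatever interpolation a real implementation would choose:

```python
import numpy as np

def preprocess(raw_rgb, target_hw=(128, 256)):
    """Scale an RGB capture to the preset size, then divide by 255."""
    img = np.asarray(raw_rgb, dtype=np.float32)
    h, w = img.shape[:2]
    th, tw = target_hw
    # Nearest-neighbour scaling: map every target pixel back to a source pixel.
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    scaled = img[rows][:, cols]
    # Reduce the pixel values so the detection model sees inputs in [0, 1].
    return scaled / 255.0

dummy = np.full((480, 640, 3), 255, dtype=np.uint8)   # stand-in camera frame
to_recognize = preprocess(dummy)
```

The output shape matches the model's preset input size, and all values lie in [0, 1].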
步骤S20,将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
本实施例中,在得到待识别图像时,即可将待识别图像输入至检测模型,通过检测模型提取得到对应的图像特征信息,特征提取的过程,可以包括对待识别图像的卷积、池化等处理,以从待识别图像中提取出能够表示图像特征的图像特征信息。在得到图像特征信息时,将根据图像特征信息计算得到边框直线参数,该边框直线参数用以描述图像中卡片边框的轮廓信息,例如可以是卡片边框直线表达式、卡片的顶点坐标、卡片所在区域的位置信息等。
相应的,所述预设检测模型包括编解码器和运算器,所述步骤S20包括:
步骤S21,将所述待识别图像输入至预设检测模型,通过编解码器对待识别图像进行特征提取,得到卡片各边的特征图像;
本实施例中的检测模型包括编解码器和运算器，编解码器又包括编码器（Encoder）和解码器（Decoder），所述图像特征信息可以是特征图像的形式。具体的，Encoder可以采用shufflenet_0.5网络，shufflenet是一种轻量级的网络模型，包括分组卷积部分和通道混淆部分，用以对图像进行图像特征的提取；由于本实施例是应用于手机，而手机的处理性能具有一定的限制，为了提高处理效率，降低手机的性能压力，本实施例是对shufflenet进行了缩减，将宽度设置为原来的0.5倍，也即得到shufflenet_0.5，从而减少了数据处理量；待识别图像通过编码器进行编码后，得到的一次特征图像与待识别图像相比，其大小具有一定的缩减，例如待识别图像是128*256尺寸，而一次特征图像为32*64尺寸。而Decoder是基于deeplab v3模型实现的，在deeplab v3模型的ASPP（核心模块）中，包括有五个branch，本实施例的decoder可以是只取其中的2个branch，以减少数据处理量；在得到一次特征图像后，将通过Decoder对一次特征图像进行变换（包括卷积和池化等处理），从而得到对应的特征图像（feature map）；feature map与一次特征图像相比，其大小具有一定的放大，例如一次特征图像是32*64尺寸，而特征图像是128*256尺寸。
步骤S22,通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数。
本实施例中，在得到feature map时，可通过运算器对feature map进行加权最小二乘运算（Weighted_least_squares），从而对feature map进行分析，得到该特征图像对应的边框直线参数，本实施例中，该边框直线参数用以描述图像中卡片边框的轮廓信息。其中，由于卡片具有四条边框，为了提高分析的准确性，在提取得到特征图像时，可以是对每条边框输出对应的feature map，然后对每个feature map分别进行加权最小二乘运算，得到对应的边框直线参数。具体的，对于每个feature map，可先以某一点为坐标原点建立坐标系，例如feature map的中心，然后确定feature map各特征点的特征坐标，并建立对应的方程组：W*[y_map, 1] = A*W*x_map；在该方程组中，W表示预设特征值，x_map为x轴坐标的特征值，y_map为y轴坐标的特征值，A表示边框直线参数（矩阵或集合的方式，如直线y=ax+b中的a和b）；通过求解上述方程组，可得到A = inv(T(WY)*WY)*(T(WY)*WX)，其中T(WY)为WY的转置，inv(T(WY)*WY)为(T(WY)*WY)的逆。
通过上述方式，手机可对待识别图像进行分析，得到边框直线参数，从而根据边框直线参数粗略确定卡片边框的位置，并获取到原始卡片图像中的检测边框直线；此外，由于检测模型是端到端的模型，因此还有利于提高检测的精度。
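A hedged sketch of the weighted least-squares step: one plausible reading of the system above applies the weight W to both sides, mirroring W*[y_map, 1] = A*W*x_map, and then solves the resulting normal equations for A = [a, b]. The coordinates and weights below are made-up example values, not data from the patent:

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Weighted least-squares fit of the border line y = a*x + b.

    x, y: feature-point coordinates; w: per-point weights (for example the
    feature-map responses W in the text).  The weight multiplies both the
    design matrix and the targets, and the normal equations
    (WX^T WX) A = WX^T Wy are solved for A = [a, b].
    """
    x, y, w = (np.asarray(v, dtype=float) for v in (x, y, w))
    X = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x_map, 1]
    WX = X * w[:, None]                          # weighted design matrix
    Wy = w * y                                   # weighted targets
    a, b = np.linalg.solve(WX.T @ WX, WX.T @ Wy)
    return a, b

# Feature points exactly on y = 2x + 1; any positive weights recover it.
a, b = weighted_line_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0],
                         [1.0, 2.0, 1.0, 2.0])
```

Because the sample points are exactly collinear, the weighted fit returns the underlying slope and intercept regardless of the weights.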
步骤S30,根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
本实施例中，在得到边框直线参数时，手机即可根据边框直线参数粗略确定卡片边框的位置，并获取到原始卡片图像中的检测边框直线。值得说明的是，本实施例中的检测边框直线是根据卡片边框的大致位置获取的，为了进一步提高边框检测的准确性，本实施例后续将根据检测边框直线进行进一步的分析，以更准确地定位卡片边框。在得到检测边框直线时，可以基于检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域，例如，以检测边框直线为基线，分别向线的两侧平移一段距离，得到边缘区域；可以认为，卡片各边的实际边缘即位于边缘区域中。当然，在获取边缘区域时，是每条检测边框直线分别获取对应的边缘区域，而一张卡片通常是矩形的，具有四条边，也即有四条检测边框直线，进而获取到四个边缘区域。
步骤S40,对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
本实施例中，在得到各边的边缘区域时，对于每一个边缘区域，可分别进行横向切割，得到若干个切割区域。由于卡片各边的实际边缘位于边缘区域中，那么在对边缘区域切割时，也可认为是对卡片边框所在的直线进行了切割，得到若干段直线（线段）；然后，可分别对这些切割区域进行直线检测，得到各切割区域的区域有效直线。其中，在进行直线检测时，可以根据实际需要选取合适的算法，例如可以是基于霍夫（Hough）变换进行检测，具体的，可将图像（区域）上的点变换到一组参数空间上，根据参数空间点的累计结果找到极大值对应的解，该解即对应所求直线在预设坐标系下的斜率k与常数b，从而检测到该直线；当然还可以是基于LSD快速直线检测算法等方式进行直线检测。
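The Hough-transform idea described above (map points into a parameter space, accumulate votes, and take the accumulator maximum) can be sketched as follows; real code would normally call an optimized routine such as OpenCV's HoughLines, so this minimal (rho, theta) accumulator is only for illustration:

```python
import numpy as np

def hough_strongest_line(points, n_thetas=180, rho_res=1.0):
    """Minimal Hough transform: each point votes for every (rho, theta)
    line passing through it; the accumulator maximum is the detected line."""
    pts = np.asarray(points, dtype=float)
    thetas = np.deg2rad(np.arange(n_thetas))
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair.
    rho = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    rho_idx = np.round(rho / rho_res).astype(int)
    offset = -rho_idx.min()
    acc = np.zeros((rho_idx.max() + offset + 1, n_thetas), dtype=int)
    cols = np.arange(n_thetas)
    for row in rho_idx + offset:        # one vote per point per angle
        acc[row, cols] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return (r - offset) * rho_res, float(np.rad2deg(thetas[t]))

# Edge pixels on the vertical line x = 4 (spread in y to keep bins distinct).
rho, theta_deg = hough_strongest_line([(4, 0), (4, 25), (4, 50), (4, 75), (4, 100)])
```

For the vertical line x = 4 the accumulator peak lands at theta = 0 degrees with rho = 4, i.e. the normal-form parameters of the line.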
步骤S50,根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
本实施例中,在得到各切割区域的区域有效直线时,每个切割区域的区域有效直线与区域边缘相交都将形成两个边缘区域顶点;若一个边的边缘区域被切割为x(x>1)个切割区域,则该边的边缘区域共对应了2x个边缘区域顶点。对于每一个边缘区域,可分别对边缘区域顶点进行拟合,从而得到对应的边框拟合直线。由于有4个边缘区域,因此将得到4个边框拟合直线。
步骤S60,组合各边对应的边框拟合直线,得到卡片边框。
本实施例中,当得到各边框拟合直线时,即可组合各边框拟合直线,从而得到一个闭合的边框,该边框即为卡片边框。
进一步的，当得到卡片边框时，可基于该卡片边框对边框内的卡片图像进行识别，进而获取卡片信息，例如，可通过OCR（Optical Character Recognition，光学字符识别）模块对该卡片图像中的每个单字符图像进行识别，其中，该OCR模块可以通过CNN（Convolutional Neural Networks，卷积神经网络）等技术对该单字符图像进行分类识别。如此，即实现了对该卡片中的卡片信息进行识别。通过光学字符识别算法对所述卡片边框内的卡片图像进行识别，获得所述卡片图像中的字符信息，使得用户不需要主动地输入卡片信息，减少了用户的操作。
需要强调的是，为进一步保证上述原始卡片图像和/或卡片边框的私密和安全性，上述的原始卡片图像和/或卡片边框对应的边框位置信息还可以存储于一区块链的节点中。当然，在得到卡片信息时，卡片信息也可以是存储于一区块链的节点中。
本实施例所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
本实施例获取原始卡片图像，并对所述原始卡片图像进行预处理，得到待识别图像；将所述待识别图像输入至预设检测模型，通过所述预设检测模型提取得到对应的图像特征信息，并根据所述图像特征信息计算得到边框直线参数；根据所述边框直线参数获取所述原始卡片图像的检测边框直线，并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域；对各边的边缘区域进行切割，得到各边的切割区域，并对各切割区域进行直线检测，得到各切割区域的区域有效直线；根据同边的有效直线得到同边的边缘区域顶点，并对同边的边缘区域顶点进行拟合，得到对应的边框拟合直线；组合各边对应的边框拟合直线，得到卡片边框。通过以上方式，本实施例获取到原始卡片图像后进行预处理，然后通过检测模型获取卡片的边框直线参数，再基于边框直线参数进行分段直线检测和拟合的后处理，得到图像中的卡片边框，从而通过图像边缘检测的方式检测得到卡片的边框，有利于避免在复杂背景或边缘模糊情况下误判的情况，提高了卡片边框检测的准确性，为后续卡片信息的提取等处理提供了方便。
基于上述卡片边框检测方法第一实施例,提出本申请卡片边框检测方法第二实施例。
本实施例中,所述步骤S40包括:
步骤S41,分别对各边的边缘区域进行边缘检测,得到各边的边缘轮廓;
本实施例中,在得到各边的边缘区域时,对于每个边缘区域进行边缘检测,得到各边的边缘轮廓;该边缘检测可以是基于sobel算子的方式实现的。Sobel算子包括横向及纵向两组矩阵,将之与边缘区域(图像)做平面卷积,可分别得到横向及纵向的亮度差分近似值,然后再基于某点亮度差分近似值计算梯度,若梯度大于某一阈值,则该点为边缘点。当然,为了减少运算量和内置函数所用的空间,可以是将所有的边缘区域变换为同一方向,然后仅用一个算子来检测边缘轮廓(例如通过横向的算子来检测竖向的边框轮廓)。
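A minimal sketch of the Sobel step described above: filter with the horizontal and vertical kernels, combine the responses into a gradient magnitude, and threshold it to mark edge points. The 100.0 threshold is an arbitrary example value:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter3x3(img, k):
    """'Valid' 3x3 sliding-window filtering of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def sobel_edge_map(gray, thresh=100.0):
    """Horizontal/vertical Sobel responses -> gradient magnitude -> edge map.
    A point whose gradient magnitude exceeds the threshold is an edge point."""
    gx = filter3x3(gray, SOBEL_X)
    gy = filter3x3(gray, SOBEL_Y)
    return np.hypot(gx, gy) > thresh

# A vertical step edge: left half 0, right half 255.
patch = np.zeros((8, 8)); patch[:, 4:] = 255.0
edges = sobel_edge_map(patch)
```

On this step image, only the two columns straddling the intensity jump are marked as edge points; the flat regions on either side stay below the threshold.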
步骤S42,对各边的边缘区域进行切割,得到各边的切割区域;
本实施例中,在得到边缘轮廓后,对于每一个边缘区域,可分别进行横向切割,得到若干个切割区域。值得说明的是,本实施例中是先进行边缘检测,再进行分割;实际中也可以是先切割,再进行边缘检测。
步骤S43,通过大津法和所述边缘轮廓确定各切割区域的灰度阈值,并根据所述灰度阈值对各切割区域进行二值化处理;
在得到各切割区域后，将通过大津法（OTSU）和各边缘区域的边缘轮廓确定各切割区域的灰度阈值，该灰度阈值可认为是前景与背景的分割阈值。具体的，对于一个切割区域，假设前景的像素点数占切割区域的比例为w0，前景像素点的平均灰度为u0，背景的像素点数占切割区域的比例为w1，背景像素点的平均灰度为u1，类间方差为g，则有
g=w0*w1*(μ0-μ1)²
可采用遍历的方法不断变换分割阈值并计算对应的w0、u0、w1、u1，当g最大时，此时的分割阈值即为灰度阈值。而当得到灰度阈值时，即可根据该灰度阈值对切割区域进行二值化处理，将切割区域分为白和灰（或黑）两种区域。
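The exhaustive search over thresholds that maximises g = w0*w1*(u0-u1)^2, followed by binarization, might be sketched as below; the sample region and its two gray levels are made-up values:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search the threshold maximising the between-class
    variance g = w0*w1*(u0-u1)^2, as in the formula above."""
    pixels = np.asarray(gray).ravel().astype(float)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        fg = pixels[pixels >= t]          # foreground candidates
        bg = pixels[pixels < t]           # background candidates
        if fg.size == 0 or bg.size == 0:
            continue
        w0, w1 = fg.size / pixels.size, bg.size / pixels.size
        g = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def binarize(gray, t):
    """Split the cut area into white (255) and black (0) by the threshold."""
    return np.where(np.asarray(gray) >= t, 255, 0)

# Two well-separated gray populations: Otsu lands between 50 and 200.
region = np.array([[50] * 4 + [200] * 4] * 4)
t = otsu_threshold(region)
binary = binarize(region, t)
```

Any threshold strictly between the two populations maximises g here, so the search returns a value in that gap and the binarization cleanly separates the halves.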
步骤S44,对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线。
在进行二值化后，可对各切割区域分别进行直线检测，得到各切割区域的区域有效直线。值得说明的是，有的切割区域可能会检测到多条直线，这有可能是卡片背景复杂或其它干扰因素导致，此时需要从中选取一条作为该切割区域的区域有效直线。具体的，若某一切割区域在进行直线检测后，该区域检测得到的区域直线数量大于一，则该切割区域可记为目标切割区域；此时，对于该目标切割区域的每一条区域直线，可先确定其与目标切割区域边缘所围成的灰度区域，并确定各灰度区域的面积；背景或干扰因素与卡片重叠时，一般导致检测到的卡片面积与实际边框相比偏大，因此可选出面积最小的灰度区域（即认为最小面积的边才是卡片的实际边），并将该灰度区域所对应的区域直线确定为区域有效直线。
通过以上方式,本实施例可在各切割区域中得到对应区域有效直线,从而便于后续根据多个区域有效直线拟合得到对应的边框拟合直线,提高了卡片边框检测的准确性。
基于上述卡片边框检测方法第一或第二实施例,提出本申请卡片边框检测方法第三实施例。
本实施例中,所述步骤S50包括:
步骤S51,从同边的边缘区域顶点中随机选取两个测试点以构建测试线,并计算同边的各边缘区域顶点到所述测试线的测试距离;
本实施例在得到各切割区域的区域有效直线时，每个切割区域的区域有效直线与区域边缘相交都将形成两个边缘区域顶点；若一个边的边缘区域被切割为x(x>1)个切割区域，则该边的边缘区域共对应了2x个边缘区域顶点。对于每一个边缘区域，可分别对边缘区域顶点进行拟合，从而得到对应的边框拟合直线。本实施例的直线拟合将采用RANSAC直线拟合的方式，有利于提高抗噪能力，降低卡片图像复杂背景或其它干扰因素对边框识别带来的不利影响。由于卡片有四个边，因此在拟合时，是分别对各边缘区域的边缘区域顶点进行拟合，以下以其中一个边为例进行说明。对于边缘区域顶点所组成的点集，首先可随机取两个测试点构建对应测试线，然后分别计算其它边缘区域顶点到该测试线的测试距离。
步骤S52,确定测试距离小于距离阈值的边缘区域顶点,并根据测试距离小于距离阈值的边缘区域顶点得到测试点集;
当得到其它边缘区域顶点到该测试线的测试距离时，将进一步确定测试距离小于距离阈值的边缘区域顶点，这些点可组成一个测试点集，并记录该测试点集中包括的测试点数量。
步骤S53,重复执行上述步骤,直至得到n个测试点集,n为大于1的正整数;
重复上述S51和S52步骤(也即重新选取测试点并构建直线,然后获取测试点集),直至得到n个测试点集,n为大于1的正整数。
步骤S54,确定各测试点集中的测试点数量,并将测试点数量最多的测试点集确定为目标点集;
当得到n个测试点集时,可分别确定各测试点集中的测试点数量,并将测试点数量最多的测试点集确定为目标点集。
步骤S55,根据目标点集的测试点进行直线拟合,得到对应的边框拟合直线。
在得到目标点集时,可基于目标点集的测试点进行直线拟合,得到对应的边框拟合直线。而对于不属于目标点集的边缘区域顶点,则可以认为是噪声数据。
通过以上方式,本实施例采用ransac直线拟合的方式,有利于提高抗噪能力,降低卡片图像复杂背景或其它干扰因素对边框识别带来的不利影响。
此外,本申请实施例还提供一种卡片边框检测装置。
参照图3,图3为本申请卡片边框检测装置第一实施例的功能模块示意图。
本实施例中,所述卡片边框检测装置包括:
图像获取模块10,用于获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
模型输入模块20,用于将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
区域截取模块30,用于根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
直线检测模块40,用于对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
直线拟合模块50,用于根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
直线组合模块60,用于组合各边对应的边框拟合直线,得到卡片边框。
其中,上述卡片边框检测装置的各虚拟功能模块存储于图1所示卡片边框检测设备的存储器1005中,用于实现计算机程序的所有功能;各模块被处理器1001执行时,可实现卡片边框检测的功能。
进一步的,所述图像获取模块10包括:
图像缩放单元,用于通过摄像头获取RGB彩色原始卡片图像,并对RGB彩色原始卡片图像进行缩放,得到预设尺寸的缩放图像;
像素降低单元,用于降低所述缩放图像的像素值,得到待识别图像。
进一步的,所述预设检测模型包括编解码器和运算器,所述模型输入模块20包括:
特征提取单元,用于将所述待识别图像输入至预设检测模型,通过编解码器对待识别图像进行特征提取,得到卡片各边的特征图像;
加权运算单元,用于通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数。
进一步的,所述加权运算单元,具体用于根据所述特征图像建立直角坐标系,根据所述直角坐标系获取所述特征图像中各特征点的特征点坐标;根据各所述特征点的特征点坐标建立方程组,求解所述方程组获得所述原始卡片图像的边框直线参数。
进一步的,所述直线检测模块40包括:
边缘检测单元,用于分别对各边的边缘区域进行边缘检测,得到各边的边缘轮廓;
边缘切割单元,用于对各边的边缘区域进行切割,得到各边的切割区域;
二值处理单元,用于通过大津法和所述边缘轮廓确定各切割区域的灰度阈值,并根据所述灰度阈值对各切割区域进行二值化处理;
直线检测单元,用于对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线。
进一步的,所述直线检测单元,具体用于对二值化的各切割区域分别进行直线检测,得到各切割区域的区域直线;若存在区域直线数大于一的目标切割区域,则获取目标切割区域中各区域直线分别围成的灰度区域面积;将面积最小的灰度区域面积对应的区域直线确定为目标切割区域的区域有效直线。
进一步的,所述直线拟合模块50,包括:
点集获取单元,用于从同边的边缘区域顶点中随机选取两个测试点以构建测试线,并计算同边的各边缘区域顶点到所述测试线的测试距离;确定测试距离小于距离阈值的边缘区域顶点,并根据测试距离小于距离阈值的边缘区域顶点得到测试点集;重复执行上述步骤,直至得到n个测试点集,n为大于1的正整数;
目标确定单元,用于确定各测试点集中的测试点数量,并将测试点数量最多的测试点集确定为目标点集;
直线拟合单元,用于根据目标点集的测试点进行直线拟合,得到对应的边框拟合直线。
其中,上述卡片边框检测装置中各个模块的功能实现与上述卡片边框检测方法实施例中各步骤相对应,其功能和实现过程在此处不再一一赘述。
此外,本申请实施例还提供一种计算机可读存储介质,计算机可读存储介质可以是易失性的,也可以是非易失性的。
本申请计算机可读存储介质上存储有计算机程序,其中所述计算机程序被处理器执行时,实现如上述的卡片边框检测方法的步骤。
其中,计算机程序被执行时所实现的方法可参照本申请卡片边框检测方法的各个实施例,此处不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者系统不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者系统所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者系统中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种卡片边框检测方法,其中,所述卡片边框检测方法包括:
    获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
    将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
    根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
    对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
    根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
    组合各边对应的边框拟合直线,得到卡片边框。
  2. 如权利要求1所述的卡片边框检测方法,其中,所述预设检测模型包括编解码器和运算器,所述将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数的步骤包括:
    将所述待识别图像输入至预设检测模型,通过编解码器对待识别图像进行特征提取,得到卡片各边的特征图像;
    通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数。
  3. 如权利要求2所述的卡片边框检测方法,其中,所述通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数的步骤包括:
    根据所述特征图像建立直角坐标系,根据所述直角坐标系获取所述特征图像中各特征点的特征点坐标;
    根据各特征点的特征点坐标建立方程组,求解所述方程组获得所述原始卡片图像的边框直线参数。
  4. 如权利要求1所述的卡片边框检测方法,其中,所述对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    分别对各边的边缘区域进行边缘检测,得到各边的边缘轮廓;
    对各边的边缘区域进行切割,得到各边的切割区域;
    通过大津法和所述边缘轮廓确定各切割区域的灰度阈值,并根据所述灰度阈值对各切割区域进行二值化处理;
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线。
  5. 如权利要求4所述的卡片边框检测方法,其中,所述对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域直线;
    若存在区域直线数大于一的目标切割区域,则获取目标切割区域中各区域直线分别围成的灰度区域面积;
    将面积最小的灰度区域面积对应的区域直线确定为目标切割区域的区域有效直线。
  6. 如权利要求1所述的卡片边框检测方法,其中,所述对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线的步骤包括:
    从同边的边缘区域顶点中随机选取两个测试点以构建测试线,并计算同边的各边缘区域顶点到所述测试线的测试距离;
    确定测试距离小于距离阈值的边缘区域顶点,并根据测试距离小于距离阈值的边缘区域顶点得到测试点集;
    重复执行上述步骤,直至得到n个测试点集,n为大于1的正整数;
    确定各测试点集中的测试点数量,并将测试点数量最多的测试点集确定为目标点集;
    根据目标点集的测试点进行直线拟合,得到对应的边框拟合直线。
  7. 如权利要求1所述的卡片边框检测方法,其中,所述原始卡片图像和/或卡片边框对应的边框位置信息存储于区块链中。
  8. 一种卡片边框检测装置,其中,所述卡片边框检测装置包括:
    图像获取模块,用于获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
    模型输入模块,用于将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
    区域截取模块,用于根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
    直线检测模块,用于对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
    直线拟合模块,用于根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
    直线组合模块,用于组合各边对应的边框拟合直线,得到卡片边框。
  9. 一种卡片边框检测设备,其中,所述卡片边框检测设备包括处理器、存储器、以及存储在所述存储器上并可被所述处理器执行的计算机程序,其中所述计算机程序被所述处理器执行时,实现以下步骤:
    获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
    将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
    根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
    对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
    根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
    组合各边对应的边框拟合直线,得到卡片边框。
  10. 如权利要求9所述的卡片边框检测设备,其中,所述预设检测模型包括编解码器和运算器,所述将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数的步骤包括:
    将所述待识别图像输入至预设检测模型,通过编解码器对待识别图像进行特征提取,得到卡片各边的特征图像;
    通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数。
  11. 如权利要求10所述的卡片边框检测设备,其中,所述通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数的步骤包括:
    根据所述特征图像建立直角坐标系,根据所述直角坐标系获取所述特征图像中各特征点的特征点坐标;
    根据各特征点的特征点坐标建立方程组,求解所述方程组获得所述原始卡片图像的边框直线参数。
  12. 如权利要求9所述的卡片边框检测设备,其中,所述对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    分别对各边的边缘区域进行边缘检测,得到各边的边缘轮廓;
    对各边的边缘区域进行切割,得到各边的切割区域;
    通过大津法和所述边缘轮廓确定各切割区域的灰度阈值,并根据所述灰度阈值对各切割区域进行二值化处理;
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线。
  13. 如权利要求12所述的卡片边框检测设备,其中,所述对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域直线;
    若存在区域直线数大于一的目标切割区域,则获取目标切割区域中各区域直线分别围成的灰度区域面积;
    将面积最小的灰度区域面积对应的区域直线确定为目标切割区域的区域有效直线。
  14. 如权利要求9所述的卡片边框检测设备,其中,所述对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线的步骤包括:
    从同边的边缘区域顶点中随机选取两个测试点以构建测试线,并计算同边的各边缘区域顶点到所述测试线的测试距离;
    确定测试距离小于距离阈值的边缘区域顶点,并根据测试距离小于距离阈值的边缘区域顶点得到测试点集;
    重复执行上述步骤,直至得到n个测试点集,n为大于1的正整数;
    确定各测试点集中的测试点数量,并将测试点数量最多的测试点集确定为目标点集;
    根据目标点集的测试点进行直线拟合,得到对应的边框拟合直线。
  15. 如权利要求9所述的卡片边框检测设备,其中,所述原始卡片图像和/或卡片边框对应的边框位置信息存储于区块链中。
  16. 一种可读存储介质,其中,所述可读存储介质上存储有计算机程序,其中所述计算机程序被处理器执行时,实现以下步骤:
    获取原始卡片图像,并对所述原始卡片图像进行预处理,得到待识别图像;
    将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数;
    根据所述边框直线参数获取所述原始卡片图像的检测边框直线,并根据所述检测边框直线在所述原始卡片图像中分别截取卡片各边的边缘区域;
    对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线;
    根据同边的有效直线得到同边的边缘区域顶点,并对同边的边缘区域顶点进行拟合,得到对应的边框拟合直线;
    组合各边对应的边框拟合直线,得到卡片边框。
  17. 如权利要求16所述的可读存储介质,其中,所述预设检测模型包括编解码器和运算器,所述将所述待识别图像输入至预设检测模型,通过所述预设检测模型提取得到对应的图像特征信息,并根据所述图像特征信息计算得到边框直线参数的步骤包括:
    将所述待识别图像输入至预设检测模型,通过编解码器对待识别图像进行特征提取,得到卡片各边的特征图像;
    通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数。
  18. 如权利要求17所述的可读存储介质,其中,所述通过运算器对所述特征图像进行加权最小二乘运算,得到对应的边框直线参数的步骤包括:
    根据所述特征图像建立直角坐标系,根据所述直角坐标系获取所述特征图像中各特征点的特征点坐标;
    根据各特征点的特征点坐标建立方程组,求解所述方程组获得所述原始卡片图像的边框直线参数。
  19. 如权利要求16所述的可读存储介质,其中,所述对各边的边缘区域进行切割,得到各边的切割区域,并对各切割区域进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    分别对各边的边缘区域进行边缘检测,得到各边的边缘轮廓;
    对各边的边缘区域进行切割,得到各边的切割区域;
    通过大津法和所述边缘轮廓确定各切割区域的灰度阈值,并根据所述灰度阈值对各切割区域进行二值化处理;
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线。
  20. 如权利要求19所述的可读存储介质,其中,所述对二值化的各切割区域分别进行直线检测,得到各切割区域的区域有效直线的步骤包括:
    对二值化的各切割区域分别进行直线检测,得到各切割区域的区域直线;
    若存在区域直线数大于一的目标切割区域,则获取目标切割区域中各区域直线分别围成的灰度区域面积;
    将面积最小的灰度区域面积对应的区域直线确定为目标切割区域的区域有效直线。
PCT/CN2020/122132 2020-07-30 2020-10-20 卡片边框检测方法、装置、设备及可读存储介质 WO2021151319A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010756730.X 2020-07-30
CN202010756730.XA CN111899270B (zh) 2020-07-30 2020-07-30 卡片边框检测方法、装置、设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2021151319A1 true WO2021151319A1 (zh) 2021-08-05

Family

ID=73183745

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122132 WO2021151319A1 (zh) 2020-07-30 2020-10-20 卡片边框检测方法、装置、设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN111899270B (zh)
WO (1) WO2021151319A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372227A (zh) * 2021-12-31 2022-04-19 浙江大学舟山海洋研究中心 鱿鱼白片智能切割计算方法、装置、设备及存储介质
CN114648542A (zh) * 2022-03-11 2022-06-21 联宝(合肥)电子科技有限公司 一种目标物提取方法、装置、设备及可读存储介质
CN115035316A (zh) * 2022-06-30 2022-09-09 招联消费金融有限公司 目标区域图像识别方法、装置、计算机设备
US11832621B1 (en) 2021-12-31 2023-12-05 Ocean Research Center Of Zhoushan, Zhejiang University Methods and systems for intelligent processing of aquatic products
CN118392888A (zh) * 2024-04-19 2024-07-26 北京锐业制药(潜山)有限公司 一种粉液双室袋管盖焊接质量检测系统

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837285B (zh) * 2021-01-29 2022-07-26 山东建筑大学 一种板面图像的边缘检测方法及装置
CN115775315A (zh) * 2023-02-10 2023-03-10 武汉精立电子技术有限公司 Roi提取方法、装置、设备及可读存储介质
CN118212188A (zh) * 2024-03-12 2024-06-18 广东兴艺数字印刷股份有限公司 一种动漫卡自动检测控制方法、系统、设备及介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139378A (zh) * 2015-07-31 2015-12-09 小米科技有限责任公司 卡片边界检测方法及装置
US20180129878A1 (en) * 2013-06-30 2018-05-10 Google Llc Extracting card data from multiple cards
CN111259891A (zh) * 2020-01-19 2020-06-09 福建升腾资讯有限公司 一种自然场景下身份证识别方法、装置、设备和介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB203734A (en) * 1922-04-13 1923-09-13 Alfred Perkins Improvements in or relating to sorting devices for separating and classifying flat sheets, tallies, cards or the like
CN109559344B (zh) * 2017-09-26 2023-10-13 腾讯科技(上海)有限公司 边框检测方法、装置及存储介质
CN108960062A (zh) * 2018-06-01 2018-12-07 平安科技(深圳)有限公司 校正发票图像的方法、装置、计算机设备和存储介质
CN109815763A (zh) * 2019-01-04 2019-05-28 广州广电研究院有限公司 二维码的检测方法、装置和存储介质
CN110610174A (zh) * 2019-07-16 2019-12-24 北京工业大学 复杂条件下银行卡号识别方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180129878A1 (en) * 2013-06-30 2018-05-10 Google Llc Extracting card data from multiple cards
CN105139378A (zh) * 2015-07-31 2015-12-09 小米科技有限责任公司 卡片边界检测方法及装置
CN111259891A (zh) * 2020-01-19 2020-06-09 福建升腾资讯有限公司 一种自然场景下身份证识别方法、装置、设备和介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372227A (zh) * 2021-12-31 2022-04-19 浙江大学舟山海洋研究中心 鱿鱼白片智能切割计算方法、装置、设备及存储介质
CN114372227B (zh) * 2021-12-31 2023-04-14 浙江大学舟山海洋研究中心 鱿鱼白片智能切割计算方法、装置、设备及存储介质
US11832621B1 (en) 2021-12-31 2023-12-05 Ocean Research Center Of Zhoushan, Zhejiang University Methods and systems for intelligent processing of aquatic products
CN114648542A (zh) * 2022-03-11 2022-06-21 联宝(合肥)电子科技有限公司 一种目标物提取方法、装置、设备及可读存储介质
CN115035316A (zh) * 2022-06-30 2022-09-09 招联消费金融有限公司 目标区域图像识别方法、装置、计算机设备
CN118392888A (zh) * 2024-04-19 2024-07-26 北京锐业制药(潜山)有限公司 一种粉液双室袋管盖焊接质量检测系统

Also Published As

Publication number Publication date
CN111899270B (zh) 2023-09-05
CN111899270A (zh) 2020-11-06

Similar Documents

Publication Publication Date Title
WO2021151319A1 (zh) 卡片边框检测方法、装置、设备及可读存储介质
CN110163080B (zh) 人脸关键点检测方法及装置、存储介质和电子设备
WO2022161286A1 (zh) 图像检测方法、模型训练方法、设备、介质及程序产品
CN110348294B (zh) Pdf文档中图表的定位方法、装置及计算机设备
US10210415B2 (en) Method and system for recognizing information on a card
CN112102340B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
WO2014160426A1 (en) Classifying objects in digital images captured using mobile devices
WO2020082731A1 (zh) 电子装置、证件识别方法及存储介质
CN110852311A (zh) 一种三维人手关键点定位方法及装置
WO2021147437A1 (zh) 证卡边缘检测方法、设备及存储介质
CN111539238B (zh) 二维码图像修复方法、装置、计算机设备和存储介质
CN111553251A (zh) 证件四角残缺检测方法、装置、设备及存储介质
CN112651953A (zh) 图片相似度计算方法、装置、计算机设备及存储介质
CN112581344A (zh) 一种图像处理方法、装置、计算机设备及存储介质
WO2021174940A1 (zh) 人脸检测方法与系统
CN113627423A (zh) 圆形印章字符识别方法、装置、计算机设备和存储介质
CN115131714A (zh) 视频图像智能检测分析方法及系统
WO2021218183A1 (zh) 证件边沿检测方法、装置、设备及介质
CN114925348A (zh) 一种基于指纹识别的安全验证方法及系统
Kim et al. Algorithm of a perspective transform-based PDF417 barcode recognition
CN111428740A (zh) 网络翻拍照片的检测方法、装置、计算机设备及存储介质
CN114359352A (zh) 图像处理方法、装置、设备、存储介质及计算机程序产品
WO2024174726A1 (zh) 基于深度学习的手写及打印文本检测方法和装置
WO2024169397A1 (zh) 印章识别方法、装置、电子设备及存储介质
US20220383663A1 (en) Method for obtaining data from an image of an object of a user that has a biometric characteristic of the user

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20916934

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20916934

Country of ref document: EP

Kind code of ref document: A1