CN114758424B - Intelligent payment equipment based on multiple verification mechanisms and payment method thereof - Google Patents
- Publication number: CN114758424B (application CN202210663952.6A)
- Authority: CN (China)
- Prior art keywords: feature map, iris, face, map, convolution
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/2415: Pattern recognition; classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/253: Pattern recognition; fusion techniques of extracted features
- G06N3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06N3/047: Neural networks; probabilistic or stochastic networks
- G06Q20/14: Payment architectures specially adapted for billing systems
- G06Q20/40145: Payment protocols; transaction verification; biometric identity checks
- G06Q50/26: ICT specially adapted for business processes of specific sectors; government or public services
Abstract
The application relates to the field of intelligent payment, and particularly discloses an intelligent payment device based on a multiple verification mechanism and a payment method thereof. The device mines deep local implicit associated feature information from a face image and an eyeball region image of an object to be verified through a deep neural network model, so as to improve the security and convenience of payment by combining face recognition with iris recognition. In this process, in order to improve the fusion effect and the classification accuracy, the iris feature map is further weighted using a self-attention-based data dense cluster mechanism. Through the adaptive dependence of the data dense clusters in the iris feature map, the parameter-adaptive variability of the fused feature map with respect to the classification objective function is improved, which improves the classification accuracy and, in turn, the security and convenience of payment.
Description
Technical Field
The present invention relates to the field of intelligent payment, and more particularly, to an intelligent payment device based on multiple verification mechanisms and a payment method thereof.
Background
Public transport, and especially the subway, is an important component of the transportation industry and plays a key role in relieving road congestion and improving traffic efficiency, but the form and speed of fare payment have become a matter of great concern. The speed and convenience of payment affect boarding time, determine the riding experience of passengers and thus people's travel efficiency, and also influence road capacity.
At present, bus fares are mostly paid in cash, by swiping a card, or by scanning a code, and uncertain factors often arise during a trip, such as a forgotten transit card, a phone account in arrears, a powered-off phone, or code-scanning recognition errors. These factors inconvenience travelers, prevent payment from being completed, and lead to the awkward situation of being unable to board.
Therefore, an intelligent payment device for public transportation that operates in a frictionless payment manner is desired, to improve the payment efficiency of bus travel and make travel more convenient.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiments of the application provide an intelligent payment device based on a multiple verification mechanism and a payment method thereof. Deep local implicit associated feature information is mined from a face image and an eyeball region image of an object to be verified through a deep neural network model, so as to improve the security and accuracy of payment by combining face recognition with iris recognition. In this process, in order to improve the fusion effect and the classification accuracy, a self-attention-based data dense cluster mechanism is further used to weight the iris feature map. Through the adaptive dependence of the data dense clusters in the iris feature map, the parameter-adaptive variability of the fused feature map with respect to the classification objective function is improved, which improves the classification accuracy and, in turn, the security and convenience of payment.
According to an aspect of the present application, there is provided an intelligent payment device based on a multiple verification mechanism, comprising:
a face acquisition module, configured to acquire a face image of the object to be verified captured by a camera of the intelligent payment device;
a face detection module, configured to pass the face image of the object to be verified through a first convolutional neural network model serving as a face-region detection network to obtain a face feature map;
an eyeball region extraction module, configured to extract an eyeball region feature map corresponding to the eyeballs from the face feature map based on the positions of the eyeballs in the face image;
an eyeball region pixel enhancement module, configured to pass the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image;
an iris feature extraction module, configured to pass the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map;
a feature distribution correction module, configured to weight the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on a spatial interaction feature map obtained by position-wise multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map;
a feature distribution fusion module, configured to fuse the weighted iris feature map and the face feature map to obtain a classification feature map;
a verification result generation module, configured to pass the classification feature map through a classifier to obtain a classification result indicating whether the object to be verified corresponds to an object tag in a database; and
a payment module, configured to pay the fare required for the ride based on the amount in the payment account associated with the object tag.
In the above intelligent payment device based on the multiple verification mechanism, the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
In the above intelligent payment device based on multiple verification mechanisms, the iris feature extraction module is further configured to encode input data in each layer of the second convolutional neural network as follows: performing convolutional encoding on the input data using the first convolution unit and first convolution kernel of each layer of the second convolutional neural network to obtain a convolution feature map; performing re-convolutional encoding on the convolution feature map using the second convolution unit and second convolution kernel of each layer to obtain a re-convolution feature map, wherein the first convolution unit and the second convolution unit form the saliency detection module and the size of the first convolution kernel is larger than that of the second convolution kernel; performing mean pooling based on local feature matrices on the re-convolution feature map using the pooling unit of each layer to obtain a pooled feature map; and performing nonlinear activation on the feature values at each position of the pooled feature map using the activation unit of each layer to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network is the iris feature map.
In the above intelligent payment device based on multiple verification mechanisms, the feature distribution correction module includes: a spatial interaction feature map generation unit, configured to compute the position-wise multiplication between the face feature map and the iris feature map to obtain the spatial interaction feature map; a data dissimilarity measurement unit, configured to compute the square root of the Euclidean distance between the face feature map and the iris feature map; an attention unit, configured to divide the feature value at each position of the spatial interaction feature map by the square root of the Euclidean distance between the face feature map and the iris feature map to obtain an attention feature map; an exponential operation unit, configured to compute the natural exponential function value with the feature value at each position of the attention feature map as the exponent, to obtain an exponential attention feature map; a first class probability unit, configured to pass the exponential attention feature map through the classifier to obtain a first class probability index; a second class probability unit, configured to pass the iris feature map through the classifier to obtain a second class probability index; an action unit, configured to compute the product of the first class probability index and the second class probability index as the weighting coefficient of the iris feature map; and a correction unit, configured to weight the iris feature map by the weighting coefficient to obtain the weighted iris feature map.
In the above intelligent payment device based on multiple verification mechanisms, the feature distribution correction module is further configured to weight the iris feature map using the self-attention-based data dense cluster mechanism according to the following formula to obtain the weighted iris feature map;
wherein the formula is:
wF_iris = softmax{exp[(F_face ⊙ F_iris) / √d(F_face, F_iris)]} · softmax{F_iris} · F_iris
where F_face denotes the face feature map, F_iris denotes the iris feature map, ⊙ denotes position-wise point multiplication, softmax(·) denotes the probability value obtained by passing a feature map through the classifier, d(·,·) denotes the distance between feature maps, exp(·) denotes the exponential operation on a feature map, that is, computing the natural exponential function value with the feature value at each position as the exponent, and dividing a feature map by a parameter means dividing the feature value at each position of the feature map by that parameter.
In the above intelligent payment device based on multiple verification mechanisms, the feature distribution fusion module is further configured to fuse the weighted iris feature map and the face feature map according to the following formula to obtain the classification feature map;
wherein the formula is:
F = α·wF_iris + β·F_face
where F is the classification feature map, F_iris is the iris feature map, F_face is the face feature map, wF_iris denotes the weighted iris feature map, "+" denotes element-wise addition at the corresponding positions of the weighted iris feature map and the face feature map, and α and β are weighting parameters controlling the balance between the weighted iris feature map and the face feature map.
In the above intelligent payment device based on multiple verification mechanisms, the verification result generation module is further configured to process the classification feature map with the classifier to generate the classification result according to the following formula: softmax{(W_n, B_n) : ... : (W_1, B_1) | Project(F)}, where Project(F) denotes the projection of the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers, and B_1 to B_n are the bias matrices of the fully connected layers.
In the above intelligent payment device based on multiple verification mechanisms, the payment module is further configured to generate an insufficient-balance prompt in response to the amount in the payment account associated with the object tag being less than the fare required for the ride.
According to another aspect of the present application, there is provided a payment method of an intelligent payment device based on a multiple verification mechanism, comprising:
acquiring a face image of the object to be verified captured by a camera of the intelligent payment device;
passing the face image of the object to be verified through a first convolutional neural network model serving as a face-region detection network to obtain a face feature map;
extracting an eyeball region feature map corresponding to the eyeballs from the face feature map based on the positions of the eyeballs in the face image;
passing the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image;
passing the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map;
weighting the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on a spatial interaction feature map obtained by position-wise multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map;
fusing the weighted iris feature map and the face feature map to obtain a classification feature map;
passing the classification feature map through a classifier to obtain a classification result indicating whether the object to be verified corresponds to an object tag in a database; and
paying the fare required for the ride based on the amount in the payment account associated with the object tag.
In the above payment method of the intelligent payment device based on the multiple verification mechanism, the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
In the above payment method of the intelligent payment device based on multiple verification mechanisms, passing the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map includes: encoding input data in each layer of the second convolutional neural network as follows: performing convolutional encoding on the input data using the first convolution unit and first convolution kernel of each layer of the second convolutional neural network to obtain a convolution feature map; performing re-convolutional encoding on the convolution feature map using the second convolution unit and second convolution kernel of each layer to obtain a re-convolution feature map, wherein the first convolution unit and the second convolution unit form the saliency detection module and the size of the first convolution kernel is larger than that of the second convolution kernel; performing mean pooling based on local feature matrices on the re-convolution feature map using the pooling unit of each layer to obtain a pooled feature map; and performing nonlinear activation on the feature values at each position of the pooled feature map using the activation unit of each layer to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network is the iris feature map.
In the above payment method of the intelligent payment device based on the multiple verification mechanism, weighting the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map includes: computing the position-wise multiplication between the face feature map and the iris feature map to obtain the spatial interaction feature map; computing the square root of the Euclidean distance between the face feature map and the iris feature map; dividing the feature value at each position of the spatial interaction feature map by the square root of the Euclidean distance between the face feature map and the iris feature map to obtain an attention feature map; computing the natural exponential function value with the feature value at each position of the attention feature map as the exponent, to obtain an exponential attention feature map; passing the exponential attention feature map through the classifier to obtain a first class probability index; passing the iris feature map through the classifier to obtain a second class probability index; computing the product of the first class probability index and the second class probability index as the weighting coefficient of the iris feature map; and weighting the iris feature map by the weighting coefficient to obtain the weighted iris feature map.
In the above payment method of the intelligent payment device based on multiple verification mechanisms, weighting the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map includes: weighting the iris feature map using the self-attention-based data dense cluster mechanism according to the following formula to obtain the weighted iris feature map;
wherein the formula is:
wF_iris = softmax{exp[(F_face ⊙ F_iris) / √d(F_face, F_iris)]} · softmax{F_iris} · F_iris
where F_face denotes the face feature map, F_iris denotes the iris feature map, ⊙ denotes position-wise point multiplication, softmax(·) denotes the probability value obtained by passing a feature map through the classifier, d(·,·) denotes the distance between feature maps, exp(·) denotes the exponential operation on a feature map, that is, computing the natural exponential function value with the feature value at each position as the exponent, and dividing a feature map by a parameter means dividing the feature value at each position of the feature map by that parameter.
In the above payment method of the intelligent payment device based on the multiple verification mechanism, fusing the weighted iris feature map and the face feature map to obtain a classification feature map includes: fusing the weighted iris feature map and the face feature map according to the following formula to obtain the classification feature map;
wherein the formula is:
F = α·wF_iris + β·F_face
where F is the classification feature map, F_iris is the iris feature map, F_face is the face feature map, wF_iris denotes the weighted iris feature map, "+" denotes element-wise addition at the corresponding positions of the weighted iris feature map and the face feature map, and α and β are weighting parameters controlling the balance between the weighted iris feature map and the face feature map.
In the above payment method of the intelligent payment device based on the multiple verification mechanism, passing the classification feature map through a classifier to obtain a classification result includes: processing the classification feature map with the classifier to generate the classification result according to the following formula;
wherein the formula is: softmax{(W_n, B_n) : ... : (W_1, B_1) | Project(F)}, where Project(F) denotes the projection of the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers, and B_1 to B_n are the bias matrices of the fully connected layers.
In the above payment method of the intelligent payment device based on the multiple verification mechanism, paying the fare required for the ride based on the amount in the payment account associated with the object tag includes: generating an insufficient-balance prompt in response to the amount in the payment account associated with the object tag being less than the fare required for the ride.
Compared with the prior art, the intelligent payment device based on the multiple verification mechanism and the payment method thereof according to the present application mine deep local implicit associated feature information from the face image and the eyeball region image of the object to be verified through a deep neural network model, so as to improve the security and accuracy of payment by combining face recognition with iris recognition. In this process, in order to improve the fusion effect and the classification accuracy, the iris feature map is further weighted using a self-attention-based data dense cluster mechanism. Through the adaptive dependence of the data dense clusters in the iris feature map, the parameter-adaptive variability of the fused feature map with respect to the classification objective function is improved, which improves the classification accuracy and, in turn, the security and convenience of payment.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 is an application scenario diagram of an intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application.
Fig. 2 is a block diagram of an intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application.
Fig. 3 is a block diagram of a feature distribution correction module in an intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application.
Fig. 4 is a flowchart of a payment method of an intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a payment method of an intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Overview of a scene
As mentioned above, public transport, and especially the subway, is an important component of the transportation industry and plays a key role in relieving road congestion and improving traffic efficiency, but the form and speed of fare payment have become a matter of great concern. The speed and convenience of payment affect boarding time, determine the riding experience of passengers and thus people's travel efficiency, and also influence road capacity.
At present, bus fares are mostly paid in cash, by swiping a card, or by scanning a code, and uncertain factors often arise during a trip, such as a forgotten transit card, a phone account in arrears, a powered-off phone, or code-scanning recognition errors. These factors inconvenience travelers, prevent payment from being completed, and lead to the awkward situation of being unable to board. Therefore, an intelligent payment device for public transportation that operates in a frictionless payment manner is desired, to improve the payment efficiency of bus travel and make travel more convenient.
Accordingly, face-scanning payment is a common frictionless payment means, but in the field of public transportation it cannot fully meet the requirements of the application scenario. The inventors found that people adopt various adornments when going out; for example, women may wear makeup and sun-shading equipment, so that the face cannot be accurately recognized during face-scanning payment, resulting in failed or erroneous payment. Therefore, in the present application, the security and convenience of payment are improved by combining face recognition with iris recognition.
Specifically, in the technical solution of the present application, a camera of the intelligent payment device first collects a face image of the object to be verified. Then, a convolutional neural network, which performs excellently in extracting local implicit features of images, is used for deep mining of the implicit features of the face image of the object to be verified. It should be understood that deep-learning-based target detection methods divide networks into two categories, anchor-based and anchor-free, according to whether an anchor window is used in the network. Anchor-based methods include Fast R-CNN, Faster R-CNN, RetinaNet, and the like; anchor-free methods include CenterNet, ExtremeNet, RepPoints, and the like. Anchor-based methods enable the network to regress both target classification and bounding-box coordinates; the added prior stabilizes training, effectively improves the recall of network targets, and yields a marked improvement for small-target detection. Therefore, in the technical solution of the present application, the face image of the object to be verified is passed through a first convolutional neural network model serving as the face-region detection network to obtain the face feature map. In particular, here, the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
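As an illustration only (the patent publishes no code), the following sketch shows how a face feature map could be obtained with one of the named detector families, using torchvision's Faster R-CNN; the pretrained COCO weights and the choice of FPN level are assumptions, and in practice the network would be fine-tuned as a face-region detection network.

```python
import torch
import torchvision

# Hypothetical stand-in for the first convolutional neural network model:
# a Faster R-CNN whose backbone features serve as the "face feature map".
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

image = torch.rand(3, 480, 640)  # stand-in for the camera frame
with torch.no_grad():
    # FPN feature maps from the backbone; the highest-resolution level
    # is taken here as the face feature map.
    features = detector.backbone(image.unsqueeze(0))
    face_feature_map = features["0"]
    # Detection output (boxes, labels, scores) locates the face region.
    detections = detector([image])
```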
Then, an eyeball region feature map corresponding to the eyeballs is further extracted from the face feature map based on the positions of the eyeballs in the face image. Here, the convolutional neural network does not displace the positions of the features it extracts, so the eyeball region feature map corresponding to the eyeballs can be located in the face feature map according to the positions of the eyeballs in the original face image. It should be understood that, because the resolution of the eyeball region in a typically acquired face image of the object to be verified is not high and the iris cannot be clearly recognized, in the technical solution of the present application the eyeball region feature map is further passed through a generator model serving as a pixel enhancer to obtain a generated eyeball region image.
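A minimal sketch of how such a position-based crop could be implemented, assuming ROI-Align as the cropping operator; the box coordinates, feature-map stride, and output size below are illustrative assumptions, not values from the patent.

```python
import torch
from torchvision.ops import roi_align

face_feature_map = torch.rand(1, 256, 120, 160)  # assumed (N, C, H/4, W/4) map
# Eye box in image coordinates: (batch_index, x1, y1, x2, y2).
eye_box = torch.tensor([[0.0, 200.0, 180.0, 280.0, 230.0]])

# Convolution preserves spatial layout, so image-space eye coordinates map
# onto the downsampled feature map via the scale factor (spatial_scale).
eyeball_region_feature_map = roi_align(
    face_feature_map, eye_box, output_size=(32, 32), spatial_scale=0.25
)
```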
Next, local high-dimensional implicit feature extraction is performed on the generated eyeball region image in a second convolutional neural network model with a saliency detection module to obtain an iris feature map. Specifically, in the embodiment of the present application, the input data is convolution-encoded by the first convolution unit and first convolution kernel of each layer of the second convolutional neural network to obtain a convolution feature map; the convolution feature map is re-convolution-encoded by the second convolution unit and second convolution kernel of each layer to obtain a re-convolution feature map, wherein the first convolution unit and the second convolution unit form the saliency detection module and the size of the first convolution kernel is larger than that of the second convolution kernel; then, mean pooling based on local feature matrices is performed on the re-convolution feature map by the pooling unit of each layer to obtain a pooled feature map; next, nonlinear activation is applied to the feature values at each position of the pooled feature map by the activation unit of each layer to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network is the iris feature map.
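A hedged PyTorch sketch of one such layer follows; the kernel sizes (7 and 3), channel widths, pooling window, and ReLU activation are assumptions chosen to satisfy the stated constraint that the first convolution kernel is larger than the second.

```python
import torch
import torch.nn as nn

class SaliencyConvLayer(nn.Module):
    """One layer of the second CNN: large-kernel convolution, small-kernel
    re-convolution (together the saliency detection module), mean pooling,
    and a nonlinear activation."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.first_conv = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.second_conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.AvgPool2d(kernel_size=2)  # mean pooling over local patches
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.first_conv(x)   # convolution feature map
        x = self.second_conv(x)  # re-convolution feature map
        x = self.pool(x)         # pooled feature map
        return self.act(x)       # activated feature map

# The output of the last layer is taken as the iris feature map.
iris_backbone = nn.Sequential(
    SaliencyConvLayer(3, 32), SaliencyConvLayer(32, 64), SaliencyConvLayer(64, 128)
)
iris_feature_map = iris_backbone(torch.rand(1, 3, 128, 128))
```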
It should be understood that when the iris feature map is obtained, a local region is extracted from the image using a candidate frame as a reference window, and the map is then obtained through resolution enhancement and saliency detection. This makes the iris feature map higher in data density than the face feature map; if the face feature map F_face and the iris feature map F_iris were fused directly, the fused feature map could be classification-biased with respect to the face feature map and the iris feature map.
Therefore, the iris feature map is weighted using a self-attention-based data dense cluster mechanism, specifically:
wF_iris = softmax{exp[(F_face ⊙ F_iris) / √d(F_face, F_iris)]} · softmax{F_iris} · F_iris
where F_face denotes the face feature map, F_iris denotes the iris feature map, ⊙ denotes position-wise point multiplication, softmax(·) denotes the probability value obtained by passing a feature map through the classifier, d(·,·) denotes the distance between feature maps, exp(·) denotes the exponential operation on a feature map, that is, computing the natural exponential function value with the feature value at each position as the exponent, and dividing a feature map by a parameter means dividing the feature value at each position of the feature map by that parameter.
Here, the self-attention-based data dense cluster mechanism enables spatial interaction of local features and global features based on a reference window, and expresses the similarity between data-dense object instances through a measure of data dissimilarity represented by the distance between feature maps. By weighting the iris feature map and then fusing it with the face feature map, the adaptive dependence of the data dense clusters in the iris feature map improves the parameter-adaptive variability of the fused feature map with respect to the classification objective function, thereby improving the classification accuracy.
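The weighting step can be sketched as follows, reconstructed from the unit descriptions in this application; the use of the maximum softmax probability as the "class probability index", as well as the toy classifier and tensor shapes, are assumptions for illustration.

```python
import torch
import torch.nn as nn

def dense_cluster_weight(f_face: torch.Tensor, f_iris: torch.Tensor,
                         classifier: nn.Module) -> torch.Tensor:
    interaction = f_face * f_iris               # spatial interaction feature map
    dist = torch.norm(f_face - f_iris, p=2)     # Euclidean distance between maps
    attention = interaction / torch.sqrt(dist)  # scale by sqrt of the distance
    exp_attention = torch.exp(attention)        # exponential attention feature map
    # Class probability indices from the classifier (max probability assumed).
    p1 = torch.softmax(classifier(exp_attention), dim=-1).max()
    p2 = torch.softmax(classifier(f_iris), dim=-1).max()
    return (p1 * p2) * f_iris                   # weighted iris feature map

# Toy classifier and usage under assumed shapes.
clf = nn.Sequential(nn.Flatten(), nn.Linear(128 * 16 * 16, 10))
f_face, f_iris = torch.rand(1, 128, 16, 16), torch.rand(1, 128, 16, 16)
weighted_iris_map = dense_cluster_weight(f_face, f_iris, clf)
```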
The weighted iris feature map is then fused with the face feature map to obtain a classification feature map, and the classification feature map is passed through a classifier to obtain a classification result indicating whether the object to be verified corresponds to an object tag in the database. Further, the fare required for the ride is paid based on the amount in the payment account associated with the object tag. In an embodiment of the application, an insufficient-balance prompt is generated in response to the amount in the payment account associated with the object tag being less than the fare required for the ride.
Based on this, the present application proposes an intelligent payment device based on a multiple verification mechanism, which includes: a face acquisition module, configured to acquire a face image of the object to be verified captured by a camera of the intelligent payment device; a face detection module, configured to pass the face image of the object to be verified through a first convolutional neural network model serving as a face-region detection network to obtain a face feature map; an eyeball region extraction module, configured to extract an eyeball region feature map corresponding to the eyeballs from the face feature map based on the positions of the eyeballs in the face image; an eyeball region pixel enhancement module, configured to pass the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image; an iris feature extraction module, configured to pass the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map; a feature distribution correction module, configured to weight the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on a spatial interaction feature map obtained by position-wise multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map; a feature distribution fusion module, configured to fuse the weighted iris feature map and the face feature map to obtain a classification feature map; a verification result generation module, configured to pass the classification feature map through a classifier to obtain a classification result indicating whether the object to be verified corresponds to an object tag in a database; and a payment module, configured to pay the fare required for the ride based on the amount in the payment account associated with the object tag.
Fig. 1 illustrates an application scenario of the intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application. As shown in fig. 1, in this application scenario, a face image of the object to be verified is first acquired by a camera (e.g., C as illustrated in fig. 1) of an intelligent payment device (e.g., T as illustrated in fig. 1). The acquired face image is then input into a server (e.g., the cloud server S as illustrated in fig. 1) deployed with an intelligent payment algorithm based on the multiple verification mechanism, where the server processes the face image with the algorithm to generate a classification result indicating whether the object to be verified corresponds to an object tag in the database, and further, the fare required for the ride is paid based on the amount in the payment account associated with the object tag.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of the intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application. As shown in fig. 2, the intelligent payment device 200 based on the multiple verification mechanism includes: a face acquisition module 210, configured to acquire a face image of the object to be verified captured by a camera of the intelligent payment device; a face detection module 220, configured to pass the face image of the object to be verified through a first convolutional neural network model serving as a face-region detection network to obtain a face feature map; an eyeball region extraction module 230, configured to extract an eyeball region feature map corresponding to the eyeballs from the face feature map based on the positions of the eyeballs in the face image; an eyeball region pixel enhancement module 240, configured to pass the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image; an iris feature extraction module 250, configured to pass the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map; a feature distribution correction module 260, configured to weight the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on a spatial interaction feature map obtained by position-wise multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map; a feature distribution fusion module 270, configured to fuse the weighted iris feature map and the face feature map to obtain a classification feature map; a verification result generation module 280, configured to pass the classification feature map through a classifier to obtain a classification result indicating whether the object to be verified corresponds to an object tag in a database; and a payment module 290, configured to pay the fare required for the ride based on the amount in the payment account associated with the object tag.
Specifically, in this embodiment of the application, the face acquisition module 210 and the face detection module 220 are configured to acquire a face image of the object to be verified captured by a camera of the intelligent payment device, and to pass the face image through a first convolutional neural network model serving as the face-region detection network to obtain a face feature map. As described above, face-scanning payment is a common frictionless payment means, but in the field of public transportation it cannot fully meet the requirements of the application scenario. The reason is that people adopt various adornments when going out; for example, women may wear makeup and sun-shading equipment, so that the face cannot be accurately recognized during face-scanning payment, resulting in failed or erroneous payment. Therefore, in the technical solution of the present application, the security and accuracy of payment are expected to be improved by combining face recognition with iris recognition.
Specifically, in the technical solution of the present application, a camera of the intelligent payment device first collects a face image of the object to be verified. Then, a convolutional neural network, which performs excellently in extracting local implicit features of images, is used for deep mining of the implicit features of the face image of the object to be verified. It should be understood that deep-learning-based target detection methods divide networks into two categories, anchor-based and anchor-free, according to whether an anchor window is used in the network. Anchor-based methods include Fast R-CNN, Faster R-CNN, RetinaNet, and the like; anchor-free methods include CenterNet, ExtremeNet, RepPoints, and the like. Anchor-based methods enable the network to regress both target classification and bounding-box coordinates; the added prior stabilizes training, effectively improves the recall of network targets, and yields a marked improvement for small-target detection. Therefore, in the technical solution of the present application, the face image of the object to be verified is processed in the first convolutional neural network model serving as the face-region detection network to extract its local high-dimensional implicit feature distribution and obtain the face feature map. In particular, here, the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
Specifically, in the embodiment of the present application, the eyeball region extraction module 230 and the eyeball region pixel enhancement module 240 are configured to extract an eyeball region feature map corresponding to the eyeballs from the face feature map based on the positions of the eyeballs in the face image, and to pass the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image. That is, to improve the security and accuracy of payment by combining face recognition and iris recognition, the eyeball region feature map corresponding to the eyeballs is further extracted from the face feature map based on the positions of the eyeballs in the face image. It should be understood that the convolutional neural network does not displace the positions of the features it extracts; therefore, in the technical solution of the present application, the eyeball region feature map corresponding to the eyeballs can be located in the face feature map according to the positions of the eyeballs in the original face image.
Then, it should be understood that, because the resolution of the eyeball region in a typically acquired face image of the object to be verified is not high and the iris cannot be clearly recognized, in the technical solution of the present application the eyeball region feature map is further passed through a generator model serving as a pixel enhancer to obtain a generated eyeball region image with enhanced features.
Specifically, in this embodiment of the present application, the iris feature extraction module 250 is configured to pass the generated eyeball region image through a second convolutional neural network model having a saliency detection module to obtain an iris feature map. That is, in the technical solution of the present application, in this way, the generated eyeball region image is subjected to local high-dimensional implicit feature extraction in a second convolutional neural network model with a saliency detection module, so as to obtain an iris feature map corresponding to the eyeball region.
More specifically, in this embodiment of the present application, the iris feature extraction module is further configured to encode the input data in each layer of the second convolutional neural network as follows: performing convolutional encoding on the input data using the first convolution unit and first convolution kernel of each layer of the second convolutional neural network to obtain a convolution feature map; performing re-convolutional encoding on the convolution feature map using the second convolution unit and second convolution kernel of each layer to obtain a re-convolution feature map, wherein the first convolution unit and the second convolution unit form the saliency detection module and the size of the first convolution kernel is larger than that of the second convolution kernel; performing mean pooling based on local feature matrices on the re-convolution feature map using the pooling unit of each layer to obtain a pooled feature map; and performing nonlinear activation on the feature values at each position of the pooled feature map using the activation unit of each layer to obtain an activated feature map; wherein the output of the last layer of the second convolutional neural network is the iris feature map.
Specifically, in this embodiment, the feature distribution correction module 260 is configured to weight the iris feature map, based on the face feature map, using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on a spatial interaction feature map obtained by position-wise multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map. It should be understood that when the iris feature map is obtained, a local region is extracted from the image using a candidate frame as a reference window, and the map is then obtained through resolution enhancement and saliency detection; this makes the iris feature map higher in data density than the face feature map. If the face feature map F_face and the iris feature map F_iris were fused directly, the fused feature map could be classification-biased with respect to the two maps. Therefore, in the technical solution of the present application, the iris feature map is further weighted using the self-attention-based data dense cluster mechanism.
Accordingly, in a specific example, the position-wise point multiplication between the face feature map and the iris feature map is first computed to obtain the spatial interaction feature map. Then, the square root of the Euclidean distance between the face feature map and the iris feature map is computed, and the feature value at each position of the spatial interaction feature map is divided by this square root to obtain an attention feature map. Next, the natural exponential function value with the feature value at each position of the attention feature map as the exponent is computed to obtain an exponential attention feature map. The exponential attention feature map is then passed through the classifier to obtain a first class probability index, and the iris feature map is passed through the classifier to obtain a second class probability index. The product of the first and second class probability indices is computed as the weighting coefficient of the iris feature map, and finally the iris feature map is weighted by this coefficient to obtain the weighted iris feature map. It should be appreciated that the self-attention-based data dense cluster mechanism enables spatial interaction of local features and global features based on a reference window, and expresses the similarity between data-dense object instances through a measure of data dissimilarity represented by the distance between feature maps. Therefore, by weighting the iris feature map and then fusing it with the face feature map, the adaptive dependence of the data dense clusters in the iris feature map improves the parameter-adaptive variability of the fused feature map with respect to the classification objective function, thereby improving the classification accuracy.
More specifically, in an embodiment of the present application, the feature distribution correction module is further configured to weight the iris feature map using the self-attention-based data dense cluster mechanism according to the following formula to obtain the weighted iris feature map;
wherein the formula is:
wF_iris = softmax{exp[(F_face ⊙ F_iris) / √d(F_face, F_iris)]} · softmax{F_iris} · F_iris
where F_face denotes the face feature map, F_iris denotes the iris feature map, ⊙ denotes position-wise point multiplication, softmax(·) denotes the probability value obtained by passing a feature map through the classifier, d(·,·) denotes the distance between feature maps, exp(·) denotes the exponential operation on a feature map, that is, computing the natural exponential function value with the feature value at each position as the exponent, and dividing a feature map by a parameter means dividing the feature value at each position of the feature map by that parameter.
Fig. 3 illustrates a block diagram of the feature distribution correction module in the intelligent payment device based on a multiple verification mechanism according to an embodiment of the present application. As shown in fig. 3, the feature distribution correction module 260 includes: a spatial interaction feature map generation unit 261, configured to compute the position-wise point multiplication between the face feature map and the iris feature map to obtain the spatial interaction feature map; a data dissimilarity measurement unit 262, configured to compute the square root of the Euclidean distance between the face feature map and the iris feature map; an attention unit 263, configured to divide the feature value at each position of the spatial interaction feature map by the square root of the Euclidean distance between the face feature map and the iris feature map to obtain an attention feature map; an exponential operation unit 264, configured to compute the natural exponential function value with the feature value at each position of the attention feature map as the exponent to obtain an exponential attention feature map; a first class probability unit 265, configured to pass the exponential attention feature map through the classifier to obtain a first class probability index; a second class probability unit 266, configured to pass the iris feature map through the classifier to obtain a second class probability index; an action unit 267, configured to compute the product of the first class probability index and the second class probability index as the weighting coefficient of the iris feature map; and a correction unit 268, configured to weight the iris feature map by the weighting coefficient to obtain the weighted iris feature map.
Specifically, in the embodiment of the present application, the feature distribution fusion module 270 is configured to fuse the weighted iris feature map and the face feature map to obtain a classification feature map. It should be understood that, by weighting the iris feature map with the self-attention-based data dense cluster mechanism before fusing it with the face feature map, the adaptive dependence of the data-dense cluster within the iris feature map improves the parameter-adaptive variability of the fused classification feature map with respect to the classification objective function, thereby improving the classification accuracy.
More specifically, in an embodiment of the present application, the feature distribution fusion module is further configured to: fusing the weighted iris feature map and the face feature map according to the following formula to obtain the classification feature map;
wherein the formula is:

F = α·wF_iris + β·F_face

wherein F is the classification feature map, F_iris is the iris feature map, F_face is the face feature map, wF_iris represents the weighted iris feature map, "+" represents the addition of the elements at the corresponding positions of the weighted iris feature map and the face feature map, and α and β are weighting parameters for controlling the balance between the weighted iris feature map and the face feature map.
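As an illustrative sketch, the fusion is a simple position-wise weighted sum; the default values of α and β below are assumptions:

```python
import torch

def fuse_features(w_f_iris: torch.Tensor, f_face: torch.Tensor,
                  alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    """F = alpha * wF_iris + beta * F_face: element-wise addition of the
    weighted iris feature map and the face feature map at corresponding
    positions, balanced by the weighting parameters alpha and beta."""
    return alpha * w_f_iris + beta * f_face
```

In practice α and β may be fixed by hand or learned as trainable scalars; the formula itself leaves this choice open.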
Specifically, in the embodiment of the present application, the verification result generation module 280 is configured to pass the classification feature map through a classifier to obtain a classification result indicating that the object to be verified corresponds to an object tag in the database, and the payment module 290 is configured to pay the fee required for the ride based on the amount of money in the payment account associated with that object tag. Accordingly, in one specific example, an insufficient-fare prompt is generated in response to the amount of money in the payment account associated with the object tag being less than the fare required for the ride.
More specifically, in the embodiment of the present application, the classifier processes the classification feature map to generate the classification result according to the following formula:

softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}

wherein Project(F) represents the projection of the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n represent the bias matrices of the fully connected layers of each layer.
In summary, the intelligent payment device 200 based on multiple verification mechanisms according to the embodiment of the present application has been illustrated. It uses a deep neural network model to mine deep local implicit associated feature information from the face image and the eyeball region image of the object to be verified, so as to improve the security and accuracy of payment by combining face recognition with iris recognition. In this process, in order to improve the fusion effect and the classification accuracy, a self-attention-based data dense cluster mechanism is further used to weight the iris feature map, so that the adaptive dependence of the data-dense cluster within the iris feature map improves the parameter-adaptive variability of the fused feature map with respect to the classification objective function, thereby improving the classification accuracy.
As described above, the intelligent payment device 200 based on multiple verification mechanisms according to the embodiment of the present application may be implemented in various terminal devices, such as a server running an intelligent payment algorithm based on multiple verification mechanisms. In one example, the intelligent payment device 200 may be integrated into the terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or an application developed for the terminal device; of course, it may equally be one of the many hardware modules of the terminal device.
Alternatively, in another example, the multi-verification mechanism based intelligent payment device 200 and the terminal device may be separate devices, and the multi-verification mechanism based intelligent payment device 200 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to the agreed data format.
Exemplary method
Fig. 4 illustrates a flow chart of a payment method of an intelligent payment device based on multiple verification mechanisms. As shown in fig. 4, the payment method according to an embodiment of the present application includes the steps of: S110, acquiring a face image of an object to be verified, collected by a camera of the intelligent payment device; S120, passing the face image of the object to be verified through a first convolutional neural network model serving as a face area detection network to obtain a face feature map; S130, extracting an eyeball region feature map corresponding to the eyeball from the face feature map based on the position of the eyeball in the face image; S140, passing the eyeball region feature map through a generator model serving as a pixel enhancer to obtain a generated eyeball region image; S150, passing the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map; S160, based on the face feature map, weighting the iris feature map using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on the spatial interaction feature map obtained by position-wise point multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map; S170, fusing the weighted iris feature map and the face feature map to obtain a classification feature map; S180, passing the classification feature map through a classifier to obtain a classification result indicating that the object to be verified corresponds to an object tag in a database; and S190, paying the fee required for the ride based on the amount of money in the payment account associated with the object tag.
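The flow of steps S110–S190 can be summarized by the hypothetical pipeline below; every module name is a stand-in for the corresponding component described above, not an interface defined by the present application:

```python
from typing import Callable, Dict

class PaymentPipeline:
    """Wires the modules of steps S120-S190 together; each callable is an
    assumed stand-in for the corresponding network or module."""

    def __init__(self, face_cnn: Callable, eye_extractor: Callable,
                 enhancer: Callable, iris_cnn: Callable, weighter: Callable,
                 fuser: Callable, classifier: Callable):
        self.face_cnn, self.eye_extractor = face_cnn, eye_extractor
        self.enhancer, self.iris_cnn = enhancer, iris_cnn
        self.weighter, self.fuser, self.classifier = weighter, fuser, classifier

    def run(self, face_image, accounts: Dict[str, float], fare: float) -> str:
        f_face = self.face_cnn(face_image)                  # S120: face feature map
        eye_roi = self.eye_extractor(face_image, f_face)    # S130: eyeball region features
        eye_img = self.enhancer(eye_roi)                    # S140: pixel-enhanced image
        f_iris = self.iris_cnn(eye_img)                     # S150: iris feature map
        w_iris = self.weighter(f_face, f_iris)              # S160: weighted iris map
        tag = self.classifier(self.fuser(w_iris, f_face))   # S170-S180: object tag
        if accounts.get(tag, 0.0) < fare:                   # S190: balance check
            return "insufficient fare prompt"
        accounts[tag] -= fare
        return "paid"
```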
Fig. 5 illustrates an architecture diagram of a payment method of an intelligent payment device based on multiple verification mechanisms according to an embodiment of the present application. As shown in fig. 5, in the network architecture of the payment method, first, the acquired face image of the object to be verified (e.g., P1 as illustrated in fig. 5) is passed through a first convolutional neural network model (e.g., CNN1 as illustrated in fig. 5) serving as the face area detection network to obtain a face feature map (e.g., F1 as illustrated in fig. 5); then, based on the position of the eyeball in the face image, an eyeball region feature map (e.g., F2 as illustrated in fig. 5) corresponding to the eyeball is extracted from the face feature map; next, the eyeball region feature map is passed through a generator model (e.g., GM as illustrated in fig. 5) serving as a pixel enhancer to obtain a generated eyeball region image (e.g., F3 as illustrated in fig. 5); the generated eyeball region image is then passed through a second convolutional neural network model (e.g., CNN2 as illustrated in fig. 5) with a saliency detection module to obtain an iris feature map (e.g., F4 as illustrated in fig. 5); then, based on the face feature map, the iris feature map is weighted using the self-attention-based data dense cluster mechanism (e.g., DCM as illustrated in fig. 5) to obtain a weighted iris feature map (e.g., F as illustrated in fig. 5); the weighted iris feature map and the face feature map are fused to obtain a classification feature map (e.g., FC as illustrated in fig. 5); the classification feature map is passed through a classifier to obtain a classification result indicating that the object to be verified corresponds to an object tag in the database; and finally, the fee required for the ride is paid based on the amount of money in the payment account associated with the object tag.
More specifically, in step S110 and step S120, a face image of the object to be verified is acquired by a camera of the smart payment device, and the face image is passed through a first convolutional neural network model serving as a face area detection network to obtain a face feature map. It should be understood that, although face-scanning payment is a common frictionless payment means, in the field of public transportation it cannot fully meet the requirements of the application scenario. The reason is that people adopt various adornments when going out: for example, a passenger may wear makeup or sun-shading accessories, so the face cannot be accurately recognized during face-scanning payment, and payment fails or is made in error. Therefore, in the technical solution of the present application, the security and accuracy of payment are improved by combining face recognition with iris recognition.
Specifically, in the technical solution of the present application, a camera of the intelligent payment device first collects the face image of the object to be verified. Then, a convolutional neural network, which performs excellently in extracting local implicit features of images, is used to deeply mine the implicit features of the face image of the object to be verified. It should be understood that deep-learning-based object detection methods are divided into two categories according to whether an anchor window is used in the network: anchor-based methods, such as Fast R-CNN, Faster R-CNN, and RetinaNet, and anchor-free methods, such as CenterNet, ExtremeNet, and RepPoints. Anchor-based methods enable the network to regress both the target classification and the bounding-box coordinates; the added prior makes training stable, effectively improves the recall of network targets, and yields a particularly notable improvement in small-target detection. Therefore, in the technical solution of the present application, the face image of the object to be verified is processed in the first convolutional neural network model serving as the face area detection network to extract the local high-dimensional implicit feature distribution in the face image, so as to obtain the face feature map. In particular, here, the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
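For instance, an off-the-shelf anchor-based detector could serve as the face-area detection network; the sketch below uses torchvision's Faster R-CNN with COCO-pretrained weights purely as a placeholder, since the present application would use a network trained on face data:

```python
import torch
import torchvision

# Anchor-based detector (Faster R-CNN); weights="DEFAULT" requires
# torchvision >= 0.13 and loads generic COCO weights, not a face model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)            # stand-in for a camera frame in [0, 1]
with torch.no_grad():
    det = model([frame])[0]                 # dict with "boxes", "labels", "scores"
face_boxes = det["boxes"][det["scores"] > 0.8]
```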
More specifically, in step S130 and step S140, based on the position of the eyeball in the face image, an eyeball region feature map corresponding to the eyeball is extracted from the face feature map, and the eyeball region feature map is passed through a generator model serving as a pixel enhancer to obtain a generated eyeball region image. That is, in order to improve the security and accuracy of payment by combining face recognition and iris recognition, the eyeball region feature map corresponding to the eyeball is further extracted from the face feature map based on the position of the eyeball in the face image. It should be understood that a convolutional neural network preserves the spatial positions of the features it extracts; therefore, in the technical solution of the present application, the eyeball region feature map corresponding to the eyeball within the face feature map can be determined from the position of the eyeball in the original face image.
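Because of this position-preserving property, the eyeball region can be cut out of the face feature map by rescaling the eye's pixel coordinates by the network's total stride; a minimal sketch, assuming a stride of 16:

```python
import torch

def eye_roi_from_feature_map(f_face: torch.Tensor, eye_box_px, stride: int = 16):
    """Map an eye bounding box (x1, y1, x2, y2) given in input-image pixels
    onto the face feature map: convolution keeps spatial layout, so feature
    coordinates are simply pixel coordinates divided by the total stride."""
    x1, y1, x2, y2 = (int(v // stride) for v in eye_box_px)
    return f_face[:, :, y1:y2 + 1, x1:x2 + 1]   # (B, C, h, w) eyeball region
```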
Then, it should be understood that, since the resolution of the eyeball in a typically acquired face image of the object to be verified is not high enough for the iris to be clearly recognized, in the technical solution of the present application the eyeball region feature map is further passed through a generator model serving as a pixel enhancer to obtain a generated, feature-enhanced eyeball region image.
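One plausible shape for such a pixel enhancer is a small super-resolution generator; the layer widths and the 4x upscaling factor below are assumptions made for illustration:

```python
import torch.nn as nn

class PixelEnhancerGenerator(nn.Module):
    """Generator used as a pixel enhancer: upsamples low-resolution eyeball
    region features into a sharper generated eyeball image (here RGB, 4x)."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.PReLU(),
            nn.Conv2d(64, 256, kernel_size=3, padding=1),
            nn.PixelShuffle(2), nn.PReLU(),        # 2x spatial upscaling
            nn.Conv2d(64, 256, kernel_size=3, padding=1),
            nn.PixelShuffle(2), nn.PReLU(),        # 2x again -> 4x total
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, eye_features):
        return self.net(eye_features)
```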
More specifically, in step S150, the generated eyeball region image is passed through a second convolutional neural network model with a saliency detection module to obtain an iris feature map. That is, in the technical solution of the present application, local high-dimensional implicit feature extraction is performed on the generated eyeball region image in a second convolutional neural network model with a saliency detection module, so as to obtain the iris feature map corresponding to the eyeball region.
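Claim 3 below details the layer structure of this network; read together with it, one layer might look like the following sketch, where the kernel sizes 7 and 3 are assumptions (the claim requires only that the first kernel be larger than the second):

```python
import torch.nn as nn

class SaliencyConvLayer(nn.Module):
    """One layer of the second CNN: a large-kernel convolution followed by a
    small-kernel re-convolution (together, the saliency detection module),
    then local mean pooling and a nonlinear activation."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv_first = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.conv_second = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.AvgPool2d(kernel_size=2)   # mean pooling over local patches
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.conv_first(x)     # convolution encoding, first (larger) kernel
        x = self.conv_second(x)    # re-convolution, second (smaller) kernel
        return self.act(self.pool(x))
```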
More specifically, in step S160, based on the face feature map, the iris feature map is weighted using the self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, wherein the weighting is performed based on the spatial interaction feature map obtained by position-wise point multiplication of the face feature map and the iris feature map, and on the distance between the face feature map and the iris feature map. It should be understood that, when the iris feature map is obtained, the local region of the image is extracted using a candidate box as a reference window and then undergoes resolution enhancement and saliency detection, which makes the data density of the iris feature map higher than that of the face feature map. If the face feature map F_face and the iris feature map F_iris were fused directly, the fused feature map could therefore exhibit a classification bias with respect to the face feature map and the iris feature map. Hence, in the technical solution of the present application, the iris feature map is further weighted using the self-attention-based data dense cluster mechanism.
Accordingly, in a specific example, first, the position-wise point multiplication between the face feature map and the iris feature map is calculated to obtain the spatial interaction feature map, and the square root of the Euclidean distance between the face feature map and the iris feature map is calculated. The feature value of each position in the spatial interaction feature map is then divided by this square root to obtain an attention feature map, and the natural exponential function value with the feature value of each position as the exponent is calculated to obtain an exponential attention feature map. The exponential attention feature map and the iris feature map are each passed through the classifier to obtain a first and a second class probability index, respectively; the product of the two indices serves as the weighting coefficient, by which the iris feature map is finally weighted to obtain the weighted iris feature map. It should be appreciated that the self-attention-based data dense cluster mechanism enables spatial interaction between local features and global features within a reference window, and expresses the similarity between data-dense object instances through a measure of data dissimilarity, namely the distance between the feature maps. Therefore, weighting the iris feature map before fusing it with the face feature map allows the adaptive dependence of the data-dense cluster within the iris feature map to improve the parameter-adaptive variability of the fused feature map with respect to the classification objective function, thereby improving the classification accuracy.
More specifically, in step S170, the weighted iris feature map and the face feature map are fused to obtain a classification feature map. It should be understood that, by weighting the iris feature map with the self-attention-based data dense cluster mechanism before fusion, the adaptive dependence of the data-dense cluster within the iris feature map improves the parameter-adaptive variability of the fused classification feature map with respect to the classification objective function, thereby improving the classification accuracy.
More specifically, in step S180 and step S190, the classification feature map is passed through a classifier to obtain a classification result indicating that the object to be verified corresponds to an object tag in the database, and the fee required for the ride is paid based on the amount of money in the payment account associated with that object tag. Accordingly, in one specific example, an insufficient-fare prompt is generated in response to the amount of money in the payment account associated with the object tag being less than the fare required for the ride.
In summary, the payment method of the intelligent payment device based on multiple verification mechanisms has been illustrated. It mines deep local implicit associated feature information from the face image and the eyeball region image of the object to be verified through a deep neural network model, so as to improve the security and accuracy of payment by combining face recognition with iris recognition. In this process, in order to improve the fusion effect and the classification accuracy, a self-attention-based data dense cluster mechanism is further used to weight the iris feature map, so that the adaptive dependence of the data-dense cluster within the iris feature map improves the parameter-adaptive variability of the fused feature map with respect to the classification objective function, thereby improving both the classification accuracy and the security and convenience of payment.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (8)
1. An intelligent payment device based on multiple verification mechanism, comprising:
the face acquisition module is used for acquiring a face image of an object to be verified, which is acquired by a camera of the intelligent payment equipment;
the face detection module is used for enabling the face image of the object to be verified to pass through a first convolution neural network model serving as a face area detection network so as to obtain a face feature map;
an eyeball region extraction module, which is used for extracting an eyeball region feature map corresponding to eyeballs from the face feature map based on the positions of the eyeballs in the face image;
the eyeball area pixel enhancement module is used for enabling the eyeball area characteristic graph to pass through a generator model serving as a pixel enhancer to obtain a generated eyeball area image;
the iris feature extraction module is used for enabling the generated eyeball area image to pass through a second convolutional neural network model with a saliency detection module so as to obtain an iris feature map;
a feature distribution correction module, configured to weight the iris feature map based on the face feature map by using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map, where the weighting of the iris feature map by using the self-attention-based data dense cluster mechanism is performed based on a spatial interaction feature map obtained by multiplying the face feature map and the iris feature map by location points, and a distance between the face feature map and the iris feature map;
the characteristic distribution fusion module is used for fusing the weighted iris characteristic image and the face characteristic image to obtain a classification characteristic image;
the verification result generation module is used for passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates that the object to be verified corresponds to an object tag in a database; and
the payment module is used for paying the fee required by the riding based on the amount of money in the payment account associated with the object tag;
wherein the feature distribution correction module comprises:
the space interaction feature map generation unit is used for calculating the point-by-point multiplication between the face feature map and the iris feature map to obtain the space interaction feature map;
the data difference measurement unit is used for calculating the square root of the Euclidean distance between the face feature map and the iris feature map;
the attention unit is used for dividing the characteristic value of each position in the space interaction characteristic map by the square root of the Euclidean distance between the face characteristic map and the iris characteristic map to obtain an attention characteristic map;
an exponential operation unit, which is used for calculating a natural exponential function value taking the characteristic value of each position in the attention characteristic map as power to obtain an exponential attention characteristic map;
a first class probability unit, configured to pass the exponential attention feature map through the classifier to obtain a first class probability index;
the second-class probability unit is used for enabling the iris feature map to pass through the classifier to obtain a second-class probability index;
the action unit is used for calculating the product between the first class probability index and the second class probability index as a weighting coefficient of the iris feature map; and
and the correction unit is used for weighting the iris characteristic diagram by the weighting coefficient to obtain the weighted iris characteristic diagram.
2. The intelligent payment device based on multiple verification mechanisms of claim 1, wherein the first convolutional neural network model is Fast R-CNN, Faster R-CNN, or RetinaNet.
3. The intelligent payment device based on multiple verification mechanisms of claim 2, wherein said iris feature extraction module is further configured to encode the input data in the following manner in each layer of said second convolutional neural network:
performing convolution coding on the input data by using first convolution units of each layer of the second convolution neural network and first convolution cores to obtain a convolution characteristic diagram;
performing a second convolution encoding on the convolution feature map by using a second convolution unit of each layer of the second convolutional neural network and a second convolution kernel to obtain a re-convolution feature map, wherein the first convolution unit and the second convolution unit form the saliency detection module, and the size of the first convolution kernel is larger than the size of the second convolution kernel;
performing mean pooling based on a local feature matrix on the re-convolution feature map by using pooling units of each layer of the second convolution neural network to obtain a pooled feature map; and
carrying out nonlinear activation on the feature values of all positions in the pooled feature map by using the activation units of all layers of the second convolutional neural network to obtain an activation feature map;
wherein the output of the last layer of the second convolutional neural network is the iris feature map.
4. The intelligent payment device based on multiple verification mechanisms of claim 3, wherein the feature distribution correction module is further configured to weight the iris feature map using a self-attention based data dense cluster mechanism to obtain the weighted iris feature map according to the following formula;
wherein the formula is:

wF_iris = softmax{ exp[ (F_face ⊙ F_iris) / √D(F_face, F_iris) ] } · softmax{ F_iris } · F_iris

wherein F_face represents the face feature map, F_iris represents the iris feature map, ⊙ represents position-wise point multiplication, softmax(·) represents the probability value obtained by passing a feature map through the classifier, D(·,·) represents the distance between the feature maps, exp(·) represents the exponential operation on a feature map, namely calculating the natural exponential function value with the feature value of each position in the feature map as the exponent, and dividing a feature map by a parameter means dividing the feature value of each position in the feature map by that parameter.
5. The intelligent payment device based on multiple verification mechanism of claim 4, wherein the feature distribution fusion module is further configured to: fusing the weighted iris feature map and the face feature map according to the following formula to obtain the classification feature map;
wherein the formula is:

F = α·wF_iris + β·F_face

wherein F is the classification feature map, F_iris is the iris feature map, F_face is the face feature map, wF_iris represents the weighted iris feature map, "+" represents the addition of the elements at the corresponding positions of the weighted iris feature map and the face feature map, and α and β are weighting parameters for controlling the balance between the weighted iris feature map and the face feature map.
6. The intelligent payment device based on multiple verification mechanism of claim 5, wherein the verification result generation module is further configured to: the classifier processes the classification feature map to generate a classification result according to the following formula;
wherein the formula is: softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}, wherein Project(F) represents the projection of the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n represent the bias matrices of the fully connected layers of each layer.
7. The intelligent payment device based on multiple verification mechanisms of claim 6, wherein the payment module is further configured to generate an under-fare prompt in response to the amount of money in the payment account associated with the object tag being less than the fare required for the ride.
8. A payment method of intelligent payment equipment based on multiple verification mechanisms is characterized by comprising the following steps:
acquiring a face image of an object to be verified, which is acquired by a camera of intelligent payment equipment;
passing the face image of the object to be verified through a first convolutional neural network model serving as a face area detection network to obtain a face feature map;
extracting an eyeball area feature map corresponding to eyeballs from the face feature map based on the positions of the eyeballs in the face image;
enabling the eyeball area characteristic graph to pass through a generator model serving as a pixel enhancer to obtain a generated eyeball area image;
passing the generated eyeball region image through a second convolutional neural network model with a saliency detection module to obtain an iris feature map;
weighting the iris feature map by using a self-attention-based data dense cluster mechanism to obtain a weighted iris feature map based on the face feature map, wherein the weighting of the iris feature map by using the self-attention-based data dense cluster mechanism is performed based on a spatial interaction feature map obtained by multiplying the face feature map and the iris feature map by position points and the distance between the face feature map and the iris feature map;
fusing the weighted iris feature map and the face feature map to obtain a classification feature map;
passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates that the object to be verified corresponds to an object tag in a database; and
paying the fee required by the bus taking based on the amount of money in the payment account associated with the object tag;
based on the face feature map, weighting the iris feature map by using a data dense cluster mechanism based on self attention to obtain a weighted iris feature map, wherein the method comprises the following steps of:
calculating the position-wise point multiplication between the face feature map and the iris feature map to obtain the spatial interaction feature map;
calculating the square root of the Euclidean distance between the face feature image and the iris feature image;
dividing feature values of all positions in the space interaction feature map by the square root of the Euclidean distance between the face feature map and the iris feature map to obtain an attention feature map;
calculating natural exponent function values raised by the feature values of the positions in the attention feature map to obtain an exponent attention feature map;
passing the exponential attention feature map through the classifier to obtain a first class probability index;
passing the iris feature map through the classifier to obtain a second class probability index;
calculating the product of the first class probability index and the second class probability index as the weighting coefficient of the iris feature map; and
and weighting the iris characteristic diagram by the weighting coefficient to obtain the weighted iris characteristic diagram.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210663952.6A CN114758424B (en) | 2022-06-14 | 2022-06-14 | Intelligent payment equipment based on multiple verification mechanisms and payment method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114758424A CN114758424A (en) | 2022-07-15 |
CN114758424B true CN114758424B (en) | 2022-09-02 |
Family
ID=82336349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210663952.6A Active CN114758424B (en) | 2022-06-14 | 2022-06-14 | Intelligent payment equipment based on multiple verification mechanisms and payment method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758424B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115471674B (en) * | 2022-09-20 | 2023-06-27 | 浙江科达利实业有限公司 | Performance monitoring system of new energy vehicle carbon dioxide pipe based on image processing |
CN115272831B (en) * | 2022-09-27 | 2022-12-09 | 成都中轨轨道设备有限公司 | Transmission method and system for monitoring images of suspension state of contact network |
CN115984952B (en) * | 2023-03-20 | 2023-11-24 | 杭州叶蓁科技有限公司 | Eye movement tracking system and method based on bulbar conjunctiva blood vessel image recognition |
CN117894107A (en) * | 2024-03-14 | 2024-04-16 | 山东新竹智能科技有限公司 | Intelligent building security monitoring system and method |
CN118411766B (en) * | 2024-07-02 | 2024-09-24 | 浙江元衡生物科技有限公司 | Multi-mode biological recognition system and method based on big data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446667A (en) * | 2018-04-04 | 2018-08-24 | 北京航空航天大学 | Based on the facial expression recognizing method and device for generating confrontation network data enhancing |
CN110781784A (en) * | 2019-10-18 | 2020-02-11 | 高新兴科技集团股份有限公司 | Face recognition method, device and equipment based on double-path attention mechanism |
CN112200161A (en) * | 2020-12-03 | 2021-01-08 | 北京电信易通信息技术股份有限公司 | Face recognition detection method based on mixed attention mechanism |
CN112966643A (en) * | 2021-03-23 | 2021-06-15 | 成都天佑路航轨道交通科技有限公司 | Face and iris fusion recognition method and device based on self-adaptive weighting |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102707594B1 (en) * | 2016-11-11 | 2024-09-19 | 삼성전자주식회사 | Method and apparatus for extracting iris region |
CN113591747B (en) * | 2021-08-06 | 2024-02-23 | 合肥工业大学 | Multi-scene iris recognition method based on deep learning |
CN114565041A (en) * | 2022-02-28 | 2022-05-31 | 上海嘉甲茂技术有限公司 | Payment big data analysis system based on internet finance and analysis method thereof |
Non-Patent Citations (2)
Title |
---|
Distinguishing a Person by Face and Iris Using Fusion Approach; Md. Zahidur Rahman et al.; 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI); 2020-04-16; pp. 1-5 *
Iris and face multi-feature fusion recognition based on deep learning; Xiao Ke et al.; Computer Engineering and Design; 2020-04; Vol. 41, No. 4; pp. 1070-1073 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |