CN113920564A - Client mining method based on artificial intelligence and related equipment - Google Patents
- Publication number: CN113920564A
- Application number: CN202111274335.9A
- Authority
- CN
- China
- Prior art keywords
- face
- matching
- product
- customer
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06Q30/0201 — Commerce; marketing; market modelling; market analysis; collecting market data
Abstract
The application provides a client mining method based on artificial intelligence, comprising the following steps: collecting face images to establish a face feature database, and obtaining a first face image; extracting feature key points of the first face image to obtain first face features; matching the first face features with the face feature database to obtain a matching result, the matching result being either a successful or a failed match; determining a target customer based on the matching result and establishing a binding relationship between the target customer and a product; and establishing a micro expression recognition model based on a neural network training mode, so as to obtain the target customer's degree of interest in the product according to the binding relationship. By recognizing face images, the method screens out target clients, judges their degree of interest in a product, and can dynamically learn their new needs. The application also provides an artificial intelligence-based customer mining device, an electronic device and a storage medium.
Description
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a customer mining method based on artificial intelligence and related equipment.
Background
Billboards are widely used in commercial advertising. Some of the people who look at a billboard are genuinely interested in (or at least not averse to) the advertised product, and those who stop in front of it are potential target customers for that product. At present, however, there is no way to specifically identify which passers-by are potential target customers interested in the product on the billboard.
Disclosure of Invention
In view of the foregoing, there is a need for an artificial intelligence-based customer mining method and related apparatus that can solve the current technical problem of being unable to identify potential target customers interested in a product. The related apparatus includes an artificial intelligence-based customer mining device, an electronic device and a storage medium.
In a first aspect, an embodiment of the present application provides a customer mining method based on artificial intelligence, including:
collecting a face image to establish a face feature database and obtain a first face image;
extracting feature key points of the first face image to obtain first face features;
matching the first facial features with the facial feature database to obtain matching results, wherein the matching results comprise matching success and matching failure;
determining a target customer based on the matching result and establishing a binding relationship between the target customer and the product;
and establishing a micro expression recognition model based on a neural network training mode to obtain the interest degree of the target customer for the product according to the binding relation.
In some embodiments, the acquiring the face image to build the face feature database and obtain the first face image includes:
establishing a face feature recognition model based on a neural network training mode;
identifying a second face image based on the face feature identification model to obtain a second face feature;
matching the second face features with corresponding customer information to form a face feature sub-database;
and aggregating a plurality of the face feature sub-databases to form the face feature database.
In some embodiments, the building of the face feature recognition model based on the neural network training mode includes:
acquiring a training sample and a test sample;
parsing the training samples to extract training features;
constructing a convolutional neural network model;
training the training features based on the convolutional neural network model to obtain a face feature recognition base model;
inputting the test sample into the face feature recognition base model to obtain correction data;
and adjusting the face feature recognition base model based on the correction data to obtain the face feature recognition model.
In some embodiments, building a micro-expression recognition model based on neural network training to obtain the interest level of the target customer in the product according to the binding relationship includes:
obtaining the dwell time for which the target customer observes the corresponding product, according to the binding relationship;
and grading the interest level of the target customer in the product based on the dwell time.
In some embodiments, building a micro-expression recognition model based on neural network training to obtain the interest level of the target customer in the product according to the binding relationship includes:
establishing a micro expression recognition model;
sending a first face image corresponding to the target client to the micro expression recognition model according to the binding relationship;
extracting a characteristic information sequence used for indicating micro expression identification in the first face image;
establishing matching information of the characteristic information sequence and a preset interest classification;
and mapping the interest degree of the target client corresponding to the first face image in the product according to the matching information.
In some embodiments, matching the first facial features to the database of facial features to obtain matching results comprises:
calculating the association degree of the first face feature and the face feature database to obtain face recognition precision;
judging whether the face recognition precision reaches a preset face recognition threshold; if it does, confirming the matching result as a successful match, and if it does not, confirming the matching result as a failed match.
In some embodiments, the artificial intelligence based customer mining method further comprises:
generating a product information pushing scheme corresponding to the product according to the interest level;
and pushing the product information to the corresponding target customer based on the product information pushing scheme.
In a second aspect, an embodiment of the present application provides a customer mining device based on artificial intelligence, including:
the construction unit is used for acquiring a face image to establish a face feature database and acquire a first face image;
the extraction unit is used for extracting feature key points of the first face image to obtain first face features;
the matching unit is used for matching the first face features with the face feature database to obtain matching results, and the matching results comprise matching success and matching failure;
the association unit is used for determining a target client based on the matching result and establishing a binding relationship between the target client and a product;
and the obtaining unit is used for establishing a micro expression recognition model based on a neural network training mode so as to obtain the interest degree of the target customer on the product according to the binding relationship.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based customer mining method as described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor, implement the artificial intelligence based customer mining method as described above.
According to the artificial intelligence-based client mining method of the present application, the acquired face image of a client is recognized with a trained face feature recognition model to screen out target clients, and each target client's degree of interest in a product is judged from their dwell time. In this way, new needs of target clients can be learned and updated dynamically, making it easier for service personnel to subsequently provide clients with more targeted product consultation and communication services.
Drawings
Fig. 1 is a flowchart of an artificial intelligence-based client mining method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a method for creating a facial feature database according to an embodiment of the present application.
Fig. 3 is a flowchart of a method for establishing a face feature recognition model according to an embodiment of the present application.
Fig. 4 is a flowchart of a method for building a micro expression recognition model based on neural network training to obtain interest level of a target customer in a product according to a binding relationship according to an embodiment of the present application.
Fig. 5 is a structural diagram of a customer mining device based on artificial intelligence according to a second embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The customer mining method based on artificial intelligence provided by the embodiment of the application is executed by the electronic equipment, and correspondingly, the customer mining method based on artificial intelligence is operated in the electronic equipment.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning, deep learning and the like.
With the development of face recognition technology, various products related to face recognition have been widely used in human life. The main recognition function of the face recognition technology is realized based on a convolutional neural network. And (3) training the convolutional neural network by using a large number of face image data sets, so that the convolutional neural network has the face recognition capability after the training convergence.
In the present application, the acquired face images of clients are recognized with a trained face feature recognition model to screen out target clients, and each target client's degree of interest in a product is judged from their dwell time. New needs of target clients can thus be learned and updated dynamically, making it easier for service personnel to subsequently provide clients with more targeted product consultation and communication services.
Example one
Fig. 1 is a flowchart of a customer mining method based on artificial intelligence according to an embodiment of the present application, and the method may be applied to a billboard in a subway, a billboard in an offline exhibition hall, and the like.
And S10, acquiring the face image to establish a face feature database and acquiring a first face image.
The first face image is a face picture captured by a camera on the billboard when a customer passes by or stays in front of the billboard.
As shown in fig. 2, in an alternative embodiment, the step of building a facial feature database at S10 includes:
s101, establishing a face feature recognition model based on a neural network training mode.
And S102, identifying a second face image based on the face feature identification model to obtain a second face feature.
And S103, matching the second face features with the corresponding client information to form a face feature data sub-database.
And S104, aggregating a plurality of face feature sub-databases to form the face feature database.
Illustratively, the second face image refers to a face image of the customer, acquired in advance by the camera device with the customer's permission.
The face feature database is composed of at least one sub-database; each face image corresponds to one sub-database, and N (for example, 10) face feature data slots are allocated to the sub-database of each face image. That is, each face image is allowed to have N sets of face feature data.
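The sub-database layout described above can be sketched as follows. This is a minimal in-memory illustration: the names (`FaceSubDatabase`, `enroll`) and the oldest-first eviction once the N-slot quota is full are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

N_SLOTS = 10  # N feature-data slots allocated per face image

@dataclass
class FaceSubDatabase:
    customer_id: str
    features: list = field(default_factory=list)  # up to N_SLOTS feature vectors

    def add_feature(self, vec):
        if len(self.features) >= N_SLOTS:
            self.features.pop(0)  # assumed policy: evict the oldest set at quota
        self.features.append(vec)

# The full face feature database aggregates the per-face sub-databases.
face_db = {}

def enroll(customer_id, feature_vec):
    sub = face_db.setdefault(customer_id, FaceSubDatabase(customer_id))
    sub.add_feature(feature_vec)

enroll("cust-001", [0.1, 0.2])
enroll("cust-001", [0.1, 0.3])
```

Keeping several feature sets per face lets matching tolerate variation in pose and lighting across enrolment images.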
As shown in fig. 3, in an alternative embodiment, the S101 building the face feature recognition model based on the neural network training mode includes:
and S1011, obtaining a training sample and a test sample.
Illustratively, a face picture training sample and a face picture to be detected are obtained.
S1012, the training samples are parsed to extract training features.
Extracting face training characteristics from the face picture training sample, and extracting the face characteristics to be detected from the face picture to be detected.
And S1013, constructing a convolutional neural network model and training the training features with it to obtain a face feature recognition base model.
A convolutional neural network model is constructed, and the face training features of the face picture training sample are trained with this model to obtain a face feature recognition base model.
The training specifically comprises two stages: a first stage in which the convolutional neural network model is trained with an ArcFace loss function until it converges, and a second stage in which it is trained with an intra-class/inter-class loss function.
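The first-stage ArcFace (additive angular margin) loss named above can be sketched in NumPy. This is a generic illustration of ArcFace, not the patent's implementation; the scale s=64 and margin m=0.5 are common defaults assumed here.

```python
import numpy as np

def arcface_loss(embeddings, weights, labels, s=64.0, m=0.5):
    # L2-normalise embeddings and class weights so logits become cosines of angles
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = emb @ w                                   # (batch, classes)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # add the angular margin m only at each sample's true class
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    logits = s * np.cos(theta + margin)
    # standard softmax cross-entropy over the margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Penalising the angle rather than the raw cosine is what makes the inter-class margins more uniform, the property the embodiment later credits for large-scale face training.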
And S1014, inputting the test sample into the face feature recognition base model to acquire correction data.
The face picture to be detected is input into the face feature recognition base model, which compares the face features to be detected in order to recognize the picture and output a recognition result. The recognition result is then compared with the correct result for that picture to obtain the correction data of the base model, where the correction data refers to the face pictures that the base model recognized incorrectly.
And S1015, adjusting the face feature recognition base model based on the correction data to obtain the face feature recognition model.
The face pictures that the base model recognized incorrectly are used as new training samples, and steps S1011 to S1014 are repeated; when the number of incorrectly recognized pictures reaches zero, the face feature recognition model is obtained.
In this embodiment, a face picture training sample and a face picture to be detected are first obtained; face training features are extracted from the training sample, and face features to be detected are extracted from the picture to be detected. A convolutional neural network model is then constructed and trained on the face training features to obtain the face feature recognition model; finally, the trained model compares the features of the picture to be detected in order to recognize it. This approach makes the margins between classes more uniform, allows training data of more classes to be trained simultaneously, enables large-scale face data training, improves face recognition efficiency, and enhances the recognition effect.
And S11, extracting the feature key points of the first face image to obtain first face features.
In this optional embodiment, the eyebrows, eyes, pupils, nose, mouth, forehead and chin in a face image are extracted as seven feature key points. The server performs face localization and cropping on the received first face image, supports accurate localization and cropping of the seven key-point positions, normalizes the face size, and analyses the cropped face region to obtain the face feature key-point data, which serves as the first face feature.
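The key-point extraction of step S11 might be sketched as follows. `detect_landmarks` is a stub standing in for a real landmark detector (e.g. dlib or MediaPipe), and the fixed region ordering and flat (x, y) layout are illustrative assumptions.

```python
KEY_REGIONS = ["eyebrows", "eyes", "pupils", "nose", "mouth", "forehead", "chin"]

def detect_landmarks(image):
    # placeholder: a real detector returns one (x, y) point per region
    return {region: (0.0, 0.0) for region in KEY_REGIONS}

def extract_first_face_features(image):
    # a real pipeline would first crop the face and normalize its size
    landmarks = detect_landmarks(image)
    feature = []
    for region in KEY_REGIONS:        # fixed order keeps vectors comparable
        feature.extend(landmarks[region])
    return feature                    # 7 regions x (x, y) = 14 values

features = extract_first_face_features(None)
```

Emitting the regions in a fixed order is what makes two feature vectors directly comparable in the matching step that follows.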
And S12, matching the first facial features with the facial feature database to obtain matching results, wherein the matching results comprise matching success and matching failure.
Specifically, if the matching result is a successful matching, S13 is executed, and if the matching result is a failed matching, the execution is ended.
In an alternative embodiment, S12 — matching the first facial features with the facial feature database to obtain a matching result, the matching result being either a successful or a failed match — includes:
and S121, calculating the association degree of the first face features and the face feature database to obtain the face recognition precision.
S122, judging whether the face recognition precision reaches a preset face recognition threshold value; and if the face recognition precision reaches the preset face recognition threshold, the matching result is confirmed to be successful in matching, and if the face recognition precision does not reach the preset face recognition threshold, the matching result is confirmed to be failed in matching.
In this optional embodiment, the face recognition threshold may be a face recognition accuracy of 90%, that is, when the face recognition accuracy reaches 90% corresponding to the face recognition threshold, it is determined that the matching is successful, and when the face recognition accuracy does not reach 90% corresponding to the face recognition threshold, it is determined that the matching is failed.
Illustratively, the collected face feature data is matched against the established face feature database, and customers whose face recognition accuracy exceeds 90% are filtered out. For example:
An instantaneously captured image contains head portraits A, B and C, which need to be matched in sequence. Comparing head portrait A against existing customers may yield, for example, 99% recognition accuracy for Zhang San, 86% for Li Si, 83% for Wang Wu and 88% for Zhao Er; head portrait A is then matched to Zhang San, whose recognition accuracy exceeds 90%.
When the recognition accuracy of a head portrait is lower than 90%, the person may not be the customer in question, so contacting them about business would risk a complaint.
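The threshold-based matching described above can be sketched as follows. Cosine similarity is an assumption — the patent does not specify how the association degree is computed — and the names and toy feature vectors are illustrative only.

```python
import numpy as np

THRESHOLD = 0.90  # the 90% face recognition threshold from the text

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(first_face_feature, face_db):
    # compare against every customer's stored feature sets, keep the best
    best_id, best_score = None, 0.0
    for cust_id, feats in face_db.items():
        score = max(cosine(first_face_feature, f) for f in feats)
        if score > best_score:
            best_id, best_score = cust_id, score
    if best_score >= THRESHOLD:
        return best_id, best_score   # matching success: an existing customer
    return None, best_score          # matching failure

db = {"zhang_san": [[1.0, 0.0]], "li_si": [[0.0, 1.0]]}
who, score = match([0.99, 0.05], db)
```

A captured feature close to Zhang San's stored vector clears the threshold, while an ambiguous one (equally far from everyone) is rejected as a failed match.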
And S13, determining the target client corresponding to the first face characteristic based on the matching result.
In an alternative embodiment, a target client corresponding to the first facial feature may be determined based on the face recognition threshold, where the target client refers to a client who can be matched in the facial feature database, that is, an existing customer.
And S14, determining the target client based on the matching result and establishing the binding relationship between the target client and the product.
In this alternative embodiment, the product may be any of various commercial products such as home appliances, food or cosmetics, or may be an insurance product; no limitation is imposed here. Before the binding relationship between the target customer and the product is established, a binding relationship between the image collector and the product is established.
The image collectors are labelled cameras arranged on the display boards or billboards of various products at offline events; each camera has already been bound in the system to a corresponding product and its product characteristics in advance.
Take an auto insurance product as an example: its product characteristics indicate whether it insures a designated car, whether it covers designated vehicle occupants, whether the premium can be returned, whether a coupon gift is included, and so on.
The binding operation is performed through table-to-table associations in the background system. For example, an auto insurance table maintains the auto insurance products; a product table maintains basic information and fields such as whether benefits exist; the specific benefit types are maintained in a separate benefit table; and the binding of products to benefits is maintained through a product characteristic table, which stores the primary-key combinations of specific products and benefits.
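The table binding described above might look like the following SQLite sketch: a product table, a benefit table, and a product characteristic table whose composite primary key associates a product with a benefit. All table and column names are assumptions for illustration, not taken from the patent.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (product_id INTEGER PRIMARY KEY, name TEXT, has_benefit INTEGER);
CREATE TABLE benefit (benefit_id INTEGER PRIMARY KEY, kind TEXT);
CREATE TABLE product_characteristic (
    product_id INTEGER REFERENCES product,
    benefit_id INTEGER REFERENCES benefit,
    PRIMARY KEY (product_id, benefit_id)  -- composite key binds product to benefit
);
""")
con.execute("INSERT INTO product VALUES (1, 'auto insurance', 1)")
con.execute("INSERT INTO benefit VALUES (10, 'coupon gift')")
con.execute("INSERT INTO product_characteristic VALUES (1, 10)")

# resolve a product's benefits through the join table
rows = con.execute("""
    SELECT p.name, b.kind FROM product p
    JOIN product_characteristic pc ON pc.product_id = p.product_id
    JOIN benefit b ON b.benefit_id = pc.benefit_id
""").fetchall()
```

Keeping the association in its own join table means a product can gain or lose benefits without altering the product or benefit rows themselves.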
Each image collector is correspondingly bound to a product and its product characteristics.
Establishing the binding relationship between the target customer and the product means: since the image collector is already bound to the product, the target customer is bound, via the image collected by that collector, to the product corresponding to that collector.
Illustratively, the matched customer and the product serial number represented by the current camera number are bound and stored in a database; for example, a record that customers Zhang San and Li Si are interested in the auto insurance product is stored.
By binding the target client to the product, the embodiment of the application prevents the situation where a collected face image cannot be associated with a product, so that potential customers of a specific product can be located accurately.
And S15, establishing a micro expression recognition model based on a neural network training mode to obtain the interest degree of the target customer in the product according to the binding relationship.
In an optional embodiment, the obtaining the interest level of the target customer in the product according to the binding relationship includes:
obtaining the dwell time for which the target client observes the corresponding product, according to the binding relationship;
and grading the interest level of the target client in the product based on the dwell time.
For example, if a customer stays at the activity site beyond a set time, such as one second, a face image is captured by the camera under the billboard and transmitted to the application server as a byte stream for parsing.
The dwell-time bands are set in advance, for example: 0-1 seconds indicates no interest, 1-5 seconds somewhat interested, 5-10 seconds very interested, and more than 10 seconds extremely interested.
In this embodiment, the camera captures the times at which a face enters and leaves its field of view and transmits the captured information back to the server in real time; when the face leaves, a further transmission to the server is triggered. The server computes the dwell time T1 from the leave and enter times; if T1 exceeds the set threshold it is recorded, while invalid data that does not reach the dwell threshold is cleared. This eliminates the interference of invalid data and reduces the subsequent image recognition workload.
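The dwell-time logic of the paragraphs above can be sketched as follows; the band boundaries come from the text, while the function name, capture threshold value, and timestamp format are illustrative assumptions.

```python
CAPTURE_THRESHOLD = 1.0  # seconds; stays shorter than this are cleared as invalid

def interest_level(enter_ts, leave_ts):
    t1 = leave_ts - enter_ts          # dwell time T1 from leave/enter times
    if t1 < CAPTURE_THRESHOLD:
        return None                   # invalid data: below the dwell threshold
    if t1 <= 5:
        return "somewhat interested"
    if t1 <= 10:
        return "very interested"
    return "extremely interested"

level = interest_level(100.0, 107.5)  # a customer who stayed 7.5 seconds
```

Discarding sub-threshold stays before any image recognition runs is what keeps the later micro-expression analysis cheap.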
As shown in fig. 4, in an optional embodiment, the building a micro expression recognition model based on neural network training to obtain the interest level of the target customer in the product according to the binding relationship further includes:
and S151, establishing a micro expression recognition model.
A three-dimensional convolutional neural network (3D CNN) structure is constructed, comprising: an input layer, a 3D convolutional layer, a 3D max-pooling layer, a first dropout layer, a flatten layer, a first fully-connected layer, a second dropout layer, a second fully-connected layer and an activation layer. The convolutional layer performs convolution on the input image to extract local features; the max-pooling layer selects the maximum value in each window to reduce the number of parameters and prevent overfitting; the two dropout layers randomly discard units of each fully-connected layer with a certain probability, preventing the overfitting caused by excessive parameters; the fully-connected layers map the distributed features to the sample label space; the flatten layer converts its input to one dimension; and the activation layer maps each neuron's input to the output.
A plurality of face image sequences labeled with interest categories are collected, and each face image sequence is passed in turn through the input layer, the 3D convolutional layer, the 3D maximum pooling layer, the first dropout layer, the flatten layer, the first fully-connected layer, the second dropout layer, the second fully-connected layer, and the activation layer, outputting an interest index value for each face image sequence. The micro-expression recognition model is thereby trained on the association between various biological micro-expressions containing facial features and interest levels.
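The way data shrinks through these layers can be sketched by propagating shapes. The 128 × 128 frame size and 96-frame count come from the input example given later in this description; the 3 × 3 × 3 kernel, stride 1, valid padding, and 2 × 2 × 2 pooling window are assumptions added for illustration:

```python
def conv3d_out(shape, kernel, stride=1):
    # output size of a valid-padding 3D convolution: (d - k) // stride + 1 per axis
    return tuple((d - k) // stride + 1 for d, k in zip(shape, kernel))

def pool3d_out(shape, window):
    # non-overlapping max pooling keeps one value per window along each axis
    return tuple(d // w for d, w in zip(shape, window))

x = (128, 128, 96)             # input layer: frame height x width x number of frames
x = conv3d_out(x, (3, 3, 3))   # 3D convolutional layer
x = pool3d_out(x, (2, 2, 2))   # 3D maximum pooling layer
flat = x[0] * x[1] * x[2]      # flatten layer: one-dimensional input to the FC layers
```

With these assumed sizes, the convolution yields 126 × 126 × 94, pooling reduces this to 63 × 63 × 47, and the flatten layer hands the fully-connected layers a single vector.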
And S152, sending the first face image corresponding to the target client to the micro expression recognition model according to the binding relationship.
The input layer receives the multi-dimensional information data of the first face image sequence, comprising the image frame data and the number of image frames. For example, if in the first received face image sequence each image frame is 128 × 128 and the number of image frames is 96, the multi-dimensional information data is 128 × 128 × 96.
S153, extracting, based on the micro-expression recognition model, a characteristic information sequence used for indicating micro-expression identification in the first face image.
The 3D convolutional layer segments each image frame and extracts characteristic information, obtaining a plurality of characteristic information sequences. In this embodiment, the obtained face image sequence can be understood as a cubic structure stacked from consecutive image frames. A 3D convolution operation is performed on this cubic structure, and in the 3D convolutional layer each piece of characteristic information in an image frame is connected with the characteristic information of the adjacent image frames, yielding a continuous characteristic information sequence; the motion information of the facial muscles is thus captured from the characteristic information in the sequence.
The 3D maximum pooling layer performs maximum pooling on the characteristic information sequences to obtain a plurality of micro-expression characteristic information sequences. In this embodiment, because the amount of data in a characteristic information sequence is large, all characteristic information sequences are subjected to dimensionality reduction to improve operating efficiency and reduce the amount of computation: repeated or redundant characteristic information is removed from each sequence, and the micro-expression characteristic information sequence consisting of the critical characteristic information is extracted, realizing the 3D maximum pooling.
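A non-overlapping 3D max pooling of the kind described — keeping only the maximum of each block and discarding redundant values — can be sketched in a few lines of NumPy; the window size of 2 is an illustrative assumption:

```python
import numpy as np

def max_pool_3d(volume, window=2):
    """Non-overlapping 3D max pooling: split the volume into window**3 blocks
    and keep the maximum of each block, discarding repeated/redundant values."""
    d, h, w = (s // window for s in volume.shape)
    v = volume[:d * window, :h * window, :w * window]       # crop to whole blocks
    v = v.reshape(d, window, h, window, w, window)          # expose the blocks
    return v.max(axis=(1, 3, 5))                            # per-block maximum
```

Applied to a 4 × 4 × 4 volume it returns a 2 × 2 × 2 result holding one critical value per block.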
The flatten layer flattens the multi-dimensional micro-expression characteristic information sequence into one-dimensional key characteristic information to serve as the input of the fully-connected layers. In this embodiment, the flatten layer can be understood as a transition between the 3D maximum pooling layer and the first fully-connected layer: it converts the dimensions of the micro-expression characteristic information sequence, simplifying the operation and thereby improving efficiency.
And S154, establishing matching information of the characteristic information sequence and the preset interest classification.
The matching information refers to an interest classification table established by the first fully-connected layer, whose categories comprise interested and not interested, and with which the one-dimensional key characteristic information is associated. In this embodiment, since thousands of facial feature images were classified by interest during pre-training, the one-dimensional key characteristic information can be understood as being composed of a plurality of pieces of characteristic information, each of which is classified directly.
And S155, mapping the interest degree of the target client corresponding to the first face image in the product according to the matching information.
And the second full connection layer is used for mapping the user interest degree represented by the one-dimensional key characteristic information. In this embodiment, the interestingness of the user is obtained through the matching information of the first full connection layer.
By recognizing micro-expressions, the method and system acquire the user's interest and thereby the user's real demand; a product manager can then check how user interest is distributed across products, which helps the product manager adjust product characteristics in time.
In an optional embodiment, the artificial intelligence based customer mining method further comprises:
generating an information pushing scheme corresponding to the product according to the interestingness;
and pushing the product information to the corresponding target client based on the product information pushing scheme.
Illustratively, target customers who are somewhat interested, very interested, or extremely interested can be sent specific information about the related products according to the product categories, making it convenient for them to learn more about the products.
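The push scheme above can be sketched as a simple filter; the interest-level labels, catalog layout, and names are hypothetical, since the application does not fix a data format:

```python
def push_scheme(interest_by_client, product_catalog):
    """Generate an information-push plan: clients whose interest level is at
    least 'somewhat interested' receive the detailed product information."""
    interested = {"somewhat interested", "very interested", "extremely interested"}
    return [
        {"client": client, "info": product_catalog[product]}
        for client, (product, level) in interest_by_client.items()
        if level in interested
    ]
```

A client marked "not interested" simply produces no push entry, so only promising target customers are contacted.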
In an optional embodiment, a follow-up customer service agent can be assigned to a target customer; the agent contacts the customer to provide product consultation. Customers interested in a certain product are extracted from the face feature database at regular intervals and dispatched to the customer service staff in charge of the corresponding product; the agent can then make a return visit to confirm, and follow-up operations such as invitations can be carried out for customers who are genuinely interested.
In an optional implementation, product managers and activity organizers log in to a background reporting system to check customers' feedback on products and take corresponding measures in time. A product manager can check how customer interest is distributed across products, which helps the product manager adjust product characteristics promptly.
And S16, ending the execution.
Example two
Fig. 5 is a structural diagram of a customer mining device based on artificial intelligence according to a second embodiment of the present application.
In some embodiments, the artificial intelligence based client mining device 10 comprises a construction unit 101, an extraction unit 102, a matching unit 103, an association unit 104, and an acquisition unit 105.
The construction unit 101 is configured to acquire a face image to establish a face feature database and obtain a first face image.
The first face image is a face picture shot by the camera on the advertising board when a customer passes by or stays in front of the advertising board.
In an alternative embodiment, establishing the facial feature database includes:
establishing a face feature recognition model based on a neural network training mode;
identifying a second face image based on the face feature identification model to obtain a second face feature;
matching the second face features with corresponding customer information to form a face feature database;
and aggregating a plurality of face feature database sub-databases to form a face feature database.
Illustratively, the second face image refers to a face image of a client acquired in advance by the camera device, on the premise of the client's permission.
The face feature database is composed of at least one face feature sub-database; each face image corresponds to one sub-database, and N (for example, 10) face feature data spaces are allocated to the sub-database of each face image, that is, each face image is allowed to have N sets of face feature data.
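A minimal sketch of this per-image sub-database with its N feature-data slots; the class names are hypothetical, and the policy of evicting the oldest set when the N slots are full is an assumption the application does not specify:

```python
N_SLOTS = 10  # feature-data spaces allocated per face image (example value from the text)

class FaceFeatureSubDatabase:
    """One sub-database per enrolled face image, holding up to N_SLOTS
    sets of face feature data for that client."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.feature_sets = []

    def add_features(self, features):
        if len(self.feature_sets) >= N_SLOTS:
            self.feature_sets.pop(0)   # assumption: the oldest set is evicted
        self.feature_sets.append(features)

def aggregate(sub_databases):
    # aggregating a plurality of sub-databases forms the face feature database
    return {sub.client_id: sub for sub in sub_databases}
```

Adding more than N feature sets keeps the sub-database bounded at N entries under this assumed eviction policy.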
In an optional embodiment, the building of the face feature recognition model based on the neural network training mode comprises:
acquiring a training sample and a test sample;
analyzing the training samples to extract training features;
constructing a convolutional neural network model and training the training features based on the convolutional neural network model to obtain a face feature recognition base model;
inputting the test sample into the face feature recognition base model to obtain correction data;
and adjusting the face feature recognition base model based on the correction data to obtain a face feature recognition model.
Illustratively, a face picture training sample and a face picture to be detected are obtained; face training features are extracted from the training sample, and the face features to be detected are extracted from the picture to be detected. A convolutional neural network model is constructed and trained on the face training features of the training sample to obtain a face feature recognition model. Specifically, the training comprises a first stage in which the convolutional neural network model is trained with the ArcFace loss function until the model converges, and a second stage in which it is trained with an intra-class/inter-class loss function. The face features to be detected are then compared using the trained face feature recognition model in order to recognize the face picture to be detected.
In summary, a face image training sample and a face image to be tested are first obtained; face training features are extracted from the training sample and the face features to be tested are extracted from the image to be tested; a convolutional neural network model is then constructed and trained on the face training features to obtain a face feature recognition model; finally, the face features to be tested are compared using the trained model to recognize the face image to be tested. This approach makes the intervals between classes more uniform, allows training data of more classes to be trained simultaneously, enables large-scale face data training, improves face recognition efficiency, and enhances the recognition effect.
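The first-stage ArcFace loss mentioned above works by adding an angular margin to the true-class logit before the softmax cross-entropy. A NumPy sketch of that margin step (the scale s = 64 and margin m = 0.5 are typical published defaults, not values from this application):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace additive angular margin: L2-normalize embeddings and class
    weights so their dot products are cosines, then add margin m to the angle
    of each sample's true class before rescaling by s."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = e @ w                                   # cosine similarity to each class centre
    theta = np.arccos(np.clip(cos, -1.0, 1.0))    # angles in [0, pi]
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m    # margin only on the true class
    return s * np.cos(theta + margin)             # feed these logits to softmax CE
```

Penalizing the true-class angle this way is what pushes embeddings of one class together and spreads the class centres apart, giving the more uniform inter-class intervals described above.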
An extracting unit 102, configured to extract feature key points of the first face image to obtain a first face feature.
In this optional embodiment, the eyebrows, eyes, pupils, nose, mouth, forehead, and chin in a face image are extracted as seven feature key points. The server can perform face positioning and cropping on the received first face image, and also supports accurate positioning and cropping of the seven key-point locations; it unifies the face sizes, then cuts and analyzes the face region to obtain the facial feature key-point data, which is used as the first face feature.
The matching unit 103 is configured to match the first facial feature with the facial feature database to obtain a matching result, where the matching result includes a matching success and a matching failure.
In an optional embodiment, matching the first facial feature with a facial feature database to obtain a matching result, where the matching result includes matching success and matching failure, includes:
calculating the association degree of the first face features and the face feature database to obtain face recognition accuracy;
judging whether the face recognition precision reaches a preset face recognition threshold value or not; and if the face recognition precision reaches the preset face recognition threshold, the matching result is confirmed to be successful in matching, and if the face recognition precision does not reach the preset face recognition threshold, the matching result is confirmed to be failed in matching.
In this optional embodiment, the face recognition threshold may be a face recognition accuracy of 90%, that is, when the face recognition accuracy reaches 90% corresponding to the face recognition threshold, it is determined that the matching is successful, and when the face recognition accuracy does not reach 90% corresponding to the face recognition threshold, it is determined that the matching is failed.
Illustratively, a matching query is performed between the acquired face feature data and the established face feature database, and the clients whose face recognition accuracy exceeds 90% are selected, for example:
an instantaneously captured image contains avatars A, B, and C, and the three avatars are matched and dispatched in turn to obtain corresponding results. Comparison matching of avatar A against old clients might yield a face recognition accuracy of 99% for Zhang San, 86% for Li Si, 83% for Wang Wu, and 88% for Zhao Er, so Zhang San, whose accuracy exceeds 90%, is selected as the matching result for avatar A.
If the recognition accuracy for an avatar is lower than 90%, the avatar may not be the client himself, and contacting that person for business would risk a complaint.
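The threshold filtering in this example can be sketched directly; the 90% value comes from the description, while the function and dictionary layout are illustrative:

```python
FACE_RECOGNITION_THRESHOLD = 0.90  # accuracy threshold from the description

def match_result(accuracies):
    """Given the face-recognition accuracy per database entry, return the
    entries that reach the threshold (matching success), or a matching
    failure if none do."""
    hits = {client: a for client, a in accuracies.items()
            if a >= FACE_RECOGNITION_THRESHOLD}
    return ("success", hits) if hits else ("failure", {})
```

Run on the worked example above, only Zhang San (99%) clears the 90% bar, so the other candidates are never contacted.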
In an alternative embodiment, a target client corresponding to the first facial feature may be determined based on the face recognition threshold, where the target client refers to a client that can be matched in the facial feature database, that is, an original old client.
And the association unit 104 is configured to determine the target customer based on the matching result and establish a binding relationship between the target customer and the product.
The product can be any of various commercial products, such as household appliances, food, or cosmetics, or an insurance product; the product is not limited by this specification. Before the binding relationship between the target customer and the product is established, a binding relationship between the image collector and the product is established.
The image collectors are marked cameras arranged on the display boards or advertising boards of the various products at an activity site; each camera has been set up in advance and bound in the system to the corresponding product and its product characteristics.
Taking driving insurance as an example, it is a product whose characteristics indicate whether it insures a designated car, whether it insures designated vehicle occupants, whether the premium can be returned, whether there is a coupon gift, and so on.
The binding operation is performed through associations between tables in the background system. For example: a driving insurance table maintains the driving insurance product; the product table maintains basic information and fields such as whether benefits exist; the specific benefit types are maintained in a separate benefit table; and the binding between products and benefits is maintained through a product characteristic table (which maintains the primary-key combinations of specific products and benefits).
Each image collector is correspondingly bound with a product and product characteristics.
Establishing the binding relationship between the target customer and the product means: since the image collector is already bound to a product, the target customer whose image is collected by that image collector is bound to the product corresponding to the image collector.
Illustratively, the matched client and the product serial number represented by the current camera number are bound and stored in a database; for example, the matched message that clients Zhang San and Li Si are interested in the driving insurance product is recorded and stored.
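The camera→product binding and the client→product binding described above can be sketched as follows; every table name, field, and identifier here is hypothetical, standing in for the background-system tables:

```python
# hypothetical background-system tables
camera_product = {"CAM-001": "driving-insurance"}        # image collector bound to a product
product_features = {                                     # product/benefit primary-key pairs
    ("driving-insurance", "premium-return"),
    ("driving-insurance", "coupon-gift"),
}

bindings = []  # stored client-product binding records

def bind_client(camera_id, client_id):
    """Because the image collector is already bound to a product, a client
    matched from that collector's image is bound to the same product and
    the pair is stored."""
    product = camera_product[camera_id]
    bindings.append({"client": client_id, "product": product})
    return product
```

A matched client captured by CAM-001 is thus automatically associated with the driving insurance product without any per-image lookup of which product was being advertised.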
By binding the target customer to the product, this embodiment of the application prevents the situation in which an acquired face image cannot be related to a product and the potential customers of a specific product cannot be accurately located.
The obtaining unit 105 is configured to establish a micro-expression recognition model based on a neural network training manner to obtain the interest level of the target customer in the product according to the binding relationship.
In an optional embodiment, the building a micro expression recognition model based on neural network training to obtain the interest level of the target customer in the product according to the binding relationship includes:
obtaining the retention time of the target client for observing the corresponding product according to the binding relationship;
and dividing the interest degree of the target customer in the product based on the residence time.
For example, if a customer stays at the activity site for more than a set time, such as one second, a face image is captured by the camera under the billboard and transmitted to the application server as a byte stream for parsing.
The dwell-time intervals are set in advance, for example: 0–1 seconds indicates not interested, 1–5 seconds somewhat interested, 5–10 seconds very interested, and more than 10 seconds extremely interested.
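The interval scheme above maps directly to a small lookup; the handling of the exact boundary values (closed on the left or the right) is an assumption the description does not settle:

```python
def interest_level(stay_seconds):
    """Map dwell time to the interest buckets given in the description.
    Boundary handling (whether 1 s falls in the lower or upper bucket)
    is an assumption."""
    if stay_seconds <= 1:
        return "not interested"
    if stay_seconds <= 5:
        return "somewhat interested"
    if stay_seconds <= 10:
        return "very interested"
    return "extremely interested"
```

For instance, a customer who lingers for eight seconds would be bucketed as very interested.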
According to this embodiment of the application, the camera captures the times at which a face enters and leaves its field of view and transmits the captured information back to the server in real time for recording; when the face leaves, the camera is triggered to transmit information to the server again. The server calculates the stay duration T1 from the leave and enter times and records it if T1 exceeds the set threshold; invalid data that does not reach the required stay duration is cleared, which eliminates its interference and reduces the amount of subsequent image-recognition computation.
In an optional implementation manner, the obtaining the interest level of the target customer for the product according to the binding relationship further includes:
establishing a micro expression recognition model;
sending a first face image corresponding to a target client to the micro expression recognition model according to the binding relationship;
extracting a characteristic information sequence used for indicating micro expression identification in the first face image based on the micro expression recognition model;
establishing matching information of the characteristic information sequence and a preset interest classification;
and mapping the interest degree of the target client corresponding to the first face image in the product according to the matching information.
In an optional embodiment, the artificial intelligence based customer mining device further comprises a generating unit and a pushing unit.
The generating unit generates an information pushing scheme corresponding to the product according to the interestingness.
The pushing unit pushes the product information to the corresponding target client based on the product information pushing scheme.
Illustratively, specific information about the related products is sent to somewhat interested, very interested, and extremely interested target customers according to the product categories, making it convenient for them to learn more about the products.
In an optional embodiment, a follow-up customer service agent can further be assigned to the target customer; the agent contacts the customer to provide product consultation. Customers with a high interest level in a certain product are extracted from the face feature database at regular intervals and dispatched to the customer service staff in charge of the corresponding product; the agent can then make a return visit to confirm, and follow-up operations such as invitations can be carried out for customers who are genuinely interested.
In an optional implementation, product managers and activity organizers log in to a background reporting system to check customers' feedback on products and take corresponding measures in time. A product manager can check how customer interest is distributed across products, which helps the product manager adjust product characteristics promptly.
This embodiment of the application trains a convolutional neural network on a large face image data set, uses the trained face feature recognition model to identify the acquired client face images and screen out the target customers, and then judges the target customers' interest level in the products from their stay durations, so that the new needs of target customers can be learned and updated dynamically, making it convenient for follow-up service staff to provide clients with more targeted product consultation and communication services.
EXAMPLE III
Embodiments of the present application further provide a computer-readable storage medium, on which computer-readable instructions are stored, and when executed by a processor, the computer-readable instructions implement the steps in the artificial intelligence based client mining method embodiment described above, such as S10-S16 shown in fig. 1:
and S10, acquiring the face image to establish a face feature database and acquiring a first face image.
And S11, extracting the feature key points of the first face image to obtain first face features.
And S12, matching the first facial features with the facial feature database to obtain matching results, wherein the matching results comprise matching success and matching failure.
And S13, determining the target client corresponding to the first face characteristic based on the matching result.
And S14, establishing the binding relationship between the target client and the product.
And S15, establishing a micro expression recognition model based on a neural network training mode to obtain the interest degree of the target client in the product according to the binding relationship.
And S16, ending the execution.
Example four
Fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present disclosure. In a preferred embodiment of the present application, the electronic device 20 includes, but is not limited to, a memory 201 and a processor 202, together with computer-readable instructions stored in the memory 201 and executable on the processor 202, such as a program for the artificial intelligence based customer mining method.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 20 and does not constitute a limitation of the electronic device 20; it may include more or fewer components than those shown, combine certain components, or have different components; e.g., the electronic device 20 may also include input/output devices, network access devices, buses, etc.
The processor 202 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 202 is the control center and arithmetic core of the electronic device 20; it connects the various parts of the entire electronic device 20 using various interfaces and lines, and executes the operating system of the electronic device 20 and the various installed application programs, program code, and so on.
Illustratively, the computer-readable instructions may be divided into one or more modules/units, which are stored in the memory 201 and executed by the processor 202 to implement the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, which describe the execution of the computer-readable instructions in the electronic device 20. For example, the computer-readable instructions may be divided into a construction unit 101, an extraction unit 102, a matching unit 103, an association unit 104, and an acquisition unit 105.
The memory 201 may be used to store computer programs and/or modules/units, and the processor 202 implements the various functions of the electronic device 20 by running or executing the computer programs and/or modules/units stored in the memory 201 and invoking the data stored in the memory 201. The memory may include volatile memory and non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another storage device.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments of the present application may also be implemented by hardware related to computer readable instructions, which may be stored in a computer readable storage medium, and when the computer readable instructions are executed by a processor, the steps of the respective method embodiments may be implemented.
The computer-readable instructions described herein may be downloaded from a computer-readable storage medium to a corresponding computing/processing device, or downloaded over a network (e.g., the Internet, a local area network, a wide area network, or a wireless network) to an external computer or external storage device. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers; a network adapter card or network interface in each computing/processing device receives the computer-readable instructions from the network and forwards them for storage in a computer-readable storage medium within the respective device.
Computer-readable instructions for carrying out operations of the present application may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, configuration data for an integrated circuit, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and procedural programming languages such as the "C" language. The computer-readable instructions may execute entirely on the user's computer as a stand-alone software package, partially on the user's computer and partially on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, including, for example, a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable instructions by utilizing their state information to personalize the electronic circuit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The description of the various embodiments of the present application has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present application. The embodiments were chosen and described in order to best explain the principles of the application and its practical application, and to enable others of ordinary skill in the art to understand the application in its various embodiments with the various modifications suited to the particular use contemplated.
Claims (10)
1. A customer mining method based on artificial intelligence is characterized by comprising the following steps:
collecting a face image to establish a face feature database and obtain a first face image;
extracting feature key points of the first face image to obtain first face features;
matching the first facial features with the facial feature database to obtain matching results, wherein the matching results comprise matching success and matching failure;
determining a target customer based on the matching result and establishing a binding relationship between the target customer and the product;
and establishing a micro expression recognition model based on a neural network training mode to obtain the interest degree of the target customer for the product according to the binding relation.
2. The artificial intelligence based client mining method of claim 1, wherein the collecting a face image to establish a face feature database and obtain a first face image comprises:
establishing a face feature recognition model based on a neural network training mode;
identifying a second face image based on the face feature identification model to obtain a second face feature;
matching the second face features with corresponding customer information to form a face feature database;
aggregating a plurality of the face feature sub-databases to form the face feature database.
3. The artificial intelligence based client mining method of claim 2, wherein the building of the face feature recognition model based on neural network training comprises:
acquiring a training sample and a test sample;
parsing the training samples to extract training features;
constructing a convolutional neural network model;
training the training features based on the convolutional neural network model to obtain a face feature recognition base model;
inputting the test sample into the face feature recognition base model to obtain correction data;
and adjusting the face feature recognition base model based on the correction data to obtain the face feature recognition model.
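The train/test/correct loop of claim 3 is sketched below with a deliberately simplified stand-in: a nearest-centroid classifier replaces the convolutional neural network so the example stays self-contained, and the "correction data" is reduced to test-set accuracy. None of these implementation details comes from the claim itself.

```python
def train_base_model(train_features, train_labels):
    """Stand-in for CNN training: learn one centroid per class label."""
    centroids = {}
    for label in set(train_labels):
        rows = [f for f, l in zip(train_features, train_labels) if l == label]
        dim = len(rows[0])
        centroids[label] = [sum(r[i] for r in rows) / len(rows) for i in range(dim)]
    return centroids

def predict(model, feature):
    """Assign the label whose centroid is closest to the feature."""
    return min(model, key=lambda label: sum((a - b) ** 2
                                            for a, b in zip(feature, model[label])))

def correction_data(model, test_features, test_labels):
    """Run the test sample through the base model; the resulting accuracy
    is the correction signal used to decide how to adjust the model."""
    hits = sum(predict(model, f) == l for f, l in zip(test_features, test_labels))
    return hits / len(test_labels)
```

In the claimed method, the adjustment step would retrain or fine-tune the network when the correction data falls below an acceptance criterion; with this stand-in it would simply mean re-running `train_base_model` on augmented data.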
4. The artificial intelligence based customer mining method of claim 1, wherein the establishing a micro-expression recognition model based on a neural network training mode to obtain the interest level of the target customer in the product according to the binding relationship comprises:
obtaining, according to the binding relationship, the dwell time during which the target customer observes the corresponding product;
and dividing the interest level of the target customer in the product based on the dwell time.
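Claim 4's division of interest levels by dwell time can be as simple as fixed buckets; the 5-second and 20-second boundaries below are illustrative assumptions, not claim language.

```python
def interest_level_from_dwell_time(dwell_seconds):
    """Divide the target customer's interest in a product by how long
    the customer was observed looking at it."""
    if dwell_seconds < 5:
        return "low"
    if dwell_seconds < 20:
        return "medium"
    return "high"
```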
5. The artificial intelligence based customer mining method of claim 4, wherein the establishing a micro-expression recognition model based on a neural network training mode to obtain the interest level of the target customer in the product according to the binding relationship further comprises:
establishing a micro-expression recognition model;
inputting the first face image corresponding to the target customer into the micro-expression recognition model according to the binding relationship;
extracting a feature information sequence used for indicating micro-expression recognition in the first face image;
establishing matching information between the feature information sequence and a preset interest classification;
and mapping, according to the matching information, the interest level of the target customer corresponding to the first face image in the product.
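One way to realize the matching step of claim 5 is to compare the extracted feature information sequence against a stored prototype sequence per preset interest class. The per-frame flattening and squared-distance matching below are assumptions, not claim language.

```python
def classify_interest(feature_sequence, interest_prototypes):
    """Map a micro-expression feature sequence (a list of per-frame feature
    vectors) to the preset interest class with the nearest prototype."""
    flat = [x for frame in feature_sequence for x in frame]
    best_class, best_dist = None, float("inf")
    for name, prototype in interest_prototypes.items():
        proto_flat = [x for frame in prototype for x in frame]
        dist = sum((a - b) ** 2 for a, b in zip(flat, proto_flat))
        if dist < best_dist:
            best_class, best_dist = name, dist
    return best_class
```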
6. The artificial intelligence based customer mining method of claim 1, wherein the matching the first face features with the face feature database to obtain the matching result comprises:
calculating the degree of association between the first face features and the face feature database to obtain a face recognition precision;
and judging whether the face recognition precision reaches a preset face recognition threshold; if the face recognition precision reaches the preset face recognition threshold, confirming that the matching result is a matching success, and if not, confirming that the matching result is a matching failure.
7. The artificial intelligence based customer mining method of claim 1, wherein the method further comprises:
generating a product information pushing scheme corresponding to the product according to the interest level;
and pushing product information to the corresponding target customer based on the product information pushing scheme.
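Claim 7's pushing scheme can be sketched as a mapping from interest level to push channel; the channel names and record layout below are illustrative assumptions.

```python
def build_push_scheme(target_customers):
    """Generate a product information pushing scheme: one push record per
    target customer, with the channel chosen by interest level."""
    channel_by_interest = {"high": "direct call",
                          "medium": "app notification",
                          "low": "monthly newsletter"}
    return [{"customer": c["id"], "product": c["product"],
             "channel": channel_by_interest[c["interest"]]}
            for c in target_customers]
```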
8. An artificial intelligence based customer mining device, comprising:
a construction unit, configured to collect a face image to establish a face feature database and obtain a first face image;
an extraction unit, configured to extract feature key points of the first face image to obtain first face features;
a matching unit, configured to match the first face features with the face feature database to obtain a matching result, wherein the matching result is either a matching success or a matching failure;
an association unit, configured to determine a target customer based on the matching result and establish a binding relationship between the target customer and a product;
and an obtaining unit, configured to establish a micro-expression recognition model based on a neural network training mode to obtain, according to the binding relationship, the interest level of the target customer in the product.
9. An electronic device, comprising:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based customer mining method of any of claims 1 to 7.
10. A computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor, implement the artificial intelligence based customer mining method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111274335.9A CN113920564A (en) | 2021-10-29 | 2021-10-29 | Client mining method based on artificial intelligence and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113920564A true CN113920564A (en) | 2022-01-11 |
Family
ID=79244007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111274335.9A Pending CN113920564A (en) | 2021-10-29 | 2021-10-29 | Client mining method based on artificial intelligence and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113920564A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115631525A (en) * | 2022-10-26 | 2023-01-20 | 万才科技(杭州)有限公司 | Insurance instant matching method based on face edge point recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858958A (en) * | 2019-01-17 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Aim client orientation method, apparatus, equipment and storage medium based on micro- expression |
CN111160962A (en) * | 2019-12-20 | 2020-05-15 | 恒银金融科技股份有限公司 | Micro-expression recognition marketing pushing method and system |
CN111178966A (en) * | 2019-12-30 | 2020-05-19 | 武汉零客思唯科技有限公司 | Latent customer behavior analysis method and system based on face recognition |
CN112417956A (en) * | 2020-10-14 | 2021-02-26 | 广州虎牙科技有限公司 | Information recommendation method and device, electronic equipment and computer-readable storage medium |
2021-10-29: Patent application CN202111274335.9A filed in China (published as CN113920564A); status: Pending
Non-Patent Citations (1)
Title |
---|
SUN, LIFAN: "Frontier Theories and Applications of Artificial Intelligence" (《人工智能前沿理论与应用》), China Atomic Energy Press, 30 April 2020, pages 189-190 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107742100B (en) | A kind of examinee's auth method and terminal device | |
US20190392587A1 (en) | System for predicting articulated object feature location | |
CN111582342B (en) | Image identification method, device, equipment and readable storage medium | |
Chandran et al. | Missing child identification system using deep learning and multiclass SVM | |
WO2019119396A1 (en) | Facial expression recognition method and device | |
CN112101123B (en) | Attention detection method and device | |
CN114219971B (en) | Data processing method, device and computer readable storage medium | |
CN111860377A (en) | Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium | |
CN108229375B (en) | Method and device for detecting face image | |
CN111709382A (en) | Human body trajectory processing method and device, computer storage medium and electronic equipment | |
CN115222427A (en) | Artificial intelligence-based fraud risk identification method and related equipment | |
CN115661580A (en) | Convolutional neural network-based traditional Chinese medicine decoction piece image identification method and system | |
CN111738199A (en) | Image information verification method, image information verification device, image information verification computing device and medium | |
CN113920564A (en) | Client mining method based on artificial intelligence and related equipment | |
CN113269179B (en) | Data processing method, device, equipment and storage medium | |
CN115222443A (en) | Client group division method, device, equipment and storage medium | |
CN114639152A (en) | Multi-modal voice interaction method, device, equipment and medium based on face recognition | |
CN109801394B (en) | Staff attendance checking method and device, electronic equipment and readable storage medium | |
CN112101191A (en) | Expression recognition method, device, equipment and medium based on frame attention network | |
CN112949305B (en) | Negative feedback information acquisition method, device, equipment and storage medium | |
CN114155400B (en) | Image processing method, device and equipment | |
CN116205723A (en) | Artificial intelligence-based face tag risk detection method and related equipment | |
CN116524606A (en) | Face living body identification method and device, electronic equipment and storage medium | |
CN112580505B (en) | Method and device for identifying network point switch door state, electronic equipment and storage medium | |
CN115482571A (en) | Face recognition method and device suitable for shielding condition and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||