
CN109558773B - Information identification method and device and electronic equipment - Google Patents

Information identification method and device and electronic equipment

Info

Publication number
CN109558773B
Authority
CN
China
Prior art keywords
target object
historical
current target
same
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710884606.XA
Other languages
Chinese (zh)
Other versions
CN109558773A (en)
Inventor
朱碧军
贾海军
李文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201710884606.XA priority Critical patent/CN109558773B/en
Priority to TW107119428A priority patent/TW201915830A/en
Priority to PCT/CN2018/106102 priority patent/WO2019062588A1/en
Publication of CN109558773A publication Critical patent/CN109558773A/en
Application granted granted Critical
Publication of CN109558773B publication Critical patent/CN109558773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/30 Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an information identification method and device and electronic equipment, relating to the technical field of computer applications. An acquired image is detected to determine at least one current target object; whether the at least one current target object is the same as a historical target object in a historical detection result is judged; and only a current target object that differs from the historical target objects in the historical detection result is identified. The technical solution provided by the embodiment of the application avoids unnecessary repeated identification and improves identification efficiency.

Description

Information identification method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of computer application, in particular to an information identification method and device and electronic equipment.
Background
Application fields such as attendance, access control, and monitoring involve the need to quickly confirm a person's identity. At present, identity authentication is usually performed using biological characteristics of the human body, among which face recognition is the most widely applied.
Face recognition is a biometric technology that identifies a person based on facial features. Face detection is performed first to locate a face, and face recognition is then performed on the detected face. Taking an attendance application as an example, an attendance system can determine whether an acquired image includes a face by detecting the image; based on the detected face, face recognition is performed against a registered employee database to determine the employee information corresponding to the face, thereby confirming the person's identity.
Because images are usually acquired continuously, face recognition is performed on every frame, while several adjacent frames may capture the same user; this causes repeated work and affects recognition efficiency.
Disclosure of Invention
The embodiment of the application provides an information identification method and device and electronic equipment, and aims to solve the technical problem of low identification efficiency in the prior art.
In a first aspect, an embodiment of the present application provides an information identification method, including:
detecting the acquired image to obtain at least one current target object;
determining whether the at least one current target object is the same as a historical target object in historical detection results;
and identifying the current target object different from the historical target object in the historical detection result.
Optionally, the method further comprises:
the current target object identical to the history target object in the history detection result is not identified.
Optionally, the determining whether the at least one current target object is the same as a historical target object in the historical detection result includes:
and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, determining that any current target object is the same as any historical target object, and otherwise, determining that any current target object is different from any historical target object.
Optionally, the not identifying the current target object that is the same as the historical target object in the historical detection result includes:
acquiring any current target object and object characteristics of any historical target object which is the same as any current target object in historical detection results;
determining whether the object characteristics of any current target object are the same as the object characteristics of any historical target object which is the same as the current target object;
if yes, not identifying any current target object;
if not, identifying any current target object.
Optionally, the determining whether the at least one current target object is the same as a historical target object in the historical detection results includes:
respectively acquiring object characteristics of the at least one current target object;
and determining whether a historical target object with the object characteristics identical to the object characteristics of the at least one current target object exists in the historical detection result.
Optionally, the not identifying the current target object that is the same as the historical target object in the historical detection result includes:
determining whether the historical target object identical to any current target object in the historical detection result is successfully identified;
if yes, not identifying any current target object;
and if not, identifying any current target object.
Optionally, the determining whether the location area where any current target object is located is consistent with the location area where any historical target object is located in the historical detection result includes:
and determining whether the position deviation between the position area of any current target object and the position area of any historical target object in the historical detection results is within a preset range.
Optionally, the target object is a human face.
In a second aspect, an embodiment of the present application provides an information identifying apparatus, including:
the detection module is used for detecting the acquired image so as to obtain at least one current target object;
the judging module is used for determining whether the at least one current target object is the same as a historical target object in a historical detection result;
and the first identification module is used for identifying the current target object which is different from the historical target object in the historical detection result.
Optionally, the method further comprises:
and the second identification module is used for not identifying the current target object which is the same as the historical target object in the historical detection result.
Optionally, the determining module is specifically configured to:
and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, determining that any current target object is the same as any historical target object, and otherwise, determining that any current target object is different from any historical target object.
Optionally, the second identification module is specifically configured to obtain object features of any current target object and of any historical target object in the historical detection result that is the same as the current target object; determine whether the object features of the current target object are the same as the object features of that historical target object; if yes, not identify the current target object; and if not, identify the current target object.
Optionally, the determining module is specifically configured to: respectively acquiring object characteristics of the at least one current target object; and determining whether a historical target object with the same object characteristics as the at least one current target object exists in the historical detection result.
Optionally, the second identification module is specifically configured to determine whether, in the historical detection result, the historical target object that is the same as any current target object is successfully identified; if yes, not identifying any current target object; and if not, identifying any current target object.
Optionally, the determining module is specifically configured to: and determining whether the position deviation between the position area of any current target object and the position area of any historical target object in the historical detection results is within a preset range.
In a third aspect, an embodiment of the present application provides an electronic device, including a processing component and a memory connected to the processing component;
the memory stores one or more computer program instructions for invocation and execution by the processing component;
the processing component is to:
detecting the acquired images to determine at least one current target object;
determining whether the at least one current target object is the same as a historical target object in historical detection results;
and identifying the current target object different from the historical target object in the historical detection result.
Optionally, the electronic device further comprises an acquisition component connected to the processing component and used for acquiring images;
the processing component detects the acquired image to determine at least one current target object, specifically, detects the image acquired by the acquisition component to determine at least one current target object.
In the embodiment of the application, the acquired image is detected to determine at least one current target object; determining whether the at least one current target object is the same as a target object in the historical detection result; only the current target object different from the target object in the historical detection result is identified, so that the identification time can be reduced, and the identification efficiency can be improved.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart illustrating an embodiment of an information recognition method provided herein;
FIG. 2 is a flow chart illustrating a further embodiment of an information recognition method provided herein;
FIG. 3 is a flow chart illustrating a further embodiment of an information recognition method provided by the present application;
FIG. 4 is a schematic diagram illustrating an embodiment of an information recognition system provided herein;
FIG. 5 is a schematic diagram illustrating an information recognition system according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram illustrating an embodiment of an information recognition apparatus provided in the present application;
FIG. 7 is a schematic structural diagram of an information recognition apparatus according to another embodiment of the present application;
FIG. 8 is a schematic diagram illustrating an embodiment of an electronic device provided by the present application;
fig. 9 shows a schematic structural diagram of another embodiment of an electronic device provided by the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims, and drawings of this application, a number of operations occur in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or performed in parallel. The operation numbers, e.g., 101, 102, etc., are merely used to distinguish operations and do not themselves represent any order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that the descriptions of "first", "second", etc. in this document are used to distinguish different messages, devices, modules, etc.; they do not represent a sequential order, nor do they require "first" and "second" to be of different types.
The technical solution of the embodiment of the application can be applied to security fields such as attendance, access control, monitoring, and security protection for identity recognition. The target object in the embodiment of the application can be a human face, and other biological characteristics of the human body are certainly not excluded. Similar to the face recognition process, the target object first needs to be detected, and the detected target object is then recognized to confirm its identity.
By the technical scheme of the embodiment of the application, the identification time can be shortened, the identification efficiency is improved, and the method and the device are particularly suitable for scenes for simultaneously identifying a plurality of target objects.
Taking face recognition as an example, as described in the background art, during face recognition images are acquired continuously and face detection can be performed on every frame. Within a short time, usually several hundred milliseconds, the position of the acquired object, that is, the user, usually does not change much; in other words, adjacent frames may all capture the same user or the same batch of users. In the prior art, face detection and face recognition are performed on every frame even though the detected face has already been successfully recognized, which causes repeated recognition, increases recognition time, and reduces recognition efficiency.
In seeking to improve recognition efficiency, the inventors found that the recognition process itself is very complex. Taking face recognition as an example, facial features first need to be extracted to construct a face feature template, which is then compared with the face feature templates stored in a database to determine the identity information corresponding to the matching template, so as to complete identity authentication.
Accordingly, the technical solution of the application is provided. In the embodiment of the application, the acquired image is detected to determine at least one current target object; whether the at least one current target object is the same as a target object in a historical detection result is determined; and only the current target object different from the target objects in the historical detection result is identified. This reduces identification time and improves identification efficiency, and the improvement is particularly significant when multiple target objects need to be identified simultaneously.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of an information identification method provided in an embodiment of the present application, where the method may include the following steps:
101: and detecting the acquired image to obtain at least one current target object.
For convenience of description, the target object detected from the acquired image is named the "current target object". The image obtained by each acquisition can be processed according to the technical solution of the application.
The acquired image is detected, so that a current detection result can be obtained, and the current detection result comprises at least one current target object.
When the target object is a human face, the human face detection is performed on the acquired image, and the human face in the image can be identified and acquired through the human face detection.
Various algorithms may be used to extract the target object from the image, for example, a detection algorithm based on histogram rough segmentation and singular value features, a detection algorithm based on wavelet transform, the AdaBoost algorithm, and the like; the basic process may be to train a classifier using sample images so as to detect the target object.
In one implementation, multiple target objects may be included in an image.
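As an illustration of this detection step, the sketch below runs an off-the-shelf Haar cascade face detector over one acquired frame. OpenCV, the cv2 module, and the haarcascade_frontalface_default.xml model are assumptions made for the example only; the embodiment does not prescribe a particular detector and names histogram rough segmentation, wavelet transform, and AdaBoost only as options.

```python
# Sketch of step 101: detect the current target objects (here, faces) in one
# acquired image. The Haar cascade is an illustrative stand-in for any of the
# detection algorithms mentioned above; it is not required by the embodiment.
import cv2

def detect_current_targets(image_bgr):
    """Return a list of (x, y, w, h) boxes, one per detected current target object."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in boxes]

# Hypothetical usage on one frame of the continuously acquired stream:
# frame = cv2.imread("frame_0001.jpg")
# current_targets = detect_current_targets(frame)   # may contain several faces
```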
102: determining whether the at least one current target object is the same as a historical target object in the historical detection result.
For convenience of description, the target object in the history detection result is named "history target object".
The historical detection result is obtained by detecting a previously acquired image.
The historical detection result may specifically refer to the previous detection result, that is, the result of detecting the image acquired at the previous time.
In this embodiment, the current detection result and the historical detection result are compared, and whether any current target object is the same as any historical target object in the historical detection result is determined.
There may be various implementation manners for determining whether the target objects are the same, which will be described in detail in the following embodiments.
103: and identifying the current target object different from the historical target object in the historical detection result.
The current target object which is the same as the historical target object in the historical detection result is not identified. That is, in this embodiment, only the current target object different from the historical target object in the historical detection result is identified.
Optionally, the identification is continued for the current target object which is the same as the historical target object in the historical detection result.
The identification of the current target object may include, for example, extracting object features of the current target object, searching and matching the extracted features against the feature templates stored in the database, and calculating the template similarity. If the similarity is greater than a first predetermined value, it may be determined that the current target object passes the identification, and the identity information corresponding to the matching feature template is the identity information of the current target object; if the similarity is smaller than a second predetermined value, it may be determined that the current target object does not pass the identification; and if the similarity is smaller than the first predetermined value but larger than the second predetermined value, it may be determined that the identification of the current target object fails, the identity information of the current target object cannot be determined, and re-identification may be required.
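A minimal sketch of this matching logic is given below. It assumes the object features have already been extracted as L2-normalized vectors, uses cosine similarity as the template similarity, and picks the two threshold values arbitrarily; none of these choices is fixed by the embodiment.

```python
# Sketch of the recognition step: compare an extracted feature against stored
# feature templates and apply the first/second predetermined similarity values.
import numpy as np

FIRST_THRESHOLD = 0.80   # similarity above this: the identification passes
SECOND_THRESHOLD = 0.40  # similarity below this: the identification does not pass

def identify(feature, template_db):
    """template_db maps identity -> feature template; vectors assumed L2-normalized."""
    best_id, best_sim = None, -1.0
    for identity, template in template_db.items():
        sim = float(np.dot(feature, template))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = identity, sim
    if best_sim > FIRST_THRESHOLD:
        return {"status": "pass", "identity": best_id}      # identity confirmed
    if best_sim < SECOND_THRESHOLD:
        return {"status": "no_match", "identity": None}     # not in the database
    return {"status": "failed", "identity": None}           # ambiguous, re-identify later
```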
According to the embodiment, only any current target object different from any historical target object in the historical detection results is identified, and if any current target object is the same as any historical target object in the historical detection results and indicates that the current target object is identified, the current target object does not need to be identified, so that the identification time is shortened, and the identification efficiency is improved.
As a possible implementation manner, the determining whether the at least one current target object is the same as a historical target object in the historical detection result may include:
respectively acquiring object characteristics of the at least one current target object;
and determining whether a historical target object with the same object characteristics as the at least one current target object exists in the historical detection result.
The object features can be coarse-grained features and are extracted by a feature extraction algorithm, the object features are usually represented by multi-dimensional vector data, and the dimensionality of the object features can be lower than that of the object features extracted in the identification process.
The feature extraction algorithm may be, for example, LBP (Local Binary Patterns), a method based on geometric features, a method based on statistical features, and the like, which are the same as those in the prior art and are not described herein again.
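The sketch below illustrates such a coarse-grained object feature as a low-dimensional LBP histogram and a cheap same/different test on it. The use of scikit-image's local_binary_pattern and the histogram-intersection threshold are assumptions for the example, not requirements of the embodiment.

```python
# Sketch of a coarse-grained object feature: an LBP histogram over a grayscale
# crop, used only to decide whether two detections are the same object.
import numpy as np
from skimage.feature import local_binary_pattern

def coarse_feature(gray_patch, points=8, radius=1):
    """gray_patch: 2-D uint8 crop of one target object; returns a short histogram."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def same_object(feature_a, feature_b, threshold=0.9):
    """Histogram intersection; treat the objects as the same if it is high enough."""
    return float(np.minimum(feature_a, feature_b).sum()) >= threshold
```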
As another possible implementation manner, object detection may be performed on every frame of image, and the detection interval between two adjacent frames is very short, typically several tens of milliseconds, so the position of the user does not change much even if the user is moving. Therefore, whether the target objects in two successive detection results are the same target object can be determined by comparing their positions. As shown in fig. 2, which is a flowchart of another embodiment of an information identification method provided in the embodiment of the present application, the method may include the following steps:
201: and detecting the acquired image to obtain at least one current target object.
202: and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, executing step 203, and if not, executing step 204.
In this embodiment, the historical detection result specifically refers to a previous detection result.
Because image acquisition is performed all the time, when a user is within the acquisition range of the acquisition device, the acquisition device can automatically search for and capture an image containing the user; when face recognition is required, a face image of the user is captured.
Alternatively, it may be determined whether the position deviation between the position area where any current target object is located and the position area where any historical target object is located in the historical detection result is within a preset range.
The position area where the target object is located may be represented by position coordinates of preset feature points in the target object, and taking the target object as a face as an example, the preset feature points may be, for example, a left eye or a right eye, a nose, a mouth, and the like in the face.
In order to ensure the identification accuracy, whether the time difference between the current detection time and the historical detection time is within an allowable range can be judged, if yes, whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection result is judged; otherwise, the at least one current target object is directly identified.
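A minimal sketch of this position comparison, together with the optional time-difference guard, is shown below. Representing the position area by the box centre instead of a facial feature point, and the concrete tolerances, are assumptions made for illustration.

```python
# Sketch of step 202: decide whether a current target object and a historical
# target object occupy consistent position areas, and whether the position
# comparison should be trusted given the time gap between the two detections.
def same_position(current_box, historical_box, max_offset_px=30):
    """Boxes are (x, y, w, h); compare the offset between their centre points."""
    cx_cur = current_box[0] + current_box[2] / 2.0
    cy_cur = current_box[1] + current_box[3] / 2.0
    cx_his = historical_box[0] + historical_box[2] / 2.0
    cy_his = historical_box[1] + historical_box[3] / 2.0
    offset = ((cx_cur - cx_his) ** 2 + (cy_cur - cy_his) ** 2) ** 0.5
    return offset <= max_offset_px  # within the preset range: positions consistent

def position_check_applies(current_time_s, historical_time_s, max_gap_s=0.5):
    """Only rely on the position comparison if the detections are close in time."""
    return abs(current_time_s - historical_time_s) <= max_gap_s
```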
203: and determining that any current target object is the same as any historical target object in the historical detection results.
204: determining that the any current target object is different from the any historical target object in the historical detection results.
205: the current target object identical to the history target object in the history detection result is not identified.
206: and identifying the current target object different from the historical target object in the historical detection result.
That is, only the current target object different from each target object in the historical detection result is identified.
In this embodiment, whether the target objects in two successive detection results are the same can be determined by comparing their positions, so that the same target object does not need to be identified again and only different target objects are identified; the identification workload and identification time are thus reduced, and the identification efficiency is improved.
In order to ensure the identification accuracy, in some embodiments, the not identifying the current target object that is the same as the historical target object in the historical detection result may include:
acquiring any current target object and object characteristics of any historical target object which is the same as any current target object in historical detection results;
determining whether the object characteristics of any current target object are the same as the object characteristics of any historical target object;
if yes, not identifying any current target object;
and if not, identifying any current target object.
That is, for any current target object and any historical target object in the historical detection result, which is the same as any current target object, the verification is further performed by combining the object characteristics.
The object features can be rough features and are extracted and obtained by a feature extraction algorithm, the object features are usually represented by multi-dimensional vector data, and the dimensionality of the object features can be lower than that of the object features extracted in the identification process.
The feature extraction algorithm may be, for example, an LBP algorithm, a method based on geometric features, a method based on statistical features, etc., and is the same as the prior art, and is not described herein again.
In addition, as can be seen from the above description, the recognition result obtained by recognizing the target object may be recognition success or recognition failure, where recognition success includes passing or not passing the recognition. Recognition passes when identity information corresponding to the target object exists in the database, that is, the similarity between the object feature of the target object and a feature template in the database is greater than the first predetermined value; recognition does not pass when no such identity information exists in the database, that is, the similarity between the object feature of the target object and the feature templates in the database is smaller than the second predetermined value. Recognition failure indicates that the identity information of the target object cannot be confirmed, and recognition needs to be performed again.
Therefore, if the identification of the historical target object that is the same as the current target object failed in the historical detection result, the current target object still needs to be identified in order to determine its identity information. Accordingly, in order to further ensure identification accuracy, in some embodiments, the not identifying the current target object that is the same as the historical target object in the historical detection result may include:
determining whether the historical target object identical to any current target object in the historical detection result is successfully identified;
if yes, not identifying any current target object;
and if not, identifying any current target object.
Optionally, in order to facilitate identification, an identification success flag may be set for the successfully identified target object, so that determining whether the historical target object identical to any current target object in the historical detection result is successfully identified may be:
and determining whether the historical target object which is the same as any current target object in the historical detection result is provided with a successful identification mark.
Therefore, after any current target object is identified, the identification success flag of the current target object which is successfully identified can be set based on the identification result.
Furthermore, in order to further improve the identification convenience, in some embodiments, the not identifying the current target object which is the same as the historical target object in the historical detection result may include:
setting different object numbers for the at least one current target object, wherein a current target object that is the same as a target object in the historical detection result is given the same object number as that historical target object;
the current target object having the same object number as the object number of the history target object in the history detection result is not identified.
If the historical target object with the same object number as any current target object in the historical detection result is provided with an identification success mark, any current target object can not be identified.
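The sketch below shows one way to carry object numbers and identification success flags across detection results so that only unmatched current target objects, or matched ones whose earlier identification failed, are re-identified; the data layout and names are illustrative only.

```python
# Sketch of bookkeeping with object numbers and identification success flags.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedObject:
    number: int                    # object number, shared with the matching historical object
    box: tuple                     # (x, y, w, h) position area
    recognized: bool = False       # identification success flag
    identity: Optional[str] = None

def needs_recognition(current: TrackedObject, history: list) -> bool:
    """Return True if `current` must go through full recognition."""
    for past in history:
        if current.number == past.number:   # same object number as a historical object
            return not past.recognized      # skip only if it was already recognized
    return True                             # no matching historical object: recognize it
```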
The technical solution in the embodiment of the application can be applied to application fields such as attendance and access control, and is certainly also applicable to various other security fields, such as identity recognition for documents, security detection and monitoring in important places, identity recognition in smart cards, and network security control such as computer login.
In practical applications, the target object described in the embodiment of the present application may specifically refer to a human face, and the following describes the technical solution of the present application by taking the target object as a human face as an example.
As shown in fig. 3, a flowchart of another embodiment of an information identification method provided in the embodiment of the present application may include the following steps:
301: and carrying out face detection on the acquired images to obtain at least one current face.
302: and (3) determining whether the position area of any current face is consistent with the position area of any historical face in the previous detection result, if so, executing step 303, and if not, executing step 304.
Optionally, it may be determined whether a position offset between the position area of any current face and the position area of any historical face in the previous detection result is within a preset range.
The position area where the human face is located may refer to a position coordinate of a certain preset feature point in the human face, for example, the preset feature point may be a mouth, a nose, a left eye, a right eye, or the like.
303: it is determined that the any current face is the same as the any historical face in the previous detection result, and step 305 is performed.
304: it is determined that any current face is different from any historical face in the previous detection result, and step 309 is performed.
305: and respectively acquiring facial features of any current face and any historical face.
The facial feature extraction may be implemented by using, for example, an LBP algorithm.
306: determining whether the facial features of any current face are the same as the facial features of any historical face; if yes, go to step 307, if no, go to step 309.
Optionally, different face numbers may be set for the current faces, and a current face that is the same as any historical face in the previous detection result is given the same face number as that historical face.
307: and determining whether any historical human face is provided with a recognition success mark, if so, executing step 308, and if not, executing step 309.
308: and not identifying any current face.
309: and carrying out face recognition on any current face.
310: and setting a recognition success mark for each current face which is successfully recognized based on the recognition result.
In this embodiment, during face recognition, if, based on the previous detection result, there is a historical face identical to the current face and that historical face has already been successfully recognized, the current face does not need to be recognized again; this reduces face recognition time and improves face recognition efficiency.
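The sketch below strings the helper functions from the earlier sketches (box detection, coarse features, position check, threshold-based identification) into one per-frame pass following the order of steps 301-310. The dictionary layout of the detection result, and feeding the coarse feature into identify() instead of a finer recognition feature, are simplifications made for illustration.

```python
# Sketch of one per-frame pass: match current faces to the previous detection
# result by position and coarse features, and run full recognition only for
# faces without a successfully recognized match.
def process_frame(frame_gray, boxes, previous, template_db, next_number):
    """previous: list of dicts with keys number, box, feature, recognized, identity."""
    current = []
    for box in boxes:
        x, y, w, h = box
        feat = coarse_feature(frame_gray[y:y + h, x:x + w])
        match = next((p for p in previous
                      if same_position(box, p["box"]) and same_object(feat, p["feature"])),
                     None)
        if match is not None and match["recognized"]:
            # Same face as a successfully recognized historical face: do not re-identify.
            entry = {**match, "box": box, "feature": feat}
        else:
            result = identify(feat, template_db)  # in practice a finer feature is used here
            entry = {
                "number": match["number"] if match else next_number,
                "box": box,
                "feature": feat,
                "recognized": result["status"] == "pass",  # identification success flag
                "identity": result["identity"],
            }
            next_number += 0 if match else 1
        current.append(entry)
    return current, next_number  # becomes the historical detection result for the next frame
```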
The technical solution of the embodiment of the present application may be applied to an information identification system as an embodiment, and as shown in fig. 4, the information identification system may include an acquisition terminal 401 and an authentication server 402;
the acquisition terminal 401 is used for acquiring an image and sending the image to the authentication server 402; detecting, by the authentication server 402, the acquired image to obtain at least one current target object; determining whether the at least one current target object is the same as a historical target object in historical detection results; identifying the current target object which is the same as the historical target object in the historical detection result; but only the current target object different from the historical target object in the historical detection result is identified.
That is, the acquisition terminal 401 acquires an image, and the authentication server performs object detection and object identification.
The acquisition terminal can acquire images for a plurality of users in the acquisition range of the acquisition terminal, so that a plurality of target objects can be detected and obtained from the images, and the target objects can be human faces of the users.
As still another embodiment, as shown in fig. 5, the information recognition system may include a detection terminal 501 and an authentication server 502;
the detection terminal 501 is configured to collect an image, detect the collected image to obtain at least one current target object, and send the at least one current target object to the authentication server; determining whether the at least one current target object is the same as a historical target object in historical detection results; triggering the authentication server 502 to not identify the current target object which is the same as the historical target object in the historical detection result; and trigger the authentication server 502 to identify a current target object that is different from the historical target object in the historical detection results.
That is, the detection terminal realizes image acquisition and object detection, and the authentication server realizes object identification, so as to ensure the processing performance of the detection terminal and the authentication server.
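A rough sketch of this split is given below: the detection terminal uploads only those current target objects that have no successfully recognized historical match, and the authentication server performs the recognition. The endpoint URL, the JSON payload, and the response format are entirely hypothetical; the embodiment does not define a transport protocol.

```python
# Sketch of the detection-terminal side of Fig. 5: send an unmatched face crop
# to the authentication server for recognition. URL and payload are hypothetical.
import base64
import json
import urllib.request

AUTH_SERVER_URL = "http://auth-server.example/recognize"  # hypothetical endpoint

def send_for_recognition(face_crop_jpeg: bytes, object_number: int):
    payload = json.dumps({
        "object_number": object_number,
        "image": base64.b64encode(face_crop_jpeg).decode("ascii"),
    }).encode("utf-8")
    request = urllib.request.Request(
        AUTH_SERVER_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:  # the server runs object recognition
        return json.loads(response.read().decode("utf-8"))
```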
Of course, the technical solution of the embodiment of the present application may also be applied to an independent recognition terminal, and the recognition terminal completes operations such as image acquisition, object detection, and object recognition.
In a practical application, the acquisition terminal, the detection terminal or the identification terminal can be respectively realized as attendance machines with different functions so as to realize the purpose of attendance checking.
In the attendance checking application, after the identity information corresponding to the target object is determined, the attendance checking time and the like can be recorded corresponding to the identity information.
Fig. 6 is a schematic structural diagram of an embodiment of an information identification apparatus provided in the embodiment of the present application, where the apparatus may be configured in an authentication server as shown in fig. 4, may also be configured in a detection terminal as shown in fig. 5, and may of course also be configured in an identification terminal.
The apparatus may include:
the detecting module 601 is configured to detect an acquired image to determine at least one current target object.
A determining module 602, configured to determine whether the at least one current target object is the same as a historical target object in a historical detection result;
a first identifying module 603, configured to identify a current target object that is different from a historical target object in the historical detection result.
Further, optionally, as shown in fig. 7, the apparatus may further include, in contrast to the apparatus shown in fig. 6:
a second identification module 604, configured not to identify a current target object that is the same as a historical target object in the historical detection result.
According to the embodiment, only any current target object different from any historical target object in the historical detection results is identified, and if any current target object is the same as any historical target object in the historical detection results and indicates that the current target object is identified, the current target object does not need to be identified, so that the identification time is shortened, and the identification efficiency is improved.
As a possible implementation manner, the determining module may be specifically configured to: respectively acquiring object characteristics of the at least one current target object; and determining whether a historical target object with the same object characteristics as the at least one current target object exists in the historical detection result.
As another possible implementation manner, the determining module may be specifically configured to:
and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, determining that any current target object is the same as any historical target object, and otherwise, determining that any current target object is different from any historical target object. Whether the target objects in two successive detection results are the same can be determined by comparing their positions, so that the same target object does not need to be identified again and only different target objects are identified; the identification workload and identification time are thus reduced, and the identification efficiency is improved.
Optionally, the determining module may be specifically configured to determine whether a position offset between a position area where any current target object is located and a position area where any historical target object is located in the historical detection result is within a preset range.
In order to ensure the identification accuracy, in some embodiments, the second identification module may be specifically configured to obtain an object feature of any current target object and any historical target object that is the same as the current target object in the historical detection results; determining whether the object characteristics of any current target object are the same as the object characteristics of any historical target object which is the same as the current target object; if yes, not identifying any current target object; and if not, identifying any current target object.
That is, for any current target object and any historical target object in the historical detection result, which is the same as any current target object, the verification is further performed by combining the object characteristics.
In addition, in some embodiments, the second identification module may be specifically configured to determine whether, in the historical detection result, the historical target object that is the same as any current target object is successfully identified; if yes, not identifying any current target object; and if not, identifying any current target object.
Optionally, in order to facilitate the identification, an identification success flag may be set for the successfully identified target object, so that the determining, by the second identification module, whether the historical target object identical to any current target object in the historical detection result is successfully identified may specifically be:
and judging whether the historical target object which is the same as any current target object in the historical detection result is provided with a successful identification mark.
Therefore, after the second identification module identifies any current target object, the identification success flag may be set for the current target object which is successfully identified based on the identification result.
In addition, in order to further improve identification convenience, in some embodiments, the second identification module may be specifically configured to set different object numbers for the at least one current target object, wherein a current target object that is the same as a target object in the historical detection result is given the same object number as that historical target object;
the current target object having the same object number as the object number of the history target object in the history detection result is not identified.
The information recognition apparatus shown in fig. 6 or 7 may execute the information recognition method shown in any one of fig. 1 to 3, and the implementation principle and technical effect thereof are not repeated. The specific manner in which each module and unit of the information identification apparatus in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail herein.
In one possible design, the information identification apparatus in the embodiment shown in fig. 6 or fig. 7 may be implemented as an electronic device, as shown in fig. 8, which may include a processing component 801 and a memory 802 respectively connected to the processing component 801;
the memory 802 stores one or more computer program instructions for invocation and execution by the processing component 801;
the processing component 801 is configured to:
detecting the acquired images to determine at least one current target object;
determining whether the at least one current target object is the same as a historical target object in historical detection results;
and identifying the current target object different from the historical target object in the historical detection result.
The processing component 801 is also configured not to identify a current target object that is the same as a historical target object in the historical detection result.
In a practical application, the electronic device may be an authentication server connected to a collection terminal, and the collection terminal may be a camera device such as a camera.
In addition, as another embodiment, as shown in fig. 9, the electronic device may further include an acquisition component 803 connected to the processing component 801 for acquiring an image, which is different from the embodiment shown in fig. 8.
The processing component 801 is specifically configured to detect the image acquired by the acquisition component 803 to determine at least one current target object.
In this embodiment, the electronic device may be an independent recognition terminal that implements image acquisition, object detection, and object recognition.
Furthermore, in some embodiments, the current target object determined by the processing component 801 may also be sent to an authentication server. In that case, the processing component 801 not identifying the current target object that is the same as a historical target object in the historical detection result may specifically be triggering the authentication server not to identify that current target object; and the processing component 801 identifying a current target object different from the historical target objects in the historical detection result may specifically be triggering the authentication server to identify that current target object.
The processing component 801 may include one or more processors executing computer instructions to perform all or part of the steps of the method described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The memory 802 is configured to store various types of data to support operation of the electronic device. The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The collection component 803 may be a camera.
Of course, the electronic device may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices, such as with an authentication server or the like.
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the information identification method of the embodiment shown in any one of fig. 1 to 3 may be implemented.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (13)

1. An information recognition method, comprising:
detecting the acquired image to acquire at least one current target object;
determining whether the at least one current target object is the same as a historical target object in historical detection results;
identifying a current target object different from a historical target object in the historical detection result;
not identifying the current target object which is the same as the historical target object in the historical detection result;
the not identifying the current target object which is the same as the historical target object in the historical detection result comprises:
determining whether the historical target object identical to any current target object in the historical detection result is successfully identified;
if yes, not identifying any current target object;
and if not, identifying any current target object.
2. The method of claim 1, wherein the determining whether the at least one current target object is the same as a historical target object in historical detection results comprises:
and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, determining that any current target object is the same as any historical target object, and otherwise, determining that any current target object is different from any historical target object.
3. The method of claim 2, wherein the not identifying a current target object that is the same as a historical target object in the historical detection results comprises:
acquiring any current target object and object characteristics of any historical target object which is the same as any current target object in historical detection results;
determining whether the object characteristics of any current target object are the same as the object characteristics of any historical target object which is the same as the current target object;
if yes, not identifying any current target object;
and if not, identifying any current target object.
4. The method of claim 1, wherein the determining whether the at least one current target object is the same as a historical target object in historical detection results comprises:
respectively acquiring object characteristics of the at least one current target object;
and determining whether a historical target object with the same object characteristics as the at least one current target object exists in the historical detection result.
5. The method according to claim 2, wherein the determining whether the location area of any current target object is consistent with the location area of any historical target object in the historical detection results comprises:
and determining whether the position deviation between the position area of any current target object and the position area of any historical target object in the historical detection results is within a preset range.
6. The method of claim 1, wherein the target object is a human face.
7. An information identifying apparatus, comprising:
the detection module is used for detecting the acquired image so as to obtain at least one current target object;
the judging module is used for determining whether the at least one current target object is the same as a historical target object in a historical detection result;
the first identification module is used for identifying a current target object different from a historical target object in a historical detection result;
the second identification module is used for not identifying the current target object which is the same as the historical target object in the historical detection result;
the second identification module is specifically used for determining whether the historical target object identical to any current target object in the historical detection result is successfully identified; if yes, not identifying any current target object; and if not, identifying any current target object.
8. The apparatus according to claim 7, wherein the determining module is specifically configured to:
and determining whether the position area of any current target object is consistent with the position area of any historical target object in the historical detection results, if so, determining that any current target object is the same as any historical target object, and otherwise, determining that any current target object is different from any historical target object.
9. The apparatus according to claim 8, wherein the second identification module is specifically configured to obtain object features of any current target object and any historical target object that is the same as the any current target object in the historical detection results; determining whether the object features of any current target object are the same as the object features of any historical target object which is the same as the current target object; if yes, not identifying any current target object; and if not, identifying any current target object.
10. The apparatus according to claim 7, wherein the determining module is specifically configured to: respectively acquiring object characteristics of the at least one current target object; and determining whether a historical target object with the same object characteristics as the at least one current target object exists in the historical detection result.
11. The apparatus of claim 8, wherein the determining module is specifically configured to: and determining whether the position deviation between the position area of any current target object and the position area of any historical target object in the historical detection results is within a preset range.
12. An electronic device, comprising a processing component and a memory connected to the processing component;
the memory stores one or more computer program instructions for invocation and execution by the processing component;
the processing component is used for:
detecting the acquired image to determine at least one current target object;
determining whether the at least one current target object is the same as a historical target object in historical detection results;
identifying a current target object different from a historical target object in the historical detection result;
not identifying the current target object which is the same as the historical target object in the historical detection result;
wherein the step of not identifying the current target object which is the same as the historical target object in the historical detection result comprises:
determining whether the historical target object identical to any current target object in the historical detection result is successfully identified;
if yes, not identifying any current target object;
and if not, identifying any current target object.
13. The electronic device of claim 12, further comprising an acquisition component coupled to the processing component for acquiring an image;
wherein the processing component detecting the acquired image to determine at least one current target object specifically comprises: detecting the image acquired by the acquisition component to determine the at least one current target object.
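The feature comparison recited in claims 3, 4, 9 and 10 can be pictured with a small sketch. The claims do not fix a comparison metric or a threshold, so the cosine-similarity test and the 0.6 cutoff below are illustrative assumptions only, not the claimed method itself:

```python
import numpy as np

def same_object_characteristics(current_feature: np.ndarray,
                                historical_feature: np.ndarray,
                                threshold: float = 0.6) -> bool:
    """Treat two feature vectors as the 'same object characteristics' when their
    cosine similarity clears a threshold (metric and threshold are assumptions)."""
    similarity = float(
        np.dot(current_feature, historical_feature)
        / (np.linalg.norm(current_feature) * np.linalg.norm(historical_feature))
    )
    return similarity >= threshold
```

Under claim 3, a current target object whose characteristics match those of an already-identified historical target object would simply not be identified again.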
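Claims 5 and 11 reduce the "same position area" test of claims 2 and 8 to checking whether the position deviation stays within a preset range. A minimal sketch, assuming axis-aligned (x, y, w, h) position areas and a centre-offset measure that the claims themselves do not prescribe:

```python
def position_within_preset_range(current_box, historical_box,
                                 max_offset_px: float = 30.0) -> bool:
    """current_box / historical_box: (x, y, w, h) position areas in pixels."""
    cx_cur = current_box[0] + current_box[2] / 2.0
    cy_cur = current_box[1] + current_box[3] / 2.0
    cx_his = historical_box[0] + historical_box[2] / 2.0
    cy_his = historical_box[1] + historical_box[3] / 2.0
    # Position deviation taken as the Euclidean distance between box centres.
    deviation = ((cx_cur - cx_his) ** 2 + (cy_cur - cy_his) ** 2) ** 0.5
    return deviation <= max_offset_px
```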
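Read together, claims 7, 12 and 13 describe a per-frame loop: detect the target objects, match each one against the historical detection results, and run identification only for objects that are new or whose earlier identification did not succeed. The sketch below shows that control flow under stated assumptions; `detect`, `extract_feature` and `identify` are hypothetical placeholders for whatever detector and recogniser the device uses, and `same_object` is a placeholder for whichever matching rule applies, e.g. wrapping the position or feature checks sketched above:

```python
def process_frame(image, history, detect, extract_feature, identify, same_object):
    """history: list of dicts with keys 'box', 'feature', 'identified', 'label'."""
    for box in detect(image):                      # at least one current target object
        feature = extract_feature(image, box)
        match = next((h for h in history if same_object(box, feature, h)), None)
        if match is not None and match["identified"]:
            continue                               # same as an identified historical object: skip
        label = identify(feature)                  # new object, or earlier identification failed
        entry = {"box": box, "feature": feature,
                 "identified": label is not None, "label": label}
        if match is None:
            history.append(entry)                  # record a new historical target object
        else:
            match.update(entry)                    # retry result replaces the failed attempt
    return history
```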
CN201710884606.XA 2017-09-26 2017-09-26 Information identification method and device and electronic equipment Active CN109558773B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710884606.XA CN109558773B (en) 2017-09-26 2017-09-26 Information identification method and device and electronic equipment
TW107119428A TW201915830A (en) 2017-09-26 2018-06-06 Information recognition method and apparatus, and electronic device
PCT/CN2018/106102 WO2019062588A1 (en) 2017-09-26 2018-09-18 Information recognition method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710884606.XA CN109558773B (en) 2017-09-26 2017-09-26 Information identification method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109558773A CN109558773A (en) 2019-04-02
CN109558773B true CN109558773B (en) 2023-04-07

Family

ID=65863117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710884606.XA Active CN109558773B (en) 2017-09-26 2017-09-26 Information identification method and device and electronic equipment

Country Status (3)

Country Link
CN (1) CN109558773B (en)
TW (1) TW201915830A (en)
WO (1) WO2019062588A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275649A (en) * 2020-02-03 2020-06-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111401463B (en) * 2020-03-25 2024-04-30 维沃移动通信有限公司 Method for outputting detection result, electronic equipment and medium
CN111582047A (en) * 2020-04-15 2020-08-25 浙江大华技术股份有限公司 Face recognition verification passing method and related device thereof
CN111522795A (en) * 2020-04-23 2020-08-11 北京互金新融科技有限公司 Method and device for processing data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102017606A (en) * 2008-04-23 2011-04-13 日本电气株式会社 Image processing device, camera, image processing method, and program
CN102254148A (en) * 2011-04-18 2011-11-23 周曦 Method for identifying human faces in real time under multi-person dynamic environment
CN102880864A (en) * 2012-04-28 2013-01-16 王浩 Method for snap-shooting human face from streaming media file
CN105205482A (en) * 2015-11-03 2015-12-30 北京英梅吉科技有限公司 Quick facial feature recognition and posture estimation method
CN106446816A (en) * 2016-09-14 2017-02-22 北京旷视科技有限公司 Face recognition method and device
CN107105310A (en) * 2017-05-05 2017-08-29 广州盈可视电子科技有限公司 Figure image replacement method, device and a kind of recording and broadcasting system in a kind of net cast

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8121358B2 (en) * 2009-03-06 2012-02-21 Cyberlink Corp. Method of grouping images by face
CN103491351A (en) * 2013-09-29 2014-01-01 东南大学 Intelligent video monitoring method for illegal buildings
CN104581003A (en) * 2013-10-12 2015-04-29 北京航天长峰科技工业集团有限公司 Video rechecking positioning method
US9524418B2 (en) * 2014-06-05 2016-12-20 Promethean Limited Systems and methods for detecting, identifying and tracking objects and events over time
CN104038742B (en) * 2014-06-06 2017-12-01 上海卓悠网络科技有限公司 A kind of door bell and button system based on face recognition technology

Also Published As

Publication number Publication date
CN109558773A (en) 2019-04-02
TW201915830A (en) 2019-04-16
WO2019062588A1 (en) 2019-04-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant