
CN110569785B - Face recognition method integrating tracking technology - Google Patents


Info

Publication number
CN110569785B
CN110569785B
Authority
CN
China
Prior art keywords
face
tracking
frame
result
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910839847.1A
Other languages
Chinese (zh)
Other versions
CN110569785A (en)
Inventor
张智
李思远
於耀耀
刘子瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhiai Time Technology Co ltd
Original Assignee
Hangzhou Zhiai Time Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhiai Time Technology Co ltd filed Critical Hangzhou Zhiai Time Technology Co ltd
Priority to CN201910839847.1A
Publication of CN110569785A
Application granted
Publication of CN110569785B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a face recognition method and device integrating tracking technology, relating to the technical field of face recognition. The method comprises the following steps. Face capture: run a recognition algorithm on each frame to capture faces and acquire face positions and features, wherein the face position captured in the current frame is used as the initial value for target face tracking in the next frame. Face comparison: compare the face tracking result of the current frame with the face capture result of the current frame by calculating the Euclidean distance over the face positions and features, and judge whether they are the same person. Face tracking: run a target tracking algorithm to track the captured faces. The invention solves the problems of the same person being recognized as different persons and different persons being recognized as the same person during face feature value comparison, and reduces the influence of face angle and occlusion on face recognition.

Description

Face recognition method integrating tracking technology
Technical Field
The invention relates to a face recognition technology, in particular to a face recognition method combining a recognition algorithm and a tracking algorithm.
Background
With the development of artificial intelligence, biometric authentication technology has also developed rapidly toward intelligence, and face recognition has become an important authentication mode.
Most current face recognition systems recognize faces in static images, mainly using the Adaboost algorithm for training and discrimination. The Adaboost algorithm is easily disturbed by noise and its training time is long. The dominant conventional face recognition algorithms are based on geometric features, global face feature extraction, and SVMs (support vector machines), but their recognition accuracy is not high. Face recognition algorithms based on convolutional neural networks (CNNs) improve feature extraction capability and outperform traditional algorithms in recognition accuracy. However, existing face recognition and tracking techniques generally run face detection and face tracking independently; in a video stream, varying capture angles and the movement of people make face feature value comparison inaccurate, so the same person may be treated as different persons, and different persons as the same person, during face comparison.
the existing 'face recognition-based video positioning method and device' has the patent number of 201811178561.5, can quickly and accurately recognize frame images of face images in videos, and can specifically recognize the face images of target persons in which frame images appear.
Based on this, the present application is made.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a face recognition method integrating a tracking technology, which improves the accuracy of face tracking and recognition in multi-person scenarios.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a face recognition method integrating tracking technology comprises the following steps
Face capturing: performing face capturing on each frame by running a recognition algorithm, and acquiring a face position and characteristics, wherein the face position captured by the current frame is used as an initial value of target face tracking of the next frame;
face comparison: comparing the face tracking result of the current frame with the face capturing result of the current frame by calculating Euclidean distance on the face position and the features, and judging whether the face tracking result and the face capturing result of the current frame are the same person or not; the successfully-compared face capturing result is used as an initial value of next frame tracking, and if the comparison fails or the face is not captured by the current frame, the current frame tracking result is used as the initial value of next frame tracking to continue tracking;
face tracking: and running a target tracking algorithm to track the captured human face.
As a preferred solution, the face comparison step includes:
calculating the face position offset of the tracking result and the recognition result, and if the face position offset is smaller than a given threshold value, reserving the recognition position in the adjacent coordinate set;
and acquiring the face features of the identification in the adjacent coordinate set, and matching the face features of the tracking target and the capturing target.
As a preferred scheme, in the step of calculating the face position offset between the tracking result and the recognition result, comparing it with a given threshold determined through multiple experiments, and retaining the recognition position in the neighboring coordinate set if the offset is smaller than the threshold: the coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed as 4-dimensional vectors (top, left, right, bottom), and calculating the offset means calculating their Euclidean distance in the 4-dimensional vector space.
As a preferred scheme, in the step of obtaining the recognized face features in the neighboring coordinate set and matching the face features of the tracking target and the capture target, the Euclidean distance of the face features is calculated from the feature value vector of the target frame in the tracking algorithm's output and the feature value vector of the target frame in the recognition algorithm's output, and compared with a given threshold determined through multiple experiments; if the Euclidean distance is smaller than the threshold, the face recognized at this moment and the face captured at the beginning are determined to be the same person.
In the face comparison step, using the successfully-compared face capture result as the initial value of next-frame tracking, and continuing tracking with the current frame's tracking result as the initial value of next-frame tracking if the comparison fails or no face is captured in the current frame, specifically means: if the positions and feature values of the tracking result and the capture result both match, the current target person's state is updated with the capture result of the recognition algorithm; if the positions of the tracking target and the capture target match but the features do not, the tracking target is considered occluded and tracking continues, with next-frame tracking initialized from the current frame's tracked position until the face is captured by the recognition algorithm again, at which point next-frame tracking is initialized from the current frame's capture position; if the tracking result is wrong and no capture position can be matched at all, the person goes off-line and is no longer tracked; the remaining unmatched capture results are added to the database as new people, or old targets are brought back on-line.
As a preferred solution, the face tracking step includes:
initializing a tracker, wherein the recognition position of the previous frame is used as the face tracking initial value for the next frame, and the face number of the previous frame is used as the face number for next-frame face tracking; if the previous frame did not successfully capture the face, the initial value and face number for next-frame tracking are taken from the face result tracked in the current frame;
and running a tracking algorithm, acquiring the current tracking face position and extracting the tracking target face characteristics.
In the face tracking step, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set and the faces captured in the 1st frame are compared with that empty set; that is, the 1st-frame face capture result is used directly as the initial value of the next frame's tracking algorithm, and actual face comparison starts from the 2nd frame. The steps of face capturing, face comparison and face tracking are repeated as long as the real-time or recorded video stream has not ended.
The working principle of the invention is as follows: the invention recognizes faces by combining a target tracking algorithm with a face recognition algorithm. In each frame, the face recognition algorithm captures and numbers the faces, and the same face keeps a consistent number across subsequent frames. The face position successfully captured in the previous frame becomes the initial value of the next frame's tracking algorithm, and the tracking result appears in the next frame. In each frame image, the feature values of the currently captured faces and the tracked faces are compared; a successfully compared capture updates the current tracking result and serves as the initial value for next-frame tracking. If the previous frame did not successfully capture the face, the current face tracking value is used as the initial value for next-frame tracking. When a target is occluded, the occluded target continues to be tracked using the position tracked in the current frame until the target face reappears and is captured, after which the next frame's tracking algorithm is initialized with the captured position.
The invention can realize the following technical effects:
(1) The invention exploits the spatial and temporal correlation of face tracking across two consecutive video frames: the face position captured in the current frame is used as the initial value for target face tracking in the next frame, and, by combining face detection with the Euclidean distance, face comparison across different frames is skillfully converted into face comparison within the same frame. At the same time, target tracking is simplified to tracking between only two frames. In recognition and tracking scenes involving faces at different angles, similar-looking people, occluded people or external interference, face tracking compensates for the inaccuracy of face recognition, reducing the misjudgment rate of face recognition, markedly improving the accuracy of face tracking, and effectively solving the problem of inaccurate tracking during multi-frame continuous tracking.
(2) The invention solves the problems that the same person is recognized as different persons and different persons are recognized as the same person in the process of comparing face characteristic values, and reduces the influence of face angles (front face, side face, low head and head lifting) and shielding on face recognition.
(3) The invention captures faces in every frame and, after a successful capture, uses the captured face as the initial value for next-frame tracking. This improves robustness to high-speed motion, complex backgrounds and occlusion in target tracking, and greatly reduces the tracking loss rate under full occlusion.
(4) Through target tracking, the invention uses the spatial and temporal correlation of two consecutive frames to convert the comparison of face feature values across different frames into a comparison within the same frame, greatly improving the accuracy of face recognition.
Drawings
Fig. 1 is a flowchart of basic steps of an implementation process of a face recognition method of the fusion tracking technology according to the present embodiment;
fig. 2 is an algorithm main loop flow chart of an implementation process of a face recognition method of the fusion tracking technology according to the present embodiment;
fig. 3 is a schematic diagram of a euclidean distance comparison algorithm of a capturing result and a tracking result of an implementation process of a face recognition method of a fusion tracking technology according to the embodiment;
fig. 4 is a schematic diagram of the first three frames of a face recognition method of the fusion tracking technology according to this embodiment.
Detailed Description
In order to disclose the technical means of the present invention and the technical effects achieved thereby more clearly and completely, the following embodiments are provided and described in detail with reference to the accompanying drawings:
as shown in fig. 1 and 2, the face recognition method of the fusion tracking technology of the present embodiment includes the following steps:
step S1, face capturing: each frame runs a recognition algorithm to capture the face, and the position of the face captured by the current frame is used as an initial value of target face tracking of the next frame.
Input a video stream frame sequence $F = \{F_1, F_2, \dots, F_m\}$. For the $n$-th frame $F_n$ ($1 < n < m$), invoke the face recognition algorithm to obtain the face positions and face features, add them to the recognition position set, and number the faces.
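As a concrete illustration only (not the claimed method itself), the following minimal sketch shows step S1 with the face_recognition open-source library named later in this embodiment; the function name and the dictionary record are illustrative assumptions:

```python
# Minimal sketch of step S1 (face capture), assuming the face_recognition
# open-source library mentioned later in this embodiment is installed.
import face_recognition

def capture_faces(frame_rgb):
    """Capture every face in one RGB frame: return its position and its
    128-d feature vector, reordered to the method's (top, left, right,
    bottom) 4-d position convention."""
    # face_recognition returns boxes as (top, right, bottom, left)
    locations = face_recognition.face_locations(frame_rgb)
    features = face_recognition.face_encodings(frame_rgb, known_face_locations=locations)
    captures = []
    for (top, right, bottom, left), feat in zip(locations, features):
        captures.append({"box": (top, left, right, bottom), "feat": feat})
    return captures
```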
Step S2, face comparison: compare the face tracking result of the current frame with the face capture result of the current frame by calculating the Euclidean distance over the face positions and features, and judge whether they are the same person. The successfully-compared face capture result is used as the initial value for next-frame tracking; if the comparison fails or no face is captured in the current frame, tracking continues with the current frame's tracking result as the initial value for next-frame tracking.
Step S21, calculate the face position offset between the tracking result and the recognition result, and if it is smaller than a given threshold ε, retain the recognition position in the neighboring coordinate set. The coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed in the form (top, left, right, bottom), and calculating the offset means calculating their Euclidean distance in the 4-dimensional vector space, as shown in Equation 1:

Equation 1:

$$d = \sqrt{(tx_1 - tx_2)^2 + (ly_1 - ly_2)^2 + (ry_1 - ry_2)^2 + (bx_1 - bx_2)^2}$$

where $(tx_1, ly_1, ry_1, bx_1)$ is the coordinate vector of the target position of the current frame's tracking result, $(tx_2, ly_2, ry_2, bx_2)$ is the coordinate vector of the target position captured by the recognition algorithm, and $d$ denotes their Euclidean distance. Fig. 3 is a schematic diagram of the Euclidean distance comparison between the capture result and the tracking result in this implementation.
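Equation 1 can be sketched in a few lines, assuming boxes are held as (top, left, right, bottom) tuples as above:

```python
# Sketch of Equation 1: Euclidean distance between two 4-d position vectors.
import numpy as np

def position_offset(track_box, capture_box):
    """d = Euclidean distance between two (top, left, right, bottom) vectors."""
    return float(np.linalg.norm(np.asarray(track_box, dtype=float)
                                - np.asarray(capture_box, dtype=float)))
```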
Step S22, obtain the recognized face features in the neighboring coordinate set, and match the face features of the tracking target and the capture target. The feature value vector of the target frame in the tracking algorithm's output is expressed as $(x_1, x_2, \dots, x_n)$, and the feature value vector of the target frame in the recognition algorithm's output as $(y_1, y_2, \dots, y_n)$. Calculate the Euclidean distance $f$ in the feature space; if $f$ is smaller than a given threshold η, the face recognized at this moment and the face captured at the beginning are determined to be the same person.

Equation 2:

$$f = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
the size of the threshold value epsilon and eta in the step 2 is obtained by carrying out a large number of experiments according to different using algorithms, and the threshold value epsilon and eta is set to be 0.6 after the Euclidean distance is calculated by an np.
Step S3, face tracking: and tracking the captured face through a target tracking algorithm.
Step S31, initializing the tracker. The recognition position of the $(n-1)$-th frame $F_{n-1}$ is used as the initial value for face tracking in the $n$-th frame, and the face number from frame $F_{n-1}$ is used as the face number for face tracking in the $n$-th frame; if the previous frame did not successfully capture the face, the initial value and the face number for next-frame tracking are taken from the face result tracked in the current frame.
Step S32, run the tracking algorithm, acquire the currently tracked face position, and extract the tracked target's face features.
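The embodiment tracks with the ECO correlation-filter tracker; since no standard ECO package is assumed here, the following sketch of steps S31 and S32 substitutes OpenCV's CSRT tracker (from opencv-contrib-python) as a stand-in with the same init/update pattern:

```python
# Sketch of steps S31-S32. OpenCV's CSRT tracker stands in for the ECO
# tracker used by the embodiment; both follow an init/update pattern.
import cv2

def init_tracker(frame_bgr, box):
    """S31: initialize a tracker from the previous frame's capture (or,
    failing that, tracking) result, given as (top, left, right, bottom)."""
    top, left, right, bottom = box
    tracker = cv2.legacy.TrackerCSRT_create()  # stand-in for ECO
    tracker.init(frame_bgr, (left, top, right - left, bottom - top))  # (x, y, w, h)
    return tracker

def track_face(tracker, frame_bgr):
    """S32: one tracking step; return the tracked (top, left, right, bottom)
    box, or None if the tracker lost the target."""
    ok, (x, y, w, h) = tracker.update(frame_bgr)
    return (int(y), int(x), int(x + w), int(y + h)) if ok else None
```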
In this step, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set, and the faces captured in the 1st frame are compared against that empty set; that is, the 1st-frame face capture result is used directly as the initial value of the following frame's tracking algorithm, and actual face comparison begins from the 2nd frame. Steps S1 to S3 are repeated as long as the real-time or recorded video stream has not ended.
As shown in fig. 2, the main loop of the implementation proceeds as follows. After the video frame sequence is acquired, the database is connected and globally initialized. While a readable video frame remains, the recognition algorithm is run for face capture to obtain face feature values and capture positions. If the current frame is the 1st frame, the tracker is initialized with the current capture positions, the tracking algorithm is run to track the current faces, and capture continues if the video has not ended. If the current frame is not the 1st frame, both a capture result and a tracking result exist, and their positions and feature values are compared via the Euclidean distances of Equations 1 and 2. If the tracking result is wrong and no capture position matches at all, the person goes off-line and tracking for that person ends. If the positions and feature values of the tracking and capture results match, the current target person's state is updated with the recognition algorithm's capture result. If the positions match but the features do not, the tracking target is considered occluded, and next-frame tracking is initialized from the current frame's tracked position until the recognition algorithm captures the face again, after which next-frame tracking is once more initialized from the current frame's capture position. The remaining unmatched capture results are added to the database as new targets, or old targets are brought back on-line.
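Collecting these branches, one loop iteration might be condensed as follows; the dict-based track and capture records and the helper names come from the earlier sketches and are assumptions rather than the claimed implementation:

```python
def update_targets(tracks, captures, frame_bgr):
    """Condensed sketch of the fig. 2 main-loop decision logic for one frame.
    Tracks and captures are {"box", "feat"} dicts; tracks also hold a tracker."""
    unmatched = list(captures)
    survivors = []
    for track in tracks:
        # step S21: captures whose position offset to this track is below epsilon
        neighbors = [c for c in unmatched
                     if position_offset(track["box"], c["box"]) < EPSILON]
        # step S22: among those neighbors, require matching feature values
        match = next((c for c in neighbors
                      if feature_distance(track["feat"], c["feat"]) < ETA), None)
        if match is not None:
            # positions and feature values match: refresh the target state and
            # initialize next-frame tracking from the capture position
            track["box"], track["feat"] = match["box"], match["feat"]
            track["tracker"] = init_tracker(frame_bgr, match["box"])
            unmatched = [c for c in unmatched if c is not match]
            survivors.append(track)
        elif neighbors:
            # position matched but features did not: target occluded; keep
            # tracking from the current frame's tracked position
            survivors.append(track)
        # else: tracking wrong and nothing matches -> person off-line, drop track
    # leftover captures become new targets (or old targets back on-line)
    for cap in unmatched:
        survivors.append({**cap, "tracker": init_tracker(frame_bgr, cap["box"])})
    tracks[:] = survivors
```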
Fig. 4 is a schematic diagram of the first three frames of the implementation. As the figure shows, the method of the invention can track and recognize multiple targets. In this embodiment, faces are recognized with the face_recognition open-source recognition algorithm and tracked with the ECO (Efficient Convolution Operators) target tracking algorithm, which is based on correlation filters; the invention is equally applicable to other face recognition and target tracking algorithms. In the figure, thick solid boxes represent tracking results and thin solid boxes represent the recognition algorithm's face capture results. In the 1st frame, faces are captured by the recognition algorithm, the tracking set is empty, and the captured face results are used as the initial values for next-frame tracking. In the 2nd frame, the recognition algorithm captures the faces to give the current capture results (thin solid boxes), while the current tracking results propagated from the previous frame's captures are the thick solid boxes; the current capture results are compared with the current tracking results, and on a successful comparison the current tracking result is updated with the current capture result and used as the initial value for next-frame tracking, while on a failed comparison, or if no face is captured in the current frame, tracking continues with the current frame's tracking result as the initial value for next-frame tracking. In the 3rd frame, the current target 1 (ID1) is occluded by target 2 (ID2): for target 2, the next frame's tracking algorithm can be initialized with the current frame's capture result; for target 1, next-frame tracking is initialized with the current frame's tracked position until the recognition algorithm captures the face again, after which next-frame tracking is once more initialized with the current frame's capture result.
The foregoing is a further detailed description of the provided technical solution in connection with the preferred embodiments of the present invention, and the specific implementation of the present invention should not be construed as limited to this description. Several simple deductions or substitutions may be made by those skilled in the art without departing from the spirit of the present invention, and all such embodiments should be considered as falling within the scope of the present invention.

Claims (6)

1. A face recognition method integrating tracking technology, characterized in that it comprises:
Face capturing: performing face capturing on each frame by running a recognition algorithm, and acquiring a face position and characteristics, wherein the face position captured by the current frame is used as an initial value of target face tracking of the next frame;
face comparison: comparing the face tracking result of the current frame with the face capture result of the current frame by calculating the Euclidean distance over the face positions and features, and judging whether they are the same person; the successfully-compared face capture result is used as the initial value for next-frame tracking, and if the comparison fails or no face is captured in the current frame, tracking continues with the current frame's tracking result as the initial value for next-frame tracking, specifically: if the positions and feature values of the tracking result and the capture result both match, the current target person's state is updated with the capture result of the recognition algorithm; if the positions of the tracking target and the capture target match but the features do not, the tracking target is considered occluded and tracking continues, with next-frame tracking initialized from the current frame's tracked position until the face is captured by the recognition algorithm again, at which point next-frame tracking is initialized from the current frame's capture position; if the tracking result is wrong and no capture position can be matched at all, the person goes off-line and is no longer tracked; the remaining unmatched capture results are added to the database as new people, or old targets are brought back on-line;
face tracking: and running a target tracking algorithm to track the captured human face.
2. The face recognition method of the fusion tracking technology as claimed in claim 1, wherein: the face comparison step comprises the following steps:
calculating the face position offset of the tracking result and the recognition result, and if the face position offset is smaller than a given threshold value, reserving the recognition position in the adjacent coordinate set;
and acquiring the face features of the identification in the adjacent coordinate set, and matching the face features of the tracking target and the capturing target.
3. The face recognition method of the fusion tracking technology as claimed in claim 2, wherein: in the step of calculating the face position offset between the tracking result and the recognition result, comparing it with a given threshold, and retaining the recognition position in the neighboring coordinate set if the offset is smaller than the threshold, the coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed as 4-dimensional vectors (top, left, right, bottom); calculating the offset means calculating their Euclidean distance in the 4-dimensional vector space.
4. The face recognition method of the fusion tracking technology as claimed in claim 2, wherein: in the step of obtaining the recognized face features in the neighboring coordinate set and matching the face features of the tracking target and the capture target, the Euclidean distance of the face features is calculated from the feature value vector of the target frame in the tracking algorithm's output and the feature value vector of the target frame in the recognition algorithm's output, and compared with a given threshold; if the Euclidean distance is smaller than the threshold, the face recognized at this moment and the face captured at the beginning are determined to be the same person.
5. The face recognition method of the fusion tracking technology as claimed in claim 1, wherein: the face tracking step comprises the following steps:
initializing a tracker, wherein the recognition position of the previous frame is used as the face tracking initial value for the next frame, and the face number of the previous frame is used as the face number for next-frame face tracking; if the previous frame did not successfully capture the face, the initial value and face number for next-frame tracking are taken from the face result tracked in the current frame;
and running a tracking algorithm, acquiring the current tracking face position and extracting the tracking target face characteristics.
6. The face recognition method of the fusion tracking technology as claimed in claim 1, wherein: in the face tracking step, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set and the faces captured in the 1st frame are compared with that empty set; that is, the 1st-frame face capture result is used directly as the initial value of the next frame's tracking algorithm, and actual face comparison starts from the 2nd frame; the steps of face capturing, face comparison and face tracking are repeated as long as the real-time or recorded video stream has not ended.
CN201910839847.1A 2019-09-05 2019-09-05 Face recognition method integrating tracking technology Active CN110569785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910839847.1A CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910839847.1A CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Publications (2)

Publication Number Publication Date
CN110569785A CN110569785A (en) 2019-12-13
CN110569785B (en) 2023-07-11

Family

ID=68777948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910839847.1A Active CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Country Status (1)

Country Link
CN (1) CN110569785B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460884A (en) * 2020-02-09 2020-07-28 天津博宜特科技有限公司 Multi-face recognition method based on human body tracking
CN112232257B (en) * 2020-10-26 2023-08-11 青岛海信网络科技股份有限公司 Traffic abnormality determination method, device, equipment and medium
CN113255608B (en) * 2021-07-01 2021-11-19 杭州智爱时刻科技有限公司 Multi-camera face recognition positioning method based on CNN classification
CN113869210A (en) * 2021-09-28 2021-12-31 中通服创立信息科技有限责任公司 Face recognition following method and intelligent device adopting same
CN116152872A (en) * 2021-11-18 2023-05-23 北京眼神智能科技有限公司 Face tracking method, device, storage medium and equipment
CN114241586B (en) * 2022-02-21 2022-05-27 飞狐信息技术(天津)有限公司 Face detection method and device, storage medium and electronic equipment
CN115451962B (en) * 2022-08-09 2024-04-30 中国人民解放军63629部队 Target tracking strategy planning method based on five-variable Karnaugh maps

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 Method and apparatus for video object tracking
CN107122751A (en) * 2017-05-03 2017-09-01 电子科技大学 A face tracking and face image capturing method based on face alignment
CN107516303A (en) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-object tracking method and system
CN109063593A (en) * 2018-07-13 2018-12-21 北京智芯原动科技有限公司 A face tracking method and device
CN109190444A (en) * 2018-07-02 2019-01-11 南京大学 Implementation method of a video-based toll lane vehicle feature recognition system

Also Published As

Publication number Publication date
CN110569785A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110569785B (en) Face recognition method integrating tracking technology
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
CN105405154B (en) Target object tracking based on color-structure feature
CN105224912B Video pedestrian detection and tracking method based on motion information and track association
JP6448223B2 (en) Image recognition system, image recognition apparatus, image recognition method, and computer program
CN109389086B (en) Method and system for detecting unmanned aerial vehicle image target
CN104008370B (en) A kind of video face identification method
CN106707296A (en) Dual-aperture photoelectric imaging system-based unmanned aerial vehicle detection and recognition method
CN110555867B (en) Multi-target object tracking method integrating object capturing and identifying technology
CN106885574A A monocular vision robot simultaneous localization and mapping method based on a weight tracking strategy
CN109784130B (en) Pedestrian re-identification method, device and equipment thereof
CN104361327A (en) Pedestrian detection method and system
CN112464847B (en) Human body action segmentation method and device in video
CN114639117B (en) Cross-border specific pedestrian tracking method and device
CN104268519A (en) Image recognition terminal based on mode matching and recognition method of image recognition terminal
CN111798486B (en) Multi-view human motion capture method based on human motion prediction
CN107833239A An optimal matching search method for target tracking based on weighted model constraints
WO2022134916A1 (en) Identity feature generation method and device, and storage medium
Harville Stereo person tracking with short and long term plan-view appearance models of shape and color
CN110349184B (en) Multi-pedestrian tracking method based on iterative filtering and observation discrimination
Jean et al. Body tracking in human walk from monocular video sequences
Wang et al. Face tracking using motion-guided dynamic template matching
CN112164097B (en) Ship video detection sample collection method
CN112926518A (en) Gesture password track restoration system based on video in complex scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20201125

Address after: Room 1007, building 3, Fengyuan international building, 430 Fengtan Road, Gongshu District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Zhiai time Technology Co.,Ltd.

Address before: 311300 room 413, building 2, No. 168, Qianwu Road, Qingshanhu street, Lin'an District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU LICHEN TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant