CN113269091A - Personnel trajectory analysis method, equipment and medium for intelligent park - Google Patents
- Publication number
- CN113269091A (application number CN202110577338.3A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- personnel
- face
- identification
- database
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of intelligent parks, and particularly discloses a personnel trajectory analysis method, equipment and medium for an intelligent park. Video streams are acquired from all monitoring cameras of the park, and face recognition, gait recognition and pedestrian re-identification are continuously performed on people appearing in the park. Similarity is calculated against the community personnel database; if a person is confirmed to be a community member, the person is not tracked. If the identified features do not match the community personnel database, the system checks whether the person is in the stranger database; if not, the stranger has appeared on camera for the first time, so an alarm prompt is triggered, the currently identified monitoring picture is returned, and the time at which the pedestrian with each ID appears in a camera and the position of that camera are recorded in the personnel trajectory database. The invention identifies and tracks personnel through face recognition, gait recognition and pedestrian re-identification, improving the continuity and accuracy of personnel identification and tracking.
Description
Technical Field
The invention relates to the field of intelligent parks, and in particular to a personnel trajectory analysis method, equipment and medium for an intelligent park.
Background
Public safety is a topic society can never neglect, and the video surveillance systems that complement it have been widely deployed. A video surveillance system can intuitively reproduce a target scene, and tracking personnel trajectories is an important part of safeguarding public security as well as a powerful aid to police investigation and case solving. In law enforcement work, identifying and locating targets is a key step. Existing solutions for recovering personnel trajectories fall into two main categories:
First, solutions based on the Internet of Things and the radio-frequency identification (RFID) technology of traditional telecommunication networks, which interconnect the information of all ordinary physical objects that can be independently addressed. RFID relies mainly on RFID readers and RFID tags; the scanning range differs with the working frequency (low/medium/high). A reader reads the tags within a certain range, and the cloud can then display a tag's detailed information and position.
Second, solutions based on a face recognition algorithm. The basic principle is to obtain the face information of people entering the park, collected by the park access control, and compare it with the white-list faces of the access control systems in the park to determine whether the person is on the white list. If not, the park cameras subsequently track the person according to the face information collected by the access control system: frames are extracted from the video streams of the park cameras, a face recognition algorithm is called to perform face detection on the captured pictures, multiple park cameras are linked in the same way, the face data captured after the stranger enters the park are collected, and the stranger's trajectory in the park is determined and recorded.
However, the above person trajectory tracking schemes suffer from many disadvantages. The first consumes financial and material resources: each visitor must carry an electronic tag, and the trajectory can only be found by RFID readers with the person's cooperation; the price of an electronic tag is dozens of times that of an ordinary barcode label; in terms of security, a tag can be deliberately damaged to escape tracking, so its use is relatively limited; and the technical standards are not unified, so portability is poor.
The scheme based on face recognition alone also has many defects. It only uses the access control system to collect a face photo and does not consider people entering the park by illegal means (such as climbing a wall or hiding in a vehicle). Face recognition extracts facial feature information, but the temporarily captured frame is most likely not a frontal photo — the back of the head or a side face is common — so frontal face recognition is difficult. Face recognition also struggles when a mask is worn, the light is dim, the shooting angle is poor, or the face is occluded by a hat. Furthermore, the camera resolution is not high: the face cropped from a long-range camera image may not even reach 32x32 pixels, and park cameras are mounted at heights above 3 m looking down, so the collected images are not clear compared with large-size images. Therefore, face recognition has limited effect in practical re-identification applications, and a traditional face recognition algorithm is not fully applicable to the personnel trajectory analysis scenario of a park.
Disclosure of Invention
In order to overcome the visual limitation of fixed cameras and the defect that a tracking algorithm based on face recognition alone cannot track a person when the face cannot be recognized, the invention provides a personnel trajectory analysis method, equipment and medium for a smart park. Given a monitored pedestrian image, pedestrian re-identification (ReID) technology is used to retrieve images of the same pedestrian across devices, and, combined with pedestrian detection/tracking technology, the person's trajectory is discovered.
The technical scheme adopted by the invention is as follows: a personnel trajectory analysis method for an intelligent park, comprising the following steps:
S1, acquiring video streams from all monitoring cameras of the park;
S2, the park monitoring system continuously performing face recognition on people appearing in the park through a face recognition algorithm, gait recognition through a gait recognition algorithm, and pedestrian re-identification through a pedestrian re-identification method;
S3, calculating the similarity between the three identified features and the community personnel database; if the similarity between any one of the three features and the corresponding feature stored in the community personnel database is greater than or equal to a first threshold, confirming the person as a community member and not tracking the person;
S4, if the identified features of the person do not match the community personnel database, i.e. the similarities of all three features are smaller than the first threshold, checking whether the person is in the stranger database;
S5, if the similarity between any one of the three identified features and the corresponding feature stored in the stranger database is greater than or equal to the first threshold, the stranger's features have already been recorded in the stranger database and the person is being tracked; jumping to step S7;
S6, if not, i.e. the stranger appears in a camera view for the first time, storing the three identified features of the stranger in the stranger database under a newly created ID, triggering an alarm prompt, returning the currently identified monitoring picture, and jumping to step S1;
S7, recording the time at which the pedestrian with each ID appears in a camera and the position of that camera in the personnel trajectory database;
S8, according to the personnel information recorded in the personnel trajectory database, connecting the camera positions on the park map in the chronological order in which the person appeared in the cameras, drawing the personnel trajectory map in real time, and jumping to step S1.
Preferably, the community personnel database is used for storing face information feature data, gait feature data and pedestrian re-identification feature data of all people in the community;
the stranger database is used for storing face feature data, gait feature data and pedestrian re-identification feature data of strangers;
the pedestrian trajectory database is used for recording the time and the position of each pedestrian, and historical data stored in the pedestrian trajectory database is used for backtracking the pedestrian trajectory.
Preferably, the face recognition in step S2 comprises the following sub-steps:
A1, first performing face detection using MTCNN;
A2, then performing face recognition using FaceNet, the recognized feature being an embedding;
A3, comparing the face embedding with the existing face features by similarity to complete the face recognition task.
Preferably, the face recognition algorithm in step S2 is implemented through the following steps:
B1, building an MTCNN + FaceNet model;
B2, inputting a large amount of face data, manually labeling it, and dividing it into a training set, a verification set and a test set;
B3, training the MTCNN + FaceNet model with the training set so that it automatically locates faces, crops the face inside the face detection box and extracts features;
B4, verifying the convergence of the MTCNN + FaceNet model with the verification set;
B5, testing the MTCNN + FaceNet model with the test set, and if the test is passed, outputting the best-performing MTCNN + FaceNet model as the face recognition algorithm.
Preferably, the gait recognition in step S2 is implemented as follows:
C1, acquiring a video segment of a person walking and preprocessing it into a set of video images in which the pedestrian is separated from the background, forming black-and-white silhouette images;
C2, learning gait features directly with a GaitSet network instead of measuring the similarity between a series of gait silhouette sequences and a template;
C3, performing recognition by computing the cosine distance between the learned features and the existing features.
Preferably, the gait recognition algorithm in step S2 is implemented through the following sub-steps:
D1, building a GaitSet deep learning model;
D2, dividing the open-source CASIA Gait Database into a training set, a verification set and a test set;
D3, training the GaitSet model with the training set;
D4, verifying the convergence of the GaitSet model with the verification set;
D5, testing the GaitSet model with the test set, and if the test is passed, outputting the best-performing GaitSet model as the gait recognition algorithm.
Preferably, in step S2, the pedestrian re-identification is implemented as follows:
the picture extracted from the video stream data is input into a trained pedestrian re-identification network built with the open-source FastReID library to obtain a feature vector; the similarity between this feature vector and the pedestrian re-identification features in the database is calculated, and if the similarity is greater than the first threshold, the two are output as the same person.
Preferably, the pedestrian re-identification method in step S2 comprises the following steps:
E1, building a pedestrian re-identification network;
E2, acquiring the open-source Market-1501 data set and preprocessing the data;
E3, dividing the data set into a training set and a verification set;
E4, training the pedestrian re-identification model with the training set, evaluating it with the verification set, and selecting the pedestrian re-identification model with the best prediction performance;
E5, obtaining the similarity distribution among pedestrian images with the pedestrian re-identification model obtained in sub-step E4;
E6, inputting the picture extracted from the video stream data into the trained pedestrian re-identification network to obtain a feature vector, calculating its similarity with the pedestrian re-identification feature vectors in the database, and if the similarity is greater than the first threshold, outputting that the person and the corresponding person in the database are the same person.
A personnel trajectory analysis device for a smart park comprises a video input interface, a processor and a storage device, the storage device being used for storing one or more programs; when the one or more programs are executed by the processor, the processor implements the intelligent-park-oriented personnel trajectory analysis method described above.
A computer-readable storage medium stores at least one program which, when executed by a processor, implements the intelligent-park-oriented personnel trajectory analysis method described above.
The invention has the beneficial effects that:
(1) All cameras of the park monitoring system are used for face recognition, so strangers are discovered at the earliest moment and the system no longer relies only on the entry information collected by the park access control system; dangerous persons who bypass the access control system and enter the park by other illegal means can therefore be discovered, and their features are recorded for convenient tracking.
(2) Pedestrian re-identification and computer vision technology are used to judge whether a specific pedestrian exists in an image or video sequence. Retrieval relies on the pedestrian's overall appearance, mainly extracting static external features such as clothing, backpack, hairstyle and umbrella; the pedestrian is identified by analyzing clothing and posture and is tracked across multiple cameras, improving the continuity of pedestrian identification and tracking.
(3) Gait recognition technology is used. Gait recognition is a biometric technology that recognizes the whole-body characteristics of a person and works at long range, across viewing angles and without cooperation; identity is analyzed from body shape and walking posture, whose physical basis is that each person's physiological structure differs: height, head shape, leg bones, arm span, musculature, centre of gravity and nerve sensitivity are all stable characteristics. In extracted video frames the chance of capturing a usable gait is higher than that of capturing a usable face, which improves the continuity and accuracy of personnel identification and tracking.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
fig. 2 is a flow chart of the operation of the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments or technical features described below may be combined with one another to form new embodiments.
Referring to fig. 1 and fig. 2, the present invention provides a personnel trajectory analysis method, equipment and medium for an intelligent park. The personnel trajectory analysis method for an intelligent park comprises the following steps:
S1, acquiring video streams from all monitoring cameras of the park;
S2, the park monitoring system continuously performing face recognition on people appearing in the park through a face recognition algorithm, gait recognition through a gait recognition algorithm, and pedestrian re-identification through a pedestrian re-identification method;
S3, calculating the similarity between the three identified features and the community personnel database; if the similarity between any one of the three features and the corresponding feature stored in the community personnel database is greater than or equal to a first threshold, confirming the person as a community member and not tracking the person;
S4, if the identified features of the person do not match the community personnel database, i.e. the similarities of all three features are smaller than the first threshold, checking whether the person is in the stranger database;
S5, if the similarity between any one of the three identified features and the corresponding feature stored in the stranger database is greater than or equal to the first threshold, the stranger's features have already been recorded in the stranger database and the person is being tracked; jumping to step S7;
S6, if not, i.e. the stranger appears in a camera view for the first time, storing the three identified features of the stranger in the stranger database under a newly created ID so that the person can be tracked according to the stranger database features and the trajectory discovered; at the same time, triggering an alarm prompt and returning the currently identified monitoring picture to remind security staff to pay attention to the person's trajectory; jumping to step S1;
S7, recording the time at which the pedestrian with each ID appears in a camera and the position of that camera in the personnel trajectory database;
S8, according to the personnel information recorded in the personnel trajectory database, connecting the camera positions on the park map in the chronological order in which the person appeared in the cameras, drawing the personnel trajectory map in real time, and jumping to step S1.
The community personnel database is used for storing the face feature data, gait feature data and pedestrian re-identification feature data of all people in the community.
The stranger database is used for storing the face characteristic data, the gait characteristic data and the pedestrian re-identification characteristic data of strangers.
The personnel trajectory database is used for recording the time and position of each pedestrian; the historical data stored in it is used to backtrack pedestrian trajectories. When a major incident occurs, such backtracking makes it convenient for the police to find suspicious persons.
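The personnel trajectory database described above only needs to record, per sighting, who was seen, by which camera, where, and when. A minimal sketch follows, assuming a SQLite table; the schema, column names and helper functions are illustrative and not prescribed by this scheme.

```python
# A minimal sketch of the personnel trajectory database (assumption: a SQLite
# table; the schema, column names and helpers are illustrative, not part of
# the patent). It covers step S7 (record a sighting) and the backtracking
# described above.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("trajectory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS trajectory (
        person_id TEXT NOT NULL,   -- ID assigned on first appearance
        camera_id TEXT NOT NULL,   -- camera that saw the person
        camera_x  REAL NOT NULL,   -- camera position on the park map
        camera_y  REAL NOT NULL,
        seen_at   TEXT NOT NULL    -- ISO timestamp of the sighting
    )
""")

def record_sighting(person_id, camera_id, pos):
    """Step S7: store when and where a tracked person appeared."""
    conn.execute(
        "INSERT INTO trajectory VALUES (?, ?, ?, ?, ?)",
        (person_id, camera_id, pos[0], pos[1], datetime.now().isoformat()),
    )
    conn.commit()

def backtrack(person_id):
    """Return camera positions in time order, ready to draw the track (S8)."""
    rows = conn.execute(
        "SELECT camera_x, camera_y, seen_at FROM trajectory "
        "WHERE person_id = ? ORDER BY seen_at",
        (person_id,),
    )
    return list(rows)
```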
The face recognition in step S2 comprises the following sub-steps:
A1, first performing face detection using MTCNN; other face detection methods, such as Dlib, OpenCV or OpenFace face detection, may of course also be used;
A2, then performing face recognition using FaceNet; FaceNet can simply be regarded as a CNN that extracts face features, and the recognized feature is an embedding;
A3, comparing the face embedding with the existing face features by similarity to complete the face recognition task.
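A minimal sketch of sub-steps A1-A3 is given below using the open-source facenet-pytorch package, which provides both an MTCNN detector and a FaceNet-style embedding network; the choice of library, the gallery layout and the reuse of the 0.7 threshold are assumptions made for illustration, not part of the scheme itself.

```python
# Sketch of sub-steps A1-A3 using the open-source facenet-pytorch package
# (an assumption: the text names MTCNN and FaceNet but no specific library;
# the gallery layout and 0.7 threshold are also illustrative).
from typing import Dict, Optional

import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

mtcnn = MTCNN(image_size=160)                               # A1: detect and crop the face
embedder = InceptionResnetV1(pretrained="vggface2").eval()  # A2: FaceNet-style embedding

def face_embedding(frame: Image.Image) -> Optional[torch.Tensor]:
    face = mtcnn(frame)              # returns None when no face is detected
    if face is None:
        return None
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0)   # 512-dimensional embedding

def match_face(emb: torch.Tensor, gallery: Dict[str, torch.Tensor],
               threshold: float = 0.7) -> Optional[str]:
    """A3: compare the embedding with the stored face features by similarity."""
    best_id, best_sim = None, threshold
    for person_id, ref in gallery.items():
        sim = torch.nn.functional.cosine_similarity(emb, ref, dim=0).item()
        if sim >= best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```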
The face recognition algorithm in step S2 is implemented through the following steps:
B1, building an MTCNN + FaceNet model;
B2, inputting a large amount of face data, manually labeling it, and dividing it into a training set, a verification set and a test set;
B3, training the MTCNN + FaceNet model with the training set so that it automatically locates faces, crops the face inside the face detection box and extracts features;
B4, verifying the convergence of the MTCNN + FaceNet model with the verification set;
B5, testing the MTCNN + FaceNet model with the test set, and if the test is passed, outputting the best-performing MTCNN + FaceNet model as the face recognition algorithm.
The gait recognition in step S2 is implemented as follows:
C1, acquiring a video segment of a person walking and preprocessing it into a set of video images in which the pedestrian is separated from the background, forming black-and-white silhouette images;
C2, learning gait features directly with a GaitSet network instead of measuring the similarity between a series of gait silhouette sequences and a template;
C3, performing recognition by computing the cosine distance between the learned features and the existing features.
The black-and-white silhouette is extracted as follows: the position of the pedestrian in the video is detected and the pedestrian's contour (silhouette) is obtained by segmentation, matching, background modelling or similar means; the partial picture containing the pedestrian is cropped out, and the masks of the frames are aligned using the geometric centroid or another fixed point.
Apart from the constraint that the model input pictures must be of a fixed 64 x 44 size, the input silhouettes are not required to be in temporal order and their number is not restricted: any number of silhouettes, in any posture and from any shooting angle, can be input, and the model outputs a feature vector.
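The silhouette pre-processing of sub-step C1 can be sketched as follows with OpenCV, assuming background modelling (MOG2) stands in for whichever segmentation or matting method the deployment actually uses; the 64 x 44 target size follows the GaitSet input constraint noted above.

```python
# Sketch of the silhouette pre-processing in C1 (assumptions: OpenCV MOG2
# background modelling stands in for whatever segmentation the deployment
# uses; the largest foreground blob is taken to be the pedestrian).
from typing import Optional

import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def frame_to_silhouette(frame: np.ndarray) -> Optional[np.ndarray]:
    """Return a 64 x 44 black-and-white silhouette, or None if no pedestrian."""
    mask = subtractor.apply(frame)                       # separate pedestrian from background
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pedestrian = max(contours, key=cv2.contourArea)      # assume the biggest blob is the person
    x, y, w, h = cv2.boundingRect(pedestrian)            # crop the part containing the pedestrian
    crop = mask[y:y + h, x:x + w]
    return cv2.resize(crop, (44, 64), interpolation=cv2.INTER_NEAREST)  # GaitSet input size
```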
The gait recognition algorithm in step S2 is implemented through the following sub-steps:
D1, building a GaitSet deep learning model;
D2, dividing the open-source CASIA Gait Database into a training set, a verification set and a test set;
D3, training the GaitSet model with the training set;
D4, verifying the convergence of the GaitSet model with the verification set;
D5, testing the GaitSet model with the test set, and if the test is passed, outputting the best-performing GaitSet model as the gait recognition algorithm.
In step S2, the pedestrian re-identification is performed as follows:
the picture extracted from the video stream data is input into a trained pedestrian re-identification network built with the open-source FastReID library to obtain a feature vector; the similarity between this feature vector and the pedestrian re-identification features in the database is calculated, and if the similarity is greater than the first threshold, the two are output as the same person.
The training phase comprises the following modules:
(1) Pre-processing, which in practice covers various data augmentation methods such as Resize, Flipping, Random erasing, Auto-augment (an AutoML technique used to achieve effective data augmentation and improve feature robustness), Random patch, Cutout and so on; among these, Random erasing has a significant effect in the ReID task.
(2) Backbone, including the choice of backbone network (such as ResNet, ResNeSt, ResNeXt, etc.) and special modules that can enhance the backbone's representational power (such as non-local modules and instance-batch normalization (IBN) modules);
(3) Aggregation module, which aggregates the features produced by the backbone into a single global feature, for example max pooling, average pooling, GeM pooling or attention pooling (a GeM pooling sketch is given after the inference modules below);
(4) Head module, which normalizes the generated global feature, reduces its dimensionality, and so on.
(5) Training strategies, including the learning rate, warm-up, backbone freezing, cosine decay of the learning rate, etc.
(6) Loss functions, including cross-entropy loss, triplet loss, ArcFace loss and circle loss.
in the inference phase, the inclusion module:
(1) a measurement part, which adds a local matching method deep spatial correlation (DSR) besides common cosine and Euclidean distances;
(2) and the post-processing part is used for processing the retrieval result and comprises two reordering methods of K-reciprocal coding and Query Expansion (QE).
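As an illustration of the aggregation module (3) above, the following is a sketch of GeM (generalized-mean) pooling, of which average pooling and max pooling are special and limiting cases; the parameter names and the learnable exponent initialisation are assumptions.

```python
# Illustrative GeM (generalized-mean) pooling, one of the aggregation options
# listed above; average pooling (p = 1) and max pooling (p -> infinity) are
# its special cases. The default exponent p = 3 is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeMPooling(nn.Module):
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # learnable pooling exponent
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: backbone feature map of shape (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return x.flatten(1)                        # global feature of shape (N, C)

# feats = GeMPooling()(backbone_output)   # aggregates (N, C, H, W) -> (N, C)
```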
The pedestrian re-identification method in step S2 is implemented through the following steps:
E1, building a pedestrian re-identification network;
E2, acquiring the open-source Market-1501 data set and preprocessing the data;
E3, dividing the data set into a training set and a verification set;
E4, training the pedestrian re-identification model with the training set, evaluating it with the verification set, and selecting the pedestrian re-identification model with the best prediction performance;
E5, obtaining the similarity distribution among pedestrian images with the pedestrian re-identification model obtained in sub-step E4;
E6, inputting the picture extracted from the video stream data into the trained pedestrian re-identification network to obtain a feature vector, calculating its similarity with the pedestrian re-identification feature vectors in the database, and if the similarity is greater than the first threshold, outputting that the person and the corresponding person in the database are the same person.
The first threshold in this scenario is 0.7.
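The matching rule of steps S3-S5 therefore reduces to: a person matches a library as soon as any one of the three features reaches the first threshold of 0.7. A sketch follows, with an assumed in-memory layout for the feature libraries.

```python
# Sketch of the matching rule in steps S3-S5 with the first threshold of 0.7:
# a person matches a library as soon as any one of the three features reaches
# the threshold. The in-memory layout of the libraries is an assumption.
from typing import Dict, Optional

import numpy as np

FIRST_THRESHOLD = 0.7

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_library(features: Dict[str, Optional[np.ndarray]],
                  library: Dict[str, Dict[str, np.ndarray]]) -> Optional[str]:
    """features: {'face': vector or None, 'gait': vector, 'reid': vector}.
    library: {person_id: {'face': ..., 'gait': ..., 'reid': ...}}.
    Returns the matched person ID, or None if no feature reaches the threshold."""
    for person_id, stored in library.items():
        for key in ("face", "gait", "reid"):
            if features.get(key) is None or stored.get(key) is None:
                continue   # e.g. the face feature is left empty when no face is visible
            if cosine_similarity(features[key], stored[key]) >= FIRST_THRESHOLD:
                return person_id
    return None
```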
The invention also discloses a personnel trajectory analysis device for an intelligent park, comprising a video input interface, a storage device and a processor, the storage device being used for storing one or more programs; when the one or more programs are executed by the processor, the processor implements the intelligent-park-oriented personnel trajectory analysis method described above.
The device may also preferably comprise a communication interface for communication and interactive data transmission with external devices.
It should be noted that the memory may include a high-speed RAM memory, and may also include a nonvolatile memory (nonvolatile memory), such as at least one disk memory.
In a specific implementation, if the memory, the processor and the communication interface are integrated on one chip, the memory, the processor and the communication interface can complete mutual communication through the internal interface; if the memory, the processor and the communication interface are implemented independently, the memory, the processor and the communication interface may be connected to each other through a bus and perform communication with each other.
The invention also discloses a computer readable storage medium which stores at least one program which, when executed by a processor, implements the intelligent park-oriented person trajectory analysis method as described above.
It should be understood that the computer-readable storage medium is any data storage device that can store data or programs which can be read by a computer system. Examples of computer-readable storage media include: read-only memory, random access memory, CD-ROM, HDD, DVD, magnetic tape, optical data storage devices, and the like.
The computer readable storage medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic, Radio Frequency (RF), the like, or any suitable combination of the foregoing; in some embodiments, the computer-readable storage medium may also be non-transitory.
As one embodiment of the present disclosure, the method is implemented by a computer software system, and the specific workflow is as follows:
S1, acquiring video streams from all monitoring cameras of the park;
S2, the park monitoring system continuously performing face recognition, gait recognition and pedestrian re-identification on people appearing in the park, automatically extracting face features with a trained MTCNN + FaceNet algorithm, gait features with a GaitSet algorithm, and pedestrian re-identification features with a pedestrian re-identification algorithm built on the open-source FastReID library, although the method is not limited to these algorithms;
S3, comparing the feature information with the community personnel database; if any one of the recognized face feature, gait feature and pedestrian re-identification feature matches the community personnel database, the person is a community member and is not tracked; specifically, the similarities between the recognized face, gait and pedestrian re-identification features and the face, gait and pedestrian re-identification features of the community personnel database are calculated respectively, and if the similarity of any one feature is greater than 0.7, the features are considered matched;
S4, if the person's features do not match the community personnel database, checking whether the person is in the stranger database; specifically, the similarities between the recognized face, gait and pedestrian re-identification features and the face, gait and pedestrian re-identification features of the stranger database are calculated respectively, and if the similarity of any one feature is greater than 0.7, the features are considered matched; preferably, the above similarity is calculated using the cosine distance method;
S5, if any one of the recognized face feature, gait feature and pedestrian re-identification feature matches the stranger database, the person's features have already been recorded and the person is being tracked; going to step S7 to record the person;
S6, if not, i.e. the person appears in a camera view for the first time, assigning the person a new ID and storing the recognized face feature, gait feature and pedestrian re-identification feature in the stranger database so that the person can be tracked according to the stranger database features and the trajectory discovered; at the same time, triggering an alarm prompt and returning the picture to remind security staff to pay attention to the person's trajectory; going to step S1 to continue tracking;
S7, recording the time at which the pedestrian with each ID appears in a camera and the position of that camera in the personnel trajectory database; the historical data stored in the personnel trajectory database can be used to backtrack the person's trajectory when an incident occurs;
S8, according to the personnel information recorded in the personnel trajectory database, connecting the camera positions on the park map in the chronological order in which the person appeared in the cameras, drawing the personnel trajectory map in real time, and returning to S1 to continue.
In this scheme, all cameras in the park are connected to the system. Whichever camera first finds a pedestrian, face recognition and gait recognition are carried out immediately and the three features, including the pedestrian re-identification feature, are extracted; if no face can be extracted, the face feature is set to empty. Similarity is calculated with the face, gait and pedestrian re-identification features in the community personnel database; if the similarity of any one feature is greater than 0.7, the person is a community member and is not tracked. If not, similarity is calculated with the face, gait and pedestrian re-identification features in the stranger database; if the similarity of any one feature is greater than 0.7, the time at which the person corresponding to each ID passes each camera is recorded in the personnel trajectory database. If not, i.e. the stranger appears in a park camera for the first time, the face picture captured by the camera, the gait feature and the pedestrian re-identification feature are stored in the stranger database, alarm information is sent to security personnel, and the picture is returned to them in real time so that they understand the situation. Finally, the pedestrian's trajectory is drawn on the park map according to the information recorded in the personnel trajectory database, and functions such as retrospective tracing are provided.
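Tying these steps together, the per-frame workflow of S1-S8 can be sketched as below; extract_features and alert_security are hypothetical stand-ins for the recognition models and the alarm channel described above, and match_library refers to the matching sketch given earlier.

```python
# End-to-end sketch of the per-frame workflow S1-S8. extract_features and
# alert_security are hypothetical stubs standing in for the recognition models
# and the alarm channel described above; match_library is the matching helper
# sketched earlier. Nothing here is an API defined by the patent.
import uuid
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: str
    position: tuple   # (x, y) on the park map

def extract_features(frame):
    """Stub: would call the face, gait and ReID extractors described above."""
    return None

def alert_security(frame, camera: Camera) -> None:
    """Stub: would push an alarm and the current monitoring picture to security staff."""
    pass

def process_frame(frame, camera: Camera, community_db: dict,
                  stranger_db: dict, trajectory: list) -> None:
    feats = extract_features(frame)                  # S2: face + gait + ReID features
    if feats is None:                                # no pedestrian in this frame
        return
    if match_library(feats, community_db):           # S3: community member -> not tracked
        return
    person_id = match_library(feats, stranger_db)    # S4/S5: an already-recorded stranger?
    if person_id is None:                            # S6: first appearance -> new ID + alarm
        person_id = str(uuid.uuid4())
        stranger_db[person_id] = feats
        alert_security(frame, camera)
    trajectory.append((person_id, camera.camera_id, camera.position))   # S7
    # S8: the trajectory map is drawn from the trajectory records in time order
```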
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.
Claims (10)
1. A personnel trajectory analysis method for an intelligent park, characterized by comprising the following steps:
S1, acquiring video streams from all monitoring cameras of the park;
S2, the park monitoring system continuously performing face recognition on people appearing in the park through a face recognition algorithm, gait recognition through a gait recognition algorithm, and pedestrian re-identification through a pedestrian re-identification method;
S3, calculating the similarity between the three identified features and the community personnel database; if the similarity between any one of the three features and the corresponding feature stored in the community personnel database is greater than or equal to a first threshold, confirming the person as a community member and not tracking the person;
S4, if the identified features of the person do not match the community personnel database, i.e. the similarities of all three features are smaller than the first threshold, checking whether the person is in the stranger database;
S5, if the similarity between any one of the three identified features and the corresponding feature stored in the stranger database is greater than or equal to the first threshold, the stranger's features have already been recorded in the stranger database and the person is being tracked; jumping to step S7;
S6, if not, i.e. the stranger appears in a camera view for the first time, storing the three identified features of the stranger in the stranger database under a newly created ID, triggering an alarm prompt, returning the currently identified monitoring picture, and jumping to step S1;
S7, recording the time at which the pedestrian with each ID appears in a camera and the position of that camera in the personnel trajectory database;
S8, according to the personnel information recorded in the personnel trajectory database, connecting the camera positions on the park map in the chronological order in which the person appeared in the cameras, drawing the personnel trajectory map in real time, and jumping to step S1.
2. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the community personnel database is used for storing face information characteristic data, gait characteristic data and pedestrian re-identification characteristic data of all people in the community;
the stranger database is used for storing face feature data, gait feature data and pedestrian re-identification feature data of strangers;
the pedestrian trajectory database is used for recording the time and the position of each pedestrian, and historical data stored in the pedestrian trajectory database is used for backtracking the pedestrian trajectory.
3. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the face recognition in step S2 comprises the following sub-steps:
A1, first performing face detection using MTCNN;
A2, then performing face recognition using FaceNet, the recognized feature being an embedding;
A3, comparing the face embedding with the existing face features by similarity to complete the face recognition task.
4. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the face recognition algorithm in step S2 is implemented through the following steps:
B1, building an MTCNN + FaceNet model;
B2, inputting a large amount of face data, manually labeling it, and dividing it into a training set, a verification set and a test set;
B3, training the MTCNN + FaceNet model with the training set so that it automatically locates faces, crops the face inside the face detection box and extracts features;
B4, verifying the convergence of the MTCNN + FaceNet model with the verification set;
B5, testing the MTCNN + FaceNet model with the test set, and if the test is passed, outputting the best-performing MTCNN + FaceNet model as the face recognition algorithm.
5. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the gait recognition in step S2 is implemented as follows:
C1, acquiring a video segment of a person walking and preprocessing it into a set of video images in which the pedestrian is separated from the background, forming black-and-white silhouette images;
C2, learning gait features directly with a GaitSet network instead of measuring the similarity between a series of gait silhouette sequences and a template;
C3, performing recognition by computing the cosine distance between the learned features and the existing features.
6. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the gait recognition algorithm in step S2 is implemented through the following sub-steps:
D1, building a GaitSet deep learning model;
D2, dividing the open-source CASIA Gait Database into a training set, a verification set and a test set;
D3, training the GaitSet model with the training set;
D4, verifying the convergence of the GaitSet model with the verification set;
D5, testing the GaitSet model with the test set, and if the test is passed, outputting the best-performing GaitSet model as the gait recognition algorithm.
7. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: in step S2, the pedestrian re-identification is implemented as follows:
the picture extracted from the video stream data is input into a trained pedestrian re-identification network built with the open-source FastReID library to obtain a feature vector; the similarity between this feature vector and the pedestrian re-identification features in the database is calculated, and if the similarity is greater than the first threshold, the two are output as the same person.
8. The intelligent park-oriented personnel trajectory analysis method of claim 1, wherein: the pedestrian re-identification method in step S2 comprises the following steps:
E1, building a pedestrian re-identification network;
E2, acquiring the open-source Market-1501 data set and preprocessing the data;
E3, dividing the data set into a training set and a verification set;
E4, training the pedestrian re-identification model with the training set, evaluating it with the verification set, and selecting the pedestrian re-identification model with the best prediction performance;
E5, obtaining the similarity distribution among pedestrian images with the pedestrian re-identification model obtained in sub-step E4;
E6, inputting the picture extracted from the video stream data into the trained pedestrian re-identification network to obtain a feature vector, calculating its similarity with the pedestrian re-identification feature vectors in the database, and if the similarity is greater than the first threshold, outputting that the person and the corresponding person in the database are the same person.
9. A personnel trajectory analysis device for an intelligent park, characterized in that: the device comprises a video input interface, a processor and a storage device, wherein the storage device is used for storing one or more programs; the one or more programs, when executed by the processor, implement the intelligent-park-oriented personnel trajectory analysis method of any one of claims 1-8.
10. A computer-readable storage medium storing at least one program, characterized in that: the program, when executed by a processor, implements the intelligent-park-oriented personnel trajectory analysis method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110577338.3A CN113269091A (en) | 2021-05-26 | 2021-05-26 | Personnel trajectory analysis method, equipment and medium for intelligent park |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110577338.3A CN113269091A (en) | 2021-05-26 | 2021-05-26 | Personnel trajectory analysis method, equipment and medium for intelligent park |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113269091A true CN113269091A (en) | 2021-08-17 |
Family
ID=77232883
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110577338.3A Pending CN113269091A (en) | 2021-05-26 | 2021-05-26 | Personnel trajectory analysis method, equipment and medium for intelligent park |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113269091A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934104A (en) * | 2019-01-29 | 2019-06-25 | 武汉烽火众智数字技术有限责任公司 | The pedestrian retrieval method and system across camera lens identified again based on pedestrian |
CN110619277A (en) * | 2019-08-15 | 2019-12-27 | 青岛文达通科技股份有限公司 | Multi-community intelligent deployment and control method and system |
CN111178129A (en) * | 2019-11-25 | 2020-05-19 | 浙江工商大学 | Multi-modal personnel identification method based on face and posture |
CN112287815A (en) * | 2020-10-28 | 2021-01-29 | 广州瀚信通信科技股份有限公司 | Intelligent security personnel deployment and control system and method for complex scene |
CN112766119A (en) * | 2021-01-11 | 2021-05-07 | 厦门兆慧网络科技有限公司 | Method for accurately identifying strangers and constructing community security based on multi-dimensional face analysis |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114022920A (en) * | 2021-09-09 | 2022-02-08 | 中通服和信科技有限公司 | Wisdom garden fortune dimension system based on thing networking |
CN113688794A (en) * | 2021-09-24 | 2021-11-23 | 北京声智科技有限公司 | Identity recognition method and device, electronic equipment and computer readable storage medium |
CN114120506A (en) * | 2021-09-30 | 2022-03-01 | 国网浙江省电力有限公司 | Infrastructure field personnel management and control system and method based on 5G network architecture |
CN113903068A (en) * | 2021-10-19 | 2022-01-07 | 深圳市中博科创信息技术有限公司 | Stranger monitoring method, device and equipment based on human face features and storage medium |
CN113989914A (en) * | 2021-12-24 | 2022-01-28 | 安维尔信息科技(天津)有限公司 | Security monitoring method and system based on face recognition |
CN113989914B (en) * | 2021-12-24 | 2022-03-15 | 安维尔信息科技(天津)有限公司 | Security monitoring method and system based on face recognition |
CN114360185A (en) * | 2022-01-14 | 2022-04-15 | 上海星杰装饰有限公司 | Villa intelligent system |
CN114429665A (en) * | 2022-01-27 | 2022-05-03 | 复旦大学 | Regional pedestrian trajectory reconstruction device and method based on deep learning |
CN115471902A (en) * | 2022-11-14 | 2022-12-13 | 广州市威士丹利智能科技有限公司 | Face recognition protection method and system based on smart campus |
CN118135462A (en) * | 2024-03-29 | 2024-06-04 | 北京积加科技有限公司 | Stranger intrusion detection method and device based on face and gait recognition |
CN118762333A (en) * | 2024-09-05 | 2024-10-11 | 江苏普尊科技开发有限公司 | Cloud computing-based park monitoring data storage and sharing method and system |
CN118762333B (en) * | 2024-09-05 | 2024-11-19 | 江苏普尊科技开发有限公司 | Cloud computing-based park monitoring data storage and sharing method and system |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210817 |