CN112380461A - Pedestrian retrieval method based on GPS track - Google Patents
- Publication number
- CN112380461A (application CN202011308869.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Abstract
The invention discloses a pedestrian retrieval method based on a GPS track, which comprises the following steps: collecting multi-modal information of a pedestrian during walking; drawing the collected GPS information as a track on a map and searching for nearby candidate pedestrian images according to the spatio-temporal information of the track; extracting features from the candidate pedestrian images with a ResNet-50 model, and clustering the candidate pedestrian features with a clustering algorithm to form different pedestrian clusters; and drawing the track of each pedestrian cluster on the map according to the spatio-temporal information of its pedestrian images, calculating the distance between each pedestrian track and the GPS track, and screening out the best-matching track, thereby determining the target pedestrian corresponding to the GPS track and achieving the purpose of retrieving a pedestrian by GPS track.
Description
Technical Field
The invention relates to the technical field of multi-modal intelligent security, and in particular to a pedestrian retrieval method based on a GPS track.
Background
At present, intelligent security occupies a central position in the construction of smart cities in China. Fully utilizing multi-modal information is the key step in building a three-dimensional security system.
Most existing pedestrian retrieval algorithms are based on image information alone and can only achieve image-to-image search, which limits their application in real, complex scenes. With the wide adoption of intelligent mobile terminals, GPS information has become closely tied to pedestrians, but at present GPS is mostly applied to navigation and positioning, and its combination with pedestrian retrieval has not been considered. Given the accuracy of GPS positioning and its close association with pedestrians, GPS information urgently needs to be introduced into the field of pedestrian retrieval, so as to help various investigation systems screen out target pedestrians better, faster, and more comprehensively.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a pedestrian retrieval method based on a GPS track.
The purpose of the invention can be achieved by adopting the following technical scheme:
a pedestrian retrieval method based on a GPS track comprises the following steps:
s1, obtaining multi-mode information of the pedestrian during walking, wherein the multi-mode information comprises a pedestrian image, time information of the pedestrian image, longitude and latitude information of a camera for obtaining the pedestrian image, and GPS information collected by the pedestrian in the moving process;
s2, drawing the GPS information obtained in the step S1 into tracks, selecting a target GPS track, and searching for candidate pedestrian images appearing within a certain spatio-temporal threshold according to the spatio-temporal information of each GPS point of the track;
s3, extracting the features of the candidate pedestrian images obtained in the step S2 by using a ResNet-50 model, and clustering the pedestrian features by using a clustering algorithm to form different pedestrian clusters;
and S4, drawing the track of each pedestrian cluster obtained in the step S3 on the map according to the spatio-temporal information of each pedestrian image in the cluster, and screening out the pedestrian cluster that best matches the target GPS track as the target pedestrian.
Further, the process of step S1 is as follows:
s11, acquiring a pedestrian image from a monitoring video acquired by the cross-camera equipment on the selected road section;
s12, recording time information displayed in the monitoring video when each pedestrian image is acquired while acquiring the pedestrian image, wherein the time information is used as the time information of the pedestrian image acquired by the camera equipment; recording longitude and latitude information of the camera equipment as spatial information of the camera equipment and spatial information of a pedestrian image acquired by the camera equipment;
and S13, continuously collecting the GPS change information of the pedestrian in the moving process by using the GPS collection software installed on the pedestrian mobile terminal in the moving process of the pedestrian.
Further, in step S11, the surveillance video collected by the camera equipment is divided frame by frame into video frames, and pedestrian detection is then performed on the video frames through the SSD pedestrian detection algorithm or the Faster RCNN pedestrian detection algorithm.
Further, the process of step S2 is as follows:
s21, firstly carrying out GPS division on the GPS information obtained in the step S1 according to the unique MAC information of the mobile terminal equipment, then sorting each piece of GPS information in ascending order by time, then plotting the sorted GPS points on a map according to longitude and latitude, and connecting the points to display the track;
s22, selecting any target GPS track, and screening, at each GPS point of the track, candidate pedestrian images appearing within a time threshold and a space threshold, wherein the time threshold is within 5 seconds before and after each GPS point, and the space threshold is the spatial range in which the longitude and latitude differ from those of the GPS point by less than 0.0001.
Further, the network structure of the ResNet-50 model in step S3 is specifically as follows:
the input layer is connected with the output layer in sequence as follows: a convolutional layer conv1, a BN layer BN1, a maximum pooling layer max _ pool, a convolutional layer layerr 1.0.conv1, a BN layer layerr 1.0.bn1, a convolutional layer layerr 1.0.conv2, a BN layer layerr 1.0.bn2, a convolutional layer layerr 1.0.conv3, a BN layer layerr 1.0.bn3, a downsampling layer layerr 1.0.downsample, a convolutional layer layerr 1.1.conv1, a BN layer layerr 1.1.bn1, a convolutional layer layerr 1.1.1.conv 2, a BN layer layerr 1.1.bn2, a BN layer1.1.bn2, a convolutional layer layerr 1.1.1.1.conv 3, a convolutional layer layerr 2.1.0.1.1.1, a convolutional layer 2.1.2.0.1.0.1.1.1.1.1.1.bn3, a convolutional layer 2.2.2.2.1.0.0.0.1.0.1.0.1.1.1.1.1.1.1.2.2.0.1.1.1.1.1.1.1.1.2.1.1.2.2.2.1.2.2.1.1.1.2.2.2.1.2.2.2.2.2.2.2.2.1.1.2.2.2.1.1.1.1.2.1.2.2.2.2.1.1.1.2.2.1.2.2.2.2.2.2.2.2.2.2.1.2.1.1.2.1.1.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.1.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a convolutional layer, a layer 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a layer 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a layer 2.2.2.2.2.2.2.2.2.2.2.2., BN layer layerr3.1. bn1, convolutional layer layerr3.1. conv2, BN layer layerr3.1. bnn 2, convolutional layer layerr3.1. conv3, BN layer layerr3.1.bn 3, convolutional layer layerr3.2. conv1, BN layer layerr3.2. BN1, convolutional layer layerr3.2. conv2, BN layer layerr3.2. BN2, convolutional layer layerr3.2. BN3, BN layer layerr3.2. conv3, BN layer layerr3.2. bn3, convolutional layer layerr3.3. conv1, multilayer layer3.3. bnr 1, convolutional layer layerr3.3. bn1, multilayer layer3.3.bn1, convolutional layer3.3.conv2, BN layer layerr3.3. bynen2, convolutional layer layerr3.3. 3. byerr3. bor 3, convolutional layer layerr3.4. byerr3. bor 2, convolutional layer layerr3.4. byerr 3. borebn 2, convolutional layer3.4. byerr 3.0.0.1. byerr 2. byerr 3, convolutional layer4. byerr 3.4. 
bor 3.0.0.0.1. byerr 2. bor 2, convolutional layer, bor 4. bor 3.4. bor 2, bor 3.4. bor 2, bor 2. bor 3, bor 3.4, bor 2. bor 4, bor 3.4, bor 4, bor 3.4, borborborborbor 3.4, bor 4, bor 3.4, bor 2, borborborborborborbornbn 2, bor 3.4, borborbor 3.3.3.4, borborborborborborborborborborborborbor 3.3.4, bor 3.0.0.0.0.0.4, bor 3.5, bor 3.4, bor 3.0.0.0.1, bor 3.0.1, borborborborborborborbornbn 2, borborborborborbornbn 2, bornbn 2, bor 3.4, bornbn 2.
Further, the clustering algorithm in step S3 adopts a DBSCAN clustering algorithm or an AP clustering algorithm.
Further, the process of step S4 is as follows:
s41, for each of the different pedestrian clusters obtained in the step S3, plotting and connecting points on the map according to the spatio-temporal information of each pedestrian image in the cluster, to obtain a plurality of candidate pedestrian tracks;
s42, treating the pedestrian track and the target GPS track as two curves, and screening out the pedestrian track most similar to the target GPS track by calculating the distance between the two curves, so that the pedestrian cluster corresponding to that pedestrian track is taken as the target pedestrian retrieved according to the target GPS track.
Compared with the prior art, the invention has the following advantages and effects:
the invention introduces the GPS track information into the pedestrian retrieval field for the first time, can detect the walking of the pedestrian more three-dimensionally and accurately, and has important significance in the fields of criminal investigation and the like. Meanwhile, under the condition that marking is not manually intervened, the invention efficiently utilizes GPS track information, pedestrian image information and space-time information of pedestrian images, and achieves the aim of quickly and cost-effectively searching pedestrians.
Drawings
Fig. 1 is a flowchart of the pedestrian retrieval method based on a GPS track according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The embodiment discloses a pedestrian retrieval method based on a GPS track, which comprises the following steps:
s1, obtaining multi-mode information of the pedestrian during walking, wherein the multi-mode information comprises a pedestrian image, time information of the pedestrian image, longitude and latitude information of a camera for obtaining the pedestrian image, and GPS information collected by the pedestrian in the moving process;
s2, drawing the GPS information obtained in the step S1 into tracks, selecting a target GPS track, and searching for candidate pedestrian images appearing within a certain spatio-temporal threshold according to the spatio-temporal information of each GPS point of the track;
s3, extracting the features of the candidate pedestrian images obtained in the step S2 by using a ResNet-50 model, and clustering the pedestrian features by using a clustering algorithm to form different pedestrian clusters;
and S4, drawing the track of each pedestrian cluster obtained in the step S3 on the map according to the spatio-temporal information of each pedestrian image in the cluster, and screening out the pedestrian cluster that best matches the target GPS track as the target pedestrian, thereby achieving the purpose of retrieving a pedestrian by GPS track.
In this embodiment, the specific implementation process of the foregoing step S1 is as follows:
and S11, acquiring pedestrian images from the monitoring videos acquired by the cross-camera equipment at a certain road section.
For example, the pedestrian images may be obtained from the surveillance video by dividing the video collected by the camera equipment frame by frame into video frames, and then performing pedestrian detection on the video frames with a pedestrian detection algorithm. The pedestrian detection algorithm may adopt the SSD algorithm or the Faster RCNN algorithm, either of which can extract the pedestrian images from a video frame. The embodiment of the invention does not limit the pedestrian detection algorithm, which can be selected by a person skilled in the art according to the actual situation.
S12, recording time information displayed in the monitoring video when each pedestrian image is acquired while acquiring the pedestrian image, wherein the time information is used as the time information of the pedestrian image acquired by the camera equipment; and recording longitude and latitude information of the camera equipment as the spatial information of the camera equipment and the spatial information of the pedestrian image acquired by the camera equipment. The present invention refers to temporal information and spatial information of a pedestrian image collectively as spatiotemporal information of the pedestrian image.
S13, in the process of pedestrian movement, the invention continuously collects the GPS change information of the pedestrian in movement by using the GPS collection software installed on the pedestrian mobile terminal. And then uniformly collecting the GPS information of the movement of each pedestrian on the road section.
Illustratively, the GPS collection software is used for collecting GPS information sent by the mobile terminal, storing the GPS information locally, and then collecting the GPS information uniformly. The person skilled in the art can develop himself or herself according to the actual situation.
In this embodiment, the specific implementation process of the foregoing step S2 is as follows:
S21, GPS division is carried out on the GPS information obtained in the step S1 according to the unique MAC information of the mobile terminal equipment; each piece of GPS information is then sorted in ascending order by time, the sorted GPS points are plotted on a map according to longitude and latitude, and the points are connected to display the track.
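Illustratively, the division and ordering of step S21 may be sketched as follows; the `(mac, timestamp, lat, lon)` record layout is assumed for illustration, since the patent specifies only that the device MAC identifies each track:

```python
from collections import defaultdict

def split_and_sort(records):
    """Group raw GPS records by device MAC and sort each group by time.

    records: iterable of (mac, timestamp, lat, lon) tuples; this record
    layout is an assumption for the sketch, not taken from the patent.
    """
    tracks = defaultdict(list)
    for mac, ts, lat, lon in records:
        tracks[mac].append((ts, lat, lon))
    for points in tracks.values():
        points.sort(key=lambda p: p[0])  # ascending by time, as in S21
    return dict(tracks)
```

Each sorted group of (time, latitude, longitude) points can then be plotted and connected on the map to display one candidate track per device.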
And S22, selecting any target GPS track; at each GPS point of the track, candidate pedestrian images appearing within a time threshold and a space threshold can be screened. The time threshold is within 5 seconds before and after each GPS point; the space threshold is the spatial range in which the longitude and latitude differ from those of the GPS point by less than 0.0001. The specific time threshold and space threshold are not limited in this embodiment and may be set according to actual needs.
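Illustratively, the spatio-temporal screening of step S22 may be sketched as follows, using the 5-second and 0.0001-degree thresholds stated above; the detection record layout is assumed for illustration:

```python
def find_candidates(gps_point, detections, time_s=5.0, deg=0.0001):
    """Return camera detections inside the spatio-temporal window of one
    GPS point.

    gps_point: (timestamp, lat, lon) of one point on the target track.
    detections: iterable of (image_id, timestamp, lat, lon) records from
    the cameras -- an assumed layout for this sketch.
    """
    t0, lat0, lon0 = gps_point
    return [d for d in detections
            if abs(d[1] - t0) <= time_s      # within 5 s before or after
            and abs(d[2] - lat0) < deg       # latitude within 0.0001
            and abs(d[3] - lon0) < deg]      # longitude within 0.0001
```

Running this filter over every point of the target GPS track yields the candidate pedestrian image set passed to step S3.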
In this embodiment, the network structure of the ResNet-50 model in step S3 is specifically as follows: the input layer is connected with the output layer in the following sequence: a convolutional layer conv1, a BN layer bn1, and a maximum pooling layer max_pool; a stage layer1 consisting of three bottleneck blocks layer1.0 to layer1.2, each containing convolutional layers conv1, conv2, conv3 with corresponding BN layers bn1, bn2, bn3, block layer1.0 additionally containing a downsampling layer layer1.0.downsample; a stage layer2 consisting of four bottleneck blocks layer2.0 to layer2.3 of the same form, with a downsampling layer in layer2.0; a stage layer3 consisting of six bottleneck blocks layer3.0 to layer3.5 of the same form, with a downsampling layer in layer3.0; a stage layer4 consisting of three bottleneck blocks layer4.0 to layer4.2 of the same form, with a downsampling layer in layer4.0; and finally an average pooling layer avg_pool and a fully connected layer fc.
in this embodiment, the clustering algorithm in step S3 may adopt a DBSCAN clustering algorithm or an AP clustering algorithm, and the specific clustering algorithm adopted in this embodiment is not limited, and may be determined as needed.
Illustratively, the DBSCAN clustering algorithm is a density-based clustering algorithm. It requires that the number of objects (points or other spatial objects) contained within a certain region of the clustering space be no less than a given threshold, so that low-density regions are filtered out and dense sample points are found. Samples of the same class are closely related; that is, within a short distance of any sample of a class there must exist other samples of that class. The DBSCAN algorithm has the advantages of fast clustering, effective handling of noise points, and the ability to discover spatial clusters of arbitrary shape.
In this embodiment, the specific implementation process of the foregoing step S4 is as follows:
and S41, performing point drawing and line connecting on different pedestrian clusters obtained in the step S3 on a map according to the space-time information of each pedestrian image in the clusters to obtain a plurality of candidate pedestrian tracks.
S42, the pedestrian track and the target GPS track are treated as two curves, and the pedestrian track most similar to the target GPS track can be screened out by calculating the distance between the two curves, so that the pedestrian cluster corresponding to that pedestrian track is taken as the target pedestrian retrieved according to the target GPS track. The distance function between the two curves is not limited in this example; a person skilled in the art may use the Fréchet distance or the Hausdorff distance according to the actual situation.
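Illustratively, the curve distance of step S42 may be computed as the symmetric Hausdorff distance via SciPy (the Fréchet distance would be an equally valid choice; the toy coordinates below are purely illustrative):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def curve_distance(track_a, track_b):
    """Symmetric Hausdorff distance between two tracks, each given as
    an (n, 2) array of (lat, lon) points."""
    return max(directed_hausdorff(track_a, track_b)[0],
               directed_hausdorff(track_b, track_a)[0])

gps_track = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ped_track = np.array([[0.0, 0.1], [1.0, 1.1], [2.0, 2.1]])  # offset copy
```

The candidate pedestrian track with the smallest `curve_distance` to the target GPS track is selected, and its cluster is reported as the target pedestrian.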
Example 2
The embodiment of the invention also provides a computer storage medium, wherein the computer storage medium stores computer-executable instructions, and the computer-executable instructions can execute the pedestrian retrieval method of Example 1. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here; obvious variations or modifications derived therefrom remain within the protection scope of the invention.
Claims (7)
1. A pedestrian retrieval method based on a GPS track is characterized by comprising the following steps:
s1, obtaining multi-mode information of the pedestrian during walking, wherein the multi-mode information comprises a pedestrian image, time information of the pedestrian image, longitude and latitude information of a camera for obtaining the pedestrian image, and GPS information collected by the pedestrian in the moving process;
s2, drawing the GPS information obtained in the step S1 into tracks, selecting a target GPS track, and searching for candidate pedestrian images appearing within a certain spatio-temporal threshold according to the spatio-temporal information of each GPS point of the track;
s3, extracting the features of the candidate pedestrian images obtained in the step S2 by using a ResNet-50 model, and clustering the pedestrian features by using a clustering algorithm to form different pedestrian clusters;
and S4, drawing the track of each pedestrian cluster obtained in the step S3 on the map according to the spatio-temporal information of each pedestrian image in the cluster, and screening out the pedestrian cluster that best matches the target GPS track as the target pedestrian.
2. The pedestrian retrieval method based on the GPS track according to claim 1, wherein the step S1 is performed as follows:
s11, acquiring a pedestrian image from a monitoring video acquired by the cross-camera equipment on the selected road section;
s12, recording time information displayed in the monitoring video when each pedestrian image is acquired while acquiring the pedestrian image, wherein the time information is used as the time information of the pedestrian image acquired by the camera equipment; recording longitude and latitude information of the camera equipment as spatial information of the camera equipment and spatial information of a pedestrian image acquired by the camera equipment;
and S13, continuously collecting the GPS change information of the pedestrian in the moving process by using the GPS collection software installed on the pedestrian mobile terminal in the moving process of the pedestrian.
3. The pedestrian retrieval method according to claim 1, wherein in step S11, the surveillance video collected by the camera equipment is divided frame by frame into video frames, and pedestrian detection is then performed on the video frames through the SSD pedestrian detection algorithm or the Faster RCNN pedestrian detection algorithm.
4. The pedestrian retrieval method based on the GPS track according to claim 1, wherein the step S2 is performed as follows:
s21, firstly carrying out GPS division on the GPS information obtained in the step S1 according to the unique MAC information of the mobile terminal equipment, then sorting each piece of GPS information in ascending order by time, then plotting the sorted GPS points on a map according to longitude and latitude, and connecting the points to display the track;
s22, selecting any target GPS track, and screening, at each GPS point of the track, candidate pedestrian images appearing within a time threshold and a space threshold, wherein the time threshold is within 5 seconds before and after each GPS point, and the space threshold is the spatial range in which the longitude and latitude differ from those of the GPS point by less than 0.0001.
5. The pedestrian retrieval method based on the GPS track according to claim 1, wherein the network structure of the ResNet-50 model in step S3 is specifically as follows:
the input layer is connected with the output layer in sequence as follows: a convolutional layer conv1, a BN layer BN1, a maximum pooling layer max _ pool, a convolutional layer layerr 1.0.conv1, a BN layer layerr 1.0.bn1, a convolutional layer layerr 1.0.conv2, a BN layer layerr 1.0.bn2, a convolutional layer layerr 1.0.conv3, a BN layer layerr 1.0.bn3, a downsampling layer layerr 1.0.downsample, a convolutional layer layerr 1.1.conv1, a BN layer layerr 1.1.bn1, a convolutional layer layerr 1.1.1.conv 2, a BN layer layerr 1.1.bn2, a BN layer1.1.bn2, a convolutional layer layerr 1.1.1.1.conv 3, a convolutional layer layerr 2.1.0.1.1.1, a convolutional layer 2.1.2.0.1.0.1.1.1.1.1.1.bn3, a convolutional layer 2.2.2.2.1.0.0.0.1.0.1.0.1.1.1.1.1.1.1.2.2.0.1.1.1.1.1.1.1.1.2.1.1.2.2.2.1.2.2.1.1.1.2.2.2.1.2.2.2.2.2.2.2.2.1.1.2.2.2.1.1.1.1.2.1.2.2.2.2.1.1.1.2.2.1.2.2.2.2.2.2.2.2.2.2.1.2.1.1.2.1.1.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.1.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a convolutional layer, a layer 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a layer 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2, a layer 2.2.2.2.2.2.2.2.2.2.2.2., BN layer layerr3.1. bn1, convolutional layer layerr3.1. conv2, BN layer layerr3.1. bnn 2, convolutional layer layerr3.1. conv3, BN layer layerr3.1.bn 3, convolutional layer layerr3.2. conv1, BN layer layerr3.2. BN1, convolutional layer layerr3.2. conv2, BN layer layerr3.2. BN2, convolutional layer layerr3.2. BN3, BN layer layerr3.2. conv3, BN layer layerr3.2. bn3, convolutional layer layerr3.3. conv1, multilayer layer3.3. bnr 1, convolutional layer layerr3.3. bn1, multilayer layer3.3.bn1, convolutional layer3.3.conv2, BN layer layerr3.3. bynen2, convolutional layer layerr3.3. 3. byerr3. bor 3, convolutional layer layerr3.4. byerr3. bor 2, convolutional layer layerr3.4. byerr 3. borebn 2, convolutional layer3.4. byerr 3.0.0.1. byerr 2. byerr 3, convolutional layer4. byerr 3.4. 
bor 3.0.0.0.1. byerr 2. bor 2, convolutional layer, bor 4. bor 3.4. bor 2, bor 3.4. bor 2, bor 2. bor 3, bor 3.4, bor 2. bor 4, bor 3.4, bor 4, bor 3.4, borborborborbor 3.4, bor 4, bor 3.4, bor 2, borborborborborborbornbn 2, bor 3.4, borborbor 3.3.3.4, borborborborborborborborborborborborbor 3.3.4, bor 3.0.0.0.0.0.4, bor 3.5, bor 3.4, bor 3.0.0.0.1, bor 3.0.1, borborborborborborborbornbn 2, borborborborborbornbn 2, bornbn 2, bor 3.4, bornbn 2.
6. The pedestrian retrieval method based on the GPS track according to claim 1, wherein the clustering algorithm in step S3 adopts a DBSCAN clustering algorithm or an AP clustering algorithm.
7. The pedestrian retrieval method based on the GPS track according to claim 1, wherein the step S4 is performed as follows:
s41, for each of the different pedestrian clusters obtained in the step S3, plotting and connecting points on the map according to the spatio-temporal information of each pedestrian image in the cluster, to obtain a plurality of candidate pedestrian tracks;
s42, treating the pedestrian track and the target GPS track as two curves, and screening out the pedestrian track most similar to the target GPS track by calculating the distance between the two curves, so that the pedestrian cluster corresponding to that pedestrian track is taken as the target pedestrian retrieved according to the target GPS track.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011308869.4A CN112380461A (en) | 2020-11-20 | 2020-11-20 | Pedestrian retrieval method based on GPS track |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112380461A true CN112380461A (en) | 2021-02-19 |
Family
ID=74584472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011308869.4A Pending CN112380461A (en) | 2020-11-20 | 2020-11-20 | Pedestrian retrieval method based on GPS track |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112380461A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023070833A1 (en) * | 2021-10-26 | 2023-05-04 | Huizhou Desay SV Automotive Co., Ltd. | Method for detecting target pedestrian around vehicle, and vehicle moving method and device |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111344739A (en) * | 2017-11-14 | 2020-06-26 | Qualcomm Incorporated | Spatio-temporal action and role localization |
US20200283016A1 (en) * | 2019-03-06 | 2020-09-10 | Robert Bosch Gmbh | Movement prediction of pedestrians useful for autonomous driving |
CN110503032A (en) * | 2019-08-21 | 2019-11-26 | Central South University | Individual important place detection method based on monitoring camera track data |
CN111353448A (en) * | 2020-03-05 | 2020-06-30 | Nanjing University of Science and Technology | Pedestrian multi-target tracking method based on relevance clustering and space-time constraint |
CN111666823A (en) * | 2020-05-14 | 2020-09-15 | Wuhan University | Pedestrian re-identification method based on individual walking motion space-time law collaborative identification |
Non-Patent Citations (3)
Title |
---|
SUNGWON BYON et al.: "An Implementation of Re-identifying Video Objects with Location Trajectory Data", 2020 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY, 23 October 2020 (2020-10-23), pages 1546 - 1548 * |
JIANG ZHIHAO et al.: "Pedestrian activity prediction algorithm for video surveillance based on a spatio-temporal model", Computer Applications and Software, vol. 34, no. 01, 31 January 2017 (2017-01-31), pages 149 - 153 * |
YUAN GUAN et al.: "Trajectory Data Mining Technology for Moving Objects", 30 November 2016, China University of Mining and Technology Press, pages 42 - 43 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109241349B (en) | Monitoring video multi-target classification retrieval method and system based on deep learning | |
US11657623B2 (en) | Traffic information providing method and device, and computer program stored in medium in order to execute method | |
CN112905824A (en) | Target vehicle tracking method and device, computer equipment and storage medium | |
Filonenko et al. | Real-time flood detection for video surveillance | |
CN112836683B (en) | License plate recognition method, device, equipment and medium for portable camera equipment | |
CN113963438A (en) | Behavior recognition method and device, equipment and storage medium | |
CN111881233B (en) | Distributed point cloud map construction method and device, server and computer readable storage medium | |
CN106530331A (en) | Video monitoring system and method | |
CN115049731B (en) | Visual image construction and positioning method based on binocular camera | |
CN112380461A (en) | Pedestrian retrieval method based on GPS track | |
CN115272949A (en) | Pedestrian tracking method and system based on geographic spatial information | |
CN108765954B (en) | Road traffic safety condition monitoring method based on SNN density ST-OPTIC improved clustering algorithm | |
CN110781797B (en) | Labeling method and device and electronic equipment | |
CN109800685A (en) | The determination method and device of object in a kind of video | |
CN114677627A (en) | Target clue finding method, device, equipment and medium | |
CN112819859B (en) | Multi-target tracking method and device applied to intelligent security | |
CN114090909A (en) | Graph code joint detection correlation method and device, computer equipment and storage medium | |
CN112925948A (en) | Video processing method and device, medium, chip and electronic equipment thereof | |
CN114219938A (en) | Region-of-interest acquisition method | |
CN111651690A (en) | Case-related information searching method and device and computer equipment | |
CN116935305B (en) | Intelligent security monitoring method, system, electronic equipment and storage medium | |
Kim | Lifelong Learning Architecture of Video Surveillance System | |
CN110781796B (en) | Labeling method and device and electronic equipment | |
CN111598053B (en) | Image data processing method and device, medium and system thereof | |
CN101539988A (en) | Internet and GPRS-based intelligent video monitoring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination ||