CN111415374A - KVM system and method for monitoring and managing scenic spot pedestrian flow
- Publication number
- CN111415374A · CN202010024838.XA
- Authority
- CN
- China
- Prior art keywords
- target
- image
- frame
- moving target
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
- G06F3/0383—Signal control means within the pointing device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a KVM system and method for monitoring and managing scenic spot pedestrian flow, applied to an electronic device. The method obtains moving targets with a three-frame difference method, and performs moving target segmentation, tracking and counting with a method combining Kalman filter tracking and the minimum Euclidean distance. The system comprises a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable on the processor, the program carrying out the method. Because the FPGA chip processes the video images in parallel, processing speed is high and latency is low, so changes in scenic spot pedestrian flow can be detected in real time; in addition, by combining computer vision with KVM technology, the invention effectively extends the data transmission distance and makes it convenient to monitor and manage scenic spot pedestrian flow.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a KVM (keyboard, video and mouse) system and method for monitoring and managing scenic spot pedestrian flow.
Background
With rising living standards and the continual advance of computer technology, people counting has become an important task for managers and decision makers in large public places such as shopping malls, stations and tourist attractions; for tourist attractions, visitor numbers are an important part of scenic spot revenue. In the past, counting was done manually or with manually triggered electronic counters, an approach that clearly no longer suits the age of information explosion. Many automatic counting methods have since appeared, such as thermal imaging counting and infrared counting, but infrared counting is easily disturbed by external factors: when many people pass at once, counts are missed, and the instrument's own wave band can also affect the result.
To improve the accuracy of pedestrian flow statistics, computer vision technology is introduced into the system: high-definition video acquisition equipment and intelligent digital image algorithms are used to accurately capture, locate, track and count moving targets.
Disclosure of Invention
The invention aims to solve the above problems of the prior art, and provides a KVM system and method for monitoring and managing scenic spot pedestrian flow.
In order to achieve the above object, the present invention provides a KVM method for monitoring and managing scenic spot pedestrian flow, applied to an electronic device, the method comprising:
S1: performing moving target detection on the collected video images by a three-frame difference method;
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving target detection uses a three-frame difference method comprising the following steps:
S101: let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y);
S102: subtract f_{k-1}(x, y) from f_k(x, y), and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|;
S103: AND the two difference images D_k(x, y) and D_{k+1}(x, y) to obtain a fused image;
S104: perform threshold segmentation with the maximum inter-class variance (Otsu) method to obtain a binary image;
S105: perform morphological processing with a dilation operation, so that a clear foreground moving target image Rn is obtained.
Specifically, in the moving target segmentation of step S2, segmentation is performed if the extracted moving target regions adhere to each other, and is not needed if a single moving target is determined; the target segmentation step includes:
S201: circumscribed rectangle of the connected domain: first draw the circumscribed rectangle of the human body contour, then scan the rectangle to obtain its four corner coordinates X1n, X2n, Y1n and Y2n, where n is the serial number of the adhered moving target rectangle;
S202: obtain the length and width of the circumscribed rectangle of the moving target from the formula D1 = |X2n - X1n|, D2 = |Y2n - Y1n|; set the length threshold of the circumscribed rectangle as A, the width threshold as B and the aspect-ratio threshold as C; if the length, width and aspect ratio of the circumscribed rectangle are within the set threshold ranges, a single moving target is determined, otherwise more than one moving target is present and the circumscribed rectangle needs to be segmented;
S203: using the vertical segmentation method, calculate the area of each circumscribed rectangle, denoted m00, and its two first-order moments, denoted m10 and m01;
S204: calculate the centroid coordinates by the centroid formula x̄ = m10/m00, ȳ = m01/m00; finally, take the mean of the abscissas between adjacent centroids as the division point and draw a vertical segmentation boundary.
Further, the target tracking in step S4 includes:
S301: selecting target feature parameters;
S302: establishing and initializing the system model;
S303: target prediction and matching: based on the center point of the current moving target, predicting the position of the center point of the moving target in the next frame by Kalman filtering; when the next frame of image arrives, setting a pre-search area of radius R around the predicted center point, and searching within this range by the minimum Euclidean distance to obtain the best matching object;
S304: target updating: taking the best matching object as the new initial position, updating the target information, and repeating the operation until it is finished.
Preferably, the selection of target feature parameters in step S301 includes:
S401: first, extracting the edge features within each rectangular frame using the Canny edge detection algorithm;
S402: next, detecting a circle in the upper half of the rectangular frame using the Hough transform and taking it as the head of the moving target;
S403: marking the detected head of the moving target with its minimum circumscribed rectangle and scanning the four corner coordinates of the circumscribed rectangle as X1n, X2n, Y1n and Y2n, where n is the serial number of the moving target head; calculating the length and width of the target head window by the formula D1 = |X2n - X1n|, D2 = |Y2n - Y1n|, and obtaining the window center point (X, Y) from X = (X1n + X2n)/2, Y = (Y1n + Y2n)/2;
S404: assuming the gray value of a point in the circumscribed rectangle is f(x, y) and the total number of pixels in the head region is N, the mean pixel value of the head region is w = (1/N) Σ f(x, y).
Specifically, the system model establishment and initialization in step S302 includes:
S501: since the time difference between adjacent frames is very small, the speed v of the moving target changes very little, so the moving target is assumed to move at constant speed; the window center point (X, Y), the pixel mean w and the speed v are selected as the state variables;
S502: calculating the transition matrix and the measurement matrix of the state variables;
S503: initializing the covariance matrix, setting the position of the moving target in the first frame as the initial position, and setting the initial speed to zero.
Based on the above technical solution, the present invention further provides a KVM system for monitoring and managing scenic spot pedestrian flow. The system comprises a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable on the processor; the FPGA chip is of model XC7A100T-2FGG484I, and the following steps are implemented when the program is executed:
S1: performing moving target detection on the collected video images by a three-frame difference method;
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving target detection uses a three-frame difference method comprising the following steps:
S101: let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y);
S102: subtract f_{k-1}(x, y) from f_k(x, y), and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|;
S103: AND the two difference images D_k(x, y) and D_{k+1}(x, y) to obtain a fused image;
S104: perform threshold segmentation with the maximum inter-class variance (Otsu) method to obtain a binary image;
S105: perform morphological processing with a dilation operation, so that a clear foreground moving target image Rn is obtained.
By adopting the above technical solution, the invention has the following beneficial effects:
(1) The invention uses an FPGA chip to process the video images in parallel, giving high processing speed and low latency, so that changes in scenic spot pedestrian flow can be detected in real time.
(2) A vertically mounted (overhead) camera is used to alleviate crowding and mutual occlusion between moving targets, and the moving target judgment and tracking algorithm is built around the circular feature of the tourist's head, improving the accuracy of moving target detection.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the present disclosure taken in conjunction with the accompanying drawings, in which
FIG. 1 is a schematic diagram of the FPGA internal video image processing operation of the present invention;
FIG. 2 is a process diagram of the three-frame difference method of the present invention;
FIG. 3 is a schematic diagram of the three-frame difference experiment of the present invention;
FIG. 4 is a flow chart of moving target segmentation in accordance with the present invention;
FIG. 5 is a schematic diagram of the moving target segmentation experiment of the present invention;
FIG. 6 is a flow chart of moving target tracking of the present invention;
FIG. 7 is a schematic diagram of the moving target tracking experiment of the present invention;
FIG. 8 is a view of the counting of visitors in a simulated scenic spot according to the present invention.
Detailed Description
The foregoing is only an overview of the technical solutions of the present invention. So that the technical means of the present invention may be understood more clearly and implemented in accordance with this description, and so that the above and other objects, features and advantages of the present invention may be more readily apparent, a detailed description of the present invention is given below.
The technical solutions of the present invention are described in detail below with reference to the drawings and the specific embodiments, and it should be understood that the specific features in the embodiments and the examples are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and the examples of the present application may be combined with each other without conflict.
The term "and/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
(Embodiment 1)
A KVM method for monitoring and managing scenic spot pedestrian flow is applied to an electronic device; in this embodiment, the electronic device is a Field Programmable Gate Array (FPGA) chip.
S1: performing moving target detection on the collected video images by a three-frame difference method;
As shown in FIG. 2, first let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y). Then subtract f_{k-1}(x, y) from f_k(x, y) and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where, for example, D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|. The two difference images D_k(x, y) and D_{k+1}(x, y) are then ANDed to obtain a fused image. Next, threshold segmentation is performed with the maximum inter-class variance (Otsu) method to obtain a binary image, and finally morphological processing with a dilation operation yields a clear foreground moving target image Rn.
As shown in FIG. 3, a schematic diagram of the three-frame difference experiment, the (k-1)th, kth and (k+1)th frame images are processed by the three-frame difference algorithm to finally obtain the moving target image, laying the foundation for subsequent moving target tracking and counting.
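Purely as an illustration of the three-frame difference described above (not the patent's FPGA implementation), a minimal OpenCV sketch might look as follows; the structuring-element size and threshold settings are assumptions.

```python
import cv2

def three_frame_difference(f_prev, f_cur, f_next):
    """Three-frame difference on grayscale frames: |f_k - f_{k-1}| AND
    |f_{k+1} - f_k|, Otsu (maximum inter-class variance) thresholding,
    then dilation to obtain the foreground moving target image Rn."""
    d_k = cv2.absdiff(f_cur, f_prev)
    d_k1 = cv2.absdiff(f_next, f_cur)
    fused = cv2.bitwise_and(d_k, d_k1)            # AND of the two difference images
    _, binary = cv2.threshold(fused, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.dilate(binary, kernel, iterations=1)
```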
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
As shown in FIG. 4, the flow chart of moving target segmentation: if the extracted moving target regions adhere to each other, segmentation is performed; if a single moving target is determined, segmentation is not needed. The segmentation step comprises obtaining the circumscribed rectangle of the connected domain, computing the division points of the adhered crowd blob, and segmenting. Circumscribed rectangle of the connected domain: first draw the circumscribed rectangle of the human body contour and scan it to obtain the four corner coordinates X1n, X2n, Y1n and Y2n, where n is the serial number of the adhered moving target rectangle; the length and width of the circumscribed rectangle of the moving target are obtained from the formula D1 = |X2n - X1n|, D2 = |Y2n - Y1n|. Set the length threshold of the circumscribed rectangle as A, the width threshold as B and the aspect-ratio threshold as C; if the length, width and aspect ratio of the circumscribed rectangle are within the set threshold ranges, a single moving target is determined, and if not, more than one moving target is present and the circumscribed rectangle must be segmented. In addition, because the camera shoots vertically (overhead), the rectangle length will not exceed its threshold through front-to-back occlusion between tourists, so the vertical segmentation method is chosen. Division point statistics based on the circumscribed rectangle of the connected domain: first calculate the area of each circumscribed rectangle, denoted m00, and its two first-order moments, denoted m10 and m01; next calculate the centroid coordinates by the centroid formula x̄ = m10/m00, ȳ = m01/m00; finally, take the mean of the abscissas between adjacent centroids as the division point and draw a vertical segmentation boundary.
FIG. 5 is a schematic diagram of the moving target segmentation experiment: the left side shows a case of two adhered targets and a case of three adhered targets, and the right side shows the result after segmentation, from which the number of moving targets can be clearly observed.
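A rough sketch of this segmentation logic follows, for illustration only: the threshold values, the way per-person sub-masks are supplied, and all parameter names below are assumptions rather than details from the patent.

```python
import cv2

def is_single_target(box, len_thresh, wid_thresh, ratio_range):
    """Single-target test using the length threshold A, width threshold B and
    aspect-ratio range C mentioned in the text (values must be tuned)."""
    x, y, w, h = box
    ratio = h / float(w)
    return (h <= len_thresh and w <= wid_thresh
            and ratio_range[0] <= ratio <= ratio_range[1])

def vertical_split_points(sub_masks):
    """Centroids from image moments (m00, m10, m01); split points are the
    mean abscissas between adjacent centroids."""
    xs = []
    for m in sub_masks:                       # one binary mask per candidate person
        mom = cv2.moments(m, binaryImage=True)
        if mom["m00"] > 0:
            xs.append(mom["m10"] / mom["m00"])
    xs.sort()
    return [(xs[i] + xs[i + 1]) / 2.0 for i in range(len(xs) - 1)]
```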
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
As shown in FIG. 6, the moving target tracking flow chart, a tracking algorithm combining Kalman filter prediction with the minimum Euclidean distance is adopted. The algorithm is divided into four steps: first, select the target feature parameters; second, establish and initialize the system model from them; third, predict and match the target; fourth, update the target if the prediction matches a detection, or change the count value if the predicted target leaves the monitored area. The actual operation is as follows:
Selecting the target feature parameters: after the moving targets have been segmented, first extract the edge features within each rectangular frame using the Canny edge detection algorithm; next, detect a circle in the upper half of the rectangular frame using the Hough transform and take it as the head of the moving target; then mark the detected head with its minimum circumscribed rectangle and scan the four corner coordinates of that rectangle as X1n, X2n, Y1n and Y2n, where n is the serial number of the moving target head. The length and width of the target head window are calculated by the formula DX = |X2n - X1n|, DY = |Y2n - Y1n|, and the window center point (X, Y) is obtained from X = (X1n + X2n)/2, Y = (Y1n + Y2n)/2. Finally, assuming the gray value of a point in the circumscribed rectangle is f(x, y) and the total number of pixels in the head region is N, the mean pixel value of the head region is w = (1/N) Σ f(x, y).
Establishing and initializing the system model: since the time difference between adjacent frames is very small, the speed v of the moving target changes very little, so the moving target is assumed to move at constant speed; the window center point (X, Y), the pixel mean w and the speed v are selected as the state variables; then the transition matrix and the measurement matrix of the state variables are calculated; finally the covariance matrix is initialized, the position of the moving target in the first frame is set as the initial position, and the initial speed is set to zero.
Target prediction and matching: based on the center point of the current moving target, predict the position of the center point of the moving target in the next frame by Kalman filtering; when the next frame of image arrives, set a pre-search area of radius R around the predicted center point, and search within this range by the minimum Euclidean distance to obtain the best matching object.
Target updating: take the best matching object as the new initial position, update the target information, and repeat the operation until it is finished.
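The head-feature extraction described in the first step above (Canny edges, then a Hough circle in the upper half of the box) might be coded roughly as below; every detector parameter here is an assumption chosen for illustration, not a value from the patent.

```python
import cv2

def head_feature(gray, box):
    """Locate the head circle in a target's box and return the head-window
    center (X, Y) and the mean gray value w of the head region."""
    x, y, w, h = box
    roi = gray[y:y + h // 2, x:x + w]                  # upper half of the target
    edges = cv2.Canny(roi, 50, 150)
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=max(w, 1), param1=100, param2=20,
                               minRadius=4, maxRadius=max(w // 2, 5))
    if circles is None:
        return None
    cx, cy, r = circles[0][0]                          # strongest circle = head
    x1, y1 = int(x + cx - r), int(y + cy - r)          # minimum circumscribed rect
    x2, y2 = int(x + cx + r), int(y + cy + r)
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    head = gray[max(y1, 0):y2, max(x1, 0):x2]
    mean_w = float(head.mean()) if head.size else 0.0  # pixel mean of head region
    return center, mean_w
```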
FIG. 7 is a schematic diagram of the moving target tracking experiment, showing the kth, (k+3)th, (k+6)th and (k+12)th frame images; the tracking algorithm combining Kalman filter prediction with the minimum Euclidean distance tracks the moving target effectively.
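A minimal sketch of the prediction-and-matching machinery used in this tracking step is given below. It assumes a planar constant-velocity state [X, Y, vx, vy] (the pixel mean w is kept aside as an appearance feature), and the noise settings and search radius are illustrative assumptions, not the patent's exact model.

```python
import numpy as np

class CenterKalman:
    """Constant-velocity Kalman filter for a tracking-window center."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # initial speed set to zero
        self.P = np.eye(4) * 10.0                      # initial covariance
        self.F = np.array([[1, 0, dt, 0],              # state transition matrix
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],               # measurement matrix
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 1e-2                      # process noise (assumed)
        self.R = np.eye(2) * 1.0                       # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                              # predicted center point

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def predict_match_update(tracks, detections, radius=40.0):
    """For each track: predict the next center, search detections within the
    pre-search radius R, take the nearest one (minimum Euclidean distance) as
    the best match, and update the track with it."""
    unmatched = list(range(len(detections)))
    predictions = []
    for track in tracks:
        pred = track.predict()
        best, best_d = None, radius
        for i in unmatched:
            d = float(np.linalg.norm(np.asarray(detections[i]) - pred))
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            track.update(detections[best])             # target updating
            unmatched.remove(best)
        predictions.append(pred)
    return predictions, unmatched                      # unmatched: candidate new targets
```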
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, the count value is changed, thereby counting the moving targets.
As shown in FIG. 8, a view simulating the counting of tourists in a scenic spot, the shooting area of the camera extends from the handrails on both sides to the ticket gate, i.e. the area marked by the dotted oval in the figure. Tourists are assumed to stand in the undetected area A before entering the park; when a tourist walks into detection area B and thus into counting range, the video image processing algorithm starts to work. According to the moving target tracking algorithm, the first position at which the moving target enters detection area B is taken as its initial position, and its position is detected and updated in real time; when Kalman filtering predicts that the position of the moving target in the next frame will exceed the detection area, the moving target has reached area C, and the number of people entering the scenic spot is changed.
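A tiny sketch of this counting rule, assuming for illustration that detection area B can be represented by a rectangle:

```python
def count_entries(predicted_centers, count, region_b):
    """If a tracked target's Kalman-predicted center for the next frame falls
    outside detection area B (i.e. it has passed on into area C), the count
    of people entering the scenic spot changes."""
    x_min, y_min, x_max, y_max = region_b
    for (px, py) in predicted_centers:
        if not (x_min <= px <= x_max and y_min <= py <= y_max):
            count += 1                                 # one more visitor entered
    return count
```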
(Embodiment 2)
Based on the same inventive concept as the KVM method for monitoring and managing scenic spot pedestrian flow, the present invention also provides a KVM system for monitoring and managing scenic spot pedestrian flow.
Specifically, the system comprises a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable on the processor; the FPGA chip is of model XC7A100T-2FGG484I.
As shown in FIG. 1, taking scenic spot entrance 1 as an example, three cameras each collect one of three video streams and transmit them to the FPGA chip; one side of the FPGA chip loops the video out locally, while the other side performs the video image algorithm processing. First, a FIFO memory buffers the received video signals, and the local loop-out passes through a buffer so that output signal quality is preserved. Under timing control, each frame of image is converted into a gray-scale image by the RGB-to-YCbCr module (a fixed-point sketch of this conversion is given after the step list below), then written to an address in the DDR3 memory through the memory controller. The data are then read back out through the memory controller for moving target detection, moving target segmentation, moving target tracking and moving target counting, and finally the processing result is sent over the SPI bus by the SPI interface controller to the HiSilicon transmitter module for compression coding. When the program is executed, the following steps are specifically realized:
S1: performing moving target detection on the collected video images by a three-frame difference method;
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving target detection uses a three-frame difference method comprising the following steps:
S101: let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y);
S102: subtract f_{k-1}(x, y) from f_k(x, y), and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|;
S103: AND the two difference images D_k(x, y) and D_{k+1}(x, y) to obtain a fused image;
S104: perform threshold segmentation with the maximum inter-class variance (Otsu) method to obtain a binary image;
S105: perform morphological processing with a dilation operation, so that a clear foreground moving target image Rn is obtained.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. A KVM method for monitoring and managing scenic spot pedestrian flow, applied to an electronic device, characterized in that the method comprises the following steps:
S1: performing moving target detection on the collected video images by a three-frame difference method;
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving target detection uses a three-frame difference method comprising the following steps:
S101: let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y);
S102: subtract f_{k-1}(x, y) from f_k(x, y), and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|;
S103: AND the two difference images D_k(x, y) and D_{k+1}(x, y) to obtain a fused image;
S104: perform threshold segmentation with the maximum inter-class variance (Otsu) method to obtain a binary image;
S105: perform morphological processing with a dilation operation, so that a clear foreground moving target image Rn is obtained.
2. The KVM method for scenic spot pedestrian flow monitoring and management according to claim 1, wherein in the moving target segmentation of step S2, segmentation is performed if the extracted moving target regions adhere to each other and is not needed if a single moving target is determined, the target segmentation step comprising:
S201: circumscribed rectangle of the connected domain: first draw the circumscribed rectangle of the human body contour, then scan the rectangle to obtain its four corner coordinates X1n, X2n, Y1n and Y2n, where n is the serial number of the adhered moving target rectangle;
S202: obtain the length and width of the circumscribed rectangle of the moving target from the formula D1 = |X2n - X1n|, D2 = |Y2n - Y1n|; set the length threshold of the circumscribed rectangle as A, the width threshold as B and the aspect-ratio threshold as C; if the length, width and aspect ratio of the circumscribed rectangle are within the set threshold ranges, a single moving target is determined, otherwise more than one moving target is present and the circumscribed rectangle needs to be segmented;
S203: using the vertical segmentation method, calculate the area of each circumscribed rectangle, denoted m00, and its two first-order moments, denoted m10 and m01;
S204: calculate the centroid coordinates by the centroid formula x̄ = m10/m00, ȳ = m01/m00, where x̄ and ȳ are the abscissa and ordinate of the centroid and the number of centroids represents the number of tourist targets; finally, take the mean of the abscissas between adjacent centroids as the division point and draw a vertical segmentation boundary.
3. The KVM method for scenic spot pedestrian flow monitoring and management according to claim 1, wherein the target tracking in step S4 comprises:
S301: selecting target feature parameters;
S302: establishing and initializing the system model;
S303: target prediction and matching: based on the center point of the current moving target, predicting the position of the center point of the moving target in the next frame by Kalman filtering; when the next frame of image arrives, setting a pre-search area of radius R around the predicted center point, and searching within this range by the minimum Euclidean distance to obtain the best matching object;
S304: target updating: taking the best matching object as the new initial position, updating the target information, and repeating the operation until it is finished.
4. The KVM method for scenic spot pedestrian flow monitoring and management according to claim 3, wherein the selection of target feature parameters in step S301 comprises:
S401: first, extracting the edge features within each rectangular frame using the Canny edge detection algorithm;
S402: next, detecting a circle in the upper half of the rectangular frame using the Hough transform and taking it as the head of the moving target;
S403: marking the detected head of the moving target with its minimum circumscribed rectangle and scanning the four corner coordinates of the circumscribed rectangle as X1n, X2n, Y1n and Y2n, where n is the serial number of the moving target head; calculating the length and width of the target head window by the formula D1 = |X2n - X1n|, D2 = |Y2n - Y1n|, and obtaining the window center point (X, Y) from X = (X1n + X2n)/2, Y = (Y1n + Y2n)/2.
5. The KVM method for scenic spot pedestrian flow monitoring and management according to claim 3, wherein the system model establishment and initialization in step S302 comprises:
S501: since the time difference between adjacent frames is very small, the speed v of the moving target changes very little, so the moving target is assumed to move at constant speed; the window center point (X, Y), the pixel mean w and the speed v are selected as the state variables;
S502: calculating the transition matrix and the measurement matrix of the state variables;
S503: initializing the covariance matrix, setting the position of the moving target in the first frame as the initial position, and setting the initial speed to zero.
6. A KVM system for scenic spot pedestrian flow monitoring and management, comprising a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable thereon, wherein the FPGA chip is of model XC7A100T-2FGG484I, and the program, when executed, implements the following steps:
S1: performing moving target detection on the collected video images by a three-frame difference method;
S2: performing moving target segmentation on the video images on which moving target detection has been completed;
S3: taking the head area of a tourist as the target feature, and setting the circumscribed rectangle of the head area as the tracking window;
S4: predicting the area of the tracking window in the next frame by Kalman filtering, and finding the best matching object within the predicted area by the minimum Euclidean distance, thereby realizing target tracking;
S5: before the next frame of image arrives, if the predicted area of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving target detection uses a three-frame difference method comprising the following steps:
S101: let f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) denote the pixel gray values of the (k-1)th, kth and (k+1)th frame images at point (x, y);
S102: subtract f_{k-1}(x, y) from f_k(x, y), and f_k(x, y) from f_{k+1}(x, y); the difference images are recorded as D_k(x, y) and D_{k+1}(x, y), where D_k(x, y) is obtained from D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|;
S103: AND the two difference images D_k(x, y) and D_{k+1}(x, y) to obtain a fused image;
S104: perform threshold segmentation with the maximum inter-class variance (Otsu) method to obtain a binary image;
S105: perform morphological processing with a dilation operation, so that a clear foreground moving target image Rn is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010024838.XA CN111415374A (en) | 2020-01-10 | 2020-01-10 | KVM system and method for monitoring and managing scenic spot pedestrian flow |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111415374A true CN111415374A (en) | 2020-07-14 |
Family
ID=71493970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010024838.XA Pending CN111415374A (en) | 2020-01-10 | 2020-01-10 | KVM system and method for monitoring and managing scenic spot pedestrian flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111415374A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112102362A (en) * | 2020-09-14 | 2020-12-18 | 北京数衍科技有限公司 | Pedestrian step track determination method and device |
CN112381975A (en) * | 2020-11-16 | 2021-02-19 | 成都中科大旗软件股份有限公司 | Scenic spot scheduling system and scheduling method based on 5G |
CN112837337A (en) * | 2021-02-04 | 2021-05-25 | 成都国翼电子技术有限公司 | Method and device for identifying connected region of massive pixel blocks based on FPGA |
CN112837337B (en) * | 2021-02-04 | 2022-08-12 | 成都国翼电子技术有限公司 | Method and device for identifying connected region of massive pixel blocks based on FPGA |
CN115119253A (en) * | 2022-08-30 | 2022-09-27 | 北京东方国信科技股份有限公司 | Method, device and equipment for monitoring regional pedestrian flow and determining monitoring parameters |
CN115119253B (en) * | 2022-08-30 | 2022-11-18 | 北京东方国信科技股份有限公司 | Method, device and equipment for monitoring regional pedestrian flow and determining monitoring parameters |
CN117132948A (en) * | 2023-10-27 | 2023-11-28 | 南昌理工学院 | Scenic spot tourist flow monitoring method, system, readable storage medium and computer |
CN117132948B (en) * | 2023-10-27 | 2024-01-30 | 南昌理工学院 | Scenic spot tourist flow monitoring method, system, readable storage medium and computer |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111415374A (en) | KVM system and method for monitoring and managing scenic spot pedestrian flow | |
CN110175576B (en) | Driving vehicle visual detection method combining laser point cloud data | |
Sidla et al. | Pedestrian detection and tracking for counting applications in crowded situations | |
CN109977782B (en) | Cross-store operation behavior detection method based on target position information reasoning | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
CN104217428B (en) | A kind of fusion feature matching and the video monitoring multi-object tracking method of data correlation | |
CN103279791B (en) | Based on pedestrian's computing method of multiple features | |
CN105046206B (en) | Based on the pedestrian detection method and device for moving prior information in video | |
CN109033972A (en) | A kind of object detection method, device, equipment and storage medium | |
CN103413444A (en) | Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video | |
CN112947419B (en) | Obstacle avoidance method, device and equipment | |
CN102609724B (en) | Method for prompting ambient environment information by using two cameras | |
CN109685827B (en) | Target detection and tracking method based on DSP | |
CN105160649A (en) | Multi-target tracking method and system based on kernel function unsupervised clustering | |
CN110781806A (en) | Pedestrian detection tracking method based on YOLO | |
CN109711256B (en) | Low-altitude complex background unmanned aerial vehicle target detection method | |
CN111008994A (en) | Moving target real-time detection and tracking system and method based on MPSoC | |
CN117949942B (en) | Target tracking method and system based on fusion of radar data and video data | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
CN113066129A (en) | Visual positioning and mapping system based on target detection in dynamic environment | |
CN104599291B (en) | Infrared motion target detection method based on structural similarity and significance analysis | |
CN116758421A (en) | Remote sensing image directed target detection method based on weak supervised learning | |
CN110443142A (en) | A kind of deep learning vehicle count method extracted based on road surface with segmentation | |
CN115308732A (en) | Multi-target detection and tracking method integrating millimeter wave radar and depth vision | |
CN108471497A (en) | A kind of ship target real-time detection method based on monopod video camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |