CN111415374A - KVM system and method for monitoring and managing scenic spot pedestrian flow - Google Patents
- Publication number
- CN111415374A (application CN202010024838.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- area
- image
- moving target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
- G06F3/0383—Signal control means within the pointing device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Abstract
The invention discloses a KVM system and method for monitoring and managing visitor flow in scenic spots, applied to an electronic device. The method comprises acquiring moving targets with the three-frame difference method, and performing moving-target segmentation, moving-target tracking and moving-target counting by a method combining Kalman tracking with the minimum Euclidean distance. The system comprises an FPGA chip serving as processor and a computer program that is stored on the FPGA chip, is executable on the processor, and carries out the method. The invention uses the FPGA chip to process video images in parallel, which is not only fast but also low-latency, ensuring that changes in the visitor flow of a scenic spot can be detected in real time. In addition, the invention combines computer vision technology with KVM technology, effectively extending the data transmission distance and facilitating the monitoring and management of visitor flow in scenic spots.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular to a KVM method for monitoring and managing visitor flow in scenic spots.
Background Art
With the continuous improvement of living standards and of computer technology, people counting has become an important task for managers and decision-makers in large public places such as shopping malls, stations and tourist attractions; for tourist attractions in particular, the number of visitors is a vital part of a scenic spot's income. In the past, counting was done manually or triggered by manually operated electronic devices; clearly such methods can no longer keep up with the era of the information explosion. Many automatic counting methods now exist, such as thermal-imaging counting and infrared counting, but infrared counting is easily disturbed by external factors: when several people pass at once some are missed, and the instrument's waveband also affects the result.
To improve the accuracy of visitor-flow statistics, computer vision technology has been introduced: it uses high-definition video acquisition equipment and intelligent digital image algorithms to accurately photograph, locate, track and count moving targets. The present invention combines computer vision with KVM technology, effectively extending the data transmission distance and facilitating the monitoring and management of visitor flow in scenic spots.
Summary of the Invention
The technical problem to be solved by the present invention is to remedy the deficiencies of the prior art. The present invention provides a KVM device and method for monitoring and managing visitor flow in scenic spots: a camera and a microphone collect video and audio signals respectively, an FPGA module performs the video signal processing, a HiSilicon module compresses, encodes and decodes all data, and finally the KVM device enables a remote host computer to control different local video and audio sources, solving the current lack of accurate visitor-flow statistics and of effective scenic-spot management.
In order to achieve the aforementioned purpose, the technical solution of the present invention is a KVM method for monitoring and managing visitor flow in scenic spots, applied to an electronic device, the method comprising:
S1: detecting moving targets in the captured video images with the three-frame difference method;
S2: performing moving-target segmentation on the video images in which moving targets have been detected;
S3: taking the visitor's head region as the target feature, and setting the circumscribed rectangle of the head region as the tracking window;
S4: predicting the region of the tracking window in the next frame with Kalman filtering, then finding the best matching object in the predicted region with the minimum Euclidean distance, thereby achieving target tracking;
S5: when, before the next frame arrives, the predicted region of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving-target detection uses the three-frame difference method and comprises the following steps:
S101: let fK-1(x,y), fK(x,y) and fK+1(x,y) denote the grey level of the pixel at point (x,y) in the (k-1)-th, k-th and (k+1)-th frame images;
S102: take the difference between fK(x,y) and fK-1(x,y), and between fK+1(x,y) and fK(x,y); the difference images are denoted DK(x,y) and DK+1(x,y) and are obtained by DK(x,y) = |fK(x,y) - fK-1(x,y)| and DK+1(x,y) = |fK+1(x,y) - fK(x,y)|;
S103: perform an AND operation on the difference images DK(x,y) and DK+1(x,y) to obtain their intersection image;
S104: apply threshold segmentation with the maximum between-class variance (Otsu) method to obtain a binary image;
S105: apply a dilation operation for morphological processing, which yields a comparatively clean foreground moving-target image Rn.
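Steps S101 to S105 can be sketched in NumPy. This is a hypothetical illustration, not the patent's FPGA implementation: a fixed threshold stands in for the Otsu step, and a plain square structuring element is assumed for the dilation.

```python
import numpy as np

def three_frame_diff(f_prev, f_cur, f_next, thresh=25):
    """Three-frame difference (S101-S104): AND the two absolute
    difference images, then threshold to a binary foreground mask."""
    d_k = np.abs(f_cur.astype(int) - f_prev.astype(int))    # D_K
    d_k1 = np.abs(f_next.astype(int) - f_cur.astype(int))   # D_{K+1}
    # AND operation on the two thresholded difference images
    return ((d_k > thresh) & (d_k1 > thresh)).astype(np.uint8)

def dilate(mask, r=1):
    """S105: square dilation of radius r as the morphological clean-up."""
    h, w = mask.shape
    padded = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out
```

With three synthetic frames of a bright square moving left to right, the AND of the two difference images isolates the object's position in the middle frame, which is exactly why the three-frame variant suppresses the ghosting of plain two-frame differencing.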
Specifically, the moving-target segmentation in step S2 is characterized in that if the extracted moving-target regions adhere to one another they are segmented, while a region judged to be a single moving target needs no segmentation. The target segmentation comprises:
S201: circumscribed rectangle of the connected domain: first draw the circumscribed rectangle of the human body's outer contour, then scan the rectangle to obtain its four corner coordinates, denoted X1n, X2n, Y1n, Y2n, where n is the index of the rectangle of adhering moving targets;
S202: by the formulas
D1 = |X2N - X1N|, D2 = |Y2N - Y1N|
obtain the length and width of the moving target's circumscribed rectangle. Let the length threshold of the circumscribed rectangle be A, the width threshold B, and the aspect-ratio threshold C. If the rectangle's length, width and aspect ratio lie within the set threshold ranges, the region is judged to be a single moving target; if not, there is more than one moving target and the circumscribed rectangle must be segmented.
S203: using the vertical segmentation method, compute each circumscribed rectangle's area, denoted m00, and its two first-order moments, denoted m10 and m01;
S204: compute the centroid coordinates with the centroid formulas x = m10/m00 and y = m01/m00, where x and y are the abscissa and ordinate of the centroid; the number of centroids gives the target number of visitors. Finally, take the mean of the abscissas of neighbouring centroids as the marking point and draw the vertical segmentation boundary there.
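Steps S203 and S204 amount to computing image moments, centroids, and midpoints between neighbouring centroids. A minimal NumPy sketch, for illustration only; `mask` is assumed to be the binary blob inside one circumscribed rectangle:

```python
import numpy as np

def moments(mask):
    """Area m00 and first-order moments m10, m01 of a binary region (S203)."""
    ys, xs = np.nonzero(mask)
    return len(xs), int(xs.sum()), int(ys.sum())

def centroid(mask):
    """Centroid (S204): x = m10/m00, y = m01/m00."""
    m00, m10, m01 = moments(mask)
    return m10 / m00, m01 / m00

def vertical_split_lines(centroid_xs):
    """Vertical segmentation boundaries (S204): the mean abscissa of
    each pair of neighbouring centroids."""
    xs = sorted(centroid_xs)
    return [(a + b) / 2 for a, b in zip(xs, xs[1:])]
```

For two adhering visitors the blob yields two centroids, and the single split line halfway between their abscissas separates the rectangle into one window per person.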
Further, the target tracking in step S4 is characterized by comprising:
S301: selecting the target feature parameters;
S302: establishing and initializing the system model;
S303: target prediction and matching: based on the centre point of the current moving target, predict the position of the target's centre point in the next frame with Kalman filtering; when the next frame arrives, set a pre-search region of radius R around the predicted centre point and search within it with the minimum Euclidean distance to obtain the best matching object;
S304: target updating: take the best matching object as the new initial position, update the target information, and repeat the above operations until the end.
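The matching half of S303 reduces to a nearest-neighbour search restricted to the pre-search circle. A hedged sketch; the radius R and the candidate list of detected centre points are assumptions for illustration:

```python
import numpy as np

def best_match(predicted, candidates, radius):
    """S303: pick the candidate centre closest (Euclidean distance) to the
    Kalman-predicted centre, restricted to the pre-search region of radius R.
    Returns None when no candidate falls inside the circle."""
    best, best_d = None, radius
    for c in candidates:
        d = float(np.hypot(c[0] - predicted[0], c[1] - predicted[1]))
        if d <= best_d:
            best, best_d = c, d
    return best
```

Returning None when the circle is empty is what feeds step S5: a track whose prediction finds no match inside the frame is treated as having left the shooting area.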
Preferably, the selection of the target feature parameters in step S301 is characterized by comprising:
S401: first extracting the edge features within each rectangular frame with the Canny edge detection algorithm;
S402: then detecting the circle in the upper half of the rectangular frame with the Hough transform and taking it as the head of the moving target;
S403: fitting the minimum circumscribed rectangle to the detected head of the moving target and scanning the coordinates of its four corners, denoted X1n, X2n, Y1n, Y2n, where n is the index of the moving target's head, then computing the length and width of the target head window by the formulas:
D1 = |X2N - X1N|, D2 = |Y2N - Y1N|
and the window centre point (X, Y) by the formulas:
X = (X1N + X2N)/2, Y = (Y1N + Y2N)/2;
S404: assuming the grey level of a pixel in the circumscribed rectangle is f(x,y) and the total number of pixels in the head region is N, the mean pixel value of the head region is w = (1/N)·Σ f(x,y).
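The corner-coordinate arithmetic of S403 and S404 is straightforward to sketch. The Canny/Hough head detection itself is omitted here, so the corner coordinates and the head patch are assumed to be given:

```python
import numpy as np

def head_window(x1, x2, y1, y2):
    """S403: window length D1, width D2 and centre point (X, Y) from the
    four-corner coordinates of the head's circumscribed rectangle."""
    d1, d2 = abs(x2 - x1), abs(y2 - y1)
    return d1, d2, ((x1 + x2) / 2, (y1 + y2) / 2)

def head_pixel_mean(patch):
    """S404: mean grey level w over the N pixels of the head region."""
    return float(patch.sum()) / patch.size
```

The triple (centre point, pixel mean w, speed) is exactly the state information the Kalman model of S302 is initialized with.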
Specifically, the establishment and initialization of the system model in step S302 is characterized by comprising:
S501: since the time difference between adjacent frames is very small, the speed v of a moving target changes very little, so the moving target is taken to be moving at a uniform speed; the window centre point (X, Y), the pixel mean w and the speed v are selected as state variables;
S502: computing the transition matrix and the measurement matrix of the state variables;
S503: initializing the covariance matrix, taking the moving target's position in the first frame as the initial position, and setting the initial speed to zero.
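The patent's state vector is (X, Y, w, v). The sketch below instead uses the common constant-velocity state (x, y, vx, vy) purely to illustrate what the transition matrix and measurement matrix of S502 look like; it is an assumption for exposition, not the patent's exact model:

```python
import numpy as np

dt = 1.0  # one frame between updates; uniform motion assumed (S501)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # state transition matrix (S502)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # measurement matrix: centre point only

def kalman_predict(x, P, Q):
    """Predict the next state and covariance."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, R):
    """Correct the prediction with the measured centre point z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P
```

S503 corresponds to choosing the initial x (first-frame position, zero speed) and the initial covariance P before the first predict/update cycle.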
Based on the above technical solution, the present invention further provides a KVM system for monitoring and managing visitor flow in scenic spots. The system comprises a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable on the processor, characterized in that the FPGA chip model is XC7A100T-2FGG484I, and the following steps are implemented when the program is executed:
S1: detecting moving targets in the captured video images with the three-frame difference method;
S2: performing moving-target segmentation on the video images in which moving targets have been detected;
S3: taking the visitor's head region as the target feature, and setting the circumscribed rectangle of the head region as the tracking window;
S4: predicting the region of the tracking window in the next frame with Kalman filtering, then finding the best matching object in the predicted region with the minimum Euclidean distance, thereby achieving target tracking;
S5: when, before the next frame arrives, the predicted region of a tracked moving target is no longer within the shooting area, changing the count value, thereby counting the moving targets.
The moving-target detection uses the three-frame difference method and comprises the following steps:
S101: let fK-1(x,y), fK(x,y) and fK+1(x,y) denote the grey level of the pixel at point (x,y) in the (k-1)-th, k-th and (k+1)-th frame images;
S102: take the difference between fK(x,y) and fK-1(x,y), and between fK+1(x,y) and fK(x,y); the difference images are denoted DK(x,y) and DK+1(x,y) and are obtained by DK(x,y) = |fK(x,y) - fK-1(x,y)| and DK+1(x,y) = |fK+1(x,y) - fK(x,y)|;
S103: perform an AND operation on the difference images DK(x,y) and DK+1(x,y) to obtain their intersection image;
S104: apply threshold segmentation with the maximum between-class variance (Otsu) method to obtain a binary image;
S105: apply a dilation operation for morphological processing, which yields a comparatively clean foreground moving-target image Rn.
With the above technical solution adopted, the present invention has the following positive effects:
(1) The present invention uses an FPGA chip to process video images in parallel; processing is not only fast but also low-latency, ensuring that changes in the visitor flow of a scenic spot can be detected in real time.
(2) Shooting vertically with the camera solves the problems of crowding and of moving targets occluding one another's bodies; moreover, the moving-target determination and tracking algorithm centers on the circular feature of visitors' heads, improving the accuracy of moving-target detection.
Description of the Drawings
To make the content of the present invention easier to understand clearly, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the video image processing inside the FPGA of the present invention;
Fig. 2 is a process diagram of the three-frame difference method of the present invention;
Fig. 3 is a schematic diagram of the three-frame difference experiment of the present invention;
Fig. 4 is a flow chart of the moving-target segmentation of the present invention;
Fig. 5 is a schematic diagram of the moving-target segmentation experiment of the present invention;
Fig. 6 is a flow chart of the moving-target tracking of the present invention;
Fig. 7 is a schematic diagram of the moving-target tracking experiment of the present invention;
Fig. 8 is a visitor-count graph for the simulated scenic spot of the present invention.
Detailed Description of the Embodiments
The above description is only an overview of the technical solution of the present invention. To make the technical means of the present invention clearer, so that it can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are given below.
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the embodiments of the present application and the specific features therein are a detailed description of the technical solution of the present application, not a limitation of it; in the absence of conflict, the embodiments of the present application and the technical features therein may be combined with one another.
The term "and/or" herein merely describes an association between associated objects, indicating that three relationships may exist: for example, A and/or B may mean A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
(Embodiment 1)
A KVM method for monitoring and managing visitor flow in scenic spots, applied to an electronic device; in this embodiment, the electronic device is specifically an FPGA chip.
S1: detecting moving targets in the captured video images with the three-frame difference method;
As shown in Fig. 2, first let fK-1(x,y), fK(x,y) and fK+1(x,y) denote the grey level of the pixel at point (x,y) in the (k-1)-th, k-th and (k+1)-th frame images. Then take the difference between fK(x,y) and fK-1(x,y), and between fK+1(x,y) and fK(x,y); the difference images are denoted DK(x,y) and DK+1(x,y), where for example DK(x,y) = |fK(x,y) - fK-1(x,y)|. Next perform an AND operation on the difference images DK(x,y) and DK+1(x,y) to obtain their intersection image; then apply threshold segmentation with the maximum between-class variance (Otsu) method to obtain a binary image; finally apply a dilation operation for morphological processing, which yields a comparatively clean foreground moving-target image Rn.
Fig. 3 is a schematic diagram of the three-frame difference experiment: the (k-1)-th, k-th and (k+1)-th frame images are extracted as operands, and after the three-frame difference algorithm the moving-target image is obtained, laying the foundation for the subsequent tracking and counting of moving targets.
S2: performing moving-target segmentation on the video images in which moving targets have been detected;
Fig. 4 is the flow chart of moving-target segmentation: if the extracted moving-target regions adhere to one another they are segmented; if a region is judged to be a single moving target, no segmentation is needed. Segmentation proceeds in three stages: obtaining the circumscribed rectangle of the connected domain, computing the segmentation-boundary statistics of the adhering crowd blob, and segmenting. Circumscribed rectangle of the connected domain: first draw the circumscribed rectangle of the human body's outer contour and scan it to obtain its four corner coordinates, denoted X1n, X2n, Y1n, Y2n, where n is the index of the rectangle of adhering moving targets. By the formulas D1 = |X2N - X1N| and D2 = |Y2N - Y1N|, obtain the length and width of the moving target's circumscribed rectangle; set the length threshold A, the width threshold B and the aspect-ratio threshold C. If the rectangle's length, width and aspect ratio lie within the set threshold ranges, the region is judged to be a single moving target; otherwise there is more than one moving target and the rectangle must be segmented. In addition, since the camera shoots vertically, front-to-back occlusion between visitors is avoided, and adhesion shows up as the rectangle's length exceeding the threshold, so the vertical segmentation method is chosen. Segmentation-point statistics based on the circumscribed rectangle of the connected domain: first compute each circumscribed rectangle's area, denoted m00, and its two first-order moments, denoted m10 and m01; then compute the centroid coordinates with the centroid formulas x = m10/m00 and y = m01/m00, where x and y are the abscissa and ordinate of the centroid, and the number of centroids gives the target number of visitors; finally, take the mean of the abscissas of neighbouring centroids as the marking point and draw the vertical segmentation boundary there.
Fig. 5 is a schematic diagram of the moving-target segmentation experiment: on the left, two and three persons adhering side by side; on the right, the result after moving-target segmentation, from which the number of moving targets can be clearly observed.
S3:将游客头部区域作为目标特征,其次头部区域的外接矩形设置为 跟踪窗口;S3: Take the tourist head area as the target feature, and set the enclosing rectangle of the second head area as the tracking window;
S4:利用Kalman滤波预测出下一帧跟踪窗口的区域,最后结合最小欧 式距离在预测区域找出最佳匹配对象,实现目标跟踪;S4: Use Kalman filtering to predict the area of the next frame of the tracking window, and finally find the best matching object in the prediction area combined with the minimum Euclidean distance to achieve target tracking;
如图6所示为运动目标跟踪流程图,采用Kalman滤波预测和最小欧式 距离相结合的跟踪算法,算法步骤分为四步:首先选择目标特征参数、其 次系统模型建立及初始化、再次进行目标预测和匹配,若预测的目标与匹 配的目标相同则进行目标更新,若预测的目标超过了监控区域则计数值发 生改变,实际操作步骤如下:Figure 6 shows the flow chart of moving target tracking. A tracking algorithm combining Kalman filter prediction and minimum Euclidean distance is used. The algorithm steps are divided into four steps: first, select the target feature parameters, secondly establish and initialize the system model, and perform target prediction again. If the predicted target is the same as the matching target, the target will be updated. If the predicted target exceeds the monitoring area, the count value will change. The actual operation steps are as follows:
选择目标特征参数:选择目标特征参数:在运动目标分割后,首先利 用Canny边缘检测算法提取每个矩形框中边缘特征,其次利用Hough变换 检测矩形框的上半部分中的圆并将其作为运动目标的头部,然后将检测到 的运动目标头部作最小外接矩形并扫描外接矩形四角上的坐标记为X1n, X2n,Y1n,Y2n,其中n为运动目标头部序号,通过公式DX=|X2N-X1N|, DY=|Y2N-Y1N|,计算目标头部窗口长和宽,并通过公式得出窗 口中心点(X,Y),最后假设外接矩形中某点像素为f(x,y),头部区域像素总 数为N,则头部区域像素均值为 Select target feature parameters: Select target feature parameters: After the moving target is segmented, first use the Canny edge detection algorithm to extract the edge features of each rectangular box, and then use the Hough transform to detect the circle in the upper half of the rectangular box and use it as a motion The head of the target, then make the detected moving target head as the smallest circumscribed rectangle and scan the coordinates on the four corners of the circumscribed rectangle as X1n, X2n, Y1n, Y2n, where n is the moving target head serial number, through the formula D X = |X 2N -X 1N |, D Y = |Y 2N -Y 1N |, calculate the length and width of the target head window, and use the formula Obtain the center point (X, Y) of the window, and finally assume that a certain point pixel in the circumscribed rectangle is f(x, y), and the total number of pixels in the head area is N, then the average pixel value of the head area is
System model establishment and initialization: because the time difference between adjacent frames is very small, the velocity v of a moving target changes very little, so the target is assumed to move at constant velocity. The window center (X, Y), the mean pixel value w, and the velocity v are chosen as the state variables; the state-transition matrix and measurement matrix are then computed; finally, the covariance matrix is initialized, the target position in the first frame is taken as the initial position, and the initial velocity is set to zero;
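A minimal sketch of the model setup follows. For simplicity it uses the common planar constant-velocity state [x, y, vx, vy] rather than the patent's exact state vector (which also carries the pixel mean w); the matrices correspond to the transition and measurement matrices described above, and the first-frame velocity is zero as specified.

```python
import numpy as np

def init_kalman(x0, y0, dt=1.0):
    # State [x, y, vx, vy]; first-frame position is the initial position,
    # initial velocity is zero, per the initialization step above.
    state = np.array([x0, y0, 0.0, 0.0])
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity transition matrix
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # measurement matrix: we observe (x, y)
    P = np.eye(4)                                # initial covariance matrix
    return state, F, H, P
```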
Target prediction and matching: based on the current target's center point, Kalman filtering predicts the position of the center point in the next frame. When the next frame arrives, a pre-search area of radius R is set around the predicted center, and the best matching object is found by searching within this area for the minimum Euclidean distance;
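The search-and-match step can be written as a small helper: restrict the candidate centers to the pre-search circle of radius R around the Kalman prediction, then pick the nearest one. The function name and return convention (index, or None when nothing falls inside the circle) are illustrative.

```python
import numpy as np

def match_in_search_area(predicted, candidates, radius):
    """Return the index of the candidate center closest (Euclidean)
    to the predicted center, considering only candidates within the
    pre-search radius; None if no candidate lies inside."""
    pred = np.asarray(predicted, dtype=float)
    pts = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(pts - pred, axis=1)   # Euclidean distances
    inside = d <= radius
    if not inside.any():
        return None
    d[~inside] = np.inf                      # exclude points outside the circle
    return int(np.argmin(d))                 # minimum-distance match
```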
Target update: take the best matching object as the new initial position, update the target information, and repeat the above operations until tracking ends.
Figure 7 is a schematic diagram of the moving-target tracking experiment, showing frames k, k+3, k+6, and k+12. The tracking algorithm combining Kalman-filter prediction with the minimum Euclidean distance tracks moving targets effectively.
S5: before the next frame arrives, if the predicted region of the tracked moving target lies outside the camera's field of view, the count value changes; moving targets are counted in this way.
Figure 8 shows the visitor-counting diagram for a simulated scenic spot. The camera covers the area from the railings on both sides up to the ticket gate, i.e. the area marked by the dashed ellipse in the figure. Suppose a visitor stands in the undetected area A before entering the park. As soon as the visitor steps into detection area B, counting begins and the video-image processing algorithm starts working. Following the moving-target tracking algorithm, the first position at which a target enters area B is taken as its initial position, and its position is detected and updated in real time. When the Kalman filter predicts that the target's position in the next frame lies beyond the detection area, the target has appeared in area C, and the count of people entering the scenic spot changes.
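The zone-based counting rule can be sketched as follows. The axis-aligned rectangle standing in for detection area B and the function name are hypothetical simplifications; the patent's area B is bounded by the railings and the ticket gate.

```python
def update_count(count, predicted_center, region_b):
    """region_b = (x_min, y_min, x_max, y_max) approximates detection
    area B. When the Kalman-predicted center for the next frame falls
    outside B (the target has passed into area C), the visitor count
    is incremented."""
    x, y = predicted_center
    x_min, y_min, x_max, y_max = region_b
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    return count + (0 if inside else 1), inside
```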
(Embodiment 2)
Based on the same inventive concept as the KVM method for monitoring and managing scenic-spot visitor flow in the preceding embodiment, the present invention also provides a KVM system for monitoring and managing scenic-spot visitor flow.
Specifically, the system includes a processor, an FPGA chip, and a computer program stored on the FPGA chip and executable on the processor, characterized in that the FPGA chip model is XC7A100T-2FGG484I.
As shown in Figure 1, taking scenic-spot entrance 1 as an example, three cameras each capture a video stream and transmit it to the FPGA chip; one path is looped out locally, the other is processed by the video-image algorithms. First, a FIFO buffer stores the received video signal. To guarantee output quality, the signal is looped out locally through a buffer while, under timing control, each frame is converted to grayscale by the RGB-to-YCbCr module and written by the memory controller to an address in DDR3 memory. The data are then read back through the memory controller for moving-target detection, moving-target segmentation, moving-target tracking, and moving-target counting. Finally, the processing results are sent over the SPI bus by the SPI interface controller to the HiSilicon module for compression and encoding. When the program is executed, the following steps are implemented:
S1: apply the three-frame difference method to the captured video images for moving-target detection;

S2: perform moving-target segmentation on the video images after moving-target detection;

S3: take the visitor's head region as the target feature, and set the bounding rectangle of the head region as the tracking window;

S4: use Kalman filtering to predict the region of the tracking window in the next frame, then find the best matching object in the predicted region using the minimum Euclidean distance, thereby achieving target tracking;

S5: before the next frame arrives, if the predicted region of the tracked moving target lies outside the camera's field of view, the count value changes; moving targets are counted in this way.
The moving-target detection uses the three-frame difference method and includes the following steps:
S101: let f_{K−1}(x, y), f_K(x, y), and f_{K+1}(x, y) denote the gray value at pixel (x, y) in frames k−1, k, and k+1;

S102: compute the difference between f_K(x, y) and f_{K−1}(x, y), and between f_{K+1}(x, y) and f_K(x, y); the difference images are denoted D_K(x, y) and D_{K+1}(x, y), where D_K(x, y) = |f_K(x, y) − f_{K−1}(x, y)| and D_{K+1}(x, y) = |f_{K+1}(x, y) − f_K(x, y)|;

S103: apply a logical AND to the difference images D_K(x, y) and D_{K+1}(x, y) to obtain the combined image;

S104: apply threshold segmentation using the maximum between-class variance (Otsu) method to obtain a binary image;

S105: apply a dilation operation for morphological processing, yielding a clearer foreground moving-target image Rn.
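Steps S101–S105 can be sketched in plain NumPy. Two simplifications to note: a fixed threshold stands in for the Otsu segmentation of S104, and the 3×3 dilation is implemented with shifted copies (which wrap at the image border) rather than a proper morphology routine.

```python
import numpy as np

def three_frame_diff(f_prev, f_cur, f_next, thresh=15):
    """Three-frame difference (S101-S105). A fixed threshold replaces
    the maximum between-class variance (Otsu) method used in S104."""
    d_k  = np.abs(f_cur.astype(float) - f_prev.astype(float))   # D_K   (S102)
    d_k1 = np.abs(f_next.astype(float) - f_cur.astype(float))   # D_{K+1}
    binary = (d_k > thresh) & (d_k1 > thresh)                   # AND + threshold (S103-S104)
    return dilate(binary)                                       # morphological dilation (S105)

def dilate(mask):
    # 3x3 dilation via shifted copies; np.roll wraps around at the
    # borders, acceptable for this sketch but not for production use.
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out
```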
The specific embodiments described above further explain the purpose, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010024838.XA CN111415374A (en) | 2020-01-10 | 2020-01-10 | KVM system and method for monitoring and managing scenic spot pedestrian flow |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111415374A true CN111415374A (en) | 2020-07-14 |
Family
ID=71493970
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112102362A (en) * | 2020-09-14 | 2020-12-18 | 北京数衍科技有限公司 | Pedestrian step track determination method and device |
CN112381975A (en) * | 2020-11-16 | 2021-02-19 | 成都中科大旗软件股份有限公司 | Scenic spot scheduling system and scheduling method based on 5G |
CN112837337A (en) * | 2021-02-04 | 2021-05-25 | 成都国翼电子技术有限公司 | Method and device for identifying connected region of massive pixel blocks based on FPGA |
CN115119253A (en) * | 2022-08-30 | 2022-09-27 | 北京东方国信科技股份有限公司 | Method, device and equipment for monitoring regional pedestrian flow and determining monitoring parameters |
CN117132948A (en) * | 2023-10-27 | 2023-11-28 | 南昌理工学院 | Scenic spot tourist flow monitoring method, system, readable storage medium and computer |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101533341A (en) * | 2008-03-11 | 2009-09-16 | 宏正自动科技股份有限公司 | Operating platform module and computer module suitable for multicomputer switching system |
EP2866047A1 (en) * | 2013-10-23 | 2015-04-29 | Ladar Limited | A detection system for detecting an object on a water surface |
CN109102523A (en) * | 2018-07-13 | 2018-12-28 | 南京理工大学 | A kind of moving object detection and tracking |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||