CN103955699A - Method for detecting tumble event in real time based on surveillance videos - Google Patents
- Publication number: CN103955699A (application CN201410125985.0A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The present application discloses a real-time fall event detection method based on surveillance video. In the detection scene, multiple cameras with different shooting angles are installed facing the same target area, and these cameras film the target area continuously. The method comprises the following steps: the multiple cameras simultaneously record a segment of video of the target area; from the videos of the same time period captured by the respective cameras, a foreground image representing the target is extracted from every frame; the shape and position features of the foreground image of the same target at the same moment are extracted from the frames captured by the multiple cameras, and an RVM classifier is used to determine the target posture category at the moment corresponding to each frame; the resulting per-frame target posture categories are input as a sequence of posture values into an HMM evaluator to obtain the posterior probability of a target posture-category change, where such a change represents the occurrence of a fall event; if the posterior probability is greater than a predetermined threshold, a fall is determined to have occurred.
Description
Technical Field
The present invention relates to the field of image pattern recognition and, more specifically, to a real-time fall detection method based on RVM and HMM.
Background Art
Because research on fall detection has high theoretical significance and practical value, relevant research results and products have appeared both domestically and abroad. According to the method employed, fall detection technologies can be divided into three categories: wearable-device detection, ambient-device detection, and surveillance-video detection. The first two are sensor-based methods; the last is based on image processing.
In wearable-device detection, the user wears instruments equipped with sensors or other devices that help the system capture the user's actions and body motion; the system detects falls by classifying the collected information. Reference [1] uses velocity and acceleration sensors to detect the sudden stillness of human body motion and thereby detect falls.
Wearable-device detection is simple to deploy, but its main problem is that the sensor parameter thresholds are set according to a precise relative position between the instrument and the wearer; once this relationship is disturbed (which in fact happens often, for example during strenuous movement or when dressing and undressing), a large number of false detections occur. Moreover, having to wear an instrument causes the user considerable discomfort and inconvenience.
Ambient-device methods collect body-related data about the user through various sensors placed in the environment and determine whether a fall has occurred through data analysis. Alwan et al. [2] use floor-mounted vibration sensors to detect falls; the Technical Solutions Australia system [3] collects pressure information exerted by the user through "out-of-bed" alarms, floor-mat alarms, and the like, and infers the user's posture by analyzing the collected data.
Like wearable-device detection, this approach is also prone to false detections caused by other interference in the environment; and although it spares the user the trouble of wearing an instrument, the large number of added sensors increases system complexity.
Surveillance-video detection, i.e., computer-vision detection, analyzes the video data within a monitored environment in real time to determine whether a fall has occurred. This approach can be further subdivided into three types: (1) Stillness detection. A person who has fallen usually lies still on the ground for some time; based on this assumption, Nait-Charif and McKenna [4] used a wide-angle lens mounted above the user's head to capture the user's motion trajectory and detect the abrupt termination of the trajectory at the time of a fall. (2) Body-shape change detection. During a fall, the faller's body shape usually changes markedly, for example from standing to lying flat. Based on this principle, Ganapathy et al. [5] used the aspect ratio and inclination angle of the human body's bounding rectangle as posture features, judged body-shape changes by analyzing changes in these feature values, and thereby detected whether a fall event had occurred. (3) Head movement/position detection. Here, researchers detect falls by locating the human head and either tracking its motion trajectory or measuring its distance from the ground. Shoaib et al. [6] detected the head by ellipse fitting and used scene ground information modeled with a Gaussian distribution to compute the head's distance relative to the ground and judge whether a fall occurred.
Among surveillance-video detection methods, most studies use only a single motion feature or posture feature, which easily causes many false detections. Moreover, the relevant literature does not consider falls along the camera's viewing direction; in such cases the shape of a fallen person is similar to that of a standing person, and the two postures are difficult to distinguish from appearance features alone.
List of References Mentioned Above
[1] Almeida, O., M. Zhang, and J. C. Liu. Dynamic fall detection and pace measurement in walking sticks. IEEE Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability, 2007.
[2] Alwan, M., et al. A smart and passive floor-vibration based fall detector for elderly. IEEE 2nd Conf. on Information and Communication Technologies, 2006.
[3] http://www.tecsol.com.au/
[4] Nait-Charif, H. and S. J. McKenna. Activity summarisation and fall detection in a supportive home environment. IEEE 17th Conf. on Pattern Recognition, 2004.
[5] V. Vaidehi et al. Video based automatic fall detection in indoor environment. IEEE International Conference on Recent Trends in Information Technology, 2011.
[6] Shoaib, M., R. Dragon, and J. Ostermann. View-invariant fall detection for elderly in real home environment. 4th Pacific-Rim Symposium on Image and Video Technology, 2010.
Summary of the Invention
The inventors of the present application made the present invention in view of the above-described state of the prior art. The present invention proposes a fall detection method based on multi-angle cameras that can detect fall events in different spatial directions, has real-time processing capability, and is highly practical. For example, in an indoor home environment, it can promptly detect possible falls of elderly people or patients who are alone (the observed subjects), greatly reducing the harm caused by such falls.
Usually, when a fall occurs, the faller's posture changes substantially. Based on this principle, the present invention divides the postures occurring during a fall into four categories, extracts the target's appearance and scene features from the video images of two cameras at different angles, and uses a relevance vector machine (RVM) to perform fast posture recognition of the moving target. A hidden Markov model (HMM) is used to model the posture changes during a fall, and this model is used to evaluate every motion segment in the surveillance video, so as to determine whether a fall event has occurred.
According to an embodiment of the present invention, there is provided a real-time fall event detection method based on surveillance video, wherein multiple cameras with different shooting angles are installed in the detection scene facing the same target area and continuously film the target area. The method comprises the following steps: Step 1, the multiple cameras simultaneously record a segment of video of the target area; Step 2, from the videos of the same time period captured by the respective cameras, a foreground region representing the target is extracted from every frame; Step 3, the shape and position features of the foreground region of the same target at the same moment are extracted from the frames captured by the multiple cameras, and an RVM classifier is used to determine the target posture category at the moment corresponding to each frame; Step 4, the resulting per-frame target posture categories are input as a sequence of posture values into an HMM evaluator to obtain the posterior probability of a target posture-category change, wherein the posture-category change process indicates the occurrence of a fall event; and Step 5, if the posterior probability is greater than a predetermined threshold, the occurrence of a fall event is determined.
The present invention innovatively combines RVM and HMM for video pattern recognition: it can not only recognize the posture of the observed subject at any moment in the video, but also recognize the posture-change process over any period of time. In this way, events such as falls, in which the posture changes over a period of time, can be detected in real time.
The present invention is mainly applied to home surveillance video scenes to monitor possible fall (abnormal lying-down) events in the video and raise a timely alarm, thereby effectively protecting the personal safety of the monitored person. The beneficial effects mainly include: (1) All-weather real-time monitoring of empty-nest elderly people living alone: the behavior and state of the elderly are analyzed, useless information is automatically filtered out, and possible fall events are quickly recognized and reported so that timely help can be provided, fundamentally safeguarding elderly people who live alone. (2) Analysis of the physical state of patients who require supervision: when a fall occurs, the on-duty staff can be alerted automatically, prompting medical personnel to respond promptly. This both reduces the workload of medical staff and provides valuable time for the timely rescue of patients.
Brief Description of the Drawings
Fig. 1 is a schematic diagram showing the difference between the inclination angles of the human body's circumscribed ellipse in two mutually perpendicular camera directions according to an embodiment of the present invention;
Fig. 2 is a schematic diagram showing the scene information of the background in which the human body is located according to an embodiment of the present invention;
Fig. 3 is a schematic diagram showing the structure of the three trained RVM classifiers according to an embodiment of the present invention;
Fig. 4 is a graph showing how the logarithmic posterior probability log(P(O|λ)) of the posture sequence under the trained model varies with the frame number in one video, according to an embodiment of the present invention.
Detailed Description
The implementation of the technical solution is described in further detail below with reference to the accompanying drawings.
First, the principle of the present invention is briefly described.
According to an embodiment of the present invention, in the model training stage the real-time fall detection method based on HMM and RVM mainly comprises the following steps: 1) feature extraction, in which multiple features reflecting the posture changes of the human body are extracted from the training video frames of two cameras at different angles; 2) posture classification, in which a classifier is trained with the extracted features and the posture category of each training video frame is obtained from the classifier; 3) modeling of the posture changes during a fall with a hidden Markov model (HMM), i.e., generating the HMM model.
According to an embodiment of the present invention, in the event detection stage the real-time fall detection method based on RVM and HMM mainly comprises the following steps: 1) feature extraction, in which multiple features reflecting the posture changes of the human body are extracted from the test video frames of two cameras at different angles; 2) posture classification, in which the posture category of each test video frame is obtained with the trained classifier; 3) evaluation of every motion segment (i.e., every process in which the posture category changes) in the surveillance video with the HMM model generated in the training stage, so as to determine whether a fall event has occurred.
The concrete implementation of the RVM- and HMM-based real-time fall detection method of the present invention is described below in the above order. Those skilled in the art will understand that some of the following steps/operations exist in both the training stage and the testing stage; for brevity, they are not described twice.
1. Feature Extraction
Decomposing the process of falling, the posture-change process of the human body can be summarized as standing, then tilting, then lying (on the ground). Based on this, features composed of the human body's geometric appearance and scene information are used as the input of the RVM classifier to judge the posture. The postures of the human body in home videos can be roughly divided into four categories:
1) Standing;
2) Tilted;
3) Lying (on the floor only);
4) Other, including sitting, squatting, lying on a bed, etc.
The above classification is only an example; those skilled in the art will understand that, according to actual needs, human postures can also be divided into any number of categories different from the above four.
The human-body geometric appearance features used are:
1) The aspect ratio of the human body's bounding rectangle. For a standing posture this ratio is small; for a tilted posture the bounding rectangle is close to a square, with an aspect ratio close to 1;
2) The difference between the inclination angles of the human body's circumscribed ellipse in the two camera directions. To analyze the human posture from multiple camera angles, the angle of the fitted ellipse must be considered. Because of the camera's perspective transformation, some information is lost when the three-dimensional scene is mapped into a two-dimensional image, making many postures hard to distinguish, such as standing versus lying along the camera's viewing direction. Two cameras whose lines of sight are at the same height and mutually perpendicular can be used to complement each other's information effectively. For a standing posture, since the target is perpendicular to the ground plane, the angle between the major axis of the body's circumscribed ellipse and the horizontal axis is about 90° in both camera images, so the difference between the two angles is about 0°. For a tilted posture, the target forms a certain angle with the ground plane, and the difference (in absolute value) between the angles in the two camera directions lies roughly in the range 0° to 90°, generally significantly higher than in the standing posture (about 0°). For a lying posture (floor only), since the target is parallel to the ground plane, the difference (in absolute value) between the signed angles in the two perpendicular camera directions is about 90°. This principle is illustrated in Fig. 1 (from top to bottom: standing, tilted, lying; the first column is a schematic of the actual scene, and the second and third columns are the images captured by the two cameras). The above placement of two cameras with equal-height, perpendicular lines of sight is only an example; those skilled in the art will understand that the two cameras can in fact be placed in other ways, as long as the variation of the above angle difference across postures follows some regular pattern. Of course, more cameras can also be used to obtain more accurate and fine-grained classification results.
The scene information features used are:
1) A histogram of the scene information of the background in which the human body is located. According to the characteristics of home video, the scene regions are manually labeled in advance, mainly as bed/sofa/chair regions, walls, and floor, as shown in Fig. 2 (where gray represents walls, black represents bed/sofa/chair, and white represents the floor). The three regions are represented by different values; the scene information of the region occupied by the human body is accumulated, and the proportions of the three gray values are computed to form a 3-bin scene information histogram.
For the original video, a foreground segmentation algorithm is used to extract the moving region (see [7]). All three kinds of features above are extracted simultaneously from the two camera images, forming a 2×1 + 1 + 3×2 = 9-dimensional feature vector in total. That is, for the moment corresponding to each video frame of the surveillance video, this 9-dimensional feature vector is extracted.
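As an illustration of how such a 9-dimensional vector could be assembled, the sketch below computes, from two binary foreground masks and two pre-labeled scene maps, the per-view bounding-rectangle aspect ratios, the ellipse-orientation difference between the two views (estimated here from second-order image moments), and the per-view 3-bin scene histograms. All function names and the moment-based orientation estimate are illustrative assumptions, not code from the patent.

```python
import numpy as np

def orientation_deg(mask):
    """Orientation of the best-fit ellipse of a binary mask, in degrees,
    estimated from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

def scene_hist(mask, scene_labels):
    """3-bin histogram of scene labels (0 = wall, 1 = bed/sofa/chair,
    2 = floor) over the pixels covered by the foreground mask."""
    vals = scene_labels[mask > 0]
    hist = np.bincount(vals, minlength=3)[:3].astype(float)
    return hist / max(hist.sum(), 1)

def frame_features(mask_a, mask_b, scene_a, scene_b):
    """9-D feature vector for one frame pair: 2 aspect ratios + 1 angle
    difference + two 3-bin scene histograms = 2*1 + 1 + 3*2 = 9."""
    feats = []
    for m in (mask_a, mask_b):                  # bounding-box aspect ratio
        ys, xs = np.nonzero(m)
        w = xs.max() - xs.min() + 1
        h = ys.max() - ys.min() + 1
        feats.append(w / h)
    diff = abs(orientation_deg(mask_a) - orientation_deg(mask_b))
    feats.append(min(diff, 180 - diff))         # ellipse-angle difference
    feats += list(scene_hist(mask_a, scene_a))  # scene histogram, view A
    feats += list(scene_hist(mask_b, scene_b))  # scene histogram, view B
    return np.array(feats)
```

For a standing person seen by both cameras, the two aspect ratios are well below 1 and the angle difference is near 0°, matching the behavior described above.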
2. Posture Classification
Next, posture classification can be performed. The posture classifier uses an RVM (for example, the RVM disclosed in [8] can be used, because of its fast test speed). Since the RVM is mainly used for binary classification, multiple binary classifiers need to be trained for layer-by-layer classification. Because the extracted scene information features and human appearance features are of different kinds, a decision-tree classification structure is adopted to consider them separately: at each level a subset of the features is selected for classification, and the decision proceeds layer by layer. The scene histogram, the bounding-rectangle aspect ratio, and the circumscribed-ellipse angle difference are selected in turn, and three binary classifiers are trained; the resulting classification structure is shown in Fig. 3. Specifically, for example, the first classifier RVM1 can be used to distinguish the fourth posture category above from the other three; the second classifier RVM2 distinguishes the third category from the first and second; and the third classifier RVM3 distinguishes the second category from the first.
In this way, the 9-dimensional features extracted from each frame captured by the two cameras are fed in turn into these three classifiers to obtain the posture classification result. From the posture of the person in each frame, the corresponding posture type number is obtained, producing a posture sequence. This is the observation sequence of the HMM model used in the HMM evaluation below, where the number of possible observation values is 4, i.e., the four posture categories above.
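The layer-by-layer routing through RVM1 to RVM3 could be sketched as follows. The callable interface of each classifier and the mapping of feature slices to levels are assumptions for illustration; the patent itself only specifies which feature group each level uses.

```python
def classify_posture(features, rvm1, rvm2, rvm3):
    """Route one 9-D feature vector through three binary classifiers,
    mirroring the decision-tree structure of Fig. 3. Returns a posture
    label: 0 = standing, 1 = tilted, 2 = lying (floor), 3 = other.
    Each rvm* is a callable returning True for its positive class."""
    if rvm1(features[3:9]):      # level 1: scene histograms -> "other"
        return 3                 # sitting, squatting, lying on a bed, ...
    if rvm2(features[0:2]):      # level 2: aspect ratios -> lying on floor
        return 2
    if rvm3(features[2:3]):      # level 3: ellipse angle difference
        return 1                 # tilted
    return 0                     # standing
```

The per-frame labels returned by such a cascade form the observation sequence later fed to the HMM evaluator.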
The prediction (classification) process of the RVM classifier can be summarized as follows (see [8] for details):
1) Given the feature matrix X ∈ R^(n×m) used in training, a new feature vector x* ∈ R^(1×n) obtained from a test sample, and the trained RVM model vector p ∈ R^(m×1), where n is the feature dimension and m is the number of training samples;
2) Compute the basis vector b ∈ R^(1×m) from x* and X;
3) Multiply the basis vector by the model to obtain the value y = b·p; if y > 0.5, the sample is predicted as the positive class, otherwise as the negative class.
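Steps 1) to 3) amount to one kernel evaluation plus one inner product. A minimal sketch with an RBF kernel is shown below; note that rows of X_train are samples here (the transpose of the R^(n×m) convention above), and the gamma value is an arbitrary assumption. A full RVM would also pass y through a sigmoid, but the thresholding follows the text.

```python
import numpy as np

def rvm_predict(x_new, X_train, p, gamma=0.5):
    """RVM-style prediction: build the basis vector b by evaluating an
    RBF kernel between the new sample and each training sample, then take
    the inner product with the trained weight vector p. Predict the
    positive class when y > 0.5."""
    # b[i] = exp(-gamma * ||x_new - X_train[i]||^2),  b in R^(1 x m)
    d2 = np.sum((X_train - x_new) ** 2, axis=1)
    b = np.exp(-gamma * d2)
    y = b @ p                    # y = b * p
    return y > 0.5, y
```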
The training process of the RVM classifier comprises:
1) Select an appropriate kernel function to map the feature vectors into a high-dimensional space. Commonly used kernels include the RBF kernel, the Laplace kernel, and polynomial kernels; the RBF kernel is used in the present invention;
2) Initialize the RVM parameters;
3) Extract posture features from training samples of the four postures (standing, lying on the floor, tilted, other); the features of all samples form the feature matrix X, and the posture labels of all samples form the vector Y;
4) According to the Bayesian criterion, iteratively solve for the optimal weight distribution and distribution parameters of the training samples using the training features and labels;
5) Output the RVM parameters, i.e., the trained model.
3. HMM Evaluation
In each frame, the posture category of the target is recorded. To use the HMM, the target posture is represented by discrete values (0, 1, 2, 3), so that over a continuous period of time a sequence of target posture values of length T (corresponding to the number of video frames) is obtained, i.e., the observation sequence O1O2...OT. In the training stage, according to the HMM learning problem, the observation sequences O1O2...OT extracted from fall processes are used for learning to find a set of model parameters λ = {π, A, B} that maximizes P(O|λ); these are the parameters of the HMM fall model.
The training process of the HMM model comprises:
1) Collect multiple videos of different subjects falling in different directions;
2) Extract the features of each fall video and classify the postures with the RVM; within a time sliding window, the posture label output for each frame forms the HMM observation sequence;
3) Train the HMM model λ with the Baum-Welch algorithm based on multiple observation sequences. The steps of the traditional Baum-Welch algorithm are as follows (see [9]):
3-1) Assign an initial value λ0 to the model parameters;
3-2) Using the forward-backward method (see [10]), compute the posterior probability of the observation sequence O under this model, i.e., P(O|λ0);
3-3) Based on the observation sequence O and the current model parameters, update the model parameters λ. The update formulas are:

πi = γ1(i)
aij = Σ_{t=1}^{T−1} ξt(i,j) / Σ_{t=1}^{T−1} γt(i)
bj(k) = Σ_{t=1, Ot=vk}^{T} γt(j) / Σ_{t=1}^{T} γt(j)

In these formulas, N is the number of hidden states, whose meaning is the latent target posture information (e.g., falling posture, standing posture, tilted posture); the empirical value 3 is used in this patent. M is the number of observation values, i.e., the four posture labels output by the RVM. aij is the transition probability from state i to state j; bj(k) is the emission probability, i.e., the probability of outputting observation value k in state j; πi is the initial state probability distribution; vk denotes the k-th observation value; and Ot denotes the observation at time t. ξt(i,j) and γt(i) are auxiliary variables computed from the current model parameters (see [9]), and T is the length of the training video sequence.
3-4 Compute the likelihood P(O|λ) of the observation sequence under the new model;
3-5 If logP(O|λ) - logP(O|λ0) < Δ (where Δ is a very small number, usually around 1e-6), training has converged; the algorithm terminates and outputs the current model λ. Otherwise, set λ0 = λ and return to step 3-2.
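The single-sequence training steps above can be sketched in NumPy as follows. This is a minimal illustrative implementation of one re-estimation pass (no log-space scaling, so it is only suitable for short sequences); the toy transition and emission values are assumptions for demonstration, not parameters from the patent:

```python
import numpy as np

def forward(obs, A, B, pi):
    """alpha[t, i] = P(O_1..O_t, state i at time t | model)."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(obs, A, B):
    """beta[t, i] = P(O_{t+1}..O_T | state i at time t, model)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(obs, A, B, pi):
    """One re-estimation pass (steps 3-2 and 3-3): returns updated
    (A, B, pi) and the likelihood P(O | lambda) of the current model."""
    T, N = len(obs), A.shape[0]
    alpha, beta = forward(obs, A, B, pi), backward(obs, A, B)
    likelihood = alpha[-1].sum()              # P(O | lambda)
    gamma = alpha * beta / likelihood         # gamma[t, i]
    xi = np.zeros((T - 1, N, N))              # xi[t, i, j]
    for t in range(T - 1):
        xi[t] = alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1] / likelihood
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi, likelihood

# toy example: N = 2 hidden states, M = 2 observation symbols
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 0, 0, 1]
A2, B2, pi2, lik = baum_welch_step(obs, A, B, pi)
```

Iterating `baum_welch_step` until the log-likelihood improvement drops below Δ reproduces the stopping rule above; each pass is guaranteed not to decrease P(O|λ).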
Because the traditional training algorithm uses only a single observation sequence, the resulting model does not generalize well; this patent therefore adopts a training algorithm based on multiple observation sequences.
Consider a set of observation sequences of the same pattern, O = {O(1), O(2), …, O(K)}, each O(k) (1 ≤ k ≤ K) being a single observation sequence. Since the sequences do not influence one another, they are assumed to be mutually independent. Under this assumption, the model parameter update formulas become (see reference [9]):

a_ij = [Σ_{k=1..K} Σ_{t=1..T_k-1} ξ_t^(k)(i,j)] / [Σ_{k=1..K} Σ_{t=1..T_k-1} γ_t^(k)(i)]

b_j(m) = [Σ_{k=1..K} Σ_{t: O_t^(k)=v_m} γ_t^(k)(j)] / [Σ_{k=1..K} Σ_{t=1..T_k} γ_t^(k)(j)]

π_i = (1/K) Σ_{k=1..K} γ_1^(k)(i)
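Under the independence assumption, the pooled update amounts to summing each sequence's numerator and denominator statistics before dividing. A minimal NumPy sketch (function names and toy values are illustrative assumptions, not the patent's code):

```python
import numpy as np

def fb_stats(obs, A, B, pi):
    """Forward-backward statistics for one observation sequence:
    summed xi, gamma sums over t, gamma at t=0, per-symbol gamma sums."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    P = alpha[-1].sum()                       # P(O^(k) | lambda)
    gamma = alpha * beta / P
    xi_sum = np.zeros_like(A)
    for t in range(T - 1):
        xi_sum += alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1] / P
    obs = np.asarray(obs)
    emit = np.stack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])], axis=1)
    return xi_sum, gamma[:-1].sum(axis=0), gamma.sum(axis=0), gamma[0], emit

def multi_sequence_update(sequences, A, B, pi):
    """Pool the re-estimation statistics of K independent sequences:
    numerators and denominators are summed over k before dividing."""
    num_A = np.zeros_like(A)
    den_A = np.zeros(A.shape[0])
    num_B = np.zeros_like(B)
    den_B = np.zeros(A.shape[0])
    pi_acc = np.zeros(A.shape[0])
    for obs in sequences:
        xi_sum, g_head, g_all, g0, emit = fb_stats(obs, A, B, pi)
        num_A += xi_sum
        den_A += g_head
        num_B += emit
        den_B += g_all
        pi_acc += g0
    return num_A / den_A[:, None], num_B / den_B[:, None], pi_acc / len(sequences)

# toy example with K = 3 short sequences
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
seqs = [[0, 1, 0], [1, 1, 0, 0], [0, 0, 1]]
A2, B2, pi2 = multi_sequence_update(seqs, A, B, pi)
```

Pooling before dividing is what distinguishes this update from simply averaging K independently re-estimated models: sequences with more frames contribute proportionally more evidence.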
The HMM model parameters can be summarized in the following table:

Parameter | Meaning | Value
---|---|---
N | number of hidden states (target postures) | 3 (empirical)
M | number of observation values (RVM posture labels) | 4
A = {a_ij} | state transition probability matrix | learned in training
B = {b_j(k)} | emission probability matrix | learned in training
π = {π_i} | initial state distribution | learned in training
In the testing phase, following the HMM evaluation problem, the trained parameters and the observation sequence O = O1O2…OT extracted from the test video are used to compute the likelihood P(O|λ) of the sequence (see the computation formulas in reference [10]), from which it is judged whether a fall has occurred.
Figure 4 plots the log-likelihood logP(O|λ) of the posture sequence under the trained model against frame number for one video; the six peak intervals correspond to six distinct fall events. During a fall, the log-likelihood of the posture output sequence reaches an extremum and differs markedly from that of non-fall postures.
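The testing-phase computation of P(O|λ) is commonly done with a scaled forward recursion so that long sequences do not underflow. The sketch below illustrates this; `is_fall` and its threshold are assumptions for illustration (the patent detects falls from likelihood peaks such as those in Figure 4), and the toy model values are not from the patent:

```python
import numpy as np

def log_likelihood(obs, A, B, pi):
    """Scaled forward algorithm: returns log P(O | lambda) without
    numerical underflow on long observation sequences."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha = alpha / c
    log_p = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha = alpha / c
        log_p += np.log(c)
    return log_p

def is_fall(window, model, threshold):
    """Flag a window of posture labels as a fall when its log-likelihood
    under the trained fall model exceeds a threshold (a value that would
    have to be tuned on validation videos)."""
    A, B, pi = model
    return log_likelihood(window, A, B, pi) > threshold

# toy 2-state model for demonstration only
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 1]
```

Per-symbol normalization keeps the recursion in a numerically safe range while the accumulated log of the normalizers recovers the exact log-likelihood.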
References cited above
[7] Yuyang Chen, Yanyun Zhao, and Anni Cai, "A robust moving object segmentation algorithm using integrated mask-based background maintenance," 3rd IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), 2012.
[8] Michael E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[9] L.E. Baum, T. Petrie, G. Soules, and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," Annals of Mathematical Statistics, vol. 41, no. 1, pp. 164-171, 1970.
[10] Xiaolin Li, M. Parizeau, and R. Plamondon, "Training hidden Markov models with multiple observations - a combinatory method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 4, pp. 371-377, April 2000.
To avoid making this specification unduly lengthy, some technical details available in the above references or other prior-art materials may have been omitted, simplified, or adapted in the description herein. Those skilled in the art will understand this, and it does not affect the sufficiency of the disclosure. The above references are hereby incorporated herein by reference in their entirety.
In summary, those skilled in the art will understand that various modifications, variations, and replacements may be made to the above embodiments of the present invention, all of which fall within the protection scope of the present invention as defined by the appended claims.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410125985.0A CN103955699B (en) | 2014-03-31 | 2014-03-31 | A kind of real-time fall events detection method based on monitor video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410125985.0A CN103955699B (en) | 2014-03-31 | 2014-03-31 | A kind of real-time fall events detection method based on monitor video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103955699A true CN103955699A (en) | 2014-07-30 |
CN103955699B CN103955699B (en) | 2017-12-26 |
Family
ID=51332974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410125985.0A Active CN103955699B (en) | 2014-03-31 | 2014-03-31 | A kind of real-time fall events detection method based on monitor video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103955699B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090278934A1 (en) * | 2003-12-12 | 2009-11-12 | Careview Communications, Inc | System and method for predicting patient falls |
CN103186902A (en) * | 2011-12-29 | 2013-07-03 | 爱思开电讯投资(中国)有限公司 | Trip detecting method and device based on video |
Non-Patent Citations (1)
Title |
---|
MEI JIANG et al.: "A Real-time Fall Detection System Based on HMM and RVM", Visual Communications and Image Processing (VCIP), 2013, IEEE * |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016197385A1 (en) * | 2015-06-12 | 2016-12-15 | 深圳开源创客坊科技有限公司 | Alarm system and method capable of monitoring accidental tumble of human body |
CN105354540A (en) * | 2015-10-22 | 2016-02-24 | 上海鼎松物联网科技有限公司 | Video analysis based method for implementing person fall-down behavior detection |
CN105930906A (en) * | 2016-04-15 | 2016-09-07 | 上海大学 | Trip detection method based on characteristic weighting and improved Bayesian algorithm |
CN105868403A (en) * | 2016-04-20 | 2016-08-17 | 浙江宇视科技有限公司 | Method and device for extracting video |
CN105868403B (en) * | 2016-04-20 | 2019-10-18 | 浙江宇视科技有限公司 | Extract the method and device of video recording |
CN106228572B (en) * | 2016-07-18 | 2019-01-29 | 西安交通大学 | A kind of the long inactivity object detection and tracking of carrier state mark |
CN106228572A (en) * | 2016-07-18 | 2016-12-14 | 西安交通大学 | The long inactivity object detection of a kind of carrier state mark and tracking |
CN106529455A (en) * | 2016-11-04 | 2017-03-22 | 哈尔滨工业大学 | Fast human posture recognition method based on SoC FPGA |
CN106529455B (en) * | 2016-11-04 | 2019-06-11 | 哈尔滨工业大学 | A Fast Human Gesture Recognition Method Based on SoC FPGA |
CN107221128A (en) * | 2017-05-19 | 2017-09-29 | 北京大学 | A kind of evaluation of portable body fall risk and early warning system and its method |
CN108363966A (en) * | 2018-01-30 | 2018-08-03 | 广东工业大学 | A kind of interior fall detection method and system |
CN110136381B (en) * | 2018-02-07 | 2023-04-07 | 中国石油化工股份有限公司 | On-spot personnel of drilling operation monitoring early warning system that stands |
CN110136381A (en) * | 2018-02-07 | 2019-08-16 | 中国石油化工股份有限公司 | A kind of well drilling operation site personnel standing monitoring and warning system |
CN108986405A (en) * | 2018-08-07 | 2018-12-11 | 河南云拓智能科技有限公司 | A kind of multi parameters control method based on Zigbee gateway |
CN109447174A (en) * | 2018-11-07 | 2019-03-08 | 金瓜子科技发展(北京)有限公司 | A kind of lacquer painting recognition methods, device, storage medium and electronic equipment |
CN111723598A (en) * | 2019-03-18 | 2020-09-29 | 北京邦天信息技术有限公司 | Machine vision system and implementation method thereof |
CN111753587B (en) * | 2019-03-28 | 2023-09-29 | 杭州海康威视数字技术股份有限公司 | Ground falling detection method and device |
CN111753587A (en) * | 2019-03-28 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Method and device for detecting falling to ground |
CN110378515A (en) * | 2019-06-14 | 2019-10-25 | 平安科技(深圳)有限公司 | A kind of prediction technique of emergency event, device, storage medium and server |
CN110765860A (en) * | 2019-09-16 | 2020-02-07 | 平安科技(深圳)有限公司 | Tumble determination method, tumble determination device, computer apparatus, and storage medium |
CN110765860B (en) * | 2019-09-16 | 2023-06-23 | 平安科技(深圳)有限公司 | Tumble judging method, tumble judging device, computer equipment and storage medium |
CN110852237A (en) * | 2019-11-05 | 2020-02-28 | 浙江大华技术股份有限公司 | Object posture determining method and device, storage medium and electronic device |
WO2021164654A1 (en) * | 2020-02-20 | 2021-08-26 | 艾科科技股份有限公司 | Time-continuity-based detection determination system and method |
CN111460908B (en) * | 2020-03-05 | 2023-09-01 | 中国地质大学(武汉) | Human body fall recognition method and system based on OpenPose |
CN111460908A (en) * | 2020-03-05 | 2020-07-28 | 中国地质大学(武汉) | A method and system for human fall recognition based on OpenPose |
CN111767888A (en) * | 2020-07-08 | 2020-10-13 | 北京澎思科技有限公司 | Object state detection method, computer device, storage medium, and electronic device |
CN111899470A (en) * | 2020-08-26 | 2020-11-06 | 歌尔科技有限公司 | Human body falling detection method, device, equipment and storage medium |
CN113505752A (en) * | 2021-07-29 | 2021-10-15 | 中移(杭州)信息技术有限公司 | Fall detection method, device, equipment and computer readable storage medium |
CN113505752B (en) * | 2021-07-29 | 2024-04-23 | 中移(杭州)信息技术有限公司 | Tumble detection method, device, equipment and computer readable storage medium |
CN116887057A (en) * | 2023-09-06 | 2023-10-13 | 北京立同新元科技有限公司 | Intelligent video monitoring system |
CN116887057B (en) * | 2023-09-06 | 2023-11-14 | 北京立同新元科技有限公司 | Intelligent video monitoring system |
CN118430183A (en) * | 2024-04-28 | 2024-08-02 | 岳正检测认证技术有限公司济南分公司 | A method for monitoring the posture of a person getting out of bed at night |
CN118430183B (en) * | 2024-04-28 | 2024-11-08 | 中国人民解放军总医院第一医学中心 | A method for monitoring the posture of a person getting out of bed at night |
Also Published As
Publication number | Publication date |
---|---|
CN103955699B (en) | 2017-12-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103955699B (en) | A kind of real-time fall events detection method based on monitor video | |
Wang et al. | Human fall detection in surveillance video based on PCANet | |
Harrou et al. | An integrated vision-based approach for efficient human fall detection in a home environment | |
Yun et al. | Human fall detection in videos by fusing statistical features of shape and motion dynamics on Riemannian manifolds | |
CN107657244B (en) | A multi-camera-based human fall behavior detection system and its detection method | |
Gatt et al. | Detecting human abnormal behaviour through a video generated model | |
Abdo et al. | Fall detection based on RetinaNet and MobileNet convolutional neural networks | |
Poonsri et al. | Fall detection using Gaussian mixture model and principle component analysis | |
Poonsri et al. | Improvement of fall detection using consecutive-frame voting | |
Yun et al. | Human fall detection via shape analysis on Riemannian manifolds with applications to elderly care | |
Hasib et al. | Vision-based human posture classification and fall detection using convolutional neural network | |
CN111079481B (en) | An aggressive behavior recognition method based on two-dimensional skeleton information | |
Iazzi et al. | Fall detection based on posture analysis and support vector machine | |
Shoaib et al. | View-invariant fall detection for elderly in real home environment | |
Bhavani et al. | Human Fall Detection using Gaussian Mixture Model and Fall Motion Mixture Model | |
Hung et al. | Fall detection with two cameras based on occupied area | |
Alaoui et al. | Video based human fall detection using von mises distribution of motion vectors | |
CN112232190B (en) | Method for detecting abnormal behaviors of old people facing home scene | |
Merrouche et al. | Fall detection using head tracking and centroid movement based on a depth camera | |
CN111178134B (en) | A Fall Detection Method Based on Deep Learning and Network Compression | |
Dai | Vision-based 3d human motion analysis for fall detection and bed-exiting | |
Biswas et al. | A literature review of current vision based fall detection methods | |
Yun et al. | Fall detection in RGB-D videos for elderly care | |
Lee et al. | Automated abnormal behavior detection for ubiquitous healthcare application in daytime and nighttime | |
ShanShan et al. | Fall detection method based on semi-contour distances |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |