GB2605647A - Method and device - Google Patents
Method and device
- Publication number
- GB2605647A (Application GB2105085.1A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- individual
- objects
- velocity
- processing
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0407—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
- G08B21/043—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0469—Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0492—Sensor dual technology, i.e. two or more technologies collaborate to extract unsafe condition, e.g. video tracking and RFID tracking
Landscapes
- Health & Medical Sciences (AREA)
- Emergency Management (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Gerontology & Geriatric Medicine (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
A computer-implemented method of generating an alert if an individual 102 has fallen over in a monitored area 100, such as a room. The method is implemented by a processing resource (130, fig. 1b). The method begins with receiving a video feed of the area 100. The video feed may be captured by a thermal imaging camera (104, figs. 1a and 1b) or a time-of-flight camera (404, fig. 6), which may be used to monitor a human head. Static objects in the area 100, such as a television (106, fig. 1a), a table and chairs (108, fig. 1a), a sitting chair (110, fig. 1a) and a pressure cooker (112, fig. 1a), are then filtered out. The filtering may occur by removing objects whose heat signature is not within a pre-determined human temperature range, which may be between 36 and 39 degrees Celsius. The filtering may also occur by determining which objects have not moved over a pre-determined time and removing those objects from the data. The individual 102 is then identified in the monitored area 100 and their movement is traced by measuring the velocity of their representation. An alarm indicating the individual has fallen over is generated if the velocity of the individual in the area 100 exceeds a pre-determined velocity threshold.
Description
Intellectual Property Office Application No. GB2105085.1 RTM Date: 21 July 2021. The following terms are registered trade marks and should be read as such wherever they occur in this document: Bluetooth, Microsoft. Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
METHOD AND DEVICE
Technical Field
The invention relates to a method and device. Particularly, but not exclusively, the invention relates to a computer-implemented method. Further particularly, but not exclusively, the invention relates to a computer-implemented method of generating an alert if an individual has fallen over in a monitored area.
Background
Individuals who are in poor health or in sheltered accommodation or homes for the elderly are often susceptible to falling over in their private living areas. When such incidents happen, it can sometimes be a couple of hours or even days before they are found, and this lag between the fall event and the awareness of others can be critical, even fatal.
One solution is to provide a pendant or a wearable device worn around the wrist of the respective individual, but these depend on the individual remembering to don these items. This is a high expectation of individuals who may be in poor mental or physical health, as they may forget to don these items for a myriad of reasons.
Another solution is to provide a pull cord to alert carers of the predicament but this has inherent limitations in that it requires the individual to be able to reach the pull cord to even initiate the process. If the individual has fallen over, this may prevent even a usually mobile individual from reaching the cord.
Aspects and embodiments were conceived with the foregoing in mind.
Summary
Viewed from a first aspect, there is provided a computer-implemented method of generating an alert if an individual has fallen over in a monitored area, the method implemented by a processing resource, the method comprising receiving data representative of a video feed of a monitored area, processing the data to filter out static objects in the monitored area, processing the data to identify a representation of an individual in the monitored area by processing the data to filter out objects which do not satisfy a human body criterion, tracing the movement of the individual in the monitored area by measuring the velocity of the representation of the individual, wherein the method further comprises generating an alarm indicating the individual has fallen over if the velocity of the representation of the individual in the monitored area has exceeded a pre-determined velocity threshold.
The video feed may be generated by an image capture device such as, for example, a thermal imaging camera or a time-of-flight camera.
The video feed may be provided as a plurality of frames which each represent a captured image of the monitored area at an instance in time. The frames may be described as image frames. The video feed may be received as a series of frames and the processing may be implemented on a frame-by-frame basis. That is, the processing steps may be performed on a first frame and then performed again on the next frame, and so on. Processing the data to filter out static objects, to identify a representation, or to filter out objects which do not satisfy a human body criterion may be implemented on each of the received frames. A human body criterion may be any set of physical parameters which identify a human body, such as temperature, reflected light, impedance etc. Processing the data at any of the stages may comprise calculating average velocity, heat signature values, light signature values or any other indicator over a specified period of time.
The received video feed may be filtered and the remaining steps only executed on frames separated by a defined interval. For example, the received video feed may be received at a standard frame rate associated with video but a filtering module coupled to the processing device may only enable every fifth frame to proceed to the following processing steps. That is to say, frames may be extracted from the received video frame for further processing. The frames may be extracted at a specific rate.
Processing the data to identify a representation of an individual may comprise the application of image processing tools such as image segmentation and thresholding to the data. Processing of the data to identify a representation may comprise the application of two-dimensional or three-dimensional imaging techniques.
The processing resource may be any computing device or computing resource. The processing resource may be coupled to an imaging device and configured to receive the data representative of a video feed of the monitored area from the imaging device. The imaging device may be a thermal imaging device. The imaging device may be a time-of-flight camera. The processing resource may be a cloud resource. The processing resource may comprise a plurality of computing resources. The processing resource may be coupled to an imaging device which is mountable to a surface of the monitored area. The use of a thermal imaging device or a time-of-flight camera means the individual's privacy can be maintained as imaging data can be generated without needing to view the individual's face or specific personal features.
The monitored area may be a closed residential area such as a bedroom or another room in a residence. The monitored area may be a floor of a building. The monitored area may be an external area.
The velocity may be determined based on the position of the individual using any suitable approach, i.e. using the velocity vector approach by determining the magnitude of the vector created by the change in position or any other suitable approach.
The effect of a method in accordance with the first aspect enables a video feed to be received at a processing resource which is then configured to process the data to filter out objects which do not move and then also determine the presence of an individual in the monitored area by removing objects which are not representative of a human body. The movement of the individual is then traced using their velocity and then an alarm can be generated if the individual falls over. A method in accordance with the first aspect enables a fall to be determined without the need for the individual in question to be wearing an expensive wearable device or a pendant.
The processing resource may be configured to overlay a coordinate system onto the monitored area. The velocity may be determined with reference to the coordinate system. The coordinate system may be a two-dimensional or three-dimensional Cartesian coordinate system. The velocity may be determined based on these coordinates. The coordinate system may be used to associate a value, such as a temperature measurement or a reflected infrared light measurement, with a location in the room. This enables objects to be analysed and tracked by the various modules of the processing resource.
Optionally, the processing resource may be configured to receive the data representative of the video feed from a thermal imaging device configured to determine the heat signature of the objects in the monitored area, wherein the delineation of the representation of the individual may optionally comprise filtering out objects which have a heat signature which is not within a pre-determined human temperature range.
The effect of this feature is that the video feed is received from a thermal imaging device which means that the individual is effectively anonymised as only their heat signature is being monitored and their physical features are not clearly visible if the video feed is placed onto a display.
Optionally, the pre-determined human temperature range may be between 36 degrees Celsius and 39 degrees Celsius. A guideline level of precision here would be around three significant figures, although other levels of precision are acceptable.
The effect of this feature is that the method filters out objects which do not fall within the temperature range typical of a live human body.
Optionally, the method may further comprise identifying the monitored part of the individual by processing the data received from the thermal imaging device to determine the part of the individual with the highest temperature and tracing the movement of the individual in the monitored area by measuring the velocity of the monitored part of the individual.
The monitored part may be the head of the individual.
Optionally, the method may further comprise identifying the head of the individual by processing the data received from the thermal imaging device to determine the part of the individual with the highest temperature and tracing the movement of the individual in the monitored area by measuring the velocity of the head of the individual.
The effect of this is that the head of the individual is used to monitor for the fall event. The head is generally the hottest part of the human body and is therefore the optimal body part to track when determining whether a fall event is taking place whilst still benefitting from the anonymity provided by thermal imaging.
Processing the data to filter out static objects in the monitored area may optionally comprise measuring the velocity of the objects in the area to determine which objects have not moved over a pre-determined time period and removing those objects from the data.
The effect of this is that static objects are removed based on their velocity over a pre-determined time period. The static objects therefore do not interfere with the determination of the fall event.
Optionally, the alarm is generated if the measurement of the velocity indicates the movement is in a combination of horizontal and vertical movements.
The effect of this is that the fall detection capability is improved and false positive detections of a fall are reduced. A change in velocity which is solely vertical or solely horizontal is unlikely to be a fall. A change in velocity which is solely vertical is likely to be an event such as the individual sitting down. A change in velocity which is solely horizontal is likely to be an event such as the individual moving across the room as part of their normal routine.
The processing resource may be coupled to a time-of-flight imaging device, wherein the time-of-flight imaging device may be configured to determine a light signature of the objects in the monitored area, wherein processing the data to identify the representation of the individual comprises filtering out objects which do not provide a light signature indicative of a human head. The effect of this is that the imaging data can be captured without identifying the individual as only light readings are measured by a time-of-flight imaging device.
Measuring the velocity of the individual in the monitored area may comprise measuring the velocity of the light signature of the human head. This may be implemented by searching for light signatures in the long infra-red part of the infra-red spectrum.
Optionally, responsive to the generation of the alarm, a duplex communication channel may be initialised between a speaker in the monitored area and a microphone to enable communication with the individual to be established.
The effect of this is that, when the alarm is generated, a communication channel can be opened where an external individual can communicate with the individual regarding their fall event.
Optionally, responsive to the generation of the alarm, an alert is transmitted to an electronic device comprising data indicating the occurrence of the fall. The transmission may be implemented by any suitable protocol or means such as, for example, Voice over Internet Protocol (VOIP), telephone, the world-wide web, Bluetooth, Bluetooth Low Energy or Near Field Communication.
The effect of this is that, when the alarm is generated, an alert can be used to message an individual in possession of an electronic device to inform the individual of the fall.
Optionally, responsive to the generation of the alarm, the data representative of the video feed is transmitted to a computing device comprising a display, wherein the video feed is displayed on the display.
The effect of this is that the video feed is displayed on a display so that an individual is able to see the monitored area.
Viewed from a further aspect, there may be provided a computer-implemented method of generating an alert if an object has fallen over in a monitored area, the method implemented by a processing resource, the method comprising: receiving data representative of a video feed of a monitored area; processing the data to filter out static objects in the monitored area; processing the data to identify a representation of the object in the monitored area by processing the data to filter out objects which do not satisfy a physical criterion indicative of the object; tracing the movement of the object in the monitored area by measuring the velocity of the representation of the object; wherein the method further comprises generating an alarm indicating the object has fallen over if the velocity of the representation of the object in the monitored area has exceeded a pre-determined velocity threshold.
The processing resource may be configured to receive the data representative of the video feed from a thermal imaging device configured to determine the heat signature of the objects in the monitored area, wherein the delineation of the representation of the object comprises filtering out objects which have a heat signature which is not within a pre-determined temperature range.
The method may further comprise identifying a part of the object by processing the data received from the thermal imaging device to determine the part of the object with the highest temperature and tracing the movement of the object in the monitored area by measuring the velocity of the part of the object which is determined to have the highest temperature.
Processing the data to filter out static objects in the monitored area may comprise measuring the velocity of the objects in the area to determine which objects have not moved over a predetermined time period and removing those objects from the data.
The alarm may be generated if the measurement of the velocity indicates the movement is in a combination of horizontal and vertical movements.
The alarm may not be generated if the measurement of the velocity indicates the movement is solely horizontal or solely vertical.
The method may further comprise generating an alarm if the thermal imaging device determines the area has a temperature above or below a pre-determined temperature threshold.
The processing resource may be coupled to a time-of-flight imaging device, wherein the time-of-flight imaging device may be configured to determine a light signature of the objects in the monitored area, wherein processing the data to identify the representation of the object comprises filtering out objects which do not provide a light signature indicative of that object.
Measuring the velocity of the individual in the monitored area may comprise measuring the velocity of the light signature of the object.
Responsive to the generation of the alarm, a duplex communication channel may be initialised with the room to enable a conversation to take place with an individual in the monitored area.
Responsive to the generation of the alarm, an alert may be transmitted to an electronic device comprising data indicating the occurrence of the fall.
Responsive to the generation of the alarm, the data representative of the video feed may be transmitted to a computing device comprising a display, wherein the video feed is displayed on the display.
Description
Aspects and embodiments of the present disclosure will now be described, by way of example only, and with reference to the accompanying drawings, in which:
Figure 1 illustrates an individual falling in a three-dimensional plane as captured by a device configured in accordance with a first embodiment;
Figure 1a schematically illustrates an aspect view of a room being monitored by a device configured in accordance with the first embodiment;
Figure 1b provides a schematic of a processing resource configured in accordance with the first embodiment;
Figure 2 illustrates how a falling individual can generate a change in position of their head over a time period;
Figures 3a and 3b show a flow diagram illustrating how an individual falling is determined in accordance with the first embodiment;
Figure 4 schematically illustrates an aspect view of a room being monitored by a device configured in accordance with a second embodiment;
Figures 5a and 5b show a flow diagram illustrating how an individual falling is determined in accordance with the second embodiment;
Figure 6 provides a schematic of a processing resource configured in accordance with the second embodiment;
Figure 7 provides a schematic of a telecommunications module configured in accordance with the first or second embodiments; and
Figure 8 provides a flow diagram of the initialisation of communications after a fall event has been determined.
We now describe, with reference to Figures 1, 1a, 1b, 2, 3a and 3b, the monitoring of a room 100 in accordance with a first embodiment to determine whether an individual 102 has fallen over inside the room 100.
Figure 1a shows an aspect view of a room 100 from the view of an imaging device which may be, for example, a thermal imaging camera 104 mounted on a surface of the room 100. The room 100 contains a television 106, a table and chairs 108 and a sitting chair 110. The table has a pressure cooker 112 positioned on it.
The thermal imaging camera 104 is configured to capture a video feed of the room 100 as a plurality of frames. Standard frame rates associated with video processing may be adopted for the capture of the video, although other frame rates may also be used. In one embodiment, the frame rate may be 30 frames per second. The thermal imaging camera 104 is configured to determine a temperature reading for each pixel of the camera as described below. As also described below, the thermal imaging camera 104 is configured to transmit data to the processing module 114, which is part of the processing resource 130, via an imaging device interface 132. The thermal imaging camera 104 may be part of a single imaging device which also includes the processing resource 130. That is to say, the thermal imaging camera 104 and the processing resource 130 may be self-contained as part of the same imaging device, which may be mounted on the surface of the room 100 as illustrated in Figure 1a. The processing module 114 is configured to filter the received video feed such that the following steps are only applied to, for example, every fifth frame. Other frequencies may also be adopted. That is to say, the processing module 114 may be configured according to the situation to apply the following processing steps at a defined frequency, which may mean that the processing is applied only to, for example, every fifth frame, every tenth frame and so on. The optimal frame rate and the optimal rate of frame extraction will both depend on the processing capacity of the processing module 114.
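By way of a non-limiting illustration only, a minimal sketch of this frame-extraction step is given below; the `camera_feed` and `process` names are hypothetical placeholders and not part of the embodiment.

```python
# Minimal sketch: keep only every Nth frame of the received video feed so that
# downstream processing can run on modest hardware.
from typing import Iterable, Iterator


def extract_frames(frames: Iterable, every_nth: int = 5) -> Iterator:
    """Yield every Nth frame from the incoming feed; the rest are skipped."""
    for index, frame in enumerate(frames):
        if index % every_nth == 0:
            yield frame


# Example usage (hypothetical helpers): a 30 fps feed reduced to 6 frames/second.
# for frame in extract_frames(camera_feed(), every_nth=5):
#     process(frame)
```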
The processing module 114 is configured to transmit to and receive data from a thermal filtering module 116, a static object filtering module 118 and fall detection module 120. The transmission between the modules may be implemented using any suitable means. The modules may be housed in the same device or they may be distributed remotely relative to one another.
The communication between the thermal imaging camera 104 and the processing resource 130 may be implemented using any suitable telecommunication means including, for example, local area network, wide area network, world-wide web or data bus. Similarly, the communication between the processing module 114 and the other modules in the processing resource 130 may be implemented using any suitable data transmission means. Each of the modules is configured to access storage to retrieve and store data, call suitable data processing routines and provide input to those routines.
In a step S300, the thermal imaging camera 104 captures a video feed from the room 100 as a series of frames; thermal imaging cameras with various frame rates may be used. The thermal imaging camera 104 comprises an infra-red sensor which measures infra-red light captured as part of each frame of the video feed and uses that light to determine the temperature of the objects in the room 100. The thermal imaging camera 104 associates a temperature measurement with each pixel to generate a thermal map of the room from the frames received during the capture of the video feed. The temperature measurement is attributed to each pixel of each frame. If the thermal imaging camera 104 determines that the temperature of the room is too high, i.e. an average of over 30 degrees Celsius, then it may be configured to generate an alarm as the room is too hot for human habitation.
The thermal imaging camera 104 transmits the captured image data (frame-by-frame) and the associated temperature measurements to a processing module 114 in a step S302. The received image data and the temperature measurements are transmitted on a frame-by-frame basis. The received video feed is then filtered by the processing module 114 so that the following steps are applied only to every fifth frame, or whichever frequency of frames is preferred. That is to say, the processing module 114 may simply extract frames at a pre-determined frequency for processing. This means the processing requirement of the processing module 114 can be reduced to enable relatively small (in terms of processing capability) devices to be used. The frames which are not selected can be discarded or stored for further use elsewhere. The extracted frames are then used for processing the video feed by the processing resource 130.
The processing module 114 is configured to overlay a two-dimensional (x,y) coordinate plane onto the extracted frames of the received image data in that the coordinates are matched to the data received from the corresponding pixel on the camera 104. This can be implemented using the inverse perspective matrix method to extract x and y coordinates of the objects and the x and y coordinates can then be equated with the associated temperature measurements to extract the temperature of the objects captured in each extracted frame. This is step S304. The coordinate plane (x,y,z) is shown in Figure 1 against the aspect of the room which is visualized by the thermal imaging camera 104 in accordance with the embodiment. The inverse perspective matrix method works on the assumption that the thermal imaging camera 104 is at z=0. That is to say, two-dimensional coordinates are associated with the data received from the corresponding pixel on the camera to associate a position on the coordinate plane with a temperature measurement. Other known methods of associating a location in a room with a temperature measurement can also be used.
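Purely as an illustration of this coordinate overlay, the sketch below applies an assumed 3x3 inverse-perspective (homography) matrix to map each pixel to an (x, y) position on the floor plane and tag it with its temperature. The identity matrix used for H is a placeholder for a real calibration, and the helper names are assumptions rather than part of the embodiment.

```python
# Illustrative sketch only: map a pixel (u, v) to floor-plane coordinates (x, y)
# via an assumed calibration homography H, then associate each pixel's
# temperature with that position.
import numpy as np

H = np.eye(3)  # placeholder calibration matrix (assumption)


def pixel_to_plane(u: float, v: float) -> tuple[float, float]:
    """Map a pixel coordinate to (x, y) on the z = 0 plane via the homography H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w


def overlay_coordinates(temps: np.ndarray) -> list[tuple[float, float, float]]:
    """Associate every pixel's temperature reading with an (x, y) position."""
    points = []
    for v in range(temps.shape[0]):
        for u in range(temps.shape[1]):
            x, y = pixel_to_plane(u, v)
            points.append((x, y, float(temps[v, u])))
    return points
```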
In step S306, the processing module 114 calls a thermal filtering module 116 and provides to the thermal filtering module 116 the temperature associated with the area of the room 100 corresponding to each pixel of the thermal imaging camera 104, i.e. the temperature values measured by the thermal imaging camera and then associated with each x and y coordinate. This is repeated for each extracted frame.
In step S308, pixels which correspond to a temperature measurement higher than 39 degrees Celsius are marked as corresponding to objects which are too hot and not likely to be a human being. The threshold value of 39 degrees can be adjusted if necessary, for example where fall detection of objects other than humans is of interest.
The thermal filtering module 116 then feeds back to the processing module 114 that those pixels are to be marked as too hot to correspond to a human being. This is step S310. For example, if the television 106 is switched on then it is likely to be hotter than 39 degrees Celsius and the pixels corresponding to that area are then marked by the thermal filtering module 116 as corresponding to an object which is too hot to be a human being. Optionally, the temperature measurements analysed in step S308 may be averaged over a time period (e.g. 10 seconds) before the pixels are marked as corresponding to objects which are too hot. This can be achieved by storing the temperature measurements for each extracted frame so that the mean temperature for each pixel can be calculated from the stored temperature associated with that pixel (and subsequently each coordinate in the image plane of the thermal imaging camera 104) as it changes over time, i.e. a mean average temperature measurement for each pixel can be taken over a number of extracted frames. The number of extracted frames can be configured according to the user requirements. As the frame rates of thermal imaging cameras increase, the number of frames which are processed can be increased, although the number of frames which are processed can also be optimised to a specific constant number. In other embodiments, the thermal filtering module 116 may be configured to filter out pixels which exhibit a temperature measurement which is too hot to be emitted by a specific object other than a human being. For example, if the monitoring of the room is for another purpose, such as the behaviour of an industrial object such as a large fridge, then the thermal filtering module 116 may be configured to filter out pixels showing a temperature measurement which is evidently too hot to correspond to a large fridge.
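As a hedged illustration of this "too hot" filter, the sketch below averages the per-pixel temperatures over a window of extracted frames and marks anything above the 39 degrees Celsius bound; the array shapes and window length are assumptions.

```python
# Sketch of the thermal filter: average per-pixel temperatures over a window of
# extracted frames, then flag pixels whose mean exceeds the upper human-body
# bound so the processing module can ignore them.
import numpy as np


def too_hot_mask(frame_window: list[np.ndarray], upper_limit_c: float = 39.0) -> np.ndarray:
    """Return a boolean mask of pixels whose mean temperature exceeds the limit."""
    mean_temps = np.mean(np.stack(frame_window), axis=0)  # average over the window
    return mean_temps > upper_limit_c                      # True = too hot to be human
```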
In step S312, the processing module 114 calls a static object filtering module 118. The static object filtering module 118 receives the same data passed to the thermal filtering module 116 in step S306, i.e. the extracted frames and their associated temperature measurements. In step S314, the static object filtering module 118 determines the velocity of all of the objects in the room 100 using the temperature values associated with each pixel.
The velocity is determined by measuring, over a pre-defined time period (for example 5 seconds), the change in the position of the objects in the image by measuring the change in position of the temperature values associated with the pixels corresponding to the objects. It is highly unlikely that two objects in the same room will record the same temperature value. Even two objects which are made in the same way, such as two identical chairs, will likely generate distinct temperature values as they are located in different places and their temperature is impacted by different factors. One chair may have recently been sat on, for instance, and one chair may be positioned adjacent a window. An animal or human may transfer heat to a chair, and the cold air from a window may transfer heat away from a chair. The presence of a pressure cooker 112 on the table may also cause heat to be transferred to each of the chairs, adding a further source of temperature change over time.
For example, the region of the image occupied by the television 106 has a specific heat signature (T1) which is shown by the data (as it is captured by the thermal imaging camera 104). Over the period of time, i.e. 5 seconds, the change in position of that heat signature is determined using the coordinate system (x,y). This can be determined using the Euclidean approach, i.e. the velocity vector. That is to say, at the start of the 5 second period the heat signature T1 is associated with coordinates (x0,y0,0) and at the end of the 5 second period the heat signature T1 is associated with coordinates (x1,y1,0). That is to say, the object recording heat signature T1 may, in a first frame, be at coordinates (x0,y0,0) and, in a second frame, be at coordinates (x1,y1,0). The static object filtering module 118 uses this information to measure the velocity of the television 106 between the two sets of coordinates using the equation below: V(T1) = √((y1 − y0)² + (x1 − x0)²), where (x0,y0,0) is the coordinate of heat signature T1 at a first instance in time and (x1,y1,0) is the coordinate of heat signature T1 at a second instance in time, i.e. V(T1) represents the velocity over the time between the first and second frames.
The likelihood of a false positive is reduced as two objects are not likely to return exactly the same heat signature, so velocity can be measured based on the movement of the heat signature around the image plane defined by the coordinates x and y. Alternatively, the movement of objects can be based on average velocity over the time between the first and second frames. This would be determined by dividing V(T1) by the elapsed time between the two frames.
As the television 106 is static, the calculation of V(T1) would be expected to return a value of zero. The same rule is used to measure the change in heat signatures across the room and those heat signatures which do not change their position (i.e. where V(T1) = 0) are designated by the static object filtering module 118 as corresponding to objects which are static. This enables the static object filtering module 118 to feed back to the processing module 114 the regions of the received image which correspond to static objects. This is step S316. Those regions can then be removed from the further analysis. Alternatively or additionally, the static object filtering module 118 may be configured to perform object recognition on the data using a trained neural network which is trained to infer the presence of objects in the room. The neural network may be trained with specific objects such as a chair, a table, a television and other objects one would generally expect to find in a room. The identification of these objects using the trained neural network can be used to determine whether those objects are moving or are, indeed, static. The objects determined, using the neural network, to be static can then be associated with coordinates in the overlaid coordinate plane and be marked as regions which correspond to static objects.
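The static-object filter described above might, for example, be sketched as follows, assuming each heat signature has already been reduced to an (x, y) position per extracted frame; the signature identifiers and the tolerance value are illustrative assumptions only.

```python
# Sketch of the static-object filter: compute V(T1) for each heat signature
# between two extracted frames and keep only those signatures whose position
# actually changed. Signatures with (near-)zero displacement are treated as
# static and can be removed from further analysis.
import math


def moving_signatures(first: dict[str, tuple[float, float]],
                      second: dict[str, tuple[float, float]],
                      tolerance: float = 1e-6) -> set[str]:
    """Return the heat signatures whose position changed between the two frames."""
    moving = set()
    for signature, (x0, y0) in first.items():
        x1, y1 = second.get(signature, (x0, y0))
        v = math.sqrt((y1 - y0) ** 2 + (x1 - x0) ** 2)  # V(T1) from the equation above
        if v > tolerance:
            moving.add(signature)
    return moving
```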
The filtering of objects which are static and which are too hot to be human can be interchanged in that the filtering of static objects using the static object filtering module 118 can be implemented prior to the filtering of hot objects using the thermal filtering module 116.
The processing module 114 then receives the filtered data from the respective thermal filtering module 116 and the static object filtering module 118 in a step S318. That is to say, the processing module 114 receives data which attributes certain pixels in the received image as containing hot objects and/or static objects based on their heat signature.
The processing module 114 then traverses the data to identify regions of the image which are too cold to correspond to a human being, i.e. pixels which return a heat signature of less than 36 degrees Celsius. This is step S320. This can also be performed by the thermal filtering module 116 prior to step S318 at the same time the objects which are too hot to be human are identified. Again, the measurement of temperature of the regions of the image can be averaged over a pre-defined number of extracted frames. Again, if the monitoring is of an object which is not a human being, then the processing module 114 may be configured to traverse the data to identify regions of the image which are too cold to correspond to that object. This can be guided by industrial knowledge related to that object.
Following step S320, the received image data has, in effect, been filtered to remove contributions from objects which are static, and which are too hot or too cold to correspond to a human body. That is to say, the pixels in the received video feed which correspond to objects which are not likely to correspond to a human being (because they are too hot or too cold) or likely to be static are marked as such by the processing module 114. This means they can be effectively discarded by the processing module 114 in the remaining steps.
The remaining pixels can then be used to identify a human being in the room 100. This can be implemented by applying a standard technique of image segmentation to the filtered image data. Such a standard technique may be multi-band thresholding although any technique which enables a human shape to be segmented in a set of image data can be used. This enables the human being 102 to be delineated relative to the remainder of the image data. The pixels containing the human being 102 and the associated temperature values are then stored by the processing module 114. In the embodiment where the monitored object is not a human being, the segmentation is configured in accordance with the shape of the monitored object.
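One possible sketch of this segmentation step, using simple range thresholding and connected-component labelling rather than any particular multi-band technique, is shown below; the use of scipy and the helper names are assumptions and not part of the embodiment.

```python
# Sketch: delineate the remaining warm, moving region as a person by
# thresholding the filtered temperature map to the human range and keeping the
# largest connected component.
import numpy as np
from scipy import ndimage


def delineate_person(temps: np.ndarray,
                     valid: np.ndarray,
                     low_c: float = 36.0,
                     high_c: float = 39.0) -> np.ndarray:
    """Return a boolean mask of the largest human-temperature region.

    `valid` marks pixels that survived the static / too-hot / too-cold filters.
    """
    candidate = valid & (temps >= low_c) & (temps <= high_c)
    labels, count = ndimage.label(candidate)
    if count == 0:
        return np.zeros_like(candidate, dtype=bool)
    sizes = ndimage.sum(candidate, labels, range(1, count + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```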
The identification of a human being is shown clearly in Figure 1 wherein, in the left-most image, the human being 102 can be clearly delineated on the received image data in a step S322, as the remaining pixels are the only pixels which are likely to contain a human being 102: a human being standing in the room 100 is unlikely to have remained static for the previous 5 seconds (the time period can be adjusted) and is also likely to have an overall body temperature between 36 degrees and 39 degrees.
The processing module 114 then calls a fall detection module 120 and transmits the filtered image data to the fall detection module 120 in a step S324. That is to say, the processing module 114 transmits the image data corresponding to the human being 102 in the room 100. The data contains no contributions from static objects or objects which are clearly not human based on their temperature. The fall detection module 120 receives from the processing module 114 the temperature value associated with the pixels containing the human.
The fall detection module 120 uses the data to identify the position of the head of the human being 102 by determining the coordinates corresponding to the pixels returning the highest temperature reading. This determination is performed based on the most recently received extracted frame or it can be based on an average reading over a plurality of the most recently received frames. This is because the head is considered to be the hottest part of the human body and therefore the part of the human body which will return the highest heat signature measurement. The fall detection module 120 measures the movement of the head of the human being in the room 100 by determining the velocity of the heat signature of the head. The fall detection module 120 can also be configured to measure movement based on an average temperature value which may not correspond to the temperature of the head. The velocity of the heat signature of the head can be determined by calculating the Euclidean distance between the heat signature of the head at two different times. This will be described in more detail below. Identifying key parts of other objects which can be used to determine a fall can also be programmed into the fall detection module 120. If the monitored object is an industrial object rather than a human being, industrial intelligence can be used to pick out the part of that object which is the warmest to enable its movement to be tracked.
This is implemented by tracing the movement of the head by determining which part of the received image stream contains the hottest heat signature (which is likely to be the position of the head) in a step S326. As described above, this is implemented by determining which of the x and y coordinates of the image plane of the thermal imaging camera 104 contains the highest temperature reading. This is determined by traversing the temperature readings corresponding to the pixels containing the human being (as delineated in step S322) and then obtaining the x and y coordinates corresponding to the pixels with the highest temperature reading as that is highly likely to correspond to the head of the human being 102 in the room 100.
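A minimal sketch of this head-localisation step is given below; the returned pixel coordinate would then be mapped to the (x, y) plane using the calibration described for step S304. The function and argument names are illustrative assumptions.

```python
# Sketch: locate the head as the hottest pixel inside the person mask produced
# by the segmentation step, on the basis that the head returns the highest
# heat-signature measurement.
import numpy as np


def head_pixel(temps: np.ndarray, person_mask: np.ndarray) -> tuple[int, int]:
    """Return the (row, column) of the hottest pixel belonging to the person."""
    masked = np.where(person_mask, temps, -np.inf)
    return np.unravel_index(np.argmax(masked), masked.shape)
```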
This means that over a defined time period the velocity of the head can be traced to determine the instance of a fall event. This is illustrated in Figure 2 using a first frame at time t2 and a second frame at time t3.
That is to say, at time t2 the position of the head is (x2,y2,0) and at time t3 the position of the head is (x3,y3,0). This is based on the assumption that the individual is standing in the plane z=0, as the (x,y) coordinate plane is overlaid onto the received frames. Optionally, the position of the head at time t3 may be estimated by a reiteration of steps S300 to S326 to establish the position of the head. The continuance of steps S300 to S326 allows the movement of the head to be traced as representative of the movement of the human being 102.
The velocity of the movement of the head in the time elapsed between time t2 and t3 can be estimated using the relative position change between those times. That is to say, the velocity of the movement of the head between the times t2 and t3 can be estimated (in a step S328) using this equation.
V(Head) = √((y3 − y2)² + (x3 − x2)²), where (x2,y2,0) is the coordinate of the head's heat signature at a first instance in time (i.e. t2) and (x3,y3,0) is the coordinate of the head's heat signature at a second instance in time (i.e. t3), i.e. V(Head) represents the velocity over the time between the first and second frames.
If z is not equal to zero, then the difference between the respective z values will also be included in the calculation of V(Head) in accordance with the standard approach to measuring the magnitude of a three dimensional velocity vector.
The fall event is depicted in Figure 1 and in Figure 2. The change in position of the head in Figure 2 shows that the fall event has taken place. If the velocity (V(Head)) exceeds a predefined threshold then this indicates a fall has likely taken place. This is step S330. Alternatively, the average velocity could also be used as the basis for determining whether a fall has taken place by dividing V(Head) by the time elapsed between the times t2 and t3.
Optionally, the fall detection module 120, on measuring a velocity exceeding a threshold indicating a high likelihood of a fall, may be configured to perform further analysis which determines from the respective position values whether a fall has taken place by comparing the values of x and y at the respective instances in time. That is to say, if the difference between the respective measurements of the y values or the x values is zero then this may indicate a fall has not actually taken place and that it is actually just a movement of the human being in the room. Specifically, if the difference between the x values is zero then this indicates only movement in the y axis. This is likely to be indicative of the individual sitting down. If the difference between the y values is zero, then this is likely to be indicative of the individual walking around the room. Equally, in the situation where z is non-zero, the difference in the z values may also be determined. That is to say, the fall detection module 120 may be configured to use the respective x and y values to determine whether the change in position is in at least two dimensions, as this will be more likely to be indicative of a fall event rather than a false positive indication of a fall. However, it is possible that a fall may still have taken place even if the difference between the x values is zero or the difference between the y values is zero, i.e. even if a measurement has not registered in a second dimension.
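A hedged sketch of this decision logic, combining the velocity threshold of step S330 with the optional two-axis check, might look as follows; the threshold value and parameter names are assumptions.

```python
# Sketch of the fall decision: the head velocity must exceed the threshold and,
# optionally, the displacement must register in both axes so that purely
# horizontal walking or purely vertical sitting down is not flagged as a fall.
import math


def is_fall(p_start: tuple[float, float],
            p_end: tuple[float, float],
            velocity_threshold: float,
            require_two_axes: bool = True) -> bool:
    """Return True if the head movement between two frames looks like a fall."""
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    v_head = math.sqrt(dx ** 2 + dy ** 2)
    if v_head <= velocity_threshold:
        return False
    if require_two_axes and (dx == 0 or dy == 0):
        return False  # movement in only one axis is more likely sitting/walking
    return True
```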
If the calculations in step S330 determine that a fall event is likely to have happened, i.e. if the threshold velocity has been exceeded based on the head movement, then an alarm is generated in a step S332 and transmitted to the processing module 114.
In another embodiment, the imaging device may be a time-of-flight camera 404 as described with reference to Figure 4, Figure 5a, Figure 5b and Figure 6. The time-of-flight camera 404 may again be mounted on a surface of the room 100.
The time-of-flight camera 404 is coupled to the processing resource 430 via imaging interface 432 using any suitable telecommunications means. The time-of-flight camera 404 is configured to capture a video feed of the room 100 as a plurality of frames, which may be described as a plurality of image frames. The time-of-flight camera 404 is configured to determine a light reading for each pixel of the camera, as described below, in that the infra-red light reflected from each of the objects in the room is received by the infra-red sensor on the time-of-flight camera 404. As also described below, the time-of-flight camera 404 is configured to transmit data to the processing module 414 as part of the processing resource 430 via an imaging device interface 432. The time-of-flight camera 404 may be part of a single imaging device which also includes the processing resource 430. That is to say, the time-of-flight camera 404 and the processing resource 430 may be self-contained as part of the same imaging device which may be mounted on the surface of the room 100 as illustrated in Figure 4.
The processing module 414 is configured to transmit to and receive data from a head recognition module 416, a static object filtering module 418 and a fall detection module 420 using any suitable telecommunications means.
The communication between the time-of-flight camera 404 and the processing resource 430 may be implemented using any suitable telecommunication means including, for example, local area network, wide area network, world-wide web or data bus. Similarly, the communication between the processing module 414 and the other modules in the processing resource 430 may be implemented using any suitable data transmission means. Each of the modules is configured to access storage to retrieve and store data, call suitable data processing routines and provide input to those routines.
The time-of-flight camera 404 is configured to emit infra-red light into the room 100 and detect the reflected light on an infra-red sensor, which can then be processed as described below to determine a fall event. This is then used in the steps we will now describe in relation to Figure 5a and 5b.
In a step S500, the time-of-flight camera 404 captures a video feed from the room 100 as a series of frames at a suitable frame rate. The time-of-flight camera 404 comprises an infra-red sensor which measures infra-red light received from the room 100.
The time-of-flight camera 404 transmits the received data corresponding to the infra-red sensor readings to a processing module 414 in a step S502 on a frame by frame basis. The processing module 414 extracts frames from the received frames for further processing. The frequency of extraction may be every fifth frame, say, or even every tenth frame. The frames which are not used for further processing can be discarded or they can be stored for further use elsewhere. A further use of the frames which are not extracted may be further investigation of incidents if it becomes necessary. In extracting frames for further processing, processing modules with a lower processing capacity can be used.
Unlike the first embodiment, we use 3D information in this embodiment to determine whether a fall has taken place. That is to say, a three-dimensional coordinate plane can be overlaid onto the captured image data and the three-dimensional coordinates associated with infra-red readings from the room monitored by the time-of-flight camera 404. The infra-red readings received from the time-of-flight camera 404 provide depth measurements which enable us to image a monitored room without the need to map the information to a two-dimensional plane. The infra-red readings for the extracted frames are then transmitted to the static object filtering module 418. This is step S504. That is to say, the processing module 414 associates coordinates corresponding to locations in the room with an infra-red reading.
In step S506, the processing module 414 calls a static object filtering module 418. In step S508, the static object filtering module 418 determines the velocity of all of the objects in the room 100.
The static object filtering module 418 determines the velocity of the objects by determining the change in position of the infra-red reading in terms of x, y and z coordinates. All objects in the room 100 which are depth-mapped by the time-of-flight camera 404 are likely to generate a unique infra-red reading as they are all slightly different. The infra-red readings are attributed to x, y and z coordinates on a frame-by-frame basis. This means that the velocity of an infra-red reading R1 can be calculated by determining its x, y and z coordinates at a first instance in time (t1), i.e. (x0,y0,z0), then determining its x, y and z coordinates at a second instance in time (t2), i.e. (x1,y1,z1), and then using the Euclidean distance method described above to determine the velocity of the infra-red reading R1 using the formula below.
V(R1) = √((z1 − z0)² + (y1 − y0)² + (x1 − x0)²). This will approximate the velocity of the object generating the reading R1 and therefore provide the approximate velocity of objects providing that reading. Alternatively, average velocity could also be used as the basis for this calculation.
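For illustration only, the three-dimensional form of this calculation might be sketched as below; the tuple layout is an assumption.

```python
# Sketch: the three-dimensional displacement of a reading R1 between two
# extracted frames, used with the time-of-flight data as V(R1).
import math


def velocity_3d(p0: tuple[float, float, float],
                p1: tuple[float, float, float]) -> float:
    """V(R1): Euclidean displacement of a reading between two extracted frames."""
    return math.sqrt((p1[2] - p0[2]) ** 2 +
                     (p1[1] - p0[1]) ** 2 +
                     (p1[0] - p0[0]) ** 2)
```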
As the television 106 is static, the calculation of V(R1) (if R1 were the reading for the coordinates corresponding to the television 106) would be expected to return a value of zero. The same rule is used to measure the change in infra-red readings across the room and those pixels which do not change their reading are designated by the static object filtering module 418 as containing objects which are static. This enables the static object filtering module to feed back to the processing module the regions of the received image which correspond to static objects. This is step S510. Those regions can then be removed from the further analysis.
The processing module 414 then receives the filtered data from the respective static object filtering module 418 in a step S512. That is to say, the processing module 414 receives data which attributes certain pixels in the received image as containing static objects based on their infra-red readings as received from the time of flight camera 404.
The processing module 414 then calls a head recognition module 416 and provides to the head recognition module 416 the infra-red readings as filtered by the static object filtering module 418. This is step S514. The head recognition module 416 is configured to determine from the data provided in step S514 the presence of a human head, i.e. the head recognition module 416 is configured to determine whether the data contains anything which satisfies a human-body criterion, i.e. whether the data contains a human head. The head recognition module 416 scans the image data to see if any infra-red readings have been received which would be considered to be in the long infra-red part of the infra-red spectrum. This is considered to be an effective way of identifying a face and is therefore likely to be a suitable method of identifying a human head. This is step S516. Alternatively, the elliptical Hough transform may be applied to the received infra-red readings to identify a head, as a face can be approximated to be elliptically shaped. This enables the pixels corresponding to the head of the human being 102 to be determined. The pixels corresponding to the head of the human being 102 can then be associated with corresponding positions (i.e. corresponding x, y and z coordinates) in the image plane of the time-of-flight camera 404. This is step S518. The head recognition module 416 generates a data set which identifies where the head is likely to be in the received video feed based on the x, y and z coordinates identified in step S518. This is step S520. The data set generated in step S520 is then transmitted to the processing module 414.
That is to say, the infra-red readings received by the time of flight camera 404 can be processed to determine the light signature indicative of an object corresponding to a human head. This is because a human head has a specific light signature which provides an infrared reading in the long infra-red part of the infra-red spectrum. This is a human body criterion as the human head provides a light signature in that specific part of the infra-red spectrum whilst other objects in the room are not likely to provide the same signature.
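A very loose sketch of applying such a criterion is shown below: readings are kept only if they fall inside an assumed "head-like" band. The band limits are placeholders for values that would come from characterisation of the camera's long-wave infra-red response, and are not taken from this disclosure.

```python
# Illustrative sketch only: keep the readings whose signature falls inside an
# assumed band consistent with the human-head criterion described above.
import numpy as np


def head_candidate_mask(readings: np.ndarray,
                        band_low: float,
                        band_high: float) -> np.ndarray:
    """Boolean mask of readings consistent with the assumed head criterion."""
    return (readings >= band_low) & (readings <= band_high)
```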
The processing module 414 then calls a fall detection module 420 and transmits the data generated in step S520 to the fall detection module 420 in a step S522. This is performed on a frame-by-frame basis which means that, for each received extracted frame, the position of the head of the human being 102 in the room 100 is provided to the fall detection module 420 as it is processed by the respective head recognition module 416. As illustrated in Figure 2, this enables the fall detection module 420 to determine that the human being 102 has fallen over.
That is to say, at time t2, i.e. at a first frame, the position of the head is determined in step S518 to be at position (x2,y2,z2) and at time t3, i.e. at a second frame, the position of the head is determined in step S518 to be at position (x3,y3, z3).
The velocity of the movement of the head in the time elapsed between time t2 and t3 can be estimated using the relative position change between those times. That is to say, the velocity of the movement of the head between the times t2 and t3 can be estimated (in a step S524) using this equation.
V(Head) = √((z3 − z2)² + (y3 − y2)² + (x3 − x2)²). The fall event is depicted in Figure 1 and in Figure 2. The change in position of the head in Figure 2 shows that the fall event has taken place. If the velocity (V(Head)) exceeds a pre-defined threshold then this indicates a fall has likely taken place. This is step S526. Again, as in the thermal imaging example, average velocity can also be used.
That is to say, the velocity of the movement of the head can be estimated based on its position and used to infer that a fall event has taken place.
Optionally, the fall detection module 420 can be configured to determine from the respective position values whether a fall has taken place by comparing the values of x and y at the respective instances in time. That is to say, if the difference between the respective y values or the x values is zero then this may indicate a fall has not actually taken place and that it is actually just a movement of the human being in the room. Specifically, if the difference between the x values is zero then this may indicate only movement in the y axis. This is likely to be indicative of the individual sitting down. If the difference between the y values is zero then this is likely to be indicative of the individual walking around the room. That is to say, the fall detection module 420 may be configured to use the respective x and y values to determine whether the change in position is in at least two dimensions, as this will be more likely to be indicative of a fall event rather than a false positive indication of a fall. However, it is possible that a fall may still have taken place even if the difference between the x values is zero or the difference between the y values is zero, i.e. even if a measurement has not registered in a second or third dimension.
If the calculations in step S526 determine that a fall event is likely to have happened, then an alarm is generated in a step S528 and transmitted to the processing module 414.
That is to say, in both embodiments the data received from an imaging device is analysed to remove static objects and is then processed to determine the presence of a human being. The position of the human head can then be established and then used to determine the likelihood of a fall. If a fall is determined to have taken place, an alarm is generated. The processing resources can be configured to include metadata in the alarm signal to provide an identifier for the room (a number for instance) and perhaps an identifier for the building the room is positioned in. We now describe, with reference to Figure 7 and Figure 8, how the generation of an alarm indicating a fall can lead to further action.
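As an illustration of the kind of metadata the alarm signal might carry (the field names here are assumptions, not drawn from the specification):

```python
from dataclasses import dataclass

@dataclass
class FallAlarm:
    """Alarm signal generated in step S528, carrying metadata that
    identifies where the fall event took place."""
    room_id: str        # e.g. a room number
    building_id: str    # identifier for the building containing the room
    velocity: float     # the estimated V(Head) that triggered the alarm

alarm = FallAlarm(room_id="100", building_id="building-A", velocity=2.1)
```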
Either or both of the processing resources, i.e. processing resource 130 or processing resource 430, are configured to interact with a telecommunications module 700 via an alarm interface 702: when the alarm indicating a fall is generated, the alarm signal is transmitted from the respective processing resource to the telecommunications module 700. The interaction may be implemented using any suitable telecommunications means such as a local area network, a wide area network, the internet, Bluetooth Low Energy or Near-Field Communication.
On receiving the alarm signal at alarm interface 702 in a step S800, the telecommunications module 700 utilises an alarm signal processing module 704 to extract the metadata from the signal and uses the metadata to identify the room and the building which are the source of the alarm event. This is step S802.
The metadata is transmitted to alarm transmission module 706 in a step S804. Responsive to receiving the metadata, the alarm transmission module 706 is configured to initiate communication between the room 100 where the fall incident has taken place and an external individual such as a carer or a relative. This is step S806. This is implemented by issuing a request to the telecommunications call module 708, which initiates a duplex communication channel between the external individual and a device which may be positioned in the room. For example, the telecommunications call module 708 may initialise a VoIP call. Such a device may be a smart speaker such as an Amazon® Echo Dot device positioned in the room or a mobile telephone in the possession of the human being 102 in question. Such a device may alternatively be a pager or mobile telephone in the possession of a carer. In the event it is a pager, the telecommunications call module 708 sends a message providing information such as "FALL in ROOM No." with an indication of the room. The carer can then go to the room to aid the person who has fallen over. The telecommunications call module 708 may be configured to access application programming interfaces to enable interaction with other services such as Microsoft Teams or Zoom, so that communications can be initiated with an individual using those applications. The telecommunications call module 708 may also be configured to send an email to an individual containing metadata related to the fall event, i.e. the room number and the fallen individual's name.
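A sketch of how the routing in steps S804 to S806 might be organised, reusing the illustrative FallAlarm fields above; the three callables are hypothetical stand-ins for whatever pager, email and call interfaces the telecommunications call module 708 actually exposes:

```python
def dispatch_fall_alarm(alarm, send_pager=None, send_email=None, start_call=None):
    """Route a fall alarm to a carer over whichever channels are configured."""
    if send_pager is not None:
        # Pager route: a short text message identifying the room.
        send_pager(f"FALL in ROOM No. {alarm.room_id}")
    if send_email is not None:
        # Email route: metadata related to the fall event in the body.
        send_email(subject=f"Fall detected in room {alarm.room_id}",
                   body=f"Building {alarm.building_id}: a fall event was detected.")
    if start_call is not None:
        # Call route: e.g. a duplex VoIP call to a smart speaker in the room.
        start_call(room=alarm.room_id)
```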
The telecommunications module 700 may also interact with a camera positioned in the room to provide a carer with a video feed of the interior of the room during the event. However, this may be restricted to the most serious of circumstances, as the privacy of the individual may otherwise be compromised.
It should be noted that the above-mentioned aspects and embodiments illustrate rather than limit the disclosure, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the disclosure as defined by the appended claims. In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words "comprising" and "comprises", and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. In the present specification, "comprises" means "includes or consists of" and "comprising" means "including or consisting of". The singular reference of an element does not exclude the plural reference of such elements and vice-versa. The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Claims (16)
Claims
- 1. A computer-implemented method of generating an alert if an individual has fallen over in a monitored area, the method implemented by a processing resource, the method comprising: receiving data representative of a video feed of a monitored area; processing the data to filter out static objects in the monitored area; processing the data to identify a representation of an individual in the monitored area by processing the data to filter out objects which do not satisfy a human body criterion; tracing the movement of the individual in the monitored area by measuring the velocity of the representation of the individual; wherein the method further comprises generating an alarm indicating the individual has fallen over if the velocity of the representation of the individual in the monitored area has exceeded a pre-determined velocity threshold.
- 2. A method according to Claim 1, wherein the processing resource is configured to receive the data representative of the video feed from a thermal imaging device configured to determine the heat signature of the objects in the monitored area, wherein the delineation of the representation of the individual comprises filtering out objects which have a heat signature which is not within a pre-determined human temperature range.
- 3. A method according to Claim 2, wherein the pre-determined human temperature range is between 36 degrees Celsius and 39 degrees Celsius.
- 4. A method according to any of Claims 2 to 3 wherein the method further comprises identifying the monitored part of the individual by processing the data received from the thermal imaging device to determine the part of the individual with the highest temperature and tracing the movement of the individual in the monitored area by measuring the velocity of the part of the individual which is determined to have the highest temperature.
- 5. A method according to any preceding claim wherein processing the data to filter out static objects in the monitored area comprises measuring the velocity of the objects in the area to determine which objects have not moved over a pre-determined time period and removing those objects from the data.
- 6. A method according to any preceding claim wherein the alarm is generated if the measurement of the velocity indicates the movement is in a combination of horizontal and vertical movements.
- 7. A method according to any preceding claim wherein the alarm is not generated if the measurement of the velocity indicates the movement is solely horizontal or solely vertical.
- 8. A method according to any of Claims 2 to 4 wherein the method further comprises: generating an alarm if the thermal imaging device determines the area has a temperature above or below a pre-determined temperature threshold.
- 9. A method according to any preceding claim, wherein the processing resource is coupled to a time-of-flight imaging device, wherein the time-of-flight imaging device is configured to determine a light signature of the objects in the monitored area, wherein processing the data to identify the representation of the individual comprises filtering out objects which do not provide a light signature indicative of a human head.
- 10. A method according to Claim 9, wherein measuring the velocity of the individual in the monitored area comprises measuring the velocity of the light signature of the human head.
- 11. A method according to any preceding claim wherein, responsive to the generation of the alarm, a duplex communication channel is initialised with the room to enable a conversation to take place with the individual who has fallen over.
- 12. A method according to any preceding claim wherein, responsive to the generation of the alarm, an alert is transmitted to an electronic device comprising data indicating the occurrence of the fall.
- 13. A method according to any preceding claim wherein, responsive to the generation of the alarm, the data representative of the video feed is transmitted to a computing device comprising a display, wherein the video feed is displayed on the display.
- 14. A processing resource configured to implement the method of any of Claims 1 to 13.
- 15. A processing resource according to Claim 14 further comprising an imaging device.
- 16. A computer program product which, when installed on a processing device, configures the processing device to implement the method of any of Claims 1 to 13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2105085.1A GB2605647A (en) | 2021-04-09 | 2021-04-09 | Method and device |
PCT/GB2022/050864 WO2022214809A1 (en) | 2021-04-09 | 2022-04-06 | Velocity of an individual detected in a video feed provided by a camera serves as criterion for detecting a fall of the individual. |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2105085.1A GB2605647A (en) | 2021-04-09 | 2021-04-09 | Method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202105085D0 GB202105085D0 (en) | 2021-05-26 |
GB2605647A true GB2605647A (en) | 2022-10-12 |
Family
ID=75949554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2105085.1A Pending GB2605647A (en) | 2021-04-09 | 2021-04-09 | Method and device |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2605647A (en) |
WO (1) | WO2022214809A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116392110A (en) * | 2023-04-12 | 2023-07-07 | 上海松椿果健康科技有限公司 | Fall monitoring system for invoking millimeter wave radar by 4G module |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001063576A2 (en) * | 2000-02-23 | 2001-08-30 | The Victoria University Of Manchester | Monitoring system |
WO2010055205A1 (en) * | 2008-11-11 | 2010-05-20 | Reijo Kortesalmi | Method, system and computer program for monitoring a person |
EP2763116A1 (en) * | 2013-02-01 | 2014-08-06 | FamilyEye BVBA | Fall detection system and method for detecting a fall of a monitored person |
US20150139504A1 (en) * | 2013-11-19 | 2015-05-21 | Renesas Electronics Corporation | Detecting apparatus, detecting system, and detecting method |
US20180137734A1 (en) * | 2016-11-13 | 2018-05-17 | Agility4Life | Biomechanical Parameter Determination For Emergency Alerting And Health Assessment |
US20190114895A1 (en) * | 2016-01-22 | 2019-04-18 | Suzhou Wanghu Real Estate Development Co., Ltd. | Body fall smart control system and method therefor |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6717235B2 (en) * | 2017-03-02 | 2020-07-01 | オムロン株式会社 | Monitoring support system and control method thereof |
US20200258364A1 (en) * | 2019-02-07 | 2020-08-13 | Osram Gmbh | Human Activity Detection Using Thermal Data and Time-of-Flight Sensor Data |
Also Published As
Publication number | Publication date |
---|---|
WO2022214809A1 (en) | 2022-10-13 |
GB202105085D0 (en) | 2021-05-26 |