AU2020102094A4 - GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF INDIVIDUAL MULTISENSORY IoT DATA - Google Patents
- Publication number
- AU2020102094A4
- Authority
- AU
- Australia
- Prior art keywords
- fusion
- data
- individual
- sensor
- activities
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computational Linguistics (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF
INDIVIDUAL MULTISENSORY IoT DATA
Wearable and smartphone-embedded sensors have attracted researchers to human activity
identification. Multi-sensory data is increasingly used in healthcare research because it supports
monitoring human behavior, estimating energy expenditure, detecting posture, and related tasks.
Researchers have recently emphasized multi-sensor fusion methods in order to achieve good
performance and robustness and to overcome the difficulties inherent in single-sensor values.
Several fusion methods have been proposed in recent years to identify and monitor human
activities using multiple sensors, and Group Activity Recognition (GAR) in particular has
attracted wide interest. The main aim of this framework is to introduce a new multi-sensory
fusion of IoT data that improves recognition of individual and group activities and reduces the
misrecognition rate. The framework infers group activity by integrating the individual sensor
data of each member, and focuses on developing protocols that combine IoT data, features, and
multiple classification algorithms to enhance physical activity through the monitoring and
evaluation of human activities.
Drawings:

Figure 1: Proposed Group Activity Recognition Framework (flow diagram; recoverable labels: data collection, signal processing, filtering, segmentation, activity details)
Description
Field of Invention:
This invention addresses Group Activity Recognition (GAR) by integrating and fusing individual multi-sensory IoT data. The proposed framework infers group activity by integrating the individual sensor data of each member. The model focuses on integrating IoT data, features, and multiple classification algorithms to improve physical activity through the monitoring and evaluation of human activities.
Background of the invention:
Recent developments have produced an immense rise in sensor technology owing to low cost and wide product availability. The deployment and analysis of sensor data are common in smart homes, computer technology, security, care for older adults, workplaces, and sporting activities. Sensor data evaluated in a health framework are used to classify both simple and complex tasks, such as cycling, walking, specific housekeeping chores, or operating machinery in industry. Human activity recognition has been examined with various sensor types, including video, ambient, mobile, and wearable sensors. Video-based devices are sensitive to lighting variation and cannot always distinguish target from non-target details during data collection. Ambient sensors capture environmental information such as temperature, sound, and energy. Researchers track, classify, and monitor human movements with motion sensors to increase rates of physical activity, and similar procedures are followed in applications using handheld and wearable sensors, depending on the sensor modality. Biagetti et al. proposed a wireless architecture for data collection and the tracking of sports activities using accelerometer devices. Bhattacharjee et al. tested various machine learning algorithms, such as neural networks and support vector machines, to track everyday activities. Daily activities are evaluated with machine learning algorithms using data collected from the motion sensors
present in smartphone devices. Recent research studies employ various sensor modalities to collect further information for the monitoring of human activities.
Objects of the invention:
• The main aim of this study is to improve recognition of group activities and individual human activity using the fusion of multi-sensory IoT data.
• This study focuses on developing protocols to integrate IoT data, features, and multiple classification algorithms to enhance physical activities by monitoring and evaluating human activities.
• This study proposes the fusion of individual multi-sensor data to monitor human activities.
• Classification algorithms such as Decision Tree, Logistic Regression, and KNN are used in the dynamic group activity detection system.
• Features are extracted from each individual sensor and evaluated using a feature selection method.
• Over-sampling techniques are used to improve performance on imbalanced data.
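The over-sampling objective above can be sketched as plain random over-sampling; the patent does not name a specific technique, so this is one common choice, shown on synthetic, purely illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced activity data: 90 samples of one activity
# class and 10 of another (sizes chosen only for illustration).
X = rng.normal(0, 1, (100, 4))
y = np.array([0] * 90 + [1] * 10)

# Random over-sampling: duplicate minority-class samples (with
# replacement) until both classes have the same count.
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=90 - 10, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
print(np.bincount(y_bal))  # [90 90]
```

More sophisticated techniques (e.g. synthesizing new minority samples rather than duplicating) exist, but duplication already removes the class-count skew that biases a classifier toward the majority activity.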
Summary of the invention
IoT devices have improved dramatically, bringing new sensing power to consumers in only a few years. These sensing capabilities can be studied to learn more about users; for instance, a person's behaviors can be identified from device data. Device sensing may be paired with Internet-of-Things sensing of ordinary objects to improve movement detection. Recognition can also move from individuals to groups, known as Group Activity Recognition (GAR). GAR is meant to provide information about the group and often provides member details as well, as is done in individual Activity Recognition (AR): for example, determining whether an individual is present at a party. In recent years, a range of fusion approaches has been proposed to detect and track human behavior via several sensors. Fusion is classified into the data level, the feature level, and the decision level, the last often realized as a system of several classifiers. To evaluate group tasks through estimation and tracking, this form of approach incorporates multiple sensors, feature extraction processes, and algorithms. Researchers have been drawn to wearable and smartphone devices to detect
human behavior. The commonly utilized instruments include embedded sensors such as GPS, microphones, magnetometers, gyroscopes, and accelerometers for daily tracking of physiological signs and for indoor and pedestrian safety. These instruments capture numerous behaviors spanning physical and cyber-physical activity. Detailed monitoring of physical activity is enabled by the different sensor models integrated into wearable and smartphone applications. The accelerometer serves as the motion sensor, capturing movement and acceleration so that variations in movement are observed dynamically. The gyroscope sensor measures angular velocity and detects the patterns of related behaviors. The magnetometer provides orientation information, helping to compensate for the influence of gravity and device orientation on movement estimates. Multi-sensor fusion approaches have been suggested in earlier studies to increase the efficiency of human activity detection. Across the sensor models, fusion is applied to the raw sensor input, to extracted features, or to the decisions produced by classification algorithms. The goal of the fusion protocol is to use multiple sensor models to boost the sensitivity and efficiency of the human behavior detection system. Multi-sensor fusion minimizes the uncertain and indirect details that are very difficult to resolve with a single sensor model. Several machine-learning algorithms are integrated to make decisions, providing a better alternative to a single static classifier: a multiple classifier system. Multiple classifier methods address the dynamic problems of sensor data, improving the precision and robustness of the classification system. A sound classification scheme is therefore of considerable significance for identifying human behaviors and tracking health. The goal is to detect basic and complex everyday behaviors by using the multi-view stacking algorithm.
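Feature-level fusion, the middle of the three fusion levels named above, amounts to concatenating per-sensor feature vectors before classification. A minimal sketch follows; the per-sensor feature names and sizes are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical per-sensor feature vectors for one time window
# (contents and lengths are illustrative only).
acc_features = np.array([0.12, 0.98, 0.05])   # e.g. mean, std, energy of accelerometer
gyro_features = np.array([0.40, 0.22])        # e.g. mean, std of gyroscope
mag_features = np.array([0.31, 0.07])         # e.g. mean, std of magnetometer

# Feature-level fusion: concatenate the per-sensor vectors into one
# combined feature vector, which is then fed to a single classifier.
fused = np.concatenate([acc_features, gyro_features, mag_features])
print(fused.shape)  # (7,)
```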
The stacking approach is a multiple-classifier technique used to strengthen human activity identification through predicted values. Multi-view stacking combines data from multiple sources to maximize the reliability and robustness of the system. Each sensor has its own characteristics and capabilities; the primary aim of multi-view stacking is to make acceptable, versatile, and practical use of the features of each sensor model and base classifier for activity recognition. Multi-view stacking dramatically improves the detection of human behavior from wearable and smartphone sensing results. The method trains each view using a classification algorithm and combines the predicted values with the same kind of classifier, achieving effective performance. To build the activity monitoring system, decision tree, KNN,
and regression models are also used. Most data sets for understanding human behavior are imbalanced. This imbalance is a concern when evaluating human activities, and it sometimes worsens after the feature vectors are reduced in size for health surveillance. An over-sampling technique limits this problem in the sensor data and increases the efficiency of human behavior recognition. The approach is then used to test the algorithms and enhance the detection system's efficiency, evaluating the combined application of human behavior monitoring, multi-sensor fusion, and the multi-classification scheme.
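A minimal sketch of the multi-view stacking idea described above: one base classifier per sensor view, with a meta-classifier trained on the base classifiers' predicted probabilities. The two views and their data are synthetic assumptions; a real implementation would use out-of-fold predictions for the meta-features to avoid training leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic feature vectors for two hypothetical sensor "views".
n = 200
y = rng.integers(0, 2, n)                        # activity labels
view_acc = rng.normal(y[:, None], 1.0, (n, 4))   # accelerometer-view features
view_gyro = rng.normal(y[:, None], 1.5, (n, 3))  # gyroscope-view features

# One base classifier per view.
base_acc = DecisionTreeClassifier(max_depth=3).fit(view_acc, y)
base_gyro = KNeighborsClassifier(n_neighbors=5).fit(view_gyro, y)

# Meta-features: each base classifier's predicted class probability.
meta_features = np.column_stack([
    base_acc.predict_proba(view_acc)[:, 1],
    base_gyro.predict_proba(view_gyro)[:, 1],
])

# Meta-classifier combines the per-view decisions.
meta = LogisticRegression().fit(meta_features, y)
print(meta.score(meta_features, y))
```

The design point is that each view keeps its own classifier suited to that sensor's characteristics, and only the compact per-view predictions are fused.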
Detailed Description of the Invention:
From Figure 1, the proposed model applies multi-sensor fusion for human activity identification through several processing steps: data collection, signal processing, feature extraction, normalization, feature selection, and the classification of physical activities. The evaluation also covers single-sensor analysis, sensor fusion by feature concatenation, and a multi-view stacking method that combines the different sensor models before the fusion step. An over-sampling technique is used to balance the sensor data. To detect the activities of humans in a group, the datasets were collected with wearable devices; each dataset contains several sensor types, with data gathered from mobile and wearable sensors. Because the collected signals can be corrupted by signal degradation, signal processing is used to remove noise before feature extraction. Feature extraction is an important step in detecting human activities, as it transforms the raw signal into feature vectors; the features, extracted from the raw sensor data, fall broadly into time-domain and frequency-domain categories. The sensor data is then normalized to limit the features to a fixed range, which improves classifier performance. In single-sensor analysis, feature vectors extracted from one sensor modality are fed to a multi-classification algorithm for activity detection. The feature-level fusion stage concatenates the feature vectors extracted from each sensor modality before activity detection with a machine-learning algorithm. The multi-view stacking method integrates the decisions from the various classification algorithms and sensor modalities.
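The processing chain above (noise filtering, segmentation, time-domain feature extraction, normalization) can be sketched end to end. The filter, window length, and feature set are illustrative assumptions rather than parameters from the patent.

```python
import numpy as np

def moving_average(signal, k=5):
    """Simple low-pass filter to suppress sensor noise (illustrative choice)."""
    return np.convolve(signal, np.ones(k) / k, mode="same")

def segment(signal, window=50, step=25):
    """Split a 1-D signal into overlapping fixed-length windows."""
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def extract_features(windows):
    """Time-domain features per window: mean, std, min, max."""
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])

def normalize(features):
    """Min-max normalization so every feature lies in [0, 1]."""
    lo, hi = features.min(0), features.max(0)
    return (features - lo) / np.where(hi > lo, hi - lo, 1.0)

# Synthetic noisy periodic signal standing in for a raw sensor stream.
raw = np.sin(np.linspace(0, 20, 500)) \
    + np.random.default_rng(1).normal(0, 0.2, 500)
feats = normalize(extract_features(segment(moving_average(raw))))
print(feats.shape)  # (19, 4): 19 windows, 4 features each
```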
Human activity classification based on wearable and mobile sensors requires some processing of the sensor data before different machine learning algorithms can be applied. The raw sensor
data may be corrupted by noise and missing values caused by signal degradation or battery loss. The raw sensor data is converted to a time series, and noise is then removed by a filtering process, which is necessary to remove low-frequency artifacts before classifying human activities. To reduce computation time and to recognize activity details, segmentation divides the raw data into a series of segments. Feature extraction then reduces each signal segment to a feature vector. For the classification of human activities, different features are proposed for activity detection, broadly classified into time-domain and frequency-domain features. Time-domain features are extracted directly from the signal, with the advantage of low computational time. A classification algorithm is then used for the detection of human activities; algorithms such as decision tree, SVM, KNN, and logistic regression are used to improve detection performance. Data fusion and multiple classifier systems are effective mechanisms to improve the reliability and robustness of the human activity system. Multi-sensor development for human activity is achieved at three levels: data fusion, feature-level fusion, and multiple classifiers. The proposed approach is evaluated in three ways. 1. Evaluation based on a classification algorithm applied to the real feature vectors extracted from the different sensors, to assess each sensor model for the detection and monitoring of human activities. 2. Feature selection algorithms are used to reduce the feature vectors and to evaluate the effect of feature-level fusion and the multi-view stacking method.
The decision tree is a classification algorithm that divides the training data, or the features collected from the sensors, into segments. It is a non-parametric algorithm that requires no assumptions about the training features, and it models the non-linear relationship between feature vectors and activity classes; this type of algorithm is used extensively for human activity detection. The support vector machine is a classification algorithm based on statistical learning that uses a hyperplane to separate the training data, dividing the classes into different activities. K-nearest neighbor is a simple yet sufficient algorithm for the detection of human activities and performs well on pattern recognition problems; it can, however, struggle with extensive training features that are too large to fit into memory. Logistic regression is a fast, simple method widely used for human activity detection and health monitoring. It provides a model over the feature vectors in which the relationship between the training data and the activity labels is used to detect activities: to make a prediction, the input values are linearly combined with the weights or coefficient values. In recent years, this algorithm has also proved helpful in human activity classification.
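The four classifiers discussed above can be compared side by side. The data here is synthetic stand-in material with two well-separated activity classes; the patent's actual datasets are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic feature vectors standing in for windowed sensor features:
# two activity classes with shifted means (illustrative data only).
X = np.vstack([rng.normal(0.0, 1, (100, 6)), rng.normal(1.5, 1, (100, 6))])
y = np.repeat([0, 1], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

On real sensor data the ranking between these models varies with feature quality and class balance, which is one motivation for combining them in a multiple classifier system rather than committing to a single one.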
Claims (7)
1. GAR aims to provide information about the group as well as information about each individual, for example, to know whether a person is present in any of the groups.
2. The multi-sensor fusion can use the features of individual sensors and the characteristics of a multiple classifier system to improve recognition accuracy.
3. Efficient algorithms such as k-Nearest Neighbours, Decision Tree, and Logistic Regression are proposed for identifying human activities and for health monitoring. These algorithms are used as base classifiers and meta-classifiers.
4. The feature selection algorithm is evaluated to produce the feature vectors needed for an efficient framework of human activity identification.
5. The over-sampling technique is used to overcome issues such as class imbalance present in the multi-sensor IoT data.
6. The signal processing technique is used to remove noise from the corrupted data collected from the raw sensors.
7. The multi-view stacking method combines the various types of sensors to exploit their predictions for the monitoring and detection of human activities.
Drawings:
Figure 1: Proposed Group Activity Recognition Framework
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020102094A AU2020102094A4 (en) | 2020-09-01 | 2020-09-01 | GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF INDIVIDUAL MULTISENSORY IoT DATA |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020102094A AU2020102094A4 (en) | 2020-09-01 | 2020-09-01 | GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF INDIVIDUAL MULTISENSORY IoT DATA |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2020102094A4 true AU2020102094A4 (en) | 2020-10-08 |
Family
ID=72663848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2020102094A Ceased AU2020102094A4 (en) | 2020-09-01 | 2020-09-01 | GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF INDIVIDUAL MULTISENSORY IoT DATA |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2020102094A4 (en) |
Application timeline:
- 2020-09-01: AU2020102094A filed in Australia; granted as AU2020102094A4, now not active (ceased)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112910859A (en) * | 2021-01-19 | 2021-06-04 | 山西警察学院 | Internet of things equipment monitoring and early warning method based on C5.0 decision tree and time sequence analysis |
CN112910859B (en) * | 2021-01-19 | 2022-06-14 | 山西警察学院 | Internet of things equipment monitoring and early warning method based on C5.0 decision tree and time sequence analysis |
CN117727464A (en) * | 2023-11-23 | 2024-03-19 | 重庆邮电大学 | Training method and device based on medical multi-view disease prediction model |
CN117727464B (en) * | 2023-11-23 | 2024-10-01 | 重庆邮电大学 | Training method and device based on medical multi-view disease prediction model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zerrouki et al. | Combined curvelets and hidden Markov models for human fall detection | |
Roggen et al. | Recognition of crowd behavior from mobile sensors with pattern analysis and graph clustering methods | |
US20180268292A1 (en) | Learning efficient object detection models with knowledge distillation | |
Erdogan et al. | A data mining approach for fall detection by using k-nearest neighbour algorithm on wireless sensor network data | |
Khan et al. | Transact: Transfer learning enabled activity recognition | |
Giorgi et al. | Try walking in my shoes, if you can: Accurate gait recognition through deep learning | |
Zerrouki et al. | Accelerometer and camera-based strategy for improved human fall detection | |
Mohamed et al. | Multi-label classification for physical activity recognition from various accelerometer sensor positions | |
Huu et al. | Proposing posture recognition system combining MobilenetV2 and LSTM for medical surveillance | |
Sabir et al. | Gait-based gender classification using smartphone accelerometer sensor | |
Mokhtari et al. | Fall detection in smart home environments using UWB sensors and unsupervised change detection | |
AU2020102094A4 (en) | GROUP ACTIVITY RECOGNITION BY INTEGRATION AND FUSION OF INDIVIDUAL MULTISENSORY IoT DATA | |
Thu et al. | Utilization of postural transitions in sensor-based human activity recognition | |
Soni et al. | An approach to enhance fall detection using machine learning classifier | |
Khatun et al. | Human activity recognition using smartphone sensor based on selective classifiers | |
Liu et al. | Automatic fall risk detection based on imbalanced data | |
Menter et al. | Application of machine learning-based pattern recognition in iot devices | |
Na et al. | Stick-slip classification based on machine learning techniques for building damage assessment | |
Alqahtani et al. | Falling and drowning detection framework using smartphone sensors | |
Bagci Das et al. | Human activity recognition based on multi‐instance learning | |
Tahir et al. | Object based human-object interaction (hoi) recognition using wrist-mounted sensors | |
Li et al. | iwalk: Let your smartphone remember you | |
Sadiq et al. | Human activity recognition prediction for crowd disaster mitigation | |
Choujaa et al. | Activity recognition from mobile phone data: State of the art, prospects and open problems | |
Choudhary et al. | A seismic sensor based human activity recognition framework using deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |