Body Temperature Monitoring for Regular COVID-19 Prevention Based on Human Daily Activity Recognition
Figure 1: Temperature measurement in public places for COVID-19 regular prevention.
Figure 2: Hardware structure of the wearable device.
Figure 3: The 10 participants wore wearable sensors to conduct different activity recognition experiments (A, H, W and G abbreviate age, height, weight and gender).
Figure 4: Visualized sensing signal curves for four activity patterns: (a) sitting, (b) walking, (c) walking upstairs and (d) walking downstairs, each with acceleration, velocity and temperature sensing signal curves.
Figure 5: Normal distribution curves of body surface temperature error under dynamic activities: (a) walking, (b) walking upstairs, (c) walking downstairs.
Figure 6: General workflow for temperature monitoring and adjusting.
Abstract
1. Introduction
1.1. Related Work
1.2. Main Contributions
- In addition to the accelerometer and gyroscope, the independently designed wearable device adds a temperature sensor module, which enriches the sensing data and enables further study of the relationship between human body surface temperature and the accuracy of activity recognition.
- The 10 participants spanned the major age groups, various professions and different ranges of height and weight. After sensing data were collected under similar experimental conditions, the data were divided into a training set (75%) and a testing set (25%) for the learning algorithms, to ensure that the selected learning model generalizes well enough to adapt to future new users.
- The performance of almost all algorithms improved to varying degrees after incorporating body surface temperature data (slightly lower than normal human body temperature). In other words, the temperature sensing data enable more accurate human activity recognition.
- Among all the selected learning models, random forest (RF) and extra trees (ET) perform best overall. Without data stacking, ET reaches an 89% recognition rate and RF an 88% recognition rate, with lower computing time. After the resampling process, the performance on the raw dataset continues to improve, and ET and RF reach 90% and 92% accuracy, respectively.
- The body surface temperature of participants during moving activities (walking, walking upstairs or downstairs) is lower than that while sitting, which matters for body temperature monitoring during COVID-19. Temperature errors of 1–2 °C may lead to the omission of potentially feverish people and affect the accuracy and efficiency of epidemic prevention work.
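The 75/25 split described in the contributions above can be sketched with scikit-learn; the feature matrix here is a random placeholder (seven columns standing in for tri-axial acceleration, tri-axial angular velocity and temperature), not the authors' recordings.

```python
# Minimal sketch of a stratified 75%/25% train/test split on placeholder
# sensor features; shapes and class count are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7))      # e.g., 3-axis accel + 3-axis gyro + temperature
y = rng.integers(0, 4, size=1000)   # 4 activity classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)
print(len(X_train), len(X_test))    # 750 250
```

Stratifying on the activity label keeps the class proportions of the testing set close to those of the training set.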
2. Experimental Setup
2.1. Apparatus
2.2. Participants
2.3. Activity Data Visualization
3. Activity Recognition Algorithm
3.1. Activity Data Collection
3.2. Signal Standardization and Data Stacking
3.3. Learning Algorithm Accuracy
3.3.1. Conventional Machine Learning
- Support vector machine: SVM aims to find an optimal hyperplane; the largest-margin hyperplane classifies samples by distinguishing positive cases from the others, and the sample points closest to the hyperplane are called support vectors. The classification results of the SVM classifier are mainly affected by the kernel function; common choices include the linear, polynomial, radial basis function (RBF) and sigmoid kernels [20]. In this experiment, the SVM classification accuracy without temperature data reached 74%, and this number ascended to 81% after considering temperature. However, after data stacking, the accuracy dropped to 40%, which may have been caused by the increased model complexity.
- K-nearest neighbor: Compared with other classification methods, KNN has no obvious learning process, since it does not process data in the training stage but simply stores the obtained training samples. In addition, different k values may influence the classification results: a small k may cause overfitting, while an overly large one may cause underfitting. KNN works by classifying data according to the distances between samples, but when the data dimension is too high, the distance between two samples becomes difficult to compute meaningfully, resulting in large prediction deviation [21]. Different values of k in the range 1–100 were tried to obtain the optimal solution in this experiment, as presented in Table 4. The results showed the highest accuracy when k was 7, reaching 81.26%.
- Stochastic gradient descent: SGD is commonly used to optimize learning algorithms; for example, building the loss function for the original model and finding the optimal parameter that minimizes the function value through the optimization algorithm. Each iteration uses a set of randomly shuffled samples to effectively reduce the parameter update cancellation phenomenon in small sample problems [22]. However, for the dataset used in this experiment, the performance of the SGD algorithm is very unsatisfactory, with an accuracy of less than 50% before data processing.
- Logistic regression: LR aims to place samples of different categories on opposite sides of a separating line as cleanly as possible. The logistic (sigmoid) function compresses outputs over a large range of numbers into the interval [0,1]. The maximum likelihood method is often used to estimate the parameters of LR, which is equivalent to minimizing the log loss [23]. Likewise, the LR algorithm performed poorly in this experiment, with only a 56% recognition rate.
- Naive Bayes classifier: The core assumption of NB is that all feature components are independent of each other, which also makes it unsuitable for problems with many attributes or strong correlation between attributes [24]. The classification process of NB has two main stages: a learning stage, in which the classifier is constructed from sample data, and a reasoning stage, which includes calculating the conditional probabilities of nodes and classifying data. The accuracy of NB on the sample dataset was only 61% and thus hardly warranted further consideration.
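The five conventional classifiers above can be compared side by side in scikit-learn; this is an illustrative sketch on synthetic data (a stand-in for the sensor features, not the authors' dataset), with standardization applied as in Section 3.2.

```python
# Hedged sketch: train and score the five conventional classifiers
# discussed above on synthetic 7-feature, 4-class data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize features: margin- and distance-based models (SVM, KNN, SGD)
# are sensitive to feature scale.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "KNN (k=7)": KNeighborsClassifier(n_neighbors=7),  # best k from Table 4
    "SGD": SGDClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
}
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

The accuracies printed here reflect only the synthetic data; on the real sensor dataset the paper reports the figures quoted in the bullets above.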
3.3.2. Deep Learning
- Stacked denoising autoencoder: SDAE is a deep learning model, and the autoencoder (AE), a self-supervised algorithm, should be introduced first. A simple AE consists of an encoder and a decoder: the data pass through the encoder and then the decoder to produce the reconstructed data. By adding noise to the input data, overfitting can be avoided, forming the denoising autoencoder (DAE). Because the DAE is trained on noise-corrupted inputs, the learned weights carry less noise, improving the robustness and stability of the model. The SDAE stacks multiple DAEs into a deep model: each layer is trained independently in an unsupervised manner, the output of one layer serves as the input of the next, and the final layer is a softmax layer [28,29,30]. The SDAE model designed in this paper has three layers and was trained for 100, 500, 1500, 3000 and 5000 iterations; the specific results are shown in Table 5. At 100 iterations the classification accuracy was very poor and of no practical significance. Increasing the iterations to 500 improved accuracy by 10%, and 1500 iterations reached 75%. Doubling to 3000 iterations improved accuracy by only a further 2%, and at 5000 iterations the testing accuracy no longer improved significantly even though the training set accuracy kept rising, finally stabilizing at 78%. In general, once the number of training iterations reaches a certain value, the testing accuracy of the model plateaus; balancing training cost against classification accuracy, 3000 iterations is a moderate choice.
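The layer-wise mechanism described above (corrupt the input, train each layer to reconstruct the clean input, feed its codes to the next layer) can be sketched in NumPy. This is a minimal illustration with tied weights and illustrative sizes, not the paper's three-layer network.

```python
# Minimal NumPy sketch of layer-wise denoising-autoencoder pretraining.
# Layer sizes, learning rate and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def train_dae(X, n_hidden, epochs=200, lr=0.1, noise=0.1):
    """Train one DAE layer with tied weights; return params and clean-input codes."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)            # encoder bias
    c = np.zeros(n_in)                # decoder bias
    for _ in range(epochs):
        X_noisy = X + rng.normal(scale=noise, size=X.shape)  # corrupt the input
        H = np.tanh(X_noisy @ W + b)                         # encode
        err = (H @ W.T + c) - X                # reconstruction error vs. CLEAN input
        dA = (err @ W) * (1.0 - H ** 2)        # backprop through tanh
        W -= lr * (X_noisy.T @ dA + err.T @ H) / len(X)      # tied-weight gradient
        b -= lr * dA.sum(axis=0) / len(X)
        c -= lr * err.sum(axis=0) / len(X)
    return W, b, np.tanh(X @ W + b)            # codes fed to the next layer

# Stack two DAE layers, each pretrained unsupervised on the previous codes.
X = rng.normal(size=(256, 7))                  # placeholder sensor features
H = X
for n_hidden in (16, 8):
    W, b, H = train_dae(H, n_hidden)
print(H.shape)                                 # (256, 8)
```

In the full SDAE, the final representation `H` would then be passed to a softmax layer and the whole stack fine-tuned with labels.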
3.3.3. Ensemble Learning
- Random forest: Random forest belongs to the bagging (bootstrap aggregating) category, in which subsets are randomly drawn from the raw dataset with replacement. Multiple decision trees are built from these subsets, and their classification votes are aggregated to obtain the final testing result of the classifier [32]. RF is simple and can be applied effectively to large datasets with good accuracy. In this study, the performance of random forest was outstanding, as expected, reaching recognition rates of 88% and 89% in the cases of 0% and 50% data stacking, respectively.
- Extra trees: ET, also called extremely randomized trees, is another decision-tree-based ensemble learning algorithm, but it relies on a different random process rather than bagging. Compared with ensemble methods represented by RF, the main distinguishing characteristics of ET are the way the trees in the forest are constructed and the way splitting points are selected. There is no bagging process, so bootstrap copies are unnecessary. These features weaken the correlation between the base estimators, simplify node splitting, reduce the complexity of model splitting, decrease the amount of computation, speed up training and produce more diversified trees. When considering the bias–variance tradeoff in model selection, ET also has advantages: its stronger randomization effectively reduces variance, and because each tree is grown on the whole training set, bias is kept low to a certain extent [33,34,35,36]. In practice, the performance of extra trees also depends on parameter selection. In this experiment, the hyperparameters were n_estimators = 550, random_state = 666, bootstrap = True and oob_score = True, and the accuracy was almost the same as RF or even higher: in the cases of 0% and 50% data stacking, it reached 89% and 90%, respectively. Parameters can be adjusted for specific problems, or via cross-validation when necessary, which has a certain impact on model performance.
- Deep forest: DF is a non-neural-network deep tree model, originally proposed by Professor Zhou Zhihua of Nanjing University as an alternative to deep neural networks; it is also known as the multi-grained cascade forest (gcForest). DF is a deep structure built on the logic of deep learning. Compared with a deep neural network, it is easier to analyze theoretically, simpler in parameter setting and training, and even shows more competitive performance on open datasets in certain application domains. Whereas deep neural networks require large-scale data for training, DF can be trained on small-scale datasets with relatively low computational complexity [37]. The general process of gcForest consists of multi-grained scanning and the cascade forest. First, the raw input features are preprocessed by multi-grained scanning; second, the resulting feature vectors are fed into the cascade forest for training, with each layer's output serving as the next layer's input, repeated until the validation results converge. However, DF performed the worst of all the selected ensemble learning models in this experiment, with only a 71% recognition rate in the case of no data stacking.
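The RF-versus-ET comparison above can be reproduced in outline with scikit-learn, reusing the hyperparameters the paper quotes (n_estimators = 550, random_state = 666, bootstrap = True, oob_score = True); the data here are a synthetic stand-in for the sensor features.

```python
# Sketch of the RF vs. ET comparison with the quoted hyperparameters.
# Note: ExtraTreesClassifier defaults to bootstrap=False; setting it True
# (as the paper does) enables out-of-bag scoring at the cost of some of
# ET's extra randomness.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=550, random_state=666,
                            bootstrap=True, oob_score=True).fit(X_tr, y_tr)
et = ExtraTreesClassifier(n_estimators=550, random_state=666,
                          bootstrap=True, oob_score=True).fit(X_tr, y_tr)

rf_acc, et_acc = rf.score(X_te, y_te), et.score(X_te, y_te)
print(f"RF: {rf_acc:.2f} (OOB {rf.oob_score_:.2f}), "
      f"ET: {et_acc:.2f} (OOB {et.oob_score_:.2f})")
```

On the synthetic data the two ensembles typically land within a point or two of each other, mirroring the near-identical accuracies the paper reports.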
3.3.4. Further Attempt
3.4. Algorithm Evaluation and Discussion
- True Positive (TP): the true category of the sample is positive, and the predicted result is also positive;
- True Negative (TN): the true category of the sample is negative, and the predicted result is also negative;
- False Positive (FP): the true category of the sample is negative, but the model predicts it to be positive;
- False Negative (FN): the true category of the sample is positive, but the model predicts it to be negative.
3.4.1. Experimental Result without Data Stacking
3.4.2. Performance Enhancement with 50% Stacking
4. Human-Centered Application in COVID-19
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Conflicts of Interest
References
- Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
- Fu, B.; Damer, N.; Kirchbuchner, F.; Kuijper, A. Sensing Technology for Human Activity Recognition: A Comprehensive Survey. IEEE Access 2020, 8, 83791–83820. [Google Scholar] [CrossRef]
- Demrozi, F.; Pravadelli, G.; Bihorac, A.; Rashidi, P. Human Activity Recognition Using Inertial, Physiological and Environmental Sensors: A Comprehensive Survey. IEEE Access 2020, 8, 210816–210836. [Google Scholar] [CrossRef]
- Yen, C.T.; Liao, J.X.; Huang, Y.K. Human Daily Activity Recognition Performed Using Wearable Inertial Sensors Combined with Deep Learning Algorithms. IEEE Access 2020, 8, 174105–174114. [Google Scholar] [CrossRef]
- Ueafuea, K.; Boonnag, C.; Sudhawiyangkul, T.; Leelaarporn, P.; Gulistan, A.; Chen, W.; Mukhopadhyay, S.C.; Wilaiprasitporn, T.; Piyayotai, S. Potential Applications of Mobile and Wearable Devices for Psychological Support During the COVID-19 Pandemic: A Review. IEEE Sens. J. 2021, 21, 7162–7178. [Google Scholar] [CrossRef]
- Lonini, L.; Shawen, N.; Botonis, O.; Fanton, M.; Jayaraman, C.; Mummidisetty, C.K.; Shin, S.Y.; Rushin, C.; Jenz, S.; Xu, S.; et al. Rapid Screening of Physiological Changes Associated With COVID-19 Using Soft-Wearables and Structured Activities: A Pilot Study. IEEE J. Transl. Eng. Health Med. 2021, 9, 1–11. [Google Scholar] [CrossRef] [PubMed]
- Sadighbayan, D.; Ghafar-Zadeh, E. Portable Sensing Devices for Detection of COVID-19: A Review. IEEE Sens. J. 2021, 21, 10219–10230. [Google Scholar] [CrossRef]
- Rehman, M.; Shah, R.A.; Khan, M.B.; Ali, N.A.A.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Shaha, S.A.; Yang, X.; Alomainy, A.; et al. Contactless Small-Scale Movement Monitoring System Using Software Defined Radio for Early Diagnosis of COVID-19. IEEE Sens. J. 2021, 21, 17180–17188. [Google Scholar] [CrossRef]
- Hsu, Y.L.; Yang, S.C.; Chang, H.C.; Lai, H.C. Human Daily and Sport Activity Recognition Using a Wearable Inertial Sensor Network. IEEE Access 2018, 6, 31715–31728. [Google Scholar] [CrossRef]
- Lawal, I.A.; Bano, S. Deep Human Activity Recognition With Localisation of Wearable Sensors. IEEE Access 2020, 8, 155060–155070. [Google Scholar] [CrossRef]
- Anish, N.K.; Bhat, G.; Park, J.; Lee, H.G.; Ogras, U.Y. Sensor-Classifier Co-Optimization for Wearable Human Activity Recognition Applications. In Proceedings of the 2019 IEEE International Conference on Embedded Software and Systems (ICESS), Las Vegas, NV, USA, 2–3 June 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Pham, C.; Nguyen-Thai, S.; Tran-Quang, H.; Tran, S.; Vu, H.; Tran, T.H.; Le, T.L. SensCapsNet: Deep Neural Network for Non-Obtrusive Sensing Based Human Activity Recognition. IEEE Access 2020, 8, 86934–86946. [Google Scholar] [CrossRef]
- Munoz-Organero, M. Outlier Detection in Wearable Sensor Data for Human Activity Recognition (HAR) Based on DRNNs. IEEE Access 2019, 7, 74422–74436. [Google Scholar] [CrossRef]
- Khokhlov, I.; Reznik, L.; Cappos, J.; Bhaskar, R. Design of activity recognition systems with wearable sensors. In Proceedings of the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Ayman, A.; Attalah, O.; Shaban, H. An Efficient Human Activity Recognition Framework Based on Wearable IMU Wrist Sensors. In Proceedings of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST), Abu Dhabi, United Arab Emirates, 9–10 December 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Wu, T.; Redouté, J.M.; Yuce, M.R. A Wearable Wireless Medical Sensor Network System Towards Internet-of-Patients. In Proceedings of the 2018 IEEE SENSORS, New Delhi, India, 28–31 October 2018; pp. 1–3. [Google Scholar] [CrossRef]
- He, J.; Zhang, Q.; Wang, L.; Pei, L. Weakly Supervised Human Activity Recognition From Wearable Sensors by Recurrent Attention Learning. IEEE Sens. J. 2019, 19, 2287–2297. [Google Scholar] [CrossRef]
- Myers, S.H.; Huhman, B.M. Enabling Scientific Collaboration and Discovery Through the Use of Data Standardization. IEEE Trans. Plasma Sci. 2015, 43, 1190–1193. [Google Scholar] [CrossRef]
- Al-Garadi, M.A.; Mohamed, A.; Al-Ali, A.K.; Du, X.; Ali, I.; Guizani, M. A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security. IEEE Commun. Surv. Tutor. 2020, 22, 1646–1685. [Google Scholar] [CrossRef]
- Hossain Shuvo, M.M.; Ahmed, N.; Nouduri, K.; Palaniappan, K. A Hybrid Approach for Human Activity Recognition with Support Vector Machine and 1D Convolutional Neural Network. In Proceedings of the 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
- Abianya, G.; Beno, M.M.; Sivakumar, E.; Rajeswari, N. Performance Evaluation of Multi-instance Multi-label Classification using Kernel based K-Nearest Neighbour Algorithm. In Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29 November 2019; pp. 1170–1175. [Google Scholar] [CrossRef]
- Liu, Y.; Huangfu, W.; Zhang, H.; Long, K. An Efficient Stochastic Gradient Descent Algorithm to Maximize the Coverage of Cellular Networks. IEEE Trans. Wirel. Commun. 2019, 18, 3424–3436. [Google Scholar] [CrossRef]
- Zou, X.; Hu, Y.; Tian, Z.; Shen, K. Logistic Regression Model Optimization and Case Analysis. In Proceedings of the 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 19–20 October 2019; pp. 135–139. [Google Scholar] [CrossRef]
- Aridas, C.K.; Karlos, S.; Kanas, V.G.; Fazakis, N.; Kotsiantis, S.B. Uncertainty Based Under-Sampling for Learning Naive Bayes Classifiers Under Imbalanced Data Sets. IEEE Access 2020, 8, 2122–2133. [Google Scholar] [CrossRef]
- Shickel, B.; Tighe, P.J.; Bihorac, A.; Rashidi, P. Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE J. Biomed. Health Inform. 2018, 22, 1589–1604. [Google Scholar] [CrossRef]
- Tüfek, N.; Özkaya, O. A Comparative Research on Human Activity Recognition Using Deep Learning. In Proceedings of the 2019 27th Signal Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Natani, A.; Sharma, A.; Peruma, T.; Sukhavasi, S. Deep Learning for Multi-Resident Activity Recognition in Ambient Sensing Smart Homes. In Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 15–18 October 2019; pp. 340–341. [Google Scholar] [CrossRef]
- Ni, Q.; Fan, Z.; Zhang, L.; Nugent, C.D.; Cleland, I.; Zhang, Y.; Zhou, N. Leveraging Wearable Sensors for Human Daily Activity Recognition with Stacked Denoising Autoencoders. Sensors 2020, 20, 5114. [Google Scholar] [CrossRef] [PubMed]
- Gu, F.; Khoshelham, K.; Valaee, S.; Shang, J.; Zhang, R. Locomotion Activity Recognition Using Stacked Denoising Autoencoders. IEEE Internet Things J. 2018, 5, 2085–2093. [Google Scholar] [CrossRef]
- Kim, J.C.; Chung, K. Multi-Modal Stacked Denoising Autoencoder for Handling Missing Data in Healthcare Big Data. IEEE Access 2020, 8, 104933–104943. [Google Scholar] [CrossRef]
- Zambelli, M.; Demirisy, Y. Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations. IEEE Trans. Cogn. Dev. Syst. 2017, 9, 113–126. [Google Scholar] [CrossRef]
- Wang, A.; Chen, H.; Zheng, C.; Zhao, L.; Liu, J.; Wang, L. Evaluation of Random Forest for Complex Human Activity Recognition Using Wearable Sensors. In Proceedings of the 2020 International Conference on Networking and Network Applications (NaNA), Haikou City, China, 10–13 December 2020; pp. 310–315. [Google Scholar] [CrossRef]
- Xie, L.; Tian, J.; Ding, G.; Zhao, Q. Human activity recognition method based on inertial sensor and barometer. In Proceedings of the 2018 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Lake Como, Italy, 26–29 March 2018; pp. 1–4. [Google Scholar] [CrossRef]
- Desir, C.; Petitjean, C.; Heutte, L.; Salaun, M.; Thiberville, L. Classification of Endomicroscopic Images of the Lung Based on Random Subwindows and Extra-Trees. IEEE Trans. Biomed. Eng. 2012, 59, 2677–2683. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Bao, T.; Gong, J.; Shu, X.; Zhang, K. The Prediction of Dam Displacement Time Series Using STL, Extra-Trees, and Stacked LSTM Neural Network. IEEE Access 2020, 8, 94440–94452. [Google Scholar] [CrossRef]
- Alsariera, Y.A.; Adeyemo, V.E.; Balogun, A.O.; Alazzawi, A.K. AI Meta-Learners and Extra-Trees Algorithm for the Detection of Phishing Websites. IEEE Access 2020, 8, 142532–142542. [Google Scholar] [CrossRef]
- Liu, X.; Wang, R.; Cai, Z.; Cai, Y.; Yin, X. Deep Multigrained Cascade Forest for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8169–8183. [Google Scholar] [CrossRef]
- Ohsaki, M.; Wang, P.; Matsuda, K.; Katagiri, S.; Watanabe, H.; Ralescu, A. Confusion-Matrix-Based Kernel Logistic Regression for Imbalanced Data Classification. IEEE Trans. Knowl. Data Eng. 2017, 29, 1806–1819. [Google Scholar] [CrossRef]
Hardware Component | Model Type |
---|---|
Single-Chip Microcomputer | STM32F103C8T6 |
Inertial Sensing Module | MPU6050 |
Temperature Sensor | LMT70 |
Bluetooth Module | HC-06 |
Power Supply Module | 5V and 1500 mAh dry battery |
ID No. | Occupation | Age | Height (cm) | Weight (kg) | Gender |
---|---|---|---|---|---|
1 | Student | 22 | 177 | 77.8 | Male |
2 | Student | 21 | 165 | 50.3 | Female |
3 | Professor | 32 | 180 | 80.2 | Male |
4 | Cleaner | 60 | 163 | 55.5 | Female |
5 | Student | 22 | 163 | 45.7 | Female |
6 | Police | 26 | 181 | 82.5 | Male |
7 | Accountant | 32 | 165 | 53.7 | Female |
8 | Security | 23 | 171 | 50.5 | Male |
9 | Staff | 38 | 172 | 84.0 | Female |
10 | Student | 22 | 175 | 100.1 | Male |
Recognition accuracy without temperature data and with temperature data under different data-stacking ratios (0–90%):

| Learning Algorithm | Classifier | Accuracy (No Temperature) | 0% | 10% | 20% | 30% | 50% | 70% | 90% |
|---|---|---|---|---|---|---|---|---|---|
| Conventional Machine Learning | SVM | 74% | 81% | 40% | 43% | 43% | 48% | 47% | – |
| Conventional Machine Learning | KNN | 73% | 81% | 80% | 80% | 83% | 83% | 85% | 93% |
| Conventional Machine Learning | SGD | 50% | 47% | 52% | 55% | 53% | 55% | 50% | 54% |
| Conventional Machine Learning | LR | 54% | 56% | 58% | 56% | 58% | 56% | 55% | 58% |
| Conventional Machine Learning | NB | 59% | 61% | 63% | 61% | 62% | 62% | 62% | 59% |
| Deep Learning | SDAE | 75% | 77% | 69% | 72% | 74% | 75% | 75% | 86% |
| Ensemble Learning | RF | 78% | 88% | 89% | 89% | 89% | 89% | 92% | 91% |
| Ensemble Learning | ET | 78% | 89% | 88% | 89% | 89% | 90% | 92% | 96% |
| Ensemble Learning | DF | 65% | 71% | 77% | 76% | 74% | 75% | 75% | 79% |
K Value | Accuracy | K Value | Accuracy |
---|---|---|---|
1 | 80.05% | 13 | 80.89% |
2 | 78.74% | 14 | 80.87% |
3 | 81.22% | 15 | 80.64% |
4 | 80.82% | 16 | 80.42% |
5 | 80.93% | 17 | 80.24% |
6 | 80.97% | 18 | 80.18% |
7 | 81.26% | 19 | 80.16% |
8 | 80.90% | 20 | 79.87% |
9 | 80.75% | 30 | 78.61% |
10 | 80.93% | 50 | 76.81% |
11 | 80.74% | 80 | 75.18% |
12 | 80.94% | 100 | 74.22% |
| Training Iterations | 100 | 500 | 1500 | 3000 | 5000 |
|---|---|---|---|---|---|
| Training set accuracy | 65% | 73% | 80% | 85% | 88% |
| Testing set accuracy | 61% | 71% | 75% | 77.5% | 78% |
| Processing Mode | No. of Samples | RF Accuracy | ET Accuracy | DF Accuracy |
|---|---|---|---|---|
| Raw dataset | 31,713 | 88% | 89% | 71% |
| 50% of the total | 15,857 | 87% | 86% | 73% |
| 10% of the total | 3171 | 83% | 82% | 70% |
Per-activity precision, recall and F1 score under 0% data stacking (left block) and 50% data stacking (right block); accuracy and running time are reported once per algorithm:

| Algorithm | Activity | Precision | Recall | F1 Score | Accuracy | Running Time | Precision | Recall | F1 Score | Accuracy | Running Time |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RF | Sitting | 0.99 | 0.99 | 0.99 | | | 0.99 | 0.98 | 0.99 | | |
| | Walking | 0.86 | 0.90 | 0.88 | 0.89 | 4.50 s | 0.86 | 0.92 | 0.89 | 0.91 | 2.38 s |
| | Upstairs | 0.86 | 0.81 | 0.83 | | | 0.92 | 0.83 | 0.87 | | |
| | Downstairs | 0.86 | 0.86 | 0.86 | | | 0.89 | 0.90 | 0.90 | | |
| KNN | Sitting | 0.98 | 0.99 | 0.99 | | | 0.96 | 0.98 | 0.97 | | |
| | Walking | 0.77 | 0.82 | 0.80 | 0.83 | 0.07 s | 0.75 | 0.86 | 0.80 | 0.83 | 0.01 s |
| | Upstairs | 0.73 | 0.73 | 0.73 | | | 0.77 | 0.69 | 0.73 | | |
| | Downstairs | 0.81 | 0.72 | 0.76 | | | 0.86 | 0.77 | 0.81 | | |
| ET | Sitting | 0.99 | 0.99 | 0.99 | | | 0.99 | 0.99 | 0.99 | | |
| | Walking | 0.84 | 0.91 | 0.87 | 0.89 | 10.66 s | 0.83 | 0.93 | 0.88 | 0.91 | 4.05 s |
| | Upstairs | 0.88 | 0.78 | 0.83 | | | 0.92 | 0.79 | 0.85 | | |
| | Downstairs | 0.86 | 0.85 | 0.85 | | | 0.91 | 0.89 | 0.90 | | |
| DF | Sitting | 0.98 | 0.98 | 0.98 | | | 0.99 | 0.97 | 0.98 | | |
| | Walking | 0.58 | 0.78 | 0.67 | 0.70 | 41.56 s | 0.59 | 0.67 | 0.63 | 0.69 | 6.81 s |
| | Upstairs | 0.61 | 0.44 | 0.51 | | | 0.57 | 0.49 | 0.53 | | |
| | Downstairs | 0.65 | 0.53 | 0.59 | | | 0.61 | 0.60 | 0.61 | | |
| Temperature Difference (°C) | Expectation | Variance |
|---|---|---|
| Walking (Tw − Ts) | −1.15 °C | 0.82 °C |
| Upstairs (Tu − Ts) | −1.17 °C | 0.78 °C |
| Downstairs (Td − Ts) | −0.95 °C | 0.88 °C |
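The table above suggests a simple activity-dependent compensation, in line with the temperature monitoring and adjusting workflow of Figure 6: once the current activity is recognized, the expected offset relative to sitting is added back to the raw surface reading. This is an illustrative sketch; the function name and the compensation rule itself are assumptions, with offsets taken from the table.

```python
# Hypothetical activity-dependent temperature compensation using the
# expected (T_activity − T_sitting) offsets from the table above.
OFFSET_C = {            # mean temperature difference vs. sitting, in °C
    "sitting": 0.0,
    "walking": -1.15,
    "upstairs": -1.17,
    "downstairs": -0.95,
}

def adjusted_temperature(measured_c, activity):
    """Compensate a surface reading toward its sitting-equivalent value."""
    return measured_c - OFFSET_C[activity]

t = adjusted_temperature(36.1, "walking")
print(t)  # 37.25 -> a potential fever a raw walking reading would miss
```

This illustrates the paper's point: an uncompensated 36.1 °C reading taken while walking looks normal, yet its sitting-equivalent value crosses a typical fever-screening threshold.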
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, L.; Zhu, Y.; Jiang, M.; Wu, Y.; Deng, K.; Ni, Q. Body Temperature Monitoring for Regular COVID-19 Prevention Based on Human Daily Activity Recognition. Sensors 2021, 21, 7540. https://doi.org/10.3390/s21227540