Abstract
This research addresses the accuracy issues in IoT-based human activity recognition (HAR) applications, essential for health monitoring, elderly care, gait analysis, security, and Industry 5.0. This study uses 12 machine learning approaches, split equally between support vector machine (SVM) and k-nearest neighbor (k-NN) models. Data from 102 individuals, aged 18–43, were used to train and test these models. The researchers aimed to detect twelve daily activities, such as sitting, walking, and cycling. Results showed k-NN models achieved slightly higher accuracy (97.08%) compared to SVM models (95.88%), though SVM had faster processing times. The improved machine learning approaches proved effective in accurately classifying daily activities, with k-NN models outperforming SVM models marginally. The paper provides significant contributions to the field of HAR by enhancing the performance of SVM and k-NN classifiers, optimizing them for higher accuracy and faster processing. Through robust testing with samples of real-world data, the study provides a detailed comparative analysis that highlights strengths and weaknesses of each classifier model, specifically within IoT-based systems. This work not only advances the theoretical understanding and practical applications of HAR systems in areas, such as healthcare and industrial automation, but also sets the stage for future research that could explore hybrid models or further enhancements, consequently improving the efficiency and functionality of IoT devices based on activity recognition.
1 Introduction
HAR is a rapidly growing research area that involves identifying and classifying human actions based on sensor data. One approach to HAR is the use of machine learning classifiers, which are algorithms that can learn to recognize patterns in data. In recent years, there has been a lot of interest in using enhanced supervised machine learning classifiers for HAR. These classifiers are designed to improve the accuracy and robustness of activity recognition by incorporating additional information into the learning process [1].
One example of an enhanced supervised classifier is a deep learning network (DLN), which can automatically learn features from raw sensor data and then classify activities based on these features [2]. Another example is the hybrid approach, which combines multiple classifiers to improve the accuracy of the system. Another important aspect of HAR research is the use of large and diverse datasets to train and evaluate classifiers; to improve classifier performance, the datasets must be representative of the real-world conditions in which the classifiers will be deployed. Nowadays, people are increasingly aware of the importance of engaging in healthy behaviors and actions in their daily lives to maintain a healthy lifestyle. Monitoring a person’s daily activities is crucial to maintaining a good and healthy lifestyle, and researching and developing tools and techniques that observe daily human activities is the focus of the HAR field. Harvesting valuable and accurate context-aware data, in the form of feedback from a person’s daily activities, gives the user an opportunity to apply it across a diverse range of applications, such as health monitoring systems, elderly care, gait analysis, security, and Industry 5.0. To acquire data on a person’s daily activities in an efficient and practical manner, various techniques, methods, and datasets are employed in conjunction with pervasive devices within an IoT-based environment [3]. Data acquisition represents the first phase of the HAR process, where real-time data are harvested from the sensors of IoT-based devices, such as accelerometers and gyroscopes. The second phase of HAR applies a pre-processing technique: the acquired data are normalized to reduce artifacts and thereby enhance the prediction accuracy of the proposed models. In this work, the raw data are divided into training and testing sets; the training data represent the standard data generated by the model developers (40 samples), while the test data were obtained from 62 volunteers. Both training and test data are fed to the models during the supervised learning process to assess their performance in accurately detecting 12 daily physical human activities, labeled as sitting, laying, standing, attaching to a table, walking, jogging, running, jumping, push-ups, stairs down, going upstairs, and cycling [4, 5]. In daily life, wearable sensors are a practical tool for obtaining vital data by tracking users’ daily activities, vital signs, and data about the surrounding environment, such as temperature and humidity. The data obtained from users and their surroundings are then analyzed and assessed for the best use.
To make the best use of wearable sensors in users’ surrounding environments, the industry continues to develop and enhance wearable sensors’ capabilities, features, compatibility, and security. Consequently, there is a significant need for researchers in the fields of healthcare, security, surveillance, well-being, and biofeedback systems to integrate wearable sensors into their studies. Continuous monitoring is the key feature of wearable sensors and provides an effective and efficient method for daily fitness tracking. Recently, the use of multiple sensors, such as accelerometers, gyroscopes, and magnetometers, in IoT-based devices is expected to strengthen HAR performance. Employing a combination of these sensors allows IoT devices to elicit pivotal data about complex patterns of a person’s body in three-dimensional space [6].
The motivation for this study is rooted in the critical need to enhance the accuracy and efficiency of HAR systems, which are increasingly deployed across diverse applications such as healthcare, elderly care, security, and industrial automation. Recent technological advancements and the expansion of wearable sensors have provided unprecedented opportunities to monitor and analyze human activities continuously. However, despite this rapid technological advancement, the field faces significant challenges, primarily due to the limitations in accuracy and processing efficiency of existing machine learning classifiers when applied to real-world data. This research targets these challenges by optimizing and comparing two robust classifiers, SVM and k-NN, known for their effective performance in pattern recognition tasks. The choice of these classifiers is justified by their proven track records in various recognition tasks [7, 8]. By employing enhanced versions of these classifiers, the study aims to push the boundaries of what is currently achievable in HAR systems, ensuring more reliable, faster, and context-aware data analysis.
The research presents a novel approach to the development of IoT-based HAR applications, focusing on enhanced versions of SVM and k-NN classifiers. The novelty lies in applying improved supervised learning techniques to increase accuracy in detecting twelve physical activities, addressing the challenge of accurate classification in applications such as health monitoring and elderly care. The choice of SVM and k-NN is justified by their proven effectiveness in pattern recognition tasks. SVM models provided slightly faster processing times, while k-NN models showed slightly higher accuracy, demonstrating the strength and complementarity of both methods in achieving high accuracy and efficiency in activity classification. This study contributes to the field by optimizing these classifiers for better performance in real-world scenarios.
The remainder of the paper is organized as follows: Sect. 2 presents background reading and related work to provide a deeper understanding of this area. Section 3 introduces the reference and proposed models. Section 4 introduces the methodologies. Section 5 presents the experimental results. Finally, Sect. 6 presents the conclusion and future recommendations.
2 Related Work
Machine learning has significantly contributed to the advancement of clustering in HAR, a field that involves identifying and categorizing various human behaviors based on data collected from sensors or video feeds. By leveraging sophisticated algorithms, such as k-means, hierarchical clustering, and neural networks, machine learning enables the effective grouping of activity data into meaningful categories without prior labeling. This capability facilitates a deeper understanding of activity patterns, enhances the accuracy of activity detection systems, and improves the personalization of user experiences in applications ranging from health monitoring to smart home automation. The automated and adaptive nature of these machine learning models also allows for continuous refinement and optimization of recognition processes, thus increasingly integrating and enhancing the interaction between humans and technology in everyday environments.
The work in [9] presents an enhanced fuzzy clustering approach tailored for datasets with incomplete instances. The method leverages Dempster–Shafer theory to manage uncertainty and imprecision, improving clustering results by modeling missing values locally and combining subset results with varying reliabilities to form a more robust final decision. This approach not only enhances clustering accuracy but also provides a way to handle datasets with significant missing data effectively.
Another study by [10] delves into adaptive weighted multi-view evidential clustering (WMVEC), addressing the challenges of uncertainty and imprecision in clustering data from multiple views. Their methodology includes the integration of feature and view weights, demonstrating through experiments that this approach provides superior performance compared to traditional methods. This adaptive strategy not only refines the clustering process but also capitalizes on the diversity of multi-view data to achieve a more accurate partitioning.
In their work, [11] proposed an attribute-weighted neutrosophic c-means clustering method (AWNCM) for gene expression data, which focused on integrating attribute weights into the clustering process. This method is particularly effective in handling the uncertainty and imprecision inherent in biological datasets, where data may include overlapping clusters and noise. The integration of kernel methods (KAWNCM) to handle non-spherical cluster shapes further enhances this approach, providing a sophisticated tool for complex data analysis in biomedical applications.
The paper by [12] provides a systematic survey of theory of belief functions (TBF)-based methods in machine learning, highlighting their role in managing uncertainty and imprecision across clustering, classification, and information fusion tasks. This survey not only summarizes the state-of-the-art but also identifies gaps and proposes future research directions, emphasizing the potential of belief functions to transform machine learning methodologies in dealing with imperfect data.
Over the decades, iterative improvements in machine learning approaches for extracting sophisticated features have empowered the HAR field. However, traditional HAR practices still face the shortcoming of an inadequate number of extracted features from the collected real data, which may lead to false recognition outcomes. To address this issue, a novel framework named capsule network (CapsGaNet) was introduced and developed by [13] for spatiotemporal multi-feature extraction on HAR data. CapsGaNet comprises a spatial feature extraction layer with capsule blocks, a temporal feature extraction layer with capsule and gated recurrent units (GRU) with attention mechanisms, and an output layer. The authors utilized a Daily and Aggressive Activity Dataset (DAAD) to identify aggressive activities in specific situations such as smart prisons. A threshold was set for aggressive activity detection to handle the requirements of high real-time performance and low computational complexity in prison situations. In that work, the number of detected activities is limited to eight, achieving an overall accuracy of 96.8% for the Wireless Sensor Data Mining (WISDM) dataset and 96.9% for the DAAD. The overall accuracy could be improved if the data samples were expanded and acquired in real time from volunteers using pervasive devices in the environment. This approach addresses real data acquisition issues in real-life scenarios, rather than relying solely on standardized online datasets. In the research described in [14], a new method for recognizing human actions was developed, utilizing an acceleration sensor and a neural network. The proposed method involves labeling the STM32 data obtained from portable IoT devices. The convolutional neural network (CNN) model, enhanced with deep learning algorithms, was optimized and validated using the STM32 CubeMX.AI development tool to analyze the collected data. The Keras model was then integrated into the STM32 firmware to create the final model, which could classify and recognize various actions, such as jogging, being stationary, walking, standing, and climbing stairs. The results demonstrate that the recognition method presented in Wenzheng's paper can accurately identify five distinct human activities based on real data collected from 36 volunteers. Recommendations have been made for future improvements to the model, focusing on enhancing real-time performance and accuracy during the recognition phase. Despite achieving a 93.25% accuracy rate in the experiment, there is potential for improvement to reduce the overall error rate and to overcome the limitation of recognizing only five activities.
In the study referenced as [15], the authors implemented a HAR system using a Multi-ResAtt (multilevel residual network with attention) deep learning architecture. They carefully considered the complexity of extracting time series features. Data were collected from three public datasets: Opportunity, UniMiB-SHAR, and PAMAP2. The performance of the proposed model was evaluated based on its accuracy in processing data from these datasets, achieving accuracies of 86.89, 87.82, and 86.37%, respectively. Furthermore, the authors conducted a performance comparison between their model and basic CNN models. The results demonstrated that their model outperformed the other models compared in this study. Although the number of detected activities ranged from 12 to 18, there is still room for significant improvement to reach an overall average accuracy score of 93.19%. Enhancing overall accuracy could be achieved by expanding the dataset size and collecting real-time data from individuals through pervasive devices in authentic environments. By tackling the challenges of real data acquisition in practical scenarios, instead of relying solely on standardized public datasets like Opportunity, UniMiB-SHAR, and PAMAP2, a higher accuracy percentage can be attained.
The authors in [16] explored the effectiveness of a pruning methodology by integrating deep learning techniques from the field of neural networks for the recognition of different human activities. For this purpose, a long short-term memory (LSTM) architecture was utilized to identify six distinct human activities. An Android-based application was developed to collect accelerometer data from smartphones, which was then processed prior to the training phase. Their LSTM model achieved an overall accuracy of 92% in recognizing various human actions. Although the experiment yielded a commendable accuracy rate of 97.3% with real data samples from eight volunteers, there is still room for improvement, particularly in expanding the model's capability to recognize more than just six activities.
An optimized HAR-CT model was developed by [17] with the purpose of improving the accuracy level in recognizing different human activities using a convolutional neural network. To reduce the complexity of the deep neural network approach, the TWN model was adopted and employed in the HAR-CT model. The data parameters obtained from the MotionSense and University of California, Irvine (UCI_DB) datasets were tuned to successfully recognize 6 human actions, achieving accuracies of 95 and 94.74%, respectively. In this work, the challenge lies in surpassing eight well-recognized activities, thereby constraining the scope of the work.
The performance of shallow ANN architectures on two public databases was compared with hyper-parameters, ANN ensembles, binary ANN classifier groups, and convolutional neural networks in [18]. The results of their work indicated that a properly configured shallow ANN setup with hyper-parameters, coupled with extracted features, could achieve the same or a higher level of identification in a reduced amount of time compared to other artificial neural network methods during the recognition phase. In this study, the wearable action recognition dataset (WARD) database exhibited a measured recognition accuracy rate of 99.5% for 13 activities, while the UCIDB database showed a rate of 97.3% for 6 activities. However, an important aspect that was lacking in this research was the absence of specific information regarding the number of activities that were successfully recognized. This factor holds significant importance in the field of human activity recognition.
In the study referenced as [19], the authors employed a regression model, a type of machine learning algorithm, to predict daily calorie expenditure. They analyzed six factors: weight, gender, age, height, metabolism, and type of activity, in their prediction process for daily calorie burn. The data acquisition was not conducted in real time. Instead, the datasets were sourced from the Kaggle website repository. This study achieved an average accuracy rate of 95.77% in detecting 4–5 activities, which served the purpose of calculating the calories burned by a specific subject. To achieve more reliable results, the authors suggested that accuracy could be further improved by increasing the number of activities detected.
In their study, the authors in [20] compared the performance of support vector machines (SVMs) and convolutional neural networks (CNNs), proposing a novel CNN approach for activity prediction. Their goal was to develop a prediction model that optimizes resource management and mitigates potential risks. The authors highlight the diverse applications of activity recognition, including surveillance systems and sports video annotation. Furthermore, they emphasize the importance of time series analysis in activity recognition and introduce their innovative CNN approach for predicting human activity. The conclusion of their study underscores that their novel CNN method outperforms the traditional SVM algorithm in terms of accuracy for human activity recognition. They also suggest that their approach could enhance the detection of human actions in both emergency and routine care settings, significantly contributing to the advancement of more effective and efficient activity recognition systems. The authors propose future research directions, such as exploring alternative machine learning algorithms and incorporating additional sensors for improved activity recognition. This study achieved an average accuracy rate of 90.12% using the SVM classifier and 93.83% using the CNN classifier in recognizing 11 different activities. The data for this research were obtained from 47 volunteers in a real-time manner. However, to achieve more reliable results, the accuracy level could be further improved by tuning the classifiers used and increasing the number of activities detected.
3 Reference and Proposed Models
3.1 Reference Model
In the study referenced as [21], the authors successfully identified and classified 12 distinct human actions. These actions include attaching to a table, standing, walking, laying, sitting, jogging, jumping, doing push-ups, going upstairs, descending stairs, cycling, and running. The data for the twelve recognized daily activities were obtained from smartphones using an Android application. The developed application’s main role is to read and collect the person’s raw data using the acceleration and gyroscope sensors. A group of 15 subjects aged between 19 and 35 years performed the twelve activities with smartphones on which the developed application was installed. A Matlab R2016a program was developed to classify the subjects’ raw data. A set of twelve supervised classification models was selected for comparison in terms of accuracy and speed. The twelve models are divided into two classes: six are SVM variants, while the other six are k-NN variants. The experimental results show that the SVM cases’ average accuracy score was 89.79%, in contrast with 87.81% for k-NN. On the other hand, SVM cases scored an average processing time of 47 s, whereas k-NN cases averaged 39 s.
3.2 Proposed Model
The proposed model represents continuous progress over the reference model. The enhancement relates to accuracy improvements in detecting the 12 human actions. Three aspects have been proposed in this research to surpass the accuracy level achieved by the reference model. The first aspect is to acquire a larger dataset of samples for training the activity recognition model. By increasing the diversity and quantity of samples, the model can capture a wider range of activity patterns and learn more robust representations. This helps reduce the potential for overfitting and improves the generalization capability of the model. With a larger dataset, the model can potentially learn more features with an improved accuracy rate, leading to better activity recognition performance [22, 23]. The second aspect is to increase the size of the window segment used for activity recognition. In this context, a window segment refers to a fixed duration of sensor data that is analyzed to classify an activity. By enlarging the window segment, the model can capture more temporal information and context about the activity being performed. This increased context can help in distinguishing between similar activities and reducing ambiguity in the recognition process. However, it is important to find a balance, as excessively large window segments may introduce temporal dependencies that make the model less effective in real-time applications [24, 25]. The third aspect involves maximizing the margins of the classifiers used in the activity recognition model. In machine learning, the margin refers to the separation between different classes in the feature space. By maximizing the margins, the model aims to achieve a clear distinction between different activities, reducing the chances of misclassification. This can be achieved through various techniques, such as margin-based loss functions, SVMs, or ensemble methods [26, 27]. By enhancing the discriminative power of the model, the accuracy level can be improved. Applying these three aspects aims to enhance the robustness, discriminative power, and generalization capability of the activity recognition models, ultimately surpassing the accuracy levels achieved by the reference model, whose average accuracy score was 89.79%.
4 Methodologies
In this section, we will provide an overview of the main methods used to enhance accuracy in the process of detecting and recognizing the twelve targeted activities.
4.1 Data Acquisition and Segmentation
To commence with human activity recognition (HAR), it is essential to collect data on the subject’s daily activities in a practical and appropriate manner. This study utilized the embedded accelerometer and gyroscope sensors in smartphones running the Android OS to collect data on the subject's daily activities. An Android application was developed for the data acquisition phase, and subsequently, the user data was transferred via Bluetooth to a laptop equipped with MATLAB for processing and analysis within the HAR ecosystem, as illustrated in Fig. 1.
During this phase, the application reads data from the sensors and saves it in a text file format, as depicted in Fig. 2; the first, second, third, and fourth columns (separated by commas) represent time and the x, y, and z coordinates, respectively.
The text files are then transferred via Bluetooth to a PC for processing, feature extraction, and classification. The experiment aims to recognize twelve activities (attaching to a table, standing, walking, laying, sitting, jogging, jumping, push-ups, stairs up, stairs down, bicycling, and running). It uses a reference model, and the number of samples in the reference model was expanded to include data collected from 102 volunteers aged between 18 and 43 years. The experiment was carried out without any restrictions on time or the order of activities. Three-axial linear acceleration and three-axial angular velocity were used to capture speed and direction values. The values were recorded at a constant rate of 60 Hz, with a window segment covering 102 samples (expanded from the reference model). The datasets were divided into two groups: 40 samples were used for training data, and 62 for testing data. Each activity or movement pattern exhibits a unique signature in terms of x, y, z coordinates over time. Therefore, the training data were manually labeled based on a standard pattern observed in time segments. For instance, the first minute of a certain training data sample might represent 'walking', while the second minute represents 'standing', and so on. Finally, the collected data were segmented and analyzed, with a comparison conducted between processing and analyzing the fragmented signal versus the whole signal, focusing on processing time and accuracy [28].
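To make the segmentation procedure concrete, the sketch below shows one possible way to load a comma-separated sensor log (time, x, y, z, as in Fig. 2) and cut it into fixed-length windows. It is an illustrative Python/NumPy sketch rather than the MATLAB pipeline used in this work; the file name is hypothetical, and the 102-sample window length mirrors the value reported above.

```python
import numpy as np

def load_sensor_log(path):
    """Load a comma-separated log whose columns are time, x, y, z (cf. Fig. 2)."""
    data = np.loadtxt(path, delimiter=",")
    return data[:, 0], data[:, 1:4]          # timestamps, (N, 3) accelerations

def segment(signal, window=102, step=102):
    """Cut an (N, 3) signal into non-overlapping fixed-length window segments."""
    segments = [signal[s:s + window] for s in range(0, len(signal) - window + 1, step)]
    return np.stack(segments) if segments else np.empty((0, window, signal.shape[1]))

# Example usage (hypothetical file name):
# t, acc = load_sensor_log("subject01_walking.txt")
# windows = segment(acc)                     # shape: (num_windows, 102, 3)
```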
4.2 Pre-processing
One of the most crucial aspects of HAR is the raw-signal pre-processing phase, which involves normalizing, filtering, selecting features, and extracting features from the raw data. A combination of time- and frequency-domain features was used. In this work, a set of selected statistical features from the time domain, such as mean, median, variance, range, average, and standard deviation (SD), was utilized. Additionally, a group of features from the frequency domain, such as energy, correlation, velocity, acceleration, fundamental frequencies using the discrete Fourier transform (DFT), and signal peaks using the power spectral density (PSD) [29, 30], was employed, as shown in Table 1.
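As a minimal illustration of the time-domain statistics listed above, the following sketch computes per-axis mean, median, variance, range, and standard deviation for one window segment. It assumes a (window_length, 3) NumPy array and is not tied to the exact feature set of Table 1, which is not reproduced here.

```python
import numpy as np

def time_domain_features(window):
    """Per-axis time-domain statistics for one (window_len, 3) segment."""
    feats = {}
    for i, axis in enumerate("xyz"):
        s = window[:, i]
        feats[f"mean_{axis}"] = float(s.mean())
        feats[f"median_{axis}"] = float(np.median(s))
        feats[f"var_{axis}"] = float(s.var())
        feats[f"range_{axis}"] = float(s.max() - s.min())
        feats[f"std_{axis}"] = float(s.std())
    return feats
```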
The DFT is applied to obtain the frequency spectrum of the discrete data signal x, as shown in the equation below:
$$ X\left( f \right) = \mathop \sum \limits_{n = 0}^{N - 1} x\left( n \right) \cdot e^{ - j2\pi fn/N} $$(1)
In the DFT, X represents the frequency spectrum, f stands for the Fourier coefficient index in the frequency domain, and N is the length of the sliding window. X(f) is the frequency-domain representation of a discrete signal, while x(n) represents the signal in the time domain. The DFT transforms a sequence of complex numbers x(n), the signal in the time or spatial domain, into another sequence of complex numbers X(f), the representation of the signal in the frequency domain. The variable n is an index into the time-domain signal, and f is an index into the frequency-domain representation. The transformation is achieved by multiplying the time-domain signal by a set of complex exponential functions that represent different frequencies and summing the results. Euler’s number e is a mathematical constant that is the base of the natural logarithm [31]. In this work, the PSD is obtained by squaring the summation of the spectral coefficients, normalized by the sliding-window length, as shown in the equation below:
The association between the highest calculated density of the power spectrum and the peak frequency of that given signal is measured, where a and b are the orthogonal components of accelerations [32].
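The frequency-domain quantities described above can be illustrated with a short NumPy sketch: the DFT spectrum of a windowed signal, its energy, and the peak frequency taken from a periodogram-style PSD normalized by the window length. The exact normalization used in Eq. (2) is not reproduced in the text, so the periodogram below is an assumption for illustration only.

```python
import numpy as np

def frequency_domain_features(signal, fs=60.0):
    """Spectral descriptors of a 1-D windowed signal sampled at fs Hz."""
    N = len(signal)
    X = np.fft.rfft(signal)                      # one-sided DFT spectrum
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    psd = (np.abs(X) ** 2) / N                   # periodogram-style PSD (assumed normalization)
    return {
        "energy": float(np.sum(np.abs(X) ** 2) / N),
        "peak_freq": float(freqs[np.argmax(psd)]),   # frequency at the PSD peak
        "peak_power": float(psd.max()),
    }
```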
During the data acquisition phase, hardware limitations could lead to slight deviations from the 60 Hz sample rate, which in turn produce inaccurate classification results. To overcome this issue, re-sampling algorithms were used to keep the sampled data at a constant 60 Hz. Built-in Matlab functions, such as smooth, sort, acc, varfun, pca, and horzcat, were employed in this work to normalize the obtained data signals. The second step, linearization, represents the last point in the extraction phase and aims to deal with any discrepancy in the magnitude values of the obtained raw data. To do so, the magnitude of the acceleration vector was calculated as the Euclidean magnitude of the x, y, and z axes using the following equation:
$$ {\text{magnitude}} = \sqrt {x^{2} + y^{2} + z^{2} } $$(3)
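A possible NumPy equivalent of the re-sampling and magnitude steps is sketched below: linear interpolation onto a uniform 60 Hz time base stands in for the MATLAB built-ins mentioned above, and the magnitude follows Eq. (3). The interpolation choice is an assumption, not the authors' exact routine.

```python
import numpy as np

def resample_to_rate(t, xyz, fs=60.0):
    """Re-grid irregularly sampled axes onto a uniform fs-Hz time base (linear interpolation)."""
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    cols = [np.interp(t_uniform, t, xyz[:, i]) for i in range(xyz.shape[1])]
    return t_uniform, np.column_stack(cols)

def acceleration_magnitude(xyz):
    """Euclidean magnitude of the x, y, and z axes, as in Eq. (3)."""
    return np.sqrt(np.sum(xyz ** 2, axis=1))
```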
4.3 Classification
In this section, improved versions of the supervised learning classifiers were developed. Based on the reference model, a set of SVM algorithm classifiers was selected, including linear SVM, cubic SVM, coarse Gaussian SVM, medium Gaussian SVM, fine Gaussian SVM, and quadratic SVM. Additionally, another set of k-NN algorithm classifiers was selected, including linear k-NN, cubic k-NN, coarse k-NN, medium k-NN, cosine k-NN, and weighted k-NN. The enhancement procedure applied to the twelve classifiers involved maximizing the margins of each classifier. The parameters listed in Table 2 were used to implement enhanced versions of the various SVM algorithms, including LSVM, CSVM, CGSVM, MGSVM, FGSVM, and QSVM. These parameters include input data (\(x\)), predicted label (\(y\)), learning rate (\(\alpha \)), regularization parameter (\(\lambda \)), kernel width (\(\sigma \)), number of data points (\(n\)), weights (\(w\)), updated weights (\(w^{\prime }\)), bias (\(b\)), and updated bias (\(b^{\prime }\)), as shown in Table 2. Input data and predicted labels are essential for training and testing the SVM models. The learning rate and regularization parameter are used to control the convergence of the model and prevent overfitting. The kernel width is used in the Gaussian kernel function to measure the similarity between input data points. The number of data points is necessary to estimate the gradient of the loss function. Weights and bias define the decision boundary of the model. Finally, the updated weights and updated bias are used in the optimization process to improve the model’s performance. By manipulating these parameters, different SVM algorithms can be developed to solve classification and regression problems in various domains. The kernel width (\(\sigma \)) parameter is used in the coarse Gaussian SVM, which is also known as the radial basis function (RBF) SVM. This SVM algorithm uses a Gaussian kernel to map the input data into a high-dimensional space where it becomes linearly separable, allowing the SVM to classify the data accurately. The value of \(\sigma \) in the Gaussian kernel determines the width of the kernel and therefore affects the smoothness of the decision boundary. A larger \(\sigma \) value results in a smoother decision boundary, while a smaller \(\sigma \) value results in a more complex and wiggly decision boundary. However, other SVM algorithms use different types of kernels, such as linear, polynomial, or other non-linear kernels. These kernels do not have a width parameter like the Gaussian kernel and thus do not require a value of \(\sigma \) to be specified.
Table 2 provides a comprehensive overview of the parameters utilized across the enhanced versions of the various support vector machine (SVM) algorithms, namely linear SVM (LSVM), cubic SVM (CSVM), coarse Gaussian SVM (CGSVM), medium Gaussian SVM (MGSVM), fine Gaussian SVM (FGSVM), and quadratic SVM (QSVM). The parameters include '\(x\)' for input data, '\(y\)' for the predicted label, and '\(\alpha \)' for the learning rate, all of which are common across algorithms, indicating their fundamental role in SVM processing. The regularization parameter '\(\lambda \)' is also universally applied, underscoring its importance in preventing overfitting by controlling the complexity of the model. Interestingly, the kernel width '\(\sigma \)' is specific to CGSVM, highlighting its unique requirement for this parameter in adjusting the decision boundary flexibility. The number of data points '\(n\)' and the initial weights '\(w\)' and bias '\(b\)' are essential across all models for defining the dataset size and starting conditions for optimization, respectively. Finally, '\(w^{\prime }\)' and '\(b^{\prime }\)' represent the updated weights and bias after training, indicating the iterative improvement process inherent to SVM training. This detailed parameter list showcases the versatility and customization of SVM algorithms for various data types and problem complexities, with specific adjustments such as '\(\sigma \)' for CGSVM tailoring the approach to the algorithm's unique needs.
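For orientation, the six SVM presets discussed above correspond roughly to the kernel configurations below in a general-purpose library such as scikit-learn. The regularization constant C and the kernel coefficient gamma (which plays the role of \(1/(2\sigma^{2})\)) are illustrative placeholders, not the settings used in this work.

```python
from sklearn.svm import SVC

# Approximate analogues of the six SVM variants; C and gamma values are placeholders.
svm_variants = {
    "linear_svm":          SVC(kernel="linear", C=1.0),
    "quadratic_svm":       SVC(kernel="poly", degree=2, C=1.0),
    "cubic_svm":           SVC(kernel="poly", degree=3, C=1.0),
    "fine_gaussian_svm":   SVC(kernel="rbf", gamma=10.0, C=1.0),   # small sigma -> wiggly boundary
    "medium_gaussian_svm": SVC(kernel="rbf", gamma=1.0, C=1.0),
    "coarse_gaussian_svm": SVC(kernel="rbf", gamma=0.1, C=1.0),    # large sigma -> smooth boundary
}

# Usage: svm_variants["cubic_svm"].fit(X_train, y_train).score(X_test, y_test)
```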
4.3.1 Linear SVM
To find the hyperplane that maximally separates the data points, the SVM algorithm searches for the decision boundary that has the largest margin, or distance, between the nearest data points of different classes. This is achieved by optimizing an objective function that maximizes the margin while ensuring that all data points are classified correctly. The optimization problem can be solved using quadratic programming techniques [33]. Once the SVM model is trained, it can be used to make predictions on new, unseen data. To make a prediction, the new data point is plugged into the SVM model, which returns a positive or negative label depending on which side of the hyperplane the data point falls on [34].
Here are the steps and Eqs. (4), (5) and (6) that were followed in this research to develop the enhanced linear SVM model:
(1) Collect the training data and label the data points as positive or negative, depending on their class.
(2) Select a kernel function and specify any hyper-parameters, if applicable. For a linear SVM, the kernel function is simply the dot product of the input data points.
(3) Initialize the SVM model with the kernel function and hyper-parameters.
(4) Train the SVM model on the training data. This involves finding the hyperplane that maximally separates the data points of different classes.
(5) Use the trained SVM model to make predictions on new, unseen data. To make a prediction, the new data point is plugged into the SVM model, which returns a positive or negative label depending on which side of the hyperplane the data point falls on.
$$ {\text{margin}} = y \cdot \left( {w \cdot x + b} \right) $$(4)
$$ w^\prime = \left( {1 - \alpha \cdot \lambda } \right) \cdot w + \alpha \cdot y \cdot x $$(5)
$$ b^\prime = b + \alpha \cdot y $$(6)
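A minimal Python sketch of the update rule expressed by Eqs. (4)-(6) is given below, applying the weight and bias updates whenever the margin constraint is violated (a standard hinge-loss interpretation). Labels are assumed to be in {-1, +1}, and the learning rate, regularization strength, and epoch count are placeholders rather than the values used in this work.

```python
import numpy as np

def train_linear_svm(X, y, alpha=0.01, lam=0.01, epochs=100):
    """Margin-based updates following Eqs. (4)-(6); y must contain -1/+1 labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            margin = y_i * (np.dot(w, x_i) + b)                    # Eq. (4)
            if margin < 1:                                         # margin violated
                w = (1 - alpha * lam) * w + alpha * y_i * x_i      # Eq. (5)
                b = b + alpha * y_i                                # Eq. (6)
            else:
                w = (1 - alpha * lam) * w                          # regularization-only shrink
    return w, b

def predict_linear_svm(X, w, b):
    """Label a batch of points by the side of the hyperplane they fall on."""
    return np.sign(X @ w + b)
```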
4.3.2 Cubic SVM
To employ a cubic SVM, we had to specify the cubic kernel function as the kernel to be used in the SVM model. In addition, the hyper-parameters of the SVM, such as the regularization parameter and the kernel coefficient, were tuned in this work to find the optimal values for the data. In the case of a cubic SVM, the optimization problem is to find a hyperplane that maximizes the margin, subject to the constraint that the data points must lie on the correct side of the hyperplane [35, 36]. This can be done using a variant of the SVM algorithm known as the "cubic SVM."
Here are the steps and Eqs. (7), (8) and (9) that were followed in this research to develop the enhanced cubic SVM model:
(1) Load the training data, consisting of input vectors x and corresponding labels y.
(2) Initialize the cubic SVM model with some parameters, such as the regularization constant C and the kernel function.
(3) For each training example (x, y):
   (a) Compute the prediction of the model for the input x.
   (b) If the prediction is incorrect, update the model parameters to reduce the error.
(4) Repeat steps 3a and 3b until the model has correctly classified all the training examples or until a specified number of iterations has been reached.
(5) Once the model is trained, it can be used to make predictions on new input data by applying the learned parameters to the kernel function and computing the prediction.
$$ {\text{margin}} = y \cdot \left( {w \cdot x^{3} + b} \right) $$(7)
$$ w^\prime = \left( {1 - \alpha \cdot \lambda } \right) \cdot w + \alpha \cdot y \cdot x^{3} $$(8)
$$ b^\prime = b + \alpha \cdot y $$(9)
4.3.3 Coarse Gaussian SVM
The "coarse" in coarse Gaussian SVM refers to the fact that the algorithm only considers a small number of data points when making predictions, rather than considering all of the points in the dataset [37]. This can help reduce the computational complexity of the algorithm, making it faster to train and easier to scale to large datasets. However, it can also potentially lead to less accurate predictions, as the algorithm is not considering as much of the data when making its decisions [38]. To overcome that issue, we applied an improvement over the base algorithm model. Here are the steps and Eqs. (10), (11) and (12) that were followed in this research to develop the enhanced coarse Gaussian SVM model:
(1) Collect and preprocess the training data: as with any machine learning task, the first step is to collect and prepare the training data. This includes gathering a dataset of labeled examples and preparing them for use in the model. Pre-processing steps may include cleaning the data, handling missing values, scaling numerical features, and encoding categorical features.
(2) Select a kernel function and set the hyper-parameters: a modified version of the Gaussian kernel function is used in coarse Gaussian SVM, which takes a \(\sigma \) value, but you will need to choose the values of the hyper-parameters that control the shape of the kernel. These hyper-parameters can have a significant impact on the model’s performance, so it is important to choose them carefully.
(3) Define the optimization problem: to find the hyperplane that maximizes the margin, you will need to define an optimization problem that minimizes the distance between the data points and the hyperplane, subject to the constraint that the data points are correctly classified. This optimization problem can be solved using an optimization algorithm such as gradient descent or a quadratic programming solver.
(4) Train the model: once you have defined the optimization problem, you can use an optimization algorithm to find the solution that maximizes the margin. This will result in a trained coarse Gaussian SVM model that is able to correctly classify the training data and has a large margin between the hyperplane and the data points.
(5) Test the model: once the model has been trained, you can test its performance on a separate dataset of test examples. This will help you to determine how well the model generalizes to new data and can give you an idea of its overall accuracy.
(6) Fine-tune the model: if the model’s performance is not satisfactory, you can try adjusting the hyper-parameters or adding more training data to see if this improves the model’s accuracy. You can also try using a different kernel function or a different optimization algorithm to see if these changes result in a better model.
$$ {\text{margin}} = y \cdot \left( {w \cdot {\text{Gaussian}}\left( {x,x^\prime ,\sigma } \right) + b} \right) $$(10)
where
x and x′ are two input data points, and \(\sigma \) is the kernel width.
4.3.4 Medium Gaussian SVM
A Gaussian support vector machine uses a Gaussian kernel function to classify data points. This function is a measure of similarity between two data points and is defined by a smoothing parameter called the bandwidth [39]. In this type of SVM, the kernel function transforms the data points into a higher-dimensional space, making it easier to find the optimal hyperplane. The term "medium" refers to the bandwidth of the Gaussian kernel function, which determines the smoothness of the decision boundary. A smaller bandwidth value results in a more complex, wiggly decision boundary, while a larger bandwidth value results in a smoother, more linear decision boundary [40]. Equations (13), (14) and (15) were followed in this research to develop the enhanced medium Gaussian SVM model:
4.3.5 Fine Gaussian SVM
The fine Gaussian SVM model utilizes a Gaussian radial basis function (RBF) as the kernel function to transform the original data into a higher-dimensional space, where a hyperplane can be identified to separate different classes of activities. The model then employs a fine-tuning process to optimize the parameters of the RBF kernel and the SVM algorithm, leading to improved performance in recognizing complex human activities. The fine Gaussian SVM has been shown to outperform other traditional machine learning methods, making it a valuable tool for researchers and practitioners in the field of human activity recognition [41, 42]. To maximize the margin in a fine Gaussian SVM, the model needs to find the optimal hyperplane that separates the different classes in the dataset as much as possible. It also needs to strike the optimal balance between the complexity of the decision boundary and the size of the margin. This is usually achieved through the use of an optimization algorithm such as gradient descent. The algorithm adjusts the model parameters to minimize the loss function, resulting in the maximum margin. Equations (16), (17) and (18) were followed in this research to develop the enhanced fine Gaussian SVM model:
where:
x and x′ are two input data points, and the Gaussian kernel function is K(x, x′).
4.3.6 Quadratic SVM
A quadratic support vector machine is a type of SVM that uses a quadratic kernel function to classify data points. The quadratic kernel function is defined as in Eq. (19). In a quadratic SVM, the kernel function is used to transform the data points into a higher-dimensional space, where it is easier to find the optimal hyperplane. The quadratic kernel function is often used when the data points are not linearly separable in the original feature space. By transforming the data points into a higher-dimensional space using the quadratic kernel function, the SVM is able to find a non-linear decision boundary that can better separate the different classes [43, 44].
where:
x and x′ are two data points.
Quadratic kernel function: \(K\left( {x, x^{\prime } } \right)^{2} \)
The steps and Eqs. (20), (21) and (22) were followed in this research to develop the enhanced quadratic SVM model:
(1) Initialize the SVM model with weights w and bias b.
(2) Loop until the model has converged (i.e., the loss has stopped decreasing or the maximum number of iterations has been reached).
(3) Loop through each training example in the dataset.
(4) Predict the label y for the current training sample using the SVM model. This is done by first transforming the data point x using the kernel function, then using the resulting transformed data point to compute y in Eq. (21).
(5) Calculate the loss for the current training samples using the hinge loss function and then update the model parameters w and b using gradient descent:
(6) Return the SVM model (w, b) once the model has converged.
$$ {\text{margin}} = y \cdot w \cdot K\left( {x, x^\prime } \right)^{2} + b $$(20)
$$ w^{\prime } = \left( {1 - \alpha \cdot \lambda } \right) \cdot w + \alpha \cdot y \cdot K\left( {x, x^{\prime } } \right)^{2} $$(21)
$$ b^\prime = b + \alpha \cdot y $$(22)
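The cubic, Gaussian, and quadratic variants in Sects. 4.3.2-4.3.6 follow the same margin/weight/bias pattern with a kernel term in place of the raw input (Eqs. (7)-(12) and (20)-(22)). One way to realize this is a dual-form (kernel-perceptron-style) update that keeps one coefficient per training point, sketched below with interchangeable kernels. This is an interpretation for illustration, not the exact MATLAB routines used in this work, and the hyper-parameter values are placeholders.

```python
import numpy as np

# Interchangeable kernels for the cubic, Gaussian (coarse/medium/fine via sigma), and quadratic variants.
def cubic_kernel(x, z):
    return np.dot(x, z) ** 3

def gaussian_kernel(x, z, sigma=1.0):                 # sigma = kernel width
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def quadratic_kernel(x, z):
    return np.dot(x, z) ** 2

def train_kernel_classifier(X, y, kernel, alpha=0.01, lam=0.01, epochs=50):
    """Dual-form margin updates: one coefficient per training point; y must contain -1/+1 labels."""
    n = len(X)
    coef, b = np.zeros(n), 0.0
    for _ in range(epochs):
        for i in range(n):
            decision = sum(coef[j] * y[j] * kernel(X[j], X[i]) for j in range(n)) + b
            if y[i] * decision < 1:                   # margin violated
                coef[i] = (1 - alpha * lam) * coef[i] + alpha
                b += alpha * y[i]
            else:
                coef[i] *= (1 - alpha * lam)          # regularization-only shrink
    return coef, b
```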
Table 3 lists the parameters used in the implementation of enhanced versions of various KNN algorithms, including LKNN, CUKNN, CKNN, COKNN, MKNN, and WKNN. The first parameter, "d distance," is used in all the algorithms to measure the distance between data points. The second parameter, "k number of data points," is used to determine the number of neighboring data points to consider when making a prediction. The third parameter, "x input data set," refers to the dataset used for training and testing the KNN algorithms. The fourth parameter, "y labels," represents the labels or classes assigned to each data point. The fifth parameter, "distance = [] initial value of distance array," is used to initialize an empty array for storing distances between data points. This array is then populated during the training phase of the KNN algorithm. Finally, the "f function to maximize the margin" parameter is used in some of the enhanced KNN algorithms to optimize the classification boundary and improve the accuracy of the predictions. Overall, these parameters play a crucial role in the implementation of KNN algorithms and their enhancements, enabling accurate predictions for various machine learning applications. The listed parameters in the table are crucial elements in the development of improved versions of all the KNN classifiers. These parameters are essential for the implementation of KNN algorithms, and utilizing them, developers can enhance the accuracy of KNN predictions and boost the performance of these algorithms in different machine learning applications. Each parameter has a specific function, such as gaging the distance between data points, determining the number of neighboring points to consider, initializing arrays, and optimizing the classification boundary. With the help of these parameters, developers can design KNN algorithms that yield precise predictions while also being adaptable enough to handle various datasets and applications. The equations are employed in the implementation phase across all KNN models as one of the factors to enhance the classification accuracy level.
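For orientation, the six k-NN presets listed in Table 3 map roughly onto the distance metrics and weighting schemes below in a general-purpose library such as scikit-learn. The neighbor counts are illustrative placeholders, not the values used in this work.

```python
from sklearn.neighbors import KNeighborsClassifier

# Approximate analogues of the six k-NN variants; n_neighbors values are placeholders.
knn_variants = {
    "linear_knn":   KNeighborsClassifier(n_neighbors=10, metric="euclidean"),
    "cubic_knn":    KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "coarse_knn":   KNeighborsClassifier(n_neighbors=100, metric="euclidean"),
    "medium_knn":   KNeighborsClassifier(n_neighbors=10, metric="manhattan"),
    "cosine_knn":   KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "weighted_knn": KNeighborsClassifier(n_neighbors=10, weights="distance"),
}

# Usage: knn_variants["weighted_knn"].fit(X_train, y_train).predict(X_test)
```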
4.3.7 Linear k-NN
Linear KNN is a version of the K-nearest neighbors (KNN) algorithm that predicts the value of a test data point by combining the K-closest data points from the training set using a linear function. To make the prediction, the algorithm searches for the K-nearest data points in the training set that are closest to the test data point and applies a linear function to these K-nearest points. The term "linear" in "linear KNN" refers to the method used to compute the distance between data points, such as Euclidean or Manhattan distance, which are linear metrics, in contrast to non-linear metrics like cosine similarity [45, 46].
The steps and Eqs. (23), (24) and (25) were followed in this research to develop the enhanced linear k-NN model:
(1) Define the number of nearest neighbors K. This is a hyperparameter that needs to be tuned.
(2) Initialize an empty list to store the distances between X and the training points.
(3) Iterate through the training set and calculate the distance between X and each training point. The distance can be calculated using any distance metric, such as Euclidean distance or Manhattan distance.
(4) Sort the distances in ascending order and get the indices of the sorted distances. This is done so that we can easily select the K-nearest neighbors.
(5) Get the K-nearest neighbors by selecting the first K indices from the sorted list.
(6) Get the labels of the K-nearest neighbors.
(7) Get the weights for the K-nearest neighbors using a linear function. The weights can be calculated using any linear function, such as a linear combination of the inverse distances.
(8) Classify X based on the weighted sum of the labels of the K-nearest neighbors. In other words, we multiply the labels of the K-nearest neighbors by the weights and sum them up to get the predicted label for X.
$$ {\text{margin}} = d - \max \left( {d, s} \right) $$(23)
$$ f = \max \left( {{\text{margin}}} \right) $$(24)
$$ d = \sqrt {\mathop \sum \limits_{i = 1}^{k} \left( {x_{i} - z_{i} } \right)^{2} } $$(25)
where
x and z are the coordinates of the two samples.
Note that in Eq. (25) both coordinates must carry the summation index i, i.e., \(x_{i}\) and \(z_{i}\), so that each of the k dimensions of the feature space contributes one squared difference to the distance d.
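A compact Python sketch of the prediction procedure in steps (1)-(8), using the Euclidean distance of Eq. (25) and inverse-distance weights, is shown below. The neighbor count k and the small constant guarding against division by zero are assumptions for illustration.

```python
import numpy as np

def linear_knn_predict(X_train, y_train, x_query, k=5):
    """Inverse-distance-weighted k-NN vote following steps (1)-(8) above."""
    d = np.sqrt(np.sum((X_train - x_query) ** 2, axis=1))   # Eq. (25), per training point
    nearest = np.argsort(d)[:k]                             # indices of the k closest points
    weights = 1.0 / (d[nearest] + 1e-9)                     # linear weighting by inverse distance
    scores = {}
    for idx, w in zip(nearest, weights):
        scores[y_train[idx]] = scores.get(y_train[idx], 0.0) + w
    return max(scores, key=scores.get)                      # label with the largest weighted score
```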
4.3.8 Cubic k-NN
Cubic K-nearest neighbor (KNN) is an extension of the KNN algorithm that allows for more efficient and accurate classification of data points in high-dimensional space. Traditional KNN works by computing the distance between a query point and all other points in the dataset and selecting the K-nearest neighbors to determine the class of the query point. However, in high-dimensional space, this approach becomes computationally expensive and can suffer from the "curse of dimensionality," where the distance between points becomes less meaningful as the number of dimensions increases. Cubic KNN addresses this issue by transforming the data into a lower-dimensional space using a cubic non-linear transformation. This transformation preserves local distances between points while reducing the dimensionality of the data, resulting in faster and more accurate classification [47].
The cubic transformation used in cubic KNN is based on the "canonical cubic transformation" that maps high-dimensional data to a low-dimensional space while preserving pairwise distances. The transformation is calculated using a technique called "smoothing splines," which fits a smooth function to the data that minimizes the sum of squared distances between points. This function can then be used to transform new data points into the lower-dimensional space for classification using traditional KNN. Cubic KNN has been shown to outperform traditional KNN in high-dimensional datasets, particularly in cases where the number of dimensions is larger than the number of training examples [48]. In cubic KNN, the distance between two samples is calculated using a polynomial distance measure, such as the cubic distance formula:
where
(x1, z1) and (x2, z2) are the coordinates of the two samples.
The steps and Eqs. (23), (24) and (26) were followed in this research to develop the enhanced cubic k-NN model:
(1) Define the number of nearest neighbors K. This is a hyperparameter that needs to be tuned.
(2) Initialize an empty list to store the distances between X and the training points.
(3) Iterate through the training set and calculate the distance between X and each training point. The distance can be calculated using any distance metric, such as cubic distance or Minkowski distance.
(4) Sort the distances in ascending order and get the indices of the sorted distances. This is done so that we can easily select the K-nearest neighbors.
(5) Get the K-nearest neighbors by selecting the first K indices from the sorted list.
(6) Get the labels of the K-nearest neighbors.
(7) Find the K points in the retrieved data that are closest to the new data point.
(8) Determine the majority label among the K-nearest neighbors.
(9) Assign the majority label as the predicted label for the new data point.
4.3.9 Coarse k-NN
Coarse KNN is a variant of the K-nearest neighbors (KNN) algorithm that is designed to improve the efficiency of the algorithm by using coarser-grained data structures to store the training data [49]. Here are the steps and Eqs. (23), (24) and (27) that were followed in this research to develop the enhanced coarse k-NN model:
(1) Define the number of nearest neighbors K and the distance metric to use.
(2) Initialize an empty list to store the distances between X and the training points.
(3) Load the training data and labels.
(4) Iterate through the training set and calculate the distance between the new data point and each retrieved point using Minkowski distance.
(5) Sort the distances in ascending order and get the indices of the sorted distances. This is done to easily select the K-nearest neighbors.
(6) Get the K-nearest neighbors by selecting the first K indices from the sorted list.
(7) Get the labels of the K-nearest neighbors.
(8) Find the K points in the retrieved data that are closest to the new data point.
(9) Determine the majority label among the K-nearest neighbors.
(10) Assign the majority label as the predicted label for the new data point.
$$ d = \left( {\mathop \sum \limits_{i = 1}^{k} \left( {\left| {x_{2} - x_{1} } \right|^{3} + \left| {z_{2} - z_{1} } \right|^{3} } \right)} \right)^{1/3} $$(27)
where
(x1, z1) and (x2, z2) are the coordinates of the two samples.
4.3.10 Medium k-NN
In the KNN algorithm, the class of a new sample is determined by the class of its K-nearest neighbors in the feature space. However, in some cases, choosing a fixed value of K may not be optimal as it can lead to overfitting or underfitting of the data. To overcome this issue, the medium KNN algorithm selects a dynamic value of K based on the median distance of the Kth nearest neighbor of each training sample. This approach reduces the influence of outliers and results in better classification or regression accuracy. The medium KNN algorithm is particularly useful for datasets with non-uniform distributions or where the distance metric is not well-defined.
The medium KNN algorithm is straightforward to implement and can be used with various distance metrics, including Euclidean distance, Manhattan distance, and cosine similarity. One disadvantage of the medium KNN algorithm is that it requires relatively more computational resources than the standard KNN algorithm, especially for large datasets. Another drawback is that it may be sensitive to the choice of the median or the distance metric used to compute it. Therefore, it is important to tune these hyper-parameters carefully for optimal performance. Overall, the medium KNN algorithm is a robust and flexible approach for classification and regression tasks, particularly in scenarios where the optimal value of K is not known a priori [50].
The steps and Eqs. (23), (24) and (28) were followed in this research to develop the enhanced medium k-NN model:
(1) Collect and preprocess the training data. This might include tasks, such as cleaning and formatting the data, handling missing values, and scaling or normalizing the features.
(2) Split the training data into a training set and a validation set. The training set is used to build the model, while the validation set is used to evaluate the model’s performance.
(3) Choose a value for the hyperparameter k, which determines the number of nearest neighbors that are used to make a prediction.
(4) Train the KNN model by increasing the margin of the training data using Manhattan distance.
(5) Use the trained KNN model to make predictions on new input data. To do this, the algorithm will find the k-nearest neighbors to the input data in the training set and use those neighbors to make a prediction.
(6) Evaluate the performance of the KNN model on the validation set, using a metric such as accuracy or mean squared error.
(7) If the performance is not satisfactory, adjust the hyperparameter k or pre-processing steps and repeat the training and evaluation process until an acceptable level of performance is achieved.
$$d=\sum_{i=1}^{k}\left(\left|{x}_{2}-{x}_{1}\right|+\left|{z}_{2}-{z}_{1}\right|\right)$$(28)
where
(x1, z1) and (x2, z2) are the coordinates of the two samples.
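One way to realize the dynamic-K idea described above is to take the median distance to the k0-th nearest neighbor over the training set and then, at query time, vote among all training points that fall within that median radius, so the effective number of neighbors adapts to the query. The sketch below uses the Manhattan distance of Eq. (28); the choice of k0 and the radius-based voting rule are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def median_kth_distance(X_train, k0=10):
    """Median Manhattan distance to the k0-th nearest neighbour over all training samples."""
    kth = []
    for i, x in enumerate(X_train):
        d = np.sum(np.abs(X_train - x), axis=1)              # Eq. (28)
        d[i] = np.inf                                         # exclude the point itself
        kth.append(np.sort(d)[k0 - 1])
    return float(np.median(kth))

def medium_knn_predict(X_train, y_train, x_query, radius):
    """Majority vote among all training points within the median radius (dynamic effective K)."""
    d = np.sum(np.abs(X_train - x_query), axis=1)             # Eq. (28)
    neighbours = np.where(d <= radius)[0]
    if neighbours.size == 0:                                  # fall back to the single closest point
        neighbours = np.array([np.argmin(d)])
    labels, counts = np.unique(y_train[neighbours], return_counts=True)
    return labels[np.argmax(counts)]
```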
4.3.11 Cosine k-NN
Cosine KNN is a variant of the k-nearest neighbors (KNN) algorithm that uses cosine similarity as the distance metric to measure the similarity between data points. Cosine similarity is a measure of similarity between two non-zero vectors that takes into account the angle between them, rather than the Euclidean distance. Like traditional KNN, cosine KNN works by finding the k data points in the training set that are most similar to the input data and then using those points to make a prediction or classify the input data. However, instead of using Euclidean distance as the distance metric, cosine KNN uses cosine similarity [51].
Cosine similarity is often used in information retrieval and natural language processing tasks, as it is able to capture the similarity between two documents or texts even when they use different vocabulary. It can also be more effective than Euclidean distance in high-dimensional spaces, where the distance between points can be distorted by the presence of many irrelevant features [52].
The following steps, together with Eqs. (23), (24) and (29), were followed in this research to develop the enhanced cosine k-NN model (an illustrative code sketch follows Eq. (29) below):
(1) Collect and preprocess the training data. This might include tasks such as cleaning and formatting the data, handling missing values, and scaling or normalizing the features.
(2) Split the training data into a training set and a validation set. The training set is used to build the model, while the validation set is used to evaluate the model’s performance.
(3) Choose a value for the hyperparameter k, which determines the number of nearest neighbors that are used to make a prediction.
(4) Train the cosine KNN model by storing the training data in a data structure such as a k-d tree or a ball tree.
(5) Use the trained cosine KNN model to make predictions on new input data. To do this, the algorithm finds the k-nearest neighbors to the input data in the training set using cosine similarity as the distance metric and uses those neighbors to make a prediction.
(6) Evaluate the performance of the cosine KNN model on the validation set, using a metric such as accuracy or mean squared error.
(7) If the performance is not satisfactory, adjust the hyperparameter k or the pre-processing steps and repeat the training and evaluation process until an acceptable level of performance is achieved.
$$d=\text{cosine}({x}_{\text{test}}, {x}_{\text{train}})$$(29)
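A similarly hedged Python sketch for the cosine variant simply swaps the distance metric, so that Eq. (29) is evaluated between the test vector and each training vector; the data and the k value are again placeholders rather than the study's configuration.

```python
# Illustrative sketch: k-NN with the cosine distance of Eq. (29);
# the data and k value are placeholders, not the study's configuration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 6))      # placeholder training features
y_train = rng.integers(0, 12, size=200)  # placeholder activity labels

# scikit-learn's brute-force neighbor search supports the cosine metric directly
cos_knn = KNeighborsClassifier(n_neighbors=10, metric="cosine", algorithm="brute")
cos_knn.fit(X_train, y_train)

x_test = rng.normal(size=(1, 6))         # one new sample
print("predicted activity:", cos_knn.predict(x_test)[0])
```

As a design note, tree-based neighbor indexes such as k-d or ball trees generally assume a true metric, so cosine similarity is typically paired with a brute-force search or with L2-normalized vectors indexed under the Euclidean metric.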
4.3.12 Weighted k-NN
Weighted K-nearest neighbors (KNN) is an altered form of the KNN algorithm that factors in the proximity of neighbors to the test point when generating predictions. Unlike the traditional KNN algorithm, which relies exclusively on the most common class among the K-closest neighbors, the weighted KNN algorithm uses a weighted average of the K-nearest neighbors, with the weight of each neighbor being inversely proportional to its distance from the test point. Weighted KNN may be beneficial in cases where the distance between the test point and its neighbors is a crucial aspect of prediction-making.
Additionally, it can be advantageous when working with imbalanced classes, as it grants higher significance to the underrepresented category if it’s closer to the test point [53, 54].
Equations (23), (30), (31) and (32) were followed in this research to alter the basic pseudo-code for weighted k-NN and to derive the new model (a hedged sketch of inverse-distance weighting follows below):
where (x1, z1) and (x2, z2) are the coordinates of the two samples.
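Since the specific weighting of Eqs. (30)–(32) is not reproduced here, the Python sketch below illustrates only the common inverse-distance weighting scheme described above, in which each neighbor's weight is proportional to 1/distance; it is a generic illustration under that assumption, not the authors' exact model.

```python
# Illustrative sketch: inverse-distance-weighted k-NN. The 1/d weighting used
# here is the standard scheme; the paper's Eqs. (30)-(32) may differ in detail.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 6))      # placeholder training features
y_train = rng.integers(0, 12, size=200)  # placeholder activity labels

# weights="distance" makes closer neighbors count more (weight ~ 1/distance),
# which also gives nearby minority-class samples more influence.
wknn = KNeighborsClassifier(n_neighbors=10, weights="distance", metric="euclidean")
wknn.fit(X_train, y_train)

x_test = rng.normal(size=(1, 6))
print("predicted activity:", wknn.predict(x_test)[0])
```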
5 Experimental Results
During the training data acquisition process for the experiment phase, we utilized a Samsung S10 smartphone running the Android operating system. The collected data were processed and analyzed on a Microsoft Surface Pro 4 laptop equipped with a 6th Generation Intel® Core™ i7-6650U CPU, 8 GB of RAM, a 256 GB SSD, and an Intel® Iris® Graphics 540 integrated graphics card, running the Windows 10 Pro operating system. For the normalization, pre-processing, and analysis phases, we primarily used Matlab 2022a. The reference model was updated by adding more samples, obtained from 102 individuals ranging in age from 18 to 43.
The study did not impose any limitations on the timing or activities of data collection, and the data sets were split into two groups. The training set consisted of 40 samples of the data, while the remaining 62 were allocated for testing. The average accuracy performance for each classifier was assessed across the 62 samples, representing the score for each individual activity. The activities with the lowest accuracy were jogging and running. Even the enhanced classifiers based on the proposed models had difficulty distinguishing between these two activities despite achieving high accuracy consistently across all cases and samples, as shown in Table 4 and Fig. 3.
To address the accuracy issue in detecting jogging and running within the reference model, data were collected from the gyroscope and accelerometer sensors. The key threshold used to differentiate between the two activities was identified as the step count (a hypothetical sketch of such a step-count check follows).
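One plausible way to implement such a step-count check is to detect peaks in the accelerometer magnitude and compare the resulting cadence against a cutoff, as in the hypothetical Python sketch below; the sampling rate, the peak-detection parameters, and the 165 steps-per-minute threshold are illustrative assumptions, not values reported in this study.

```python
# Hypothetical illustration of separating jogging from running by cadence
# (steps per minute). Sampling rate, peak parameters, and the 165 steps/min
# cutoff are assumptions for demonstration only.
import numpy as np
from scipy.signal import find_peaks

def classify_by_cadence(acc_xyz, fs_hz, cutoff_spm=165.0):
    """acc_xyz: (N, 3) accelerometer samples; fs_hz: sampling rate in Hz."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    magnitude -= magnitude.mean()                       # remove the gravity offset
    # treat each detected peak in the magnitude signal as one step
    peaks, _ = find_peaks(magnitude, height=1.0, distance=int(0.25 * fs_hz))
    duration_min = len(magnitude) / fs_hz / 60.0
    cadence = len(peaks) / duration_min                 # steps per minute
    return ("running" if cadence > cutoff_spm else "jogging"), cadence

# Example with synthetic data standing in for a 10 s window sampled at 50 Hz;
# the 2.8 Hz vertical component mimics a cadence of about 168 steps per minute.
fs = 50
t = np.arange(0, 10, 1 / fs)
acc = np.stack([0.1 * np.random.randn(t.size),
                0.1 * np.random.randn(t.size),
                9.81 + 3.0 * np.sin(2 * np.pi * 2.8 * t)], axis=1)
print(classify_by_cadence(acc, fs))
```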
Table 5 displays the average accuracy results and processing times of the various classification models. The accuracy metric indicates the average performance of the models in predicting the correct output, while the average processing time metric reflects the time each model typically requires to process and classify the data.
Based on Table 5, it can be observed that the model with the highest accuracy score is WKNN, scoring 0.98645, which is significantly higher than the other models. This implies that WKNN has a higher probability of making accurate predictions than the other models, as shown in Fig. 4. WKNN achieved the highest accuracy score because weight values were assigned to every incremental point in the margin during the research, and predictions were generated as a weighted average over the labels of the k nearest training samples, with the k value fixed during the experimental phase.
In terms of processing time, CGSVM is the fastest model at 42 s, followed closely by CKNN, CSVM, and FGSVM, all at 47 s. Although WKNN has the highest accuracy score, it also has the longest processing time, 51 s, longer than that of all other models. This observation suggests that the trade-off between accuracy and processing time should be considered when selecting a classification model for a particular application. Overall, Table 5 provides a useful summary of the performance of the different classification models in terms of accuracy and processing time and can be used to select the model that best balances the two, depending on the application's specific requirements. The error rate per activity is calculated according to Eq. (33),
where PE is the percentage error, EN is the estimated number, and AN is the actual number.
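As a hypothetical worked example (assuming the conventional percentage-error form, since Eq. (33) is not reproduced here), a model that estimates EN = 59 correctly classified samples when the actual count is AN = 62 would have
$$\mathrm{PE}=\frac{\left|\mathrm{EN}-\mathrm{AN}\right|}{\mathrm{AN}}\times 100\%=\frac{\left|59-62\right|}{62}\times 100\%\approx 4.84\%$$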
The error rate calculations in Table 6 summarize the performance of the various classification models; the results are expressed as the error rate percentage calculated over all the test samples, and the error rates of the different classification models are also illustrated in Fig. 5. The average accuracy results for both classifier model types over the 62 samples, along with the precision, recall, and F1 score, are shown in Table 7. A MATLAB function called "statsOfMeasure" was utilized to calculate the precision, recall, and F1 score.
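The statsOfMeasure routine referred to above is a MATLAB helper; as a rough, hedged analogue, the same aggregate metrics can be computed in Python with scikit-learn, as sketched below with placeholder labels rather than the study's predictions.

```python
# Illustrative analogue of computing precision, recall, and F1 score from
# predicted vs. true activity labels; the arrays below are placeholders.
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

y_true = ["walking", "running", "jogging", "walking", "cycling", "running"]
y_pred = ["walking", "jogging", "jogging", "walking", "cycling", "running"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
print(confusion_matrix(y_true, y_pred))
```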
In the experimental setup, parameters such as the number of nearest neighbors (k), the weights, and the bias were configured during the implementation of the classifiers, as shown in Table 8. These configurations were based on Mathworks’ standard recommendations for Matlab [55]. However, it is important to note that different processing times could have been obtained if the k value had been set to 1 for CKNN and, similarly for WKNN, if the standard threshold for k had been raised above 1. While accuracy might have improved, it would have come at the cost of increased processing time.
Table 9 provides a comparative analysis of different studies in the field of HAR by outlining the methods used, data sources, number of activities detected, and the accuracy achieved. It highlights a variety of approaches, including deep neural networks (DNN), artificial neural networks (ANN), deep learning (DL), convolutional neural networks (CNN), and traditional machine learning (ML) techniques. The data sources range from specific datasets such as WISDM, DAAD, Opportunity, UniMiB-SHAR, PAMAP2, and UCI_DB to real data collected from volunteers. The number of detected activities varies across studies, from as few as 5 to as many as 18, showcasing the diversity in the scope of HAR applications. The reported accuracy rates span from 86.37% to 99.5%, indicating significant success in activity recognition using these methods. This table effectively underscores the advancements in HAR through the use of sophisticated algorithms and diverse data sources, demonstrating the potential for high accuracy in activity recognition across various contexts.
Shdefat et al. chose SVM and KNN classifiers for their work on human activity recognition because both methods have demonstrated substantial effectiveness in pattern recognition tasks. Their research aimed at leveraging these capabilities to accurately classify twelve physical activities, making the choice of these classifiers strategic for their objectives. The study found that SVM models provided slightly faster processing times, while KNN models showed slightly higher accuracy. This complementarity suggests that combining these methods could leverage the strengths of each, achieving a balance between speed and accuracy. By optimizing these classifiers, the authors aimed to enhance performance in real-world scenarios, making their application more robust for IoT-based human activity recognition. The choice of SVM and KNN is justified by their adaptability and potential for enhancement, which was borne out by the research outcomes. The development of IoT-based HAR applications focuses on the accurate classification of activities, which is crucial for applications including health monitoring and elderly care, and the nuanced capabilities of SVM and KNN in handling pattern recognition tasks make them suitable candidates for tackling these challenges.
For the SVM models, this research was conducted with the constraint of setting the parameter w value to zero. The prediction results for each data point were categorized under one of the class activities during the classification process, contributing to each class weight counter.
On the other hand, the bias was set to the standard value b = 0, but it could be modified, for example, to a value around 1, indicating a representation of one of the class activities. In such a scenario, the classification process would be based on the referenced class with the b = 1 value. While this might lead to a slight improvement in accuracy, it would come at the expense of significantly increased processing time as a potential drawback. The results of classifying the accuracy of user daily activities obtained from smartphones using different classifiers are shown in Fig. 6. The average accuracy results of SVM models were found to be 95.88%, while the average accuracy results of KNN models were found to be 97.079% in the proposed model. In comparison, the average accuracy percentages for SVM and KNN in the reference model were 87.54 and 86.13%, respectively. These results indicate that the KNN models slightly outperform the SVM models in accurately classifying users’ daily activities. This finding is significant because precise classification of user activities can yield valuable insights into user behavior and assist in the development of personalized applications, such as personalized health and fitness apps. Therefore, the high accuracy achieved by both SVM and KNN models in classifying user daily activities represents a promising advancement in the field of smartphone-based activity recognition.
In addition to SVM and k-NN, the human activity recognition field employs various deep learning approaches, such as CNNs and recurrent neural networks (RNNs), which are adept at processing sequential and spatial–temporal data. Other algorithms, including decision trees, random forests, and ensemble methods, are valued for their robustness and interpretability. The data utilized in this domain, mainly time series from sensors such as accelerometers and gyroscopes, critically shapes model selection. High-quality, high-frequency data generally supports the use of sophisticated models like deep neural networks, which excel at recognizing complex patterns but may demand significant computational power and extensive data for training. The effectiveness and precision of these models depend on the quality and variability of the data, underscoring the importance of careful dataset preparation and pre-processing in developing effective activity recognition systems.
Figure 7 shows the average processing times of the different classifiers used to classify user daily activities obtained from smartphones. The proposed SVM models had an average processing time of 47.2 s, while the proposed KNN models had an average processing time of 50.3 s; in the reference model, the corresponding times were 47 s for SVM and 39 s for KNN, as shown in Fig. 7. These results indicate that the proposed SVM models exhibit a slightly faster processing time than the proposed KNN models. This finding is important because processing time is a critical factor in real-time activity recognition systems, and faster processing leads to more timely results. Therefore, the faster processing time of SVM models in classifying users’ daily activities represents a significant advantage in the field of smartphone-based activity recognition. However, it is worth noting that, owing to the expansion of the number of samples, the reference model has a slightly better processing time score than the proposed one.
The accelerometer and gyroscope sensors in a smartphone can measure the device’s acceleration and orientation in three-dimensional space. This means they can detect changes in the smartphone’s velocity and direction of movement, including rotational motions such as tilting or turning, and they provide data on the device’s angular velocity and orientation. The collected acceleration data finds applications in various fields, such as step counting, gesture recognition, gaming, virtual reality, and augmented reality. Typically, this data is measured in meters per second squared or millimeters per second squared and can be analyzed to gain insights into the user’s behavior and movement patterns, as shown in Fig. 8.
Figure 9 illustrates the plotted values of acceleration (in millimeters per second squared), velocity (in millimeters per second), and displacement (in millimeters) for the 102 data samples gathered from the accelerometer and gyroscope sensors. With the widespread use of smartphones, accelerometer and gyroscope sensors have become valuable tools for collecting and analyzing acceleration and orientation data across various industries, including entertainment, education, and healthcare. The energy plot of the accelerometer and gyroscope sensors in a smartphone, as depicted in Fig. 9, offers valuable insights into the physical movements and behaviors of users based on the 102 daily activity samples. This plot serves as a visual representation of the power spectrum of the signals produced by the sensors, providing information about their frequency content. It can be used to differentiate various types of physical activity and to monitor shifts in activity patterns over time; for instance, the energy plot can discern between activities such as walking, running, and cycling, while also identifying variations in the intensity of these activities throughout the day. Analyzing the energy plot of the accelerometer and gyroscope sensors can also yield insights into the health and well-being of users: alterations in activity patterns or a decrease in physical activity levels can serve as potential indicators of underlying health issues or the early stages of chronic disease. Through continuous monitoring of the energy plot, healthcare professionals can detect potential health concerns at an early stage and develop targeted interventions to enhance the health and well-being of patients. Furthermore, the energy plot can be combined with data from other sensors, such as heart rate monitors or GPS trackers, to gain a more comprehensive understanding of users’ physical activity patterns and behaviors. In summary, the energy plot generated by the accelerometer and gyroscope sensors within a smartphone offers a valuable tool for monitoring and enhancing the health and well-being of users.
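As an illustration of how such an energy (power-spectrum) view can be derived from raw inertial data, the hedged Python sketch below applies Welch's method to a synthetic accelerometer magnitude signal; the 50 Hz sampling rate, the window length, and the synthetic walking-like signal are assumptions for demonstration only, not the study's configuration.

```python
# Illustrative sketch: estimating the power spectrum ("energy plot") of an
# accelerometer magnitude signal with Welch's method. Sampling rate, window
# length, and the synthetic signal are placeholder assumptions.
import numpy as np
from scipy.signal import welch

fs = 50                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)              # one minute of data
# synthetic walking-like signal: ~1.8 Hz stride component plus noise
acc_mag = 9.81 + 1.5 * np.sin(2 * np.pi * 1.8 * t) + 0.2 * np.random.randn(t.size)

freqs, psd = welch(acc_mag - acc_mag.mean(), fs=fs, nperseg=256)
dominant = freqs[np.argmax(psd)]
print(f"dominant frequency: {dominant:.2f} Hz")   # expected near 1.8 Hz
```

In practice, differences in the dominant frequency and in the spread of spectral energy are what allow such a plot to separate activities of different intensity, as described above.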
6 Conclusion and Future Work
This study successfully enhanced the accuracy of classifying human activities using smartphone data by adjusting margins in SVM and KNN classifiers and optimizing sample sizes and window segments. It demonstrated that while KNN classifiers slightly outperformed SVMs in accuracy, the latter were faster, indicating a trade-off between speed and precision that is critical for real-world applications. The findings also highlight potential limitations due to the demographic focus of data collection and suggest avenues for future research in model and parameter optimization to improve activity recognition systems. By implementing the three proposed methods, this research achieved a higher level of precision than the reference model. The methods employed included augmenting the sample size, extending the window segment, and optimizing the margins of the classifiers. By increasing the margins of these classifiers, enlarging the sample size to 102, and extending the window segment to 60 Hz, it becomes possible to distinguish between different activities more accurately, leading to more effective applications in various fields. The research showed that WKNN had the highest accuracy score, while CGSVM had the fastest processing time; the trade-off between accuracy and processing time therefore needs to be considered when selecting a classification model for a particular application. The study also found that KNN models performed slightly better than SVM models in accurately classifying user daily activities, with an average accuracy score of 97.08%, while SVM models had a slightly faster processing time. These results offer promise for the development of personalized applications and real-time activity recognition systems. A notable limitation of the research on optimizing HAR systems through comparative analysis of enhanced SVM and k-NN classifiers is its focus on a specific age group (18 to 43 years) for data collection. This demographic constraint may limit the generalizability of the findings to wider populations, particularly to older adults or children, who may have different activity patterns. Future research could explore other classification models or optimize the parameters of existing models to enhance the accuracy and processing time of the activity recognition system. Additionally, investigating the impact of varying smartphone and operating system specifications on the performance of classification models could be an interesting area of future research.
Data Availability
Data will be provided upon request.
References
Suh, S., Rey, V.F., Lukowicz, P.: Tasked: transformer-based adversarial learning for human activity recognition using wearable sensors via self-knowledge distillation. Knowl.-Based Syst. 260, 110143 (2023). https://doi.org/10.1016/j.knosys.2022.110143. (ISSN 0950-7051)
Ismail, W.N., Alsalamah, H.A., Hassan, M.M., Mohamed, E.: Auto-HAR: an adaptive recognition framework using an automated CNN architecture design. Heliyon 9(2), e13636 (2023). https://doi.org/10.1016/j.heliyon.2023.e13636. (ISSN 2405-8440)
Dahou, A., Al-qaness, M.A.A., Elaziz, M.A., Helmi, A.: Human activity recognition in IoHT applications using arithmetic optimization algorithm and deep learning. Measurement 199, 111445 (2022). https://doi.org/10.1016/j.measurement.2022.111445. (ISSN 0263-2241)
Sarveshwaran, V., Joseph, I.T., Maravarman, M., Karthikeyan, P.: Investigation on human activity recognition using deep learning. Procedia Comput. Sci. 204, 73–80 (2022). https://doi.org/10.1016/j.procs.2022.08.009. ISSN 1877-0509. International Conference on Industry Sciences and Computer Science Innovation
Andrade-Ambriz, Y.A., Ledesma, S., Ibarra-Manzano, M.-A., Oros-Flores, M.I., Almanza-Ojeda, D.-L.: Human activity recognition using temporal convolutional neural network architecture. Expert Syst. Appl. 191, 116287 (2022). https://doi.org/10.1016/j.eswa.2021.116287. (ISSN 0957-4174)
Halim, N.: Stochastic recognition of human daily activities via hybrid descriptors and random forest using wearable sensors. Array 15, 100190 (2022). https://doi.org/10.1016/j.array.2022.100190. (ISSN 2590-0056)
Budiono, D.A., Utomo, K.S., Wibowo, K.J., Wiradinata, M.J.: Used car price prediction model: a machine learning approach. Int. J. Comput. Inf. Syst. (IJCIS) 5(1), 59–66 (2024). https://doi.org/10.29040/ijcis.v5i1.147
Saputro, P.H., Zalmi, W.F., Syahputra, R.: Performance testing of KNN and logistic regression algorithms in classifying heart disease susceptibility. Int. J. Comput. Inf. Syst. (IJCIS) 4(4), 140–144 (2023). https://doi.org/10.29040/ijcis.v4i4.133
Liu, Z., Letchmunan, S.: Representing uncertainty and imprecision in machine learning: a survey on belief functions. J. King Saud Univ. Comput. Inf. Sci. 36(1), 101904 (2024). https://doi.org/10.1016/j.jksuci.2023.101904. (ISSN 1319-1578)
Liu, Z., Letchmunan, S.: Representing uncertainty and imprecision in machine learning: a survey on belief functions. J. King Saud Univ. Comput. Inf. Sci. (2024). https://doi.org/10.1016/j.jksuci.2023.101904. (Online publication date: 1-Jan-2024. 10.1145/3638061)
Liu, Z., Huang, H., Letchmunan, S., Deveci, M.: Adaptive weighted multi-view evidential clustering with feature preference. Knowl.-Based Syst. (2024). https://doi.org/10.1016/j.knosys.2024.111770. (ISSN 0950-7051)
Liu, Z., Qiu, H., Letchmunan, S.: Self-adaptive attribute weighted neutrosophic c-means clustering for biomedical applications. Alex. Eng. J. 96, 42–57 (2024). https://doi.org/10.1016/j.aej.2024.03.092. (ISSN 1110-0168)
Sun, X., Xu, H., Dong, Z., Shi, L., Liu, Q., Li, J., Li, T., Fan, S., Wang, Y.: CapsGaNet: deep neural network based on capsule and GRU for human activity recognition. IEEE Syst. J. (2022). https://doi.org/10.1109/JSYST.2022.3153503
Wenzheng, Z.: Human activity recognition based on acceleration sensor and neural network. In: 2020 8th International Conference on Orange Technology (ICOT), pp. 1–5. 2020. https://doi.org/10.1109/ICOT51877.2020.9468785
Al-qaness, M.A.A., Dahou, A., Elaziz, M.A., Helmi, A.M.: Multi-ResAtt: multilevel residual network with attention for human activity recognition using wearable sensors. IEEE Trans. Ind. Inform. (2022). https://doi.org/10.1109/TII.2022.3165875
Uddin, M.H., Kanon Ara, J.M., Rahman, M.H., Yang, S.H.: Neural network pruning: an effective way to reduce the initial network for deep learning based human activity recognition. In: 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), pp. 1–4, 2021. https://doi.org/10.1109/ICECIT54077.2021.9641226
Jaberi, M., Ravanmehr, R.: Human activity recognition via wearable devices using enhanced ternary weight convolutional neural network. Pervas. Mob. Comput. 83, 101620 (2022). https://doi.org/10.1016/j.pmcj.2022.101620. (ISSN 1574-1192)
Suto, J., Oniga, S.: Efficiency investigation from shallow to deep neural network techniques in human activity recognition. Cognit. Syst. Res. 54, 37–49 (2019). https://doi.org/10.1016/j.cogsys.2018.11.009. (ISSN 1389-0417)
Nipas, M., Acoba, A.G., Mindoro, J.N., Malbog, M.A.F., Susa, J.A.B., Gulmatico, J.S.: Burned calories prediction using supervised machine learning: regression algorithm. In: 2022 Second International Conference on Power, Control and Computing Technologies (ICPC2T), pp. 1–4, 2022. https://doi.org/10.1109/ICPC2T53885.2022.9776710
Saravanan, M.S., Charan, S.: Prediction of insufficient accuracy for human activity recognition using convolutional neural network in compared with support vector machine. In: 2022 5th International Conference on Contemporary Computing and Informatics (IC3I), pp. 1915–1919, 2022. https://doi.org/10.1109/IC3I56241.2022.10072905
Shdefat, A.Y., Halimeh, A.A., Kim, H.C.: Human activities recognition via smartphones using supervised machine learning classifiers. Prim. Health Care Open Access (2018). https://doi.org/10.4172/2167-1079.1000289
Hong, N.T.T., Nguyen, G.L., Huy, N.Q., Manh, D.V., Tran, D.-N., Tran, D.-T.: A low-cost real-time IoT human activity recognition system based on wearable sensor and the supervised learning algorithms. Measurement 218, 113231 (2023). https://doi.org/10.1016/j.measurement.2023.113231. (ISSN 0263-2241)
Fan, C., He, W., Liao, L.: Real-time machine learning-based recognition of human thermal comfort-related activities using inertial measurement unit data. Energy Build. 294, 113216 (2023). https://doi.org/10.1016/j.enbuild.2023.113216. (ISSN 0378-7788)
Han, C., Zhang, L., Tang, Y., Huang, W., Min, F., He, J.: Human activity recognition using wearable sensors by heterogeneous convolutional neural networks. Expert Syst. Appl. 198, 116764 (2022). https://doi.org/10.1016/j.eswa.2022.116764. (ISSN 0957-4174)
Qian, H., Pan, S.J., Miao, C.: Weakly-supervised sensor-based activity segmentation and recognition via learning from distributions. Artif. Intell. 292, 103429 (2021). https://doi.org/10.1016/j.artint.2020.103429. (ISSN 0004-3702)
Cevikalp, H., Uzun, B., Köpüklü, O., Ozturk, G.: Deep compact polyhedral conic classifier for open and closed set recognition. Pattern Recognit. 119, 108080 (2021). https://doi.org/10.1016/j.patcog.2021.108080. (ISSN 0031-3203)
Lv, T., Wang, X., Jin, L., Xiao, Y., Song, M.: Margin-based deep learning networks for human activity recognition. Sensors 20(7), 1871 (2020). https://doi.org/10.3390/s20071871. (ISSN 1424-8220)
Venkatachalam, K., Yang, Z., Trojovsky, P., Bacanin, N., Deveci, M., Ding, W.: Bimodal HAR—an efficient approach to human activity analysis and recognition using bimodal hybrid classifiers. Inf. Sci. 628, 542–557 (2023). https://doi.org/10.1016/j.ins.2023.01.121. (ISSN 0020-0255)
Gosciewska, K., Frejlichowski, D.: Recognizing human actions with multiple Fourier transforms. Procedia Comput. Sci. 176, 1083–1090 (2020). https://doi.org/10.1016/j.procs.2020.09.104. ISSN 1877-0509. Knowledge-Based and Intelligent Information and Engineering Systems: Proceedings of the 24th International Conference KES2020
Zhu, W., Chen, J., Xu, L., Cao, J.: Recognition of interactive human groups from mobile sensing data. Comput. Commun. 191, 208–216 (2022). https://doi.org/10.1016/j.comcom.2022.04.028. (ISSN 0140-3664)
Park, C.-S.: Guaranteed-stable sliding DFT algorithm with minimal computational requirements. IEEE Trans. Signal Process. 65(20), 5281–5288 (2017). https://doi.org/10.1109/TSP.2017.2726988
Chou, C.-C., Tzong-Lin, Wu.: Analysis of peak and statistical spectrum of random nonreturn-to-zero digital signals. IEEE Trans. Electromagn. Compat. 59(6), 2002–2013 (2017). https://doi.org/10.1109/TEMC.2017.2674025
Yang, C., Oh, S.-K., Yang, B., Pedrycz, W., Fu, Z.W.: Fuzzy quasi-linear svm classifier: design and analysis. Fuzzy Sets Syst. 413, 42–63 (2021). https://doi.org/10.1016/j.fss.2020.05.010. (ISSN 0165-0114. Data Science)
Danenas, P., Garsva, G.: Credit risk evaluation modeling using evolutionary linear svm classifiers and sliding window approach. Procedia Comput. Sci. 9, 1324–1333 (2012). https://doi.org/10.1016/j.procs.2012.04.145. ISSN 1877-0509. Proceedings of the International Conference on Computational Science, ICCS 2012
Zhang, X., Zhang, S., Li, Y.: Classification method for communication modulation signal identification based on multiple feature extraction and cubic SVM. In: 2022 IEEE 5th International Conference on Information Systems and Computer Aided Education (ICISCAE), pp. 432–436, 2022. https://doi.org/10.1109/ICISCAE55891.2022.9927628
Jain, U., Nathani, K., Ruban, N., Joseph Raj, A.N., Zhuang, Z., Mahesh, V.G.V.: Cubic SVM classifier based feature extraction and emotion detection from speech signals. In: 2018 International Conference on Sensor Networks and Signal Processing (SNSP), pp. 386–391, 2018. https://doi.org/10.1109/SNSP.2018.00081
Lei, M., Zhang, L., Li, M., Chen, H., Zhang, X.: Near-infrared spectrum of coal origin identification based on SVM algorithm. In: 2018 37th Chinese Control Conference (CCC), pp. 9016–9020, 2018. https://doi.org/10.23919/ChiCC.2018.8483742
Sunnetci, K.M., Ulukaya, S., Alkan, A.: Periodontal bone loss detection based on hybrid deep learning and machine learning models with a user-friendly application. Biomed. Signal Process. Control 77, 103844 (2022). https://doi.org/10.1016/j.bspc.2022.103844. (ISSN 1746-8094)
Wang, T., Su, C.-H.: Medium Gaussian SVM, wide neural network and stepwise linear method in estimation of Lornoxicam pharmaceutical solubility in supercritical solvent. J. Mol. Liq. 349, 118120 (2022). https://doi.org/10.1016/j.molliq.2021.118120. (ISSN 0167-7322)
Polat, K., Nour, M.: Epileptic seizure detection based on new hybrid models with electroencephalogram signals. IRBM 41(6), 331–353 (2020). https://doi.org/10.1016/j.irbm.2020.06.008. (ISSN 1959-0318)
Aregawi, B.H., Diana, T., Su, C.-H., El-Shafay, A.S., Alashwal, M., Felemban, B.F., Zwawi, M., Algarni, M., Wang, F.-M.: Development of a machine learning computational technique for estimation of molecular diffusivity of nonelectrolyte organic molecules in aqueous media. J. Mol. Liq. 353, 118763 (2022). https://doi.org/10.1016/j.molliq.2022.118763. (ISSN 0167-7322)
Albaba, A., Simões-Capela, N., Wang, Y., Hendriks, R.C., De Raedt, W., Van Hoof, C.: Assessing the signal quality of electrocardiograms from varied acquisition sources: a generic machine learning pipeline for model generation. Comput. Biol. Med. 130, 104164 (2021). https://doi.org/10.1016/j.compbiomed.2020.104164. (ISSN 0010-4825)
Liu, S., You, S., Yin, H., Lin, Z., Liu, Y., Cui, Y., Yao, W., Sundaresh, L.: Data source authentication for wide-area synchrophasor measurements based on spatial signature extraction and quadratic kernel SVM. Int. J. Electr. Power Energy Syst. 140, 108083 (2022). https://doi.org/10.1016/j.ijepes.2022.108083. (ISSN 0142-0615)
Madhu, M.S., Karthikeyan, P.R.: Detection of liver disorder using quadratic support vector machine in comparison with RBF SVM to measure the accuracy, precision, sensitivity and specificity. In: 2022 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), pp. 1–7, 2022. https://doi.org/10.1109/ICSES55317.2022.9914126
Yang, X., Chen, Y., Zhao, Y., Pan, J., Guo, J., Yang, D.: Application of KNN for linear array pattern prediction based on the active element pattern method. IEEE Antennas Wirel. Propag. Lett. (2023). https://doi.org/10.1109/LAWP.2023.3234587
Liu, Q., Liu, C.: A novel locally linear KNN method with applications to visual recognition. IEEE Trans. Neural Netw. Learn. Syst. 28(9), 2010–2021 (2017). https://doi.org/10.1109/TNNLS.2016.2572204
Shahabi, H., Shirzadi, A., Ghaderi, K., Omidvar, E., Al-Ansari, N., Clague, J.J., Geertsema, M., Khosravi, K., Amini, A., Bahrami, S., Rahmati, O., Habibi, K., Mohammadi, A., Nguyen, H., Melesse, A.M., Ahmad, B.B., Ahmad, A.: Flood detection and susceptibility mapping using sentinel-1 remote sensing data and a machine learning approach: hybrid intelligence of bagging ensemble based on k-nearest neighbor classifier. Remote Sens. 12(2), 266 (2020). https://doi.org/10.3390/rs12020266. (ISSN 2072-4292)
Yaman, O.: An automated faults classification method based on binary pattern and neighborhood component analysis using induction motor. Measurement 168, 108323 (2021). https://doi.org/10.1016/j.measurement.2020.108323. (ISSN 0263-2241)
Saleem, Z., Mudassir, M., Khanam, S.: Investigation into bearing fault classification using various feature set combinations in KNN. In: 2022 5th International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT), pp. 1–6, 2022. https://doi.org/10.1109/IMPACT55510.2022.10029053
Yu, S., Jia, C., Hou, R.: Application of distance measure in KNN motor fault diagnosis. In: 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP), pp. 1660–1666, 2022. https://doi.org/10.1109/ICSP54964.2022.9778433
Othman, N.H., Lee, K.Y., Radzol, A.R.M., Mansor, W., Rashid, U.R.M.: Classification of salivary adulterated NS1 SERs spectra using PCA-cosine-KNN. In: 2019 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), pp. 159–163, 2019. https://doi.org/10.1109/ICIIBMS46890.2019.8991490
Chethana, C.: Prediction of heart disease using different KNN classifier. In: 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 1186–1194, 2021. https://doi.org/10.1109/ICICCS51141.2021.9432178
Yang, T., Du, S.: An improved weighted KNN algorithm about text classification based on spark framework. In: 2022 IEEE 10th International Conference on Information, Communication and Networks (ICICN), pp. 655–661, 2022. https://doi.org/10.1109/ICICN56848.2022.10006555
Chen, Z., Li, B., Han, B.: Improve regression accuracy by using an attribute weighted KNN approach. In: 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 1838–1843, 2017. https://doi.org/10.1109/FSKD.2017.8393046
Mathworks: choose classifier options. https://www.mathworks.com/help/stats/choose-a-classifier.html#bunt0p6-1. Accessed: 2022-12-16
Funding
There was no funding.
Author information
Contributions
Ahmed Shdefat played a key role in the study's conceptualization, drafting the initial manuscript, developing the methodology, creating the application, and conducting experimental work. Nour Mostafa was instrumental in drafting the original manuscript, formulating the methodology, and overseeing data curation. Zakwan Al-Arnaout significantly contributed to the study's conceptualization, provided critical revisions for intellectual content, and was involved in methodology development and data curation. Yehia Kotb focused on reviewing related works and played a crucial role in the analysis and interpretation of the data. Lastly, Samer Alabed was involved in the conceptualization of the study and contributed significantly to the analysis and interpretation of the data.
Ethics declarations
Conflict of interest
The authors declared that there were no apparent conflicts of interest or personal relationships that could have influenced the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shdefat, A.Y., Mostafa, N., Al-Arnaout, Z. et al. Optimizing HAR Systems: Comparative Analysis of Enhanced SVM and k-NN Classifiers. Int J Comput Intell Syst 17, 150 (2024). https://doi.org/10.1007/s44196-024-00554-0