1. Introduction
With economic prosperity and technological progress, wireless and mobile healthcare applications are proliferating [1]. Internet of Things (IoT) devices are widely used in human health monitoring. Because health data are streaming, heterogeneous and noisy [2], their efficient analysis and diagnosis face many challenges. The maturity of cloud computing and deep learning technology provides strong support for solving this problem. Nevertheless, applying cloud-based deep learning models to real-time health monitoring still has several problems. First, continuous transmission of health data consumes substantial network resources. Second, the return time of cloud decisions is uncertain due to fluctuations in network transmission. Last, users' health data are stored in the cloud, which raises personal privacy issues. Therefore, data-intensive analysis in smart healthcare requires a new computational model that provides location-aware and delay-sensitive monitoring with intelligent and reliable control [3].
Electrocardiography (ECG) is one of the most commonly used examinations in the clinic. Remote ECG monitoring plays an important role in the early diagnosis and prevention of cardiovascular diseases, and usually requires low-cost, highly convenient and low-latency services for ECG users. With the application of deep learning in the ECG field, automatic ECG detection has made great progress [4]. At present, most of the training data used by researchers come from MIT-BIH [5] and other public datasets, which have certain limitations in data volume and category coverage. For example, the MIT-BIH dataset has sufficient data for a few kinds of heart disease, but for most disease categories the data are not enough to meet the needs of deep learning model training. As a result, research on ECG data is still far from practical application.
In this work, we take advantage of both edge and cloud to design an efficient health monitoring architecture. We use a convolutional neural network (CNN) to develop a streamlined and efficient model that identifies ECG signals at the edge. We also noticed the imbalance of ECG data and its influence on diagnostic accuracy. Specifically, the contributions of this work can be summarized as follows.
Firstly, a new hybrid intelligent healthcare architecture based on edge computing and cloud computing, named EdgeCNN, is proposed. This architecture can flexibly learn from the medical data of edge devices. The deep learning model is deployed to run on edge devices, which moves analysis and diagnosis closer to the Internet of Things (IoT) data sources. This can significantly reduce inference latency and network I/O and relieve the pressure that large user groups and massive data place on the cloud platform.
Secondly, relying on the EdgeCNN architecture, we design an efficient ECG edge computing diagnosis model and learning algorithm, and we successfully deploy it on edge devices. The model infers ECG in real time, closer to the data source, and strikes a good balance between diagnostic accuracy and resource consumption. Experimental results show that, compared with a pure cloud computing architecture, EdgeCNN not only ensures reasonable accuracy, but also has obvious advantages in diagnosis delay, network I/O, application availability and resource cost. More importantly, it effectively protects the privacy of user data from IoT devices.
Thirdly, we propose a method to augment ECG data with a deep convolutional generative adversarial network (DCGAN) [6]. DCGAN is a stable network architecture based on CNN extensions, with high credibility in unsupervised learning. Data augmentation can provide sufficient data quantity and categories for ECG diagnosis. The experimental results show that, after augmenting the relevant ECG categories, not only is the overall accuracy of the deep learning model improved, but the range of ECG categories the model can diagnose is also greatly expanded, which substantially improves the practical usability of the system.
This paper is an extended and enhanced version of an earlier conference paper published at the IEEE 24th International Conference on Parallel and Distributed Systems [7]. Our initial conference paper does not address the imbalance of ECG data and its impact on diagnostic accuracy. This manuscript addresses this issue and provides a DCGAN-based data augmentation method for ECG. Experiments are presented to verify the effectiveness of the data augmentation.
The rest of the article is organized as follows. Section 2 reviews related work. Section 3 introduces the hybrid intelligent healthcare architecture and the ECG diagnosis model. Section 4 presents the method to augment ECG data with DCGAN. Section 5 analyzes the experimental results. We draw conclusions and note future work in Section 6.
2. Related Work
Health informatics has recently become an important area of concern and one of the great challenges in engineering. The mobile edge cloud is an emerging field that is rapidly being applied across domains. Cloud computing can help the IoT to process and analyze data, and edge devices are important medical tools for timely monitoring and prevention of cardiovascular diseases. Clearly, combining health informatics with mobile edge cloud computing and developing IoT-based ECG devices are inevitable trends. Antonio et al. [8] reviewed the evolution of the role that electrophysiology plays as part of occupational health. They summarized the benefits of wearable and smart devices for cardiovascular monitoring and their possible applications, demonstrating the trend of using mobile ECG devices across different environmental settings and populations. Khairuddin et al. [9] not only reviewed previous works on conventional ECG devices to identify their limitations, but also provided insights into how the IoT can support the development of medical applications for ECG devices. Liu et al. [10] combined mobile cloud computing and health informatics to develop a system for ECG extraction and analysis using mobile computing, which they plan to deploy in a commercial environment.
Using deep learning to classify and analyze medical images is popular. Pereira et al. [11] used a CNN to recognize handwritten images to help identify the early stages of Parkinson's disease. Their model learns features from the signals of a smart pen, which uses sensors to capture handwriting movements during personal exams. Lipton et al. [12] studied the performance of LSTM in analyzing and identifying multivariate time series patterns of medical measurements in the intensive care unit. Daniele et al. [13] surveyed deep learning in health informatics.
At the same time, many researchers use deep learning to diagnose ECG. Hannun et al. [14] collected ECG data from 53,549 patients and designed a 34-layer CNN model to classify 12 heart rhythm classes; the final classification accuracy exceeded the diagnostic accuracy of human experts. Chauhan et al. [15] used LSTM in a recurrent neural network to diagnose ECG. This method allows ECG signals to be fed directly into the network without any complicated data preprocessing. Adam et al. [16] used a neural network to extract the QRS complex segment of ECG signals and then used it for user authentication, achieving an authentication accuracy as high as 99%. In the medical field, ECG classification challenges are held every year. Zihlmann et al. [17] constructed two models, a CNN and an LSTM, and used the PhysioNet/CinC 2017 dataset for training and testing; the LSTM model achieved an F1 score [18] of 82.1%, which is better than the CNN model. Wang et al. [19] designed a five-class model, DMSFNet, which uses the 12-lead CPSC 2018 and PhysioNet/CinC 2017 datasets to classify atrial fibrillation. Zhang et al. [20] designed the STA-CRNN model with an attention mechanism for eight-class classification and obtained an F1 score of 0.835. The training and inference of these ECG diagnosis models are completed on resource-rich servers or in the cloud.
Because heart disease can occur suddenly, it is necessary to use deep learning on mobile devices for real-time monitoring. Many studies use mobile applications to perform such tasks, for example using a CNN to identify garbage in images [21]. However, experiments show that the resource consumption of these applications is still very high: it takes 5.6 s to return the prediction results while consuming 83% of the CPU and 67 MB of memory. Amato et al. [22] ran a CNN on a Raspberry Pi board integrated with a smart camera to find empty parking spaces. Ravi et al. [23] demonstrated a mobile fitness application that uses deep learning to classify human activities. However, DNN models on resource-constrained devices often have fewer hidden layers, so their performance is poor. Nguyen et al. [24] proposed a conceptual hardware and software framework to support intelligent IoT applications.
Zhang et al. [25] augmented EEG data with DCGAN and compared it with traditional methods including geometric transformation (GT), autoencoders (AE) and variational autoencoders (VAE). The results show that DCGAN obtains better augmentation performance. Zanini et al. [26] used DCGAN to augment EMG data to study Parkinson's disease. Their model expands the patient's tremor dataset by learning the patient's different tremor frequencies and amplitudes, and extends it to different sets of exercise programs. Cao et al. [27] proposed a novel data augmentation strategy based on duplication, concatenation and resampling of ECG episodes to balance the number of samples among different categories and to increase sample diversity. Salamon et al. [28] used a variety of audio data augmentation techniques and discussed the impact of different augmentation methods on the performance of their proposed CNN architecture. Perez [29] explored and compared various solutions to the problem of data augmentation in image classification, attempting to use a Generative Adversarial Network (GAN) to generate images of different styles, and discussed the successes and shortcomings of the method on various datasets.
3. ECG Diagnosis Model Based on Edge Cloud Intelligent Medical Architecture
3.1. The Edge Cloud Intelligent Medical Architecture
The cloud-based intelligent medical system cannot meet people's urgent needs for daily health monitoring in the IoT era. In this section, we introduce a hybrid intelligent medical architecture called EdgeCNN (see Figure 1) to solve the existing problems. EdgeCNN realizes intelligent collaboration between the edge and the cloud. The architecture is composed of an IoT device layer, an edge computing layer, a cloud platform layer and a third-party service layer.
The IoT layer, which involves a series of health devices such as electrocardiographs, smart wristbands, smart watches and blood pressure meters, monitors various physiological indicators of the human body and produces health data. Due to hardware limitations in computing power, storage and power supply, IoT devices cannot efficiently process these data [30]. Users need to upload the data to the cloud or carry them to the hospital to obtain health analysis results. However, neither method is real-time or convenient, and thus cannot give early warning of potential health problems.
The edge layer runs computing tasks on resources close to the data source, thereby effectively reducing system delay and data transmission bandwidth while protecting data security and privacy. The edge layer is responsible for edge AI inference: it executes the AI model distributed from the cloud and feeds the execution results back to the user or the cloud. In our architecture, the IoT monitoring device transmits the monitored data to an edge device through a local area network (e.g., WLAN, Bluetooth or ZigBee), where a deep learning model for the corresponding data is deployed for diagnosis. Note that the deep learning model for edge computing needs to be re-designed to trade off diagnostic accuracy against model complexity [31] so that it can run efficiently on edge devices. By introducing the edge computing node, the data are transmitted only within the user-controllable local area network, thus protecting the user's data privacy.
The cloud platform layer provides global, non-real-time, long-term big data processing and analysis. It carries out centralized AI model training according to business requirements, historical data, real-time data and AI execution feedback, and sends the AI model to edge nodes for execution. In a framework that integrates cloud and edge, it is very important to study the interaction between them. Although users can obtain efficient health monitoring and diagnosis using only IoT devices and edge computing devices, they can still upload part of their data to the cloud platform for more accurate inference. As mentioned above, the deep learning model deployed on the edge device is a trade-off between accuracy and complexity.
The top layer of the architecture is the third-party service. Users can authorize relevant hospitals or private doctors to obtain their own data through cloud platform for further detailed diagnosis. The cloud platform can intelligently connect patients, medical staff, medical service providers and insurance companies in a seamless and collaborative way, so that patients can experience one-stop nursing, medical and insurance services.
3.2. Diagnosis Model for Agile Learning in EdgeCNN
Although some research based on deep learning algorithms can already perform electrocardiogram diagnosis well, the models are too complex to be deployed on edge devices for low-latency diagnosis, which hinders the popularization of home-based smart healthcare. In this section, we introduce a model that effectively trades off accuracy against complexity so that it can be deployed on smart devices to perform low-latency ECG diagnosis for users.
The general structure of our model is shown in Figure 2. It is a convolutional neural network with five convolutional layers and one fully connected layer. Since the lengths of the ECG signals in the dataset differ, we preprocess the original data by splitting the ECG recordings and feeding the model one-dimensional data. The data are fed into the CNN in batches; a convolution operation is performed on the input data using a one-dimensional convolution kernel, and a bias value is added afterwards.
To enhance the generalization of the network and enable rapid convergence of model training, we use batch normalization (BN) [32] after each convolution. BN normalizes the input sample features so that the data are distributed with a mean of 0 and a standard deviation of 1. Without normalization, the scattered distribution of the sample features makes the neural network learn slowly or even fail to learn. The formula for data normalization is given in Equation (1):

$$\hat{x}^{(k)} = \frac{x^{(k)} - \mathrm{E}\left[x^{(k)}\right]}{\sqrt{\mathrm{Var}\left[x^{(k)}\right]}} \quad (1)$$

where $x^{(k)}$ represents the $k$th dimension of the input data, $\mathrm{E}[x^{(k)}]$ represents the mean of that dimension and $\sqrt{\mathrm{Var}[x^{(k)}]}$ represents its standard deviation. However, normalizing alone would reduce the expressive power of the layer. Thus, BN adds two learnable parameters ($\gamma$ and $\beta$) to maintain the expressiveness of the model, as shown in Equation (2):

$$y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)} \quad (2)$$
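To make the normalization concrete, the following NumPy sketch applies Equations (1) and (2) to a synthetic batch. The values of $\gamma$ and $\beta$ here are illustrative constants; in training, BN learns them.

```python
# Numeric sketch of Equations (1) and (2); gamma and beta are illustrative
# constants here, whereas BN learns them during training.
import numpy as np

x = np.random.randn(64, 250) * 3.0 + 5.0  # batch of 64 samples, 250 features
mean = x.mean(axis=0)                     # E[x^(k)] for each dimension k
std = x.std(axis=0)                       # sqrt(Var[x^(k)])
x_hat = (x - mean) / (std + 1e-5)         # Equation (1): zero mean, unit std
gamma, beta = 1.0, 0.0                    # learnable parameters of BN
y = gamma * x_hat + beta                  # Equation (2)
print(round(y.mean(), 3), round(y.std(), 3))  # approximately 0.0 and 1.0
```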
The rectified linear unit (ReLU) [33] reduces computational cost, alleviates overfitting to a certain degree and converges faster than other activation functions. The ReLU function is given in Equation (3):

$$f(x) = \max(0, x) \quad (3)$$

It sets the computed value to 0 if it is less than 0 and otherwise leaves it unchanged. Practice has shown that networks trained with ReLU are moderately sparse, and their visualized training results are similar to the pre-trained results of traditional methods. This also shows that ReLU can induce moderate sparsity.
The pooling layer reduces the number of parameters and computations while retaining the main features, prevents overfitting and improves the model's generalization ability. The model uses average pooling with a window of size 2, which averages the two values in each neighborhood. This reduces the error caused by the increased variance of the estimate due to the limited neighborhood size. The fully connected layer converts the output of the convolutional layers into a vector of length four, where each number corresponds to a category (N, A, O or ∼). Finally, softmax is used to obtain the most likely prediction result, as shown in Equation (4):

$$S_i = \frac{e^{V_i}}{\sum_{j} e^{V_j}} \quad (4)$$

where $V_i$ represents the $i$-th element of the array $V$. When a sample passes through the softmax layer and a vector of length four is output, the index of the largest value in the vector is taken as the predicted label of the sample.
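The description above can be summarized in the following Keras sketch. The segment length, kernel sizes and channel counts are assumptions for illustration, since the text does not fix them; only the overall layout (five convolutional layers with BN, ReLU and size-2 average pooling, one fully connected layer, softmax over four classes) follows the model described.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_edge_ecg_model(input_len=9000, n_classes=4):
    # One-dimensional ECG segment in, four class probabilities (N, A, O, ~) out.
    inputs = tf.keras.Input(shape=(input_len, 1))
    x = inputs
    for filters in (16, 32, 32, 64, 64):              # five convolutional layers
        x = layers.Conv1D(filters, kernel_size=5,     # kernel size is illustrative
                          padding="same", use_bias=True)(x)
        x = layers.BatchNormalization()(x)            # Equations (1) and (2)
        x = layers.ReLU()(x)                          # Equation (3)
        x = layers.AveragePooling1D(pool_size=2)(x)   # size-2 average pooling
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # Equation (4)
    return tf.keras.Model(inputs, outputs)

model = build_edge_ecg_model()
model.summary()
```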
However, actual ECG data are generated continuously over time, unlike the fixed-length ECG data used when training the model. The actual process of real-time diagnosis is abstracted as Algorithm 1. Let $X$ denote the time series data generated during a certain period. The monitoring device generates $m$ data points per second, the data length required for one prediction is $n$ (so $X$ spans $n/m$ seconds), and the total time for the model to predict $X$ is $t$. To prevent data congestion, it is necessary to satisfy $t \le n/m$.
Algorithm 1 Real-time diagnosis algorithm
Require: Time series data stream ($m$ data points generated per second)
Ensure: Real-time diagnosis, no data congestion
while True do
    if fewer than $n$ data points have been received then
        Save the previous data and continue to receive data
    else
        $X \leftarrow$ the latest $n$ data points
        Call the model to diagnose $X$, which takes time $t$
        if $t > n/m$ then
            Data congestion occurs
            break
        else
            Output the diagnosis result
        end if
    end if
end while
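A minimal Python rendering of Algorithm 1 is given below; `read_points` and `model_diagnose` are hypothetical stand-ins for the device data stream and the deployed CNN, not functions defined in this paper.

```python
import time

def realtime_diagnosis(read_points, model_diagnose, n, m):
    """Sketch of Algorithm 1. n: data points per prediction;
    m: data points the device generates per second."""
    buffer = []
    while True:
        buffer.extend(read_points())      # save data and continue receiving
        if len(buffer) < n:
            continue
        X, buffer = buffer[:n], buffer[n:]
        start = time.time()
        result = model_diagnose(X)        # one prediction takes time t
        t = time.time() - start
        if t > n / m:                     # inference slower than data arrival
            raise RuntimeError("data congestion")
        print(result)                     # output the diagnosis result
```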
4. Data Augmentation
Automatic ECG diagnosis using deep learning needs a large amount of training data. At present, the scarcity and imbalance of ECG data have become an obstacle to the development of automatic ECG diagnosis technology. In this section, we design a DCGAN-based data augmentation method for ECG to expand the data volume and data categories. To the best of our knowledge, we are the first in this research field to use DCGAN to address the ECG imbalance problem.
4.1. Analysis of MIT-BIH Datasets
The MIT-BIH arrhythmia database is an authoritative public dataset in the ECG research field, with rich data categories and accurate annotations. However, its data distribution is very unbalanced; only a few categories have enough data to meet the requirements of deep learning training. There are 23 annotation categories in the MIT-BIH arrhythmia database, but only seven have thousands of samples. As a result, current automatic ECG diagnosis methods cannot expand their diagnostic scope, which hinders their practical application.
Table 1 shows the ten label categories with the largest numbers of beats. The distribution shows that the number of heartbeats in each of the top five categories exceeds 7000, which meets the basic requirements of deep learning model training. In practice, scholars have mostly studied these five categories, while the other categories are rarely considered in current research. To overcome the current bottleneck of insufficient ECG diagnosis data, we augment the heartbeat data labeled 8, 38, 6 and 31 with DCGAN and observe the data generation process. Original heartbeat samples of these four ECG types after denoising and QRS wave extraction are shown in Figure 3.
4.2. Model Building
The GAN framework includes a generative model $G$ and a discriminative model $D$, and makes $G$ learn the distribution of the data [34]. GAN is a zero-sum game mechanism in which the generative and discriminative networks compete: the generative network tries to generate realistic fake samples that deceive the discriminative network, while the discriminative network tries to identify whether a sample is fake or real. The process is illustrated in Figure 4.

The random noise usually obeys a Gaussian distribution, and the generative network produces fake samples from it. When the discriminator cannot distinguish between real and generated samples, the network has converged. The loss function of the two networks can be described as Equation (5):

$$\min_{G} \max_{D} V(G, D) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right] \quad (5)$$

For the generator $G$ to produce fake samples that deceive the discriminator $D$, it is necessary to minimize $\log(1 - D(G(z)))$, that is, to maximize the discriminator's probability $D(G(z))$ for generated samples. In actual training, the generator and discriminator are trained alternately.
Although GAN has achieved huge success, it has proven unstable to train, and the generator can produce nonsensical outputs. Thus, DCGAN was proposed in 2015. CNN performs well in supervised learning but worse in unsupervised learning, whereas DCGAN performs well in unsupervised learning; combining the CNN of supervised learning with the GAN of unsupervised learning yields better performance than GAN alone. DCGAN makes the following adjustments. First, it replaces pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator). Second, it uses batch normalization in both the generator and the discriminator. Third, it removes fully connected hidden layers for deeper architectures. Fourth, it uses ReLU activation in the generator for all layers except the output, which uses Tanh. Fifth, it uses LeakyReLU activation in the discriminator for all layers. The structure of DCGAN is shown in Figure 5.
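Following these five adjustments, a DCGAN for 250-point ECG sequences might be sketched as below. The channel counts and kernel sizes are assumptions; only the strided/fractional-strided convolutions, batch normalization, ReLU/Tanh generator and LeakyReLU discriminator follow the rules above.

```python
import tensorflow as tf
from tensorflow.keras import layers

NOISE_LEN = 250   # noise sequence length used in the paper
ECG_LEN = 250     # generated ECG sequence length

def build_generator():
    # Fractional-strided (transposed) 1-D convolutions; ReLU everywhere
    # except the Tanh output, per the DCGAN guidelines.
    z = tf.keras.Input(shape=(NOISE_LEN,))
    x = layers.Dense(125 * 8)(z)
    x = layers.Reshape((125, 8))(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv1DTranspose(4, 5, strides=2, padding="same")(x)  # 125 -> 250
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    out = layers.Conv1D(1, 5, padding="same", activation="tanh")(x)
    return tf.keras.Model(z, out)

def build_discriminator():
    # Strided convolutions replace pooling; LeakyReLU on all layers.
    ecg = tf.keras.Input(shape=(ECG_LEN, 1))
    x = layers.Conv1D(8, 5, strides=2, padding="same")(ecg)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Conv1D(16, 5, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # real vs. fake probability
    return tf.keras.Model(ecg, out)
```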
4.3. Data Generation
In this section, the four types of ECG data and randomly generated noise sequences are used as the input of DCGAN for training and data generation. The input of the generator is a series of data sequences, each consisting of 250 noise points; the output is a generated ECG sequence whose length is set to 250. The input of the discriminator is the generator output together with real ECG data.

The learning rate of the Adam optimizer in the experiment is 0.002. Since the BN mechanism is used, the batch size of each training batch is set to 64. In accordance with the characteristics of ECG data, one-dimensional convolution and deconvolution are used during training with TensorFlow. Each iteration records the discriminative network loss value and the generative network loss value through event tracking. After 50 epochs of training, the generated samples of the four types of ECG beats are shown in Figure 6: atrial premature beats, ventricular fusion heartbeat, pacing mixed heart rate and ventricular flutter. Although the generated patterns are not as smooth as the original data, they reflect the important basic features of the original data.
During the whole training process, the trends of the discriminative network loss d_loss and the generative network loss g_loss are shown in Figure 7. Throughout the process, the discriminative and generative networks undergo alternating, iterative optimization, in line with the idea of a zero-sum game. The optimization goal of GAN consists of two parts: the probability that the discriminative network judges a real sample to be real and the probability that it judges a generated sample to be real. The generative network tries to generate more realistic data to deceive the discriminative network, while the discriminative network tries to distinguish real samples from generated ones. Therefore, the discriminative network wants $V(G, D)$ to be as large as possible, while the generative network wants $V(G, D)$ to be as small as possible, which leads to the iterative optimization observed during training.
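This adversarial update can be sketched as a TensorFlow training step. The learning rate and batch size follow the experiment (0.002 and 64); the generator loss uses the commonly adopted non-saturating form in place of directly minimizing $\log(1 - D(G(z)))$, and both networks are updated in one pass as a simplification of strict alternation.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(2e-3)   # learning rate 0.002, as in the experiment
d_opt = tf.keras.optimizers.Adam(2e-3)

def train_step(generator, discriminator, real_ecg, noise_len=250):
    batch = tf.shape(real_ecg)[0]                 # batch size 64 in the experiment
    noise = tf.random.normal([batch, noise_len])  # Gaussian noise input
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_ecg = generator(noise, training=True)
        d_real = discriminator(real_ecg, training=True)
        d_fake = discriminator(fake_ecg, training=True)
        # Discriminator side of Equation (5): real -> 1, generated -> 0.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Generator wants D(G(z)) -> 1 (non-saturating form of its objective).
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss                         # the curves plotted in Figure 7
```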
5. Evaluation
In this section, we use the deep learning model in EdgeCNN to perform feature learning and classification on the PhysioNet/CinC 2017 and MIT-BIH datasets. The cardiology challenge announced by PhysioNet in 2017 was to classify atrial fibrillation (AF) from short single-lead electrocardiograms. AF, whose incidence is 1–2% and increases with age, is the most common persistent arrhythmia and carries considerable morbidity and mortality. Our experiments use the published dataset of 8528 single-lead ECG recordings, 9–60 s in length with a sampling rate of 300 Hz, divided into four categories: normal rhythm (N), AF rhythm (A), other rhythm (O) and noisy recordings (∼). Examples of these different types of electrocardiograms are shown in Figure 8. Due to the imbalance of the MIT-BIH data, only a small fraction of the categories can be fully trained; to solve this problem, we augment the MIT-BIH dataset.
5.1. Evaluation of EdgeCNN
First, we evaluate the performance of the EdgeCNN diagnosis model on the PhysioNet/CinC 2017 dataset in terms of prediction accuracy, storage and memory cost and prediction time. Afterwards, comparative experiments on two different architectures for ECG diagnosis are run: one uses the hybrid architecture of EdgeCNN and the other uses only the cloud platform.
5.1.1. Performance of the ECG Diagnosis Model Deployed on the Edge Devices
We evaluate the overall performance of the ECG diagnosis model deployed on an edge device with a Snapdragon 660 processor. After testing the prediction accuracy of normal rhythm (N), AF rhythm (A), other rhythm (O) and noisy recordings (∼), the result is shown in Figure 9: the prediction accuracy is 87% for "Normal", 84% for "AF", 81% for "Other" and 60% for "Noisy".
The figure shows that the other three categories are easily misjudged as "Other"; "Normal" and "AF" are misjudged as "Other" at the same ratio, 11%. "Noisy" has the lowest prediction accuracy because the majority of noisy recordings are classified as "Other" or "Normal". In a real production environment, to avoid transmitting large amounts of data over the network, most predictions in the hybrid architecture are completed at the edge devices. This strategy can reduce the noise introduced by data transmission and decrease the prediction error to some degree.
Then, we compare the model's prediction accuracy in terms of the F1 score, the cost of storage and memory, and the prediction time. The F1 score is a measure of the accuracy of the model in predicting atrial fibrillation. Its calculation formula is Equation (6):

$$F_1 = \frac{2 \times TP}{2 \times TP + FP + FN} \quad (6)$$

The definition of each variable is shown in Figure 10.
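As a concrete instance, the per-class score of Equation (6) can be computed directly from the confusion-matrix counts (TP, FP, FN as defined in Figure 10); the counts below are hypothetical.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    # Equation (6): harmonic mean of precision and recall from raw counts.
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical counts for one rhythm class:
print(f1_score(tp=80, fp=10, fn=15))  # 160 / 185 = 0.8649...
```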
The results of our ECG diagnosis model and the optimal model of the 2017 CinC challenge [35] are shown in Table 2. Bo et al. [35] used a CNN with 16 convolutional layers and obtained a final score of 0.83. However, this complex model structure requires a larger volume and causes higher computational complexity. Although our streamlined and efficient model scores 0.78, slightly lower than the optimal model, the optimal model occupies 4.95 MB of physical storage after persistence, almost 1.5 times larger than our model, and its runtime memory reaches 74 MB, 23 MB more than our model.
The prediction time is defined as the time the ECG diagnosis model needs from receiving the data to outputting the final prediction result; faster diagnosis therefore decreases the overall delay of the system. Hence, we should ensure a short prediction time while maintaining accuracy higher than that of cardiologists. The complex calculations make the prediction time of the 2017 CinC optimal model about 100 ms longer than that of our model; such an additional delay is catastrophic in the field of smart healthcare.
Therefore, our model has a clear advantage in terms of the storage space it occupies, its runtime memory footprint and its inference time, which makes it easy to deploy on resource-constrained edge devices. It fits the cloud-edge-integrated home smart healthcare architecture we propose.
5.1.2. Comparison of Delay on Two Different Architectures
We build two systems with different architectures for ECG diagnosis: one uses the hybrid architecture of EdgeCNN, while the other uses only the cloud platform. The experiments compare the two architectures in terms of delay, an important indicator of system effectiveness. We measure the time from data generation to the feedback of diagnostic results to the user under different request volumes; it contains both the data transfer time and the model prediction time.
The total prediction time $t$ is mainly composed of four parts: the data transmission (sending) delay $t_s$, the data propagation delay $t_p$, the time $t_m$ from when the model receives the data to the final output of the prediction result and the time $t_r$ for the prediction result to return to the user. Therefore, $t$ can be described by Equation (7):

$$t = t_s + t_p + t_m + t_r \quad (7)$$
Since the propagation delay is proportional to the network distance between the client and the server and is very small, and since, according to Algorithm 1, $t$ must satisfy $t \le n/m$ to avoid data congestion, two aspects that affect $t_s$ and $t_m$ must be considered:
- (1)
When the number of users gradually increases, the amount of data generated in the same period also increases. Whether the network bandwidth of the data processing point is sufficient to transmit such a large amount of data within a limited time directly affects the sending delay $t_s$.
- (2)
The performance of the data processing point is limited. When the data to be processed simultaneously reach the performance bottleneck of the processing point, subsequent data are queued for processing, which increases the time $t_m$.
Assuming that the bandwidth of the processing point is $a$ = 1 Mbps and that $N$ represents the number of data segments generated in the same period, each of size $L$ bits, the transmission time $t_s$ of these data over the network can be calculated by Equation (8):

$$t_s = \frac{N \times L}{a} \quad (8)$$

If the hardware performance is sufficient to handle the current concurrency, $t_m$ is constant, and the time $t$ is then inversely proportional to the bandwidth $a$: the larger the bandwidth, the smaller the sending delay $t_s$. The overall performance of data processing points varies with the hardware and cannot be measured quantitatively. However, as the amount of data to be processed simultaneously increases, the average number of requests processed by each thread in the processing point inevitably increases. We therefore measure the impact on time $t$ as the number of requests processed by a single thread gradually increases.
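A worked instance of Equation (8) under the stated 1 Mbps assumption follows; the segment size L is illustrative, since the text does not specify it.

```python
a = 1_000_000        # bandwidth in bits per second (1 Mbps, as assumed above)
L = 9000 * 16        # hypothetical segment: 9000 samples at 16 bits each
for N in (1, 10, 100):           # number of concurrent data segments
    t_s = N * L / a              # Equation (8): sending delay grows linearly in N
    print(N, round(t_s, 2), "s") # 0.14 s, 1.44 s, 14.4 s
```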
As shown by the experimental results in Figure 11, as the request volume of each thread gradually increases, the cloud faces bottlenecks in performance and network I/O, resulting in gradually growing latency. The delay is about 393 ms when each thread of the cloud platform handles fewer than 7 concurrent requests, and drops to a minimum of 384.7 ms when a thread processes 7 or 8 simultaneous requests: at that point the request rate is close to the fastest rate a single cloud thread can handle, so the thread does not need to be awakened frequently and the cloud platform achieves its highest processing efficiency. Once the requests exceed the maximum number that a single thread can afford, they have to be queued, resulting in heavy data congestion and rising delay until the real-time diagnostic capability is lost.
In contrast, the edge devices are distributed and deployed in each user's home; an edge device can be the user's smartphone, a gateway or router, a smart device customized for smart healthcare or another intelligent electronic device, all of which have a certain data computing capability.
As the number of users grows, the data congestion that afflicts the cloud platform does not occur, and the delay remains stable at about 320 ms. Since each edge device typically serves only the users of one home, it provides a stable and efficient way to handle their healthcare data.
5.1.3. Network I/O and High Availability
Due to the low dependence on the cloud platform, the data generated by IoT devices do not need to be transmitted to the cloud in real time; a portion of the data is uploaded only when the user authorizes a third-party service. This keeps the cloud platform from facing excessive network I/O pressure.
Moreover, for the traditional cloud-only architecture, it is difficult to ensure that the cloud platform does not have problems. Once the cloud platform fails, all users’ services will be interrupted, which is catastrophic for the user experience and the system platform. The hybrid architecture with cloud and edge is a good way to circumvent this problem. Each edge node has autonomy. Even if the cloud platform fails, most users can still rely on the edge device to obtain services normally, and only special third-party services will be suspended. This ensures high availability of the entire architecture.
The performance comparison of the two architectures, the EdgeCNN architecture and the traditional cloud-based architecture, on different metrics is summarized in Table 3. First, because the processing unit is closer to the data source, the hybrid architecture achieves lower and more stable latency without the data congestion caused by a large user volume. Second, one important cause of data congestion is the network I/O bottleneck: when large amounts of data flock to the cloud platform in real time, network I/O comes under tremendous pressure and the real-time processing power of the system declines. The hybrid architecture circumvents this problem by spreading the data across the edge compute nodes. Third, the hybrid architecture uses edge computing nodes for diagnosis, which greatly reduces the operation and maintenance cost of cloud platforms. Fourth, the hybrid architecture ensures that data are transmitted within the user-controllable local network, avoiding exposure of all data to the Internet and effectively protecting user privacy. Fifth, the hybrid architecture processes data on distributed edge nodes, so a failure of the cloud platform does not paralyze the whole system; the high availability of the system is guaranteed.
In addition to the issues mentioned above, power consumption is a key concern for edge devices. To this end, we measured the power consumption of the two models when running on the experimental equipment, as shown in Figure 12. Without any processing task, it takes 325 min for the remaining power to drop between the two reference levels shown in the figure. After deploying the 2017 CinC model, the time required for the remaining power to drop to the lower level is shortened by 75 min, reducing the overall endurance of the device by approximately 250 min. EdgeCNN shortens the same interval by only 25 min; compared with the former, battery life over this interval increases by 50 min, and overall battery life increases by about 167 min. EdgeCNN thus has a clear advantage in energy efficiency.
5.2. Evaluation of Data Augmentation
In this section, we evaluate the performance of our proposed ECG diagnosis model on the MIT-BIH dataset.
Figure 13 shows the main process of ECG data identification. We compare the diagnostic accuracy of the model before and after data augmentation. The confusion matrix of the test results before augmentation is shown in Figure 14. It can be seen that the four categories with sufficient data, normal beat (1), left bundle branch block (2), right bundle branch block (3) and ventricular premature beats (5), have high diagnostic accuracy. The four categories of atrial premature beats (8), pacing mixed heart rate (38), ventricular fusion heartbeat (6) and ventricular flutter (31) have progressively less data, and their diagnostic accuracy drops sharply, in some cases to zero. The confusion matrix of the test results after data augmentation is also shown in Figure 14. It can be seen that almost all ECG categories are most likely to be misclassified as ventricular premature beats, which suggests that these categories may share some common features with ventricular premature beats. The comparison in Figure 14 shows that data augmentation plays an important role in improving the overall accuracy.
Figure 15 shows the change in diagnostic accuracy of the eight ECG categories before and after data augmentation. Before augmentation, although the diagnostic accuracy of normal beat, left bundle branch block, right bundle branch block and ventricular premature beats exceeds 90%, the accuracy for pacing mixed heart rate, ventricular fusion heartbeat and ventricular flutter is very poor: pacing mixed heart rate is as low as 23.14%, while ventricular fusion heartbeat and ventricular flutter cannot be classified at all. After augmentation, the diagnostic accuracy of the unaugmented categories remains above 93%, with only slight fluctuations within the normal range, while the accuracy of the augmented categories changes qualitatively. Pacing mixed heart rate increases from 23.14% to 84.46%, a relative increase of 265%, and ventricular fusion heartbeat and ventricular flutter reach diagnostic accuracies of 79.17% and 95.97%, respectively. These results show that data augmentation is substantially meaningful for ECG diagnosis classification: the expansion of diagnosis categories and the improvement of accuracy make ECG diagnosis more practical.
6. Conclusions
Smart medical treatment can provide timely and effective services for people’s health. In this article, we present a hybrid smart medical architecture based on edge and cloud computing. This architecture can not only reduce diagnosis delay and transmission overhead, but also protect data privacy. Furthermore, we design an effective deep learning model for ECG inference which can be deployed and run on edge smart devices. This diagnosis model achieves a good balance between diagnosis accuracy and resource cost. In addition, we propose a data enhancement method for ECG based on DCGAN, which expands ECG data volume and data categories. The experimental results show that, through data enhancement, not only the overall accuracy of the diagnosis model is improved, but also the diagnosis category is expanded.
Further work can be carried out in the following directions. (1) The representation of the medical morphological features of ECG in data still needs further exploration. Some heart diseases can be judged from a simple, single ECG waveform, but many heart diseases show complex and diverse morphological characteristics, requiring researchers to understand these data more deeply in order to design better diagnostic models. (2) Although data augmentation has effectively improved the diagnostic accuracy of some ECG categories, data for most ECG categories are still lacking. This requires further cross-professional cooperation, especially with hospitals that have massive data sources. (3) The model needs to consider individual differences. Current ECG diagnosis is mostly based on publicly available datasets and does not fully account for individual differences. To further improve the practical effect of automatic ECG diagnosis models, the diagnosis model should be updated according to the characteristics of each individual.