Retraction published on 15 January 2020, see Sensors 2020, 20(2), 476.
Article

RETRACTED: The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN

1 School of Computer and Communication Engineering & Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha 410114, China
2 School of Information Science and Engineering, Fujian University of Technology, Fuzhou 350118, China
3 Hunan Institute of Scientific and Technical Information, Changsha 410001, China
4 Electronics & Information School, Yangtze University, Jingzhou 434023, China
5 Technical Quality Department, Hunan ZOOMLION Heavy Industry Intelligent Technology Corporation Limited, Changsha 410005, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(14), 3145; https://doi.org/10.3390/s19143145
Submission received: 16 June 2019 / Revised: 12 July 2019 / Accepted: 16 July 2019 / Published: 17 July 2019
(This article belongs to the Special Issue Intelligent Sensor Signal in Machine Learning)

Abstract

To address the problems of unstable training and poor accuracy in image classification algorithms based on generative adversarial networks (GAN), a novel sensor network structure for classification processing using auxiliary classifier generative adversarial networks (ACGAN) is proposed in this paper. Firstly, the real/fake discrimination of sensor samples has been removed from the output layer of the discriminative network, which now outputs only the posterior probability estimate of the sample label. Secondly, by regarding the real sensor samples as supervised data and the generated sensor samples as labeled fake data, we have reconstructed the loss functions of the generator and discriminator using the real/fake attributes of the sensor samples and the cross-entropy loss of the labels. Thirdly, the pooling and caching method has been introduced into the discriminator to enable more effective extraction of classification features. Finally, feature matching has been added to the discriminative network to ensure the diversity of the generated sensor samples. Experimental results have shown that the proposed algorithm (CP-ACGAN) achieves better classification accuracy on the MNIST, CIFAR10 and CIFAR100 datasets than competing solutions. Moreover, when compared with the ACGAN and CNN classification algorithms with the same deep network structure, the proposed method achieves better classification accuracy and stability than these existing sensor solutions.

1. Introduction

Image classification algorithms have always been a hot research topic in the field of image processing. Krizhevsky et al. [1] proposed the AlexNet network, based on deep learning methods, which was successfully applied to image classification on the ImageNet dataset [1]. Its Top5 error rate was held at the 15.4% level, a full ten percentage points lower than that of the second-place, non-deep-learning method. Moreover, ResNet [2] reduced the Top5 error rate to the 3.57% level, which exceeds human recognition performance. The successful application of these deep networks has caused deep convolutional neural networks (DCNNs) to gradually become one of the most important methods in the field of image classification research. DCNNs are essentially a more effective feature extraction method that takes the extracted features as the input of a classifier to achieve image classification. However, the major drawback of such networks is that they can only use artificially supplied samples; moreover, the models cannot learn the spatial distribution of samples. In addition, without modeling the distribution between samples, the model cannot gain a deeper understanding of the samples' internal structure, which undoubtedly affects its image classification performance.
The generative model is a model that can learn the potential distribution of data and generate new sensor samples. Traditional generative models include the Gaussian model (GM), Bayesian network (BN) [3], S-type reliability network (SRN) [4], Gaussian mixture model (GMM) [5], multinomial mixture model (MMM) [6], hidden Markov model (HMM) [7] and hidden Markov random field (HMRF) [8]. Goodfellow et al. [9] proposed generative adversarial networks (GAN) by summarizing the advantages and disadvantages of traditional generative networks. The main idea behind the GAN model involves training two antagonistic networks simultaneously: namely, a generative network (G) and a discriminative network (D). The key aspect of training the discriminative network is that it must distinguish real sensor samples from the pseudo-samples produced by the generative network, which is a binary classification task. Accordingly, the generative network is trained to generate sensor samples that look as real as possible, with the aim of causing the discriminative network to mistake them for real sensor samples, thereby passing fake samples off as genuine.
Subsequently, Radford et al. [10] proposed deep convolutional generative adversarial networks (DCGAN), which combine GAN with a convolutional neural network (CNN) in order to make the training of the generative network more stable and produce clearer generated images. Furthermore, Mirza et al. [11] proposed conditional generative adversarial networks (CGANs). Unlike the original GAN model, CGANs are trained by adding class labels to both the generator and discriminator, thus realizing directional image generation. Odena et al. [12] further proposed the ACGAN network. Like the CGAN model, the ACGAN model uses image label information during the training procedure; however, it only adds label information to the generator to realize directional image generation. Experimental results have shown that the ACGAN can generate samples with higher definition. The traditional GAN belongs to the category of unsupervised learning methods. While Kingma et al. [13], Gui et al. [14], Zhang et al. [15] and Xia et al. [16] successfully applied the GAN model to semi-supervised learning, the CGAN and ACGAN bring it into supervised learning territory. Salimans et al. [17] proposed an image recognition method based on the CGAN model, and the GAN has been successfully applied to supervised image classification. The traditional discriminator of the ACGAN network is a natural classifier; however, when used in image classification applications, many problems arise, including unstable training, poor discriminative effects and so on [18,19,20,21,22].
Based on the above problems, the present paper proposes classification processing based on an auxiliary classifier generative adversarial network (CP-ACGAN). The proposed method adjusts the sensor structure of the ACGAN network in several key ways. Firstly, the real discrimination and fake discrimination of sensor samples has been canceled in the discriminator (D), and only the prediction vectors of labels in sensor samples are outputted. Secondly, the pooling and caching method of the CNN is fused with the ACGAN model. That is, the convolutional layer part of the discriminator in the ACGAN is changed into a pooling layer so that the diversity of generative networks can be effectively utilized, while the feature extraction ability of the pooling method can also be utilized to achieve a better classification effect. Thirdly, feature matching (FM) is also proposed to improve the diversity of the generated samples. The proposed algorithm has been proven to be an effective classification method for semi-supervised learning procedures. Experimental results have shown that the proposed algorithm (CP-ACGAN) yields a better classification performance than the ACGAN method. At the same time, when compared with a CNN with a sensor network structure, it also achieves better classification accuracy. Therefore, the CP-ACGAN image classification method proposed in the present paper represents an improved image classification algorithm. Moreover, improving both the generator and the diversity of the generative sensor samples can further improve the network classification effect.
The rest of this paper is organized as follows. Section 2 discusses the GAN and its deformation. In Section 3, we propose the classification processing algorithm based on the ACGAN. Section 4 mainly deals with the experimental results of the paper. Finally, Section 5 presents the conclusion and some suggested avenues for future work.

2. Generative Adversarial Networks and Their Deformation

2.1. Generative Adversarial Networks

Generative adversarial networks are a class of generative model proposed by Goodfellow et al. [9]. The GAN model is formulated as a two-player minimax game, and the adversarial training objective is given in Equation (1):
$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{data}(x)} [\log D(x)] + \mathbb{E}_{z \sim p_z(z)} [\log (1 - D(G(z)))]$$
The network structure of the GAN model is presented in Figure 1. $N_z$ represents the noise information, $X_{real}$ is the real sample, $X_{fake}$ is the generated (fake) sample, and Real/Fake is the output decision. The GAN model consists of a generator (G) and a discriminator (D); the generator maps noise $z \sim p_z(z)$ into the sensor sample space $G(z; \theta_g)$, while the discriminator $D(x; \theta_d)$ determines whether the input $x$ comes from the real sensor samples or the generated sensor samples, meaning that the discriminator is essentially a binary classifier. Through the continuous confrontation between G and D, the generated distribution $p_g(x)$ comes to approximate the real distribution $p(x)$ and ultimately reaches a Nash equilibrium. At this point the generator matches the real data distribution completely, that is, $p_g(x) = p(x)$, while the discriminator outputs $D(x) = p(x)/(p_g(x) + p(x)) = 1/2$. Thus, the distribution of the generated samples is completely consistent with that of the real samples, and the goal of generating realistic samples has been achieved. Both networks (G and D) of the GAN model are trained with standard back-propagation, and the computation requires neither complex Markov chains nor maximum likelihood estimation; nor does it involve a complicated variational lower bound, which greatly reduces the difficulty of network training and makes convergence easier to achieve.
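For concreteness, the following PyTorch sketch shows the alternating optimization of Equation (1); the fully connected networks, the 100-dimensional noise, the flattened 784-dimensional inputs and the learning rates are illustrative assumptions rather than the settings used in this paper.

```python
import torch
import torch.nn as nn

# Illustrative networks only; the G and D used later in this paper are convolutional.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(x_real):
    n = x_real.size(0)
    z = torch.randn(n, 100)
    x_fake = G(z)

    # Discriminator step: maximize log D(x) + log(1 - D(G(z)))
    opt_d.zero_grad()
    loss_d = bce(D(x_real), torch.ones(n, 1)) + bce(D(x_fake.detach()), torch.zeros(n, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: drive D(G(z)) towards "real" (the non-saturating form of Equation (1))
    opt_g.zero_grad()
    loss_g = bce(D(x_fake), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```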

2.2. Deep Convolutional Generative Adversarial Networks

The generator (G) and discriminator (D) of the original GAN model are both built entirely from fully connected neural networks. The training procedure is simple and the amount of computation required is small; however, the images generated after training are blurred and the visual quality is poor. A CNN, by contrast, has a powerful feature extraction ability and better spatial information perception. Radford et al. [10] first proposed the DCGAN network model, in which convolutional layers and transposed convolutional layers replace the fully connected layers in the discriminator (D) and generator (G), respectively, so that the generated images have higher definition [23,24]. The DCGAN is characterized by the following changes in network structure (a minimal sketch of these conventions is given after the list):
(1).
The pooling layer in the CNN has been canceled; strided convolutions are used in the discriminator, while fractional strided convolutions are used in the generator (G).
(2).
Batch normalization (BN) has been added to every layer except the generator's output layer and the discriminator's input layer. BN helps to reduce the network's excessive dependence on the initial parameters and compensates for subpar initialization. It further prevents the gradient from vanishing and propagates the gradient to every layer of the network; moreover, it prevents the generator from collapsing all samples to the same point, thereby improving the diversity of the generated samples. This also reduces network oscillation and improves the stability of training.
(3).
The fully connected layers have also been removed. In the generator (G), the Tanh activation function is used in the final output layer, while ReLU is used in all other layers. The leaky ReLU activation function is used in all layers of the discriminator.
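A minimal sketch of these conventions for a 28 × 28 grayscale input is given below; the channel counts are illustrative assumptions.

```python
import torch.nn as nn

# DCGAN-style discriminator front end: strided convolutions replace pooling,
# BatchNorm is applied to the intermediate layer (not the input layer),
# and leaky ReLU is used throughout.
dcgan_discriminator = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),   # 28x28 -> 14x14, no BN on the input layer
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, kernel_size=7, stride=1, padding=0),   # 7x7 -> 1x1 real/fake score
    nn.Sigmoid(),
)
```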

2.3. Auxiliary Classifier Generative Adversarial Networks

Traditional generative networks were unsupervised models. The CGAN applied the generative adversarial network concept to supervised learning for the first time, establishing a correspondence between labels and generated images. The ACGAN [16] further improved on the CGAN by incorporating the idea of mutual information from InfoGAN [25]. $N_z$ represents the noise information and $L_y$ is the label fed to the generator (G); $X_{real}$ is the real sample and $X_{fake}$ is the generated sample; Real/Fake is the output decision and $L_y$ is the label output by the discriminator (D). The network structure of the ACGAN is shown in Figure 2 below.
Equations (2) and (3) are the objective functions of the ACGAN training process:
$$L_S = \mathbb{E} [\log p(S = \mathrm{real} \mid X_{real})] + \mathbb{E} [\log p(S = \mathrm{fake} \mid X_{fake})]$$
$$L_C = \mathbb{E} [\log p(C = c \mid X_{real})] + \mathbb{E} [\log p(C = c \mid X_{fake})]$$
D is trained to maximize $L_S + L_C$, while G is trained to maximize $L_C - L_S$. From the network structure and the training objective, we can see that, relative to the GAN model, the loss function of the ACGAN incorporates the cross-entropy between the input sample's label information and the posterior probability estimate of the label [26,27].
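As an illustration of Equations (2) and (3), the fragment below computes the source term L_S and the class term L_C for a discriminator with separate real/fake and class output heads; the two-head interface and the logit-based losses are assumptions made for the sketch, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def acgan_losses(d_source_real, d_source_fake, d_class_real, d_class_fake, labels):
    """d_source_*: real/fake logits; d_class_*: class logits; labels: ground-truth classes."""
    ones = torch.ones_like(d_source_real)
    zeros = torch.zeros_like(d_source_fake)
    # L_S as a loss: negative log-likelihood of the correct source (Equation (2))
    l_s = F.binary_cross_entropy_with_logits(d_source_real, ones) + \
          F.binary_cross_entropy_with_logits(d_source_fake, zeros)
    # L_C as a loss: negative log-likelihood of the correct class (Equation (3))
    l_c = F.cross_entropy(d_class_real, labels) + F.cross_entropy(d_class_fake, labels)
    return l_s, l_c

# The discriminator is updated to increase L_S + L_C (i.e. minimize l_s + l_c),
# while the generator is updated to increase L_C - L_S.
```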

3. Image Classification Processing Algorithm Using ACGAN

3.1. Applied ACGAN for Image Classification

The discriminator in the ACGAN outputs the posterior probability estimate of the input sample's label, in addition to the real/fake discrimination of the sample. After network training is complete, a sample $x$ is fed to the discriminator, which outputs the corresponding probability $p(y \mid x)$ for each class; the category $k$ that maximizes $p(y = k \mid x)$ is then selected as the label of the input sample $x$, thereby achieving image classification.
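In a hypothetical PyTorch implementation, this classification step is simply an arg-max over the discriminator's class-probability output; the sketch assumes a discriminator that returns the posterior probabilities $p(y \mid x)$ over the K classes.

```python
import torch

def classify(discriminator, x):
    # p(y | x): posterior probability estimate over the K classes
    class_probs = discriminator(x)
    # Select the category k that maximizes p(y = k | x)
    return torch.argmax(class_probs, dim=1)
```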
The generator structure of the ACGAN-based image classification model is presented in Figure 3 (the example dataset is MNIST). The network generator consists of four fully connected layers and five transposed convolutional layers. The structure of transposed convolutional layers one and three is the same (kernel_size is 4, stride is 2, padding is 1), while that of transposed convolutional layers two, four and five is also the same (kernel_size is 5, stride is 1, padding is 1).
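A hedged PyTorch sketch of a generator with this layer pattern is shown below; the channel widths, the 5 × 5 starting resolution, the fully connected widths and the label-embedding size are assumptions chosen only so that the stated kernel/stride/padding settings produce a 28 × 28 output, not values taken from the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_classes=10, z_dim=100, embed_dim=10):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        # Four fully connected layers mapping (noise, label) to a 128 x 5 x 5 feature map.
        self.fc = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 128 * 5 * 5), nn.ReLU(),
        )
        # Five transposed convolutions: layers 1 and 3 use k=4, s=2, p=1;
        # layers 2, 4 and 5 use k=5, s=1, p=1 (5x5 -> 10 -> 12 -> 24 -> 26 -> 28).
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 64, 5, 1, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 5, 1, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 5, 1, 1), nn.Tanh(),
        )

    def forward(self, z, labels):
        h = torch.cat([z, self.embed(labels)], dim=1)
        h = self.fc(h).view(-1, 128, 5, 5)
        return self.deconv(h)
```

Given a batch of noise vectors z and integer labels, Generator()(z, labels) returns a tensor of shape (batch, 1, 28, 28).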
Figure 4 presents the discriminator structure of the ACGAN model. The discriminator mirrors the generator, as it comprises five convolutional layers and four fully connected layers. Here, convolutional layers one, two and four have the same structure (kernel_size is 5, stride is 1, padding is 1), while convolutional layers three and five have the same structure (kernel_size is 4, stride is 2, padding is 1). The output layer of the discriminative network outputs the posterior probability estimate of the sample label, that is, the estimated value of the sample label on the testing dataset, in addition to the authenticity discrimination of the sample. The ACGAN [16] adds label constraints to improve the quality of high-resolution image generation, and it also introduces new measures of image quality and mode collapse.

3.2. The CP-ACGAN Algorithm

3.2.1. Feature Matching Operation

The feature matching (FM) operation is a method proposed for improved GAN training to enhance training stability and the diversity of generated samples. Assuming that $f(x)$ denotes the output of an intermediate layer of the discriminator, the objective function of FM can be expressed as Equation (4):
$$\min_{\theta_g} \left\| \mathbb{E}_{x \sim p_{data}} [f(x)] - \mathbb{E}_{z \sim p_z} [f(G(z))] \right\|_2^2$$
The feature output of the generated sample in the discriminator matches the feature output of the real sample in the discriminator. While Salimans et al. [17] and Zhang et al. [28] indicated that minibatch discrimination [29] can produce better results under some circumstances, feature matching achieves a better classification effect under semi-supervised learning conditions. Therefore, the present paper opts to introduce feature matching into the ACGAN to further improve its image classification performance.
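A minimal PyTorch sketch of the feature matching loss in Equation (4) follows; the assumption is that the discriminator exposes an intermediate-layer feature extractor f(·), passed in here as d_features.

```python
import torch

def feature_matching_loss(d_features, x_real, x_fake):
    """d_features(x) returns the discriminator's intermediate-layer output f(x)."""
    f_real = d_features(x_real).mean(dim=0)     # E_{x ~ p_data}[f(x)]
    f_fake = d_features(x_fake).mean(dim=0)     # E_{z ~ p_z}[f(G(z))]
    return torch.sum((f_real - f_fake) ** 2)    # squared L2 norm of the difference
```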

3.2.2. Improved Loss Function

Some potential problems were identified when using the ACGAN discriminative network D to classify images, including slow training, unstable networks and poor classification performance. Therefore, the original ACGAN network structure has been improved. $N_z$ represents the noise information, $L_y$ is the label fed to the generator (G), $X_{real}$ is the real sample, $X_{fake}$ is the generated sample, and $\hat{y}$ is the predicted value of the sample label. The improved network structure is presented in Figure 5.
In terms of the network structure, the improved network cancels the real and fake samples of the discriminant in the discriminator, as well as introducing feature matching in the discriminator, although the other parts remain unchanged. However, in order to ensure the effective use of the real and fake samples, the loss function of the generator and discriminator has been substantially altered. The real samples are now treated as labeled supervisory data, while the generated samples are treated as labeled fake data. The softmax classifier is then connected to the output layer of the discriminant network. The supervisory loss function of the real samples can be expressed as Equation (5):
$$L_{supervised} = -\mathbb{E}_{(x,y) \sim p_{data}} [\log p(y \mid x, y < K+1)] = -\frac{1}{N} \sum \log p(y \mid x, y < K+1) = -\frac{1}{N} \sum \log \frac{\exp \langle y, \hat{y} \rangle}{\sum_{i=1}^{K} \exp \hat{y}_i} = \frac{1}{N} \sum \left( -\langle y, \hat{y} \rangle + \log \sum_{i=1}^{K} \exp \hat{y}_i \right) = CE(y, \hat{y})$$
Here, $N$ is the batch size, $\langle \cdot, \cdot \rangle$ is the inner product, $y$ is the sample label, and $\hat{y}$ is the predicted value of the sensor label. Therefore, the loss function of the real data can be expressed as Equation (6):
$$L_{real} = L_{supervised}$$
For the generated data, the error consists of two parts: one is the probability loss of the fake samples being assigned to class $K+1$, and the other is the cross-entropy loss between the output label $\hat{y}_{fake}$ and the input label $y$. Let $L_{unsupervised}$ denote the expected probability loss of the fake sample class; using the property of the softmax function to fix $\hat{y}_{K+1} = 0$, we obtain Equation (7):
$$L_{unsupervised} = -\mathbb{E} [\log p(y = K+1 \mid x)] = -\frac{1}{N} \sum \log p(y = K+1 \mid x) = -\frac{1}{N} \sum \log \frac{\exp \hat{y}_{K+1}}{\sum_{i=1}^{K+1} \exp \hat{y}_i} = \frac{1}{N} \sum \log \left( 1 + \sum_{i=1}^{K} \exp \hat{y}_i \right)$$
Because the ACGAN network is used, the input labels of the generated samples in each batch are consistent with the labels of the real samples. Therefore, the cross-entropy loss between the generated label $\hat{y}_{fake}$ and the input label $y$ is $CE(y, \hat{y}_{fake})$. In short, the loss of the generated samples can be expressed as Equation (8):
$$L_{fake} = 0.5 \, (L_{unsupervised} + CE(y, \hat{y}_{fake}))$$
Moreover, because the parameters of the generator and discriminator are updated constantly during training, the errors of the generator and discriminator need to be constructed separately. For the discriminator D, the error can be expressed as Equation (9):
$$L_D = 0.5 \, (L_{real} + L_{fake})$$
For the generator G, the error can be expressed as Equation (10):
$$L_G = 0.5 \, (L_{FM} + L_{unsupervised})$$
In addition, $L_{FM} = \left\| \mathbb{E}_{x \sim p_{data}} [f(x)] - \mathbb{E}_{z \sim p_z} [f(G(z))] \right\|_2^2$ represents the two-norm loss term of feature matching.
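To make the construction of Equations (5)–(10) concrete, the sketch below assembles the discriminator error L_D and the generator error L_G from the class logits of the softmax-connected output layer; the helper names, the logit-based interface and the batching are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(logits):
    # Equation (7): with the (K+1)-th logit fixed at zero,
    # -log p(y = K+1 | x) = log(1 + sum_i exp(y_i)), averaged over the batch.
    zeros = torch.zeros(logits.size(0), 1, device=logits.device)
    return torch.logsumexp(torch.cat([logits, zeros], dim=1), dim=1).mean()

def discriminator_generator_losses(logits_real, logits_fake, labels, fm_loss):
    l_supervised = F.cross_entropy(logits_real, labels)          # Equation (5): CE(y, y_hat)
    l_real = l_supervised                                        # Equation (6)
    l_fake = 0.5 * (unsupervised_loss(logits_fake)
                    + F.cross_entropy(logits_fake, labels))      # Equation (8)
    l_d = 0.5 * (l_real + l_fake)                                # Equation (9)
    l_g = 0.5 * (fm_loss + unsupervised_loss(logits_fake))       # Equation (10)
    return l_d, l_g
```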

3.2.3. The Pooling Method

Convolutional neural networks (CNNs) have achieved great success on image classification tasks, and the pooling method has played an important role in image processing. As an important step in a CNN, the pooling method can not only extract effective features, but can also reduce the dimensionality of data and prevent the overfitting phenomenon from occurring [30]. The pooling method, which is the key step in feature extraction using a CNN, has the characteristics of translation, rotation and scaling invariance [31,32,33]. Commonly used pooling methods include mean pooling, maximum pooling and random pooling.
In GAN applications, in order to make the generated images sharper, deconvolution (Deconv) is used instead of pooling in the discriminative network, and pooling has therefore been abandoned in generative adversarial networks. However, the pooling method plays an irreplaceable role in solving classification problems. Therefore, if the pooling method is combined with a generative adversarial network, the resulting network can solve classification problems effectively: on the one hand, the diversity of the generated samples can be exploited; on the other hand, the pooling method can extract features more effectively.
Accordingly, the present paper proposes the CP-ACGAN as a means of solving image classification problems. On the basis of introducing feature matching and the reconstructed loss functions, convolutional layers of the discriminator in the ACGAN are changed to pooling layers. Specifically, corresponding to Figure 4, the third and fifth convolutional layers in the original discriminative network are changed to pooling layers, while the generator structure remains unchanged. Figure 6 presents a structural diagram of the CP-ACGAN discriminative network.
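A hedged sketch of this modification is given below: the third and fifth layers of the discriminator's convolutional stack are replaced by average-pooling layers, as in Figure 6, while the channel widths are illustrative assumptions.

```python
import torch.nn as nn

# CP-ACGAN-style discriminator front end: the third and fifth layers of the
# original convolutional stack are replaced by average pooling (Figure 6).
cp_acgan_features = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=1), nn.LeakyReLU(0.2),   # conv 1
    nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=1), nn.LeakyReLU(0.2),  # conv 2
    nn.AvgPool2d(kernel_size=2, stride=2),                                     # pooling replaces conv 3
    nn.Conv2d(64, 128, kernel_size=5, stride=1, padding=1), nn.LeakyReLU(0.2), # conv 4
    nn.AvgPool2d(kernel_size=2, stride=2),                                     # pooling replaces conv 5
)
```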

3.3. Details of the Proposed Algorithm

In order to solve the problems raised in Section 2.2, this paper uses a total variation model based on a logarithmic function, following He et al. [2]. To reduce noise along the illumination direction while maintaining structural consistency, the image is decomposed into structure and texture components, and the structural component is used to recover the illumination direction information of the image. The priority function is also modified to improve the robustness and reliability of the computation. A local model is then constructed to select the best matching block, so as to ensure local structural consistency and the visual credibility of the image.
The structural component $u$ of the image is obtained by the logarithmic total variation minimization model. $u$ is the piecewise-smooth part of the image $f$, containing the geometric structure and iso-illumination information. The texture component contains small-scale details and random noise in the image, such as those discussed by Dong et al. [34], Lim et al. [35] and Kumar et al. [36].
Assuming that the original image is $f$, the structural component is $u$ and the texture component is $v$, $u$ can be represented in $BV(\Omega)$ (the space of functions of bounded variation). $u$ retains the sharp edge and contour information of the image while removing the texture and noise components. The residual $v = f - u$ is a generalized function defined on $L^2(\Omega)$. The structure–texture decomposition of the image can be obtained by solving the following convex minimization problem, defined as Equation (11):
$$\inf F(u) = \int_{\Omega} \phi(|\nabla u|) \, dx + \lambda \int_{\Omega} |f - u|^2 \, dx$$
$\lambda$ is the harmonic parameter, which controls how closely the structural component $u$ approximates the original image. $\int \phi(|\nabla u|) \, dx$ is the generalized total variation of the image $u$, and $\phi(|\nabla u|)$ is a convex function of $|\nabla u|$. The minimizer satisfies the following Euler–Lagrange equation:
$$\begin{cases} \dfrac{\partial u}{\partial t} = \nabla \cdot \left( \dfrac{\phi'(|\nabla u|)}{|\nabla u|} \nabla u \right) - \lambda (u - u_0) \\ u(x, y) \big|_{t=0} = u_0 \end{cases}$$
Here, the priority term is calculated as $P(p) = C(p) D(p)$, where $C(p)$ is the confidence term and $D(p)$ is the data term. The confidence term is calculated as Equation (13):
$$C(p) = \frac{\sum_{q \in \Psi_p \cap \Phi} C(q)}{|\Psi_p|}$$
Here, $\Psi_p$ denotes the image patch centered at the located point $p$, and the denominator $|\Psi_p|$ is the number of pixels in the patch $\Psi_p$. $C(q)$ represents the confidence of pixel $q$: it is zero if $q$ lies in the region to be repaired and one if it lies in the source image region $\Phi$. Therefore, the numerator in Equation (13) counts the data points in the patch $\Psi_p$ that do not need to be repaired, so that a value close to one indicates that the patch $\Psi_p$ already contains enough real data points. The more real information an image patch provides, the higher its reliability, and the earlier it should be repaired. The data term $D(p)$ is calculated as Equation (14):
$$D(p) = \frac{\left| \nabla I_p^{\perp} \cdot n_p \right|}{\alpha}$$
The total variation based on the logarithmic function is $\phi(s) = \frac{1}{2} s \ln(1 + s)$. It can be proved that this model is an anisotropic diffusion model, and such diffusion has achieved excellent performance in illumination-direction and gradient-direction applications. Diffusion along the illumination direction is faster than in the total variation minimization model and the P–M (Perona–Malik) anisotropic diffusion model. Accordingly, this paper adopts this structure–texture decomposition model. In terms of numerical computation, the iterative Gauss–Jacobi method with half-pixel points is adopted; the specific scheme follows Min et al. [37].
After obtaining the structural image of the image to be restored, we can calculate the priority directly on the structural image $u$. In order to overcome the influence of the product form, we modify the priority calculation rule as Equation (15):
$$P(p) = \xi C(p) + (1 - \xi) D(p)$$
Here, the weight $\xi$ controls the relative emphasis placed on $C(p)$ and $D(p)$; $\xi = 0.5$ is selected. We thus calculate the confidence on the structural image, which removes the interference of some local fine-texture noise while avoiding spuriously high-priority points; when the confidence or the data term is very small, the priority computed in this way is more credible than in other cases.
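As an illustration of Equations (13) and (15), the following numpy sketch computes the modified priority for one patch; the square-window interface, the mask convention and the precomputed data term are assumptions made for the sketch.

```python
import numpy as np

def priority(confidence, data_term, mask, p, half=4, xi=0.5):
    """Equation (15): P(p) = xi * C(p) + (1 - xi) * D(p).

    confidence: per-pixel confidence map C; mask: 1 inside the region to repair, 0 in the source;
    p: (row, col) centre of the patch; half: half the patch width; data_term: precomputed D(p).
    """
    r, c = p
    patch_conf = confidence[r - half:r + half + 1, c - half:c + half + 1]
    patch_mask = mask[r - half:r + half + 1, c - half:c + half + 1]
    # Equation (13): sum the confidence of already-known pixels, normalized by the patch area.
    c_p = patch_conf[patch_mask == 0].sum() / patch_conf.size
    return xi * c_p + (1.0 - xi) * data_term
```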
As described in Section 2.1, selecting the optimal matching block with reference only to the minimum sum of squared differences (SSD) criterion will introduce mismatches and accumulated errors. Thus, we propose an optimal matching optimization model. The idea of the model is based on the premise of image classification: the texture structure of natural images is locally continuous, so the closer two regions are, the more consistent their textures should be and the smaller the local total variation of the image. Therefore, a 0-1 optimization model is proposed in this paper. The local window around the point $p$ to be repaired is taken as $W_p$ for calculating the local total variation, which depends on the image patch $\Psi_p$ used to fill the point $p$. According to the SSD principle, we find the $k$ image patches with the smallest SSD, denoted $\Psi_{q_1}, \Psi_{q_2}, \ldots, \Psi_{q_k}$. If $\Psi_p = \Psi_{q_i}$, the local total variation of the valid pixels in $W_p$ becomes $TV_{Local}(p, i) = \int_{x \in W_p \cap \Phi} |\nabla u_i| \, dx$, $i \in [1, k]$.
Objective function:
$$\min \; TV_{Local}(p, i) = \int_{x \in W_p \cap \Phi} |\nabla u_i| \, dx, \quad i \in [1, k]$$
Decision variables: $c_i, \; i = 1, 2, \ldots, k$
$$\text{s.t.} \quad \Psi_p = \sum_{i=1}^{k} c_i \Psi_{q_i}, \quad \sum_{i=1}^{k} c_i = 1, \quad c_i \in \{0, 1\}, \; i = 1, 2, \ldots, k.$$
For matching regions with multiple minimum-SSD candidates, the process is no longer a random selection, nor the selection of the first or last image patch encountered in a circular search, which avoids the shortcomings of those strategies. Compared with Kim et al. [38], selecting the matching patch according to the minimum local total variation is a more reasonable criterion for choosing one patch from the $k$ candidates than simply taking the weighted average of the $k$ candidate patches; we thus avoid the image blurring that averaging would introduce.
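Because exactly one c_i equals one, the 0-1 model above amounts to evaluating the local total variation for each of the k smallest-SSD candidates and keeping the candidate with the smallest value; the following numpy sketch follows that reading, with the candidate-patch interface assumed for illustration.

```python
import numpy as np

def local_total_variation(window):
    """Discrete total variation of the pixels in a local window W_p."""
    gy, gx = np.gradient(window)
    return np.sqrt(gx ** 2 + gy ** 2).sum()

def select_best_patch(candidates, paste_into_window):
    """candidates: the k patches with smallest SSD; paste_into_window(patch) returns
    the local window W_p after tentatively filling psi(p) with that patch."""
    tv_values = [local_total_variation(paste_into_window(patch)) for patch in candidates]
    return candidates[int(np.argmin(tv_values))]   # c_i = 1 for exactly one candidate
```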
The proposed algorithm is thus an image classification processing model using auxiliary classifier generative adversarial networks. It is outlined in detail in Algorithm 1 below.
Algorithm 1. CP-ACGAN
  Step 1: Read the image f and the region mask M;
  Step 2: Obtain the structural image u from the image f according to the ACGAN model;
  Step 3: Determine the set of pixels lying on the boundary $\partial\Omega$ of the region $\Omega$ to be processed;
  Step 4: For the structural image u, take an image patch $\Psi_p$ centered at each point $p \in \partial\Omega$ to be patched. Calculate the confidence term C(p) according to Equation (13), the data term D(p) according to Equation (14), and the priority P(p) according to Equation (15);
  Step 5: Determine the highest-priority point p and the corresponding image patch $\Psi_p$, and record the location of $\Psi_p$;
  Step 6: According to the optimization model of Equation (16), determine the optimal matching patch $\Psi_{\hat{p}}$ and its location, then replace $\Psi_p$ with $\Psi_{\hat{p}}$ to complete the repair of the patch around p. Meanwhile, in the structural component u, replace the corresponding patch around p with the $\Psi_{\hat{p}}$ patch;
  Step 7: Update M and $\Psi_p$;
  Step 8: Determine whether the mask M is empty. If it is empty, the algorithm ends; otherwise, return to Step 3.

4. The Experimental Results and Analysis

To further verify the effectiveness of the proposed algorithm, experiments were carried out on the MNIST [35] dataset, the CIFAR10 [36] dataset and the CIFAR100 [39] dataset. In all experiments, the pooling layer in the CP-ACGAN method utilized the averaging pooling method.
The Monte Carlo method [40], also called stochastic simulation, random sampling or statistical testing, is a mature simulation technique. Based on probability theory and mathematical statistics, it uses a computer to perform statistical experiments on random variables and random simulations in order to obtain an approximate numerical solution of a problem. To apply it to the image classification experiments, we first establish a probability model or stochastic process whose parameters correspond to the solution of the problem; we then refine the model according to its characteristics and run random simulations. The observation or sampling process yields the statistical characteristics of the relevant parameters, giving the approximate value and its accuracy.
According to the size of datasets and the experimental conditions of image classification, we chose the Monte Carlo method for random selection of training and testing samples on the CIFAR10 and CIFAR100 datasets. The MNIST dataset is a very classic dataset in the machine learning field. It is ideal for a single-sample testing procedure. The GPU environment was as follows: (1) operating system: Windows 10; (2) GPU: GTX1050+CUDA9.0+cuDNN; (3) IDE: Pycharm; (4) Framework: Pytorch-GPU; (5) Interpreter: Python 3.6.
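A minimal sketch of this Monte Carlo selection of a training subset is shown below, assuming the torchvision CIFAR-10 loader and a subset of 1000 training samples as used in the CIFAR10 experiment; the batch size and seed are illustrative assumptions.

```python
import numpy as np
import torchvision
from torch.utils.data import Subset, DataLoader

# Monte Carlo selection of a random training subset from CIFAR-10.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                          transform=torchvision.transforms.ToTensor())
rng = np.random.default_rng(seed=0)
indices = rng.choice(len(train_set), size=1000, replace=False)   # 1000 random training samples
train_loader = DataLoader(Subset(train_set, indices.tolist()), batch_size=100, shuffle=True)
```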

4.1. MNIST Dataset Experiment

The MNIST dataset is a handwriting dataset containing a total of 60,000 training samples and 10,000 testing samples. Each sample corresponds to one digit from zero to nine, is a two-dimensional image of size 28 × 28, and is expanded into a 784-dimensional vector. To enhance the comparability of our experimental results, the ACGAN network structure of our experiments is described in Figure 3, while the discriminator network structure of the CP-ACGAN is shown in Figure 6. The number of training epochs was 1000 for a single training run. In the experiments, the generator and discriminator were optimized using the Adam algorithm [41], and the learning rate was 0.002. The experiments also used the deep learning framework and were implemented under a GPU environment.
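A hedged sketch of the corresponding optimizer setup is shown below; the network and data-loader names and the Adam beta parameters are assumptions, while the learning rate of 0.002 and the 1000 epochs are the settings reported above.

```python
import torch

# generator, discriminator and train_loader are assumed to be defined elsewhere.
opt_g = torch.optim.Adam(generator.parameters(), lr=0.002, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.002, betas=(0.5, 0.999))

for epoch in range(1000):                      # 1000 training epochs, as in the experiments
    for x_real, labels in train_loader:
        ...                                    # alternate the discriminator and generator updates
```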
For existing image classification problems, the most effective method at present is the DCNN. Thus, the present paper compares the CP-ACGAN method with the CNN method. The CNN differs from the CP-ACGAN discriminator only in that a pooling layer is added behind the convolutional layers, so the network structure of its feature extractor is exactly the same as that of the CP-ACGAN discriminator. Therefore, in order to make the experiments more comparable, the mean pooling and maximum pooling variants of the CNN were compared.
Figure 7 presents the classification accuracy of the different methods on the MNIST training dataset over 1000 training epochs. The training accuracy [23,24,26,27] of the ACGAN method fluctuates greatly, although the fluctuation range decreases as the number of training epochs increases. The CP-ACGAN method proposed in this paper reduces the fluctuation range of the training accuracy, and the oscillation range shrinks rapidly as training proceeds. As can be seen from Figure 7, the CP-ACGAN method converges better than the ACGAN method, and the training accuracy quickly reaches the desired value with little training.
Figure 8 presents a testing accuracy comparison of the different methods on the MNIST testing dataset [37,42,43]. Under these conditions, the CP-ACGAN obtains better testing accuracy than the ACGAN. Compared with the range spanned by the mean pooling CNN and the maximum pooling CNN, the accuracy of the CP-ACGAN method lies above this range in most cases.
Table 1 presents the highest accuracy of the different methods on different datasets (MNIST, CIFAR10 and CIFAR100) after 1000 training epochs.
Moreover, Table 2 shows the average prediction accuracy and variance of the different methods when the network training tends to be stable after testing for 1000 epochs.
When Figure 8, Table 1 and Table 2 are combined, it is easy to see that the CP-ACGAN method exhibits smaller variance than the ACGAN method, meaning that it has better training and testing stability. At the same time, the maximum prediction accuracy of the CP-ACGAN is 99.61%, which is higher than the 99.45% achieved by the ACGAN, while the average prediction accuracy of 500 epochs is also higher. Compared with the CNN method, the CP-ACGAN method achieves better maximum prediction accuracy and average prediction accuracy than the mean and maximum pooling CNN. However, compared with the CNN method, the variance of the CP-ACGAN method is greater; that is, the stability is slightly worse. Figure 9 presents the images generated by the ACGAN and CP-ACGAN after the completion of 1000 training epochs.
An interesting phenomenon can be observed from Figure 9: namely, although the CP-ACGAN achieves better classification performance, it is worse than the ACGAN at generating images, which is exactly the same as the conclusion drawn regarding GAN-based semi-supervised learning. This problem seems to be a paradox, but as it is based on common problems with GAN supervised learning and semi-supervised learning, it will also be the focus of future work.

4.2. The CIFAR10 Dataset Experiment

The CIFAR10 dataset contains more complex data than the MNIST dataset. Each image is a 32 × 32 color image, with an image size of 3 × 32 × 32. There are ten categories, each containing 5000 training images (that is, a total of 50,000 training images), as well as 10,000 testing images overall. The network structure of the experiment is exactly the same as that of the MNIST experiment, except that the output channel number of the final layer of the generator is three, and the input channel number of the discriminator is also three.
According to the scale of the CIFAR10 dataset, we randomly selected 1000 training samples (covering all 10 categories) from the 50,000 training images using the Monte Carlo method. For the selected training images, the CNN_mean, CNN_max, ACGAN and CP-ACGAN methods were used for every training procedure. Figure 10 presents the average classification accuracy of the various methods on the CIFAR10 training set after training on the 1000 samples.
Moreover, Figure 11 presents a comparison of different methods on the CIFAR10 testing dataset. On the CIFAR10 testing dataset, the CP-ACGAN obtains better testing accuracy than the ACGAN. Relative to the range spanned by the mean pooling CNN and the maximum pooling CNN, the testing accuracy of the CP-ACGAN method meets this range in most cases, whereas the testing accuracy of the ACGAN method falls almost entirely outside it. On the CIFAR10 testing dataset, the samples selected by the Monte Carlo random numbers [40] deviate partially from the full set of testing images, but the difference is not significant. The advantage of the Monte Carlo method is that experimental data can be generated quickly by simulation. The 1000 testing samples used in Figure 11 were selected according to the Monte Carlo method, and the curve in Figure 11 was obtained by the Monte Carlo fitting analysis method; from the probability density function, random numbers conforming to the normal distribution were calculated.
The highest accuracy rates [44] obtained on the CIFAR10 testing dataset by the various methods, for various numbers of testing samples, are shown in Table 3, which also presents the mean prediction accuracy and variance of the different methods over 1000 tests.
Through analysis of Figure 11, Table 1 and Table 3, we can see that the ACGAN method achieves good results on MNIST; however, when faced with a complex CIFAR10 dataset, the effect is very poor, and far inferior to that of the CNN method. By contrast, the CP-ACGAN method shows strong adaptability. When facing the complex CIFAR10 dataset, the obtained effect is also much better than the CNN method with the same structure; however, the stability is again deficient.

4.3. The CIFAR100 Dataset Experiment

CIFAR100 is a dataset similar to CIFAR10, containing three-channel color images. However, CIFAR100 has 100 categories with 500 training pictures per category; that is, 50,000 training pictures and 10,000 testing pictures. Here, the number of training samples per category is smaller, so the performance on the testing dataset is also slightly worse. The network structure of all classification methods in the CIFAR100 experiment is exactly the same as that in the CIFAR10 and MNIST experiments.
According to the scale of the CIFAR100 dataset, we randomly selected 1000 training samples (including 100 categories) from 50,000 training images using the Monte Carlo method. For the selected training images, the CNN_mean, CNN_max [45], ACGAN and CP-ACGAN methods were used for the training procedure. The classification accuracy effect of different methods on the CIFAR100 training dataset after training 1000 samples is shown in Figure 12.
Figure 13 presents a comparison of different methods on the CIFAR100 testing dataset. Under these conditions, the CP-ACGAN obtains better testing accuracy than the ACGAN. Moreover, relative to the range spanned by the mean pooling CNN and the maximum pooling CNN, the accuracy of the CP-ACGAN method exceeds this range in most cases, while the classification accuracy of the ACGAN method falls almost entirely outside it. Compared with the CIFAR10 testing set, the comparison results of the different methods on the CIFAR100 testing set are smoother, and the ACGAN method performs better.
On the CIFAR100 testing dataset, the samples selected by the Monte Carlo random numbers [40] deviate partially from the full set of testing images. The advantage of the Monte Carlo method is that experimental data can be generated quickly by simulation. The testing samples used in Figure 13 were selected according to the Monte Carlo method, and the curve in Figure 13 was obtained by the Monte Carlo fitting analysis method.
Table 4 reflects the average predictive accuracy [46,47,48] and variance of various methods when training is conducted over 1000 tests and the network is gradually stabilized.
Analysis of Figure 13, Table 1 and Table 4 demonstrates that the behaviour of the ACGAN on CIFAR100 is similar to that on CIFAR10: the ACGAN is also less effective than the CNN with the same structure at dealing with the complex CIFAR100 dataset. The CP-ACGAN method presented in this paper again shows strong adaptability to the CIFAR100 dataset. Compared with the CNN with the same structure, the testing results are greatly improved, and the stability is also improved.
In summary, the ACGAN and CP-ACGAN achieve good classification effects when faced with a simple MNIST dataset. The ACGAN does not perform as well as the CNN when facing complex high-dimensional data; however, the CP-ACGAN method proposed in this paper also achieves good classification effects for these data. Therefore, the CP-ACGAN method can be seen to have enhanced the adaptability of the network to complex data; moreover, compared with the CNN method with the same structure, the effect is also significantly improved.

4.4. The Efficiency Comparison

To further illustrate the effectiveness of the proposed algorithm and evaluate its performance, we analyzed the time complexity [41,44,45,49,50,51,52,53] of the CP-ACGAN and compared it with that of the other models. The time complexity of the proposed algorithm is $O(n \log n)$, which is better than that of the CNN_mean, CNN_max and ACGAN methods. With the same number of iterations, the training network of the CP-ACGAN performs better than those of the CNN_mean, CNN_max and ACGAN, while the computational complexity of our method is also greatly reduced relative to the others. In summary, the efficiency of our proposed method is superior to that of both the ACGAN and the CNN model [54].

5. Conclusions

By analyzing the synthesis principle of the ACGAN's high-definition image generation and the judgment ability of its discriminator, the CP-ACGAN is proposed in this paper. Several changes were made to the original ACGAN in the proposed method, including feature matching, changing the output layer structure of the discriminator, introducing the softmax classifier, reconstructing the loss functions of the generator and discriminator by means of semi-supervised learning, and introducing the pooling method into the discriminator. The experimental results have shown that, compared with the original ACGAN method, the CP-ACGAN method achieves better classification performance on the MNIST, CIFAR10 and CIFAR100 datasets, and is also more stable. At the same time, compared with a CNN with the same deep network structure, the classification effect of the proposed method is also better. An advantage of the proposed method is that further study of the diversity of GAN-generated samples can further improve the classification effect, meaning that this method has better scalability than others. An interesting phenomenon that emerged in the experiments is that the CP-ACGAN achieves better classification effects than the ACGAN, but its generated image samples are worse. This coincides with the observed phenomenon in which semi-supervised learning based on GAN also achieves better classification performance but produces worse images. Moreover, the CP-ACGAN method achieves better classification effects than the CNN method, but exhibits some deficiencies in stability. Therefore, the question of how to further improve the stability of the CP-ACGAN classification method will be a focus of future research work. In addition, future work should improve the efficiency of the CP-ACGAN by improving the rationality and feasibility of its structure.

Author Contributions

Conceptualization Y.-T.C.; Methodology, J.W. and J.-J.T.; Software X.C.; Validation, J.-B.X.; Formal Analysis, Y.-T.C.; Investigation J.W.; Resources J.X. and J.-J.T.; Data Curation, K.Y.; Supervision J.W.; Funding Acquisition, J.W. and K.Y. Y.-T.C. provides extensive support in the overall research; J.W. conceived and designed the presented idea and developed the theory, performed the simulations, and wrote the paper; T.-J.J. and K.Y. verified the analytical methods and encouraged investigation of various relevant aspects of the proposed research; J.-B.X. and J.X. provided critical feedback and helped shape the research, analysis, and manuscript. All authors discussed the results and contributed to the final manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [61811530332,61811540410], the Open Research Fund of Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation [2015TP1005], the Changsha Science and Technology Planning [KQ1703018, KQ1706064, KQ1703018-01, KQ1703018-04], the Research Foundation of Education Bureau of Hunan Province [17A007], Changsha Industrial Science and Technology Commissioner [2017-7], the Junior Faculty Development Program Project of Changsha University of Science and Technology [2019QJCZ011].

Acknowledgments

We are grateful to our anonymous referees for their useful comments and suggestions. The authors also thank Ke Gu, Yan Gui, Jian-Ming Zhang, Lecturer Run-long Xia and Li-Dan Kuang for their useful advice during this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 2012 Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1106–1114. [Google Scholar]
  2. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  3. Zhou, J.H.; Zhang, B. Collaborative Representation Using Non-Negative Samples for Image Classification. Sensors 2019, 19, 3609. [Google Scholar] [CrossRef] [PubMed]
  4. Gao, G.W.; Zhu, D.; Yang, M.; Lu, H.M.; Yang, W.K.; Gao, H. Face Image Super-Resolution with Pose via Nuclear Norm Regularized Structural Orthogonal Procrustes Regression. Neural Comput. Appl. 2018. [Google Scholar] [CrossRef]
  5. Zhou, S.W.; He, Y.; Xiang, S.Z.; Li, K.Q.; Liu, Y.H. Region-based compressive networked storage with lazy encoding. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 1390–1402. [Google Scholar] [CrossRef]
  6. Donati, L.; Iotti, E.; Mordonini, G.; Prati, A. Fashion Product Classification through Deep Learning and Computer Vision. Appl. Sci. 2019, 9, 1385. [Google Scholar] [CrossRef]
  7. Turajlic, E.; Begović, A.; Škaljo, N. Application of Artificial Neural Network for Image Noise Level Estimation in the SVD domain. Appl. Sci. 2019, 8, 163. [Google Scholar] [CrossRef]
  8. Wang, W.L.; Li, Z.R. Advances in Generative Adversarial Network. J. Commun. 2018, 39, 135–148. [Google Scholar]
  9. Goodfellow, I.; Pouget-Adadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 2014 Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  10. Radford, A.; Metz, A.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  11. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  12. Odena, A.; Olah, C.; Shlens, J. Conditional Image Synthesis with Auxiliary Classifier GANs. In Proceedings of the 2017 International Conference on Learning Representations, Sydney, Australia, 6–11 August 2017; pp. 2642–2651. [Google Scholar]
  13. Kingma, D.P.; Rezende, D.J.; Mohamed, S.; Welling, M. Semi-Supervised Learning with Deep Generative Models. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3581–3589. [Google Scholar]
  14. Gui, Y.; Zeng, G. Joint Learning of Visual and Spatial Features for Edit Propagation from a Single Image. Vis. Comput. 2019. [Google Scholar] [CrossRef]
  15. Zhang, J.M.; Lu, C.Q.; Li, X.D.; Kim, H.J.; Wang, J. A Full Convolutional Network Based on DenseNet for Remote Sensing Scene Classification. Math. Biosci. Eng. 2019, 16, 3345–3367. [Google Scholar] [CrossRef]
  16. Xia, X.J.; Togneri, R.; Sohel, F.; Huang, D.D. Auxiliary Classifier Generative Adversarial Network with Soft Labels in Imbalanced Acoustic Event Detection. IEEE Trans. Multimed. 2019, 21, 1359–1371. [Google Scholar] [CrossRef]
  17. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2226–2234. [Google Scholar]
  18. Yin, Y.Y.; Chen, L.; Xu, Y.S.; Wan, J.; Zhang, H.; Mai, Z.D. QoS Prediction for Service Recommendation with Deep Feature Learning in Edge Computing Environment. Mob. Netw. Appl. 2019. [Google Scholar] [CrossRef]
  19. Chen, X.Y.; Xu, C.; Yang, X.K.; Song, L.; Tao, D.C. Gated-GAN: Adversarial Gated Networks for Multi-Collection Style Transfer. IEEE Trans. Image Process. 2019, 28, 546–560. [Google Scholar] [CrossRef] [PubMed]
  20. Zhang, H.M.; Qian, J.J.; Gao, J.B.; Yang, J.; Xu, C.Y. Scalable Proximal Jacobian Iteration Method with Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations. IEEE Trans. Neural Netw. Learn. Syst. 2019. [Google Scholar] [CrossRef] [PubMed]
  21. Koniusz, P.; Yan, F.; Gosselin, P.; Mikolajczyk, K. Higher-order Occurrence Pooling for Bags-of-words: Visual Concept Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 313–326. [Google Scholar] [CrossRef] [PubMed]
  22. Csurka, G.; Dance, C.R.; Fan, L.; Willamowski, J.; Bray, C. Visual Categorization with Bags of Keypoints. In Workshop on Statistical Learning in Computer Vision, in Conjunction with the European Conference on Computer Vision (ECCV); Prague, Czech Republic, 2004; Volume 1, pp. 1–2. [Google Scholar]
  23. Chen, Y.T.; Xiong, J.; Xu, W.H.; Zuo, J.W. A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine. Clust. Comput. 2018. [Google Scholar] [CrossRef]
  24. Yin, Y.Y.; Chen, L.; Xu, Y.S.; Wan, J. Location-Aware Service Recommendation with Enhanced Probabilistic Matrix Factorization. IEEE Access 2018, 6, 62815–62825. [Google Scholar] [CrossRef]
  25. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, L.; Abbeel, P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2172–2180. [Google Scholar]
  26. Wang, J.; Gao, Y.; Liu, W.; Sangaiah, A.K.; Kim, H.J. An Intelligent Data Gathering Schema with Data Fusion Supported for Mobile Sink in WSNs. Int. J. Distrib. Sens. Netw. 2019. [Google Scholar] [CrossRef]
  27. Tan, D.S.; Lin, J.M.; Lai, Y.C.; Liao, J.; Hua, K.L. Depth Map Upsampling via Multi-Modal Generative Adversarial Network. Sensors 2019, 19, 1587. [Google Scholar] [CrossRef]
  28. Zhang, J.M.; Jin, X.K.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and Semantic Convolutional Features for Robust Visual Object Tracking. Multimed. Tools Appl. 2018. [Google Scholar] [CrossRef]
  29. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Adversarial Generator-Encoder Networks. arXiv 2017, arXiv:1704.02304. [Google Scholar]
  30. Scherer, D.; Muller, A.; Behnke, S. Evaluation of Pooling Operation in Convolutional Architecture for Object Recognition. In Proceedings of the International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; pp. 92–101. [Google Scholar]
  31. Boureau, Y.L.; Ponce, J.; LeCun, Y. A Theoretical Analysis of Feature Pooling in Visual Recognition. In Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 111–118. [Google Scholar]
  32. Zhan, Y.; Hu, D.; Wang, Y.T.; Yu, X.C. Semisupervised Hyperspectral Image Classification Based on Generative Adversarial Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 212–216. [Google Scholar] [CrossRef]
  33. Chen, Y.T.; Wang, J.; Chen, X.; Sangaiah, A.K.; Yang, K.; Cao, Z.H. Image Super-Resolution Algorithm Based on Dual-Channel Convolutional Neural Networks. Appl. Sci. 2019, 9, 2316. [Google Scholar] [CrossRef]
  34. Dong, C.; Chen, C.L.; He, K.M.; Tang, X.O. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–303. [Google Scholar] [CrossRef] [PubMed]
  35. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140. [Google Scholar]
  36. Kumar, V.; Mukherjee, J.; Mandal, S.K.D. Image Inpainting Through Metric Labelling Via Guided Patch Mixing. IEEE Trans. Image Process. 2016, 25, 5212–5226. [Google Scholar] [CrossRef] [PubMed]
  37. Min, X.; Ma, K.; Gu, K.; Zhai, G.T.; Wang, Z.; Lin, W.S. Unified Blind Quality Assessment of Compressed Natural, Graphic, and Screen Content Images. IEEE Trans. Image Process. 2017, 26, 5462–5474. [Google Scholar] [CrossRef] [PubMed]
  38. Kim, J.; Lee, J.K.; Lee, K.M. Deeply-Recursive Convolutional Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645. [Google Scholar]
  39. Khan, R.; Barat, C.; Muselet, D.; Ducottet, C. Spatial Histograms of Soft Pairwise Similar Patches to Improve the Bag-of-visual-words model. Comput. Vis. Image Underst. 2014, 132, 102–112. [Google Scholar] [CrossRef]
  40. Metropolis, N.; Ulam, S. The Monte Carlo Method. J. Am. Stat. Assoc. 1949, 44, 335–341. [Google Scholar] [CrossRef]
  41. Xiang, L.Y.; Shen, X.B.; Qin, J.H.; Hao, W. Discrete Multi-Graph Hashing for Large-scale Visual Search. Neural Process. Lett. 2019. [Google Scholar] [CrossRef]
  42. Chen, Y.T.; Xu, W.H.; Zuo, J.W.; Yang, K. The Fire Recognition Algorithm Using Dynamic Feature Fusion and IV-SVM Classifier. Clust. Comput. 2018. [Google Scholar] [CrossRef]
  43. Sun, R.X.; Shi, L.F.; Yin, C.Y.; Wang, J. An Improved Method in Deep Packet Inspection Based on Regular Expression. J. Supercomput. 2019, 75, 3317–3333. [Google Scholar] [CrossRef]
  44. Kofler, C.; Muhr, R.; Spock, G. Classifying Image Stacks of Specular Silicon Wafer Back Surface Regions: Performance Comparison of CNNs and SVMs. Sensors 2019, 19, 2056. [Google Scholar] [CrossRef] [PubMed]
  45. Acremont, A.; Fablet, R.; Baussard, A.; Quin, G. CNN-Based Target Recognition and Identification for Infrared Imaging in Defense Systems. Sensors 2019, 19, 2040. [Google Scholar] [CrossRef] [PubMed]
  46. Chen, Y.T.; Wang, J.; Xia, R.L.; Zhang, Q.; Cao, Z.H.; Yang, K. The Visual Object Tracking Algorithm Research Based on Adaptive Combination Kernel. J. Ambient Intell. Humaniz. Comput. 2019. [Google Scholar] [CrossRef]
  47. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  48. Chen, Y.T.; Wang, J.; Chen, X.; Zhu, M.W.; Yang, K.; Wang, Z.; Xia, R.L. Single-Image Super-Resolution Algorithm Based on Structural Self-Similarity and Deformation Block Features. IEEE Access 2019, 7, 58791–58801. [Google Scholar] [CrossRef]
  49. Du, P.J.; Gan, L.; Xia, J.S.; Wang, D.M. Multikernel Adaptive Collaborative Representation for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4664–4677. [Google Scholar] [CrossRef]
  50. Zhu, L.; Chen, Y.S.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  51. Yu, Y.; Tang, S.H.; Aizawa, K.; Aizawa, A. Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1250–1258. [Google Scholar] [CrossRef]
  52. Chen, Y.T.; Xia, R.L.; Wang, Z.; Zhang, J.M.; Yang, K.; Cao, Z.H. The visual saliency detection algorithm research based on hierarchical principle component analysis method. Multimed. Tools Appl. 2019. [Google Scholar] [CrossRef]
  53. He, S.M.; Xie, K.; Chen, W.W.; Zhang, D.F.; Wen, J.G. Energy-aware Routing for SWIPT in Multi-hop Energy-constrained Wireless Network. IEEE Access 2018, 6, 17996–18008. [Google Scholar] [CrossRef]
  54. Qiao, T.T.; Zhang, J.; Xu, D.Q.; Tao, D.C. MirrorGAN: Learning Text-to-image Generation by Redescription. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; Available online: https://arxiv.org/pdf/1903.05854.pdf (accessed on 16 June 2019).
Figure 1. The original GAN network structure [9].
Figure 2. The ACGAN network structure [16].
Figure 3. The generator structure of ACGAN [16].
Figure 4. Discriminator structure using an ACGAN.
Figure 5. The CP-ACGAN network structure.
Figure 6. The discriminator structure of the CP-ACGAN.
Figure 7. Classification effect of different methods on the MNIST training dataset.
Figure 8. Testing accuracy comparison of different methods on the MNIST testing dataset.
Figure 9. Comparison of different methods on the MNIST testing dataset.
Figure 10. Average classification accuracy of different methods on the CIFAR10 training dataset.
Figure 11. Average testing accuracy comparison on the CIFAR10 testing dataset.
Figure 12. Average classification accuracy of different methods on the CIFAR100 training dataset.
Figure 13. Average testing accuracy comparison on the CIFAR100 testing dataset.
Table 1. Best prediction accuracy of different methods on MNIST, CIFAR10, and CIFAR100.

Model                  MNIST    CIFAR10   CIFAR100
Mean Pooling CNN       0.9951   0.7796    0.4594
Maximum Pooling CNN    0.9943   0.7639    0.4283
ACGAN                  0.9950   0.7306    0.3989
CP-ACGAN               0.9962   0.7907    0.4803
Table 2. MNIST prediction accuracy mean value, maximum value, minimum value and variance of different methods in 1000 testing epochs.

Model                  Mean Value   Maximum Value   Minimum Value   Variance
Mean Pooling CNN       0.9949       0.9949          0.9865          6.0 × 10⁻⁹
Maximum Pooling CNN    0.9941       0.9941          0.9865          3.3 × 10⁻⁹
ACGAN                  0.9940       0.9945          0.9835          4.1 × 10⁻⁷
CP-ACGAN               0.9956       0.9961          0.9890          1.9 × 10⁻⁷
Table 3. CIFAR10 prediction accuracy: mean value, maximum value, minimum value and variance of different methods in 1000 tests.

Model                  Number of Testing Samples   Mean Value   Maximum Value   Minimum Value   Variance
Mean Pooling CNN       200                         0.7654       0.7765          0.5705          3.96 × 10⁻⁶
                       600                         0.7665       0.7775          0.5800          3.88 × 10⁻⁶
                       1000                        0.7670       0.7795          0.5883          3.79 × 10⁻⁶
Maximum Pooling CNN    200                         0.7517       0.7605          0.5800          3.88 × 10⁻⁶
                       600                         0.7535       0.7645          0.5890          3.76 × 10⁻⁶
                       1000                        0.7550       0.7690          0.5960          3.67 × 10⁻⁶
ACGAN                  200                         0.7196       0.7305          0.5700          5.88 × 10⁻⁵
                       600                         0.7203       0.7335          0.5780          5.76 × 10⁻⁵
                       1000                        0.7220       0.7365          0.5890          5.69 × 10⁻⁵
CP-ACGAN               200                         0.7682       0.7850          0.6700          3.28 × 10⁻⁵
                       600                         0.7699       0.7885          0.6790          3.19 × 10⁻⁵
                       1000                        0.7715       0.7905          0.6899          3.14 × 10⁻⁵
Table 4. CIFAR100 prediction accuracy: mean value, maximum value, minimum value and variance of different methods in 1000 tests.

Model                  Number of Testing Samples   Mean Value   Maximum Value   Minimum Value   Variance
Mean Pooling CNN       200                         0.6752       0.7105          0.4000          4.92 × 10⁻⁶
                       600                         0.6792       0.7185          0.4500          4.88 × 10⁻⁶
                       1000                        0.6810       0.7252          0.4800          4.75 × 10⁻⁶
Maximum Pooling CNN    200                         0.6514       0.6750          0.3900          5.25 × 10⁻⁶
                       600                         0.6560       0.6790          0.4200          5.18 × 10⁻⁶
                       1000                        0.6600       0.6830          0.4500          5.15 × 10⁻⁶
ACGAN                  200                         0.6188       0.6480          0.4200          4.05 × 10⁻⁵
                       600                         0.6212       0.6505          0.4400          4.00 × 10⁻⁵
                       1000                        0.6258       0.6595          0.4500          3.96 × 10⁻⁵
CP-ACGAN               200                         0.7028       0.7300          0.4700          3.75 × 10⁻⁵
                       600                         0.7088       0.7380          0.4900          3.70 × 10⁻⁵
                       1000                        0.7122       0.7455          0.5022          3.64 × 10⁻⁵

Chen, Y.; Tao, J.; Wang, J.; Chen, X.; Xie, J.; Xiong, J.; Yang, K. RETRACTED: The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN. Sensors 2019, 19, 3145. https://doi.org/10.3390/s19143145

