
Flight maneuver intelligent recognition based on deep variational autoencoder network

Abstract

The selection and training of aircraft pilots involves high standards, long training cycles, heavy resource consumption, high risk, and a high elimination rate. Increasing the efficiency of every stage of pilot training, shortening the training cycle, and reducing the elimination rate are therefore urgent and important requirements of current national and military talent training strategy. To this end, this paper uses a deep variational auto-encoder network and an adaptive dynamic time warping algorithm to build an integrated system for flight maneuver recognition and quality evaluation, addressing a difficulty that currently hinders flight training data mining applications and achieving accurate recognition and reliable quality evaluation of flight regimes under highly maneuverable flight conditions. The system fully exploits the value of existing airborne flight data of military trainee pilots, supports personalized and accurate training of flight talents, and reduces the elimination rate.

1 Introduction

Facing the urgent demand for high-quality pilots created by the launching of aircraft carriers and the commissioning of carrier-borne aircraft, training high-quality military flying talents efficiently is a historical mission for military flying academies. At present, there is still a gap in mining the big data generated during flight cadets' growth and in extracting, with big data support, the rules governing how flight talents are cultivated and develop. Research on intelligent flight maneuver identification and evaluation methods is therefore of great practical significance, as reflected in the following four aspects:

First, it launches a pioneering exploration of big data construction and application for the growth of flight cadets. The industry has reached a consensus on the value and significance of such big data; what is urgently needed now is to build a big data standard system and the supporting information infrastructure, and to carry out data production and accumulation, data mining, analysis, and application.

Second, it explores an effective, data-driven way to improve the quality of flight talent training. With the rapid development of networked information systems and communication technology, the data collected and accumulated on flight cadets have become increasingly rich [1, 2]. These growth data accurately reflect a cadet's learning motivation, attitude, and effectiveness. Through data mining and correlation analysis, each cadet can be profiled as a multidimensional dynamic growth curve; when a declining willingness to learn or a weakening learning effect appears, the management and training departments can intervene and guide in time, stimulating the cadet's enthusiasm and shoring up weaknesses. By accumulating large amounts of historical flight data and flight statistics, a growth model of flight cadet training can be constructed and, after training and correction on historical data, used for scientific early warning of flight cadets and for reasonable, data-supported suggestions on cadet streaming, providing an effective way to improve the quality of flight talent training.

Third, it meets the urgent need for flight training quality assessment of flight cadets. During flight training, cadets accumulate a large volume of flight parameter data. For a long time, training management departments at all levels, when implementing the requirements of the training outline, have had to check whether aircraft completed the prescribed training maneuvers and whether those maneuvers met the requirements; faced with vast amounts of flight parameter data, doing this by manual analysis is practically impossible, and the question "Was the maneuver flown, and was it flown well?" has long plagued departments at all levels. Based on theoretical methods and tools for flight parameter data mining, automatic identification of flight regimes and quantitative evaluation of maneuver quality enable intelligent processing of the flight parameter data of every flight and analysis of maneuver completion rate and quality, effectively meeting the urgent needs of management departments at all levels.

Fourth, it is essential for fully exploiting the data generated during flight cadets' growth. Big data is hailed as a fighting force in new fields and new spaces of military struggle, and education and training are moving into the big data era: whoever can discover data wins future survival; whoever can mine data wins future development; whoever can leverage data to provide personalized services wins future competition. With the deepening of the big data concept and the iterative evolution of data processing technology, fully leveraging real historical data, letting the data "speak", and using data to support decisions has become a requirement of the times.

2 Related work

Rapid and accurate identification of flight maneuvers from flight parameter data is the premise of pilot control quality monitoring. Applied to flight cadet training, it enables evaluation of cadets' flight quality, targeted guidance on weak points in individual maneuvers, targeted decision support for cadet streaming, and early warning on whether cadets will successfully complete flight training, thereby improving the efficiency of flight training.

A large amount of flight training data is accumulated during the daily flight training of military flying cadets, and it can be used to analyze and assess their flight maneuvers and flight quality. However, the traditional algorithms applied to such data have widespread problems: low accuracy in identifying flight maneuvers, low resolution of the five-point quality assessment scale, and unsatisfactory discrimination of scores in practice. Under the current conditions of increased military flight training intensity and greater flight maneuverability, traditional maneuver recognition and quality evaluation algorithms become almost invalid in application, making objective, data-based evaluation of current military flight quality difficult to carry out.

Flight maneuver recognition is essentially a pattern recognition problem. Pilot control quality monitoring was carried out earlier in civil aviation. Because civil aircraft remain in level flight for most of a sortie, such monitoring mainly involves identifying the take-off, climb, descent, and landing stages, which are easy to recognize and of limited value. Military aircraft, especially fighters, are difficult to identify because of their strong maneuverability and complex, changeable movements, and are prone to misidentification and missed identification. Abroad, mature applications analyze military aircraft maneuvers and movement trends from radar observation and data mining in order to predict the opposing side's battlefield situation and win the initiative in situational countermeasures. Domestic research on flight maneuver recognition for military aircraft began at the start of this century, mainly at the Air Force Engineering University and the Naval Aviation University. The maneuver recognition methods widely used in engineering rely on expert systems: a knowledge base of recognition rules is established from the changing characteristics of flight parameters extracted from flight data according to specialists' experience, an inference engine is developed in a high-level programming language, and a forward exact reasoning strategy is adopted to identify maneuvers quickly. In 2004, Xie Chuan et al. used support vector machines to divide the maneuver recognition task into two stages, maneuver data screening and maneuver segment classification [3]. In 2005, Xie Chuan et al. proposed extracting flight parameter characteristics with rough set theory to improve classification accuracy [4], and Ni Shihong et al. constructed a knowledge base of flight parameter discrimination rules for typical maneuvers based on the expert system idea [5]. In 2006, Zhang Ruifeng et al. proposed an algorithm using Cardinal cubic spline curves to generate aircraft flight paths [6] as a basis for maneuver evaluation, and Yin Wenjun et al. proposed a maneuver recognition method based on a genetic algorithm [7]. In 2011, Su Chen et al. and Gao et al. used an artificial immune system algorithm [8] and an improved quantum genetic algorithm [9], respectively, to extract maneuver discrimination rules automatically without relying on expert knowledge. In 2018, Wang Yuwei et al. proposed a method for extracting maneuver recognition rules based on the whale optimization algorithm [10]. In addition, expert knowledge has been used to decompose complex maneuvers into five basic maneuvers and to build a maneuver knowledge base for recognition [11]. The disadvantage of these methods is that the completeness and accuracy of the knowledge base are difficult to judge, and representing complex maneuvers leads to multi-layer nested knowledge. In extreme cases, for some tactical maneuvers, the flight parameter characteristics are difficult to extract and the recognition knowledge cannot be expressed at all.

In recent years, scholars have proposed transforming the maneuver recognition problem into a similarity query between a flight time series and a standard time series. The earliest research on time series similarity was by Agrawal et al. in 1993 [12, 13]; it subsequently became a hot topic with many valuable results, such as the discrete Fourier transform (DFT) [12, 13], the discrete wavelet transform (DWT) [14], singular value decomposition (SVD) [15], piecewise approximation (PAA) [16], and dynamic time warping (DTW) [17, 18]. The earliest work used the discrete Fourier transform with an R-tree index to realize similarity queries on time series. The discrete wavelet transform replaces the fixed window with scaled and translated wavelets for calculation and analysis, but its information loss is serious, so it is not suitable for non-stationary sequences. Piecewise approximation segments the time series with equal-width windows and represents each window by its average value, yielding a piecewise linear representation on which the similarity query is carried out; its advantage is strong data compression while preserving the main shape of the series, and its disadvantages are that it cannot handle sequences of arbitrary length and does not support weighted distance measures. Singular value decomposition treats the variables of the time series as random variables, the observations at each time point as sample points, and the correlation coefficient matrix as the basis of feature extraction, constructing a pattern matching model according to the coordinate transformation principle in linear space; it compresses data strongly while retaining fine details, but likewise cannot handle sequences of arbitrary length or weighted distance measures. SVD can also analyze maneuvers over multiple flight parameters, and its transformation is global, but its time and space complexity are relatively high. Dynamic time warping is widely used in speech recognition; its core idea is to obtain the minimum-cost path through dynamic programming, which achieves optimal alignment and matching, supports bending of the time axis, and measures similarity between sequences of different lengths, at the cost of relatively heavy computation.

In 2015, Li Hongli et al. used a multivariate dynamic time warping (MDTW) algorithm to compute the similarity between a time series and standard maneuver sequences to determine the maneuver category [19]. In 2017, Shen Yichao et al. used hierarchical clustering based on multivariate dynamic time warping to screen node features and then constructed a Bayesian network to predict the probability that a time series belongs to each category [20], and Zhou Chao et al. used an improved multivariate dynamic time warping algorithm to achieve hierarchical pre-classification and sub-classification of flight maneuvers [21]. In 2019, Shen Yichao et al. proposed a dynamic time warping path ill-conditioned matching algorithm to remove invalid fragments from time series and improve the precision of maneuver start and end points [22]. In 2018, Xiao Yanping et al. comprehensively analyzed the lateral-directional flight quality evaluation indexes of aircraft from the perspective of a system dynamics model [23]. In 2019, Wang Yuwei et al. constructed a five-level comprehensive evaluation structure for flight quality and used a comprehensive weighting method to weight the scores of basic maneuvers [24]. In 2020, Xu Gang et al. proposed an automatic evaluation method for the training effectiveness of combat simulation [25]. Some studies have also used low-cost simulated flight data in place of costly actual flights [26]. However, influenced by many factors such as aircraft performance, pilot operating habits, and the flight environment, complex maneuvers are highly random and fuzzy. As a result, the above algorithms have generally fallen into disuse: their recognition features are not diverse enough, they rely on hard partitioning with precise thresholds, and their knowledge representation and reasoning processes fail to capture the randomness and fuzziness of complex maneuvers.

3 Flight maneuver recognition and quality assessment system

This paper addresses three specific issues: identification of the maneuver name, identification of the maneuver start and end times, and quality assessment, and builds an integrated system for intelligent flight regime identification and quality evaluation. From the flight parameters of flight training sorties, the three-dimensional trajectory and attitude data of the aircraft are obtained and used to establish maneuver criterion standards, to identify flight regimes intelligently, and to assess flight quality adaptively.

3.1 Establishment of flight maneuver criteria

The flight parameter data in flight training records comprehensively capture the training subjects and content, the maneuver data of the aircraft, and the aircraft control data, forming an abundant, authoritative, objective, and true data accumulation. The flight maneuver is the basic element of the flight training unit and a key index of flight assessment, and a single flight training sortie includes several flight regimes that need to be assessed. For the target aircraft type, combining the aircraft's maneuver characteristics and the expert experience of flight instructors with the flight parameter data sets of each maneuver, the classification methods and standard requirements for recognizable flight regimes are comprehensively sorted out, the key maneuver set is determined, and the classification standards and index requirements of flight regimes are put forward, laying the foundation for flight maneuver recognition and evaluation.

3.2 Intelligent identification of flight maneuver

In the mining, analysis, and application of flight training data, traditional algorithms generally suffer from a low recognition rate of flight maneuvers, low resolution of quality evaluation, and low differentiation of scores. Deep learning methods based on deep neural networks have been widely used for feature extraction from big data. To give full play to the feature extraction ability of neural networks and make the network attend to the temporal characteristics of the collected flight maneuver signals and extract features from them adaptively, a deep variational auto-encoder network based on deep residual convolution is constructed to realize flight maneuver recognition, which is expected to fill the gap left by the lack of effective mining and analysis methods for flight training data.

3.3 Adaptive evaluation of flight quality

On the basis of efficient automatic identification of flight regimes, and at the two levels of individual flight maneuvers and whole flight sorties, a multi-view fusion maneuver scoring algorithm based on a variational auto-encoder feature compression network and adaptive dynamic time warping is implemented, combined with the proposed maneuver classification criterion index system and the deep variational auto-encoder network.

4 Flight training maneuver library

4.1 Criterion standard

Flight parameters originate from different aircraft components and belong to different hardware management systems, so they tend to have different sampling frequencies and value ranges. Data preprocessing is therefore needed before they can be used as inputs to models such as deep neural networks, and it also ensures that the network gives equal importance to different parameters during training. Parameter types include Boolean switching variables, continuous variables, discrete variables, and angle variables, so data preprocessing includes resampling, normalization, and slicing. First, the sampling frequencies of fast-varying and slow-varying parameters are unified by down-sampling and over-sampling. Then, according to the physical characteristics of each parameter, the upper and lower bounds of its theoretical amplitude are determined, and the model inputs are normalized to the same numerical range, usually [0, 1] or [−1, 1], by amplitude scaling; this significantly improves the learning efficiency and prediction accuracy of the model. In the slicing step, each full flight sortie is divided into multiple overlapping segments so that the input samples of the segmentation network have the same sequence length; the resulting output sequences are later spliced together to obtain the result for the whole sortie. When only a few training samples are available, data augmentation is essential to make the network as invariant and robust as possible.
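The following is a minimal sketch of these preprocessing steps (resampling by linear interpolation and amplitude scaling from assumed theoretical bounds into [−1, 1]); the function names, the interpolation scheme, and the target rate are illustrative rather than the exact pipeline used here.

```python
import numpy as np

def resample_to(series, src_hz, dst_hz):
    """Unify the sampling rate of one parameter by linear interpolation
    (covers both down-sampling of fast parameters and over-sampling of
    slow ones)."""
    duration = len(series) / src_hz
    t_src = np.linspace(0.0, duration, len(series), endpoint=False)
    t_dst = np.linspace(0.0, duration, int(duration * dst_hz), endpoint=False)
    return np.interp(t_dst, t_src, series)

def normalize(series, lo, hi):
    """Scale a parameter with theoretical bounds [lo, hi] into [-1, 1]."""
    return 2.0 * (series - lo) / (hi - lo) - 1.0
```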

4.2 The three-dimensional trajectory reproduction

Due to the increasing intensity of flight training and the stronger maneuverability of aircraft, the corresponding flight regimes cannot be accurately identified by analyzing flight parameters alone. Three-dimensional trajectory reconstruction, however, can reproduce the real flight trajectory of a sortie; by observing the real-time changes of the trajectory, supplemented by the variation trends of the main flight parameters, flight regimes and their start and end moments can be identified accurately, in real time, and intuitively.

The three-dimensional position of an aircraft is determined by, and only by, longitude, latitude, and altitude. Therefore, the longitude, latitude, and barometric altitude recorded in a flight sortie are extracted as the x, y, and z coordinates, respectively, to draw scatter plots in three-dimensional space. To obtain a smooth 3D trajectory, the moving average method is used to smooth the 3D coordinates of the aircraft at each time point. By averaging over a window that slides forward as new data arrive, the moving average eliminates accidental fluctuations and smooths the curve.
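A small sketch of this smoothing step, assuming the three coordinate columns are stacked into one array; the window width k is an illustrative value.

```python
import numpy as np

def moving_average(coords, k=5):
    """Smooth each of the x/y/z columns with a centred window of width k."""
    kernel = np.ones(k) / k
    return np.column_stack(
        [np.convolve(coords[:, j], kernel, mode="same") for j in range(coords.shape[1])]
    )

# e.g. smoothed = moving_average(np.column_stack([lon, lat, baro_alt]))
```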

After obtaining the overall three-dimensional trajectory of the aircraft, we want to observe how the flight trajectory evolves with time. A sliding window is therefore adopted: at time t the flight track within the window [t, t + 100] is drawn, and as t advances the window moves along the time axis and the displayed flight path changes accordingly, which is more conducive to identifying flight regimes. To judge flight regimes intuitively and accurately by combining dynamic 3D trajectory reconstruction with real-time flight parameter data, flight history tracing software was built, as shown in Fig. 1. The interface is divided into a time-of-flight area, a dynamic 3D trajectory area, a real-time flight parameter area, and an interaction area. The time-of-flight area, at the top of the interface, shows the running time of the flight track. The dynamic 3D trajectory area, at the left of the interface, displays the three-dimensional trajectory as it changes over time; by dragging the mouse, the 3D flight track can be viewed from different angles for further maneuver analysis and judgment. The real-time flight parameter area, on the right side of the interface, dynamically displays the six flight parameters most highly correlated with flight regimes at the current moment; analyzing them allows the flight regimes to be identified more accurately. The interaction area, at the bottom of the interface, contains two control buttons, "Stop" and "Start", which pause and resume the flight history backtracking. The flight history tracing interface can be used to analyze and trace any historical flight sortie, and thus to build the flight maneuver database that underpins the classification model.

Fig. 1 Flight history tracing interface

4.3 Flight regimes to be identified

As shown in Table 1, there are a total of 21 categories of flight regimes to be identified, which are divided into three categories: take-off and landing routes, basic regimes and stunt regimes.

Table 1 Flight regimes to be identified

5 Deep variational auto-encoder maneuver recognition model

In mining flight training data, traditional algorithms rely on expert experience, have a low maneuver recognition rate, and generalize poorly. Deep learning algorithms based on convolutional neural networks improve the accuracy and generalization ability of recognition models in a data-driven way. However, traditional classification models operate on single-maneuver samples and still need expert knowledge to segment out specific maneuver samples, so they cannot process flight data containing multiple types of flight regimes. In this paper, point-by-point prediction is made from the temporal characteristics of the flight parameters in the multivariate time series, and a point-by-point maneuver recognition network based on a deep variational auto-encoder is constructed to segment flight regimes intelligently within complete sorties.

5.1 Variational auto-encoder

The variational auto-encoder is a generative model derived from approximate inference over hidden variables. It comprises an encoder network \(q_{\phi } (z|x)\) and a decoder network \(p_{\theta } (x|z)\), as shown in Fig. 2, where x is the visible variable of the input sample features and z is a set of unobserved hidden variables. The deep variational auto-encoder network is trained using variational inference theory and stochastic gradient descent. The encoder compresses the raw data into a low-dimensional space, mapping the visible variable x to the continuous hidden variable z; the decoder generates data from the hidden variables, using z to reconstruct the original data x. Because the encoder \(q_{\phi } (z|x)\) and decoder \(p_{\theta } (x|z)\) are parameterized with deep neural networks, the hidden layer extracts features rich in information.

Fig. 2 Variational auto-encoder

According to the prior assumption, the hidden variable z is constrained to be normally distributed, \(p_{\theta } (z)\sim N(\mu ,\sigma )\), so that the model learns the distribution of the input data. However, the marginal likelihood of this distribution is intractable, so by variational theory the lower bound of the marginal likelihood is taken as the objective function. Given the approximate posterior \(q_{\phi } (z|x)\), the loss function of the variational auto-encoder network is given by

$$\begin{aligned} {\mathcal{L}}(x) & = {\mathbb{E}}_{{q_{\phi } (z|x)}} [\log \, p_{\theta } (x|z)] - {\mathbb{E}}_{{q_{\phi } (z|x)}} [\log \, q_{\phi } (z|x) - \log \, p_{\theta } (z)] \\ & = {\mathbb{E}}_{{q_{\phi } (z|x)}} [\log \, p_{\theta } (x|z)] - D_{{{\text{KL}}}} (q_{\phi } (z|x)||p_{\theta } (z)) \\ \end{aligned}$$
(1)
$$D_{{{\text{KL}}}} (p||q) = - \int\limits_{x} {p(x){\text{ln}}\frac{q(x)}{{p(x)}}} {\text{d}}x$$
(2)

where \({\mathbb{E}}_{q_{\phi }(z|x)}\) is the expectation under the posterior distribution \(q_{\phi } (z|x)\), and \(D_{{{\text{KL}}}}\) is the KL divergence between the approximate posterior \(q_{\phi } (z|x)\) and the prior distribution \(p_{\theta } (z)\) of the latent variable z, which is used to measure the gap between the approximate posterior and the prior. The larger the KL divergence, the greater the difference between the two distributions.
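As a concrete illustration of Eqs. (1) and (2), the sketch below computes the negative evidence lower bound under the usual assumptions of a diagonal Gaussian posterior and a Gaussian decoder (so the reconstruction term reduces to a mean squared error), together with the reparameterized sampling of z; it is a minimal PyTorch sketch, not the exact implementation used here.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so gradients flow through the encoder."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def negative_elbo(x_hat, x, mu, logvar):
    """Reconstruction term plus the closed-form KL divergence between
    N(mu, sigma^2) and the standard normal prior (Eqs. 1-2)."""
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```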

5.2 Deep variational auto-encoder maneuver recognition network

For the multivariate time series produced during flight, a deep variational auto-encoder network is constructed, as shown in Fig. 3. It consists of two parts: variational auto-encoder feature extraction and multivariate time series segmentation. The input is a multivariate time series \(x \in R^{l \times n}\), where l is the length of the time series and n is the number of flight parameters.

Fig. 3 Deep variational auto-encoder recognition network

First, the hidden layer features of the flight parameters, \(\overline{x} \in R^{l \times z}\), are extracted by the deep variational auto-encoder network, where z is the dimension of the hidden features; this standardizes the extracted features and removes redundancy, which benefits the subsequent segmentation of the maneuver sequence. Then, convolution and down-sampling encode the hidden features as high-level features with temporal relations. Finally, deconvolution and up-sampling are applied to the high-level features to output the sequence prediction category probability vector \(\hat{y} \in R^{l \times 1}\). The classification error is computed from \(\hat{y}\) and the one-hot encoding of the true sequence classes \(y \in R^{l \times 1}\). Combining classification error and reconstruction error, the model parameters are updated iteratively by gradient descent to obtain the final network. The specific network structure and parameters are shown in Table 2.

Table 2 Hyper parameter setting of network
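The sketch below illustrates, under the assumption of a PyTorch implementation with illustrative layer sizes that do not reproduce Table 2, how a 1-D convolutional variational encoder producing per-time-step latent features can be combined with a down-sampling/up-sampling head that outputs a class score for every point of the sequence.

```python
import torch
import torch.nn as nn

class VAESegNet(nn.Module):
    """Minimal point-by-point maneuver recognition sketch: VAE feature
    extraction followed by convolutional down/up-sampling segmentation."""
    def __init__(self, n_params=21, z_dim=8, n_classes=21):
        super().__init__()
        self.enc = nn.Conv1d(n_params, 32, kernel_size=5, padding=2)
        self.mu = nn.Conv1d(32, z_dim, kernel_size=1)
        self.logvar = nn.Conv1d(32, z_dim, kernel_size=1)
        self.dec = nn.Conv1d(z_dim, n_params, kernel_size=5, padding=2)  # reconstruction branch
        self.down = nn.Sequential(nn.Conv1d(z_dim, 64, 5, stride=2, padding=2), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose1d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(64, n_classes, kernel_size=1),
        )

    def forward(self, x):                        # x: (batch, n_params, length)
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = self.dec(z)                      # used for the reconstruction error
        logits = self.up(self.down(z))           # per-point class scores
        return x_hat, mu, logvar, logits
```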

5.3 Loss function and model training

The loss function of the model has two parts: reconstruction error and classification error. For the variational auto-encoding network, we minimize the mean squared error between the decoder output \(\hat{x} \in R^{l \times n}\) and the original input \(x \in R^{l \times n}\) to ensure that no information is lost in the extracted hidden variables. At the same time, to make the approximate posterior of the hidden variables close to the true prior, the KL divergence between them should also be minimized. The reconstruction error can therefore be expressed as

$$\begin{aligned} L_{{{\text{recon}}}} & = L_{{{\text{MSE}}}} (x,\hat{x}) + L_{{{\text{KL}}}} \\ \, & = \frac{1}{nl}\sum\limits_{i = 1}^{l} {\sum\limits_{j = 1}^{n} {(x_{ij} - \hat{x}_{ij} )^{2} } } + {\text{KL}}(N(\mu ,\sigma ),N(0,1)) \\ \end{aligned}$$
(3)

where \(\mu\) and \(\sigma\) are the mean vector and variance vector output by the encoder, respectively.

For the classification error, the predicted category should agree with the label as closely as possible, so the cross-entropy loss function is adopted:

$$L_{{{\text{CE}}}} = - \sum\limits_{i = 1}^{l} {\sum\limits_{j = 1}^{K} {y_{ij} \times \log \frac{{e^{{p_{ij} }} }}{{\sum\nolimits_{k = 1}^{K} {e^{{p_{ik} }} } }}} }$$
(4)

where \(p_{ij}\) denotes the model's predicted score for maneuver class j at time i, and \(y_{ij}\) indicates the true flight maneuver performed at time i. The total loss of time series segmentation is the sum of the reconstruction error and the classification error:

$$L_{{{\text{seg}}}} = L_{{{\text{recon}}}} + L_{{{\text{CE}}}}$$
(5)
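A compact sketch of this total loss, assuming decoder output, posterior parameters, and per-point logits shaped according to the PyTorch convention (batch, channels, length), with labels holding one integer class per time step; it mirrors Eqs. (3)-(5) rather than reproducing the exact training code.

```python
import torch.nn.functional as F

def segmentation_loss(x_hat, x, mu, logvar, logits, labels):
    """Total segmentation loss: reconstruction + KL term (Eq. 3) plus the
    point-wise cross entropy of Eq. (4), summed as in Eq. (5)."""
    l_recon = F.mse_loss(x_hat, x) - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    l_ce = F.cross_entropy(logits, labels)   # logits: (batch, classes, length), labels: (batch, length)
    return l_recon + l_ce
```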

Other model training settings are as follows. All network parameters are initialized with Kaiming initialization, and the batch size is 512. The total time series segmentation loss is taken as the objective function, its gradient with respect to the weights is computed by back-propagation, and the weights are updated in the opposite direction of the gradient. The Adam stochastic gradient optimizer is used with an initial learning rate of 0.1; the learning rate is attenuated with an exponential decay strategy with a decay rate of 0.95. The model parameters are saved after 100 epochs of training.
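An illustrative training loop reflecting these settings and reusing the VAESegNet and segmentation_loss sketches above; the construction of train_loader (batches of 512 normalized slices with point-wise labels) is assumed.

```python
import torch

def train(model, train_loader, epochs=100):
    """Training loop reflecting the settings above: Kaiming initialization,
    Adam with initial lr 0.1, exponential decay of 0.95 per epoch."""
    def init_weights(m):
        if isinstance(m, (torch.nn.Conv1d, torch.nn.ConvTranspose1d, torch.nn.Linear)):
            torch.nn.init.kaiming_normal_(m.weight)

    model.apply(init_weights)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
    for _ in range(epochs):
        for x, labels in train_loader:            # batches of windowed slices, point-wise labels
            x_hat, mu, logvar, logits = model(x)
            loss = segmentation_loss(x_hat, x, mu, logvar, logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()                          # exponential decay, rate 0.95
    torch.save(model.state_dict(), "maneuver_seg.pt")

# train(VAESegNet(), train_loader)   # train_loader construction is assumed
```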

5.4 Flight data preprocessing

K = 21 flight parameters (such as heading angle, pitch angle, barometric altitude, etc.) are collected over multiple flight sorties and spliced into a multivariate time series of size L × K, where L is the total number of time points. To speed up model convergence and improve the computation speed of the network, overall statistics are computed for each flight parameter and the parameters are normalized so that each takes values on the order of 1. Finally, the data are divided into a training set and a test set in the ratio 6:4.

Then the training set is resampled to generate training samples. To keep the model input size consistent, a sliding window of fixed size Lt is used to traverse the whole training set and divide the time series into sub-samples. To collect enough samples to meet the requirements of model training, the whole series is traversed with a step size It. For a time series with L points, the number of samples N obtained by resampling is

$$N = {\text{floor}}\left( {\frac{{L - L_{t} }}{{I_{t} }}} \right) + 1$$
(6)
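A short sketch of this sliding-window resampling; the number of windows it produces equals N of Eq. (6).

```python
import numpy as np

def make_training_windows(series, L_t, I_t):
    """Traverse a (L, n_params) series with a window of length L_t and
    step I_t, yielding N = floor((L - L_t) / I_t) + 1 sub-samples."""
    L = series.shape[0]
    n = (L - L_t) // I_t + 1
    return np.stack([series[i * I_t : i * I_t + L_t] for i in range(n)])
```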

5.5 Model testing process

In the test phase, the sequence to be predicted is truncated into several sub-sequences whose length equals the network input length Lt, and each sub-sequence is fed into the maneuver recognition network to obtain a prediction sub-sequence. The prediction sub-sequences are then spliced to obtain the point-by-point prediction labels of the whole flight sortie. Finally, boundary judgment extracts each flight maneuver segment of the sortie as the output result. The test process is shown in Fig. 4.

Fig. 4 Test procedure
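A sketch of this test procedure, assuming a model with the interface of the network sketch above (returning per-point logits); truncating the sortie into non-overlapping chunks and dropping a tail shorter than Lt is a simplification.

```python
import numpy as np
import torch

def predict_sortie(model, sortie, L_t):
    """Truncate a (T, n_params) sortie into L_t-length chunks, predict each
    point's maneuver class, splice the chunks, and extract contiguous
    segments (start, end, class) as the recognized maneuvers."""
    chunks = [sortie[i:i + L_t] for i in range(0, len(sortie) - L_t + 1, L_t)]
    labels = []
    with torch.no_grad():
        for c in chunks:
            x = torch.tensor(c.T[None], dtype=torch.float32)  # (1, n_params, L_t)
            _, _, _, logits = model(x)
            labels.append(logits.argmax(dim=1).squeeze(0).numpy())
    labels = np.concatenate(labels)

    segments, start = [], 0                       # boundary judgment
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((start, t - 1, int(labels[start])))
            start = t
    return labels, segments
```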

The accuracy with which the network identifies maneuver types and their start and end times has a significant impact on the recognition results for the whole flight sortie. Therefore, this paper adopts three types of evaluation indicators, described as follows.

The intersection over union (IoU) is the ratio of the intersection to the union of the predicted result and the real labels, and measures the similarity between the predicted sequence and the label sequence:

$${\text{IoU}} = \frac{1}{{\sum\nolimits_{1}^{C} {I_{j} } }}\sum\limits_{j = 1}^{C} {I_{j} \times \frac{{I(y_{j} ,p_{j} )}}{{U(y_{j} ,p_{j} )}}}$$
(7)
$$I_{j} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{if}}\;I(y_{j} ,p_{j} ) > 0} \hfill \\ {0,} \hfill & {{\text{if}}\;I(y_{j} ,p_{j} ) = 0} \hfill \\ \end{array} } \right.$$
(8)

where Ij is an indicator showing whether any point in the sample sequence belongs to maneuver j, pj is the set of points whose predicted class (the class of maximum output probability) is maneuver j, yj is the set of points truly labeled as maneuver j, I(·) and U(·) denote the intersection and union of pj and yj, respectively, and C is the total number of categories. The overall accuracy is the proportion of correctly predicted points among all sequence points, as shown in Eq. (9).

$${\text{acc}} = \frac{{\sum {y \circ p} }}{{L_{t} }}$$
(9)

where p indicates the maneuver corresponding to the maximum probability predicted by the output at each point.

The F1 score reflects both the overall prediction accuracy of the model and the balance between categories. It is the harmonic mean of precision and recall and ranges from 0 to 1, as shown in Eq. (10); the closer the value is to 1, the stronger the model.

$$F1{\text{ - score}} = \frac{1}{C}\sum\limits_{j = 1}^{C} {\frac{{2\sum {y_{j} } \circ p_{j} }}{{\sum {y_{j} + \sum {p_{j} } } }}}$$
(10)
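An illustrative computation of the three indicators; for simplicity it averages IoU and F1 over the classes that occur in the labels or the predictions, which approximates the indicator weighting of Eqs. (7)-(8).

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes):
    """Point-wise accuracy (Eq. 9), class-averaged IoU (Eqs. 7-8) and
    F1 score (Eq. 10) for integer label sequences."""
    acc = float(np.mean(y_true == y_pred))
    ious, f1s = [], []
    for j in range(n_classes):
        t, p = (y_true == j), (y_pred == j)
        if not t.any() and not p.any():
            continue                              # class absent from this sortie
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        ious.append(inter / union)
        f1s.append(2 * inter / (t.sum() + p.sum()))
    return acc, float(np.mean(ious)), float(np.mean(f1s))
```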

The evaluation results for overall accuracy, IoU, and F1 score are shown in Table 3. The training results of all indicators are close to 100%, and the overall accuracy on the test set is 92.76%, indicating that the model identifies flight regimes well. The test IoU is 0.7354, indicating that the start and end moments of flight regimes are recognized basically correctly, though there is still room for improvement. The recognition accuracy of some categories is not high, which is related to the quality and size of the labels; more high-quality flight data and labels are needed to train better classification models.

Table 3 Model test results for flight regimes recognition

6 Flight quality assessment algorithms

Due to the high maneuverability and complex, changeable regimes of military aircraft, their flight quality evaluation lacks reasonable measures. Existing quality evaluation largely depends on the expert experience of flight instructors, so it is not highly differentiated in practical application. At the same time, the highly nonlinear transitions between flight regimes interfere with quality evaluation over the whole flight envelope. In this paper, a flight maneuver quality evaluation algorithm based on the deep variational auto-encoder network is proposed to eliminate the redundancy of contextual maneuver information over the whole flight, and a scoring rule combining global and local features is constructed to realize comprehensive, multi-view evaluation of flight maneuver quality.

6.1 Establishment of the standard maneuver reference library

To assess maneuver quality for different flight categories, corresponding standard maneuver references must be selected and a standard maneuver reference library established; it serves as the reference for flight regimes and guides the threshold setting of the subsequent scoring rules. The library is built as follows: combining the flight training outline of the designated aircraft type with the experience and knowledge of flight experts, the relevant standard flight maneuvers are selected from the rich historical flight data to form a typical standard maneuver database. The relevant standards include flight parameter standards and three-dimensional trajectory standards for each flight regime. The flight parameter standards cover entry speed, exit speed, take-off angle, duration, altitude, intake pressure, and other parameters. The 3D trajectory standard is used to judge whether the 3D trajectory of a maneuver conforms to the maneuver description and whether its smoothness meets the requirements.

According to the above standards, standard data for each type of flight regime are extracted from the flight history data, their start and end times are recorded, and they are included in the standard maneuver reference library.

6.2 Variational auto-encoder feature compression network

The flight environment is complex when an aircraft performs a maneuver, and the flight parameters monitored by airborne sensors contain variables such as space-time coordinates and state parameters. These variables are of various types, vary over different ranges, and differ greatly from one another. In addition, during a maneuver the collected signals may show the same or opposite trends, so there is highly entangled information among the feature dimensions. Using the original signals directly to evaluate flight quality would introduce serious inductive bias and cannot be applied in the actual flight environment, for the following reasons:

1. The value ranges of different dimensions vary greatly, which makes the evaluation model focus on the dimensions with larger ranges. For example, barometric altitude ranges from 0 to 2500 m, while pitch angle ranges from 0° to 180°; when measuring the distance to a standard maneuver, the distance then depends mainly on barometric altitude and the influence of pitch angle is ignored.

2. The original value distribution easily yields extreme samples, which leads to under-rating of high-quality regimes or over-rating of low-quality ones.

3. Information redundancy and correlation between dimensions are high, so the influence of some flight parameters on maneuver quality is ignored.

To reduce the impact of these problems on flight quality assessment, a deep variational auto-encoder network is constructed to constrain the distribution of the hidden features, remove redundant information between features, and standardize each feature dimension, making the distance measurement more reasonable and improving the accuracy of scoring.

Following the variational auto-encoder principle described in Sect. 5.1, a variational auto-encoder feature compression network is constructed, as shown in Fig. 5. The network operates in two stages: model training and feature compression. During model training, convolution layers form the encoder, which accepts the multivariate time signal x and encodes it into the hidden variable z (constrained by the KL divergence); z contains the characteristic information of the flight regime needed for the subsequent quality assessment task. A decoder composed of deconvolution layers then reconstructs the signal from the hidden variables, the mean squared error between the reconstructed and original signals is taken as the loss function, and the encoder parameters are trained by minimizing this reconstruction error. The specific parameter settings of the encoder and decoder are shown in Table 4.

Fig. 5 Variational auto-encoder feature compression network

Table 4 Parameter setting of variational autoencoder feature compression network

After training, flight parameters are fed into the encoder with fixed weights to obtain normally distributed hidden variables, which standardizes the features and eliminates redundancy, improving the accuracy of the subsequent distance measurement.
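A small sketch of this compression step, assuming the trained encoder returns the posterior mean and log-variance; the mean is taken as the standardized, redundancy-reduced feature used for the subsequent distance measurement.

```python
import torch

def compress(encoder, x):
    """Run flight parameters through the frozen encoder and return the
    posterior mean as the compressed feature."""
    encoder.eval()
    with torch.no_grad():
        mu, _ = encoder(x)        # assumed to return (mu, logvar)
    return mu
```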

6.3 Adaptive dynamic time warping

After variational auto-encoder feature compression, the next step is to measure the distance between the compressed features of the maneuver to be evaluated and those of the standard maneuver, as a measure of flight maneuver quality. Common measures such as Euclidean distance, Mahalanobis distance, and cosine distance can compute the similarity between two time series, but they require the sequences to be of equal length, whereas the duration of similar maneuvers varies greatly with external factors during flight and is hard to predict. Dynamic time warping computes the distance through dynamic programming to find the best matching relationship between sequences; it allows sequence points to be matched after self-replication, can compare sequences of unequal length, and is robust to noise. Therefore, this paper uses an adaptive dynamic time warping algorithm to measure the similarity between the evaluated maneuver and the standard maneuver.

Suppose two flight time series X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> have maneuver durations m and n, respectively, and the warping path obtained by dynamic programming is W = <w1, w2, …, wk, …, wK>. The similarity between X and Y is computed as follows:

1. Construct a matrix D of size m × n whose element in row i and column j is dij = dist(xi, yj), where dist is the distance function, usually the Euclidean distance with the sum running over the feature dimensions of the two points:

    $${\text{dist}}(x_{i} ,y_{j} ) = \sqrt {\sum\limits_{q} {(x_{iq} - y_{jq} )^{2} } }$$
    (11)
2. Dynamic programming is used to search for the shortest path from d11 to dmn in matrix D, and the search must satisfy the following constraints:

(1) Boundary conditions: w1 = (1, 1), wK = (m, n). The speed of any flight may vary, but the start and end points must correspond; the selected path must therefore start at the lower left corner and end at the upper right corner.

(2) Continuity: if wk−1 = (ak−1, bk−1), then the next point of the path, wk = (ak, bk), must satisfy ak − ak−1 ≤ 1 and bk − bk−1 ≤ 1. That is, a point cannot be matched by skipping over its neighbours; it can only align with adjacent points. This ensures that every coordinate of X and Y appears in W.

(3) Monotonicity: if wk−1 = (ak−1, bk−1), then the next point of the path, wk = (ak, bk), must satisfy ak − ak−1 ≥ 0 and bk − bk−1 ≥ 0. This restricts the path W to be monotonic in time, so that matched point pairs do not cross.

Combining the monotonicity and continuity constraints, the path can leave any grid point in only three directions: if the path W has passed through point (i, j), the next grid point can only be (i + 1, j), (i, j + 1), or (i + 1, j + 1), as shown in Fig. 6.

3. The shortest path from d11 to dmn in matrix D is taken as the similarity of sequences X and Y, and the cumulative dynamic time warping distance is given by

    $$D_{dtw} (X,Y) = d_{i,j} (x_{i} ,y_{j} ) + \min \left\{ \begin{gathered} D_{dtw} (X_{i - 1} ,Y_{j} ) \hfill \\ D_{dtw} (X_{i} ,Y_{j - 1} ) \hfill \\ D_{dtw} (X_{i - 1} ,Y_{j - 1} ) \hfill \\ \end{gathered} \right.$$
    (12)

    where i = 1, 2, …, m and j = 1, 2, …, n; Ddtw(X, Y) represents the distance between sequences X and Y, xi and yj are the points of X and Y, respectively, and di,j(xi, yj) is the Euclidean distance between xi and yj.

    Fig. 6 Path searching direction

Computing all paths in the entire matrix incurs enormous time complexity. To achieve fast and accurate flight maneuver evaluation, a fixed window is used to limit the maximum offset between the sequences. In addition, to suppress the phenomenon that abnormal differences at the two ends of the sequences lead to an overly low dynamic time warping distance, an offset parameter is used to bound the maximum offset of abnormal points at both ends. Finally, for each flight maneuver sequence to be evaluated, the dynamic time warping distance to every standard maneuver of the corresponding category in the standard maneuver reference library is computed for the subsequent quality evaluation.
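A minimal sketch of windowed dynamic time warping for a single feature dimension, following Eqs. (11)-(12) with a band of width `window` around the diagonal to bound the sequence offset; the end-point offset relaxation mentioned above is omitted for brevity.

```python
import numpy as np

def dtw_distance(x, y, window=None):
    """Cumulative DTW distance between two 1-D sequences with an optional
    band constraint limiting how far the warping path may stray from the
    diagonal."""
    m, n = len(x), len(y)
    w = max(window, abs(m - n)) if window is not None else max(m, n)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(max(1, i - w), min(n, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])              # Euclidean distance in 1-D
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]
```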

6.4 Global–local information multi-view fusion scoring rule

When evaluating flight maneuver quality, a reasonable scoring algorithm should consider both the completion of all indexes and the completion of each specific index during flight. If all the indicators of a flight regime are above standard, a weighted sum over the quality of each indicator gives the overall score; if a single specific index does not meet the requirements, the final score must not exceed a fixed cap. This paper considers both global and local information and forms a set of global-local multi-view fusion scoring rules to achieve comprehensive multi-view scoring.

For a maneuver to be evaluated, \(x \in R^{l \times n}\), and k standard regimes of the same class {s1, s2, …, sk}, \(s_{i} \in R^{{l_{i} \times n}}\) for i = 1, 2, …, k, where l and li are the durations of the maneuver to be evaluated and of each standard maneuver, respectively, and n is the total number of flight parameters, the normalized hidden features \(\overline{x} \in R^{l \times z}\) and \(\{ \overline{{s_{1} }} ,\overline{{s_{2} }} , \ldots ,\overline{{s_{k} }} \}\), \(\overline{{s_{i} }} \in R^{{l_{i} \times z}}\) for i = 1, 2, …, k, are obtained by feeding them into the variational auto-encoder feature compression network, where z is the number of independent feature dimensions after compression. The adaptive dynamic time warping algorithm is then used to compute, in each dimension, the distances between each standard maneuver and the maneuver to be evaluated, dz = {d1,z, d2,z, …, dk,z}, where \(d_{i,z} = {\text{adist}}(\overline{{s_{i} }} (:,z),\overline{x}(:,z))\). The minimum of the k distances is taken as the similarity measure between the maneuver to be evaluated and the standard maneuver benchmark in that dimension, Dz = min(d1,z, d2,z, …, dk,z).

Based on these similarity measures, the global-local multi-view comprehensive score is computed. A lower threshold a and an upper threshold b are chosen to divide the overall score into three ranges: 100 points, 60-100 points, and 60 points. Considering global information, if the distance between the maneuver to be evaluated and the standard maneuver benchmark is less than the lower threshold a in every feature dimension, the maneuver scores 100 points. Considering local information, if the distance in some dimension exceeds the lower threshold a, the maneuver scores less than 100 points; as long as every dimension stays below the upper threshold b, the score is determined by the total margin by which the dimensions exceed the lower threshold, and the deduction is assumed to be linear in this total margin: the larger the total margin, the lower the score; the smaller the total margin, the higher the score; when the total margin is 0, the score is 100 points. If the distance measure in any feature dimension exceeds the upper threshold b, the maneuver scores 60 points. The scoring rules are shown in Eq. (13).

$$S = \left\{ {\begin{array}{*{20}l} {100} \hfill & {{\text{if}}\;\forall i,\;D_{i} \le a} \hfill \\ {100 - 40 \times \sum\limits_{i = 1}^{z} {\frac{{\max (D_{i} - a,0)}}{b - a}} } \hfill & {{\text{if}}\;\forall i,\;D_{i} \le b\;{\text{and}}\;\exists i,\;D_{i} > a} \hfill \\ {60} \hfill & {{\text{if}}\;\exists i,\;D_{i} > b} \hfill \\ \end{array} } \right.$$
(13)

After simplification, the final flight maneuver quality score can be expressed as Eq. (14):

$$S = 100 - \min \left( {40,40 \times \frac{{\sum\limits_{i = 1}^{z} {\max (D_{i} - a,0)} }}{b - a}} \right)$$
(14)

As described above, the quality score depends closely on the choice of the upper and lower thresholds. If the thresholds are too high, the algorithm tends to output higher scores and grades become inflated; if they are too low, the algorithm is too strict and the overall maneuver quality scores are low. A reasonable choice of the upper bound b and the lower bound a is therefore crucial to the scoring results.

First, considering that 100-point scores should be sparse, the lower threshold a is determined from the standard maneuver reference library: the adaptive dynamic time warping algorithm is used to compute the similarity measures between the standard maneuvers in the library, and their minimum is taken as the lower threshold. Since the flight parameters of standard regimes are very close to one another, taking the minimum distance between them as the lower bound ensures both that a 100-point maneuver meets the requirements of the standard regimes and that 100-point scores remain sparse. Second, the upper threshold b separates the 60-100 point interval from the 60 point interval. It is assumed that each class of flight regime follows a Gaussian distribution and that the standard regimes lie near its mean. The maximum similarity measure between standard regimes in the library is taken as the standard deviation σ of the Gaussian distribution; by the 3σ principle, the probability of a maneuver lying more than 3σ away is almost zero. Therefore, three times the maximum similarity measure between the standard maneuvers is taken as the upper threshold b, ensuring that 60-point maneuvers are also sparse. In summary, the lower threshold a and the upper threshold b are given by

$$\left\{ \begin{gathered} a = \mathop {\min }\limits_{i \ne j} \left( {{\text{adist}}(\overline{s}_{i} ,\overline{s}_{j} )} \right),\quad i,j = 1,2, \ldots ,k \hfill \\ b = 3 \times \mathop {\max }\limits_{i \ne j} \left( {{\text{adist}}(\overline{s}_{i} ,\overline{s}_{j} )} \right),\quad i,j = 1,2, \ldots ,k \hfill \\ \end{gathered} \right.$$
(15)

Combining the above, the overall flow of the global-local multi-view fusion scoring algorithm is shown in Table 5.

Table 5 Overall process of scoring algorithm
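The sketch below ties the pieces together under the scoring rule of Eqs. (13)-(15), reusing the dtw_distance sketch above; how the pairwise standard-maneuver distances are aggregated across feature dimensions when setting the thresholds is an assumption made here for illustration.

```python
import numpy as np

def thresholds(standards):
    """Eq. (15): a is the smallest pairwise distance between standard
    maneuvers, b three times the largest (distances averaged over the
    compressed feature dimensions for simplicity)."""
    dists = [
        np.mean([dtw_distance(s1[:, d], s2[:, d]) for d in range(s1.shape[1])])
        for i, s1 in enumerate(standards) for j, s2 in enumerate(standards) if i != j
    ]
    return min(dists), 3.0 * max(dists)

def score_maneuver(latent, standards, a, b):
    """Eqs. (13)-(14): per-dimension similarity D_z against the closest
    standard maneuver, then a linear deduction capped at 40 points."""
    z = latent.shape[1]
    D = np.array([
        min(dtw_distance(latent[:, d], s[:, d]) for s in standards)   # D_z = min_i d_{i,z}
        for d in range(z)
    ])
    margin = np.maximum(D - a, 0.0).sum()
    return 100.0 - min(40.0, 40.0 * margin / (b - a))
```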

7 Simulation results

Because the deep convolutional variational auto-encoder imposes an independent and identically distributed prior on the hidden features, the features obtained after dimension reduction approximately follow an i.i.d. standard normal distribution, which removes the scale differences between dimensions and benefits the distance calculation between features.

The choice of reduced dimension affects network training. When the dimension is too low, feature information is lost and the original signal cannot be reconstructed effectively; when it is too high, redundancy inevitably appears between dimensions, breaking the independence between features and degrading the scoring model. We therefore examined the overall score distribution obtained under different feature dimensions to optimize this hyperparameter. Comparing the score distributions for five feature dimensions (5, 6, 8, 10, 12): when the dimension is greater than or equal to 10, the scores of some regimes clearly shift toward the low end, indicating that the auto-encoder produces some ineffective dimensions; when the dimension is less than or equal to 6, the scores of some regimes clearly shift toward the high end, indicating that the network is hard to converge and the features are insufficiently discriminative. The final scoring model therefore uses a feature dimension of 8, as shown in Fig. 7.

Fig. 7 Distribution of typical regime scores (feature dimension = 8)

8 Conclusions

In this paper, a deep learning scheme for accurate recognition of highly maneuverable flight regimes in military flight is proposed for the first time. It makes full use of the neural network's invariance to deformations of multivariate time series flight parameter data such as scaling, translation, and tilt, and, combined with the spatial mapping ability of the variational auto-encoder, realizes flight maneuver segmentation and accurate evaluation. At the same time, it effectively addresses the practical problem of scientifically evaluating training quality in flight training. Building on the accurate identification and quality evaluation of flight regimes, the concept of a basic flight quality database for flight cadets is proposed; in-depth analysis and mining of flight training big data is expected to solve the concrete engineering problems faced by management departments regarding specific maneuvers, such as "Was the maneuver flown, how good was its quality, and how can the weak points be expressed and located", filling the gap left by the current lack of effective mining and analysis means for flight training data in military training.

By establishing a comprehensive evaluation system for the flight quality of flight cadets, follow-up research and practice can exploit the advantages of big data and analyze the overall laws of flight training from multiple dimensions. From the individual dimension, the technical maturity curve formed by factors such as a single cadet's technical maturity, stability, and weak maneuvers during flight training can be studied comprehensively, providing auxiliary data support for personalized and accurate training. From the group dimension, the flight training quality patterns of characteristic groups can be mined and analyzed, the group's strong training regimes and common difficulties summarized and refined, and auxiliary analysis and decision support formed for the reform of flight training. From the dimension of flight instructors, the overall flight quality, strong regimes, and weak regimes of flight instructors can be systematically mined and analyzed, lessons drawn and weaknesses avoided through data, and the overall teaching ability and level of flight instructors improved.

Availability of data and materials

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

References

  1. X. Liu, X.B. Zhai, W. Lu, C. Wu, QoS-guarantee resource allocation for multibeam satellite industrial internet of things with NOMA. IEEE Tran. Ind. Inf. 17(3), 2052–2061 (2019). https://doi.org/10.1109/TII.2019.2951728

  2. X. Liu, X. Zhang, Rate and energy efficiency improvements for 5G-based IoT with simultaneous transfer. IEEE Internet Things J. 6, 5971–5980 (2019). https://doi.org/10.1109/JIOT.2018.2863267

  3. C. Xie, S.H. Ni, Z.L. Zhang, Y.H. Wang, Recognition method of acrobatic maneuver based on state matching and support vector machines. J. Project. Rockets Missiles Guidance 3, 240–242 (2004). https://doi.org/10.15892/j.cnki.djzdxb.2004.s3.015

  4. C. Xie, S.H. Ni, Z.L. Zhang, Pattern attributes extraction of flight data based on rough set. Comput. Eng. 12, 169–171 (2005). https://doi.org/10.3969/j.issn.1000-3428.2005.12.062

  5. S.H. Ni, Z.K. Shi, C. Xie, Y.H. Wang, Establishment of avion inflight maneuver action recognizing knowledge base. Comput. Simul. (2005). https://doi.org/10.3969/j.issn.1006-9348.2005.04.007

  6. R.F. Zhang, S.H. Ni, The algorithm of flight trajectory creatation based on Cardinals cubic splines. Calc. Technol. Autom. S2, 109–111 (2006). https://doi.org/10.3969/j.issn.1673-4599.2005.01.009

  7. W.J. Yin, S.H. Ni, A method of recognizing flight maneuver based on genetic algorithm. Comput. Dev. Appl. (2006). https://doi.org/10.3969/j.issn.1003-5850.2006.04.008

  8. C. Su, S.H. Ni, Y.H. Wang, Method of rule acquirement of flight state based on improved AIS. Comput. Eng. Appl. 47(3), 237–239 (2011). https://doi.org/10.3778/j.issn.1002-8331.2011.03.069

  9. Y. Gao, S.H. Ni, Y.H. Wang, P. Zhang, A flight state rule extraction method based on improved quantum genetic algorithm. Electroopt. Control 18(01), 28–31 (2011). https://doi.org/10.3969/j.issn.1671-637X.2011.01.007

  10. Y.W. Wang, Y. Gao, A rule extraction method for flight action recognition based on whale optimization algorithm. J. Naval Aeronaut. Astronaut. Univ. 33(05), 447-451+498 (2018). https://doi.org/10.7682/j.issn.1673-1522.2018.05.005

  11. Y.W. Wang, Y. Gao, Research on complex motion recognition method based on basic flying motion. Ship Electron. Eng. 38(10), 74–76 (2018). https://doi.org/10.3969/j.issn.1672-9730.2018.10.018

  12. R. Agrawal, C. Faloutsos, A. Swami, Efficient Similarity Search in Sequence Databases (Springer, Berlin, 1993)

  13. R. Agrawal, K.-I. Lin, H.S. Sawhney, K. Shim, Fast similarity search in the presence of noise, scaling, and translation in time-series databases. In: Proceedings of the 21st Int'l Conference on Very Large Databases, Zurich, Switzerland, pp. 490–501 (1995)

  14. Y. Cai, R. Ng, Indexing spatio-temporal trajectories with Chebyshev polynomials. In: Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data. ACM, pp. 599–610 (2004)

  15. F. Korn, H.V. Jagadish, C. Faloutsos, Efficiently supporting ad hoc queries in large datasets of time sequence. ACM SIGMOD Rec. 26(2), 289–300 (1997). https://doi.org/10.1145/253262.253332

  16. T. Pavlidis, S.L. Horowitz, Segmentation of plane curves. IEEE Trans. Comput. 23(8), 860–870 (1974). https://doi.org/10.1109/T-C.1974.224041

  17. D.J. Berndt, J. Clifford, Using dynamic time warping to find patterns in time series. In: Working Notes of the Knowledge Discovery in Databases Workshop, Seattle, pp. 359–370 (1994)

  18. A. Olemskoi, S. Kokhan, Effective temperature of self-similar time series: analytical and numerical developments. Physica A 360(1), 37–58 (2006). https://doi.org/10.5488/CMP.8.4.761

  19. H.L. Li, Z. Shan, H.R. Guo, Flight action recognition algorithm based on MDTW. Comput. Eng. Appl. 51(09), 267–270 (2015). https://doi.org/10.3778/j.issn.1002-8331.1307-0018

  20. Y.C. Shen, S.H. Ni, P. Zhang, Flight action recognition method based on Bayesian network. Comput. Eng. Appl. 53(24), 161-167+218 (2017). https://doi.org/10.3778/j.issn.1002-8331.1707-0156

  21. C. Zhou, R. Fan, G. Zhang, Z.Y. Huang, A flight action recognition based on multivariate time series fusion. J. Air Force Eng. Univ. (Natural Science Edition) 18(04), 34–39 (2017). https://doi.org/10.3969/j.issn.1009-3516.2017.04.007

  22. Y.C. Shen, S.H. Ni, P. Zhang, Similarity query method for flight data time series sub-sequences. J. Air Force Eng. Univ. (Natural Science Edition) 20(02), 7–12 (2019)

  23. Y.P. Xiao, Q.Q. Fu, A review of the research on lat-eral direction flight quality assessment methods. J. Civ. Aviat. 2(05), 42–45 (2018)

  24. Y.W. Wang, Y. Gao, Comprehensive evaluation method of UAV flight quality based on comprehensive weighting method. Ordnance Ind. Autom. 38(05), 1-4+10 (2019). https://doi.org/10.7690/bgzdh.2019.05.001

  25. G. Xu, J.M. Liu, An automatic evaluation method for military simulation training result. Fire Control Command Control 45(05), 162–169 (2020). https://doi.org/10.3969/j.issn.1002-0640.2020.05.030

  26. S. Xu, Y. Wang, L.R. Bai, Overview of military helicopter training effectiveness based on simulation. Helicopter Tech. 03, 58–62 (2020)


Funding

This work was supported by the Aviation Science Fund under Grant 2020Z014066001, and the 13th special grant of the China Postdoctoral Science Fund under Grant 2020T130181.

Author information

Authors and Affiliations

Authors

Contributions

All authors have contributed toward this work as well as in compilation of this manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Wei Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tian, W., Zhang, H., Li, H. et al. Flight maneuver intelligent recognition based on deep variational autoencoder network. EURASIP J. Adv. Signal Process. 2022, 21 (2022). https://doi.org/10.1186/s13634-022-00850-x


Keywords