Article

Modulation Format Recognition Scheme Based on Discriminant Network in Coherent Optical Communication System

1
State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
2
Beijing Key Laboratory of Space-Ground Interconnection and Convergence, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
3
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
4
Signal Theory and Communication Department, UPC—Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
5
Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), 08860 Castelldefels, Spain
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3833; https://doi.org/10.3390/electronics13193833
Submission received: 11 September 2024 / Revised: 23 September 2024 / Accepted: 26 September 2024 / Published: 28 September 2024
(This article belongs to the Special Issue Advances in Optical Communication and Optical Computing)
Figure 1. The general process for implementing ML-assisted MFI, where CDF: cumulative distribution function; AH: amplitude histogram; CD: constellation diagram; ED: eye diagram. A solid line represents the training process and a dotted line represents the recognition process.

Figure 2. Block diagram of the discriminator of the conditional generative adversarial network. Red, yellow, and blue represent the three primary-color channels fed into the neural network, and 128 × 128 is the pixel size of the data.

Figure 3. Discriminator workflow, in which the input signal is scaled from its original size of 891 × 656 to a 128 × 128 square at the image center.

Figure 4. The 1000 km coherent optical fiber transmission system, where AWG: arbitrary waveform generator; ECL: external cavity laser; DVOA: digitally variable optical attenuator; PBS: polarization beam splitter; PBC: polarization beam combiner; EDFA: erbium-doped fiber amplifier; ICR: integrated coherent receiver; DPO: digital phosphor oscilloscope. Blue represents electrical signals and green represents optical signals.

Figure 5. The 1000 km fiber transmission experimental platform.

Figure 6. Constellation diagrams processed by the general DSP chain. QPSK, 8PSK, 16QAM, 32QAM, and 64QAM are randomly selected from the training dataset, shown from top to bottom. From left to right, the processing stages are orthogonal imbalance compensation, dispersion compensation, timing recovery, polarization demultiplexing, frequency offset estimation, and phase recovery.

Figure 7. The variation in recognition accuracy of each modulation format as training deepens.

Figure 8. Performance comparison between this scheme and traditional machine learning schemes, where DT: decision tree; SVM: support vector machine; CGAN: conditional generative adversarial network.

Figure 9. Recognition accuracy of the various modulation formats under different OSNR values.

Abstract

In this paper, we exploit the discriminative ability of the discriminator in a conditional generative adversarial network and propose a scheme that uses few symbols to achieve high-accuracy recognition of modulation formats under low signal-to-noise-ratio conditions in coherent optical communication. In a 1000 km G.654E optical fiber transmission system, transmission experiments are conducted on the PDM-QPSK/-8PSK/-16QAM/-32QAM/-64QAM modulation formats at 8G/16G/32G baud rates, and the signal-to-noise ratio is swept over the experimental conditions. As a key technology in next-generation elastic optical networks, the modulation format recognition scheme proposed in this paper achieves 100% recognition of the above five modulation formats without distinguishing signal transmission rates. The optical signal-to-noise ratio thresholds required to achieve 100% recognition accuracy are 12.4 dB, 14.3 dB, 15.4 dB, 16.2 dB, and 17.3 dB, respectively.

1. Introduction

Due to the rapid development of digital twins, 5G, data centers, smart cities, and the Internet of Things, optical fiber communication networks face new challenges in terms of latency, bandwidth, and reliability [1]. To cope with these challenges, the optical network architecture, including the access network, metropolitan area network, and core network, has evolved to include new functionalities such as the integration of space and wavelength division multiplexing [2], free-space optical communication in data centers and mobile access networks [3], reconfigurable optical add-drop multiplexers [4], flex-grid transmission, and advanced modulation formats. Compared with traditional static optical communication networks, where the physical channel path from transmitter to receiver is fixed, future optical networks such as elastic optical networks [5] and cognitive optical networks [6] will be dynamic, with flexible spectrum grids, adaptable modulation formats, and reconfigurable properties [7,8]. This places requirements on the intelligent upgrade of existing optical nodes. Most importantly, it allows network operators to trade off signal quality, spectral efficiency, and transmission distance by monitoring network performance: high-order modulation formats are transmitted over optical channels in good condition, while low-order modulation formats are transmitted over low-quality fiber channels. To achieve this goal, the nodes receiving signals in the optical network must be able to identify signals in different modulation formats to adapt to the upgraded optical network architecture. Modulation format identification (MFI) technology is used to configure receiver digital signal processing modules, such as carrier recovery and adaptive equalization in flexible receivers [9]; both require a known modulation format to complete the compensation and recovery of signal impairments [10].
In general, conventional MFI schemes can be divided into three categories: information-aided, likelihood-based, and feature-based. Information-aided schemes require specific additional operations, such as radio frequency pilots [11], pilot symbols [12], or an artificial frequency offset [13]; these lower the calculation complexity at the receiver but increase the difficulty of receiver design and sacrifice spectral efficiency. Likelihood-based schemes extract the modulation format information directly from the received signals, but this convenience comes at the cost of prior knowledge of a mathematical model describing the channel and the complicated calculation of the likelihood function [14,15]. Feature-based schemes rely on choosing suitable features that have been verified over a long period of time, such as the density peak [16], intensity profile feature [17], peak-to-average ratio [18], average amplitude deviation [19], high-order cyclic cumulant [20], and intensity fluctuation feature [21]. In addition, feature-based classifiers require decision trees with a set of fixed thresholds, which carry a certain error if calculated using ideal channel parameters; otherwise, strong prior knowledge is required.
Machine learning (ML) algorithms, conversely, can autonomously construct predictive and classification models without prior knowledge of the channel model or its parameters. There are as yet no commercial standards, recommendations, or products for MFI. Utilizing ML technology to implement MFI can offer numerous benefits for both current optical networks and future autonomous, adaptive elastic optical networks [22]. The general process for implementing machine-learning-assisted MFI is depicted in Figure 1. In the first step, photoelectric signals at the receiving end are gathered to extract features that carry information about the impaired modulation format, such as constellation diagram, cumulative distribution function, amplitude histogram, and eye diagram statistics. In the second step, each digital signal processing algorithm module in the preprocessing flow shown in Figure 1 is applied in sequence. The third step involves extracting feature values from the processed signals and establishing a dataset for training the ML algorithms. Based on the signal representation technique, machine-learning-assisted approaches are categorized into three groups: sequence representation, feature representation, and image representation. Sequence representation is advantageous when signals are received sequentially, such as amplitude, phase, and in-phase and quadrature (I/Q) sequences, requiring minimal computational overhead; however, methods like histogram and fast Fourier transform sequences demand additional computation. The main challenge lies in designing deep neural networks tailored to sequence characteristics; poorly designed networks may struggle to converge during training [23]. Feature representation methods extract multiple features to represent the received signals, allowing simpler deep neural network architectures with fewer neurons and layers.
Nevertheless, drawbacks include increased computational complexity due to signal characteristic calculations, the necessity of expert knowledge for feature selection, and the potential compromise in modulation classification performance due to partial feature values extracted from the original signal [24]. Image-based recognition methods benefit greatly from ML advancements. Existing frameworks for image recognition can be directly applied to modulation classification tasks in image representation. Moreover, automatic extraction of signal features by the hidden layers circumvents the need for manual feature extraction, mitigating potential shortcomings [25,26].
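To make the image-representation idea concrete, the sketch below (our own construction, not the authors' code) renders received I/Q samples as a 2D-histogram "constellation image" that an image classifier can consume directly; the 128 × 128 size and the noise level are illustrative assumptions.

```python
import numpy as np

def constellation_image(iq, size=128, lim=1.5):
    """Bin complex baseband samples into a size x size grayscale image."""
    iq = np.asarray(iq)
    img, _, _ = np.histogram2d(
        iq.real, iq.imag,
        bins=size, range=[[-lim, lim], [-lim, lim]],
    )
    return img / max(img.max(), 1)  # normalize counts to [0, 1]

# Example: noisy QPSK symbols (hypothetical noise level).
rng = np.random.default_rng(1)
bits = rng.integers(0, 4, 4096)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
noisy = symbols + 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
img = constellation_image(noisy)
```

Such an image sidesteps manual feature engineering entirely: the four QPSK clusters appear as bright blobs, and the hidden layers of a convolutional network can learn their geometry directly.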
We propose an MFI scheme based on conditional generative adversarial networks (CGANs). By leveraging the discriminative ability of the discriminator, it achieves 100% accuracy in recognizing five modulation formats. Additionally, it obtains a generator capable of generating data with the same pattern as the input signals; as a by-product, this generator can provide data generation services for intelligent signal processing algorithms that require large amounts of data to function properly. When collecting experimental data to create the training and test sets, this scheme deliberately assigns transmission signals with different polarization states to the training and test sets, aiming to avoid common issues such as overfitting in neural network training. The reliability of the scheme is validated from multiple perspectives through transmission experiments on five modulation formats at three transmission rates. It demonstrates that the proposed scheme can accomplish modulation format recognition under low optical signal-to-noise ratio (OSNR) conditions.
The remaining structure of this paper is as follows. Section 2 elaborates on the principles of generative adversarial network (GAN) technology. It provides a detailed explanation of the network architecture used in this paper and the reasons for constructing the network in this particular way. Additionally, it introduces the setup process of the validation experiments and the relevant components used. It outlines the data processing algorithms employed and the principles for constructing the dataset after preprocessing the data. Furthermore, it validates the reliability of the experimental data and presents the configuration scheme of the artificial intelligence (AI) platform used for training the neural network. In Section 3, a comparative analysis is conducted between the MFI scheme based on CGAN and other AI-based approaches. Through visual representations, this section demonstrates that the proposed scheme can simultaneously recognize signals with multiple transmission rates. It also compares the performance of the proposed scheme with several existing ones based on the OSNR threshold, indicating the superiority of the proposed approach. Section 4 provides a summary of the proposed scheme in this paper.

2. Materials and Methods

2.1. Conditional Generative Adversarial Network

GAN is a technique that employs a mutual game to derive two functional models, namely a generative model G and a discriminative model D. Model G learns to capture the distribution of training data to generate pseudo-real data, while model D learns to distinguish between real and generated data [27]. Utilizing GANs for modulation format recognition not only facilitates accurate classification tasks but also serves as a valuable data generator for coherent optical communication systems. In the context of intelligent upgrades to optical nodes in future optical networks such as cognitive optical networks or elastic optical networks, GANs are playing a significant role, particularly in scenarios with insufficient experimental data. Compared to other data generation models, GANs offer unique advantages. GANs excel at parallelizing generation across a single large image, a task that proves challenging for other generative algorithms such as the pixel convolutional neural network [28] and fully visible belief network [29]. The scheme proposed in this paper leverages the discriminative characteristics of model D to identify the modulation format of the data received coherently by the optical fiber transmission system.
The discriminator D aims to maximize the objective function V(D, G), ensuring that it can distinguish real data from generated data as accurately as possible [30]. It does so by pushing D(x) close to 1 for real samples and D(G(z)) close to 0 for generated samples, as expressed in Equation (1).
\[ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))], \tag{1} \]
The generator’s goal is to fool the discriminator: it minimizes the objective function by generating data such that D(G(z)) approaches 1, meaning the generated samples are classified as real by the discriminator. Here, x is a real data sample drawn from the training dataset; it is one of the inputs to the discriminator and represents high-dimensional data such as images, audio, or text. z is the input noise to the generator, sampled from a simple Gaussian distribution, and serves as the source from which the generator creates new data. p_data(x) is the probability distribution of the real data; the goal of the GAN is to have the generator learn this distribution so that the generated samples G(z) resemble real data. p_z(z) is the probability distribution of the noise, from which the generator samples z. E_{x∼p_data(x)} denotes the expectation over the real data distribution p_data(x), representing the loss term for real data, and E_{z∼p_z(z)} denotes the expectation over the noise distribution p_z(z).
\[ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z \mid y) \mid y))], \tag{2} \]
D and G are alternately trained to achieve equilibrium. This process essentially involves labeling the data due to the classification of different modulation formats. The CGAN introduces conditional information y under the aforementioned premise, allowing the generated data to meet specific conditions [31]. In the context of modulation format recognition, this involves adding the label information of the modulation format. The objective function of this two-player minimax game is represented as Equation (2). Similar to the discriminators of most CGAN-based methods [32,33,34,35], conditional information y is incorporated into the discriminator by either concatenating y with the input or with the feature vector at some intermediate layer.
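As a numerical illustration of the value function in Equation (2), the sketch below (our own toy example, not the authors' training code) evaluates a Monte-Carlo estimate of V(D, G) for hypothetical discriminator outputs; `D_real` and `D_fake` stand in for D(x|y) on real samples and D(G(z|y)|y) on generated samples.

```python
import numpy as np

def cgan_value(D_real, D_fake, eps=1e-12):
    """Monte-Carlo estimate of V(D, G) from discriminator outputs in (0, 1)."""
    D_real = np.clip(np.asarray(D_real, dtype=float), eps, 1 - eps)
    D_fake = np.clip(np.asarray(D_fake, dtype=float), eps, 1 - eps)
    # E_x[log D(x|y)] + E_z[log(1 - D(G(z|y)|y))]
    return np.mean(np.log(D_real)) + np.mean(np.log(1 - D_fake))

# A confident discriminator pushes D(x|y) -> 1 and D(G(z|y)|y) -> 0,
# driving V toward 0 from below; a fully fooled one sits at 2*log(0.5).
v_confident = cgan_value([0.99, 0.98], [0.02, 0.01])
v_fooled = cgan_value([0.5, 0.5], [0.5, 0.5])
```

This makes the alternation tangible: the D step raises V (here `v_confident > v_fooled`), while the G step lowers the second term by pulling D(G(z|y)|y) toward 1.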
Since convolutional neural networks exhibit image-processing performance comparable to that of fully connected networks, we adopt them as the primary component of the CGAN; this choice reduces complexity while alleviating the computational burden [36]. The structures of the discriminator and generator are typically similar. They must keep pace with each other in learning speed while ensuring their respective learning capabilities, with neither outperforming the other excessively [37]. Thus, the generator is typically designed by mirroring the discriminator’s structure, and the solution proposed in this paper follows suit. The structure of the discriminator network in the CGAN is illustrated in Figure 2.

2.2. Discriminator Network Structure

For the discriminator, whether the signal under evaluation originates from sampled data or generated data, it is represented as a three-channel tensor. Although batch information in the fourth dimension has been incorporated during the training process to expedite progress, the input layer remains fixed. Recognizing that crucial information in the modulation format arises from portions of the constellation, employing convolutional layers is a prudent choice. This approach aids in mitigating the substantial memory overhead associated with fully connected networks. The output layer assumes the responsibility of the MFI and only needs to output the discrimination results. The structure of the output layer is straightforward. Therefore, the crux of network design lies in the hidden layers.
Standardization and activation operations are inserted between the convolutional layers, effectively constraining the signal variance and preventing saturation of the convolutional network caused by large values. However, the standardization operation is discontinued shortly before the label information is added, because the subsequent stage involves only a straightforward discrimination calculation; drawing on experience with simple fully connected networks, standardization is no longer necessary there. Furthermore, the activation function is replaced with a sigmoid so that the network output remains within (0, 1). In contrast to the warm-up technique employed in [38], this approach uses the interface provided by PyTorch to exponentially decay the learning rate. The loss function is binary cross-entropy, which yields satisfactory training outcomes. The network structure described above is depicted in Figure 3.
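A minimal PyTorch sketch of the discriminator just described is given below. The specific layer widths, the LeakyReLU slope, and the learning-rate decay factor are our assumptions for illustration; what the sketch does take from the text is the 3-channel 128 × 128 input, conv + normalization blocks, a normalization-free sigmoid head after the label information is joined, binary cross-entropy, and exponential learning-rate decay through the PyTorch interface.

```python
import torch
import torch.nn as nn

N_CLASSES = 5  # QPSK, 8PSK, 16QAM, 32QAM, 64QAM

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Conv blocks with standardization (batch norm) between them.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 128 -> 64
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 64 -> 32
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32 -> 16
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        # After the label join: no further normalization; sigmoid keeps
        # the output in (0, 1) as required for the discrimination result.
        self.head = nn.Sequential(
            nn.Linear(128 * 16 * 16 + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, label_onehot):
        h = self.features(img).flatten(1)
        return self.head(torch.cat([h, label_onehot], dim=1))

D = Discriminator()
opt = torch.optim.Adam(D.parameters(), lr=2e-4)
# Exponential learning-rate decay via the PyTorch scheduler interface.
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)
loss_fn = nn.BCELoss()  # binary cross-entropy, as in Section 2.2
```

In use, each training step computes `loss_fn(D(img, y), target)` with target 1 for sampled data and 0 for generated data, then calls `sched.step()` once per epoch to decay the learning rate.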

2.3. Optical Communication System Platform Construction

A coherent optical fiber transmission system spanning 1000 km is constructed, as depicted in Figure 4. The system uses a Keysight M8195A arbitrary waveform generator (AWG) with a sampling rate of 65 GSa/s and an analog bandwidth of 25 GHz to convert two independent digital symbol sequences into analog radio frequency signals. These radio frequency signals are amplified by two linear broadband amplifiers (SHF S807) before being input to the I/Q modulator (Fujitsu FTM7961EX). Within the I/Q modulator, two parallel Mach–Zehnder modulators are biased at the null point, with the phase difference between the lower and upper branches fixed at π/2. Continuous light waves are generated by an EXFO IQS-2800 external cavity laser with a linewidth of less than 100 kHz and an adjustable output power ranging from 0 to 15.5 dBm. Operating at 1550 nm, the light source is modulated by the I/Q modulator to achieve electrical-to-optical conversion. For polarization multiplexing, the light signal first traverses a polarization beam splitter; one of the beams passes through a meter-long optical delay line before recombination in a polarization beam combiner. Subsequently, an erbium-doped fiber amplifier (EDFA) and a digitally variable optical attenuator (OVLINK DVOA-1000) adjust the signal power launched into the optical fiber link, ensuring the adjustability of the OSNR at the receiver. The output signal is then transmitted through a 1000 km optical transmission line consisting of ten spans, as specified in Table 1.
The EDFA amplification of each span matches the attenuation of the fiber. The signal is then detected by an integrated coherent receiver. The electrical signals, after optical-to-electrical conversion, are sampled using a digital phosphor oscilloscope (Tektronix DPO 72004C) with a sampling rate of 50 GSa/s and an analog bandwidth of 20 GHz. A total of 1 × 10^6 samples are collected. By adjusting the AWG and the digitally variable optical attenuator (DVOA), a dataset is compiled. Proof-of-concept experiments involving PDM-QPSK, PDM-8PSK, PDM-8QAM, PDM-16QAM, PDM-32QAM, and PDM-64QAM raised-cosine-shaped signals with roll-off factor α = 0.1, subjected to 1000 km fiber transmission, are conducted on the actual experimental platform, as depicted in Figure 5.

2.4. Experimental Data Credibility Verification

Common digital signal processing (DSP) methods are used, as shown in Figure 1. These steps are insensitive to the modulation format. Direct current blocking is used to offset any incomplete bias voltage in the modulators. A Bessel filter with a 3 dB cutoff frequency of 0.75 × baud rate is used to filter out the redundant components. The signal is then resampled to two samples per symbol. The Gram–Schmidt orthogonalization procedure is used to compensate for the amplitude and phase imbalances of the I/Q signals caused by improper modulator bias voltage settings, mismatched photodiode responses, misalignment of polarization controllers, and defects in the optical 90-degree hybrid. The overlapping frequency domain equalization algorithm [39] is used to compensate for the static impairment caused by chromatic dispersion. The dispersion coefficient is found to be 21.156 ps/(nm·km) by the dispersion detection method, and the residual dispersion slope is set to 0.06 ps/(nm²·km). The square timing recovery algorithm is used in the timing recovery module to determine the correct symbol sampling instant adaptively [40]. The constant modulus algorithm is used for polarization demultiplexing [41]. For non-constant-modulus modulation formats, the constant modulus algorithm cannot achieve optimal convergence, but it still provides a certain equalization effect.
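The Gram–Schmidt step in the chain above can be sketched as follows (an illustrative NumPy implementation under our own conventions, not the authors' code): the correlated component of the Q rail is projected out against the I rail and both rails are renormalized, which removes the quadrature imbalance introduced by bias errors and hybrid imperfections.

```python
import numpy as np

def gram_schmidt_iq(i_sig, q_sig):
    """Orthogonalize and normalize the I and Q rails of a received signal."""
    i_sig = np.asarray(i_sig, dtype=float)
    q_sig = np.asarray(q_sig, dtype=float)
    p_i = np.mean(i_sig ** 2)                  # power of the I rail
    rho = np.mean(i_sig * q_sig)               # I/Q cross-correlation
    i_orth = i_sig / np.sqrt(p_i)              # unit-power I rail
    q_tmp = q_sig - (rho / p_i) * i_sig        # project the I component out of Q
    q_orth = q_tmp / np.sqrt(np.mean(q_tmp ** 2))
    return i_orth, q_orth

# Example: a Q rail contaminated with 30% leakage from I (hypothetical).
rng = np.random.default_rng(0)
i = rng.standard_normal(10000)
q = rng.standard_normal(10000) + 0.3 * i
i2, q2 = gram_schmidt_iq(i, q)
```

After the projection, the empirical cross-correlation of the two rails is zero by construction, restoring the orthogonality the constellation-based processing downstream relies on.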
After the aforementioned processing, the signals will be used to train and test the modulation format discriminator. Additionally, in order to validate the correctness of the experiments, frequency offset estimation and phase recovery modules are further employed, using the Viterbi–Viterbi algorithm and the blind phase search algorithm, respectively. As depicted in the fourth column of Figure 6, the data processed by the aforementioned general process are utilized as the dataset for training the CGAN.

2.5. Dataset Creation Principles

Initially, a substantial dataset is constructed through the aforementioned experiments, encompassing various transmission rates, multiple modulation formats, and a wide range of signal-to-noise ratios. Following the general DSP process, the constellation signals are divided into training and test sets. The ranges of input power, output power, and OSNR involved in the dataset are [−3.93, 4.36] dBm, [−20.4, −9.5] dBm, and [8.84, 18.67] dB, respectively. As an example of the QPSK modulation format, the experimental data for the OSNR are listed in Table 2.
To enhance network training accuracy and prevent overfitting, the two polarized received signals are strictly segregated into training and test sets. Compared to [38], which randomly divided data into training and test sets, segregating data with distinct polarization states of the received signal better mitigates the risk of overfitting. Since we want the CGAN to perform modulation format recognition with as little sample information as possible, different numbers of sampling points are used when converting the pre-equalized signal into an image: 1024, 2048, 4096, 6144, 8192, 10,240, and 12,288, with one sample point collected per symbol. To validate the robustness of the proposed approach to the transmission rate of the received signal, four datasets are configured. Three are created separately for signals with transmission rates of 8 G, 16 G, and 32 G; the fourth combines received signals at all three rates into one dataset. This is done to verify that the training speed and convergence characteristics of the CGAN-based modulation format recognition scheme do not change with the complexity of the received signal rates.
The training machine is equipped with an NVIDIA RTX 3090 GPU (10,496 CUDA cores, 24 GB of GDDR6X memory, up to 936 GB/s memory bandwidth), an Intel Core i9-10900X CPU (10 cores/20 threads, 3.70 GHz base frequency, up to 4.70 GHz with Turbo Boost Max, supporting up to 256 GB of memory and PCI Express 3.0), and 128 GB of memory composed of four 32 GB Kingston DDR4 modules configured with an extreme memory profile. Given the impracticality of loading the full training and test data into memory during training, the dataset is structured per the popular TorchVision standard. Tools provided by TorchVision 0.9.0 supply automatic shuffling, multi-worker parallel loading, and batch processing, which, combined with the PyTorch 1.8.0 deep learning framework, effectively and efficiently address these challenges.
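The loading pipeline described above can be sketched as follows. The directory layout and all parameters are our assumptions: the TorchVision `ImageFolder` convention stores one subdirectory per modulation format (e.g. `train/qpsk/*.png`, …, `test/64qam/*.png`) and yields `(image, class_index)` pairs; here a synthetic `TensorDataset` stands in for the constellation images so the `DataLoader` mechanics are runnable as-is.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for torchvision.datasets.ImageFolder("train/", transform=...):
# 64 fake 3-channel 128x128 "constellation images" with 5 format classes.
images = torch.randn(64, 3, 128, 128)
labels = torch.randint(0, 5, (64,))
dataset = TensorDataset(images, labels)

loader = DataLoader(
    dataset,
    batch_size=16,
    shuffle=True,    # automatic reshuffling each epoch
    num_workers=0,   # >0 enables multi-process parallel loading
)

for batch_imgs, batch_labels in loader:
    # Each iteration yields one ready-made training batch.
    pass
```

This is what keeps the memory footprint bounded: only one batch of images is materialized at a time, while shuffling and (optionally) worker processes hide the disk I/O behind the GPU computation.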

3. Results

In the next generation of intelligent optical networks, not only is the modulation format of the signals transmitted in the communication links dynamic, but the transmission rate also changes with the business load or transmission environment. A major feature of the proposed solution is that experimental data from different transmission schemes are collected into one dataset for training, to simulate the diversity of received signals in actual optical fiber transmission networks. Additionally, as described in Section 2.5, the dataset used to train the modulation format discriminator contains signals with different numbers of sampling points.
Signal samples of length 1024, 2048, 4096, 6144, 8192, 10,240, and 12,288 are used to simulate the uncertainty of the available data length in actual transmission. In Figure 7, panels (b), (c), and (d) show how the recognition accuracy of the proposed scheme evolves with training when the dataset contains sampled signals of only a single transmission rate. Experimental verification is completed for the five modulation formats QPSK, 8PSK, 16QAM, 32QAM, and 64QAM at OSNR values of 18.34 dB, 18.48 dB, 18.23 dB, 18.56 dB, and 18.68 dB, respectively.
The data in Figure 7 are averaged over one thousand sets of signals to obtain the final result. In the initial stages when the discriminator network has not converged, the recognition accuracy for each modulation format is very low. However, as the training epochs increase, they eventually reach 100% recognition accuracy. It is easy to see that when the transmission rates are 8 G and 16 G, the recognition accuracy of all five modulation formats reaches 100% after six iterations of training. When the transmission rate is 32 G, it takes seven iterations of training for the recognition accuracy of various modulation formats in the transmission signals to reach 100%.
However, when signals at all transmission rates are mixed into the same dataset, the number of iterations required to achieve 100% recognition accuracy increases only slightly. From these results, it can be concluded that the CGAN-based modulation format recognition scheme is strongly robust to the transmission rate of signals in optical fiber links: the recognition accuracy does not decrease with changes in the transmission signal rate. The varying signal rate does, however, make the CGAN training process more difficult, which is reflected in the results as a slight decrease in the convergence speed of the model.
To demonstrate that the CGAN-based modulation format recognition scheme is superior to schemes based on other AI algorithms, schemes based on DT and SVM are compared with the proposed scheme. For the QPSK, 8PSK, 16QAM, 32QAM, and 64QAM modulation formats at a transmission rate of 8 G, experimental data with OSNR values of 10.14 dB, 10.09 dB, 10.25 dB, and 10.58 dB are selected from the validation set to verify the accuracy of the three schemes. The results in Figure 8 show that, as the complexity of the modulation format increases, the recognition accuracy of the two comparison schemes decreases roughly linearly, while the CGAN-based scheme maintains 100% recognition accuracy throughout. This demonstrates the effectiveness of the proposed scheme and its ability to function in complex modulation format recognition scenarios.
The dependence of this scheme on the OSNR is investigated. As can be seen from Figure 9, the OSNR thresholds required to achieve 100% MFI are 17.3 dB for PDM-64QAM, 16.2 dB for PDM-32QAM, 15.4 dB for PDM-16QAM, 14.3 dB for PDM-8PSK, and 12.4 dB for PDM-QPSK. (Since the exact crossing point cannot be measured directly in the experiment, these threshold values are calculated by fitting the sampled data.) Compared to the approach proposed in [42], which achieved accurate recognition of the 16QAM modulation format at an OSNR of 15.85 dB in simulation at a transmission rate of 28 G, this approach achieves 100% recognition accuracy in a real-world environment at a transmission rate of up to 32 G over 1000 km, requiring an OSNR of only 15.4 dB. Compared to the almost ideal simulated environment with a transmission rate of 28 G described in [43], this approach also achieves nearly consistent OSNR performance for the 16QAM, 32QAM, and 64QAM modulation formats. Beyond these simulation-based comparisons, the approach proposed in [38], verified experimentally over a 25 km transmission, required OSNR values of 11 dB, 15 dB, and 18 dB to achieve 100% recognition accuracy for the QPSK, 8PSK, and 16QAM modulation formats, respectively. The CGAN-based scheme proposed in this paper achieves accurate recognition over the much longer distance of 1000 km and at higher transmission rates, with corresponding OSNR requirements of 12.4 dB, 14.3 dB, and 15.4 dB for the same three formats. Excluding QPSK, the two higher-order modulation formats achieve performance improvements of 0.7 dB and 2.6 dB.
From Figure 9, it can also be observed that when the OSNR falls below these thresholds, the recognition accuracy decreases gradually but still remains high. The curves of recognition accuracy versus input power and output power are also plotted separately; as expected, their trends are consistent with the accuracy-versus-OSNR curves. This further confirms the accuracy and reliability of the experimental data used in this paper.

4. Discussion

Compared to traditional static optical communication networks with fixed physical channel paths from transmitter to receiver, future optical networks such as elastic optical networks and cognitive optical networks will be dynamic, featuring flexible spectrum grids, adaptable modulation formats, and reconfigurable properties. Efficient and accurate modulation format recognition is therefore an essential component of future optical networks. This paper introduces a modulation format recognition scheme based on CGAN, whose discriminative network can accurately recognize five high-order modulation formats under low-OSNR conditions. A distinctive feature of the scheme is its robustness to the signal transmission rate, achieved by training the model on datasets containing three different transmission rates and seven different symbol-sequence lengths. The training and testing sets use signals with different polarization states.
In an experimental setup using a 1000-km G.654E optical fiber transmission system, we conducted transmission experiments involving the PDM-QPSK, PDM-8PSK, PDM-16QAM, PDM-32QAM, and PDM-64QAM modulation formats at baud rates of 8 G, 16 G, and 32 G. Throughout these experiments, the proposed scheme, a pivotal technology for next-generation elastic optical networks, achieves 100% recognition accuracy for all of the above modulation formats regardless of the transmission rate, with corresponding optical signal-to-noise ratio thresholds of 12.4 dB, 14.3 dB, 15.4 dB, 16.2 dB, and 17.3 dB, respectively. Hence, the solution proposed in this paper exhibits appealing performance characteristics, making it a competitive candidate for next-generation optical networks.
Moreover, the proposed scheme will face computational and storage challenges during practical deployment. These are common issues when applying neural network-based methods in production environments, but practical solutions are already being explored. On the computational side, commonly used methods include model pruning and quantization [44]: removing near-zero weights from the network and converting floating-point models into low-precision integer representations substantially reduce the computational load. On the storage side, low-rank decomposition of the weight matrices [45] and Huffman coding of model parameters [46] effectively address the limited storage resources available at deployment. Solving these practical deployment challenges will be the focus of our future research.
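The pruning and quantization steps mentioned above can be sketched in a few lines of NumPy (a minimal illustration, not the compression pipelines of [44,45,46]; the 50% sparsity target and symmetric int8 scheme are assumptions made for the sketch):

```python
import numpy as np

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization to int8; returns the integer tensor
    and the scale factor needed to dequantize."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)   # toy weight matrix
w_pruned = prune(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float32) * scale            # dequantized weights

print("zeros after pruning:", int(np.sum(w_pruned == 0)))
print("max dequantization error:", float(np.max(np.abs(w_restored - w_pruned))))
```

The dequantization error is bounded by half the quantization step (scale/2), which is what makes int8 inference viable for discriminator-style networks with well-conditioned weight distributions.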

Author Contributions

Conceptualization, F.Y., Q.T. and J.A.L.; methodology, F.Y., F.W. and Y.P.; software, F.Y. and Y.P.; validation, F.Y., S.Z. and Q.Z.; formal analysis, F.Y., Q.T. and X.X.; investigation, F.Y., J.M.F. and Y.W.; resources, Q.T.; data curation, F.Y., S.Z. and Q.Z.; writing—original draft preparation, F.Y.; writing—review and editing, F.Y., Q.T. and J.A.L.; visualization, Y.P. and Y.W.; supervision, X.X.; project administration, X.X.; funding acquisition, F.W., Q.T. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key R&D Program of China (2023YFB2905800), the National Natural Science Foundation of China (62021005), and the National Scholarship for International Students.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MFI     Modulation format identification
ML      Machine learning
I/Q     In-phase and quadrature
CGAN    Conditional generative adversarial network
OSNR    Optical signal-to-noise ratio
GAN     Generative adversarial network
AI      Artificial intelligence
AWG     Arbitrary waveform generator
DVOA    Digitally variable optical attenuator
EDFA    Erbium-doped fiber amplifier
DSP     Digital signal processing

References

  1. Yaqoob, I.; Hashem, I.A.T.; Mehmood, Y.; Gani, A.; Mokhtar, S.; Guizani, S. Enabling communication technologies for smart cities. IEEE Commun. Mag. 2017, 55, 112–120. [Google Scholar] [CrossRef]
  2. Marom, D.M.; Blau, M. Switching solutions for WDM-SDM optical networks. IEEE Commun. Mag. 2015, 53, 60–68. [Google Scholar] [CrossRef]
  3. Al-Gailani, S.A.; Salleh, M.F.M.; Salem, A.A.; Shaddad, R.Q.; Sheikh, U.U.; Algeelani, N.A.; Almohamad, T.A. A survey of free space optics (FSO) communication systems, links, and networks. IEEE Access 2020, 9, 7353–7373. [Google Scholar] [CrossRef]
  4. Wang, S.; Feng, X.; Gao, S.; Shi, Y.; Dai, T.; Yu, H.; Tsang, H.K.; Dai, D. On-chip reconfigurable optical add-drop multiplexer for hybrid wavelength/mode-division-multiplexing systems. Opt. Lett. 2017, 42, 2802–2805. [Google Scholar] [CrossRef] [PubMed]
  5. López, V.; Velasco, L. Elastic optical networks. In Architectures, Technologies, and Control; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  6. De Miguel, I.; Durán, R.J.; Jiménez, T.; Fernández, N.; Aguado, J.C.; Lorenzo, R.M.; Caballero, A.; Monroy, I.T.; Ye, Y.; Tymecki, A.; et al. Cognitive dynamic optical networks. J. Opt. Commun. Netw. 2013, 5, A107–A118. [Google Scholar] [CrossRef]
  7. O’Mahony, M.J.; Politi, C.; Klonidis, D.; Nejabati, R.; Simeonidou, D. Future optical networks. J. Light. Technol. 2006, 24, 4684–4696. [Google Scholar] [CrossRef]
  8. Nag, A.; Tornatore, M.; Mukherjee, B. Optical network design with mixed line rates and multiple modulation formats. J. Light. Technol. 2009, 28, 466–475. [Google Scholar] [CrossRef]
  9. Li, G. Recent advances in coherent optical communication. Adv. Opt. Photonics 2009, 1, 279–307. [Google Scholar] [CrossRef]
  10. Gao, Y.; Li, Z.; Guo, D.; Dong, Z.; Zhu, L.; Chang, H.; Zhou, S.; Wang, Y.; Tian, Q.; Tian, F.; et al. Unscented Kalman Filter with Joint Decision Scheme for Phase Estimation in Probabilistically Shaped QAM Systems. Electronics 2023, 12, 4075. [Google Scholar] [CrossRef]
  11. Xiang, M.; Zhuge, Q.; Qiu, M.; Zhou, X.; Tang, M.; Liu, D.; Fu, S.; Plant, D.V. RF-pilot aided modulation format identification for hitless coherent transceiver. Opt. Express 2017, 25, 463–471. [Google Scholar] [CrossRef]
  12. Xiang, M.; Zhuge, Q.; Qiu, M.; Zhou, X.; Zhang, F.; Tang, M.; Liu, D.; Fu, S.; Plant, D.V. Modulation format identification aided hitless flexible coherent transceiver. Opt. Express 2016, 24, 15642–15655. [Google Scholar] [CrossRef] [PubMed]
  13. Fu, S.; Xu, Z.; Lu, J.; Jiang, H.; Wu, Q.; Hu, Z.; Tang, M.; Liu, D.; Chan, C.C.K. Modulation format identification enabled by the digital frequency-offset loading technique for hitless coherent transceiver. Opt. Express 2018, 26, 7288–7296. [Google Scholar] [CrossRef] [PubMed]
  14. Zhu, Z.; Nandi, A.K. Automatic Modulation Classification: Principles, Algorithms and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  15. Hameed, F.; Dobre, O.A.; Popescu, D.C. On the likelihood-based approach to modulation classification. IEEE Trans. Wirel. Commun. 2009, 8, 5884–5892. [Google Scholar] [CrossRef]
  16. Zhang, J.; Gao, M.; Chen, W.; Ye, Y.; Ma, Y.; Yan, Y.; Ren, H.; Shen, G. Blind and noise-tolerant modulation format identification. IEEE Photonics Technol. Lett. 2018, 30, 1850–1853. [Google Scholar] [CrossRef]
  17. Jiang, L.; Yan, L.; Yi, A.; Pan, Y.; Hao, M.; Pan, W.; Luo, B. An effective modulation format identification based on intensity profile features for digital coherent receivers. J. Light. Technol. 2019, 37, 5067–5075. [Google Scholar] [CrossRef]
  18. Lu, J.; Tan, Z.; Lau, A.P.T.; Fu, S.; Tang, M.; Lu, C. Modulation format identification assisted by sparse-fast-Fourier-transform for hitless flexible coherent transceivers. Opt. Express 2019, 27, 7072–7086. [Google Scholar] [CrossRef]
  19. Zhao, Z.; Yang, A.; Guo, P.; Tang, W. A modulation format identification method based on amplitude deviation analysis of received optical communication signal. IEEE Photonics J. 2019, 11, 1–7. [Google Scholar] [CrossRef]
  20. Zhao, L.; Xu, H.; Bai, S.; Bai, C. Modulus mean square-based blind hybrid modulation format recognition for orthogonal frequency division multiplexing-based elastic optical networking. Opt. Commun. 2019, 445, 284–290. [Google Scholar] [CrossRef]
  21. Jiang, L.; Yan, L.; Yi, A.; Pan, Y.; Hao, M.; Pan, W.; Luo, B. Blind optical modulation format identification assisted by signal intensity fluctuation for autonomous digital coherent receivers. Opt. Express 2020, 28, 302–313. [Google Scholar] [CrossRef]
  22. Zhuge, Q.; Zeng, X.; Lun, H.; Cai, M.; Liu, X.; Yi, L.; Hu, W. Application of Machine Learning in Fiber Nonlinearity Modeling and Monitoring for Elastic Optical Networks. J. Light. Technol. 2019, 37, 3055–3063. [Google Scholar] [CrossRef]
  23. Zhao, Y.; Shi, C.; Wang, D.; Chen, X.; Wang, L.; Yang, T.; Du, J. Low-complexity and nonlinearity-tolerant modulation format identification using random forest. IEEE Photonics Technol. Lett. 2019, 31, 853–856. [Google Scholar] [CrossRef]
  24. Khan, F.N.; Zhong, K.; Al-Arashi, W.H.; Yu, C.; Lu, C.; Lau, A.P.T. Modulation format identification in coherent receivers using deep machine learning. IEEE Photonics Technol. Lett. 2016, 28, 1886–1889. [Google Scholar] [CrossRef]
  25. Zhang, J.; Chen, W.; Gao, M.; Ma, Y.; Zhao, Y.; Shen, G. Intelligent adaptive coherent optical receiver based on convolutional neural network and clustering algorithm. Opt. Express 2018, 26, 18684–18698. [Google Scholar] [CrossRef]
  26. Eltaieb, R.A.; Farghal, A.E.; HossamEl-din, H.A.; Saif, W.S.; Ragheb, A.; Alshebeili, S.A.; Shalaby, H.M.; Abd El-Samie, F.E. Efficient classification of optical modulation formats based on singular value decomposition and radon transformation. J. Light. Technol. 2020, 38, 619–631. [Google Scholar] [CrossRef]
  27. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. Available online: https://proceedings.neurips.cc/paper_files/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html (accessed on 22 September 2024).
  28. Salimans, T.; Karpathy, A.; Chen, X.; Kingma, D.P. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv 2017, arXiv:1701.05517. [Google Scholar]
  29. Frey, B.J.; Hinton, G.E.; Dayan, P. Does the wake-sleep algorithm produce good density estimators? Adv. Neural Inf. Process. Syst. 1995, 8. [Google Scholar]
  30. Gui, J.; Sun, Z.; Wen, Y.; Tao, D.; Ye, J. A review on generative adversarial networks: Algorithms, theory, and applications. IEEE Trans. Knowl. Data Eng. 2021, 35, 3313–3332. [Google Scholar] [CrossRef]
  31. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  32. Donahue, J.; Krahenbuhl, P.; Darrell, T. Adversarially learned inference. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017; pp. 1–18. [Google Scholar]
  33. Perarnau, G.; Van De Weijer, J.; Raducanu, B.; Álvarez, J.M. Invertible conditional GANs for image editing. arXiv 2016, arXiv:1611.06355. [Google Scholar]
  34. Saito, M.; Matsumoto, E.; Saito, S. Temporal generative adversarial nets with singular value clipping. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2830–2839. [Google Scholar]
  35. Sricharan, K.; Bala, R.; Shreve, M.; Ding, H.; Saketh, K.; Sun, J. Semi-supervised conditional GANs. arXiv 2017, arXiv:1708.05789. [Google Scholar]
  36. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  37. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  38. Zhu, X.; Liu, B.; Zhu, X.; Ren, J.; Mao, Y.; Han, S.; Chen, S.; Li, M.; Tian, F.; Guo, Z. Multiple Stokes Sectional Plane Image Based Modulation Format Recognition with a Generative Adversarial Network. Opt. Express 2021, 29, 31836–31848. [Google Scholar] [CrossRef]
  39. Shieh, W.; Djordjevic, I. OFDM for Optical Communications; Academic Press: Cambridge, MA, USA, 2009. [Google Scholar]
  40. Oerder, M.; Meyr, H. Digital filter and square timing recovery. IEEE Trans. Commun. 1988, 36, 605–612. [Google Scholar] [CrossRef]
  41. Tao, L.; Ji, Y.; Liu, J.; Lau, A.P.T.; Chi, N.; Lu, C. Advanced modulation formats for short reach optical communication systems. IEEE Netw. 2013, 27, 6–13. [Google Scholar] [CrossRef]
  42. Jiang, X.; Hao, M.; Yan, L.; Jiang, L.; Xiong, X. Blind and Low-complexity Modulation Format Identification Based on Signal Envelope Flatness for Autonomous Digital Coherent Receivers. Appl. Opt. 2022, 61, 5991–5997. [Google Scholar] [CrossRef]
  43. Jiang, J.; Zhang, Q.; Xin, X.; Gao, R.; Wang, X.; Tian, F.; Tian, Q.; Liu, B.; Wang, Y. Blind Modulation Format Identification Based on Principal Component Analysis and Singular Value Decomposition. Electronics 2022, 11, 612. [Google Scholar] [CrossRef]
  44. Liang, T.; Glossner, J.; Wang, L.; Shi, S.; Zhang, X. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing 2021, 461, 370–403. [Google Scholar] [CrossRef]
  45. Cai, G.; Li, J.; Liu, X.; Chen, Z.; Zhang, H. Learning and compressing: Low-rank matrix factorization for deep neural network compression. Appl. Sci. 2023, 13, 2704. [Google Scholar] [CrossRef]
  46. Pal, C.; Pankaj, S.; Akram, W.; Acharyya, A.; Biswas, D. Modified Huffman based compression methodology for deep neural network implementation on resource constrained mobile platforms. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
Figure 1. The general process for implementing ML-assisted MFI, where CDF: cumulative distribution function; AH: amplitude histogram; CD: constellation diagram; ED: eye diagram. A solid line represents the training process and a dotted line represents the recognition process.
Electronics 13 03833 g001
Figure 2. Block diagram of the discriminator of the conditional generative adversarial network. Red, yellow, and blue denote the three primary-color channels of the data fed through the three input channels of the neural network, and 128 × 128 is the pixel size of the data.
Electronics 13 03833 g002
Figure 3. Discriminator workflow, in which the input signal is scaled from the original size of 891 × 656 to a 128 × 128 square at the image center.
Electronics 13 03833 g003
Figure 4. The 1000-km coherent optical fiber transmission system, where AWG: arbitrary waveform generator; ECL: external cavity laser; DVOA: digitally variable optical attenuator; PBS: polarization beam splitter; PBC: polarization beam combiner; EDFA: erbium-doped fiber amplifier; ICR: integrated coherent receiver; DPO: digital phosphor oscilloscope. Blue represents electrical signals and green represents light signals.
Electronics 13 03833 g004
Figure 5. The 1000 km fiber transmission experimental platform.
Electronics 13 03833 g005
Figure 6. Constellation diagrams after each stage of general DSP processing. QPSK, 8PSK, 16QAM, 32QAM, and 64QAM examples, randomly selected from the training dataset, are shown from top to bottom. From left to right, the processing stages are orthogonal imbalance compensation, dispersion compensation, timing recovery, polarization demultiplexing, frequency offset estimation, and phase recovery.
Electronics 13 03833 g006
Figure 7. Recognition accuracy of each modulation format as training progresses.
Electronics 13 03833 g007
Figure 8. Performance comparison between this scheme and traditional machine learning schemes, where DT: decision tree; SVM: support vector machine; CGAN: conditional generative adversarial network.
Electronics 13 03833 g008
Figure 9. Recognition accuracy of various modulation formats under different OSNR indicators.
Electronics 13 03833 g009
Table 1. The optical fiber line composition.
Number   Optical Fiber Type   Length (km)   Attenuation (dB)
01       G.654E               101.04        16.98
02       G.654E               100.99        16.87
03       G.654E               101.26        16.86
04       G.654E               101.04        16.98
05       G.654E               100.95        17.69
06       G.654E               100.95        16.93
07       G.654E               101.08        17.03
08       G.654E               100.29        16.79
09       G.654E               101.08        17.18
10       G.654E               100.35        16.71
Table 2. Experimental data of OSNR for the QPSK modulation format.
OSNR (dB)
QPSK 32 G: 8.81, 9.47, 10.12, 10.75, 11.37, 12.03, 12.74, 13.34, 14.02, 14.6
QPSK 16 G: 8.94, 9.61, 10.23, 10.81, 11.51, 12.16, 12.84, 13.53, 14.17, 14.78, 15.41, 16.17, 16.75
QPSK 8 G: 8.89, 9.46, 10.14, 10.67, 11.25, 11.94, 12.61, 13.16, 13.80, 14.43, 15.06, 15.65, 16.37, 16.98, 17.67
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, F.; Tian, Q.; Xin, X.; Pan, Y.; Wang, F.; Lázaro, J.A.; Fàbrega, J.M.; Zhou, S.; Wang, Y.; Zhang, Q. Modulation Format Recognition Scheme Based on Discriminant Network in Coherent Optical Communication System. Electronics 2024, 13, 3833. https://doi.org/10.3390/electronics13193833

