Intelligent Detection Method for Satellite TT&C Signals under Restricted Conditions Based on TATR
Figure 1. The transmission scenario of the satellite TT&C signal downlink. While receiving downlink TT&C signals, ground receiving stations may also pick up intentional or unintentional interference and multipath clutter from ground-based stations. Furthermore, obstacles such as trees and buildings can cause the TT&C signals to be submerged in the ambient signal environment.
Figure 2. PCM-BPSK-PM spectrogram characteristics of satellite TT&C signals under different modulation sensitivities. (a) K_p = 0.1. (b) K_p = 1. The spectral characteristics of the inner modulation signal hinge on the phase modulation sensitivity: low sensitivity yields a spectrum dominated by a strong carrier component, while high sensitivity reveals the inner modulation signal, its carrier frequency, and its symbol rate.
Figure 3. Spectrogram of satellite downlink signal reception in a complex scenario containing diverse 3G/4G signals as well as bursty and frequent interference signals.
Figure 4. The overall network structure of TATR. The network consists of three parts: the ResTA backbone for spectral feature extraction; a neck with multilayer encoders and decoders; and a signal detection head with class loss and bounding-box loss. First, the 1D spectrum amplitude sequence is transformed into a 2D spectrogram and fed into the ResTA backbone. Next, the position embedding of the spectrogram is used jointly as input to the TATR encoder and decoder. Finally, the output of the TATR decoder is mapped through an FFN to obtain the positional coordinates and parameter information of the signal.
Figure 5. ResNet with triplet attention module (ResTA block). The structure splits the output of each ResNet block into three branches, which undergo rotation, attention calculation, and stacking to associate signal characteristics across the channel, amplitude, and frequency dimensions.
Figure 6. TATR encoder and decoder structure: encoder (left); decoder (right). The encoder output is used jointly with the query volume as input to the decoder.
Figure 7. Signal prediction box matching using the Hungarian algorithm. The features output by the TATR decoder are processed through an FFN to obtain predicted signal boxes, which are then matched against the padded ground-truth boxes using the Hungarian algorithm to obtain the minimum matching loss.
Figure 8. TATR effectiveness analysis: (a) test loss versus training epochs; (b) test AP/AR metrics versus training epochs.
Figure 9. TATR effectiveness analysis: (a) AP/AR metrics versus SNR; (b) AP/AR metrics versus the ratio of labeled samples.
Figure 10. Performance of TATR at different SNRs: (a) mAP@0.5, (b) mAP@0.5:0.95, (c) mAP@0.75, (d) AR. TR denotes the TATR model without the triplet attention (TA) module.
Figure 11. Feature visualization results of TATR under various SNRs: (a–f) show SNRs from 10 dB to −15 dB in 5 dB decrements. The red box indicates the true signal location, while the bright areas in the heat map show the key focus of the network after training. At high SNRs the network targets the signal peak; at low SNRs it shifts attention to sidelobes and the context envelope.
Figure 12. Comparison of signal detection results among different models under different SNRs: (a) ground truth; (b) Faster-RCNN; (c) YOLOv5; (d) TATR.
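The modulation-sensitivity effect described in the PCM-BPSK-PM caption above can be reproduced numerically. The sketch below is illustrative, not the paper's simulator: the carrier, subcarrier, and bit-rate values are assumptions chosen only to keep the FFT bins aligned. It builds a PCM-BPSK-PM signal (BPSK data on a subcarrier, phase-modulating the main carrier with sensitivity `kp`) and measures the fraction of spectral power left in the residual carrier line. At K_p = 0.1 almost all power stays in the carrier; at K_p = 1 a large share moves into the data sidebands, exposing the inner modulation.

```python
import numpy as np

def pcm_bpsk_pm(kp, fc=100e3, fsub=20e3, rb=5e3, fs=1e6, n_bits=64, seed=0):
    """PCM-BPSK-PM sketch: NRZ data * subcarrier, phase-modulated onto fc."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits) * 2 - 1       # NRZ symbols in {-1, +1}
    sps = int(fs / rb)                              # samples per bit
    d = np.repeat(bits, sps).astype(float)          # PCM baseband waveform
    t = np.arange(d.size) / fs
    inner = d * np.sin(2 * np.pi * fsub * t)        # BPSK on the subcarrier
    return np.cos(2 * np.pi * fc * t + kp * inner), t

def carrier_fraction(x, fs, fc):
    """Fraction of total spectral power within +-1 FFT bin of the carrier."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    k = np.argmin(np.abs(freqs - fc))
    return power[k - 1:k + 2].sum() / power.sum()
```

For small deviations the residual carrier amplitude follows the Bessel term J0(kp), so the carrier fraction drops from roughly 0.99 at kp = 0.1 to roughly 0.59 at kp = 1, which is why the low-sensitivity spectrum in panel (a) looks like a single strong carrier line.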
Abstract
1. Introduction
- To address signal detection challenges under restricted conditions with incomplete phase information, and in contrast to traditional detection methods that take 1D signal sequences as inputs, we transform the 1D sequences into 2D spectrogram images, providing a visual representation of the distinctions between the amplitude envelopes of telemetry signal spectra and background signals. This conversion turns a sequence-based frequency prediction problem into an image-based object detection problem.
- We design a ResTA backbone based on a residual structure and triplet attention, which correlates features across the channel, frequency, and amplitude dimensions of the spectrogram. ResTA enhances spectral feature extraction while introducing almost no additional parameters.
- We propose a novel signal detection model, TATR, based on ResTA and the Transformer. TATR combines the global attention capability of ResTA with the local self-attention mechanism of the Transformer to capture both global and local features of the spectrogram. Furthermore, it reduces the number of parameters in the Transformer model while adaptively selecting optimal spectral features of TT&C signals.
- Fixed anchor points set in advance are ill-suited to the dynamic electromagnetic environment of TT&C signals. We therefore employ bipartite graph matching in the detection phase, eliminating the need for anchor points based on prior knowledge. This converts signal detection into a set prediction problem, and the Hungarian algorithm is applied to achieve anchor-free signal detection.
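The bipartite matching step can be sketched with SciPy's Hungarian solver. This is a simplified illustration under the assumption that signals are 1D frequency intervals and the matching cost is negative IoU only; a full detection loss would also include classification and box-regression terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_1d(pred, gt):
    """Pairwise IoU between 1D frequency intervals given as [start, end] rows."""
    lo = np.maximum(pred[:, None, 0], gt[None, :, 0])
    hi = np.minimum(pred[:, None, 1], gt[None, :, 1])
    inter = np.clip(hi - lo, 0.0, None)
    union = (pred[:, 1] - pred[:, 0])[:, None] \
          + (gt[:, 1] - gt[:, 0])[None, :] - inter
    return inter / union

def match(pred, gt):
    """Pair predicted boxes with ground-truth boxes at minimum total cost."""
    cost = -iou_1d(pred, gt)                  # Hungarian solver minimizes cost
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols)]
```

For example, with predictions at 2200–2205 MHz and 2300–2320 MHz and ground truth at 2299–2321 MHz and 2201–2204 MHz, `match` pairs each prediction with its overlapping ground-truth interval, with no pre-set anchors involved.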
2. Related Works
2.1. Traditional Signal Detection
2.2. Deep-Learning-Based Signal Detection
3. Problem Definition
4. Signal Model
5. Methods
5.1. ResTA Spectrum Feature Extraction Network
5.2. TATR Encoder and Decoder
5.3. Signal Detection
6. Experiment and Results
6.1. Sat_SD2023 Dataset
6.2. Evaluation and Indicators
6.3. Experimental Results and Analysis
6.3.1. Validity Analysis and Ablation Experiments
6.3.2. Visualization of Attention Positions
6.3.3. Comparative Experiment
7. Discussion
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Frequency Band (MHz) | Operator | Format | Generation
---|---|---|---
825–840/869–885 | China Telecom | CDMA | 2G/4G
890–909/935–954 | China Mobile | GSM900 | 2G/4G
909–915/954–960 | China Unicom | GSM900 | 2G
1710–1725/1805–1820 | China Mobile | DCS1800 | 2G
1745–1755/1840–1850 | China Unicom | DCS1800 | 2G
1755–1765/1850–1860 | China Unicom | FDD-LTE | 4G
1765–1780/1860–1875 | China Telecom | FDD-LTE | 4G
1885–1905 | China Mobile | TD-LTE | 4G
1920–1935/2110–2125 | China Telecom | CDMA2000/LTE-FDD | 3G/4G/5G
1940–1955/2130–2145 | China Unicom | WCDMA/LTE-FDD | 3G/4G/5G
2010–2025 | China Mobile | TD-SCDMA/TD-LTE | 3G/4G
2300–2320 | China Unicom | TD-LTE | 4G
2320–2370 | China Mobile | TD-LTE | 4G
2370–2390 | China Telecom | TD-LTE | 4G
2555–2575 | China Unicom | TD-LTE | 4G
2575–2635 | China Mobile | TD-LTE | 4G
2635–2655 | China Telecom | TD-LTE | 4G
TT&C Signal Simulation Parameters

Parameter | Value
---|---
Signal modulation type | BPSK, QPSK, PCM-BPSK-PM, PCM-QPSK-PM
Signal center frequency (MHz) | 2200–2240
Bandwidth (MHz) | 3–30
Channel | Gaussian noise and Rayleigh fading
SNR (dB) | −15 to 15
Number of satellite telemetry signals | 0–1

Background Signal Simulation Parameters

Parameter | Value
---|---
Spectrum size | 3500 × 2625
Frequency range (MHz) | 2100–2400
Background environmental signal | 3G/4G signals, etc.

3G signal type | WCDMA | CDMA2000 | TD-SCDMA
---|---|---|---
Chip rate (Mchip/s) | 3.8 | 3.68 | 1.28
Frame length (ms) | 10 | 25 | 10
Time slots | 15 | 15 | 15
Data modulation | QPSK | QPSK | QPSK
Channel width (MHz) | 5 | 5 | 1.6

4G signal | TD-LTE
---|---
Carrier bandwidth (MHz) | 20
Subcarrier bandwidth (kHz) | 15
Frame length (ms) | 10
Time slot configuration | 10:2:2
Data modulation | OFDM

Neural Network Hyperparameters

Parameter | Value
---|---
Training epochs | 200
Batch size | 200
Initial learning rate |
Learning rate adjustment strategy | StepLR
Optimizer | Adam
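The 1D-to-2D conversion that produces spectrogram images from a spectrum amplitude sequence can be sketched as a short-time Fourier transform: frame the sequence, window each frame, and stack log-magnitude spectra into a 2D array. The frame and hop sizes below are illustrative assumptions, not the settings used to generate the 3500 × 2625 spectrograms in the dataset.

```python
import numpy as np

def to_spectrogram(x, n_fft=256, hop=128):
    """Frame a 1D sequence, window it, and stack log-magnitude FFTs
    into a 2D image of shape (n_fft // 2 + 1, n_frames)."""
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, x.size - n_fft + 1, hop):
        seg = x[start:start + n_fft] * win
        frames.append(np.abs(np.fft.rfft(seg)))
    spec = np.stack(frames, axis=1)        # frequency x time
    return 20 * np.log10(spec + 1e-12)     # dB scale, floored to stay finite
```

The resulting image can then be resized and fed to the backbone like any other 2D detection input.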
Ground Truth \ Predicted Value | Positive | Negative
---|---|---
Positive | True positive (TP) | False negative (FN)
Negative | False positive (FP) | True negative (TN)
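From the confusion-matrix entries above, precision and recall follow directly. This minimal helper covers only the single-threshold case; the AP/AR values reported in the experiments are aggregates over IoU thresholds in the COCO style, which this sketch does not reproduce.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    Zero denominators are mapped to 0.0 to avoid division errors."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, 8 true positives with 2 false positives and 2 false negatives gives precision 0.8 and recall 0.8.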
Model | TA Block | mAP@0.5 (%) | mAP@0.5:0.95 (%) | mAP@0.75 (%) | AR (%) | Parameters (M) | FPS
---|---|---|---|---|---|---|---|
TR-2Layer | ✘ | 27.93 | 19.05 | 11.24 | 53.57 | 29.70 | 39 |
TATR-2Layer | ✔ | 35.84 | 26.86 | 20.04 | 62.45 | 29.70 | 39 |
TR-4Layer | ✘ | 92.21 | 47.37 | 44.98 | 73.62 | 35.49 | 30 |
TATR-4Layer | ✔ | 95.78 | 56.82 | 50.21 | 79.67 | 35.49 | 30 |
TR-6Layer | ✘ | 95.81 | 56.45 | 51.23 | 79.81 | 41.28 | 19 |
TATR-6Layer | ✔ | 96.84 | 60.21 | 53.23 | 82.23 | 41.28 | 19 |
Model | mAP@0.5 (%) | mAP@0.5:0.95 (%) | mAP@0.75 (%) | AR (%) | (%) | Parameters (M) | FPS
---|---|---|---|---|---|---|---
Faster-RCNN (MV2) | | | | | 0.049 | 82.3 | 17
Faster-RCNN (R50) | | | | | 0.054 | 41.7 | 20
YOLOv5s | | | | | 0.044 | 7.2 | 53
YOLOv5m | | | | | 0.041 | 21.2 | 43
YOLOv5l | | | | | 0.034 | 46.5 | 37
DETR | | | | | 0.032 | 41.3 | 18
YOLOv7 | | | | | 0.046 | 36.5 | 14
YOLOv7w | | | | | 0.035 | 69.8 | 8
YOLOv7x | | | | | 0.032 | 70.8 | 7
TATR-4layer | | | | | 0.031 | 35.5 | 30
TATR-6layer | 96.84 | 60.21 | 53.23 | 82.23 | 0.029 | 41.2 | 19
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Y.; Shi, X.; Wang, X.; Lu, Y.; Cheng, P.; Zhou, F. Intelligent Detection Method for Satellite TT&C Signals under Restricted Conditions Based on TATR. Remote Sens. 2024, 16, 1008. https://doi.org/10.3390/rs16061008