An Efficient Pose Estimation Algorithm for Non-Cooperative Space Objects Based on Dual-Channel Transformer
Figure 1. Diagram of the relationship between the sensor and the satellite to be measured.
Figure 2. Dual-channel transformer model.
Figure 3. EfficientNet feature extraction backbone network.
Figure 4. Schematic diagram of the transformer structure.
Figure 5. Structure of the SA module (a) and the MHA module (b).
Figure 6. Inference speed/params heat maps for different numbers of transformer components.
Figure 7. Orientation accuracy for different activation functions.
Figure 8. Pose errors distributed by object distance.
Abstract
1. Introduction
- Satellite localization network (SLN): an object detection network is trained as a satellite detector to localize the satellite precisely in the image.
- Landmark regression network (LRN): the satellite region detected by the SLN is fed into the landmark regression network, which is trained to regress predefined landmarks.
- Pose solver: the satellite pose is recovered from the detected landmarks with a PnP solver (a minimal sketch of this step follows the list).
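For context, the PnP step at the end of such a pipeline can be sketched as follows. This is a minimal illustration assuming OpenCV's `solvePnP` with the EPnP solver and the camera parameters listed in Section 3.1; the 2D/3D point values and variable names are placeholders, not taken from the paper.

```python
import cv2
import numpy as np

# Hypothetical inputs: 2D landmarks predicted by the LRN (pixels) and the
# corresponding known 3D keypoints on the satellite model (meters).
points_2d = np.array([[960.0, 600.0], [1010.5, 640.2], [900.3, 580.7],
                      [955.1, 700.4], [870.9, 650.0], [1040.2, 585.3]])
points_3d = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0],
                      [0.0, 0.0, 0.3], [0.3, 0.3, 0.0], [0.3, 0.0, 0.3]])

# Pinhole intrinsics built from the camera table in Section 3.1
# (17.6 mm focal length, 5.86e-3 mm pixel pitch, 1920 x 1200 image).
fx = fy = 17.6 / 5.86e-3                       # ~3003.4 px
K = np.array([[fx, 0.0, 960.0],
              [0.0, fy, 600.0],
              [0.0, 0.0, 1.0]])

# Recover the target's rotation (Rodrigues vector) and translation in the
# camera frame from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)                      # 3x3 rotation matrix
print(ok, tvec.ravel())
```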
- An end-to-end non-cooperative space object pose estimation network is proposed: images captured by a monocular sensor are fed into the model, which directly outputs the pose of the non-cooperative space object, simplifying the pose estimation process.
- A dual-channel transformer pose estimation network is designed, innovatively applying the transformer to the end-to-end satellite pose estimation task. The dual-channel design decouples the satellite's spatial translation information from its orientation information (a schematic sketch follows this list).
- A new quaternion SoftMax-like activation function is designed so that the model output satisfies the quaternion (unit-norm) constraint, effectively improving the precision of orientation prediction.
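To make the decoupling idea and the quaternion constraint concrete, the sketch below shows a schematic two-channel output stage under assumed names and dimensions: simple MLP branches and plain L2 normalization. It is not the paper's dual-channel transformer or its SoftMax-like activation (those are defined in Sections 2.2 and 2.3); it only illustrates separating the translation and orientation heads and forcing the orientation output onto the unit-quaternion manifold.

```python
import torch
import torch.nn as nn

class DualChannelPoseHead(nn.Module):
    """Schematic two-channel output stage: translation and orientation are
    predicted by separate branches, so the two sub-tasks are decoupled.

    Placeholder only: in the paper each channel is a transformer stack and the
    orientation output passes through a quaternion SoftMax-like activation;
    here the branches are plain MLPs and the constraint is L2 normalization.
    """
    def __init__(self, feat_dim: int = 1280, hidden: int = 256):
        super().__init__()
        self.trans_branch = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, 3))   # (x, y, z) in meters
        self.orient_branch = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                           nn.Linear(hidden, 4))  # raw quaternion logits

    def forward(self, feats: torch.Tensor):
        t = self.trans_branch(feats)                               # (B, 3) translation
        q = self.orient_branch(feats)                              # (B, 4) raw output
        q = q / (q.norm(dim=-1, keepdim=True) + 1e-8)              # enforce ||q|| = 1
        return t, q

# Usage with pooled backbone features of dimension 1280 (cf. the EfficientNet table):
t, q = DualChannelPoseHead()(torch.randn(2, 1280))
```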
2. Dual-Channel Transformer Model
2.1. EfficientNet Backbone Network
2.2. Transformer Model Architecture
- (1) PE: The main function of positional encoding is to retain the spatial position information of the input image patches. The encoding is added to the patch features before they enter the encoder (the standard sinusoidal form is given after this list).
- (2) SA: Self-attention is a core component of the transformer. It mimics how biological systems observe targets, focusing attention on key regions through a mathematical mechanism and extracting their features. Its advantages lie in long-range dependency modeling, improved attention to key local regions, and parallel computation. As shown in Figure 5a, self-attention is implemented mainly as scaled dot-product attention (the standard form is given after this list).
- (3) MHA: MHA establishes different projections of the input in multiple projection spaces. As shown in Figure 5b, the input matrix is projected in several ways, SA is executed in parallel on each projection, and the outputs are concatenated.
- (4) FFN: FFN maps features to a high-dimensional space and then back to a low-dimensional space. Mapping to the high-dimensional space combines features of various types and improves the discriminative ability of the model, while the subsequent dimensionality reduction removes features with low discriminative power (the standard form is given after this list).
- (5) Add & Norm: Add & Norm consists of a residual connection and an LN block. The residual connection allows deeper networks to be trained effectively and helps prevent vanishing and exploding gradients. LN stabilizes the feature distribution and thus speeds up model convergence (the standard form is given after this list).
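The equations referred to in items (1)-(5) are not reproduced in this extract. For reference, the standard transformer formulations that the descriptions above correspond to are given below; the paper's exact notation may differ.

```latex
% (1) Sinusoidal positional encoding for position p and dimension index i
PE(p, 2i)   = \sin\!\bigl(p / 10000^{2i/d_{\mathrm{model}}}\bigr), \qquad
PE(p, 2i+1) = \cos\!\bigl(p / 10000^{2i/d_{\mathrm{model}}}\bigr)

% (2) Scaled dot-product self-attention
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

% (3) Multi-head attention
\mathrm{MHA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O},
\quad \mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V})

% (4) Position-wise feed-forward network
\mathrm{FFN}(x) = \max(0,\, x W_1 + b_1)\, W_2 + b_2

% (5) Residual connection followed by layer normalization
y = \mathrm{LN}\bigl(x + \mathrm{Sublayer}(x)\bigr)
```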
2.3. Quaternion SoftMax-like Activation Function
2.4. Joint Pose Loss Function
3. Experimental Results and Analysis
3.1. SPEED Datasets Analysis
3.2. Evaluation Metrics
3.3. Experimental Analysis
3.3.1. Impact of the Number of Transformer Components in the Model
- A. Relationship between the number of transformer components and model accuracy
- B. Relationship between the number of transformer components and model inference speed/params
3.3.2. Effect of Activation Function
3.3.3. Backbone Network
3.3.4. Effects of Decoupling Position and Orientation
3.3.5. Experimental Analysis and Comparison
3.3.6. State-of-the-Art Comparison
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
PnP | Perspective-n-Points |
ESA | European Space Agency |
ADR | Active Debris Removal |
OOS | On-Orbit Servicing |
LIDAR | Light Detection and Ranging |
SPEC2019 | Satellite Pose Estimation Challenge 2019 |
SPEED | Spacecraft Pose Estimation Dataset |
TOF | Time of Flight |
SLN | Satellite Localization Network |
LRN | Landmark Regression Network |
MBConv | Mobile Inverted Bottleneck Convolution |
SENet | Squeeze-and-Excitation Network |
PE | Positional Encoding |
SA | Self-Attention |
MHA | Multi-Head Attention |
FFN | Feed-Forward Network |
LN | Layer Normalization |
Add & Norm | Residual Connection and Layer Normalization Blocks |
References
- Peng, J.; Xu, W.; Liang, B.; Wu, A.G. Pose Measurement and Motion Estimation of Space Non-Cooperative Targets Based on Laser Radar and Stereo-Vision Fusion. IEEE Sens. J. 2019, 19, 3008–3019. [Google Scholar] [CrossRef]
- Pasqualetto Cassinis, L.; Fonod, R.; Gill, E. Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft. Prog. Aerosp. Sci. 2019, 110, 100548. [Google Scholar] [CrossRef]
- Xu, W.; Yan, L.; Hu, Z.; Liang, B. Area-oriented coordinated trajectory planning of dual-arm space robot for capturing a tumbling target. Chin. J. Aeronaut. 2019, 32, 2151–2163. [Google Scholar] [CrossRef]
- Fu, X.; Ai, H.; Chen, L. Repetitive Learning Sliding Mode Stabilization Control for a Flexible-Base, Flexible-Link and Flexible-Joint Space Robot Capturing a Satellite. Appl. Sci. 2021, 11, 8077. [Google Scholar] [CrossRef]
- Regoli, L.; Ravandoor, K.; Schmidt, M.; Schilling, K. On-line robust pose estimation for Rendezvous and Docking in space using photonic mixer devices. Acta Astronaut. 2014, 96, 159–165. [Google Scholar] [CrossRef]
- Garcia, A.; Musallam, M.A.; Gaudilliere, V.; Ghorbel, E.; Ismaeil, K.A.; Perez, M.; Aouada, D. LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 2048–2056. [Google Scholar]
- Assadzadeh, A.; Arashpour, M.; Li, H.; Hosseini, R.; Elghaish, F.; Baduge, S. Excavator 3D pose estimation using deep learning and hybrid datasets. Adv. Eng. Inform. 2023, 55, 101875. [Google Scholar] [CrossRef]
- Capuano, V.; Kim, K.; Harvard, A.; Chung, S.-J. Monocular-based pose determination of uncooperative space objects. Acta Astronaut. 2020, 166, 493–506. [Google Scholar] [CrossRef]
- Park, T.H.; Märtens, M.; Lecuyer, G.; Izzo, D.; Amico, S.D. SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across Domain Gap. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; pp. 1–15. [Google Scholar]
- Proença, P.F.; Gao, Y. Deep Learning for Spacecraft Pose Estimation from Photorealistic Rendering. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6007–6013. [Google Scholar]
- Bechini, M.; Lavagna, M.; Lunghi, P. Dataset generation and validation for spacecraft pose estimation via monocular images processing. Acta Astronaut. 2023, 204, 358–369. [Google Scholar] [CrossRef]
- Dung, H.A.; Chen, B.; Chin, T.J. A Spacecraft Dataset for Detection, Segmentation and Parts Recognition. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 2012–2019. [Google Scholar]
- Kisantal, M.; Sharma, S.; Park, T.H.; Izzo, D.; Märtens, M.; D’Amico, S. Satellite Pose Estimation Challenge: Dataset, Competition Design, and Results. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4083–4098. [Google Scholar] [CrossRef]
- Liu, Y.; Namiki, A. Articulated Object Tracking by High-Speed Monocular RGB Camera. IEEE Sens. J. 2021, 21, 11899–11915. [Google Scholar] [CrossRef]
- Zheng, T.; Yao, Y.; He, F.; Zhang, X. A cooperative detection method for tracking a non-cooperative space target. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; pp. 4236–4241. [Google Scholar]
- Gómez Martínez, H.; Giorgi, G.; Eissfeller, B. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras. Acta Astronaut. 2017, 139, 165–175. [Google Scholar] [CrossRef]
- Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. Uncooperative pose estimation with a LIDAR-based system. Acta Astronaut. 2015, 110, 287–297. [Google Scholar] [CrossRef]
- Aghili, F.; Kuryllo, M.; Okouneva, G.; English, C. Fault-Tolerant Position/Attitude Estimation of Free-Floating Space Objects Using a Laser Range Sensor. IEEE Sens. J. 2011, 11, 176–185. [Google Scholar] [CrossRef]
- Santavas, N.; Kansizoglou, I.; Bampis, L.; Karakasis, E.; Gasteratos, A. Attention! A Lightweight 2D Hand Pose Estimation Approach. IEEE Sens. J. 2021, 21, 11488–11496. [Google Scholar] [CrossRef]
- Zhuang, S.; Zhao, Z.; Cao, L.; Wang, D.; Fu, C.; Du, K. A Robust and Fast Method to the Perspective-n-Point Problem for Camera Pose Estimation. IEEE Sens. J. 2023, 23, 11892–11906. [Google Scholar] [CrossRef]
- Rahmaniar, W.; Haq, Q.M.U.; Lin, T.L. Wide Range Head Pose Estimation Using a Single RGB Camera for Intelligent Surveillance. IEEE Sens. J. 2022, 22, 11112–11121. [Google Scholar] [CrossRef]
- D’Amico, S.; Benn, M.; Jørgensen, J.L. Pose estimation of an uncooperative spacecraft from actual space imagery. Int. J. Space Sci. Eng. 2014, 2, 174. [Google Scholar]
- Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72. [Google Scholar] [CrossRef]
- Zhang, S.; Hu, W.; Guo, W. 6-DoF Pose Estimation of Uncooperative Space Object Using Deep Learning with Point Cloud. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; pp. 1–7. [Google Scholar]
- Zhang, H.; Jiang, Z. Multi-view space object recognition and pose estimation based on kernel regression. Chin. J. Aeronaut. 2014, 27, 1233–1241. [Google Scholar] [CrossRef]
- Chen, B.; Cao, J.; Parra, A.; Chin, T.J. Satellite Pose Estimation with Deep Landmark Regression and Nonlinear Pose Refinement. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 2816–2824. [Google Scholar]
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep High-Resolution Representation Learning for Human Pose Estimation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5686–5696. [Google Scholar]
- Wang, Z.; Sun, X.L.; Li, Z.; Cheng, Z.L.; Yu, Q.F. Transformer based monocular satellite pose estimation. Acta Aeronaut. Astronaut. Sin. 2022, 43, 325298. [Google Scholar]
- Piazza, M.; Maestrini, M.; Di Lizia, P. Monocular Relative Pose Estimation Pipeline for Uncooperative Resident Space Objects. J. Aerosp. Inf. Syst. 2022, 19, 613–632. [Google Scholar] [CrossRef]
- Xiang, Y.; Schmidt, T.; Narayanan, V.; Fox, D. PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes. In Proceedings of the Robotics: Science and Systems XIV, Pittsburgh, PA, USA, 26–30 June 2018; p. 19. [Google Scholar]
- Lin, H.Y.; Liang, S.C.; Chen, Y.K. Robotic Grasping With Multi-View Image Acquisition and Model-Based Pose Estimation. IEEE Sens. J. 2021, 21, 11870–11878. [Google Scholar] [CrossRef]
- Wang, C.; Xu, D.; Zhu, Y.; Martín-Martín, R.; Lu, C.; Fei-Fei, L.; Savarese, S. Densefusion: 6d object pose estimation by iterative dense fusion. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3343–3352. [Google Scholar]
- Meng, Z.; Cao, W.; Sun, D.; Li, Q.; Ma, W.; Fan, F. Research on fault diagnosis method of MS-CNN rolling bearing based on local central moment discrepancy. Adv. Eng. Inform. 2022, 54, 101797. [Google Scholar] [CrossRef]
- Ruan, D.; Wang, J.; Yan, J.; Gühmann, C. CNN parameter design based on fault signal analysis and its application in bearing fault diagnosis. Adv. Eng. Inform. 2023, 55, 101877. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 9992–10002. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar] [CrossRef]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; Volume 37, pp. 448–456. [Google Scholar]
- Kendall, A.; Cipolla, R. Geometric Loss Functions for Camera Pose Regression with Deep Learning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6555–6564. [Google Scholar]
- Kisantal, M.; Sharma, S.; Park, T.H.; Izzo, D.; Märtens, M.; D’Amico, S. Spacecraft Pose Estimation Dataset (SPEED). Available online: https://explore.openaire.eu/search/dataset?pid=10.5281%2Fzenodo.6327547 (accessed on 13 September 2023).
- Sharma, S.; D’Amico, S. Neural Network-Based Pose Estimation for Noncooperative Spacecraft Rendezvous. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4638–4658. [Google Scholar] [CrossRef]
- Park, T.H.; Sharma, S.; D’Amico, S. Towards Robust Learning-Based Pose Estimation of Noncooperative Spacecraft. arXiv 2019, arXiv:1909.00392. [Google Scholar] [CrossRef]
Stage | Operator | Resolution | Channels | Layers |
---|---|---|---|---|
1 | Conv3 × 3, stride = 2 | 224 × 224 | 32 | 1 |
2 | MBConv1, k3 × 3, stride = 1 | 112 × 112 | 16 | 1 |
3 | MBConv6, k3 × 3, stride = 2 | 112 × 112 | 24 | 2 |
4 | MBConv6, k5 × 5, stride = 2 | 56 × 56 | 40 | 2 |
5 | MBConv6, k3 × 3, stride = 2 | 28 × 28 | 80 | 3 |
6 | MBConv6, k5 × 5, stride = 1 | 14 × 14 | 112 | 3 |
7 | MBConv6, k5 × 5, stride = 2 | 14 × 14 | 192 | 4 |
8 | MBConv6, k3 × 3, stride = 1 | 7 × 7 | 320 | 1 |
9 | Conv1 × 1 & Pooling & FC | 7 × 7 | 1280 | 1 |
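For illustration, the backbone in this table can be instantiated as a feature extractor in a few lines. This is a minimal sketch assuming torchvision ≥ 0.13 and ImageNet-pretrained weights; the authors' exact implementation is not shown in this extract.

```python
import torch
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

# Keep only the convolutional stages (stages 1-8 plus the final 1x1 conv),
# dropping the pooling/FC classification head of stage 9.
backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
feature_extractor = backbone.features

x = torch.randn(1, 3, 224, 224)      # one 224 x 224 RGB image
features = feature_extractor(x)      # shape: (1, 1280, 7, 7)
print(features.shape)
```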
Parameter | Value |
---|---|
Horizontal focal length | 17.6 mm | |
Vertical focal length | 17.6 mm | |
Number of horizontal pixels | 1920 | |
Number of vertical pixels | 1200 | |
Horizontal pixel length | 5.86 × 10⁻³ mm |
Vertical pixel length | 5.86 × 10⁻³ mm |
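From these values, the focal lengths in pixels follow directly. A worked example, assuming a standard pinhole model with the principal point at the image center (an assumption; the table does not state it):

```latex
f_x = f_y = \frac{17.6\ \mathrm{mm}}{5.86 \times 10^{-3}\ \mathrm{mm/px}} \approx 3003.4\ \mathrm{px},
\qquad
K \approx \begin{bmatrix} 3003.4 & 0 & 960 \\ 0 & 3003.4 & 600 \\ 0 & 0 & 1 \end{bmatrix}
```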
(In each M row, the upper line gives the translation error in m and the lower line the orientation error in deg.)

M \ N | N = 3 | N = 4 | N = 5 | N = 6 | N = 7 | N = 8 | N = 9 | Mean |
---|---|---|---|---|---|---|---|---|
M = 3 | 0.04377 | 0.04294 | 0.04239 | 0.04223 | 0.04165 | 0.04159 | 0.04176 | 0.04233 | |
1.66940 | 1.64043 | 1.67587 | 1.65353 | 1.64385 | 1.64079 | 1.66543 | 1.6556 | ||
M = 4 | 0.04288 | 0.04368 | 0.04249 | 0.04196 | 0.04140 | 0.04148 | 0.04150 | 0.04220 | |
1.61775 | 1.63859 | 1.59067 | 1.61680 | 1.60677 | 1.59865 | 1.58035 | 1.6071 | ||
M = 5 | 0.04430 | 0.04366 | 0.04199 | 0.04133 | 0.04194 | 0.04130 | 0.04146 | 0.04228 | |
1.54057 | 1.54870 | 1.56177 | 1.52849 | 1.54673 | 1.53628 | 1.53448 | 1.5424 | ||
M = 6 | 0.04419 | 0.04258 | 0.04241 | 0.04205 | 0.04133 | 0.04131 | 0.04141 | 0.04218 | |
1.53358 | 1.56409 | 1.53987 | 1.52393 | 1.50766 | 1.49927 | 1.54952 | 1.5311 | ||
M = 7 | 0.04402 | 0.04311 | 0.04271 | 0.04137 | 0.04119 | 0.04079 | 0.04177 | 0.04214 | |
1.49103 | 1.52963 | 1.49738 | 1.50509 | 1.48586 | 1.47877 | 1.59088 | 1.5112 | ||
M = 8 | 0.04435 | 0.04262 | 0.04225 | 0.04179 | 0.04169 | 0.04126 | 0.04145 | 0.04220 | |
1.47614 | 1.41732 | 1.48848 | 1.41565 | 1.48948 | 1.46815 | 1.45286 | 1.4583 | ||
M = 9 | 0.04368 | 0.04252 | 0.04211 | 0.04156 | 0.04185 | 0.04131 | 0.04118 | 0.04203 | |
1.38198 | 1.47518 | 1.42426 | 1.38159 | 1.36928 | 1.41179 | 1.43471 | 1.4113 | ||
Mean | 0.04388 | 0.04302 | 0.04234 | 0.04176 | 0.04158 | 0.04129 | 0.04150 | ||
1.5301 | 1.5448 | 1.5397 | 1.5178 | 1.5214 | 1.5191 | 1.5440 |
(In each M row, the upper line gives the inference time in ms and the lower line the model parameters.)

M \ N | N = 3 | N = 4 | N = 5 | N = 6 | N = 7 | N = 8 | N = 9 |
---|---|---|---|---|---|---|---|
M = 3 | 31.81 | 32.49 | 34.78 | 36.78 | 36.83 | 38.10 | 38.91 | |
122 | 138 | 154 | 170 | 186 | 202 | 218 | ||
M = 4 | 31.69 | 35.21 | 36.30 | 36.70 | 39.71 | 41.20 | 43.00 | |
138 | 154 | 170 | 186 | 202 | 218 | 234 | ||
M = 5 | 33.45 | 35.45 | 36.95 | 38.63 | 40.55 | 42.13 | 43.48 | |
154 | 170 | 186 | 202 | 218 | 234 | 250 | ||
M = 6 | 34.84 | 36.41 | 39.27 | 39.90 | 41.00 | 44.91 | 44.85 | |
170 | 186 | 202 | 218 | 234 | 250 | 266 | ||
M = 7 | 35.36 | 39.38 | 39.18 | 41.95 | 42.98 | 45.15 | 46.63 | |
186 | 202 | 218 | 234 | 250 | 266 | 282 | ||
M = 8 | 37.99 | 40.22 | 41.24 | 43.21 | 43.26 | 46.21 | 47.68 | |
202 | 218 | 234 | 250 | 266 | 282 | 298 | ||
M = 9 | 38.91 | 40.58 | 42.89 | 44.33 | 46.08 | 47.61 | 47.97 | |
218 | 234 | 250 | 266 | 282 | 298 | 314 |
Backbone | Translation Error (m) | Orientation Error (deg) |
---|---|---|
Resnet50 | 0.06325 | 2.53624 |
EfficientNetB0 | 0.04205 | 1.52393 |
EfficientNetB1 | 0.04853 | 1.50589 |
Model | Translation Error (m) | Orientation Error (deg) |
---|---|---|
Dual-T1 | 0.04430 | 1.54057 |
Dual-T2 | 0.04239 | 1.67587 |
Dual-T3 | 0.04288 | 1.61775 |
Single-T | 1.1948 | 2.1788 |
Method | Estimating Process |
---|---|
TfNet [28] | SLN + LRN + PnP solver |
SPN [42] | SLN + End-to-end learning |
LSPnet [6] | End-to-end learning |
Ours | End-to-end learning |
Metric | A | B | C | D | E | F | Mean | LSPnet [6] | SPN [42] | TfNet [28] |
---|---|---|---|---|---|---|---|---|---|---|
Mean (deg) | 1.564 | 1.597 | 1.586 | 1.596 | 1.542 | 1.550 | 1.573 | 15.70 | 8.425 | 0.969 |
Median (deg) | 1.499 | 1.497 | 1.516 | 1.542 | 1.449 | 1.460 | 1.494 | - | 7.069 | 0.801 |
Mean (m) | 0.042 | 0.044 | 0.043 | 0.045 | 0.044 | 0.043 | 0.044 | - | - | 0.006 |
Median (m) | 0.036 | 0.035 | 0.037 | 0.037 | 0.037 | 0.037 | 0.037 | - | - | 0.005 |
Mean (m) | 0.501 | 0.538 | 0.515 | 0.528 | 0.522 | 0.524 | 0.521 | 0.519 | 0.294 | - |
Median (m) | 0.309 | 0.321 | 0.312 | 0.319 | 0.304 | 0.323 | 0.315 | - | 0.180 | - |
Inference time (ms) | 36.82 | 37.46 | 36.64 | 36.89 | 37.25 | 37.98 | 37.17 | - | - | 212.1 |
Team | Real Image Score | Best Score | PnP |
---|---|---|---|
UniAdelaide [26] | 0.3634 | 0.0086 | Yes |
EPFL_cvlab | 0.1040 | 0.0205 | Yes |
pedro_fairspace [10] | 0.1476 | 0.0555 | No |
Ours | 0.1650 | 0.0600 | No |
stanford_slab [43] | 0.3221 | 0.0611 | Yes |
Team_Platypus | 1.7118 | 0.0675 | No |
motokimura1 | 0.5714 | 0.0734 | No |
Magpies | 1.2401 | 0.1429 | No |
GabrielA | 2.3943 | 0.2367 | No |
stainsby | 4.8056 | 0.3623 | No |
VSI_Feeney | 1.5749 | 0.4629 | No |