Deep Learning-Based Docking Scheme for Autonomous Underwater Vehicles with an Omnidirectional Rotating Optical Beacon
Figure 1. Framework of the underwater omnidirectional rotating optical beacon docking system.
Figure 2. Schematic of the underwater omnidirectional rotating optical beacon docking system.
Figure 3. Structural diagram of the underwater omnidirectional rotating optical beacon.
Figure 4. Underwater light source selection. (a) 10 W, 60°; (b) 30 W, 60°; (c) 30 W, 10°.
Figure 5. Annotation information of the underwater optical beacon dataset. (a) Normalized positions of the bounding boxes; (b) normalized sizes of the bounding boxes. Both panels are histograms with 50 bins per dimension, with darker colours indicating more instances.
Figure 6. Improved network architecture of YOLOv8-pose.
Figure 7. Structure of RFAConv.
Figure 8. Example of redundant bounding boxes.
Figure 9. Detection results of different methods. Each row from top to bottom corresponds to scenario 1, scenario 2, and scenario 3, respectively. (a) Ours; (b) YOLOv8n-pose; (c) YOLOv8n with centroid; (d) Tradition; (e) CNN.
Figure 10. Error diagram.
Figure 11. Experimental setup.
Figure 12. Detection results of different methods. (a) Daylight, the beacon faces forward; (b) darkness, the beacon faces forward; (c) daylight, the beacon faces sideways; (d) darkness, the beacon faces sideways.
Abstract
1. Introduction
- An underwater omnidirectional rotating optical beacon was designed to offer a 360-degree operational range of up to 45 m. The design overcomes the limitations of traditional underwater optical beacons, which are constrained by restricted emission directions and short detection distances, thereby improving docking success rates.
- We created an underwater optical beacon dataset with manually annotated target boxes and centroid keypoints, and developed a deep learning-based algorithm, an improved YOLOv8-pose model, for the parallel detection of optical beacons and their centroids. The algorithm achieves 93.9% AP at 36.5 FPS, at least a 5.8% improvement in detection accuracy over existing methods.
- For the omnidirectional rotating optical beacon, we developed a metric method based on light-source features to determine when the beacon directly faces the AUV. Combined with synchronized scanning and line-of-sight (LOS) guidance, the approach attains azimuth and pose estimation errors of 4.72° and 3.09°, respectively, which meets practical docking requirements.
2. Underwater Omnidirectional Rotating Optical Beacon Docking System
2.1. Docking Approach for the Underwater Omnidirectional Rotating Optical Beacon
2.2. Design of Underwater Omnidirectional Rotating Optical Beacon
3. Deep Learning-Based Detection Algorithm for Underwater Optical Beacon
3.1. Underwater Optical Beacon Dataset
3.2. Underwater Optical Beacon Detection Algorithm Based on YOLOv8-Pose
3.2.1. Network Architecture
- Small target detection head: In optical docking, operational range is a critical metric, which is why our dataset includes numerous small object instances at long distances. Zhang et al. [29] have shown that shallower features might be more effective for such small, indistinct targets. Consequently, we introduced a specialized prediction head at the P2 layer designed specifically for detecting small targets. This quad-head structure significantly mitigates the adverse effects of substantial changes in object scale, thereby markedly improving the detection performance for small targets.
- C2f_DC: Dynamic convolution, an extension of traditional convolution, processes input data by dynamically selecting or combining different convolutional kernels for each input sample. It adapts to the input characteristics through a learnable multilayer perceptron (MLP) that generates attention weights controlling the contribution of each kernel; the weighted kernels are then aggregated into a single sample-specific convolution. A minimal sketch of this mechanism is given after this list.
- RFApose detection head: RFAConv combines spatial attention mechanisms with convolution operations to optimize how the convolution kernels process spatial features within their receptive fields, as illustrated in Figure 7. H, W, and C in the figure represent the height, width, and number of channels of the feature map, respectively, and K denotes the size of the convolution kernel. By introducing attention mechanisms, RFAConv goes beyond traditional spatial dimensions, enabling the network to understand and process key areas of the image more precisely. This adaptation enhances feature extraction accuracy, particularly in underwater environments characterized by low visibility and light scattering. Furthermore, it optimizes attention weights for large-kernel convolutions, effectively addressing the problem of shared kernel parameters. By reconstructing feature maps, RFAConv further enhances the encoding of image contextual information, allowing the network to better discern the relationship between noise and target light sources in underwater scenes, thereby effectively avoiding erroneous detection of interfering light. In this study, we integrated RFAConv into the decoupled detection head, enabling it to extract more precise classification, bounding box, and keypoint information from multi-scale feature maps and helping YOLOv8-pose better cope with the indistinct optical beacon features caused by complex underwater environments. A simplified structural sketch of RFAConv is also given after this list.
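To make the dynamic-convolution idea behind C2f_DC concrete, the following is a minimal PyTorch sketch in the spirit of Chen et al. [27]: a small MLP computes softmax attention weights over K parallel kernels from globally pooled features, the kernels are aggregated per sample, and a single grouped convolution applies the resulting sample-specific kernel. The class name `DynamicConv2d` and the hyperparameters (K = 4, reduction = 4) are illustrative assumptions, not the module used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Dynamic convolution sketch (in the spirit of Chen et al. [27]):
    K parallel kernels are mixed per sample using attention weights
    produced by a small MLP over globally pooled features."""

    def __init__(self, in_ch, out_ch, k=3, num_kernels=4, reduction=4):
        super().__init__()
        self.k = k
        self.out_ch = out_ch
        # K candidate kernels and biases (illustrative initialization).
        self.weight = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(num_kernels, out_ch))
        hidden = max(in_ch // reduction, 4)
        self.mlp = nn.Sequential(          # attention over the K kernels
            nn.Linear(in_ch, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_kernels),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # pi_k(x): per-sample kernel attention, softmax-normalised to sum to 1.
        pi = torch.softmax(self.mlp(x.mean(dim=(2, 3))), dim=1)      # (b, K)
        # Aggregate: W(x) = sum_k pi_k(x) W_k, b(x) = sum_k pi_k(x) b_k.
        w_agg = torch.einsum('bk,koihw->boihw', pi, self.weight)
        b_agg = torch.einsum('bk,ko->bo', pi, self.bias)
        # Apply each sample's aggregated kernel via one grouped convolution.
        out = F.conv2d(
            x.reshape(1, b * c, h, w),
            w_agg.reshape(b * self.out_ch, c, self.k, self.k),
            b_agg.reshape(-1),
            padding=self.k // 2,
            groups=b,
        )
        return out.view(b, self.out_ch, h, w)
```

Presumably, C2f_DC embeds such a layer inside the C2f block; the sketch only shows the kernel-mixing mechanism itself.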
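Similarly, the sketch below illustrates the receptive-field attention mechanism of RFAConv (Zhang et al. [28]) that the RFApose head builds on: k × k receptive-field samples are generated per location, re-weighted by a softmax over those positions, rearranged into an enlarged feature map, and convolved with stride k so that kernel parameters are no longer shared across receptive fields. This is a simplified reconstruction for illustration, not the exact layer used in the detection head.

```python
import torch
import torch.nn as nn

class RFAConv(nn.Module):
    """Receptive-field attention convolution, simplified after Zhang et al. [28]."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        # Generate k*k receptive-field samples per spatial location (grouped conv).
        self.gen = nn.Sequential(
            nn.Conv2d(in_ch, in_ch * k * k, k, padding=k // 2, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch * k * k),
            nn.ReLU(inplace=True),
        )
        # Attention logits over the k*k positions of each receptive field.
        self.att = nn.Sequential(
            nn.AvgPool2d(k, stride=1, padding=k // 2),
            nn.Conv2d(in_ch, in_ch * k * k, 1, bias=False),
        )
        # Stride-k convolution over the rearranged map: after re-weighting,
        # kernel parameters are effectively not shared across receptive fields.
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, stride=k, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = self.gen(x).view(b, c, self.k * self.k, h, w)
        att = torch.softmax(self.att(x).view(b, c, self.k * self.k, h, w), dim=2)
        feat = feat * att                      # weight each receptive-field sample
        # Rearrange the k*k weighted samples into an (h*k, w*k) grid.
        feat = feat.view(b, c, self.k, self.k, h, w)
        feat = feat.permute(0, 1, 4, 2, 5, 3).reshape(b, c, h * self.k, w * self.k)
        return self.conv(feat)
```

In the RFApose head, layers of this kind would replace the standard convolutions feeding the classification, bounding box, and keypoint branches.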
3.2.2. Post-Processing Based on Keypoint Joint IoU Matching
Algorithm 1: Joint Keypoint Similarity and IoU NMS
Input: {bbox_det}, {bbox_conf}, {keypoint_det}, λ_conf, λ_iou, λ_oks
Output: {bbox_filter}, {keypoint_filter}
1  Initialization
2  {bbox_filter} ← []
3  {bbox_det}, {keypoint_det} ← {bbox, keypoint | bbox_conf ≥ λ_conf}
4  order ← sort({bbox_conf}, descending)
5  while numel(order) > 0 do
6      i ← order[0]
7      {bbox_filter} ← {bbox_filter} ∪ bbox_det[i]
8      {keypoint_filter} ← {keypoint_filter} ∪ keypoint_det[i]
9      if numel(order) = 1 then break
10     {bbox_remain} ← {bbox_det[order[1:]]}
11     {keypoint_remain} ← {keypoint_det[order[1:]]}
12     μ_iou ← IoU(bbox_det[i], {bbox_remain})
13     μ_oks ← OKS(keypoint_det[i], {keypoint_remain})
14     order ← order[where μ_iou < λ_iou and μ_oks < λ_oks]
15 end
16 return {bbox_filter}, {keypoint_filter}
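A minimal NumPy sketch of Algorithm 1 is given below, assuming boxes in (x1, y1, x2, y2) format and a COCO-style OKS with a single per-keypoint constant κ; the function names (`joint_nms`, `oks`) and the default thresholds are illustrative rather than the values used in the paper.

```python
import numpy as np

def oks(kpt, kpts, areas, kappa=0.1):
    """OKS of one detection's keypoints (K, 2) against the remaining
    detections' keypoints (N, K, 2), COCO-style, scaled by box area."""
    d2 = np.sum((kpts - kpt[None]) ** 2, axis=-1)                  # (N, K)
    scale = areas[:, None] + np.finfo(np.float32).eps
    return np.mean(np.exp(-d2 / (2.0 * scale * kappa ** 2)), axis=-1)

def iou(box, boxes):
    """IoU of one box (4,) against N boxes (N, 4), boxes as x1, y1, x2, y2."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def joint_nms(boxes, scores, kpts, conf_th=0.25, iou_th=0.5, oks_th=0.5):
    """Keep a detection only if it is dissimilar to every higher-scoring one
    in BOTH box IoU and keypoint OKS (lines 12-14 of Algorithm 1)."""
    mask = scores >= conf_th                                       # line 3
    boxes, scores, kpts = boxes[mask], scores[mask], kpts[mask]
    order = np.argsort(-scores)                                    # line 4
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        mu_iou = iou(boxes[i], boxes[rest])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        mu_oks = oks(kpts[i], kpts[rest], areas)
        order = rest[(mu_iou < iou_th) & (mu_oks < oks_th)]        # line 14
    return boxes[keep], kpts[keep]
```

A candidate is suppressed as soon as it resembles a higher-scoring detection in either the box or the keypoint sense, which matches the joint condition in line 14 of Algorithm 1.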
3.3. Experiments on Underwater Optical Beacon Detection Algorithm
3.3.1. Experimental Setup
3.3.2. Comparative Experiments
3.3.3. Ablation Experiments
4. Pose Estimation Based on Underwater Omnidirectional Rotating Optical Beacon
4.1. Azimuth Estimation
- By determining the frame rate of the AUV’s camera, the observable angle of the optical beacon, and the permissible error margin, the maximum scanning rate of the omnidirectional rotating optical beacon can be calculated. The maximum scanning rate, s′, follows from these quantities; one possible form of the relation is sketched after this list.
- During the docking process, owing to the rotational characteristic of the beacon, the deep learning algorithm may detect the target light source in multiple consecutive frames within the same rotation. To accurately determine the beacon’s orientation, we propose a metric method based on the light source’s characteristics, as detailed in Algorithm 2 (a minimal sketch of the scoring is given after this list). We hypothesize that the larger the area of the detected target light source and the closer its shape is to a circle, the more likely it is that the beacon is facing the AUV directly. Here, I denotes the input image, bbox_area the area of the bounding box, and bbox_shape the aspect ratio of the bounding box. We assign weights w_area and w_shape to these metrics and use their weighted sum as the final detection evaluation score. Additionally, occasional frame drops during continuous detection are filled in by interpolation.
Algorithm 2: Forward Beacon Detection Algorithm
Input: I, bbox, w_area, w_shape, t
Output: bbox_max, t_max
1  Initialization
2  bbox, keypoint, t ← YOLOv8-UL(I)
3  detect, frame_miss, frame_det, {bbox} ← False, 0, 0, []
4  if bbox is not none:
5      detect, frame_miss ← True, 0
6      {bbox} ← {bbox} ∪ bbox
7      frame_det ← frame_det + 1
8      if frame_det ≥ 5:
9          {bbox_area} ← area({bbox})
10         {bbox_shape} ← shape({bbox})
11         score ← w_area ∗ {bbox_area} + w_shape ∗ {bbox_shape}
12         return bbox_max, t_max ← max({bbox}, key = score)
13 else: frame_miss ← frame_miss + 1
14 if frame_miss ≥ 5 then
15     {bbox}, detect ← [], False
16 end
- Prior to the start of AUV docking, time synchronization between the AUV and the docking station is confirmed through a timing system. Then, using the detection time t_det of the beacon obtained in step 2 and the initial time t_init, the theoretical angular position of the beacon at any given moment can be calculated, which represents the AUV’s azimuth relative to the docking station; a possible form of this relation is sketched below.
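As a hedged illustration of the two relations referenced in these steps, one natural formulation is the following, where f is the camera frame rate, θ_obs the angle over which the beacon is observable, n the minimum number of frames required per pass, ω the actual scanning rate, and α_0 the beacon's angular position at t_init; the symbols and exact forms are assumptions rather than the paper's original equations.

$$
s' = \frac{f\,\theta_{\mathrm{obs}}}{n},
\qquad
\alpha(t_{\mathrm{det}}) = \left(\alpha_0 + \omega\,(t_{\mathrm{det}} - t_{\mathrm{init}})\right) \bmod 360^{\circ}
$$

The first condition requires that the beacon remain visible to the camera (θ_obs/ω seconds per revolution) at least as long as it takes to capture n frames (n/f seconds); the second maps the synchronized detection time to the beacon's pointing direction and hence to the AUV's azimuth relative to the docking station.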
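The sketch below illustrates the weighted area/shape scoring of Algorithm 2 in Python, assuming the shape metric rewards bounding boxes whose aspect ratio is close to 1 (a near-circular spot) and that the area term is normalized to [0, 1] before weighting; the weights 0.6/0.4 and the helper name `forward_beacon_score` are assumptions for illustration.

```python
import numpy as np

def forward_beacon_score(bboxes, w_area=0.6, w_shape=0.4):
    """Score consecutive detections of the rotating beacon and pick the frame
    in which it most directly faces the camera (cf. Algorithm 2).
    bboxes: (N, 4) array of (x1, y1, x2, y2) from consecutive frames."""
    w = bboxes[:, 2] - bboxes[:, 0]
    h = bboxes[:, 3] - bboxes[:, 1]
    area = w * h
    # Aspect-ratio term: 1.0 for a square box (circular light spot),
    # decreasing as the box becomes elongated (beacon seen from the side).
    shape = np.minimum(w, h) / np.maximum(w, h)
    # Normalise the area term so both metrics share a [0, 1] range.
    area_n = area / (area.max() + 1e-9)
    score = w_area * area_n + w_shape * shape
    return int(np.argmax(score)), score
```

Given the bounding boxes and timestamps of at least five consecutive detections within one rotation, the returned index selects bbox_max and its detection time t_max, which then feed the azimuth calculation described above.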
4.2. Pose Estimation
5. Pool Experiments
5.1. Experiment Setup and Procedure
- Camera platform setup: The camera platform was mounted on the gantry above the pool, with the gantry’s center serving as the origin of the X-axis. The camera moved along the X-axis from −4.5 m to 4.5 m in 3 m increments, occupying four positions and capturing video at each, to simulate the AUV viewing the optical beacon from different directions.
- Y-axis movement: With the position of the omnidirectional rotating optical beacon serving as the origin for the Y-axis, the camera platform moves along the Y-axis from 20 m to 50 m using the gantry, with 5 m increments, repeating the operation in step 1 seven times. This results in 24 video scans of the optical beacon from various positions. Figure 12 shows representative images collected during the experiment.
- Data collection: Experimental data were collected from offline videos with a resolution of 1920 × 1080. The algorithm runs on a vision computing board, Jetson AGX Orin (manufactured by NVIDIA, based in Santa Clara, CA, USA), which is compact, highly capable, and easily deployable in underwater robots. It provides efficient and reliable computational support for data processing and analysis.
5.2. Experiment Results
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wynn, R.B.; Huvenne, V.A.; Le Bas, T.P.; Murton, B.J.; Connelly, D.P.; Bett, B.J.; Ruhl, H.A.; Morris, K.J.; Peakall, J.; Parsons, D.R.; et al. Autonomous Underwater Vehicles (AUVs): Their past, present and future contributions to the advancement of marine geoscience. Mar. Geol. 2014, 352, 451–468. [Google Scholar] [CrossRef]
- Sun, K.; Cui, W.; Chen, C. Review of underwater sensing technologies and applications. Sensors 2021, 21, 7849. [Google Scholar] [CrossRef]
- Hou, Y.; Han, G.; Zhang, F.; Lin, C.; Peng, J.; Liu, L. Distributional Soft Actor-Critic-Based Multi-AUV Cooperative Pursuit for Maritime Security Protection. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6049–6060. [Google Scholar] [CrossRef]
- Li, Y.; Sun, K. Review of underwater visual navigation and docking: Advances and challenges. In Proceedings of the Sixth Conference on Frontiers in Optical Imaging and Technology, Nanjing, China, 22–24 October 2023; SPIE: Bellingham, WA, USA, 2024; Volume 13156, pp. 314–321. [Google Scholar]
- Yan, Z.; Gong, P.; Zhang, W.; Li, Z.; Teng, Y. Autonomous Underwater Vehicle Vision Guided Docking Experiments Based on L-Shaped Light Array. IEEE Access 2019, 7, 72567–72576. [Google Scholar] [CrossRef]
- Zhang, W.; Wu, W.; Teng, Y.; Li, Z.; Yan, Z. An underwater docking system based on UUV and recovery mother ship: Design and experiment. Ocean Eng. 2023, 281, 114767. [Google Scholar] [CrossRef]
- Trslic, P.; Rossi, M.; Robinson, L.; O’Donnel, C.W.; Weir, A.; Coleman, J.; Toal, D. Vision based autonomous docking for work class ROVs. Ocean Eng. 2020, 196, 106840. [Google Scholar] [CrossRef]
- Cheng, H.; Chu, J.; Zhang, R.; Gui, X.; Tian, L. Real-time position and attitude estimation for homing and docking of an autonomous underwater vehicle based on bionic polarized optical guidance. J. Ocean Univ. China 2020, 19, 1042–1050. [Google Scholar] [CrossRef]
- Zhao, C.; Dong, H.; Wang, J.; Qiao, T.; Yu, J.; Ren, J. Dual-Type Marker Fusion-Based Underwater Visual Localization for Autonomous Docking. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
- Sun, K.; Han, Z. Autonomous underwater vehicle docking system for energy and data transmission in cabled ocean observatory networks. Front. Energy Res. 2022, 10, 960278. [Google Scholar] [CrossRef]
- Lin, M.; Lin, R.; Yang, C.; Li, D.; Zhang, Z.; Zhao, Y.; Ding, W. Docking to an underwater suspended charging station: Systematic design and experimental tests. Ocean Eng. 2022, 249, 110766. [Google Scholar] [CrossRef]
- Zhang, Z.; Ding, W.; Wu, R.; Lin, M.; Li, D.; Lin, R. Autonomous Underwater Vehicle Cruise Positioning and Docking Guidance Scheme. J. Mar. Sci. Eng. 2024, 12, 1023. [Google Scholar] [CrossRef]
- Cai, C.; Rong, Z.; Xie, X.; Xu, B.; Zhang, Z.; Wu, Z.; Si, Y.; Huang, H. Development and test of a subsea docking system applied to an autonomous underwater helicopter. In Proceedings of the OCEANS 2022, Hampton Roads, VA, USA, 17–20 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
- Dörner, D.; Espinoza, A.T.; Torroba, I.; Kuttenkeuler, J.; Stenius, I. To Smooth or to Filter: A Comparative Study of State Estimation Approaches for Vision-Based Autonomous Underwater Docking. In Proceedings of the OCEANS 2024, Singapore, 15–18 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–9. [Google Scholar]
- Xu, S.; Jiang, Y.; Li, Y.; Wang, B.; Xie, T.; Li, S.; Cao, J. A stereo visual navigation method for docking autonomous underwater vehicles. J. Field Robot. 2024, 41, 374–395. [Google Scholar] [CrossRef]
- Wang, S.; Wang, X.; Lei, P.; Chen, J.; Xu, Z.; Yang, Y.; Zhou, Y. Blue laser diode light for underwater optical vision guidance in AUV docking. In Proceedings of the Semiconductor Lasers and Applications IX, Hangzhou, China, 20–23 October 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11182, pp. 175–183. [Google Scholar]
- Zhang, Y.; Wang, X.; Lei, P.; Wang, S.; Yang, Y.; Sun, L.; Zhou, Y. Smart vector-inspired optical vision guiding method for autonomous underwater vehicle docking and formation. Opt. Lett. 2022, 47, 2919–2922. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.; Duan, Z.; Zheng, F.; Guo, Y.; Xia, Q. Underwater optical guiding and communication solution for the AUV and seafloor node. Appl. Opt. 2022, 61, 7059–7070. [Google Scholar] [CrossRef] [PubMed]
- Lv, F.; Xu, H.; Shi, K.; Wang, X. Estimation of Positions and Poses of Autonomous Underwater Vehicle Relative to Docking Station Based on Adaptive Extraction of Visual Guidance Features. Machines 2022, 10, 571. [Google Scholar] [CrossRef]
- Feng, J.; Yao, Y.; Wang, H.; Jin, H. Multi-AUV Terminal Guidance Method Based on Underwater Visual Positioning. In Proceedings of the 2020 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 13–16 October 2020; pp. 314–319. [Google Scholar]
- Li, Y.; Jiang, Y.; Cao, J.; Wang, B.; Li, Y. AUV Docking Experiments Based on Vision Positioning Using Two Cameras. Ocean Eng. 2015, 110, 163–173. [Google Scholar] [CrossRef]
- Zhang, B.; Zhong, P.; Yang, F.; Zhou, T.; Shen, L. Fast Underwater Optical Beacon Finding and High Accuracy Visual Ranging Method Based on Deep Learning. Sensors 2022, 22, 7940. [Google Scholar] [CrossRef]
- Ren, R.; Zhang, L.; Liu, L.; Yuan, Y. Two AUVs Guidance Method for Self-Reconfiguration Mission Based on Monocular Vision. IEEE Sensors J. 2021, 21, 10082–10090. [Google Scholar] [CrossRef]
- Chavez-Galaviz, J.; Mahmoudian, N. Underwater Dock Detection Through Convolutional Neural Networks Trained with Artificial Image Generation. In Proceedings of the 2022 International Conference on Robotics and Automation, Philadelphia, PA, USA, 23–27 May 2022; pp. 4621–4627. [Google Scholar]
- Duntley, S.Q. Light in the Sea. J. Opt. Soc. Am. 1963, 53, 214–233. [Google Scholar] [CrossRef]
- Liu, S.; Ozay, M.; Okatani, T.; Xu, H.; Sun, K.; Lin, Y. Detection and Pose Estimation for Short-Range Vision-Based Underwater Docking. IEEE Access 2018, 7, 2720–2749. [Google Scholar] [CrossRef]
- Chen, Y.; Dai, X.; Liu, M.; Chen, D.; Yuan, L.; Liu, Z. Dynamic Convolution: Attention Over Convolution Kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11030–11039. [Google Scholar]
- Zhang, X.; Liu, C.; Yang, D.; Song, T.; Ye, Y.; Li, K.; Song, Y. RFAConv: Innovating Spatial Attention and Standard Convolutional Operation. arXiv 2023, arXiv:2304.03198. [Google Scholar]
- Zhang, M.; Xu, S.; Song, W.; He, Q.; Wei, Q. Lightweight Underwater Object Detection Based on YOLOv4 and Multi-Scale Attentional Feature Fusion. Remote Sens. 2021, 13, 4706. [Google Scholar] [CrossRef]
- Neubeck, A.; Van Gool, L. Efficient Non-Maximum Suppression. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 850–855. [Google Scholar]
- Maji, D.; Nagori, S.; Mathew, M.; Poddar, D. YOLO-Pose: Enhancing YOLO for Multi-Person Pose Estimation Using Object Keypoint Similarity Loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2637–2646. [Google Scholar]
- Su, X.; Xiang, X.; Dong, D.; Zhang, J. Visual LOS Guided Docking of Over-Actuated Underwater Vehicle. In Proceedings of the Global Oceans 2020: Singapore–US Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; pp. 1–5. [Google Scholar]
- Petzold, T.J. Volume Scattering Functions for Selected Ocean Waters; Scripps Institution of Oceanography: San Diego, CA, USA, 1972. [Google Scholar]
| | Eight Lights | Six Lights | Four Lights | Two Lights | Single Light * |
|---|---|---|---|---|---|
| Training Set | 1535 | 1374 | 350 | 1805 | 1080 |
| Test Set | 307 | 274 | 70 | 361 | 216 |
| Image Size | 720 × 576 | 960 × 576 | 960 × 576 | 1920 × 1080 | 1920 × 1080 |
| Environment | Specification (Train) | Specification (Inference) |
|---|---|---|
| CPUs | 2 × Intel Xeon Gold 6234 | 12-core ARM Cortex-A78AE |
| GPU | NVIDIA RTX A6000 (48 GB) | NVIDIA Ampere, 2048 CUDA cores, 64 Tensor Cores |
| CUDA | 11.3 | 11.4 |
| PyTorch | 1.11.0 | 1.13.0 |
| Methods | AP50 (IoU) | AP50–95 (IoU) | AP50 (OKS) | FPS |
|---|---|---|---|---|
| Tradition | - | - | 0.837 | 28 |
| CNN | - | - | 0.867 | 67.9 |
| YOLOv8n + Centroid | 0.911 | 0.572 | 0.881 | 40 |
| YOLOv9t + Centroid | 0.918 | 0.569 | 0.893 | 38.9 |
| YOLOv10n + Centroid | 0.924 | 0.555 | 0.871 | 51 |
| YOLOv8n-pose | 0.903 | 0.564 | 0.874 | 66.6 |
| Ours | 0.943 | 0.599 | 0.939 | 36.5 |
| Model | AP50 (IoU) | AP50–95 (IoU) | AP50 (OKS) | FLOPs (G) | Parameters (M) | FPS |
|---|---|---|---|---|---|---|
| YOLOv8n-pose | 0.903 | 0.564 | 0.966 | 8.7 | 6.2 | 66.6 |
| +P2 * | 0.923 | 0.576 | 0.978 | 12.4 | 6.2 | 49.6 |
| +P2, +DC * | 0.928 | 0.581 | 0.978 | 11.8 | 7.5 | 43.7 |
| +P2, +DC, +RFApose * | 0.935 | 0.588 | 0.982 | 10.5 | 9.1 | 38.6 |
| +P2, +DC, +RFApose, +kp * | 0.943 | 0.599 | 0.994 | 10.5 | 9.1 | 36.5 |
| X-Axis \ Y-Axis | 20 m | 25 m | 30 m | 35 m | 40 m | 45 m | 50 m |
|---|---|---|---|---|---|---|---|
| 4.5 m (day) | 6.12 | 5.63 | 5.43 | 5.60 | 4.94 | 5.61 | - |
| 4.5 m (night) | 6.12 | 5.62 | 5.43 | 5.55 | 4.89 | 5.54 | - |
| 1.5 m (day) | 4.53 | 4.48 | 4.58 | 4.29 | 4.49 | 4.38 | - |
| 1.5 m (night) | 4.53 | 4.48 | 4.58 | 4.29 | 4.47 | 4.34 | 4.21 |
| −1.5 m (day) | 3.92 | 3.73 | 3.86 | 3.15 | 2.95 | 2.78 | - |
| −1.5 m (night) | 3.92 | 3.70 | 3.86 | 3.14 | 2.95 | 2.77 | 2.68 |
| −4.5 m (day) | 5.85 | 5.52 | 5.55 | 5.43 | 5.17 | 5.58 | - |
| −4.5 m (night) | 5.85 | 5.52 | 5.52 | 5.43 | 5.11 | 5.50 | - |
| X-Axis \ Y-Axis | 20 m | 25 m | 30 m | 35 m | 40 m | 45 m | 50 m |
|---|---|---|---|---|---|---|---|
| 4.5 m (day) | 4.26 | 4.13 | 4.01 | 3.88 | 3.95 | 3.93 | - |
| 4.5 m (night) | 4.26 | 4.13 | 4.01 | 3.82 | 3.90 | 3.87 | - |
| 1.5 m (day) | 2.15 | 2.91 | 1.97 | 2.28 | 1.72 | 1.95 | - |
| 1.5 m (night) | 2.15 | 2.91 | 1.97 | 2.28 | 1.72 | 1.94 | 1.84 |
| −1.5 m (day) | 2.82 | 3.20 | 1.95 | 1.93 | 1.86 | 1.88 | - |
| −1.5 m (night) | 2.81 | 3.20 | 1.95 | 1.93 | 1.85 | 1.84 | 1.79 |
| −4.5 m (day) | 4.22 | 4.14 | 3.93 | 3.55 | 3.71 | 3.99 | - |
| −4.5 m (night) | 4.22 | 4.14 | 3.90 | 3.54 | 3.62 | 3.95 | - |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).