A Fast Multi-Scale of Distributed Batch-Learning Growing Neural Gas for Multi-Camera 3D Environmental Map Building
<p>Figure 1. The GNG topological map on the experimental dataset. As the figure shows, when GNG learns both position and color, the connections become very messy. (<b>a</b>) Original point cloud. (<b>b</b>) GNG topology learned from position only. (<b>c</b>) GNG topology learned from position and color.</p>
<p>Figure 2. Map-building methods for path planning. (<b>a</b>) A real environment. (<b>b</b>) A grid map. (<b>c</b>) A polygonal map.</p>
<p>Figure 3. Topological map building. (<b>a</b>) Environmental map. (<b>b</b>) Roadmap.</p>
<p>Figure 4. An example of topological path planning in a polygonal map. (<b>a</b>) Visibility graph. (<b>b</b>) Voronoi diagram.</p>
<p>Figure 5. The overall process of fast MS-DBL-GNG. The network is first initialized by creating multiple starting points in the point cloud. Based on this initialization, the point cloud data are then rearranged and split into multi-scale mini-batches. Each mini-batch is learned twice: the temporary variables are first reset, and the mini-batch is then learned through batch matrix calculations. After learning, the temporary variables are used to update the network node weights and edges. The number of nodes to add is then calculated, and the new nodes are inserted into the network. The process repeats until all multi-scale mini-batches have been processed.</p>
<p>Figure 6. An example of distributed initialization with three starting points. The circles are data, and the asterisks are nodes. First, a node is randomly selected from the last batch of data as the first starting point. The third-closest datum is then selected and connected as its neighbor. After that, the first <span class="html-italic">B</span> data surrounding it are removed. The next starting point is selected in the area farthest from the current starting points. The same process repeats until all three starting points are initialized.</p>
<p>Figure 7. The fast multi-scale batch-learning process. Data are learned from a small scale (<b>left</b>) up to a full batch (<b>right</b>). However, this study avoids learning the full batch and instead learns the same mini-batch twice in each learning phase.</p>
<p>Figure 8. The example procedure for balancing the data distribution in each mini-batch, where <math display="inline"><semantics> <mi>η</mi> </semantics></math> is 3 and <span class="html-italic">L</span> is 2. Each set of data <math display="inline"><semantics> <msub> <mover accent="true"> <mi mathvariant="bold-italic">X</mi> <mo>^</mo> </mover> <mi>t</mi> </msub> </semantics></math> is first divided into <math display="inline"><semantics> <msup> <mn>2</mn> <mi>L</mi> </msup> </semantics></math> groups, and the data are then rearranged into <math display="inline"><semantics> <mi mathvariant="bold-italic">X</mi> </semantics></math>.</p>
<p>Figure 9. The overall system architecture for automatic calibration using topological mapping. Two Orbbec cameras are first set up in the environment to observe two different, partially overlapping areas. RGB point clouds are then extracted from the intrinsic parameters, depth, and RGB color provided by the cameras. The proposed method, fast MS-DBL-GNG, extracts a topological map from each point cloud. These topological maps are then used to extract histogram features, followed by calibration with RANSAC and Color-ICP. The calibration yields extrinsic parameters, which are used to transform the point clouds into the world coordinate system.</p>
<p>Figure 10. The challenge of calibrating three or more point clouds: the two selected point clouds may not share any overlapping area. In addition, no camera arrangement ID is available for these cameras.</p>
<p>Figure 11. Each point cloud is first merged with its best-matching point cloud. Duplicate merges are removed, and matching is then repeated until all point clouds have been used.</p>
<p>Figure 12. The two different view setups used for the experiments.</p>
<p>Figure 13. Examples of photos taken from the two view types. From left to right, the first two are view type 1, and the last two are view type 2.</p>
<p>Figure 14. Examples of point clouds taken from the two view types. From left to right, the first two are view type 1, and the last two are view type 2.</p>
<p>Figure 15. The results of the different learning phases.</p>
<p>Figure 16. Several examples of topological maps extracted from point clouds using fast MS-DBL-GNG. From left to right, the first two are view type 1, and the last two are view type 2.</p>
<p>Figure 17. Examples of calibrated point cloud results for view type 1 (<b>left</b>) and view type 2 (<b>right</b>).</p>
<p>Figure 18. Example point clouds for multi-camera calibration. These views are related sequentially, from left to right or right to left.</p>
<p>Figure 19. An example of point clouds from four camera views calibrated using the proposed method.</p>
<p>Figure 20. An example of topological map usage for two calibrated point clouds. The walkable area is easy to distinguish through the topological map (shown in blue). The walkable path does not cover the area close to the table, which is an advantage for robot navigation. This illustrates the concept of intelligent sensors that appropriately provide the required information to the target. (<b>a</b>) Calibrated with two point clouds. (<b>b</b>) Merged from two topological maps. (<b>c</b>) Extracted walkable area of the topological maps.</p>
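The system architecture described above ends by applying the estimated extrinsic parameters to bring each camera's point cloud into the world coordinate system. A minimal NumPy sketch of that final step is shown below; the function names are ours, and this is a generic rigid transform, not the paper's exact implementation.

```python
import numpy as np

def apply_extrinsic(points, T):
    """Transform an (N, 3) point cloud into the world frame using a
    4x4 extrinsic matrix T (rotation + translation in homogeneous form)."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    return (homo @ T.T)[:, :3]                             # back to 3D

def merge_clouds(clouds, extrinsics):
    """Calibrate every camera's cloud and stack them into one map."""
    return np.vstack([apply_extrinsic(p, T) for p, T in zip(clouds, extrinsics)])
```

With identity extrinsics the cloud is unchanged; a translation in the last column of the matrix shifts the whole cloud, which is how each camera's points end up registered in the shared world frame.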
Abstract
1. Introduction
2. Related Work
2.1. Topological Mapping Methods
2.2. Growing Neural Gas
2.3. Map Building
2.4. Multi-Camera Calibration
3. Fast MS-DBL-GNG
3.1. Network Initialization
Algorithm 1 Network Initialization. | |
Input: | ▹ Data and the total number of starting points, respectively. |
Output: | ▹ Nodes and edges, respectively. |
Equation (6) | ▹ Define the batch size. |
for do | |
Equation (7) | ▹ Randomly select data from the end batch. |
Equation (8) | ▹ Calculate the distance between the selected node and all data. |
Equation (9) | ▹ Sort data based on distance. |
Equation (10) | ▹ Select the third nearest data as a neighbor node. |
Equation (11) | ▹ Create an edge connecting the selected node to the neighbor node. |
Equation (12) | ▹ Save the closest-distance batches from the data. |
Equation (13) | ▹ Remove the closest-distance batches from the data. |
▹ Update the number of network nodes. | |
end for |
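The initialization steps above can be sketched in NumPy as follows. This is a minimal reading of Algorithm 1, not the paper's exact code: the function and variable names are ours, and the random pick from the end batch is simplified to a uniform draw.

```python
import numpy as np

def distributed_init(X, n_start, seed=0):
    """Sketch of Algorithm 1: distributed initialization.

    Picks n_start well-separated starting nodes: each pick is taken from
    the far end of the remaining data, connected to its third-nearest
    datum, and the B nearest data are then discarded so the next pick
    lands in the farthest remaining area."""
    rng = np.random.default_rng(seed)
    B = len(X) // n_start                     # batch size, Eq. (6) analogue
    data = X.copy()
    nodes, edges = [], []
    for _ in range(n_start):
        idx = rng.integers(max(len(data) - B, 0), len(data))
        w = data[idx]                         # random pick from the end batch, Eq. (7)
        d = np.linalg.norm(data - w, axis=1)  # distance to all data, Eq. (8)
        order = np.argsort(d)                 # sort data by distance, Eq. (9)
        neighbor = data[order[2]]             # third-nearest datum, Eq. (10)
        nodes.extend([w, neighbor])           # node pair plus connecting edge, Eq. (11)
        edges.append((len(nodes) - 2, len(nodes) - 1))
        data = data[order[B:]]                # drop the B closest data, Eqs. (12)-(13)
    return np.array(nodes), edges
```

Because the B data nearest each starting point are removed before the next draw, consecutive starting points are forced apart, which is the effect illustrated in Figure 6.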
3.2. Multi-Scale Batch Processing
Algorithm 2 Multi-scale batch processing. | |
Input: | ▹ Initialization data, the total number of starting points, and the total number of learning phases, respectively. |
Output: | ▹ Nodes and edges, respectively. |
for do | |
shuffle | ▹ Shuffle all initialization data. |
end for | |
▹ Define the group size. | |
for do | ▹ For each group. |
for do | ▹ For each initialization data. |
for do | |
▹ Rearrange the data. | |
end for | |
end for | |
end for | |
Equation (14) | |
for do | ▹ For each learning phase. |
Equation (14) | ▹ Batch size of each learning phase. |
Equation (15) | ▹ Data of each learning phase. |
for do | ▹ Learn each mini-batch twice. |
Reset temporary variables | |
Batch learning | |
Network update | |
Network growing | |
end for | |
end for |
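The rearrangement loop in Algorithm 2 can be sketched as follows. This is our reading of Equations (14) and (15) under the assumptions that each partition is cut into 2^L groups (as in Figure 8) and that the per-phase batch size doubles without ever reaching the full batch; all names are placeholders.

```python
import numpy as np

def multiscale_batches(partitions, L, seed=0):
    """Sketch of Algorithm 2's data rearrangement (see also Figure 8).

    Each starting point's data partition is shuffled and cut into 2**L
    groups; taking one group from every partition in turn rearranges the
    data so every mini-batch covers the whole scene.  Phase l then uses
    the first D // 2**(L - l) items, so the batch size doubles each
    phase (assumed reading of Eqs. (14)-(15))."""
    rng = np.random.default_rng(seed)
    G = 2 ** L
    groups = []
    for part in partitions:
        p = part.copy()
        rng.shuffle(p)                        # shuffle all initialization data
        groups.append(np.array_split(p, G))   # cut into 2**L equal groups
    # rearrange: group 0 of every partition, then group 1, and so on
    X = np.concatenate([grp[gi] for gi in range(G) for grp in groups])
    D = len(X)
    return [X[: D // 2 ** (L - l)] for l in range(L)]
```

Each returned mini-batch is then learned twice (reset, batch learning, network update, network growing), as in the inner loop of Algorithm 2.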
3.3. Batch Learning
Algorithm 3 Batch learning. | |
Input: | ▹ A batch of data, delta movements, node activation counts, and temporary edges, respectively. |
Output: | |
Equation (16) | ▹ Calculate the distance of each data to each node. |
Equation (17) | ▹ Get the winner node for each data. |
Equation (18) | ▹ Get the second winner node for each data. |
Equation (19) | ▹ Update the error node of . |
Equation (20) | ▹ Update the delta movement based on . |
Equation (21) | ▹ Update the delta movement based on the connected nodes of . |
Equation (22) | ▹ Update the node activation count based on . |
Equation (23) | ▹ Update the node activation count based on the connected nodes of . |
Equation (24) | ▹ Update the edge strength based on and . |
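The batch-matrix form of Algorithm 3 can be sketched in NumPy as below. This is a hedged reconstruction, not the paper's code: the learning rates `eps1`/`eps2` and the exact accumulator shapes are our assumptions, keyed to the equation numbers in the comments.

```python
import numpy as np

def batch_learn(X, W, E, edges, eps1=0.1, eps2=0.01):
    """Sketch of Algorithm 3: one batch-learning pass in matrix form.

    X: (D, f) mini-batch, W: (m, f) node weights, E: (m,) node errors
    (updated in place), edges: (m, m) 0/1 adjacency.  Returns the delta
    movements, activation counts, and temporary edge strengths that the
    network update later applies."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)  # (D, m) distances, Eq. (16)
    s1 = np.argmin(d, axis=1)                                  # winner per datum, Eq. (17)
    d2 = d.copy()
    d2[np.arange(len(X)), s1] = np.inf
    s2 = np.argmin(d2, axis=1)                                 # second winner, Eq. (18)

    np.add.at(E, s1, d[np.arange(len(X)), s1])                 # accumulate winner error, Eq. (19)

    dW = np.zeros_like(W)
    A = np.zeros(len(W))
    np.add.at(dW, s1, eps1 * (X - W[s1]))                      # winner movement, Eq. (20)
    np.add.at(A, s1, 1.0)                                      # winner activations, Eq. (22)
    # neighbor movement/activation, Eqs. (21) and (23): every neighbor
    # of the winner is pulled toward the datum at the smaller rate
    nbr = edges[s1]                                            # (D, m) neighbor masks
    dW += eps2 * (nbr.T @ X - nbr.sum(0)[:, None] * W)
    A += nbr.sum(0)

    T = np.zeros_like(edges, dtype=float)
    np.add.at(T, (s1, s2), 1.0)                                # strengthen s1-s2 edges, Eq. (24)
    np.add.at(T, (s2, s1), 1.0)
    return dW, A, T
```

The key point is that every per-datum update is expressed as an array reduction (`np.add.at`, matrix products), so the whole mini-batch is processed without a Python loop over data.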
3.4. Network Update
Algorithm 4 Network update. | |
Input: | ▹ Delta movements, node activation counts, and temporary edges, respectively. |
Output: | ▹ The network nodes, the adjacency matrix, and node errors, respectively. |
Equation (25) | ▹ Update network nodes based on delta movement. |
Equation (26) | ▹ Update network edges based on temporary edges. |
Equation (27) | ▹ Get those isolated node indexes. |
Remove() | ▹ Delete those related variables based on node indexes. |
▹ Update the number of network nodes. | |
Equation (28) | ▹ Perform error discounting on all nodes. |
if > 0.9 then | ▹ With a 10% chance, perform the following actions. |
Equation (29) | ▹ Get those inactive node indexes. |
Remove() | ▹ Delete those related variables based on node indexes. |
▹ Update the number of network nodes. | |
end if |
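A possible NumPy sketch of Algorithm 4 follows. Rebuilding the adjacency solely from the temporary edges, and the discount factor `kappa` and activity threshold `theta`, are our assumptions; the equation numbers in the comments map each line back to the algorithm.

```python
import numpy as np

def network_update(W, E, dW, A, T, kappa=0.9, theta=1e-6, seed=0):
    """Sketch of Algorithm 4: fold the batch-learning accumulators back
    into the network (nodes, edges, and errors)."""
    rng = np.random.default_rng(seed)
    moved = A > 0
    W = W.copy()
    W[moved] += dW[moved] / A[moved, None]   # mean delta movement, Eq. (25)
    edges = (T > 0).astype(int)              # keep edges reinforced this batch, Eq. (26)

    keep = edges.sum(axis=1) > 0             # isolated node indexes, Eq. (27)
    W, E, A = W[keep], E[keep], A[keep]      # Remove(): drop their related variables
    edges = edges[np.ix_(keep, keep)]

    E = E * kappa                            # error discounting, Eq. (28)
    if rng.random() > 0.9:                   # 10% chance: prune inactive nodes
        keep = A > theta                     # inactive node indexes, Eq. (29)
        W, E = W[keep], E[keep]
        edges = edges[np.ix_(keep, keep)]
    return W, E, edges
```

Dividing the accumulated movement by the activation count turns the batch sums from Algorithm 3 into an average shift per node, which is what makes the batch update stable regardless of mini-batch size.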
3.5. Network Growing
Algorithm 5 Network growing. | |
Equation (30) | ▹ The total number of nodes to grow. |
for do | |
if m < M then | ▹ The network does not exceed the maximum number of nodes. |
Equation (31) | ▹ Get the maximum-error node. |
Equation (32) | ▹ Get the maximum-error neighbor node of . |
Equation (33) | ▹ Create a new node in the network. |
Equation (34) | ▹ Create an edge between and . |
Equation (35) | ▹ Update the node errors of , , and . |
▹ Update the number of network nodes. | |
end if | |
end for |
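The growth step of Algorithm 5 can be sketched as below. The halving factor `alpha` and the placement of the new node at the midpoint are our assumptions (standard GNG practice), keyed to the equation numbers in the comments.

```python
import numpy as np

def grow(W, E, edges, n_new, M, alpha=0.5):
    """Sketch of Algorithm 5: insert up to n_new nodes between the
    highest-error node and its highest-error neighbor."""
    E = E.astype(float).copy()
    for _ in range(n_new):                   # n_new from Eq. (30)
        m = len(W)
        if m >= M:                           # never exceed the node budget
            break
        q = int(np.argmax(E))                # maximum-error node, Eq. (31)
        nbrs = np.flatnonzero(edges[q])
        if nbrs.size == 0:
            break
        f = nbrs[int(np.argmax(E[nbrs]))]    # its max-error neighbor, Eq. (32)
        W = np.vstack([W, (W[q] + W[f]) / 2])          # new node at the midpoint, Eq. (33)
        edges = np.pad(edges, ((0, 1), (0, 1)))
        edges[q, f] = edges[f, q] = 0                   # rewire q-f through the new node
        edges[q, m] = edges[m, q] = edges[f, m] = edges[m, f] = 1  # Eq. (34)
        E[q] *= alpha                                   # discount split errors, Eq. (35)
        E[f] *= alpha
        E = np.append(E, E[q])                          # new node inherits the error
    return W, E, edges
```

Growing several nodes per pass (rather than one per fixed interval, as in standard GNG) is what lets the network reach its node budget within a few mini-batches.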
4. Multi-Camera 3D Environmental Map Building
4.1. Topological Mapping-Based Multi-Camera Calibration
4.2. 3D Environmental Map Building Strategy
5. Experimental Results
5.1. Experimental Setup
5.2. Evaluation Metrics
5.3. Model Parameters
5.4. Topological Feature Extraction Results
5.5. 3D Environmental Map Building Results
5.6. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AAL | Ambient assisted living |
ICP | Iterative closest point |
RANSAC | Random sample consensus |
GNG | Growing neural gas |
FCM-BL-GNG | Fuzzy-c-means batch-learning GNG |
DBL-GNG | Distributed batch-learning GNG |
MS-BL-GNG | Multi-scale batch-learning GNG |
Fast MS-DBL-GNG | Fast multi-scale DBL-GNG |
References
- Matsuura, T.; Ma, X. Living arrangements and subjective well-being of the elderly in China and Japan. J. Happiness Stud. 2022, 23, 903–948. [Google Scholar] [CrossRef]
- Kojima, G.; Taniguchi, Y.; Kitamura, A.; Fujiwara, Y. Is living alone a risk factor of frailty? A systematic review and meta-analysis. Ageing Res. Rev. 2020, 59, 101048. [Google Scholar] [CrossRef] [PubMed]
- Vimarlund, V.; Borycki, E.M.; Kushniruk, A.W.; Avenberg, K. Ambient assisted living: Identifying new challenges and needs for digital technologies and service innovation. Yearb. Med. Inform. 2021, 30, 141–149. [Google Scholar] [CrossRef] [PubMed]
- Lampoltshammer, T.J.; de Freitas, E.P.; Nowotny, T.; Plank, S.; Da Costa, J.P.C.L.; Larsson, T.; Heistracher, T. Use of local intelligence to reduce energy consumption of wireless sensor nodes in elderly health monitoring systems. Sensors 2014, 14, 4932–4947. [Google Scholar] [CrossRef] [PubMed]
- Cicirelli, G.; Marani, R.; Petitti, A.; Milella, A.; D’Orazio, T. Ambient assisted living: A review of technologies, methodologies and future perspectives for healthy aging of population. Sensors 2021, 21, 3549. [Google Scholar] [CrossRef]
- Peng, Y.; Peng, J.; Li, J.; Yan, P.; Hu, B. Design and development of the fall detection system based on point cloud. Procedia Comput. Sci. 2019, 147, 271–275. [Google Scholar] [CrossRef]
- Lee, G.; Kim, J. Improving human activity recognition for sparse radar point clouds: A graph neural network model with pre-trained 3D human-joint coordinates. Appl. Sci. 2022, 12, 2168. [Google Scholar] [CrossRef]
- Rougier, C.; Auvinet, E.; Rousseau, J.; Mignotte, M.; Meunier, J. Fall detection from depth map video sequences. In Proceedings of the toward Useful Services for Elderly and People with Disabilities: 9th International Conference on Smart Homes and Health Telematics, ICOST 2011, Montreal, QC, Canada, 20–22 June 2011; Proceedings 9. Springer: Berlin/Heidelberg, Germany, 2011; pp. 121–128. [Google Scholar]
- Su, P.C.; Shen, J.; Xu, W.; Cheung, S.C.S.; Luo, Y. A fast and robust extrinsic calibration for RGB-D camera networks. Sensors 2018, 18, 235. [Google Scholar] [CrossRef]
- Huang, L.; Da, F.; Gai, S. Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object. Opt. Lasers Eng. 2019, 115, 32–41. [Google Scholar] [CrossRef]
- Bu, S.; Lee, S. Easy to Calibrate: Marker-Less Calibration of Multiview Azure Kinect. CMES Comput. Model. Eng. Sci. 2023, 136, 3083–3096. [Google Scholar] [CrossRef]
- Lyu, W.; Ke, W.; Sheng, H.; Ma, X.; Zhang, H. Dynamic Downsampling Algorithm for 3D Point Cloud Map Based on Voxel Filtering. Appl. Sci. 2024, 14, 3160. [Google Scholar] [CrossRef]
- Viejo, D.; Garcia, J.; Cazorla, M.; Gil, D.; Johnsson, M. Using GNG to improve 3D feature extraction—Application to 6DoF egomotion. Neural Netw. 2012, 32, 138–146. [Google Scholar] [CrossRef] [PubMed]
- Thrun, S. Learning metric-topological maps for indoor mobile robot navigation. Artif. Intell. 1998, 99, 21–71. [Google Scholar] [CrossRef]
- Kohonen, T. The self-organizing map. Proc. IEEE 1990, 78, 1464–1480. [Google Scholar] [CrossRef]
- Fritzke, B. A growing neural gas network learns topologies. Adv. Neural Inf. Process. Syst. 1994, 7, 625–632. [Google Scholar]
- Furao, S.; Hasegawa, O. An incremental network for on-line unsupervised classification and topology learning. Neural Netw. 2006, 19, 90–106. [Google Scholar] [CrossRef]
- Siow, C.Z.; Saputra, A.A.; Obo, T.; Kubota, N. Distributed Batch Learning of Growing Neural Gas for Quick and Efficient Clustering. Mathematics 2024, 12, 1909. [Google Scholar] [CrossRef]
- Ardilla, F.; Saputra, A.A.; Kubota, N. Multi-Scale Batch-Learning Growing Neural Gas Efficiently for Dynamic Data Distributions. Int. J. Autom. Technol. 2023, 17, 206–216. [Google Scholar] [CrossRef]
- Ardilla, F.; Saputra, A.A.; Kubota, N. Batch learning growing neural gas for sequential point cloud processing. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 1766–1771. [Google Scholar]
- Toda, Y.; Matsuno, T.; Minami, M. Multilayer batch learning growing neural gas for learning multiscale topologies. J. Adv. Comput. Intell. Intell. Inform. 2021, 25, 1011–1023. [Google Scholar] [CrossRef]
- Iwasa, M.; Kubota, N.; Toda, Y. Multi-scale batch-learning growing neural gas for topological feature extraction in navigation of mobility support robots. In Proceedings of the The 7th International Workshop on Advanced Computational Intelligence and Intelligent Informatics (IWACIII 2021), Beijing, China, 31 October–3 November 2021. [Google Scholar]
- Toda, Y.; Wada, A.; Miyase, H.; Ozasa, K.; Matsuno, T.; Minami, M. Growing neural gas with different topologies for 3D space perception. Appl. Sci. 2022, 12, 1705. [Google Scholar] [CrossRef]
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
- Ortega, A.; Silva, M.; Teniente, E.H.; Ferreira, R.; Bernardino, A.; Gaspar, J.; Andrade-Cetto, J. Calibration of an outdoor distributed camera network with a 3D point cloud. Sensors 2014, 14, 13708–13729. [Google Scholar] [CrossRef] [PubMed]
- Tsai, C.Y.; Huang, C.H. Indoor scene point cloud registration algorithm based on RGB-D camera calibration. Sensors 2017, 17, 1874. [Google Scholar] [CrossRef]
- Wang, Q.; Yan, C.; Tan, R.; Feng, Y.; Sun, Y.; Liu, Y. 3D-CALI: Automatic calibration for camera and LiDAR using 3D checkerboard. Measurement 2022, 203, 111971. [Google Scholar] [CrossRef]
- Staranowicz, A.; Brown, G.R.; Morbidi, F.; Mariottini, G.L. Easy-to-use and accurate calibration of rgb-d cameras from spheres. In Proceedings of the Image and Video Technology: 6th Pacific-Rim Symposium, PSIVT 2013, Guanajuato, Mexico, 28 October–1 November 2013; Proceedings 6. Springer: Berlin/Heidelberg, Germany, 2014; pp. 265–278. [Google Scholar]
- Lee, S.H.; Yoo, J.; Park, M.; Kim, J.; Kwon, S. Robust extrinsic calibration of multiple RGB-D cameras with body tracking and feature matching. Sensors 2021, 21, 1013. [Google Scholar] [CrossRef] [PubMed]
- Yang, L.; Dryanovski, I.; Valenti, R.G.; Wolberg, G.; Xiao, J. RGB-D camera calibration and trajectory estimation for indoor mapping. Auton. Robot. 2020, 44, 1485–1503. [Google Scholar] [CrossRef]
- Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152. [Google Scholar]
- Park, J.; Zhou, Q.Y.; Koltun, V. Colored point cloud registration revisited. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 143–152. [Google Scholar]
- Yang, J.; Huang, Z.; Quan, S.; Cao, Z.; Zhang, Y. RANSACs for 3D rigid registration: A comparative evaluation. IEEE/CAA J. Autom. Sin. 2022, 9, 1861–1878. [Google Scholar] [CrossRef]
- Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II 14. Springer: Amsterdam, The Netherlands, 2016; pp. 766–782. [Google Scholar]
- Moreli, V.; Cazorla, M.; Orts-Escolano, S.; Garcia-Rodriguez, J. 3d maps representation using gng. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 1482–1487. [Google Scholar]
- Saputra, A.A.; Besari, A.R.A.; Kubota, N. Human joint skeleton tracking using multiple kinect azure. In Proceedings of the 2022 International Electronics Symposium (IES), Surabaya, Indonesia, 9–11 August 2022; pp. 430–435. [Google Scholar]
- Scherzinger, W.M.; Dohrmann, C.R. A robust algorithm for finding the eigenvalues and eigenvectors of 3 × 3 symmetric matrices. Comput. Methods Appl. Mech. Eng. 2008, 197, 4007–4015. [Google Scholar] [CrossRef]
- Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
- Choi, S.; Zhou, Q.Y.; Koltun, V. Robust reconstruction of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5556–5565. [Google Scholar]
- Zhou, Q.Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
Method | SOM | GCS | GNG | SOINN |
---|---|---|---|---|
Topology Preservation | ✓ | ✓ | ||
Incremental Learning | ✓ | ✓ | ✓ | |
Topological Clustering | ✓ | ✓ |
Method | Standard GNG | FCM-BL-GNG [21] | MS-BL-GNG [22] | Fast MS-BL-GNG [19] | DBL-GNG [18] |
---|---|---|---|---|---|
Node initialization | Two random nodes | More than two random nodes | Two random nodes | Three random nodes | Distributed with more than two nodes |
Node growth frequency | One node per interval | One node per epoch | One node per mini-batch | One node per condition met (iteration) | Multiple nodes per epoch |
Data sampling | All data per epoch | All data per epoch | All data per scale | One mini-batch per scale | All data per epoch |
Batch learning strategy | n/a | One by one | One by one | One by one | Matrix calculation |
Notation | Description |
---|---|
M | The maximum number of nodes. |
m | The current number of network nodes. |
The total number of starting points. | |
L | The total learning phase. |
All data in the point cloud. | |
A mini-batch of learning phase l. | |
The position features of the mini-batch in the learning stage l. | |
Data of index i. | |
D | The total number of data. |
W | Network nodes. |
The position features of network nodes. | |
k-th network node. | |
The error of node k. | |
The connection between node k and node j. | |
Batch size for learning phase l. | |
Accumulate weights to update network nodes. | |
A | The number of activations for the node. |
The temporary edge connection. | |
A small positive decimal number. | |
The node index list. | |
The learning rate of the winner node. | |
The learning rate of the winner node’s neighbors. | |
The error discount factor. |
Methods | Quantization Error | Computational Time |
---|---|---|
Standard GNG | 0.01696 ± 0.00194 | 40.41041 ± 4.22518 |
FCM-BL-GNG [21] | 0.01767 ± 0.00194 | 11,662.27850 ± 2798.33940 |
MS-BL-GNG [22] | 0.01742 ± 0.00225 | 220.92249 ± 8.02279 |
Fast MS-DBL-GNG1 | 0.02031 ± 0.00326 | 0.90484 ± 0.28579 |
Fast MS-DBL-GNG2 | 0.01299 ± 0.00144 | 2.34518 ± 0.43387 |
Fast MS-DBL-GNG3 | 0.01264 ± 0.00145 | 4.13767 ± 0.75200 |
Methods | Quantization Error | Computational Time |
---|---|---|
Voxel | 0.01357 ± 0.00001 | 0.01872 ± 0.00001 |
Octree | 0.02402 ± 0.00400 | 0.48581 ± 0.01760 |
Fast MS-DBL-GNG | 0.01145 ± 0.00131 | 49.88926 ± 0.79331 |
Distance Error | |||
---|---|---|---|
Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
View Type 1 | 0.45546 ± 0.15897 | 0.33185 ± 0.11623 | 0.23045 ± 0.12913 |
View Type 2 | 0.48632 ± 0.36297 | 0.43923 ± 0.36283 | 0.33663 ± 0.24264 |
Calibration Time (seconds) | |||
Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
View Type 1 | 1.30145 ± 0.45650 | 1.56605 ± 0.65369 | 0.26498 ± 0.34462 |
View Type 2 | 1.74838 ± 0.52938 | 2.03262 ± 0.84651 | 0.19595 ± 0.11501 |
Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
---|---|---|---|
Distance Error | 0.06328 ± 0.03121 | 0.09531 ± 0.11626 | 0.02779 ± 0.04742 |
Computational Time (Seconds) | 48.62213 ± 18.27764 | 1806.24649 ± 576.47094 | 135.37025 ± 20.09344 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Siow, C.Z.; Saputra, A.A.; Obo, T.; Kubota, N. A Fast Multi-Scale of Distributed Batch-Learning Growing Neural Gas for Multi-Camera 3D Environmental Map Building. Biomimetics 2024, 9, 560. https://doi.org/10.3390/biomimetics9090560