Event-Based Motion Capture System for Online Multi-Quadrotor Localization and Tracking
Figure 1. The event camera was mounted on the ceiling facing downward. Three quadrotors are positioned within the indoor arena, shown in the RGB (left) and event-based (right) camera streams.
Figure 2. Modular multi-process system architecture developed as part of this study. The detection and tracking and motion planning software services operated as processes on the host machine. The quadrotors received commands from their respective decentralized optimization-based motion planners over WiFi connections.
Figure 3. Various values of t_a were assessed by training experimental networks before selecting a value for dataset capture. (Left to right) t_a = 0.05 s, t_a = 0.1 s, and t_a = 0.15 s. The t_a that provided precision and recall values closest to 1 was used for all subsequent data and experiments.
Figure 4. Path planning experiment scenarios with multiple quadrotors, viewed from above. (a) Square path, (b) circle path, (c) lawnmower path, and (d) cubic spline path. A flight safety corridor (indicated by solid lines) was defined to enclose all waypoints. Each quadrotor was commanded by the motion planner to stay within this predefined flight corridor as it traversed the waypoints.
Figure 5. Three categories of experimental evaluations are presented in Section 4. These categories cover CNN algorithm comparisons, robustness of the motion capture system to environmental conditions, and performance evaluation for multi-quadrotor motion planning.
Figure 6. (a) Precision, (b) recall, and (c) F1 score evaluated on the YOLOv5 validation set.
Figure 7. Performance metrics for a variety of object detection networks. Two YOLO architectures as well as the Faster R-CNN and RetinaNet methods are compared in terms of precision and recall.
Figure 8. Sampling/inference rates in Hz for two YOLO architectures, Faster R-CNN, and RetinaNet object detection networks.
Figure 9. Detection performance was evaluated under various lighting conditions. LED light strips were used to modulate the brightness of the experiment environment. Also shown is the lux meter used for ambient light measurements.
Figure 10. The motion capture system was used to test collision avoidance with a decentralized optimization-based motion planner. In each flight test, the quadrotors successfully avoided violations of the safety distance d_safe. Flight tests involved 2 quadrotors (top panel of images) and 3 quadrotors (bottom panel of images) flying toward each other. All quadrotors operated at speeds between 0.2 m/s and 1 m/s, as determined by their respective motion planners.
Figure 11. Indoor flight tests involving up to six quadrotors in the arena, depicted in RGB (top) and event (bottom) formats.
Figure 12. Experiments involving 2 quadrotors in outdoor environments, depicted in RGB (left) and event (right) formats. For the outdoor experiments, the RGB and event cameras were mounted at different locations on the imaging quadrotor. Due to the different camera perspectives, the quadrotors appear at slightly different positions in the RGB and event images shown here.
Abstract
1. Introduction
- An event-based motion capture system for online multi-quadrotor motion planning. The implementation details and hardware–software architecture are presented in this manuscript. Comprehensive experimental validation results are provided to showcase demanding use-cases beyond a standard indoor laboratory setup. Additionally, key software functionality has been open-sourced.
- A novel open-source event camera dataset of multiple quadrotors flying simultaneously at various speeds and heights. To the best of our knowledge, this dataset is the first to involve event cameras in the capture of quadrotors in flight. Over 10,000 synthetic event images were captured over a series of flight test sessions taking place in both indoor and outdoor arenas. Challenging scenarios, including low-light conditions and unstable camera motions, are represented in the dataset. Bounding box annotations for each image are provided.
2. Related Work
3. System Architecture
3.1. Hardware
3.1.1. Camera Specifications
3.1.2. Quadrotor Specifications
3.1.3. Host Computer
3.1.4. Google Colaboratory (Colab) Cloud Computing Environment
3.2. Software
3.2.1. Detection and Tracking Service
Event Accumulator
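As one way to make the accumulation step concrete, the sketch below collects the asynchronous events that arrive within a fixed time window (assuming the t_a in Figure 3 denotes this accumulation window) into a single synthetic frame for the detector. The event tuple layout, polarity handling, and image dimensions are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def accumulate_events(events, t_start, t_a, height, width):
    """Accumulate events falling in [t_start, t_start + t_a) into one
    synthetic binary frame (illustrative; the paper's accumulator may
    differ in polarity handling and normalization)."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for t, x, y, polarity in events:
        if t_start <= t < t_start + t_a:
            frame[y, x] = 255  # mark any pixel that fired an event in the window
    return frame

# Example: events given as (timestamp [s], x, y, polarity) tuples
events = [(0.01, 10, 20, 1), (0.06, 11, 21, 0), (0.12, 300, 200, 1)]
frame = accumulate_events(events, t_start=0.0, t_a=0.1, height=480, width=640)
```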
Quadrotor Detection
IDTrack: Tracking Service
- New Quad Discovered: If this distance was greater than the width of the physical quadrotor chassis, the new detection was inferred to be a distinct quadrotor. A new node corresponding to this newly discovered quadrotor, holding its (x, y) central point coordinates, was added to the tracking structure. A counter variable was incremented by one each time a new node corresponding to a distinct quadrotor was added. This counter kept track of the sequence of IDs assigned to newly discovered detection central points.
- Same Quad Rediscovered: On the other hand, if the distance was less than or equal to this threshold, the new central point was inferred to belong to the same quadrotor represented by the previously stored central point. In this case, the stored central point was overwritten with the new one as the latest position information for the corresponding quadrotor. A minimal sketch of this association logic is given below.
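The following sketch illustrates the nearest-neighbor association described above, using SciPy's cKDTree for the distance query. The class name, the `chassis_width_px` threshold, and the update flow are illustrative assumptions, not the paper's exact implementation.

```python
from scipy.spatial import cKDTree

class IDTracker:
    """Assigns persistent IDs to per-frame detection central points
    (illustrative sketch of the logic described above)."""

    def __init__(self, chassis_width_px):
        self.threshold = chassis_width_px  # association distance threshold
        self.tracks = {}                   # quad ID -> latest (x, y) central point
        self.next_id = 0                   # counter for newly discovered quads

    def update(self, centers):
        """centers: list of (x, y) detection central points for one frame."""
        if not self.tracks:
            for c in centers:
                self.tracks[self.next_id] = c
                self.next_id += 1
            return dict(self.tracks)
        ids = list(self.tracks)
        tree = cKDTree([self.tracks[i] for i in ids])  # KD-tree over known centers
        for c in centers:
            dist, idx = tree.query(c, k=1)
            if dist <= self.threshold:
                # Same quad rediscovered: overwrite its stored central point
                self.tracks[ids[idx]] = c
            else:
                # New quad discovered: assign the next sequential ID
                self.tracks[self.next_id] = c
                self.next_id += 1
        return dict(self.tracks)
```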
3.2.2. Motion Planner Service
4. Experimental Setup and Results
- Detection Metrics: The detection performance was assessed using Precision (P), Recall (R), and F1 score. Precision is the ratio of the number of correct detections to the total number of detections, P = TP/(TP + FP). Recall is the ratio of the number of correct detections to the total number of true objects in the data, R = TP/(TP + FN). The F1 score is the harmonic mean of the two, F1 = 2PR/(P + R). TP, FP, and FN denote the number of True Positives, False Positives, and False Negatives, respectively.
- Motion Planning Metrics: The motion planner's performance was captured using two metrics: (1) waypoint navigation performance, measured by the waypoint boundary violation rate, and (2) the duration of each flight test. The boundary violation rate was the ratio of the number of frames in which a quadrotor was out of bounds of the flight safety corridor containing the waypoints to the total number of elapsed frames for the experimental run. The flight duration was the time, in seconds, it took for the last quadrotor to land during each flight test. Both metrics were computed starting once the quadrotors were airborne and instructed to proceed to the first waypoint. A sketch showing how these metrics could be computed follows this list.
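For concreteness, the sketch below shows how these metrics could be computed from logged detection counts and per-frame positions. The function names, the axis-aligned corridor representation, and the example numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def boundary_violation_rate(positions, corridor_min, corridor_max):
    """Fraction of elapsed frames in which a quadrotor lies outside the
    axis-aligned flight safety corridor."""
    pos = np.asarray(positions, dtype=float)  # shape (frames, 2): x, y per frame
    inside = np.all((pos >= corridor_min) & (pos <= corridor_max), axis=1)
    return 1.0 - inside.mean()

def flight_duration(landing_times):
    """Flight-test duration: time (s) until the last quadrotor lands."""
    return max(landing_times)

# Example usage with made-up numbers
print(detection_metrics(tp=95, fp=5, fn=10))                    # ~(0.95, 0.90, 0.93)
track = [(0.1, 0.2), (0.5, 0.9), (1.3, 0.4)]                    # per-frame positions (m)
print(boundary_violation_rate(track, (0.0, 0.0), (1.2, 1.2)))   # -> 0.33
print(flight_duration([11.0, 12.5, 14.0]))                      # -> 14.0
```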
4.1. Dataset
- 8300 (83%) images that depict quadrotors flying variable paths in an indoor arena;
- 500 (5%) images that depict quadrotors flying variable paths indoors using an unstable camera;
- 500 (5%) images that depict quadrotors flying variable paths indoors under low-light conditions;
- 700 (7%) images that depict quadrotors flying variable paths outdoors using an aerial camera.
4.2. Training Results of YOLOv5
4.3. Comparative Analysis with State of the Art
4.3.1. Detection Performance
4.3.2. Sampling (or Inference) Rate Analysis
4.4. Robustness Analysis
4.4.1. Effect of Varying Ambient Lighting Conditions
4.4.2. Effect of an Unstable Camera on Detection Performance
- In one case, a rope was affixed to the platform and harness attaching the event camera to the ceiling of the indoor experiment area. The rope was constantly pulled, introducing motion to the camera that could be observed as noise artifacts in the event frame. Readers are referred to the accompanying video for relevant footage.
- In the second case, the event-based camera was affixed to a quadrotor in a downward-facing orientation. Although the camera-equipped quadrotor was flown above the motion-coordinated quadrotors and instructed to hold its position in space, drift due to wind introduced instability to the quadrotor and camera.
4.5. Performance Validation for Motion Coordination
4.5.1. Effect of the Safety Distance on Flight Duration
4.5.2. Effect of Increasing the Number of Quadrotors on Flight Duration
4.5.3. Effect of Varying Quadrotor Paths on Flight Corridor Boundary Violations
5. Discussion
5.1. Strengths
- Experimental results indicate that YOLOv5 and YOLOv4 models performed well on GPU hardware for multi-quadrotor detection. YOLO architectures tested in this study outperformed two other comparable state-of-the-art CNN detector architectures. The performance of the proposed system displayed minimal deterioration under constrained lighting or camera instability.
- YOLOv5 ran natively on the PyTorch framework, making for a seamless development experience when interfacing with additional Python services. While YOLOv5 may be easier to bring quickly into production, YOLOv4 will still be used in circumstances where development in C using the Darknet neural network framework [48] is desirable.
- Detection performance remained consistently high across quadrotor counts, indicating that the method accommodates greater numbers of quadrotors.
- Runtime performance was unaffected by varying the number of active quadrotors, supporting the notion that the approach is scalable beyond 6 active quadrotors in the arena.
- The resulting dataset from this study fills a notable gap for the aerial robotics community as it begins to work with event cameras.
5.2. Limitations
- The minimum resolvable control distance for the quadrotor flight controller was 5 cm in the x and y directions. As such, it was not possible to test scenarios with a safety distance below 5 cm.
- As this method of detection and tracking occurs in 2D space at a fixed perspective, there were some notable challenges. For example, quadrotors that flew close to the camera occupied a significant portion of the camera’s field of view and occluded other quadrotor activity. Taller ceilings or incorporating multiple camera sources into this method would expand the detection and tracking area of the indoor arena. Multiple event camera streams would allow quadrotors to follow three-dimensional paths within the experiment arena and introduce camera redundancy and resilience to occluded sensors.
- As depicted in Figure 12, outdoor experiments were conducted with two quadrotors. However, further outdoor experiments with a higher number of quadrotors would shed light on the performance of such a system in windy conditions.
- Finally, as with any supervised CNN approach, there is a need for sufficient training data, which can be time-consuming (and hence expensive) to collect. The open-sourced dataset accompanying this study will provide a strong starting point for the research community.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Abichandani, P.; Speck, C.; Bucci, D.; Mcintyre, W.; Lobo, D. Implementation of Decentralized Reinforcement Learning-Based Multi-Quadrotor Flocking. IEEE Access 2021, 9, 132491–132507. [Google Scholar] [CrossRef]
- Fernando, M.; Liu, L. Formation Control and Navigation of a Quadrotor Swarm. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 284–291. [Google Scholar] [CrossRef]
- Schiano, F.; Franchi, A.; Zelazo, D.; Giordano, P. A rigidity-based decentralized bearing formation controller for groups of quadrotor UAVs. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 5099–5106. [Google Scholar]
- VICON. VICON Motion Capture System. Available online: https://www.vicon.com (accessed on 18 March 2022).
- Park, J.; Kim, D.; Kim, G.C.; Oh, D.; Kim, H.J. Online Distributed Trajectory Planning for Quadrotor Swarm with Feasibility Guarantee using Linear Safe Corridor. arXiv 2021, arXiv:2109.09041. [Google Scholar] [CrossRef]
- Shen, H.; Zong, Q.; Lu, H.; Zhang, X.; Tian, B.; He, L. A distributed approach for lidar-based relative state estimation of multi-UAV in GPS-denied environments. Chin. J. Aeronaut. 2022, 35, 59–69. [Google Scholar] [CrossRef]
- OptiTrack. OptiTrack Motion Capture System. Available online: https://optitrack.com/ (accessed on 18 March 2022).
- Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A study of vicon system positioning performance. Sensors 2017, 17, 1591. [Google Scholar] [CrossRef]
- Holešovskỳ, O.; Škoviera, R.; Hlaváč, V.; Vítek, R. Experimental Comparison between Event and Global Shutter Cameras. Sensors 2021, 21, 1137. [Google Scholar] [CrossRef] [PubMed]
- Glover, A.; Bartolozzi, C. Event-driven ball detection and gaze fixation in clutter. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2203–2208. [Google Scholar]
- Ji, Z.; Hu, W.; Wang, Z.; Yang, K.; Wang, K. Seeing through Events: Real-Time Moving Object Sonification for Visually Impaired People Using Event-Based Camera. Sensors 2021, 21, 3558. [Google Scholar] [CrossRef] [PubMed]
- Ozawa, T.; Sekikawa, Y.; Saito, H. Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation. Sensors 2022, 22, 773. [Google Scholar] [CrossRef] [PubMed]
- Mueggler, E.; Rebecq, H.; Gallego, G.; Delbruck, T.; Scaramuzza, D. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. Int. J. Robot. Res. 2017, 36, 142–149. [Google Scholar] [CrossRef]
- Zhu, A.Z.; Thakur, D.; Özaslan, T.; Pfrommer, B.; Kumar, V.; Daniilidis, K. The multivehicle stereo event camera dataset: An event camera dataset for 3D perception. IEEE Robot. Autom. Lett. 2018, 3, 2032–2039. [Google Scholar] [CrossRef] [Green Version]
- Dubeau, E.; Garon, M.; Debaque, B.; de Charette, R.; Lalonde, J.F. RGB-DE: Event camera calibration for fast 6-dof object tracking. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; pp. 127–135. [Google Scholar]
- Iaboni, C.; Patel, H.; Lobo, D.; Choi, J.W.; Abichandani, P. Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots. IEEE Access 2021, 9, 166588–166602. [Google Scholar] [CrossRef]
- Iaboni, C.; Lobo, D.; Choi, J.-W.; Abichandani, P. Event Quadrotor Motion Capture: Event Camera Dataset for Multiple Quadrotors. 2022. Available online: https://github.com/radlab-sketch/event-quadrotor-mocap (accessed on 17 March 2022).
- Kushleyev, A.; Mellinger, D.; Powers, C.; Kumar, V. Towards a swarm of agile micro quadrotors. Auton. Robot. 2013, 35, 287–300. [Google Scholar] [CrossRef]
- Jones, L. Coordination and Control for Multi-Quadrotor UAV Missions. 2012. Available online: https://calhoun.nps.edu/handle/10945/6816 (accessed on 18 March 2022).
- Zhou, D.; Wang, Z.; Schwager, M. Agile Coordination and Assistive Collision Avoidance for Quadrotor Swarms Using Virtual Structures. IEEE Trans. Robot. 2018, 34, 916–923. [Google Scholar] [CrossRef]
- Rodríguez-Gómez, J.P.; Eguíluz, A.G.; Martínez-de Dios, J.; Ollero, A. Asynchronous event-based clustering and tracking for intrusion monitoring in UAS. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8518–8524. [Google Scholar]
- Ramesh, B.; Zhang, S.; Lee, Z.W.; Gao, Z.; Orchard, G.; Xiang, C. Long-term Object Tracking with a Moving Event Camera. In Proceedings of the BMVC, Newcastle, UK, 3–6 September 2018; p. 241. [Google Scholar]
- Mitrokhin, A.; Fermüller, C.; Parameshwara, C.; Aloimonos, Y. Event-based moving object detection and tracking. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–9. [Google Scholar]
- Liu, H.; Moeys, D.P.; Das, G.; Neil, D.; Liu, S.C.; Delbrück, T. Combined frame-and event-based detection and tracking. In Proceedings of the 2016 IEEE International Symposium on Circuits and Systems (ISCAS), Montréal, QC, Canada, 22–25 May 2016; pp. 2511–2514. [Google Scholar]
- Chen, G.; Cao, H.; Ye, C.; Zhang, Z.; Liu, X.; Mo, X.; Qu, Z.; Conradt, J.; Röhrbein, F.; Knoll, A. Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors. Front. Neurorobot. 2019, 13, 10. [Google Scholar] [CrossRef] [PubMed]
- Duo, J.; Zhao, L. An Asynchronous Real-Time Corner Extraction and Tracking Algorithm for Event Camera. Sensors 2021, 21, 1475. [Google Scholar] [CrossRef]
- Lakshmi, A.; Chakraborty, A.; Thakur, C.S. Neuromorphic vision: From sensors to event-based algorithms. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1310. [Google Scholar] [CrossRef]
- Jiang, R.; Mou, X.; Shi, S.; Zhou, Y.; Wang, Q.; Dong, M.; Chen, S. Object tracking on event cameras with offline–online learning. CAAI Trans. Intell. Technol. 2020, 5, 165–171. [Google Scholar] [CrossRef]
- Jiang, Z.; Xia, P.; Huang, K.; Stechele, W.; Chen, G.; Bing, Z.; Knoll, A. Mixed Frame-/Event-Driven Fast Pedestrian Detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8332–8338. [Google Scholar]
- Ryan, C.; O’Sullivan, B.; Elrasad, A.; Cahill, A.; Lemley, J.; Kielty, P.; Posch, C.; Perot, E. Real-time face & eye tracking and blink detection using event cameras. Neural Netw. 2021, 141, 87–97. [Google Scholar] [CrossRef] [PubMed]
- Duwek, H.C.; Bitton, A.; Tsur, E.E. 3D Object Tracking with Neuromorphic Event Cameras via Image Reconstruction. In Proceedings of the 2021 IEEE Biomedical Circuits and Systems Conference (BioCAS), Berlin, Germany, 7–9 October 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Paredes-Vallés, F.; Scheper, K.Y.; De Croon, G.C. Unsupervised learning of a hierarchical spiking neural network for optical flow estimation: From events to global motion perception. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2051–2064. [Google Scholar] [CrossRef] [Green Version]
- Hagenaars, J.; Paredes-Vallés, F.; De Croon, G. Self-supervised learning of event-based optical flow with spiking neural networks. Adv. Neural Inf. Process. Syst. 2021, 34, 7167–7179. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common Objects in Context. 2015. Available online: http://xxx.lanl.gov/abs/1405.0312 (accessed on 18 March 2022).
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Orchard, G.; Jayawant, A.; Cohen, G.K.; Thakor, N. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades. Front. Neurosci. 2015, 9, 437. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sironi, A.; Brambilla, M.; Bourdis, N.; Lagorce, X.; Benosman, R. HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1731–1740. [Google Scholar] [CrossRef] [Green Version]
- Ryze Tello Specs. Available online: https://www.ryzerobotics.com/tello/specs (accessed on 18 March 2022).
- Google. Colaboratory: Frequently Asked Questions. 2022. Available online: https://research.google.com/colaboratory/faq.html (accessed on 18 March 2022).
- Jocher, G. YOLOv5. 2021. Available online: https://github.com/ultralytics/yolov5 (accessed on 18 March 2022).
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517. [Google Scholar] [CrossRef]
- Abichandani, P.; Levin, K.; Bucci, D. Decentralized formation coordination of multiple quadcopters under communication constraints. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3326–3332. [Google Scholar]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2019; pp. 8024–8035. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. 2016. Available online: http://xxx.lanl.gov/abs/1506.01497 (accessed on 18 March 2022).
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. 2018. Available online: http://xxx.lanl.gov/abs/1708.02002 (accessed on 18 March 2022).
- Holešovský, O.; Hlaváč, V.; Škoviera, R. Practical high-speed motion sensing: Event cameras vs. global shutter. In Proceedings of the Computer Vision Winter Workshop 2020, Snowmass Village, CO, USA, 1–5 March 2020. [Google Scholar]
- Redmon, J. Darknet: Open Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed on 18 March 2022).
- Sun, S.; Cioffi, G.; de Visser, C.; Scaramuzza, D. Autonomous Quadrotor Flight Despite Rotor Failure With Onboard Vision Sensors: Frames vs. Events. IEEE Robot. Autom. Lett. 2021, 6, 580–587. [Google Scholar] [CrossRef]
- Amer, K.; Samy, M.; Shaker, M.; ElHelw, M. Deep Convolutional Neural Network-Based Autonomous Drone Navigation. 2019. Available online: http://xxx.lanl.gov/abs/1905.01657 (accessed on 18 March 2022).
- Jembre, Y.Z.; Nugroho, Y.W.; Khan, M.T.R.; Attique, M.; Paul, R.; Shah, S.H.A.; Kim, B. Evaluation of Reinforcement and Deep Learning Algorithms in Controlling Unmanned Aerial Vehicles. Appl. Sci. 2021, 11, 7240. [Google Scholar] [CrossRef]
| Evaluation Dimension | Evaluation Method | Evaluation Metric |
|---|---|---|
| NN training results | (Offline) validation dataset | Precision, Recall, F1 score |
| Performance comparison of NNs | (Offline) testing set | Precision, Recall, Sampling rate |
| Ambient lighting conditions | Varied lux levels | Precision, Recall |
| Unstable camera conditions (indoors) | Shaking fixed camera | Precision, Recall |
| Unstable camera conditions (outdoors) | Mounting camera facing downwards on a quadrotor | Precision, Recall |
| Effect of safety distance on flight time | Optimization-based motion planning | Flight duration |
| Effect of number of quadrotors on flight time | Optimization-based motion planning | Flight duration |
| Effect of quadrotor paths on boundary violations | Optimization-based motion planning | Boundary violation rate |
| Safety Distance (cm) | 2 Quadrotors | 3 Quadrotors | 6 Quadrotors |
|---|---|---|---|
|  | 12 | 14 | N/A |
|  | 11 | 12 | 19 |
| Quadrotors / Environment | Square | Circle | Lawnmower | Spline |
|---|---|---|---|---|
| 1—Indoor | 0.04 | 0.08 | 0.08 | 0.07 |
| 2—Indoor | 0.07 | 0.04 | 0.04 | 0.09 |
| 2—Outdoor | 0.11 | N/A | N/A | N/A |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).