Abstract
Event-based cameras are becoming increasingly popular due to their asynchronous spatio-temporal output, high temporal resolution, power efficiency, and high dynamic range. Despite these benefits, adoption of these sensors has been hindered mainly by their high cost. While prices are decreasing and commercial options exist, researchers and developers still face barriers to exploring the potential of event-based vision, especially with the more specialized models. Accurate event-based simulators and emulators exist, but their primary limitation is that they typically cannot operate in real time and are designed only for grey-scale video streams. This creates a gap between theoretical exploration and practical application, hindering the seamless integration of event-based systems into real-world applications, especially in robotics. Moreover, although the importance of color information is well recognized for many vision tasks, with few exceptions existing event-based cameras do not capture color. To address this challenge, we propose a ROS-based color event camera emulator that helps close the gap between theory and the real-world applicability of color event-based vision, and we present its software design and implementation. Finally, we report a preliminary evaluation of its performance.
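To make the emulator concept concrete, the sketch below illustrates the standard contrast-threshold event model used by grey-scale emulators such as ESIM and v2e, applied independently to each RGB channel: a pixel fires an event when its log intensity drifts beyond a threshold from the level at which it last fired. This is a minimal illustration under assumed parameters, not the paper's actual implementation; the function name, the threshold value, the single-event-per-change simplification, and the midpoint timestamps are all hypothetical choices made for brevity.

```python
import numpy as np

CONTRAST_THRESHOLD = 0.15  # assumed log-intensity contrast threshold C
EPS = 1e-6                 # avoids log(0) on fully dark pixels


def emit_color_events(frame, t_prev, t_curr, ref_log):
    """Return (x, y, t, channel, polarity) events for one new RGB frame.

    ref_log stores, per pixel and per channel, the log intensity at which
    the last event fired; it is updated in place, mimicking the memory of
    a real event pixel.
    """
    curr_log = np.log(frame.astype(np.float64) + EPS)
    diff = curr_log - ref_log
    events = []
    # An event fires wherever the log-intensity change exceeds the threshold.
    ys, xs, cs = np.nonzero(np.abs(diff) >= CONTRAST_THRESHOLD)
    for y, x, c in zip(ys, xs, cs):
        polarity = 1 if diff[y, x, c] > 0 else -1
        # Timestamp events halfway between frames as a crude stand-in for
        # the brightness-crossing interpolation real emulators perform.
        t = 0.5 * (t_prev + t_curr)
        events.append((int(x), int(y), t, int(c), polarity))
        # Real emulators fire several events for a large change; one per
        # frame suffices to illustrate the reference-level update.
        ref_log[y, x, c] += polarity * CONTRAST_THRESHOLD
    return events


if __name__ == "__main__":
    f0 = np.full((4, 4, 3), 100, dtype=np.uint8)
    f1 = f0.copy()
    f1[1, 2, 0] = 160  # brighten the red channel of one pixel
    ref = np.log(f0.astype(np.float64) + EPS)  # initialize pixel memory
    print(emit_color_events(f1, 0.0, 0.033, ref))
```

In a ROS setting, a node built around a function like this would subscribe to an RGB image topic and publish the resulting per-channel event stream, which is the general structure the abstract describes.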
Acknowledgement
This work was partly supported by the Programa de Equipamiento Científico y Tecnológico, ANID, Chile (FONDEQUIP) Project under Grant EQM170041.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Bugueno-Cordova, I., Campusano, M., Guaman-Rivera, R., Verschae, R. (2024). A Color Event-Based Camera Emulator for Robot Vision. In: Filipe, J., Röning, J. (eds) Robotics, Computer Vision and Intelligent Systems. ROBOVIS 2024. Communications in Computer and Information Science, vol 2077. Springer, Cham. https://doi.org/10.1007/978-3-031-59057-3_24
DOI: https://doi.org/10.1007/978-3-031-59057-3_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-59056-6
Online ISBN: 978-3-031-59057-3