An Application-Driven Survey on Event-Based Neuromorphic Computer Vision
Figure 1. Two commercial examples of neuromorphic sensors. (a) The Prophesee EVK4, which uses the Sony IMX636 CMOS sensor. (b) The Inivation DAVIS346. Courtesy of Prof. Maria Martini, Kingston University London, UK.
Figure 2. A summary of the paper organization. Critical analysis and discussions are highlighted in red text.
Figure 3. Analogies between the human visual system (top) and the neuromorphic vision sensor (bottom). "Neuron" (https://skfb.ly/oyUVY, accessed on 4 August 2024) by mmarynguyen is licensed under Creative Commons Attribution-NonCommercial. "Human Head" (https://skfb.ly/ouFsp, accessed on 4 August 2024) by VistaPrime is licensed under Creative Commons Attribution. Car and lens generated with Adobe Firefly©.
Figure 4. Output from a neuromorphic sensor (left) and a frame-based camera (right) recording a rotating PC fan, shown in the xyt-plane. ON and OFF events are rendered as blue and black pixels, respectively, on a white background.
Figure 5. (a) An example accumulation image from an event camera moving in an indoor environment. ON and OFF events are rendered as blue and black pixels, respectively, on a white background. (b) The same scene captured by an RGB camera.
Figure 6. Two frames with the respective color and event information from the CED: Color Event Dataset [16].
Figure 7. A scheme of the manuscript selection process. Documents added after filtering the output of the Scopus search are reported as "injection" in the diagram.
Figure 8. The three-level hierarchical organization used to classify computer vision tasks. The amount of data typically decreases at higher-level representations.
Figure 9. The association between computer vision tasks and application domains for the works analyzed in Section 6.
Abstract
1. Introduction
- High latency: when data are sampled at a fixed rate, independent of the stimuli, the delay between the presentation of a physical stimulus, its transduction into analog values, and its encoding into a digital representation is high.
- Low dynamic range: frame-based cameras struggle to handle scenes with very large variations in brightness.
- Power consumption: high-quality, feature-rich frame-based cameras consume considerable power, making them impractical in many resource-constrained environments.
- Motion blur: when capturing high-speed motion, frame-based cameras introduce motion blur, affecting subsequent image processing accuracy.
- Limited frame rates: traditional sensors operate at limited frame rates (typically in the order of 20–240 fps); high-speed recording requires specialized, highly complex, and expensive equipment.
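Event cameras sidestep these limitations by reporting per-pixel brightness changes asynchronously. As a rough illustration (ours, not from the paper), the sketch below accumulates a hypothetical stream of (x, y, t, polarity) events into an image over a time window, drawing ON events in blue and OFF events in black on a white background, as in the accumulation images shown in the figures.

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Render an accumulation image from an event stream.

    Each event is a (x, y, t, polarity) tuple, where polarity is
    +1 (ON, brightness increase) or -1 (OFF, brightness decrease).
    Events falling in [t_start, t_end) are drawn onto a white canvas.
    """
    img = np.full((height, width, 3), 255, dtype=np.uint8)  # white background
    for x, y, t, p in events:
        if t_start <= t < t_end:
            # ON events in blue, OFF events in black (RGB)
            img[y, x] = (0, 0, 255) if p > 0 else (0, 0, 0)
    return img

# Two synthetic events on a hypothetical 4x4 sensor
events = [(1, 1, 0.002, +1), (2, 3, 0.004, -1)]
img = accumulate_events(events, width=4, height=4, t_start=0.0, t_end=0.01)
```

Note that, unlike a frame, the window length is a free rendering parameter: the same stream can be re-accumulated at any temporal resolution.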
- a collection of past surveys about neuromorphic cameras and computer vision with the introduction of a specific taxonomy to easily classify and refer to them;
- a critical analysis driven by the different application domains, instead of low-, medium-, or high-level computer vision tasks;
- an updated review that includes recent works and research outcomes.
2. Neuromorphic Cameras
3. Materials and Methods
- TITLE(
- ("neuromorphic" AND ("vision" OR "camera" OR "sensor"))
- OR "event camera" OR "event based camera" OR "event triggered camera"
- OR "dynamic vision sensor" OR "event based sensor" OR "event sensor"
- OR "address event representation" OR "event vision sensor"
- OR "event based vision sensor" OR "silicon retina")
- OR KEY(
- ("neuromorphic" AND ("vision" OR "camera" OR "sensor"))
- OR "event camera" OR "event based camera" OR "event triggered camera"
- OR "dynamic vision sensor" OR "event based sensor" OR "event sensor"
- OR "address event representation" OR "event vision sensor"
- OR "event based vision sensor" OR "silicon retina")
- AND LIMIT-TO(LANGUAGE, "English")
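For reproducibility, the boolean query above can also be assembled programmatically before submission to the Scopus search interface. The following sketch (our illustration; the term list is copied verbatim from the query) builds the same TITLE/KEY string from a single list of search terms:

```python
# Search terms taken from the Scopus query in Section 3.
TERMS = [
    '("neuromorphic" AND ("vision" OR "camera" OR "sensor"))',
    '"event camera"', '"event based camera"', '"event triggered camera"',
    '"dynamic vision sensor"', '"event based sensor"', '"event sensor"',
    '"address event representation"', '"event vision sensor"',
    '"event based vision sensor"', '"silicon retina"',
]

def build_query(terms):
    """Join the terms with OR and apply them to both TITLE and KEY fields."""
    clause = " OR ".join(terms)
    return (f"TITLE({clause}) OR KEY({clause}) "
            f'AND LIMIT-TO(LANGUAGE, "English")')

query = build_query(TERMS)
```

Keeping the term list in one place guarantees that the TITLE and KEY clauses stay identical, which is easy to violate when editing the query by hand.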
4. Other Surveys
- Surveys based on the development of neuromorphic vision sensors: these works focus on physical sensor design and hardware aspects, ranging from conventional devices based on integrated circuits to new emerging technologies;
- Surveys based on a specific application domain: these works mainly focus on a very specific topic, reconstructing the evolution of proposed methods to address the challenges related to the specific domain;
- Surveys based on a collection of methods: these works consider how classic computer vision problems have been redesigned when the input comes from a neuromorphic sensor.
4.1. Development of Neuromorphic Vision Sensors
4.2. Specific Domain
Spiking Neural Networks
4.3. Collection of Methods
4.4. Analysis and Discussion
5. Event Cameras and Computer Vision Tasks
6. Applications
6.1. Agriculture and Animal Monitoring
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[60] (2021) | Autonomous navigation in agricultural environments | DVS, Depth, Color, LiDAR | Released [64] | SLAM | Conference (IEEE ICRA) |
[59] (2022) | Fruit detection | DAVIS | n.a. | Segmentation | MPhil thesis work |
[62] (2022) | Fish trajectory tracking | DVS, Frame-based | n.a. | Detection, tracking | arXiv preprint |
[61] (2023) | Autonomous navigation in dense vegetation environments | DAVIS | Released [65] | SLAM | Journal (Science Robotics) |
[63] (2023) | Penguin behavior analysis | DAVIS | Released [66] | Classification | Conference (IEEE/CVF CVPR) |
6.2. Surveillance and Security
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[67] (2006) | Traffic surveillance | DVS | n.a. | Detection, tracking | Conference (IEEE DSP) |
[68] (2007) | Traffic surveillance | DVS | n.a. | Classification, tracking | Conference (ACM ICDSC) |
[69] (2012) | People detection | DVS | n.a. | Tracking | Conference (IEEE/CVF CVPRW) |
[71] (2019) | Anomalous behavior analysis | DAVIS | n.a. | Classification | arXiv preprint |
[73] (2020) | Intrusion detection | DAVIS, IMU | Released [76] | Detection, tracking | Conference (IEEE ICRA) |
[70] (2021) | UAV tracking | DVS | n.a. | Tracking | Conference (IEEE IROS) |
[72] (2021) | Intrusion detection | DAVIS | n.a. | Detection, tracking | Conference (IEEE ICUAS) |
[74] (2022) | Intrusion detection | DAVIS | n.a. | Detection, tracking | Conference (IEEE SSRR) |
[75] (2022) | Person re-identification | Synthetic data | SAIVT [77], DukeMTMC-reid [78] | Detection, classification | Conference (IEEE/CVF WACV) |
6.3. Visual Inspection and Machinery Fault Diagnosis
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[79] (2011) | Sensor characterization in machine vision | DVS | n.a. | - (raw data analysis) | Conference (IEEE SIGMAP) |
[81] (2011) | Particle tracking | DVS, ultra high-speed camera | n.a. | Tracking | Journal (Springer Experiments in Fluids) |
[80] (2012) | Particle tracking | DVS, frame-based camera | n.a. | Detection, tracking | Journal (Wiley, Journal of Microscopy) |
[82] (2022) | Magnetic materials analysis | DVS, microscopy | n.a. | - (raw data analysis) | Journal (AIP Advances) |
[83] (2022) | Corn grain counting | DVS | n.a. | Tracking | arXiv preprint |
[89] (2022) | Fault detection in industrial pipeline | DAVIS | n.a. | Detection, classification | Journal (Frontiers in Neurorobotics) |
[84] (2023) | Machine fault detection | DVS | n.a. | Classification | Journal (IEEE Transactions on Industrial Informatics) |
[85] (2023) | Rotational speed estimation | DAVIS | n.a. | Segmentation | Journal (IEEE Transactions on Mobile Computing) |
[88] (2023) | Schlieren imaging | DVS, frame-based camera | Released [91] | - (trajectory estimation by optical flow) | Journal (IEEE TPAMI) |
[87] (2024) | Optical disdrometers | DAVIS | n.a. | - (raw data analysis) | Journal (Copernicus Publications, Atmospheric Measurement Techniques) |
6.4. Space Imaging and Space Situational Awareness
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[92] (2019) | Space imaging | ATIS, DAVIS, telescope | n.a. | Detection | Journal (Springer, The Journal of the Astronautical Sciences) |
[94] (2019) | Star tracker | DAVIS | Released [98] | - (modeling, parameter estimation) | Conference (IEEE/CVF CVPRW) |
[93] (2020) | Space imaging | ATIS, DAVIS, telescope | Released [99] | Detection, tracking | Journal (IEEE Sensors Journal) |
[95] (2021) | Sensor characterization for space applications | DVS | n.a. | - (signal processing techniques) | Journal (IEEE Access) |
[97] (2022) | Resident space objects analysis | Simulated data | Released [100] | Detection, tracking | Journal (Frontiers in Neuroscience) |
[96] (2023) | Calibration for space imaging | DVS, telescope | Released [101] | Tracking | Journal (Springer Astrodynamics) |
6.5. Eye Tracking, Gaze Estimation, and Driver Monitoring Systems
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[116] (2020) | DMS | DAVIS, RGBD, IR | Released [118] | Detection, classification | Journal (IEEE Transactions on Intelligent Transportation Systems) |
[105] (2021) | DMS | Simulated data | n.a. | Detection, tracking | Journal (Elsevier Neural Networks) |
[108] (2021) | Gaze estimation | DAVIS with NIR lenses, video stimulus | Released [119] | Detection, tracking | Journal (IEEE Transactions on Visualization and Computer Graphics) |
[104] (2022) | Gaze estimation | Simulated data | OpenEDS [120], TEyeD [121] | Segmentation | Conference (IEEE VR) |
[110] (2022) | Eye tracking | DVS | n.a. | Tracking | Conference (IEEE/CVF WACV) |
[115] (2022) | Face detection for DMS | DAVIS | NeuroIV [116] | Detection | Conference (IEEE ICARM) |
[107] (2023) | Pupil localization | Simulated data | WIDER FACE [122] | Detection | Journal (IEEE Access) |
[112] (2023) | Eye tracking | DAVIS | Angelopoulos et al. [108] | Detection | Conference (IEEE AICAS) |
[113] (2023) | DMS | Synthetic data | BIWI [123] | Detection, tracking | Journal (IEEE Access) |
[114] (2023) | DMS | DVS | YawDD [124] | Classification | Journal (IEEE Access) |
[117] (2023) | DMS | DVS | n.a. | Classification | Journal (IEEE Open Journal of Vehicular Technology) |
[109] (2024) | Gaze estimation | DAVIS, video stimulus | Available on request | Detection | arXiv preprint |
[111] (2024) | Gaze estimation | DAVIS | Angelopoulos et al. [108] | Segmentation, detection | Journal (IEEE TPAMI) |
6.6. Gesture, Action Recognition, and Human Pose Estimation
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[125] (2011) | Gesture recognition | DVS | n.a. | Classification | Journal (IEEE TPAMI) |
[131] (2011) | Gesture recognition | DVS | n.a. | Classification | Conference (IEEE CIMSIVP) |
[132] (2012) | Hand gesture UI | DVS | n.a. | Classification | Conference (IEEE ICIP) |
[133] (2017) | Gesture recognition | DVS | Released [146] | Classification | Conference (IEEE/CVF CVPR) |
[126] (2019) | Pose estimation | DAVIS, motion capture | Released [147] | Pose estimation | Conference (IEEE/CVF CVPRW) |
[134] (2019) | Gesture recognition | DVS | DVS128 Gesture [133] | Classification | Conference (IEEE/CVF WACV) |
[137] (2020) | Gesture recognition | DVS | DVS128 Gesture [133], DHP19 [126] | Classification | Conference (IEEE ISCAS) |
[127] (2020) | Pose estimation | DAVIS | n.a. | Pose estimation, tracking | Conference (IEEE/CVF CVPR) |
[139] (2021) | Gesture recognition | DAVIS | n.a. | Classification, tracking | Journal (IEEE Transactions on Automation Science and Engineering) |
[142] (2022) | Action recognition | DAVIS | DVS128 Gesture [133], N-Caltech101 [148], DVSAction [149], NeuroIV [116] | Classification | Journal (IEEE Access) |
[140] (2022) | Sign language | DVS | Released [150] | Classification | Journal (Springer, Pattern Analysis and Applications) |
[143] (2022) | Action recognition | DVS | DVS128 Gesture [133], UCF101-DVS [151], HMDB51-DVS [151] | Classification | Conference (IEEE CRC) |
[128] (2023) | Pose estimation for dancing | DAVIS, RGB (HD), motion capture | Released | Pose estimation | Journal (Elsevier Neurocomputing) |
[130] (2023) | Pose estimation | DVS | DHP19 [126], Human3.6m [152] | Pose estimation | Conference (IEEE/CVF CVPR) |
[141] (2023) | Sign language | DAVIS | Released [153] | Classification | Journal (MDPI Electronics) |
[145] (2023) | Action recognition | Synthetic data | N-EPIC-Kitchens [154] | Classification | Conference (IEEE IROS) |
6.7. Medicine and Healthcare
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[155] (2008) | Fall detection | DVS | n.a. | Classification | Journal (IEEE Transactions on Biomedical Circuits and Systems) |
[158] (2016) | Assisting device | DVS | n.a. | Detection | Conference (IEEE Healthcom) |
[159] (2016) | Assisting device | DVS | n.a. | - (raw data analysis) | Conference (IEEE BioCAS) |
[160] (2018) | Cellular analysis | ATIS, Frame-based camera | n.a. | Detection, tracking | PhD Thesis |
[156] (2022) | Fall detection | DAVIS | n.a. | Classification | Journal (IEEE Transactions on Cybernetics) |
[157] (2023) | Heart rate detection | DVS | n.a. | - (raw data analysis) | Conference (IEEE ISM) |
[161] (2023) | Cellular analysis (SMLM) | DVS, microscope | Released [162] | Detection | Journal (Nature Photonics) |
6.8. Intelligent Transportation Systems
Work | Application | Sensors | Datasets | Main Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[163] (2018) | Vehicle detection | DAVIS | n.a. | Detection, tracking | Journal (Hindawi, Journal of Advanced Transportation) |
[166] (2019) | Lane detection | DVS | Released [170] | Classification, segmentation | Conference (IEEE/CVF CVPRW) |
[169] (2020) | Light positioning system | DAVIS | n.a. | - (image processing techniques) | Journal (IEEE Sensors Journal) |
[165] (2021) | Vehicle detection | DVS | n.a. | Detection, classification | Journal (IEEE Sensors Journal) |
[167] (2021) | Vehicle detection for autonomous navigation | DVS, Grayscale | DDT-17 [171] | Detection | Journal (IEEE Sensors Journal) |
[168] (2022) | Traffic sign detection | DVS | The 1 Megapixel Automotive Detection Dataset [172] | Detection | Conference (IEEE SPA) |
[164] (2023) | Vehicle detection | Simulated data | n.a. | Detection | Conference (IEEE IV) |
6.9. Robotics
Work | Application | Sensors | Datasets | Computer Vision Tasks | Publication Type |
---|---|---|---|---|---|
[182] (2014) | Terrain reconstruction | DVS, laser scanner | n.a. | - (raw data analysis) | Journal (Frontiers in Neuroscience) |
[173] (2020) | Obstacle avoidance | DVS | n.a. | Segmentation | Journal (Science Robotics) |
[181] (2020) | Steering prediction | DAVIS | Released [183] | - (raw data and steering angles prediction) | Conference (IEEE ITSC) |
[175] (2021) | Robot detection | DVS | n.a. | Tracking | Journal (IEEE Access) |
[174] (2022) | Visual Odometry | DVS | KITTI [184], ORB-SLAM2 [185] | SLAM | Journal (MDPI Sensors) |
[177] (2022) | Robot grasping | DAVIS | n.a. | Segmentation | Journal (Springer, Journal of Intelligent Manufacturing) |
[178] (2023) | Load transportation with UAVs | DAVIS | n.a. | Segmentation | Journal (IEEE Robotics and Automation Letters) |
[180] (2024) | LED-based communication system | DVS | n.a. | Detection | Conference (ACM AAMAS) |
7. Discussion
Future Directions
8. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
AER | Address-event representation |
APS | Active pixel sensor |
ATIS | Asynchronous time-based image sensor |
CNN | Convolutional neural network |
DAVIS | Dynamic and active pixel vision sensor |
DGCNN | Dynamic graph CNN |
DMS | Driver monitoring system |
DVS | Dynamic vision sensor |
FAGC | Feature attention gate component |
MOKE | Magneto-optic Kerr effect |
n.a. | Not available |
NIR | Near-infrared |
SLAM | Simultaneous localization and mapping |
SNN | Spiking neural network |
SSA | Space situational awareness |
SSNN | Sparse neural network models |
UAV | Unmanned aerial vehicle |
VLSI | Very large-scale integration |
References
- Golnabi, H.; Asadpour, A. Design and application of industrial machine vision systems. Robot. Comput.-Integr. Manuf. 2007, 23, 630–637. [Google Scholar] [CrossRef]
- Furmonas, J.; Liobe, J.; Barzdenas, V. Analytical review of event-based camera depth estimation methods and systems. Sensors 2022, 22, 1201. [Google Scholar] [CrossRef] [PubMed]
- Fukushima, K.; Yamaguchi, Y.; Yasuda, M.; Nagata, S. An electronic model of the retina. Proc. IEEE 1970, 58, 1950–1951. [Google Scholar] [CrossRef]
- Mead, C.A.; Mahowald, M.A. A silicon model of early visual processing. Neural Netw. 1988, 1, 91–97. [Google Scholar] [CrossRef]
- Dong, Y.; Li, Y.; Zhao, D.; Shen, G.; Zeng, Y. Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition. Adv. Neural Inf. Process. Syst. 2023, 36, 1923–1937. [Google Scholar]
- Prophesee Evaluation Kit 4. Available online: https://www.prophesee.ai/event-camera-evk4/ (accessed on 31 May 2024).
- Inivation DAVIS346 Specifications. Available online: https://inivation.com/wp-content/uploads/2019/08/DAVIS346.pdf (accessed on 31 May 2024).
- Li, H.; Yu, H.; Wu, D.; Sun, X.; Pan, L. Recent advances in bioinspired vision sensor arrays based on advanced optoelectronic materials. APL Mater. 2023, 11, 081101. [Google Scholar] [CrossRef]
- Etienne-Cummings, R.; Van der Spiegel, J. Neuromorphic vision sensors. Sens. Actuators Phys. 1996, 56, 19–29. [Google Scholar] [CrossRef]
- Li, Z.; Sun, H. Artificial intelligence-based spatio-temporal vision sensors: Applications and prospects. Front. Mater. 2023, 10, 1269992. [Google Scholar] [CrossRef]
- Gallego, G.; Delbrück, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-based vision: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 154–180. [Google Scholar] [CrossRef]
- Liao, F.; Zhou, F.; Chai, Y. Neuromorphic vision sensors: Principle, progress and perspectives. J. Semicond. 2021, 42, 013105. [Google Scholar] [CrossRef]
- Lichtsteiner, P.; Posch, C.; Delbruck, T. A 128 × 128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change. In Proceedings of the 2006 IEEE International Solid State Circuits Conference-Digest of Technical Papers, San Francisco, CA, USA, 6–9 February 2006; IEEE: New York, NY, USA, 2006; pp. 2060–2069. [Google Scholar]
- Posch, C.; Matolin, D.; Wohlgenannt, R. A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS. IEEE J. Solid-State Circuits 2010, 46, 259–275. [Google Scholar] [CrossRef]
- Berner, R.; Brandli, C.; Yang, M.; Liu, S.C.; Delbruck, T. A 240 × 180 10 mW 12 µs latency sparse-output vision sensor for mobile applications. In Proceedings of the 2013 Symposium on VLSI Circuits, Kyoto, Japan, 12–14 June 2013; IEEE: New York, NY, USA, 2013; pp. C186–C187. [Google Scholar]
- Scheerlinck, C.; Rebecq, H.; Stoffregen, T.; Barnes, N.; Mahony, R.; Scaramuzza, D. CED: Color event camera dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; IEEE: New York, NY, USA, 2019. [Google Scholar]
- Posch, C.; Serrano-Gotarredona, T.; Linares-Barranco, B.; Delbruck, T. Retinomorphic event-based vision sensors: Bioinspired cameras with spiking output. Proc. IEEE 2014, 102, 1470–1484. [Google Scholar] [CrossRef]
- Mongeon, P.; Paul-Hus, A. The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics 2016, 106, 213–228. [Google Scholar] [CrossRef]
- Indiveri, G.; Kramer, J.; Koch, C. Neuromorphic Vision Chips: Intelligent sensors for industrial applications. In Proceedings of Advanced Microsystems for Automotive Applications; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
- Kramer, J.; Indiveri, G. Neuromorphic vision sensors and preprocessors in system applications. In Proceedings of the Advanced Focal Plane Arrays and Electronic Cameras II, SPIE, Zurich, Switzerland, 7 September 1998; Volume 3410, pp. 134–146. [Google Scholar]
- Indiveri, G. Neuromorphic VLSI models of selective attention: From single chip vision sensors to multi-chip systems. Sensors 2008, 8, 5352–5375. [Google Scholar] [CrossRef]
- Liu, S.C.; Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 2010, 20, 288–295. [Google Scholar] [CrossRef]
- Wu, N. Neuromorphic vision chips. Sci. China Inf. Sci. 2018, 61, 1–17. [Google Scholar] [CrossRef]
- Kim, M.S.; Kim, M.S.; Lee, G.J.; Sunwoo, S.H.; Chang, S.; Song, Y.M.; Kim, D.H. Bio-inspired artificial vision and neuromorphic image processing devices. Adv. Mater. Technol. 2022, 7, 2100144. [Google Scholar] [CrossRef]
- Steffen, L.; Reichard, D.; Weinland, J.; Kaiser, J.; Roennau, A.; Dillmann, R. Neuromorphic stereo vision: A survey of bio-inspired sensors and algorithms. Front. Neurorobot. 2019, 13, 28. [Google Scholar] [CrossRef]
- Chen, G.; Cao, H.; Conradt, J.; Tang, H.; Rohrbein, F.; Knoll, A. Event-based neuromorphic vision for autonomous driving: A paradigm shift for bio-inspired visual sensing and perception. IEEE Signal Process. Mag. 2020, 37, 34–49. [Google Scholar] [CrossRef]
- Sandamirskaya, Y.; Kaboli, M.; Conradt, J.; Celikel, T. Neuromorphic computing hardware and neural architectures for robotics. Sci. Robot. 2022, 7, eabl8419. [Google Scholar] [CrossRef]
- Aboumerhi, K.; Güemes, A.; Liu, H.; Tenore, F.; Etienne-Cummings, R. Neuromorphic applications in medicine. J. Neural Eng. 2023, 20, 041004. [Google Scholar] [CrossRef] [PubMed]
- Sun, R.; Shi, D.; Zhang, Y.; Li, R.; Li, R. Data-driven technology in event-based vision. Complexity 2021, 2021, 1–19. [Google Scholar] [CrossRef]
- Bartolozzi, C.; Indiveri, G.; Donati, E. Embodied neuromorphic intelligence. Nat. Commun. 2022, 13, 1024. [Google Scholar] [CrossRef] [PubMed]
- Jia, S. Event Camera Survey and Extension Application to Semantic Segmentation. In Proceedings of the 4th International Conference on Image Processing and Machine Vision, Hong Kong, China, 25–27 March 2022; pp. 115–121. [Google Scholar]
- Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500. [Google Scholar] [CrossRef]
- Izhikevich, E.M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef] [PubMed]
- Gerstner, W. Spiking Neurons; Technical Report; MIT-Press: Cambridge, MA, USA, 1998. [Google Scholar]
- Bouvier, M.; Valentian, A.; Mesquida, T.; Rummens, F.; Reyboz, M.; Vianello, E.; Beigne, E. Spiking neural networks hardware implementations and challenges: A survey. Acm J. Emerg. Technol. Comput. Syst. (JETC) 2019, 15, 1–35. [Google Scholar] [CrossRef]
- Nunes, J.D.; Carvalho, M.; Carneiro, D.; Cardoso, J.S. Spiking neural networks: A survey. IEEE Access 2022, 10, 60738–60764. [Google Scholar] [CrossRef]
- Yi, Z.; Lian, J.; Liu, Q.; Zhu, H.; Liang, D.; Liu, J. Learning rules in spiking neural networks: A survey. Neurocomputing 2023, 531, 163–179. [Google Scholar] [CrossRef]
- Tavanaei, A.; Ghodrati, M.; Kheradpisheh, S.R.; Masquelier, T.; Maida, A. Deep learning in spiking neural networks. Neural Netw. 2019, 111, 47–63. [Google Scholar] [CrossRef]
- Yamazaki, K.; Vo-Ho, V.K.; Bulsara, D.; Le, N. Spiking neural networks and their applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef]
- Wang, S.; Cheng, T.H.; Lim, M.H. A hierarchical taxonomic survey of spiking neural networks. Memetic Comput. 2022, 14, 335–354. [Google Scholar] [CrossRef]
- Pfeiffer, M.; Pfeil, T. Deep learning with spiking neurons: Opportunities and challenges. Front. Neurosci. 2018, 12, 409662. [Google Scholar] [CrossRef] [PubMed]
- Paredes-Vallés, F.; Scheper, K.Y.; De Croon, G.C. Unsupervised learning of a hierarchical spiking neural network for optical flow estimation: From events to global motion perception. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2051–2064. [Google Scholar] [CrossRef] [PubMed]
- Bing, Z.; Meschede, C.; Röhrbein, F.; Huang, K.; Knoll, A.C. A survey of robotics control based on learning-inspired spiking neural networks. Front. Neurorobot. 2018, 12, 35. [Google Scholar] [CrossRef] [PubMed]
- Basu, A.; Deng, L.; Frenkel, C.; Zhang, X. Spiking neural network integrated circuits: A review of trends and future directions. In Proceedings of the 2022 IEEE Custom Integrated Circuits Conference (CICC), Newport Beach, CA, USA, 24–27 April 2022; IEEE: New York, NY, USA, 2022; pp. 1–8. [Google Scholar]
- Zheng, X.; Liu, Y.; Lu, Y.; Hua, T.; Pan, T.; Zhang, W.; Tao, D.; Wang, L. Deep learning for event-based vision: A comprehensive survey and benchmarks. arXiv 2023, arXiv:2302.08890. [Google Scholar]
- Zou, X.L.; Huang, T.J.; Wu, S. Towards a new paradigm for brain-inspired computer vision. Mach. Intell. Res. 2022, 19, 412–424. [Google Scholar] [CrossRef]
- Dong-il, C.; Tae-jae, L. A review of bioinspired vision sensors and their applications. Sens. Mater 2015, 27, 447–463. [Google Scholar]
- Lakshmi, A.; Chakraborty, A.; Thakur, C.S. Neuromorphic vision: From sensors to event-based algorithms. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1310. [Google Scholar] [CrossRef]
- Tayarani-Najaran, M.H.; Schmuker, M. Event-based sensing and signal processing in the visual, auditory, and olfactory domain: A review. Front. Neural Circuits 2021, 15, 610446. [Google Scholar] [CrossRef]
- Zhu, S.; Wang, C.; Liu, H.; Zhang, P.; Lam, E.Y. Computational neuromorphic imaging: Principles and applications. In Proceedings of the Computational Optical Imaging and Artificial Intelligence in Biomedical Sciences, SPIE, San Francisco, CA, USA, 27 January–1 February 2024; Volume 12857, pp. 4–10. [Google Scholar]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Pearson Education: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
- Cavanagh, P. Visual cognition. Vis. Res. 2011, 51, 1538–1551. [Google Scholar] [CrossRef]
- Cantoni, V.; Ferretti, M. A Taxonomy of Hierarchical Machines for Computer Vision. Pyramidal Archit. Comput. Vis. 1994, 1, 103–115. [Google Scholar]
- Zeiler, M.D.; Taylor, G.W.; Fergus, R. Adaptive deconvolutional networks for mid and high level feature learning. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Washington, DC, USA, 2011; pp. 2018–2025. [Google Scholar]
- Ji, Q. Probabilistic Graphical Models for Computer Vision; Academic Press: Cambridge, MA, USA, 2019. [Google Scholar]
- Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef]
- Cazzato, D.; Cimarelli, C.; Sanchez-Lopez, J.L.; Voos, H.; Leo, M. A survey of computer vision methods for 2d object detection from unmanned aerial vehicles. J. Imaging 2020, 6, 78. [Google Scholar] [CrossRef] [PubMed]
- Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
- El Arja, S. Neuromorphic Perception for Greenhouse Technology Using Event-based Sensors. Ph.D. Thesis, Sydney University, Camperdown, NSW, Australia, 2022. [Google Scholar]
- Zujevs, A.; Pudzs, M.; Osadcuks, V.; Ardavs, A.; Galauskis, M.; Grundspenkis, J. An event-based vision dataset for visual navigation tasks in agricultural environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: New York, NY, USA, 2021; pp. 13769–13775. [Google Scholar]
- Zhu, L.; Mangan, M.; Webb, B. Neuromorphic sequence learning with an event camera on routes through vegetation. Sci. Robot. 2023, 8, eadg3679. [Google Scholar] [CrossRef] [PubMed]
- Hamann, F.; Gallego, G. Stereo Co-capture System for Recording and Tracking Fish with Frame-and Event Cameras. arXiv 2022, arXiv:2207.07332. [Google Scholar]
- Hamann, F.; Ghosh, S.; Martinez, I.J.; Hart, T.; Kacelnik, A.; Gallego, G. Low-power, Continuous Remote Behavioral Localization with Event Cameras. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 18612–18621. [Google Scholar]
- Dataset. Agri-EVB-Autumn. Available online: https://ieee-dataport.org/open-access/agri-ebv-autumn (accessed on 4 July 2024).
- Dataset. Neuromorphic Sequence Learning with an Event Camera on Routes through Vegetation. Available online: https://zenodo.org/records/8289547 (accessed on 4 July 2024).
- Dataset. Low-Power, Continuous Remote Behavioral Localization with Event Cameras. Available online: https://tub-rip.github.io/eventpenguins/ (accessed on 4 July 2024).
- Litzenberger, M.; Posch, C.; Bauer, D.; Belbachir, A.N.; Schon, P.; Kohn, B.; Garn, H. Embedded vision system for real-time object tracking using an asynchronous transient vision sensor. In Proceedings of the 2006 IEEE 12th Digital Signal Processing Workshop & 4th IEEE Signal Processing Education Workshop, Teton National Park, WY, USA, 24–27 September 2006; IEEE: New York, NY, USA, 2006; pp. 173–178. [Google Scholar]
- Litzenberger, M.; Belbachir, A.N.; Schon, P.; Posch, C. Embedded smart camera for high speed vision. In Proceedings of the 2007 First ACM/IEEE International Conference on Distributed Smart Cameras, Vienna, Austria, 25–28 September 2007; IEEE: New York, NY, USA, 2007; pp. 81–86. [Google Scholar]
- Piątkowska, E.; Belbachir, A.N.; Schraml, S.; Gelautz, M. Spatiotemporal multiple persons tracking using dynamic vision sensor. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; IEEE: New York, NY, USA, 2012; pp. 35–40. [Google Scholar]
- Stuckey, H.; Al-Radaideh, A.; Escamilla, L.; Sun, L.; Carrillo, L.G.; Tang, W. An optical spatial localization system for tracking unmanned aerial vehicles using a single dynamic vision sensor. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; IEEE: New York, NY, USA, 2021; pp. 3093–3100. [Google Scholar]
- Annamalai, L.; Chakraborty, A.; Thakur, C.S. Evan: Neuromorphic event-based anomaly detection. arXiv 2019, arXiv:1911.09722. [Google Scholar] [CrossRef] [PubMed]
- Pérez-Cutiño, M.A.; Eguíluz, A.G.; Martínez-de Dios, J.; Ollero, A. Event-based human intrusion detection in UAS using deep learning. In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; IEEE: New York, NY, USA, 2021; pp. 91–100. [Google Scholar]
- Rodríguez-Gomez, J.P.; Eguíluz, A.G.; Martínez-de Dios, J.R.; Ollero, A. Asynchronous event-based clustering and tracking for intrusion monitoring in UAS. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: New York, NY, USA, 2020; pp. 8518–8524. [Google Scholar]
- Gañán, F.J.; Sanchez-Diaz, J.A.; Tapia, R.; Martinez-de Dios, J.; Ollero, A. Efficient Event-based Intrusion Monitoring using Probabilistic Distributions. In Proceedings of the 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Sevilla, Spain, 8–10 November 2022; IEEE: New York, NY, USA, 2022; pp. 211–216. [Google Scholar]
- Ahmad, S.; Scarpellini, G.; Morerio, P.; Del Bue, A. Event-driven re-id: A new benchmark and method towards privacy-preserving person re-identification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 459–468. [Google Scholar]
- Dataset. Event Camera Dataset For Intruder Monitoring. Available online: https://grvc.us.es/davis-dataset-for-intrusion-monitoring/ (accessed on 4 July 2024).
- Bialkowski, A.; Denman, S.; Sridharan, S.; Fookes, C.; Lucey, P. A database for person re-identification in multi-camera surveillance networks. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, WA, Australia, 3–5 December 2012; IEEE: New York, NY, USA, 2012; pp. 1–8. [Google Scholar]
- Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; Tomasi, C. Performance measures and a data set for multi-target, multi-camera tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–10 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 17–35. [Google Scholar]
- Perez-Peña, F.; Morgado-Estevez, A.; Montero-Gonzalez, R.J.; Linares-Barranco, A.; Jimenez-Moreno, G. Video surveillance at an industrial environment using an address event vision sensor: Comparative between two different video sensor based on a bioinspired retina. In Proceedings of the International Conference on Signal Processing and Multimedia Applications, Seville, Spain, 18–21 July 2011; IEEE: New York, NY, USA, 2011; pp. 1–4. [Google Scholar]
- Ni, Z.; Pacoret, C.; Benosman, R.; Ieng, S.; Régnier, S. Asynchronous event-based high speed vision for microparticle tracking. J. Microsc. 2012, 245, 236–244. [Google Scholar] [CrossRef]
- Drazen, D.; Lichtsteiner, P.; Häfliger, P.; Delbrück, T.; Jensen, A. Toward real-time particle tracking using an event-based dynamic vision sensor. Exp. Fluids 2011, 51, 1465–1469. [Google Scholar] [CrossRef]
- Zhang, K.; Zhao, Y.; Chu, Z.; Zhou, Y. Event-based vision in magneto-optic Kerr effect microscopy. AIP Adv. 2022, 12. [Google Scholar] [CrossRef]
- Bialik, K.; Kowalczyk, M.; Blachut, K.; Kryjak, T. Fast-moving object counting with an event camera. arXiv 2022, arXiv:2212.08384. [Google Scholar]
- Li, X.; Yu, S.; Lei, Y.; Li, N.; Yang, B. Intelligent machinery fault diagnosis with event-based camera. IEEE Trans. Ind. Inform. 2023, 20, 380–389. [Google Scholar] [CrossRef]
- Zhao, G.; Shen, Y.; Chen, N.; Hu, P.; Liu, L.; Wen, H. EV-Tach: A Handheld Rotational Speed Estimation System With Event Camera. IEEE Trans. Mob. Comput. 2023, 12, 380–389. [Google Scholar] [CrossRef]
- Davies, D.L.; Bouldin, D.W. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
- Micev, K.; Steiner, J.; Aydin, A.; Rieckermann, J.; Delbruck, T. Measuring diameters and velocities of artificial raindrops with a neuromorphic event camera. Atmos. Meas. Tech. 2024, 17, 335–357. [Google Scholar] [CrossRef]
- Shiba, S.; Hamann, F.; Aoki, Y.; Gallego, G. Event-based background-oriented schlieren. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 2011–2026. [Google Scholar] [CrossRef]
- Liu, X.; Yang, Z.X.; Xu, Z.; Yan, X. NeuroVI-based new datasets and space attention network for the recognition and falling detection of delivery packages. Front. Neurorobot. 2022, 16, 934260. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Dataset. Event-based Background-Oriented Schlieren. Available online: https://github.com/tub-rip/event_based_bos (accessed on 4 July 2024).
- Cohen, G.; Afshar, S.; Morreale, B.; Bessell, T.; Wabnitz, A.; Rutten, M.; van Schaik, A. Event-based sensing for space situational awareness. J. Astronaut. Sci. 2019, 66, 125–141. [Google Scholar] [CrossRef]
- Afshar, S.; Nicholson, A.P.; Van Schaik, A.; Cohen, G. Event-based object detection and tracking for space situational awareness. IEEE Sens. J. 2020, 20, 15117–15132. [Google Scholar] [CrossRef]
- Chin, T.J.; Bagchi, S.; Eriksson, A.; Van Schaik, A. Star tracking using an event camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Roffe, S.; Akolkar, H.; George, A.D.; Linares-Barranco, B.; Benosman, R.B. Neutron-induced, single-event effects on neuromorphic event-based vision sensor: A first step and tools to space applications. IEEE Access 2021, 9, 85748–85763. [Google Scholar] [CrossRef]
- Ralph, N.O.; Marcireau, A.; Afshar, S.; Tothill, N.; Van Schaik, A.; Cohen, G. Astrometric calibration and source characterisation of the latest generation neuromorphic event-based cameras for space imaging. Astrodynamics 2023, 7, 415–443. [Google Scholar] [CrossRef]
- Ralph, N.; Joubert, D.; Jolley, A.; van Schaik, A.; Cohen, G. Real-time event-based unsupervised feature consolidation and tracking for space situational awareness. Front. Neurosci. 2022, 16, 821157. [Google Scholar] [CrossRef]
- Dataset. Event-Based Star Tracking Dataset. Available online: https://www.ai4space.group/research/event-based-star-tracking (accessed on 4 July 2024).
- Dataset. The Event-Based Space Situational Awareness (EBSSA) Dataset. Available online: https://www.westernsydney.edu.au/icns/resources/reproducible_research3/publication_support_materials2/space_imaging (accessed on 4 July 2024).
- Dataset. IEBCS. Available online: https://github.com/neuromorphicsystems/IEBCS (accessed on 4 July 2024).
- Dataset. Event Based—Space Imaging—Speed Dataset. Available online: https://github.com/NicRalph213/ICNS_NORALPH_Event_Based-Space_Imaging-Speed_Dataset (accessed on 4 July 2024).
- Ji, Q.; Yang, X. Real-time eye, gaze, and face pose tracking for monitoring driver vigilance. Real-Time Imaging 2002, 8, 357–377. [Google Scholar] [CrossRef]
- Cazzato, D.; Leo, M.; Distante, C.; Voos, H. When i look into your eyes: A survey on computer vision contributions for human gaze estimation and tracking. Sensors 2020, 20, 3739. [Google Scholar] [CrossRef]
- Feng, Y.; Goulding-Hotta, N.; Khan, A.; Reyserhove, H.; Zhu, Y. Real-time gaze tracking with event-driven eye segmentation. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; IEEE: New York, NY, USA, 2022; pp. 399–408. [Google Scholar]
- Ryan, C.; O’Sullivan, B.; Elrasad, A.; Cahill, A.; Lemley, J.; Kielty, P.; Posch, C.; Perot, E. Real-time face & eye tracking and blink detection using event cameras. Neural Netw. 2021, 141, 87–97. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Kang, D.; Kang, D. Event Camera-Based Pupil Localization: Facilitating Training With Event-Style Translation of RGB Faces. IEEE Access 2023, 11, 142304–142316. [Google Scholar] [CrossRef]
- Angelopoulos, A.N.; Martel, J.N.; Kohli, A.P.; Conradt, J.; Wetzstein, G. Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2577–2586. [Google Scholar] [CrossRef]
- Banerjee, A.; Mehta, N.K.; Prasad, S.S.; Saurav, S.; Singh, S.; Himanshu. Gaze-Vector Estimation in the Dark with Temporally Encoded Event-driven Neural Networks. arXiv 2024, arXiv:2403.02909. [Google Scholar]
- Stoffregen, T.; Daraei, H.; Robinson, C.; Fix, A. Event-based kilohertz eye tracking using coded differential lighting. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 2515–2523. [Google Scholar]
- Li, N.; Chang, M.; Raychowdhury, A. E-Gaze: Gaze Estimation with Event Camera. IEEE Trans. Pattern Anal. Mach. Intell. 2024. [Google Scholar] [CrossRef]
- Li, N.; Bhat, A.; Raychowdhury, A. E-Track: Eye tracking with event camera for extended reality (XR) applications. In Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, 11–13 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
- Ryan, C.; Elrasad, A.; Shariff, W.; Lemley, J.; Kielty, P.; Hurney, P.; Corcoran, P. Real-time multi-task facial analytics with event cameras. IEEE Access 2023, 11, 76964–76976. [Google Scholar] [CrossRef]
- Kielty, P.; Dilmaghani, M.S.; Shariff, W.; Ryan, C.; Lemley, J.; Corcoran, P. Neuromorphic driver monitoring systems: A proof-of-concept for yawn detection and seatbelt state detection using an event camera. IEEE Access 2023, 11, 96363–96373. [Google Scholar] [CrossRef]
- Liu, P.; Chen, G.; Li, Z.; Clarke, D.; Liu, Z.; Zhang, R.; Knoll, A. NeuroDFD: Towards efficient driver face detection with neuromorphic vision sensor. In Proceedings of the 2022 International Conference on Advanced Robotics and Mechatronics (ICARM), Guilin, China, 9–11 July 2022; IEEE: New York, NY, USA, 2022; pp. 268–273. [Google Scholar]
- Chen, G.; Wang, F.; Li, W.; Hong, L.; Conradt, J.; Chen, J.; Zhang, Z.; Lu, Y.; Knoll, A. NeuroIV: Neuromorphic vision meets intelligent vehicle towards safe driving with a new database and baseline evaluations. IEEE Trans. Intell. Transp. Syst. 2020, 23, 1171–1183. [Google Scholar] [CrossRef]
- Shariff, W.; Dilmaghani, M.S.; Kielty, P.; Lemley, J.; Farooq, M.A.; Khan, F.; Corcoran, P. Neuromorphic driver monitoring systems: A computationally efficient proof-of-concept for driver distraction detection. IEEE Open J. Veh. Technol. 2023, 4, 836–848. [Google Scholar] [CrossRef]
- Dataset. NeuroIV. Available online: https://github.com/ispc-lab/NeuroIV (accessed on 4 July 2024).
- Dataset. Event Based, Near Eye Gaze Tracking Beyond 10,000 Hz. Available online: https://github.com/aangelopoulos/event_based_gaze_tracking (accessed on 4 July 2024).
- Garbin, S.J.; Shen, Y.; Schuetz, I.; Cavin, R.; Hughes, G.; Talathi, S.S. OpenEDS: Open eye dataset. arXiv 2019, arXiv:1905.03702. [Google Scholar]
- Fuhl, W.; Kasneci, G.; Kasneci, E. TEyeD: Over 20 million real-world eye images with pupil, eyelid, and iris 2d and 3d segmentations, 2d and 3d landmarks, 3d eyeball, gaze vector, and eye movement types. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bari, Italy, 4–8 October 2021; IEEE: New York, NY, USA, 2021; pp. 367–375. [Google Scholar]
- Yang, S.; Luo, P.; Loy, C.C.; Tang, X. Wider face: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5525–5533. [Google Scholar]
- Fanelli, G.; Dantone, M.; Gall, J.; Fossati, A.; Van Gool, L. Random forests for real time 3d face analysis. Int. J. Comput. Vis. 2013, 101, 437–458. [Google Scholar] [CrossRef]
- Abtahi, S.; Omidyeganeh, M.; Shirmohammadi, S.; Hariri, B. YawDD: A yawning detection dataset. In Proceedings of the 5th ACM Multimedia Systems Conference, Singapore, 19 March 2014; pp. 24–28. [Google Scholar]
- Chen, S.; Akselrod, P.; Zhao, B.; Carrasco, J.A.P.; Linares-Barranco, B.; Culurciello, E. Efficient feedforward categorization of objects and human postures with address-event image sensors. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 302–314. [Google Scholar] [CrossRef]
- Calabrese, E.; Taverni, G.; Awai Easthope, C.; Skriabine, S.; Corradi, F.; Longinotti, L.; Eng, K.; Delbruck, T. DHP19: Dynamic vision sensor 3D human pose dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1695–1704. [Google Scholar]
- Xu, L.; Xu, W.; Golyanik, V.; Habermann, M.; Fang, L.; Theobalt, C. Eventcap: Monocular 3d capture of high-speed human motions using an event camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4968–4978. [Google Scholar]
- Zhang, Z.; Chai, K.; Yu, H.; Majaj, R.; Walsh, F.; Wang, E.; Mahbub, U.; Siegelmann, H.; Kim, D.; Rahman, T. Neuromorphic high-frequency 3d dancing pose estimation in dynamic environment. Neurocomputing 2023, 547, 126388. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Goyal, G.; Di Pietro, F.; Carissimi, N.; Glover, A.; Bartolozzi, C. MoveEnet: Online High-Frequency Human Pose Estimation with an Event Camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 4023–4032. [Google Scholar]
- Ahn, E.Y.; Lee, J.H.; Mullen, T.; Yen, J. Dynamic vision sensor camera based bare hand gesture recognition. In Proceedings of the 2011 IEEE Symposium on Computational Intelligence For Multimedia, Signal And Vision Processing, Paris, France, 11–15 April 2011; IEEE: New York, NY, USA, 2011; pp. 52–59. [Google Scholar]
- Lee, J.H.; Park, P.K.; Shin, C.W.; Ryu, H.; Kang, B.C.; Delbruck, T. Touchless hand gesture UI with instantaneous responses. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; IEEE: New York, NY, USA, 2012; pp. 1957–1960. [Google Scholar]
- Amir, A.; Taba, B.; Berg, D.; Melano, T.; McKinstry, J.; Di Nolfo, C.; Nayak, T.; Andreopoulos, A.; Garreau, G.; Mendoza, M.; et al. A low power, fully event-based gesture recognition system. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7243–7252. [Google Scholar]
- Wang, Q.; Zhang, Y.; Yuan, J.; Lu, Y. Space-time event clouds for gesture recognition: From RGB cameras to event cameras. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; IEEE: New York, NY, USA, 2019; pp. 1826–1835. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Chen, J.; Meng, J.; Wang, X.; Yuan, J. Dynamic graph CNN for event-camera based gesture recognition. In Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain, 12–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
- Chen, G.; Xu, Z.; Li, Z.; Tang, H.; Qu, S.; Ren, K.; Knoll, A. A novel illumination-robust hand gesture recognition system with event-based neuromorphic vision sensor. IEEE Trans. Autom. Sci. Eng. 2021, 18, 508–520. [Google Scholar] [CrossRef]
- Vasudevan, A.; Negri, P.; Di Ielsi, C.; Linares-Barranco, B.; Serrano-Gotarredona, T. SL-Animals-DVS: Event-driven sign language animals dataset. Pattern Anal. Appl. 2022, 25, 505–520. [Google Scholar] [CrossRef]
- Chen, X.; Su, L.; Zhao, J.; Qiu, K.; Jiang, N.; Zhai, G. Sign language gesture recognition and classification based on event camera with spiking neural networks. Electronics 2023, 12, 786. [Google Scholar] [CrossRef]
- Liu, C.; Qi, X.; Lam, E.Y.; Wong, N. Fast classification and action recognition with event-based imaging. IEEE Access 2022, 10, 55638–55649. [Google Scholar] [CrossRef]
- Xie, B.; Deng, Y.; Shao, Z.; Liu, H.; Xu, Q.; Li, Y. Event Tubelet Compressor: Generating Compact Representations for Event-Based Action Recognition. In Proceedings of the 2022 7th International Conference on Control, Robotics and Cybernetics (CRC), Virtual, 15–17 December 2022; IEEE: New York, NY, USA, 2022; pp. 12–16. [Google Scholar]
- Neimark, D.; Bar, O.; Zohar, M.; Asselmann, D. Video transformer network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 3163–3172. [Google Scholar]
- de Blegiers, T.; Dave, I.R.; Yousaf, A.; Shah, M. EventTransAct: A video transformer-based framework for Event-camera based action recognition. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; IEEE: New York, NY, USA, 2023; pp. 1–7. [Google Scholar]
- Dataset. DVS128 Gesture. Available online: https://ibm.ent.box.com/s/3hiq58ww1pbbjrinh367ykfdf60xsfm8/folder/50167556794 (accessed on 4 July 2024).
- Dataset. DHP19. Available online: https://sites.google.com/view/dhp19/home (accessed on 4 July 2024).
- Orchard, G.; Jayawant, A.; Cohen, G.K.; Thakor, N. Converting static image datasets to spiking neuromorphic datasets using saccades. Front. Neurosci. 2015, 9, 159859. [Google Scholar] [CrossRef] [PubMed]
- Miao, S.; Chen, G.; Ning, X.; Zi, Y.; Ren, K.; Bing, Z.; Knoll, A. Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection. Front. Neurorobot. 2019, 13, 38. [Google Scholar] [CrossRef]
- Dataset. SL-Animals-DVS. Available online: http://www2.imse-cnm.csic.es/neuromorphs/index.php/SL-ANIMALS-DVS-Database (accessed on 4 July 2024).
- Bi, Y.; Chadha, A.; Abbas, A.; Bourtsoulatze, E.; Andreopoulos, Y. Graph-based spatio-temporal feature learning for neuromorphic vision sensing. IEEE Trans. Image Process. 2020, 29, 9084–9098. [Google Scholar] [CrossRef]
- Ionescu, C.; Papava, D.; Olaru, V.; Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1325–1339. [Google Scholar] [CrossRef]
- Dataset. DVS-Sign. Available online: https://github.com/najie1314/DVS (accessed on 4 July 2024).
- Plizzari, C.; Planamente, M.; Goletto, G.; Cannici, M.; Gusso, E.; Matteucci, M.; Caputo, B. E2(GO)Motion: Motion augmented event stream for egocentric action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 19935–19947. [Google Scholar]
- Fu, Z.; Delbruck, T.; Lichtsteiner, P.; Culurciello, E. An address-event fall detector for assisted living applications. IEEE Trans. Biomed. Circuits Syst. 2008, 2, 88–96. [Google Scholar] [CrossRef]
- Chen, G.; Qu, S.; Li, Z.; Zhu, H.; Dong, J.; Liu, M.; Conradt, J. Neuromorphic vision-based fall localization in event streams with temporal–spatial attention weighted network. IEEE Trans. Cybern. 2022, 52, 9251–9262. [Google Scholar] [CrossRef] [PubMed]
- Jagtap, A.; Saripalli, R.V.; Lemley, J.; Shariff, W.; Smeaton, A.F. Heart Rate Detection Using an Event Camera. In Proceedings of the 2023 IEEE International Symposium on Multimedia (ISM), Laguna Hills, CA, USA, 11–13 December 2023; IEEE: New York, NY, USA, 2023; pp. 243–246. [Google Scholar]
- Everding, L.; Walger, L.; Ghaderi, V.S.; Conradt, J. A mobility device for the blind with improved vertical resolution using dynamic vision sensors. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; IEEE: New York, NY, USA, 2016; pp. 1–5. [Google Scholar]
- Gaspar, N.; Sondhi, A.; Evans, B.; Nikolic, K. A low-power neuromorphic system for retinal implants and sensory substitution. In Proceedings of the 2016 IEEE Biomedical Circuits and Systems Conference (BioCAS), Shanghai, China, 17–19 October 2016; IEEE: New York, NY, USA, 2016; pp. 78–81. [Google Scholar]
- Berthelon, X. Neuromorphic Analysis of Hemodynamics Using Event-Based Cameras. Ph.D. Thesis, Sorbonne Université, Paris, France, 2018. [Google Scholar]
- Cabriel, C.; Monfort, T.; Specht, C.G.; Izeddin, I. Event-based vision sensor for fast and dense single-molecule localization microscopy. Nat. Photonics 2023, 17, 1105–1113. [Google Scholar] [CrossRef]
- Dataset. Evb-SMLM. Available online: https://github.com/Clement-Cabriel/Evb-SMLM (accessed on 4 July 2024).
- Chen, G.; Cao, H.; Aafaque, M.; Chen, J.; Ye, C.; Röhrbein, F.; Conradt, J.; Chen, K.; Bing, Z.; Liu, X.; et al. Neuromorphic vision based multivehicle detection and tracking for intelligent transportation system. J. Adv. Transp. 2018, 2018, 1–13. [Google Scholar] [CrossRef]
- Ikura, M.; Walter, F.; Knoll, A. Spiking Neural Networks for Robust and Efficient Object Detection in Intelligent Transportation Systems With Roadside Event-Based Cameras. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar]
- Lu, X.; Mao, X.; Liu, H.; Meng, X.; Rai, L. Event camera point cloud feature analysis and shadow removal for road traffic sensing. IEEE Sens. J. 2021, 22, 3358–3369. [Google Scholar] [CrossRef]
- Cheng, W.; Luo, H.; Yang, W.; Yu, L.; Chen, S.; Li, W. DET: A high-resolution DVS dataset for lane extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1666–1675. [Google Scholar]
- Cao, H.; Chen, G.; Xia, J.; Zhuang, G.; Knoll, A. Fusion-based feature attention gate component for vehicle detection based on event camera. IEEE Sens. J. 2021, 21, 24540–24548. [Google Scholar] [CrossRef]
- Wzorek, P.; Kryjak, T. Traffic sign detection with event cameras and DCNN. In Proceedings of the 2022 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 21–22 September 2022; IEEE: New York, NY, USA, 2022; pp. 86–91. [Google Scholar]
- Chen, G.; Chen, W.; Yang, Q.; Xu, Z.; Yang, L.; Conradt, J.; Knoll, A. A novel visible light positioning system with event-based neuromorphic vision sensor. IEEE Sens. J. 2020, 20, 10211–10219. [Google Scholar] [CrossRef]
- Dataset. DET. Available online: https://spritea.github.io/DET/ (accessed on 4 July 2024).
- Binas, J.; Neil, D.; Liu, S.C.; Delbruck, T. DDD17: End-to-end DAVIS driving dataset. arXiv 2017, arXiv:1711.01458. [Google Scholar]
- Perot, E.; De Tournemire, P.; Nitti, D.; Masci, J.; Sironi, A. Learning to detect objects with a 1 megapixel event camera. Adv. Neural Inf. Process. Syst. 2020, 33, 16639–16652. [Google Scholar]
- Falanga, D.; Kleber, K.; Scaramuzza, D. Dynamic obstacle avoidance for quadrotors with event cameras. Sci. Robot. 2020, 5, eaaz9712. [Google Scholar] [CrossRef]
- Wang, Y.; Yang, J.; Peng, X.; Wu, P.; Gao, L.; Huang, K.; Chen, J.; Kneip, L. Visual odometry with an event camera using continuous ray warping and volumetric contrast maximization. Sensors 2022, 22, 5687. [Google Scholar] [CrossRef]
- Iaboni, C.; Patel, H.; Lobo, D.; Choi, J.W.; Abichandani, P. Event camera based real-time detection and tracking of indoor ground robots. IEEE Access 2021, 9, 166588–166602. [Google Scholar] [CrossRef]
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the KDD, Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231. [Google Scholar]
- Huang, X.; Halwani, M.; Muthusamy, R.; Ayyad, A.; Swart, D.; Seneviratne, L.; Gan, D.; Zweiri, Y. Real-time grasping strategies using event camera. J. Intell. Manuf. 2022, 33, 593–615. [Google Scholar] [CrossRef]
- Panetsos, F.; Karras, G.C.; Kyriakopoulos, K.J. Aerial Transportation of Cable-Suspended Loads With an Event Camera. IEEE Robot. Autom. Lett. 2023, 9, 231–238. [Google Scholar] [CrossRef]
- Wang, Z.; Ng, Y.; Henderson, J.; Mahony, R. Smart visual beacons with asynchronous optical communications using event cameras. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; IEEE: New York, NY, USA, 2022; pp. 3793–3799. [Google Scholar]
- Nakagawa, H.; Miyatani, Y.; Kanezaki, A. Linking Vision and Multi-Agent Communication through Visible Light Communication using Event Cameras. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, 6–10 May 2024; pp. 1436–1444. [Google Scholar]
- Hu, Y.; Binas, J.; Neil, D.; Liu, S.C.; Delbruck, T. Ddd20 end-to-end event camera driving dataset: Fusing frames and events with deep learning for improved steering prediction. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
- Brandli, C.; Mantel, T.; Hutter, M.; Höpflinger, M.; Berner, R.; Delbruck, T. Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor. Front. Neurosci. 2014, 7, 65397. [Google Scholar] [CrossRef] [PubMed]
- Dataset. DDD20. Available online: https://sites.google.com/view/davis-driving-dataset-2020/home (accessed on 4 July 2024).
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
- Amini, A.; Wang, T.H.; Gilitschenski, I.; Schwarting, W.; Liu, Z.; Han, S.; Karaman, S.; Rus, D. Vista 2.0: An open, data-driven simulator for multimodal sensing and policy learning for autonomous vehicles. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; IEEE: New York, NY, USA, 2022; pp. 2419–2426. [Google Scholar]
- Lin, S.; Ma, Y.; Guo, Z.; Wen, B. DVS-Voltmeter: Stochastic process-based event simulator for dynamic vision sensors. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 578–593. [Google Scholar]
- Gehrig, D.; Gehrig, M.; Hidalgo-Carrió, J.; Scaramuzza, D. Video to events: Recycling video datasets for event cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 13–19 June 2020; pp. 3586–3595. [Google Scholar]
- Hu, Y.; Liu, S.C.; Delbruck, T. v2e: From video frames to realistic DVS events. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, TN, USA, 19–25 June 2021; pp. 1312–1321. [Google Scholar]
- Rebecq, H.; Gehrig, D.; Scaramuzza, D. ESIM: An open event camera simulator. In Proceedings of the Conference on Robot Learning, PMLR, Zurich, Switzerland, 29–31 October 2018; pp. 969–982. [Google Scholar]
- Liu, X.; Chen, S.W.; Nardari, G.V.; Qu, C.; Cladera, F.; Taylor, C.J.; Kumar, V. Challenges and opportunities for autonomous micro-UAVs in precision agriculture. IEEE Micro 2022, 42, 61–68. [Google Scholar] [CrossRef]
- Qiu, S.; Liu, Q.; Zhou, S.; Wu, C. Review of artificial intelligence adversarial attack and defense technologies. Appl. Sci. 2019, 9, 909. [Google Scholar] [CrossRef]
- Zhang, H.; Gao, J.; Su, L. Data poisoning attacks against outcome interpretations of predictive models. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual, 14–18 August 2021; pp. 2165–2173. [Google Scholar]
- Ahmad, S.; Morerio, P.; Del Bue, A. Person re-identification without identification via event anonymization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 11132–11141. [Google Scholar]
- Bardow, P.; Davison, A.J.; Leutenegger, S. Simultaneous optical flow and intensity estimation from an event camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 884–892. [Google Scholar]
- Munda, G.; Reinbacher, C.; Pock, T. Real-time intensity-image reconstruction for event cameras using manifold regularisation. Int. J. Comput. Vis. 2018, 126, 1381–1393. [Google Scholar] [CrossRef]
- Zhang, X.; Wang, Y.; Yang, Q.; Shen, Y.; Wen, H. EV-Perturb: Event-stream perturbation for privacy-preserving classification with dynamic vision sensors. Multimed. Tools Appl. 2024, 83, 16823–16847. [Google Scholar] [CrossRef]
- Prasad, S.S.; Mehta, N.K.; Banerjee, A.; Kumar, H.; Saurav, S.; Singh, S. Real-Time Privacy-Preserving Fall Detection using Dynamic Vision Sensors. In Proceedings of the 2022 IEEE 19th India Council International Conference (INDICON), Kochi, India, 24–26 November 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
- Prasad, S.S.; Mehta, N.K.; Kumar, H.; Banerjee, A.; Saurav, S.; Singh, S. Hybrid SNN-based Privacy-Preserving Fall Detection using Neuromorphic Sensors. In Proceedings of the Fourteenth Indian Conference on Computer Vision, Graphics and Image Processing, Rupnagar, India, 15–17 December 2023; pp. 1–9. [Google Scholar]
- Wang, H.; Sun, B.; Ge, S.S.; Su, J.; Jin, M.L. On non-von Neumann flexible neuromorphic vision sensors. npj Flex. Electron. 2024, 8, 28. [Google Scholar] [CrossRef]
- Vanarse, A.; Osseiran, A.; Rassau, A. Neuromorphic engineering—A paradigm shift for future IM technologies. IEEE Instrum. Meas. Mag. 2019, 22, 4–9. [Google Scholar] [CrossRef]
- Gartner. Gartner Top 10 Strategic Predictions for 2021 and Beyond. Available online: https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-predictions-for-2021-and-beyond (accessed on 6 July 2024).
| Typology | Authors | Publication Year | Brief Description |
| --- | --- | --- | --- |
| Sensors development | Etienne-Cummings and Van der Spiegel [9] | 1996 | First surveys on neuromorphic cameras |
| | Indiveri et al. [19] | 1996 | First surveys on neuromorphic cameras |
| | Kramer and Indiveri [20] | 1998 | Sensor analysis and two robotics applications |
| | Indiveri [21] | 2008 | Neuromorphic circuits and selective attention chip pixel analysis |
| | Liu and Delbruck [22] | 2010 | Sensor analysis |
| | Posch et al. [17] | 2014 | Sensor design |
| | Wu [23] | 2018 | Hardware design aspects and neural network-oriented vision chips |
| | Liao et al. [12] | 2021 | Sensor analysis |
| | Kim et al. [24] | 2022 | Sensor analysis |
| | Li et al. [8] | 2023 | Sensor design with focus on materials |
| Domain Specific | Steffen et al. [25] | 2019 | Stereo vision |
| | Chen et al. [26] | 2020 | Autonomous driving |
| | Sun et al. [29] | 2021 | Data-driven approaches |
| | Furmonas et al. [2] | 2022 | Depth estimation |
| | Sandamirskaya et al. [27] | 2022 | Robotics |
| | Bartolozzi et al. [30] | 2022 | Robotics |
| | Jia [31] | 2022 | Semantic segmentation |
| | Aboumerhi et al. [28] | 2023 | Medicine |
| | Bing et al. [43] | 2018 | SNN |
| | Pfeiffer and Pfeil [41] | 2019 | SNN |
| | Paredes et al. [42] | 2019 | SNN |
| | Bouvier et al. [35] | 2019 | SNN |
| | Tavanaei et al. [38] | 2019 | SNN |
| | Nunes et al. [36] | 2022 | SNN |
| | Yamazaki et al. [39] | 2022 | SNN |
| | Wang et al. [40] | 2022 | SNN |
| | Basu et al. [44] | 2022 | SNN |
| | Yi et al. [37] | 2023 | SNN |
| Collection of Methods | Cho and Lee [47] | 2015 | Task-driven analysis |
| | Lakshmi et al. [48] | 2019 | Paradigm shift, datasets and simulators |
| | Gallego et al. [11] | 2020 | Event representation, computational approaches, applications |
| | Tayarani-Najaran et al. [49] | 2021 | Visual, auditory, and olfactory domains |
| | Zou et al. [46] | 2022 | Analysis driven by paradigm shift |
| | Zheng et al. [45] | 2023 | Computer vision tasks with deep learning focus |
| | Li and Sun [10] | 2023 | Data |
| | Zhu et al. [50] | 2024 | Task-driven analysis |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cazzato, D.; Bono, F. An Application-Driven Survey on Event-Based Neuromorphic Computer Vision. Information 2024, 15, 472. https://doi.org/10.3390/info15080472