Seeing Through Events: End-to-End Approaches to Event-Based Vision Under Extremely Low-Light Conditions
Ongoing Project
Abstract
Event camera technology, developed and refined over the past decade, represents a paradigm shift in how we acquire visual data. In contrast to standard cameras, event cameras contain bio-inspired vision sensors in which each pixel asynchronously responds to relative brightness changes in the scene, producing a sequence of "events" generated at a variable rate rather than frames at fixed intervals. Hence, they provide very high temporal resolution (on the order of microseconds), high dynamic range, low power consumption, and no motion blur. However, because they adopt a fundamentally different design, processing their outputs and unlocking their full potential also requires radically new methods. The goal of our project is to contribute to the newly emerging field of event-based vision.
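
To make the sensing principle above concrete, the minimal sketch below simulates how such a sensor generates events from a sequence of intensity frames: each pixel emits an event whenever its log intensity has changed by more than a contrast threshold since its last event. The function name, the default threshold, and the frame-based approximation are illustrative assumptions only; a real event camera operates asynchronously in continuous time, and this is not the project's own code.

```python
import numpy as np

def simulate_events(frames, timestamps, threshold=0.2):
    """Simplified event-generation model (illustrative sketch).

    frames:      array of shape (T, H, W) with intensity values
    timestamps:  sequence of T frame timestamps
    threshold:   contrast threshold on log-intensity change

    Returns a list of events (t, x, y, polarity). For simplicity at most one
    event per pixel per frame is emitted and the reference is reset to the
    current value; a more faithful simulator would emit multiple events and
    step the reference by exactly +/- threshold.
    """
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    ref = log_frames[0].copy()            # per-pixel reference log intensity
    events = []
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - ref
        pos = diff >= threshold           # brightness increased enough
        neg = diff <= -threshold          # brightness decreased enough
        ys, xs = np.nonzero(pos)
        events += [(t, x, y, +1) for x, y in zip(xs, ys)]
        ys, xs = np.nonzero(neg)
        events += [(t, x, y, -1) for x, y in zip(xs, ys)]
        ref[pos] = frame[pos]             # update reference where events fired
        ref[neg] = frame[neg]
    return events
```

Under this model, static scene regions produce no events at all, which is why the data rate adapts to scene dynamics and why the sensor avoids the redundancy of fixed-rate frames.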

Compared to event cameras, another crucial drawback of traditional cameras is their inability to cope with low-light conditions, which is usually addressed by employing a longer exposure time to allow more light in. This is, however, problematic if the scene involves dynamic objects or the camera is in motion, as it results in blurry regions. To this end, our project will explore ways to take advantage of event data to improve standard cameras. More specifically, we will investigate enhancing the quality of dark videos as well as accurately estimating optical flow under extremely low-light conditions with the guidance of complementary event data. Toward these goals, we will explore novel deep architectures for reconstructing intensity images from events, and we will collect new synthetic and real video datasets to effectively train our models and better test their capabilities.
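
As one illustration of how an asynchronous event stream can be fed to deep architectures of the kind mentioned above, the sketch below accumulates events into a spatio-temporal voxel grid, a representation commonly used as network input for event-based video reconstruction and optical flow. The function name, parameters, and details are illustrative assumptions and do not describe our project's actual implementation.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a voxel grid of shape (num_bins, H, W).

    events: array of shape (N, 4) with columns (t, x, y, polarity),
            assumed sorted by timestamp.
    Each event's polarity is distributed over the two nearest temporal bins
    (bilinear interpolation in time), a common choice in the literature.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to the range [0, num_bins - 1]
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    # Scatter-add each event's polarity into its two neighboring bins
    np.add.at(grid, (left, y, x), p * (1.0 - w_right))
    np.add.at(grid, (right, y, x), p * w_right)
    return grid
```

A grid built this way can be passed to a convolutional encoder like a multi-channel image, which is what allows frame-based network designs to be reused for event data.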

Our project will provide novel ways to process event data using deep neural networks and will offer hybrid approaches that bring traditional cameras and event cameras together to solve crucial challenges we face when capturing and processing videos in the dark. The neural architectures explored in this research project can also be applied to other event-based computer vision tasks. Moreover, as commercially available high-resolution event sensors begin to appear, we believe that, beyond its scientific impact, our project also has the potential to be commercialized as part of camera systems for future smartphones, mobile robots, or autonomous vehicles.

Related Publications
HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision
Workshop on Neuromorphic Vision (NeVi): Advantages and Applications of Event Cameras at ECCV 2024
Burak Ercan, Onur Eker, Aykut Erdem, Erkut Erdem
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
IEEE Transactions on Image Processing, Vol. 33, pp. 1826-1837, March 2024
Burak Ercan, Onur Eker, Canberk Saglam, Aykut Erdem, Erkut Erdem
EVREAL: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction
4th International Workshop on Event-Based Vision at CVPR 2023
Burak Ercan, Onur Eker, Aykut Erdem, Erkut Erdem