# Event-based Vision Resources

## Table of Contents:

- Devices & Companies Manufacturing them
- Companies working on Event-based Vision
- Neuromorphic Systems
- Algorithms
  - Feature Detection and Tracking
  - Depth Estimation (3D Reconstruction)
    - Monocular Depth Estimation
    - Monocular Depth Estimation using Structured Light
    - Stereo Depth Estimation
  - Optical Flow Estimation
  - Intensity-Image Reconstruction from events
  - Localization and Ego-Motion Estimation
  - Visual Odometry and SLAM (Simultaneous Localization And Mapping)
  - Visual-Inertial State Estimation

  - Visual Stabilization
  - Video Processing
  - Pattern Recognition
  - Control
  - Space Applications
- Datasets and Simulators (sorted by topic)
  - Optical Flow
  - Visual Odometry and SLAM
  - Recognition
- Software
  - Drivers
  - Calibration
  - Algorithms
  - Utilities
- Neuromorphic Processors and Platforms
- Workshops and Tutorials
- Theses and Dissertations
  - Dissertations
  - Masters' Theses
- People / Organizations
- Contributing

## Visual-Inertial State Estimation

- Mueggler, E., Gallego, G., Rebecq, H., Scaramuzza, D., Continuous-Time Visual-Inertial Trajectory Estimation with Event Cameras, under review, 2017.
- Mueggler, E., et al., The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM, Int. J. Robotics Research (IJRR), 2017.
- Zhu, A., Atanasov, N., Daniilidis, K., Event-based Visual Inertial Odometry, IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017. PDF, Supplementary material.
- Rebecq, H., Horstschaefer, T., Scaramuzza, D., Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization, British Machine Vision Conf. (BMVC), London, 2017. PDF, Appendix, YouTube.
- Rosinol Vidal, A., Rebecq, H., Horstschaefer, T., Scaramuzza, D., Hybrid, Frame and Event based Visual Inertial Odometry for Robust, Autonomous Navigation of Quadrotors, under review, 2017. PDF, YouTube.

## Contributing

Please see CONTRIBUTING for details.