Spatiotemporal Features for Asynchronous Event-based Data
Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding the exact times of input signal changes, which yields very precise temporal resolution.
Approaches to higher-level computer vision often rely on the reliable detection of features in visual frames, but comparable definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced, based on a set of predictive recurrent reservoir networks competing via winner-take-all selection.
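The article itself does not include code, but the core components of such an architecture can be sketched. The snippet below is a minimal, hypothetical Python illustration of one predictive reservoir (an echo-state-style recurrent network with a linear readout) and a winner-take-all step that selects the reservoir whose prediction best matches an incoming event. The class name, the parameters (leak, spectral_radius), and the event encoding are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """A minimal echo-state-style recurrent reservoir (illustrative sketch)."""

    def __init__(self, n_in, n_res, leak=0.3, spectral_radius=0.9):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale the recurrent weights so the network has fading memory.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = np.zeros((n_in, n_res))  # linear readout, learned later
        self.leak = leak
        self.x = np.zeros(n_res)              # reservoir state

    def step(self, u):
        """Update the reservoir state with one input event vector u."""
        pre = self.W_in @ u + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

    def predict(self):
        """Readout prediction of the next event vector from the current state."""
        return self.W_out @ self.x


def winner_take_all(reservoirs, event):
    """Pick the reservoir whose prediction best matches the observed event."""
    errors = [np.linalg.norm(r.predict() - event) for r in reservoirs]
    return int(np.argmin(errors))
```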
Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
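Continuing the sketch above, the unsupervised, competitive aspect can be illustrated as a loop that routes each event to the best-predicting reservoir and adapts only that winner's readout with a simple delta rule. The (x, y, dt) event encoding, the update rule, and the learning rate are again illustrative assumptions rather than the authors' method.

```python
def train_unsupervised(events, reservoirs, lr=0.01):
    """Competitive learning: route each event to the best-predicting
    reservoir and nudge only that winner's readout (delta rule)."""
    assignments = []
    for ev in events:
        ev = np.asarray(ev, dtype=float)
        winner = winner_take_all(reservoirs, ev)   # compete on prediction error
        w = reservoirs[winner]
        err = ev - w.predict()                     # winner's prediction error
        w.W_out += lr * np.outer(err, w.x)         # move prediction toward event
        for r in reservoirs:                       # every reservoir sees the input
            r.step(ev)
        assignments.append(winner)
    return assignments

# Toy usage: two reservoirs competing over a synthetic (x, y, dt) stream.
reservoirs = [Reservoir(n_in=3, n_res=100) for _ in range(2)]
stream = rng.uniform(0.0, 1.0, (500, 3))
labels = train_unsupervised(stream, reservoirs)
```

Because only the winner is updated on each event, the reservoirs gradually specialize: each one becomes a better predictor for the class of spatiotemporal patterns it keeps winning, which is the intuition behind the distinct, task-specific features described above.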