Example Paper 3: A Benchmark Suite for Neuromorphic Audio Processing

This placeholder resource introduces a comprehensive benchmark suite for audio processing tasks, designed specifically for event-based audio sensors and spiking neural networks (SNNs).

Resource Details

Publication
Fictional Open Science Journal, 2025
Community Approved On
June 1, 2025
ONR Badge
ONM Community Approved
[![ONM Community Approved](https://img.shields.io/badge/Community%20Approved-Open%20Neuromorphic-8A2BE2)](https://neural-loop.github.io/open-neuromorphic.github.io/neuromorphic-computing/research/papers/example-paper-3/)

Note: This is a placeholder entry to demonstrate the layout and structure of the ONR Approved Research Registry.

Abstract

The lack of standardized, event-based audio benchmarks has hindered progress in neuromorphic auditory processing. We introduce the Neuromorphic Audio Benchmark Suite (NABS), a collection of four datasets for tasks ranging from keyword spotting to sound source localization, captured with silicon cochlea sensors. We provide baseline results using several popular SNN frameworks to facilitate future comparisons. All datasets are released under a permissive license and are accessible through the Tonic library.

Resource Overview

NABS is a community resource aimed at standardizing the evaluation of neuromorphic audio models. This resource includes:

  • Four curated event-based audio datasets, spanning tasks from keyword spotting to sound source localization.
  • A Python package integrated with Tonic for easy data loading and preprocessing.
  • Baseline model implementations in snnTorch and Lava.
  • A detailed website with dataset specifications, baseline performance metrics, and instructions for submitting new results.
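As an illustrative sketch of the preprocessing such a Tonic-integrated package typically performs, the snippet below bins asynchronous silicon-cochlea events into dense per-frame spike counts, the usual input format for SNN baselines. The event layout, function name, and parameters here are hypothetical; they are not the actual NABS or Tonic API.

```python
from collections import namedtuple

# Hypothetical event record: silicon-cochlea events carry a timestamp
# (microseconds), a frequency-channel index, and a polarity (on/off edge).
Event = namedtuple("Event", ["t", "channel", "polarity"])

def events_to_frames(events, num_channels, frame_us):
    """Bin asynchronous events into [frame][channel] spike counts.

    Mirrors the ToFrame-style preprocessing an event-based audio
    pipeline commonly applies before an SNN; the exact NABS interface
    is not specified in this entry.
    """
    if not events:
        return []
    t0 = events[0].t
    num_frames = (events[-1].t - t0) // frame_us + 1
    frames = [[0] * num_channels for _ in range(num_frames)]
    for ev in events:
        frames[(ev.t - t0) // frame_us][ev.channel] += 1
    return frames

# Example: three events over two 10 ms frames on a 4-channel sensor.
events = [Event(0, 0, 1), Event(5000, 1, 1), Event(12000, 1, 0)]
frames = events_to_frames(events, num_channels=4, frame_us=10_000)
# frames[0] counts events in the first 10 ms, frames[1] the next 10 ms.
```

Dense frames like these can then be fed to conventional SNN layers in frameworks such as snnTorch or Lava, which the suite's baselines reportedly use.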