The ELM Neuron: An Efficient and Expressive Cortical Neuron Model Can Solve Long-Horizon Tasks

Aaron tells us about the Expressive Leaky Memory (ELM) neuron model, a biologically inspired phenomenological model of a cortical neuron.


Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes.

To explore the computational implications of leaky memory units and nonlinear dendritic processing, we introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired phenomenological model of a cortical neuron. Remarkably, by exploiting a few slowly decaying memory-like hidden states and a two-layered nonlinear integration of synaptic input, our ELM neuron can accurately match the input-output relationship of such a neuron with fewer than ten thousand trainable parameters.
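To make the mechanism concrete, below is a minimal NumPy sketch of an ELM-style update: a vector of slowly decaying memory states is leaked at every time step and written to by a small two-layer network that nonlinearly mixes the current synaptic input with the previous memory. All names, dimensions, decay ranges, and initializations here are illustrative assumptions, not the authors' implementation or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): synapses, memory units, hidden width.
n_syn, n_mem, n_hidden = 100, 20, 50

# Per-unit decay factors in (0, 1); values close to 1 give slowly decaying,
# long-lived memory states.
decay = rng.uniform(0.9, 0.999, size=n_mem)

# Two-layer nonlinear integration of synaptic input and memory state,
# plus a linear readout of the memory.
W1 = rng.normal(0.0, 0.1, size=(n_hidden, n_syn + n_mem))
W2 = rng.normal(0.0, 0.1, size=(n_mem, n_hidden))
W_out = rng.normal(0.0, 0.1, size=(1, n_mem))

def elm_step(m_prev, s_t):
    """One time step: leak the memory, then write a nonlinear update."""
    h = np.tanh(W1 @ np.concatenate([s_t, m_prev]))   # dendrite-like hidden layer
    delta = np.tanh(W2 @ h)                           # proposed memory update
    m_t = decay * m_prev + (1.0 - decay) * delta      # leaky integration
    y_t = W_out @ m_t                                 # readout (e.g. somatic output)
    return m_t, y_t

# Run on a random, sparse spike-train-like input sequence.
m = np.zeros(n_mem)
for t in range(200):
    s = (rng.random(n_syn) < 0.05).astype(float)      # binary "spikes" at ~5% rate
    m, y = elm_step(m, s)
```

The slow per-unit decay is what lets a handful of hidden states carry information across thousands of time steps, while the two-layer integration supplies the dendrite-like nonlinearity.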

We evaluate the model on various tasks with demanding temporal structure, including the Long Range Arena (LRA) datasets as well as a novel neuromorphic dataset based on the Spiking Heidelberg Digits dataset (SHD-Adding). The ELM neuron reliably outperforms classic Transformer and Chrono-LSTM architectures on these tasks, even solving the Pathfinder-X task (16k context length) with over 70% accuracy.



About the Speaker

Aaron Spieler is a computational neuroscientist passionate about exploring the intersection of deep learning and neuroscience. After earning his Bachelor's in Computer Science from the University of Potsdam, he undertook an extended internship at Amazon Web Services working on deep learning-based forecasting, before specializing further with a Master's in Computational Neuroscience at the University of Tübingen. Throughout his Master's thesis and a subsequent internship at the Max Planck Institute for Intelligent Systems, Aaron focused on phenomenological neuron modeling with applications to long-range prediction tasks. Pursuing this work allowed him to collaborate with excellent researchers from diverse backgrounds, including Prof. Bernhard Schölkopf and Prof. Anna Levina.


Related Workshops

NIR: A unified instruction set for brain-inspired computing

We show how to use the Neuromorphic Intermediate Representation to migrate your spiking model onto neuromorphic hardware.

Towards Training Robust Computer Vision Models for Neuromorphic Hardware

Join Gregor Lenz as he delves into the world of event cameras and spiking neural networks, exploring their potential for low-power applications on SynSense's Speck chip. Discover the challenges in data, training, and deployment stages. Don't miss this talk on training robust computer vision models for neuromorphic hardware.

Hybrid Learning for Event-based Visual Motion Detection and Tracking of Pedestrians

Revolutionize traffic safety with neuromorphic visual sensing. Explore award-winning solutions for pedestrian detection and tracking, with an emphasis on sustainability and city-level deployment. Join Dr. Cristian Axenie in this groundbreaking AI exploration.