Telluride Results: Natural Language Processing in Spiking Neuromorphic Systems
At the Telluride workshop, we devised a software and hardware workflow for solving a natural language processing task by combining the efficient processing of spiking neural networks with state-of-the-art machine learning techniques.
The ability to perform Natural Language Processing (NLP) on low-power, real-time, and scalable architectures would be extremely valuable for compact platforms. The brain's unmatched ability at NLP suggests neuromorphic hardware architectures as natural candidates for this task. In this project, we devise a software and hardware workflow for solving a question classification task. Recurrent neural networks are very effective at learning sequential data while being straightforward to implement on constrained hardware platforms; furthermore, they can readily be trained offline through back-propagation through time.

Our workflow for solving the task thus consists of first training simple (Elman) recurrent neural networks on a conventional computer and then synthesizing these onto a spiking neural network. We demonstrate the network on TrueNorth, a spike-based digital neuromorphic hardware architecture. Mapping the network onto this hardware required constraining the weight precision, the activation function, and the network dimensions (fan-in). We find that the synaptic delays available in TrueNorth are sufficient to support the feedback connections of the recurrent neural network.

The original machine learning algorithm achieved 85% classification accuracy. Constraining it for TrueNorth compatibility reduced the accuracy to 78.4%, and the same model running on TrueNorth achieved 74%. Our results demonstrate a proof-of-concept recurrent neural network that can be trained offline for a variety of Natural Language Processing tasks. Furthermore, we show that synaptic delays are sufficient to support the temporal dynamics of simple recurrent neural networks.
Journal article coming soon...