New Article: Stochastic Synapses Enable Power-Efficient Learning Machines

Nov. 20, 2015, posted by Emre Neftci in Lab info, Projects

Published in Frontiers in Neuroscience. Synaptic sampling machines are the best-performing spike-based unsupervised learners to date!

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism
for inducing the stochasticity observed in cortex. Here, we introduce the Synaptic Sampling
Machine (SSM), a stochastic neural network model that uses synaptic unreliability as its source
of stochasticity for sampling. Synaptic unreliability plays a dual role: it provides an efficient
mechanism for sampling in neuromorphic hardware and acts as a regularizer during learning, akin to DropConnect.
Similar to the original formulation of Boltzmann machines, the SSM can be viewed as a stochastic
counterpart of Hopfield networks, but where stochasticity is induced by a random mask over
the connections. The SSM is trained to learn generative models with a synaptic plasticity
rule implementing an event-driven form of contrastive divergence. We demonstrate this by
learning a model of the MNIST handwritten digit dataset and by testing it in recognition and
inference tasks. We find that SSMs outperform restricted Boltzmann machines (4.4% error rate
vs. 5%), are more robust to overfitting, and tend to learn sparser representations. SSMs
are remarkably robust to weight pruning: removal of more than 80% of the weakest connections
followed by cursory re-learning causes only a negligible performance loss on the MNIST task
(4.8% error rate). These results show that SSMs offer substantial improvements in terms of
performance, power and complexity over existing methods for unsupervised learning in spiking
neural networks, and are thus promising models for machine learning in neuromorphic execution
platforms.
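
To make the core mechanism concrete, here is a minimal conceptual sketch (not the authors' code) of the two ingredients highlighted in the abstract: synaptic unreliability modeled as a Bernoulli "random mask over the connections", in the spirit of DropConnect, and magnitude-based pruning of the weakest connections. Names such as `transmit_prob` and `prune_fraction`, and the toy layer sizes, are illustrative assumptions rather than details from the paper.

```python
# Conceptual sketch of stochastic (blank-out) synapses and weight pruning.
# This is an illustration of the general idea, not the SSM implementation.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synaptic_input(W, pre_activity, transmit_prob=0.5):
    """Propagate presynaptic activity through unreliable synapses.

    Each connection transmits independently with probability `transmit_prob`,
    i.e. the effective weights are W multiplied by a fresh Bernoulli mask on
    every update (a random mask over the connections, as in DropConnect).
    """
    mask = rng.random(W.shape) < transmit_prob
    return (W * mask) @ pre_activity

def prune_weakest(W, prune_fraction=0.8):
    """Zero out the weakest connections by absolute weight magnitude."""
    threshold = np.quantile(np.abs(W), prune_fraction)
    return np.where(np.abs(W) >= threshold, W, 0.0)

# Toy usage: 100 hidden units receiving input from 784 visible units.
W = rng.normal(scale=0.1, size=(100, 784))
v = rng.integers(0, 2, size=784).astype(float)   # a binary visible vector
h_input = stochastic_synaptic_input(W, v)        # fluctuates from call to call
W_pruned = prune_weakest(W)                      # ~80% of entries set to zero
```

Because the mask is redrawn on every transmission, repeated presentations of the same input produce fluctuating postsynaptic drive, which is the source of sampling noise the abstract refers to; in the paper this mechanism replaces explicit random number generation during inference and learning.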