Thanks for your interest.
1. Noise is incorporated by introducing a noisy spiking neuron model, which builds on the classical input-noise and output-noise neuron models.
- Regarding how noise helps. First, the benefits of internal noise in deep learning are well studied [1,2,3], but not yet in SNNs. Second, noisy neuronal dynamics are biophysically more realistic and are critical for accounting for the "variability & reliability" observed in biological neural computation [4]. However, such models are difficult to train in deep networks. Our work incorporates noisy spiking models into deep neural networks, so that we can leverage techniques from DL, retain biophysical realism, and gain performance improvements. The performance benefits, including improved generalization and robustness, are empirically demonstrated in our experiments (along with some theoretical analyses).
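To make the incorporation concrete, here is a minimal illustrative sketch of a noisy leaky integrate-and-fire step combining the two classical noise sources: additive Gaussian input noise on the membrane dynamics and stochastic (output-noise) firing. The time constant, threshold, noise scale, and sigmoid temperature are hypothetical placeholders, not the paper's exact model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_lif_step(v, x, tau=2.0, v_th=1.0, sigma=0.2, temp=0.25):
    """One step of a noisy LIF neuron (illustrative sketch only).

    Input noise: additive Gaussian perturbation of the membrane update.
    Output noise: firing is a Bernoulli draw whose probability grows
    with the membrane potential's distance above threshold.
    """
    # Leaky integration plus Gaussian input noise
    v = v + (x - v) / tau + sigma * rng.standard_normal(v.shape)
    # Stochastic firing probability (sigmoid of membrane-threshold gap)
    p_spike = 1.0 / (1.0 + np.exp(-(v - v_th) / temp))
    spike = (rng.random(v.shape) < p_spike).astype(v.dtype)
    # Soft reset: subtract threshold where a spike occurred
    v = v - v_th * spike
    return v, spike

# Drive 4 neurons with a constant input for 10 steps
v = np.zeros(4)
for _ in range(10):
    v, s = noisy_lif_step(v, x=np.full(4, 0.8))
```

Because the firing probability is a smooth function of the membrane potential, this kind of model admits gradient-based training, which is what lets it plug into deep networks.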
2. Regarding the comparison with conventional deterministic spiking networks and the implications for ML, please see the contents above 👆.
- Regarding the implications for computational neuroscience research. Spiking neural models are popular in neuroscience for their biologically realistic spike-based computation paradigm. However, conventional deterministic spiking models cannot account for the variability observed in neural spike trains. Research into noisy spiking neural models, which aims to provide a useful tool at the computational level, therefore has practical benefits for computational neuroscience. NSNN offers a promising tool 🔧 for building computational accounts of various sensory neural circuits and will enable richer models of complex neural computations in the brain 🧠.
[1] "How noise help robustness? …", in CVPR 2020.
[2] "Regularizing deep neural networks by noise …", in NIPS 2017.
[3] "Noisy recurrent neural networks", in NeurIPS 2021.
[4] Wolfgang Maass, "Noise as a resource for computation and learning …", in Proc. of the IEEE.