On GitHub: tbekolay/cogsci2013-pres
Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks
Trevor Bekolay, Carter Kolbeck, Chris Eliasmith Centre for Theoretical Neuroscience, University of Waterloo bekolay.org/cogsci2013-pres
Hi, I'm Trevor. I've always been interested in the issue of nature vs. nurture. I grew up being told that I could be anything I wanted to be. But despite that, I never did manage to make it into the NHL, and resigned myself to studying the brain instead. When members of my lab came together to build a large-scale model of the brain, I saw it as a golden opportunity to answer a small part of the nature vs. nurture question.

How can we learn the connection weights in the spiking neural networks in Spaun?
This is that full-scale model. We call it Spaun. Spaun is a network of 2.5 million simulated spiking neurons that is able to do several high-level cognitive tasks. In this video, Spaun is solving a problem that you might find on an IQ test. As it gets information about each cell, it's trying to infer the transformation between cells in each row. Then, when we get to the last cell in the last row, we ask Spaun what it thinks should go in that cell, and it writes 333, which is the correct answer. Spaun is able to accomplish this and other tasks by representing information in populations of spiking neurons, and transforming that information through connections between populations of neurons. In order to create Spaun, we analytically solve for the connection weights between each neural population. I wanted to know: Can the connections in Spaun be the result of some learning process? Could Spaun be the result of nurture? Or would Spaun have to be hard-coded by nature?

= COUNT ⊛ 1 + NUMBER ⊛ 5
How can we learn the binding function ⊛?
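In Spaun's semantic pointer architecture, the binding operation ⊛ is circular convolution of vectors. A minimal numpy sketch (the dimensionality and the random vectors are illustrative):

```python
import numpy as np

def circ_conv(x, y):
    """Circular convolution, the binding operation (⊛) on
    semantic pointers, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

# Bind two random 64-dimensional unit-scale vectors
rng = np.random.default_rng(0)
x = rng.standard_normal(64) / np.sqrt(64)
y = rng.standard_normal(64) / np.sqrt(64)
z = circ_conv(x, y)
```

Binding via circular convolution keeps the result in the same vector space as its inputs, which is what lets expressions like COUNT ⊛ 1 be summed and transformed by further neural populations.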
Given a value $X$ and encoders $e_i$, neuron activities are $a_i = f(e_i \cdot X)$.

The represented value is decoded as $\hat{X} = \sum_i d_i a_i$.

Given the error $E = X - \hat{X}$, the supervised learning rule is

$\Delta \omega_{ij} \propto a_i \, e_j \cdot E$
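The encoding, decoding, and error-driven learning steps can be sketched in numpy. The rectified-linear response, gains, biases, and learning rate here are assumptions for illustration; the actual model uses spiking LIF neurons:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, dims = 100, 2

# Encoders e_i: random unit-length preferred directions
enc = rng.standard_normal((n_neurons, dims))
enc /= np.linalg.norm(enc, axis=1, keepdims=True)
gain = rng.uniform(0.5, 2.0, n_neurons)   # illustrative tuning parameters
bias = rng.uniform(0.0, 1.0, n_neurons)

def activities(x):
    # a_i = f(e_i . x); f is rectified-linear here (an assumption --
    # the paper's networks use spiking LIF neurons)
    return np.maximum(0.0, gain * (enc @ x) + bias)

dec = np.zeros((n_neurons, dims))   # decoders d_i, learned online
kappa = 1e-4                        # learning rate (illustrative)

errs = []
for _ in range(2000):
    x = rng.uniform(-1, 1, dims)
    a = activities(x)
    x_hat = dec.T @ a                # x_hat = sum_i d_i a_i
    err = x - x_hat                  # E = X - X_hat
    dec += kappa * np.outer(a, err)  # delta d_i  ∝  a_i E
    errs.append(np.linalg.norm(err))
```

This learns decoders rather than full connection weights; multiplying through by the postsynaptic encoders $e_j$ gives the weight-space form $\Delta\omega_{ij} \propto a_i \, e_j \cdot E$ shown above.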
Bi & Poo (2001)
Kirkwood, Rioult & Bear (1996)
$\Delta \omega_{ij} \propto a_i \bigl[ \underbrace{S \, e_j \cdot E}_{\text{supervised}} + \underbrace{(1 - S) \, a_j (a_j - \theta)}_{\text{unsupervised}} \bigr]$
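One step of this combined rule can be sketched as follows. The parameter values ($S$, $\theta$, $\kappa$) are illustrative defaults, not the paper's fitted values:

```python
import numpy as np

def combined_update(W, a_pre, a_post, enc_post, error,
                    S=0.8, theta=0.5, kappa=1e-4):
    """One step of the combined rule:

        dW_ij ∝ a_i [ S (e_j . E) + (1 - S) a_j (a_j - theta) ]

    W        : (n_post, n_pre) connection weights
    a_pre    : (n_pre,)  presynaptic activities a_i
    a_post   : (n_post,) postsynaptic activities a_j
    enc_post : (n_post, d) postsynaptic encoders e_j
    error    : (d,) error signal E
    S mixes the supervised (error-driven) term with an unsupervised
    BCM-like term; theta is the modification threshold.
    """
    supervised = S * (enc_post @ error)                  # one value per j
    unsupervised = (1.0 - S) * a_post * (a_post - theta)
    return W + kappa * np.outer(supervised + unsupervised, a_pre)
```

Setting S = 1 recovers the purely supervised rule above; S = 0 leaves only the unsupervised, threshold-driven term.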
Given an error signal, E, we can learn binding.
Thanks to CNRGlab members, NSERC, CRC, CFI and OIT.
Parameters varied: neurons per dimension, learning rate, supervision ratio (S)