Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks (talk at CogSci2013)

On GitHub: tbekolay/cogsci2013-pres

Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks

Trevor Bekolay, Carter Kolbeck, Chris Eliasmith
Centre for Theoretical Neuroscience, University of Waterloo
bekolay.org/cogsci2013-pres

Hi, I'm Trevor. I've always been interested in the question of nature vs. nurture. I grew up being told that I could be anything I wanted to be. But despite that, I never did manage to make it into the NHL, and resigned myself to studying the brain instead. When members of my lab came together to build a large-scale model of the brain, I saw it as a golden opportunity to answer a small part of the nature vs. nurture question.

How can we learn the connection weights in Spaun's spiking neural networks?

This is that full-scale model. We call it Spaun. Spaun is a network of 2.5 million simulated spiking neurons that is able to do several high-level cognitive tasks. In this video, Spaun is solving a problem that you might find on an IQ test. As it gets information about each cell, it's trying to infer the transformation between cells in each row. Then, when we get to the last cell in the last row, we ask Spaun what it thinks should go in that cell, and it writes 333, which is the correct answer. Spaun is able to accomplish this and other tasks by representing information in populations of spiking neurons, and transforming that information through connections between populations of neurons. In order to create Spaun, we analytically solve for the connection weights between each pair of neural populations. I wanted to know: can the connection weights in Spaun be the result of some learning process? Could Spaun be the result of nurture? Or would Spaun have to be hard-coded by nature?

1. Cognitive functions

Vector Symbolic Architecture (Plate, 2003)
5 ⇒ [0.12, 0.56, 0.48]
In order to answer this question, we first have to understand how Spaun is able to perform these cognitive tasks.

= COUNT ⊛ 1 + NUMBER ⊛ 5
= COUNT ⊛ 2 + NUMBER ⊛ 5
= COUNT ⊛ 3 + NUMBER ⊛ 5
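To make ⊛ concrete: in Plate's architecture, binding is circular convolution of vectors. Below is a minimal NumPy sketch of the counting state above; the vocabulary vectors, dimensionality, and helper names are illustrative, not Spaun's actual vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # vector dimensionality (illustrative; Spaun uses larger vectors)

def symbol(d):
    """Random vector with expected unit length."""
    return rng.normal(0, 1.0 / np.sqrt(d), d)

def bind(a, b):
    """Circular convolution, Plate's binding operator ⊛."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    """Approximate inverse under circular convolution (involution)."""
    return np.concatenate(([a[0]], a[1:][::-1]))

# Hypothetical vocabulary for the counting example above.
COUNT, NUMBER = symbol(D), symbol(D)
ONE, FIVE = symbol(D), symbol(D)

state = bind(COUNT, ONE) + bind(NUMBER, FIVE)  # = COUNT ⊛ 1 + NUMBER ⊛ 5

# Unbinding COUNT from the state recovers something close to ONE.
recovered = bind(state, inverse(COUNT))
print(np.dot(recovered, ONE), np.dot(recovered, FIVE))  # high vs. low similarity
```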

How can we learn the binding function ⊛?

2. In spiking neurons

Neural Engineering Framework (Eliasmith & Anderson, 2003)

Encoding: a_i = f(e_i · X), where X is the represented vector and e_i is neuron i's encoder.


Decoding: X̂ = Σ_i d_i a_i, with decoders d_i.


Connection weights: ω_ij ∝ e_j · d_i (presynaptic neuron i to postsynaptic neuron j).
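A minimal NumPy sketch of these three equations, assuming a rectified-linear rate function for f and least-squares decoders; the population size, gains, and biases are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D = 50, 1                                  # neurons, represented dimensions
X = np.linspace(-1, 1, 200).reshape(-1, D)    # sample points of the signal

# Encoding: a_i = f(e_i · X), with a rectified-linear rate function for f.
E = rng.choice([-1.0, 1.0], size=(n, D))      # encoders e_i (in 1-D, just ±1)
gain = rng.uniform(0.5, 2.0, n)
bias = rng.uniform(-1.0, 1.0, n)
A = np.maximum(0, gain * (X @ E.T) + bias)    # activities a_i(X)

# Decoding: least-squares decoders d_i so that X̂ = Σ_i d_i a_i ≈ X.
d = np.linalg.lstsq(A, X, rcond=None)[0]      # shape (n, D)
X_hat = A @ d
print("decode RMSE:", np.sqrt(np.mean((X_hat - X) ** 2)))

# Full connection weights to a second population: ω_ij ∝ e_j · d_i.
E_post = rng.choice([-1.0, 1.0], size=(n, D))
W = d @ E_post.T                              # ω_ij, pre neuron i to post neuron j
```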

3. Learning

Random initial ω_ij

Supervised learning


Given the error E = X − X̂,

Δω_ij ∝ a_i e_j · E
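A sketch of one step of this rule under the conventions above (presynaptic activities a_i, postsynaptic encoders e_j); the learning rate kappa is illustrative.

```python
import numpy as np

def supervised_update(W, a_pre, E_post, error, kappa=1e-4):
    """One step of the supervised rule: dw_ij ~ a_i (e_j . E).

    a_pre:  (n_pre,)    presynaptic activities a_i
    E_post: (n_post, D) postsynaptic encoders e_j
    error:  (D,)        the error vector E = X - X_hat
    kappa is an illustrative learning rate."""
    proj = E_post @ error                   # e_j . E for every post neuron
    return W + kappa * np.outer(a_pre, proj)
```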

[Figure: Learning binding. Error relative to control mean vs. learning time (0–80 s).]

Unsupervised learning

[Diagram: presynaptic neuron i connected to postsynaptic neuron j with weight ω.]
Δω_ij ∝ a_i a_j (a_j − E[a_j])
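A sketch of one step of this BCM-style rule, with E[a_j] estimated by a running mean of postsynaptic activity; this estimate plays the role of the threshold θ in the combined rule below. The rate and decay constants are illustrative.

```python
import numpy as np

def unsupervised_update(W, a_pre, a_post, theta, kappa=1e-6, decay=0.99):
    """One step of the BCM-style rule: dw_ij ~ a_i a_j (a_j - E[a_j]).

    theta is a running estimate of E[a_j], updated each step."""
    theta = decay * theta + (1 - decay) * a_post          # track E[a_j]
    W = W + kappa * np.outer(a_pre, a_post * (a_post - theta))
    return W, theta
```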
[Figure: Replicated STDP curve. Change in connection weight (%) vs. spike timing (−100 to 100 ms), simulation and experiment.]

Bi & Poo (2001)

[Figure: Frequency dependence of STDP. Change in connection weight (%) vs. stimulation frequency (1–100 Hz), for low and high activity, simulation and experiment.]

Kirkwood, Rioult & Bear (1996)

Combined learning

Δω_ij ∝ a_i [ S e_j · E (supervised) + (1 − S) a_j (a_j − θ) (unsupervised) ]
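The combined rule mixes the two terms with a supervision ratio S; a sketch reusing the conventions of the two functions above.

```python
import numpy as np

def combined_update(W, a_pre, a_post, E_post, error, theta,
                    S=0.73, kappa=1e-4):
    """One step of the combined rule:
    dw_ij ~ a_i [S (e_j . E) + (1 - S) a_j (a_j - theta)].

    S = 0.73 is the best supervision ratio reported in the talk;
    kappa is illustrative."""
    supervised = S * (E_post @ error)                    # e_j . E, per post neuron
    unsupervised = (1 - S) * a_post * (a_post - theta)   # BCM term, per post neuron
    return W + kappa * np.outer(a_pre, supervised + unsupervised)
```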

[Figure: Learning binding. Error relative to control mean vs. learning time (0–80 s), for supervised (S = 1) and combined (S = 0.73) learning.]
[Figure: Unsupervised learning in control network. Transmission accuracy vs. simulation time (0–200 s).]

Sparsity


[Figure: Unsupervised learning in control network. Transmission accuracy and weight sparsity vs. simulation time (0–200 s).]
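The talk does not spell out how weight sparsity is measured; one plausible measure, sketched below, is the fraction of weights with near-zero magnitude.

```python
import numpy as np

def weight_sparsity(W, tol=1e-3):
    """Fraction of weights near zero, relative to the largest magnitude.
    A plausible definition, not necessarily the talk's exact measure."""
    return float(np.mean(np.abs(W) < tol * np.abs(W).max()))
```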

Given an error signal, E, we can learn binding.

How are error signals generated?

Thanks to CNRGlab members, NSERC, CRC, CFI and OIT.

Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks

bekolay.org/cogsci2013-pres

tbekolay/cogsci2013

tbekolay/cogsci2013-pres

Learning transmission

[Figure: Learning transmission vs. learning time (0–30 s), for supervised (S = 1) and combined (S = 0.73) learning.]

Learning parameters

Neurons per dimension, learning rate, supervision ratio (S)

jaberg/hyperopt
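The parameters above were tuned with hyperopt (jaberg/hyperopt). A minimal sketch of how such a search can be set up with hyperopt's fmin and TPE; the ranges and the dummy objective are placeholders, not the talk's actual settings.

```python
from hyperopt import fmin, hp, tpe

# Hypothetical search space mirroring the parameters listed above.
space = {
    "neurons_per_dimension": hp.quniform("neurons_per_dimension", 10, 200, 10),
    "learning_rate": hp.loguniform("learning_rate", -12, -4),
    "supervision_ratio": hp.uniform("supervision_ratio", 0.0, 1.0),
}

def objective(params):
    # Placeholder: in the real experiments this would run the learning
    # network and return its error relative to the control network.
    return (params["supervision_ratio"] - 0.73) ** 2  # dummy objective

best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
print(best)
```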

[Figure: Error rates for all parameter sets. Error relative to control mean for binding (supervised and combined) and transmission (supervised and combined).]

Machine learning