Presentation on the Science Special Issue "Modelling the Mind" and related works


Theoretical and Computational Neuroscience

Lucas S. Simões and Paulo R. O. Castro

Outline of this presentation

  • Introduction - Why modeling the Brain?
  • Section I - High-Level Cognition Models
  • Section II - Information Theory and Neuroscience
  • Wrapping Up - Conclusions and Other Works

Introduction

Computational Neuroscience

An interdisciplinary field that studies brain function in terms of the information-processing properties of the structures that make up the nervous system.
The brain is one of the most interesting complex systems in the universe and among the most efficient signal-processing devices known.

Why modeling the Brain?

Theoretical models are used to frame hypotheses that can be directly tested by biological and psychological experiments

How?

Models must capture the essential features of biological systems at multiple spatial-temporal scales, from membrane currents, proteins, and chemical coupling to network oscillations, architecture, learning and memory.

Why use computers to do so?

To estimate the behavior of systems that are too complex for analytical solutions, but also because of the computer's conceptual similarity to the brain.

"A proper exploration of these blank spaces on the map of science could only be made by a team of scientists, each specialist in his own field but each possessing a throughly sound and trained acquaintance with the fields of his neighbors"

- Norbert Wiener

Lines of inquiry

  • Single-neuron modeling
  • Sensory processing
  • Memory and synaptic plasticity
  • Cognition
  • Consciousness

Section I - High-Level Cognition Models

Main desiderata:

  • Models of higher level aspects of human intelligence, which depend critically on the prefrontal cortex (PFC) and associated subcortical areas in the basal ganglia
  • Robust active Maintenance and Rapid Updating of information

PFC Functionality

The PFC is the area of cortex most greatly expanded in humans relative to other mammals, suggesting that it is critical to distinctively human intelligence

  • Example 1: People with damage to PFC areas often exhibit environmental dependency syndrome (lack of free will)
  • Example 2: Dreaming, during which the PFC is deactivated

PFC Functionality

The PFC is important for actively maintaining information by sustained neural firing

The PFC system is also capable of adapting to the changing demands of the environment by rapidly updating what is being maintained

But how does the brain actually perform these actions?

Biological Mechanisms

Recurrent Excitatory Connectivity: Positive Feedback System

  • Individual spikes of neural firing may not come frequently enough
  • Noise quickly swamps any other signal

Biological Mechanisms

Intrinsic Bistability: Gated Ion Channels Model

  • Neuron as Digital Entity: binary representations
  • Many neurons combined encode an analog pattern: robustness in the face of noise
  • Dynamic Gating Mechanism which modulates PFC stability

Biological Mechanisms

Role of the Basal Ganglia: a gating mechanism that takes advantage of the extensive connectivity with the PFC

"Go" pathway as the Rapid Updating trigger

"NoGo" pathway as the Robust Maintenance trigger

The key difference is that this last mechanism, basal ganglia gating, enables spatially selective gating and is thought to be faster
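A toy sketch of this Go/NoGo logic (illustrative Python added here; it is only a cartoon of the gating idea, not the actual model from the article):

```python
def pfc_step(state, stimulus, go):
    """Cartoon of basal ganglia gating of PFC working memory.

    go=True  ("Go" pathway wins):   rapid updating, the new stimulus is loaded.
    go=False ("NoGo" pathway wins): robust maintenance, current contents persist.
    """
    return stimulus if go else state

memory = "A"
memory = pfc_step(memory, "distractor", go=False)  # maintained -> still "A"
memory = pfc_step(memory, "B", go=True)            # gate opens -> updated to "B"
print(memory)  # B
```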

Biological Mechanisms

But how does the gate know when to open and close?

  • A biologically implausible learning mechanism can learn to control this gating mechanism
  • Biologically plausible models are being tested to determine whether they converge with behavioral data

"Is that the full story, or are there other important ingredients to our intelligence?"

PFC models must support abstract symbols for reasoning and representation

The bistability of the model made it possible to achieve this goal, even though the model is less flexible than either a computer or the actual brain

The data obtained from this model are compatible with PFC recordings from monkeys, which leads us to the question:

What is the difference between humans and other animals?

Humans have a basic social instinct to share experiences and knowledge, so understanding human intelligence requires a better understanding of this cultural development

From this model arises the possibility of assigning variables, which is not possible with static neural circuits

In this way the PFC has the capacity to learn to represent and decode a symbol, and this learning capability is what gives symbols an associated meaning

Models with these characteristics are capable of cognition

Other ingredients, such as dynamic pattern matching, are also important for achieving this goal

Conclusion

The brain therefore works much more like a social network, where processing, memory, and learning are distributed across all of its elements, since neurons are strongly connected to one another. This is not the case in a computer, which separates these tasks

A digital computer routes arbitrary symbolic packages without consideration for the contents of these packages. This affords arbitrary flexibility (any symbol is as good as any other), but at some cost: When everything is arbitrary, then it is difficult to encode the subtle and complex relationships present in our commonsense knowledge of the real world

In contrast, the highly social neural networks of the brain are great at keeping track of "who’s who and what’s what", but they lack flexibility, treating a new symbol like a stranger crashing the party

The gating mechanism functions like a post office: the basal ganglia read a zip code indicating which part of the PFC to update, while the PFC itself deals with the content of the packages

Binary representations in the PFC are more symbol-like

Section II - Information Theory and Neuroscience

Differences from artificial systems, and why this theoretical approach is useful

Feedback

Pathways connect regions of higher-level brain function to regions of lower-level functionality. This gives rise to theories of top-down processing and selective attention.

What is Information Theory?

Information theory is a branch of applied mathematics and computer science involving the quantification of information.

History and Importance of Information Theory

Claude Elwood Shannon

Parameters of a transmission:

  • Transmission Power
  • Transmission Rate
  • Transmission Code

Shannon's work on solving the problem of perfect transmission

Transmission Rate vs. Channel Capacity

Intra-Organism Communication

And the limitations of spike-based communication

Basic concepts of Information Theory

The importance of source and channel coding

Compressing an 8-bit message

00110101
15 most common messages: a 4-bit codeword each. The other 241 messages: the escape prefix 1111 followed by the original 8 bits (12 bits in total).
What if we had to compress a very large number of 8-bit messages? With $N$ messages in total, $K$ of which are among the common ones, the total number of bits required is $4K + 12(N - K)$.
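A minimal sketch of this escape-code scheme in Python (added here for illustration; the variable names and helper functions are not from the slides):

```python
# The 15 most common 8-bit messages get the 4-bit codewords 0000..1110;
# every other message is sent as the escape prefix 1111 plus its original 8 bits.

def build_codebook(common_messages):
    assert len(common_messages) == 15
    return {msg: format(i, "04b") for i, msg in enumerate(common_messages)}

def encode(msg, codebook):
    """msg is an 8-character bit string such as '00110101'."""
    return codebook.get(msg, "1111" + msg)   # 4 bits if common, 12 bits otherwise

def decode(stream, codebook):
    inverse = {code: msg for msg, code in codebook.items()}
    out, i = [], 0
    while i < len(stream):
        chunk = stream[i:i + 4]
        if chunk == "1111":                  # escaped: the next 8 bits are the raw message
            out.append(stream[i + 4:i + 12])
            i += 12
        else:
            out.append(inverse[chunk])
            i += 4
    return out
```

If "00110101" happens to be one of the 15 common messages it costs only 4 bits; otherwise it costs 12.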

Probability formalism

A sequence of messages $\{x_1, x_2, x_3, \ldots\}$, each drawn from a probability distribution $p(X)$. Assume that each of the 15 common messages occurs with probability 5%; the other 241 share the remaining 25%.
The expected number of bits required to encode a single message is $\sum_i p(x_i)\, b(x_i)$, where $b(x_i)$ is the length of the codeword assigned to $x_i$.
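Under the probabilities assumed above, the expected length works out as follows (a quick numerical check, not part of the original slides):

```python
p_common, b_common = 0.05, 4          # 15 common messages: 4-bit codewords
p_rare,   b_rare   = 0.25 / 241, 12   # 241 rare messages share probability 0.25: 12 bits each

expected_bits = 15 * p_common * b_common + 241 * p_rare * b_rare
print(expected_bits)  # 0.75*4 + 0.25*12 = 6.0 bits per message, versus 8 bits uncompressed
```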

Is there a better way of compressing these messages?

Concatenating and dealing with symbols

Entropy

$H(X) = -\mathbb{E}_{p(X)}[\log_2 p(X)] = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)$
Entropy quantifies the expected value of the information contained in a message, therefore providing an absolute limit on the best possible lossless encoding.
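Computed for the distribution assumed in the compression example above (an illustrative check; the code is not from the slides):

```python
import math

probs = [0.05] * 15 + [0.25 / 241] * 241       # assumed distribution from the example
H = -sum(p * math.log2(p) for p in probs)
print(round(H, 2))  # about 5.72 bits: no lossless code can do better on average,
                    # so the 6-bit escape code sketched above is already close to optimal
```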

Shannon Coding Theorem

As the number of symbols goes to infinity, it is possible to compress each symbol to H(X) bits on average, and it is impossible to do better. Shannon's theorem, however, does not say how to achieve this compression limit.
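One practical construction that approaches this limit is Huffman coding (not mentioned in the slides; the sketch below is added for illustration and reuses the example distribution):

```python
import heapq, itertools

def huffman_code_lengths(probs):
    """Return the codeword length Huffman coding assigns to each symbol."""
    tie = itertools.count()                         # tie-breaker so the heap never compares lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:                     # each merge adds one bit to these codewords
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), syms1 + syms2))
    return lengths

probs = [0.05] * 15 + [0.25 / 241] * 241
avg = sum(p * l for p, l in zip(probs, huffman_code_lengths(probs)))
print(round(avg, 2))  # close to the ~5.72-bit entropy of this distribution
```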

A binary sequence $\{s_1, s_2, \ldots\}$ is to be transmitted to another device.
A noisy channel introduces random errors with probability $p$: $\{x_1, x_2, \ldots\} \xrightarrow{\text{channel}} \{\tilde{x}_1, \tilde{x}_2, \ldots\}$
What sequence $\{x_i\}$ should be sent over the channel if the aim is to deliver $\{s_i\}$ reliably to the receiver?

The message $S$ is divided into blocks of length $K$, each encoded into a block of length $N$

Example: parity bit

$K = 2$, $N = 3$

Rate of the code

$R = K/N$. If $K$ is large, a sophisticated form of redundancy can be introduced, even while keeping the rate $R$ constant
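A minimal sketch of the $K = 2$, $N = 3$ parity-bit example (illustrative Python; note that a single parity bit can only detect, not correct, one flipped bit):

```python
def encode_block(bits):
    """Encode a K=2 block into an N=3 block by appending an even-parity bit."""
    assert len(bits) == 2
    return bits + [sum(bits) % 2]

def parity_ok(block):
    """True if the received N=3 block still has even parity."""
    return sum(block) % 2 == 0

codeword = encode_block([1, 0])   # -> [1, 0, 1]; rate R = K/N = 2/3
print(parity_ok(codeword))        # True: no error detected
print(parity_ok([1, 1, 1]))       # False: a single flipped bit is detected
```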

Channel Capacity

There is a rate $C$ such that, for any rate $R < C$ and any desired error probability $\varepsilon > 0$, there exist a block length $K$ and a block encoder/decoder such that the receiver can decode each bit with error probability less than $\varepsilon$.

Mutual Information: Intuition

Measures how much information the output provides about the input. The more reliable the channel, the larger the mutual information.

Example: dice roll

$X \in \{1, \ldots, 6\}$, $Y \in \{\text{odd}, \text{even}\}$. Before knowing $Y$: $H(X) = \log_2 6$. After knowing $Y$: $H(X \mid Y) = \log_2 3$.
$I(X;Y) = H(X) - H(X \mid Y)$
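Plugging in the numbers for a fair die (an added worked step):

$I(X;Y) = \log_2 6 - \log_2 3 = \log_2 2 = 1$ bit

that is, learning whether the roll is odd or even provides exactly one bit of information about its value.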

Example: Binary Channel

In this channel, $X$ takes the value 1 with probability $q$, and the error probability is $p$. Then $I(X;Y) = H(q(1-p) + (1-q)p) - H(p)$, where $H(\cdot)$ is the binary entropy function.
Choosing the distribution of $X$ (that is, $q$) to maximize $I(X;Y)$ gives, in this example, $I(X;Y) = 0.469$ bits. If we transmit at rates below this, error-free communication is possible. In general, $C = \sup_{p(X)} I(X;Y)$.
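A numerical sketch of this kind of calculation, assuming a binary symmetric channel with crossover probability $p$ (the value $p = 0.1$ below is an arbitrary illustration, not the value used in the slide):

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(q, p):
    """I(X;Y) for a binary symmetric channel: P(X=1) = q, crossover probability p."""
    p_y1 = q * (1 - p) + (1 - q) * p          # probability that the output is 1
    return H2(p_y1) - H2(p)

p = 0.1                                       # assumed error probability, for illustration
capacity = max(mutual_information(q / 1000, p) for q in range(1001))
print(round(capacity, 3))  # ~0.531; the maximum is at q = 0.5, where C = 1 - H2(p)
```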

Neuroscience Models using Information Theory

Difficulties

Assumptions and differences from engineering systems

Why is Entropy useful in neuroscience?

The brain compresses information

Why are Channel Capacity and Mutual Information useful in neuroscience?

Neuronal spiking depends probabilistically on the input, similar to a noisy channel

Modulation: Neuronal Spikes and Spike Interval Coding

Example: Channel Capacity of a Neuron

C_T = 34.68 bps, C_R = 44.95 bps. The capacity-achieving discrete distribution doesn't reflect the biology of the system.

Optimizing output

Given the plasticity and evolution of neural systems, this is a reasonable approach

Discussion and Interpretation

Input and Output

  • Single and memoryless input
  • Less restrictive assumptions
  • Soft and hard decoding of $\tilde{X}$

Discreteness

  • The capacity is achieved by a discrete probability distribution
  • We believe this is not true of biological systems, so they operate below capacity
  • Feedback is prevalent in many parts of the brain
  • Information may be shared by many neurons

Wrapping Up

Conclusion

The similarities between computers and brains are remarkable, but it may be their differences that unlock the mysteries of human intelligence

Other works

  • Non-separation of computation and communication on neural systems (Gastpar et al., 2003)
  • Bayesian algorithms during decision making, predictions and pattern recognition (Rao and Ballard, 1999; S. Lee and Mumford, 2003; Knill and Pouget, 2004; George and Hawkins)
  • Analog cortical error correction codes (Fiete et al., 2008)
  • Directed information theory and applications (Granger, 1969; Marko, 1973; Rissanen and Wax, 1987; Massey, 1990; Tatikonda and Mitter, 2009; Hesse et al., 2003; Eichler, 2006; Waddell et al., 2007; Amblard and Michel, 2011; Quinn et al., 2011)
  • Relationship between control, information theory and thermodynamics (Mitter and Newton, 2005; Friston, 2010; Mitter, 2010)

References

  • Dimitrov, A. G., Lazar, A. A., & Victor, J. D. (2011). Information theory in neuroscience. Journal of Computational Neuroscience.
  • O’Reilly, R. C. (2006). Biologically based computational models of high-level cognition. Science (New York, N.Y.), 314(5796), 91–4. doi:10.1126/science.1127242
  • McDonnell, M. D., Ikeda, S., & Manton, J. H. (2011). An introductory review of information theory in the context of computational neuroscience. Biological Cybernetics. doi:10.1007/s00422-011-0451-9

References

  • Wikipedia contributors. "Bayesian approaches to brain function." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 9 Apr. 2014. Web. 21 Jun. 2014.
  • Wikipedia contributors. "Computational neuroscience." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 18 Mar. 2014. Web. 21 Jun. 2014.
  • Wikipedia contributors. "Information theory." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 9 Jun. 2014. Web. 21 Jun. 2014.