(These are my own notes from Coursera's "Computational Neuroscience", as taught by the University of Washington. I'm really enjoying the lecture contents, challenging assignments, level of mathematical treatment, and the otherwise-rare feeling of superlative transcendence that comes with science. It's almost like half the math I learned in college had been dormant, waiting for a non-math professor who knew how to apply it to their subject. This is also the last properly capitalized paragraph you will see on this page. WIP.)

overview

goal of computational/theoretical neuroscience: explaining the mind (mostly behaviour at this point) in terms of brain computation.

computational/theoretical neuroscience: tools and methods for:

  • characterizing what nervous systems do
  • determining how they achieve it
  • understanding why they operate like that

kinds of models in theoretical neuroscience by purpose:

  • descriptive (what): quantitative description of stimuli encoding and decoding
  • mechanistic (how): detailed simulation of neurons, networks, etc.
  • interpretive/normative (why): purpose (function) and underlying principles behind function.

types of models

descriptive

example: in Hubel and Wiesel's discoveries in the primary visual cortex, the frequency at which action potentials occur in single-cell recordings is a function of the place and orientation of the stimulus:

freq=f(orientation,place)(TeX formula:  freq = f(orientation, place) )

receptive field: specific properties of a sensory stimulus that maximize response.

  • retinal ganglion (efferent) cells and thalamus' Lateral Geniculate Nucleus. center-surround receptive fields only, i.e:
        -    +
       -+-  +-+
        -    +
  • V1 simple cells. oriented receptive field, e.g:
     --------    +
    -++++++++-  +-+
     --------    +-+
                  +-+
                    +

mechanistic

V1 simple cell model suggested by Hubel & Wiesel (doesn't take recurrent input into account):

LGN cell 1 ----\
LGN cell 2 -----|--> V1 simple cell
...
LGN cell n ----/

interpretive

why are receptive fields in V1's simple cells shaped that way? what is their diversity for?

efficient coding hypothesis: evolution favours representing images as faithfully and efficiently as possible.

on V1 simple cells, given image I(TeX formula: I) and receptive fields RFi(TeX formula: RF_i) , we can try to reconstruct it ( Î(TeX formula: \hat{I}) ) using neural response strengths (multipliers) r1,r2,...(TeX formula: r_1, r_2, ...) like this:

Î=iRFi*ri(TeX formula:  \hat{I} = ∑_i RF_i * r_i )

question: what are the RFi(TeX formula: RF_i) that minimize squared pixel-wise errors between I(TeX formula: I) and Î(TeX formula: \hat{I}) , and are as independent as possible?

  1. start with random RFi(TeX formula: RF_i)

  2. run an efficient coding algorithm (sparse coding, Independent Component Analysis, predictive coding)
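the reconstruction and its error can be sketched in numpy. a toy stand-in only: plain least squares picks the ri(TeX formula: r_i) here, where a real efficient coding algorithm would also push for sparseness/independence; all sizes and names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 8x8 "image" I and a random dictionary of 16 receptive fields RF_i
I = rng.normal(size=(8, 8))
RF = rng.normal(size=(16, 8, 8))

# response strengths r_i: plain least squares here, standing in for the
# sparse-coding / ICA step a real efficient coding algorithm would perform
A = RF.reshape(16, -1).T                       # column a = flattened RF_a
r, *_ = np.linalg.lstsq(A, I.ravel(), rcond=None)

# reconstruction: I_hat = sum_i RF_i * r_i
I_hat = np.tensordot(r, RF, axes=1)
err = np.sum((I - I_hat) ** 2)                 # squared pixel-wise error
```

with only 16 filters for 64 pixels the reconstruction is lossy, which is exactly the error the hypothesis wants minimized.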

neurobiology

neuron

Some types of neurons:

  • pyramidal: most common type, cortical building block
  • Purkinje: multi-level/branching dendrites
  • ganglion retinal cells, amacrine cells

neuron doctrine:

  • functional unit of the nervous system
  • neurons are (mostly) discrete
  • signals travel from dendrites to axon

simple idealized descriptive model:

f={1isynaptic_strengthi*xi>threshold0otherwise(TeX formula:  f = \left\{ \begin{array}{cc}                         1 & ⇔ ∑_i synaptic\_strength_i * x_i > threshold \\                         0 & otherwise \\                 \end{array} \right. )

where synaptic_strength(TeX formula: synaptic\_strength ∈ ℝ) .
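the idealized threshold unit is a one-liner (the specific weights and threshold in the usage example are hypothetical):

```python
def neuron_fires(inputs, weights, threshold):
    """idealized descriptive model: output 1 iff sum_i synaptic_strength_i * x_i > threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0
```

e.g. `neuron_fires([1, 1, 0], [0.5, 0.6, 0.9], 1.0)` fires, because 0.5 + 0.6 = 1.1 exceeds the threshold.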

ion channels are variously gated (like transistors):

  • voltage-gated (membrane, gap junctions)
  • chemically-gated (neuroreceptors at post-synaptic cleft, smell-taste sensors)
  • mechanically-gated

synapse

synapse: connection or junction between two neurons.

  • electrical:1

    • uses gap junctions (shared channels)
    • fast. ideal for synchronization between neurons, sympathetic reactions
  • chemical:

    • uses neurotransmitters
    • thought to be more plastic, because post-synaptic cell can independently change its dendritic membrane depending on input, history, etc. hypothetically ideal for learning.

synapse doctrine: synapses are the basis for learning and memory

plasticity

  • hebbian: if neuron A repeatedly helps fire neuron B, their synaptic strength increases. neurons that fire together, wire together.

        before: A_{t1} ---||||--- *.01 ---> B_{t1}
        after:  A_{t2} ---------- *.1  ---> B_{t2}
    
  • LTP/LTD:

    • experimentally observed change in synaptic strength
    • lasts a few hours or days
    • depends on relative timing of input and output spikes

    if pre predicts post, post increases the strength (the smaller the delta, the more excitatory the synapse). if pre fires after post, post reduces the strength (the smaller the delta, the more inhibitory the synapse):

Δstrength=1/Δtiming(TeX formula:  Δstrength = 1 / Δtiming )
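a literal sketch of the Δstrength = 1/Δtiming rule above. the learning rate lr and the eps cap that avoids the divergence at Δt → 0 are my own additions, not part of the notes:

```python
def stdp_update(w, t_pre, t_post, lr=0.01, eps=1e-3):
    """spike-timing-dependent update: dt = t_post - t_pre.
    pre before post (dt > 0) strengthens the synapse;
    post before pre (dt < 0) weakens it; smaller |dt| means a bigger change.
    lr and eps are assumptions added for numerical sanity."""
    dt = t_post - t_pre
    if abs(dt) < eps:
        dt = eps if dt >= 0 else -eps
    return w + lr / dt
```

`stdp_update(0.5, 10.0, 12.0)` potentiates (pre fired first), `stdp_update(0.5, 12.0, 10.0)` depresses.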

nervous system

  • PNS

    • somatic: modal afferent and voluntary efferent
    • autonomic

    • sympathetic: fight or flight

    • parasympathetic: sleep or breed

    • enteric: digestion

  • CNS:

    • spinal cord
    • encephalon/brain

    • rhombo/hindbrain

      • medulla oblongata: breathing, muscle tone, blood pressure regulation
      • pons: connects to cerebellum. sleep and arousal
      • cerebellum: equilibrium, PNS-somatic calibration, language?, attention?
    • mesen/mid

      • visual and auditory reflexes, saccadic movement
      • reticular formation: reflexes, pain, breathing, sleep and arousal
    • thalamus

      • relay station for everything modal except smell
    • procen/cerebrum

      • basal ganglia
      • hippocampus
      • amygdala
      • cortex: all of higher cognition, subjectivity

cortex

6 layers of neurons, relatively uniform in structure. e.g.:

layer   function
1       input from "higher" receptive fields
2       output to "higher" receptive fields
3       output to "higher" receptive fields
4       input from subcortical regions
5       output to subcortical regions
6       output to subcortical regions

description: encoding

recording techniques:

  • fMRI: records aggregate changes in magnetic field resulting from hemodynamics
    • accuracy: millions of neurons, varies with voxel resolution
    • time scale: seconds
    • non-invasive
    • in full 3D
    • great for differential localisation (dissociation) of function
  • EEG: records aggregate changes in electric field resulting from neural activity itself
    • accuracy: rather noisy. confined to skull surface
    • time scale: faster than fMRI
  • electrode array (single-unit recording(s)): roughly targets single neurons
  • calcium imaging: single neurons too, based on fluorescence changes resulting from Ca binding.
  • patch clamp: inserts into cell
    • maximum accuracy. requires less amplification
    • good for mechanistic models of single cells, ions, pumps, etc.

raster plot: spike trains from successive trials stacked on top of each other

encoding: from stimulus parameters to pattern of responses

P(response|stimulus)(TeX formula:  P(response | stimulus) )

decoding: working back stimulus parameters from response

P(stimulus|response)(TeX formula:  P(stimulus | response) )

examples:

  • in V1 oriented (angle = θ) receptive fields: Hz=Gaussian(θ)(TeX formula: Hz = Gaussian(θ))

  • motor cortex: Hz=cos(θ)(TeX formula: Hz = cos(θ))

functional maps: the feature selectivity of adjacent tissue can be represented with a spatial field, where each point is associated with the values of a stimulus that maximise its response (i.e. its receptive field)

one map per parameter: position, orientation, grammatical categories, images of both Brad Pitt and Jennifer Aniston, the concept of Dracula: anything related to him (image, audio, words), etc.

even worse: higher-order representations can feed back into the stimuli upstream, shaping the way things are perceived even at very rudimentary levels.

simple models

P(r(t)|s)(TeX formula:  P(r(t) | s) ) where r(t)=response(TeX formula: r(t) = response) , s=stimulus(TeX formula: s = stimulus) .

linear

  • most recent stimulus only:

r(t)=fs(t)(TeX formula:  r(t) = f s(t) )

where f is the synaptic strength.

or to account for a short delay from stimulus to response:

r(t)=fs(tτ)(TeX formula:  r(t) = f s(t - τ) )

  • linear with temporal filtering:

a more realistic model will depend on a linear combination of many recent inputs:

r(t)=k=0nfkstk(TeX formula:  r(t) = ∑_{k=0}^{n} f_k s_{t-k} )

note how constant f(TeX formula: f) turned into weight function f(TeX formula: f) .

in infinitesimal form:

r(t)=tf(τ)s(tτ)dτ(TeX formula:  r(t) = ∫_{-∞}^{t} f(τ) s(t-τ) \; dτ )

this is a convolution of f(τ)(TeX formula: f(τ)) and s(tτ)(TeX formula: s(t-τ)) , where the former acts as the linear filter.

  • leaky filter/integrator:

    f(τ)(TeX formula: f(τ)) decreases the more you look into past synapses, usually exponentially.
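a minimal numpy sketch of the temporal filtering model, using an exponentially decaying (leaky) filter; the filter length, time constant, and white-noise stimulus are arbitrary choices:

```python
import numpy as np

# leaky (exponentially decaying) filter f(tau): recent stimuli weigh more
taus = np.arange(20)
f = np.exp(-taus / 5.0)

# white-noise stimulus s(t)
rng = np.random.default_rng(1)
s = rng.normal(size=200)

# r(t) = sum_k f_k * s_{t-k}: the discrete convolution, truncated to len(s)
r = np.convolve(s, f)[: len(s)]
```

by construction `r[1]` equals `f[0]*s[1] + f[1]*s[0]`: each response sample is a weighted sum of the recent stimulus history.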

example: linear filtering applied to center-surround visual fields, where distance in time is replaced by distance in space. the discrete model thus becomes:

r(x,y)=x=n,y=nnf(x,y)s(xx,yy)(TeX formula:  r(x, y) = ∑_{x'=-n, y'=-n}^{n} f(x', y') s(x - x', y - y') )

in addition to leaking, we can think of the phenomenon of lateral inhibition in retinal ganglion cells to construct our filter f(TeX formula: f) . this is usually approximated as the difference of a normal distribution and a negative, shallower gaussian. like Sobel's filter, this is an edge detector.
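a sketch of such a difference-of-gaussians filter (the size and the two widths are arbitrary):

```python
import numpy as np

def difference_of_gaussians(size=9, sigma_c=1.0, sigma_s=2.0):
    """center-surround filter: a narrow positive gaussian (center)
    minus a wider, shallower one (surround)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    d2 = xx**2 + yy**2
    center = np.exp(-d2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-d2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

dog = difference_of_gaussians()
```

the filter is positive at its center and negative in the surround, so convolving it with an image responds to contrast edges rather than uniform regions.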

non-linear activation function

finally, we can think of an extra activation function that will map the filter convolution to the actual firing rate:

s(t) -> f(τ) -> g() -> r(t)

r(t)=g(f(τ)s(tτ)dτ);(TeX formula:  r(t) = g\left( ∫ f(τ) s(t-τ) \; dτ \right); ) where g()(TeX formula: g()) is the activation function (also called input/output function, or transfer function in ANNs), usually with sigmoid shape.
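putting the two stages together (linear filter, then a sigmoid g()); the gain, threshold, and filter shape are arbitrary assumptions:

```python
import numpy as np

def sigmoid(x, gain=1.0, threshold=0.0):
    """activation / input-output function g(): squashes filtered input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def firing_rate(s, f, gain=2.0, threshold=0.5):
    """r(t) = g((f * s)(t)): linear filtering followed by a static non-linearity."""
    filtered = np.convolve(s, f)[: len(s)]
    return sigmoid(filtered, gain, threshold)

rng = np.random.default_rng(4)
rates = firing_rate(rng.normal(size=100), np.exp(-np.arange(10) / 3.0))
```

the output can now be read as a (normalised) firing rate: always positive and saturating, which raw linear filtering cannot guarantee.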

dimensionality reduction

or how to regress a good filter. as seen in the last example, the stimulus could easily contain an intractable number of parameters.

reverse correlation: spike-triggered average

  1. convert s(t)(TeX formula: s(t)) to discrete vector: a function of a single variable can be thought of as a single point in space:

  2. each application (an instant in the spike trail) is a dimension.

  3. the value of that component is the value of that function/stimulus at that point/time:

s(t)=(st1st2st3...)P(r|s1,s2,...,sn)(TeX formula:  s(t) = \begin{pmatrix}                       s_{t1} \\                       s_{t2} \\                       s_{t3} \\                       ...                     \end{pmatrix} ⊢ P(r | s_1, s_2, ..., s_n) )

  4. one method uses Gaussian white noise as s(t)(TeX formula: s(t)) to probe a reaction, and approximate what is common among trials.

  5. a random prior stimuli distribution is constructed from multiple such trials of decomposed random s(t)(TeX formula: s(t)) 's:

    per the central limit theorem, this distribution of stimuli will also be gaussian at each dimension, at whatever new dimensions are chosen from a basis change/linear combination; and indeed at the whole space (picture a normal distribution on a plane, or on 3D).

s1(t)=(nrand()t1nrand()t2nrand()t3...)(TeX formula:  s1(t) = \begin{pmatrix} nrand()_{t1} & nrand()_{t2} & nrand()_{t3} & ... \end{pmatrix} )

s2(t)=(nrand()t1nrand()t2nrand()t3...)(TeX formula:  s2(t) = \begin{pmatrix} nrand()_{t1} & nrand()_{t2} & nrand()_{t3} & ... \end{pmatrix} )

s3(t)=(nrand()t1nrand()t2nrand()t3...)(TeX formula:  s3(t) = \begin{pmatrix} nrand()_{t1} & nrand()_{t2} & nrand()_{t3} & ... \end{pmatrix} )

...(TeX formula:  ... )

  6. pick the cluster of points that caused r(t)(TeX formula: r(t)) to fire, called the spike-conditional distribution

  7. and calculate their spike-triggered average point.

  8. then pick some dimension that passes through that point and project the original spike-conditional distribution onto it.

  9. the spike-triggered average vector is then taken to be a unit vector. avg(s)(TeX formula: avg(s)) is our weight filter/feature detector/convolution f(τ)(TeX formula: f(τ)) .

geometrically, it is also a projection. so

r(t)=tf(τ)s(tτ)dτ=f̂·ŝ(TeX formula:  r(t) = ∫_{-∞}^{t} f(τ) s(t-τ) dτ = \hat{f} · \hat{s} )
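the spike-triggered average itself is easy to compute. here is a toy experiment where the "cell" spikes whenever the recent stimulus mean is high; the window size, integration length, and threshold are all made up:

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, window=20):
    """average of the stimulus segments preceding each spike — the STA filter estimate."""
    snippets = [stimulus[t - window:t] for t in np.flatnonzero(spikes) if t >= window]
    return np.mean(snippets, axis=0)

# white-noise stimulus and a toy cell that fires when the mean of the
# last 5 samples exceeds 1.0
rng = np.random.default_rng(2)
s = rng.normal(size=5000)
recent = np.convolve(s, np.ones(5) / 5)[: len(s)]
spikes = (recent > 1.0).astype(int)

sta = spike_triggered_average(s, spikes)
```

the recovered filter is elevated only at the lags the toy cell actually integrates over (the last few samples), and near zero elsewhere.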

activation function

or how to regress a good g() for our f().

remember that g()(TeX formula: g()) is the last step in producing r(s)(TeX formula: r(s)) , and that encoding r(s)(TeX formula: r(s)) in its most general form amounts to calculating P(spike|stimulus)(TeX formula: P(spike|stimulus)) . so we start from the equation:

r(s)=g()=P(spike|stimulus)(TeX formula:  r(s) = g() = P(spike|stimulus) )

P(spike|stimulus)(TeX formula: P(spike|stimulus)) has become P(spike|avg(s))(TeX formula: P(spike|avg(s))) after simplifying the multi-dimensional stimulus. so what's the new probability under this subset?

P(spike|s)=P(spikes)P(s)(TeX formula:  P(spike|s') = \frac {P(spike ∩ s')} {P(s')} )

expanded into bayesian form:

P(spikes)P(s)=P(s|spike)P(spike)P(s)(TeX formula:  \frac {P(spike ∩ s')} {P(s')} =             \frac {P(s'|spike) P(spike)} {P(s')} )

P(s)(TeX formula: P(s')) simply has a normal probability distribution for the white-noise experiment:

    . .
    . .
  . . . .
. . . . . .

the histogram of P(s|spike)(TeX formula: P(s'|spike)) is obtained only from those point-values in the stimulus that coincide in time with a spike. let's say those are mostly peaks in the stimulus signal. the distribution would look skewed like this:

       .
      . .
   . . . .
. . . . . .

therefore, g()=P(s|spike)P(spike)P(s)(TeX formula: g() = \frac {P(s'|spike)P(spike)} {P(s')}) :

     ....
    .
   .
...

P(spike)(TeX formula: P(spike)) is just calculated empirically by counting during the random experiment.

if P(s|spike)(TeX formula: P(s'|spike)) also had a normal distribution (like P(s)(TeX formula: P(s')) ), the ratio of both functions (as used in Bayes' formula) would yield a constant function. i.e., the activation function would look flat. what we actually want is for both distributions to differ.
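the histogram-ratio estimate of g() can be sketched like this; the bin count and the toy data (a cell that spikes whenever the projected stimulus is large) are assumptions:

```python
import numpy as np

def estimate_nonlinearity(projected, spikes, bins=10):
    """g(s') ~ P(s'|spike) * P(spike) / P(s'), estimated from histograms of the
    stimulus projected onto the filter (s')."""
    edges = np.histogram_bin_edges(projected, bins=bins)
    p_s, _ = np.histogram(projected, bins=edges, density=True)
    p_s_spike, _ = np.histogram(projected[spikes > 0], bins=edges, density=True)
    g = np.zeros(bins)
    nz = p_s > 0
    g[nz] = p_s_spike[nz] / p_s[nz] * spikes.mean()   # P(spike) counted empirically
    return g, edges

# toy data: spikes only at high projected values
rng = np.random.default_rng(5)
projected = rng.normal(size=5000)
spikes = (projected > 1.0).astype(int)
g, edges = estimate_nonlinearity(projected, spikes)
```

as in the sketched histograms above, the ratio comes out flat-zero at low projections and rises where spiking and stimulus coincide.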

maximally informative filters (again)

could we work backwards from s(TeX formula: s') and spike(TeX formula: spike) so that the distributions P(s|spike)(TeX formula: P(s'|spike)) and P(s)(TeX formula: P(s')) are as dissimilar as possible?

one measure for such a difference is Kullback-Leibler divergence:

DKL(P(s),Q(s))=P(s)log(P(s)Q(s))ds(TeX formula:  D_{KL}(P(s), Q(s)) = ∫ P(s) \; log\left(\frac {P(s)} {Q(s)}\right) \; ds )

an advantage of this over STA is that the prior P(s)(TeX formula: P(s')) need not be gaussian random noise. it might as well be natural stimuli.

Generalized Linear Models

  • with a refractory period filter:
(s) -> (f) -> (+) -> (g) -> (Poisson) ----> r
               ^                        |
               |__(refractory filter)___|
  • with coupling (neighbour neurons) filters, ephaptic transmission, etc.

description: decoding

signal detection theory (hypothesis testing)

likelihood ratio test:

1<P(neuralresponse|observedbehaviour)P(neuralresponse|not(observedbehaviour))(TeX formula:  1 < \frac {P(neural\_response|observed\_behaviour)}                        {P(neural\_response|not(observed\_behaviour))} )

Neyman-Pearson lemma: the likelihood ratio test is the most powerful test for any given size (false-positive rate).

maximum likelihood with asymmetric costs:

type 1 and type 2 errors may not cost the same. we introduce a weighting factor L(TeX formula: L) :

Lossyes=LyesP(no|response)(TeX formula:  Loss_{yes} = L_{yes} P(no|response) )

Lossno=LnoP(yes|response)(TeX formula:  Loss_{no}  = L_{no}  P(yes|response) )

we should clearly answer yes when Lossyes<Lossno(TeX formula: Loss_{yes} < Loss_{no}) . i.e., LyesP(no|response)<LnoP(yes|response)(TeX formula: L_{yes} P(no|response) < L_{no} P(yes|response)) . rewriting using Bayes' rule yields:

LyesP(no,r)P(r)<LnoP(yes,r)P(r)(TeX formula:  L_{yes} \frac{P(no, r)}{P(r)} <             L_{no} \frac{P(yes,r)}{P(r)} )

LyesP(r|no)P(no)P(r)<LnoP(r|yes)P(yes)P(r)(TeX formula:  L_{yes} \frac{P(r|no)P(no)}{P(r)} <             L_{no} \frac{P(r|yes)P(yes)}{P(r)} )

LyesP(r|no)P(no)<LnoP(r|yes)P(yes)(TeX formula:  L_{yes} P(r|no) P(no) < L_{no} P(r|yes) P(yes) )

...and arranging for the ratio of conditional distributions to obtain the likelihood ratio:

P(r|yes)P(r|no)>LyesP(no)LnoP(yes)(TeX formula:  \frac {P(r|yes)} {P(r|no)} >             \frac {L_{yes}P(no)} {L_{no} P(yes)} )
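the final decision rule as a function (all probabilities and losses in the examples are toy inputs):

```python
def decide(p_r_given_yes, p_r_given_no, p_yes, loss_yes=1.0, loss_no=1.0):
    """answer 'yes' iff the likelihood ratio beats the loss-weighted prior ratio
    (the criterion derived above)."""
    threshold = (loss_yes * (1.0 - p_yes)) / (loss_no * p_yes)
    return "yes" if p_r_given_yes / p_r_given_no > threshold else "no"
```

with symmetric losses and a flat prior the threshold is 1 (the plain likelihood ratio test); making a "yes" error 10x costlier raises the bar accordingly.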

population coding

methods for decoding from populations of neurons:

  • population vector
  • bayesian inference

    • maximum likelihood
    • maximum a posteriori

population vector

example: cricket cercal cells in the antenna-like cerci structures in the abdomen are sensitive to wind velocity.

how does the group as a whole communicate wind velocity to the animal?

there are 4 overlapping receptive field distributions, corresponding to the 4 directions relative to the body:

f/r_max
       -    |    -    |
      ---  |||  ---  |||
    -----++|||++---++|||||
      45   135  225  315    degrees

given wind direction (in degrees) s(TeX formula: s) and a neuron's preferred direction sa(TeX formula: s_a) , response looks like: (after normalising for rmax(TeX formula: r_{max}) , because some neurons have an intrinsically higher firing rate, even for equally maximal stimuli)

(f(s)rmax)a=[cos(ssa)]+(TeX formula:  \left( \frac {f(s)} {r_{max}} \right)_a = [cos(s - s_a)]_+ )

or in terms of vectors v(TeX formula: v) (wind vector) and ca(TeX formula: c_a) (neuron's preferred vector):

(f(s)rmax)a=[v·ca]+(TeX formula:  \left( \frac {f(s)} {r_{max}} \right)_a = [v · c_a]_+ )

then, the population vector is calculated as:

vpop=a=14(rrmax)aca(TeX formula:  v_{pop} =             ∑_{a=1}^{4} \left( \frac{r}{r_{max}} \right)_a c_a )

note how 4 orthogonal vectors are used, even though 2 in linear combinations would suffice to generate the whole plane. this is due to the lack of negative firing rates to represent wind in the opposite direction, or its component along that direction.

this method is prevalent in brain-machine interfaces targeting the M1 area.

for a sufficiently large number of directions:

vpop=a=1N(v·ca)ca(TeX formula:  v_{pop} = ∑_{a=1}^{N} (v·c_a) c_a )
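a sketch of the population vector for the cricket-style setup: 4 preferred directions, half-wave-rectified cosine tuning, and rmax(TeX formula: r_{max}) normalised to 1 (all toy choices):

```python
import numpy as np

def population_vector(rates, r_max, preferred_dirs):
    """v_pop = sum_a (r_a / r_max_a) c_a, with c_a unit vectors at the preferred angles."""
    c = np.column_stack([np.cos(preferred_dirs), np.sin(preferred_dirs)])
    return (np.asarray(rates) / np.asarray(r_max)) @ c

# 4 preferred directions; wind blowing at 90 degrees
dirs = np.deg2rad([45.0, 135.0, 225.0, 315.0])
wind = np.deg2rad(90.0)
rates = np.maximum(np.cos(wind - dirs), 0.0)   # [cos(s - s_a)]_+ tuning, r_max = 1
v = population_vector(rates, np.ones(4), dirs)
decoded = np.rad2deg(np.arctan2(v[1], v[0]))   # recovered wind direction
```

only the two cells whose preferred directions flank the wind fire (the other two are rectified to zero), yet their vector sum points straight at 90 degrees.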

bayesian inference

  • more general than population vector
  • takes more information into account
  • less vulnerable to noise

recall Bayes' rule for stimulus s(TeX formula: s) and response r(TeX formula: r) :

P(s|r)=P(r|s)P(s)P(r)(TeX formula:  P(s|r) = \frac {P(r|s)P(s)} {P(r)} )

we could also rewrite the marginal distribution P(r)(TeX formula: P(r)) to yield:

P(s|r)=P(r|s)P(s)dsP(r|s)P(s)(TeX formula:  P(s|r) = \frac {P(r|s)P(s)} {∫ \; ds P(r|s)P(s)} )

  • maximum likelihood distribution: this bayesian decoding strategy tries to find the stimulus which maximizes the likelihood conditional distribution P(r|s)(TeX formula: P(r|s)) .

  • maximum a posteriori distribution: this decoding strategy tries to find the stimulus which maximizes the a posteriori (LHS) conditional distribution P(s|r)(TeX formula: P(s|r)) .

MAPD is influenced by the prior P(s)(TeX formula: P(s)) , because all factors in Bayes' equation are taken into account.

example: imagine a population of neurons that encode stimulus/parameter s(TeX formula: s) , with response r=f(s)(TeX formula: r = f(s)) a Gaussian probability distribution, and mean/maximum at s1(TeX formula: s_1) .

meanwhile, we have each individual neuron's Gaussian responses scattered along s(TeX formula: s) and overlapping each other:

r=f(s)
       -    |    -    |    -
      ---  |||  ---  |||  ---
    -----++|||++---++|||++-----
      n1   n2   s_1  n3    nn     stimulus

assume the following:

  • each neuron fires independently
  • variability in firing is due to a Poisson process, so the spike-count variance equals the mean (variance=mean(TeX formula: variance = mean))

  • the distribution of distributions is not gaussian, but uniform. i.e., there's plenty of single-neuron distributions across the range of s(TeX formula: s) .

spikes are produced randomly and independently in each time bin, with a probability given by the instantaneous rate. from Poisson:

PT(k)=(cT)kecTk!(TeX formula:  P_T(k) = \frac {(cT)^k e^{-cT}} {k!} )

where:

  • k(TeX formula: k) is the number of occurrences for which PT()(TeX formula: P_T()) is going to be calculated for time interval T(TeX formula: T) .
  • c(TeX formula: c) is the constant mean firing rate per time unit.
  • T(TeX formula: T) is the time interval.

substituting for the variables in our example setting, the probability of seeing ra(TeX formula: r_a) spikes for a single neuron a(TeX formula: a) becomes:

P(ra|s)=(fa(s)T)Traefa(s)T(Tra)!(TeX formula:  P(r_a|s) = \frac {(f_a(s)T)^{Tr_a} e^{-f_a(s)T}} {(Tr_a)!} )

notice how we write the specific number of occurrences k(TeX formula: k) during interval T(TeX formula: T) simply as ra(TeX formula: r_a) times T(TeX formula: T) .

also, the constant c(TeX formula: c) is replaced by our previous probability function fa(TeX formula: f_a) , and the whole equation is accordingly changed to a conditional distribution of stimulus s(TeX formula: s) (that is, our conditional distribution is a bunch of Poisson formulas, one per slice at this neuron's Gaussian).

so the whole population's ( r̂=(r1,r2,...,rn)(TeX formula: \hat{r} = (r_1, r_2, ..., r_n)) ) distribution can be written down as the product of the independent distributions:

P(r̂|s)=a=1N(fa(s)T)Traefa(s)T(Tra)!(TeX formula:  P(\hat{r}|s) =             ∏_{a=1}^N \frac{(f_a(s)T)^{Tr_a} e^{-f_a(s)T}}{(Tr_a)!} )

maximum likelihood

the previous formula is often written in logarithmic form, so as to avoid rounding products and the exponential, in what is known as log-likelihood:

ln(P(r̂|s))=a=1NTraln(fa(s)T)fa(s)Tln((Tra)!)(TeX formula:  ln(P(\hat{r}|s)) =             ∑_{a=1}^N Tr_a ln(f_a(s)T) - f_a(s)T - ln((Tr_a)!) )

which we try to maximise:

0=sln(P(r̂|s))=Ta=1Nrafa(s)fa(s)(TeX formula:  0 = \frac{∂}{∂s}ln(P(\hat{r}|s)) =                 T ∑_{a=1}^N r_a \frac {f_a'(s)} {f_a(s)} )

(the Tafa(s)(TeX formula: -T∑_a f_a'(s)) term from differentiating fa(s)T(TeX formula: -f_a(s)T) drops out: with dense, uniformly spread tuning curves the slopes sum to zero.)

recall that f(s)(TeX formula: f(s)) is the normal distribution:

0=a=1Nra1σa2πssaσa2e1/2(ssaσa)21σa2πe1/2(ssaσa)2(TeX formula:  0 = ∑_{a=1}^N r_a         \frac { \frac{1}                        {σ_a\sqrt{2π}}                  \frac{s-s_a}                        {σ_a^2}                  e^{-1/2 \left( \frac{s-s_a}                                        {σ_a} \right)^2}}                { \frac{1}                        {σ_a\sqrt{2π}}                  e^{-1/2 \left( \frac{s-s_a}                                        {σ_a} \right)^2}} )

0=a=1Nrassaσa2(TeX formula:  ⊢ 0 = ∑_{a=1}^N r_a \frac{s-s_a}{σ_a^2} )

sf(s)max=a=1Nrasa/σa2a=1Nra/σa2(TeX formula:  ⊢ s_{f(s)_{max}} =               \frac {∑_{a=1}^N r_a s_a/σ_a^2}{∑_{a=1}^N r_a/σ_a^2} )

sf(s)max=a=1Nrasaa=1Nra(TeX formula:  ⊢ s_{f(s)_{max}} = \frac {∑_{a=1}^N r_a s_a}{∑_{a=1}^N r_a} )

(the last step assumes all σa(TeX formula: σ_a) are equal, so the variances cancel.)

think of this as an improved version of the population vector vpop=a=1N(v·ca)ca(TeX formula: v_{pop} = ∑_{a=1}^{N} (v·c_a) c_a) : because each neuron's contribution is weighted by its variance σa2(TeX formula: σ_a^2) , more informative neurons (with less spread-out distributions) contribute more.
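the maximum-likelihood estimate as a function: with the sigmas argument it is the variance-weighted version, without it the equal-σ simplification (the firing-rate-weighted mean of preferred stimuli):

```python
import numpy as np

def ml_estimate(rates, preferred_stimuli, sigmas=None):
    """ML stimulus decoder for gaussian tuning curves with Poisson spiking.
    sigmas given: each neuron's vote is weighted by r_a / sigma_a^2;
    sigmas omitted: the equal-variance simplification derived above."""
    r = np.asarray(rates, dtype=float)
    s_a = np.asarray(preferred_stimuli, dtype=float)
    w = r if sigmas is None else r / np.asarray(sigmas, dtype=float) ** 2
    return float(np.sum(w * s_a) / np.sum(w))
```

e.g. three neurons preferring stimuli 0, 1, 2 firing at rates 1, 2, 1 decode to 1.0, the center of mass of their activity.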

maximum a posteriori

in log-a-posteriori form for a single neuron:

ln(P(s|r))=ln(P(r|s))+ln(P(s))ln(P(r))(TeX formula:  ln(P(s|r)) = ln(P(r|s)) + ln(P(s)) - ln(P(r)) )

and for the whole population:

ln(P(s|r))=Ta=1Nraln(fa(s))+ln(P(s))+...(TeX formula:  ln(P(s|r)) = T ∑_{a=1}^N r_aln(f_a(s)) + ln(P(s)) + ... )

optimize setting the derivative to 0:

0=Ta=1Nrafa(s)fa(s)+P(s)P(s)(TeX formula:  0 = T ∑_{a=1}^N r_a \frac{f_a'(s)}{f_a(s)} + \frac{P'(s)}{P(s)} )

and after we pour in the gaussian expansions of f(s)(TeX formula: f(s)) :

sf(s)max=Ta=1Nrasaσa2+spriorσprior2Ta=1Nraσa2+1σprior2(TeX formula:  s_{f(s)_{max}} =             \frac { T∑_{a=1}^N \frac{r_a s_a} {σ_a^2} +                                 \frac{s_{prior}} {σ_{prior}^2} }                    { T∑_{a=1}^N \frac{r_a} {σ_a^2} +                                 \frac{1}{σ_{prior}^2} } )

if you look closer, this is none other than the likelihood estimate, scaled by T(TeX formula: T) and shifted by the biasing effect of the prior's deviation. in other words, the more certain the prior (the pointier its distribution), the further the sprior(TeX formula: s_{prior}) stimulus takes us away from a simple likelihood estimate.

caveats

  • these methods don't incorporate rapid time variations, as they are modelled after tuning curves and mean firing rate.
  • don't take into account correlations in the population; i.e., we assumed independence of events. this is an open research enterprise in neuroscience.

stimulus reconstruction

or how to extend the bayesian methods to the case where stimuli and response vary continuously in time.

suppose we want the best estimator for stimulus sbayes(TeX formula: s_{bayes}) . we introduce an error function L(s,sbayes)(TeX formula: L(s, s_{bayes})) , and average it over all possible stimulus choices:

dsL(s,sbayes)P(s|r)(TeX formula:   ∫ ds \; L(s, s_{bayes})P(s|r) )

which we will try to minimize. one choice of L()(TeX formula: L()) is the least squares error function:

0=sbds(ssb)2P(s|r)(TeX formula:  0 = \frac{∂}{∂s_{b}} ∫ ds (s-s_{b})^2 P(s|r) )

0=2ds(ssb)P(s|r)(TeX formula:  0 = 2∫ ds (s-s_{b}) P(s|r) )

dssP(s|r)=dssbP(s|r)(TeX formula:  ∫ ds \; s P(s|r) = ∫ ds \; s_{b} P(s|r) )

sb=dsP(s|r)s(TeX formula:  s_{b} = ∫ ds \; P(s|r)s )

but P(s|r)(TeX formula: P(s|r)) is just another way of saying "all the stimuli that triggered an action potential". in other words, this filter is just the spike triggered average.

example: famous 1999 "mind reading" of a cat's Lateral Geniculate Nucleus.2

here researchers are trying to find s(TeX formula: s) that maximizes the a posteriori distribution P(s|r)P(r|s)P(s)(TeX formula: P(s|r) ∝ P(r|s)P(s)) .

the encoding model/likelihood P(r|s)(TeX formula: P(r|s)) needs to be constructed from input movies.

case study: visual processing in the retina

rods are exquisitely sensitive to the minutest amounts of photons (<0.1% of them contribute signals). averaging would be disastrous under these conditions. how can a noisy neural array reliably detect and relay those signals?

what is the optimal readout of an array of detectors when only a small fraction is active and relevant? this question is pervasive across attention and decision-making research.

we need a selection process/threshold to discard most sources off-the-bat, based on prior knowledge. once again, probabilistic models for noise and signal are due.

rod -->
       \
rod -->--rod bipolar--
       /              |
rod -->               |
                      v
                   amacrine
                      |
                      |
cone --> cone bipolar--------> ganglion ->

anatomically, such a thresholding non-linearity is thought to occur right at the integrating synapse from rods to rod bipolar cells. no other candidate is left after the bipolar consumes all rod output.

for similar sample sizes of noise and signal, recordings show that the threshold is conservative: even most true signals get rejected, unless the probability of the noise distribution at that amplitude is negligibly small.

P(r)
             --  ||
            ---++|||
           ---++++||||
           noise signal     Amplitude
threshold: .........^

however, under natural conditions the probability of seeing a photon is minuscule compared to the probability of seeing noise. priors matter!

P(r)
              --
             ----
            ------ |
           ------++||
           noise signal     Amplitude
threshold: ........^

description: codec evaluation

at any given time bin in a spike train the probability of seeing an event 1(TeX formula: 1) is given by P(x=1)=p(TeX formula: P(x=1)=p) , and the probability of its complement is given by P(0)=1p(TeX formula: P(0)=1-p) . their respective information values are log2(p)(TeX formula: -log_2(p)) and log2(1p)(TeX formula: -log_2(1-p)) .

entropy counts the yes/no questions necessary to specify a variable's distribution.

the encoding capacity of a spike train to account for variability in its counterpart (ability to represent) is quantified using entropy.

theorem: uniform probability has the highest entropy of all bernoulli distributions:

H=B(x)log(B(x))=plog(p)(1p)log(1p)(TeX formula:  H = -∑ B(x) log(B(x)) = -p \; log(p) -(1-p)log(1-p) )

H(TeX formula: H) , a function of probability p(TeX formula: p) , peaks at p=1/2(TeX formula: p=1/2) .

generally P(x)(TeX formula: P(x)) is not uniform, but it would be best if it were.
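the bernoulli entropy curve, to verify the peak at p = 1/2:

```python
import numpy as np

def bernoulli_entropy(p):
    """H(p) = -p log2(p) - (1-p) log2(1-p) in bits, with 0*log(0) taken as 0."""
    p = np.asarray(p, dtype=float)
    h = np.zeros_like(p)
    for q in (p, 1.0 - p):
        nz = q > 0
        h[nz] -= q[nz] * np.log2(q[nz])
    return h

ps = np.linspace(0.0, 1.0, 101)
H = bernoulli_entropy(ps)
```

the curve is 0 bits at p = 0 and p = 1 (no uncertainty) and exactly 1 bit at p = 1/2, matching the theorem.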

mutual information

how much of the response is encoded in the stimulus?

let Response(TeX formula: Response) and Stimulus(TeX formula: Stimulus) be 2 random variables. noise entropy is defined as the entropy of either of their conditional distributions. for S(TeX formula: S) we have that:

H(R|S)=E[log(P(R|S))];(TeX formula:  H(R|S) = E[-log(P(R|S))] ;)

averaged over S(TeX formula: S) . and the mutual information is given by:

Im(R;S)=H(R)E[H(R|S)].(TeX formula:  I_m(R;S) = H(R) - E[H(R|S)] .)

note that Im(R;S)=Im(S;R)(TeX formula: I_m(R;S) = I_m(S;R)) . mutual information is commutative. whatever amount of information is reduced from R by knowing S, it must be the same amount that would be reduced from S if we knew R.

example: we have 2 possible stimuli/responses (yes or no), and 2 possible errors of (presumably) equal probability q(TeX formula: q) .

Confusion matrix of probabilities:

stimulus   response   no response
yes        1-q        q
no         q          1-q

let's focus on the "yes" stimulus alone. therefore:

  • Total entropy: H(R) = -P(response)log(P(response)) - P(no response)log(P(no response)).
  • Noise entropy (hits): H(R|yes) = -q log(q) - (1-q)log(1-q).
  • Mutual information: Im(R,yes)=H(R)E[H(R|yes)](TeX formula: I_m(R, yes) = H(R) - E[H(R|yes)])

intuitively, this means that the greater the noise entropy, the less the stimulus encodes the response. but noise entropy is maximal when q=1q=1/2(TeX formula: q = 1-q = 1/2) , telling us that when the response is pure chance, the stimulus has no bearing on representing it.

when Im(R,s)0(TeX formula: I_m(R,s) → 0) , P(R|S)P(R)(TeX formula: P(R|S) → P(R)) . i.e. they are independent. when Im(R,s)H(R)(TeX formula: I_m(R,s) → H(R)) , H(R|S)0(TeX formula: H(R|S) → 0) . i.e., the response is perfectly encoded by the stimulus.
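mutual information for the yes/no confusion-matrix example, computed from the joint distribution (binary_channel is my own helper name for building that joint table):

```python
import numpy as np

def mutual_information(joint):
    """I_m(S;R) = sum_{s,r} P(s,r) log2( P(s,r) / (P(s)P(r)) ), in bits,
    from a joint probability table (rows: stimuli, columns: responses)."""
    joint = np.asarray(joint, dtype=float)
    p_s = joint.sum(axis=1, keepdims=True)   # marginal over responses
    p_r = joint.sum(axis=0, keepdims=True)   # marginal over stimuli
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_s * p_r)[nz])))

def binary_channel(q, p_yes=0.5):
    """joint P(stimulus, response) for the yes/no example with error probability q."""
    return np.array([[p_yes * (1 - q), p_yes * q],
                     [(1 - p_yes) * q, (1 - p_yes) * (1 - q)]])
```

at q = 1/2 (pure chance) the information drops to 0 bits; at q = 0 (perfect transmission) it reaches the full 1 bit of H(R).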

graphical example with continuous distributions:

High mutual information between S and R. the total response (sum of all possible stimuli, shown as the wrapper) stems from very differentiated stimuli (low noise entropy), each contributing uniquely to explain one part of the response:

R=f(S)
       ~~ ~~~~  ~~~~ ~~~  ~~
      ~  ~    ~     ~   ~   ~
     ~ -    |    -    |    - ~
    ~ ---  |||  ---  |||  --- ~
  ~~ ----++|||++---++|||++---- ~
      s1   s2   s3   s4   s5     stimulus

Low mutual information. Same response, but it is just the sum of very noisy stimuli:

R=f(S)
       ~~ ~~~~  ~~~~ ~~~  ~~
      ~  ~    ~     ~   ~   ~
     ~                       ~
    ~ ---**||***--**||***---  ~
  ~~ --*********************** ~
      s1   s2   s3   s4   s5     stimulus

relationship to Kullback-Leibler

if mutual information measures variable independence, then it should be equivalent to the Kullback-Leibler divergence between joint probability and the bare product of both distributions:

Im(S;R)=DKL(P(R,S),P(R)P(S))=(TeX formula:  I_m(S;R) = D_{KL}(P(R,S), P(R)P(S)) = )

dRdSP(R,S)log(P(R,S)P(R)P(S))(TeX formula:  ∫dR∫dS\; P(R,S) log \left( \frac{P(R,S)}{P(R)P(S)} \right))

let's rewrite the integrand using conditionals:

dRdSP(R,S)log(P(R|S)P(S)P(R)P(S))=(TeX formula: ∫dR∫dS \; P(R,S)log\left(\frac{P(R|S)P(S)}{P(R)P(S)}\right)=)

dRdSP(R,S)log(P(R|S)P(R))=(TeX formula:  ∫dR∫dS \; P(R,S) log\left( \frac{P(R|S)}{P(R)} \right) = )

dRdSP(R,S)(log(P(R|S))log(P(R)))=(TeX formula:  ∫dR∫dS \; P(R,S) ( log(P(R|S)) - log(P(R)) ) = )

dRdSP(R,S)log(P(R|S))dRdSP(R,S)log(P(R))=(TeX formula:  ∫dR∫dS \; P(R,S)log(P(R|S))    - ∫dR∫dS \; P(R,S)log(P(R)) = )

E[H(P(R|S))]+H(P(R))(TeX formula:  - E[H(P(R|S))]              +  H(P(R)) )
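this identity is easy to check numerically. a quick sanity check on a made-up 2×2 joint distribution (the probabilities below are arbitrary):

```python
import numpy as np

# hypothetical joint distribution P(R,S): rows index R, columns index S
P_RS = np.array([[0.3, 0.1],
                 [0.2, 0.4]])
P_R = P_RS.sum(axis=1)                # marginal P(R)
P_S = P_RS.sum(axis=0)                # marginal P(S)

# mutual information as D_KL between the joint and the product of marginals
I_kl = np.sum(P_RS * np.log2(P_RS / np.outer(P_R, P_S)))

# mutual information as total entropy minus average noise entropy
H_R = -np.sum(P_R * np.log2(P_R))
P_R_given_S = P_RS / P_S              # each column is P(R|s)
noise_H = -np.sum(P_R_given_S * np.log2(P_R_given_S), axis=0)
I_cond = H_R - np.sum(P_S * noise_H)  # same number as I_kl
```

both routes give the same mutual information, as the derivation above promises.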

spike trains

spike patterns

  • so far we've only dealt with single spikes or firing rates.
  • what info is carried by entire patterns?
  • how informative are they?

from spike train to pattern code (macro-symbols):

  1. chop the response train into segments (called letters) of size Δt(TeX formula: Δt) .
  2. assign each letter one of 2 values based on whether it contains spikes.
  3. group the letters into words of length L.
  4. plot the distribution of words, P(W). (the all-zero word is typically the mode, followed by words containing a single spike, etc.)
  5. compute the total H(P(W))(TeX formula: H(P(W))) .
  6. compute Im(W;S)(TeX formula: I_m(W; S)) by averaging the word distributions conditioned on the stimulus.
  7. rinse and repeat for different Δt and L, until Im(TeX formula: I_m) is maximal, i.e. the noise (conditional) entropy contributes the least, because observing S tells us a lot about W.
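the letters-to-words part of the recipe can be sketched in a few lines (the spike train, letter size, and word length below are all made up):

```python
import numpy as np
from collections import Counter

def word_entropy(spike_train, letter_size, word_length):
    """letters of `letter_size` bins, marked 1 if they contain a spike,
    grouped into words of `word_length`; returns H(P(W)) in bits."""
    n_letters = len(spike_train) // letter_size
    letters = [int(spike_train[i * letter_size:(i + 1) * letter_size].any())
               for i in range(n_letters)]
    words = [tuple(letters[i:i + word_length])
             for i in range(0, n_letters - word_length + 1, word_length)]
    counts = Counter(words)
    probs = np.array(list(counts.values())) / len(words)
    return -np.sum(probs * np.log2(probs))

rng = np.random.default_rng(0)
train = rng.random(10_000) < 0.05     # hypothetical sparse spike train
H_W = word_entropy(train, letter_size=5, word_length=4)
```

with 4-letter binary words the entropy is at most 4 bits; sweeping `letter_size` and `word_length` is step 7.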

example:

from Reinagel and Reid's 2000 study on the Lateral Geniculate Nucleus' pattern code:

  • mutual information was plotted as a function of L and Δt.
  • information drops off as we get to long words, because they require increasingly larger sample sizes to estimate the distributions.
  • nonetheless, if a clear pattern emerges from looking at shorter word lengths, one may extrapolate the entropy curve to cases with larger words and predict if entropy would actually increase or not there.

single spikes

how much does the observation of a single spike tell us about the stimulus? by how much does knowing that a particular stimulus occurred reduce the entropy of the response?

without knowing the stimulus, the probability of seeing a single action potential in the response can be computed from the average spike rate and Δt:

                response                no response
unconditioned   P(r=1) = r̅Δt           P(r=0) = 1 - r̅Δt
given stimulus  P(r=1|s) = r̅(t)Δt      P(r=0|s) = 1 - r̅(t)Δt

the rates r̅ and r̅(t) are estimated empirically.

note that every time t is a sample of s, so averaging the noise entropy might as well be done over time (online), as opposed to averaging over stimuli. this property is known as ergodicity:

I=H(P(r))E[H(P(r|s))]=(TeX formula:  I = H(P(r)) - E[H(P(r|s))] = )

rΔtlog(rΔt)(1rΔt)log(1rΔt)+(TeX formula:  -\bar{r}Δtlog(\bar{r}Δt)-(1-\bar{r}Δt)log(1-\bar{r}Δt) + )

1T0Tdt[r(t)Δtlog(r(t)Δt)+(1r(t)Δt)log(1r(t)Δt)](TeX formula:  \frac{1}{T} ∫_0^T dt \; [             \bar{r}(t)Δt \; log(\bar{r}(t)Δt) +             (1-\bar{r}(t)Δt)log(1-\bar{r}(t)Δt) ] )

(...more unexplained inferences...)

therefore, the per-spike information is approximately:

I(r,s)=1T0Tdtr(t)rlog(r(t)r)(TeX formula:  I(r,s) =             \frac{1}{T} ∫_0^T dt \frac{r(t)}{\bar{r}}                                   log\left(\frac{r(t)}{\bar{r}}\right) )

some properties of this calculation:

  • when r(t) → r̅, information goes to zero. this means that the single spike was barely modulated by that particular stimulus.
  • when r̅ is small, information per spike is likely to be large, because the spike is a rare/informative event.
  • stimulus-independent (measuring information is possible in the absence of a coding/decoding model).
  • the spike is a meta-symbol that could be any other event composed of multiple spikes.
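the per-spike integral above is easy to evaluate numerically; a sketch with a hypothetical modulated rate r(t) (the rate profile is made up):

```python
import numpy as np

def per_spike_info(r, dt):
    """I = (1/T) ∫ dt (r(t)/r̄) log2(r(t)/r̄), in bits per spike"""
    T = len(r) * dt
    ratio = r / r.mean()
    return (1.0 / T) * np.sum(ratio * np.log2(ratio)) * dt

dt = 0.001                                   # seconds
t = np.arange(0, 10, dt)
r_mod = 20 + 15 * np.sin(2 * np.pi * t)      # hypothetical modulated rate (Hz)
I_mod = per_spike_info(r_mod, dt)            # positive: spikes carry information
I_flat = per_spike_info(np.full_like(t, 20.0), dt)  # zero: r(t) = r̄ everywhere
```

the flat-rate case reproduces the first bullet point: when r(t) → r̄, information per spike goes to zero.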

example: hippocampal place fields. imagine the following place field:

------------------
|          +     |
|         +++    |
|        +++++   |
|         +++    |
|          +     |
------------------

r(t)(TeX formula: r(t)) will be rather large (and information about any particular stimulus low), because the receptive field is rather large and imprecise. such a place cell would fire over a considerable range of positions within the room.

neural code wrap-up: interpretive principles

  • what challenges do natural stimuli pose?
  • what does information theory suggest about the purpose of neural systems?
  • what principles could be at work shaping the neural code?

natural stimuli: usually possess 2 properties, regardless of modality:

  1. huge dynamic range: variations over many orders of magnitude.
  2. power-law scaling: structure at many scales/frequency levels. screams for Fourier analysis?

efficient coding

noise entropy escapes our hands under natural situations.

histogram equalization: Hmax(R)(TeX formula: H_{max}(R)) is obtained by matching a limited number of output symbols to the distribution of inputs P(S)(TeX formula: P(S)) , so that every output symbol is used with uniform probability. in other words, our coding filter should be determined by the distribution of natural inputs.

this implies that the response/input-output curve is the cumulative integral of P(S)(TeX formula: P(S)) .

suppose only 20 possible outputs are available. we should divide the area under P(S)(TeX formula: P(S)) in 20 slices of equal area.
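a sketch of that slicing, using a hypothetical gaussian contrast distribution (the distribution and sample count are made up): the input-output curve is the empirical cumulative of P(S), and the 20 outputs sit at its equal-area quantiles:

```python
import numpy as np

rng = np.random.default_rng(1)
stimuli = rng.normal(0, 1, 100_000)   # hypothetical natural contrast samples

# input-output curve = cumulative integral of P(S); with 20 output symbols
# the thresholds sit at the 19 interior equal-area quantiles
n_outputs = 20
thresholds = np.quantile(stimuli, np.linspace(0, 1, n_outputs + 1)[1:-1])
responses = np.searchsorted(thresholds, stimuli)   # encode each stimulus

# every output symbol ends up used with roughly uniform probability
usage = np.bincount(responses, minlength=n_outputs) / len(stimuli)
```

each of the 20 symbols gets used about 5% of the time, which is exactly the uniform-response condition above.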

example: (Laughlin, 1981) measured the distribution of contrast in natural scenes P(S)(TeX formula: P(S)) . he correctly predicted the empirical P(R)(TeX formula: P(R)) of the fly's monopolar cells (which integrate information from photoreceptors) by taking the cumulative integral of the measured P(S)(TeX formula: P(S)) .

dynamical efficient coding

P(S)(TeX formula: P(S)) is varying widely over time as the neural system moves from environment to environment.

open question: should a neural system optimize over evolutionary time or locally?

if the amplitude of stimulus fluctuations decreases (less variety of frequencies for the same period of time), the probability distribution of stimulus values will contain fewer possible states. to adjust for this change, the encoding neuron correspondingly squeezes its activation curve to best encode the poorer range of stimuli, in real time! this way the input-output curve goes back to approximating the cumulative integral of P(s)(TeX formula: P(s)) .

feature adaptation: the encoding filter vector/function under a natural distribution of stimuli needs not be the same as the one derived from gaussian noise.

maximally-informative dimensions: as mentioned in week 2, the filter is chosen to maximize DKL(TeX formula: D_{KL}) between spike-conditional and prior distributions, except that this is done in real time for changing distributions. this is equivalent to dynamically maximizing mutual information, as mentioned above.

redundancy reduction

the efficient-coding hypothesis says that since membrane re-polarisation is costly, neural codes should be as efficient as possible.

for a population code, this means the responses should be statistically independent: P(R1,R2)P(R1)P(R2)(TeX formula: P(R_1, R_2) ≈ P(R_1)P(R_2)) . this maximizes the joint entropy, since H(R1,R2)H(R1)+H(R2)(TeX formula: H(R_1, R_2) ≤ H(R_1) + H(R_2)) , with equality exactly when the responses are independent.

however, correlations have been shown to contradict the hypothesis at times:

  • error correction and robustness
  • helping in discrimination

example: neurons in the retina use redundancy.3

sparse-firing hypothesis: as few neurons as possible should be firing at any time.

suppose we have basis functions fi(TeX formula: f_i) and noise/error e(TeX formula: e) with which to reconstruct a natural scene I(x)(TeX formula: I(x⃗)) :

I(x)=iaifi(x)+e(x)(TeX formula:  I(x⃗) = ∑_i a_i f_i(x⃗) + e(x⃗) )

this means choosing fi(TeX formula: f_i) so as to need as few nonzero coefficients ai(TeX formula: a_i) as possible. this is done by minimizing a new error E(TeX formula: E) consisting of two terms: mean squared error to account for fidelity, plus a cost for ai(TeX formula: a_i) usage:

E=x[I(x)iaifi(x)]2+λi|ai|(TeX formula:  E = ∑_{x⃗} \left[ I(x⃗) - ∑_i a_i f_i(x⃗) \right]^2 +                 λ∑_i |a_i| )

the vector x⃗ in the equation above represents the coordinates of a point in the image.
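the trade-off in E can be evaluated directly. a toy sketch with a random "image" and random basis functions (everything below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_basis = 64, 32
image = rng.normal(size=n_pixels)             # toy flattened "image" I(x)
f = rng.normal(size=(n_basis, n_pixels))      # toy basis functions f_i

def sparse_error(a, lam):
    """E = sum_x (I(x) - sum_i a_i f_i(x))^2 + lam * sum_i |a_i|"""
    residual = image - a @ f
    return np.sum(residual ** 2) + lam * np.sum(np.abs(a))

a_fit = np.linalg.lstsq(f.T, image, rcond=None)[0]   # pure least-squares code
a_zero = np.zeros(n_basis)                           # maximally sparse code
```

with λ = 0 the least-squares fit wins; crank λ up and the all-zero code becomes cheaper. that tension is exactly the pressure toward sparseness.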

quiz

#!/usr/bin/env python3
from math import *
# Suppose that we have a neuron which, in a given time period, will
# fire with probability 0.1, yielding a Bernoulli distribution for the
# neuron's firing (denoted by the random variable F = 0 or 1) with
# P(F = 1) = 0.1.
# Which of these is closest to the entropy H(F) of this distribution
# (calculated in bits, i.e., using the base 2 logarithm)?
def bernoulli_entropy(p):
    return -(p*log2(p) + (1-p)*log2(1-p))
p1 = 0.1
bernoulli_entropy(p1) # 0.4690
# Continued from Question 1:
# Now lets add a stimulus to the picture. Suppose that we think this
# neuron's activity is related to a light flashing in the eye. Let us
# say that the light is flashing in a given time period with probability
# 0.10. Call this stimulus random variable S.
# If there is a flash, the neuron will fire with probability 1/2. If
# there is not a flash, the neuron will fire with probability 1/18. Call
# this random variable F (whether the neuron fires or not).
# Which of these is closest, in bits (log base 2 units), to the mutual
# information MI(S,F)?
# we will go with H(R) - E[H(R|S)]
r_2 = p1
s_2 = 0.1                     # also happens to be 0.1
r_given_s_2 = 1/2
r_given_no_s_2 = 1/18
def mutual_information(response, noises):
    expected_noise_H = 0
    # this is the expected noise entropy (with weights coming from the
    # MARGINAL stimulus probability distribution), and NOT the average
    # over n noise entropies. not all entropies are born equal :-)
    for p_marginal,p_conditional in noises:
        expected_noise_H += p_marginal * bernoulli_entropy(p_conditional)
    return bernoulli_entropy(response) - expected_noise_H
mutual_information(r_2, [[s_2, r_given_s_2],
                         [1 - s_2, r_given_no_s_2]])
# In the following three questions, we will explore Poisson neuron
# models and population coding.
# This exercise is based on a set of artificial "experiments" that
# we've run on four simulated neurons that emulate the behavior found
# in the cercal organs of a cricket. Please note that all the supplied
# data is synthetic. Any resemblance to a real cricket is purely
# coincidental.
# In the first set of experiments, we probed each neuron with a range
# of air velocity stimuli of uniform intensity and differing
# direction. We recorded the firing rate of each of the neurons in
# response to each of the stimulus values. Each of these recordings
# lasted 10 seconds and we repeated this process 100 times for each
# neuron-stimulus combination.
# We've supplied you with a .mat file for each of the neurons that
# contains the recorded firing rates (in Hz). These are named neuron1,
# neuron2, neuron3, and neuron4. The stimulus, that is, the direction
# of the air velocity, is in the vector named stim.
# This will load everything into a dict called data, and you'll be
# able to access the stim and neuron responses using data['stim'],
# data['neuron1'], etc. (In general, data.keys() will show you all the
# keys available in the dict.)
import pickle
with open('tuning_3.4.pickle', 'rb') as f:
    data = pickle.load(f)
len(data)                       # 5 (four neurons + stim)
data.keys() # dict_keys(['neuron4', 'stim', 'neuron3', 'neuron2', 'neuron1'])
# The matrices contain the results of running a set of experiments in
# which we probed the synthetic neuron with the stimuli in stim. Each
# column of a neuron matrix contains the firing rate of that neuron (in
# Hz) in response to the corresponding stimulus value in stim. That is,
# nth column of neuron1 contains the 100 trials in which we applied the
# stimulus of value stim(n) to neuron1.
len(data['stim'])               # 24
data['stim']
# array([   0.,   15.,   30.,   45.,   60.,   75.,   90.,  105.,  120.,
#         135.,  150.,  165.,  180.,  195.,  210.,  225.,  240.,  255.,
#         270.,  285.,  300.,  315.,  330.,  345.])
len(data['neuron1'])            # 100
data['neuron1']
# array([[ 19.7,  27.4,  28.1, ...,   0. ,   9.8,  16.1],
#        [ 21.3,  28.7,  32.8, ...,   0. ,   8.3,  16.6],
#        [ 21.5,  27.3,  29.3, ...,   0. ,   8.6,  17.2],
#        ...,
#        [ 21.9,  28.7,  30. , ...,   0. ,   8.7,  14.4],
#        [ 24.1,  28.1,  29.1, ...,   0. ,   7.1,  15.9],
#        [ 24. ,  28.3,  28.4, ...,   0. ,   8.2,  19.1]])
len(data['neuron1'][0])              # 24
data['neuron1'][0]
# array([ 19.7,  27.4,  28.1,  32.6,  31.8,  28. ,  22.6,  15.1,   9. ,
#          0. ,   0. ,   0. ,   0. ,   0. ,   0. ,   0. ,   0. ,   0. ,
#          0. ,   0. ,   0. ,   0. ,   9.8,  16.1])
data['neuron1'][0, 0]           # 19.7
# Plot the tuning curve-- the mean firing rate of the neuron as a
# function of the stimulus-- for each of the neurons.
import numpy as np
import matplotlib.pyplot as plt
tuning_curves = {}
for neuron in ['neuron1', 'neuron2', 'neuron3', 'neuron4']:
    # axis=0 means 'average columns'
    tuning_curves[neuron] = np.mean(data[neuron], axis=0)
    plt.plot(data['stim'], tuning_curves[neuron])
    plt.xlabel('Direction (degrees)')
    plt.ylabel('Mean firing rate')
    plt.title('Tuning curve for ' + neuron)
    plt.show()
# Continued from Question 7:
# We have reason to suspect that one of the neurons is not like the
# others. Three of the neurons are Poisson neurons (they are
# accurately modeling using a Poisson process), but we believe that
# the remaining one might not be.
# Which of the neurons (if any) is NOT Poisson?
# Hint: Think carefully about what it means for a neuron to be
# Poisson. You may find it useful to review the last lecture of week
# 2. Note that we give you the firing rate of each of the neurons, not
# the spike count. You may find it useful to convert the firing rates
# to spike counts in order to test for "Poisson-ness", however this is
# not necessary.
# In order to realize why this might be helpful, consider the fact
# that, for a constant a and a random variable X, E[aX]=aE[X] but
# Var(aX)=a^2 * Var(X). What might this imply about the Poisson statistics
# (like the Fano factor) when we convert the spike counts (the raw
# output of the Poisson spike generator) into a firing rate (what we
# gave you)?
# solution: Poisson distributions have variance equal to mean (Fano
# factor = 1). we could compute the mean spike count for each neuron's
# column/stimulus, then compute its variance. repeat for all
# columns/stimulus and plot. a true Poisson process should regress
# into the identity function:
T = 10 # in seconds
for neuron in ['neuron1', 'neuron2', 'neuron3', 'neuron4']:
    mean_spikes = T * np.mean(data[neuron], axis=0)
    spikes_variance = (T**2) * np.var(data[neuron], axis=0)
    plt.scatter(mean_spikes, spikes_variance)
    plt.xlabel('Mean number of spikes')
    plt.ylabel('Variance of number of spikes')
    plt.title('Fano test for ' + neuron)
    plt.show()
# Finally, we ran an additional set of experiments in which we exposed
# each of the neurons to a single stimulus of unknown direction for 10
# trials of 10 seconds each. We have placed the results of this
# experiment in the following file:
import pickle
with open('pop_coding_3.4.pickle', 'rb') as f:
    data2 = pickle.load(f)
# pop_coding contains four vectors named r1, r2, r3, and r4 that
# contain the responses (firing rate in Hz) of the four neurons to
# this mystery stimulus. It also contains four vectors named c1, c2,
# c3, and c4. These are the basis vectors corresponding to neuron 1,
# neuron 2, neuron 3, and neuron 4.
data2.keys()
# dict_keys(['c1', 'r1', 'r3', 'c2', 'c3', 'r2', 'c4', 'r4'])
# Decode the neural responses and recover the mystery stimulus vector
# by computing the population vector for these neurons. You should use
# the maximum average firing rate (over any of the stimulus values in
# 'tuning.mat') for a neuron as the value of rmax for that
# neuron. That is, rmax should be the maximum value in the tuning
# curve for that neuron.
v = np.array([0,0], dtype='float64')
# recall that the population vector v̂_pop is the linear combination:
# ∑_a (r_a / r_a^max) ĉ_a
# in our case we are given many r_a's (many responses per neuron), so
# we create a population vector for each response, then average them
# to obtain a final population vector.
for a in ['1', '2', '3', '4']:
    v += (np.average(data2['r'+a]) /
               max(tuning_curves['neuron'+a])) * data2['c'+a]
# # equivalent to:
#
# v = np.zeros((2, 10), dtype='float64')
# for i in range(0, 10):
#     for a in ['1', '2', '3', '4']:
#         v[:,i] += (data2['r'+a][i] /
#                    max(tuning_curves['neuron'+a])) * data2['c'+a]
# v = np.average(v, axis=1)
# What is the direction, in degrees, of the population vector? You
# should round your answer to the nearest degree. Your answer should
# contain the value only (no units!) and should be between 0∘ and
# 360∘. If your calculations give a negative number or a number
# greater than or equal to 360, convert it to a number in the proper
# range (you may use the mod function to do this).
# You may need to convert your resulting vector from Cartesian
# coordinates to polar coordinates to find the angle. You may use the
# atan() function in MATLAB to do this. Note that the the convention
# we're using defines 0∘ to point in the direction of the positive
# y-axis and 90∘ to point in the direction of the positive x-axis
# (i.e., 0 degrees is north, 90 degrees is east).
# we start from 90 because that's our reference (north), and subtract
# from there in order to rotate clockwise:
(90 - round(atan(v[1]/v[0]) * (180/pi))) % 360  # atan works in radians

mechanism: neuron models

  • neuroelectronics

    • membranes
    • ion channels
    • wiring
  • simplified neuron models

    • basic dynamics of neuronal excitability
  • neuronal geometry

    • dendrites and dendritic computing

membrane

bare phospholipid bi-layer

good dielectric (capacitor), but still allows some charge to pass through (paralleled by a resistor), mostly because of embedded channel proteins.

RC circuit:

         ----/\/---
         |        |
inside --|        |--- outside
         |        |
         ----||----

from Kirchhoff's conservation of current:

I=IR+IC(TeX formula:  I = I_R + I_C )

from Ohm's law and the definition of capacitance ( C=Q/V(TeX formula: C=Q/V) )

I=VRR+CdVdt(TeX formula:  I = \frac{V_R}{R} + C\frac{dV}{dt} )

rest potential

  • because of the active work of pumps, K+(TeX formula: K^+) in the inside and Na+,Cl,Ca2+(TeX formula: Na^+, Cl^-, Ca^{2+}) in the outside are under osmotic force to homogenize in the direction of their respective concentration gradients (voltage source).
  • until opposed by electrostatic forces.

this results in Nernst equilibrium:

E=kBTzqln(concentrationoutsideconcentrationinside)(TeX formula:  E = \frac {k_B T}{zq} ln \left(                    \frac{concentration_{outside}}{concentration_{inside}}                 \right) )

where kB(TeX formula: k_B) is the Boltzmann constant, T the temperature, q the elementary charge, and z the ion's valence (number of charges). for K+(TeX formula: K^+) , which is concentrated inside, this gives a negative equilibrium potential, as expected.
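plugging in textbook values for K+ (the concentrations and temperature below are typical mammalian figures, not from the lecture):

```python
from math import log

k_B = 1.381e-23    # J/K, Boltzmann constant
q = 1.602e-19      # C, elementary charge
T = 310.0          # K, body temperature

def nernst(z, conc_out, conc_in):
    """equilibrium (Nernst) potential in volts for an ion of valence z,
    using the E = (k_B T / zq) ln([out]/[in]) convention"""
    return (k_B * T) / (z * q) * log(conc_out / conc_in)

# K+ is roughly 140 mM inside and 4 mM outside a typical neuron
E_K = nernst(z=1, conc_out=4.0, conc_in=140.0)   # about -0.095 V
```

about -95 mV, in the right neighbourhood of the K equilibrium potential quoted later in these notes.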

this means that the current that manages to flow through the resistor in our RC model must also go through the electric potential created by the ion battery; i.e., not all the voltage drop is due to the resistor:

                 |
         --/\/--||---
         |       |  |
inside --|          |--- outside
         |          |
         ----||------

I=VVrestR+CdVdt(TeX formula:  I = \frac{V - V_{rest}}{R} + C\frac{dV}{dt} )

multiplying by R on both sides:

V=V+RCdVdt(TeX formula:  V_∞ = V + RC\frac{dV}{dt} )

where τ=RC and V=IR+Vrest(TeX formula: V_∞ = IR + V_{rest}) is the whole steady potential. note that when dVdt=0(TeX formula: \frac{dV}{dt} = 0) the membrane settles at V=V(TeX formula: V = V_∞) ; and with no input current, V=Vrest(TeX formula: V_∞ = V_{rest}) .

with these conditions we can solve the differential equation:

  • V(t)=V(1et/τ)(TeX formula: V(t) = V_∞(1 - e^{-t/τ})) ; during the capacitor's charging phase.
  • V(t)=Vet/τ(TeX formula: V(t) = V_∞e^{-t/τ}) ; during discharge.
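a forward-Euler check of the charging solution (the R, C, and current values below are made up):

```python
import numpy as np

R, C = 10e6, 1e-9          # 10 MΩ, 1 nF, so tau = R*C = 10 ms
tau = R * C
I = 1e-9                   # 1 nA step of injected current
V_inf = I * R              # steady state, 10 mV above rest

dt = 1e-5                  # s
steps = int(5 * tau / dt)  # run for five time constants
V = np.zeros(steps)
for i in range(1, steps):
    # tau dV/dt = V_inf - V  (potential measured relative to rest)
    V[i] = V[i-1] + dt / tau * (V_inf - V[i-1])
idx = int(tau / dt)        # index of t = tau
```

after one time constant the membrane sits at about 63% of V_inf, matching the closed-form solution.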

ion channels

recall their diversity:

  • voltage-gated
  • chemically-gated:
    • transmitter-gated (synaptic):
      • dopamine
      • GABA
      • serotonin
      • glutamatergic:
        • AMPA
        • NMDA
      • etc.
    • Ca2+(TeX formula: Ca^{2+}) -gated.
  • mechanically-gated
  • thermally-gated

we will be focusing on voltage-gated channels.

let g = 1/R be the conductance. Ohm's law becomes:

I=Vg(TeX formula:  I = Vg )

we can break down the general voltage source from our previous circuit into equilibrium (rest) potentials for different ion species:

ion   equilibrium potential (E_i)
Na    +50 mV
Ca    +150 mV
K     -80 mV
Cl    -60 mV

Ii=gi(VEi)(TeX formula:  I_i = g_i(V-E_i) )

these will be modelled with parallel paths.

from our ion-specific cable model for K, Na and leaks (L):

                 |
         --/\/--||---
         |  L    |  |
         |          |
         |       |  |
         --/\/--||---
         |  K    |  |
         |          |
         |      |   |
         --/\/--||---
         |  Na  |   |
inside --|          |--- outside
         |          |
         ----||------

action potential

there's a limit to how much charge can move across the membrane before it loses the capacity to return to equilibrium. crossing that limit triggers the action potential.

where does the non-linearity that makes excitability irrecoverable (our good old activation function g()) come from?

conductances aren't constant. the resistance imposed by channels changes as voltage changes. we need functions describing them.

functional diagram of a voltage-gated channel:

    :========= =========:
    :========= =========:
    :========= =========: / <-- anchoring subunit
    :========= =========:/
   (___(voltage sensor)___)
                      \
                       \  <-- gate, activated by sensor
    ____________________\_
  -(                      )
    :========= =========:\
    :========= =========: \
    :========= =========:

K dynamics

we will be using kinetic models for the opening of channels:

  • P(gate is open) increases with a depolarised neuron ( K+(TeX formula: K^+) flows away after spike).
  • gate contains 4 moving subunits. PK=i=14PsubiPsub4(TeX formula: P_K = ∏_{i=1}^4 P_{sub_i} ≅ P_{sub}^4) , for independent subunits.4

we go for a Bernoulli probability distribution, n = P(open). transitions between states occur at voltage-dependent rates αn(V)(TeX formula: α_n(V)) (closed → open) and βn(V)(TeX formula: β_n(V)) (open → closed). we don't know the function of voltage describing this probability, but we can write the function describing its change in continuous time:

Hodgkin-Huxley-1: dndt=αn(V)(1n)βn(V)n(TeX formula:  \frac{dn}{dt} = α_n(V)(1-n) - β_n(V)n )

that is, the function that describes the probability is given by how much is added to the open state (proportional to the closed state, times the voltage-varying rate α) minus how much is lost.

rewrite in the form of τ and n(TeX formula: n_∞) :

1αn(V)+βn(V)dndt=αn(V)αn(V)+βn(V)n(TeX formula:  \frac{1}{α_n(V) + β_n(V)} \frac{dn}{dt} =             \frac{α_n(V)}{α_n(V) + β_n(V)} - n )

τn(V)dndt=n(V)n(TeX formula:  τ_n(V) \frac{dn}{dt} = n_∞(V) -n )

Na dynamics

  • 3-subunit gate similar to the K channel's. P(subunit open) = m increases with depolarisation.
  • an additional subunit can block the channel back even when the gate is open: P(not blocked) = h decreases at depolarised potentials, terminating the rising phase of the action potential.
  • P(channelisopen)hm3(TeX formula: P(channel \; is \; open) ≅ hm^3) .4

similarly, HH2:

dmdt=αm(V)(1m)βm(V)m(TeX formula:  \frac{dm}{dt} = α_m(V)(1-m) - β_m(V)m )

1αm(V)+βm(V)dmdt=αm(V)αm(V)+βm(V)m(TeX formula:  \frac{1}{α_m(V) + β_m(V)} \frac{dm}{dt} =             \frac{α_m(V)}{α_m(V) + β_m(V)} - m )

τm(V)dmdt=m(V)m(TeX formula:  τ_m(V) \frac{dm}{dt} = m_∞(V) -m )

And for the additional Na-inactivating subunit, HH3 is:

dhdt=αh(V)(1h)βh(V)h(TeX formula:  \frac{dh}{dt} = α_h(V)(1-h) - β_h(V)h )

1αh(V)+βh(V)dhdt=αh(V)αh(V)+βh(V)h(TeX formula:  \frac{1}{α_h(V) + β_h(V)} \frac{dh}{dt} =             \frac{α_h(V)}{α_h(V) + β_h(V)} - h )

τh(V)dhdt=h(V)h(TeX formula:  τ_h(V) \frac{dh}{dt} = h_∞(V) -h )

Hodgkin-Huxley model

and from those probabilities we go back to our voltage-dependent channel conductances:

gK(V)=gKn4(TeX formula:  g_K(V) = g_K n^4 )

gNa(V)=gNahm3(TeX formula:  g_{Na}(V) = g_{Na} hm^3 )

therefore we rewrite the diagram's overall current equation, I=VVrestR+CdVdt(TeX formula: I = \frac{V - V_{rest}}{R} + C\frac{dV}{dt}) , as:

I=igi(VEi)+CdVdt(TeX formula:  I = ∑_i g_i(V-E_i) + C\frac{dV}{dt} )

this is expanded with the ion-specific terms into the fourth and last equation of the Hodgkin-Huxley dynamical system:

HH4:

I=gL(VEL)+gKn4(VEK)+gNahm3(VENa)+CdVdt(TeX formula:  I = g_L(V-E_L) +                 g_Kn^4(V-E_K) +                 g_{Na}hm^3(V-E_{Na}) +                 C\frac{dV}{dt} )
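the four equations can be integrated with plain forward Euler. the rate functions and constants below are the standard squid-axon values from the published Hodgkin-Huxley fit, not from this lecture:

```python
import numpy as np

# maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0

# standard voltage-dependent rate functions (V in mV, rates in 1/ms)
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)
a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                     # ms
V, n, m, h = -65.0, 0.32, 0.05, 0.6    # approximate resting values
trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0   # step current (uA/cm^2)
    # HH1-3: gating variable dynamics
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    # HH4: membrane equation
    I_ion = (g_L * (V - E_L) + g_K * n**4 * (V - E_K)
             + g_Na * h * m**3 * (V - E_Na))
    V += dt / C_m * (I_ext - I_ion)
    trace.append(V)
```

with a 10 uA/cm² step the model fires repetitive spikes that overshoot 0 mV, the classic HH behaviour.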

simplified models

the model can't be so simple that it will fail to capture the variety of neural codes neurons are capable of (frequency modulation, timing-dependent firing, train patterns, etc.)

integrate-and-fire

we observed that at rest potential the neuron behaves very linearly. assume conductances stay constant and C = 1. this is none other than our good old passive membrane:

I(t)=a(VV0)+1dVdt(TeX formula:  I(t) = a(V - V_0) + 1\frac{dV}{dt} )

or

1dVdt=a(VV0)+I(t)(TeX formula:  1\frac{dV}{dt} = -a(V - V_0) + I(t) )

this is a linear function, and since the slope is negative V0(TeX formula: V_0) is a stable fixed point in phase-space:

\           |
   \        |
      \     |
         \  |
------------\-------------
            |  \
            |     \
            |        \
            |           \ dV/dt
---> --> -> V0 <- <-- <---

this is good for passively integrating (adding up) inputs, but so far there's no threshold past which the potential escapes toward excitation.

to account for spikes we delimit the function with a Vthreshold(TeX formula: V_{threshold}) that will drive potential to depolarisation.

Vthreshold(TeX formula: V_{threshold}) marks a new fixed point, unstable this time:

\           |                                      /
   \        |                                   /
      \     |                                /
         \  |                             /
------------\--------------------------/---------------
            |  \                    /
            |     \              /
            |        \        /
            |           \  /
---> --> -> V0 <- <-- <--- <--- <-- <- V_th -> --> --->

now we need an edge Vmax(TeX formula: V_{max}) that will mark the end of an increase in depolarisation and a return to a Vreset(TeX formula: V_{reset}) at the other edge of the graph.

        \           |                                      /
           \        |                                   /
              \     |                                /
                 \  |                             /
--------------------\--------------------------/--------------------
                    |  \                    /
                    |     \              /
                    |        \        /
                    |           \  /
V_reset ---> --> -> V0 <- <-- <--- <--- <-- <- V_th -> --> ---> V_max
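the whole rule (drift toward V_0, fire at V_th, jump back to V_reset) fits in a few lines; the parameter values below are made up:

```python
a = 0.1                                  # leak rate (1/ms)
V0, V_th, V_reset = -65.0, -50.0, -70.0  # rest, threshold, reset (mV)
I = 2.0                                  # constant drive (mV/ms)
dt, T = 0.1, 200.0                       # ms

V = V0
spikes = []
for step in range(int(T / dt)):
    # dV/dt = -a (V - V0) + I   (the passive line from the phase plot)
    V += dt * (-a * (V - V0) + I)
    if V >= V_th:                        # threshold crossed: spike and reset
        spikes.append(step * dt)
        V = V_reset
```

because the drive pushes the would-be steady state above V_th, the neuron fires regularly; drop I enough and it stays silent at the stable fixed point.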

in cortical neurons this has been fitted against quadratic functions, or using an exponential for the second half where acceleration is positive (effectively capturing the faster dynamics of depolarisation) in what is known as the exponential integrate-and-fire neuron:5

dVdt=a(VV0)+e(VVth)/Δ+I(t)(TeX formula:  \frac{dV}{dt} = -a(V - V_0) + e^{(V - V_{th})/Δ} + I(t) )

where parameter Δ controls how fast the exponential grows.

        \           |                      |
           \        |                     |
              \     |                    |
                 \  |                   |
--------------------\------------------|--------
                    |  \              /
                    |     \          /
                    |        \      /
                    |           \  /
V_reset ---> --> -> V0 <- <-- <--- <-- V_th ---> V_max

theta neuron

by thinking of potential as a cyclic phenomenon we can simplify Vmax(TeX formula: V_{max}) and Vreset(TeX formula: V_{reset}) into a single state, called Vspike(TeX formula: V_{spike}) :6

dθdt=(1cos(θ))+(1+cos(θ))I(t)(TeX formula:  \frac{dθ}{dt} = (1 - cos(θ)) + (1 + cos(θ))I(t) )

two-dimensional models

what if instead of an abrupt reset we had another decrease leading to yet another (stable) fixed point?

        \           |                      |
           \        |                     | |
              \     |                    |   |
                 \  |                   |     |
--------------------\------------------|-------|-
                    |  \              /         \
                    |     \          /           \
                    |        \      /
                    |           \  /
V_reset ---> --> -> V0 <- <-- <--- <- V_th ---> V-K <- <--

we will draw inspiration from the Hodgkin-Huxley model by bringing in a second variable, G(u), to take care of inactivation. with G(u) providing negative values for large potentials, dVdt(TeX formula: \frac{dV}{dt}) can be brought to zero again; mimicking the role of the Na gate's second inactivating subunit and the delayed role of the K channel.

dVdt=F(V)+G(u)+I(t)(TeX formula:  \frac{dV}{dt} = F(V) + G(u) + I(t) )

this new variable u(TeX formula: u) is in turn a function, whose change is contingent upon electric potential. we want u(TeX formula: u) to go hand-in-hand with potential:

dudt=u+H(V)(TeX formula:  \frac{du}{dt} = -u + H(V) )

simple™ model

modelled after a subset of the previous phase plane, near the intersection of nullclines. F(V) is chosen to be quadratic, G(u) is simply -u. on the second equation, -u + H(V) gains some weights:

dVdt=F(V)+G(u)+I(t)=(ɑV+βV2+γ)u+I(t)(TeX formula:  \frac{dV}{dt} = F(V) + G(u) + I(t) = (-ɑV+βV^2+γ) - u + I(t) )

dudt=a(bVu)(TeX formula:  \frac{du}{dt} = a(bV - u) )

given a specific quadratic function, two extra parameters (in addition to a (u's decay rate) and b (u's sensitivity to V)) are used to reset v and u after reaching Vmax(TeX formula: V_{max}) . these are called c (v's reset value) and d (the amount added to u after a spike).

these dynamics are enough to capture a variety of codes:

  • regular (as in repetitive) spiking: | | | | | |
  • intrinsically bursting: ||| | | | |
  • fast spiking: |||||||||||
  • low-threshold spiking: ||||| | | |||||

    • intrinsically bursting with chattering intervals: |||| ||| ||| |||
  • thalamocortical: | | | | | ... |||| | | |

  • resonator: mmm||||||
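a sketch of the model using Izhikevich's published quadratic (0.04v² + 5v + 140) and the regular-spiking parameter set a=0.02, b=0.2, c=-65, d=8 (these values come from the published model, not this lecture):

```python
def simple_model(I, T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """regular-spiking flavour of the simple model; returns spike times in ms"""
    v, u = c, b * c
    spikes = []
    for step in range(int(T / dt)):
        # dv/dt = 0.04 v^2 + 5v + 140 - u + I   (the quadratic F(V) - u)
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        # du/dt = a (b v - u)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # V_max reached: reset v, bump u by d
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spike_times = simple_model(I=10.0)
```

swapping in the other published (a, b, c, d) sets reproduces the bursting, chattering, and fast-spiking patterns listed above.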

realistic dendritic geometry

our previous models only really capture computation at and near the soma or axon. dendritic arbors have computationally-significant effects.

nobody really knows what the appropriate level of description is, if any.

for any passive membrane the following properties hold:

  • effects of traveled distance:

    • impulse amplitude decays. that is, spikes shorten.
    • frequency decays too. that is, spikes broaden.
  • effects of neurite radius:

    • strength (ΔV) is inversely proportional to thickness.

this affects the contribution of individual inputs coming from different dendrites.

cable theory

V is now a function of both time and space. off-the-bat, we are dealing with partial differential equations. run cables/resistors on both inside and outside to connect membrane patches, so as to model local currents produced by potential along the cable.

---------------> a = radius
|        /              \
|        \  ----/\/---  /
|        /  |        |  \
| inside \--|        |--/ outside
|        /  |        |  \
|        \  ----||----  /
|        /              \
|        /              \
|        \  ----/\/---  /
|        /  |        |  \
| inside \--|        |--/ outside
|        /  |        |  \
|        \  ----||----  /
|        /              \
v
x = length

generally we take the external resistance to be zero.

our old membrane's Kirchhoff equality Im=cmVmt+VmrmI_m = c_m \frac{∂V_m}{∂t} + \frac{V_m}{r_m}(TeX formula: I_m = c_m \frac{∂V_m}{∂t} + \frac{V_m}{r_m}) is now supplied by the axial current, which is proportional to the second length-derivative of the voltage:

1rinside2Vm(x,t)x2=cmVmt+Vmrm(TeX formula:  \frac{1}{r_{inside}} \frac{∂^2V_m(x,t)}{∂x^2} =             c_m \frac{∂V_m}{∂t} + \frac{V_m}{r_m} )

which we rearrange to visualize time and space constants:

(rmri)22Vmx2=rmcmVmt+Vm(TeX formula:  \left( \sqrt{\frac{r_m}{r_i}} \right)^2 \frac{∂^2V_m}{∂x^2}            = r_m c_m \frac{∂V_m}{∂t} + V_m )

λ22Vmx2=τmVmt+Vm.(TeX formula:  λ^2 \frac{∂^2V_m}{∂x^2} =             τ_m \frac{∂V_m}{∂t} + V_m. )

potential diffuses exponentially, starting from the point where current is injected (let's call it x=0), according to V(x)e(|x|λ)(TeX formula: V(x) ∝ e^{\left( - \frac{|x|}{λ} \right)}) . half a length constant away from the centre, the voltage signal is attenuated to e1/20.61(TeX formula: e^{-1/2} ≈ 0.61) of its peak; one constant away it's e10.37(TeX formula: e^{-1} ≈ 0.37) ; etc.

the arrival of peaks at different points gives us the velocity of propagation, which naturally turns out to be the ratio of the length and time constants, multiplied by two: Velcable=2λ/τVel_{cable} = 2λ/τ(TeX formula: Vel_{cable} =
2λ/τ) .

here's what the general solution to the differential equation looks like:

V(x,t)τ4πλ2tetττx24λ2t(TeX formula:  V(x,t) ∝ \sqrt{\frac{τ}{4πλ^2t}}                      \; e^{-\frac{t}{τ}-\frac{τx^2}{4λ^2t}} )

this is summed over different locations and times to come up with our actual filter on the input, as seen on week 2.
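as a sanity check, we can locate the peak of this solution numerically at two distances and recover the 2λ/τ velocity. a sketch with λ = τ = 1 in arbitrary units (not lecture code):

```python
import numpy as np

tau, lam = 1.0, 1.0   # membrane time constant and length constant (arbitrary units)

def V(x, t):
    """Impulse response of the passive cable, up to a constant factor."""
    return np.sqrt(tau / (4 * np.pi * lam**2 * t)) * \
           np.exp(-t / tau - tau * x**2 / (4 * lam**2 * t))

t = np.linspace(0.01, 10.0, 20001)
t_peak = lambda x: t[np.argmax(V(x, t))]   # arrival time of the peak at x

# velocity from peak arrival times at two distances; approaches 2*lam/tau
v_est = (10 * lam - 5 * lam) / (t_peak(10 * lam) - t_peak(5 * lam))
```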

compartmental models

simple cable theory doesn't get us very far modelling truly realistic neurons anyway, because it doesn't take channels into account, which also exist in dendrites. analytical solutions become intractable.

compartments are a simpler approximation to cable theory which gets rid of our dependence on x, although it's still a passive model.

  1. it's a divide-and-conquer strategy that discretizes dendritic trees into levels.
  2. for a binary tree, when the daughter and parent diameters obey the 3/2-power relation:

    d13/2+d23/2=D3/2(TeX formula:  d_1^{3/2} + d_2^{3/2} = D^{3/2} )

  3. then the subtree can be approximated with a single cable segment the diameter of the parent, extended by the effective electrotonic length of the children. applied recursively, the final result is a single equivalent cylinder.
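the 3/2 rule above is a one-line check; a toy example (the 1 µm diameters are made up for illustration):

```python
def rall_collapsible(d1, d2, D, tol=1e-9):
    """True when daughters d1, d2 satisfy Rall's 3/2 power rule for parent D."""
    return abs(d1**1.5 + d2**1.5 - D**1.5) < tol

# two 1 µm daughters collapse into an equivalent parent of 2^(2/3) ≈ 1.587 µm
parent = (1.0**1.5 + 1.0**1.5) ** (2 / 3)
```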

active models

  1. for a given geometry and function of channel density (yes!, channel density variability can have important effects too), compartmentalize into segments of approximately constant properties.
  2. model each segment according to your favorite model (e.g. Hodgkin-Huxley). input current can come from the parent, siblings or children: uphold Kirchhoff's law when thinking about input potential/current.
  3. couple segments into a tree, putting resistors between segments to account for drops. these need not be direction-invariant (i.e. there will be two diode-guarded resistors per segment connection).

dendritic computation

there are many interesting ideas floating around:

  • lowpass filter, attenuation.
  • where the inputs enter the dendrite can have an impact:

    • inputs at different places of the same segment could compute logical conjunction, whereas the parent segment computes disjunction on its children.
    • analogously, this means the ability to segregate or amplify signals.
  • plasticity: Ca-mediated back-propagating action potentials could be the driving force behind plasticity, based on coincidence detection (mentioned in the introduction).

  • hippocampal neurons are known to display synaptic scaling: the ability to make inputs "commutative", or position-independent.

examples:

  • sound localisation: according to the Jeffress model, coincidence detector neurons could be taking advantage of different cable lengths (a time-dependent code) to encode something like left-right position. because a sound coming from the left arrives with a delay at the right ear, a soma closer to the right with dendrites of different lengths coming from each ear could perceive both inputs at the same time, firing only to those sounds. other neurons with different dendritic lengths would detect other sound positions.
  • direction selectivity on LGN/V1 receptive fields (and elsewhere) could be due to the order in which inputs enter the same dendrite. a timely excitation would add to the signal as it travels down; whereas de-synchronized stimuli would see their individual signals fade away. in other words, the sequence in which pre-synaptic neurons touch the post-synaptic one encodes the sequence the post-synaptic neuron is trained to detect.

mechanism: networks

chemical synapse

  1. action potential travels down the axon at the pre-synaptic neuron,
  2. opens Ca2+(TeX formula: Ca^{2+}) channels at the terminal,
  3. which in turn causes neurotransmitter vesicles to throw their contents into the synaptic cleft.
  4. after migration via diffusion, neurotransmitters bind to glycoprotein receptors at the post-synaptic neuron.
  5. ligand-gated channels (also known as ionotropic receptors) open, either from the direct action of neurotransmitters or as a consequence of the chain reaction started at metabotropic (aka G-protein-coupled) receptors.
  6. this causes either further depolarisation or hyperpolarisation at the post-synaptic neuron, depending on chemical identity.

    • excitatory: Na+(TeX formula: Na^+) channels open or some other positive ions enter the cell. common mediating neurotransmitters are:

      • acetylcholine: common in neuromuscular junctions.
      • glutamate: CNS. binds to NMDA, AMPA and kainate receptors.
      • catecholamines: dopamine, epinephrine/adrenaline, norepinephrine/noradrenaline, etc.
      • serotonin: parasympathetic functions
      • histamine
    • inhibitory: K+(TeX formula: K^+) flows out or Cl(TeX formula: Cl^-) flows in.

      • GABA: ionotropic GABA-A receptors are pentamers which exist in many compositions and conformations; GABA-B receptors are metabotropic.
      • glycine: non-exclusively found in the spinal cord and retina
  7. neurotransmitters detach from the post-synaptic dendritic spine and are reabsorbed.

modelling

given the specific membrane capacitance ( cm10nF/mm2(TeX formula: c_m ≅ 10 nF/mm^2) ), specific membrane resistance ( rm1MΩmm2(TeX formula: r_m ≅ 1 MΩ mm^2) ) and area A; Cm=cmAC_m = c_mA(TeX formula: C_m
= c_mA) and Rm=rmA(TeX formula: R_m = \frac{r_m}{A}) . Notice how surface area becomes irrelevant when determining τ:

cmArmAdVdt=(VVequilibrium)+IR(TeX formula:  c_mA \frac{r_m}{A} \frac{dV}{dt} = -(V - V_{equilibrium}) + IR )
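a quick check, using the specific values quoted above, that τ comes out to the same 10 ms whatever the area:

```python
c_m = 10e-9   # specific membrane capacitance: 10 nF/mm^2, in F/mm^2
r_m = 1e6     # specific membrane resistance: 1 MΩ·mm^2, in Ω·mm^2

taus = []
for A in (0.01, 0.1, 1.0):        # membrane surface areas in mm^2
    C_m, R_m = c_m * A, r_m / A   # total capacitance and total resistance
    taus.append(C_m * R_m)        # τ = C_m R_m = c_m r_m; A cancels out

# every entry is 0.01 s = 10 ms
```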

anyway, as with Na and K channels which are modelled through resistors, our neuron circuit model will need yet another conductance per synapse (with its own equilibrium potential Es(TeX formula: E_s) ):

τmemdVdt=((VEleak)+gsyn(VEsyn)rmem)+IR(TeX formula:  τ_{mem}\frac{dV}{dt} = -((V-E_{leak}) +                                       g_{syn}(V-E_{syn})r_{mem}) + IR )

for an excitatory synapse, Esyn>Eleak(TeX formula: E_{syn} > E_{leak}) , and vice-versa.

similarly to the Hodgkin-Huxley model, the synaptic conductance gs(TeX formula: g_s) is a probabilistic function specified by a differential kinetic model:

gs=gsmaxPtransmitterreleasePopenchannels(TeX formula:  g_s = g_{s_{max}}P_{transmitter\;release}P_{open\;channels} )

assume Ptransmitterrelease=1(TeX formula: P_{transmitter\;release} = 1) , i.e. vesicles are always available for release. as for Popenchannels(TeX formula: P_{open\;channels}) , let αs(TeX formula: α_s) and βs(TeX formula: β_s) be the rate at which closed channels open and open channels close, respectively. therefore:

dPopenchannelsdt=αs(1Popenchannels)βsPopenchannels(TeX formula:  \frac{dP_{open\;channels}}{dt} = α_s(1 - P_{open\;channels}) -                                                β_sP_{open\;channels} )
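Euler-integrating this kinetic equation (with made-up rates α = 0.9 and β = 0.1 per ms) converges to the analytic steady state α/(α+β):

```python
alpha_s, beta_s = 0.9, 0.1       # opening/closing rates in 1/ms; illustrative only
P, dt = 0.0, 0.01                # all channels closed at t = 0
for _ in range(int(50 / dt)):    # 50 ms, i.e. many time constants (tau_s = 1/(alpha_s + beta_s) = 1 ms)
    P += dt * (alpha_s * (1 - P) - beta_s * P)

P_inf = alpha_s / (alpha_s + beta_s)   # analytic fixed point: 0.9
```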

evidence shows that Popenchannels(TeX formula: P_{open\;channels}) is best modelled using a decaying exponential function K(t)=et/τs(TeX formula: K(t) = e^{-t/τ_s}) for AMPA synapses, whereas the slightly delayed rise in probability in something like GABA-A is better suited to a so-called "alpha" function, α(t)=tτpeake(1tτpeak)(TeX formula: α(t) = \frac{t}{τ_{peak}}e^{\left( 1 - \frac{t}{τ_{peak}} \right)}) . τpeak(TeX formula: τ_{peak}) is an empirical parameter that varies with the neurotransmitter.

P: 1.0 |      _
    .8 |    /   \
    .6 |   /      \ _
    .4 |  /           \ _
    .2 | /                 \  _
   0.0 |/
       0  5 10 15 20 25 :t (ms)
       alpha function (peak of 1 at τpeak)
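both kernels are one-liners; the 5 ms time constants below are illustrative, not fitted values:

```python
import numpy as np

tau_s, tau_peak = 5.0, 5.0    # ms; illustrative, not empirical fits

t = np.linspace(0.0, 25.0, 2501)
K = np.exp(-t / tau_s)                              # AMPA-like: instant rise, exponential decay
alpha = (t / tau_peak) * np.exp(1 - t / tau_peak)   # GABA-A-like: starts at 0, peaks at tau_peak

# K is maximal at t = 0, while alpha peaks (at a value of 1) at t = tau_peak
```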

in order to model the cumulative effect of a spike train arriving at the synapse from the pre-synaptic neuron we turn to our old linear filter. this we call the response function, rho ( ρ(t)=iδ(tti)(TeX formula: ρ(t) = ∑_i δ(t - t_i)) ).

example of a final aggregate synaptic conductance for AMPA:

gs(t)=gsmaxti<tK(tti)=gsmaxtK(tτ)ρ(τ)dτ(TeX formula:  g_s(t) = g_{s_{max}} ∑_{t_i<t} K(t - t_i) =                      g_{s_{max}} ∫_{-∞}^t K(t - τ)ρ(τ) dτ )

=gsmaxte(tτ)/τs(iδ(τti))dτ(TeX formula:  = g_{s_{max}} ∫_{-∞}^t e^{-(t - τ)/τ_s}(∑_i δ(τ-t_i)) dτ )

imagine the following spike train arriving at the synapse, and its corresponding filtered conductance composed of alpha or exponential Popenchannels(TeX formula: P_{open\;channels}) functions pieced together:

train:        | |   | | |
              ---------------> time
                        |\
                      |\| \
                |\  |\|    \
              |\|  \|       \
conductance:  |              \
              ---------------> time
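the sketch above can be reproduced by summing one exponential kernel per spike (spike times and constants below are invented for illustration):

```python
import numpy as np

g_max, tau_s, dt = 1.0, 5.0, 0.1             # peak conductance (a.u.), decay (ms), step (ms)
spike_times = [2.0, 4.0, 10.0, 13.0, 16.0]   # the incoming train, in ms

t = np.arange(0.0, 40.0, dt)
g = np.zeros_like(t)
for t_i in spike_times:              # g(t) = g_max * sum over t_i < t of K(t - t_i)
    g += g_max * np.exp(-(t - t_i) / tau_s) * (t >= t_i)

# conductance jumps at each spike and decays in between; closely
# spaced spikes summate instead of starting again from zero
```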

networks

now that we know how to feed a neuron's output into another neuron and convert that input into yet more action potentials given by τmdVdt=((VEL)+gs(t)(VEs)rm)+IR(TeX formula: τ_{m}\frac{dV}{dt} = -((V-E_L) + g_s(t)(V-E_s)r_m) + IR) , the next simplest step is to experiment with 2-neuron networks. the following are common properties for a simple feedback loop system with homogeneous synaptic sign:

nature of synapses    behavioural pattern
excitatory            alternating activation (n1, n2, n1, n2, ...)
inhibitory            synchrony7 (n1+n2, n1+n2, n1+n2, ...)

  1. Not to be confused with ephaptic transmission, in which local ion currents from one cell's membrane manage to create a significant effect on the membranes of nearby cells. 

  2. http://www.jneurosci.org/content/19/18/8036.full 

  3. Berry, Chichilnisky. 

  4. I'd like to thank Prof. Markus Müller for pointing out to me that the exponents of probability functions were determined using curve fitting and are unrelated to the number of molecule subunits. 

  5. Fourcaud-Trocmé, Hansel, van Vreeswijk, Brunel. 

  6. Ermentrout, Kopell. 

  7. This demands a deeper explanation.