Supplement to “Learning Rule of Homeostatic Synaptic Scaling: Presynaptic Dependent or Not” by Jian K. Liu, Neural Computation, Vol. 23, No. 12 (December 2011), pp. 3145–3161.

URL: http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00210


Learning Rule of Homeostatic Synaptic Scaling: Presynaptic Dependent or Not: Supplemental Material

Jian K. Liu
Laboratory of Neurophysics and Physiology, CNRS UMR 8119, Université Paris Descartes, 75006 Paris, France.

1 Supplementary Color File

Color Figure 1, Figure 2, and Figure 3 are shown in the main text.

2 Binary Neural Network

Based on the analysis presented in the main text, we conducted numerical simulations to study learning dynamics with a recurrent neural network of binary neurons.

2.1 Binary Neuron

We considered the McCulloch & Pitts binary neuron model with discrete-time dynamics, where a neuron is a binary threshold unit. The state of neuron i at time t within trial τ is denoted s_i^{(τ)}(t) ∈ {0, 1}, where 1 or 0 represents firing or not firing. The neural dynamics then take the form

    s_i^{(τ)}(t + 1) = H( Σ_j w_{ij}^{(τ)} s_j^{(τ)}(t) + I_i(t) − Θ ),    t = 1, . . . , t_max,    (1)

where w_{ij}^{(τ)} > 0 denotes the excitatory synaptic weight from presynaptic neuron j to postsynaptic neuron i at trial τ, and t_max is the maximal running time of one trial. The constant Θ = 1 acts as the firing threshold, and I_i(t) = 1 is the stimulus for neuron i. H(x) is the Heaviside function, so a spike occurs whenever the total current crosses the threshold, and no spike occurs otherwise. At the end of every trial, neuron states are reset to their initial conditions, s_i^{(τ)}(0) = s_i^{(0)}(0), ∀τ. Another setting can also be used, s_i^{(τ+1)}(0) = s_i^{(τ)}(t_max), which gives continuous neural dynamics over trials. The two reset schemes behave similarly, since the network dynamics are discrete, with no decaying dynamics or persistent activity at later times t.
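The update in equation (1) can be sketched in a few lines of Python/NumPy (the paper's reference implementation is MATLAB; in particular, the convention H(0) = 1, i.e., firing exactly at threshold, is an assumption made here, not stated in the text):

```python
import numpy as np

def step(s, W, I, theta=1.0):
    """One discrete-time update of equation (1):
    s_i(t+1) = H( sum_j w_ij * s_j(t) + I_i(t) - theta ),
    where s is the 0/1 state vector, W[i, j] the weight from
    presynaptic j to postsynaptic i, I the external stimulus,
    and theta the firing threshold."""
    # Heaviside step; H(0) = 1 (firing exactly at threshold) is a
    # convention chosen for this sketch.
    return (W @ s + I - theta >= 0).astype(int)

# Three neurons, all-to-all weights of 0.6 (the w_max used below):
# two synchronous inputs (0.6 + 0.6 = 1.2) cross the threshold,
# while a single input (0.6) does not.
W = 0.6 * (np.ones((3, 3)) - np.eye(3))
s = np.array([1, 1, 0])
print(step(s, W, np.zeros(3)))   # [0 0 1]
```

Note that with this convention a stimulated neuron (I_i(t) = 1, Θ = 1) fires even with no recurrent input, consistent with the stimulus protocol below.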

2.2 Simulation Parameters

In all simulations, the network includes 100 excitatory neurons with a 25% connection probability between any randomly chosen pair of neurons. The stimulus is a brief impulse to the network such that 4 excitatory neurons fire at t = 2, which is analogous to a synchronous input composed of a small subset of neurons. The synaptic connection topology is fixed, i.e., synaptic weights never change their signs, and upper and lower bounds on the weights are introduced as wmax = 0.6 and wmin = wmax/1000. Thus, weights are always positive, and synapses never die to zero. Self-connections are excluded. wmax = 0.6 is a value that requires at least 2 synchronous presynaptic inputs to fire a postsynaptic cell. The learning parameters are αν̄ = 0.2 and αw = 0.01. The target firing rate is νgoal = 1 for all neurons, which makes the network display a sparse spatiotemporal activity pattern. Results are robust to variations of parameters such as the network size, the connection probability, and the weight bounds. Ready-to-use MATLAB code is available online at the author's homepage.
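The network construction above can be sketched as follows in Python/NumPy (a sketch only: the uniform initial weight draw and the random seed are assumptions for illustration, not taken from the paper's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(0)   # seed is arbitrary, for reproducibility

N        = 100            # excitatory neurons
p_conn   = 0.25           # connection probability between any pair
w_max    = 0.6            # upper weight bound
w_min    = w_max / 1000   # lower weight bound
alpha_nu = 0.2            # learning parameter for the rate estimate
alpha_w  = 0.01           # learning parameter for the weights
nu_goal  = 1.0            # target firing rate

# Fixed random topology; self-connections excluded.
mask = rng.random((N, N)) < p_conn
np.fill_diagonal(mask, False)

# Initial weights inside the bounds (uniform draw is an assumption).
W = np.where(mask, rng.uniform(w_min, w_max, (N, N)), 0.0)

def clip_weights(W, mask, w_min, w_max):
    """Applied after every weight update: existing synapses stay inside
    [w_min, w_max], so the topology is fixed and no synapse dies to zero."""
    W = W.copy()
    W[mask] = np.clip(W[mask], w_min, w_max)
    W[~mask] = 0.0
    return W

W = clip_weights(W, mask, w_min, w_max)

# Stimulus: a brief impulse making 4 neurons fire at t = 2.
t_max = 200
I = np.zeros((t_max, N))
I[1, :4] = 1.0            # row index 1 corresponds to t = 2
```

Because wmax = 0.6 is below the threshold Θ = 1 while 2 × 0.6 = 1.2 exceeds it, a single presynaptic spike cannot fire a postsynaptic cell but two synchronous ones can, matching the statement above.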

2.3 Results of Simulations

Results are presented in Figure 4, Figure 5, and Figure 6, which are comparable with Figure 1, Figure 2, and Figure 3 from the spiking neuron network in the main text.

[Figure 1 graphic: raster plots (A, B), mean firing rate ν̄(τ) (C), and ν̄(τ+1) − ν̄(τ) (D).]

Figure 1: Network dynamics are unstable under SS and stable under PSD. (A) Raster patterns under SS at τ = 1, 169, 170, and 173. (B) Raster patterns under PSD at τ = 1, 200, 300, and 500. White dots are spikes. (C) Mean firing rate ν̄(τ), averaged over all neurons, exhibits large oscillations under SS and stably converges to the target under PSD. (D) ν̄(τ+1) − ν̄(τ) indicates the degree of jump discontinuity. Excitation explosion is exhibited under SS but depressed under PSD. In (A–B), neuron indices (y-axis) are sorted according to their spiking times after learning. In all figures, data under SS are colored blue and data under PSD red.


[Figure 2 graphic: pre-strengths swi (A, B), post-strengths swj (C, D), and standard deviations σsw (E).]

Figure 2: Synaptic competition is realized by PSD, not SS. (A) Pre-strengths swi under SS are distributed uniformly within one trial and scaled globally across different trials, τ = 1, 100, and 300. (B) swi under PSD are distributed and change heterogeneously, particularly at τ = 300. (C) Post-strengths swj under SS are scaled globally across trials, even though they are distributed less uniformly within one trial. (D) swj under PSD are heterogeneous both within one trial and across trials. (E) Standard deviations σsw of SSpre in (A), PSDpre in (B), SSpost in (C), and PSDpost in (D) are clearly separated under PSD and nearly overlapping under SS, which indicates that synaptic competition is absent under SS but exhibited under PSD.


[Figure 3 graphic: ratios r1 and r2 (A), σ1(T̃(τ)) (B), ρ(W(τ)) (C), and ρ(D(τ)) (D) versus trial τ.]

Figure 3: The synaptic matrix is convergent under PSD, not SS. (A) r1 (solid line, blue) and r2 (dashed line, blue) are close to the theoretical bound 1 under SS; r1 (solid line, red) and r2 (dashed line, red) are 100-fold smaller under PSD. (B) The spectral norm of the all-step transition matrix, (C) the largest eigenvalue of the synaptic matrix, and (D) the spectral norm of the one-step transition matrix are always larger than 1 and are convergent under PSD but not under SS.


[Figure 4 graphic: raster plots (A, B), mean firing rate ν̄(τ) (C), and ν̄(τ+1) − ν̄(τ) (D).]

Figure 4: (A) SS develops unstable network dynamics with excitation explosion; raster patterns at τ = 1, 208, 209, and 211 are displayed. White dots are spikes. (B) PSD produces stable network dynamics, shown at different learning phases τ = 1, 200, 300, and 800. (C) Mean firing rate ν̄(τ), averaged over all neurons, exhibits large oscillations under SS but stably converges to the target under PSD. (D) Excitation explosion appears as a jump discontinuity under SS, indicated by ν̄(τ+1) − ν̄(τ); under PSD, stability is maintained during learning, without discontinuity. Data under SS are colored blue, and data under PSD red.


[Figure 5 graphic: pre-strengths swi (A, B), post-strengths swj (C, D), and standard deviations σsw (E).]

Figure 5: (A) Pre-strengths swi of all neurons in the network change uniformly under SS at τ = 1, 100, and 200. (B) swi under PSD change heterogeneously at τ = 1, 300, and 800. (C) Post-strengths swj under SS are homogeneous. (D) swj under PSD are heterogeneous. (E) Standard deviations σsw of pre- and post-strengths, SSpre in (A), PSDpre in (B), SSpost in (C), and PSDpost in (D), are well separated under PSD and similar under SS. The first 4 neurons are stimulated.


[Figure 6 graphic: ratios r1 and r2 (A), σ1(T̃(τ)) (B), ρ(W(τ)) (C), and ρ(D(τ)) (D) versus trial τ.]

Figure 6: (A) r1 (solid line) and r2 (dashed line) are close to 1 under SS (blue) and smaller under PSD (red). (B) The spectral norm of the transition matrix increases under SS and converges under PSD. The largest eigenvalue of the synaptic matrix (C) and the spectral norm of the one-step transition matrix (D) are always larger than 1. All curves under PSD are convergent. Data under SS are colored blue, and data under PSD red.
