
Hydrodynamic Limit for Spatially Structured Interacting Neurons

Published in: Journal of Statistical Physics

Abstract

We study the hydrodynamic limit of a stochastic system of neurons whose interactions are given by Kac potentials that mimic chemical and electrical synapses and leak currents. The system consists of \(\varepsilon ^{-2}\) neurons embedded in \([0,1)^2\), each spiking randomly according to a point process with rate depending on both its membrane potential and position. When neuron i spikes, its membrane potential is reset to 0 while the membrane potential of j is increased by a positive value \(\varepsilon ^2 a(i,j)\), if i influences j. Furthermore, between consecutive spikes, the system follows a deterministic motion due both to electrical synapses and leak currents. The electrical synapses are involved in the synchronization of the membrane potentials of the neurons, while the leak currents inhibit the activity of all neurons, attracting simultaneously their membrane potentials to 0. We show that the empirical distribution of the membrane potentials converges, as \(\varepsilon \) vanishes, to a probability density \(\rho _t(u,r)\) which is proved to obey a nonlinear PDE of hyperbolic type.


Fig. 1
Fig. 2


References

  1. Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley Series in Probability and Statistics. Wiley, New York (1999)

  2. Davis, M.H.A.: Piecewise-deterministic Markov processes: a general class of nondiffusion stochastic models. J. R. Stat. Soc. Ser. B 46(3), 353–388 (1984)

  3. De Masi, A., Galves, A., Löcherbach, E., Presutti, E.: Hydrodynamic limit for interacting neurons. J. Stat. Phys. 158(4), 866–902 (2015)

  4. De Masi, A., Presutti, E.: Mathematical Methods for Hydrodynamic Limits. Lecture Notes in Mathematics. Springer, New York (1991)

  5. Duarte, A., Ost, G.: A model for neural activity in the absence of external stimulus. Preprint (2014)

  6. Fournier, N., Löcherbach, E.: On a toy model of interacting neurons. Preprint (2014)

  7. Galves, A., Löcherbach, E.: Infinite systems of interacting chains with memory of variable length: a stochastic model for biological neural nets. J. Stat. Phys. 151(5), 896–921 (2013)

  8. Galves, A., Löcherbach, E.: Modeling networks of spiking neurons as interacting processes with memory of variable length. Preprint (2015)

  9. Gerstner, W., Kistler, W.: Spiking Neuron Models: An Introduction. Cambridge University Press, New York (2002)

  10. Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems. Grundlehren der mathematischen Wissenschaften. Springer, Berlin (1999)

  11. Thieullen, M., Riedler, M., Wainrib, G.: Limit theorems for infinite-dimensional piecewise deterministic Markov processes. Applications to stochastic excitable membrane models. Electron. J. Probab. 17(55), 1–48 (2012)

  12. Mitoma, I.: Tightness of probabilities on \(C([0, 1 ]; {Y}^{\prime })\) and \(D([0, 1 ]; {Y}^{\prime })\). Ann. Probab. 11(4), 989–999 (1983)

  13. Presutti, E.: Scaling Limits in Statistical Mechanics and Microstructures in Continuum Mechanics. Theoretical and Mathematical Physics. Springer, Dordrecht (2008)

  14. Robert, P., Touboul, J.: On the dynamics of random neuronal networks. Preprint (2014)

  15. Touboul, J.: Propagation of chaos in neural fields. Ann. Appl. Probab. 24(3), 1298–1328 (2014)


Acknowledgments

We are indebted to Professor E. Presutti for countless illuminating discussions. We also thank A. De Masi, A. Galves and C. Landim for helpful discussions. A. Duarte and G. Ost also thank Professor E. Presutti for all the teaching, attention and hospitality during their visits to the GSSI. The authors thank the reviewers for their careful reading of the manuscript and for the valuable comments and suggestions, which undoubtedly improved the paper. This article was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant 2011/51350-6, São Paulo Research Foundation). A. Duarte is supported by a CNPq fellowship (Grant 141270/2013-6) and G. Ost is supported by a CNPq fellowship (Grant 141482/2013-3). A.A. Rodríguez is supported by the GSSI.

Author information

Correspondence to Guilherme Ost.

Appendices

Appendix 1: Proof of Theorem 4

The proof follows the same steps as the proof of Theorem 4 of [3]. We start by providing an estimate of the total number of spikes of both processes \(\mathrm {U}^{(\varepsilon )}\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) in the interval [0, T]. Recall that \(Q^{(\varepsilon )}_u\) is the probability law governing the coupled process in which \(\mathrm {U}^{(\varepsilon )}(0)=u\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}_i(0)=\Phi _0(u_i)\) for all \(i\in \Lambda _{\varepsilon }.\)

Proposition 9

Let \(A_{[0,T]}\) be the event in which either \(\mathrm {U}^{(\varepsilon )}\) or \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) has more than \(2\varphi ^*\epsilon ^{-2}\delta \) spikes in some interval \([(k-1)\delta ,k\delta )\), for \(k=1,\ldots , T\delta ^{-1}\). Then, under Assumption 3,

$$\begin{aligned} Q_u^{(\varepsilon )}\Big (A_{[0,T]}\Big )\le 2T\delta ^{-1}e^{-\varphi ^*\delta \epsilon ^{-2}(3-e)}, \end{aligned}$$

for any initial configuration \(u\in \mathbb {R}_+^{\Lambda _{\varepsilon }}.\)

Proof

Fix \(k\in \{1,\ldots , T\delta ^{-1}\}\) and let \(N\big ([(k-1)\delta ,k\delta )\big )\) denote the number of spikes of the \(\mathrm {U}^{(\varepsilon )}\) process in the interval \([(k-1)\delta ,k\delta ).\) Then, under Assumption 3, \(N\big ([(k-1)\delta ,k\delta )\big )\) is stochastically bounded by

$$\begin{aligned} Z:=\sum _{j\in \Lambda _{\varepsilon }} N^*_j\big ([(k-1)\delta ,k\delta )\big ) \end{aligned}$$

where \((N^*_j)_{j\in \Lambda _{\varepsilon }}\) are iid Poisson processes with intensity \(\varphi ^*\). Since Z is distributed as a Poisson random variable with rate \(\varepsilon ^{-2}\delta \varphi ^*\), it follows that

$$\begin{aligned} Q_u^{(\varepsilon )}(N\big ([(k-1)\delta ,k\delta )\big )\ge 2\varphi ^*\delta \epsilon ^{-2} )\le \mathbb {P}(Z\ge 2\varphi ^*\delta \epsilon ^{-2})\le e^{-\varphi ^*\delta \epsilon ^{-2}(3-e)}. \end{aligned}$$

Bounding in the same manner the number of spikes of the \(Y^{(\delta )}\) process in the interval \([(k-1)\delta ,k\delta )\) and then summing over k we complete the proof. \(\square \)
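The Poisson tail estimate used in the proof can be checked numerically. The following sketch (a verification aid with illustrative rates, not part of the argument) compares the exact tail \(\mathbb {P}(Z\ge 2\lambda )\) of a Poisson variable, with \(\lambda \) standing in for \(\varphi ^*\delta \epsilon ^{-2}\), against the bound \(e^{-\lambda (3-e)}\) obtained by applying Markov's inequality to \(e^{Z}\):

```python
import math

def poisson_tail_geq(lam, a):
    """P(Z >= a) for Z ~ Poisson(lam), computed as 1 - P(Z <= a-1)."""
    # Sum the pmf up to a-1 in log space for numerical stability.
    cdf = 0.0
    log_fact = 0.0  # log(k!) maintained incrementally
    for k in range(int(a)):
        if k > 0:
            log_fact += math.log(k)
        cdf += math.exp(-lam + k * math.log(lam) - log_fact)
    return max(0.0, 1.0 - cdf)

def chernoff_bound(lam):
    """The bound e^{-lam (3 - e)} of Proposition 9 (Chernoff with t = 1)."""
    return math.exp(-lam * (3.0 - math.e))

# Check the bound for a few illustrative values of lam = phi* delta eps^{-2}.
for lam in [5.0, 20.0, 50.0]:
    assert poisson_tail_geq(lam, 2 * lam) <= chernoff_bound(lam)
```

The bound is far from tight (for \(\lambda =5\) the exact tail is about 0.03 against a bound of roughly 0.24), but only its exponential decay in \(\epsilon ^{-2}\delta \) is used in the proof.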

From now on, we suppose that, in both processes \(\mathrm {U}^{(\varepsilon )}\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\), the spiking rate of each neuron is \(\le \varphi ^*\) and the number of spikes of all neurons in any step \([(k-1)\delta ,k\delta ]\) is \(\le 2\varphi ^*\delta \varepsilon ^{-2}\). Moreover, writing \(B^*=C+R_0+2a^*\varphi ^*T\), we also assume that for all \(t\le T\) and \(k\delta \le T\),

$$\begin{aligned} ||\mathrm {U}^{(\varepsilon )}(t)||\le B^*, \qquad ||\bar{U}^{(\varepsilon )}(t)||\le b^{*} B^*, \qquad ||Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(k\delta )||\le B^*, \end{aligned}$$

where \(\bar{\mathrm {U}}^{(\varepsilon )}(t)=\big (\bar{\mathrm {U}}^{(\varepsilon )}_i(t),i\in \Lambda _{\varepsilon }\big ).\) By Assumption 2, (3.1) and Proposition 9, these assumptions hold outside an event of small probability.

In what follows, C denotes a constant which may change from one appearance to the next. We proceed as follows: we first control the increments of \(\mathcal {B}_k\), then provide an upper bound for \(\theta _k\), and finally conclude the proof.

Controlling the increments of \(\mathcal {B}_k\):

We start by noticing that

$$\begin{aligned} |\mathcal {B}_k| \le |\mathcal {B}_{k-1}| + | A_k^1 \cap \mathcal G_{k-1} | + | A^2_k \cap \mathcal G_{k-1} | \le |\mathcal {B}_{ k- 1 }| + |A^1_{k}|+ | A^2_k \cap \mathcal G_{k-1}|, \end{aligned}$$

where \(\mathcal G_{k-1}\) is the set of good labels at time \(k\delta \) (recall Definition 5) and

  • \(A^1_k\) is the set of all labels i for which the clocks \(\xi _i^1\) and \(\xi _i\) associated with label i ring during \( [ (k- 1) \delta , k \delta ],\)

  • \(A^2_k\) is the set of all labels i for which a clock \(\xi _i^{2}\) associated with label i rings during \( [ (k- 1) \delta , k \delta ]\).

Recall the definitions of the random clocks \(\xi ^1_i,\xi _i^2\) and \(\xi _i\) appearing in the coupling algorithm given in Sect. 5.1. Our aim is to prove that

$$\begin{aligned}&P \Big [ |A^1_k | > \epsilon ^{-2} (\delta \varphi ^* )^2 \Big ] \le e^{ - C \epsilon ^{-2} \delta ^4} , \end{aligned}$$
(8.1)
$$\begin{aligned}&P \Big [ | A^2_k \cap \mathcal G_{k-1} | > 2 C \epsilon ^{-2} \delta \left[ \theta _{k-1} + \delta +\ell \right] \Big ] \le e^{ - C \epsilon ^{-2} \delta ^4 }, \end{aligned}$$
(8.2)

where the constant C appearing in (8.1) and (8.2) may be different.

Then, from (8.1) and (8.2), we deduce that with probability \(\ge 1 - 2e^{ - C \epsilon ^{-2} \delta ^4 }\),

$$\begin{aligned} |\mathcal {B}_k| \le |\mathcal {B}_{k-1}| + \epsilon ^{-2}(\delta \varphi ^* )^2 + 2 C \epsilon ^{-2} \delta \left[ \theta _{k-1} + \delta \right] \le |\mathcal {B}_{k-1}| + C \epsilon ^{-2} \delta \left[ \theta _{k-1} + \delta \right] . \end{aligned}$$

Iterating the above bound and using that \( k \le T\delta ^{-1},\) we immediately get that with probability \(\ge 1 - 2k e^{ - C \epsilon ^{-2} \delta ^4 } \ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 },\)

$$\begin{aligned} \varepsilon ^2|\mathcal {B}_k| \le \varepsilon ^2|\mathcal {B}_1|+ C\delta \sum _{h=1}^{k-1} (\theta _{h} + \delta ), \end{aligned}$$
(8.3)

where C depends only on T. Since by definition \(\theta _{k} \le \theta _{k+1}\), we may bound the right-hand side of (8.3) by \( C(\theta _{k-1}+\delta ),\) implying that with probability \(\ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }, \)

$$\begin{aligned} \varepsilon ^2|\mathcal {B}_k|\le C(\theta _{k-1}+\delta ), \end{aligned}$$
(8.4)

for each \(k\le T\delta ^{-1}.\)

Proof of (8.1) The random variable \( |A^1_k |\) is stochastically dominated by \(Z^*:= \sum _{ i\in \Lambda _{\varepsilon }} \mathbbm {1}_{ \{ Z_i^* \ge 2 \} } ,\) where \( Z_i^* \), \(i\in \Lambda _{\varepsilon }\), are independent Poisson variables of parameter \(\varphi ^*\delta \). Thus, writing \(p^* = P ( Z_i^* \ge 2 ) \), we have

$$\begin{aligned} e^{ - \delta \varphi ^* } \frac{1}{2} \delta ^2 (\varphi ^*)^2 \le p^* \le \frac{1}{2} (\delta \varphi ^*)^2,\quad p^* \approx \frac{ 1}{2 }\,(\delta \varphi ^*)^2 \text{ as } \delta \rightarrow 0 . \end{aligned}$$

Therefore, \(Z^*\) is the sum of \(\varepsilon ^{-2}\) Bernoulli random variables, each having mean value \(p^*\). Invoking Hoeffding's inequality, we get (8.1).
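Both ingredients of this step, the two-sided bound on \(p^*=P(Z_i^*\ge 2)\) for a Poisson variable of parameter \(\xi =\delta \varphi ^*\) and Hoeffding's inequality for the resulting sum of Bernoulli variables, can be sanity-checked numerically. The sketch below uses toy parameters (the values of n and \(\xi \) are assumptions for illustration only):

```python
import math

def p_star(xi):
    """p* = P(N >= 2) for N ~ Poisson(xi)."""
    return 1.0 - math.exp(-xi) * (1.0 + xi)

# Two-sided bound from the text: e^{-xi} xi^2/2 <= p* <= xi^2/2.
for xi in [0.01, 0.1, 0.5]:
    assert math.exp(-xi) * xi**2 / 2.0 <= p_star(xi) <= xi**2 / 2.0

def binom_tail_geq(n, p, a):
    """P(S >= a) for S ~ Binomial(n, p), computed exactly from the pmf."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(a, n + 1))

# Hoeffding: P(S - np >= nt) <= exp(-2 n t^2), applied as in (8.1).
n, xi = 400, 0.1           # n plays the role of eps^{-2}; xi = delta * phi*
p = p_star(xi)             # mean of each Bernoulli indicator
threshold = n * xi**2      # the level eps^{-2} (delta phi*)^2
t = threshold / n - p      # deviation above the mean, per variable
assert binom_tail_geq(n, p, math.ceil(threshold)) <= math.exp(-2 * n * t**2)
```

Since \(p^*\approx (\delta \varphi ^*)^2/2\), the deviation t is of order \(\delta ^2\), which is what produces the exponent \(\epsilon ^{-2}\delta ^4\) in (8.1).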

Proof of (8.2) We shall dominate stochastically the random variable \( |A^2_k \cap \mathcal G_{k-1} | \) by

$$\begin{aligned} \bar{Z}:=\sum _{i\in \Lambda _{\varepsilon }} \mathbbm {1}_{\{{\bar{Z}}_i \ge 1\}}, \end{aligned}$$
(8.5)

where \({\bar{Z}}_i, i\in \Lambda _{\varepsilon }, \) are independent Poisson variables of parameter \( C ( \theta _{k-1} + \delta +\ell ) \delta .\) Once (8.5) is established, (8.2) follows directly.

Indeed, since

$$\begin{aligned} |A_k^2 \cap \mathcal G_{k-1}|\le \sum _{i\in \Lambda _{\varepsilon }} \mathbbm {1}_{\{\xi ^2_i<\delta , i\in \mathcal G_{k-1}\}}, \end{aligned}$$

it suffices to show that the intensity of each random clock \(\xi ^2_i\), \(i\in \mathcal {G}_{k-1},\) is \(\le C ( \theta _{k-1} + \delta +\ell ).\)

To that end, we write

$$\begin{aligned} y\!:=\!Y^{(\varepsilon , \delta , \ell , E, \tau )}((k-1)\delta ), \quad \! u\!:=\!\mathrm {U}^{(\varepsilon )}((k-1)\delta ) \quad \! \text{ and } \quad \! u_t\!:=\!\mathrm {U}^{(\varepsilon )}((k-1)\delta +t), \ \ t\in [0,\delta ). \end{aligned}$$

Now, for any \(i\in \mathcal G_{k-1}\cap C_m\), the intensity of \(\xi _i^2\) is

$$\begin{aligned} |\varphi (u_i(t),i) - \varphi (y_i,i_m) | \le \Vert \varphi \Vert _{Lip} \big [|u_i(t)-y_i|+\ell \big ], \end{aligned}$$

where \(\Vert \varphi \Vert _{Lip}\) is the Lipschitz constant of the function \(\varphi .\) Denoting by \(N_j\big ([s,t]\big )\) the number of spikes of \(U_j\) in the interval [s, t], we have

$$\begin{aligned}&|u_i(t)-y_i|\le |u_i-y_i|e^{-t(\alpha +\lambda _i)} +y_i\Big (1-e^{-(\alpha +\lambda _i)\delta }\Big ) + \lambda _i\int _0^t \bar{u}_i(s)e^{-(\alpha +\lambda _i)(t-s)}ds \\&\qquad \qquad \qquad +\, \varepsilon ^2\sum _{j\in \Lambda _{\varepsilon }}a(j,i)N_j\big ([(k-1)\delta ,(k-1)\delta +t]\big ). \end{aligned}$$

Since for all \(i\in \Lambda _{\varepsilon }\), \(y_i,\bar{u}_i(s)\le B^*\) and \(\sum _{j\in \Lambda _{\varepsilon }}a(j,i)N_j\big ([(k-1)\delta ,(k-1)\delta +t]\big )\le 2a^*\varphi ^*\varepsilon ^{-2}\delta ,\) then if additionally \(i\in \mathcal {G}_{k-1}\), it follows that

$$\begin{aligned} |u_i(t)-y_i|\le \theta _{k-1}+(\alpha +\lambda _i)B^*\delta +\lambda _i B^*\delta + 2a^*\varphi ^*\delta , \end{aligned}$$

and thus

$$\begin{aligned} |\varphi (u_i(t),i) - \varphi (y_i,i_m) |\le & {} \Vert \varphi \Vert _{Lip}\left( \theta _{k-1} + 2(\alpha +\sup _i\lambda _i)B^*\delta +2a^*\varphi ^*\delta + \ell \right) \\\le & {} C ( \theta _{k-1} + \delta +\ell ), \end{aligned}$$

which implies that

$$\begin{aligned} |A_k^2\cap \mathcal {G}_{k-1}| \le \sum _{i\in \Lambda _{\varepsilon }} \mathbbm {1}_{\{\bar{Z}_i\ge 1\}} \quad \text {stochastically,} \end{aligned}$$

where the \(\bar{Z}_i\) are independent Poisson random variables of intensity \(C(\theta _{k-1} +\delta +\ell )\delta \).

Estimates on \(\theta _k\):

Notice that \(\mathcal{G}_{k}=\mathcal{G}_{k- 1} \cap (C_k \cup F_k )\) where:

  1. (i)

    \(C_k\) is the set of all indexes i whose associated random clock \(\xi _i^{1}\) rings only once during \( [ (k- 1) \delta , k \delta ]\).

  2. (ii)

    \(F_k\) is the set of indexes i which did not spike during \( [ (k- 1) \delta , k \delta ] .\)

In what follows, we will make use of the expression for the membrane potential \(\mathrm {U}^{(\varepsilon )}_i(t)\) of a neuron which did not spike in the interval [s, t]:

$$\begin{aligned} \mathrm {U}^{(\varepsilon )}_i(t) = e^{-(\alpha +\lambda _i)(t-s)}\mathrm {U}^{(\varepsilon )}_i(s) +\lambda _i\int _{s}^t e^{-(\alpha +\lambda _i)(t-h)}\left\{ \bar{\mathrm {U}}^{(\varepsilon )}_i(h)dh + \frac{\varepsilon ^2}{ \lambda _i}\sum _{j\in \Lambda _{\varepsilon }}a(j,i)dN_j(h)\right\} ,\nonumber \\ \end{aligned}$$
(8.6)

where \(N_j(t)\) is the total number of spikes of neuron j in the process \(\mathrm {U}\) up to time t.

  • Take \(i\in C_k \cap \mathcal{G}_{k- 1}\). In this case, the random clock \(\xi _i^{1}\) rings at some time \( t \in [(k-1)\delta ,k\delta )\). By (8.6),

    $$\begin{aligned} \mathrm {U}^{(\varepsilon )}_i(k \delta ) = \lambda _i\int _t^{ k\delta } e^{ - (\alpha +\lambda _i) ( k\delta - s)} {\bar{\mathrm {U}}}^{(\varepsilon )}_i (s) ds + \varepsilon ^2\sum _{j\in \Lambda _{\varepsilon }} a(j,i)\int _t^{k\delta } e^{- (\alpha +\lambda _i ) (k\delta -s)} d N_j(s) , \end{aligned}$$

    since \(U^{(\varepsilon )}_i(t_+)=0\). Noticing also that \(||\bar{\mathrm {U}}^{(\varepsilon )}(t)|| \le B^*\) and \(N\big ([(k-1)\delta ,k\delta )\big ) \le 2{\varphi }^*\delta \varepsilon ^{-2}\) we immediately see that \(U^{(\varepsilon )}_i (k \delta ) \le C \delta \). By similar arguments, \( Y^{(\delta ,\ell ,E,\tau )}_i (k \delta ) \le C \delta ,\) so that

    $$\begin{aligned} D_i (k ) \le C \delta . \end{aligned}$$
    (8.7)

    Observe that the value \(D_i(k-1)\) does not appear in the bound above. We shall now analyse the other case.

  • Fix \(i\in F_k \cap \mathcal{G}_{k- 1}\). Notice that the neuron i is good at time \( (k-1) \delta \) and did not spike in the time interval \([(k-1)\delta , k\delta )\) in either the \(U^{(\varepsilon )}\) or the \(Y^{(\delta ,\ell ,E,\tau )}\) process. As before, we write \( \mathrm {U}^{(\varepsilon )}((k-1) \delta )= u \) and \( Y^{(\delta ,\ell ,E,\tau )}((k-1) \delta )= y.\) By (8.6) and (5.4), the variable \(|\mathrm {U}_i ( k \delta ) - Y^{(\delta )}_i ( k \delta ) | = D_i(k)\), \(i\in C_m\), is bounded by

    $$\begin{aligned}&D_i (k) \le \left| e^{-\delta (\alpha +\lambda _i)}u_i - e^{-\delta (\alpha +\lambda _m)}y_i \right| \nonumber \\&\quad +\left| \int _{(k-1)\delta }^{k\delta } \lambda _i e^{- (\alpha +\lambda _i) (k\delta -t )}{\bar{\mathrm {U}}}^{(\varepsilon )}_i (t) dt - \lambda _{m}\int _{(k-1)\delta }^{k\delta } {\bar{y}}(m) e^{- (\alpha +\lambda _{m}) (k\delta -t )} dt \right| \nonumber \\&\quad + \left| \varepsilon ^2\sum _{j\in \Lambda _{\varepsilon }}a(j,i)\int _{(k-1)\delta }^{k\delta } e^{-(\alpha +\lambda _j)(k\delta -t)} dN_j(t) - \varepsilon ^2\sum _{m'}a(i_{m'},i_m)\tilde{N}\big ([(k-1)\delta , k\delta )\big ) \right| ,\nonumber \\ \end{aligned}$$
    (8.8)

    where \(\tilde{N}\big ([(k-1)\delta ,k\delta )\big )\) denotes the number of spikes of the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process in the interval \([(k-1)\delta ,k\delta ).\) Thus, it suffices to bound each term on the right-hand side of (8.8).

We start by bounding the first one:

$$\begin{aligned} |e^{-\delta (\alpha +\lambda _i)}u_i - e^{-\delta (\alpha +\lambda _{m})}y_i |\le B^*\delta |\lambda _i-\lambda _{m}|+ e^{-(\alpha +\lambda _{m})\delta }|u_i-y_i|. \end{aligned}$$

Since \(|\lambda _i-\lambda _{m}|\le ||\lambda ||_\mathrm{Lip}\ell \) and supposing \(\ell \le \delta \), we can bound the right-hand side above by \(C\delta ^2+\theta _{k-1}.\)

We now bound the second term on the right-hand side of (8.8). It is easy to see that it is bounded by

$$\begin{aligned} ||\lambda ||_\mathrm{Lip}B^*\ell \delta (1+\lambda _{m})+\lambda _{m}\delta |\bar{y}(m)-{\bar{u}}_{i_m}|+\lambda _{m}\int _{(k-1)\delta }^{k\delta }\Big [\big |{\bar{\mathrm {U}}}_i(t)-{\bar{u}}_i\big |+\big |{\bar{\mathrm {U}}}_{i_m}(t)-{\bar{u}}_{i_m}\big |\Big ]dt. \end{aligned}$$

To control the second and third terms we notice that for any \(i\in \Lambda _{\varepsilon }\), \(|{\bar{U}}_i(t)-\bar{u}_i|\le C\delta \) and \(|\bar{U}_i(t)-{\bar{y}}_i|\le C\delta \). In addition, for any \(i\in C_m\), \(m=1,\ldots , \ell ^{-2}\), \(|{\bar{U}}_i(t)-{\bar{u}}_{i_m}|\le C\ell \). Requiring that \(\ell \le \delta \), we can combine these three inequalities to bound the sum above by \(C\delta (\delta +\theta _{k-1}).\)

The argument to bound the third term in (8.8) is a bit trickier. First we bound that term by

$$\begin{aligned}&\varepsilon ^2\sum _{j}a(j,i)\int _{(k-1)\delta }^{k\delta }(k\delta -t)(\alpha +\lambda _j)dN_j(t)\\&\quad +\, \varepsilon ^2\sum _{m'}\sum _{j\in C_{m'}}\big |a(j,i)-a(i_{m'},i_m)\big |N_j\big ([\delta (k-1),k\delta )\big )\\&\quad +\,\varepsilon ^2\sum _{m'}a(i_{m'},i_m)\Big |N_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )-\tilde{N}_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )\Big |, \end{aligned}$$

where \(N_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )=\sum _{j\in C_{m'}}N_j\big ([(k-1)\delta ,k\delta )\big )\) is the total number of spikes in the \(\mathrm {U}^{(\varepsilon )}\) process inside the square \(C_{m'}\) during the time interval \([(k-1)\delta ,k\delta )\) and \(\tilde{N}_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )\) is the corresponding quantity associated with the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process.

The first two terms above are easily bounded. One can check that the sum of the two can be bounded by \(C\delta ^2\). To control the third term, we shall show that

$$\begin{aligned} \Big |N_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )-\tilde{N}_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )\Big |\le 4 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2. \end{aligned}$$

Indeed, this difference is smaller than or equal to

$$\begin{aligned} \sum \limits _{j\in C_{m'}\cap A^1_k}N_j([(k-1)\delta ,k\delta )) + |C_{m'}\cap A^2_k |, \end{aligned}$$
(8.9)

so that it suffices to control these two terms. We start with the second one. We know that with probability \(\ge 1-e^{ - C \epsilon ^{-2} \delta ^4 }\),

$$\begin{aligned} |C_{m'}\cap A^2_k |=|C_{m'}\cap A^2_k \cap \mathcal {G}_{k-1}|+ |C_{m'}\cap A^2_k \cap \mathcal {B}_{k-1}|\le 2C\ell ^2\varepsilon ^{-2}\delta (\theta _{k-1}+\delta )+ C\delta \ell ^2 |\mathcal {B}_{k-1}|, \end{aligned}$$

where we used (8.2) and the fact that the number of neurons in \(\mathcal {B}_{k-1}\cap C_{m'}\) which spiked in a time \(\delta \) is dominated by a Poisson random variable of rate \(\varphi ^*\delta |\mathcal {B}_{k-1}\cap C_{m'}|.\) Thus, it remains only to bound the first term in (8.9).

In order to do that, we start by noticing that

$$\begin{aligned}&P\Big [ \sum _{j\in A^1_{k}\cap C_{m'}} N_j((k-1)\delta ,k\delta ) \ge 4 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2\Big ]\\&\quad \le P\Big [ \sum _{j\in A^1_{k}\cap C_{m'}} N_j((k-1)\delta ,k\delta ) \ge 4 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2; |A^1_{k}\cap C_{m'}| \le (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ]\nonumber \\&\qquad + P\Big [ |A^1_{k}\cap C_{m'}| > (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ] . \end{aligned}$$

The second term is controlled by the estimate in (8.1). Let \(A \subset C_{m'} \), \(|A| \le (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 ,\) then

$$\begin{aligned}&P\Big [ \sum _{j\in A^1_{k}\cap C_{m'}} N_j((k-1)\delta ,k\delta ) \ge 4 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2\;|\; A^1_{k}\cap C_{m'} =A \Big ]\\&\quad \le P^*\Big [ \sum _{j\in A}( N^*_j-2) \ge 2 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ] , \end{aligned}$$

where \(P^*\) is the distribution of independent Poisson random variables \(N^*_j\), \(j\in A\), each having parameter \(\varphi ^*\delta \) and conditioned on \(N^*_j\ge 2\). In this way, we easily get that

$$\begin{aligned} P^*[N^*_j-2 = k ]= Z_\xi ^{-1} \frac{\xi ^k}{(k+2)!},\quad Z_\xi = \xi ^{-2} \Big (e^\xi - 1 -\xi \Big ),\quad \xi = \varphi ^*\delta . \end{aligned}$$

Now let \(X_1,X_2, \ldots \) be a sequence of independent Poisson variables with parameter \(\xi \). It follows that \(N^*_j-2 \le X_j\) stochastically for \(\xi \) small enough, hence for \(\delta \) small enough. Indeed, for any integer k we have

$$\begin{aligned} P^*[N^*_j-2 \ge k ] \le P[ X_j \ge k] \end{aligned}$$
(8.10)

because for \(k\ge 1\),

$$\begin{aligned} P^*[N^*_j-2 \ge k ] \le \frac{2\xi ^k}{(k+2)!},\quad P[ X_j \ge k] \ge e^{-\xi } \frac{\xi ^k}{k!} , \end{aligned}$$

hence (8.10) when \(3e^{-\xi } \ge 2\).
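The stochastic domination (8.10) can be verified numerically for a small value of \(\xi \). The sketch below (an illustration, not part of the proof; \(\xi =0.3\le \log (3/2)^{-1}\)-scale smallness is an assumed toy value) compares the tail of \(N^*_j-2\) under the conditioned law \(P^*\) with the Poisson tail:

```python
import math

def tail_conditioned(xi, k, jmax=50):
    """P*(N - 2 >= k) where N ~ Poisson(xi) conditioned on N >= 2:
    P*(N - 2 = j) = Z^{-1} xi^j / (j+2)!  with  Z = xi^{-2}(e^xi - 1 - xi)."""
    z = (math.exp(xi) - 1.0 - xi) / xi**2
    return sum(xi**j / math.factorial(j + 2) for j in range(k, jmax)) / z

def tail_poisson(xi, k, jmax=50):
    """P(X >= k) for X ~ Poisson(xi), truncated at jmax (negligible tail)."""
    return sum(math.exp(-xi) * xi**j / math.factorial(j) for j in range(k, jmax))

# Domination holds for every k >= 1 once xi is small enough.
xi = 0.3
for k in range(1, 20):
    assert tail_conditioned(xi, k) <= tail_poisson(xi, k)
```

Pointwise, the ratio of the two probability mass functions at j is \(e^{\xi }Z_\xi ^{-1}/((j+1)(j+2))\), which is below 1 for all \(j\ge 1\) when \(\xi \) is small; this is exactly the comparison made in the display above.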

Since \(X=\sum _{j\in A} X_j\) is a Poisson variable of parameter \(|A| \xi \le (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \varphi ^*\delta \) we have

$$\begin{aligned} P^*\Big [ \sum _{j\in A}( N^*_j-2) \ge 2 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ] \le P^*\Big [X \ge 2 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ] , \end{aligned}$$

where the expectation \(E^* (X) \) of X is smaller (for \(\delta \) small) than \((\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2\). As a consequence,

$$\begin{aligned} P^*\Big [ \sum _{j\in A}( N^*_j-2) \ge 2 (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \Big ] \le e^{-C\epsilon ^{-2} \delta ^{2}\ell ^2} . \end{aligned}$$

To sum up, for \(i\in F_k \cap \mathcal{G}_{k- 1}\), with probability \(\ge 1- e^{-C\epsilon ^{-2} \delta ^{2}\ell ^2}\), we have

$$\begin{aligned} D_i (k) \le \theta _{k-1}(1+ C \delta ) + C\delta |\mathcal {B}_{k-1}|\varepsilon ^2+ C\delta ^2. \end{aligned}$$

The above inequality together with (8.7) guarantees that with probability \(\ge 1- e^{-C\epsilon ^{-2} \delta ^{2}\ell ^2},\)

$$\begin{aligned} \theta _k \le \max \{ C\delta ; \theta _{k-1}(1+ C \delta ) + C\delta |\mathcal {B}_{k-1}|\varepsilon ^2+ C\delta ^2\} . \end{aligned}$$
(8.11)

Iteration on the bound of \(\theta _k\):

As a consequence of (8.4), \( \varepsilon ^{2}|\mathcal {B}_k| \le C (\theta _{k-1} + \delta )\) for all \(k\delta \le T\) with probability \( 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }\). Plugging this into (8.11), it follows that, with probability \( 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }\),

$$\begin{aligned} \theta _k \le \max \Big ( C \delta ,\left[ 1 + C \delta \right] \theta _{k-1} + C \delta ^2 \Big ) . \end{aligned}$$

Iterating the above inequality, we obtain

$$\begin{aligned} \theta _k \le C \sum _{ s=0}^{k-1} \left[ 1 + C \delta \right] ^s \delta ^2 + (1 + C \delta )^k C \delta , \end{aligned}$$

and since

$$\begin{aligned} C \sum _{ s=0}^{k-1} \left[ 1 + C \delta \right] ^s \delta ^2 + (1 + C \delta )^k C \delta= & {} C\delta [\left( 1 + C \delta \right) ^k - 1] + (1 + C \delta )^k C \delta \\\le & {} C e^{ C T } \delta \ , \end{aligned}$$

recalling that \( k \delta \le T,\) we conclude that

$$\begin{aligned} \theta _k \le C \delta \end{aligned}$$

for all \( \delta \le \delta _0, \) with probability \(\ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }\). This finishes the proof of Theorem 4.
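The final step is a discrete Gronwall argument: the recursion \(\theta _k \le \max \{C\delta ,\ (1+C\delta )\theta _{k-1}+C\delta ^2\}\), run for \(k\le T\delta ^{-1}\) steps, stays of order \(\delta \) uniformly as \(\delta \rightarrow 0\). A quick numerical illustration (with toy constants assumed for the sketch):

```python
import math

def iterate_theta(C, delta, T, theta0=0.0):
    """Iterate theta_k = max(C delta, (1 + C delta) theta_{k-1} + C delta^2)
    for k = 1, ..., T / delta and return the final value."""
    theta = theta0
    for _ in range(int(T / delta)):
        theta = max(C * delta, (1 + C * delta) * theta + C * delta**2)
    return theta

C, T = 2.0, 1.0
for delta in [0.1, 0.01, 0.001]:
    # The iterated value stays below a constant multiple of delta,
    # with the constant C e^{CT} from the geometric-sum estimate.
    assert iterate_theta(C, delta, T) <= 2 * C * math.exp(C * T) * delta
```

Halving \(\delta \) roughly halves the final value, which is the uniform \(\theta _k \le C\delta \) behaviour asserted at the end of the proof.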

Appendix 2: Proof of Proposition 3

Proof

Fix \(\phi \in \mathcal {S}\). By (A), the left-hand side of (5.7) does not change if we consider \(U^*(t)=\min \{\mathrm {U}^{(\varepsilon )}(t),B^{*}\}\) and \(Y^*(t)=\min \{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(t),B^*\}\) instead of \(\mathrm {U}^{(\varepsilon )}(t)\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(t)\). Now, by the smoothness of the function \(\phi \),

$$\begin{aligned} Q_u^{(\varepsilon )}\left[ \Big |\varepsilon ^2\sum _{i\in \Lambda _{\varepsilon }}\phi (U_i(t),i)-\varepsilon ^2\sum _{i\in \Lambda _{\varepsilon }}\phi (Y_i(t),i_m)\Big |\right] \le ||\phi ||_\mathrm{Lip}Q_u^{(\varepsilon )}\Big [\varepsilon ^2\sum _{i\in \Lambda _{\varepsilon }} |U^*_i(t)-Y^*_i(t)|\Big ]. \end{aligned}$$

Applying Theorem 4 and using that \(|U^*_i(t)-Y^*_i(t)|\le B^*\), we get the desired upper bound in (5.7). \(\square \)

Appendix 3: Proof of Theorem 5

Proof

Let \(\mathcal {F}_n\) be the sigma-algebra generated by the variables \(\xi _i=\xi _i(k), k\le n-1,i\in \Lambda _{\varepsilon }\) appearing in (5.2). Observe that all variables \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(n\delta )\), \(e^{(\varepsilon )}_{n}(m)\), \(S^{(\varepsilon )}_{n+1}(m,h)\) and \(S^{(\varepsilon )}_{n+1}(m)\) are \(\mathcal {F}_{n}\)-measurable. In what follows, the constants \(C,c_1\) and \(c_2\) may change from one appearance to another.

The proof is by induction. For \(n=0\), the claim is easy to check. Indeed, notice that in this case \(E^{(\varepsilon )}_{0,k}=D^{(\varepsilon )}_{0,k}\). Moreover, notice also that

$$\begin{aligned} \zeta _{0,m}(D^{(\varepsilon )}_{0,k})= \tilde{E}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}^{(\varepsilon )}\big [\eta _{0,m}(E^{(\varepsilon )}_{0,k})\big ]=\sum _{i\in C_m}\int _{I_k}\psi _0(u,i)du \end{aligned}$$

and that \(\eta _{0,m}(E^{(\varepsilon )}_{0,k})\) is a sum of \(\ell ^{2}\varepsilon ^{-2}\) independent Bernoulli random variables \(X_i, i\in C_m\), where the expected value of \(X_i\) is \(\int _{I_k}\psi _0(u,i)du.\) By Hoeffding's inequality we deduce that

$$\begin{aligned} \varepsilon ^2 |\eta ^{(\varepsilon )}_{0,m}(E_{0,k}) -\zeta ^{(\varepsilon )}_{0,m}(D_{0,k})|>E\ell ^2\varepsilon ^{1/2} \end{aligned}$$

with probability \(\le 2e^{-c_2\varepsilon ^{-1}}\), where \(c_2=2E^2\ell ^2.\) Therefore, for \(n=0\), the reverse inequality holds for all k and m with probability larger than or equal to

$$\begin{aligned} 1-c_1 e^{-c_2\varepsilon ^{-1}}, \end{aligned}$$

establishing the theorem in the case \(n=0\). We now suppose that the result holds for \(k\le n\). Introduce the set \(G_n\) on which:

  • \(\big | E^{(\varepsilon )}_{n,k}-D^{(\varepsilon )}_{n,k} \big | \le C\varepsilon ^{1/2},\) \(k=1,\ldots , |\mathcal {E}^{(\varepsilon )}_{n}|\)

  • \(\varepsilon ^2 \Big |\eta _{n,m}\Big (E^{(\varepsilon )}_{n,k+\delta \tau ^{-1}}\Big ) -\zeta _{n,m}\Big (D^{(\varepsilon )}_{n,k+\delta \tau ^{-1}}\Big )\Big | \le E\ell ^2\varepsilon ^{1/2}, \) \(k=1,\ldots , |\mathcal {E}^{(\varepsilon )}_{n}|,\) and

  • \(\varepsilon ^2 \Big |\eta _{n,m}\Big (E^{(\varepsilon )}_{n,h}\Big ) -\zeta _{n,m}\Big (D^{(\varepsilon )}_{n,h}\Big )\Big | \le \tau \ell ^2\varepsilon ^{1/2}, \ h=1,\ldots , \delta \tau ^{-1}.\)

By the inductive hypothesis, \(\tilde{P}^{(\varepsilon )}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}(G_n)\ge 1-c_1e^{-c_2\varepsilon ^{-1/2}}.\)

Since

$$\begin{aligned}&|E^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}-D^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}| \le |E^{(\varepsilon )}_{n,k}-D^{(\varepsilon )}_{n,k}| + \lambda _m \delta \Big |\bar{y}^{(\varepsilon )}_{n}(m)-e^{(\varepsilon )}_{n}(m)\Big |\\&\qquad \qquad \qquad + \Big |S^{(\varepsilon )}_{n+1}(m)-\tilde{E}^{(\varepsilon )}\big [S^{(\varepsilon )}_{n+1}(m)\big ]\Big |, \end{aligned}$$

we have that on \(G_n,\)

$$\begin{aligned} |E^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}-D^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}| \le C\varepsilon ^{1/2}+\Big |S^{(\varepsilon )}_{n+1}(m)-\tilde{E}^{(\varepsilon )}\big [S^{(\varepsilon )}_{n+1}(m)\big ]\Big |. \end{aligned}$$

We shall show that there exist positive constants \(c,c_1\) and \(c_2\) not depending on \(\varepsilon \) such that

$$\begin{aligned} \Big |S^{(\varepsilon )}_{n+1}(m)-\tilde{E}^{(\varepsilon )}\big [S^{(\varepsilon )}_{n+1}(m)\big ]\Big |\le c\varepsilon ^{1/2}, \end{aligned}$$
(10.1)

with probability \(\ge 1-c_1e^{-c_2\varepsilon ^{-1}}\). To that end, we first write

$$\begin{aligned} N_{n+1}(m,k,\delta )=\sum \limits _{i\in C_m}\mathbbm {1}_{\{\xi _i< \delta \}}, \ \xi _i \sim \exp \big (\varphi (E^{(\varepsilon )}_{n,k},i_m)\big ) \end{aligned}$$

and then by the conditional version of Hoeffding’s inequality we deduce that

$$\begin{aligned} \tilde{P}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}^{(\varepsilon )}(\varepsilon ^{2}\big | N_{n+1}(m,k,\delta )\!-\! \eta _{n,m}(E^{(\varepsilon )}_{n,k})(1\!-\!e^{-\delta \varphi (E^{(\varepsilon )}_{n,k},i_m)})\big |\!>\! E\ell ^2 \varepsilon ^{1/2}|\mathcal {F}_n)\le c_1e^{-c_2\varepsilon ^{-1}}\nonumber \\ \end{aligned}$$
(10.2)

Since on \(G_n\)

$$\begin{aligned} |\varphi (E^{(\varepsilon )}_{n,k},i_m)-\varphi (D^{(\varepsilon )}_{n,k},i_m)|\le C\varepsilon ^{1/2}, \end{aligned}$$

noticing that \(N_{n+1}(m,\delta )=\sum _{k}N_{n+1}(m,k,\delta )\) and \(\varepsilon ^2\zeta (D^{(\varepsilon )}_{n,k})\le 1,\) it follows, together with (10.2), that there exist constants \(C,c_1\) and \(c_2\) such that

$$\begin{aligned} \tilde{P}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}^{(\varepsilon )}\Big (G_n,\varepsilon ^{2}\big |S^{(\varepsilon )}_{n+1}(m)-\tilde{E}^{(\varepsilon )}\big [S^{(\varepsilon )}_{n+1}(m)\big ]\big |>C\varepsilon ^{1/2}\Big |\mathcal {F}_n\Big )\le c_1e^{-c_2\varepsilon ^{-1}}, \end{aligned}$$

proving (10.1). Therefore,

$$\begin{aligned} \tilde{P}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}^{(\varepsilon )}\Big (G_n,|E^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}-D^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}|>C\varepsilon ^{1/2}\Big |\mathcal {F}_n\Big )\le c_1e^{-c_2\varepsilon ^{-1}}. \end{aligned}$$

A similar argument shows that we may replace \(E^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}\) and \(D^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}\) in the probability above by \(E^{(\varepsilon )}_{n+1,h}\) and \(D^{(\varepsilon )}_{n+1,h}\), respectively. Thus, summing over all k, h and m, we prove the first part of Theorem 5 for \(n+1.\)

Now, noticing that \(\eta _{n+1}(m,k+\delta \tau ^{-1})=\eta _{n}(m,k)-N_{n+1}(m,k,\delta )\) and remembering that, by (6.6), \(\zeta _{n+1}(m,k+\delta \tau ^{-1})=\zeta _{n}(m,k)e^{-\delta \varphi \big (D^{(\varepsilon )}_{n,k},i_m\big )},\) we easily see, together with (10.2), that

$$\begin{aligned} \tilde{P}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}^{(\varepsilon )}\Big (G_n,\varepsilon ^2 \big |\eta ^{(\varepsilon )}_{n+1}(m,k+\delta \tau ^{-1}) -\zeta _{n+1}(m,k+\delta \tau ^{-1})\big |>C\varepsilon ^{1/2}\Big |\mathcal {F}_n\Big )\le c_1e^{-c_2\varepsilon ^{-1}}, \end{aligned}$$

for some suitable constants not depending on \(\varepsilon \). A similar argument shows that the same type of bound for \(\varepsilon ^2 \big |\eta ^{(\varepsilon )}_{n+1}(m,h) -\zeta ^{(\varepsilon )}_{n+1}(m,h)\big |\) also holds, finishing the proof of the theorem. \(\square \)

Appendix 4: Proof of Theorem 2 for General Firing Rates

The proof is analogous to that of Appendix 4 of [3]. For the sake of completeness we give it here.

Let \(\varphi , R, T\) and C be as in the statement of Theorem 1 and let \(\phi \) be any bounded continuous function on \(D\big ([0,T], \mathcal {S}' \big ).\) We have to show that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi )=\phi (\rho ). \end{aligned}$$

Let A be the set \(A=\{||U^{(\varepsilon )}(t)||\le C,t\in [0,T]\}.\) Theorem 1 implies that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0 } \big | \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi ) - \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi 1_A)\big |=0. \end{aligned}$$
(11.1)

Now, let \(\mathcal {P}^{(*,\varepsilon )}_{[0,T]}\) be the distribution of the process with a spiking rate \(\varphi ^*(\cdot ,\cdot )\) which fulfils Assumption 3 and is equal to \(\varphi \) for \(u\le C\). By definition, it follows that

$$\begin{aligned} \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi 1_A)=\mathcal {P}^{(*,\varepsilon )}_{[0,T]}(\phi 1_A). \end{aligned}$$
(11.2)

Having proved Theorem 2 under Assumption 3, we get the desired convergence to a limit density \(\rho ^*=(\rho ^*_tdudr)_{t\in [0,T]}\) for the process whose spiking rate is \(\varphi ^*\). It then follows from (11.1) and (11.2) that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0 }\mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi )=\phi (\rho ^*1_A). \end{aligned}$$

We claim that \(\rho ^*=\rho ^*1_A\). Indeed, by considering \(\phi (w)=\sup \{w_t(1),t\le T\} \wedge 1,\) we immediately see that \(1=\lim _{\varepsilon \rightarrow 0 } \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi )=\phi (\rho ^*1_A).\) This last equality implies that \(\rho ^*\) has support in [0, C]. As a consequence,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0 } \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi )=\phi (\rho ^*1_A)=\phi (\rho ^*), \end{aligned}$$

which concludes the proof of the theorem.


Cite this article

Duarte, A., Ost, G. & Rodríguez, A.A. Hydrodynamic Limit for Spatially Structured Interacting Neurons. J Stat Phys 161, 1163–1202 (2015). https://doi.org/10.1007/s10955-015-1366-y
