Abstract
We study the hydrodynamic limit of a stochastic system of neurons whose interactions are given by Kac potentials that mimic chemical and electrical synapses and leak currents. The system consists of \(\varepsilon ^{-2}\) neurons embedded in \([0,1)^2\), each spiking randomly according to a point process with rate depending on both its membrane potential and position. When neuron i spikes, its membrane potential is reset to 0, while the membrane potential of each neuron j influenced by i is increased by the positive value \(\varepsilon ^2 a(i,j)\). Furthermore, between consecutive spikes, the system follows a deterministic motion due to both electrical synapses and leak currents. The electrical synapses tend to synchronize the membrane potentials of the neurons, while the leak currents inhibit the activity of all neurons, simultaneously attracting their membrane potentials to 0. We show that the empirical distribution of the membrane potentials converges, as \(\varepsilon \) vanishes, to a probability density \(\rho _t(u,r)\) which is proved to obey a nonlinear PDE of hyperbolic type.
References
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley Series in Probability and Statistics. Wiley, New York (1999)
Davis, M.H.A.: Piecewise-deterministic Markov processes: a general class of nondiffusion stochastic models. J. R. Stat. Soc. Ser. B 46(3), 353–388 (1984)
De Masi, A., Galves, A., Löcherbach, E., Presutti, E.: Hydrodynamic limit for interacting neurons. J. Stat. Phys. 158(4), 866–902 (2015)
De Masi, A., Presutti, E.: Mathematical Methods for Hydrodynamic Limits. Lecture Notes in Mathematics, vol. 1501. Springer, Berlin (1991)
Duarte, A., Ost, G.: A model for neural activity in the absence of external stimulus (2014)
Fournier, N., Löcherbach, E.: On a toy model of interacting neurons (2014)
Galves, A., Löcherbach, E.: Infinite systems of interacting chains with memory of variable length: a stochastic model for biological neural nets. J. Stat. Phys. 151(5), 896–921 (2013)
Galves, A., Löcherbach, E.: Modeling networks of spiking neurons as interacting processes with memory of variable length (2015)
Gerstner, W., Kistler, W.: Spiking Neuron Models: An Introduction. Cambridge University Press, New York (2002)
Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, New York (1999)
Riedler, M., Thieullen, M., Wainrib, G.: Limit theorems for infinite-dimensional piecewise deterministic Markov processes. Applications to stochastic excitable membrane models. Electron. J. Probab. 17(55), 1–48 (2012)
Mitoma, I.: Tightness of probabilities on \(C([0, 1 ]; {Y}^{\prime })\) and \(D([0, 1 ]; {Y}^{\prime })\). Ann. Probab. 11(4), 989–999 (1983)
Presutti, E.: Scaling Limits in Statistical Mechanics and Microstructures in Continuum Mechanics. Theoretical and Mathematical Physics. Springer, Dordrecht (2008)
Robert, P., Touboul, J.: On the dynamics of random neuronal networks (2014)
Touboul, J.: Propagation of chaos in neural fields. Ann. Appl. Probab. 24(3), 1298–1328 (2014)
Acknowledgments
We are indebted to Professor E. Presutti for countless illuminating discussions. We also thank A. De Masi, A. Galves and C. Landim for helpful discussions. A. Duarte and G. Ost also thank Professor E. Presutti for all the teaching, attention and hospitality during their visit to the GSSI. The authors thank the reviewers for their careful reading of the manuscript and for all the valuable comments and suggestions, which undoubtedly improved the paper. This article was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant 2011/51350-6, São Paulo Research Foundation). A. Duarte is supported by a CNPq fellowship (Grant 141270/2013-6) and G. Ost is supported by a CNPq fellowship (Grant 141482/2013-3). A.A. Rodríguez is supported by GSSI.
Appendices
Appendix 1: Proof of Theorem 4
The proof follows the same steps as the proof of Theorem 4 of [3]. We start by providing an estimate of the total number of spikes for both processes \(\mathrm {U}^{(\varepsilon )}\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) in the interval [0, T]. Recall that \(Q^{(\varepsilon )}_u\) is the probability law governing the coupled process in which \(\mathrm {U}^{(\varepsilon )}(0)=u\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}_i(0)=\Phi _0(u_i)\) for all \(i\in \Lambda _{\varepsilon }.\)
Proposition 9
Let \(A_{[0,T]}\) be the event that either \(\mathrm {U}^{(\varepsilon )}\) or \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) has more than \(2\varphi ^*\varepsilon ^{-2}\delta \) spikes in some interval \([(k-1)\delta ,k\delta )\), for \(k=1,\ldots , T\delta ^{-1}\). Then, under Assumption 3,
for any initial configuration \(u\in \mathbb {R}_+^{\Lambda _{\varepsilon }}.\)
Proof
Fix \(k\in \{1,\ldots , T\delta ^{-1}\}\) and let \(N\big ([(k-1)\delta ,k\delta )\big )\) denote the number of spikes of the \(\mathrm {U}^{(\varepsilon )}\) process in the interval \([(k-1)\delta ,k\delta ).\) Then, under Assumption 3, \(N\big ([(k-1)\delta ,k\delta )\big )\) is stochastically bounded by
$$\begin{aligned} Z:=\sum _{j\in \Lambda _{\varepsilon }} N^*_j\big ([(k-1)\delta ,k\delta )\big ), \end{aligned}$$
where \((N^*_j)_{j\in \Lambda _{\varepsilon }}\) are i.i.d. Poisson processes with intensity \(\varphi ^*\). Since \(Z\) is distributed as a Poisson random variable with rate \(\varepsilon ^{-2}\delta \varphi ^*\), it follows that
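For a single interval, a sketch of the standard Chernoff computation for a Poisson variable with mean \(\mu =\varepsilon ^{-2}\delta \varphi ^*\): taking \(\theta =\ln 2\),
$$\begin{aligned} P\big (Z>2\mu \big )\le e^{-2\theta \mu }\,E\big [e^{\theta Z}\big ]=e^{-\mu (2\theta -e^{\theta }+1)}=e^{-(2\ln 2-1)\varphi ^*\varepsilon ^{-2}\delta }, \end{aligned}$$
a bound of the form \(e^{-C\varepsilon ^{-2}\delta }\); the exact constant in the exponent is immaterial for what follows.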
Bounding in the same manner the number of spikes of the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process in the interval \([(k-1)\delta ,k\delta )\) and then summing over \(k\), we complete the proof. \(\square \)
From now on, we suppose that, in both processes \(\mathrm {U}^{(\varepsilon )}\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\), the spiking rate of each neuron is \(\le \varphi ^*\) and that the number of spikes of all neurons in any step \([(k-1)\delta ,k\delta ]\) is \(\le 2\varphi ^*\delta \varepsilon ^{-2}\). Moreover, writing \(B^*=C+R_0+2a^*\varphi ^*T\), we also assume that for all \(t\le T\) and \(k\delta \le T\),
where \(\bar{\mathrm {U}}^{(\varepsilon )}(t)=\big (\bar{\mathrm {U}}^{(\varepsilon )}_i(t),i\in \Lambda _{\varepsilon }\big ).\) By Assumption 2, (3.1) and Proposition 9, these assumptions hold outside an event of small probability.
In what follows, \(C\) denotes a constant which may change from one appearance to another. We proceed as follows: we first control the increments of \(\mathcal {B}_k\), next provide an upper bound for \(\theta _k\), and lastly conclude the proof.
Controlling the increments of \(\mathcal {B}_k\):
We start noticing that
where \(\mathcal G_{k-1}\) is the set of good labels at time \(k\delta \) (recall Definition 5) and
-
\(A^1_k\) is the set of all labels i for which the clocks \(\xi _i^1\) and \(\xi _i\) associated to label i ring during \( [ (k- 1) \delta , k \delta ],\)
-
\(A^2_k\) is the set of all labels i for which a clock \(\xi _i^{2}\) associated to label i rings during \( [ (k- 1) \delta , k \delta ]\).
Recall the definitions of the random clocks \(\xi ^1_i,\xi _i^2\) and \(\xi _i\) appearing in the coupling algorithm given in Sect. 5.1. Our aim is to prove that
where the constant C appearing in (8.1) and (8.2) may be different.
Then, from (8.1) and (8.2), we deduce that with probability \(\ge 1 - 2e^{ - C \epsilon ^{-2} \delta ^4 }\),
Iterating the above bound and using that \( k \le T\delta ^{-1},\) we immediately get that with probability \(\ge 1 - 2k e^{ - C \epsilon ^{-2} \delta ^4 } \ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 },\)
where C depends only on T. Since by definition \(\theta _{k} \le \theta _{k+1}\), we may bound the right-hand side of (8.3) by \( C(\theta _{k-1}+\delta ),\) implying that with probability \(\ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }, \)
for each \(k\le T\delta ^{-1}.\)
Proof of (8.1) The random variable \( |A^1_k |\) is stochastically dominated by \(Z^*:= \sum _{ i\in \Lambda _{\varepsilon }} \mathbbm {1}_{ \{ Z_i^* \ge 2 \} } ,\) where \( Z_1^*, \ldots , Z_N^* \) are independent Poisson variables of parameter \(\varphi ^*\delta \). Thus, writing \(p^* = P ( Z_i^* \ge 2 ) \), we have
Therefore, \(Z^*\) is a sum of \(\varepsilon ^{-2}\) Bernoulli random variables, each having mean value \(p^*\). Invoking Hoeffding's inequality, we get (8.1).
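To make the last step explicit (a sketch): \(p^*=1-e^{-\varphi ^*\delta }(1+\varphi ^*\delta )\le (\varphi ^*\delta )^2/2\), so Hoeffding's inequality with deviation \(t=(\varphi ^*\delta )^2\) yields
$$\begin{aligned} P\big (Z^*\ge \varepsilon ^{-2}(p^*+t)\big )\le e^{-2\varepsilon ^{-2}t^2}=e^{-2(\varphi ^*)^4\varepsilon ^{-2}\delta ^4}, \end{aligned}$$
which is of the form \(e^{-C\varepsilon ^{-2}\delta ^4}\) announced in (8.1).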
Proof of (8.2) We shall stochastically dominate the random variable \( |A^2_k \cap \mathcal G_{k-1} | \) by
where \({\bar{Z}}_i\), \(i\in \Lambda _{\varepsilon }, \) are independent Poisson variables of parameter \( C ( \theta _{k-1} + \delta +\ell ) \delta .\) Once (8.5) is established, (8.2) will follow directly.
Notice that, since
it suffices to show that the intensity of each random clock \(\xi ^2_i\), \(i\in \mathcal {G}_{k-1},\) is \(\le C ( \theta _{k-1} + \delta +\ell ) \delta .\)
To that end, we write
Now, for any \(i\in \mathcal G_{k-1}\cap C_m\), the intensity of \(\xi _i^2\) is
where \(\Vert \varphi \Vert _{Lip}\) is the Lipschitz constant of the function \(\varphi .\) Denoting by \(N_j\big ([s,t]\big )\) the number of spikes of \(U_j\) in the interval [s, t], we have
Since, for all \(i\in \Lambda _{\varepsilon }\), \(y_i,\bar{u}_i(s)\le B^*\) and \(\sum _{j\in \Lambda _{\varepsilon }}a(j,i)N_j\big ([(k-1)\delta ,(k-1)\delta +t]\big )\le 2(a\varphi )^*\varepsilon ^{-2}\delta ,\) if additionally \(i\in \mathcal {G}_{k-1}\), it follows that
and thus
which implies that
where the \(\bar{Z}_i\) are independent Poisson random variables of intensity \(C(\theta _{k-1} +\delta +\ell )\delta \).
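From this domination, (8.2) follows by the same route as (8.1): writing \(\bar{p}=P(\bar{Z}_i\ge 1)\le C(\theta _{k-1}+\delta +\ell )\delta \), Hoeffding's inequality applied to the \(\varepsilon ^{-2}\) indicators \(\mathbbm {1}_{\{\bar{Z}_i\ge 1\}}\) gives (a sketch), for any deviation \(t>0\),
$$\begin{aligned} P\Big (\sum _{i\in \Lambda _{\varepsilon }}\mathbbm {1}_{\{\bar{Z}_i\ge 1\}}\ge \varepsilon ^{-2}(\bar{p}+t)\Big )\le e^{-2\varepsilon ^{-2}t^2}. \end{aligned}$$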
Estimates on \(\theta _k\):
Notice that \(\mathcal{G}_{k}=\mathcal{G}_{k- 1} \cap (C_k \cup F_k )\) where:
-
(i)
\(C_k\) is the set of all indexes i whose associated random clock \(\xi _i^{1}\) rings only once during \( [ (k- 1) \delta , k \delta ]\).
-
(ii)
\(F_k\) is the set of indexes i which did not spike during \( [ (k- 1) \delta , k \delta ] .\)
In what follows, we will make use of the expression for the membrane potential \(\mathrm {U}^{(\varepsilon )}_i(t)\) of a neuron which did not spike in the interval [s, t]:
where \(N_j(t)\) is the total number of spikes of neuron \(j\) in the process \(\mathrm {U}\) up to time \(t\).
-
Take \(i\in C_k \cap \mathcal{G}_{k- 1}\). In this case, the random clock \(\xi _i^{1}\) rings at some time \( t \in [(k-1)\delta ,k\delta )\). By (8.6),
$$\begin{aligned} \mathrm {U}^{(\varepsilon )}_i(k \delta ) = \lambda _i\int _t^{ k\delta } e^{ - (\alpha +\lambda _i) ( k\delta - s)} {\bar{\mathrm {U}}}^{(\varepsilon )}_i (s) ds + \varepsilon ^2\sum _{j\in \Lambda _{\varepsilon }}a(j,i) \int _t^{k\delta } e^{-(\alpha +\lambda _i )(k\delta -s)} d N_j(s) , \end{aligned}$$since \(U^{(\varepsilon )}_i(t_+)=0\). Noticing also that \(||\bar{\mathrm {U}}^{(\varepsilon )}(t)|| \le B^*\) and \(N\big ([(k-1)\delta ,k\delta )\big ) \le 2{\varphi }^*\delta \varepsilon ^{-2}\), we immediately see that \(U^{(\varepsilon )}_i (k \delta ) \le C \delta \). By similar arguments, \( Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}_i (k \delta ) \le C \delta ,\) so that
$$\begin{aligned} D_i (k ) \le C \delta . \end{aligned}$$(8.7)
Observe that the value \(D_i(k-1)\) does not appear in the bound above. We shall now analyse the other case.
-
Fix \(i\in F_k \cap \mathcal{G}_{k- 1}\). The neuron i is good at time \( (k-1) \delta \) and spiked neither in the \(U^{(\varepsilon )}\) nor in the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process during the time interval \([(k-1)\delta , k\delta )\). As before, we write \( \mathrm {U}^{(\varepsilon )}((k-1) \delta )= u \) and \( Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}((k-1) \delta )= y.\) By (8.6) and (5.4), the variable \(|\mathrm {U}_i ( k \delta ) - Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}_i ( k \delta ) | = D_i(k)\), \(i\in C_m\), is bounded by
$$\begin{aligned}&D_i (k) \le \left| e^{-\delta (\alpha +\lambda _i)}u_i - e^{-\delta (\alpha +\lambda _m)}y_i \right| \nonumber \\&\quad +\left| \int _{(k-1)\delta }^{k\delta } \lambda _i e^{- (\alpha +\lambda _i) (k\delta -t )}{\bar{\mathrm {U}}}^{(\varepsilon )}_i (t) dt - \lambda _{m}\int _{(k-1)\delta }^{k\delta } {\bar{y}}(m) e^{- (\alpha +\lambda _{m}) (k\delta -t )} dt \right| \nonumber \\&\quad + \left| \varepsilon ^2\sum _{j\in \Lambda _{\varepsilon }}a(j,i)\int _{(k-1)\delta }^{k\delta } e^{-(\alpha +\lambda _j)(k\delta -t)} dN_j(t) - \varepsilon ^2\sum _{m'}a(i_{m'},i_m)\tilde{N}\big ([(k-1)\delta , k\delta )\big ) \right| ,\nonumber \\ \end{aligned}$$(8.8)
where \(\tilde{N}\big ([(k-1)\delta ,k\delta )\big )\) denotes the number of spikes of the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process in the interval \([(k-1)\delta ,k\delta ).\) Thus, it suffices to bound each term on the right-hand side of (8.8).
We start by bounding the first one:
Since \(|\lambda _i-\lambda _{m}|\le ||\lambda ||_\mathrm{Lip}\ell \) and supposing \(\ell \le \delta \), we can bound the last sum by \(C\delta ^2+\theta _{k-1}.\)
We now bound the second term on the right-hand side of (8.8). It is easy to see that it is bounded by
To control the second and third terms, we notice that for any \(i\in \Lambda _{\varepsilon }\), \(|{\bar{U}}_i(t)-\bar{u}_i|\le C\delta \) and \(|\bar{U}_i(t)-{\bar{y}}_i|\le C\delta \). In addition, for any \(i\in C_m\), \(m=1,\ldots , \ell ^{-2}\), \(|{\bar{U}}_i(t)-{\bar{u}}_{i_m}|\le C\ell \). Requiring that \(\ell \le \delta \), from these three inequalities we can bound the sum above by \(C\delta (\delta +\theta _{k-1}).\)
The argument to bound the third term in (8.8) is a bit trickier. We first bound that term by
where \(N_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )=\sum _{j\in C_{m'}}N_j\big ([(k-1)\delta ,k\delta )\big )\) is the total number of spikes in the \(\mathrm {U}^{(\varepsilon )}\) process inside the square \(C_{m'}\) during the time interval \([(k-1)\delta ,k\delta )\), and \(\tilde{N}_{C_{m'}}\big ([(k-1)\delta ,k\delta )\big )\) is the corresponding quantity associated with the \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}\) process.
The first two terms above are easily bounded; one can check that their sum is at most \(C\delta ^2\). To control the third term, we shall show that
Indeed, this difference is smaller than or equal to
so that it suffices to control these two terms. We start with the second one. We know that with probability \(\ge 1-e^{ - C \epsilon ^{-2} \delta ^4 }\),
where we used (8.2) and the fact that the number of neurons in \(\mathcal {B}_{k-1}\cap C_{m'}\) which spiked within a time \(\delta \) is dominated by a Poisson random variable of rate \(\varphi ^*\delta |\mathcal {B}_{k-1}\cap C_{m'}|.\) Thus, it remains only to bound the first term in (8.9).
In order to do that, we start by noticing that
The second term is controlled by the estimate in (8.1). Let \(A \subset C_{m'} \) with \(|A| \le (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 ;\) then
where \(P^*\) denotes the distribution of independent Poisson random variables \(N^*_j\), \(j\in A\), each having parameter \(\varphi ^*\delta \) and conditioned on \(N^*_j\ge 2\). In this way, we easily get that
Now let \(X_1,X_2, \ldots \) be a sequence of independent Poisson variables with parameter \(\xi \). It follows that \(N^*_j-2 \le X_j\) stochastically for \(\xi \) small enough, hence for \(\delta \) small enough. Indeed, for any integer \(k\) we have
because for \(k\ge 1\),
hence (8.10) holds when \(3e^{-\xi } \ge 2\).
Since \(X=\sum _{j\in A} X_j\) is a Poisson variable of parameter \(|A| \xi \le (\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2 \varphi ^*\delta \), we have
where the expectation \(E^* (X) \) of X is smaller (for \(\delta \) small) than \((\varphi ^*\delta )^2 \varepsilon ^{-2}\ell ^2\). As a consequence,
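The step behind this is the standard Poisson tail estimate; a sketch, with \(\mu =E^*(X)\) and \(\theta =2\): for any \(x\ge e^2\mu \),
$$\begin{aligned} P\big (X\ge x\big )\le e^{-\theta x}\,E^*\big [e^{\theta X}\big ]=e^{-2x+\mu (e^2-1)}\le e^{-x}. \end{aligned}$$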
To sum up, we have for \(i\in F_k \cap \mathcal{G}_{k- 1}\) with probability \(\ge 1- e^{-C\epsilon ^{-2} \delta ^{2}\ell ^2}\),
The above inequality, together with (8.7), guarantees that with probability \(\ge 1- e^{-C\epsilon ^{-2} \delta ^{2}\ell ^2},\)
Iteration on the bound of \(\theta _k\):
As a consequence of (8.4), \( \varepsilon ^{2}|\mathcal {B}_k| \le C (\theta _{k-1} + \delta )\) for all \(k\delta \le T\) with probability \( 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }\). As a by-product of (8.11), with the same probability, it follows that
As a direct consequence (iterating the above inequality), it holds that
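A sketch of the iteration, assuming the recursion above takes the form \(\theta _k\le (1+C\delta )\theta _{k-1}+C\delta ^2\):
$$\begin{aligned} \theta _k\le (1+C\delta )^k\,\theta _0+C\delta ^2\sum _{j=0}^{k-1}(1+C\delta )^j\le e^{Ck\delta }\,\theta _0+\delta \big (e^{Ck\delta }-1\big ). \end{aligned}$$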
and since \((1+C\delta )^{k}\le e^{Ck\delta }\le e^{CT}\) (remember that \( k \delta \le T\)), we conclude that
for all \( \delta \le \delta _0, \) with probability \(\ge 1 - \delta ^{-1}C e^{ - C \epsilon ^{-2} \delta ^4 }\). This finishes the proof of Theorem 4.
Appendix 2: Proof of Proposition 3
Proof
Fix \(\phi \in \mathcal {S}\). By (A), the left-hand side of (5.7) does not change if we consider \(U^*(t)=\min \{\mathrm {U}^{(\varepsilon )}(t),B^{*}\}\) and \(Y^*(t)=\min \{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(t),B^*\}\) instead of \(\mathrm {U}^{(\varepsilon )}(t)\) and \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(t)\). Now, by the smoothness of the function \(\phi \),
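The estimate used here is of mean-value type; a sketch, written coordinatewise for the empirical pairings (suppressing the spatial argument of \(\phi \)):
$$\begin{aligned} \varepsilon ^2\Big |\sum _{i\in \Lambda _{\varepsilon }}\big (\phi (U^*_i(t))-\phi (Y^*_i(t))\big )\Big |\le \Vert \phi '\Vert _{\infty }\,\varepsilon ^2\sum _{i\in \Lambda _{\varepsilon }}\big |U^*_i(t)-Y^*_i(t)\big |. \end{aligned}$$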
Applying Theorem 4 and using that \(|U^*(t)-Y^*(t)|\le B^*\), we get the desired upper bound in (5.7). \(\square \)
Appendix 3: Proof of Theorem 5
Proof
Let \(\mathcal {F}_n\) be the sigma-algebra generated by the variables \(\xi _i(k)\), \(k\le n-1\), \(i\in \Lambda _{\varepsilon }\), appearing in (5.2). Observe that the variables \(Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(n\delta )\), \(e^{(\varepsilon )}_{n}(m)\), \(S^{(\varepsilon )}_{n+1}(m,h)\) and \(S^{(\varepsilon )}_{n+1}(m)\) are all \(\mathcal {F}_{n}\)-measurable. In what follows, the constants \(C,c_1\) and \(c_2\) may change from one appearance to another.
The proof proceeds by induction. For \(n=0\), the claim is easy to check. Indeed, notice that in this case \(E^{(\varepsilon )}_{0,k}=D^{(\varepsilon )}_{0,k}\). Moreover, notice also that
and that \(\eta _{0,m}(E^{(\varepsilon )}_{0,k})\) is a sum of \(\ell ^{2}\varepsilon ^{-2}\) independent Bernoulli random variables \(X_i\), \(i\in C_m\), where the expected value of \(X_i\) is \(\int _{I_k}\psi _0(u,i)du.\) By Hoeffding's inequality, we deduce that
$$\begin{aligned} \varepsilon ^2\,\Big |\eta _{0,m}\big (E^{(\varepsilon )}_{0,k}\big ) - \zeta _{0,m}\big (D^{(\varepsilon )}_{0,k}\big )\Big | > E\ell ^2\varepsilon ^{1/2} \end{aligned}$$
with probability \(\le 2e^{-c_2\varepsilon ^{-1}}\), where \(c_2=2E^2\ell ^2.\)
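For the record, the constant \(c_2\) comes out of Hoeffding's bound as follows: with \(S=\eta _{0,m}(E^{(\varepsilon )}_{0,k})\) a sum of \(\ell ^2\varepsilon ^{-2}\) Bernoulli variables and deviation \(s=E\ell ^2\varepsilon ^{-3/2}\),
$$\begin{aligned} P\big (\varepsilon ^2\,|S-E[S]|>E\ell ^2\varepsilon ^{1/2}\big )=P\big (|S-E[S]|>s\big )\le 2\exp \Big (-\frac{2s^2}{\ell ^2\varepsilon ^{-2}}\Big )=2e^{-2E^2\ell ^2\varepsilon ^{-1}}. \end{aligned}$$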
Therefore, for \(n=0\), the inequality above holds for all \(k\) and \(m\) with probability larger than or equal to \(1-c_1e^{-c_2\varepsilon ^{-1}}\), establishing the Theorem in the case \(n=0\). We now suppose that the result holds for \(k\le n\). Introduce the set \(G_n\) on which:
-
\(\big | E^{(\varepsilon )}_{n,k}-D^{(\varepsilon )}_{n,k} \big | \le C\varepsilon ^{1/2},\) \(k=1,\ldots , |\mathcal {E}^{(\varepsilon )}_{n}|\)
-
\(\varepsilon ^2 \Big |\eta _{n,m}\Big (E^{(\varepsilon )}_{n,k+\delta \tau ^{-1}}\Big ) -\zeta _{n,m}\Big (D^{(\varepsilon )}_{n,k+\delta \tau ^{-1}}\Big )\Big | \le E\ell ^2\varepsilon ^{1/2}, \) \(k=1,\ldots , |\mathcal {E}^{(\varepsilon )}_{n}|,\) and
-
\(\varepsilon ^2 \Big |\eta _{n,m}\Big (E^{(\varepsilon )}_{n,h}\Big ) -\zeta _{n,m}\Big (D^{(\varepsilon )}_{n,h}\Big )\Big | \le \tau \ell ^2\varepsilon ^{1/2}, \ h=1,\ldots , \delta \tau ^{-1}.\)
By the inductive hypothesis, \(\tilde{P}^{(\varepsilon )}_{Y^{(\varepsilon ,\delta ,\ell ,E,\tau )}(0)}(G_n)\ge 1-c_1e^{-c_2\varepsilon ^{-1/2}}.\)
Since
we have that on \(G_n,\)
We shall show that there exist positive constants \(c,c_1\) and \(c_2\) not depending on \(\varepsilon \) such that
with probability \(\ge 1-c_1e^{-c_2\varepsilon ^{-1}}\). To that end, we first write
and then by the conditional version of Hoeffding’s inequality we deduce that
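In the spirit of the \(n=0\) step: conditionally on \(\mathcal {F}_n\), the count \(N_{n+1}(m,k,\delta )\) is a sum of at most \(\ell ^2\varepsilon ^{-2}\) Bernoulli variables which, as the construction through the independent variables \(\xi _i(k)\) suggests, are independent given \(\mathcal {F}_n\); a sketch of the resulting bound, with deviation \(s=\ell ^2\varepsilon ^{-3/2}\):
$$\begin{aligned} P\Big (\big |N_{n+1}(m,k,\delta )-E\big [N_{n+1}(m,k,\delta )\,\big |\,\mathcal {F}_n\big ]\big |>s\ \Big |\ \mathcal {F}_n\Big )\le 2\exp \Big (-\frac{2s^2}{\ell ^2\varepsilon ^{-2}}\Big )=2e^{-2\ell ^2\varepsilon ^{-1}}. \end{aligned}$$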
Since on \(G_n\)
noticing that \(N_{n+1}(m,\delta )=\sum _{k}N_{n+1}(m,k,\delta )\) and \(\varepsilon ^2\zeta (D^{(\varepsilon )}_{n,k})\le 1,\) it then follows, together with (10.2), that there exist constants \(C,c_1\) and \(c_2\) such that
proving (10.1). Therefore,
A similar argument shows that, in the probability above, we may replace \(E^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}\) and \(D^{(\varepsilon )}_{n+1,k+\delta \tau ^{-1}}\) by \(E^{(\varepsilon )}_{n+1,h}\) and \(D^{(\varepsilon )}_{n+1,h}\), respectively. Thus, summing over all \(k,h\) and \(m\), we prove the first part of Theorem 5 for \(n+1.\)
Now, noticing that \(\eta _{n+1}(m,k+\delta \tau ^{-1})=\eta _{n}(m,k)-N_{n+1}(m,k,\delta )\) and remembering that, by (6.6), \(\zeta _{n+1}(m,k+\delta \tau ^{-1})=\zeta _{n+1}(m,k)e^{-\delta \varphi \big (D^{(\varepsilon )}_{n,k},i_m\big )},\) we easily see, together with (10.2), that
for some suitable constants not depending on \(\varepsilon \). A similar argument shows that the same type of bound also holds for \(\varepsilon ^2 \big |\eta ^{(\varepsilon )}_{n+1}(m,h) -\zeta ^{(\varepsilon )}_{n+1}(m,h)\big |\), finishing the proof of the theorem. \(\square \)
Appendix 4: Proof of Theorem 2 for General Firing Rates
The proof is analogous to the one presented in Appendix 4 of [3]. For the sake of completeness, we give it here.
Let \(\varphi , R, T\) and \(C\) be as in the statement of Theorem 1, and let \(\phi \) be any bounded continuous function on \(D\big ([0,T], \mathcal {S}' \big ).\) We have to show that
Let A be the set \(A=\{||U^{(\varepsilon )}(t)||\le C,t\in [0,T]\}.\) Theorem 1 implies that
Now, let \(\mathcal {P}^{(*,\varepsilon )}_{[0,T]}\) be the distribution of the process with a spiking rate \(\varphi ^*(\cdot ,\cdot )\) which fulfils Assumption 3 and equals \(\varphi \) for \(u\le C\). By definition, it follows that
Having proved Theorem 2 under Assumption 3, we get the desired convergence to a limit density \(\rho ^*=(\rho ^*_t\,du\,dr)_{t\in [0,T]}\) for the process whose spiking rate is \(\varphi ^*\). It then follows from (11.1) and (11.2) that
We claim that \(\rho ^*=\rho ^*1_A\). Indeed, considering \(\phi (w)=\sup \{w_t(1),t\le T\} \wedge 1,\) we immediately see that \(1=\lim _{\varepsilon \rightarrow 0 } \mathcal {P}^{(\varepsilon )}_{[0,T]}(\phi )=\phi (\rho ^*1_A).\) This last equality implies that \(\rho ^*\) has support in [0, C]. As a consequence,
which concludes the proof of the Theorem.
Cite this article
Duarte, A., Ost, G. & Rodríguez, A.A. Hydrodynamic Limit for Spatially Structured Interacting Neurons. J Stat Phys 161, 1163–1202 (2015). https://doi.org/10.1007/s10955-015-1366-y
Keywords
- Hydrodynamic limit
- Piecewise deterministic Markov process
- Neuronal systems
- Interacting particle systems