Abstract
In this work we are interested in a mathematical model of the collective behavior of a fully connected network of finitely many neurons, as their number and time go to infinity. We assume that every neuron follows a stochastic version of the Hodgkin–Huxley model, and that pairs of neurons interact through both electrical and chemical synapses, the global connectivity being of mean-field type. When the leak conductance is strictly positive, we prove that if the initial voltages are uniformly bounded and the electrical interaction between neurons is strong enough, then, uniformly in the number of neurons, the whole system synchronizes exponentially fast as time goes to infinity, up to some error controlled by (and vanishing with) the channel noise level. Moreover, we prove that if the random initial condition is exchangeable, the propagation of chaos property holds for this system on every bounded time interval (regardless of the interaction intensities). Combining these results, we deduce that the nonlinear McKean–Vlasov equation describing an infinite network of such neurons concentrates, as time goes to infinity, around the dynamics of a single Hodgkin–Huxley neuron with chemical neurotransmitter channels. Our results are illustrated and complemented with numerical simulations.
References
Ambrosio L, Gigli N, Savaré G (2008) Gradient flows: in metric spaces and in the space of probability measures. Springer, Berlin
Austin TD (2008) The emergence of the deterministic Hodgkin–Huxley equations as a limit from the underlying stochastic ion-channel mechanism. Ann Appl Probab 18(4):1279–1325
Axmacher N, Mormann F, Fernández G, Elger CE, Fell J (2006) Memory formation by neuronal synchronization. Brain Res Rev 52(1):170–182
Baladron J, Fasoli D, Faugeras O, Touboul J (2012) Mean-field description and propagation of chaos in networks of Hodgkin–Huxley and FitzHugh–Nagumo neurons. J Math Neurosci 2(1):10
Berglund N, Gentz B (2004) On the noise-induced passage through an unstable periodic orbit i: two-level model. J Stat Phys 114(5–6):1577–1618
Berglund N, Gentz B (2014) On the noise-induced passage through an unstable periodic orbit ii: general case. SIAM J Math Anal 46(1):310–352
Bertini L, Giacomin G, Pakdaman K (2010) Dynamical aspects of mean field plane rotators and the Kuramoto model. J Stat Phys 138(1):270–290
Bertini L, Giacomin G, Poquet C (2014) Synchronization and random long time dynamics for mean-field plane rotators. Probab Theory Relat Fields 160(3–4):593–653
Bossy M, Faugeras O, Talay D (2015) Clarification and complement to “mean-field description and propagation of chaos in networks of Hodgkin–Huxley and Fitzhugh–Nagumo neurons”. J Math Neurosci 5(1):1–23
Bossy M, Espina J, Morice J, Paris C, Rousseau A (2016) Modeling the wind circulation around mills with a Lagrangian stochastic approach. SMAI J Comput Math 2:177–214
Bressloff PC, Lai YM (2011) Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise. J Math Neurosci 1(1):2
Burkitt AN (2006a) A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern 95(1):1–19
Burkitt AN (2006b) A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol Cybern 95(2):97–112
Chan T, Golub G, LeVeque R (1983) Algorithms for computing the sample variance: analysis and recommendations. Am Stat 37(3):242–247
Dangerfield CE, Kay D, Burrage K (2012) Modeling ion channel dynamics through reflected stochastic differential equations. Phys Rev E 85:051907
Delarue F, Inglis J, Rubenthaler S, Tanré E (2015) Global solvability of a networked integrate-and-fire model of McKean–Vlasov type. Ann Appl Probab 25(4):2096–2133
Ermentrout GB, Terman DH (2010) Mathematical foundations of neuroscience. Springer, New York
Faugeras O, Touboul J, Cessac B (2009) A constructive mean-field analysis of multi population neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci 3:1
FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve membrane. Biophys J 1(6):445–466
Fournier N, Guillin A (2015) On the rate of convergence in Wasserstein distance of the empirical measure. Probab Theory Relat Fields 162(3–4):707–738
Fournier N, Löcherbach E (2016) On a toy model of interacting neurons. Ann Inst Henri Poincaré Probab Stat 52(4):1844–1876
Friedman A (2006) Stochastic differential equations and applications. Dover Publications Inc., Mineola (Two volumes bound as one, Reprint of the 1975 and 1976 original published in two volumes)
Gärtner J (1988) On the McKean–Vlasov limit for interacting diffusions. Math Nachr 137:197–248
Giacomin G, Luçon E, Poquet C (2014) Coherence stability and effect of random natural frequencies in populations of coupled oscillators. J Dyn Differ Equ 26(2):333–367
Goldwyn J, Shea-Brown E (2011) The what and where of adding channel noise to the Hodgkin–Huxley equations. PLoS Comput Biol 7(11):e1002247
Goldwyn J, Imennov NS, Famulare M, Shea-Brown E (2011) Stochastic differential equation models for ion channel noise in Hodgkin–Huxley neurons. Phys Rev E 83(4):041908
Hansel D, Mato G (1993) Patterns of synchrony in a heterogeneous Hodgkin–Huxley neural network with weak coupling. Phys A Stat Mech Appl 200(1–4):662–669
Hansel D, Mato G, Meunier C (1993) Phase dynamics for weakly coupled Hodgkin–Huxley neurons. EPL 23(5):367
Hodgkin A, Huxley A (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544
Hormuzdi SG, Filippov MA, Mitropoulou G, Monyer H, Bruzzone R (2004) Electrical synapses: a dynamic signaling system that shapes the activity of neuronal networks. BBA Biomembr 1662(1–2):113–137
Izhikevich EM (2007) Dynamical systems in neuroscience. The MIT Press, Cambridge
Jiruska P, de Curtis M, Jefferys JGR, Schevon CA, Schiff SJ, Schindler K (2013) Synchronization and desynchronization in epilepsy: controversies and hypotheses. J Physiol 591(4):787–797
Karatzas I, Shreve S (1991) Brownian motion and stochastic calculus. Graduate texts in mathematics, 2nd edn. Springer, New York
Kopell N, Ermentrout B (2004) Chemical and electrical synapses perform complementary roles in the synchronization of interneuronal networks. Proc Natl Acad Sci 101(43):15482–15487
Kuramoto Y (1984) Chemical oscillations, waves, and turbulence. Springer, Berlin
Lapicque L (1907) Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarization. J Physiol Pathol Gen (Paris) 9:620–635
Luçon E, Poquet C (2017) Long time dynamics and disorder-induced traveling waves in the stochastic Kuramoto model. Ann Inst Henri Poincaré Probab Stat 53(3):1196–1240
Marella S, Ermentrout GB (2008) Class-II neurons display a higher degree of stochastic synchronization than class-I neurons. Phys Rev E 77(4):041918
Méléard S (1996) Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In: Probabilistic models for nonlinear partial differential equations. Springer, pp 42–95
Mischler S, Quiñinao C, Touboul J (2016) On a kinetic Fitzhugh–Nagumo model of neuronal network. Commun Math Phys 342(3):1001–1042
Morris C, Lecar H (1981) Voltage oscillations in the barnacle giant muscle fiber. Biophys J 31(1):193–213
Nagumo J, Arimoto S, Yoshizawa S (1962) An active pulse transmission line simulating nerve axon. Proc IRE 50:2061–2070
Ostojic S, Brunel N, Hakim V (2008) Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneities. J Comput Neurosci 26(3):369
Pakdaman K, Thieullen M, Wainrib G (2010) Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab 42(3):761–794
Perthame B, Salort D (2013) On a voltage-conductance kinetic system for integrate and fire neural networks. Kinet Relat Models 6(4):841–864
Pikovskii AS (1984) Synchronization and stochastization of nonlinear oscillations by external noise. In: Nonlinear and turbulent processes in physics, vol 1, p 1601
Pikovsky A, Rosenblum M, Kurths J (2003) Synchronization: a universal concept in nonlinear sciences, vol 12. Cambridge University Press, Cambridge
Sacerdote L, Giraudo M (2013) Stochastic integrate and fire models: a review on mathematical methods and their applications. Springer, Berlin, pp 99–148
Sznitman A-S (1991) Topics in propagation of chaos. In: Ecole d’été de probabilités de Saint-Flour XIX—1989. Springer, pp 165–251
Villani C (2009) Optimal transport, old and new, vol 338. Grundlehren der Mathematischen Wissenschaften [Fundamental principles of mathematical sciences]. Springer, Berlin
Wainrib G (2010) Randomness in neurons: a multiscale probabilistic analysis. PhD thesis, École Polytechnique
Joaquín Fontbona: Supported by CMM-Basal Conicyt AFB 170001, Nucleo Milenio NC 130062 and Fondecyt Grant 1150570.
Héctor Olivero: Partially supported by Nucleo Milenio NC 130062, Beca Chile Postdoctorado and Fondecyt Postdoctoral Grant 3180777.
This research was partially supported by the supercomputing infrastructure of NLHPC (ECM-02), Conicyt.
Appendices
Basic properties of the model (2.1)
We start by establishing three basic facts about the system of stochastic differential equations (2.1): its (strong) global well-posedness, the fact that the open channels proportion processes stay (as required) in [0, 1] and, finally, an explicit global bound for the voltage processes in terms of a bound on the initial values.
Lemma A.1
Assume Hypothesis 2.1. Then, strong existence and pathwise uniqueness holds for system (2.1). Moreover, a.s. for all \(t\ge 0\) and every \(i=1,\ldots ,N\) we have \(( m_t^{(i)},n_t^{(i)},h_t^{(i)},y_t^{(i)})\in [0,1]^4\). In particular, the absolute value in (2.2) can be removed.
Proof
It is enough to prove the result for deterministic initial data so we assume this is the case. Take \(M>0\) fixed, and for \(j=1,3, 4\) define truncation functions \(p^j_M\) on \(\mathbb {R}\) by
Let \(X^{(M)}:=(X^{(1,M)},\ldots , X^{(N,M)}) \) with \(X_t^{(i,M)}=(V_t^{(i,M)}, m_t^{(i,M)}, n_t^{(i,M)}, h_t^{(i,M)},y_t^{(i,M)})\), \(i=1,\ldots ,N\) be defined by
where
It is immediate that the drift coefficients in system (A.1) are Lipschitz continuous. This is less clear for the diffusion coefficients, so we check this point next. Notice that
whereas, thanks to point 2) in Hypothesis 2.1,
Therefore, one can find a bounded Lipschitz continuous function \(g_x: \mathbb {R}\rightarrow \mathbb {R}_+\) such that \(g_x(s)=\sqrt{s}\) on \((\delta _M/2,2S_M)\) and rewrite the diffusion coefficients in (A.1) as
It is then easily seen that \(|\sigma _x(p^1_M(v),u)-\sigma _x(p^1_M(v'),u')|\le C_M (|u-u'|+|v-v'|) \) for some \(C_M>0\) in each of the three cases \((u,u')\in [0,1]^2\), \((u,u')\in ( [0,1]^2)^c\) and \((u,u')\in [0,1]\times [0,1]^c\) for any \(v,v'\in \mathbb {R}\). Thus, global pathwise well-posedness for system (A.1) holds.
Thanks to the second assumption in point 4) of Hypothesis 2.1 and the fact that \(\sigma _x(v,u)=0\) for \((v,u)\in \mathbb {R}\times (0,1)^c\) and \([\rho _x(v)(1-u)- \zeta _x(v)u][\mathbf {1}_{(-\infty ,0]}(u)- \mathbf {1}_{[1,+\infty )}(u)]\ge 0\) for \((v,u)\in \mathbb {R}^2\), we can apply Proposition 3.3 in Bossy et al. (2015) to get that \(x^{(1,M)},\ldots , x^{(N,M)}\) are confined in [0, 1] for all time (notice that the proof of that result still works if Hypothesis 2.1 i) therein, that \(\chi \) be compactly supported in (0, 1), is replaced by \(\chi \) being supported in [0, 1]).
We can now use standard arguments to deduce global existence and pathwise uniqueness of a solution to system (2.1). Indeed, setting \(\theta _M = \inf \{t\ge 0: |X_t^{(M)}|\ge M\}\), using the global Lipschitz character of its coefficients together with Itô calculus and Gronwall’s lemma we get for every \(M'>M\) that a.s. for all \(t\ge 0\), \(X^{(M)}_{t\wedge \theta _M} = X_{t\wedge \theta _M}^{(M')}\). This implies that \(\theta _{M'}>\theta _M\) a.s. and allows us to unambiguously define a process X solving (2.1) on the random interval \([0,\theta )\), with \(\theta := \sup _{M>0}\theta _M\), by \(X_t= X^{(M)}_t\) for all \(t\in [0,\theta _M]\). On the other hand, since \(|p_M^1(z)|\le |z|\) for all \(z\in \mathbb {R}\), for two constants \(C_1,C_2>0\) not depending on \(M>0\) we have \(|F_M(v,m,n,h)|\le C_1+ C_2 |v|\) for every \((v,m,n,h)\in \mathbb {R}\times [0,1]^3\). Using this control on the right hand side of the equations for \(V^{(1,M)},\ldots , V^{(N,M)}\) in (A.1) and Gronwall’s lemma we get
for some constant \(C(t)>0\) not depending on M. This yields \(M\mathbb {P}\left[ \theta _M<t \right] \le C(t) \), whence \(\mathbb {P}\left[ \theta <\infty \right] =0\) letting M and then \(t \nearrow \infty \). The statement follows. \(\square \)
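As an aside, the boundary mechanism used above (a diffusion coefficient vanishing outside (0, 1) together with an inward-pointing drift at the boundary) can be visualized with a short simulation. The following Python sketch is purely illustrative: it uses the hypothetical coefficient \(\sigma (x)=s\sqrt{x(1-x)}\) and constant rates, not the actual voltage-dependent \(\rho _x,\zeta _x,\sigma _x\) of Hypothesis 2.1, and it clips the small overshoots that the Euler step (unlike the continuous process) can produce.

```python
import random, math

# Illustrative sketch only: a scalar channel-type SDE confined to [0, 1],
#   dx = [rho*(1-x) - zeta*x] dt + sigma(x) dW,
# with sigma(x) = s*sqrt(x*(1-x)) (an assumption made here for simplicity;
# the paper's sigma_x also depends on the voltage). The inward drift at the
# boundary plus the vanishing noise keep the proportion in [0, 1]; the Euler
# discretization below clips the tiny overshoots a finite time step can create.
def simulate_channel(x0=0.5, rho=1.2, zeta=0.8, s=0.3, dt=1e-3, n_steps=20000, seed=1):
    random.seed(seed)
    x = x0
    for _ in range(n_steps):
        drift = rho * (1.0 - x) - zeta * x
        noise = s * math.sqrt(max(x * (1.0 - x), 0.0)) * random.gauss(0.0, math.sqrt(dt))
        x = min(1.0, max(0.0, x + drift * dt + noise))  # clip Euler overshoot
    return x

x_final = simulate_channel()
assert 0.0 <= x_final <= 1.0
```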
Remark A.2
(i) The arguments given in the previous proof also show that each of the functions \(\sigma _x\) is locally Lipschitz on \(\mathbb {R}\times [0,1]\).
(ii) The same proof also works for some extensions of our model, for instance when independent Brownian motions are added to each of the voltage processes.
We next show that under the additional Hypothesis 2.2, each of the voltage processes is bounded uniformly in time and in N. Below and in the sequel we denote
We also set
Proposition A.3
Under Hypothesis 2.2, for every \(N\ge 1\) and \(t\ge 0\) we have a.s.
and
As a consequence for every \(N\ge 1\), there exists at least one invariant law \(\mu ^N_{\infty }\) for the solution to (2.1), namely there exists a solution \((X_t, t\ge 0 )\) to (2.1) such that \(X_t\) has law \(\mu ^N_{\infty }\) for all \(t\ge 0\) as soon as \(X_0\) has law \(\mu ^N_{\infty }\). Moreover, this invariant measure is exchangeable.
Remark A.4
(i) The bound \(V_t^* \) on \( V^\text {max}_{t,\infty }\) is in general not optimal. For instance, if \(V^\text {max}_0< \frac{ 2 R_\text {max}}{g_L}\), one can choose \(V^\text {max}_0> \frac{ 2 R_\text {max}}{g_L + J_{\text {E}}}\) and get from the last identity in (A.5) that \( V^\text {max}_{0,\infty }\le \frac{4 R_\text {max}}{g_L}<V_0^*\). However, in order to state a synchronization result that holds for a general class of initial conditions \(V_0\), the facts that the bound \(V^*_t\) does not depend on the electrical connectivity \(J_{\text {E}}\) and that \(V^*_{\infty }:= \lim _{t \rightarrow \infty } V^*_t= \frac{4 R_\text {max}}{g_L}\) does not depend on the initial condition will be crucial. See point i) in Remark B.4 for a related discussion.
(ii) If point 2) of Hypothesis 2.2 does not hold, by slightly modifying the arguments of Proposition A.3 we can still get the a.s. bound
$$\begin{aligned} \left| V_t^{(i)}\right| \le \frac{4R_\text {max}}{g_\text {L}}+2 \frac{ | V_0|}{\sqrt{N}} e^{-g_\text {L}t} \, \end{aligned}$$

implying a uniform in N bound for \(\mathbb {E}( V^\text {max}_{t,\infty })\) if, for instance, all the random variables \(V_0^{i,N}\), \(i=1,\ldots ,N\), \(N\ge 1\) are equal in law and have finite second moment. However, we have not been able to fully extend our results to such a framework.
(iii) The same arguments also show that a bound like (A.3) holds with \(V^\text {max}_{t,\infty }\) replaced by
$$\begin{aligned} \widehat{V^\text {max}}_{t,\infty }:= \max _{i=1,\ldots ,N} \sup _{s\in [t,\infty )} |\widehat{ V}_s^{(i)}|. \end{aligned}$$

That is, the voltages obtained with the EPE scheme are also uniformly bounded.
In the proof of Proposition A.3 and later, we will make use of the following version of Gronwall’s lemma [see e.g. Ambrosio et al. (2008, p. 88)].
Lemma A.5
Let \(\theta :[0,+\infty )\rightarrow \mathbb {R}\) be a locally absolutely continuous function and \(a, b \in L_{\text {loc}}^1([0,+\infty ))\) be given functions satisfying, for \(\lambda \in \mathbb {R}\),
Then for every \(T>0\) we have
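Throughout, this lemma is applied to differential controls of the type \(\theta '(t)\le -\lambda \theta (t)+b\) with \(\lambda >0\), \(b\ge 0\), which yield the exponential bound \(\theta (t)\le \theta (0)e^{-\lambda t}+\tfrac{b}{\lambda }(1-e^{-\lambda t})\). The following Python sketch (an illustration only, with hypothetical constants, not part of the proof) checks this bound numerically in the extremal case of equality.

```python
import math

# Hedged numerical illustration of a Gronwall-type bound: for
#   theta' <= -lam*theta + b   with constants lam > 0, b >= 0,
# one gets theta(t) <= theta(0)*exp(-lam*t) + (b/lam)*(1 - exp(-lam*t)).
def euler_ode(theta0, lam, b, t_end, dt=1e-4):
    """Integrate theta' = -lam*theta + b (the extremal case) by explicit Euler."""
    theta, t = theta0, 0.0
    while t < t_end:
        theta += dt * (-lam * theta + b)
        t += dt
    return theta

lam, b, theta0, T = 2.0, 0.5, 3.0, 4.0
theta_T = euler_ode(theta0, lam, b, T)
bound = theta0 * math.exp(-lam * T) + (b / lam) * (1 - math.exp(-lam * T))
assert theta_T <= bound + 1e-3  # the trajectory respects the Gronwall bound
```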
Proof of Proposition A.3
Setting
the dynamics of the potential can be written as
Therefore, we get
and
Notice that
which yields
By Lemma A.5 we deduce that
Since \(|R^{(i)}_s|\le R_\text {max}, \) we then get
which is the first desired inequality. Using this in (A.4) yields
Applying once again Lemma A.5, we obtain
which implies the asserted bounds on \( V^\text {max}_{t,\infty }\).
Let us now deduce the existence of an invariant distribution which is exchangeable. Let \(P_t^N\) denote the semigroup associated to the solution of (2.1), that is for each \(\mathcal {X}\in (\mathbb {R}\times [0,1]^4)^N\) and B Borel set of \((\mathbb {R}\times [0,1]^4)^N\),
Consider also the probability measure \(R_T^N(\lambda )\) on \((\mathbb {R}\times [0,1]^4)^N\), defined for any law \(\lambda \) as
Since the voltage component is uniformly bounded in time, by (A.5), the solution to (2.1) lies in the compact set \(([-4\tfrac{R_\text {max}}{g_\text {L}}-2V^\text {max}_0, 4\tfrac{R_\text {max}}{g_\text {L}}+2V^\text {max}_0]\times [0,1]^4)^N\). Hence, for any \((T_M)\nearrow \infty \) and any \(\lambda \) with compact support, the sequence \((R_{T_M}^N(\lambda ),M\ge 0)\) is tight and has a subsequence weakly converging to some probability measure \(\mu _{\infty }^N\). According to the Krylov–Bogoliubov theorem, \(\mu _{\infty }^N\) is invariant for \(P_t^N\).
Let us now choose an exchangeable initial law \(\lambda \). For any measurable and bounded function \(\psi \), the identity
for any N-permutation \(\pi \) of the coordinates follows directly from the exchangeable structure of the system of Eq. (2.1). Therefore, \(R_{T_M}^N(\lambda )\) is exchangeable for any \(T_M\), and the corresponding \(\mu _{\infty }^N\) is exchangeable as the weak limit of exchangeable measures. \(\square \)
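As an aside, the Krylov–Bogoliubov device just used (tightness of the Cesàro averages \(R_{T}^N(\lambda )\) and invariance of their weak limits) has a simple numerical analogue. The Python sketch below is a toy illustration only, using an ergodic AR(1) chain with made-up parameters rather than the system (2.1): long-run time averages of observables should match the invariant law, here the Gaussian \(N(0, 1/(1-a^2))\).

```python
import random

# Toy analogue of the Krylov-Bogoliubov argument (illustration, not the proof's
# construction): for the ergodic AR(1) chain x_{k+1} = a*x_k + eps_k with
# eps_k ~ N(0, 1), the invariant law is N(0, 1/(1 - a^2)); Cesaro time
# averages of x and x^2 converge to its first two moments.
def time_averages(a=0.9, n=200_000, seed=3):
    rng = random.Random(seed)
    x, s1, s2 = 0.0, 0.0, 0.0
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        s1 += x
        s2 += x * x
    return s1 / n, s2 / n

mean_avg, second_avg = time_averages()
target_var = 1.0 / (1.0 - 0.9 ** 2)  # invariant variance, about 5.26
assert abs(mean_avg) < 0.2
assert abs(second_avg - target_var) < 0.6
```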
Synchronization: proof of Theorem 2.3 a
In the sequel, for any locally bounded real function f on \(\mathbb {R}\) and each \(R>0\) we will write
We will repeatedly use a simple control on the increments of the function F, stated in the next lemma for convenience:
Lemma B.1
We have
for every \(m_i,n_i,h_i\in [0,1]\) and \(V_i\in \mathbb {R}\), \(i=1,2\).
Proof
Since
and
we get
and the asserted bound follows. \(\square \)
The following result is the core of the proof of Theorem 2.3:
Proposition B.2
For each \(V^*>0 \), there are constants \(J_{\text {E}}^*>0\) and \(\lambda ^*>0\) not depending on N nor on \(\sigma \) such that for each \(J_{\text {E}}>J_{\text {E}}^*\) and any solution X of (2.1) satisfying \(V^\text {max}_{0,\infty }\le V^*\), one has
for all \( i,j\in \{1,\ldots ,N\}\), where
Proof
Let us write \(\Delta V_t = V_t^{(i)}-V_t^{(j)}\) and \(\Delta x_t = x_t^{(i)}-x_t^{(j)} \). Thanks to the bound (B.1), we have
where we have used Young’s inequality: \(ab\le \varepsilon _x a^2 + \frac{b^2}{4 \varepsilon _x }\) for \(x=m,n,h,y\), with \(\varepsilon _x >0\) to be chosen later, and where we have set
On the other hand, for the channel types \(x=m,n,h,y\), we have
By our assumptions, for all \(t\ge 0\) we have, for \(k=i,j\),
Using Young’s inequality in the same way as before yields
where \(L_f^*\) denotes the Lipschitz constant on \([-V^*,V^*]\) of a locally Lipschitz function f, and where
Adding up, we get
Define now \(\lambda ^*\) as the optimal value of the problem
where
Notice that \(\lambda ^*\) is strictly positive since \(\Psi (J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y )\) can be made so by taking small enough \(\varepsilon _x>0\) for \(x=m,n,h,y\) and then large enough \(J>0\). Calling \(J_{\text {E}}^*\) the smallest \(J>0\) such that \((J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y )\in \arg \max \Psi \), it follows that for every \(J_{\text {E}}>J_{\text {E}}^*\),
Applying Lemma A.5, we obtain
and the desired result. \(\square \)
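To fix ideas, the contraction mechanism behind Proposition B.2 can be caricatured by freezing the nonlinear currents and the noise: the pairwise gap then satisfies \(\frac{d}{dt}\Delta V=-(g_L+J_{\text {E}})\Delta V\), so strong electrical coupling forces exponential synchronization at rate at least \(g_L+J_{\text {E}}\). The following Python sketch (a toy linear model with hypothetical parameter values, not the system (2.1)) checks this rate numerically.

```python
import math

# Toy caricature of the synchronization estimate (assumption-laden sketch):
# with the nonlinear currents and the noise frozen, Delta V = V^(i) - V^(j)
# obeys d(Delta V)/dt = -(g_L + J_E) * Delta V, hence decays like
# exp(-(g_L + J_E) * t).
def voltage_gap(dv0, g_L, J_E, t, dt=1e-4):
    dv = dv0
    for _ in range(int(t / dt)):
        dv += dt * (-(g_L + J_E) * dv)
    return dv

g_L, J_E = 0.3, 5.0  # hypothetical values
gap = voltage_gap(dv0=2.0, g_L=g_L, J_E=J_E, t=1.0)
assert abs(gap) <= 2.0 * math.exp(-(g_L + J_E) * 1.0) + 1e-3
```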
The next result removes the dependence of the previous one on the bound \(V^*\), at the price of ensuring exponentially fast synchronization only from some time instant \(t_0\ge 0\) onwards. It will then be easy to deduce part a) of Theorem 2.3.
Theorem B.3
There are constants \(J_{\text {E}}^0>0\) and \(\lambda ^0>0\) not depending on \(N\ge 1\), on \(\sigma \ge 0\) nor on the initial data, and \(t_0\ge 0\) not depending on \(N\ge 1\) nor on \(\sigma \ge 0\), such that for each \(J_{\text {E}}>J_{\text {E}}^0\) the solution X of (2.1) satisfies, for every \(t\ge t_0\),
where
Proof
Fix \(\epsilon _0 \in (0,1)\), take \(t_0\ge 0\) such that \(2 V^\text {max}_0 e^{-g_\text {L}t_0}\le \epsilon _0 \frac{R_\text {max}}{g_L} \) and, conditionally on the sigma-field generated by \((X_s:s \le t_0)\), apply Proposition B.2 to the shifted process \(X':=(X_{t+t_0}:t\ge 0)\) with \(V^*=V^*_{t_0}\le (4+\epsilon _0)\frac{ R_\text {max}}{g_L}\le 5\frac{ R_\text {max}}{g_L} \). The proof is then achieved by taking expectation in the obtained inequality. \(\square \)
We can now finish the proof of Theorem 2.3. a). Here and in the sequel we denote by \(\bar{S}^V_t\) and \(\bar{S}^x_t\) the empirical variance of voltages and x type channels at time t, respectively:
Proof of Theorem 2.3. a)
Applying in the conclusion of Theorem B.3 the elementary identity
with \(\bar{\alpha }^N=\frac{1}{N} \sum _{i=1}^N \alpha _i\) we get
in the general case. If, additionally, exchangeability of the initial condition is assumed, the path law of system (2.1) is exchangeable, by pathwise uniqueness. The asserted inequality follows. \(\square \)
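The elementary identity invoked above, presumably in its standard form \(\frac{1}{2N^2}\sum _{i,j}(\alpha _i-\alpha _j)^2=\frac{1}{N}\sum _i(\alpha _i-\bar{\alpha }^N)^2\), links the averaged pairwise squared gaps to the empirical variance. A quick numerical check of this identity:

```python
import random

# Numerical check of the classical identity relating mean pairwise squared
# gaps to the empirical variance:
#   (1 / (2 N^2)) * sum_{i,j} (a_i - a_j)^2 = (1/N) * sum_i (a_i - a_bar)^2.
random.seed(0)
a = [random.uniform(-1.0, 1.0) for _ in range(50)]
N = len(a)
a_bar = sum(a) / N
pairwise = sum((ai - aj) ** 2 for ai in a for aj in a) / (2 * N ** 2)
variance = sum((ai - a_bar) ** 2 for ai in a) / N
assert abs(pairwise - variance) < 1e-9
```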
Remark B.4
(i) Theorems 2.3 and B.3 show that, for large enough \(J_{\text {E}}\), synchronization of the network (2.1) always occurs as long as the initial voltage \(V_0\) is bounded, regardless of its actual values. More precisely, the time \(t_0>0\), which depends on \(V^\text {max}_0\), on \(\frac{R_\text {max}}{g_L}\) and on some arbitrary choice of the parameter \(\epsilon _0>0\), but not on \(J_{\text {E}}\), is one possible time after which we can guarantee that the voltage trajectories stay in some fixed interval not depending on \(V_0\). Then, after \(t_0\), and if \(J_{\text {E}}\) was chosen large enough, synchronization occurs at least at the exponential rate \(\lambda ^0\), which depends on the coefficients of the system (2.1) but no longer on the initial data. In turn, for large enough \(J_{\text {E}}\), Proposition B.2 ensures synchronization from \(t_0=0\) on, but only if \(V^\text {max}_0\) is small enough.
(ii) Notice that the function \(\Psi \) in the proof of Proposition B.2 (and hence the constant \(\lambda ^*\) therein) increases when its parameter \(V^*\) decreases, whereas \(C^*_{\zeta ,\rho }\) decreases when \(V^*\) does. Therefore, letting \(\epsilon _0\rightarrow 0\) (or \(t_0\rightarrow \infty \)) yields the best (by this approach) bounds for the \(\limsup \) in Theorem 2.3. Moreover, the largest possible exponential rate \(\lambda ^0>0\) and the smallest possible interaction strength \(J_{\text {E}}^0\ge 0 \) that can be obtained (but not necessarily attained) in Theorems 2.3 and B.3 by our approach are the \(\lambda ^*\) and \(J_{\text {E}}^*\) corresponding to \(V^*=\frac{4 R_\text {max}}{g_L}\). These choices are certainly not optimal in general.
Synchronized dynamics: proof of Theorem 2.3 b
Our next goal is to prove part b) of Theorem 2.3.
Remark C.1
Proceeding similarly to the proof of Proposition A.3, one checks that the process (2.6) satisfies \(\frac{d}{dt}| \widehat{V}_t|^2_2 + 2g_\text {L}| \widehat{V}_t |^2_2 \le 2R_\text {max}| \widehat{V}_t|, \) which now yields, for any \(t\ge t_1\),
Applying the first bound in Proposition A.3 to \(\bar{V}^N_{t_1}= \widehat{V}_{t_1}\), we get that \(| \widehat{V}_t |\le V^\text {max}_0 e^{-g_\text {L}t } +\frac{ 2 R_\text {max}}{g_\text {L}}\) for every \(t\ge t_1\). Thus, if \(t_0\ge 0\) is chosen as in Theorem B.3, we deduce that
We first prove
Proposition C.2
Let \(t_0\) be as in Theorem B.3 and \(\delta >0\). There are constants \(K_{1,\delta }, K_{2,\delta }>0\) increasingly depending on \(\delta >0\), but not depending on N nor on the initial condition, such that for each \(t_1\ge t_0\),
Proof
For notational simplicity we write in the proof \(\widehat{X}_{t}^{N}: = \widehat{X}_{t}^{N,t_1} \). Notice that the average process satisfies the dynamics
Therefore, after some manipulations, we get that
By Jensen’s inequality and the bound (C.1) we have
with \(K_V^1\) explicitly depending on \( \sup _{v\in [-\frac{ 5R_\text {max}}{2 g_L} ,\frac{ 5R_\text {max}}{2 g_L} ]} \max \{ |v-V_\text {Na}|, |v-V_\text {K}| \}\), \(g_{\text {K}}\) and \(g_{\text {Na}}\). Meanwhile, using (B.1) we get
with \(K_V^2\) also depending on those quantities and on \(g_L\). By similar arguments, we get
for some \(K_V^3\) depending on \(J_{\text {Ch}}\) and on \(\sup _{v\in [-\frac{ 5R_\text {max}}{2 g_L} ,\frac{ 5R_\text {max}}{2 g_L} ]}|v-V_\text {rev} |\). We thus get:
for some explicit \(\tilde{K}_V\), a.s., whence
On the other hand, for x type channels we get
For \(t\in (t_1,t_1+\delta )\) we deduce:
The previous bounds yield
Denoting by \(L_{f,R}\) a Lipschitz constant of a function f on \([- R,R]\) and using standard arguments, we get that
and that
for all \(t\in (t_1,t_1+\delta )\). By Doob’s inequality, we moreover obtain
Summarizing, for the x-type channel we have shown that for all \(t\in (t_1,t_1+\delta )\),
for some constants \(K_x>0\). Putting together (C.3) and (C.4) we get for all \(t\in (t_1,t_1+\delta )\) and some constants \(K_1,K_2>0\),
whence, using Gronwall’s inequality, we deduce:
We can now use Theorem 2.3 to bound the integral on the r.h.s. With \(K_{1,\delta } =e^{K_2(1+\delta )}K_1(1+\delta ) \) and \(K_{2,\delta } =12 e^{K_2(1+\delta )}\) we get, for all \(t_1\ge t_0\), that
since \( \bar{S}^V_{t_0}\le (V^*_{t_0} )^2\). \(\square \)
Proof of Theorem 2.3. b)
Notice on the one hand that, for each \(t\ge t_1\), we always have the bounds
thanks to (C.1) and that \(V^*_{t_0} \le \frac{ 5R_\text {max}}{g_L} \). On the other hand, combining Proposition C.2 with Theorem 2.3. a) we get for every \(t\in [t_1,t_1+\delta ]\) that
with \(K_0'=\left( \frac{ 5R_\text {max}}{g_L} \right) ^2+4\). The statement follows. \(\square \)
Propagation of chaos and synchronization for the McKean–Vlasov limit: proofs of Theorem 2.5 and Corollary 2.7
We first address the asymptotic behavior of the flow of empirical measures (2.9) as \(N\rightarrow \infty \), and prove Theorem 2.5. In particular, we will prove the propagation of chaos property for system (2.1). Following the classical pathwise approach developed in Sznitman (1991) and Méléard (1996), we first establish:
Theorem D.1
Under the assumptions of Theorem 2.5, we have:
(a) Let \(W^{x}, x=m,n,h,y\) be independent standard Brownian motions and \((V_0,m_0,n_0,h_0,y_0)\) an independent random vector with law \(\mu _0\). There is existence and uniqueness, pathwise and in law, of a solution \(\widetilde{X}= (\widetilde{V}_t,\widetilde{m}_t,\widetilde{n}_t,\widetilde{h}_t,\widetilde{y}_t, t\ge 0 )\) to the nonlinear stochastic differential equation (in the sense of McKean) with values in \(\mathbb {R}\times [0,1]^4\):
$$\begin{aligned} \begin{aligned} \widetilde{V}_t&= V_0+ \int _{0}^{t}{F(\widetilde{V}^{}_s,\widetilde{m}_s,\widetilde{n}_s,\widetilde{h}_s)ds}-\int _{0}^{t}{ J_{\text {E}}(\widetilde{V}^{}_s-\mathbb {E}[\widetilde{V}_s])ds}\\&\quad - \int _{0}^{t}{J_{\text {Ch}}\mathbb {E}[\widetilde{y}_s](\widetilde{V}^{}_s-V_\text {rev}) ds},\\ \widetilde{x}^{}_t&= x^{}_0+\int _{0}^{t}\rho _x(\widetilde{V}^{}_s)(1-\widetilde{x}^{}_s) -\zeta _x(\widetilde{V}^{}_s)\widetilde{x}^{}_sds + \int _{0}^{t}{\sigma _x(\widetilde{V}_s^{},\widetilde{x}_s^{})dW_s^{x}},\;\;x=m,n,h,y \, \end{aligned} \end{aligned}$$

(D.1)

such that for all \(t\ge 0\), \(|\widetilde{V}_t|\le {4R_\text {max}}/{g_\text {L}} + 2V^\text {max}_0 e^{-g_\text {L}t}\) almost surely.
(b) \((\mu _t:=\text{ law }(\tilde{X}_t): t\ge 0)\) is a weak solution, globally defined in \(C([0,+\infty ); \mathcal{P}_2(\mathbb {R}\times [0,1]^4))\), of the McKean–Vlasov equation (2.10).
(c) For each \(T>0\), let \(\widetilde{X}^{(i)}= \left( (\widetilde{V}^{(i)}_t,\widetilde{m}^{(i)}_t,\widetilde{n}^{(i)}_t,\widetilde{h}^{(i)}_t,\widetilde{y}^{(i)}_t):t\in [0,T]\right) \), \(i=1,\ldots ,N\) be independent copies of the nonlinear process (D.1), each of them driven by the same Brownian motions \((W^{x,i},\ x=m,n,h,y)\) and with the same initial conditions \(X^{(i)}_0=\widetilde{X}^{(i)}_0 \) as the N-particle system (2.1). Then, there is a constant \(C(T)>0\) such that for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\),
$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T}| X^{(i)}_t-\widetilde{X}^{(i)}_t |^2 \right] \le \frac{C(T)}{N}. \end{aligned}$$
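The \(O(1/N)\) rate in part (c) can be illustrated on a drastically simplified linear mean-field system (an assumption-laden toy model, not (2.1)): particles \(dX_i/dt=-X_i-J(X_i-\bar{X}^N)\) are coupled to independent copies driven by the limit mean \(m_t=\mathbb {E}[X_0]e^{-t}\), with shared initial data. The Python sketch below checks that the averaged squared gap shrinks roughly like \(1/N\).

```python
import random

# Hedged toy illustration of the propagation-of-chaos rate: compare the
# coupled system dX_i/dt = -X_i - J*(X_i - mean(X)) with synchronously
# coupled copies dXt_i/dt = -Xt_i - J*(Xt_i - m_t), where m_t = E[X_0]*e^{-t}
# (= 0 here since X_0 ~ Uniform(-1, 1)). The mean squared gap is O(1/N),
# so multiplying N by 16 should divide it by much more than 4.
def mean_sq_gap(N, J=2.0, T=1.0, dt=2e-2, n_runs=200, seed=7):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_runs):
        x = [rng.uniform(-1.0, 1.0) for _ in range(N)]
        xt = list(x)  # copies share the initial condition
        m = 0.0       # limit mean: E[X_0] = 0, so m_t = 0 for all t
        for _ in range(int(T / dt)):
            xbar = sum(x) / N
            x = [xi + dt * (-xi - J * (xi - xbar)) for xi in x]
            xt = [xi + dt * (-xi - J * (xi - m)) for xi in xt]
        acc += sum((a - b) ** 2 for a, b in zip(x, xt)) / N
    return acc / n_runs

g_small, g_large = mean_sq_gap(N=10), mean_sq_gap(N=160)
assert g_large < g_small / 4  # gap decays at least like O(1/N)
```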
Proof
The statements a), b) and c) would be standard if the coefficients in each of the N components of (2.1) were replaced by globally Lipschitz functions of \(X^{(i)}_s\) and \(X^{(j)}_s\), see Theorems 2.2 and 2.3 in Méléard (1996). In particular, with functions \(p^j_M\) and \(F_M\) defined for fixed \(M>0\) as in Lemma A.1, for any \(T>0\) there is existence and uniqueness, pathwise and in law, of a solution to the nonlinear stochastic differential equation on [0, T]:
Moreover, letting \(\widetilde{X}^{(i,M)}= \left( (\widetilde{V}^{(i,M)}_t,\widetilde{m}^{(i,M)}_t,\widetilde{n}^{(i,M)}_t,\widetilde{h}^{(i,M)}_t,\widetilde{y}^{(i,M)}_t):t\in [0,T]\right) \), \(i=1,\ldots ,N\) be independent copies of the nonlinear process (D.2) driven by the same Brownian motions \((W^{x,i},\ x=m,n,h,y)\) and with the same initial conditions \(X^{(i)}_0=\widetilde{X}^{(i)}_0 \) as the system \((X^{(1,M)},\ldots , X^{(N,M)}) \) defined in (A.1), we obtain that
for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\), and some constant \(C_M(T)>0\).
We notice now that, by Proposition A.3, for \(M>0\) large enough the system \((X^{(1)},\ldots , X^{(N)}) \) is also a solution to the system of Eq. (A.1). Pathwise uniqueness of the latter yields, for all such \(M>0\), that \((X^{(1)},\ldots , X^{(N)}) =(X^{(1,M)},\ldots , X^{(N,M)}) \) on [0, T], whence
for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\). Furthermore, for any \(M'>0\)
Taking \(M'=1\), letting \(N\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\), we deduce that \( \widetilde{x}^{(i,M)}_t \le 1\) a.s. for every \(t\in [0,T]\) and \(i\in \mathbb {N}\). In a similar way, \( \widetilde{x}^{(i,M)}_t \ge 0\) and \(|\tilde{V}^{(i,M)}_t | \le V^\text {max}_{t,\infty }\) hold a.s. for every \(t\in [0, T]\) and \(i\in \mathbb {N}\). This implies that for \(M>0\) large enough but fixed, a solution to (D.2) also solves (D.1), and proves the existence part in a).
We now show that any solution has compact support, bounded uniformly in time, from which uniqueness in part a) will immediately follow. We first consider a solution \((U_t,q^m_t,q^n_t,q^h_t,q^y_t)\) of (D.1) with explosion time \(\xi \), and show that it coincides with \((\widetilde{V}^{M}_t,\widetilde{m}^{M}_t,\widetilde{n}^{M}_t,\widetilde{h}^{M}_t,\widetilde{y}^{M}_t, t\ge 0)\) for M large enough. For \(M>1\), we define \(\tau _M = \inf \{t\ge 0: \max \{|U_t|,|q^m_t|,|q^n_t|,|q^h_t|,|q^y_t|\}\ge M\}\). We then observe that the coefficients of (D.1) applied to \((U_t,q^m_t,q^n_t,q^h_t,q^y_t, 0 \le t \le \tau _M)\) coincide with the truncated coefficients of (D.2), and thanks to the uniqueness property for (D.2) we conclude that almost surely
In particular, we observe that \(q^x_{t \wedge \tau _M} \in [0,1]\) for \(x=m,n,h,y\), and that \(\tau _M = \inf \{t\ge 0:|U_t|\ge M\}\) for \(M > 1\). Moreover the second order moment \( \mathbb {E}[U_{t\wedge \tau _M}^2]\) is uniformly bounded in M, since
whence it is easy to show that
and therefore, thanks to Gronwall’s inequality
On the other hand, \(\mathbb {E}(U_{t\wedge \tau _M}^2) = \mathbb {E}(U_{t}^2\mathbb {1}_{\tau _M>t})+ M^2\mathbb {P}(\tau _M \le t)\), so we can conclude that for all \(t\ge 0\) and all \(M\ge 1\)
Since \(\tau _M\nearrow \xi \), we conclude that \(\mathbb {P}(\xi \le t)=0\) for all \(t\ge 0\), from which it follows that \(\xi \) is almost surely infinite.
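For the reader's convenience, the step from the moment bound to non-explosion can be spelled out as follows (we write \(C(t)\) for the uniform-in-\(M\) bound produced by Gronwall's inequality above; the name \(C(t)\) is our shorthand):

```latex
% Chebyshev-type step: the uniform moment bound controls the exit probability.
M^2\,\mathbb{P}(\tau_M \le t)
  \;\le\; \mathbb{E}\big(U_{t\wedge\tau_M}^2\big)
  \;\le\; C(t),
\qquad\text{so}\qquad
\mathbb{P}(\tau_M \le t) \;\le\; \frac{C(t)}{M^2}
  \;\xrightarrow[M\to\infty]{}\; 0,
```

and since \(\tau_M\nearrow\xi\), monotonicity gives \(\mathbb{P}(\xi\le t)=\lim_{M\to\infty}\mathbb{P}(\tau_M\le t)=0\).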
Now, since \((U_t,q^m_t,q^n_t,q^h_t,q^y_t)\) has no explosion, we apply Proposition 3.3 in Bossy et al. (2015) to get that almost surely \(q^x_{t} \in [0,1]\) for any \(t>0\). Using this, we derive a more precise bound for the second order moment:
where as in the proof of Proposition A.3,
Applying Lemma A.5 one more time, we conclude
Thus, the second moment of any solution of (D.1) is uniformly bounded in time. Moreover, since the initial condition \(V_0\) is bounded, proceeding exactly as in the proof of Proposition A.3 we obtain that
with the same bound \(V^\text {max}_0\) for \(V_0\). In conclusion, solutions of (D.1) do not explode; moreover, they are uniformly bounded in time. Choosing \(M>4R_\text {max}/g_\text {L}+ 2V^\text {max}_0\), we get \(\tau _M=\infty \) almost surely, and for any \(t\ge 0\),
Hence Eq. (D.1) has a unique solution.
Part b) derives from a direct application of Itô's formula to compute
for a \(C^\infty _c\) test function \(\psi \), thanks to the fact that the Lebesgue integrals on the right-hand side of the Itô formula are all bounded, since the supports of the laws \((\mu _t: t\ge 0)\) are contained in some compact set, and by continuity of the coefficients.
Part c) is immediate taking large enough M in (D.3). \(\square \)
We are now in a position to prove
Proof of Theorem 2.5
a) We write \(\mathcal {C}_T:=C([0,T], \mathbb {R}\times [0,1]^4)\). Part c) of Theorem D.1 implies that for each \(T>0\) and \(k\ge 1\) the convergence \(\text{ Law }(X^{(1)},\ldots , X^{(k)})\rightarrow \mu ^{\otimes k}\) with \(\mu =\text{ Law }(\tilde{X}^{(1)})\) holds on the space \(\mathcal {C}_T^k\) as \(N\rightarrow \infty \). By Proposition 2.2 in Sznitman (1991) or Proposition 4.2 in Méléard (1996), this implies that the empirical measure
with \(\mathcal{P}(\mathcal {C}_T)\) denoting the space of probability measures on \(\mathcal {C}_T\) endowed with the weak topology, converges in law to the (deterministic) probability measure \(\mu \). The first assertion of the theorem follows then from the fact that the mapping associating with \(\nu \in \mathcal{P}(\mathcal {C}_T)\) its flow \((\nu _t:t\in [0,T])\in C([0,T];\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of one-dimensional time-marginals laws is continuous, together with part b) of Theorem D.1 (notice that \(C([0,T];\mathcal{P}( \mathbb {R}\times [0,1]^4))\) can be replaced by \(C([0,T]; \mathcal{P}_2( \mathbb {R}\times [0,1]^4))\) since all the random measures involved have a common compact support).
b) We observe first that for each \(t\ge 0\) one has
where \(\tilde{\mu }^N_t\) is the empirical measure of an i.i.d. sample of the law \(\mu _t\), constructed on the same probability space as \(\mu ^N_t\). Taking \(\tilde{\mu }^N_t:=\frac{1}{N}\sum _{i=1}^N \delta _{\tilde{X}_t^{(i)}}\), with \(\widetilde{X}^{(i)}_t\), \(i=1,\ldots ,N\), the processes defined in part c) of Theorem D.1, we get for every \(t\in [0,T]\) that
On the other hand, we have \(\sup _{t\in [0,T]}( \int |z|^q\mu _t(dz))^{1/q} <\infty \) for each \(q\ge 1\), using for instance the bound obtained at the end of the proof of Theorem D.1. We can therefore apply Theorem 1 in Fournier and Guillin (2015) with \(p=2\), \(d=5\) and a sufficiently large \(q>2\), to get that \( \mathbb {E}\left( \mathcal {W}_2^2(\tilde{\mu }^N_t,\mu _t )\right) \le C N^{-2/5}\). The second assertion thus follows.
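Schematically, the instantiation of Fournier and Guillin (2015, Theorem 1) reads as follows (the two-term shape of the bound is our paraphrase of that theorem, up to its case distinctions; \(M_q(\mu_t)\) denotes the \(q\)-th moment of \(\mu_t\)):

```latex
% Fournier--Guillin, Theorem 1, with p = 2, d = 5 and q > 2 large:
\mathbb{E}\big(\mathcal{W}_2^2(\tilde{\mu}^N_t,\mu_t)\big)
  \;\le\; C\,M_q^{2/q}(\mu_t)\,
  \Big( N^{-2/5} + N^{-(q-2)/q} \Big)
  \;\le\; C'\,N^{-2/5},
```

since \(d=5>2p=4\) makes \(N^{-p/d}=N^{-2/5}\) the dominant term once \(q\) is large, while \(\sup_{t\le T}M_q(\mu_t)<\infty\) follows from the compact-support bound.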
c) In order to prove uniqueness for the McKean–Vlasov equation (2.10), we adapt to our setting a generic argument going back at least to Gärtner (1988). Assume for the moment that, for each compactly supported \(\nu _0\in \mathcal{P}( \mathbb {R}\times [0,1]^4)\) and each \((\nu ^*_t:t\in [0,T])\in C([0,T],\mathcal{P}_2( \mathbb {R}\times [0,1]^4))\), the linear Fokker–Planck equation
has at most one weak solution with supports bounded uniformly in \(t\in [0,T]\). By similar arguments as in Lemma A.1, strong well-posedness holds for the stochastic differential equation:
with \((V_0^* ,m_0^* ,n_0^* ,h_0^* ,y_0^* )\) independent of the Brownian motions \(W^x\) and with law \(\nu _ 0\). Moreover, one can check that \(x^*_t\in [0,1]\) a.s. for all \(t\in [0,T]\) and that the process \((V_t^*:t\in [0,T])\) is bounded. It follows using Itô’s formula that a unique weak solution to Eq. (D.4) with uniformly bounded supports does exist, and is given by \(\nu _ t= \text{ law } (V_t^* ,m_t^* ,n_t^* ,h_t^* ,y_t^* )\) for all \(t\in [0,T]\). Now, any solution \((\mu _t:t\in [0,T])\) in \(C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of (2.10) with uniformly bounded supports also solves the linear equation (D.4) with \((\nu ^*_t:t\in [0,T])=(\mu _t:t\in [0,T])\). This yields, for all \(t\in [0,T]\), that \(\mu _t= \text{ law } (V_t^* ,m_t^* ,n_t^* ,h_t^* ,y_t^* )\), for the process defined as in (D.5), with \(\nu ^*_s=\mu _s\) for all \( s\in [0,T]\). In other words, this process solves the nonlinear stochastic differential equation (D.1). From Theorem D.1 we conclude that \((\mu _t:t\in [0,T])=(\text{ law }(\tilde{X}_t):t\in [0,T])\), that is, there is uniqueness of solutions in \(C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of (2.10) having uniformly bounded support.
Hence, in order to conclude the proof of Theorem 2.5, it is enough to show that, given functions \(\alpha ,\beta \in C([0,T],\mathbb {R})\) and \(\nu _0\in \mathcal{P}_2( \mathbb {R}\times [0,1]^4)\), there is at most one solution \((\nu _t:t\in [0,T])\in C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\), with support bounded uniformly in [0, T], to the distributional formulation of Eq. (D.4)
for all \(t\in [0,T]\) and for an extended class of test functions \(\psi \in C^{1,1,2}_b([0,T]\times \mathbb {R}\times [0,1]^4)\). Let \(\rho _x'\) and \(\zeta _x'\) denote compactly supported functions coinciding with \(\rho _x\) and \(\zeta _x\) on some compact set \(\mathcal{K}\subset \mathbb {R}\) containing the supports of the measures \(\nu ^V_t\) for \( t\in [0,T]\), and define \(\sigma _x'\), \(a'_x\) and \(b_x'\) in terms of them in a similar way as \(\sigma _x\), \(a_x\) and \(b_x\) were defined in terms of \(\rho _x\) and \(\zeta _x\). For a given \(t>0\), consider the following Cauchy problem in \(\mathbb {R}^5\): for all \((s,v,u)\in [0,t)\times \mathbb {R}\times \mathbb {R}^4\),
By the Feynman–Kac formula (see e.g. Karatzas and Shreve 1991), if a solution \(f_t\in C_b([0,t] \times \mathbb {R}^5 )\cap C_b^{1,1,2}([0,t)\times \mathbb {R}\times \mathbb {R}^4)\) exists, then it is given by
where \((X_r^{s,v,u}:=(V_r,m_r,n_r,h_r,y_r)\,: r\in [s,t])\) is the unique (pathwise and in law) solution in [s, t] of the stochastic differential equation:
Moreover, for \(v\) chosen in some fixed compact set, this solution is bounded independently of \(s\in [0,t]\), and one has \(x_r\in [0,1]\) for all \(r\in [s,t]\). Hence, under the assumption that \(\sigma >0\) and that \(\rho _x\) and \(\zeta _x\) are of class \(C^2(\mathbb {R})\), one can moreover prove, following the lines of Friedman (2006, p. 124), that the function \(f_t\) defined by (D.8) is actually of class \(C_b^{1,1,2}([0,t)\times \mathbb {R}\times \mathbb {R}^4)\) and solves the Cauchy problem (D.7). Putting \(\psi =f_t\) in (D.6) yields
for all \(\psi \in C_0^2(\mathbb {R}^5 )\), which uniquely determines \(\nu _t\). Notice that when \(\sigma =0\), the required regularity for \(\psi \) and for \(f_t\) turns from \(C^{1,1,2}\) into \(C^{1,1,1}\), and the Feynman–Kac formula in the argument can be replaced by the method of characteristics. The proof of part c) is complete.
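A sketch of the duality underlying this last step, in our paraphrase (using the terminal condition \(f_t(t,\cdot)=\psi\) of the Cauchy problem and the probabilistic representation (D.8)):

```latex
% Duality between the Fokker--Planck equation and the backward problem:
\int \psi \, d\nu_t
  \;=\; \int f_t(t,\cdot)\, d\nu_t
  \;=\; \int f_t(0,\cdot)\, d\nu_0
  \;=\; \int \mathbb{E}\big[\psi\big(X_t^{0,v,u}\big)\big]\, \nu_0(dv,du),
```

so \(\int \psi\, d\nu_t\) is expressed through \(\nu_0\) and the coefficients alone, for a class of test functions \(\psi\) rich enough to determine \(\nu_t\).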
d) This is immediate from parts b) and d) of Theorem D.1. \(\square \)
Proof of Corollary 2.7
Recall first that, for any \(\nu \in \mathcal{P}_2(\mathbb {R}\times [0,1]^4)\) and \(w\in \mathbb {R}\times [0,1]^4\), one has
Moreover, for every \(t\ge t_1\) and \(N\ge 1\) it holds by exchangeability that:
Therefore, it is enough to prove that, for any \(t_1\ge 0\),
as \(N\rightarrow \infty \). Given \(t\ge t_1\) and \(N\ge 1\), let \(\pi _t^N(dz,dz')\) be a coupling between \(\mu _t\) and \(\mu ^N_t\). Then, for some constant \(C>0\) depending neither on \(t\ge t_1\) nor on \(N\ge 1\), we have
since the supports of \(\mu _t\) and \(\mu _t^N\) and the processes \(\widehat{X}^{ t_1,\infty }_t\) and \(\widehat{X}^{ t_1,N}_t\) are uniformly bounded in \(t\ge t_1\) and N. The latter property also allows us to write the dynamics in (2.6) and (2.12) using globally Lipschitz coefficients. Thanks to Gronwall’s lemma this yields the estimates
for some constant \(C_{\delta }>0\) not depending on N. Since \( \int | z- z' |\pi _{t}^N(dz,dz') \le \left( \int | z- z' |^2\pi _{t}^N(dz,dz')\right) ^{1/2} \), by taking the above couplings to be optimal for \( \mathcal {W}_2\), we get the estimate
for some \(C'>0\). We conclude thanks to Theorem 2.5. \(\square \)
Strong convergence rate result for the exponential projective Euler scheme (EPES)
The main purpose of this section is to prove the convergence of the numerical scheme presented in Sect. 3 to the model (2.1), and to establish the following rate of convergence.
Proposition E.1
Assume Hypothesis 2.2 and that \(\chi (x)=O(x(1-x))\). Then there exists a constant \(C\), depending on the parameters of the system but independent of \(\Delta t\), such that for any \(i=1,\ldots ,N\):
We decompose the proof of this proposition into several preliminary results.
The next result follows from the uniform bound for \(\widehat{V}_t^{(i)}\) (see iii) in Remark A.4) and some standard arguments on local approximation of SDEs, so we omit the proof.
Lemma E.2
Under Hypothesis 2.2, there exists a constant C depending on the parameters of the system, but independent of \(\Delta t\) such that
Next we establish the key step in the convergence of the scheme, namely that, with extremely high probability, the processes \(\widehat{x}^{(i)}\) and \(\check{x}^{(i)}\) coincide.
Lemma E.3
Assume Hypothesis 2.2 and that \(\chi (x)=O(x(1-x))\). Then there exists a constant \(C\), depending on the parameters of the system but independent of \(\Delta t\), such that
Remark E.4
It is not difficult to see that
Notice that the right-hand side above tends to zero faster than any power of \(\Delta t\) as \(\Delta t\rightarrow 0\).
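To illustrate the remark numerically (a purely illustrative computation; the constant \(c=1\) and the exponent \(k=10\) below are arbitrary choices of ours, not quantities from the paper), one can compare \(e^{-c/\Delta t}\) against the power \(\Delta t^{k}\):

```python
import math

def ratio(dt, k, c=1.0):
    """exp(-c/dt) divided by dt**k; illustrative constants only."""
    return math.exp(-c / dt) / dt**k

# The ratio collapses as dt shrinks: the exponential beats any fixed power.
for dt in (0.1, 0.05, 0.01):
    print(dt, ratio(dt, k=10))
```

Already at \(\Delta t = 0.01\) the exponential term is many orders of magnitude below \(\Delta t^{10}\).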
Proof of Lemma E.3
We first notice that, conditionally on \(\mathcal {F}_{\eta (t)}\), \(\check{x}^{(i)}\) is an Ornstein–Uhlenbeck process; its law is therefore Gaussian, with known conditional mean and conditional variance given by
Observe that the conditional variance is strictly positive if \(t>\eta (t)\), \(\widehat{x}^{(i)}_{\eta (t)}\ne 0\) and \(\widehat{x}^{(i)}_{\eta (t)}\ne 1\). When \(\widehat{x}^{(i)}_{\eta (t)}=0\) or \(\widehat{x}^{(i)}_{\eta (t)}=1\) the diffusion coefficient vanishes, and the solution to the ODE for \(\check{x}^{(i)}\) then remains in [0, 1] almost surely, so we can restrict ourselves to the case \(\widehat{x}^{(i)}_{\eta (t)}\in (0,1)\).
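As a reading aid, one step of the gating-variable update just described (exact sampling of the Ornstein–Uhlenbeck transition with coefficients frozen at the step start, followed by projection onto \([0,1]\)) can be sketched as follows. The frozen-coefficient form, the function names and the parameters are our assumptions for illustration, not the authors' exact scheme:

```python
import math
import random

def epes_gate_step(x, alpha, beta, sigma, chi, dt, rng=random):
    """One candidate step for a gating variable on [t, t + dt]: exact
    sampling of the OU process with drift alpha*(1 - x) - beta*x and
    diffusion sigma*chi(x) frozen at the step start, then projection
    onto [0, 1].  A sketch under our assumptions, not the paper's
    exact scheme."""
    b = alpha + beta
    x_inf = alpha / b                  # equilibrium of the frozen ODE
    decay = math.exp(-b * dt)
    # conditional mean: weighted mean of x and x_inf, both in [0, 1]
    mean = x * decay + x_inf * (1.0 - decay)
    # conditional variance of the frozen-coefficient OU transition
    var = (sigma * chi(x)) ** 2 * (1.0 - decay ** 2) / (2.0 * b)
    x_new = mean + math.sqrt(var) * rng.gauss(0.0, 1.0)
    return min(1.0, max(0.0, x_new))   # projection step
```

For instance, with \(\chi(x)=x(1-x)\) the diffusion vanishes at the boundary, matching the observation above that the step is deterministic there.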
Using the Gaussian concentration inequality conditionally on \(\mathcal {F}_{\eta (t)}\), we have
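The Gaussian concentration inequality invoked here is, presumably, the standard tail bound: for a Gaussian random variable \(Z\) with mean \(m\) and variance \(v>0\),

```latex
\mathbb{P}\big( |Z - m| \ge \delta \big) \;\le\; 2\, e^{-\delta^2/(2v)},
\qquad \delta > 0,
```

applied conditionally on \(\mathcal{F}_{\eta(t)}\) with the conditional mean and variance of \(\check{x}^{(i)}_t\) computed above.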
Since for t small enough
and \(t-\eta (t)\le \Delta t\), we can bound the conditional variance, and then it follows
On the other hand, \(\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] \) is a weighted mean between two quantities in [0, 1]; therefore
hence
To bound the first exponential on the right-hand side of the last inequality, we notice that, since the process \(\widehat{V}^{(i)}\) is uniformly bounded and \(\sigma \) is bounded, we can easily exhibit a constant \(C_1>0\), independent of \(i\), such that
For the second term on the right-hand side of (E.1), since \(x^2/\chi (x)^2\) is bounded from below on (0, 1), there exists \(C_2>0\) such that
from which we conclude
An analogous computation shows
\(\square \)
The last preliminary step in the proof of Proposition E.1 is the following
Lemma E.5
Under the hypotheses of Proposition E.1, consider
Then there exists a constant C depending on the parameters of the system, but independent of \(\Delta t\), such that
Proof
Thanks to the boundedness of the processes, the drift and diffusion coefficients \(b_x\) and \(\sigma _x\) behave like Lipschitz functions, just as in the proof of Lemma A.1. Then, thanks to Itô's formula, and pivoting in the drift and the diffusion around the point \(( \widehat{V}^{(i)}_{s},\check{x}^{(i)}_s)\), we obtain
from which the Lipschitz property of the coefficients, Lemma E.2 (used to bound the terms involving the local error) and some classical arguments lead to
On the other hand, for the voltage error we first obtain the a.s. bound
Thanks to the exchangeability of the particles, it follows that
and then, since the processes are uniformly bounded, we get that
We can summarize the previous computations as
from where we conclude thanks to Gronwall’s inequality. \(\square \)
Proof of Proposition E.1
From the previous lemma, denoting \(u_k = u(t_k)\), we obtain the following recurrence relation:
Iterating this inequality, it is easy to conclude that
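A generic instance of this iteration, in our paraphrase (we assume a recurrence of the standard form below with \(u_0=0\); the exact constants are those of the lemma):

```latex
u_{k+1} \;\le\; (1+C\Delta t)\,u_k + C(\Delta t)^2
\;\Longrightarrow\;
u_k \;\le\; C(\Delta t)^2 \sum_{j=0}^{k-1} (1+C\Delta t)^j
    \;=\; \Delta t\,\big((1+C\Delta t)^k - 1\big)
    \;\le\; \big(e^{C k \Delta t} - 1\big)\,\Delta t,
```

which is bounded by \((e^{CT}-1)\,\Delta t\) uniformly in \(k\le T/\Delta t\).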
But when \(\Delta t\rightarrow 0\), we have \(e^{C\Delta t}- 1 \sim C\Delta t\), and therefore \(u_{k+1} \le C\Delta t.\) Inserting this in (E.2), we conclude
from which the statement follows by applying Lemma E.3. \(\square \)
Bossy, M., Fontbona, J. & Olivero, H. Synchronization of stochastic mean field networks of Hodgkin–Huxley neurons with noisy channels. J. Math. Biol. 78, 1771–1820 (2019). https://doi.org/10.1007/s00285-019-01326-7
Keywords
- Hodgkin–Huxley neurons
- Synchronization of neuron networks
- Mean-field limits
- Propagation of chaos
- Stochastic differential equations