
Synchronization of stochastic mean field networks of Hodgkin–Huxley neurons with noisy channels

  • Published in: Journal of Mathematical Biology

Abstract

In this work we are interested in a mathematical model of the collective behavior of a fully connected network of finitely many neurons, as both their number and time go to infinity. We assume that every neuron follows a stochastic version of the Hodgkin–Huxley model, and that pairs of neurons interact through both electrical and chemical synapses, the global connectivity being of mean-field type. When the leak conductance is strictly positive, we prove that if the initial voltages are uniformly bounded and the electrical interaction between neurons is strong enough, then, uniformly in the number of neurons, the whole system synchronizes exponentially fast as time goes to infinity, up to some error controlled by (and vanishing with) the channel noise level. Moreover, we prove that if the random initial condition is exchangeable, the propagation of chaos property holds for this system on every bounded time interval (regardless of the interaction intensities). Combining these results, we deduce that the nonlinear McKean–Vlasov equation describing an infinite network of such neurons concentrates, as time goes to infinity, around the dynamics of a single Hodgkin–Huxley neuron with chemical neurotransmitter channels. Our results are illustrated and complemented with numerical simulations.


Figs. 1–8


References

  • Ambrosio L, Gigli N, Savaré G (2008) Gradient flows: in metric spaces and in the space of probability measures. Springer, Berlin

  • Austin TD (2008) The emergence of the deterministic Hodgkin–Huxley equations as a limit from the underlying stochastic ion-channel mechanism. Ann Appl Probab 18(4):1279–1325

  • Axmacher N, Mormann F, Fernández G, Elger CE, Fell J (2006) Memory formation by neuronal synchronization. Brain Res Rev 52(1):170–182

  • Baladron J, Fasoli D, Faugeras O, Touboul J (2012) Mean-field description and propagation of chaos in networks of Hodgkin–Huxley and FitzHugh–Nagumo neurons. J Math Neurosci 2(1):10

  • Berglund N, Gentz B (2004) On the noise-induced passage through an unstable periodic orbit I: two-level model. J Stat Phys 114(5–6):1577–1618

  • Berglund N, Gentz B (2014) On the noise-induced passage through an unstable periodic orbit II: general case. SIAM J Math Anal 46(1):310–352

  • Bertini L, Giacomin G, Pakdaman K (2010) Dynamical aspects of mean field plane rotators and the Kuramoto model. J Stat Phys 138(1):270–290

  • Bertini L, Giacomin G, Poquet C (2014) Synchronization and random long time dynamics for mean-field plane rotators. Probab Theory Relat Fields 160(3–4):593–653

  • Bossy M, Faugeras O, Talay D (2015) Clarification and complement to "Mean-field description and propagation of chaos in networks of Hodgkin–Huxley and FitzHugh–Nagumo neurons". J Math Neurosci 5(1):1–23

  • Bossy M, Espina J, Morice J, Paris C, Rousseau A (2016) Modeling the wind circulation around mills with a Lagrangian stochastic approach. SMAI J Comput Math 2:177–214

  • Bressloff PC, Lai YM (2011) Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise. J Math Neurosci 1(1):2

  • Burkitt AN (2006a) A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern 95(1):1–19

  • Burkitt AN (2006b) A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol Cybern 95(2):97–112

  • Chan T, Golub G, LeVeque R (1983) Algorithms for computing the sample variance: analysis and recommendations. Am Stat 37(3):242–247

  • Dangerfield CE, Kay D, Burrage K (2012) Modeling ion channel dynamics through reflected stochastic differential equations. Phys Rev E 85:051907

  • Delarue F, Inglis J, Rubenthaler S, Tanré E (2015) Global solvability of a networked integrate-and-fire model of McKean–Vlasov type. Ann Appl Probab 25(4):2096–2133

  • Ermentrout GB, Terman DH (2010) Mathematical foundations of neuroscience. Springer, New York

  • Faugeras O, Touboul J, Cessac B (2009) A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci 3:1

  • FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve membrane. Biophys J 1(6):445–466

  • Fournier N, Guillin A (2015) On the rate of convergence in Wasserstein distance of the empirical measure. Probab Theory Relat Fields 162(3–4):707–738

  • Fournier N, Löcherbach E (2016) On a toy model of interacting neurons. Ann Inst Henri Poincaré Probab Stat 52(4):1844–1876

  • Friedman A (2006) Stochastic differential equations and applications. Dover Publications Inc., Mineola (two volumes bound as one; reprint of the 1975 and 1976 originals)

  • Gärtner J (1988) On the McKean–Vlasov limit for interacting diffusions. Math Nachr 137:197–248

  • Giacomin G, Luçon E, Poquet C (2014) Coherence stability and effect of random natural frequencies in populations of coupled oscillators. J Dyn Differ Equ 26(2):333–367

  • Goldwyn J, Shea-Brown E (2011) The what and where of adding channel noise to the Hodgkin–Huxley equations. PLoS Comput Biol 7(11):e1002247

  • Goldwyn J, Imennov NS, Famulare M, Shea-Brown E (2011) Stochastic differential equation models for ion channel noise in Hodgkin–Huxley neurons. Phys Rev E 83(4):041908

  • Hansel D, Mato G (1993) Patterns of synchrony in a heterogeneous Hodgkin–Huxley neural network with weak coupling. Phys A Stat Mech Appl 200(1–4):662–669

  • Hansel D, Mato G, Meunier C (1993) Phase dynamics for weakly coupled Hodgkin–Huxley neurons. EPL 23(5):367

  • Hodgkin A, Huxley A (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544

  • Hormuzdi SG, Filippov MA, Mitropoulou G, Monyer H, Bruzzone R (2004) Electrical synapses: a dynamic signaling system that shapes the activity of neuronal networks. BBA Biomembr 1662(1–2):113–137

  • Izhikevich EM (2007) Dynamical systems in neuroscience. The MIT Press, Cambridge

  • Jiruska P, de Curtis M, Jefferys JGR, Schevon CA, Schiff SJ, Schindler K (2013) Synchronization and desynchronization in epilepsy: controversies and hypotheses. J Physiol 591(4):787–797

  • Karatzas I, Shreve S (1991) Brownian motion and stochastic calculus. Graduate texts in mathematics, 2nd edn. Springer, New York

  • Kopell N, Ermentrout B (2004) Chemical and electrical synapses perform complementary roles in the synchronization of interneuronal networks. Proc Natl Acad Sci 101(43):15482–15487

  • Kuramoto Y (1984) Chemical oscillations, waves, and turbulence. Springer, Berlin

  • Lapicque L (1907) Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J Physiol Pathol Gen (Paris) 9:620–635

  • Luçon E, Poquet C (2017) Long time dynamics and disorder-induced traveling waves in the stochastic Kuramoto model. Ann Inst Henri Poincaré Probab Stat 53(3):1196–1240

  • Marella S, Ermentrout GB (2008) Class-II neurons display a higher degree of stochastic synchronization than class-I neurons. Phys Rev E 77(4):041918

  • Méléard S (1996) Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In: Probabilistic models for nonlinear partial differential equations. Springer, pp 42–95

  • Mischler S, Quiñinao C, Touboul J (2016) On a kinetic FitzHugh–Nagumo model of neuronal network. Commun Math Phys 342(3):1001–1042

  • Morris C, Lecar H (1981) Voltage oscillations in the barnacle giant muscle fiber. Biophys J 31(1):193–213

  • Nagumo J, Arimoto S, Yoshizawa S (1962) An active pulse transmission line simulating nerve axon. Proc IRE 50:2061–2070

  • Ostojic S, Brunel N, Hakim V (2008) Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneities. J Comput Neurosci 26(3):369

  • Pakdaman K, Thieullen M, Wainrib G (2010) Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab 42(3):761–794

  • Perthame B, Salort D (2013) On a voltage-conductance kinetic system for integrate and fire neural networks. Kinet Relat Models 6(4):841–864

  • Pikovskii AS (1984) Synchronization and stochastization of nonlinear oscillations by external noise. In: Nonlinear and turbulent processes in physics, vol 1, p 1601

  • Pikovsky A, Rosenblum M, Kurths J (2003) Synchronization: a universal concept in nonlinear sciences, vol 12. Cambridge University Press, Cambridge

  • Sacerdote L, Giraudo M (2013) Stochastic integrate and fire models: a review on mathematical methods and their applications. Springer, Berlin, pp 99–148

  • Sznitman A-S (1991) Topics in propagation of chaos. In: École d'été de probabilités de Saint-Flour XIX—1989. Springer, pp 165–251

  • Villani C (2009) Optimal transport, old and new. Grundlehren der Mathematischen Wissenschaften, vol 338. Springer, Berlin

  • Wainrib G (2010) Randomness in neurons: a multiscale probabilistic analysis. PhD thesis, École Polytechnique

Download references

Author information

Corresponding author

Correspondence to Héctor Olivero.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Joaquín Fontbona: Supported by CMM-Basal Conicyt AFB 170001, Nucleo Milenio NC 130062 and Fondecyt Grant 1150570.

Héctor Olivero: Partially supported by Nucleo Milenio NC 130062, Beca Chile Postdoctorado and Fondecyt Postdoctoral Grant 3180777.

This research was partially supported by the supercomputing infrastructure of NLHPC (ECM-02), Conicyt.

Appendices

Basic properties of the model (2.1)

We start by establishing three basic facts about the system of stochastic differential equations (2.1): its (strong) global well-posedness, the fact that the open-channel proportion processes stay (as required) in [0, 1] and, finally, an explicit global bound for the voltage processes in terms of a bound on the initial values.

Lemma A.1

Assume Hypothesis 2.1. Then strong existence and pathwise uniqueness hold for system (2.1). Moreover, a.s. for all \(t\ge 0\) and every \(i=1,\ldots ,N\) we have \(( m_t^{(i)},n_t^{(i)},h_t^{(i)},y_t^{(i)})\in [0,1]^4\). In particular, the absolute value in (2.2) can be removed.

Proof

It is enough to prove the result for deterministic initial data, so we assume this is the case. Fix \(M>0\), and for \(j=1,3,4\) define truncation functions \(p^j_M\) on \(\mathbb {R}\) by

$$\begin{aligned} p^j_M(x) = \left\{ \begin{array}{ll} x^j&{}x\in [-M,M]\\ M^j&{}x\in (M,\infty )\\ (-M)^j&{}x\in (-\infty ,-M). \end{array}\right. \end{aligned}$$
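The truncation above simply freezes \(x^j\) at its boundary values outside \([-M,M]\); a one-line numerical transcription (function and argument names are ours, not from the paper):

```python
import numpy as np

def p_M(x, j, M):
    """Truncation p^j_M: equals x**j on [-M, M] and is frozen at
    (+/-M)**j outside, i.e. clip first, then take the power."""
    return np.clip(x, -M, M) ** j
```

Because the clipped argument is bounded, each \(p^j_M\) is globally Lipschitz, which is what makes the truncated drift coefficients below Lipschitz.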

Let \(X^{(M)}:=(X^{(1,M)},\ldots , X^{(N,M)}) \) with \(X_t^{(i,M)}=(V_t^{(i,M)}, m_t^{(i,M)}, n_t^{(i,M)}, h_t^{(i,M)},y_t^{(i,M)})\), \(i=1,\ldots ,N\) be defined by

$$\begin{aligned} V_t^{(i,M)}= & {} V^{(i,M)}_0 + \int _{0}^{t}F_M(V_s^{(i,M)}, m_s^{(i,M)},n_s^{(i,M)},h_s^{(i,M)})\nonumber \\&\quad -\frac{1}{N}\sum _{j=1}^{N}J_{\text {E}}(V_s^{(i,M)}-V_s^{(j,M)}) \nonumber \\&\quad -\frac{1}{N}\sum _{j=1}^{N}{J_{\text {Ch}}p_M^1(y^{(j,M)}_s) (p_M^1(V_s^{(i,M)})-V_\text {rev})}ds,\nonumber \\ x^{(i,M)}_t= & {} x^{(i,M)}_0+\int _{0}^{t}\rho _x(p_M^1(V^{(i,M)}_s))(1-p_M^1(x^{(i,M)}_s))\nonumber \\&\quad -\zeta _x(p_M^1(V^{(i,M)}_s))p_M^1(x^{(i,M)}_s)ds\nonumber \\&\quad + \int _{0}^{t}{\sigma _x(p_M^1(V_s^{(i,M)}),x_s^{(i,M)})dW_s^{x,i}},\;\;x=m,n,h,y , \, \end{aligned}$$
(A.1)

where

$$\begin{aligned} F_M(v,m,n,h)= & {} I - g_\text {K}p_M^4(n)(p_M^1(v)-V_\text {K}) - g_\text {Na}p_M^3(m)p_M^1(h)(p_M^1(v)-V_\text {Na}) \nonumber \\&-\,g_\text {L}(v-V_\text {L}). \end{aligned}$$
(A.2)

It is immediate that the drift coefficients in system (A.1) are Lipschitz continuous. This is less clear for the diffusion coefficients, so we check this point next. Notice that

$$\begin{aligned} \max _{(v,u)\in \mathbb {R}\times [0,1]} \rho _x(p_M^1(v))(1-u) + \zeta _x(p_M^1(v))u\le S_M := \max _{v\in [-M,M]} \{\rho _x(v) + \zeta _x(v)\}<\infty \end{aligned}$$

whereas, thanks to point 2) in Hypothesis 2.1,

$$\begin{aligned} \min _{(v,u)\in \mathbb {R}\times [0,1]} \rho _x(p_M^1(v))(1-u) + \zeta _x(p_M^1(v))u\ge \delta _M := \min _{v\in [-M,M]}\{\rho _x(v), \zeta _x(v)\}>0. \end{aligned}$$

Therefore, one can find a bounded Lipschitz continuous function \(g_x: \mathbb {R}\rightarrow \mathbb {R}_+\) such that \(g_x(s)=\sqrt{s}\) on \((\delta _M/2,2S_M)\) and rewrite the diffusion coefficients in (A.1) as

$$\begin{aligned} \sigma _x(p^1_M(v),u)= \sigma g_x( |\rho _x(p^1_M(v))(1-u) + \zeta _x(p^1_M(v))u|)\chi (u). \end{aligned}$$

It is then easily seen that \(|\sigma _x(p^1_M(v),u)-\sigma _x(p^1_M(v'),u')|\le C_M (|u-u'|+|v-v'|) \) for some \(C_M>0\) in each of the three cases \((u,u')\in [0,1]^2\), \((u,u')\in ( [0,1]^2)^c\) and \((u,u')\in [0,1]\times [0,1]^c\) for any \(v,v'\in \mathbb {R}\). Thus, global pathwise well-posedness for system (A.1) holds.
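One explicit choice of such a function \(g_x\) is to freeze the argument of the square root at the endpoints of the window \((\delta _M/2,2S_M)\); a minimal sketch (names and the clipping construction are ours):

```python
import math

def g_sqrt(s, delta_M, S_M):
    """Bounded Lipschitz extension of the square root: equals sqrt(s)
    on (delta_M/2, 2*S_M) and is frozen at the boundary values outside,
    hence globally Lipschitz with constant 1/(2*sqrt(delta_M/2))."""
    s_frozen = min(max(s, delta_M / 2.0), 2.0 * S_M)
    return math.sqrt(s_frozen)
```

Since the quantity inside the square root in the diffusion coefficient stays in \([\delta _M,S_M]\), composing with this extension does not change its values there while making the composition Lipschitz on all of \(\mathbb {R}\).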

Thanks to the second assumption in point 4) of Hypothesis 2.1 and the fact that \(\sigma _x(v,u)=0\) for \((v,u)\in \mathbb {R}\times (0,1)^c\) and \([\rho _x(v)(1-u)- \zeta _x(v)u][\mathbf {1}_{(-\infty ,0]}(u)- \mathbf {1}_{[1,\infty )}(u)]\ge 0\) for \((v,u)\in \mathbb {R}^2\), we can now apply Proposition 3.3 in Bossy et al. (2015) to get that \(x^{(1,M)},\ldots , x^{(N,M)}\) are confined in [0, 1] for all time (notice that the proof of that result still works if Hypothesis 2.1 i) therein, that \(\chi \) be compactly supported in (0, 1), is replaced by \(\chi \) being supported in [0, 1]).

We can now use standard arguments to deduce global existence and pathwise uniqueness of a solution to system (2.1). Indeed, setting \(\theta _M = \inf \{t\ge 0: |X_t^{(M)}|\ge M\}\), using the global Lipschitz character of its coefficients together with Itô calculus and Gronwall’s lemma we get for every \(M'>M\) that a.s. for all \(t\ge 0\), \(X^{(M)}_{t\wedge \theta _M} = X_{t\wedge \theta _M}^{(M')}\). This implies that \(\theta _{M'}>\theta _M\) a.s. and allows us to unambiguously define a process X solving (2.1) on the random interval \([0,\theta )\), with \(\theta := \sup _{M>0}\theta _M\), by \(X_t= X^{(M)}_t\) for all \(t\in [0,\theta _M]\). On the other hand, since \(|p_M^1(z)|\le |z|\) for all \(z\in \mathbb {R}\), for two constants \(C_1,C_2>0\) not depending on \(M>0\) we have \(|F_M(v,m,n,h)|\le C_1+ C_2 |v|\) for every \((v,m,n,h)\in \mathbb {R}\times [0,1]^3\). Using this control on the right hand side of the equations for \(V^{(1,M)},\ldots , V^{(N,M)}\) in (A.1) and Gronwall’s lemma we get

$$\begin{aligned} \mathbb {E}\left[ |X_{t\wedge \theta _M}| \right] \le C(t), \end{aligned}$$

for some constant \(C(t)>0\) not depending on M. This yields \(M\mathbb {P}\left[ \theta _M<t \right] \le C(t) \), whence \(\mathbb {P}\left[ \theta <\infty \right] =0\) letting M and then \(t \nearrow \infty \). The statement follows. \(\square \)

Remark A.2

  1. (i)

    The arguments given in the previous proof also show that each of the functions \(\sigma _x\) is locally Lipschitz on \(\mathbb {R}\times [0,1]\).

  2. (ii)

    The same proof also works for some extensions of our model. For instance, if independent Brownian motions are added to each of the voltage processes.

We next show that under the additional Hypothesis 2.2, each of the voltage processes is bounded uniformly in time and in N. Here and in the sequel we denote

$$\begin{aligned} V^\text {max}_{t,\infty }:= \max _{i=1,\ldots ,N} \sup _{s\in [t,\infty )} | V_s^{(i)}|. \end{aligned}$$

We also set

$$\begin{aligned} R_\text {max}:= \max _{r,s,u \in [0,1]} |I+g_\text {Na}V_\text {Na}r +g_\text {K}V_\text {K}s+ g_\text {L}V_\text {L}+ J_{\text {Ch}}V_\text {rev}u |. \end{aligned}$$

Proposition A.3

Under Hypothesis 2.2, for every \(N\ge 1\) and \(t\ge 0\) we have a.s.

$$\begin{aligned} \left| \bar{V}^N_t \right| \le V^\text {max}_0 e^{-g_\text {L}t} +\frac{ 2 R_\text {max}}{g_\text {L}}(1-e^{-g_\text {L}t}) \end{aligned}$$

and

$$\begin{aligned} V^\text {max}_{t,\infty }\le V^*_t:= \frac{4R_\text {max}}{g_\text {L}}+2 V^\text {max}_0 e^{-g_\text {L}t}. \end{aligned}$$
(A.3)

As a consequence for every \(N\ge 1\), there exists at least one invariant law \(\mu ^N_{\infty }\) for the solution to (2.1), namely there exists a solution \((X_t, t\ge 0 )\) to (2.1) such that \(X_t\) has law \(\mu ^N_{\infty }\) for all \(t\ge 0\) as soon as \(X_0\) has law \(\mu ^N_{\infty }\). Moreover, this invariant measure is exchangeable.
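Both bounds of Proposition A.3 are explicit in the parameters, so they are easy to evaluate; a quick numerical transcription (the parameter values in the test are placeholders, not values from the paper):

```python
import numpy as np

def vbar_bound(t, V0max, Rmax, gL):
    """Right-hand side of the first bound of Proposition A.3."""
    return V0max * np.exp(-gL * t) + (2.0 * Rmax / gL) * (1.0 - np.exp(-gL * t))

def v_star(t, V0max, Rmax, gL):
    """Envelope V*_t = 4 R_max/g_L + 2 V0max e^{-g_L t} from (A.3)."""
    return 4.0 * Rmax / gL + 2.0 * V0max * np.exp(-gL * t)
```

As \(t\rightarrow \infty \) the envelope tends to \(4R_\text {max}/g_\text {L}\), independently of the initial condition, which is exactly the feature exploited in Remark A.4.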

Remark A.4

  1. (i)

The bound \(V_t^* \) on \( V^\text {max}_{t,\infty }\) is in general not optimal. For instance, if \(\frac{ 2 R_\text {max}}{g_\text {L}+ J_{\text {E}}}< V^\text {max}_0< \frac{ 2 R_\text {max}}{g_\text {L}}\), one gets from the last identity in (A.5) that \( V^\text {max}_{0,\infty }\le \frac{4 R_\text {max}}{g_\text {L}}<V_0^*\). However, in order to state a synchronization result that holds for a general class of initial conditions \(V_0\), the facts that the bound \(V^*_t\) does not depend on the electrical connectivity \(J_{\text {E}}\) and that \(V^*_{\infty }:= \lim _{t \rightarrow \infty } V^*_t= \frac{4 R_\text {max}}{g_\text {L}}\) does not depend on the initial condition will be crucial. See point i) in Remark B.4 for a related discussion.

  2. (ii)

If point 2) of Hypothesis 2.2 does not hold, by slightly modifying the arguments of Proposition A.3 we can still get the a.s. bound

    $$\begin{aligned} \left| V_t^{(i)}\right| \le \frac{4R_\text {max}}{g_\text {L}}+2 \frac{ | V_0|}{\sqrt{N}} e^{-g_\text {L}t} \, \end{aligned}$$

    implying a uniform in N bound for \(\mathbb {E}( V^\text {max}_{t,\infty })\) if for instance all the random variables \(V_0^{i,N}\), \(i=1,\ldots ,N\), \(N\ge 1\) are equal in law and have finite second moment. However, we have not been able to fully extend our results to such a framework.

  3. (iii)

    The same arguments also show that a bound like (A.3) holds with \(V^\text {max}_{t,\infty }\) replaced by

    $$\begin{aligned} \widehat{V^\text {max}}_{t,\infty }:= \max _{i=1,\ldots ,N} \sup _{s\in [t,\infty )} |\widehat{ V}_s^{(i)}|. \end{aligned}$$

    That is, the voltages obtained with the EPE scheme are also uniformly bounded.

In the proof of Proposition A.3 and later, we will make use of the following version of Gronwall's lemma [see e.g. Ambrosio et al. (2008, p. 88)].

Lemma A.5

Let \(\theta :[0,+\infty )\rightarrow \mathbb {R}\) be a locally absolutely continuous function and \(a, b \in L_{\text {loc}}^1([0,+\infty ))\) be given functions satisfying, for \(\lambda \in \mathbb {R}\),

$$\begin{aligned} \frac{d}{dt}\theta ^2(t)+2\lambda \theta ^2(t)\le a(t) + 2b(t)\theta (t)\;\text {for }\mathcal {L^1}-a.e.\;t>0. \end{aligned}$$

Then for every \(T>0\) we have

$$\begin{aligned} e^{\lambda T}|\theta (T)|\le \left( \theta ^2(0)+ \sup _{t\in [0,T]}\int _{0}^{t}{e^{2\lambda s}a(s)ds}\right) ^{1/2}+2\int _{0}^{T}{e^{\lambda t}|b(t)|dt}. \end{aligned}$$
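As a sanity check on Lemma A.5, one can integrate an ODE that saturates its hypothesis with constant coefficients and compare both sides of the conclusion numerically; the constants below are arbitrary choices of ours:

```python
import math

# Toy check of Lemma A.5: integrate d/dt theta^2 + 2*lam*theta^2 = a + 2*b*theta
# (constant a, b > 0) by explicit Euler on phi = theta^2, then compare
# exp(lam*T)*|theta(T)| with the bound of the lemma.
lam, a, b, theta0, T, dt = 1.0, 0.5, 0.3, 1.0, 2.0, 1e-4

phi = theta0 ** 2
for _ in range(int(T / dt)):
    phi += dt * (-2.0 * lam * phi + a + 2.0 * b * math.sqrt(phi))

lhs = math.exp(lam * T) * math.sqrt(phi)
# For constant a and b the right-hand side of the lemma reads:
rhs = math.sqrt(theta0 ** 2 + a * (math.exp(2 * lam * T) - 1) / (2 * lam)) \
      + 2 * b * (math.exp(lam * T) - 1) / lam
```

With these values the left-hand side stays well below the bound, as the lemma guarantees.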

Proof of Proposition A.3

Setting

$$\begin{aligned} R^{(i)}_s:= & {} I+g_\text {Na}V_\text {Na}\left[ m^{(i)}_s\right] ^3h^{(i)}_s+g_\text {K}V_\text {K}\left[ n^{(i)}_s\right] ^4 + g_\text {L}V_\text {L}+ J_{\text {Ch}}V_\text {rev}\bar{y}^N_s, \text{ and } \\ A^{(i)}_s:= & {} g_\text {Na}\left[ m^{(i)}_s\right] ^3h^{(i)}_s+g_\text {K}\left[ n^{(i)}_s\right] ^4 + g_\text {L}+ J_{\text {Ch}}\bar{y}_s^N , \end{aligned}$$

the dynamics of the potential can be written as

$$\begin{aligned} V_t^{(i)} = V_0^{(i)} + \int _{0}^{t}{R^{(i)}_s - A^{(i)}_sV_s^{(i)} - J_{\text {E}}V^{(i)}_s+ J_{\text {E}}\bar{V}^N_sds}. \end{aligned}$$

Therefore, we get

$$\begin{aligned} \left( V_t^{(i)}\right) ^2 = \left( V_0^{(i)}\right) ^2 + 2\int _{0}^{t}{R^{(i)}_s V^{(i)}_s - A^{(i)}_s\left( V_s^{(i)}\right) ^2 - J_{\text {E}}\left( V^{(i)}_s\right) ^2+ J_{\text {E}}\bar{V}^N_s V^{(i)}_sds}. \end{aligned}$$
(A.4)

and

$$\begin{aligned} | V_t|^2 = |V_0|^2 + 2\int _{0}^{t}{\sum _{i=1}^N\left[ R^{(i)}_s V^{(i)}_s - A^{(i)}_s\left( V_s^{(i)}\right) ^2\right] - J_{\text {E}}|V_s|^2+ NJ_{\text {E}}(\bar{V}^N_s)^2ds}. \end{aligned}$$

Notice that

$$\begin{aligned} (\bar{V}^N_s)^2&=\frac{1}{N^2}\sum _{i,j=1}^{N}V^{(i)}_sV^{(j)}_s \le \frac{1}{2N^2}\sum _{i,j=1}^{N}(V^{(i)}_s)^2 + (V^{(j)}_s)^2 = \frac{1}{N}|V_s|^2, \end{aligned}$$

which yields

$$\begin{aligned} \frac{d}{dt}| V_t|^2 + 2g_\text {L}| V_t|^2 \le 2|R_t|| V_t|. \end{aligned}$$

By Lemma A.5 we deduce that

$$\begin{aligned} | V_t| \le | V_0| e^{-g_\text {L}t} + 2 e^{-g_\text {L}t}\int _{0}^{t}{e^{g_\text {L}s}|R_s|ds}. \end{aligned}$$

Since \(|R^{(i)}_s|\le R_\text {max}\), and hence \(|R_s|\le \sqrt{N}\,R_\text {max}\), we then get

$$\begin{aligned} \left| \bar{V}^N_t \right| \le \frac{ | V_t|}{\sqrt{N}} \le \frac{ | V_0|}{\sqrt{N}} e^{-g_\text {L}t} +\frac{ 2 R_\text {max}}{g_\text {L}}(1-e^{-g_\text {L}t})\le V^\text {max}_0 e^{-g_\text {L}t} +\frac{ 2 R_\text {max}}{g_\text {L}}(1-e^{-g_\text {L}t}), \end{aligned}$$

which is the first desired inequality. Using this in (A.4) yields

$$\begin{aligned}&\frac{d}{dt}(V_t^{(i)})^2 + 2(g_\text {L}+J_{\text {E}})(V_t^{(i)})^2\\&\quad \le 2|R^{(i)}_t + J_{\text {E}}\bar{V}^N_t||V_t^{(i)}|\\&\quad \le 2 \left( R_\text {max}+ J_{\text {E}}\left( V^\text {max}_0 -\frac{ 2 R_\text {max}}{g_\text {L}}\right) e^{-g_\text {L}t} +\frac{ 2 J_{\text {E}}R_\text {max}}{g_\text {L}}\right) |V_t^{(i)}|. \end{aligned}$$

Applying once again Lemma A.5, we obtain

$$\begin{aligned} \left| V_t^{(i)}\right|&\le V^\text {max}_0 e^{-(g_\text {L}+J_{\text {E}})t}\nonumber \\&\quad +2e^{-(g_\text {L}+J_{\text {E}})t}\int _{0}^{t}e^{(g_\text {L}+J_{\text {E}})s} \left( R_\text {max}\left( \frac{g_\text {L}+ 2 J_{\text {E}}}{g_\text {L}}\right) \right. \nonumber \\&\qquad \left. + J_{\text {E}}\left( V^\text {max}_0 -\frac{ 2 R_\text {max}}{g_\text {L}}\right) e^{-g_\text {L}s}\right) ds\nonumber \\&=V^\text {max}_0 e^{-(g_\text {L}+J_{\text {E}})t}+2R_\text {max}\left( \frac{g_\text {L}+ 2 J_{\text {E}}}{g_\text {L}(g_\text {L}+J_{\text {E}})}\right) (1- e^{-(g_\text {L}+J_{\text {E}})t})\nonumber \\&\quad +2 \left( V^\text {max}_0 -\frac{ 2 R_\text {max}}{g_\text {L}}\right) \left( e^{-g_\text {L}t} -e^{-(g_\text {L}+J_{\text {E}})t}\right) \nonumber \\&=\frac{2R_\text {max}}{g_\text {L}}\left( \frac{g_\text {L}+ 2 J_{\text {E}}}{g_\text {L}+J_{\text {E}}}\right) +2 \left( V^\text {max}_0 -\frac{ 2 R_\text {max}}{g_\text {L}}\right) e^{-g_\text {L}t}\nonumber \\&\quad + \left( \frac{ 2 R_\text {max}}{g_\text {L}+J_{\text {E}}} - V^\text {max}_0 \right) e^{-(g_\text {L}+J_{\text {E}})t}\nonumber \\&\le \frac{4R_\text {max}}{g_\text {L}}+2 V^\text {max}_0 e^{-g_\text {L}t}=V_t^* \end{aligned}$$
(A.5)

which implies the asserted bounds on \( V^\text {max}_{t,\infty }\).
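The last inequality in (A.5) can be double-checked numerically: the closed-form expression on its penultimate line is dominated by the envelope \(V^*_t\) for any positive parameters (the sampling ranges below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def penultimate(t, V0, R, gL, JE):
    """Closed-form upper bound for |V_t^{(i)}| from the penultimate line of (A.5)."""
    return (2 * R / gL) * (gL + 2 * JE) / (gL + JE) \
        + 2 * (V0 - 2 * R / gL) * np.exp(-gL * t) \
        + (2 * R / (gL + JE) - V0) * np.exp(-(gL + JE) * t)

def envelope(t, V0, R, gL):
    """V*_t = 4 R/g_L + 2 V0 e^{-g_L t}."""
    return 4 * R / gL + 2 * V0 * np.exp(-gL * t)

t = np.linspace(0.0, 20.0, 200)
dominated = True
for _ in range(100):
    V0, R, gL, JE = rng.uniform(0.1, 10.0, size=4)
    dominated &= bool(np.all(penultimate(t, V0, R, gL, JE)
                             <= envelope(t, V0, R, gL) + 1e-9))
```

The domination is in fact strict: the gap is at least \(2R_\text {max}/(g_\text {L}+J_{\text {E}})\), since \(\frac{g_\text {L}+2J_{\text {E}}}{g_\text {L}+J_{\text {E}}}\le 2\) and \(e^{-(g_\text {L}+J_{\text {E}})t}\le e^{-g_\text {L}t}\).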

Let us now deduce the existence of an invariant distribution which is exchangeable. Let \(P_t^N\) denote the semigroup associated with the solution of (2.1), that is, for each \(\mathcal {X}\in (\mathbb {R}\times [0,1]^4)^N\) and every Borel set B of \((\mathbb {R}\times [0,1]^4)^N\),

$$\begin{aligned} P_t^N(\mathcal {X},B)=\mathbb {P}\left( X_t\in B \vert X_0 = \mathcal {X}\right) . \end{aligned}$$

Consider also the probability measure \(R_T^N(\lambda )\) on \((\mathbb {R}\times [0,1]^4)^N\), defined for any law \(\lambda \) as

$$\begin{aligned} R_T^N(\lambda )(B)=\int _{ (\mathbb {R}\times [0,1]^4)^N} \left( \frac{1}{T}\int _{0}^{T} P_t^N(\mathcal {X},B) dt \right) \lambda (d\mathcal {X}). \end{aligned}$$

Since the voltage component is uniformly bounded in time, by (A.5), the solution to (2.1) lies in the compact set \(([-4\tfrac{R_\text {max}}{g_\text {L}}-2V^\text {max}_0, 4\tfrac{R_\text {max}}{g_\text {L}}+2V^\text {max}_0]\times [0,1]^4)^N\). Then, for any \((T_M)\nearrow \infty \) and any \(\lambda \) with compact support, the sequence \((R_{T_M}^N(\lambda ),M\ge 0)\) is tight and has a subsequence weakly converging to some probability measure \(\mu _{\infty }^N\). By the Krylov–Bogoliubov theorem, \(\mu _{\infty }^N\) is invariant for \(P_t^N\).

Let us now choose an exchangeable initial law \(\lambda \). For any measurable and bounded function \(\psi \) and any N-permutation \(\pi \) of the coordinates, the identity

$$\begin{aligned} \int _{(\mathbb {R}\times [0,1]^4)^N}P_t^N(\mathcal {X},dy) \psi (y) \lambda (d\mathcal {X}) = \int _{(\mathbb {R}\times [0,1]^4)^N}P_t^N(\mathcal {X},dy) (\psi \circ \pi )(y) \lambda (d\mathcal {X}) \end{aligned}$$

follows directly from the exchangeable structure of the system of Eq. (2.1). Therefore, \(R_{T_M}^N(\lambda )\) is exchangeable for any \(T_M\), and the corresponding \(\mu _{\infty }^N\) is exchangeable as the weak limit of exchangeable measures. \(\square \)
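The Krylov–Bogoliubov construction above can be illustrated on a one-dimensional toy diffusion (an Ornstein–Uhlenbeck process, not the system (2.1)): the Cesàro time average along one long trajectory approximates the invariant law, here checked through the stationary second moment.

```python
import numpy as np

# Euler-Maruyama simulation of dX = -X dt + dW. The time average of X_t^2
# over [0, T], started away from equilibrium, should approach the stationary
# second moment 1/2, mirroring the time-averaged measures R_T^N above.
rng = np.random.default_rng(0)
dt, T = 0.01, 200.0
x, acc = 2.0, 0.0
for _ in range(int(T / dt)):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal()
    acc += x * x * dt
avg_sq = acc / T  # Cesàro average of X_t^2
```

The transient from the initial condition contributes only \(O(1/T)\) to the average, so for large T the time-averaged law is close to the invariant one.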

Synchronization: proof of Theorem 2.3 a

In the sequel, for any locally bounded real function f on \(\mathbb {R}\) and each \(R>0\) we will write

$$\begin{aligned} \Vert f \Vert _{\infty ,R}:= \sup _{v\in [-R,R]}|f (v)|. \end{aligned}$$

We will repeatedly use a simple control on the increments of the function F, stated in the next lemma for convenience:

Lemma B.1

We have

$$\begin{aligned} \begin{aligned}&\left( F(V_1,m_1,n_1,h_1)-F(V_2,m_2,n_2,h_2)\right) (V_1-V_2) \\&\quad \le - g_\text {L}(V_1-V_2)^2\\&\qquad + 4g_\text {K}|V_2-V_\text {K}| | n_1 - n_2||V_1-V_2| \\&\qquad + 3g_\text {Na}|V_2-V_\text {Na}||m_1-m_2||V_1-V_2|\\&\qquad + g_\text {Na}|V_2-V_\text {Na}| |h_1-h_2||V_1-V_2|. \end{aligned} \end{aligned}$$
(B.1)

for every \(m_i,n_i,h_i\in [0,1]\) and \(V_i\in \mathbb {R}\), \(i=1,2\).

Proof

Since

$$\begin{aligned} x^4-y^4 = (x^2+y^2)(x+y)(x-y), \end{aligned}$$

and

$$\begin{aligned} x^3u-y^3v = u(x^2+xy+y^2)(x-y)+y^3(u-v), \end{aligned}$$

we get

$$\begin{aligned} \begin{aligned}&F(V_1,m_1,n_1,h_1)-F(V_2,m_2,n_2,h_2)\\&\quad = - (g_\text {K}n^4_1+g_\text {Na}m^3_1h_1+g_\text {L})(V_1-V_2)\\&\qquad - g_\text {K}(V_2-V_\text {K})( n^2_1 + n^2_2)( n_1 + n_2) ( n_1 - n_2) \\&\qquad - g_\text {Na}(V_2-V_\text {Na})h_1 (m^2_1+m_1m_2+m_2^2)(m_1-m_2)\\&\qquad - g_\text {Na}(V_2-V_\text {Na}) m^3_2(h_1-h_2) \end{aligned} \end{aligned}$$

and the asserted bound follows. \(\square \)

The following result is the core of the proof of Theorem 2.3:

Proposition B.2

For each \(V^*>0 \), there are constants \(J_{\text {E}}^*>0\) and \(\lambda ^*>0\) not depending on N nor on \(\sigma \) such that for each \(J_{\text {E}}>J_{\text {E}}^*\) and any solution X of (2.1) satisfying \(V^\text {max}_{0,\infty }\le V^*\), one has

$$\begin{aligned} \mathbb {E}\left( \left| X_t^{(i)} -X_t^{(j)} \right| ^2\right) \le \mathbb {E}\left( \left| X_0^{(i)} -X_0^{(j)} \right| ^2 \right) e^{-\lambda ^* t} + \sigma ^2 \frac{2 C^*_{\zeta ,\rho }}{\lambda ^*} \, \quad \forall \,t\ge 0, \end{aligned}$$

for all \( i,j\in \{1,\ldots ,N\}\), where

$$\begin{aligned} C^*_{\zeta ,\rho }=\sum _{x=m,n,h,y} \Vert \rho _x \vee \zeta _x \Vert _{\infty ,V^*} <\infty . \end{aligned}$$

Proof

Let us write \(\Delta V_t = V_t^{(i)}-V_t^{(j)}\) and \(\Delta x_t = x_t^{(i)}-x_t^{(j)} \). Thanks to the bound (B.1), we have

$$\begin{aligned} (\Delta V_t)^2&= (\Delta V_0)^2+ 2 \int _{0}^{t}{\left[ F(V^{(i)}_s,m_s^{(i)},n_s^{(i)},h_s^{(i)})-F(V^{(j)}_s,m_s^{(j)},n_s^{(j)},h_s^{(j)})\right] \Delta V_sds}\\&\quad -\int _{0}^{t}{(2J_{\text {E}}+2J_{\text {Ch}}\bar{y}^N_s ) (\Delta V_s )^2ds}\\&\le (\Delta V_0)^2+ \int _{0}^{t}{ 8g_\text {K}|V^{(j)}_s-V_\text {K}| | \Delta n_s||\Delta V_s| + 6g_\text {Na}|V^{(j)}_s-V_\text {Na}|| \Delta m_s||\Delta V_s|ds}\\&\quad + \int _{0}^{t}{ 2g_\text {Na}|V^{(j)}_s-V_\text {Na}| | \Delta h_s||\Delta V_s|ds}-\int _{0}^{t}{(2g_\text {L}+ 2J_{\text {E}}+2J_{\text {Ch}}\bar{y}^N_s ) (\Delta V_s )^2ds}\\&\le (\Delta V_0)^2 +\int _{0}^{t}{\varepsilon _m\left( \Delta m_s\right) ^2+\varepsilon _n \left( \Delta n_s\right) ^2+\varepsilon _h \left( \Delta h_s\right) ^2 ds} \\&\quad -\int _{0}^{t}{\left( 2g_\text {L}+ 2J_{\text {E}}+2J_{\text {Ch}}\bar{y}^N_s - \frac{9 M_\text {Na}^2}{\varepsilon _m}-\frac{16M_\text {K}^2}{\varepsilon _n} -\frac{M_\text {Na}^2}{\varepsilon _h} \right) \left( \Delta V_s\right) ^2ds}, \end{aligned}$$

where we have used Young’s inequality: \(ab\le \varepsilon _x a^2 + \frac{b^2}{4 \varepsilon _x }\) for \(x=m,n,h,y\), with \(\varepsilon _x >0\) to be chosen later, and where we have set

$$\begin{aligned} M_\text {Na} = g_\text {Na}\max _{v\in [-V^*,V^*]}|v-V_\text {Na}|,\;\;M_\text {K} =g_\text {K}\max _{v\in [-V^*,V^*]}|v-V_\text {K}|. \end{aligned}$$

On the other hand, for the channel types \(x=m,n,h,y\), we have

$$\begin{aligned} \mathbb {E}\left[ ( \Delta x_t)^2\right]&=\mathbb {E}\left[ (\Delta x_0)^2\right] +2\int _{0}^{t}{\mathbb {E}\left[ (1-x^{(i)}_t)(\rho _x(V_t^{(i)})-\rho _x(V_t^{(j)}))\Delta x_s\right] ds}\\&\quad -2\int _{0}^{t}{\mathbb {E}\left[ x^{(i)}_t(\zeta _x(V_t^{(i)})-\zeta _x(V_t^{(j)}))\Delta x_s\right] ds}\\&\quad -2\int _{0}^{t}{\mathbb {E}\left[ \left( \rho _x(V_s^{(j)}) + \zeta _x(V_s^{(j)}) \right) (\Delta x_s)^2\right] ds}\\&\quad +\int _{0}^{t}{\mathbb {E}\left[ \sigma _x^2( V_s^{(j)}, x^{(j)}_s) + \sigma _x^2(V_s^{(i)},x^{(i)}_s)\right] ds}. \end{aligned}$$

By our assumptions, for all \(t\ge 0\) and \(k=i,j\) we have

$$\begin{aligned} \sigma _x^2(V^{(k)}_t,x_t)\le \sigma ^2{\left( (1-x^{(k)}_t)\rho _x(V^{(k)}_t)+x^{(k)}_t\zeta _x(V^{(k)}_t)\right) }\le \sigma ^2 \Vert \rho _x \vee \zeta _x \Vert _{\infty ,V^*} . \end{aligned}$$

Using Young’s inequality in the same way as before yields

$$\begin{aligned} \mathbb {E}\left[ ( \Delta x_t)^2\right]&\le \mathbb {E}\left[ (\Delta x_0)^2\right] +\int _{0}^{t}{\mathbb {E}\left[ \frac{(L^*_{\rho _x}+L^*_{\zeta _x})^2}{\varepsilon _x}(\Delta V_s)^2\right] ds}\\&\quad - \left( 2\eta ^*_x-\varepsilon _x\right) \int _{0}^{t}{\mathbb {E}\left[ (\Delta x_s)^2\right] ds}+ 2 t\, \sigma ^2 \Vert \rho _x \vee \zeta _x \Vert _{\infty ,V^*}, \end{aligned}$$

where \(L_f^*\) denotes the Lipschitz constant on \([-V^*,V^*]\) of a locally Lipschitz function f, and where

$$\begin{aligned} \eta ^*_x:= \inf _{v\in [-V^*,V^*]}\left\{ \rho _x(v)+\zeta _x(v)\right\} >0 . \end{aligned}$$

Adding up, we get

$$\begin{aligned} \mathbb {E}\left( |X_t^{(i)} -X_t^{(j)} |^2\right)&\le \mathbb {E}\left[ |X_0^{(i)} -X_0^{(j)} |^2\right] \\&\quad -\int _{0}^{t}\mathbb {E}\Big [ \Big (2g_\text {L}+ 2J_{\text {E}}- \frac{9 M_\text {Na}^2+ (L^*_{\rho _m}+L^*_{\zeta _m})^2}{\varepsilon _m}\\&\quad - \frac{16 M_\text {K}^2 +(L^*_{\rho _n}+L^*_{\zeta _n})^2}{\varepsilon _n} \\&\quad - \frac{M_\text {Na}^2 +(L^*_{\rho _h}+ L^*_{\zeta _h})^2}{\varepsilon _h}- \frac{(L^*_{\rho _y}+L^*_{\zeta _y})^2}{\varepsilon _y} \Big )(\Delta V_s)^2\Big ]ds\\&\quad - \left( 2\eta ^*_m -2\varepsilon _m\right) \int _{0}^{t}{\mathbb {E}\left[ (\Delta m_s)^2\right] ds}- \left( 2\eta ^*_n -2\varepsilon _n\right) \int _{0}^{t}{\mathbb {E}\left[ (\Delta n_s)^2\right] ds}\\&\quad -\left( 2\eta ^*_h -2\varepsilon _h\right) \int _{0}^{t}{\mathbb {E}\left[ (\Delta h_s)^2\right] ds}- \left( 2\eta ^*_y -\varepsilon _y\right) \int _{0}^{t}{\mathbb {E}\left[ (\Delta y_s)^2\right] ds}\\&\quad + 2 t\, \sigma ^2 C^*_{\zeta ,\rho }. \end{aligned}$$

Define now \(\lambda ^*\) as the optimal value of the problem

$$\begin{aligned} \max _{J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y >0} \Psi (J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y ), \end{aligned}$$

where

$$\begin{aligned}&\Psi (J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y ) \\&\quad :=\min \Bigg \{2g_\text {L}+ 2J - \frac{9 M_\text {Na}^2+ (L^*_{\rho _m}+L^*_{\zeta _m})^2}{\varepsilon _m} - \frac{16M_\text {K}^2 +(L^*_{\rho _n}+L^*_{\zeta _n})^2}{\varepsilon _n}\\&\qquad - \frac{M_\text {Na}^2 +(L^*_{\rho _h}+L^*_{\zeta _h})^2}{\varepsilon _h}- \frac{(L^*_{\rho _y}+L^*_{\zeta _y})^2}{\varepsilon _y}, \\&\quad \qquad 2\eta ^*_m -2\varepsilon _m,\,2\eta ^*_n -2\varepsilon _n, 2\eta ^*_h -2\varepsilon _h, 2\eta ^*_y -\varepsilon _y\Bigg \}. \end{aligned}$$
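The role of this maximization can be illustrated numerically. In the sketch below, the values of \(M_\text {Na}\), \(M_\text {K}\), \((L^*_{\rho _x}+L^*_{\zeta _x})^2\) and \(\eta ^*_x\) are purely hypothetical placeholders (not constants computed in this paper); the point is only that, for fixed small \(\varepsilon _x>0\), \(\Psi \) is increasing in \(J\) and eventually positive:

```python
# Hypothetical numerical sketch of the maximization defining lambda*.
# All constants below are illustrative placeholders, not values from the paper.
g_L = 0.3
M_Na, M_K = 12.0, 4.0                            # stand-ins for g_Na max|v-V_Na|, g_K max|v-V_K|
L2 = {"m": 1.0, "n": 1.0, "h": 1.0, "y": 1.0}    # stand-ins for (L*_rho_x + L*_zeta_x)^2
eta = {"m": 0.5, "n": 0.5, "h": 0.5, "y": 0.5}   # stand-ins for eta*_x

def psi(J, eps):
    """Evaluate Psi(J, eps_m, eps_n, eps_h, eps_y) as in the proof."""
    volt_term = (2 * g_L + 2 * J
                 - (9 * M_Na**2 + L2["m"]) / eps["m"]
                 - (16 * M_K**2 + L2["n"]) / eps["n"]
                 - (M_Na**2 + L2["h"]) / eps["h"]
                 - L2["y"] / eps["y"])
    gate_terms = [2 * eta["m"] - 2 * eps["m"],
                  2 * eta["n"] - 2 * eps["n"],
                  2 * eta["h"] - 2 * eps["h"],
                  2 * eta["y"] - eps["y"]]
    return min([volt_term] + gate_terms)

eps = {x: 0.25 for x in "mnhy"}   # small enough that every gating term is positive
# Psi is increasing in J for fixed eps, and positive once J dominates the 1/eps terms:
values = [psi(J, eps) for J in (0.0, 10.0, 10_000.0)]
```

For very large \(J\) the minimum is attained at the gating terms, so \(\Psi \) saturates at \(\min _x\{2\eta ^*_x-2\varepsilon _x,\,2\eta ^*_y-\varepsilon _y\}\); this is the mechanism that makes \(\lambda ^*\) positive but bounded.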

Notice that \(\lambda ^*\) is strictly positive since \(\Psi (J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y )\) can be made so by taking small enough \(\varepsilon _x>0\) for \(x=m,n,h,y\) and then large enough \(J>0\). Calling \(J_{\text {E}}^*\) the smallest \(J>0\) such that \((J,\varepsilon _m,\varepsilon _n,\varepsilon _h,\varepsilon _y )\in \arg \max \Psi \), it follows that for every \(J_{\text {E}}>J_{\text {E}}^*\),

$$\begin{aligned} \mathbb {E}\left( |X_t^{(i)} -X_t^{(j)} |^2\right) \le \mathbb {E}\left[ |X_0^{(i)} -X_0^{(j)} |^2\right] -\lambda ^* \int _{0}^{t}{\mathbb {E}\left( |X_s^{(i)} -X_s^{(j)} |^2\right) ds}+ 2 t\, \sigma ^2C^*_{\zeta ,\rho }. \end{aligned}$$

Applying Lemma A.5, we obtain

$$\begin{aligned} \sqrt{\mathbb {E}\left( |X_t^{(i)} -X_t^{(j)} |^2\right) } \le e^{-\frac{\lambda ^* t}{2}}\left( \mathbb {E}\left[ |X_0^{(i)} -X_0^{(j)} |^2\right] + \frac{(e^{\lambda ^* t}-1)}{\lambda ^*}2 \sigma ^2C^*_{\zeta ,\rho } \right) ^{1/2}, \end{aligned}$$

and the desired result follows. \(\square \)
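The integral inequality fed into Lemma A.5 has the linear form \(u'(t)\le -\lambda ^* u(t)+c\) with \(c=2\sigma ^2C^*_{\zeta ,\rho }\), and the lemma returns the bound \(u(t)\le e^{-\lambda ^* t}u(0)+\frac{c}{\lambda ^*}(1-e^{-\lambda ^* t})\). A minimal discrete check of this Gronwall-type comparison, with hypothetical values \(\lambda =2\), \(c=1\), \(u(0)=4\) (assuming Lemma A.5 is the standard comparison lemma):

```python
import math

# Discrete sketch of the Gronwall-type comparison behind Lemma A.5:
# iterate u' = -lam*u + c with forward Euler and check it stays below
# the bound u(t) <= exp(-lam*t)*u0 + (c/lam)*(1 - exp(-lam*t)).
lam, c, u0 = 2.0, 1.0, 4.0      # hypothetical values (u0 > c/lam)
dt, T = 1e-4, 5.0
u, t, ok = u0, 0.0, True
for _ in range(int(T / dt)):
    u += dt * (-lam * u + c)
    t += dt
    bound = math.exp(-lam * t) * u0 + (c / lam) * (1.0 - math.exp(-lam * t))
    ok = ok and (u <= bound + 1e-9)
# At large times both u and the bound approach the level c/lam.
```

This is exactly the structure of the synchronization estimates below: the exponential term forgets the initial spread, while the noise leaves a residual level proportional to \(c/\lambda \).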

The next result removes the dependence of the previous one on the bound \(V^*\), at the price of ensuring exponentially fast synchronization only from some time instant \(t_0\ge 0\) on. It will then be easy to deduce part a) of Theorem 2.3.

Theorem B.3

There are constants \(J_{\text {E}}^0>0\) and \(\lambda ^0>0\) not depending on \(N\ge 1\), on \(\sigma \ge 0\) nor on the initial data, and \(t_0\ge 0\) not depending on \(N\ge 1\) nor on \(\sigma \ge 0\), such that for each \(J_{\text {E}}>J_{\text {E}}^0\) the solution X of (2.1) satisfies, for every \(t\ge t_0\),

$$\begin{aligned} \mathbb {E}\left( |X_t^{(i)} -X_t^{(j)} |^2\right) {\le }\mathbb {E}\left( |X_{t_0}^{(i)} -X_{t_0}^{(j)} |^2 \right) e^{-\lambda ^0( t-t_0)} + \sigma ^2 \frac{2 C^0_{\zeta ,\rho }}{\lambda ^0}, \quad \forall \, i,j\in \{1,\ldots ,N\}, \end{aligned}$$

where

$$\begin{aligned} C^0_{\zeta ,\rho }:=\sum _{x=m,n,h,y} \Vert \rho _x \vee \zeta _x \Vert _{\infty ,\frac{ 5R_\text {max}}{g_L}} <\infty . \end{aligned}$$

Proof

Fix \(\epsilon _0 \in (0,1)\), take \(t_0\ge 0\) such that \(2 V^\text {max}_0 e^{-g_\text {L}t_0}\le \epsilon _0 \frac{R_\text {max}}{g_L} \) and, conditionally on the sigma-field generated by \((X_s:s \le t_0)\), apply Proposition B.2 to the shifted process \(X':=(X_{t+t_0}:t\ge 0)\) with \(V^*=V^*_{t_0}\le (4+\epsilon _0)\frac{ R_\text {max}}{g_L}\le 5\frac{ R_\text {max}}{g_L} \). The proof is then completed by taking expectations in the resulting inequality. \(\square \)

We can now finish the proof of Theorem 2.3 a). Here and in the sequel we denote by \(\bar{S}^V_t\) and \(\bar{S}^x_t\) the empirical variance of the voltages and of the x-type channels at time t, respectively:

$$\begin{aligned} \bar{S}^V_t = \frac{1}{N}\sum _{i=1}^N \left( V_t^{(i)}-\bar{V}^N_t\right) ^2 \;\; \text{ and } \; \bar{S}^x_t = \frac{1}{N}\sum _{i=1}^N\left( x_t^{(i)}-\bar{x}^N_t\right) ^2. \end{aligned}$$
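These empirical variances are computed directly from a snapshot of the network state; a small sketch with hypothetical sample values (not data from the paper's simulations):

```python
import numpy as np

# Empirical variances \bar S^V_t and \bar S^m_t of a hypothetical network state.
rng = np.random.default_rng(0)
N = 1000
V = rng.normal(-65.0, 3.0, size=N)   # hypothetical voltages V_t^{(i)}, in mV
m = rng.uniform(0.0, 1.0, size=N)    # hypothetical gating variables m_t^{(i)}

def empirical_variance(a):
    """(1/N) * sum_i (a_i - mean(a))^2, the (biased) empirical variance."""
    return float(np.mean((a - a.mean()) ** 2))

S_V = empirical_variance(V)
S_m = empirical_variance(m)
```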

Proof of Theorem 2.3. a)

Applying in the conclusion of Theorem B.3 the elementary identity

$$\begin{aligned} \frac{1}{N^2}\sum _{i,j=1}^N{(\alpha _i-\alpha _j)^2} = \frac{2}{N}\sum _{k=1}^N{(\alpha _k-\bar{\alpha }^N)^2} \quad \text{ for } \text{ every } \alpha _1,\ldots ,\alpha _N\in \mathbb {R}, \end{aligned}$$

with \(\bar{\alpha }^N=\frac{1}{N} \sum _{i=1}^N \alpha _i\) we get

$$\begin{aligned} \mathbb {E}\left( \bar{S}^V_t + \sum _{x=m,n,h,y} \bar{S}^x_t \right) \le \mathbb {E}\left( \bar{S}^V_{t_0} +\sum _{x=m,n,h,y} \bar{S}^x_{t_0} \right) e^{-\lambda ^0( t-t_0)} + \sigma ^2 \frac{C^0_{\zeta ,\rho }}{\lambda ^0} \end{aligned}$$

in the general case. If, additionally, exchangeability of the initial condition is assumed, the path law of system (2.1) is exchangeable, by pathwise uniqueness. The asserted inequality follows. \(\square \)
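The elementary identity used in this proof is straightforward to verify numerically; a quick sketch with an arbitrary sample:

```python
import numpy as np

# Numerical check of the identity
#   (1/N^2) * sum_{i,j} (a_i - a_j)^2  =  (2/N) * sum_k (a_k - mean(a))^2.
rng = np.random.default_rng(1)
a = rng.normal(size=50)
N = len(a)

lhs = float(np.sum((a[:, None] - a[None, :]) ** 2)) / N**2
rhs = 2.0 * float(np.sum((a - a.mean()) ** 2)) / N
```

This is the link between the pairwise estimates of Theorem B.3 and the empirical variances \(\bar{S}^V_t\), \(\bar{S}^x_t\).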

Remark B.4

  1. (i)

    Theorems 2.3 and B.3 show that, for large enough \(J_{\text {E}}\), synchronization of the network (2.1) always occurs as long as the initial voltage \(V_0\) is bounded, regardless of its actual values. More precisely, the time \(t_0>0\), which depends on \(V^\text {max}_0\), on \(\frac{R_\text {max}}{g_L}\) and on some arbitrary choice of the parameter \(\epsilon _0>0\), but not on \(J_{\text {E}}\), is one possible time after which we can guarantee that the voltage trajectories stay in some fixed interval not depending on \(V_0\). Then, after \(t_0\), and if \(J_{\text {E}}\) was chosen large enough, synchronization occurs at least at the exponential rate \(\lambda ^0\), which depends on the coefficients of the system (2.1) but no longer on the initial data. In turn, for large enough \(J_{\text {E}}\), Proposition B.2 ensures synchronization from \(t_0=0\) on, but only if \(V^\text {max}_0\) is small enough.

  2. (ii)

    Notice that the function \(\Psi \) in the proof of Proposition B.2 (and hence the constant \(\lambda ^*\) therein) increases when its parameter \(V^*\) decreases, whereas \(C^*_{\zeta ,\rho }\) decreases when \(V^*\) does. Therefore, letting \(\epsilon _0\rightarrow 0\) (or \(t_0\rightarrow \infty \)) yields the best (by this approach) bounds for the \(\limsup \) in Theorem 2.3. Moreover, the largest possible exponential rate \(\lambda ^0>0\) and the smallest possible interaction strength \(J_{\text {E}}^0\ge 0 \) that can be obtained (but not necessarily attained) in Theorems 2.3 and B.3 by our approach are \(\lambda ^*\) and \(J_{\text {E}}^*\) corresponding to \(V^*=\frac{4 R_\text {max}}{g_L}\). These choices are certainly not optimal in general.

Synchronized dynamics: proof of Theorem 2.3 b)

Our next goal is to prove part b) of Theorem 2.3.

Remark C.1

Proceeding as in the proof of Proposition A.3, one checks that the process (2.6) satisfies \(\frac{d}{dt}| \widehat{V}_t|^2 + 2g_\text {L}| \widehat{V}_t |^2 \le 2R_\text {max}| \widehat{V}_t|, \) which now yields, for any \(t\ge t_1\ge 0\),

$$\begin{aligned} |\widehat{V}_t |\le | \widehat{V}_{t_1}| e^{-g_\text {L}( t-t_1) } +\frac{ 2 R_\text {max}}{g_\text {L}}(1-e^{-g_\text {L}( t-t_1)}). \end{aligned}$$

Applying the first bound in Lemma A.3 to \(\bar{V}^N_{t_1}= \widehat{V}_{t_1}\), we get that \(| \widehat{V}_t |\le V^\text {max}_0 e^{-g_\text {L}t } +\frac{ 2 R_\text {max}}{g_\text {L}}\) for every \(t\ge t_1\). Thus, if \(t_0\ge 0\) is chosen as in Theorem B.3, we deduce that

$$\begin{aligned} \max \left\{ \sup _{s\in [t_1,\infty )} | \bar{V}_s |, \sup _{s\in [t_1,\infty )} | \widehat{V}_s |\right\} \le \frac{V^*_{t_0}}{2} \le \frac{ 5R_\text {max}}{2 g_L} . \end{aligned}$$
(C.1)

We first prove

Proposition C.2

Let \(t_0\) be as in Theorem B.3 and \(\delta >0\). There are constants \(K_{1,\delta }, K_{2,\delta }>0\), depending increasingly on \(\delta >0\) but not on N nor on the initial condition, such that for each \(t_1\ge t_0\),

$$\begin{aligned} \begin{aligned} \mathbb {E}\left( \sup _{t_1\le t \le t_1+\delta }|\bar{X}^{N,t_1}_t-\widehat{X}^N_t|^2\right)&\le \left( \left[ (V^*_{t_0} )^2 + 4\right] e^{-\lambda ^0(t_1-t_0)} + \frac{\sigma ^2 C^0_{\zeta ,\rho }}{\lambda ^0 }\right) \, \delta K_{1,\delta }\\&\quad + \delta K_{2,\delta } \frac{\sigma ^2 }{N} C^0_{\zeta ,\rho } . \end{aligned} \end{aligned}$$
(C.2)

Proof

For notational simplicity, in this proof we write \(\widehat{X}_{t}^{N}:= \widehat{X}_{t}^{N,t_1} \). Notice that the average process satisfies the dynamics

$$\begin{aligned} \bar{V}^N_t&= \bar{V}^N_{t_1} + \int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{F(V_s^{(i)}, m_s^{(i)},n_s^{(i)},h_s^{(i)})} - J_{\text {Ch}}\bar{y}^N_s (\bar{V}^N_s-V_\text {rev})ds \\ \bar{x}^N_t&= \bar{x}^N_{t_1}+\frac{1}{N}\sum _{j=1}^N\int _{t_1}^{t}\rho _x(V^{(j)}_s)(1-x^{(j)}_s) -\zeta _x(V^{(j)}_s)x^{(j)}_s\,ds\\&\quad +\frac{1}{N}\sum _{j=1}^N\int _{t_1}^{t}{\sigma _x(V_s^{(j)},x_s^{(j)})dW_s^{x,j}}. \end{aligned}$$

Therefore, after some manipulations, we get that

$$\begin{aligned} \left( \bar{V}^N_t - \widehat{V}_t\right) ^2&\le \int _{t_1}^{t} \left[ \frac{1}{N}\sum _{i=1}^N{F(V_s^{(i)}, m_s^{(i)},n_s^{(i)},h_s^{(i)})-F(\bar{V}^N_s, \bar{m}^N_s,\bar{n}^N_s,\bar{h}^N_s)} \right] ^2 ds\\&\quad +2\int _{t_1}^{t} \left[ F(\bar{V}^N_s, \bar{m}^N_s,\bar{n}^N_s,\bar{h}^N_s) - F(\widehat{V}^{N}_s,\widehat{m}_s^N,\widehat{n}_s^N,\widehat{h}_s^N) \right] (\bar{V}^N_s-\widehat{V}^{N}_s) ds\\&\quad + \int _{t_1}^{t}{(1-2J_{\text {Ch}}\bar{y}^N_s )(\bar{V}^N_s-\widehat{V}^{N}_s)^2 +2J_{\text {Ch}}(\widehat{V}^{N}_s-V_\text {rev})(\bar{y}^N_s- \widehat{y}_s^N)(\bar{V}^N_s-\widehat{V}^{N}_s) ds}\\&= I_1+I_2+I_3. \end{aligned}$$

By Jensen’s inequality and the bound (C.1) we have

$$\begin{aligned} I_1&\le \int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{\left[ F(V_s^{(i)}, m_s^{(i)},n_s^{(i)},h_s^{(i)})-F(\bar{V}^N_s, \bar{m}^N_s,\bar{n}^N_s,\bar{h}^N_s)\right] ^2} ds\\&\le \int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{4\left[ \left( g_\text {K}(n^{(i)}_s)^4+g_\text {Na}(m^{(i)}_s)^3 h^{(i)}_s+g_\text {L}\right) (V_s^{(i)}-\bar{V}^N_s) \right] ^2} ds\\&\quad +\int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{4\left[ g_\text {Na}(\bar{V}^N_s-V_\text {Na})h^{(i)}_s \left( (m^{(i)}_s)^2 +m^{(i)}_s\bar{m}^N_s+(\bar{m}^N_s)^2 \right) (m_s^{(i)}-\bar{m}^N_s) \right] ^2} ds\\&\quad +\int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{4\left[ g_\text {K}(\bar{V}^N_s-V_\text {K})\left( (n^{(i)}_s)^2 +(\bar{n}^N_s)^2\right) \left( n^{(i)}_s +\bar{n}^N_s\right) (n_s^{(i)}-\bar{n}^N_s) \right] ^2} ds\\&\quad + \int _{t_1}^{t} \frac{1}{N}\sum _{i=1}^N{4\left[ g_\text {Na}(\bar{V}^N_s-V_\text {Na})(\bar{m}^N_s)^3 (h_s^{(i)}-\bar{h}^N_s) \right] ^2} ds\\&\le K_V^1 \int _{t_1}^{t}{\bar{S}^V_s+\bar{S}^m_s+\bar{S}^n_s+\bar{S}^h_s\,ds}, \end{aligned}$$

with \(K_V^1\) explicitly depending on \( \sup _{v\in [-\frac{ 5R_\text {max}}{2 g_L} ,\frac{ 5R_\text {max}}{2 g_L} ]} \max \{ |v-V_\text {Na}|, |v-V_\text {K}| \}\), \(g_{\text {K}}\) and \(g_{\text {Na}}\). Meanwhile, using (B.1) we get

$$\begin{aligned} I_2&\le \int _{t_1}^{t} -2g_\text {L}(\bar{V}^N_s-\widehat{V}^{N}_s)^2+4g_\text {K}|\widehat{V}^{N}_s-V_\text {K}|\left( (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{n}^N_s-\widehat{n}^{N}_s)^2 \right) \\&\quad +3g_\text {Na}|\widehat{V}^{N}_s-V_\text {Na}|\left( (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{m}^N_s-\widehat{m}^{N}_s)^2 \right) \\&\quad +g_\text {Na}|\widehat{V}^{N}_s-V_\text {Na}|\left( (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{h}^N_s-\widehat{h}^{N}_s)^2 \right) ds \\&\le K_V^2\int _{t_1}^{t} (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{n}^N_s-\widehat{n}^{N}_s)^2+(\bar{m}^N_s-\widehat{m}^{N}_s)^2 +(\bar{h}^N_s-\widehat{h}^{N}_s)^2 ds, \end{aligned}$$

with \(K_V^2\) also depending on those quantities and on \(g_L\). By similar arguments, we get

$$\begin{aligned} I_3 \le K_V^3\int _{t_1}^{t} (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{y}^N_s-\widehat{y}^{N}_s)^2 ds \end{aligned}$$

for some \(K_V^3\) depending on \(J_{\text {Ch}}\) and on \(\sup _{v\in [-\frac{ 5R_\text {max}}{2 g_L} ,\frac{ 5R_\text {max}}{2 g_L} ]}|v-V_\text {rev} |\). We thus get:

$$\begin{aligned} \left( \bar{V}^N_t - \widehat{V}_t\right) ^2 \le K_V^1 \int _{t_1}^{t}{\left[ \bar{S}^V_s+\bar{S}^m_s+\bar{S}^n_s+\bar{S}^h_s\right] ds} + \tilde{K}_V\int _{t_1}^{t}{\left| \bar{X}^N_s-\widehat{X}^N_s\right| ^2ds} \end{aligned}$$

for some explicit constant \(\tilde{K}_V\), almost surely; from this we deduce

$$\begin{aligned} \mathbb {E}\left[ \sup _{t_1\le s\le t}\left( \bar{V}^N_s - \widehat{V}_s\right) ^2\right]\le & {} K_V^1 \int _{t_1}^{t}{\mathbb {E}\left[ \bar{S}^V_s+\bar{S}^m_s+\bar{S}^n_s+\bar{S}^h_s\right] ds}\nonumber \\&+\, \tilde{K}_V\int _{t_1}^{t}{\mathbb {E}\left[ \sup _{t_1\le u\le s}|\bar{X}^N_u-\widehat{X}^N_u|^2\right] ds}. \end{aligned}$$
(C.3)

On the other hand, for the x-type channels we get

$$\begin{aligned} \bar{x}^N_t -\widehat{x}^{N}_t&=\frac{1}{N}\sum _{j=1}^N\int _{t_1}^{t} \rho _x(V^{(j)}_s)(1-x^{(j)}_s) -\zeta _x(V^{(j)}_s)x^{(j)}_s\\&\quad -\rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) +\zeta _x(\bar{V}^N_s)\bar{x}^N_sds\\&\quad +\int _{t_1}^{t}\rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) -\zeta _x(\bar{V}^N_s)\bar{x}^N_s- \rho _x(\widehat{V}^{N}_s)(1-\widehat{x}^{N}_s) +\zeta _x(\widehat{V}^{N}_s)\widehat{x}^{N}_sds\\&\quad +\frac{1}{N}\sum _{j=1}^N\int _{t_1}^{t} \sigma _x(V_s^{(j)},x_s^{(j)}) dW_s^{x,j}. \end{aligned}$$

For \(t\in (t_1,t_1+\delta )\) we deduce:

$$\begin{aligned} (\bar{x}^N_t -\widehat{x}^{N}_t)^2&\le 3\delta \int _{t_1}^{t} \frac{1}{N}\sum _{j=1}^N\left( \rho _x(V^{(j)}_s)(1{-}x^{(j)}_s) {-}\zeta _x(V^{(j)}_s)x^{(j)}_s{-}\rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) +\zeta _x(\bar{V}^N_s)\bar{x}^N_s\right) ^2 \, ds \\&\quad + 3\delta \int _{t_1}^{t} \left( \rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) -\zeta _x(\bar{V}^N_s)\bar{x}^N_s- \rho _x(\widehat{V}^{N}_s)(1-\widehat{x}^{N}_s) +\zeta _x(\widehat{V}^{N}_s)\widehat{x}^{N}_s\right) ^2 \, ds\\&\quad + 3\left( \frac{1}{N}\sum _{j=1}^N\int _{t_1}^{t}{\sigma _x(V_s^{(j)},x_s^{(j)})dW_s^{x,j}}\right) ^2.\\ \end{aligned}$$

The previous bound yields

$$\begin{aligned}&\mathbb {E}\bigg [ \sup _{t_1\le s\le t} (\bar{x}^N_s -\widehat{x}^{N}_s)^2\bigg ]\\&\quad \le 3\delta \int _{t_1}^{t}{ \mathbb {E}\left[ \frac{1}{N}\sum _{j=1}^N\left( \rho _x(V^{(j)}_s)(1-x^{(j)}_s) -\zeta _x(V^{(j)}_s)x^{(j)}_s-\rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) +\zeta _x(\bar{V}^N_s)\bar{x}^N_s\right) ^2\right] ds}\\&\qquad +3\delta \int _{t_1}^{t}\mathbb {E}\left[ \left( \rho _x(\bar{V}^N_s)(1-\bar{x}^N_s) -\zeta _x(\bar{V}^N_s)\bar{x}^N_s- \rho _x(\widehat{V}^{N}_s)(1-\widehat{x}^{N}_s) +\zeta _x(\widehat{V}^{N}_s)\widehat{x}^{N}_s\right) ^2\right] ds\\&\qquad +3\mathbb {E}\left[ \sup _{t_1\le s\le t} \left( \frac{1}{N}\sum _{j=1}^N\int _{t_1}^{s}{\sigma _x(V_u^{(j)},x_u^{(j)}) dW_u^{x,j}}\right) ^2\right] \\&= I_1+I_2+I_3. \end{aligned}$$

Denoting by \(L_{f,R}\) a Lipschitz constant of a function f on \([- R,R]\) and using standard arguments, we get that

$$\begin{aligned} I_1 \le K_x\delta (L_{\rho _x, \frac{ 5R_\text {max}}{2 g_L} }^2+L_{\rho _x+\zeta _x, \frac{ 5R_\text {max}}{2 g_L} }^2 )\int _{t_1}^{t}{ \mathbb {E}\left[ \bar{S}^V_s + \bar{S}^x_s \right] ds} \end{aligned}$$

and that

$$\begin{aligned} I_2\le K_x \delta (L_{\rho _x, \frac{ 5R_\text {max}}{2 g_L} }^2+L_{\rho _x+\zeta _x, \frac{ 5R_\text {max}}{2 g_L} }^2 )\int _{t_1}^{t}{ \mathbb {E}\left[ (\bar{V}^N_s-\widehat{V}^{N}_s)^2+(\bar{x}^N_s-\widehat{x}^{N}_s)^2 \right] ds} \end{aligned}$$

for all \(t\in (t_1,t_1+\delta )\). By Doob’s inequality, we moreover obtain

$$\begin{aligned} I_3 \le 3 \cdot 4 \mathbb {E}\left[ \frac{1}{N^2}\sum _{j=1}^N\int _{t_1}^{t}{ \sigma ^2 _x(V_s^{(j)},x_s^{(j)}) ds}\right] \le \frac{12 \sigma ^2 \delta }{N} \Vert \rho _x \vee \zeta _x \Vert _{\infty , \frac{ 5R_\text {max}}{2 g_L}}. \end{aligned}$$

Summarizing, for the x-type channel we have shown that for all \(t\in (t_1,t_1+\delta )\),

$$\begin{aligned} \mathbb {E}\left[ \sup _{t_1\le s\le t}(\bar{x}^N_s -\widehat{x}^{N}_s)^2\right]\le & {} \delta K_x\int _{t_1}^{t}{\mathbb {E}\left[ \bar{S}^V_s+\bar{S}^x_s+\sup _{t_1\le u\le s}|\bar{X}^N_u-\widehat{X}^N_u|^2 \right] ds} \nonumber \\&+ \frac{ 12\sigma ^2 \delta }{N} \Vert \rho _x \vee \zeta _x \Vert _{\infty , \frac{ 5R_\text {max}}{2 g_L}} \end{aligned}$$
(C.4)

for some constants \(K_x>0\). Putting together (C.3) and (C.4) we get for all \(t\in (t_1,t_1+\delta )\) and some constants \(K_1,K_2>0\),

$$\begin{aligned}&\mathbb {E}\left[ \sup _{t_1\le s\le t}|\bar{X}^N_s-\widehat{X}^N_s|^2\right] \le (1+\delta ) K_1\int _{t_1}^{t_1+\delta }{\mathbb {E}\left( \bar{S}^V_s+ \sum _{x=m,n,h,y} \bar{S}^x_s\right) ds} + \frac{12 \sigma ^2 \delta }{N} C^0_{\zeta ,\rho } \\&\quad +\, (1+\delta ) K_2\int _{t_1}^{t}{\mathbb {E}\left[ \sup _{t_1\le u\le s}|\bar{X}^N_u-\widehat{X}^N_u|^2\right] ds}, \end{aligned}$$

from which, using Gronwall's inequality, we deduce:

$$\begin{aligned}&\mathbb {E}\left( \sup _{t_1\le t \le t_1+\delta }|\bar{X}^N_t-\widehat{X}^N_t|^2\right) \\&\quad \le e^{K_2(1+\delta )} \left( K_1(1+\delta ) \int _{t_1}^{t_1+\delta } \mathbb {E}\left( \bar{S}^V_s + \sum _{x=m,n,h,y} \bar{S}^x_s \right) ds + \frac{12 \sigma ^2\delta }{N} C^0_{\zeta ,\rho } \right) . \end{aligned}$$

We can now use Theorem 2.3 a) to bound the integral on the r.h.s. With \(K_{1,\delta } =e^{K_2(1+\delta )}K_1(1+\delta ) \) and \(K_{2,\delta } =12 e^{K_2(1+\delta )}\) we get, for all \(t_1\ge t_0\), that

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left( \sup _{t_1\le t \le t_1+\delta }|\bar{X}^N_t-\widehat{X}^N_t|^2\right) \\&\quad \le \mathbb {E}\left( \bar{S}^V_{t_0}+ \sum _{x=m,n,h,y} \bar{S}^x_{t_0} \right) \frac{1}{\lambda ^0 }(1-e^{-\lambda ^0 \delta }) e^{-\lambda ^0( t_1-t_0) }K_{1,\delta }\\&\qquad + \frac{\sigma ^2 C^0_{\zeta ,\rho }}{\lambda ^0 } \delta K_{1,\delta } + \delta K_{2,\delta } \frac{\sigma ^2 }{N} C^0_{\zeta ,\rho } \\&\quad \le \left( \left[ (V^*_{t_0} )^2 + 4\right] e^{-\lambda ^0( t_1-t_0) } + \frac{\sigma ^2 C^0_{\zeta ,\rho }}{\lambda ^0 }\right) \delta K_{1,\delta } + \delta K_{2,\delta } \frac{\sigma ^2 }{N} C^0_{\zeta ,\rho } \\ \end{aligned} \end{aligned}$$

since \( \bar{S}^V_{t_0}\le (V^*_{t_0} )^2\). \(\square \)

Proof of Theorem 2.3. b)

Notice on the one hand that, for each \(t\ge t_1\), we always have the bounds

$$\begin{aligned} | X^{(i)}_t- \widehat{X}^{t_1,N}_t|^2 \le 2 \bar{S}^V_{t}+ 2 |\widehat{V}_t- \bar{V}_t^N|^2+ 4 \le 4(V^*_{t_0})^2 + 4 \le K_0 := 4\left( \frac{ 5R_\text {max}}{g_L} \right) ^2+4, \end{aligned}$$

thanks to (C.1) and to the bound \(V^*_{t_0} \le \frac{ 5R_\text {max}}{g_L} \). On the other hand, combining Proposition C.2 with Theorem 2.3 a) we get for every \(t\in [t_1,t_1+\delta ]\) that

$$\begin{aligned} \mathbb {E}\left( | X^{(i)}_t- \widehat{X}^{t_1,N}_t|^2\right) \le 2 \bigg [ \left( K_0' e^{-\lambda ^0(t-t_0)} + \sigma ^2 \frac{C^0_{\zeta ,\rho }}{\lambda ^0}\right) (1+ \delta K_{1,\delta })+ \delta K_{2,\delta } \frac{\sigma ^2 }{N} C^0_{\zeta ,\rho } \bigg ] \end{aligned}$$

with \(K_0'=\left( \frac{ 5R_\text {max}}{g_L} \right) ^2+4\). The statement follows. \(\square \)
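The synchronization statement just proved can be illustrated with a direct simulation. The sketch below is only indicative: it uses the standard (deterministic) Hodgkin–Huxley rate functions and parameter values, switches off the channel noise (\(\sigma =0\)) and the chemical coupling (\(J_{\text {Ch}}=0\)), and integrates the electrically coupled network with a forward Euler scheme; none of these choices are taken from the paper's own simulations.

```python
import numpy as np

# Illustrative Euler scheme for the electrically coupled HH network
# (sigma = 0, J_Ch = 0). Rate functions and parameters are the textbook
# Hodgkin-Huxley ones, chosen here for illustration only.
g_Na, g_K, g_L = 120.0, 36.0, 0.3            # mS/cm^2
V_Na, V_K, V_L = 50.0, -77.0, -54.4          # mV
I_ext, J_E = 10.0, 20.0                      # input current, coupling strength

def rates(V):
    """Standard HH opening/closing rates for the gates m, n, h."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return (a_m, b_m), (a_n, b_n), (a_h, b_h)

def simulate(N=5, T=50.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.uniform(-70.0, -60.0, size=N)    # spread-out initial voltages
    m, n, h = 0.05 * np.ones(N), 0.32 * np.ones(N), 0.6 * np.ones(N)
    for _ in range(int(T / dt)):
        (a_m, b_m), (a_n, b_n), (a_h, b_h) = rates(V)
        F = (-g_K * n**4 * (V - V_K) - g_Na * m**3 * h * (V - V_Na)
             - g_L * (V - V_L) + I_ext)
        V = V + dt * (F - J_E * (V - V.mean()))   # mean field electrical coupling
        m = m + dt * (a_m * (1.0 - m) - b_m * m)
        n = n + dt * (a_n * (1.0 - n) - b_n * n)
        h = h + dt * (a_h * (1.0 - h) - b_h * h)
    return V, m, n, h

V, m, n, h = simulate()
spread = V.max() - V.min()   # collapses for strong electrical coupling
```

With a strong electrical coupling such as \(J_{\text {E}}=20\), initially spread-out voltages collapse onto a common trajectory, in line with Theorem 2.3 a); in the noisy case \(\sigma >0\) a residual spread of order \(\sigma ^2\) would persist, as in the statement above.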

Propagation of chaos and synchronization for the McKean–Vlasov limit: proofs of Theorem 2.5 and Corollary 2.7

We first address the asymptotic behavior of the flow of empirical measures (2.9) when \(N\rightarrow \infty \) and the proof of Theorem 2.5. In particular, we will prove the propagation of chaos property for system (2.1). Following the classic pathwise approach developed in Sznitman (1991) and Méléard (1996), we first establish:

Theorem D.1

Under the assumptions of Theorem 2.5, we have:

  1. (a)

    Let \(W^{x}, x=m,n,h,y\) be independent standard Brownian motions and \((V_0,m_0,n_0,h_0,y_0)\) an independent random vector with law \(\mu _0\). There is existence and uniqueness, pathwise and in law, of a solution \(\widetilde{X}= (\widetilde{V}_t,\widetilde{m}_t,\widetilde{n}_t,\widetilde{h}_t,\widetilde{y}_t, t\ge 0 )\) to the nonlinear stochastic differential equation (in the sense of McKean) with values in \(\mathbb {R}\times [0,1]^4\):

    $$\begin{aligned} \begin{aligned} \widetilde{V}_t&= V_0+ \int _{0}^{t}{F(\widetilde{V}^{}_s,\widetilde{m}_s,\widetilde{n}_s,\widetilde{h}_s)ds}-\int _{0}^{t}{ J_{\text {E}}(\widetilde{V}^{}_s-\mathbb {E}[\widetilde{V}_s])ds}\\&\quad - \int _{0}^{t}{J_{\text {Ch}}\mathbb {E}[\widetilde{y}_s](\widetilde{V}^{}_s-V_\text {rev}) ds},\\ \widetilde{x}^{}_t&= x^{}_0+\int _{0}^{t}\rho _x(\widetilde{V}^{}_s)(1-\widetilde{x}^{}_s) -\zeta _x(\widetilde{V}^{}_s)\widetilde{x}^{}_sds + \int _{0}^{t}{\sigma _x(\widetilde{V}_s^{},\widetilde{x}_s^{})dW_s^{x}},\;\;x=m,n,h,y \, \end{aligned} \end{aligned}$$
    (D.1)

    such that for all \(t\ge 0\), \(|\widetilde{V}_t|\le {4R_\text {max}}/{g_\text {L}} + 2V^\text {max}_0 e^{-g_\text {L}t}\) almost surely.

  2. (b)

    \((\mu _t:=\text{ law }(\tilde{X}_t): t\ge 0)\) is a globally defined weak solution in \(C([0,+\infty ); \mathcal{P}_2(\mathbb {R}\times [0,1]^4)) \) of the McKean–Vlasov equation (2.10).

  3. (c)

    For each \(T>0\), let \(\widetilde{X}^{(i)}= \left( (\widetilde{V}^{(i)}_t,\widetilde{m}^{(i)}_t,\widetilde{n}^{(i)}_t,\widetilde{h}^{(i)}_t,\widetilde{y}^{(i)}_t):t\in [0,T]\right) \), \(i=1,\ldots ,N\) be independent copies of the nonlinear process (D.1), each of them driven by the same Brownian motions \((W^{x,i},\ x=m,n,h,y)\) and with the same initial conditions \(X^{(i)}_0=\widetilde{X}^{(i)}_0 \) as the N-particle system (2.1). Then, there is a constant \(C(T)>0\) such that for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\),

    $$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T}| X^{(i)}_t-\widetilde{X}^{(i)}_t |^2 \right] \le \frac{C(T)}{N}. \end{aligned}$$

Proof

The statements a), b) and c) would be standard if the coefficients in each of the N components of (2.1) were replaced by globally Lipschitz functions of \(X^{(i)}_s\) and \(X^{(j)}_s\), see Theorems 2.2 and 2.3 in Méléard (1996). In particular, with functions \(p^j_M\) and \(F_M\) defined for fixed \(M>0\) as in Lemma A.1, for any \(T>0\) there is existence and uniqueness, pathwise and in law, of a solution to the nonlinear stochastic differential equation on [0, T]:

$$\begin{aligned} \begin{aligned} \widetilde{V}_t^M&= V_0+ \int _{0}^{t}{F_M(\widetilde{V}^M_s,\widetilde{m}_s^M,\widetilde{n}_s^M,\widetilde{h}_s^M)ds}-\int _{0}^{t}{ J_{\text {E}}(\widetilde{V}^M_s-\mathbb {E}[\widetilde{V}^M_s])ds}\\&\quad - \int _{0}^{t}{J_{\text {Ch}}\mathbb {E}[p_M^1(\widetilde{y}_s)](p_M^1(\widetilde{V}^M_s)-V_\text {rev}) ds},\\ \widetilde{x}^M_t&= x^{}_0+\int _{0}^{t}\rho _x(p^1_M(\widetilde{V}^M_s))(1-p_M^1(\widetilde{x}^M_s)) -\zeta _x(p_M^2(\widetilde{V}^M_s))p_M^1(\widetilde{x}^M_s) ds\\&\quad + \int _{0}^{t}{\sigma _x(p_M^1(\widetilde{V}_s^M),\widetilde{x}_s^M)dW_s^{x}},\;\;x=m,n,h,y. \, \end{aligned} \end{aligned}$$
(D.2)

Moreover, letting \(\widetilde{X}^{(i,M)}= \left( (\widetilde{V}^{(i,M)}_t,\widetilde{m}^{(i,M)}_t,\widetilde{n}^{(i,M)}_t,\widetilde{h}^{(i,M)}_t,\widetilde{y}^{(i,M)}_t):t\in [0,T]\right) \), \(i=1,\ldots ,N\) be independent copies of the nonlinear process (D.2) driven by the same Brownian motions \((W^{x,i},\ x=m,n,h,y)\) and with the same initial conditions \(X^{(i)}_0=\widetilde{X}^{(i)}_0 \) as the system \((X^{(1,M)},\ldots , X^{(N,M)}) \) defined in (A.1), we obtain that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T}| X^{(i,M)}_t-\widetilde{X}^{(i,M)}_t |^2 \right] \le \frac{C_M(T)}{N} \end{aligned}$$

for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\), and some constant \(C_M(T)>0\).

We notice now that, by Proposition A.3, for \(M>0\) large enough the system \((X^{(1)},\ldots , X^{(N)}) \) is also a solution to the system of equations (A.1). Pathwise uniqueness of the latter yields, for all such \(M>0\), that \((X^{(1)},\ldots , X^{(N)}) =(X^{(1,M)},\ldots , X^{(N,M)}) \) on \([0, T]\), from which

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T}| X^{(i)}_t-\widetilde{X}^{(i,M)}_t |^2 \right] \le \frac{C_M(T)}{N} \end{aligned}$$
(D.3)

for every \(N\ge 1\) and \(i\in \{1,\ldots , N\}\). Furthermore, by (D.3) and Chebyshev's inequality, for any \(M'>0\) and \(\varepsilon >0\),

$$\begin{aligned} \mathbb {P}\left( \sup _{0\le t\le T} \widetilde{x}^{(i,M)}_t \ge M'+\varepsilon \right) \le \mathbb {P}\left( \sup _{0\le t\le T} x^{(i,M)}_t \ge M' \right) + \frac{2 C_M(T)}{N \varepsilon ^2}. \end{aligned}$$

Taking \(M'=1\), letting \(N\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\) we deduce that \( \widetilde{x}^{(i,M)}_t \le 1\) a.s. for every \(t\in [0,T]\) and \(i\in \mathbb {N}\). In a similar way, \( \widetilde{x}^{(i,M)}_t \ge 0\) and \(|\tilde{V}^{(i,M)}_t | \le V^\text {max}_{t,\infty }\) hold a.s. for every \(t\in [0, T]\) and \(i\in \mathbb {N}\). This implies that for \(M>0\) large enough but fixed, a solution to (D.2) also solves (D.1), and proves the existence part in a).

We now show that any solution has support uniformly bounded in time, from which uniqueness in part a) will immediately follow. We first consider a solution \((U_t,q^m_t,q^n_t,q^h_t,q^y_t)\) of (D.1) with explosion time \(\xi \), and show that it coincides with \((\widetilde{V}^{M}_t,\widetilde{m}^{M}_t,\widetilde{n}^{M}_t,\widetilde{h}^{M}_t,\widetilde{y}^{M}_t,\ t\ge 0)\) for M large enough. For \(M>1\), we define \(\tau _M = \inf \{t\ge 0: \max \{|U_t|,|q^m_t|,|q^n_t|,|q^h_t|,|q^y_t|\}\ge M\}\). Then we observe that the coefficients of (D.1) applied to \((U_t,q^m_t,q^n_t,q^h_t,q^y_t, 0 \le t \le \tau _M)\) coincide with the truncated coefficients of (D.2), and thanks to the uniqueness property for (D.2) we conclude that almost surely

$$\begin{aligned} (U,q^m,q^n,q^h,q^y)_{t\wedge \tau _M} = (\widetilde{V}^{M},\widetilde{m}^{M},\widetilde{n}^{M},\widetilde{h}^{M},\widetilde{y}^{M})_{t\wedge \tau _M}. \end{aligned}$$

In particular, we observe that \(q^x_{t \wedge \tau _M} \in [0,1]\) for \(x=m,n,h,y\), and that \(\tau _M = \inf \{t\ge 0:|U_t|\ge M\}\) for \(M > 1\). Moreover, the second-order moment \( \mathbb {E}[U_{t\wedge \tau _M}^2]\) is uniformly bounded in M, since

$$\begin{aligned} U_{t\wedge \tau _M}^2&= V_0^2+ 2\int _{0}^{t\wedge \tau _M}{U_sF(U_s,q^m_s,q^n_s,q^h_s)ds}-2\int _{0}^{t\wedge \tau _M}{ J_{\text {E}}U_s(U_s-\mathbb {E}[U_s])ds}\\&\quad - 2\int _{0}^{t\wedge \tau _M}{J_{\text {Ch}}\mathbb {E}[q^y_s]U_s(U_s-V_\text {rev}) ds}, \end{aligned}$$

from which it is easy to show that

$$\begin{aligned} \mathbb {E}(U_{t\wedge \tau _M}^2) \le C_1 + C_2\int _{0}^{t}{\mathbb {E}(U_{s\wedge \tau _M}^2)ds}, \end{aligned}$$

and therefore, thanks to Gronwall’s inequality

$$\begin{aligned} \mathbb {E}(U_{t\wedge \tau _M}^2) \le C_1e^{C_2 t}. \end{aligned}$$

On the other hand, \(\mathbb {E}(U_{t\wedge \tau _M}^2) = \mathbb {E}(U_{t}^2\mathbb {1}_{\tau _M>t})+ M^2\mathbb {P}(\tau _M \le t)\), and then we can conclude that for all \(t\ge 0\) and all \(M\ge 1\),

$$\begin{aligned} \mathbb {P}(\tau _M \le t) \le \frac{C_1e^{C_2t}}{M^2}. \end{aligned}$$

Since \(\tau _M\nearrow \xi \), we conclude that \(\mathbb {P}(\xi \le t)=0\) for all \(t\ge 0\), from which it follows that \(\xi \) is almost surely infinite.

Now, since \((U_t,q^m_t,q^n_t,q^h_t,q^y_t)\) has no explosion, we apply Proposition 3.3 in Bossy et al. (2015) to get that almost surely \(q^x_{t} \in [0,1]\) for any \(t>0\). Using this, we derive a more precise bound for the second order moment:

$$\begin{aligned} \mathbb {E}(U_{t}^2)&\le \mathbb {E}(V_0^2)+ 2\int _{0}^{t}{\sqrt{\mathbb {E}(R_s^2)}\sqrt{\mathbb {E}(U_s^2)}-g_\text {L}\mathbb {E}(U_s^2) ds}, \end{aligned}$$

where, as in the proof of Proposition A.3,

$$\begin{aligned} R_s&\le R_\text {max}:= \max _{a,b,c \in [0,1]}{| I + g_\text {L}V_\text {L}+g_\text {K}V_\text {K}a+g_\text {Na}V_\text {Na}b+J_{\text {Ch}}V_\text {rev}c}|. \end{aligned}$$

Applying Lemma A.5 once more, we conclude

$$\begin{aligned} \sqrt{ \mathbb {E}(U_{t}^2)} \le \sqrt{\mathbb {E}(V_0^2)} e^{-g_\text {L}t} + \frac{2R_\text {max}}{g_\text {L}}(1-e^{-g_\text {L}t}). \end{aligned}$$

Thus, the second moment of any solution of (D.1) is uniformly bounded in time. Moreover, since the initial condition \(V_0\) is bounded, proceeding exactly as in the proof of Proposition A.3 we obtain that

$$\begin{aligned} |U_t| \le \frac{4R_\text {max}}{g_\text {L}} + 2V^\text {max}_0 e^{-g_\text {L}t}, \end{aligned}$$

with the same bound \(V^\text {max}_0\) for \(V_0\). In conclusion, solutions of (D.1) are non explosive, even more they are uniformly bounded in time. Choosing \(M>4R_\text {max}/g_\text {L}+ 2V^\text {max}_0\), we get \(\tau _M=\infty \) almost surely, and for any \(t\ge 0\),

$$\begin{aligned} (U,q^m,q^n,q^h,q^y)_{t} = (\widetilde{V}^{M},\widetilde{m}^{M},\widetilde{n}^{M},\widetilde{h}^{M},\widetilde{y}^{M})_{t}. \end{aligned}$$

Hence Eq. (D.1) has a unique solution.

Part b) follows from a direct application of Itô's formula to compute

$$\begin{aligned} \mathbb {E}[\psi (\widetilde{X}_t)] = \int _{\mathbb {R}\times [0,1]^4} \psi (x) \mu _t(dx) \end{aligned}$$

for a \(C^\infty _c\) test function \(\psi \): the Lebesgue integrals on the right-hand side of the Itô formula are all bounded, since the supports of the laws \((\mu _t: t\ge 0)\) are contained in some compact set and the coefficients are continuous.

Part c) follows immediately by taking \(M\) large enough in (D.3). \(\square \)
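The order-\(1/N\) coupling estimate of part c) can be illustrated on a much simpler model. The sketch below is NOT the Hodgkin–Huxley system: it couples the linear mean field SDE \(dX^i=-(X^i-\bar X^N)dt+dW^i\) to i.i.d. copies of its McKean limit \(d\widetilde{X}^i=-(\widetilde{X}^i-\mathbb {E}[\widetilde{X}_t])dt+dW^i\) (for a standard normal start, \(\mathbb {E}[\widetilde{X}_t]=0\)), driven by the same Brownian motions as in Theorem D.1 c), and checks that the coupling error decays with \(N\):

```python
import numpy as np

# Toy illustration of the O(1/N) estimate in Theorem D.1 c): particle system
# vs. i.i.d. copies of the McKean limit, driven by the SAME Brownian motions.
def coupling_error(N, T=1.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=N)      # interacting particle system
    Y = X.copy()                # nonlinear (limit) processes, same start
    m = 0.0                     # E[X_0] for the standard normal start
    worst = np.zeros(N)         # running sup_t |X^i_t - Y^i_t|^2
    for _ in range(int(T / dt)):
        dW = rng.normal(scale=np.sqrt(dt), size=N)   # shared noises
        X = X + dt * (X.mean() - X) + dW
        Y = Y + dt * (m - Y) + dW
        worst = np.maximum(worst, (X - Y) ** 2)
    return worst.mean()         # estimates E sup_t |X - Y|^2, expected O(1/N)

err_small = np.mean([coupling_error(100, seed=s) for s in range(10)])
err_big = np.mean([coupling_error(1600, seed=s) for s in range(10)])
```

The error is driven only by the \(O(N^{-1/2})\) fluctuation of the empirical mean around \(\mathbb {E}[\widetilde{X}_t]\), which is the same mechanism exploited in the proof above.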

We are now in a position to prove

Proof of Theorem 2.5

a) We write \(\mathcal {C}_T:=C([0,T], \mathbb {R}\times [0,1]^4)\). Part c) of Theorem D.1 implies that for each \(T>0\) and \(k\ge 1\) the convergence \(\text{ Law }(X^{(1)},\ldots , X^{(k)})\rightarrow \mu ^{\otimes k}\) with \(\mu =\text{ Law }(\tilde{X}^{(1)})\) holds on the space \(\mathcal {C}_T^k\) as \(N\rightarrow \infty \). By Proposition 2.2 in Sznitman (1991) or Proposition 4.2 in Méléard (1996), this implies that the empirical measure

$$\begin{aligned} \mu ^N:=\frac{1}{N}\sum _{i=1}^N \delta _{X_{\cdot }^{(i)}}\in \mathcal{P}(\mathcal {C}_T), \end{aligned}$$

with \(\mathcal{P}(\mathcal {C}_T)\) denoting the space of probability measures on \(\mathcal {C}_T\) endowed with the weak topology, converges in law to the (deterministic) probability measure \(\mu \). The first assertion of the theorem then follows from the fact that the mapping associating with \(\nu \in \mathcal{P}(\mathcal {C}_T)\) its flow \((\nu _t:t\in [0,T])\in C([0,T];\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of one-dimensional time-marginal laws is continuous, together with part b) of Theorem D.1 (notice that \(C([0,T];\mathcal{P}( \mathbb {R}\times [0,1]^4))\) can be replaced by \(C([0,T]; \mathcal{P}_2( \mathbb {R}\times [0,1]^4))\) since all the random measures involved have a common compact support).

b) We observe first that for each \(t\ge 0\) one has

$$\begin{aligned} \mathbb {E}\left( \mathcal {W}_2^2(\mu ^N_t,\mu _t ) \right) \le 2 \mathbb {E}\left( \mathcal {W}_2^2(\mu ^N_t,\tilde{\mu }^N_t ) \right) + 2 \mathbb {E}\left( \mathcal {W}_2^2(\tilde{\mu }^N_t,\mu _t ) \right) , \end{aligned}$$

where \(\tilde{\mu }^N_t\) is the empirical measure of any random i.i.d. sample of the law \(\mu _t\) constructed on the same probability space as \(\mu ^N_t\). Taking \(\tilde{\mu }^N_t:=\frac{1}{N}\sum _{i=1}^N \delta _{\tilde{X}_t^{(i)}}\), with \(\widetilde{X}^{(i)}_t\), \(i=1,\ldots ,N\), the processes defined in part c) of Theorem D.1, we get for every \(t\in [0,T]\) that

$$\begin{aligned} \mathbb {E}\left( \mathcal {W}_2^2(\mu ^N_t,\mu _t ) \right) \le 2 \frac{C(T)}{N} + 2 \mathbb {E}\left( \mathcal {W}_2^2(\tilde{\mu }^N_t,\mu _t )\right) . \end{aligned}$$

On the other hand, we have \(\sup _{t\in [0,T]}( \int |z|^q\mu _t(dz))^{1/q} <\infty \) for each \(q\ge 1\), using for instance the bound obtained at the end of the proof of Theorem D.1. We can therefore apply Theorem 1 in Fournier and Guillin (2015) with \(p=2\), \(d=5\) and a sufficiently large \(q>2\), to get that \( \mathbb {E}\left( \mathcal {W}_2^2(\tilde{\mu }^N_t,\mu _t )\right) \le C N^{-2/5}\). The second assertion thus follows.
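To get a feeling for such Wasserstein convergence rates of empirical measures, here is a minimal numerical illustration in dimension one (the theorem above is applied with \(d=5\)); the quantile-based distance proxy and all parameter choices below are ours, not part of the proof.

```python
import numpy as np

def w2_to_uniform(sample):
    """Proxy for the W2 distance between the empirical measure of `sample`
    and U[0,1]: compare order statistics with the uniform quantiles (i-0.5)/n."""
    x = np.sort(sample)
    n = len(x)
    q = (np.arange(1, n + 1) - 0.5) / n
    return np.sqrt(np.mean((x - q) ** 2))

rng = np.random.default_rng(0)

def mean_w2(n, reps=50):
    # Average over repetitions to smooth out the sampling noise.
    return np.mean([w2_to_uniform(rng.uniform(size=n)) for _ in range(reps)])

# The error shrinks as n grows: in d=1 the rate is n^{-1/2}; in the d=5
# setting of the proof, Fournier and Guillin give E[W2^2] <= C n^{-2/5}.
errs = {n: mean_w2(n) for n in (50, 500, 5000)}
```

The monotone decrease of `errs` over the three sample sizes mirrors, in a toy setting, the \(N^{-2/5}\) moment bound used above.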

c) In order to prove uniqueness for the McKean–Vlasov equation (2.10), we adapt to our setting a generic argument going back at least to Gärtner (1988). Assume for the moment that for each compactly supported \(\nu _0\in \mathcal{P}( \mathbb {R}\times [0,1]^4)\) and \((\nu ^*_t:t\in [0,T])\in C([0,T],\mathcal{P}_2( \mathbb {R}\times [0,1]^4))\) the linear Fokker–Planck equation

$$\begin{aligned} \begin{aligned} \partial _t\nu _t&= \partial _v\left( \Phi (\langle (\nu ^*_t)^V\rangle ,\langle (\nu ^*_t)^y\rangle , \cdot ,\cdot )\nu _t \right) + \sum _{x=m,n,h,y} \left[ \frac{1}{2}\sigma ^2 \, \partial _{u_x u_x}^2 \left( a_x\nu _t\right) -\partial _{u_x}\left( b_x\nu _t\right) \right] \end{aligned} \end{aligned}$$
(D.4)

has at most one weak solution with supports bounded uniformly in \(t\in [0,T]\). By similar arguments as in Lemma A.1, strong well-posedness holds for the stochastic differential equation:

$$\begin{aligned} \begin{aligned} V_t^*&= V^*_0+ \int _{0}^{t}{F(V^*_s,m_s^*,n_s^*,h_s^*)ds}-\int _{0}^{t}{ J_{\text {E}}(V^*_s-\langle (\nu ^*_s)^V \rangle )ds}\\&\quad - \int _{0}^{t}{J_{\text {Ch}}\langle (\nu ^*_s)^y \rangle (V^*_s-V_\text {rev}) ds},\\ x^*_t&= x^*_0+\int _{0}^{t}\big (\rho _x(V^*_s)(1-x^*_s) -\zeta _x(V^*_s)x^*_s\big )ds + \int _{0}^{t}{\sigma _x(V_s^*,x_s^*)dW_s^{x}},\;\;x=m,n,h,y, \end{aligned} \end{aligned}$$
(D.5)

with \((V_0^* ,m_0^* ,n_0^* ,h_0^* ,y_0^* )\) independent of the Brownian motions \(W^x\) and with law \(\nu _ 0\). Moreover, one can check that \(x^*_t\in [0,1]\) a.s. for all \(t\in [0,T]\) and that the process \((V_t^*:t\in [0,T])\) is bounded. It follows using Itô’s formula that a unique weak solution to Eq. (D.4) with uniformly bounded supports does exist, and is given by \(\nu _ t= \text{ law } (V_t^* ,m_t^* ,n_t^* ,h_t^* ,y_t^* )\) for all \(t\in [0,T]\). Now, any solution \((\mu _t:t\in [0,T])\) in \(C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of (2.10) with uniformly bounded supports also solves the linear equation (D.4) with \((\nu ^*_t:t\in [0,T])=(\mu _t:t\in [0,T])\). This yields, for all \(t\in [0,T]\), that \(\mu _t= \text{ law } (V_t^* ,m_t^* ,n_t^* ,h_t^* ,y_t^* )\), for the process defined as in (D.5), with \(\nu ^*_s=\mu _s\) for all \( s\in [0,T]\). In other words, this process solves the nonlinear stochastic differential equation (D.1). From Theorem D.1 we conclude that \((\mu _t:t\in [0,T])=(\text{ law }(\tilde{X}_t):t\in [0,T])\), that is, there is uniqueness of solutions in \(C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\) of (2.10) having uniformly bounded support.

Hence, in order to conclude the proof of Theorem 2.5 it is enough to show that, given functions \(\alpha ,\beta \in C([0,T],\mathbb {R})\) and \(\nu _0\in \mathcal{P}_2( \mathbb {R}\times [0,1]^4)\), there is at most one solution \((\nu _t:t\in [0,T])\in C([0,T],\mathcal{P}( \mathbb {R}\times [0,1]^4))\), with support bounded uniformly in \([0, T]\), to the distributional formulation of Eq. (D.4):

$$\begin{aligned} \begin{aligned}&\int \psi (t,v,u) \nu _t(dv,du)\\&\quad = \int \psi (0,v,u) \nu _0(dv,du) - \int _0^t \int \bigg [ \Phi (\alpha _s ,\beta _s , v, u ) \partial _v \psi (s,v,u) \\&\qquad + \big ( \partial _s + \sum _{x=m,n,h,y} \frac{1}{2} \sigma ^2 \, a_x \partial _{u_x u_x}^2 + b_x \partial _{u_x} \big ) \psi (s,v,u) \bigg ] \nu _s (dv,du) \, ds \end{aligned} \end{aligned}$$
(D.6)

for all \(t\in [0,T]\) and for an extended class of test functions \(\psi \in C^{1,1,2}_b([0,T]\times \mathbb {R}\times [0,1]^4)\). Let \(\rho _x'\) and \(\zeta _x'\) denote compactly supported functions coinciding with \(\rho _x\) and \(\zeta _x\) on some compact set \(\mathcal{K}\subset \mathbb {R}\) containing the supports of the measures \(\nu ^V_t\) for \( t\in [0,T]\), and define \(\sigma _x'\), \(a'_x\) and \(b_x'\) in terms of them in a similar way as \(\sigma _x\), \(a_x\) and \(b_x\) were defined in terms of \(\rho _x\) and \(\zeta _x\). For a given \(t>0\), consider the following Cauchy problem in \(\mathbb {R}^5\): for all \((s,v,u)\in [0,t)\times \mathbb {R}\times \mathbb {R}^4\),

$$\begin{aligned} \begin{aligned} \big ( \partial _s- \Phi (\alpha _s ,\beta _s , v, u ) \partial _v + \sum _{x=m,n,h,y} \frac{1}{2} \sigma ^2 \, a'_x \partial _{u_x u_x}^2 + b'_x \partial _{u_x} \big ) f_t(s,v,u)&=0,\\ f_t(t,v,u) =&\, \psi (v,u). \end{aligned} \end{aligned}$$
(D.7)

By the Feynman–Kac formula (see e.g. Karatzas and Shreve 1991), if a solution \(f_t\in C_b([0,t] \times \mathbb {R}^5 )\cap C_b^{1,1,2}([0,t)\times \mathbb {R}\times \mathbb {R}^4)\) exists, then it is given by

$$\begin{aligned} f_t(s,v,u):=\mathbb {E}(\psi (X^{s,v,u}_t)) \end{aligned}$$
(D.8)

where \((X_r^{s,v,u}:=(V_r,m_r,n_r,h_r,y_r)\,: r\in [s,t])\) is the unique (pathwise and in law) solution on \([s,t]\) of the stochastic differential equation:

$$\begin{aligned} \begin{aligned} V_r&= v+ \int _{s}^{r}{F(V_{\theta },m_{\theta },n_{\theta },h_{\theta })d\theta }-\int _{s}^{r}{ J_{\text {E}}(V_{\theta }-\alpha _{\theta }) d\theta }- \int _{s}^{r}{J_{\text {Ch}}\beta _{\theta } (V_{\theta }-V_\text {rev}) d\theta },\\ x_r&= u_x+\int _{s}^{r}\big (\rho '_x(V_{\theta })(1-x_{\theta }) -\zeta '_x(V_{\theta })x_{\theta }\big )d\theta + \int _{s}^{r}{\sigma '_x(V_{\theta },x_{\theta })dW_{\theta }^{x}},\;\;x=m,n,h,y. \end{aligned} \end{aligned}$$

Moreover, for v chosen in some fixed compact set, this solution is bounded independently of \(s\in [0,t]\), and one has \(x_r\in [0,1]\) for all \(r\in [s,t]\). Hence, under the assumption that \(\sigma >0\) and that \(\rho _x\) and \(\zeta _x\) are of class \(C^2(\mathbb {R})\), one can moreover prove, following the lines of Friedman (2006, p. 124), that the function \(f_t\) defined by (D.8) is indeed of class \(C_b^{1,1,2}([0,t)\times \mathbb {R}\times \mathbb {R}^4)\) and solves the Cauchy problem (D.7). Putting \(\psi =f_t\) in (D.6) yields

$$\begin{aligned} \int \psi (v,u) \nu _t(dv,du) = \int \mathbb {E}(\psi (X^{s,v,u}_t)) \nu _0(dv,du) \end{aligned}$$

for all \(\psi \in C_0^2(\mathbb {R}^5 )\), which uniquely determines \(\nu _t\). Notice that when \(\sigma =0\), the required regularity for \(\psi \) and for \(f_t\) drops from \(C^{1,1,2}\) to \(C^{1,1,1}\), and the Feynman–Kac formula in the argument can be replaced by the method of characteristics. The proof of part c) is complete.
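The Feynman–Kac representation (D.8) can be sanity-checked numerically in a toy setting. The sketch below, with coefficients of our choosing (plain Brownian motion with \(\psi (x)=x^2\), not the system above), compares a Monte Carlo estimate of \(\mathbb {E}(\psi (X^{s,x}_t))\) with its closed-form value.

```python
import numpy as np

# Monte Carlo check of a Feynman-Kac-type representation in a toy case:
# for dX = sigma dW and psi(x) = x^2, f(s, x) = E[psi(X_t^{s,x})] has the
# closed form x^2 + sigma^2 (t - s), which the estimator should recover.
rng = np.random.default_rng(1)
sigma, s, t, x = 0.5, 0.3, 1.0, 1.2
z = rng.standard_normal(200_000)
mc = np.mean((x + sigma * np.sqrt(t - s) * z) ** 2)
exact = x ** 2 + sigma ** 2 * (t - s)
```

With 200 000 samples the Monte Carlo estimate agrees with the exact value to a few decimal places.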

d) This is immediate from parts b) and d) of Theorem D.1. \(\square \)

Proof of Corollary 2.7

Recall first that, for any \(\nu \in \mathcal{P}_2(\mathbb {R}\times [0,1]^4)\) and \(w\in \mathbb {R}\times [0,1]^4\), one has

$$\begin{aligned} \mathcal {W}_2^2(\nu ,\delta _w) = \int | z-w|^2 \nu (dz). \end{aligned}$$
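As a quick numerical check of this identity, the second moment about \(w\) also splits as total variance plus squared distance from the mean of \(\nu \) to \(w\); the sample and the point \(w\) below are arbitrary choices of ours.

```python
import numpy as np

# The second moment of nu about w (= W2^2(nu, delta_w)) decomposes as
# the total variance of nu plus the squared distance from its mean to w.
rng = np.random.default_rng(2)
z = rng.normal(loc=[1.0, -0.5], scale=0.3, size=(10_000, 2))  # sample of nu
w = np.array([0.5, -1.0])                                     # Dirac location

second_moment = np.mean(np.sum((z - w) ** 2, axis=1))
decomposition = z.var(axis=0).sum() + np.sum((z.mean(axis=0) - w) ** 2)
```

The two quantities coincide up to floating-point error, which is the bias-variance split underlying the concentration estimates below.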

Moreover, for every \(t\ge t_1\) and \(N\ge 1\) it holds by exchangeability that:

$$\begin{aligned} \mathbb {E}\left( | X^{(i)}_t- \widehat{X}^{t_1,N}_t|^2\right) = \mathbb {E}\left( \mathcal {W}_2^2(\mu ^N_t,\delta _{ \widehat{X}^{ t_1,N}_t }) \right) . \end{aligned}$$

Therefore, it is enough to prove that, for any \(t_1\ge 0\),

$$\begin{aligned} \sup _{t_1\le t\le t_1+\delta } \mathbb {E}\left| \mathcal {W}_2^2(\mu _t,\delta _{ \widehat{X}^{ t_1,\infty }_t }) - \mathcal {W}_2^2(\mu ^N_t,\delta _{ \widehat{X}^{ t_1,N}_t }) \right| \rightarrow 0 \end{aligned}$$

as \(N\rightarrow \infty \). Given \(t\ge t_1\) and \(N\ge 1\), let \(\pi _t^N(dz,dz')\) be a coupling between \(\mu _t\) and \(\mu ^N_t\). Then, for some constant \(C>0\) depending neither on \(t\ge t_1\) nor on \(N\ge 1\), we have

$$\begin{aligned} \begin{aligned}&\left| \mathcal {W}_2^2(\mu _t,\delta _{ \widehat{X}^{ t_1,\infty }_t }) - \mathcal {W}_2^2(\mu ^N_t,\delta _{ \widehat{X}^{ t_1,N}_t }) \right| \\&\quad = \left| \int \pi _t^N(dz,dz') \left[ | z- \widehat{X}^{ t_1,\infty }_t |^2 - | z'-\widehat{X}^{ t_1,N}_t |^2\right] \right| \\&\quad \le C \left[ \int | z- z' |\pi _t^N(dz,dz') + | \widehat{X}^{ t_1,\infty }_t-\widehat{X}^{ t_1,N}_t |\right] \\ \end{aligned} \end{aligned}$$

since the supports of \(\mu _t\) and \(\mu _t^N\) and the processes \(\widehat{X}^{ t_1,\infty }_t\) and \(\widehat{X}^{ t_1,N}_t\) are uniformly bounded in \(t\ge t_1\) and N. The latter property also allows us to write the dynamics in (2.6) and (2.12) using globally Lipschitz coefficients. Thanks to Gronwall’s lemma this yields the estimates

$$\begin{aligned} \sup _{t_1\le t\le t_1+\delta } | \widehat{X}^{ t_1,\infty }_t-\widehat{X}^{ t_1,N}_t |\le & {} C_{\delta } | \widehat{X}^{ t_1,\infty }_{t_1}-\widehat{X}^{ t_1,N}_{t_1} |= C_{\delta } | \langle \mu _{t_1}\rangle - \langle \mu ^N_{t_1} \rangle |\\\le & {} C_{\delta } \int | z- z' |\pi _{t_1}^N(dz,dz') \end{aligned}$$

for some constant \(C_{\delta }>0\) not depending on N. Since \( \int | z- z' |\pi _{t}^N(dz,dz') \le \left( \int | z- z' |^2\pi _{t}^N(dz,dz')\right) ^{1/2} \), by taking the above couplings to be optimal for \( \mathcal {W}_2\), we get the estimate

$$\begin{aligned} \sup _{t_1\le t\le t_1+\delta } \mathbb {E}\left| \mathcal {W}_2^2(\mu _t,\delta _{ \widehat{X}^{ t_1,\infty }_t }) - \mathcal {W}_2^2(\mu ^N_t,\delta _{ \widehat{X}^{ t_1,N}_t }) \right| \le C' \sup _{t_1\le t\le t_1+\delta } \mathbb {E}^{1/2}\left( \mathcal {W}_2^2(\mu _t, \mu ^N_t) \right) \end{aligned}$$

for some \(C'>0\). We conclude thanks to Theorem 2.5. \(\square \)

Strong convergence rate result for the exponential projective Euler scheme (EPES)

The main objective of this section is to prove the convergence of the numerical scheme presented in Sect. 3 to the model (2.1), and to establish the following rate of convergence.

Proposition E.1

Assume Hypothesis 2.2 and that \(\chi (x)=O(x(1-x))\). Then there exists a constant C, depending on the parameters of the system but independent of \(\Delta t\), such that for any \(i=1,\ldots ,N\):

$$\begin{aligned} \mathbb {E}\left[ \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2\right] +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_t^{(i)}-\widehat{x}_t^{(i)}|^2\right]&\le C\Delta t. \end{aligned}$$

We decompose the proof of this proposition into several preliminary results.

The next result follows from the uniform bound for \(\widehat{V}_t^{(i)}\) (see iii) in Remark A.4) and some standard arguments on local approximation of SDEs, so we omit the proof.

Lemma E.2

Under Hypothesis 2.2, there exists a constant C depending on the parameters of the system, but independent of \(\Delta t\) such that

$$\begin{aligned} \sup _{i=1,\ldots ,N}\mathbb {E}\left[ \left( \widehat{V}_t^{(i)}- \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] \le C\Delta t^2,\qquad \sup _{i=1,\ldots ,N}\mathbb {E}\left[ \left( \check{x}^{(i)}_{t}- \widehat{x}^{(i)}_{\eta (t)}\right) ^2\right] \le C\Delta t. \end{aligned}$$

Next we establish the key step in the convergence of the scheme, namely that, with overwhelming probability, the processes \(\widehat{x}^{(i)}\) and \(\check{x}^{(i)}\) coincide.

Lemma E.3

Assume Hypothesis 2.2 and that \(\chi (x)=O(x(1-x))\). Then there exists a constant C, depending on the parameters of the system but independent of \(\Delta t\), such that

$$\begin{aligned} \sup _{i=1,\ldots ,N}\sum _{x=m,n,h,y} \mathbb {P}\left( \check{x}^{(i)}_t \notin [0,1]\right) \le \exp \left( -\frac{C}{\Delta t}\right) . \end{aligned}$$

Remark E.4

It is not difficult to see that

$$\begin{aligned} \mathbb {E}\left[ \left( \check{x}^{(i)}_{t} - \widehat{x}^{(i)}_{t}\right) ^2\right]&= \mathbb {E}\left[ \left( \check{x}^{(i)}_{t}- \widehat{x}^{(i)}_{t}\right) ^2\mathbb {1}_{\{\check{x}^{(i)}_t \notin [0,1]\}}\right] \\&\le \sqrt{\sup _{j=1,\ldots ,N}\mathbb {E}\left[ 2(\check{x}^{(j)}_{t})^2+1\right] \mathbb {P}\left( \check{x}^{(i)}_t \notin [0,1]\right) } \le K \exp \left( -\frac{C}{2\Delta t}\right) . \end{aligned}$$

Notice that the RHS above tends to zero faster than any power of \(\Delta t\) when \(\Delta t\rightarrow 0\).
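A tiny numerical check of this super-polynomial decay (the constants below are arbitrary):

```python
import numpy as np

# exp(-C/dt) / dt^k -> 0 as dt -> 0 for every fixed power k, so the
# exponential bound of Lemma E.3 is negligible against any polynomial rate.
C, k = 1.0, 10
ratios = [np.exp(-C / dt) / dt ** k for dt in (0.1, 0.05, 0.01)]
```

Already at \(\Delta t = 0.01\) the ratio is far below machine precision relative to any \(O(\Delta t^k)\) term.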

Proof of Lemma E.3

We first notice that, conditionally on \(\mathcal {F}_{\eta (t)}\), \(\check{x}^{(i)}\) is an Ornstein–Uhlenbeck process; its conditional law is therefore Gaussian, with conditional mean and variance given by

$$\begin{aligned} \mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right]&= \widehat{x}^{(i)}_{\eta (t)}\exp \left( -\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})(t-\eta (t))\right) \\&\quad + \frac{\rho _x(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})}\left( 1-\exp \left( -\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})(t-\eta (t))\right) \right) ,\\ \mathbb {V}\text {ar}_{\eta (t)}\left[ \check{x}^{(i)}_t \right]&= \frac{\sigma ^2_x(\widehat{V}^{(i)}_{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})}{2\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})}\left( 1-\exp \left( -2\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})(t-\eta (t))\right) \right) . \end{aligned}$$

Observe that the conditional variance is strictly positive if \(t>\eta (t)\), \(\widehat{x}^{(i)}_{\eta (t)}\ne 0\) and \(\widehat{x}^{(i)}_{\eta (t)}\ne 1\). Since the diffusion coefficient vanishes for \(\widehat{x}^{(i)}_{\eta (t)}=0\) or \(\widehat{x}^{(i)}_{\eta (t)}=1\), in which case the solution to the ODE for \(\check{x}^{(i)}\) remains in \([0, 1]\) almost surely, we can restrict ourselves to the case \(\widehat{x}^{(i)}_{\eta (t)}\in (0,1)\).
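For concreteness, a single gating-variable step of such an exponential scheme can be sketched as follows: sample \(\check{x}^{(i)}_t\) exactly from the Gaussian conditional law with the mean and variance displayed above, then project onto \([0,1]\) to obtain \(\widehat{x}^{(i)}_t\). The rate functions `rho`, `zeta` and the \(x(1-x)\)-type noise factor below are illustrative placeholders, not the paper's actual coefficients.

```python
import numpy as np

# One gating-variable step of an exponential/projective scheme: with the
# voltage frozen at its grid value, x is an Ornstein-Uhlenbeck process, so
# check_x can be sampled exactly from its Gaussian conditional law;
# hat_x is its projection onto [0,1].
def rho(v):  return 0.1 * np.exp(-0.05 * v) + 0.02     # placeholder rate
def zeta(v): return 0.125 * np.exp(-v / 80.0) + 0.01   # placeholder rate

def exponential_projective_step(x, v, dt, sigma, rng):
    lam = rho(v) + zeta(v)            # relaxation rate (rho + zeta)(V)
    xbar = rho(v) / lam               # equilibrium value rho / (rho + zeta)
    mean = x * np.exp(-lam * dt) + xbar * (1.0 - np.exp(-lam * dt))
    sigma_x = sigma * np.sqrt(max(x * (1.0 - x), 0.0))  # placeholder chi(x)
    var = sigma_x ** 2 / (2.0 * lam) * (1.0 - np.exp(-2.0 * lam * dt))
    check_x = mean + np.sqrt(var) * rng.standard_normal()
    hat_x = min(max(check_x, 0.0), 1.0)   # projection onto [0,1]
    return check_x, hat_x

rng = np.random.default_rng(3)
check_x, hat_x = exponential_projective_step(0.4, -65.0, 0.01, 0.2, rng)
```

Note that the conditional mean is a convex combination of \(x\) and the equilibrium value, so with zero noise the step never leaves \((0,1)\) and the projection is inactive, consistent with the remark above.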

Using the Gaussian concentration inequality, conditionally on \(\mathcal {F}_{\eta (t)}\), we have

$$\begin{aligned} \mathbb {P}_{\eta (t)}\left( \check{x}^{(i)}_t \le 0\right)&= \mathbb {P}_{\eta (t)}\left( \frac{\check{x}^{(i)}_t-\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] }{\sqrt{\mathbb {V}\text {ar}_{\eta (t)}\left[ \check{x}^{(i)}_t \right] }} \le \frac{-\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] }{\sqrt{\mathbb {V}\text {ar}_{\eta (t)}\left[ \check{x}^{(i)}_t \right] }}\right) \\&\le \frac{1}{2}\exp \left( - \frac{\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] ^2}{\mathbb {V}\text {ar}_{\eta (t)}\left[ \check{x}^{(i)}_t \right] }\right) . \end{aligned}$$

Since for t small enough

$$\begin{aligned} 1-\exp \left( -2\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})(t-\eta (t))\right) \le 2\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})(t-\eta (t)), \end{aligned}$$

and \(t-\eta (t)\le \Delta t\), we can bound the conditional variance, and then it follows

$$\begin{aligned} \mathbb {P}_{\eta (t)}\left( \check{x}^{(i)}_t \le 0\right)&\le \frac{1}{2}\exp \left( - \frac{\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] ^2}{\sigma ^2_x(\widehat{V}^{(i)}_{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})\Delta t}\right) . \end{aligned}$$

On the other hand, \(\mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] \) is a weighted mean of two quantities in \([0, 1]\), therefore

$$\begin{aligned} \mathbb {E}_{ \eta (t)}\left[ \check{x}^{(i)}_t \right] \ge \widehat{x}^{(i)}_{\eta (t)} \wedge \frac{\rho _x(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})}, \end{aligned}$$

hence

$$\begin{aligned} \begin{aligned} \mathbb {P}_{\eta (t)}\left( \check{x}^{(i)}_t \le 0\right)&\le \frac{1}{2}\exp \left( \frac{-\rho _x^2(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) ^2(\widehat{V}^{(i)}_{\eta (t)})\sigma ^2_x(\widehat{V}^{(i)}_{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})\Delta t}\right) \mathbb {1}_{\left\{ \widehat{x}_{\eta (t)}^{(i)} \ge \frac{\rho _x(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})} \right\} }\\&\quad +\frac{1}{2}\exp \left( - \frac{\left( \widehat{x}_{\eta (t)}^{(i)}\right) ^2}{\sigma ^2_x(\widehat{V}^{(i)}_{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})\Delta t}\right) \mathbb {1}_{\left\{ \widehat{x}_{\eta (t)}^{(i)} \le \frac{\rho _x(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) (\widehat{V}^{(i)}_{\eta (t)})} \right\} }. \end{aligned} \end{aligned}$$
(E.1)

To bound the first exponential on the right-hand side of the last inequality, we notice that, since the process \(\widehat{V}^{(i)}\) is uniformly bounded and \(\sigma \) is bounded, we can easily exhibit a constant \(C_1>0\) independent of i such that

$$\begin{aligned} \frac{\rho _x^2(\widehat{V}^{(i)}_{\eta (t)})}{\left( \rho _x+\zeta _x\right) ^2 (\widehat{V}^{(i)}_{\eta (t)})\sigma ^2_x(\widehat{V}^{(i)}_{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})}\ge C_1. \end{aligned}$$

For the second term on the right-hand side of (E.1), since \(x^2/\chi (x)^2\) is bounded from below on (0, 1), there exists \(C_2>0\) such that

$$\begin{aligned} \frac{\left( \widehat{x}_{\eta (t)}^{(i)}\right) ^2}{\sigma ^2_x(\widehat{V}^{(i)} _{\eta (t)}, \widehat{x}_{\eta (t)}^{(i)})} \ge C_2, \end{aligned}$$

from which we conclude

$$\begin{aligned} \mathbb {P}\left( \check{x}^{(i)}_t \le 0\right) \le \exp \left( -\frac{C_1\wedge C_2}{\Delta t}\right) . \end{aligned}$$

An analogous computation shows

$$\begin{aligned} \mathbb {P}\left( \check{x}^{(i)}_t \ge 1\right) \le \exp \left( -\frac{C_1\wedge C_2}{\Delta t}\right) . \end{aligned}$$

\(\square \)

The last preliminary step in the proof of Proposition E.1 is the following

Lemma E.5

Under the hypotheses of Proposition E.1, consider

$$\begin{aligned} u(t):= \mathbb {E}\left[ \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2\right] +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_t^{(i)}-\check{x}^{(i)}_t|^2\right] . \end{aligned}$$

Then there exists a constant C depending on the parameters of the system, but independent of \(\Delta t\), such that

$$\begin{aligned} u(t) \le \Bigg ( u(\eta (t)) + C\Delta t^2\Bigg )e^{C\Delta t}. \end{aligned}$$
(E.2)

Proof

Thanks to the boundedness of the processes, the drift and diffusion coefficients \(b_x\) and \(\sigma _x\) behave like Lipschitz functions, just as in the proof of Lemma A.1. Then, thanks to Itô's formula and pivoting in the drift and diffusion terms around the point \(( \widehat{V}^{(i)}_{s},\check{x}^{(i)}_s)\), we obtain

$$\begin{aligned}&\mathbb {E}\left[ (x^{(i)}_t- \check{x}^{(i)}_t)^2 \right] \\&\quad \le \mathbb {E}\left[ ( x^{(i)}_{\eta (t)}- \widehat{x}^{(i)}_{\eta (t)})^2\right] +2\int _{{\eta (t)}}^{t}\mathbb {E}\left[ (x^{(i)}_s- \check{x}^{(i)}_s)\left( b_x(V^{(i)}_s,x^{(i)}_s)-b_x( \widehat{V}^{(i)}_{s},\check{x}^{(i)}_s)\right) \right] ds\\&\qquad +2\int _{{\eta (t)}}^{t}\mathbb {E}\left[ (x^{(i)}_s- \check{x}^{(i)}_s)\left( b_x( \widehat{V}^{(i)}_{s},\check{x}^{(i)}_s) -b_x( \widehat{V}^{(i)}_{\eta (t)},\check{x}^{(i)}_s)\right) \right] ds\\&\qquad + \int _{\eta (t)}^{t}2\mathbb {E}\left[ \left( \sigma _x(V_s^{(i)},x_s^{(i)})- \sigma _x( \widehat{V}_{s}^{(i)}, \check{x}^{(i)}_s)\right) ^2\right] \\&\qquad +2\mathbb {E}\left[ \left( \sigma _x( \widehat{V}_{s}^{(i)},\check{x}^{(i)}_s) - \sigma _x( \widehat{V}_{\eta (t)}^{(i)}, \widehat{x}_{\eta (t)}^{(i)})\right) ^2\right] ds. \end{aligned}$$

from which the Lipschitz property of the coefficients, Lemma E.2 (to bound the terms involving the local error) and some classical arguments lead to

$$\begin{aligned} \mathbb {E}\left[ (x^{(i)}_t- \check{x}^{(i)}_t)^2 \right]&\le \mathbb {E}\left[ ( x^{(i)}_{\eta (t)}-\check{x}^{(i)}_{\eta (t)})^2\right] +C\int _{{\eta (t)}}^{t} \mathbb {E}\left[ ( x^{(i)}_{s}-\check{x}^{(i)}_{s})^2\right] \\&\quad + \mathbb {E}\left[ ( V^{(i)}_{s}- \widehat{V}^{(i)}_{s})^2\right] ds + C\Delta t^2. \end{aligned}$$

On the other hand, for the voltage error we obtain first the a.s. bound

$$\begin{aligned} \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2&\le \left( V^{(i)}_{\eta (t)}- \widehat{V}^{(i)}_{\eta (t)}\right) ^2 + C\int _{{\eta (t)}}^{t} |V_s^{(i)}- \widehat{V}_s^{(i)}|^2 + \sum _{x=m,n,h} |x_s^{(i)}-\widehat{x}_{\eta (t)}^{(i)}|^2 ds \\&\quad + C\int _{\eta (t)}^t |V_s^{(i)}- \widehat{V}_s^{(i)}|^2 + \frac{1}{N}\sum _{j=1}^{N} |V_s^{(j)} - \widehat{V}_{\eta (t)}^{(j)}|^2 ds\\&\quad + C \int _{\eta (t)}^t |V_s^{(i)}-\widehat{V}_s^{(i)}|^2+ (V_s^{(i)}+V_\text {rev})^2\frac{1}{N}\sum _{j=1}^{N}{|y^{(j)}_s - \widehat{y}^{(j)}_{\eta (t)}|^2 } ds. \end{aligned}$$

Thanks to the exchangeability of the particles, it follows that

$$\begin{aligned} \mathbb {E}\left[ \frac{1}{N}\sum _{j=1}^{N}{|y^{(j)}_s - \widehat{y}^{(j)}_{\eta (t)}|^2 }\right]= & {} \mathbb {E}\left[ \left( y^{(i)}_s - \widehat{y}^{(i)}_{\eta (t)}\right) ^2\right] ,\;\;\; \mathbb {E}\left[ \frac{1}{N}\sum _{j=1}^{N}{|V_s^{(j)} - \widehat{V}_{\eta (t)}^{(j)}|^2 }\right] \\= & {} \mathbb {E}\left[ \left( V^{(i)}_s - \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] , \end{aligned}$$

and then, since the processes are uniformly bounded, we get that

$$\begin{aligned}&\mathbb {E}\left[ \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2\right] \\&\quad \le \mathbb {E}\left[ \left( V^{(i)}_{\eta (t)}- \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] + C\int _{{\eta (t)}}^{t}\mathbb {E}\left[ \left( V_s^{(i)}- \widehat{V}_s^{(i)}\right) ^2\right] ds \\&\qquad + C\int _{\eta (t)}^t \mathbb {E}\left[ \left( V^{(i)}_s - \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_s^{(i)}-\widehat{x}_{\eta (t)}^{(i)}|^2\right] ds\\&\quad \le \mathbb {E}\left[ \left( V^{(i)}_{\eta (t)}- \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] + C\int _{{\eta (t)}}^{t}\mathbb {E}\left[ |V_s^{(i)}- \widehat{V}_s^{(i)}|^2\right] ds \\&\qquad + C\int _{\eta (t)}^t \mathbb {E}\left[ \left( V^{(i)}_s - \widehat{V}^{(i)}_{s}\right) ^2\right] + C\Delta t^2 +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_s^{(i)}-\check{x}^{(i)}_{s}|^2\right] + C\Delta t\,ds\\&\quad \le \mathbb {E}\left[ \left( V^{(i)}_{\eta (t)}- \widehat{V}^{(i)}_{\eta (t)}\right) ^2\right] + C\int _{{\eta (t)}}^{t}\mathbb {E}\left[ \left( V_s^{(i)}- \widehat{V}_s^{(i)}\right) ^2\right] \\&\qquad +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_s^{(i)}-\check{x}^{(i)}_{s}|^2\right] ds+ C\Delta t^2. \end{aligned}$$

We can summarize the previous computations as

$$\begin{aligned} u(t)&\le u(\eta (t)) + C\int _{{\eta (t)}}^{t}u(s)ds+ C\Delta t^2, \end{aligned}$$

from which we conclude thanks to Gronwall's inequality. \(\square \)

Proof of Proposition E.1

From the previous lemma, denoting \(u_k = u(t_k)\), we obtain the following recurrence relation:

$$\begin{aligned} u_{k+1} \le A u_k + B,\quad u_0=0,\quad A= e^{C\Delta t} ,\;\; B= C\Delta t^2e^{C\Delta t}. \end{aligned}$$

Iterating this inequality, it is easy to conclude that

$$\begin{aligned} u_{k+1} \le B\frac{A^{k+1}-1}{A- 1}\le C\Delta t^2\frac{\left( e^{C\Delta t}\right) ^{k+1}}{e^{C\Delta t}- 1}=C\Delta t^2\frac{e^{Ct_{k+1}}}{e^{C\Delta t}- 1}. \end{aligned}$$
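The iteration and its closed-form geometric bound can be checked numerically; the constants below are arbitrary choices of ours.

```python
import math

# Iterate the worst case u_{k+1} = A u_k + B of the recurrence, with
# A = exp(C dt) and B = C dt^2 exp(C dt), and compare with the geometric
# closed form B (A^steps - 1)/(A - 1); the result is O(dt) at fixed T.
C, dt, steps = 2.0, 1e-3, 1000        # horizon T = steps * dt = 1
A = math.exp(C * dt)
B = C * dt ** 2 * math.exp(C * dt)
u = 0.0
for _ in range(steps):
    u = A * u + B
closed_form = B * (A ** steps - 1.0) / (A - 1.0)
```

The iterated value matches the closed form up to floating-point error and stays of order \(\Delta t\), in line with the proposition.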

But when \(\Delta t\rightarrow 0\), we have \(e^{C\Delta t}- 1 \sim C\Delta t\), and therefore \(u_{k+1} \le C\Delta t.\) Inserting this in (E.2), we conclude

$$\begin{aligned} \mathbb {E}&\left[ \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2\right] +\sum _{x=m,n,h,y}\mathbb {E}\left[ |x_t^{(i)}-\widehat{x}_t^{(i)}|^2\right] \\&\le \mathbb {E}\left[ \left( V_t^{(i)}- \widehat{V}_t^{(i)}\right) ^2\right] +\sum _{x}\mathbb {E}\left[ |x_t^{(i)}-\check{x}^{(i)}_t|^2\right] + \mathbb {E}\left[ |\check{x}^{(i)}_t-\widehat{x}_t^{(i)}|^2\right] \\&\le C\Delta t+ \sum _{x}\mathbb {P}\left( \check{x}^{(i)}_t \notin [0,1]\right) , \end{aligned}$$

from which the statement follows, applying Lemma E.3. \(\square \)


Cite this article

Bossy, M., Fontbona, J. & Olivero, H. Synchronization of stochastic mean field networks of Hodgkin–Huxley neurons with noisy channels. J. Math. Biol. 78, 1771–1820 (2019). https://doi.org/10.1007/s00285-019-01326-7
