
Search Results (5)

Search Parameters:
Keywords = stochastics synaptic noise

17 pages, 5027 KiB  
Article
Ornstein–Uhlenbeck Adaptation as a Mechanism for Learning in Brains and Machines
by Jesús García Fernández, Nasir Ahmad and Marcel van Gerven
Entropy 2024, 26(12), 1125; https://doi.org/10.3390/e26121125 - 22 Dec 2024
Abstract
Learning is a fundamental property of intelligent systems, observed across biological organisms and engineered systems. While modern intelligent systems typically rely on gradient descent for learning, the need for exact gradients and complex information flow makes its implementation in biological and neuromorphic systems challenging. This has motivated the exploration of alternative learning mechanisms that can operate locally and do not rely on exact gradients. In this work, we introduce a novel approach that leverages noise in the parameters of the system and global reinforcement signals. Using an Ornstein–Uhlenbeck process with adaptive dynamics, our method balances exploration and exploitation during learning, driven by deviations from error predictions, akin to reward prediction error. Operating in continuous time, Ornstein–Uhlenbeck adaptation (OUA) is proposed as a general mechanism for learning in dynamic, time-evolving environments. We validate our approach across a range of different tasks, including supervised learning and reinforcement learning in feedforward and recurrent systems. Additionally, we demonstrate that it can perform meta-learning, adjusting hyper-parameters autonomously. Our results indicate that OUA provides a promising alternative to traditional gradient-based methods, with potential applications in neuromorphic computing. It also hints at a possible mechanism for noise-driven learning in the brain, where stochastic neurotransmitter release may guide synaptic adjustments.
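A minimal, single-parameter sketch may make the mechanism concrete. The update rules below (a noise-perturbed parameter θ mean-reverting to an adaptive mean μ, with the mean dragged toward parameter values that yield positive reward prediction error) are a plausible reading of the abstract and of the figure captions that follow; the hyper-parameter names ρ, λ, η, and σ are taken from those captions, but the exact equations are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-parameter OUA loop (Euler-Maruyama discretization).
# theta: noisy parameter, mu: its adaptive mean, rbar: running reward estimate.
# rho, lam, eta, sigma mirror the hyper-parameter names in the figure captions,
# but the exact update rules below are an illustrative guess, not the paper's.
dt, T = 1e-2, 100.0
rho, lam, eta, sigma = 1.0, 1.0, 1.0, 0.3
theta_star = 1.0                      # ground-truth parameter (Figure 2 setup)
theta, mu, rbar = 0.0, 0.0, -1.0      # initial conditions from Figure 2

for _ in range(int(T / dt)):
    x = rng.standard_normal()         # random input
    y, y_star = theta * x, theta_star * x
    r = -(y - y_star) ** 2            # reward = negative squared error
    delta_r = r - rbar                # reward prediction error (RPE)
    rbar += dt * rho * delta_r        # track the average reward
    # Move the mean toward parameter values that produced positive RPE,
    # and let theta mean-revert to mu with exploration noise (OU process).
    mu += dt * eta * delta_r * (theta - mu)
    theta += dt * lam * (mu - theta) + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"final theta ~ {theta:.2f}, mu ~ {mu:.2f} (target {theta_star})")
```

Run repeatedly, θ and μ should drift toward the ground-truth value θ* = 1, mirroring the single-parameter experiment in Figure 2 below.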
Figure 1. Dependency structure of the variables that together determine Ornstein–Uhlenbeck adaptation (hyper-parameters not shown). Variables r̄, μ, and θ (green) are related to learning, whereas variables z and y (blue) are related to inference. The average reward estimate r̄ depends on rewards r (red) that indirectly depend on the outputs y generated by the model. The output itself depends on input x (black).
Figure 2. Dynamics of a single-parameter model across 15 random seeds (each color represents a different seed) with ρ = λ = η = 1 and σ = 0.3. Initial conditions are r̄₀ = −1, θ₀ = 0, and μ₀ = 0. The target output is generated by a ground-truth parameter θ* = 1. (a) Target output vs. model output. (b) Evolution of θ over time. (c) Evolution of μ over time. (d) RPE δ_r over time, shown on a logarithmic axis to better visualize initial convergence. The dotted line denotes zero reward prediction error. (e) Cumulative reward G over time, showing improvement with learning compared to the untrained model (dashed line).
Figure 3. Sensitivity of the final cumulative reward G(T) to model hyper-parameters for the input–output mapping task. Results are averaged over 15 runs using different random seeds. The shaded area represents variability across runs, showing stability across a wide range of hyper-parameter settings. Vertical lines show the chosen hyper-parameter values, and horizontal lines show the return without learning. (a) Impact of λ. (b) Impact of σ. (c) Impact of ρ. (d) Impact of η.
Figure 4. Learning dynamics in a multi-parameter model with ρ = λ = η = 1 and σ = 0.2. Initial conditions are r̄₀ = −1 and θ₀ = μ₀ = 0. The ground-truth parameters used to generate the target output are θ* = (0.3, 1.1, 0.0, −0.3, −1.5, −0.4)ᵀ. (a) Target output vs. model output. (b) Evolution of θ = (θ₁, …, θ₆)ᵀ with individual parameter values denoted by different colors. (c) Evolution of μ = (μ₁, …, μ₆)ᵀ with individual mean values denoted by different colors. (d) RPE δ_r, shown on a logarithmic axis to better visualize initial convergence. The dotted line denotes zero prediction error. (e) Cumulative reward G, showing improvement with learning compared to the untrained model (dashed line). The blue line indicates the cumulative reward during parameter learning. The dashed line denotes the return when θ is fixed to the initial value θ₀. The orange line indicates the return obtained when the parameters are fixed to the final mean parameters θ(t) = μ(T).
Figure 5. Dynamics of the weather prediction model, whose parameters θ denote the different weather features. The goal is to learn to predict the temperature 24 h ahead. Hyper-parameters: ρ = λ = 1, σ = 0.05, and η = 0.1. Initial conditions: r̄ = 0, θ₀ ∼ N(0, 10⁻⁶ I), and μ₀ = θ₀. (a) Evolution of θ = (θ₁, …, θ₆)ᵀ with individual parameter values denoted by different colors. (b) Evolution of μ = (μ₁, …, μ₆)ᵀ with individual mean values denoted by different colors. (c) RPE δ_r, shown on a logarithmic axis to better visualize initial convergence. The dotted line denotes zero prediction error. (d) Cumulative reward G using ZCA-decorrelated data, showing improvement with learning compared to the untrained model (dashed line) and comparable performance to stochastic gradient descent (green line). The blue line indicates the return during parameter learning. The dashed line denotes the return when θ is fixed to its initial values θ₀. The orange line indicates the return obtained when the parameters are fixed to the final mean parameters θ(t) = μ(T). (e) Predicted vs. true 24 h-ahead temperature on ZCA-decorrelated test data, using the final μ(T) and the initial θ₀. (f) Final parameter values for each regressor using ZCA-decorrelated data.
Figure 6. Learning dynamics in a recurrent model with ρ = λ = 1, η = 50, and σ = 0.2. Initial conditions are r̄₀ = −0.1 and θ₀ = μ₀ = (0.2, 0.1, 0.5)ᵀ. The ground-truth parameters used to generate the target output are θ* = (0.3, 0.7, 1.0)ᵀ. (a) Target output vs. model output. (b) Evolution of θ = (θ₁, θ₂, θ₃)ᵀ in blue, orange, and green. (c) Evolution of μ = (μ₁, μ₂, μ₃)ᵀ in blue, orange, and green. (d) RPE δ_r, shown on a logarithmic axis to better visualize initial convergence. The dotted line denotes zero reward prediction error. (e) Cumulative reward G, showing improvement with learning compared to the untrained model (dashed line). The blue line indicates the return during parameter learning. The dashed line denotes the return when θ is fixed to its initial values θ₀. The orange line indicates the return obtained when the parameters are fixed to the final mean parameters θ(t) = μ(T).
Figure 7. Learning to control a stochastic double integrator. Hyper-parameters: ρ = 2, λ = 1, σ = 0.02, η = 50, γ = 0.01, and α = β = 0.005. Initial conditions are set to zero for all variables. (a) Evolution of θ = (θ₁, θ₂)ᵀ. (b) Evolution of μ = (μ₁, μ₂)ᵀ. (c) Reward prediction error δ_r. (d) Cumulative reward G, where higher is better. Blue line: return during learning; dashed line: return with fixed θ₀. (e) Particle position after learning (blue) with observation noise (light blue); dashed line: position changes without learning. (f) Particle velocity after learning (orange) with observation noise (light orange); dashed line: velocity fluctuations without learning.
Figure 8. Results when using a learnable σ (blue line) compared to a constant σ = σ₀ (black line). Parameters are set to ρ = λ = η = 1, λ^σ = 2, and η^σ = 3.0 for the model with learnable σ. The initial conditions are r̄ = θ₀ = μ₀ = 0 and σ₀ = μ₀^σ = 0.15. (a) Dynamics of θ. (b) Dynamics of μ. (c) Dynamics of σ. (d) Dynamics of μ^σ. (e) Dynamics of δ_r. (f) Dynamics of G.
25 pages, 3323 KiB  
Article
Phase-Dependent Response to Electrical Stimulation of Cortical Networks during Recurrent Epileptiform Short Discharge Generation In Vitro
by Anton V. Chizhov, Vasilii S. Tiselko, Tatyana Yu. Postnikova and Aleksey V. Zaitsev
Int. J. Mol. Sci. 2024, 25(15), 8287; https://doi.org/10.3390/ijms25158287 - 29 Jul 2024
Abstract
The closed-loop control of pathological brain activity is a challenging task. In this study, we investigated the sensitivity of continuous epileptiform short discharge generation to electrical stimulation applied at different phases between the discharges using an in vitro 4-AP-based model of epilepsy in rat hippocampal slices. As a measure of stimulation effectiveness, we introduced a sensitivity function, which we then measured in experiments and analyzed with different biophysical and abstract mathematical models, namely, (i) the two-order subsystem of our previous Epileptor-2 model, describing short discharge generation governed by synaptic resource dynamics; (ii) a similar model governed by shunting conductance dynamics (Epileptor-2B); (iii) the stochastic leaky integrate-and-fire (LIF)-like model applied for the network; (iv) the LIF model with potassium M-channels (LIF+KM), belonging to Class II of excitability; and (v) the Epileptor-2B model with after-spike depolarization. A semi-analytic method was proposed for calculating the interspike interval (ISI) distribution and the sensitivity function in LIF and LIF+KM models, which provided parametric analysis. Sensitivity was found to increase with phase for all models except the last one. The Epileptor-2B model is favored over other models for subthreshold oscillations in the presence of large noise, based on the comparison of ISI statistics and sensitivity functions with experimental data. This study also emphasizes the stochastic nature of epileptiform discharge generation and the greater effectiveness of closed-loop stimulation in later phases of ISIs.
(This article belongs to the Special Issue Epilepsy: From Molecular Basis to Therapy)
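The phase-dependent protocol can be sketched with a stochastic LIF unit. The parameter values below follow the control set quoted in the Figure 2 caption further down; the "sensitivity" computed here (the fraction of pulses that evoke a discharge during the pulse window) is a simplified stand-in for the paper's efficiency γ, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic LIF sketch of phase-dependent stimulation. Control parameters
# are taken from the Figure 2 caption; the protocol below is simplified
# (pulse timed from the last spike at a fixed fraction of the mean ISI).
C, g_L = 1.0, 1.0                          # nF, nS -> membrane time constant 1 s
V_reset, V_T, sigma_V = -20.0, -1.0, 1.0   # mV (supra-threshold regime)
I_stim, dt_stim = 10.0, 0.2                # pA, s (200 ms pulse)
dt = 1e-3                                  # s
T0 = (C / g_L) * np.log(V_reset / V_T)     # noise-free ISI, ~3 s here

def evoked_fraction(phi, n_trials=500):
    """Fraction of trials in which a pulse at t = phi*T0 evokes a spike."""
    evoked = 0
    for _ in range(n_trials):
        V, t, t_on = V_reset, 0.0, phi * T0
        while t < t_on + dt_stim:
            I = I_stim if t >= t_on else 0.0
            # Euler-Maruyama step of C dV = (-g_L V + I) dt + noise,
            # scaled so the stationary voltage s.d. equals sigma_V
            V += dt * (-g_L * V + I) / C \
                 + sigma_V * np.sqrt(2.0 * g_L / C * dt) * rng.standard_normal()
            t += dt
            if V >= V_T:
                evoked += t >= t_on        # only spikes during the pulse count
                break
    return evoked / n_trials

for phi in (0.3, 0.5, 0.7):
    print(f"phase {phi}: evoked fraction ~ {evoked_fraction(phi):.2f}")
```

Consistent with the study's conclusion, pulses at later phases, when the membrane potential has drifted closer to threshold, should evoke discharges more reliably.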
Figure 1. Experiment and data processing. (A) The closed-loop feedback system with dynamic control of the stimulation. The system uses software to process and analyze LFP signals derived from the CA1 region of rat hippocampal slices in real time. The temporal characteristics of interictal-like population discharges during status epilepticus are used to calculate the parameters for subsequent stimulation. (B) Experimental LFP recording during status epilepticus. Stimulation is applied at a certain phase φ between the discharges, relative to the previous control interspike interval (ISI), i.e., at the time t = φT after the control interval. The population response has a probabilistic nature and depends on the phase of stimulation. (C) An example of the control ISI distribution during status epilepticus, P_con(T). (D) The distributions of the ratio of the intervals with and without stimulation, P_z(z) = P_z(T_stim/T), for three different phases φ (0.3, 0.5, and 0.7). The sharp peak in each distribution corresponds to the evoked responses at a certain phase φ. The red curves show theoretical distributions P_z(z), taking into account the estimated value of the stimulation efficiency γ. (E) The theoretical ratio distribution P_z(z) is calculated as the sum of α(z) and P^φ_{X/Y}(z), based on the estimated value of the stimulation efficiency (see details in Section 4). P^φ_{X/Y}(z) corresponds to the ISI ratio distribution in the absence of stimulation, where z is the X/Y ratio, and X and Y are both distributed according to P_con. The evoked component of P_z(z) is approximated by the shape α(z) (see Section 2.2, "Phase-Dependent Sensitivity to Stimulation"). (F) Stimulation efficiency γ curves estimated for three sets of experimental data.
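Panel (E) builds on the distribution of the ratio of two independent control ISIs. For positive X and Y drawn i.i.d. from P_con, the standard change-of-variables result, assumed here as the form behind P_{X/Y}(z), is:

```latex
P_{X/Y}(z) = \int_0^{\infty} y \, P_{\mathrm{con}}(z y)\, P_{\mathrm{con}}(y)\, \mathrm{d}y ,
\qquad z = X/Y .
```

The paper's Section 4 gives the exact construction, including the evoked component α(z).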
Figure 2. LIF model: ISI distribution, phase-dependent sensitivity function, and its dependence on parameters. (A) Example traces for the control parameter set of the supra-threshold regime (top), with {V_reset = −20 mV, V^T = −1 mV, σ_V = 1 mV, g_L = 1 nS, C = 1 nF, I_stim = 10 pA, Δt_stim = 200 ms, I_ext = 0}, and the subthreshold regime (bottom), with V^T = 1 mV and σ_V = 2.4 mV. The blue line marks the stimulus; the gray dashed line is the threshold; and the orange dashed line is the zero level. The stimulation is applied at φ = 0.5, i.e., at t = 0.5 T₀, where T₀ is the mean control interval. Arrows mark successful stimuli, and crosses mark unsuccessful stimuli. (B) The steps of the calculation of sensitivity to the stimulus for the LIF model. Top to bottom, for the two cases with (red) and without the stimulus (black) at the phase φ = t*/T₀ = 0.5: the membrane potential versus time since the previous spike; the hazard of new spike generation H; the density ρ; the ISI distribution; and the distribution of the ratio of the ISI after stimulation to the control ISI, cut at φ = 0.5 (red), with its approximation by Equation (3) (green). (C,D) The ISI distribution (C) and the phase dependence of the sensitivity γ (D), calculated with the control parameter values and with modified values of one parameter at a time (top to bottom): the voltage threshold V^T; the stimulation current I_stim; the noise amplitude σ_V; the leak conductance g_L; and the reset potential V_reset. Legends for (C,D) are the same. The control case is marked by the black line.
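The pipeline in panel (B) (membrane potential → hazard → density → ISI distribution) suggests the standard renewal-theory identity linking the hazard H(t) of spike generation to the ISI density; written out under that assumption:

```latex
P_{\mathrm{ISI}}(T) = H(T)\,\exp\!\left(-\int_0^{T} H(t)\,\mathrm{d}t\right),
```

i.e., the probability of the first spike at T is the hazard at T times the probability of surviving without a spike up to T. This is an assumed textbook relation, not an equation quoted from the paper.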
Figure 3. LIF model: two types of stimulation (A) and two types of ISI normalization (B). (A) Black and gray: analytical and numerical solutions, respectively, for P_con(T) in the control case without stimulation. Red and rose: analytical and numerical solutions, respectively, for P_stim.traced(T) in the case of stimulation whose effect remains traced in the membrane potential after the end of the stimulus. Green: P_stim(T) in the case of stimulation that does not affect the membrane potential after the stimulus. (B) Numerically calculated distributions for the ratio of the interval with the evoked discharge to either the mean control ISI (top panel) or the previous control ISI (bottom). The red distribution P(T_stim/⟨T⟩) in (B) corresponds to the rescaled distribution of P_stim.traced(T) in (A).
Figure 4. LIF model with and without M-channels. (A) The selection of parameters based on the sensitivity function and the relative ISI distribution (inset) for the LIF model. For the red curve, the following parameters were fitted: σ_V = 3.3 mV, V^T = −8.8 mV, I_stim = 16 pA, and V_reset = −32 mV; the rest of the parameters are from the control set. The other curves are replotted from Figure 2D. The ISI distribution in the inset corresponds to the white circle at φ = 0.5 on the black curve. The blue squares correspond to the mean over the experimental data points shown in Figure 1C. (B) f–I curves for models of different classes of excitability: the control LIF model (Class I) and the LIF model with M-channels (Class II). (C) The sensitivity function for the two models with V^T = −1 mV and V^T = 2 mV. In the case of V^T = −1 mV, the firing rates in the LIF and LIF+KM models correspond to the values at the black and green dots in (B), respectively. (D) ISI distributions (bottom) and the membrane potential evolution after the discharge at t* = 0 for the LIF and LIF+KM models.
Figure 5. LIF model. (A) An example of a single trace with three light pulse stimuli, successful (red arrow) and unsuccessful (crosses). (B) The distributions of the control interval (left column), the interval with a stimulus (middle), and the ratio of the intervals (right) for different phases of stimulation (φ = 0.3, 0.5, and 0.7). The red curves are the Gaussian profile fitted to the control interval distribution (left) and the P_z(z) approximation (right). (C) The sensitivity function in comparison with the mean experimental data. The model parameters were V_reset = −18 mV, V^T = −5.6 mV, σ_V = 1.74 mV, g_L = 1 nS, C = 1 nF, I_stim = 20 pA, Δt_stim = 200 ms, and I_ext = 0.
Figure 6. Model 1 matches the experiment in either the sensitivity function or the CV, but not both, comparing solutions in the subthreshold (V^T = 25 mV, panels (A,C)) and supra-threshold (V^T = 17 mV, panels (B,D)) regimes. (A,B) An example of a single trace with three light pulse stimuli, successful (red arrows) and unsuccessful (cross). (C,D) The distributions of the control interval (left column), the interval with the stimulus (middle), and the ratio of the intervals (right) for different phases of stimulation (φ = 0.3, 0.5, and 0.7). The red curves are the Gaussian profile fitted to the control interval distribution and the P_z(z) approximation. (E) The sensitivity function in comparison with the mean experimental data. (F) A bifurcation diagram for Model 1 without noise. The cycle (green) appears with decreasing V^T at about V^T = 17 mV via a SNIC bifurcation; it disappears at about V^T = 8 mV via a supercritical Hopf bifurcation. The red lines correspond to stable fixed-point solutions.
Figure 7. Model 2 in two regimes, subthreshold (V^T = 25 mV, panels (A,C)) and supra-threshold (V^T = 10 mV, panels (B,D)), in comparison with the experiment. (A,B) An example of a single trace with three light pulse stimuli, successful (red arrows) and unsuccessful (cross). (C,D) The distributions of the control interval (left column), the interval with the stimulus (middle), and the ratio of the intervals (right) for different phases of stimulation (φ = 0.3, 0.5, and 0.7). The red curves are the Gaussian profile fitted to the control interval distribution and the P_z(z) approximation. (E) The sensitivity functions in comparison with the mean experimental data. (F) The bifurcation diagram for Model 2 without noise and with a smooth sigmoid function, as in Equation (24a). The cycle appears and disappears via subcritical Hopf bifurcations followed by a saddle-node of cycles.
Figure 8. Model 2 with after-spike depolarization ("3-D model"). (A) An example of a single trace with three light pulse stimuli, successful (red arrows) and unsuccessful (cross). (B) The distributions of the control interval (left column), the interval with the stimulus (middle), and the ratio of the intervals (right) for different phases of stimulation (φ = 0.3, 0.5, and 0.7). The red curves are the Gaussian profile fitted to the control interval distribution and the P_z(z) approximation. (C) The sensitivity function in comparison with the mean experimental data.
13 pages, 5841 KiB  
Article
A Mathematical Model of Spontaneous Action Potential Based on Stochastics Synaptic Noise Dynamics in Non-Neural Cells
by Chitaranjan Mahapatra and Inna Samuilik
Mathematics 2024, 12(8), 1149; https://doi.org/10.3390/math12081149 - 11 Apr 2024
Cited by 5
Abstract
We developed a mathematical model to simulate the dynamics of background synaptic noise in non-neuronal cells. By employing the stochastic Ornstein–Uhlenbeck process, we represented excitatory synaptic conductance and integrated it into a whole-cell model to generate spontaneous and evoked cellular electrical activities. This single-cell model encompasses numerous biophysically detailed ion channels, described by a set of ordinary differential equations in Hodgkin–Huxley and Markov formalisms. Consequently, this approach effectively induced irregular spontaneous depolarizations (SDs) and spontaneous action potentials (sAPs), resembling electrical activity observed in vitro. The input resistance decreased significantly, while the firing rate of spontaneous action potentials increased. Moreover, alterations in the ability to reach the action potential threshold were observed. Background synaptic activity can modify the input/output characteristics of non-neuronal excitatory cells. Hence, suppressing these baseline activities could aid in identifying new pharmaceutical targets for various clinical diseases.
(This article belongs to the Section C2: Dynamical Systems)
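The mechanism named in the abstract (an Ornstein–Uhlenbeck process for the excitatory synaptic conductance, injected into a whole-cell model) can be sketched as follows. The numbers below are illustrative assumptions, not the paper's values; the full model adds the detailed Hodgkin–Huxley and Markov channels listed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Point-conductance sketch of background synaptic noise: an excitatory
# conductance g_e follows an Ornstein-Uhlenbeck (OU) process and drives a
# passive membrane. All parameter values are assumptions for illustration.
dt = 0.1                                # ms
C_m = 20.0                              # pF
g_L, E_L = 1.0, -52.0                   # nS, mV (RMP near -52 mV, cf. Figure 4)
E_e = 0.0                               # mV, excitatory reversal potential
g_e0, sigma_e, tau_e = 0.1, 0.05, 2.7   # nS, nS, ms (assumed)

V, g_e = E_L, g_e0
trace = []
for _ in range(int(5000 / dt)):         # simulate 5 s
    # Exact-in-distribution OU update for the synaptic conductance
    a = np.exp(-dt / tau_e)
    g_e = g_e0 + (g_e - g_e0) * a \
          + sigma_e * np.sqrt(1.0 - a * a) * rng.standard_normal()
    I_syn = max(g_e, 0.0) * (V - E_e)   # pA; conductance clipped at zero
    V += dt * (-g_L * (V - E_L) - I_syn) / C_m
    trace.append(V)

print(f"membrane potential: mean {np.mean(trace):.1f} mV, sd {np.std(trace):.2f} mV")
```

The fluctuating, slightly depolarized membrane potential produced this way corresponds to the noisy RMP traces in Figures 7 and 8 below; with spiking currents added, such fluctuations can cross threshold and trigger sAPs.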
Figure 1. The schematic diagram illustrates cellular mechanisms for membrane depolarization. It delineates how membrane depolarization occurs through the release of neurotransmitters, the activation of ion channels, and the establishment of gap junction connections with adjacent cells. Further elucidation is provided in the subsequent paragraph.
Figure 2. (a) Schematic overview of ion channel mechanisms (PMCA, I_CaT, I_CaL, I_KCa, I_Kv, I_Leak, and I_h) in the isolated DSM model. (b) A schematic diagram of the parallel conductance model for the ionic currents, representing the flow of ion X in terms of g_ion, C_m, and R_m. Further elucidation is provided in the subsequent paragraph.
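For reference, the parallel-conductance formalism in panel (b) is the standard membrane equation; written out under that assumption, with g_X and E_X the conductance and reversal potential of ion X:

```latex
I_X = g_X\,(V - E_X), \qquad
C_m \frac{\mathrm{d}V}{\mathrm{d}t} = -\sum_X g_X\,(V - E_X) + I_{\mathrm{stim}} .
```

The leak pathway corresponds to 1/R_m in the caption's notation.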
Figure 3. The schematic diagram illustrates a 10-state Markov model for the BK channel. It includes five closed "horizontal" conformation states, labeled C0, C1, C2, C3, and C4, and five open "horizontal" conformation states, labeled O0, O1, O2, O3, and O4, each corresponding to the respective closed state. Further details on this model are elaborated in the following paragraph.
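The caption describes the state topology but not the dynamics. A generic master-equation sketch of such a 10-state scheme is below; the transition rates here are random placeholders (the actual BK rates are voltage- and Ca²⁺-dependent and are given in the paper).

```python
import numpy as np

# Generic master-equation update for a 10-state Markov channel model
# (5 closed C0..C4, 5 open O0..O4, as in Figure 3). The rate matrix Q is a
# random placeholder; open probability = total occupancy of O0..O4.
rng = np.random.default_rng(3)
n = 10                                   # states 0-4: C0..C4, 5-9: O0..O4
Q = np.zeros((n, n))
# allowed transitions: C_i <-> C_{i+1}, O_i <-> O_{i+1}, C_i <-> O_i
pairs = [(i, i + 1) for i in range(4)] + [(i, i + 1) for i in range(5, 9)] \
        + [(i, i + 5) for i in range(5)]
for i, j in pairs:
    Q[i, j], Q[j, i] = rng.uniform(0.1, 1.0, 2)   # placeholder rates (1/ms)
np.fill_diagonal(Q, -Q.sum(axis=1))      # rows of a generator sum to zero

p = np.zeros(n)
p[0] = 1.0                               # start in C0
dt = 0.01                                # ms
for _ in range(1000):                    # integrate 10 ms
    p = p + dt * (p @ Q)                 # forward-Euler master equation
print(f"open probability after 10 ms: {p[5:].sum():.3f}")
```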
Figure 4. Model simulation shows that RMP was maintained at −52 mV.

Figure 5. The model shows the AP (red line) and depolarization (black line) with the current stimulus.

Figure 6. The model shows the simulated AP (red line), experimental AP (blue line), and simulated depolarization (black line) with a synaptic input stimulus.

Figure 7. Model-generated RMP fluctuated with synaptic background conductance noise.

Figure 8. The model shows the AP (red line) and depolarization (black line) with a current stimulus and synaptic background conductance noise.

Figure 9. The model shows the evoked response to synaptic inputs with synaptic background conductance noise.

Figure 10. The model demonstrates spontaneous action potential generation under conditions of increased noise, along with the application of blockers for L-type and T-type calcium channels.
20 pages, 5208 KiB  
Article
A Bio-Inspired Probabilistic Neural Network Model for Noise-Resistant Collision Perception
by Jialan Hong, Xuelong Sun, Jigen Peng and Qinbing Fu
Biomimetics 2024, 9(3), 136; https://doi.org/10.3390/biomimetics9030136 - 23 Feb 2024
Cited by 2
Abstract
Bio-inspired models based on the lobula giant movement detector (LGMD) in the locust’s visual brain have received extensive attention and application for collision perception in various scenarios. These models offer advantages such as low power consumption and high computational efficiency in visual processing. [...] Read more.
Bio-inspired models based on the lobula giant movement detector (LGMD) in the locust’s visual brain have received extensive attention and have been widely applied to collision perception in various scenarios, offering advantages such as low power consumption and high computational efficiency in visual processing. However, current LGMD-based computational models, typically organized as four-layered neural networks, often struggle with noisy signals, particularly in complex dynamic environments. Biological studies have revealed the intrinsically stochastic nature of synaptic transmission, which can help neural computation mitigate noise. In line with these findings, this paper introduces a probabilistic LGMD (Prob-LGMD) model that incorporates a transmission probability into the synaptic connections between layers, capturing the uncertainty in signal transmission, interaction, and integration among neurons. The proposed Prob-LGMD model was tested against two conventional LGMD models on a range of visual stimuli, including indoor structured scenes and complex outdoor scenes, all corrupted with artificial noise, and its performance was also compared with standard engineering noise-filtering methods. The results clearly demonstrate that the proposed model outperforms all comparative methods, showing a significant improvement in noise tolerance. This study thus offers a straightforward yet effective approach to enhancing collision perception in noisy environments. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
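To make the core idea concrete (a minimal sketch; the layer wiring, weighting, and the transmission probability p are illustrative assumptions, not the published implementation), inter-layer transmission can be made stochastic by applying an independent Bernoulli mask to each synaptic connection on every frame:

```python
import numpy as np

def prob_transmit(signal, p, rng):
    """Pass each element of a layer's output with probability p.

    Models stochastic synaptic transmission: each connection either
    relays its signal or drops it, independently per frame.
    """
    mask = rng.random(signal.shape) < p
    return signal * mask

rng = np.random.default_rng(42)
p = 0.7                                  # assumed transmission probability

# Toy frame-difference input (LGMD-type models operate on luminance
# changes between consecutive video frames).
frame_diff = rng.normal(0.0, 1.0, size=(64, 64))

excitation = prob_transmit(frame_diff, p, rng)          # P -> E pathway
inhibition = prob_transmit(np.abs(frame_diff), p, rng)  # P -> I pathway
# (in the full model the inhibitory signal is also delayed and blurred)
s_layer = np.maximum(excitation - 0.6 * inhibition, 0)  # toy S-layer summation
membrane_potential = 1.0 / (1.0 + np.exp(-s_layer.sum() / s_layer.size))
print(f"LGMD membrane potential (sigmoid of summed S layer): {membrane_potential:.3f}")
```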
Figure 1: Schematic illustrations of typical LGMD neural network models. The notations P, I, E, S, G, and FFI denote the photoreceptor, inhibition, excitation, summation, and grouping layers and the feed-forward inhibition pathway, respectively. LGMD-based models evolved from single-channel to dual-channel visual processing: (a) adapted from [36]; (b) adapted from [13].
Figure 2: Illustrations of the proposed Prob-LGMD model: (a) the anatomical LGMD neuron and its dendritic structure; (b) the basic network structure of Prob-LGMD. Probability parameters are incorporated into the multi-layered network to represent the non-deterministic nature of signal transmission, interaction, and integration. The model takes differential images from videos as input; because frame differences can be negative, the images are shown as absolute gray-scale values.
Figure 3: (a–c) Snapshots of an approaching black ball in real physical scenes, as input stimuli with artificial salt-and-pepper noise and Gaussian noise, respectively; 'PNR' and 'GNV' abbreviate pepper noise ratio and Gaussian noise variance (see the noise-injection sketch after this figure list). (d–f) Responses of the proposed model under different probability parameters and of the comparative LGMD models to the stimuli of (a–c). The gray-shaded area shows the variance of the Prob-LGMD response across repeated trials, and the vertical dashed line marks the ground-truth collision time in each scenario. The proposed method outperforms the comparative state-of-the-art LGMD models.
Figure 4: (a–c) Snapshots of an approaching white ball in real physical scenes, as input stimuli with artificial salt-and-pepper noise and Gaussian noise, respectively. (d–f) Responses of the proposed model under different probability parameters and of the comparative LGMD models to the stimuli of (a–c). Notations are consistent with Figure 3.
Figure 5: Results of the proposed method and two LGMD models on vehicle-crash videos as input stimuli, covering three datasets: the original data without noise, the data with salt-and-pepper noise, and the data with Gaussian noise. The leading vehicle collides with other vehicles from the left front and continues to approach the camera until a collision occurs. The Prob-LGMD model perceives the crash most robustly: the membrane potential peaks sharply only before the moment of collision.
Figure 6: Results of the proposed method and two LGMD models on vehicle near-crossing videos as input stimuli: (a–c) snapshots of the original video, the video with salt-and-pepper noise, and the video with Gaussian noise, respectively. The Prob-LGMD model also performs well under the different noisy conditions.
Figure 7: Visual stimuli showing a car turning from the front side of the camera and colliding with it. Unlike the previous stimuli, it approaches gradually from the right side, eventually filling the camera's whole field of view.
Figure 8: Visual stimuli showing a balloon gradually approaching the frontal view of a vehicle dashboard.
Figure 9: (a–c) Snapshots of a UAV approaching a black balloon, with artificial salt-and-pepper noise and Gaussian noise added as input stimuli, respectively. The Prob-LGMD model also performs well in this scene.
Figure 10: Statistical results of the distinct ratio (DR) calculated by Equation (14): (a,b) the DR of the proposed Prob-LGMD and two comparative models in vehicle-crash and ball-approaching physical scenes, respectively. The vehicle-crash dataset includes twelve video sequences; the physical dataset comprises ten approaching processes. The grey shadow represents the variance of the proposed model under different probability parameters, and the solid blue line the average DR. These experiments guide the selection of an optimal probability parameter around the highest DR.
Figure 11: (a,b) Two sets of stimuli with different types of noise. (c,d) Responses of the proposed Prob-LGMD, the comparative LGMD1, and LGMD1 with a pre-filtering module under these stimuli. 'Comparative method-1' pre-filters salt-and-pepper noise with median filtering; 'Comparative method-2' pre-filters Gaussian noise with Gaussian filtering. These typical filters improve the performance of the previous LGMD1 model to some extent, while the proposed method still outperforms them.
Figure 12: Responses of the proposed and comparative methods under artificial salt-and-pepper and Gaussian noise in the colliding scenario; notations are consistent with Figure 11.
Figure 13: Responses of the proposed and comparative methods under artificial salt-and-pepper and Gaussian noise in the colliding scenario.
Figure 14: (a,b) Snapshots of the colliding scenario after adding noise; (c,d) the corresponding responses of the proposed model and comparative methods.
Figure 15: (a,b) Snapshots of the colliding scenario after adding noise; (c,d) the corresponding responses of the proposed model and comparative methods.
Figure 16: (a,b) Snapshots of the colliding scenario after adding noise; (c,d) the corresponding responses of the proposed model and comparative methods.
Figure 17: Investigation of introducing salt-and-pepper noise and Gaussian noise within the four-layered structure of the proposed neural network model: (a,b) responses of the proposed Prob-LGMD and the comparative LGMD1, respectively. The input stimulus matches that in Figure 5a.
Figure 18: Investigation of introducing salt-and-pepper noise and Gaussian noise within the four-layered structure of the proposed neural network model: (a,b) responses of the proposed Prob-LGMD and the comparative LGMD1, respectively. The input stimulus matches that in Figure 7a.
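The salt-and-pepper and Gaussian corruption used throughout the tests above can be reproduced with simple routines such as the following (a sketch with assumed PNR and GNV values, not the experiments' exact settings):

```python
import numpy as np

def add_salt_pepper(img, pnr, rng):
    """Corrupt a [0,1] grayscale frame: a fraction pnr of pixels is set
    to 0 or 1 with equal chance (pepper noise ratio, 'PNR')."""
    out = img.copy()
    mask = rng.random(img.shape) < pnr
    out[mask] = rng.integers(0, 2, size=mask.sum()).astype(float)
    return out

def add_gaussian(img, var, rng):
    """Add zero-mean Gaussian noise of variance var ('GNV'), clipped to [0,1]."""
    return np.clip(img + rng.normal(0.0, np.sqrt(var), img.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.random((120, 160))          # stand-in for a video frame
noisy_sp = add_salt_pepper(frame, pnr=0.05, rng=rng)
noisy_g = add_gaussian(frame, var=0.01, rng=rng)
```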
12 pages, 5576 KiB  
Article
Realization of Artificial Neurons and Synapses Based on STDP Designed by an MTJ Device
by Manman Wang, Yuhai Yuan and Yanfeng Jiang
Micromachines 2023, 14(10), 1820; https://doi.org/10.3390/mi14101820 - 23 Sep 2023
Cited by 1 | Viewed by 1440
Abstract
As the third generation of neural networks, the spiking neural network (SNN) has become one of the most promising neuromorphic computing paradigms for mimicking brain neural networks over the past decade, showing many advantages in classification and recognition tasks in the artificial intelligence field. In an SNN, communication between the pre-synaptic neuron (PRE) and the post-synaptic neuron (POST) is conducted by the synapse, and the synaptic weights depend on the spiking patterns of both the PRE and the POST, updated according to spike-timing-dependent plasticity (STDP) rules. The emergence and growing maturity of spintronic devices present a new approach to constructing SNNs. In this paper, a novel SNN is proposed in which both the synapse and the neuron are mimicked with a spin transfer torque magnetic tunnel junction (STT-MTJ) device. The synaptic weight is represented by the conductance of the MTJ device, and the probabilistic spiking of the neuron is mapped onto the stochastic switching behavior of the MTJ under thermal noise, based on the stochastic Landau–Lifshitz–Gilbert (LLG) equation. In this way, a simplified SNN is mimicked with MTJ devices, and its function is verified by a handwritten digit recognition task on the MNIST database. Full article
(This article belongs to the Special Issue Artificial Intelligence for Micro/Nano Materials and Devices)
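As a rough sketch of how stochastic MTJ switching can serve as a probabilistic neuron activation (the sigmoidal probability-versus-current form and all parameter values below are illustrative assumptions; the article obtains the switching statistics from stochastic LLG simulations with a thermal-noise field):

```python
import numpy as np

def switching_probability(i_current, i_c50, beta):
    """Illustrative sigmoidal map from write current (A) to MTJ switching
    probability; in practice this curve would be estimated by repeating
    stochastic LLG simulations at each current level."""
    return 1.0 / (1.0 + np.exp(-beta * (i_current - i_c50)))

def mtj_neuron_fires(i_input, rng, i_c50=100e-6, beta=5e4):
    """Probabilistic spiking: the neuron fires iff the MTJ switches."""
    return rng.random() < switching_probability(i_input, i_c50, beta)

rng = np.random.default_rng(7)
for i in (50e-6, 100e-6, 150e-6, 200e-6):        # hypothetical currents
    rate = np.mean([mtj_neuron_fires(i, rng) for _ in range(1000)])
    print(f"I = {i * 1e6:.0f} uA -> empirical firing rate {rate:.2f}")
```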
Figure 1: (a) The typical sandwich structure of the MTJ device: the left MTJ is in the anti-parallel (AP) state and the right MTJ in the parallel (P) state. (b) With m = (sin 5°, 0, cos 5°) and I_STT = 5 μA, the spin transfer torque is not large enough to switch the device because of the damping effect; the MTJ remains in the P state with m = (0, 0, 1). (c) With m = (sin 5°, 0, cos 5°) and I_STT = 200 μA, the MTJ is switched to the AP state, with m = (0, 0, −1).
Figure 2: The designed synapse based on the STT-MTJ device. (a) A 1T-1MTJ cell mimics the synapse, with the NMOS transistor mediating communication between the PRE and the POST. (b) The STDP-based pulsed signals: the timing of the V_G pulse is controlled by the membrane potential of the PRE, and that of the V_T pulse by the membrane potential of the POST. The pulse timing determines the sign of the time window and hence the current through the MTJ, producing a corresponding change in the device conductance (see the STDP sketch after this figure list).
Figure 3: (a) Image of the handwritten digit "4". (b) The pixels of the digit converted into a presynaptic current pulse sequence: pixels close to '0' become negative current pulses and pixels close to '1' positive current pulses.
Figure 4: (a) The original handwritten digit "4". (b) The reconstructed synapse weights. (c) The input process repeated 10 times to train the synapses.
Figure 5: Schematic diagram of the membrane potential of a biological neuron.
Figure 6: (a) The integration function of m_z in the STT-MTJ device: m_z precesses while input pulses are applied and leaks once a pulse is removed. (b) The activation function of m_z: once m_z reaches the threshold, it remains in the activated state owing to the non-volatility of the MTJ device.
Figure 7: Reset circuit of the MTJ neuron. When the neuron is activated, V_SPIKE is high during the read period, initiating a reset: a reverse current flows through the heavy-metal layer, returning the MTJ neuron to the P state.
Figure 8: The switching probability of the MTJ device varies non-linearly with the input current I, and this probabilistic switching maps directly onto the stochastic activation of the neuron. (a) The switching probability decreases with increasing t_FL, with t_pw kept at 1 ns. (b) The switching probability decreases as t_pw decreases, with t_FL kept at 1 nm; a 0.2 ns pulse is so short that a large current is needed for reliable switching.
Figure 9: Schematic of the SNN for handwritten digit image recognition; the input encodes the pixel information of the handwritten digit image.
Figure 10: Ten images of the handwritten digits "0" and "1" used for the MTJ-based image recognition.
Figure 11: The recognition process for the handwritten digit "1" based on the MTJ synapses and the MTJ neuron. As shown in (a–e), input of the digit "1" does not activate the MTJ neuron. Each set of figures includes the original image, the random initial synapses, the ten-repetition synapse training process, the post-synaptic current pulse, and the m_z of the MTJ neuron; the columns correspond to the samples in Figure 10.
Figure 12: The recognition process for the handwritten digit "0" based on the MTJ synapses and the MTJ neuron. As shown in (a–e), input of the digit "0" activates the MTJ neuron. Panels and columns are organized as in Figure 11.
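For reference, a minimal software analogue of the STDP rule that the circuit in Figure 2 realizes in hardware is the standard exponential time window; the amplitudes and time constants below are generic placeholders, not the device's measured values:

```python
import numpy as np

def stdp_dw(dt_spike, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Exponential STDP window (times in ms).

    dt_spike = t_post - t_pre. Pre-before-post (dt > 0) potentiates;
    post-before-pre (dt < 0) depresses. In the MTJ synapse this weight
    change would be realized as a conductance change driven by the
    timing overlap of the V_G and V_T pulses.
    """
    if dt_spike > 0:
        return a_plus * np.exp(-dt_spike / tau_plus)
    elif dt_spike < 0:
        return -a_minus * np.exp(dt_spike / tau_minus)
    return 0.0

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"t_post - t_pre = {dt:+.0f} ms -> dw = {stdp_dw(dt):+.5f}")
```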