Entropy, Volume 27, Issue 1 (January 2025) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 483 KiB  
Article
Kinetic Theory with Casimir Invariants—Toward Understanding of Self-Organization by Topological Constraints
by Zensho Yoshida
Entropy 2025, 27(1), 5; https://doi.org/10.3390/e27010005 - 25 Dec 2024
Abstract
A topological constraint, characterized by the Casimir invariant, imparts non-trivial structures on a complex system. We construct a kinetic theory in a constrained phase space (an infinite-dimensional function space of macroscopic fields) and characterize a self-organized structure as a thermal equilibrium on a leaf of the foliated phase space. By introducing a model of a grand canonical ensemble, the Casimir invariant is interpreted as the number of topological particles.
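As background for the terminology in this abstract: a Casimir invariant C of a noncanonical Poisson bracket is a functional whose bracket with every other functional vanishes, so it is conserved under any Hamiltonian and its level sets foliate the phase space. A minimal sketch of this standard definition (textbook background, not a formula taken from the paper):

```latex
\{C, F\} = 0 \quad \text{for every functional } F
\qquad\Longrightarrow\qquad
\frac{dC}{dt} = \{C, H\} = 0 \ \text{for any Hamiltonian } H,
```

so the dynamics, and any thermal equilibrium it reaches, is confined to a single leaf $C = \mathrm{const}$ of the foliated phase space.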
16 pages, 2464 KiB  
Article
Multi-Agent Hierarchical Graph Attention Actor–Critic Reinforcement Learning
by Tongyue Li, Dianxi Shi, Songchang Jin, Zhen Wang, Huanhuan Yang and Yang Chen
Entropy 2025, 27(1), 4; https://doi.org/10.3390/e27010004 - 25 Dec 2024
Abstract
Multi-agent systems often face challenges such as elevated communication demands, intricate interactions, and difficulties in transferability. To address the issues of complex information interaction and model scalability, we propose an innovative hierarchical graph attention actor–critic reinforcement learning method. This method naturally models the interactions within a multi-agent system as a graph, employing hierarchical graph attention to capture the complex cooperative and competitive relationships among agents, thereby enhancing their adaptability to dynamic environments. Specifically, graph neural networks encode agent observations as single feature-embedding vectors, maintaining a constant dimensionality irrespective of the number of agents, which improves model scalability. Through the "inter-agent" and "inter-group" attention layers, the embedding vector of each agent is updated into an information-condensed and contextualized state representation, which extracts state-dependent relationships between agents and models interactions at both the individual and group levels. We conducted experiments across several multi-agent tasks to assess our proposed method's effectiveness, stability, and scalability. Furthermore, to enhance the applicability of our method in large-scale tasks, we tested and validated its performance within a curriculum learning training framework, thereby enhancing its transferability.
Show Figures

Graphical abstract

Figure 1. The overall structure of the MAHGAC. Left: an interactive multi-agent pursuit environment. Right: a shared HGAT module. The MAHGAC adopts the centralized training and decentralized execution (CTDE) paradigm. During training, using a centralized critic and a shared hierarchical graph attention mechanism, agent i can obtain information from all agents and learn the importance weights of the other agents in its vicinity. During testing, each agent executes actions based on its own observations.
Figure 2. The information aggregation steps in GAT between connected agents. For agent 1, connected to agents 2 and 3, the attention weights toward node 1 are computed as $\alpha_{12} = \exp(\mathrm{LeakyReLU}(e_{12})) / [\exp(\mathrm{LeakyReLU}(e_{12})) + \exp(\mathrm{LeakyReLU}(e_{13}))]$ for node 2 and $\alpha_{13} = \exp(\mathrm{LeakyReLU}(e_{13})) / [\exp(\mathrm{LeakyReLU}(e_{12})) + \exp(\mathrm{LeakyReLU}(e_{13}))]$ for node 3, yielding the node embedding vector of agent 1 and a more robust state representation of the agent's feature information.
Figure 3. The information exchange process among agents: the connectivity diagram between agents, where nodes can exchange information through edges.
Figure 4. Hierarchical graph attention network. An example of the multi-agent pursuit task, where the entities in the environment are classified into three groups: pursuer group, prey group, and obstacle group. In the "inter-agent" graph attention layer, attention weights are calculated between agents within each group; the aggregated feature vectors $\vec{h}_i^{\,g}$ then serve as inputs to the "inter-group" graph attention layer to obtain the information-aggregated and contextualized state representation $\vec{h}'_i$.
Figure 5. Experimental environments: (a) cooperative navigation, where agents reach different landmarks while avoiding obstacles; (b) linear formation, where agents form a line between two landmarks; (c) regular polygon formation, where agents encircle landmarks to form a regular polygon; (d) confronting pursuit, where pursuers collaborate to chase two prey, and the task succeeds when both prey are caught.
Figure 6. Mean episode reward curves for (a) 3 agents in the cooperative navigation task, (b) 5 agents in the linear formation task, (c) 4 agents in the regular polygon formation task, and (d) 3 pursuers cooperating to pursue 2 prey.
Figure 7. The average episode rewards of different methods in the cooperative navigation task with different numbers of agents.
Figure 8. The 5-agent line formation strategy is transferred to a new 10-agent line formation task through curriculum learning.
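The attention computation in the Figure 2 caption is a softmax over LeakyReLU-transformed edge scores. A minimal NumPy sketch of that step (the score values and feature vectors below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    """Element-wise LeakyReLU."""
    return np.where(x > 0, x, negative_slope * x)

def attention_weights(scores):
    """Softmax over LeakyReLU-transformed edge scores e_ij for one node."""
    z = np.exp(leaky_relu(scores))
    return z / z.sum()

# Agent 1 is connected to agents 2 and 3; e_12, e_13 are raw edge scores.
e = np.array([0.7, -1.3])            # illustrative values
alpha = attention_weights(e)         # [alpha_12, alpha_13]; sums to 1

# Agent 1's updated embedding is the attention-weighted sum of its
# neighbours' feature vectors, as described in the Figure 2 caption.
h2 = np.array([1.0, 0.0])            # illustrative neighbour features
h3 = np.array([0.0, 1.0])
h1_new = alpha[0] * h2 + alpha[1] * h3
print(alpha, h1_new)
```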
17 pages, 2161 KiB  
Article
Entropy Production in an Electro-Membrane Process at Underlimiting Currents—Influence of Temperature
by Juan Carlos Maroto, Sagrario Muñoz and Vicenta María Barragán
Entropy 2025, 27(1), 3; https://doi.org/10.3390/e27010003 - 25 Dec 2024
Abstract
The entropy production in the polarization phenomena occurring in the underlimiting regime, when an electric current circulates through a single cation-exchange membrane system, has been investigated in the 3–40 °C temperature range. From the analysis of the current–voltage curves, and considering the electro-membrane system as a one-dimensional heterogeneous system, the total entropy generation in the system has been estimated from the contribution of each part of the system. Classical polarization theory and the irreversible thermodynamics approach have been used to determine, respectively, the total electric potential drop and the entropy generation associated with the different transport mechanisms in each part of the system. The results show that part of the electric power input is dissipated as heat due to both electric migration and diffusive ion transport, while another part is converted into chemical energy stored in the saline concentration gradient. Considering the electro-membrane process as an energy conversion process, an efficiency has been defined as the ratio between the stored power and the electric power input. This efficiency increases as both the applied electric current and the temperature increase.
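The energy bookkeeping this abstract describes can be summarized schematically. The symbols below (I for the applied current, Δφ for the total potential drop, σ_tot for the total entropy production rate) are assumptions chosen for illustration, not the paper's notation:

```latex
P_{\mathrm{in}} = I\,\Delta\varphi, \qquad
P_{\mathrm{in}} = P_{\mathrm{stored}} + P_{\mathrm{diss}}, \qquad
P_{\mathrm{diss}} = T\,\sigma_{\mathrm{tot}}
\qquad\Longrightarrow\qquad
\eta \equiv \frac{P_{\mathrm{stored}}}{P_{\mathrm{in}}}
     = 1 - \frac{T\,\sigma_{\mathrm{tot}}}{I\,\Delta\varphi}.
```

Under this bookkeeping, the reported increase of η with current and temperature means the stored (chemical) fraction of the input power grows faster than the dissipated fraction.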
Show Figures

Figure 1. Sketch of the ion-exchange membrane system under study: an electric current passes through a cation-exchange membrane separating two aqueous NaCl solutions of equal concentration $c_0$ (above). The Ag|AgCl electrodes are reversible to the Cl⁻ ion; concentration profiles and the different fluxes across the system (bottom).
Figure 2. Sketch of a typical current–voltage curve of an ion-exchange membrane in an electrolyte solution (black), showing the three distinct regions: (I) linear increase, (II) plateau, and (III) overlimiting transport. The profile predicted by the Nernst model (red) and the linear voltage–current curve (blue) for a system without concentration polarization effects are also shown.
Figure 3. Sketch of the experimental setup for measuring current–voltage curves. I: electric current; B: bath; T: thermostat; S: solution; M: membrane; IE: Ag|AgCl injecting electrode; ME: Ag|AgCl voltage electrode.
Figure 4. (a) Current–voltage curves and (b) corresponding Cowan plots at different temperatures. Dashed lines are included as visual guides.
Figure 5. Estimated entropy production as a function of the applied electric current at different temperatures. Lines are provided as visual guides.
Figure 6. (a) Estimated stored and (b) dissipated powers as a function of the electric current at different temperatures.
Figure 7. Estimated efficiencies as a function of $I_r = I/I_L$ at different temperatures. Dashed lines are only visual guides. The legend lists the efficiency values at different $I_r$ percentages and temperatures for better visualization.
70 pages, 7977 KiB  
Article
A Martingale-Free Introduction to Conditional Gaussian Nonlinear Systems
by Marios Andreou and Nan Chen
Entropy 2025, 27(1), 2; https://doi.org/10.3390/e27010002 - 24 Dec 2024
Abstract
The conditional Gaussian nonlinear system (CGNS) is a broad class of nonlinear stochastic dynamical systems. Given the trajectories of a subset of state variables, the remaining variables follow a Gaussian distribution. Despite this conditionally linear structure, the CGNS exhibits strong nonlinearity, thus capturing many non-Gaussian characteristics observed in nature through its joint and marginal distributions. Desirably, it enjoys closed analytic formulae for the time evolution of its conditional Gaussian statistics, which facilitate the study of data assimilation and other related topics. In this paper, we develop a martingale-free approach to improve the understanding of CGNSs. This methodology provides a tractable way to prove the time evolution of the conditional statistics by deriving results through time discretization schemes, with the continuous-time regime obtained via a formal limiting process as the discretization time step vanishes. The discretized approach further allows for developing analytic formulae for optimal posterior sampling of unobserved state variables with correlated noise. These tools are particularly valuable for studying extreme events and intermittency and apply to high-dimensional systems. Moreover, the approach improves the understanding of different sampling methods in characterizing uncertainty. The effectiveness of the framework is demonstrated through a physics-constrained, triad-interaction climate model with cubic nonlinearity and state-dependent cross-interacting noise.
(This article belongs to the Section Information Theory, Probability and Statistics)
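For orientation, the closed conditional statistics the abstract refers to take the following well-known form in the CGNS literature (e.g., in the notation common to Chen and Majda's work). This is standard background reproduced under the assumption that the paper's model class matches it, not a formula copied from the paper; $\mathbf{u}_{\mathrm{I}}$ denotes the observed variables and $\mathbf{u}_{\mathrm{II}}$ the unobserved ones:

```latex
% CGNS form; all coefficients may depend nonlinearly on t and u_I:
d\mathbf{u}_{\mathrm{I}}  = [\mathbf{A}_0 + \mathbf{A}_1\mathbf{u}_{\mathrm{II}}]\,dt
                            + \boldsymbol{\Sigma}_{\mathrm{I}}\,d\mathbf{W}_{\mathrm{I}}, \qquad
d\mathbf{u}_{\mathrm{II}} = [\mathbf{a}_0 + \mathbf{a}_1\mathbf{u}_{\mathrm{II}}]\,dt
                            + \boldsymbol{\Sigma}_{\mathrm{II}}\,d\mathbf{W}_{\mathrm{II}}.
% Conditioned on a path of u_I, u_II is Gaussian N(mu, R) with
d\boldsymbol{\mu} = [\mathbf{a}_0 + \mathbf{a}_1\boldsymbol{\mu}]\,dt
  + \mathbf{R}\mathbf{A}_1^{*}(\boldsymbol{\Sigma}_{\mathrm{I}}\boldsymbol{\Sigma}_{\mathrm{I}}^{*})^{-1}
    \bigl(d\mathbf{u}_{\mathrm{I}} - [\mathbf{A}_0 + \mathbf{A}_1\boldsymbol{\mu}]\,dt\bigr),
\\
d\mathbf{R} = \bigl[\mathbf{a}_1\mathbf{R} + \mathbf{R}\mathbf{a}_1^{*}
  + \boldsymbol{\Sigma}_{\mathrm{II}}\boldsymbol{\Sigma}_{\mathrm{II}}^{*}
  - \mathbf{R}\mathbf{A}_1^{*}(\boldsymbol{\Sigma}_{\mathrm{I}}\boldsymbol{\Sigma}_{\mathrm{I}}^{*})^{-1}\mathbf{A}_1\mathbf{R}\bigr]\,dt.
```

It is these mean and covariance equations that the paper re-derives through time discretization rather than martingale theory.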
16 pages, 10177 KiB  
Article
A Secure and Efficient White-Box Implementation of SM4
by Xiaobo Hu, Yanyan Yu, Yinzi Tu, Jing Wang, Shi Chen, Yuqi Bao, Tengyuan Zhang, Yaowen Xing and Shihui Zheng
Entropy 2025, 27(1), 1; https://doi.org/10.3390/e27010001 - 24 Dec 2024
Abstract
Differential Computation Analysis (DCA) leverages memory traces to extract secret keys, bypassing countermeasures employed in white-box designs, such as encodings. Although researchers have made great efforts to enhance security against DCA, most solutions considerably decrease algorithmic efficiency. In our approach, the Feistel cipher SM4 is implemented as a series of table-lookup operations, and the input and output of each table are protected by randomly generated affine transformations and nonlinear encodings. We employ fourth-order nonlinear encoding to reduce the loss of efficiency while using a random sequence to shuffle lookup-table accesses, thereby severing the potential link between memory data and the intermediate values of SM4. Experimental results indicate that the DCA procedure fails to retrieve the correct key. Furthermore, theoretical analysis shows that the techniques employed in our scheme effectively prevent existing algebraic attacks. Finally, our design requires only 1.44 MB of memory, significantly less than the known DCA-resistant schemes of Zhang et al. (24.3 MB), Yuan et al. (34.5 MB), and Zhao et al. (7.8 MB). Thus, our SM4 white-box design effectively ensures security while maintaining a low memory cost.
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)
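The core protection the abstract describes, wrapping each lookup table in secret input/output encodings so that memory traces never expose raw intermediate values, follows the general white-box pattern sketched below. This is a minimal illustration of the encoding idea only: the byte-level table T is a placeholder, and the paper's actual fourth-order nonlinear encodings, affine layers, and shuffled access order are not reproduced here.

```python
import secrets

def random_byte_permutation():
    """Draw a uniformly random bijection on {0, ..., 255} (a nonlinear encoding)."""
    p = list(range(256))
    for i in range(255, 0, -1):          # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        p[i], p[j] = p[j], p[i]
    return p

def invert(p):
    """Invert a byte permutation."""
    inv = [0] * 256
    for x, y in enumerate(p):
        inv[y] = x
    return inv

# Placeholder for one secret 8-bit lookup step of the cipher; a real design
# folds key material and the SM4 S-box into tables like this one.
T = [(7 * x + 3) % 256 for x in range(256)]

f = random_byte_permutation()            # secret input encoding
g = random_byte_permutation()            # secret output encoding
f_inv = invert(f)

# The published table is g(T(f^{-1}(x))): neither T nor the key is exposed,
# and the values observed in memory are always encoded.
T_enc = [g[T[f_inv[x]]] for x in range(256)]

# Adjacent tables agree on the encodings, so they cancel end to end:
x = 0x5A
assert invert(g)[T_enc[f[x]]] == T[x]
```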
Show Figures

Figure 1. The flowchart of the $i$-th round iteration of SM4.
Figure 2. The $i$-th round function of the Xiao–Lai scheme.
Figure 3. Round function of our SM4 white-box scheme.
Figure 4. Generating a TableM table.
Figure 5. Generating an XOR table.
Figure 6. Generating a TableT table.
Figure 7. Generating a TableC table.
Figure 8. Generating a TableD table.
Figure 9. BGE analysis in our scheme.
Figure 10. The differential traces related to the first round key.