Proceedings, 2020, ECEA-5

The 5th International Electronic Conference on Entropy and Its Applications

Online | 18–30 November 2019

Volume Editor: Geert Verdoolaege, Ghent University, Belgium

Number of Papers: 31
Cover Story: The conference aims to bring together researchers to present and discuss their recent contributions without the need for travel. This e-conference is hosted on the MDPI Sciforum platform, which [...]
7 pages, 517 KiB  
Proceeding Paper
Reverse Weighted-Permutation Entropy: A Novel Complexity Metric Incorporating Distance and Amplitude Information
by Yuxing Li
Proceedings 2020, 46(1), 1; https://doi.org/10.3390/ecea-5-06688 - 17 Nov 2019
Cited by 7 | Viewed by 1495
Abstract
Permutation entropy (PE) is one of the effective complexity metrics for representing the complexity of time series, with the merits of simple and efficient calculation. In view of the limitations of PE, weighted-permutation entropy (WPE) and reverse permutation entropy (RPE) were proposed to improve its performance. WPE introduces amplitude information to weight each arrangement pattern; it not only better reveals the complexity of time series with sudden changes of amplitude, but also has better robustness to noise. By introducing distance information, RPE is defined as the distance to white noise; it has the reverse trend to traditional PE and better stability for time series of different lengths. In this paper, we propose a novel complexity metric incorporating both distance and amplitude information, named reverse weighted-permutation entropy (RWPE), which combines the advantages of WPE and RPE. Three simulation experiments were conducted: mutation signal detection testing, robustness testing to noise based on complexity, and complexity testing of time series with various lengths. The simulation results show that RWPE can be used as a complexity metric that accurately detects abrupt amplitude changes in time series and is more robust to noise. Moreover, it shows greater stability than the other three kinds of PE for time series of various lengths.
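To make these quantities concrete, below is a minimal Python sketch of an ordinal-pattern estimator with the WPE-style variance weighting and an RPE-style squared distance to the uniform (white-noise) pattern distribution. The rwpe() function shown here is only an illustration of how the two ideas can be combined; the paper's exact RWPE definition should be taken from the article itself.

import numpy as np
from itertools import permutations
from math import factorial

def ordinal_pattern_probs(x, m=3, tau=1, weighted=False):
    """Empirical probabilities of ordinal patterns of order m and lag tau.
    If weighted, each occurrence is weighted by the variance of its
    embedding vector, as in weighted-permutation entropy (WPE)."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0.0 for p in permutations(range(m))}
    n = len(x) - (m - 1) * tau
    for i in range(n):
        v = x[i:i + m * tau:tau]
        counts[tuple(np.argsort(v))] += np.var(v) if weighted else 1.0
    p = np.array(list(counts.values()))
    s = p.sum()
    return p / s if s > 0 else p

def pe(x, m=3, tau=1):
    """Plain permutation entropy, normalized to [0, 1]."""
    p = ordinal_pattern_probs(x, m, tau)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def rwpe(x, m=3, tau=1):
    """Illustrative reverse weighted-permutation entropy: squared distance
    of the variance-weighted pattern distribution to the uniform one."""
    p = ordinal_pattern_probs(x, m, tau, weighted=True)
    return float(np.sum((p - 1.0 / factorial(m)) ** 2))

For instance, pe() of a slowly varying sine stays well below 1, while white noise drives it toward 1; a reverse-type measure behaves in the opposite direction, as described above.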
Figures:
Figure 1: An original pattern and its corresponding three possible patterns (m = 3).
Figure 2: The time-domain waveform of the synthetic signal ys.
Figure 3: The four entropies of the synthetic signal ys.
Figure 4: The four entropies of the synthetic signal under different SNRs.
Figure 5: The four entropies of cosine signals with frequencies of 100 Hz. (a) PE; (b) WPE; (c) RPE; (d) RWPE.
7 pages, 6483 KiB  
Proceeding Paper
New Explanation for the Mpemba Effect
by Ilias J. Tyrovolas
Proceedings 2020, 46(1), 2; https://doi.org/10.3390/ecea-5-06658 - 17 Nov 2019
Viewed by 4597
Abstract
The purpose of this study is to examine the involvement of entropy in the Mpemba effect. Several water samples were cooled until frozen in order to probe whether preheating affects the cooling duration. We found that when the water sample was preheated, the cooling duration was reduced. Given this, we theoretically show that water gains more entropy when warmed and re-cooled to its original temperature.
Figures:
Figure 1: Experimental plan.
Figure 2: Experimental layout.
Figure 3: The three processes.
Figure 4: Curves of entropy as a function of temperature.
Figure A1: Freezer.
Figure A2: PET bottles.
Figure A3: Degasser.
Figure A4: Heat plate.
8 pages, 1620 KiB  
Proceeding Paper
Spin Waves and Skyrmions in Magneto-Ferroelectric Superlattices: Theory and Simulation
by Hung T. Diep and Ildus F. Sharafullin
Proceedings 2020, 46(1), 3; https://doi.org/10.3390/ecea-5-06662 - 17 Nov 2019
Viewed by 1397
Abstract
We present in this paper the effects of the Dzyaloshinskii–Moriya (DM) magnetoelectric coupling between ferroelectric and magnetic layers in a superlattice formed by alternating magnetic and ferroelectric films. The magnetic films are simple cubic lattices of Heisenberg spins interacting with each other via an exchange J and with the ferroelectric interface via a DM interaction. Electrical polarizations of ±1 are assigned to the simple cubic lattice sites in the ferroelectric films. We determine the ground-state (GS) spin configuration in the magnetic film. In zero field, the GS is periodically non-collinear (helical structure), and in an applied field H perpendicular to the layers it shows the existence of skyrmions at the interface. Using the Green's function method, we study the spin waves (SW) excited in a monolayer and in a bilayer sandwiched between ferroelectric films, in zero field. We show that the DM interaction strongly affects the long-wavelength SW mode. We also calculate the magnetization at low temperatures. We then use Monte Carlo simulations to calculate various physical quantities at finite temperatures, such as the critical temperature, the layer magnetization and the layer polarization, as functions of the magnetoelectric DM coupling and the applied magnetic field. The phase transition to the disordered phase is studied.
Figures:
Figure 1: (a) Magneto-ferroelectric superlattice; (b) interfacial coupling between a polarization P and five spins in a Dzyaloshinskii–Moriya (DM) interaction; (c) positions of the spins in the xy plane and the position of the non-magnetic oxygen ion, defining the DM vector (see text).
Figure 2: Ground-state (GS) spin configurations for (a) J^mf = −0.45 and (b) J^mf = −1.2, with H = 0; (c) angles between nearest neighbors (NN), schematically zoomed. See text for comments.
Figure 3: (a) Three-dimensional view of the GS configuration of the interface for moderate frustration J^2m = J^2f = −0.2; (b) 3D view of the GS configuration of the second magnetic layer; (c) zoom of a skyrmion on the interface layer: red denotes up spins, the four spins in light blue are down spins, and other colors correspond to intermediate spin orientations. The skyrmion is of the Bloch type; (d) z-components of the spins across the skyrmion shown in (c). Other parameters: J^m = J^f = 1, J^mf = −1.25 and H = 0.25.
Figure 4: Spin-wave energy E(k) versus k (k ≡ kx = kz) for (a) θ = 0.3 radian and (b) θ = 1, for a monolayer at T = 0. See text for comments.
Figure 5: (a) Energy of the magnetic films versus temperature T for (J^2m = J^2f = −0.4) (red), coinciding with the curve for (J^2m = −0.4, J^2f = 0) (black, hidden behind the red curve); the blue curve is for (J^2m = 0, J^2f = −0.4). (b) Order parameter of the magnetic films versus temperature T for (J^2m = J^2f = −0.4) (red), (J^2m = −0.4, J^2f = 0) (black) and (J^2m = 0, J^2f = −0.4) (blue). Other parameters: J^mf = −1.25, H = 0.25.
9 pages, 2342 KiB  
Proceeding Paper
Social Conflicts Studied by Statistical Physics Approach and Monte Carlo Simulations
by Hung T. Diep, Miron Kaufman and Sanda Kaufman
Proceedings 2020, 46(1), 4; https://doi.org/10.3390/ecea-5-06661 - 17 Nov 2019
Viewed by 1244
Abstract
Statistical physics models of social systems with a large number of members, each interacting with a subset of others, have been used in very diverse domains such as culture dynamics, crowd behavior, information dissemination and social conflicts. We observe that such models rely on the fact that large societal groups display surprising regularities despite individual agency. Unlike physical phenomena that obey Newton's third law, in the world of humans the magnitudes of action and reaction are not necessarily equal: the effect of the actions of group n on group m can differ from the effect of group m on group n. We therefore use the spin language to describe humans with this observation in mind. Note that particular individual behaviors do not survive in statistical averages; only common characteristics remain. We have studied two-group conflicts as well as three-group conflicts, using time-dependent mean-field theory and Monte Carlo simulations. Each group is defined by two parameters, which express the intra-group strength of interaction among members and its attitude toward negotiations. The interaction with the other group is parameterized by a constant which expresses an attraction or a repulsion to the other group's average attitude. The model includes a social temperature T which acts on each group and quantifies the social noise. One of the most striking features is the periodic oscillation of the attitudes toward negotiation or conflict for certain ranges of parameter values. Other striking results include chaotic behavior, namely intractable, unpredictable conflict outcomes.
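As a rough illustration of the kind of time-dependent mean-field dynamics described here, the Python sketch below iterates two coupled group stances S1 and S2 with intra-group couplings J1, J2, non-reciprocal inter-group couplings K12 = −K21, and a social temperature T. The tanh relaxation rule, the saturation value m, and all default parameter values are illustrative assumptions chosen to exhibit oscillatory behavior; they are not the authors' exact equations.

import numpy as np

def two_group_dynamics(J1=0.15, J2=0.15, K12=-0.2, K21=0.2, T=1.0,
                       S1=3.0, S2=-3.0, m=3.0, dt=0.1, steps=5000):
    """Illustrative coupled relaxation of two group stances S1, S2.
    Each stance decays toward a tanh response to its local 'field',
    which mixes the own-group coupling (J) and the cross-group
    coupling (K); K12 = -K21 encodes unequal action and reaction."""
    traj = np.empty((steps, 2))
    for k in range(steps):
        h1 = J1 * S1 + K12 * S2          # field felt by group 1
        h2 = J2 * S2 + K21 * S1          # field felt by group 2
        S1 += dt * (-S1 + m * np.tanh(h1 / T))
        S2 += dt * (-S2 + m * np.tanh(h2 / T))
        traj[k] = (S1, S2)
    return traj

Plotting the two columns of the returned trajectory against the step index shows sustained or damped oscillations of the kind discussed in the figures below, depending on the couplings and on T.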
Figures:
Figure 1: All graphs are for −K12 = K21 = 0.2. For J1 = J2 < 0.15 the time evolution is decaying oscillations (not shown). (a) J1 = J2 = 0.15: the state is critical and the amplitude of the oscillations does not decay in time; (b) J1 = J2 = 0.6: the period of the oscillations is long; (c) J1 = J2 = 0.7: SA and SB evolve in time to non-zero steady-state values. The transition from (b) to (c) is discontinuous (not shown).
Figure 2: (a) Before interaction, JA = JB = 0.02, qA = qB = 7, initial conditions SA = −SB = 3; both groups become "disordered" at Tc0 ≃ 102 (arbitrary units). (b) With inter-group interaction −KAB = KBA = 0.005, both groups become "disordered" at Tc = 53.
Figure 3: Dynamics of two groups upon interaction −KAB = KBA = 0.005 with JA = JB = 0.02, qA = qB = 7, initial conditions SA = −SB = 3: (a) at "social temperature" T = 48, below Tc = 53, where both groups are "ordered"; (b) at "social temperature" T = 74, between Tc and Tc0 = 102 (values given in the caption of Figure 2); (c) at T = 125, above Tc0, in the initial disordered phase of both groups. See text for comments.
Figure 4: (a–c) Synchronized sustained oscillations for groups 1–3, respectively, at T = 1.0; (d) closed-loop attractor. For color codes and parameter values, see Figure 5.
Figure 5: In this figure and all following figures, groups 1–3 are represented by red, blue and green symbols, respectively. (a–c) Damped oscillations at high T for groups 1–3, respectively: T = 1.45; (d) spiral trajectory to disorder. J1 = 0.15, J2 = 0.35, J3 = 0.25, K12 = −0.2, K21 = 0.2, K23 = 0.1, K32 = 0.1, K31 = 0.15, K13 = −0.15.
Figure 6: (a) Synchronization; (b) fragmented attractor. T = 0.5. For color codes and parameter values, see Figure 5.
Figure 7: (a) Synchronization; (b) fragmented attractor. T = 0.465. For color codes and parameter values, see Figure 5.
Figure 8: Stance of the three groups as a function of social temperature T before inter-group interactions are turned on. J1 = 0.15, J2 = 0.35, J3 = 0.25. For color codes, see Figure 5. See text for comments.
Figure 9: Time dependence of the three groups' stances at low temperatures (for color codes, see Figure 5): (a) T = 2.5254, all three groups are ordered; (b) T = 5.8474, groups 1 and 3 are disordered, group 2 is not; (c) T = 7.5084, all three groups are disordered. The same parameters as in Figure 8 have been used: J1 = 0.15, J2 = 0.35, J3 = 0.25, K12 = −0.20, K21 = 0.20, K13 = −0.15, K31 = 0.15, K23 = 0.10, K32 = 0.10.
8 pages, 795 KiB  
Proceeding Paper
Fast Tuning of Topic Models: An Application of Rényi Entropy and Renormalization Theory
by Sergei Koltcov, Vera Ignatenko and Sergei Pashakhin
Proceedings 2020, 46(1), 5; https://doi.org/10.3390/ecea-5-06674 - 17 Nov 2019
Cited by 1 | Viewed by 1260
Abstract
In practice, the critical step in building machine learning models of big data (BD) is the procedure of parameter tuning with a grid search, which is costly in terms of time and computing resources. Due to their size, BD are comparable to mesoscopic physical systems; hence, methods of statistical physics can be applied to BD. The paper shows that topic modeling demonstrates self-similar behavior under a varying number of clusters. Such behavior allows using a renormalization technique. The combination of a renormalization procedure with the Rényi entropy approach allows for fast searching of the optimal number of clusters. In this paper, the renormalization procedure is developed for the Latent Dirichlet Allocation (LDA) model with a variational Expectation-Maximization algorithm. The experiments were conducted on two document collections with a known number of clusters, in two languages. The paper presents results for three versions of the renormalization procedure: (1) renormalization with random merging of clusters, (2) renormalization based on minimal values of Kullback–Leibler divergence and (3) renormalization merging clusters with minimal values of Rényi entropy. The paper shows that the renormalization procedure allows finding the optimal number of topics 26 times faster than grid search, without significant loss of quality.
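To make the merging step concrete, the Python sketch below performs one renormalization step on a topic–word probability matrix, merging a pair of topics either at random or by minimal symmetric Kullback–Leibler divergence (two of the three variants listed above), and provides a generic order-q Rényi entropy that can be tracked as topics are merged. The entropy definition used here is the textbook Rényi entropy of a probability vector, which is an assumption; the paper's own formulation and the LDA training loop are not reproduced.

import numpy as np

def renyi_entropy(p, q=2.0):
    """Generic order-q Rényi entropy of a probability vector p (q != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

def sym_kl(p, r, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two distributions."""
    p, r = p + eps, r + eps
    return float(np.sum(p * np.log(p / r)) + np.sum(r * np.log(r / p)))

def renormalize_once(phi, rule="kl", rng=None):
    """Merge two topics of a topic-word matrix phi (topics x words,
    rows summing to 1) and return the reduced, renormalized matrix."""
    rng = rng or np.random.default_rng()
    k = phi.shape[0]
    if rule == "random":
        i, j = rng.choice(k, size=2, replace=False)
    else:  # merge the pair with minimal symmetric KL divergence
        _, i, j = min((sym_kl(phi[a], phi[b]), a, b)
                      for a in range(k) for b in range(a + 1, k))
    merged = (phi[i] + phi[j]) / 2.0
    keep = [r for r in range(k) if r not in (i, j)]
    return np.vstack([phi[keep], merged])

Repeatedly calling renormalize_once() and recording renyi_entropy(phi.ravel() / phi.shape[0]) produces curves of the kind shown in the figures below, where the minimum marks a candidate for the optimal number of topics.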
Figures:
Figure 1: Rényi entropy curves. Black: successive topic modeling. Other colors: renormalization with random merging of topics.
Figure 2: Rényi entropy curves. Black: successive topic modeling; red: renormalization with the minimum-local-entropy merging principle.
Figure 3: Rényi entropy curves. Black: successive topic modeling; red: renormalization with the minimum-KL-divergence merging principle.
Figure 4: Rényi entropy curves. Black: successive topic modeling. Other colors: renormalization with random merging of topics.
Figure 5: Rényi entropy curves. Black: successive topic modeling; red: renormalization with the minimum-local-entropy merging principle.
Figure 6: Rényi entropy curves. Black: successive topic modeling; red: renormalization with the minimum-KL-divergence merging principle.
5 pages, 892 KiB  
Proceeding Paper
Computer Simulation of Magnetic Skyrmions
by Vitalii Kapitan, Egor Vasiliev and Alexander Perzhu
Proceedings 2020, 46(1), 6; https://doi.org/10.3390/ecea-5-06678 - 17 Nov 2019
Cited by 1 | Viewed by 2118
Abstract
In this paper, we present the results of a numerical simulation of the thermodynamics of an array of classical Heisenberg spins placed on a 2D square lattice, which effectively represents the behaviour of a single layer. Using the Metropolis algorithm, we show the temperature behaviour of a system with competing Heisenberg and Dzyaloshinskii–Moriya interactions (DMI), in contrast with the classical Heisenberg system. We show the process of nucleation of a skyrmion depending on the value of the external magnetic field, and we propose a method for controlling the movement of skyrmions.
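A minimal sketch of a Metropolis update for such a system is given below: classical Heisenberg spins on an L x L square lattice with periodic boundaries, ferromagnetic exchange J, a DMI whose vector is taken along the bond direction (a Bloch-type convention chosen here for illustration), and an out-of-plane field H. The energy convention, the proposal move and the parameter values are assumptions for the sketch, not the paper's exact implementation.

import numpy as np

def local_energy(spins, i, j, J=1.0, D=1.0, H=(0.0, 0.0, 0.2)):
    """Energy of spin (i, j) on an L x L x 3 array with periodic
    boundaries: Heisenberg exchange, bond-directed DMI, and Zeeman term."""
    L = spins.shape[0]
    s = spins[i, j]
    e = -np.dot(s, H)
    for (di, dj), d_vec in (((1, 0), (1.0, 0.0, 0.0)),
                            ((0, 1), (0.0, 1.0, 0.0))):
        nb = spins[(i + di) % L, (j + dj) % L]
        e += -J * np.dot(s, nb) - D * np.dot(d_vec, np.cross(s, nb))
        nb = spins[(i - di) % L, (j - dj) % L]
        e += -J * np.dot(s, nb) + D * np.dot(d_vec, np.cross(s, nb))
    return e

def metropolis_sweep(spins, T=0.5, rng=None):
    """One Metropolis sweep: propose a random new direction per attempt."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = spins[i, j].copy()
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        e_old = local_energy(spins, i, j)
        spins[i, j] = new
        e_new = local_energy(spins, i, j)
        if e_new > e_old and rng.random() >= np.exp(-(e_new - e_old) / T):
            spins[i, j] = old  # reject: restore the previous spin
    return spins

Starting from random spins and sweeping at decreasing T, with a suitable D/J ratio and field, is the standard way such simulations relax toward spiral or skyrmion configurations of the kind shown in the figures below.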
Figures:
Figure 1: A ground state of the system with DMI and Heisenberg exchange: a spiral state.
Figure 2: Skyrmion lattice.
Figure 3: Movement of the skyrmion.
9 pages, 305 KiB  
Proceeding Paper
The Measurement of Statistical Evidence as the Basis for Statistical Reasoning
by Michael Evans
Proceedings 2020, 46(1), 7; https://doi.org/10.3390/ecea-5-06682 - 17 Nov 2019
Cited by 1 | Viewed by 1631
Abstract
There are various approaches to the problem of how one is supposed to conduct a statistical analysis. Different analyses can lead to contradictory conclusions in some problems, so this is not a satisfactory state of affairs. It seems that all approaches make reference to the evidence in the data concerning questions of interest as a justification for the methodology employed. It is fair to say, however, that none of the most commonly used methodologies is absolutely explicit about how statistical evidence is to be characterized and measured. We will discuss the general problem of statistical reasoning and the development of a theory for it that is based on being precise about statistical evidence. This will be shown to lead to the resolution of a number of problems.
8 pages, 699 KiB  
Proceeding Paper
Detection of Arrhythmic Cardiac Signals from ECG Recordings Using the Entropy–Complexity Plane
by Pablo Martinez Coq, Walter Legnani and Ricardo Armentano
Proceedings 2020, 46(1), 8; https://doi.org/10.3390/ecea-5-06693 - 18 Nov 2019
Cited by 3 | Viewed by 1149
Abstract
The aim of this work was to analyze, in the entropy–complexity plane (HxC), time series coming from ECG recordings, with the objective of discriminating recordings from two different groups of patients: normal sinus rhythm and cardiac arrhythmias. The HxC plane used in this study has Shannon entropy as one of its axes, and the other is the statistical complexity. To compute the entropy, the probability distribution function (PDF) of the observed data was obtained using the methodology proposed by Bandt and Pompe (2002). The database used in the present study was the ECG recordings obtained from PhysioNet: 47 long-term signals of patients with diagnosed cardiac arrhythmias and 18 long-term signals from normal sinus rhythm patients were processed. Average values of statistical complexity and normalized Shannon entropy were calculated and analyzed in the HxC plane for each time series. The average complexity values of the ECGs of patients with diagnosed arrhythmias were larger than those of the normal sinus rhythm group; conversely, the average Shannon entropy values for arrhythmia patients were lower than those of the normal sinus rhythm group. This characteristic made it possible to discriminate the positions of the two groups of signals in the HxC plane. The results were analyzed through a multivariate statistical hypothesis test. The proposed methodology has remarkable conceptual simplicity and shows promising efficiency in the detection of cardiovascular pathologies.
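The HxC coordinates referred to here are commonly built as follows: the Bandt–Pompe ordinal-pattern PDF gives a normalized Shannon entropy H, and the statistical complexity is C = Q_J * H, where Q_J is the normalized Jensen–Shannon divergence between the pattern PDF and the uniform distribution. The Python sketch below implements that standard construction; the embedding order m = 6 and lag tau = 1 are illustrative defaults, not necessarily the values used in the paper.

import numpy as np
from itertools import permutations
from math import factorial

def bandt_pompe_pdf(x, m=6, tau=1):
    """Ordinal-pattern probability distribution (Bandt and Pompe, 2002)."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(m))}
    n = len(x) - (m - 1) * tau
    for i in range(n):
        counts[tuple(np.argsort(x[i:i + m * tau:tau]))] += 1
    return np.array(list(counts.values()), dtype=float) / n

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def hxc(x, m=6, tau=1):
    """Normalized Shannon entropy H and statistical complexity C."""
    p = bandt_pompe_pdf(x, m, tau)
    n_states = factorial(m)
    h = shannon(p) / np.log(n_states)
    u = np.full(n_states, 1.0 / n_states)                 # uniform reference
    js = shannon((p + u) / 2) - shannon(p) / 2 - shannon(u) / 2
    # normalization constant of the Jensen-Shannon divergence
    q0 = -2.0 / ((n_states + 1) / n_states * np.log(n_states + 1)
                 - 2 * np.log(2 * n_states) + np.log(n_states))
    return h, q0 * js * h

Applied to a long ECG segment or R-R interval series, hxc() returns one (H, C) point per recording, and groups of recordings can then be compared in the plane as in the figures below.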
Figures:
Figure 1: The orange trace represents record 16273 from the MIT-BIH Normal Sinus Rhythm database, and the light blue trace represents record 100 from the MIT-BIH Arrhythmia database.
Figure 2: Age distribution of each group of patients: (a) normal sinus rhythm recordings; (b) cardiac arrhythmia recordings.
Figure 3: HxC plane. Mean values of statistical complexity and normalized Shannon entropy for different comparisons of sub-groups of interest.
Figure 4: Detailed HxC plane for discriminating the mean values of both groups of interest.
9 pages, 470 KiB  
Proceeding Paper
Graph Entropy Associated with Multilevel Atomic Excitation
by Abu Mohamed Alhasan
Proceedings 2020, 46(1), 9; https://doi.org/10.3390/ecea-5-06675 - 17 Nov 2019
Viewed by 1503
Abstract
A graph model is presented to describe multilevel atomic structure. As an example, we take the double-Λ configuration in alkali-metal atoms with hyperfine structure and nuclear spin I = 3/2, as a graph with four vertices. Links are treated as coherences. We introduce the transition matrix, which describes the connectivity matrix in the static graph model. In general, the transition matrix describes the spatiotemporal behavior of the dynamic graph model. Furthermore, it describes multiple connections and self-looping of vertices. The atomic excitation is made by short pulses, so that the hyperfine structure is well resolved. Entropy associated with the proposed dynamic graph model is used to identify transitions, as well as local stabilization in the system, without invoking the energy concept of the propagated pulses.
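The figures below track an SVD entropy of the transition matrix. A generic version of that quantity, the Shannon entropy of the normalized singular-value spectrum (sometimes read as a residual von Neumann entropy), is sketched here; the construction of the transition matrix M(t; z) itself is specific to the paper and is not reproduced.

import numpy as np

def svd_entropy(M, normalize=True):
    """Shannon entropy of the normalized singular-value spectrum of M.
    Assumed here to be representative of the paper's residual
    von Neumann entropy of the transition matrix."""
    s = np.linalg.svd(np.asarray(M), compute_uv=False)
    p = s / s.sum()                      # normalized singular values
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    return h / np.log(len(s)) if normalize and len(s) > 1 else h

Evaluating svd_entropy() on the transition matrix at successive propagation distances z would produce space-dependent entropy curves of the kind referred to in Figures 4 and 5.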
Figures:
Figure 1: Energy level diagram of the D1 transition in the 87Rb atom, including hyperfine structure.
Figure 2: Graph model based upon the transition matrix M1(t; z), corresponding to the energy level diagram for the D1 transition in the 87Rb atom including hyperfine structure, as depicted in Figure 1.
Figure 3: Graph model based upon the transition matrix M6(t; z), which includes full details of the atomic components of the density matrix as well as the transition field contributions.
Figure 4: Space-dependent SVD entropy as the residual von Neumann entropy for different classes of the transition matrix M(t; z).
Figure 5: Space-dependent SVD entropy as the residual von Neumann entropy for different classes of the transition matrix, as M6(t; z) and its modified version in which the atomic components in the last two columns (see Section 3) are replaced by the field components.
8 pages, 428 KiB  
Proceeding Paper
Information Length as a New Diagnostic of Stochastic Resonance
by Eun-jin Kim and Rainer Hollerbach
Proceedings 2020, 46(1), 10; https://doi.org/10.3390/ecea-5-06667 - 17 Nov 2019
Viewed by 1047
Abstract
Stochastic resonance is a subtle, yet powerful phenomenon in which noise plays an interesting role of amplifying a signal instead of attenuating it. It has attracted great attention, with a vast number of applications in physics, chemistry, biology, etc. Popular measures to study stochastic resonance include signal-to-noise ratios, residence time distributions, and different information theoretic measures. Here, we show that the information length provides a novel method to capture stochastic resonance. The information length measures the total number of statistically different states along the path of a system. Specifically, we consider the classical double-well model of stochastic resonance in which a particle in a potential V(x, t) = −x^2/2 + x^4/4 − A sin(ωt) x is subject to an additional stochastic forcing that causes it to occasionally jump between the two wells at x ≈ ±1. We present direct numerical solutions of the Fokker–Planck equation for the probability density function p(x, t) for ω = 10^−2 to 10^−6 and A ∈ [0, 0.2], and show that the information length shows a very clear signal of the resonance. That is, stochastic resonance is reflected in the total number of different statistical states that a system passes through.
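Numerically, the information length can be sketched directly from snapshots of p(x, t): E(t) = ∫ dx (∂_t p)^2 / p and L = ∫ dt sqrt(E(t)), which are the quantities the figures below refer to. The finite-difference discretization here is a straightforward illustration, not the Fokker–Planck solver used in the paper.

import numpy as np

def information_length(p_xt, x, t):
    """Information length L from snapshots p_xt[k] = p(x, t_k):
    E(t) = integral over x of (dp/dt)^2 / p, and L = integral of sqrt(E) dt.
    Uses simple finite differences in t and rectangle-rule integration."""
    p_xt = np.asarray(p_xt, dtype=float)
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    dp_dt = np.gradient(p_xt, t, axis=0)                 # dp/dt per snapshot
    integrand = np.where(p_xt > 0,
                         dp_dt ** 2 / np.maximum(p_xt, 1e-300), 0.0)
    E = integrand @ np.gradient(x)                       # E(t_k)
    return float(np.sum(np.sqrt(E) * np.gradient(t)))    # total L

Feeding in p(x, t) over one forcing period for a range of noise levels D, and locating the D that maximizes L, reproduces the kind of resonance signature shown in Figure 5.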
Figures:
Figure 1: The probability density functions (PDFs) p(x, t) at four times throughout the cycle, with the numbers n = 0–3 beside individual curves corresponding to t = nT/4 mod(T). All three panels are for ω = 10^−4 and A = 0.04, with (a) D = 0.01, (b) D = 0.0324, (c) D = 0.1.
Figure 2: The maxima over the cycle of ∫_0^∞ p(x, t) dx, as functions of the noise level D. The numbers 2 to 6 beside individual curves correspond to ω = 10^−2 to 10^−6. (a) A = 0.02, (b) A = 0.04. The thick dashed curves show results from System (10). The dotted vertical lines are at D_res given by System (9) for ω = 10^−2 to 10^−6; note how well these values agree with the maxima over D of the corresponding curves.
Figure 3: The top row shows G(t) = ∫_0^∞ p(x, t) dx, and the bottom row the corresponding W(t) = −dG(t)/dt. All three solutions are for ω = 10^−4, with (a) D = 0.023, (b) D = 0.025, (c) D = 0.027, and A = 0.02, 0.04 and 0.08 as indicated by the numbers beside individual curves.
Figure 4: E(t) as a function of time throughout the period T. All three panels are for ω = 10^−4, with (a) D = 0.01, (b) D = 0.0324, (c) D = 0.1, and A = 0.02, 0.04 and 0.08 as indicated by the numbers beside individual curves.
Figure 5: L over one cycle, as a function of the noise level D. The numbers 2 to 6 beside individual curves correspond to ω = 10^−2 to 10^−6. (a) A = 0.02, (b) A = 0.04. The thick dashed curves are L = 4A/(1.4 D^(1/2)). As in Figure 2, the dotted vertical lines are at D_res given by (9) for ω = 10^−2 to 10^−6; note how well these values again agree with the maxima of the corresponding curves.
8 pages, 571 KiB  
Proceeding Paper
Entropy Production and the Maximum Entropy of the Universe
by Vihan M. Patel and Charles Lineweaver
Proceedings 2020, 46(1), 11; https://doi.org/10.3390/ecea-5-06672 - 17 Nov 2019
Cited by 1 | Viewed by 2795
Abstract
The entropy of the observable universe has been calculated as S_uni ~ 10^104 k and is dominated by the entropy of supermassive black holes. Irreversible processes in the universe can only happen if there is an entropy gap ΔS between the entropy of the observable universe S_uni and its maximum entropy S_max: ΔS = S_max − S_uni. Thus, the entropy gap ΔS is a measure of the remaining potentially available free energy in the observable universe. To compute ΔS, one needs to know the value of S_max. There is no consensus on whether S_max is a constant or is time-dependent. A time-dependent S_max(t) has been used to represent instantaneous upper limits on entropy growth. However, if we define S_max as a constant equal to the final entropy of the observable universe at its heat death, S_max ≡ S_max,HD, we can interpret T ΔS as a measure of the remaining potentially available (but not instantaneously available) free energy of the observable universe. The time-dependent slope dS_uni/dt(t) then becomes the best estimate of current entropy production, and T dS_uni/dt(t) is the upper limit to free energy extraction.
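As a rough order-of-magnitude illustration of the entropy gap and of T ΔS (using S_uni ~ 10^104 k from the abstract, the heat-death value S_max,HD ~ 10^123 k quoted in Figure 2, and the present CMB temperature T ≈ 2.7 K; the arithmetic, not the cosmology, is the point here):

k_B = 1.380649e-23        # Boltzmann constant, J/K
S_uni = 1e104 * k_B       # present entropy of the observable universe
S_max = 1e123 * k_B       # assumed constant heat-death maximum entropy
T_cmb = 2.7               # current CMB temperature, K

dS = S_max - S_uni        # entropy gap, dominated entirely by S_max
print(f"entropy gap ~ {dS / k_B:.1e} k")     # ~1.0e+123 k
print(f"T * dS      ~ {T_cmb * dS:.1e} J")   # ~3.7e+100 J upper bound

The gap is utterly dominated by S_max,HD, which is why the choice between a constant and a time-dependent S_max matters for interpreting the remaining free energy.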
Figures:
Figure 1: The CMB temperature is plotted alongside logarithmic scale factor and time. The temperature of the CMB is currently ~2.7 K and reaches the de Sitter temperature ~2.4 × 10^−30 K at 10^107 s (= 10^100 years). At this time all of the supermassive black holes have evaporated and their Hawking radiation has been red-shifted and diluted during the exponential expansion of the Λ-dominated era. The dashed line is the Λ-dependent de Sitter temperature, which we set equal to the Planck temperature during inflation.
Figure 2: The dashed lines refer to the competing models for maximum entropy. The maximum entropy achieved at heat death, S_max,HD, is a constant value at S ~ 10^123 k. The time-dependent maximum S_max(t), as described by Davies [22], considers which processes at a given point in the evolution can contribute to entropy increase. It does not account for the rate at which these processes can increase entropy, but does try to account for the total amount of entropy the new entropy-increasing processes could produce. The limits on the rate of entropy production will slow as the expansion of the universe prevents further accretion of mass onto supermassive black holes in the Λ-dominated era.
Figure 3: Following the growth of size for a spherical distribution of matter. Dotted lines: the virialization of structure by gravitational collapse. Dashed line: a flat Ω_m = 1 universe in which structure formation does not turn off. Solid line: the consensus Λ-CDM universe in which structure formation turns off, with the largest objects, on the order of galaxy clusters (~5 Mpc), having already formed.
15 pages, 373 KiB  
Proceeding Paper
Inheritance is a Surjection: Description and Consequence
by Paul Ballonoff
Proceedings 2020, 46(1), 12; https://doi.org/10.3390/ecea-5-06659 - 17 Nov 2019
Viewed by 1134
Abstract
Consider an evolutionary process. In genetic inheritance and in human cultural systems, each new offspring is assigned to be produced by a specific pair of the previous population. This form of mathematical arrangement is called a surjection. We have thus briefly described the mechanics of genetics: physical mechanics describes the possible forms of loci, and normal genetic statistics describe the results as viability of offspring in actual use. However, we have also described much of the mechanics of mathematical anthropology. Understanding that what we know as inheritance is the result of finding surjections and their consequences is useful in understanding, and perhaps predicting, biological, as well as human, evolution.
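To make the term concrete: the assignment described above is a map from the offspring generation onto pairs drawn from the parental generation, and it is a surjection onto the set of reproducing pairs when every such pair has at least one offspring mapped to it. A toy illustration with purely hypothetical names:

# Offspring -> parental pair: a map from the new generation onto pairs
# of the previous one. All names are illustrative, not from the paper.
parents = {
    "child1": ("A", "B"),
    "child2": ("A", "B"),
    "child3": ("C", "D"),
}

reproducing_pairs = {("A", "B"), ("C", "D")}

# The map is a surjection onto reproducing_pairs iff every pair is hit.
is_surjection = reproducing_pairs.issubset(set(parents.values()))
print(is_surjection)  # True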
Figures:
Figure 1: Example from Ruheman's study of Australian kinship.
8 pages, 243 KiB  
Proceeding Paper
Symplectic/Contact Geometry Related to Bayesian Statistics
by Atsuhide Mori
Proceedings 2020, 46(1), 13; https://doi.org/10.3390/ecea-5-06665 - 17 Nov 2019
Viewed by 985
Abstract
In previous work, the author gave the following symplectic/contact geometric description of the Bayesian inference of normal means: the space H of normal distributions is an upper half-plane which admits two operations, namely, the convolution product and the normalized pointwise product of two probability density functions. There is a diffeomorphism F of H that interchanges these operations as well as sends any e-geodesic to an e-geodesic. The product of two copies of H carries positive and negative symplectic structures and a bi-contact hypersurface N. The graph of F is Lagrangian with respect to the negative symplectic structure and is contained in the bi-contact hypersurface N. Further, it is preserved under a bi-contact Hamiltonian flow with respect to a single function. The restriction of the flow to the graph of F then presents the inference of means. The author showed that this also works for the Student t-inference of smoothly moving means and enables us to consider the smoothness of data smoothing. In this presentation, the space of multivariate normal distributions is foliated by means of the Cholesky decomposition of the covariance matrix. This provides a pair of regular Poisson structures and generalizes the above symplectic/contact description to the multivariate case. Most of the ideas presented here have been described at length in a later article by the author.
17 pages, 339 KiB  
Proceeding Paper
Comparative Examination of Nonequilibrium Thermodynamic Models of Thermodiffusion in Liquids
by Semen N. Semenov and Martin E. Schimpf
Proceedings 2020, 46(1), 14; https://doi.org/10.3390/ecea-5-06680 - 17 Nov 2019
Cited by 2 | Viewed by 1266
Abstract
We analyze existing models for material transport in non-isothermal non-electrolyte liquid mixtures that utilize non-equilibrium thermodynamics. Many different sets of equations for material transport have been derived that, while based on the same fundamental expression of entropy production, utilize different terms of the temperature- and concentration-induced gradients in the chemical potential to express the material flux. We reason that only by establishing a system of transport equations that satisfies the following three requirements can we obtain a valid thermodynamic model of thermodiffusion based on entropy production, and understand the underlying physical mechanism: (1) maintenance of mechanical equilibrium in a closed steady-state system, expressed by a form of the Gibbs–Duhem equation that accounts for all the relevant gradients in concentration, temperature, and pressure and the respective thermodynamic forces; (2) thermodiffusion (thermophoresis) is zero in pure unbounded liquids (i.e., in the absence of wall effects); (3) invariance in the derived concentrations of components in a mixture, regardless of which concentration or material flux is considered to be the dependent versus independent variable in an overdetermined system of material transport equations. The analysis shows that thermodiffusion in liquids is based on the entropic mechanism.
8 pages, 442 KiB  
Proceeding Paper
Performance of Portfolios Based on the Expected Utility-Entropy Fund Rating Approach
by Daniel Chiew, Judy Qiu, Sirimon Treepongkaruna, Jiping Yang and Chenxiao Shi
Proceedings 2020, 46(1), 15; https://doi.org/10.3390/ecea-5-06679 - 17 Nov 2019
Viewed by 1184
Abstract
Yang and Qiu proposed and reframed an expected utility-entropy (EU-E) based decision model; later on, a similar numerical representation for a risky choice was axiomatically developed by Luce et al. under the condition of segregation. Recently, we established a fund rating approach based on [...] Read more.
Yang and Qiu proposed and reframed an expected utility-entropy (EU-E) based decision model; later on, a similar numerical representation for a risky choice was axiomatically developed by Luce et al. under the condition of segregation. Recently, we established a fund rating approach based on the EU-E decision model and Morningstar ratings. In this paper, we apply the approach to US mutual funds and construct portfolios using the best rated funds. Furthermore, we evaluate the performance of the fund ratings based on the EU-E decision model against Morningstar ratings by examining the performance of the three models in portfolio selection. The conclusions show that portfolios constructed using the ratings based on the EU-E models with moderate tradeoff coefficients perform better than those constructed using Morningstar ratings. The conclusion is robust to different rebalancing intervals. Full article
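As a rough, hedged illustration of how such a rating score might be computed, the sketch below combines an expected utility term with the Shannon entropy of a fund's discretized return distribution through a tradeoff coefficient lam; the exact functional form, utility function, and binning used in the paper may differ.

import numpy as np

def eu_e_score(returns, lam=0.25, n_bins=20):
    # Hedged sketch of an expected utility-entropy (EU-E) style score:
    # lam trades off expected utility against the Shannon entropy of the
    # discretized return distribution. The paper's exact definition may differ.
    returns = np.asarray(returns, dtype=float)
    expected_utility = np.log1p(returns).mean()   # log utility, assumes returns > -1
    counts, _ = np.histogram(returns, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log(p)).sum()
    return lam * expected_utility - (1.0 - lam) * entropy

# Rank hypothetical funds by the score before forming an equally weighted portfolio.
rng = np.random.default_rng(0)
funds = {name: rng.normal(0.01, 0.05, 250) for name in ("A", "B", "C")}
print(sorted(funds, key=lambda k: eu_e_score(funds[k], lam=0.25), reverse=True))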
Show Figures

Figure 1: Summary of portfolio rebalancing periods. Note: Each portfolio is rebalanced and the performance of each portfolio is recorded at each interval as illustrated by the interval number. N refers to the total number of portfolios constructed using each of the ratings based on Morningstar and the EU-E model where λ takes a value of 0.25 and 0.75.
8 pages, 547 KiB  
Proceeding Paper
A Novel Improved Feature Extraction Technique for Ship-radiated Noise Based on Improved Intrinsic Time-Scale Decomposition and Multiscale Dispersion Entropy
by Zhaoxi Li, Yaan Li, Kai Zhang and Jianli Guo
Proceedings 2020, 46(1), 16; https://doi.org/10.3390/ecea-5-06687 - 17 Nov 2019
Viewed by 1159
Abstract
Entropy feature analysis is an important tool for the classification and identification of different types of ships. In order to overcome the limitations of traditional feature extraction of ship-radiated noise in complex marine environments, we proposed a novel feature extraction method for ship-radiated [...] Read more.
Entropy feature analysis is an important tool for the classification and identification of different types of ships. In order to overcome the limitations of traditional feature extraction of ship-radiated noise in complex marine environments, we proposed a novel feature extraction method for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion entropy (MDE). The proposed feature extraction technique is named IITD-MDE. IITD, as an improved algorithm, has more reliable performance than intrinsic time-scale decomposition (ITD). Firstly, five types of ship-radiated noise signals are decomposed into a series of intrinsic scale components (ISCs) by IITD. Then, we select the ISC with the main information through correlation analysis and calculate the MDE value as a feature vector. Finally, the feature vector is input into the support vector machine (SVM) classifier to analyze and obtain the classification. The experimental results demonstrate that the recognition rate of the proposed technique reaches 86%. Therefore, compared with the other feature extraction methods, the proposed method is able to classify the different types of ships effectively. Full article
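As a hedged sketch of the feature step only (IITD itself is omitted, and the parameters m, c, and the scale range are assumptions of this sketch, not the paper's settings), the multiscale dispersion entropy of one intrinsic scale component could be computed along these lines; the resulting feature vector would then be passed to an SVM classifier.

import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, m=2, c=6, delay=1):
    # Map samples to c classes through the normal CDF, count the dispersion
    # patterns of length m, and normalize the Shannon entropy by log(c**m).
    x = np.asarray(x, dtype=float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.digitize(y, np.linspace(0, 1, c + 1)[1:-1]) + 1   # classes 1..c
    n = len(z) - (m - 1) * delay
    patterns = np.stack([z[i * delay:i * delay + n] for i in range(m)], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(c ** m)

def multiscale_dispersion_entropy(x, scales=range(1, 11), **kw):
    # Coarse-grain the signal at each scale (non-overlapping means), then
    # compute the dispersion entropy of the coarse-grained series.
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(dispersion_entropy(x[:n * s].reshape(n, s).mean(axis=1), **kw))
    return np.array(out)

# Feature vector for one hypothetical intrinsic scale component (ISC).
isc = np.random.default_rng(1).standard_normal(4096)
features = multiscale_dispersion_entropy(isc)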
Show Figures

Figure 1: The comparison of the interpolation methods: (a) linear interpolation, (b) cubic spline interpolation, and (c) Akima interpolation.
Figure 2: Intrinsic scale component (ISC) satisfying the conditions.
Figure 3: Results of the signal obtained by (a) intrinsic time-scale decomposition (ITD) and (b) improved intrinsic time-scale decomposition (IITD).
Figure 4: Coarse-grained process.
Figure 5: The time-domain waveforms of five types of ship-radiated noise signals.
Figure 6: The results of the feature extraction methods: (a) IITD-MDE, (b) ITD-MDE.
Figure 7: Error bar graphs of these methods: (a) improved intrinsic time-scale decomposition-multiscale dispersion entropy (IITD-MDE), (b) intrinsic time-scale decomposition-multiscale dispersion entropy (ITD-MDE).
Figure 8: Support vector machine (SVM) classification results of different methods: (a) the proposed method, (b) the ITD-MDE method.
7 pages, 1165 KiB  
Proceeding Paper
Entropy-Based Approach for the Analysis of Spatio-Temporal Urban Growth Dynamics
by Garima Nautiyal, Sandeep Maithani, Ashutosh Bhardwaj and Archana Sharma
Proceedings 2020, 46(1), 17; https://doi.org/10.3390/ecea-5-06670 - 17 Nov 2019
Viewed by 1294
Abstract
Relative Entropy (RE) is defined as the measure of the degree of randomness of any geographical variable (e.g., urban growth). It is an effective indicator to evaluate the patterns of urban growth, whether compact or dispersed. In the present study, RE has been [...] Read more.
Relative Entropy (RE) is defined as the measure of the degree of randomness of any geographical variable (e.g., urban growth). It is an effective indicator to evaluate the patterns of urban growth, whether compact or dispersed. In the present study, RE has been used to evaluate the urban growth of Dehradun city. Dehradun, the capital of Uttarakhand, is situated in the foothills of the Himalayas and has undergone rapid urbanization. Landsat satellite data for the years 2000, 2010 and 2019 have been used in the study. Built-up cover outside municipal limits and within municipal limits was classified for the given time period. The road network and city center of the study area were also delineated using satellite data. RE was calculated for the periods 2000–2010 and 2010–2019 with respect to the road network and city center. High values of RE indicate higher levels of urban sprawl, whereas lower values indicate compactness. The urban growth pattern over a period of 19 years was examined with the help of RE. Full article
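As a hedged illustration (the zone counts and the normalization by the logarithm of the number of zones follow a common convention and are not necessarily the exact formulation of the study), the relative entropy of built-up growth spread over n buffer zones can be sketched as:

import numpy as np

def relative_entropy(built_up_by_zone):
    # Relative entropy of built-up growth distributed over n buffer zones
    # around the roads or the city centre. Values near 1 indicate dispersed
    # growth (sprawl); values near 0 indicate compactness.
    a = np.asarray(built_up_by_zone, dtype=float)
    p = a / a.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(len(a))

# Hypothetical built-up increments (km^2) in five buffer zones for one period.
print(relative_entropy([4.2, 3.9, 3.5, 3.1, 2.8]))   # close to 1 -> sprawl
print(relative_entropy([9.0, 0.4, 0.2, 0.1, 0.05]))  # close to 0 -> compact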
Show Figures

Figure 1: Location map of the study area.
Figure 2: Methodological flowchart of the study.
Figure 3: Land cover (2000) of the study area.
Figure 4: Land cover (2010) of the study area.
Figure 5: Land cover (2019) of the study area.
Figure 6: Buffer zones along roads.
Figure 7: Buffer zones in the city center.
Figure 8: RE with regard to the roads and city core during 2000–2010 and 2010–2019 within municipal limits.
Figure 9: RE with regard to the roads and city core during 2000–2010 and 2010–2019 outside municipal limits.
10 pages, 345 KiB  
Proceeding Paper
Information Theoretic Objective Function for Genetic Software Clustering
by Habib Izadkhah and Mahjoubeh Tajgardan
Proceedings 2020, 46(1), 18; https://doi.org/10.3390/ecea-5-06681 - 17 Nov 2019
Cited by 5 | Viewed by 1323
Abstract
Software clustering is usually used for program comprehension. Since it is considered to be a crucial NP-complete problem, several genetic algorithms have been proposed to solve it. In the literature, there exist some objective functions (i.e., fitness functions) which are used [...] Read more.
Software clustering is usually used for program comprehension. Since it is considered to be a crucial NP-complete problem, several genetic algorithms have been proposed to solve it. In the literature, there exist some objective functions (i.e., fitness functions) which are used by genetic algorithms for clustering. These objective functions determine the quality of each clustering obtained in the evolutionary process of the genetic algorithm in terms of cohesion and coupling. The major drawbacks of these objective functions are the inability to (1) consider utility artifacts, and (2) apply to other software graphs such as the artifact feature dependency graph. To overcome the existing objective functions' limitations, this paper presents a new objective function. The new objective function is based on information theory, aiming to produce a clustering in which information loss is minimized. To apply the newly proposed objective function, we have developed a genetic algorithm aiming to maximize it. The proposed genetic algorithm, named ILOF, has been compared to some other well-known genetic algorithms. The results obtained confirm the high performance of the proposed algorithm on nine software systems. The performance achieved is quite satisfactory and promising for the tested benchmarks. Full article
Show Figures

Figure 1: A sample Artifact Dependency Graph (ADG).
Figure 2: An obtained clustering for Figure 1.
Figure 3: Another clustering for Figure 1.
Figure 4: The chromosome structure for a sample ADG.
Figure 5: Obtained clustering for a sample string S = 1122233.
11 pages, 3034 KiB  
Proceeding Paper
Quantifying Total Influence between Variables with Information Theoretic and Machine Learning Techniques
by Andrea Murari, Riccardo Rossi, Michele Lungaroni, Pasquale Gaudio and Michela Gelfusa
Proceedings 2020, 46(1), 19; https://doi.org/10.3390/ecea-5-06666 - 17 Nov 2019
Viewed by 1180
Abstract
The increasingly sophisticated investigations of complex systems require more robust estimates of the correlations between the measured quantities. The traditional Pearson Correlation Coefficient is easy to calculate but is sensitive only to linear correlations. The total influence between quantities is therefore often expressed [...] Read more.
The increasingly sophisticated investigations of complex systems require more robust estimates of the correlations between the measured quantities. The traditional Pearson Correlation Coefficient is easy to calculate but is sensitive only to linear correlations. The total influence between quantities is therefore often expressed in terms of the Mutual Information, which also takes nonlinear effects into account but is not normalised. To compare data from different experiments, the Information Quality Ratio is therefore often easier to interpret. On the other hand, both Mutual Information and Information Quality Ratio are always positive and therefore cannot provide information about the sign of the influence between quantities. Moreover, they require an accurate determination of the probability distribution functions of the variables involved. Since the quality and amount of data available are not always sufficient to guarantee an accurate estimation of the probability distribution functions, it has been investigated whether neural computational tools can help and complement the aforementioned indicators. Specific encoders and autoencoders have been developed for the task of determining the total correlation between quantities related by a functional dependence, including information about the sign of their mutual influence. Both their accuracy and computational efficiency have been addressed in detail, with extensive numerical tests using synthetic data. A careful analysis of the robustness against noise has also been performed. The neural computational tools typically outperform the traditional indicators in practically every respect. Full article
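For reference, one common way to normalize the Mutual Information is the Information Quality Ratio IQR = I(X;Y)/H(X,Y); the histogram-based sketch below (the bin count is an arbitrary choice of this sketch) illustrates how such an indicator picks up a nonlinear dependence that the Pearson coefficient misses, while remaining positive and therefore blind to the sign of the influence.

import numpy as np

def information_quality_ratio(x, y, bins=30):
    # Mutual information normalized by the joint entropy: IQR = I(X;Y) / H(X,Y).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    h_xy = -(pxy[nz] * np.log(pxy[nz])).sum()
    mi = (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
    return mi / h_xy

# Quadratic dependence: the IQR is clearly nonzero even when Pearson's r is small.
rng = np.random.default_rng(2)
x = rng.uniform(-10, 10, 100_000)
y = x**2 + rng.normal(0, 5, x.size)
print(information_quality_ratio(x, y), np.corrcoef(x, y)[0, 1])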
Show Figures

Figure 1: General topology of autoencoders.
Figure 2: Architecture of the autoencoders used in the present work, with the matrices that, multiplied, give W. The case shown in the figure is particularised for a latent space of dimension 2.
Figure 3: Comparison of correlation coefficients for the two examples described in the text. Case 1: no noise. Case 2: Gaussian noise with standard deviation 20% of the actual signal standard deviation.
Figure 4: Trend of the errors in the reconstruction of the input data with the dimensionality of the intermediate layer in the autoencoder.
Figure 5: Left: the PCC for a set of 10 variables correlated as specified in the text. Right: the correlation coefficients obtained with the proposed method of the autoencoders.
Figure 6: Trend of the off-diagonal term of the matrix Λ and the PCC versus the percentage of additive Gaussian noise. The noise intensity is calculated as the standard deviation of the noise divided by the standard deviation of the variable amplitude.
Figure 7: Top: two linearly dependent variables (left) and the relative local correlation coefficient ρ_int (right). Middle: two quadratically dependent variables (left) and the relative local correlation coefficient ρ_int (right). Bottom: two variables with a cubic negative dependence (left) and the relative local correlation coefficient ρ (right). The integral values of the correlation coefficient and of the monotonicity are reported in the inserts.
Figure 8: Comparison of ρ_int and the IQR for the negative cubic dependence (third case of Figure 7). The x axis reports the number of bins and N is the number of generated points used to calculate the indicators.
Figure 9: Effects of the noise amplitude on ρ_int for various choices of the number of bins. Left plot: the investigated dependence is y = x^2. Right plot: the investigated dependence is y = x^3. The independent variable x varies in the range [−10; 10] and the number of points is 10^5. The noise intensity is calculated as the standard deviation of the noise divided by the standard deviation of the variable amplitude.
13 pages, 472 KiB  
Proceeding Paper
Photochemical Dissipative Structuring of the Fundamental Molecules of Life
by Karo Michaelian
Proceedings 2020, 46(1), 20; https://doi.org/10.3390/ecea-5-06692 - 18 Nov 2019
Cited by 2 | Viewed by 1202
Abstract
It has been conjectured that the origin of the fundamental molecules of life, their proliferation over the surface of Earth, and their complexation through time, are examples of photochemical dissipative structuring, dissipative proliferation, and dissipative selection, respectively, arising out of the nonequilibrium conditions [...] Read more.
It has been conjectured that the origin of the fundamental molecules of life, their proliferation over the surface of Earth, and their complexation through time, are examples of photochemical dissipative structuring, dissipative proliferation, and dissipative selection, respectively, arising out of the nonequilibrium conditions created on Earth’s surface by the solar photon spectrum. Here I describe the nonequilibrium thermodynamics and the photochemical mechanisms involved in the synthesis and evolution of the fundamental molecules of life from simpler, more common precursor molecules under the long wavelength UVC and UVB solar photons prevailing at Earth’s surface during the Archean. Dissipative structuring through photochemical mechanisms leads to carbon based UVC pigments with peaked conical intersections which endow them with a large photon dissipative capacity (broad wavelength absorption and rapid radiationless de-excitation). Dissipative proliferation occurs when the photochemical dissipative structuring becomes autocatalytic. Dissipative selection arises when fluctuations lead the system to new stationary states (corresponding to different molecular concentration profiles) of greater dissipative capacity, as predicted by the universal evolution criterion of Classical Irreversible Thermodynamic theory established by Onsager, Glansdorff, and Prigogine. An example of the UV photochemical dissipative structuring, proliferation, and selection of the nucleobase adenine from an aqueous solution of HCN under UVC light is given. Full article
Show Figures

Figure 1: Maxima in the absorption of many of the fundamental molecules of life coincide with the predicted atmospheric window which existed from before the origin of life at approximately 3.85 Ga and until at least 2.9 Ga (curves black and red respectively). CO2 and probably some H2S were responsible for absorption at wavelengths shorter than ∼205 nm and atmospheric aldehydes (common photochemical products of CO2 and water) absorbed between approximately 285 and 310 nm [7]. Around 2.2 Ga (yellow curve), UVC light at Earth's surface was extinguished by oxygen and ozone resulting from organisms performing oxygenic photosynthesis. The green curve corresponds to the present surface spectrum. Energy fluxes are for the sun at the zenith. The font size of the letter is roughly proportional to the relative size of the molar extinction coefficient of the indicated fundamental molecule. Adapted from (Michaelian and Simeonov, 2015) [10].
Figure 2: The photochemical synthesis of adenine from 5 molecules of hydrogen cyanide in water is a dissipative structuring process which involves the absorption of at least three photons in the UVC and UVB regions of the Archean spectrum.
Figure 3: Concentration versus time for the various molecular components involved in the dissipative structuring of adenine. A stationary state is reached with a concentration of about 1.3 M for adenine after about 1.2 × 10^7 seconds. Arrival at the stationary state is seen for two different initial conditions of HCN, 1 M (solid lines) and 0.01 M (dotted lines).
Figure 4: The concentrations of the relevant components as a function of time for two different values of the diffusion constant for HCN. The case of no diffusion, or low diffusion, is given by the dotted lines and corresponds to the stationary state of Figure 3. For HCN diffusion dependent on the concentration of adenine (solid lines), a stationary state is reached but there is no production of adenine even though the concentration of HCN (solid blue line) is only slightly lower than the case of no diffusion (dotted blue line). Note that the production of trans-DAMN becomes greater than that of cis-DAMN for the stationary state of the higher concentration of HCN, but this is not the case for the stationary state of slightly lower concentration of HCN.
Figure 5: The concentrations of the relevant components as a function of time for a diffusion rate of HCN dependent on the concentration of adenine. An instantaneous perturbation of HCN to 1 M concentration is performed at 8 × 10^6 s but the system relaxes back to its original stationary state with practically zero adenine production.
Figure 6: The same as for Figure 5 except that the instantaneous perturbation of HCN is increased to 2 M. The resulting dynamics is now very different; the system relaxes to a new stationary state in which the production of adenine is very large.
Figure 7: The entropy production of the system for the case of Figure 6 in which an instantaneous fluctuation of HCN to 2 M shifts the system to a stationary state with large adenine production. The universal evolutionary criterion of Glansdorff and Prigogine suggests that nature is more likely to amplify those fluctuations at the bifurcation leading to greater entropy production for auto- or cross-catalytic systems.
8 pages, 4737 KiB  
Proceeding Paper
The Potential of L-Band UAVSAR Data for the Extraction of Mangrove Land Cover Using Entropy and Anisotropy Based Classification
by Ojasvi Saini, Ashutosh Bhardwaj and R. S. Chatterjee
Proceedings 2020, 46(1), 21; https://doi.org/10.3390/ecea-5-06673 - 17 Nov 2019
Cited by 1 | Viewed by 1399
Abstract
Mangrove forests serve as an ecosystem stabilizer since they play an important role in providing habitats for many terrestrial and aquatic species along with a huge capability of carbon sequestration and absorbing greenhouse gases. The process of conversion of carbon dioxide into biomass [...] Read more.
Mangrove forests serve as an ecosystem stabilizer since they play an important role in providing habitats for many terrestrial and aquatic species along with a huge capability of carbon sequestration and absorbing greenhouse gases. The process of conversion of carbon dioxide into biomass is very rapid in mangrove forests. Mangroves play a crucial role in protecting human settlements and arresting shoreline erosion by reducing wave height to a great extent, as they form a natural barricade against high sea tides and windstorms. In most cases, human settlement in the vicinity of mangrove forests has affected the ecosystem of the forests and placed them under environmental pressure. Since continuous mapping, monitoring, and preservation of coastal mangroves may help in climate resilience, a mangrove land cover extraction method using remotely sensed L-band full-pol UAVSAR data (acquired on 25 February 2016) based on Entropy (H) and Anisotropy (A) concepts is proposed in this study. k-Mean clustering has been applied to the subsetted (1 − Entropy) * (Anisotropy) image generated by the PolSARpro_v5.0 software's H/A/Alpha decomposition. The mangrove land cover of the study area was extracted to be 116.07 km2 using k-Mean clustering and validated with the mangrove land cover area provided by Global Mangrove Watch (GMW) data. Full article
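A minimal sketch of the clustering step, assuming the entropy and anisotropy images have already been exported from the H/A/Alpha decomposition as arrays (the file names, the number of clusters, and the use of scikit-learn are assumptions of this sketch, not details from the paper):

import numpy as np
from sklearn.cluster import KMeans

# entropy_img and anisotropy_img are assumed to be 2-D arrays with values in [0, 1];
# the .npy file names are placeholders.
entropy_img = np.load("entropy.npy")
anisotropy_img = np.load("anisotropy.npy")

feature = (1.0 - entropy_img) * anisotropy_img          # the (1 - Entropy) * (Anisotropy) image
labels = (
    KMeans(n_clusters=4, n_init=10, random_state=0)
    .fit_predict(feature.reshape(-1, 1))
    .reshape(feature.shape)
)
# The cluster whose pixels correspond to mangrove canopy is then selected and
# its pixel count converted to an area estimate for validation against GMW data.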
Show Figures

Figure 1: The geographical location of the study area shown with Pauli decomposed RGB UAVSAR data.
Figure 2: Flow chart for methodology adopted.
Figure 3: k-Mean Clustering result for the extraction of mangrove land cover using Entropy and Anisotropy.
Figure 4: k-Mean Clustering result with overlaid Global Mangrove Watch (GMW) shapefile for the mangrove land cover.
Figure 5: The intersection area between the mangrove cover area provided by GMW and the extracted mangrove land cover using the tested methodology.
7 pages, 436 KiB  
Proceeding Paper
On the Relationship between City Mobility and Blocks Uniformity
by Eric K. Tokuda, Cesar H. Comin, Roberto M. Cesar and Luciano da F. Costa
Proceedings 2020, 46(1), 22; https://doi.org/10.3390/ecea-5-06669 - 17 Nov 2019
Viewed by 1362
Abstract
The spatial organization and the topological organization of cities have a great influence on the lives of their inhabitants, including mobility efficiency. Entropy has often been adopted for the characterization of diverse natural and human-made systems and structures. In this work, we apply [...] Read more.
The spatial organization and the topological organization of cities have a great influence on the lives of their inhabitants, including mobility efficiency. Entropy has often been adopted for the characterization of diverse natural and human-made systems and structures. In this work, we apply the exponential of entropy (evenness) to characterize the uniformity of city blocks. It is suggested that this measurement is related to several properties of real cities, such as mobility. We consider several real-world cities, from which the logarithm of the average shortest path length is also calculated and compared with the evenness of the city blocks. Several interesting results have been found, as discussed in the article. Full article
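A minimal sketch of the evenness measure, assuming it is the exponential of the Shannon entropy of the block-area shares normalized by the number of blocks, so that equally sized blocks give a value of one (the normalization is an assumption consistent with the figure captions below):

import numpy as np

def evenness(block_areas):
    # Exponential of the Shannon entropy of the block-area shares, divided by
    # the number of blocks; uniform blocks give 1, uneven blocks give less.
    a = np.asarray(block_areas, dtype=float)
    p = a / a.sum()
    p = p[p > 0]
    h = -(p * np.log(p)).sum()
    return np.exp(h) / len(a)

print(evenness([1, 1, 1, 1]))          # 1.0, entropy = ln 4, about 1.39
print(evenness([2.5, 0.9, 0.4, 0.2]))  # < 1, uneven blocks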
Show Figures

Figure 1: Diagram illustrating the main stages of the proposed method.
Figure 2: Graphs of the streets of three different cities in California, Atherton, El Segundo and Firebaugh, respectively. (a) High variation of block shapes and sizes; (b) block formats and sizes are more even; (c) another type of city structure. Not drawn to scale.
Figure 3: Identification of the blocks of the graphs of Monterey Park, CA. Blocks randomly colored for visualization purposes.
Figure 4: Illustrative example to explain the concept of entropy of areas in one dimension. In (a), partitions have the same size, and the entropy assumes its maximum value of 1.39 for that number of partitions, with maximum evenness of one. In (b), the same segment is partitioned into the same number of parts but with uneven sizes, and the same measure has a value of 1.28.
Figure 5: Histogram of the evenness of the block areas for all the considered cities.
Figure 6: Evenness of the block sizes by the average shortest path length. Higher values of the evenness of block areas correspond to lower values of shortest path lengths.
Figure 7: Coefficient of variation of the shortest path lengths in terms of the evenness of the block areas. The higher the evenness of the block areas, the smaller the variation of the shortest path lengths.
7 pages, 1534 KiB  
Proceeding Paper
A New Perspective on the Kauzmann Entropy Paradox: A Crystal/Glass Critical Point in Four- and Three-Dimensions
by Caroline S. Gorham and David E. Laughlin
Proceedings 2020, 46(1), 23; https://doi.org/10.3390/ecea-5-06677 - 17 Nov 2019
Viewed by 1441
Abstract
In this article, a new perspective on the Kauzmann point is presented. The “ideal glass transition” that occurs at the Kauzmann temperature is the point at which the configurational entropy of an undercooled metastable liquid equals that of its crystalline counterpart. We model [...] Read more.
In this article, a new perspective on the Kauzmann point is presented. The “ideal glass transition” that occurs at the Kauzmann temperature is the point at which the configurational entropy of an undercooled metastable liquid equals that of its crystalline counterpart. We model solidifying liquids by using a quaternion orientational order parameter and find that the Kauzmann point is a critical point that exists to separate crystalline and non-crystalline solid states. We identify the Kauzmann point as a first-order critical point, and suggest that it belongs to quaternion ordered systems that exist in four or three dimensions. This “Kauzmann critical point” can be considered to be a higher-dimensional analogue to the superfluid-to-Mott insulator quantum phase transition that occurs in two- and one-dimensional complex ordered systems. Such critical points are driven by tuning a non-thermal frustration parameter, and result from a characteristic softening of a ‘Higgs’-type mode that corresponds to amplitude fluctuations of the order parameter. The first-order nature of the finite temperature Kauzmann critical point is a consequence of the discrete change of the topology of the ground state manifold of the quaternion order parameter field that applies to crystalline and non-crystalline solids. Full article
Show Figures

Figure 1: (A) The ‘Mexican hat’ free energy function of complex ordered systems whose order parameter has the form Ψ = |Ψ|e^(iθ). In phase-coherent superfluid phases, |Ψ| > 0 and massless Nambu–Goldstone and massive Higgs modes arise. (B) On approaching the superfluid/Mott insulator QPT in two and one dimensions [9], the free energy function transforms to one with a minimum at |Ψ| = 0 at a critical value of frustration (g_C). [Reproduced from Ref. [9]].
Figure 2: Classical 2D/1D O(2) rotor model. (A) An abundance of misorientational fluctuations develops below the bulk Bose–Einstein condensation temperature (T_BEC), and may be discretized as a plasma of isolated point defects and anti-defects. (B) As the temperature is lowered below the Kosterlitz–Thouless transition temperature (T_KT), complementary defects/anti-defects begin to form bound pairs. (C) As the temperature approaches 0 K, defects and anti-defects that comprise bound states come together and annihilate. In the absence of frustration, no signed defects persist to the ground state, which is perfectly phase-coherent.
Figure 3: (A) Complex ordered systems (N = 2) that exist in 2D/1D are mathematically described using O(2) quantum rotor models, and admit a second-order quantum critical point at absolute zero temperature [16] that is known as the superfluid/Mott-insulator QPT [9]. (B) Solidification in four and three dimensions, as characterized by a quaternion orientational order parameter (N = 4), may be described using O(4) quantum rotor models. Such O(4) quantum rotor models admit a critical point that is first-order, which may be identified with the “ideal glass transition” that occurs at a finite Kauzmann [5] temperature.
Figure 4: Geometrically-frustrated crystalline structures are topologically close-packed, and are stabilized in the ground state by the inclusion of a periodic arrangement of frustration-induced topological defects. Ordered arrangements of negative wedge disclinations [27,28] are known as “major skeleton networks.” Signed third homotopy group defects also form a periodic arrangement in geometrically-frustrated crystalline solid ground states, but are not visible as a consequence of their nature as points in four dimensions. [Reproduced from Ref. [33]].
7 pages, 369 KiB  
Proceeding Paper
Identifying Systemic Risks and Policy-Induced Shocks in Stock Markets by Relative Entropy
by Feiyan Liu, Jianbo Gao and Yunfei Hou
Proceedings 2020, 46(1), 24; https://doi.org/10.3390/ecea-5-06689 - 17 Nov 2019
Viewed by 1330
Abstract
Systemic risks have to be vigilantly guarded against at all times in order to prevent their contagion across stock markets. New policies may also not work as desired and may even induce shocks to markets, especially emerging ones. Therefore, timely detection of systemic [...] Read more.
Systemic risks have to be vigilantly guarded against at all times in order to prevent their contagion across stock markets. New policies may also not work as desired and may even induce shocks to markets, especially emerging ones. Therefore, timely detection of systemic risks and policy-induced shocks is crucial to safeguard the health of stock markets. In this paper, we show that the relative entropy or Kullback–Leibler divergence can be used to identify systemic risks and policy-induced shocks in stock markets. Concretely, we analyzed the minutely data of two stock indices, the Dow Jones Industrial Average (DJIA) and the Shanghai Stock Exchange (SSE) Composite Index, and examined the temporal variation of relative entropy for them. We show that clustered peaks in relative entropy curves can accurately identify the timing of the 2007–2008 global financial crisis and its precursors, and the 2015 stock crashes in China. Moreover, the sharpest needle-like peaks in relative entropy curves, especially for the SSE market, always served as precursors of an unusual market, a strong bull market or a bubble, thus possessing a certain forewarning ability. Full article
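As a hedged sketch (the binning and smoothing choices are ours, not the paper's), the relative entropy between one day's distribution of minutely returns and a long-term reference distribution can be estimated as follows; clustered peaks in the resulting daily series are the signatures discussed in the abstract.

import numpy as np

def relative_entropy_kl(day_returns, reference_returns, bins=None):
    # Kullback-Leibler divergence D(P || Q) between the distribution P of one
    # day's minutely returns and a long-term reference distribution Q.
    if bins is None:
        bins = np.linspace(-0.01, 0.01, 81)           # illustrative binning, an assumption
    p, _ = np.histogram(day_returns, bins=bins)
    q, _ = np.histogram(reference_returns, bins=bins)
    p = (p + 1e-12) / (p + 1e-12).sum()               # smoothing avoids division by zero
    q = (q + 1e-12) / (q + 1e-12).sum()
    return (p * np.log(p / q)).sum()

# Computed day by day, peaks in this quantity flag days whose return
# distribution deviates strongly from the long-term reference.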
Show Figures

Figure 1: The distributions of minutely returns for (a) DJIA and (b) SSE Composite Index over long-term.
Figure 2: The deviations of the distributions of minutely returns from their references on two selected days for DJIA (a,b) and SSE Composite Index (c,d).
Figure 3: Relative entropy (blue) vs re-scaled DJIA (red).
Figure 4: Relative entropy (blue) vs re-scaled SSE Composite Index (red).
6 pages, 663 KiB  
Proceeding Paper
Nonequilibrium Thermodynamics and Entropy Production in Simulation of Electrical Tree Growth
by Adrián César Razzitte, Luciano Enciso, Marcelo Gun and María Sol Ruiz
Proceedings 2020, 46(1), 25; https://doi.org/10.3390/ecea-5-06683 - 17 Nov 2019
Viewed by 1217
Abstract
In the present work we applied the nonequilibrium thermodynamic theory in the analysis of the dielectric breakdown (DB) process. As the tree channel front moves, the intense field near the front moves electrons and ions irreversibly in the region beyond the tree channel [...] Read more.
In the present work we applied the nonequilibrium thermodynamic theory in the analysis of the dielectric breakdown (DB) process. As the tree channel front moves, the intense field near the front moves electrons and ions irreversibly in the region beyond the tree channel tips where electromechanical, thermal and chemical effects cause irreversible damage and, from the nonequilibrium thermodynamic viewpoint, entropy production. From the nonequilibrium thermodynamics analysis, the entropy production is due to the product of fluxes Ji and conjugated forces Xi: σ = Σi JiXi ≥ 0. We consider that the coupling between fluxes can describe the dielectric breakdown in solids as a phenomenon of transport of heat, mass and electric charge. Full article
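For reference, with linear (Onsager-type) coupling between the fluxes the entropy production quoted above takes the bilinear form

\[ J_i = \sum_k L_{ik} X_k, \qquad \sigma = \sum_i J_i X_i = \sum_{i,k} L_{ik} X_i X_k \ge 0, \]

where the phenomenological coefficients L_ik (generic here) couple the different fluxes; it is these cross terms that allow dielectric breakdown to be treated as a coupled transport problem of heat, mass, and electric charge.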
Show Figures

Figure 1: Potential and electrical field of grid with tip in potential electrode. Inset corresponds to a magnified area at the tip, on the side of the potential electrode.
Figure 2: Increase of the electrical field in arbitrary units as a function of η.
Figure 3: Number of branches of the tree as a function of η.
7 pages, 806 KiB  
Proceeding Paper
Quantum Genetic Terrain Algorithm (Q-GTA): A Technique to Study the Evolution of the Earth Using Quantum Genetic Algorithm
by Pranjal Sharma, Ankit Agarwal and Bhawna Chaudhary
Proceedings 2020, 46(1), 26; https://doi.org/10.3390/ecea-5-06685 - 17 Nov 2019
Viewed by 1300
Abstract
In recent years, geologists have put in a lot of effort trying to study the evolution of Earth using different techniques studying rocks, gases, and water at different channels like mantle, lithosphere, and atmosphere. Some of the methods include estimation of heat flux [...] Read more.
In recent years, geologists have put in a lot of effort trying to study the evolution of Earth using different techniques studying rocks, gases, and water at different channels like mantle, lithosphere, and atmosphere. Some of the methods include estimation of heat flux between the atmosphere and sea ice, modeling global temperature changes, and groundwater monitoring networks. That being said, algorithms involving the study of Earth's evolution have been a debated topic for decades. In addition, there is distinct research on the mantle, lithosphere, and atmosphere using isotopic fractionation, which this paper takes into consideration to form genes at the former stage. This factor of isotopic fractionation could be molded into the QGA to study the Earth's evolution. We combined these factors because the gases containing these isotopes move from the mantle to the lithosphere or atmosphere through gaps or volcanic eruptions, contributing to it. We are likely to use the Rb/Sr and Sm/Nd ratios to study the evolution of these channels. This paper, in general, provides the idea of gathering some information about temperature changes by using isotopic ratios as chromosomes; in the QGA, the chromosomes depict the characteristics of a generation. Here, these ratios depict the temperature characteristic, and the other steps of the QGA would be molded to study these ratios in the form of temperature changes, which would further signify the evolution of Earth, based on the finding that temperature changes with the change in isotopic ratios. This paper will collect these distinct studies and embed them into an upgraded quantum genetic algorithm called the Quantum Genetic Terrain Algorithm, or Quantum GTA. Full article
Show Figures

Figure 1: Proposed flow chart for Quantum GTA.
Figure 2: Steps for calculating the Mutation Parameter. Gen 1—starting generation; Gen 2—second generation from start. Anchor value—here we simply use the difference as the anchor value; it is a value that quantifies the difference between the two generations, and different functions can be used to calculate it. Expected mutation—simply the next expected value, obtained by subtracting the anchor value from the next-generation value. Fitness value—here we calculated a simple percentage error; the fitness function can be changed to calculate a more accurate value.
7 pages, 388 KiB  
Proceeding Paper
Systematic Coarse-Grained Models for Molecular Systems Using Entropy
by Evangelia Kalligiannaki, Vagelis Harmandaris and Markos Katsoulakis
Proceedings 2020, 46(1), 27; https://doi.org/10.3390/ecea-5-06710 - 22 Nov 2019
Viewed by 1552
Abstract
The development of systematic coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of different methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. We focus [...] Read more.
The development of systematic coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of different methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. We focus on methods based on information theory, such as relative entropy, showing that they provide parameterizations of coarse-grained models at equilibrium by minimizing a fitting functional over a parameter space. We also connect them with structural-based (inverse Boltzmann) and force matching methods. All the methods mentioned are, in principle, employed to approximate a many-body potential, the (n-body) potential of mean force, describing the equilibrium distribution of coarse-grained sites observed in simulations of atomically detailed models. We also present in a mathematically consistent way the entropy and force matching methods and their equivalence, which we derive for general nonlinear coarse-graining maps. We apply, and compare, the above-described methodologies to several molecular systems: a simple fluid (methane), water, and a polymer (polyethylene) bulk system. Finally, for the latter we also provide reliable confidence intervals using a statistical analysis resampling technique, the bootstrap method. Full article
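As a minimal sketch of the resampling step mentioned at the end of the abstract (the statistic, sample size, and confidence level are placeholders, not the paper's settings), a percentile bootstrap confidence interval can be obtained as follows:

import numpy as np

def bootstrap_ci(samples, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap confidence interval for a statistic of the
    # observations (for instance, a force-matching estimate at one spline knot).
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    boot = np.array([
        stat(rng.choice(samples, size=samples.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# 95% interval from 300 hypothetical observations, as in the polyethylene example.
obs = np.random.default_rng(3).normal(1.2, 0.3, 300)
print(bootstrap_ci(obs))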
Show Figures

Figure 1: Methane: The effective pair potential for a one-site methane melt, derived with the FM with cubic splines, and the IBI methods, at T = 80 K.
Figure 2: Methane: The FM and the path-space force matching (PSFM) methods at equilibrium. (a) The FM pair force with linear and cubic B-splines, and Lennard–Jones parametrizations. (b) The PSFM reproduces the FM method [16].
Figure 3: Water: RE derived potential reproduces well the target pair correlation.
Figure 4: Polyethylene: The FM potential for linear B-splines, for a set of 5000 observations, and the 95% Bootstrap Confidence interval for the FM potential, with a set of 300 observations.
7 pages, 465 KiB  
Proceeding Paper
The New Method Using Shannon Entropy to Decide the Power Exponents on JMAK Equation
by Hirokazu Maruoka
Proceedings 2020, 46(1), 28; https://doi.org/10.3390/ecea-5-06660 - 17 Nov 2019
Cited by 1 | Viewed by 1290
Abstract
The JMAK (Johnson–Mehl–Avrami–Kolmogorov) equation is an exponential equation with power-law behavior inserted in its parameter, and is widely utilized to describe the relaxation process, the nucleation process, the deformation of materials and so on. Theoretically the power exponent is occasionally associated with the geometrical factor [...] Read more.
The JMAK (Johnson–Mehl–Avrami–Kolmogorov) equation is an exponential equation with power-law behavior inserted in its parameter, and is widely utilized to describe the relaxation process, the nucleation process, the deformation of materials and so on. Theoretically the power exponent is occasionally associated with the geometrical factor of the nucleus, which gives an integral power exponent. However, non-integral power exponents occasionally appear and are sometimes considered phenomenological in experiments. On the other hand, the power exponent determines the distribution of step times when the equation is considered as a superposition of step functions. This work intends to extend the interpretation of the power exponent by a new method associating the Shannon entropy of the distribution of step times with the method of Lagrange multipliers, in which cumulants or moments obtained from the distribution function are preserved. This method determines the distribution of step times through the power exponent while certain statistical values are held fixed. The Shannon entropy with the second cumulant introduced gives fractional power exponents and a symmetrical distribution function that can be compared with the experimental results. Various power exponents in which other statistical values are fixed are discussed with physical interpretation. This work gives new insight into the JMAK function and the method of Shannon entropy in general. Full article
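For orientation, writing the JMAK equation as a superposition of unit step functions makes the distribution of step times explicit; the parametrization below, with rate k and exponent β, is a sketch in our notation and not necessarily that of the paper:

\[ X(t) = 1 - e^{-(kt)^{\beta}} = \int_0^{\infty} \Theta(t-\tau)\, g(\tau)\, d\tau, \qquad g(\tau) = \frac{dX}{d\tau} = \beta k (k\tau)^{\beta-1} e^{-(k\tau)^{\beta}}, \]

where Θ is the unit step function and g(τ) is the distribution of step times whose Shannon entropy is then maximized under fixed cumulants or moments in the approach described above.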
Show Figures

Figure 1: (Color online) (a) Dependence of H with introduced variance on β. (b) Comparison of the distribution of step times for different β values (β = 0.5, 1.0, 1.5, 3.76, 10). σ^2 = 100, C = 1.
Figure 2: (Color online) (a) Dependence of the Shannon entropy with fixed skewness on β. (b) Comparison of the distribution of step times τ for β = 0.5, 1.2, 3.76. c^3 = 100, C = 1.
8 pages, 663 KiB  
Proceeding Paper
Exposing Face-Swap Images Based on Deep Learning and ELA Detection
by Weiguo Zhang and Chenggang Zhao
Proceedings 2020, 46(1), 29; https://doi.org/10.3390/ecea-5-06684 - 17 Nov 2019
Cited by 9 | Viewed by 2888
Abstract
New developments in artificial intelligence (AI) have significantly improved the quality and efficiency in generating fake face images; for example, the face manipulations by DeepFake are so realistic that it is difficult to distinguish their authenticity—either automatically or by humans. In order to [...] Read more.
New developments in artificial intelligence (AI) have significantly improved the quality and efficiency in generating fake face images; for example, the face manipulations by DeepFake are so realistic that it is difficult to distinguish their authenticity—either automatically or by humans. In order to enhance the efficiency of distinguishing facial images generated by AI from real facial images, a novel model has been developed based on deep learning and error level analysis (ELA) detection, which is related to entropy and information theory, such as the cross-entropy loss function in the final Softmax layer, normalized mutual information in image preprocessing, and some applications of an encoder based on information theory. Due to the limitations of computing resources and production time, the DeepFake algorithm can only generate limited resolutions, resulting in two different image compression ratios between the fake face area as the foreground and the original area as the background, which leaves distinctive artifacts. By using the error level analysis detection method, we can detect the presence or absence of different image compression ratios and then use a convolutional neural network (CNN) to detect whether the image is fake. Experiments show that the training efficiency of the CNN model can be significantly improved by using the ELA method, and the detection accuracy can reach more than 97% based on the CNN architecture of this method. Compared to the state-of-the-art models, the proposed model has advantages such as fewer layers, shorter training time, and higher efficiency. Full article
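A minimal sketch of the ELA preprocessing step, assuming a JPEG re-save quality of 90 and using Pillow (the file names are placeholders and the paper's exact settings may differ):

import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    # Re-save the image as JPEG at a fixed quality and amplify the per-pixel
    # difference; regions that were compressed differently (e.g., a spliced
    # face) stand out in the resulting map.
    original = Image.open(path).convert("RGB")
    original.save("_ela_tmp.jpg", "JPEG", quality=quality)
    resaved = Image.open("_ela_tmp.jpg")
    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()                      # per-band (min, max)
    scale = 255.0 / max(1, max(mx for _, mx in extrema))
    return np.asarray(diff, dtype=float) * scale     # ELA map fed to the CNN

ela_map = error_level_analysis("face.jpg")           # placeholder file name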
Show Figures

Figure 1: Overview of the DeepFake production pipeline. (a) The source image. (b) The green box is the detected face area. (c) Blue points are the face landmarks. (d) Calculate the transform matrix to warp the face region in (e) to the normalized region (f). (g) The face image generated by the neural network is used to cover the source image (a). (h) Synthesized face warped back using the same transform matrix. (i) Post-processing, such as applying boundary smoothing to the composite image. (j) The final synthesized image.
Figure 2: Overview of the process of generating a negative example. (a) The original image. Use dlib to extract the face in (a) and align the face at different scales as in (b). We randomly select a scale of face in (b) and apply Gaussian blur as in (c), and after the affine transform, cover the original image to generate a negative example.
Figure 3: Samples of the original image and tampered images and their error level analysis (ELA) results: The first line shows the original image and its ELA image. It can be seen that the compression ratio of the whole image remains the same. The second line shows the tampered image and its ELA image. It can be seen that the compression ratios between the tampered face as the foreground and the original image as the background are quite different.
Figure 4: Experimental result images. (a) The accuracy curve and the loss function curve; (b) the confusion matrix of the verification data.
774 KiB  
Proceeding Paper
Interpreting the High Energy Consumption of the Brain at Rest
by Alejandro Chinea Manrique de Lara
Proceedings 2020, 46(1), 30; https://doi.org/10.3390/ecea-5-06694 - 18 Nov 2019
Cited by 1 | Viewed by 1719
Abstract
The notion that the brain has a resting state mode of functioning has received increasing attention in recent years. The idea derives from experimental observations that showed a relatively spatially and temporally uniform high level of neuronal activity when no explicit task was [...] Read more.
The notion that the brain has a resting state mode of functioning has received increasing attention in recent years. The idea derives from experimental observations that showed a relatively spatially and temporally uniform high level of neuronal activity when no explicit task was being performed. Surprisingly, the total energy consumption supporting neuronal firing in this conscious awake baseline state is orders of magnitude larger than the energy changes during stimulation. This paper presents a novel and counterintuitive explanation of the high energy consumption of the brain at rest, obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge, as well as the basis for the efficiency of the brain's computations. The high energy consumption of the brain at rest is shown to be the result of the energetics associated with the most probable state of a statistical physics model aimed at capturing the behavior of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands. Full article
Show Figures

Figure 1: (a) The cognitive architecture (top part of the figure): Example of symbolic structure used to capture some of the information-processing characteristics of the brain's cognitive functions. The most important aspects of the intelligence and embodiment hypothesis [12,13] are synthesized in a space of cognitive architecture complexity that is defined by the average number of information-processing levels and the average connectivity between symbols. The nodes of the structure represent symbols, where the symbols act as the embodiment of brain representations. The average number of information-processing levels is hypothesized to be a function of the density of neurons and the number of physical laminae (or nuclei) that are present in the cerebral structures postulated by the hypothesis as mainly responsible for intelligence in macroscopic moving animals. In this example, the structure comprises three information-processing levels (depth of the structure) where the connectivity associated with each level is n1 = 2, n2 = 3, and n3 = 4, respectively. The circular arrows represent the combinatorial operations (movement primitives) common to the brain's cognitive functions, which are hypothesized to have been conserved through the evolutionary process. Each subtree of the structure is modeled in terms of multi-state variables. The connectivity between brain representations (i.e., the branching factor) determines the number of states of the multi-state variables. Similarly, the number of nodes belonging to each of the levels determines the number of subtrees (i.e., multi-state variables) belonging to each of the levels comprising the structure. (b) The energy model (bottom left and right part of the figure): The knowledge that computations performed by nervous systems are limited by power consumption is explicitly embedded within the energy model. Moving from one symbol permutation configuration to another has an associated energy cost that is proportional to the distance from a resting state configuration represented by the fixed-point permutation. It is important to remember that in Statistical Physics the Boltzmann–Gibbs distribution tells us that physical systems prefer to visit low energy states more than high energy states. The tables show the mapping between symbol configurations and energy states for permutations of size two and three, respectively.
Figure 2: (a) Functional aspects associated with the concept of movement primitives. The figure shows a very simple structure of four nodes where the upper level symbol denoted as a1 (the node depicted in yellow) is expressed as a combination of the lower level symbols b1, b2, and b3 (nodes depicted in red, cyan, and green, respectively). The nodes b1, b2, and b3 are modulated by the higher-order symbol a1. After one permutation operation carried out over the symbols b2 and b3, the symbol a2 (the node depicted in violet) becomes activated. In other words, the node a1 acts as a driver-like node [18] in complex networks terminology. (b) How brain representations are structured according to the Intelligence and Embodiment hypothesis [12,13]. At the bottom of each symbolic structure a functionally equivalent representation is shown using square brackets to indicate that the content of higher-order representations (e.g., a semantic representation) includes lower-order representations (e.g., sensorimotor representations). In this case, the driver-like nodes are those situated at the left of each opening square bracket.
6 pages, 421 KiB  
Proceeding Paper
Quantum Gravity Strategy for the Production of Dark Matter Using Cavitation by Minimum Entropy
by Edward Jiménez and Esteban E. Jimenez
Proceedings 2020, 46(1), 31; https://doi.org/10.3390/ecea-5-06664 - 17 Nov 2019
Viewed by 1374
Abstract
The minimum entropy is responsible for the formation of dark matter bubbles in a black hole, while the variation in the density of dark matter allows these bubbles to leave the event horizon. Some experimental evidence supports the dark matter production model in [...] Read more.
The minimum entropy is responsible for the formation of dark matter bubbles in a black hole, while the variation in the density of dark matter allows these bubbles to leave the event horizon. Some experimental evidence supports the dark matter production model in the inner vicinity of the border of a black hole. The principle of minimum entropy explains how cavitation occurs on the event horizon, which in turn complies with the Navier–Stokes 3D equations. Moreover, current works show in an axiomatic way that on the event horizon Einstein's equations are equivalent to the Navier–Stokes equations. Thus, the solutions of Einstein's equations combined with the boundary conditions establish a one-to-one correspondence with solutions of the incompressible Navier–Stokes equations, and in the near-horizon limit this provides a precise mathematical sense in which horizons are always incompressible fluids. It is also essential to understand that cavitation by minimum entropy is the production of dark matter bubbles by variation of the pressure inside or on the horizon of a black hole, in general Δp = p_(n+1) − p_n = (ρ_n/ρ_(n+1) − 1) p_n, or in particular Δp = −(1 − P) p_0, where ∂P/∂t = −p λ_0 P. Finally, fluctuations in the density of dark matter can facilitate its escape from a black hole, if and only if there is previously dark matter produced by cavitation inside or on the horizon of a black hole and also ρ_DM < ρ_B. Full article
Show Figures

Figure 1: Dark matter production inside a black hole.