Information and Entropy in Biological Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Entropy and Biology".

Deadline for manuscript submissions: closed (20 December 2015) | Viewed by 59849

Special Issue Editors


Prof. Dr. John Baez, Guest Editor
Department of Mathematics, University of California, Riverside, CA 92521, USA
Interests: information theory, network theory, and mathematical physics

Prof. Dr. John Harte, Guest Editor
1. Energy and Resources Group, and Department of Environmental Science, Policy & Management, University of California at Berkeley, Berkeley, CA 94720, USA
2. The Santa Fe Institute, Santa Fe, NM 87131, USA
Interests: climate–ecosystem linkages; understanding patterns in the distribution and abundance of species across spatial scales and across habitats and taxonomic groups; using MaxEnt to develop unified and parsimonious theory of the distribution, abundance, and energetics of species in both static and dynamic ecosystems

Dr. Marc Harper, Guest Editor
Covariant Consulting, IL, USA
Interests: mathematical biology, bioinformatics, information theory, educational technology

Special Issue Information

Dear Colleagues,

Please visit http://www.nimbios.org/workshops/WS_entropy for a detailed description of this Special Issue.

Prof. Dr. John Baez
Dr. Marc Harper
Prof. Dr. John Harte
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


Article
Stationary Stability for Evolutionary Dynamics in Finite Populations
by Marc Harper and Dashiell Fryer
Entropy 2016, 18(9), 316; https://doi.org/10.3390/e18090316 - 25 Aug 2016
Cited by 10 | Viewed by 7007
Abstract
We demonstrate a vast expansion of the theory of evolutionary stability to finite populations with mutation, connecting the theory of the stationary distribution of the Moran process with the Lyapunov theory of evolutionary stability. We define the notion of stationary stability for the Moran process with mutation and generalizations, as well as a generalized notion of evolutionary stability that includes mutation called an incentive stable state (ISS) candidate. For sufficiently large populations, extrema of the stationary distribution are ISS candidates and we give a family of Lyapunov quantities that are locally minimized at the stationary extrema and at ISS candidates. In various examples, including for the Moran and Wright–Fisher processes, we show that the local maxima of the stationary distribution capture the traditionally-defined evolutionarily stable states. The classical stability theory of the replicator dynamic is recovered in the large population limit. Finally, we include descriptions of possible extensions to populations of variable size and populations evolving on graphs.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
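The stationary-distribution machinery in this abstract is easy to explore numerically. The sketch below, a rough illustration rather than the authors' code, builds the two-type Moran process with mutation as a birth-death chain, using parameters borrowed from the paper's Figure 4 (game matrix a = 1 = d, b = 2, c = 3, N = 100, μ = 1/1000), and recovers its stationary distribution from the standard birth-death recursion.

```python
import numpy as np

def moran_stationary(N=100, mu=1e-3, a=1.0, b=2.0, c=3.0, d=1.0):
    """Stationary distribution of a two-type Moran process with mutation.

    State i = number of type-A individuals. The chain is birth-death, so
    the stationary distribution follows from the recursion
    pi[i+1] = pi[i] * T_up(i) / T_down(i+1).
    """
    def rates(i):
        # payoff-proportional fitnesses (self-interaction excluded)
        fA = (a * (i - 1) + b * (N - i)) / (N - 1)
        fB = (c * i + d * (N - i - 1)) / (N - 1)
        wA, wB = i * fA, (N - i) * fB
        tot = wA + wB
        # reproducing parent chosen fitness-proportionally; offspring
        # mutates with probability mu and replaces a random individual
        up = (wA * (1 - mu) + wB * mu) / tot * (N - i) / N
        down = (wB * (1 - mu) + wA * mu) / tot * i / N
        return up, down

    pi = np.empty(N + 1)
    pi[0] = 1.0
    for i in range(N):
        up, _ = rates(i)
        _, down = rates(i + 1)
        pi[i + 1] = pi[i] * up / down
    return pi / pi.sum()

pi = moran_stationary()
print(int(np.argmax(pi)))
```

For these parameters the interior equilibrium of the game sits at an A-fraction of 1/3, so the peak of the stationary distribution should land near i = 33, in line with the ESS (33, 67) described in the paper's Figure 4.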
Figure 1. (a) Vector field for the replicator dynamic with Fermi selection, with fitness landscape defined by game matrix 7 in Bomze's classification; made with Dynamo [23]. (b) Stationary distribution of the Moran process with Fermi selection (N = 60, β = 0.1, μ = 3/(2N)), which is locally maximal at the interior stable rest point. (c) Euclidean distance between each population state and the expected next state; local minima correspond to rest points of the vector field. (d) Relative entropy of each state and the expected next state. For the heatmaps the boundary has not been plotted, to make the interior more visible.

Figure 2. (a–c) Relative entropy D_1(a) of the expected next state with the current state for Fermi incentives (q = 1, β = 1) for the incentive process with μ = (3/2)(1/N), for game matrices 2, 20 and 47 in Bomze's classification. (d–f) Stationary distributions for the incentive process for the same parameters.

Figure 3. Stationary distributions for the rock-scissors-paper game for N = 60 with the Fermi incentive, β = 1, and (2/3)μ = 1/√N (a), 1/N (b), 1/N^(3/2) (c).

Figure 4. Demonstration of Theorem 1 for the Moran process (replicator incentive) with game matrix a = 1 = d, b = 2, c = 3, population size N = 100 and μ = 1/1000, which has an ESS at approximately (33, 67). (a) Transition probabilities T_i^(i+1) (blue) and T_i^(i−1) (green). (b) Relative entropy D(ā). (c) Stationary distribution.

Figure 5. (a) Stationary distribution on the three common faces of the incentive process for matrix M_4, N = 40. (b) D_0 expected distances for the faces in (a). (c) Stationary distribution for the face with a_4 = 0. (d) D_0 expected distances for the face in (c).

Figure 6. (a–c) Relative entropy D_1(ā) of the expected next state with the current state for Fermi incentives (q = 1, β = 1) for the Wright–Fisher process with μ = (3/2)(1/N), for game matrices 2, 20 and 47 in Bomze's classification (as in Figure 2). (d–f) Stationary distributions for the Wright–Fisher process. (g–i) Stationary distributions with μ = (1/2)(1/N); the corresponding relative entropies differ slightly (notably at the lower-left vertex of the middle-column plots) but are so similar to the top row that they are omitted.

Figure 7. (a) Stationary distribution for a Moran-like process with non-constant population size, with the replicator incentive for the fitness landscape defined by a = 1 = d and b = 2 = c, maximum population size N = 40, μ = 0.01. The coordinates are (a_1, a_2, a_1 + a_2), with the population size a_1 + a_2 increasing to the top and right. (b) √D_0 distance between the expected next state and the current state; as expected, minima occur along the line a_1 = a_2, with a local minimum of D_1 at (20, 20). The root of D_1 is taken to exaggerate the variation visually. Sigmoid (s-curve) probabilities for the birth-death decision behave similarly.

Figure 8. Two configurations for an incentive process on a cycle. (a) This configuration is inherently unstable, since any non-mutation replication will alter it. (b) This configuration is much more stable: for small mutation rates, only replication events on the interface between the two subpopulations can alter it.

Figure 9. (a) D_0 for the 40-fold incentive process with the Fermi incentive (β = 1) and the rock-paper-scissors matrix (Bomze's 17), N = 80; local minima occur at the boundary states and at the center. (b) D_1 for the same process; compare Figure 1.4 in [25], which depicts a Lyapunov quantity equivalent to the relative entropy for the replicator equation.
Article
A Cost/Speed/Reliability Tradeoff to Erasing
by Manoj Gopalkrishnan
Entropy 2016, 18(5), 165; https://doi.org/10.3390/e18050165 - 28 Apr 2016
Cited by 6 | Viewed by 4794
Abstract
We present a Kullback–Leibler (KL) control treatment of the fundamental problem of erasing a bit. We introduce notions of reliability of information storage via a reliability timescale τ_r, and speed of erasing via an erasing timescale τ_e. Our problem formulation captures the tradeoff between speed, reliability, and the KL cost required to erase a bit. We show that rapid erasing of a reliable bit costs at least log 2 − log(1 − e^(−τ_e/τ_r)) > log 2, which grows like log(2τ_r/τ_e) when τ_r ≫ τ_e.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
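As a quick numerical sanity check (not from the paper), the stated lower bound can be evaluated directly; assuming natural logarithms, it exceeds log 2 for any finite timescales and grows like log(2τ_r/τ_e) for a reliable bit:

```python
import math

def erasure_cost_bound(tau_e, tau_r):
    """Lower bound on the KL cost (in nats) of erasing, in time tau_e,
    a bit stored with reliability timescale tau_r."""
    return math.log(2) - math.log(1 - math.exp(-tau_e / tau_r))

# For a reliable bit (tau_r >> tau_e) the bound approaches log(2 tau_r / tau_e):
for ratio in (1e2, 1e4, 1e6):
    exact = erasure_cost_bound(1.0, ratio)
    approx = math.log(2 * ratio)
    print(f"tau_r/tau_e = {ratio:.0e}: bound = {exact:.4f}, "
          f"log(2 tau_r/tau_e) = {approx:.4f}")
```

The two columns agree ever more closely as the reliability timescale grows, so reliable storage makes fast erasure strictly more expensive than Landauer's log 2.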
Figure 1. The discrete-time path space P_h. A specific path is labeled in red.
Article
Open Markov Processes: A Compositional Perspective on Non-Equilibrium Steady States in Biology
by Blake S. Pollard
Entropy 2016, 18(4), 140; https://doi.org/10.3390/e18040140 - 15 Apr 2016
Cited by 9 | Viewed by 5735
Abstract
In recent work, Baez, Fong and the author introduced a framework for describing Markov processes equipped with a detailed balanced equilibrium as open systems of a certain type. These “open Markov processes” serve as the building blocks for more complicated processes. In this paper, we describe the potential application of this framework in the modeling of biological systems as open systems maintained away from equilibrium. We show that non-equilibrium steady states emerge in open systems of this type, even when the rates of the underlying process are such that a detailed balanced equilibrium is permitted. It is shown that these non-equilibrium steady states minimize a quadratic form which we call “dissipation”. In some circumstances, the dissipation is approximately equal to the rate of change of relative entropy plus a correction term. On the other hand, Prigogine’s principle of minimum entropy production generally fails for non-equilibrium steady states. We use a simple model of membrane transport to illustrate these concepts. Full article
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
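A minimal numerical illustration of the non-equilibrium steady states described above, assuming the simplest membrane setting the paper uses (one interior compartment, all transition rates equal, as in its Figure 8); the function and the clamped population values are invented for illustration:

```python
def membrane_steady_state(p_out, p_in, k=1.0):
    """Interior (membrane) population that makes dp_mem/dt = 0 when the
    exterior populations p_out and p_in are clamped, with all transition
    rates set equal to k. Returns (p_mem, steady-state current)."""
    # dp_mem/dt = k*p_out + k*p_in - 2k*p_mem = 0
    p_mem = (k * p_out + k * p_in) / (2 * k)
    # net current flowing from outside through the membrane
    current = k * p_out - k * p_mem          # = k * (p_out - p_in) / 2
    return p_mem, current

p_mem, J = membrane_steady_state(3.0, 1.0)
print(p_mem, J)  # prints: 2.0 1.0
```

When the clamped boundary populations agree, the current vanishes and the detailed balanced equilibrium is recovered; a mismatch drives a steady nonzero current, i.e. a non-equilibrium steady state maintained by the environment.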
Figure 1. A simple model for passive diffusion across a membrane.

Figure 2. A depiction of an open Markov process as a labeled, directed graph.

Figure 3. An open detailed balanced Markov process modeling membrane transport.

Figure 4. Another layer of membrane whose interior population is labeled by D and whose exterior populations are labeled by C′ and E.

Figure 5. Membranes arranged in series, modeled as an open detailed balanced Markov process.

Figure 6. Composition of open detailed balanced Markov processes results in an open detailed balanced Markov process.

Figure 7. A depiction of two membranes arranged in series.

Figure 8. A model of passive transport across a membrane where all transition rates are set equal.
Article
The Free Energy Requirements of Biological Organisms; Implications for Evolution
by David H. Wolpert
Entropy 2016, 18(4), 138; https://doi.org/10.3390/e18040138 - 13 Apr 2016
Cited by 20 | Viewed by 10623 | Correction
Abstract
Recent advances in nonequilibrium statistical physics have provided unprecedented insight into the thermodynamics of dynamic processes. The author recently used these advances to extend Landauer’s semi-formal reasoning concerning the thermodynamics of bit erasure, to derive the minimal free energy required to implement an arbitrary computation. Here, I extend this analysis, deriving the minimal free energy required by an organism to run a given (stochastic) map π from its sensor inputs to its actuator outputs. I use this result to calculate the input-output map π of an organism that optimally trades off the free energy needed to run π with the phenotypic fitness that results from implementing π. I end with a general discussion of the limits imposed on the rate of the terrestrial biosphere’s information processing by the flux of sunlight on the Earth.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
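For scale, the Landauer bound that this analysis generalizes is a one-line computation (the constant and temperature below are standard physical values, not taken from the paper):

```python
import math

# Landauer's bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln 2 of free energy.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, 2019 SI)

def landauer_limit(T=300.0):
    """Minimum free energy (J) dissipated to erase one bit at temperature T."""
    return K_B * T * math.log(2)

print(f"{landauer_limit():.3e} J per bit at 300 K")  # ~2.87e-21 J
```

The bound is tiny per bit, which is why, as the abstract suggests, the interesting constraint at the biosphere level comes from the aggregate rate of information processing against the available solar free-energy flux.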
Article
Maximizing Diversity in Biology and Beyond
by Tom Leinster and Mark W. Meckes
Entropy 2016, 18(3), 88; https://doi.org/10.3390/e18030088 - 9 Mar 2016
Cited by 28 | Viewed by 10473
Abstract
Entropy, under a variety of names, has long been used as a measure of diversity in ecology, as well as in genetics, economics and other fields. There is a spectrum of viewpoints on diversity, indexed by a real parameter q giving greater or lesser importance to rare species. Leinster and Cobbold (2012) proposed a one-parameter family of diversity measures taking into account both this variation and the varying similarities between species. Because of this latter feature, diversity is not maximized by the uniform distribution on species. So it is natural to ask: which distributions maximize diversity, and what is its maximum value? In principle, both answers depend on q, but our main theorem is that neither does. Thus, there is a single distribution that maximizes diversity from all viewpoints simultaneously, and any list of species has an unambiguous maximum diversity value. Furthermore, the maximizing distribution(s) can be computed in finite time, and any distribution maximizing diversity from some particular viewpoint q > 0 actually maximizes diversity for all q. Although we phrase our results in ecological terms, they apply very widely, with applications in graph theory and metric geometry.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
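The diversity measures in question can be computed directly. The sketch below implements the Leinster–Cobbold diversity of order q (for q ≠ 1; q = 1 is a limiting case) and uses an invented three-species similarity matrix to show the point made in the abstract: with non-trivial similarities, the uniform distribution need not maximize diversity.

```python
import numpy as np

def diversity(p, Z, q):
    """Leinster-Cobbold diversity of order q of distribution p under
    similarity matrix Z: (sum_i p_i (Zp)_i^(q-1))^(1/(1-q)), q != 1."""
    p = np.asarray(p, float)
    Zp = Z @ p
    mask = p > 0  # sum runs over species actually present
    return float((p[mask] @ Zp[mask] ** (q - 1)) ** (1 / (1 - q)))

# Toy system: species 1 and 2 are 90% similar, species 3 is distinct.
Z = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

uniform = [1/3, 1/3, 1/3]
skewed = [0.25, 0.25, 0.5]  # down-weights the two similar species
for q in (0.0, 2.0):
    print(q, diversity(uniform, Z, q), diversity(skewed, Z, q))
```

Here the distribution that down-weights the two similar species beats the uniform one at q = 2, and the paper's theorem guarantees that whatever distribution maximizes diversity for one viewpoint q > 0 maximizes it for all.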
Figure 1. Two bird communities. Heights of stacks indicate species abundances. In (a), there are four species, with the first dominant and the others relatively rare; in (b), the fourth species is absent but the community is otherwise evenly balanced.

Figure 2. Visualizations of the main theorem: (a) in terms of how different values of q rank the set of distributions; and (b) in terms of diversity profiles.

Figure 3. Hypothetical three-species system. Distances between species indicate degrees of dissimilarity between them (not to scale).

Figure 4. Hypothetical community consisting of one species of oak and ten species of pine, to which one further species of pine is then added. Distances between species indicate degrees of dissimilarity (not to scale).
Article
Using Expectation Maximization and Resource Overlap Techniques to Classify Species According to Their Niche Similarities in Mutualistic Networks
by Hugo Fort and Muhittin Mungan
Entropy 2015, 17(11), 7680-7697; https://doi.org/10.3390/e17117680 - 12 Nov 2015
Cited by 3 | Viewed by 5380
Abstract
Mutualistic networks in nature are widespread and play a key role in generating the diversity of life on Earth. They constitute an interdisciplinary field where physicists, biologists and computer scientists work together. Plant-pollinator mutualisms in particular form complex networks of interdependence between often hundreds of species. Understanding the architecture of these networks is of paramount importance for assessing the robustness of the corresponding communities to global change and management strategies. Advances in this problem are currently limited mainly by the lack of methodological tools to deal with the intrinsic complexity of mutualisms, as well as the scarcity and incompleteness of available empirical data. One way to uncover the structure underlying complex networks is to employ information-theoretic statistical inference methods, such as the expectation maximization (EM) algorithm. In particular, such an approach can be used to cluster the nodes of a network based on the similarity of their node neighborhoods. Here, we show how to connect network theory with classical ecological niche theory for mutualistic plant-pollinator webs by using the EM algorithm. We apply EM to classify the nodes of an extensive collection of mutualistic plant-pollinator networks according to their connection similarity. We find that EM largely recovers the same clustering of the species as an alternative recently proposed method based on resource overlap, where each party is considered a consumable resource for the other (plants providing food to animals, while animals assisting the reproduction of plants). Furthermore, using the EM algorithm, we can obtain a sequence of successively refined classifications that enables us to identify the fine structure of the ecological network and better understand the niche distribution of both plants and animals. This is an example of how information-theoretic methods help to systematize and unify work in ecology.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
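The resource-overlap side of the comparison is straightforward to compute. Below is a sketch with an invented 4-pollinator, 3-plant incidence matrix, computing the Jaccard overlap J_ij between pollinators (the quantity visualized in the paper's Figure 3); the matrix and function name are illustrative assumptions.

```python
import numpy as np

def jaccard_overlap(B):
    """J_ij = |plants visited by both i and j| / |plants visited by either|,
    for a binary pollinator-by-plant incidence matrix B."""
    n = B.shape[0]
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            inter = np.sum(B[i] & B[j])
            union = np.sum(B[i] | B[j])
            J[i, j] = inter / union if union else 0.0
    return J

# rows: pollinators, columns: plants (1 = observed visiting)
B = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 1]], dtype=int)
print(jaccard_overlap(B))
```

Pollinators with identical plant repertoires get J = 1 and end up in the same clique of the unipartite pollinator-pollinator network, which is exactly the structure the EM classification recovers in the paper.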
Figure 1. Four-fold expectation maximization (EM) classification of the unipartite pollinator-pollinator network SCHE. The color of each node i codes its class membership, namely the class r for which γ_ir is maximal. Since the network contains pollinators with similar pollination preferences, the resulting network structure contains cliques; the EM algorithm uncovers this clique structure, as is evident from the node colors coinciding with the cliques.

Figure 2. (Top to bottom) 2-, 3- and 4-fold EM classification of the SCHE [35] unipartite pollinator-pollinator network. Each panel plots degree vs. pollinator ID (shown on the x-axis of the bottom panel), with pollinators placed according to their ordering on the niche axis as inferred by the resource overlap (RO) method of Subsection 2.2. The color of each bar denotes the class that the pollinator has been assigned to by the EM algorithm. The degree is the number of different plant species that the pollinator pollinates; a large (low) degree implies a generalist (specialist).

Figure 3. (Left) The OFST pollinator-pollinator network with the nine-fold EM classification; the color of each node denotes the class assigned by the EM algorithm. (Right, top) The corresponding plot of degree D vs. inferred relative location on a one-dimensional niche axis (legend as in Figure 2); the numbering of the pollinators corresponds to the IDs given in the data source [27] and agrees with the left panel. (Right, bottom) The Jaccard matrix J_ij measuring the resource overlap of species i and j: J_ij = 0 (dark blue) corresponds to no resource overlap, while J_ij = 1 (dark red) corresponds to complete resource overlap, in general only possible between two specialists with D = 1 that share the same resource.

Figure 4. (Top to bottom) 3-, 4-, 6-, 7- and 9-fold EM classification of the DISH unipartite pollinator-pollinator network. As the number of classes increases, we see both robustness, with certain species consistently classified together, and the emergence of finer structure as classes are split up (legend as in Figure 2).

Review


Review
Relative Entropy in Biological Systems
by John C. Baez and Blake S. Pollard
Entropy 2016, 18(2), 46; https://doi.org/10.3390/e18020046 - 2 Feb 2016
Cited by 46 | Viewed by 10435
Abstract
In this paper we review various information-theoretic characterizations of the approach to equilibrium in biological systems. The replicator equation, evolutionary game theory, Markov processes and chemical reaction networks all describe the dynamics of a population or probability distribution. Under suitable assumptions, the distribution will approach an equilibrium with the passage of time. Relative entropy—that is, the Kullback–Leibler divergence, or various generalizations of this—provides a quantitative measure of how far from equilibrium the system is. We explain various theorems that give conditions under which relative entropy is nonincreasing. In biochemical applications these results can be seen as versions of the Second Law of Thermodynamics, stating that free energy can never increase with the passage of time. In ecological applications, they make precise the notion that a population gains information from its environment as it approaches equilibrium.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
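The review's recurring theme, that relative entropy is nonincreasing along the flow, is easy to verify numerically for a Markov chain; the 3-state transition matrix below is an arbitrary illustration, not an example from the paper:

```python
import numpy as np

def kl(p, q):
    """Relative entropy (Kullback-Leibler divergence) D(p || q) in nats."""
    return float(np.sum(p * np.log(p / q)))

# column-stochastic transition matrix of an irreducible 3-state chain
T = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.8, 0.2],
              [0.1, 0.1, 0.6]])

# the equilibrium distribution is the eigenvector with eigenvalue 1
w, v = np.linalg.eig(T)
q = np.real(v[:, np.argmax(np.real(w))])
q /= q.sum()

p = np.array([0.98, 0.01, 0.01])  # start far from equilibrium
ds = []
for _ in range(20):
    ds.append(kl(p, q))
    p = T @ p
print(f"D(p||q): {ds[0]:.3f} -> {ds[-1]:.2e}")
```

The recorded values decrease monotonically toward zero, an instance of the data-processing inequality: applying the same stochastic map to both arguments cannot increase relative entropy, and the equilibrium is fixed by the map.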

Other


Correction
Correction: Wolpert, D.H. The Free Energy Requirements of Biological Organisms; Implications for Evolution. Entropy 2016, 18, 138
by David H. Wolpert
Entropy 2016, 18(6), 219; https://doi.org/10.3390/e18060219 - 2 Jun 2016
Cited by 3 | Viewed by 4110
Abstract
The following corrections should be made to the published paper [1]: [...]
(This article belongs to the Special Issue Information and Entropy in Biological Systems)