Search Results (5)

Search Parameters:
Keywords = Fisher–Rao metric tensor

8 pages, 296 KiB  
Proceeding Paper
Dynamical Systems over Lie Groups Associated with Statistical Transformation Models
by Daisuke Tarama and Jean-Pierre Françoise
Phys. Sci. Forum 2022, 5(1), 21; https://doi.org/10.3390/psf2022005021 - 7 Dec 2022
Viewed by 1360
Abstract
A statistical transformation model consists of a smooth data manifold, on which a Lie group acts smoothly, together with a family of probability density functions on the data manifold parametrized by elements of the Lie group. For such a statistical transformation model, the Fisher–Rao semi-definite metric and the Amari–Chentsov cubic tensor are defined on the Lie group. If the family of probability density functions is invariant with respect to the Lie group action, the Fisher–Rao semi-definite metric and the Amari–Chentsov tensor are left-invariant, and hence one obtains a left-invariant statistical manifold structure. In the present work, the general framework of statistical transformation models is explained. Then, the left-invariant geodesic flow associated with the Fisher–Rao metric is considered for two specific families of probability density functions on the Lie group. The corresponding Euler–Poincaré and Lie–Poisson equations are found explicitly from the viewpoint of geometric mechanics. Related dynamical systems over Lie groups are also mentioned. A generalization concerning the invariance of the family of probability density functions is further studied. Full article
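For orientation, the two tensors named in this abstract can be written in the standard way for a parametric family; the notation below (densities \(p(x; g)\) on the data manifold \(M\), group element \(g\), coordinates \(\partial_i\) on the Lie group) is ours and not necessarily the authors':
\[
g_{ij}(g) = \int_M \partial_i \log p(x; g)\, \partial_j \log p(x; g)\; p(x; g)\, \mathrm{d}x,
\qquad
T_{ijk}(g) = \int_M \partial_i \log p(x; g)\, \partial_j \log p(x; g)\, \partial_k \log p(x; g)\; p(x; g)\, \mathrm{d}x.
\]
When the family is invariant under the group action, both tensors are left-invariant, which is what allows the associated geodesic flow to be written in Euler–Poincaré and Lie–Poisson form.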
30 pages, 460 KiB  
Article
Differential Geometric Aspects of Parametric Estimation Theory for States on Finite-Dimensional C*-Algebras
by Florio M. Ciaglia, Jürgen Jost and Lorenz Schwachhöfer
Entropy 2020, 22(11), 1332; https://doi.org/10.3390/e22111332 - 23 Nov 2020
Cited by 7 | Viewed by 2779
Abstract
A geometrical formulation of estimation theory for finite-dimensional C*-algebras is presented. This formulation allows one to deal with the classical and quantum cases in a single, unifying mathematical framework. The derivation of the Cramér–Rao and Helstrom bounds for parametric statistical models with discrete and finite outcome spaces is presented. Full article
(This article belongs to the Special Issue Quantum Statistical Decision and Estimation Theory)
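As a reminder of the two bounds named in the abstract, written in standard notation rather than necessarily the paper's: for an unbiased estimator \(\hat{\theta}\) built from \(n\) i.i.d. observations of a model \(p(x; \theta)\) with a finite outcome space, the classical Cramér–Rao bound reads
\[
\mathrm{Cov}_\theta(\hat{\theta}) \;\succeq\; \frac{1}{n}\, F(\theta)^{-1},
\qquad
F_{ij}(\theta) \;=\; \sum_{x} p(x; \theta)\, \partial_i \log p(x; \theta)\, \partial_j \log p(x; \theta),
\]
while the quantum (Helstrom) bound has the same form with \(F\) replaced by the quantum Fisher information \(F^{Q}_{ij}(\theta) = \mathrm{Re}\,\mathrm{Tr}\!\left(\rho_\theta L_i L_j\right)\), where the symmetric logarithmic derivatives \(L_i\) are defined by \(\partial_i \rho_\theta = \tfrac{1}{2}(L_i \rho_\theta + \rho_\theta L_i)\).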
61 pages, 2469 KiB  
Review
An Elementary Introduction to Information Geometry
by Frank Nielsen
Entropy 2020, 22(10), 1100; https://doi.org/10.3390/e22101100 - 29 Sep 2020
Cited by 86 | Viewed by 44276
Abstract
In this survey, we describe the fundamental differential-geometric structures of information manifolds, state the fundamental theorem of information geometry, and illustrate some use cases of these information manifolds in information sciences. The exposition is self-contained, concisely introducing the necessary concepts of differential geometry; proofs are omitted for brevity. Full article
(This article belongs to the Special Issue Review Papers for Entropy)
Show Figures

Figure 1. The parameter inference \(\hat{\theta}\) of a model from data \(\mathcal{D}\) can also be interpreted as a decision-making problem: decide which parameter of a parametric family of models \(M = \{m_\theta\}_{\theta \in \Theta}\) best suits the data. Information geometry provides a differential-geometric structure on the manifold \(M\) which is useful for designing and studying statistical decision rules.
Figure 2. Primal basis (red) and reciprocal basis (blue) of an inner-product space \(\langle \cdot, \cdot \rangle\); the primal and reciprocal bases are mutually orthogonal: \(e^1\) is orthogonal to \(e_2\), and \(e_1\) is orthogonal to \(e^2\).
Figure 3. Parallel transport of vectors on tangent planes along a smooth curve \(c\) with \(c(0) = p\) and \(c(1) = q\): a vector \(v_p \in T_p\) is transported smoothly to a vector \(v_q \in T_q\) such that \(v_{c(t)} \in T_{c(t)}\) for all \(t \in [0, 1]\).
Figure 4. Parallel transport with respect to the metric connection: the curvature effect can be visualized as the angle defect along parallel transport on small loops. On a sphere, a vector parallel-transported along a loop does not coincide with itself, whereas on a flat manifold (e.g., a cylinder) it always does.
Figure 5. Differential-geometric concepts associated with an affine connection \(\nabla\) and a metric tensor \(g\).
Figure 6. Dual Pythagorean theorems in a dually flat space.
Figure 7. Five concentric pairs of dual Itakura–Saito circles.
Figure 8. Common dually flat spaces associated with smooth and strictly convex generators.
Figure 9. Visualizing the Cramér–Rao lower bound: red ellipses display the Fisher information matrices of normal distributions \(N(\mu, \sigma^2)\) at grid locations; black ellipses are sample covariance matrices centered at the sample means, obtained from 200 runs of sampling 100 iid variates for each grid parameter.
Figure 10. A divergence satisfies information monotonicity iff \(D(\theta_{\bar{\mathcal{A}}} : \theta'_{\bar{\mathcal{A}}}) \le D(\theta : \theta')\); here the parameter \(\theta\) represents a discrete distribution.
Figure 11. Overview of the main types of information manifolds and their relationships in information geometry.
Figure 12. Statistical Bayesian hypothesis testing: the best Maximum A Posteriori (MAP) rule assigns an observation to the class that yields the maximum likelihood.
Figure 13. Exact geometric characterization (not necessarily in closed form) of the best error exponent \(\alpha^{*}\).
Figure 14. Geometric characterization of the best error exponent in the multiple hypothesis testing case.
Figure 15. Example of a mixture family of order \(D = 2\) (three fixed components: Laplacian, Gaussian and Cauchy distributions).
Figure 16. Example of \(w\)-GMM clustering into \(k = 2\) clusters.
Figure 17. Principled classes of distances/divergences.
Figure A1. Illustration of the chordal slope lemma.
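">
Figure 9 above compares Fisher-information ellipses with the empirical covariance of estimates over repeated sampling runs. The short sketch below is our own illustration of that comparison for a single normal distribution \(N(\mu, \sigma^2)\); it is not code from the survey, and the parameter values are arbitrary.

import numpy as np

# Cramér–Rao bound vs. empirical covariance of the MLE for N(mu, sigma^2),
# in the spirit of Figure 9 (a single distribution instead of a grid).
rng = np.random.default_rng(0)
mu, sigma2, n, runs = 0.0, 4.0, 100, 200

# Fisher information of N(mu, sigma^2) in the (mu, sigma^2) parametrization.
fisher = np.diag([1.0 / sigma2, 1.0 / (2.0 * sigma2**2)])
crlb = np.linalg.inv(fisher) / n  # bound on the covariance of an unbiased estimator

# Maximum-likelihood estimates (sample mean, biased sample variance) over repeated runs.
mle = np.empty((runs, 2))
for r in range(runs):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    mle[r] = x.mean(), x.var()

print("Cramér–Rao bound:\n", crlb)
# The variance MLE is slightly biased, so the agreement is only asymptotic.
print("Empirical covariance of the MLE:\n", np.cov(mle, rowvar=False))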
27 pages, 441 KiB  
Article
From the Jordan Product to Riemannian Geometries on Classical and Quantum States
by Florio M. Ciaglia, Jürgen Jost and Lorenz Schwachhöfer
Entropy 2020, 22(6), 637; https://doi.org/10.3390/e22060637 - 8 Jun 2020
Cited by 15 | Viewed by 2370
Abstract
The Jordan product on the self-adjoint part of a finite-dimensional \(C^*\)-algebra \(\mathcal{A}\) is shown to give rise to Riemannian metric tensors on suitable manifolds of states on \(\mathcal{A}\), and the covariant derivative, the geodesics, the Riemann tensor, and the sectional curvature of all these metric tensors are explicitly computed. In particular, it is proved that the Fisher–Rao metric tensor is recovered in the Abelian case, that the Fubini–Study metric tensor is recovered when we consider pure states on the algebra \(\mathcal{B}(\mathcal{H})\) of linear operators on a finite-dimensional Hilbert space \(\mathcal{H}\), and that the Bures–Helstrom metric tensor is recovered when we consider faithful states on \(\mathcal{B}(\mathcal{H})\). Moreover, an alternative derivation of these Riemannian metric tensors in terms of the GNS construction associated to a state is presented. In the case of pure and faithful states on \(\mathcal{B}(\mathcal{H})\), this alternative geometrical description clarifies the analogy between the Fubini–Study and the Bures–Helstrom metric tensors. Full article
(This article belongs to the Section Quantum Information)
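For context, the Jordan product referred to in the abstract is the symmetrized product on self-adjoint elements, and in the Abelian case (probability vectors on a finite set) the recovered Fisher–Rao metric takes its familiar simplex form; the following is a sketch in our own notation, not the authors' construction:
\[
a \circ b \;=\; \tfrac{1}{2}\,(ab + ba),
\qquad
g_p(u, v) \;=\; \sum_{k} \frac{u_k\, v_k}{p_k}
\quad \text{on the interior of the probability simplex,}
\]
where \(u, v\) are tangent vectors satisfying \(\sum_k u_k = \sum_k v_k = 0\).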
273 KiB  
Article
On Monotone Embedding in Information Geometry
by Jun Zhang
Entropy 2015, 17(7), 4485-4499; https://doi.org/10.3390/e17074485 - 25 Jun 2015
Cited by 19 | Viewed by 4706
Abstract
A paper was published (Harsha and Subrahamanian Moosath, 2014) in which the authors claimed to have discovered an extension to Amari's \(\alpha\)-geometry through a general monotone embedding function. It will be pointed out here that this so-called \((F, G)\)-geometry (which includes \(F\)-geometry as a special case) is identical to Zhang's (2004) extension of the \(\alpha\)-geometry, where the pair of monotone embedding functions was denoted \(\rho\) and \(\tau\) instead of the \(F\) and \(H\) used in Harsha and Subrahamanian Moosath (2014). Their weighting function \(G\) for the Riemannian metric appears only cosmetically, due to rewriting the score function in the log-representation as opposed to the \((\rho, \tau)\)-representation of Zhang (2004). It is further shown here that the resulting metric and \(\alpha\)-connections obtained by Zhang (2004) through arbitrary monotone embeddings form a unique extension of the \(\alpha\)-geometric structure. As a special case, Naudts' (2004) \(\phi\)-logarithm embedding (using the so-called \(\log_\phi\) function) is recovered with the identification \(\rho = \phi\), \(\tau = \log_\phi\), with the \(\phi\)-exponential \(\exp_\phi\) given by the associated convex function linking the two representations. Full article
(This article belongs to the Special Issue Information, Entropy and Their Geometric Structures)
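To make the abstract's notation concrete, the metric induced by a pair of monotone embedding functions \(\rho\) and \(\tau\) can be written as follows; this is a sketch in the spirit of Zhang (2004), not a verbatim quotation from either paper:
\[
g_{ij}(\theta) \;=\; \int_{\mathcal{X}} \partial_i\, \rho\!\left(p(x; \theta)\right)\, \partial_j\, \tau\!\left(p(x; \theta)\right)\, \mathrm{d}\mu(x).
\]
The choice \(\rho(p) = p\), \(\tau(p) = \log p\) recovers the Fisher–Rao metric, since \(\partial_i p\, \partial_j \log p = p\, \partial_i \log p\, \partial_j \log p\), while the identification \(\rho = \phi\), \(\tau = \log_\phi\) gives the Naudts deformed-logarithm case mentioned in the abstract.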