Abstract
The Jensen-variance (JV) distance measure is introduced and some of its properties are developed. The JV distance measure admits two interesting representations: the first is based on mixture covariances, and the second is in terms of the scaled variance of the absolute difference of two random variables. The connections between the JV distance measure and some well-known information measures, such as Fisher information, Gini mean difference, cumulative residual entropy, Fano factor, varentropy, varextropy, and chi-square distance measures, are examined. Specifically, the JV distance measure possesses metric properties and unifies most of these information measures within a general framework. It also includes variance and conditional variance as special cases. Furthermore, an extension of the JV distance measure in terms of transformed variables is provided. Finally, to demonstrate the usefulness of the proposed methods, the JV distance is applied to a real-life dataset related to a fish condition factor index, and some numerical results assuming skew-normal-distributed samples are presented.
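As a rough numerical illustration of the Jensen-type construction alluded to above, the following R sketch computes, by simulation, the gap between the variance of a two-component mixture and the weighted average of the component variances. This is only one natural variance-based Jensen gap, assumed here for illustration; the function name jv_gap, the weight alpha, and the mixture-sampling scheme are not taken from the article, whose exact JV definition is given in the main text.

# Hedged sketch (assumed form, not the article's exact JV definition):
# Jensen-type gap between the mixture variance and the weighted component variances.
jv_gap <- function(x, y, alpha = 0.5) {
  n <- min(length(x), length(y))
  # draw from the mixture distribution alpha*F_X + (1 - alpha)*F_Y
  pick_x <- runif(n) < alpha
  z <- ifelse(pick_x, sample(x, n, replace = TRUE), sample(y, n, replace = TRUE))
  # Jensen gap: Var(mixture) minus the weighted average of Var(X) and Var(Y)
  var(z) - (alpha * var(x) + (1 - alpha) * var(y))
}

set.seed(1)
x <- rnorm(1e5, mean = 0, sd = 1)
y <- rnorm(1e5, mean = 2, sd = 1)
jv_gap(x, y, alpha = 0.5)  # close to 0.25 * (0 - 2)^2 = 1 for this example

For this sketch the gap concentrates around \(\alpha (1-\alpha )(E[X]-E[Y])^2\), which is nonnegative, in line with the Jensen-type character of such measures.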
Data availability
The data used in Sect. 3.3 and the R codes of this paper will be made available by the corresponding author upon reasonable request.
References
Abid SH, Quaez UJ, Contreras-Reyes JE (2021) An information-theoretic approach for multivariate skew-\(t\) distributions and applications. Mathematics 9:146
Azzalini A (2013) The skew-normal and related families, vol 3. Cambridge University Press, Cambridge, UK
Balakrishnan N, Buono F, Calì C, Longobardi M (2023) Dispersion indices based on Kerridge inaccuracy measure and Kullback-Leibler divergence. Commun Stat Theory Methods. https://doi.org/10.1080/03610926.2023.2222926
Balakrishnan N, Kharazmi O (2022) Cumulative past Fisher information measure and its extensions. Braz J Prob Stat 36:540–559
Barlow RE, Proschan F (1975) Statistical theory of reliability and life testing: probability models. Florida State University, Tallahassee
Bobkov SG (2019) Moments of the scores. IEEE Trans Inf Theory 65:5294–5301
Bercher JF (2013) Some properties of generalized Fisher information in the context of nonextensive thermostatistics. Physica A 392:3140–3154
Casella G, Berger RL (2021) Statistical inference. Cengage Learning, Sao Paulo
Chiogna M (2005) A note on the asymptotic distribution of the maximum likelihood estimator for the scalar skew-normal distribution. Stat Methods Appl 14:331–341
Contreras-Reyes JE (2016) Analyzing fish condition factor index through skew-Gaussian information theory quantifiers. Fluct Noise Lett 15:1650013
Contreras-Reyes JE (2021) Fisher information and uncertainty principle for skew-Gaussian random variables. Fluct Noise Lett 20:2150039
Contreras-Reyes JE (2023a) Information quantity evaluation of nonlinear time series processes and applications. Physica D 445:133620
Contreras-Reyes JE (2023b) Information quantity evaluation of multivariate SETAR processes of order one and applications. Stat Papers. https://doi.org/10.1007/s00362-023-01457-6
Contreras-Reyes JE, Canales TM, Rojas PM (2016) Influence of climate variability on anchovy reproductive timing off northern Chile. J Mar Syst 164:67–75
Cover TM, Thomas JA (2006) Elements of information theory. Wiley, New York
Di Crescenzo A, Paolillo L (2021) Analysis and applications of the residual varentropy of random lifetimes. Prob Eng Inf Sci 35:680–698
Feller W, Morse PM (1958) An introduction to probability theory and its applications. Wiley, New York
Fisher RA (1929) Tests of significance in harmonic analysis. Proc R Soc Lond Series A 125:54–59
Gupta RC, Brown N (2001) Reliability studies of the skew-normal distribution and its application to a strength-stress model. Commun Stat Theory Methods 30:2427–2445
Hastie T, Tibshirani R, Friedman JH (2009) The elements of statistical learning: data mining, inference, and prediction, vol 2. Springer, New York
Johnson O (2004) Information theory and the central limit theorem. World Scientific, Singapore
Kattumannil SK, Sreelakshmi N, Balakrishnan N (2020) Non-parametric inference for Gini covariance and its variants. Sankhya A 84:790–807
Kharazmi O, Asadi M (2018) On the time-dependent Fisher information of a density function. Braz J Prob Stat 32:795–814
Kharazmi O, Balakrishnan N, Jamali H (2022) Cumulative residual \(q\)-Fisher information and Jensen-cumulative residual \(\chi ^2\) divergence measures. Entropy 24:341
Kharazmi O, Contreras-Reyes JE, Balakrishnan N (2023a) Jensen-Fisher information and Jensen-Shannon entropy measures based on complementary discrete distributions with an application to Conway’s game of life. Physica D 453:133822
Kharazmi O, Jamali H, Contreras-Reyes JE (2023b) Fisher information and its extensions based on infinite mixture density functions. Physica A 624:128959
Kharazmi O, Balakrishnan N, Ozonur D (2023c) Jensen-discrete information generating function with an application to image processing. Soft Comput 27:4543–4552
Lin J (1991) Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory 37:145–151
Mehrali Y, Asadi M, Kharazmi O (2018) A Jensen-Gini measure of divergence with application in parameter estimation. Metron 76:115–131
Montgomery DC, Runger GC (2020) Applied statistics and probability for engineers. Wiley, Chichester
Nielsen F, Nock R (2013) On the chi square and higher-order chi distances for approximating \(f\)-divergences. IEEE Signal Process Lett 21:10–13
Noughabi HA, Noughabi MS (2023) Varentropy estimators with applications in testing uniformity. J Stat Comput Simul 93:2582–2599
Ramachandran KM, Tsokos CP (2020) Mathematical statistics with applications in R. Academic Press, Hoboken
Ross SM (2014) Introduction to probability models. Academic Press, San Diego
Sánchez-Moreno P, Zarzo A, Dehesa JS (2012) Jensen divergence based on Fisher’s information. J Phys A 45:125305
Shao J (2003) Mathematical statistics. Springer, New York
Wooldridge JM (2015) Introductory econometrics: a modern approach. Cengage Learning, Mason
Acknowledgements
The authors would like to thank the editor and an anonymous referee for their helpful comments and suggestions.
Funding
No funds were received.
Ethics declarations
Conflict of interest
The authors declare that they have no known conflicts of interest or competing interests that could have appeared to influence the work reported in this paper.
Additional information
Communicated by Clémentine Prieur.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: \((\alpha ,w)\)-Jensen-variance distance measure
Definition 4
The \((\alpha ,w)\)-JV distance between two random variables X and Y, for \(\alpha , w \in (0,1)\), is defined as
where \({\bar{s}}=w\alpha +(1-w)(1-\alpha )\).
Theorem 10
The connection between the \((\alpha ,w)\)-JV distance and the JV distance is given by
Proof
From the definition of the \(\mathcal{J}_{w,\alpha }(X,Y)\) measure in (18), and upon making use of Theorem 3, we have
as required. \(\square \)
Corollary 4
If X and Y are two independent random variables, then from the inequality in (5) and Theorem 10, we find that
Appendix B: Connection between Jensen-variance distance and Fano factor measure
Let X be a random variable with finite mean \(E[X]\) and variance \(Var(X)\). Then, the ratio \(F(X)=Var(X)/E[X]\) is known as the Fano factor (Feller and Morse 1958; Cover and Thomas 2006). The Fano factor is a normalized measure used to quantify the dispersion of event occurrences within a given time window; it is often used to analyze and characterize the variability or clustering behavior of events, particularly in scenarios where events occur randomly or intermittently.
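As a brief numerical illustration (not drawn from the article), the following R sketch estimates the Fano factor from simulated event counts; the function name fano_factor and the chosen Poisson and negative-binomial examples are assumptions made only for illustration. Poisson counts give a Fano factor close to 1, while overdispersed negative-binomial counts give values above 1.

# Illustrative sketch: empirical Fano factor = variance of counts divided by their mean.
fano_factor <- function(counts) {
  var(counts) / mean(counts)
}

set.seed(1)
poisson_counts <- rpois(1e4, lambda = 5)      # Fano factor near 1
nb_counts <- rnbinom(1e4, mu = 5, size = 2)   # Fano factor near 1 + mu/size = 3.5
fano_factor(poisson_counts)
fano_factor(nb_counts)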
Lemma 2
The connection between the JV distance and the Fano factor measure is given by
where W is a degenerate random variable at the point \(E[W]=c\).
Proof
From the definition of the JV distance and making use of Theorem 3, we have
as required. \(\square \)
Theorem 11
Let X and Y be two random variables. Then, a connection between the JV measure and the Fano factor is given by
where \(\Lambda =\frac{\alpha E[X]}{\mu _\alpha }\) and \(\mu _\alpha =\alpha E[X]+(1-\alpha )E[Y].\)
Proof
From the definition of \({\mathcal{J}_\alpha (X,Y)}\) and considering \(\Lambda =\frac{\alpha E[X]}{\mu _\alpha }\) and \(\mu _\alpha =\alpha E[X]+(1-\alpha )E[Y]\), we have
as required. \(\square \)
From Theorem 11, it is clear that
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kharazmi, O., Contreras-Reyes, J.E. & Basirpour, M.B. Jensen-variance distance measure: a unified framework for statistical and information measures. Comp. Appl. Math. 43, 144 (2024). https://doi.org/10.1007/s40314-024-02666-x