Article

Heterogeneous Graphical Granger Causality by Minimum Message Length

by Kateřina Hlaváčková-Schindler 1,2,* and Claudia Plant 1,3
1 Faculty of Computer Science, University of Vienna, 1090 Wien, Austria
2 Institute of Computer Science of the Czech Academy of Sciences, 18207 Prague, Czech Republic
3 ds:UniVie, University of Vienna, 1090 Wien, Austria
* Author to whom correspondence should be addressed.
Entropy 2020, 22(12), 1400; https://doi.org/10.3390/e22121400
Submission received: 2 November 2020 / Revised: 26 November 2020 / Accepted: 7 December 2020 / Published: 11 December 2020
(This article belongs to the Special Issue Information Flow in Neural Systems)

Abstract: The heterogeneous graphical Granger model (HGGM) for causal inference among processes with distributions from an exponential family is efficient in scenarios when the number of time observations is much greater than the number of time series, normally by several orders of magnitude. However, in the case of “short” time series, the inference in HGGM often suffers from overestimation. To remedy this, we use the minimum message length (MML) principle to determine the causal connections in the HGGM. The minimum message length, as a Bayesian information-theoretic method for statistical model selection, applies Occam’s razor in the following way: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of the data is more likely to be correct. Based on the dispersion coefficient of the target time series and on the initial maximum likelihood estimates of the regression coefficients, we propose a minimum message length criterion to select the subset of time series causally connected with each target time series and derive its form for various exponential distributions. We propose two algorithms to find this subset: a genetic-type algorithm (HMMLGA) and an exhaustive search (exHMML). We demonstrated the superiority of both algorithms in synthetic experiments with respect to the comparison methods Lingam, HGGM and the statistical framework for Granger causality (SFGC). In the real-data experiments, we used the methods to discriminate between the pregnancy and labor phases using electrohysterogram data of Icelandic mothers from the Physionet database. We further analysed the Austrian climatological time measurements and their temporal interactions in rainy- and sunny-day scenarios. In both experiments, the results of HMMLGA had the most realistic interpretation with respect to the comparison methods. We provide our code in Matlab. To the best of our knowledge, this is the first work using the MML principle for causal inference in HGGM.

1. Introduction

Granger causality is a popular method for causality analysis in time series due to its computational simplicity. Its application to time series with non-Gaussian distributions can, however, be misleading. Recently, Behzadi et al. [1] proposed the heterogeneous graphical Granger model (HGGM) for detecting causal relations among time series with distributions from the exponential family, which includes a wide class of common distributions. HGGM employs regression in generalized linear models (GLM) with adaptive Lasso penalization [2] as a variable selection method and applies it to time series with a given lag. This approach makes causal inference applicable also to time series with discrete values. HGGM, using penalization by adaptive Lasso, showed its efficiency in scenarios when the number of time observations is much greater than the number of time series, normally by several orders of magnitude; however, on “short” time series, the inference in HGGM often suffers from overestimation.
Overestimation on short time series is a problem which also occurs in general forecasting problems. For example, when forecasting demand for a new product or a new customer, there are usually very few time series observations available. For such short time series, traditional forecasting methods may be inaccurate. To overcome this problem in forecasting, Ref. [3] proposed to utilize prior information derived from the data and applied a Bayesian inference approach. Similarly, for another data mining problem, a Bayesian approach has been shown to be efficient for the clustering of short time series [4].
Motivated by the efficiency of Bayesian approaches to these problems on short time series, we propose to apply the Bayesian approach called the minimum message length (MML) principle, as introduced in [5], to causal inference in HGGM. The contributions of our paper are the following:
(1)
We used the minimum message length (MML) principle for determination of causal connections in the heterogeneous graphical Granger model.
(2)
Based on the dispersion coefficient of the target time series and on the initial maximum likelihood estimates of the regression coefficients, we proposed a minimum message length criterion to select the subset of time series causally connected with each target time series; furthermore, we derived its form for various exponential distributions.
(3)
We found this subset in two ways: by a proposed genetic-type algorithm (HMMLGA), as well as by exhaustive search (exHMML). We evaluated the complexities of these algorithms and provided the code in Matlab.
(4)
We demonstrated the superiority of both methods with respect to the comparison methods Lingam [6], HGGM [1] and the statistical framework for Granger causality (SFGC) [7] in the synthetic experiments with short time series. In the real-data experiments without known ground truth, the interpretation of causal connections achieved by HMMLGA was the most realistic with respect to the comparison methods.
(5)
To the best of our knowledge, this is the first work applying the minimum message length principle to the heterogeneous graphical Granger model.
The paper is organized as follows. Section 2 presents definitions of the graphical Granger causal model, of the heterogeneous graphical Granger causal model and of the minimum message length principle. Our method, including the derived criteria and algorithms, is described in Section 3. Related work is discussed in Section 4. Our experiments are summarized in Section 5. Section 6 is devoted to the conclusions; the derivation of the criteria from Section 3 can be found in Appendix A and Appendix B.

2. Preliminaries

To make this paper self-contained and to introduce the notation, we briefly summarize the basics of the graphical Granger causal model in Section 2.1. The heterogeneous graphical Granger model, as introduced in [1], is presented in Section 2.2. Section 2.3 discusses the strengths and limitations of the Granger causal models. The idea of the minimum message length principle is briefly explained in Section 2.4.

2.1. Graphical Granger Model

The (Gaussian) graphical Granger model extends the autoregressive concept of Granger causality to $p \geq 2$ time series [8]. Let $x_1^t, \ldots, x_p^t$ be the time instances of $p$ time series, $t = 1, \ldots, n$. As is common, we will use bold font in the notation of vectors and matrices. Consider the vector autoregressive (VAR) models with time lag $d \geq 1$ for $i = 1, \ldots, p$

$$x_i^t = X_{Lag}^{t,d} \beta_i^{\top} + \varepsilon_i^t \qquad (1)$$

where $X_{Lag}^{t,d} = (x_1^{t-d}, \ldots, x_1^{t-1}, \ldots, x_p^{t-d}, \ldots, x_p^{t-1})$, $\beta_i$ is a vector of regression coefficients and $\varepsilon_i^t$ is white noise. One can easily show that $X_{Lag}^{t,d} \beta_i^{\top} = \sum_{j=1}^{p} \sum_{l=1}^{d} x_j^{t-l} \beta_j^l$.
Definition 1.
One says time series $x_j$ Granger-causes time series $x_i$ for a given lag $d$, denoted $x_j \to x_i$, for $i, j = 1, \ldots, p$, if and only if at least one of the $d$ coefficients in the $j$-th row of $\beta_i$ in (1) is non-zero.
The solution of problem (1) has been approached by various forms of penalization methods in the literature, e.g., Lasso in [8], truncated Lasso in [9] or group Lasso [10].
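Definition 1 can be illustrated with a short sketch. The paper's own code is in Matlab; the following is a hypothetical Python analogue that uses ordinary least squares in place of the penalized estimators from [8,9,10], and declares an edge when some lag coefficient of a series exceeds a small threshold (all function names are our own):

```python
import numpy as np

def lagged_design(series, d):
    # series: (n, p) array. Returns X of shape (n-d, p*d): the row for target
    # time t holds x_j^{t-d}, ..., x_j^{t-1} for each series j, cf. Eq. (1).
    n, p = series.shape
    return np.hstack([
        np.column_stack([series[t0:n - d + t0, j] for t0 in range(d)])
        for j in range(p)
    ])

def granger_adjacency(series, d=1, threshold=0.25):
    # Least-squares stand-in for problem (1); adj[j, i] = 1 means x_j
    # Granger-causes x_i (some lag coefficient of x_j in beta_i exceeds
    # the threshold, in the spirit of Definition 1).
    n, p = series.shape
    X = lagged_design(series, d)
    adj = np.zeros((p, p), dtype=int)
    for i in range(p):
        y = series[d:, i]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        for j in range(p):
            if np.max(np.abs(beta[j * d:(j + 1) * d])) > threshold:
                adj[j, i] = 1
    return adj
```

On simulated data where $x_2$ is driven by the first lag of $x_1$, this recovers the edge $x_1 \to x_2$ and no edge in the opposite direction.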

2.2. Heterogeneous Graphical Granger Model

The heterogeneous graphical Granger model (HGGM) [1] considers time series $x_i$ whose likelihood function belongs to the exponential family with a canonical parameter $\theta_i$. The generic density form for each $x_i$ can be written as

$$p(x_i \mid X_{Lag}^{t,d}, \theta_i) = h(x_i) \exp\big(x_i \theta_i - \eta_i(\theta_i)\big) \qquad (2)$$

where $\theta_i = X_{Lag}^{t,d} (\beta_i^*)^{\top}$ ($\beta_i^*$ is the optimum) and $\eta_i$ is a link function corresponding to time series $x_i$. (The sign $\top$ denotes the transpose of a matrix.) The heterogeneous graphical Granger model uses the idea of generalized linear models (GLM, see e.g., [11]) and applies them to time series in the following form

$$x_i^t \approx \mu_i^t = \eta_i^t\big(X_{Lag}^{t,d} \beta_i^{\top}\big) = \eta_i^t\Big(\sum_{j=1}^{p} \sum_{l=1}^{d} x_j^{t-l} \beta_j^l\Big) \qquad (3)$$

for $x_i^t$, $i = 1, \ldots, p$, $t = d+1, \ldots, n$, each having a probability density from the exponential family; $\mu_i$ denotes the mean of $x_i$ and $\mathrm{var}(x_i \mid \mu_i, \phi_i) = \phi_i v_i(\mu_i)$, where $\phi_i$ is a dispersion parameter and $v_i$ is a variance function dependent only on $\mu_i$; $\eta_i^t$ is the $t$-th coordinate of $\eta_i$.
Causal inference in (3) can be solved as

$$\hat{\beta}_i = \arg\min_{\beta_i} \sum_{t=d+1}^{n} \Big(-x_i^t \big(X_{Lag}^{t,d} \beta_i^{\top}\big) + \eta_i^t\big(X_{Lag}^{t,d} \beta_i^{\top}\big)\Big) + \lambda_i R(\beta_i) \qquad (4)$$

for a given lag $d > 0$, $\lambda_i > 0$, and all $t = d+1, \ldots, n$, with $R(\beta_i)$ the adaptive Lasso penalty function [1]. (The first two summands in (4) correspond to the maximum likelihood estimates in the GLM.)
Definition 2.
One says time series $x_j$ Granger-causes time series $x_i$ for a given lag $d$, denoted $x_j \to x_i$, for $i, j = 1, \ldots, p$, if and only if at least one of the $d$ coefficients in the $j$-th row of $\hat{\beta}_i$ of the solution of (4) is non-zero [1].
Remark 1.
Non-zero values in Definitions 1 and 2 are, in practice, identified by considering values bigger than a given threshold, which is a positive number “close” to zero.
For example, Equation (4) for the Poisson graphical Granger model [12], where $\eta_i^t := \exp$ is considered for each $i = 1, \ldots, p$, can be written as

$$\hat{\beta}_i = \arg\min_{\beta_i} \sum_{t=d+1}^{n} \Big(-x_i^t \big(X_{Lag}^{t,d} \beta_i^{\top}\big) + \exp\big(X_{Lag}^{t,d} \beta_i^{\top}\big)\Big) + \lambda_i R(\beta_i). \qquad (5)$$

Equation (4) for the binomial graphical Granger model can be written as

$$\hat{\beta}_i = \arg\min_{\beta_i} \sum_{t=d+1}^{n} \Big(-x_i^t \big(X_{Lag}^{t,d} \beta_i^{\top}\big) + \log\big(1 + \exp\big(X_{Lag}^{t,d} \beta_i^{\top}\big)\big)\Big) + \lambda_i R(\beta_i) \qquad (6)$$
and finally, Equation (4) for the Gaussian graphical Granger model reduces to the least squares error of (1) with $R(\beta_i)$ the adaptive Lasso penalty function. The heterogeneous graphical Granger model can be applied to causal inference among processes, for example, in climatology; e.g., Ref. [1] investigated the causal inference among precipitation time series (having a gamma distribution) and time series of sunny days (having a Poisson distribution).
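To make the GLM form of (5) concrete, here is a small Python sketch. Plain gradient descent on the unpenalized Poisson objective stands in for the adaptive-Lasso-penalized estimation used in HGGM; the helper names are our own:

```python
import numpy as np

def poisson_objective(beta, X, y):
    # The first two summands of Eq. (5): the Poisson GLM negative
    # log-likelihood, up to the log(y!) term, which does not depend on beta.
    eta = X @ beta
    return float(np.sum(-y * eta + np.exp(eta)))

def fit_poisson(X, y, lr=0.05, steps=3000):
    # Illustrative unpenalized fit by gradient descent; HGGM instead solves
    # the penalized problem (5) with an adaptive Lasso term R(beta).
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (np.exp(X @ beta) - y)  # gradient of the objective above
        beta -= lr * grad / len(y)
    return beta
```

On data simulated from a Poisson GLM, the recovered coefficients approach the generating ones as the sample grows.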

2.3. Granger Causality and Graphical Granger Models

Since its introduction, Granger causality [13] has faced criticism, e.g., that it does not take counterfactuals into account [14,15]. In defense of his method, Granger wrote in [16]: “Possible causation is not considered for any arbitrarily selected group of variables, but only for variables for which the researcher has some prior belief that causation is, in some sense, likely.” In other words, drawing conclusions about the existence of a causal relation between time series and about its direction is possible only if theoretical knowledge of the mechanisms connecting the time series is accessible.
Concerning the graphical causal models, including the Granger ones, Lindquist and Sobel in [17] claim that (1) they are not able to discover causal effects; (2) the theory of graphical causal models developed by Spirtes et al. in [18] makes no counterfactual claims; and (3) causal relations cannot be determined non-experimentally from samples that are a combination of systems with different propensities. However, Glymour in [19] argues that each of these claims is false or exaggerated. For arguments against (1) and (3), we refer the reader to [19]. We focus here only on his arguments concerning (2). Quoting Glymour, claims about what the outcome would be of a hypothetical experiment that has not been done are one form of counterfactual claims. Such claims say that if such and such were to happen, then the result would be thus and so, where such and such has not happened or has not yet happened. (Of course, if the experiment is later done, then the proposition becomes factually true or factually false.) Glymour argues that it is not true that the graphical model framework does not represent or entail any counterfactual claims, and emphasizes that no counterfactual variables are used or needed in the graphical causal model framework. In the potential outcomes framework, if nothing is known about which of many variables are causes of the others, then for each variable, and for each value of the other variables, a new counterfactual variable is required. In practice, that would require an astronomical number of counterfactual variables for even a few actual variables. To summarize, as also confirmed by a recent Nature publication [20], if the theoretical background of the investigated processes is insufficient, graphical causal methods (Granger causality included), which infer causal relations from data rather than from knowledge of mechanisms, are helpful.

2.4. Minimum Message Length Principle

The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968 in the seminal paper [5]. The minimum message length principle is a formal information-theoretic restatement of Occam’s razor: even when models are not equal in their goodness of fit to the observed data, the one generating the shortest overall message is more likely to be correct (where the message consists of a statement of the model, followed by a statement of the data encoded concisely using that model). The MML principle selects the model which most compresses the data (i.e., the one with the “shortest message length”) as the most descriptive for the data. To be able to decompress this representation of the data, the details of the statistical model used to encode the data must also be part of the compressed data string. The calculation of the exact message length is an NP-hard problem; the most widely used, less computationally intensive approximation is the Wallace–Freeman approximation, called MML87 [21]. MML is Bayesian (i.e., it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e., the inference transforms with a re-parametrisation), statistical consistency (i.e., even for very hard problems, MML will converge to any underlying model) and efficiency (i.e., the MML model will converge to any true underlying model about as quickly as is possible). Wallace and Dowe (1999) showed in [22] a formal connection between MML and Kolmogorov complexity, i.e., the length of the shortest computer program that produces the object as output.
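The two-part message idea can be illustrated with a deliberately crude sketch: score a model by the negative log-likelihood of the data under the model (data part) plus a per-parameter statement cost (model part). This is not the MML87 formula, only a toy stand-in with hypothetical names:

```python
import numpy as np

def two_part_length(x, y, degree, sigma=0.5):
    # Crude two-part code for polynomial regression with known noise sigma:
    # data cost  = Gaussian negative log-likelihood of the residuals,
    # model cost = (1/2) log n nats per stated parameter (a BIC-like stand-in).
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    data_cost = 0.5 * n * np.log(2 * np.pi * sigma**2) \
        + np.sum(resid**2) / (2 * sigma**2)
    model_cost = 0.5 * (degree + 1) * np.log(n)
    return data_cost + model_cost
```

On data generated from a quadratic, the total length penalizes both the underfitting low-degree models (through the data part) and the overfitting high-degree models (through the model part).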

3. Method

In this section, we will describe our method in detail. First, in Section 3.1, we will derive a fixed design matrix for HGGM, so that the minimum message length principle can be applied. In Section 3.2, we propose our minimum message length criterion for HGGM. The exact forms of the criterion for various exponential distributions are derived in Section 3.3. Then, we present our two variable selection algorithms and their computational complexity in Section 3.4 and Section 3.5.

3.1. Heterogeneous Graphical Granger Model with Fixed Design Matrix

We can see that the models from Section 2 do not have fixed design matrices. Since the MML principle proposed for generalized linear models in [23] requires a fixed design matrix, it cannot be directly applied to them. In the following, we derive the heterogeneous graphical Granger model (3) with a fixed lag $d$ as an instance of regression in generalized linear models (GLM) with a fixed design matrix.
Consider the full model for $p$ variables $x_i^t$ and an integer lag $d \geq 1$ corresponding to the optimization problem (3). To be able to use the maximum likelihood (ML) estimation over the regression parameters, we reformulate the matrix of lagged time series $X_{Lag}^{t,d}$ from (1) into a fixed design matrix form. Assume $n - d > p\,d$ and denote $x_i = (x_i^{d+1}, x_i^{d+2}, \ldots, x_i^n)$. We construct the $(n-d) \times (d \cdot p)$ design matrix

$$X = \begin{pmatrix} x_1^d & \cdots & x_1^1 & \cdots & x_p^d & \cdots & x_p^1 \\ x_1^{d+1} & \cdots & x_1^2 & \cdots & x_p^{d+1} & \cdots & x_p^2 \\ \vdots & & \vdots & & \vdots & & \vdots \\ x_1^{n-1} & \cdots & x_1^{n-d} & \cdots & x_p^{n-1} & \cdots & x_p^{n-d} \end{pmatrix} \qquad (7)$$

and the $1 \times (d \cdot p)$ vector $\beta_i = (\beta_1^1, \ldots, \beta_1^d, \ldots, \beta_p^1, \ldots, \beta_p^d)$. We can see that the problem

$$x_i \approx \mu_i = \eta_i(X \beta_i^{\top}) \qquad (8)$$

is equivalent to problem (3) in matrix form, where $\mu_i = (\mu_i^{d+1}, \ldots, \mu_i^n)$ and the link function $\eta_i$ operates on each coordinate.
Denote now by $\gamma_i \subseteq \{1, \ldots, p\}$ the subset of indices of regressor variables and by $k_i := |\gamma_i|$ its cardinality. Let $\beta_i := \beta_i(\gamma_i) \in \mathbb{R}^{1 \times (d \cdot k_i)}$ be the vector of unknown regression coefficients with a fixed ordering within the subset $\gamma_i$. For illustration purposes and without loss of generality, we can assume that the first $k_i$ indices out of $p$ belong to $\gamma_i$. Considering only the columns of matrix $X$ in (7) which correspond to $\gamma_i$, we define the $(n-d) \times (d \cdot k_i)$ matrix of lagged vectors with indices from $\gamma_i$ as

$$X_i := X(\gamma_i) = \begin{pmatrix} x_1^d & \cdots & x_1^1 & \cdots & x_{k_i}^d & \cdots & x_{k_i}^1 \\ x_1^{d+1} & \cdots & x_1^2 & \cdots & x_{k_i}^{d+1} & \cdots & x_{k_i}^2 \\ \vdots & & \vdots & & \vdots & & \vdots \\ x_1^{n-1} & \cdots & x_1^{n-d} & \cdots & x_{k_i}^{n-1} & \cdots & x_{k_i}^{n-d} \end{pmatrix} \qquad (9)$$
Problem (8) for explanatory variables with indices from $\gamma_i$ is expressed as

$$x_i \approx \mu_i = E(x_i \mid X_i) = \eta_i(X_i \beta_i^{\top}) \qquad (10)$$

with $\beta_i := \beta_i(\gamma_i)$ a $1 \times (d \cdot k_i)$ vector of unknown coefficients, and $\eta_i$ operates on each coordinate. Wherever it is clear from context, we will write $\beta_i$ instead of $\beta_i(\gamma_i)$ and $X_i$ instead of $X(\gamma_i)$.

3.2. Minimum Message Length Criterion for Heterogeneous Graphical Granger Model

As before, we assume each $x_i^t$, $i = 1, \ldots, p$, $t = d+1, \ldots, n$, to have a density from the exponential family; furthermore, $\mu_i$ is the mean of $x_i$ and $\mathrm{var}(x_i \mid \mu_i, \phi_i) = \phi_i v_i(\mu_i)$, where $\phi_i$ is a dispersion parameter and $v_i$ a variance function dependent only on $\mu_i$. Concretely, it is well known that Poisson regression can still be used in over- or underdispersed settings; however, the standard error for Poisson regression would not be correct in the overdispersed situation. In the Poisson graphical Granger model, this is the case when, for the dispersion of at least one time series, $\phi_i \neq 1$ holds. In the following, we assume that an estimate of $\phi_i$ is given. Denote by $\Gamma$ the set of all subsets of covariates $x_i$, $i = 1, \ldots, p$. Assume now a fixed set $\gamma_i \in \Gamma$ of covariates with size $k_i \leq p$ and the corresponding design matrix $X_i$ from (9). Furthermore, we assume that the targets $x_i^t$ are independent random variables, conditioned on the features given by $X_i$, so that the likelihood function can be factorized into the product $p(x_i \mid \beta_i, X_i, \gamma_i) = \prod_{t=1}^{n-d} p(x_i^t \mid \beta_i, X_i, \gamma_i)$. The log-likelihood function $L_i$ then has the form $L_i := \log p(x_i \mid \beta_i, X_i, \gamma_i) = \sum_{t=1}^{n-d} \log p(x_i^t \mid \beta_i, X_i, \gamma_i)$. Since $X_i$ is highly collinear, to make the ill-posed problem (8) for the coefficients $\beta_i$ a well-posed one, we could use regularization by ridge regression for GLM (see e.g., [24]). Ridge regression requires an initial estimate of $\beta_i$, which can be set as the maximum likelihood estimator of (10) obtained by the iteratively reweighted least squares algorithm (IRLS). For a fixed $\lambda_i > 0$, the ridge estimates of the coefficients satisfy

$$\hat{\beta}_{i,\lambda_i} = \arg\min_{\beta_i \in \mathbb{R}^{1 \times d k_i}} \big\{-L_i + \lambda_i \beta_i \Sigma_i \beta_i^{\top}\big\}. \qquad (11)$$
In our paper, however, we will not use GLM ridge regression in the form (11), but we apply the principle of minimum description length. Ridge regression in the minimum description length framework is equivalent to allowing the prior distribution to depend on a hyperparameter (= the ridge regularization parameter). To compute the message length of HGGM using the MML87 approximation, we need the negative log-likelihood function, a prior distribution over the parameters and an appropriate Fisher information matrix, similarly as proposed in [23], where this is done for a general GLM regression. Moreover, [23] proposed the corrected form of the Fisher information matrix for a GLM regression with a ridge penalty. In our work, we will use this form of ridge regression and apply it to the heterogeneous graphical Granger model. In the following, we construct the MML code for every subset of covariates in HGGM. The derivation of the criterion can be found in Appendix A.
The MML criterion for inference in HGGM. Assume $x_i$, $i = 1, \ldots, p$, are given time series of length $n$ with distributions from the exponential family, and for each of them, an estimate $\hat{\phi}_i$ of the dispersion parameter is given. Consider $\hat{\beta}_i$ an initial solution of (8) with a fixed $d \geq 1$, achieved as the maximum likelihood estimate. Then
(i) the causal graph of the heterogeneous graphical Granger problem (8) can be inferred from the solutions of $p$ variable selection problems, where for each $i = 1, \ldots, p$, the set $\hat{\gamma}_i$ of Granger-causal variables for $x_i$ is found;
(ii) the estimated set $\hat{\gamma}_i$ satisfies

$$\hat{\gamma}_i = \arg\min_{\gamma_i \in \Gamma} \{HMML(x_i, X_i, \gamma_i)\} = \arg\min_{\gamma_i \in \Gamma} \{I(x_i, \hat{\beta}_i, \hat{\phi}_i, \hat{\lambda}_i, X_i, \gamma_i) + I(\gamma_i)\} \qquad (12)$$

where $I(x_i, \hat{\beta}_i, \hat{\phi}_i, \hat{\lambda}_i, X_i, \gamma_i) = \min_{\lambda_i \in \mathbb{R}^{+}} \{MML(x_i, \hat{\beta}_i, \hat{\phi}_i, \lambda_i, X_i, \gamma_i)\}$ and $MML(x_i, \hat{\beta}_i, \hat{\phi}_i, \lambda_i, X_i, \gamma_i)$ is the minimum message length code of the set $\gamma_i$. It can be expressed as

$$MML(x_i, \hat{\beta}_i, \hat{\phi}_i, \lambda_i, X_i, \gamma_i) = -L_i + \frac{1}{2} \log \det\big(X_i^{\top} W_i X_i + \lambda_i \Sigma_i\big) + \frac{k_i}{2} \log\Big(\frac{2\pi}{\lambda_i}\Big) + \frac{\lambda_i}{2 \hat{\phi}_i}\, \hat{\beta}_i \Sigma_i \hat{\beta}_i^{\top} + \frac{1}{2} \log(n-d) - \frac{k_i+1}{2} \log(2\pi) + \frac{1}{2} \log\big((k_i+1)\pi\big) \qquad (13)$$

where $|\hat{\gamma}_i| = k_i$, $\Sigma_i$ is the identity matrix of size $d k_i \times d k_i$, $I(\gamma_i) = \log \binom{p}{k_i} + \log(p+1)$, $L_i$ is the log-likelihood function depending on the density function of $x_i$, and $W_i$ is a diagonal matrix depending on the link function $\eta_i$.
Remark 2. Ref. [23] compared the $AIC_c$ criterion with the MML code for generalized linear models. We also constructed the $AIC_c$ criterion for HGGM. This criterion, however, requires the computation of the pseudoinverse of a matrix product involving the matrices $X_i$. Since the $X_i$ are highly collinear, these matrix products had very high condition numbers in our experiments. Consequently, the application of $AIC_c$ to HGGM gave spurious results, and therefore we do not report them in our paper.
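As a concrete illustration of how criterion (13) is evaluated, here is a schematic Python scoring function for the Gaussian case ($W_i$ and $\Sigma_i$ identity matrices, constant terms dropped). The function name and the $\lambda$ grid are our own; the paper's Matlab implementation remains the reference:

```python
import numpy as np

def gaussian_mml_score(y, X, lam, phi):
    # Schematic score following the structure of criterion (13), Gaussian case:
    # negative log-likelihood + half log-determinant of the ridge-corrected
    # Fisher matrix + prior cost of the ridge-estimated coefficients.
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)  # ridge estimate
    resid = y - X @ beta
    neg_loglik = 0.5 * n * np.log(2 * np.pi * phi) + resid @ resid / (2 * phi)
    fisher = 0.5 * np.linalg.slogdet(X.T @ X + lam * np.eye(k))[1]
    # with Sigma_i the identity, beta Sigma beta' reduces to beta @ beta
    prior = (lam / (2 * phi)) * (beta @ beta) + 0.5 * k * np.log(2 * np.pi / lam)
    return neg_loglik + fisher + prior
```

Minimizing over a small grid of $\lambda$ values, as Algorithm 1 below does over the set $H$, a correct single-regressor subset receives a smaller score than a wrong one.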

3.3. Log-Likelihood $L_i$, Matrix $W_i$ and Dispersion $\phi_i$ for $x_i$ with Various Exponential Distributions

In this section, we present the form of the log-likelihood function and of the matrix $W_i$ for Gaussian, binomial, Poisson, gamma and inverse-Gaussian distributed time series $x_i$. The derivation for each case can be found in Appendix B. In each case, $\mu_i = \eta_i(X_i \beta_i^{\top})$ holds for the link function as in (10). By $[X_i \beta_i^{\top}]_t$, we denote the $t$-th coordinate of the vector $X_i \beta_i^{\top}$.
Case $x_i$ is Gaussian. This is the case when $x_i$ is an independent Gaussian random variable and the link function $\eta_i$ is the identity. Assume $\hat{\phi}_i = \sigma_i^2$ to be the variance of the Gaussian random variable. We assume that in model (10), $x_i$ follows the Gaussian distribution with the density function

$$p(x_i \mid \hat{\beta}_i, \sigma_i^2, X_i, \gamma_i) = \prod_{t=d+1}^{n} p(x_i^t \mid \hat{\beta}_i, \sigma_i^2, X_i, \gamma_i) = \Big(\frac{1}{2\pi\sigma_i^2}\Big)^{(n-d)/2} \exp\Big[-\frac{1}{2\sigma_i^2} \sum_{t=d+1}^{n} \big(x_i^t - [X_i \hat{\beta}_i^{\top}]_t\big)^2\Big].$$

Then

$$L_i = \log p(x_i \mid \hat{\beta}_i, \sigma_i^2, X_i, \gamma_i) = -\frac{n-d}{2} \log(2\pi\sigma_i^2) - \frac{1}{2\sigma_i^2} \sum_{t=d+1}^{n} \big(x_i^t - [X_i \hat{\beta}_i^{\top}]_t\big)^2$$

and $W_i := I_{n-d,n-d}$ is the identity matrix of dimension $(n-d) \times (n-d)$.
Case $x_i$ is binomial. This is the case when $x_i$ is an independent Bernoulli random variable which can achieve only two different values. For the link function, it holds that $\eta_i = \log\big(\frac{\mu_i}{1-\mu_i}\big)$. Without loss of generality, we consider $\hat{\phi}_i = 1$ and the density function

$$p(x_i \mid \hat{\beta}_i, X_i, \gamma_i) = \prod_{t=d+1}^{n} p(x_i^t \mid \hat{\beta}_i, X_i, \gamma_i) = \prod_{t=d+1}^{n} \big([X_i \hat{\beta}_i^{\top}]_t\big)^{x_i^t} \big(1 - [X_i \hat{\beta}_i^{\top}]_t\big)^{1-x_i^t}.$$

Then

$$L_i = \log p(x_i \mid \hat{\beta}_i, X_i, \gamma_i) = \sum_{t=d+1}^{n} \Big(x_i^t [X_i \hat{\beta}_i^{\top}]_t - \log\big(1 + \exp[X_i \hat{\beta}_i^{\top}]_t\big)\Big)$$

and

$$W_i := \mathrm{diag}\Big(\frac{\exp([X_i \hat{\beta}_i^{\top}]_1)}{(1 + \exp([X_i \hat{\beta}_i^{\top}]_1))^2}, \ldots, \frac{\exp([X_i \hat{\beta}_i^{\top}]_{n-d})}{(1 + \exp([X_i \hat{\beta}_i^{\top}]_{n-d}))^2}\Big).$$

In the case that we cannot assume an accurate fit to one of the two values, for robust estimation we can consider the sandwich estimate of the covariance matrix of $\hat{\beta}_i$ with

$$W_i = \mathrm{diag}\Big(\Big[x_i^1 - \frac{\exp([X_i \hat{\beta}_i^{\top}]_1)}{(1 + \exp([X_i \hat{\beta}_i^{\top}]_1))^2}\Big]^2, \ldots, \Big[x_i^{n-d} - \frac{\exp([X_i \hat{\beta}_i^{\top}]_{n-d})}{(1 + \exp([X_i \hat{\beta}_i^{\top}]_{n-d}))^2}\Big]^2\Big).$$
Case $x_i$ is Poisson. If $x_i$ is an independent Poisson random variable with link function $\eta_i^t = \log(\mu_i^t) = \log([X_i \hat{\beta}_i^{\top}]_t)$, the density is

$$p(x_i \mid \hat{\beta}_i, X_i, \gamma_i) = \prod_{t=d+1}^{n} \frac{\exp\big([X_i \hat{\beta}_i^{\top}]_t\big)^{x_i^t} \exp\big(-\exp([X_i \hat{\beta}_i^{\top}]_t)\big)}{x_i^t!}.$$

Then

$$L_i = \log p(x_i \mid \hat{\beta}_i, X_i, \gamma_i) = \sum_{t=d+1}^{n} \Big(x_i^t [X_i \hat{\beta}_i^{\top}]_t - \exp\big([X_i \hat{\beta}_i^{\top}]_t\big) - \log(x_i^t!)\Big)$$

and the diagonal matrix is

$$W_i := \mathrm{diag}\big(\exp([X_i \hat{\beta}_i^{\top}]_1), \ldots, \exp([X_i \hat{\beta}_i^{\top}]_{n-d})\big)$$

for Poisson $x_i$ with $\hat{\phi}_i = 1$, and

$$W_i := \mathrm{diag}\big([x_i^{d+1} - \exp([X_i \hat{\beta}_i^{\top}]_1)]^2, \ldots, [x_i^{d+(n-d)} - \exp([X_i \hat{\beta}_i^{\top}]_{n-d})]^2\big)$$

for over- or underdispersed Poisson $x_i$, i.e., when $\hat{\phi}_i \neq 1$ and positive, where $t = 1, \ldots, n-d$.
Case $x_i$ is gamma. If $x_i$ is an independent gamma random variable, we consider the inverse of the shape parameter $\kappa_i$, for each $t$ the rate parameter $\kappa_i \mu_i^t$, and for the link function it holds that $\mu_i^t = \frac{1}{\eta_i^t} = \frac{1}{[X_i \beta_i^{\top}]_t}$. For the parameters $a_i, b_i$ of the gamma function, we take $a_i = \frac{1}{\kappa_i}$, $b_i^t = \kappa_i \hat{\mu}_i^t$ and assume for the dispersion $\hat{\phi}_i = \kappa_i$. Then, we have the density function

$$p\Big(x_i \mid \hat{\beta}_i, \tfrac{1}{\kappa_i}, \kappa_i \hat{\mu}_i, X_i, \gamma_i\Big) = \prod_{t=d+1}^{n} \frac{(x_i^t)^{\frac{1}{\kappa_i}-1} \exp\big(-\frac{x_i^t}{\kappa_i \mu_i^t}\big)}{(\kappa_i \mu_i^t)^{\frac{1}{\kappa_i}}\, \Gamma\big(\frac{1}{\kappa_i}\big)}$$

and the log-likelihood

$$L_i = \log p\Big(x_i \mid \hat{\beta}_i, \tfrac{1}{\kappa_i}, \kappa_i \hat{\mu}_i, X_i, \gamma_i\Big) = \sum_{t=d+1}^{n} \Big(\big(\tfrac{1}{\kappa_i}-1\big) \log x_i^t - \frac{x_i^t}{\kappa_i \hat{\mu}_i^t} - \tfrac{1}{\kappa_i} \log(\kappa_i \hat{\mu}_i^t) - \log \Gamma\big(\tfrac{1}{\kappa_i}\big)\Big)$$

and the diagonal matrix

$$W_i := \mathrm{diag}\big((\hat{\mu}_i^1)^2, \ldots, (\hat{\mu}_i^{n-d})^2\big) = \mathrm{diag}\Big(\frac{1}{([X_i \hat{\beta}_i^{\top}]_1)^2}, \ldots, \frac{1}{([X_i \hat{\beta}_i^{\top}]_{n-d})^2}\Big).$$
Case $x_i$ is inverse-Gaussian. If $x_i$ is an independent inverse-Gaussian random variable, we consider the inverse of the shape parameter $\xi_i$ and the link function $\eta_i^t = \log(\mu_i^t) = \log([X_i \hat{\beta}_i^{\top}]_t)$. Assume for the dispersion $\hat{\phi}_i = \xi_i$. Then we have the density function

$$p(x_i \mid \hat{\beta}_i, \xi_i, \hat{\mu}_i, X_i, \gamma_i) = \prod_{t=d+1}^{n} \frac{1}{\sqrt{2\pi \xi_i (x_i^t)^3}} \exp\Big[-\frac{1}{2\xi_i} \frac{(x_i^t - \hat{\mu}_i^t)^2}{(\hat{\mu}_i^t)^2 x_i^t}\Big]$$

and the log-likelihood

$$L_i = \log p(x_i \mid \hat{\beta}_i, \xi_i, \hat{\mu}_i, X_i, \gamma_i) = \sum_{t=d+1}^{n} \Big(-\frac{1}{2\xi_i} \frac{(x_i^t - \hat{\mu}_i^t)^2}{(\hat{\mu}_i^t)^2 x_i^t} - \frac{1}{2} \log(2\pi \xi_i) - \frac{3}{2} \log(x_i^t)\Big)$$

and the diagonal matrix

$$W_i := \mathrm{diag}\Big(\frac{1}{\hat{\mu}_i^1}, \ldots, \frac{1}{\hat{\mu}_i^{n-d}}\Big) = \mathrm{diag}\Big(\frac{1}{[X_i \hat{\beta}_i^{\top}]_1}, \ldots, \frac{1}{[X_i \hat{\beta}_i^{\top}]_{n-d}}\Big).$$
One could similarly express $L_i$ and $W_i$ for other common exponential distributions applied in GLMs.
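For instance, the Poisson $L_i$ and $W_i$ above can be computed directly; a small Python sketch with hypothetical helper names:

```python
import numpy as np
from math import lgamma

def poisson_L_and_W(y, X, beta):
    # Log-likelihood L_i and diagonal weight matrix W_i for the Poisson case:
    # L_i = sum_t ( y_t * eta_t - exp(eta_t) - log(y_t!) ),  W_i = diag(exp(eta_t)),
    # where eta_t = [X beta']_t and log(y!) = lgamma(y + 1).
    eta = X @ beta
    L = float(np.sum(y * eta - np.exp(eta)) - sum(lgamma(v + 1) for v in y))
    W = np.diag(np.exp(eta))
    return L, W
```

With $\beta = 0$ every fitted mean is $\exp(0) = 1$, so $W_i$ reduces to the identity matrix and $L_i$ collapses to $\sum_t (-1 - \log(y_t!))$.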

3.4. Variable Selection by MML in Heterogeneous Graphical Granger Model

For all considered cases of exponential distributions of $x_i$, we define the family of models $M(\gamma_i) := \{p(x_i \mid \hat{\beta}_i, \hat{\phi}_i, X_i, \gamma_i), \gamma_i \in \Gamma\}$ with the corresponding exponential density $p(x_i \mid \hat{\beta}_i, \hat{\phi}_i, X_i, \gamma_i)$. First, we present in Algorithm 1 the procedure which, for each $x_i$, computes the MML code for a set $\gamma_i \in \Gamma$. Then we present Algorithm 2 for the computation of $\hat{\gamma}_i$.
Algorithm 1 MML Code for $\gamma_i$
  • Input: $\gamma_i \in \Gamma$, $d \geq 1$, $|\gamma_i| = k_i$, series is the matrix of $x_i^t$, $\hat{\phi}_i$ the dispersion parameter, $i = 1, \ldots, p$, $t = 1, \ldots, n-d$, $\Sigma_i$ an identity matrix of size $d k_i \times d k_i$, $H$ a set of positive numbers; $I(\gamma_i) = \log \binom{p}{k_i} + \log(p+1)$.
  • Output: For each $i$, the minimum of $HMML(x_i, X_i, \gamma_i)$ over $H$ is found;
  • for all $x_i$ do
  •  // Construct the $d$-lagged matrix $X_i$ with time series with indices from $\gamma_i$.
  •  // Compute matrix $W_i$.
  • for all $\lambda_i \in H$ do
  •   // Compute $L_i$.
  •   // Find the initial estimates of $\hat{\beta}_i$.
  •   // Compute $MML(x_i, \hat{\beta}_i, \hat{\phi}_i, \lambda_i, X_i, \gamma_i)$ from (13).
  • end for // to $\lambda_i$
  •  // Compute $I(x_i, \hat{\beta}_i, \hat{\phi}_i, \hat{\lambda}_i, X_i, \gamma_i) = \min_{\lambda_i \in H} MML(x_i, \hat{\beta}_i, \hat{\phi}_i, \lambda_i, X_i, \gamma_i)$.
  •  // $HMML(x_i, X_i, \gamma_i) := I(x_i, \hat{\beta}_i, \hat{\phi}_i, \hat{\lambda}_i, X_i, \gamma_i) + I(\gamma_i)$.
  • end for // to $x_i$
  • return $HMML(x_i, X_i, \gamma_i)$ for each $i$.
In general, the selection of the best structure $\gamma_i$ amounts to evaluating the value of $HMML(\gamma_i)$ for all $\gamma_i \in \Gamma$, i.e., for all $2^p$ possible subsets, and then picking the subset at which the minimum of the function is achieved.
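This enumeration over all $2^p$ subsets can be sketched generically in Python; `score` stands in for the HMML value of a subset, and visiting smaller subsets first with strict-improvement updates realizes a parsimony preference among equal minima (hypothetical helper, not the paper's Matlab code):

```python
from itertools import combinations

def exhaustive_select(p, score):
    # Enumerate all 2^p subsets gamma of {0, ..., p-1}, score each, and
    # return the minimizer; smaller subsets are visited first and only a
    # strict improvement replaces the incumbent, so among equal minima the
    # subset with the fewest members wins.
    best, best_val = (), float("inf")
    for k in range(p + 1):                # smaller subsets first
        for gamma in combinations(range(p), k):
            v = score(gamma)
            if v < best_val - 1e-12:      # strict improvement only
                best, best_val = gamma, v
    return best, best_val
```

For example, with a toy score counting the mismatch against a target subset $\{1, 3\}$ (0-based `{0, 2}`), the search returns exactly that subset with score 0.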

3.5. Search Algorithms

We find the best structure of $\gamma_i$ with the MML code in two ways. The first is the exhaustive search approach exHMML; the second minimizes the HMML by a genetic-algorithm-type procedure called HMMLGA, which we introduce in the following. Since HMML in (12) is a function with multiple local minima, the achievement of the global minimum by these two approaches is not, in general, guaranteed. In [12], a similar genetic algorithm, MMLGA, was proposed for the Poisson GGM. In this paper, we propose a modification of it which is more appropriate for the objective functions that we have here.
The idea of HMMLGA is as follows. Consider an arbitrary $\gamma_i \in \Gamma$ with size $k_i$ for a fixed $i$. Define a Boolean vector $Q_i$ of length $p$ corresponding to a given $\gamma_i$, so that it has ones at the positions of the indices of covariates from $\gamma_i$ and zeros otherwise. Define $HMML(Q_i) := HMML(\gamma_i)$, where $HMML(\gamma_i)$ is from (12). The genetic algorithm executes genetic operations on populations of $Q_i$. In the first step, a population of size $m$ ($m$ an even integer) is generated randomly from the set of all $2^p$ binary strings (individuals) of length $p$. Then, we select the $m/2$ individuals in the current population with the lowest value of (12) as the elite subpopulation of parents of the next population. For a predefined number of generated populations $n_g$, the crossover operation on pairs of parents and the mutation operation on a single parent are executed on the elite to create the rest of the new population. A mutation corresponds to a random change in $Q_i$, and a crossover combines the vector entries of a pair of parents. The position of mutation is selected randomly for each individual, in contrast to MMLGA, where the position was the same for all individuals and given as an input parameter. Similarly, the position of crossover in HMMLGA is selected randomly for each pair of individuals. After each run of these two operations on a current population, the current population is replaced with the children with the lowest value of (12) to form the next generation. The algorithm stops after the number $n_g$ of population generations is achieved. Since HMML in (12) has multiple local minima, in contrast to MMLGA, we selected in HMMLGA the following strategy: we do not take the first $Q_i$ when the HMML values are sorted in ascending order but, based on the parsimony principle, we take, among all $Q_i$ with the minimum HMML value, the one with the minimum number of ones.
Concerning the exhaustive search approach exHMML, we similarly do not take the first $Q_i$ when the HMML code is sorted in ascending order; here, too, we take, among all $Q_i$ with the minimum value of HMML, the one with the minimum number of ones. The algorithm HMMLGA is summarized in Algorithm 2.
Algorithm 2 HMMLGA
  • Input: Γ, d ≥ 1, p, n_g, m an even integer;
    series is the matrix of x_i^t, i = 1, …, p, t = 1, …, n;
  • Output: A d j := adjacency matrix of the output causal graph;
  • // For every x_i, the Q_i with minimum value of (12) is found;
  • for all x i do
  • Create initial population { Q_i^j, j = 1, …, m } at random; compute
    HMML(Q_i^j) := I(x_i, β̂_i, ϕ̂_i, λ̂_i, X_i, Q_i^j) + log C(p, k_i^j) + log(p + 1) for each j = 1, …, m, where
    k_i^j is the number of ones in Q_i^j; v := 1;
  • while v ≤ n_g do
  •   u:=1;
  •   while u ≤ m do
  •    Sort HMML(Q_i^j) in ascending order and create the elite population; by crossover of Q_i^j and Q_i^r, r ≠ j,
       at a random crossing position, create children and add them to the elite; compute HMML(Q_i^j)
       for each j; mutate a single parent Q_i^j at a random position; compute HMML(Q_i^j) for each j;
       add the children with minimum HMML(Q_i^j) until the new population is filled;
  •    u := u + 1;
  •   end while// to u
  •   v := v + 1;
  • end while// to v
  • end for// to x i
    The i-th row of Adj: Adj_i := the Q_i with minimum value of (12) such that |Q_i| is minimal.
  • return ( A d j )
Our code in Matlab is publicly available at: https://t1p.de/26f3.
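The population loop of Algorithm 2 can be sketched in a few lines. The following Python sketch (the paper's implementation is in Matlab) uses a hypothetical stand-in for the HMML objective (12), since the real criterion depends on the data, the ML estimates and the code length I(γ_i); it keeps the elite selection, random crossover/mutation positions, and the parsimony tie-break described above.

```python
import random

def hmml(q):
    # Hypothetical stand-in for the HMML criterion (12); the real objective
    # depends on the data, the ML estimates and the code length I(gamma_i).
    return sum(q) + 0.1 * sum(j * b for j, b in enumerate(q))

def hmmlga(p, m, n_g, seed=0):
    """Genetic search over Boolean vectors Q_i of length p (m must be even)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(p)] for _ in range(m)]
    for _ in range(n_g):
        pop.sort(key=hmml)          # elite: the m/2 lowest HMML values
        elite = pop[: m // 2]
        children = []
        while len(elite) + len(children) < m:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, p)      # random crossover position
            child = a[:cut] + b[cut:]
            pos = rng.randrange(p)         # random mutation position
            child[pos] = 1 - child[pos]    # (the paper mutates a single parent)
            children.append(child)
        pop = elite + children
    best = min(hmml(q) for q in pop)
    # Parsimony tie-break: among minimizers, take the Q_i with fewest ones.
    return min((q for q in pop if hmml(q) == best), key=sum)

print(hmmlga(p=5, m=4, n_g=10))
```

The elite is carried over unchanged between generations, so the best value found can never deteriorate, mirroring the replacement step in Algorithm 2.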

Computational Complexity of HMMLGA and of exHMML

We used the Matlab function fminsearch for the computation of HMML(x_i, β̂_i, λ̂_i, X_i, γ_i). It is well known that the computational complexity of a genetic algorithm is bounded above by the order of the product of the size of an individual, the size of each population, the number of generated populations and the complexity of the function to be minimized. Therefore, an upper bound on the computational complexity of HMMLGA for p time series, individual size p, population size m and n_g population generations is O(p m n_g) × O(fminsearch) × p, where O(fminsearch) can also be estimated. The most expensive steps in fminsearch are the computation of the Hessian matrix, which is the same as for the Fisher information matrix (our matrix W_i), and the computation of the determinant. The computational complexity of the Hessian for fixed i and an (n − d) × (n − d) matrix is O((n − d)(n − d + 1)/2). An upper bound on the complexity of the determinant in (13) is O((pd)^3) (for a proof see, e.g., [25]). Denote M = max{ (pd)^3, (n − d)(n − d + 1)/2 }. Since we have p optimization functions, our upper bound on the computational complexity of HMMLGA is then O(p^2 m n_g M). The computational complexity of exHMML is p 2^p O(fminsearch) = p 2^p M.
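The bounds above can be made concrete with a small helper. This Python sketch only evaluates the counting argument (the quantity M and the stated products), under the assumption, as in the text, that the Hessian and determinant costs dominate fminsearch:

```python
def cost_M(p, d, n):
    # M = max{(p*d)^3, (n - d)(n - d + 1)/2}: determinant vs. Hessian cost.
    return max((p * d) ** 3, (n - d) * (n - d + 1) // 2)

def hmmlga_cost_bound(p, d, n, m, n_g):
    """Upper bound O(p^2 * m * n_g * M) on the cost of HMMLGA."""
    return p ** 2 * m * n_g * cost_M(p, d, n)

def exhmml_cost_bound(p, d, n):
    """Upper bound p * 2^p * M on the cost of the exhaustive search exHMML."""
    return p * 2 ** p * cost_M(p, d, n)

# For small p the exhaustive bound is tolerable; it grows as 2^p with p.
print(hmmlga_cost_bound(5, 3, 100, 20, 50), exhmml_cost_bound(5, 3, 100))
```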

4. Related Work

In this section, we discuss related work on the application of two description-length-based compression schemes to generalized linear models, then related work on these compression principles applied to causal inference in graphical models, and finally other papers on causal inference in graphical models for non-Gaussian time series.
Minimum description length (MDL) is another principle based on compression. As with MML, by viewing statistical modeling as a means of generating descriptions of observed data, the MDL framework (Rissanen [26], Barron et al. [27], and Hansen and Yu [28]) discriminates between competing model classes based on the complexity of each description. The minimum description length principle is based on the idea that one chooses the model that gives the shortest description of the data. Methods based on MML and MDL appear mostly equivalent, but there are some differences, especially in interpretation. MML is a Bayesian approach: it assumes that the data-generating process has a given prior distribution; MDL avoids assumptions about the data-generating process. Both methods make use of two-part codes: the first part represents the information one is trying to learn, such as the index of a model class (model selection) or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part.
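The two-part code idea can be illustrated with a toy calculation in Python: the first part names the model (here, a size-k subset of p covariates under the uniform prior over subset sizes used later in (12)), and the second part is the data cost, taken here as a hypothetical negative log-likelihood in bits:

```python
from math import comb, log2

def two_part_length(p, k, neg_log_lik_bits):
    """Toy two-part code length in bits: the first part names the model
    (which of the C(p, k) subsets of size k, plus the size k itself, with
    all p + 1 sizes treated as equally likely a priori); the second part
    encodes the data given the model."""
    model_bits = log2(p + 1) + log2(comb(p, k))
    return model_bits + neg_log_lik_bits

# A larger subset pays more in the first part, so it is preferred only
# if it compresses the data (second part) by more than that surcharge.
print(two_part_length(10, 1, 100.0), two_part_length(10, 5, 100.0))
```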
Hansen and Yu [29] derived objective functions for one-dimensional GLM regression by the minimum description length principle. The extension to the multi-dimensional case is, however, not straightforward. Schmidt and Makalic [23] used MML87 to derive the MML code of a multivariate GLM ridge regression. Since these works were not designed for time series and do not consider any lag, the mentioned codes cannot be directly used for Granger models.
Marx and Vreeken [30,31] and Budhathoki and Vreeken [32] applied the MDL principle to Granger causal inference. The inference in these papers is, however, done for bivariate Granger causality, and the extension to graphical Granger methods is not straightforward. Hlaváčková-Schindler and Plant [33] applied both the MML and the MDL principle to inference in graphical Granger models for Gaussian time series. Inference in graphical Granger models for Poisson-distributed data using the MML principle was done by the same authors in [12]. To the best of our knowledge, no papers on compression criteria for the heterogeneous graphical Granger model have been published yet.
Among the work on causal inference for time series, Kim et al. [7] proposed the statistical framework Granger causality (SFGC), which can operate on point processes, including neural-spike trains. The framework uses multiple statistical hypothesis testing: a pair-wise hypothesis test is applied to each possible connection among all time series, and the false discovery rate (FDR) is controlled. The method can also be used for time series from the exponential family.
For a fair comparison with our method, we selected causal inference methods that are designed for p ≥ 3 non-Gaussian processes. In our experiments, we used SFGC as a comparison method; as another comparison method, we selected LINGAM from Shimizu et al. [6], which estimates causal structure in Bayesian networks among non-Gaussian time series using structural equation models and independent component analysis. Finally, we used HGGM with the adaptive Lasso penalization method, as introduced in [1] and described in Section 2.2. The experiments reported in the papers introducing the comparison methods were done only in scenarios where the number of time observations is several orders of magnitude greater than the number of time series.

5. Experiments

We performed experiments with HMMLGA and exHMML on processes with exponential-family distributions of the types given in Section 3.3. We used the methods HGGM [1], LINGAM [6] and SFGC [7] for comparison. To assess the similarity between the target and output causal graphs in the synthetic experiments, we used for all methods the commonly applied F-measure, which takes both precision and recall into account.
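For reference, the F-measure between a target and an output causal graph can be computed directly from their adjacency matrices; a minimal Python sketch counting directed edges:

```python
def f_measure(target, output):
    """F-measure between target and output adjacency matrices
    (lists of lists with 0/1 entries), counting directed edges."""
    tp = sum(t and o
             for row_t, row_o in zip(target, output)
             for t, o in zip(row_t, row_o))
    if tp == 0:
        return 0.0
    precision = tp / sum(map(sum, output))   # tp / predicted edges
    recall = tp / sum(map(sum, target))      # tp / true edges
    return 2 * precision * recall / (precision + recall)
```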

5.1. Implementation and Parameter Setting

The comparison method HGGM uses the Matlab package penalized from [34] with the adaptive Lasso penalty. The algorithm in this package employs the Fisher scoring algorithm to estimate the regression coefficients. As recommended by the author of penalized in [34] and employed in [1], we used the adaptive Lasso with λ_max = 5, applying cross-validation and taking the best result with respect to the F-measure from the interval (0, λ_max]. We also followed the recommendation of the authors of LINGAM in [6] and used threshold = 0.05 and n/2 boots, where n is the length of the time series. In the method SFGC, we used the setting recommended by the authors, an FDR significance level of 0.05.
In HMMLGA and exHMML, the initial estimates of β_i were obtained by the iteratively re-weighted least squares procedure implemented in the Matlab function glmfit; from the same function, we also obtained the estimates of the dispersion parameters of the time series. (Computing the initial estimates of β_i by the IRLS procedure in the function penalized with the ridge penalty gave poor results in the experiments.) In the case of the gamma distribution, we obtained the estimates of the parameters κ_i by statistical fitting, specifically by the Matlab function gamfit. The minimization over λ_i was done by the function fminsearch, with the set H from Algorithm 1 defined as the positive numbers in the interval [0.1, 1000].
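As an illustration of the minimization over λ_i, the following Python sketch replaces Matlab's fminsearch with a simple log-spaced grid search over the interval [0.1, 1000]; the one-dimensional objective here is hypothetical:

```python
from math import exp, log

def minimize_over_lambda(objective, lo=0.1, hi=1000.0, n_grid=200):
    """Stand-in for the fminsearch step: evaluate the objective on a
    log-spaced grid over H = [lo, hi] and return the best lambda."""
    grid = [exp(log(lo) + i * (log(hi) - log(lo)) / (n_grid - 1))
            for i in range(n_grid)]
    return min(grid, key=objective)

# Hypothetical unimodal objective with its minimum near lambda = 10.
best = minimize_over_lambda(lambda lam: (log(lam) - log(10.0)) ** 2)
```

A log-spaced grid is a natural choice here because the search interval spans four orders of magnitude.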

5.2. Synthetically Generated Processes

To be able to evaluate the performance of HMML and to compare it with other methods, the ground truth, i.e., the target causal graph, should be known in the experiments. In this series of experiments, we examined randomly generated processes with exponential distributions of Gaussian and gamma types from Section 3.3, together with correspondingly generated target causal graphs. The performance of all tested algorithms depends on various parameters, including the number of time series (features), the number of causal relations in the Granger causal graph (dependencies), the length of the time series, and finally the lag parameter. Concerning the calculation of an appropriate lag for each time series, it can theoretically be done by AIC or BIC. However, the calculation of AIC and BIC assumes that the degrees of freedom are equal to the number of nonzero parameters, which is only known to be true for the Lasso penalty [35], but not for the adaptive Lasso. In our experiments, we followed the recommendation of [1] on how to select the lag of the time series in HGGM. It was observed that varying the lag parameter from 3 to 50 influenced neither the performance of HGGM nor that of SFGC significantly. Based on that, we considered lags 3 and 4 in our experiments.
We examined causal graphs with mixed types of time series for p = 5 and p = 8 features. For each case, we considered causal graphs with higher edge density (dense case) and lower edge density (sparse case), which corresponds to the parameter “dependency” in the code, where the full graph on p time series has p(p − 1) possible directed edges. Since we concentrate on short time series in this paper, the length of the generated time series varied from 100 to 1000.

5.2.1. Causal Networks with 5 and 8 Time Series

We considered 5 time series with 2 gamma, 2 Gaussian and 1 Poisson distribution, generated randomly together with the corresponding network. For the denser case with 5 time series, we randomly generated graphs with 18 edges, and for the sparser case, random graphs with 8 edges. The results of our experiments on causal graphs with 5 features (p = 5) are presented in Table 1. Each value in Table 1 represents the mean of the F-measures over 10 random generations of causal graphs for length n and lag d. For dependency 8, we took strength = 0.9; for dependency 18, strength = 0.5 of the causal connections.
One can see from Table 1 that HMMLGA and exHMML gave considerably higher precision in terms of F-measure than the three comparison methods, for all considered n up to 1000.
In the second network, we considered 8 time series with 7 gamma and 1 Gaussian distribution, generated randomly together with a corresponding network. For the denser case, we randomly generated graphs with 52 edges, and for the sparser case, random graphs with 15 edges. The results are presented in Table 2. Each value in Table 2 represents the mean of the F-measures over 10 random generations of causal graphs for length n and lag d. For the graph with 52 dependencies, we had strength = 0.3; for the graph with 15 dependencies, strength = 0.9. Similarly to the experiments with p = 5, one can see in Table 2 for p = 8 that both exHMML and HMMLGA gave a considerably higher F-measure than the comparison methods for all considered n up to 1000. The pair-wise hypothesis test used in SFGC for each possible connection among all time series showed its efficiency for long time series in [1,7]; however, in all of our short time series scenarios, it was outperformed by LINGAM. The performance of the method HGGM, efficient in long-term scenarios [1], was comparable to LINGAM for 5 time series; for 8 time series, HGGM performed the weakest of all methods.

5.2.2. Performance of exHMML and MMLGA

The strategy of selecting the set γ_i with minimum HMML and the minimum number of regressors is applied in both methods. In exHMML, all 2^p possible values of HMML are sorted in ascending order. Among those having the same minimum value, the one with the minimum number of ones (regressors), taken as the last such entry in the list, is selected. Similarly, this strategy is applied iteratively in HMMLGA on populations of individuals of size m < 2^p. This strategy is an improvement over MMLGA [12], where the first γ_i in the list with the minimum MML function value was selected. However, since the function HMML has multiple local minima, convergence to the global minimum cannot be guaranteed for either exHMML or HMMLGA. The different performance of exHMML and HMMLGA for various p and various causal graph densities is a consequence of the nature of the objective function (12) to be minimized, which has multiple local minima. The described implementations of the exhaustive search and of the genetic algorithm can therefore, without any prior knowledge of the ground truth causal graph, perform differently. However, as shown in the experiments, the local minima achieved by both methods are much closer to the global one than in the case of the three rival methods.
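The exhaustive-search selection rule can be sketched as follows (Python; the hmml argument is a stand-in for the criterion (12)):

```python
from itertools import product

def exhmml_select(hmml, p):
    """Exhaustive search: score all 2^p Boolean vectors and, among those
    attaining the minimum HMML value, return one with the fewest ones."""
    scored = [(hmml(q), q) for q in product((0, 1), repeat=p)]
    best_val = min(s for s, _ in scored)
    return min((q for s, q in scored if s == best_val), key=sum)

# Toy criterion: any vector with q[0] = 1 is a minimizer; the parsimony
# tie-break then picks the minimizer with a single one.
print(exhmml_select(lambda q: 0 if q[0] else 1, 4))
```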

5.3. Climatological Data

We studied the dynamics among seven climatic time series in a given time interval. All time series were measured at the station of the Institute for Meteorology of the University of Life Sciences in Vienna, 265 m above sea level [36]. Since weather is a very changeable process, it makes sense to focus on a shorter time interval. We considered time series of dewpoint temperature (°C, dew p), air temperature (°C, air tmp), relative humidity (%, rel hum), global radiation (W/m², gl rad), wind speed (km/h, w speed), wind direction (degrees, w dir), and air pressure (hPa, air pr). All processes were measured every ten minutes, which corresponds to n = 432 time observations for each time series. We concentrated on the temporal interactions of these processes under two scenarios. The first one corresponded to 7–9 July 2020, which were days with no rain. The second one corresponded to 16–18 July 2020, which were rainy days.
Before applying the methods, we tested the distributions of each time series. In the rainy-days scenario, air temperature (2) and global radiation (4) followed a gamma distribution, and the remaining processes, dew point temperature (1), relative humidity (3), wind speed (5), wind direction (6), and air pressure (7), followed a Gaussian distribution. In the dry-days scenario, wind direction (6) and air pressure (7) followed a Gaussian distribution, and dew point temperature (1), air temperature (2), relative humidity (3), global radiation (4) and wind speed (5) followed a gamma distribution. To decide which of the algorithms exHMML or HMMLGA would be preferable in this real-valued experiment, we executed synthetic experiments for constellations of 5 gamma and 2 Gaussian (dry days), as well as 2 gamma and 5 Gaussian (rainy days) time series, with n = 432, for sparse and dense graphs with d = 4 and 5, each for 10 random graphs. A higher F-measure was obtained for HMMLGA; therefore, we applied the HMMLGA procedure in the climatological experiments.
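The choice between the Gaussian and gamma family for a series can be illustrated by comparing maximized log-likelihoods. The paper uses Matlab fitting functions (e.g., gamfit); this Python sketch instead uses a simple moment-based gamma fit and assumes strictly positive data:

```python
from math import lgamma, log, pi

def gaussian_loglik(x):
    # Maximized Gaussian log-likelihood: -n/2 * (log(2*pi*var) + 1).
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return -0.5 * n * (log(2 * pi * var) + 1)

def gamma_loglik(x):
    # Method-of-moments fit: shape a = mean^2/var, scale b = var/mean.
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    a, b = mu * mu / var, var / mu
    return (sum((a - 1) * log(v) - v / b for v in x)
            - n * (lgamma(a) + a * log(b)))

def pick_family(x):
    return 'gamma' if gamma_loglik(x) > gaussian_loglik(x) else 'gaussian'
```

On clearly right-skewed positive data, the gamma fit wins this comparison; for near-symmetric data, the two likelihoods are close and a formal goodness-of-fit test would be preferable.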
The output graphs of the methods HMMLGA, Lingam and HGGM for rainy days, and of HMMLGA and Lingam for dry days, were identical for both lags; for dry days, HGGM produced a different graph for each lag. We were interested in (a) how realistic the temporal interactions of the processes detected by each method were in each scenario, and (b) how realistic the temporal interactions detected by each method were when derived from the difference of the graphs for dry and rainy days. Here, we focused only on the connections that differed between the two graphs for each method. The output graphs of the methods HMMLGA, Lingam, SFGC and HGGM for rainy and dry days for lag d = 4 can be seen in Figure 1 and Figure 2.
For Lingam, the output graphs for rainy and dry days were identical and complete, so we omitted this method from further analysis.
Based on the expert knowledge of [37], the temporal interactions in the HMMLGA output graphs correspond to reality in both the rainy and the dry scenario. In HMMLGA_{D−R}, the subgraph of edges present in the HMMLGA graph for dry days but not in the graph for rainy days, the following directed edges of the form (cause, effect) were detected: (air tmp, air pr) and (dew p, air pr). The (direct) influence of the dew point on air pressure is more strongly observable during sunny days, since the dew point cannot be determined during rainy days. Similarly, the causal influence of air temperature on air pressure is stronger during sunny days than during rainy days. So both edges detected by HMMLGA were realistic. HMMLGA_{R−D} was empty. The output graph HGGM_{D−R} contained no edges. For HGGM_{R−D}, we obtained the following directed edges: (dew p, air pr), which is not observable during rain; (rel hum, dew p), which is observable also during rain; (rel hum, air pr), which is observable (as humidity increases, pressure decreases); (w speed, w dir), which is not observable in reality; (w speed, air pr), which is observable (higher wind speeds come with lower air pressure); (w speed, air tmp) and (w speed, gl rad), which are observable; and (w dir, rel hum), whose direct effect is not observable in reality. So HGGM_{R−D} had 2 falsely detected directions out of 8. The graph SFGC_{R−D} gave the edge (dew p, air pr), which, as in the case of HGGM, is not observable during rain; (dew p, air tmp), (dew p, w speed), (dew p, rel hum) and (dew p, gl rad), which are not observable during rain; (rel hum, gl rad), which is observable during rain; and (gl rad, w speed) and (gl rad, w dir), which are not observable during rain. So SFGC_{R−D} had 7 falsely detected directions out of 8.
The output of SFGC_{D−R} gave these edges: (rel hum, dew p), (rel hum, air tmp), (gl rad, w speed), (dew p, air tmp), (air pr, w dir), (w speed, air pr), and (air pr, w speed), all of which are observable during a dry period. So SFGC_{D−R} had 7 correctly detected directions out of 7.
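The difference graphs used above (e.g., the dry-minus-rainy subgraph) are plain set differences of adjacency matrices; a Python sketch, assuming the convention of Algorithm 2 that row i of Adj lists the causes of x_i:

```python
def edge_difference(adj_a, adj_b, labels):
    """Directed edges (cause, effect) present in graph A but not in B,
    where adj[i][j] = 1 means x_j causes x_i (row i lists causes of x_i)."""
    return [(labels[j], labels[i])
            for i, row in enumerate(adj_a)
            for j, a in enumerate(row)
            if a and not adj_b[i][j]]

# Hypothetical 3-variable example with the paper's variable labels.
labels = ['dew p', 'air tmp', 'air pr']
dry = [[0, 1, 0], [0, 0, 0], [1, 0, 0]]
rainy = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]
print(edge_difference(dry, rainy, labels))  # dry-minus-rainy edges
```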
We conclude that, in this climatological experiment, the method HMMLGA, followed by SFGC, gave the most realistic causal connections among the compared methods.

5.4. Electrohysterogram Time Series

In current obstetrics, there is no effective way of preventing preterm birth. The main reason is that no good objective method is known to evaluate the stepwise progression of pregnancy through to labor [38]. Any better understanding of the underlying labor dynamics can contribute to preventing preterm birth, which is the main cause of mortality in newborns. External recordings of the electrohysterogram (EHG) can provide new knowledge on the uterine electrical activity associated with contractions.
We considered a database of 4-by-4-electrode EHG recordings performed on pregnant women, recorded in Iceland between 2008 and 2010 and available via PhysioNet (PhysioBank ATM) [39]. This EHG grid (in matrix form) was placed on the abdomen of the pregnant women. The electrode numbering, as considered in [38], can be found in Figure 3.
We applied all methods to these recordings, specifically to the EHG signals of women in the third trimester of pregnancy and during labor. We selected all (five) mothers for whom recordings were performed both in the third trimester and during labor. Since no ground truth is known on what the dynamics among the electrodes should look like for the two modalities, we set ourselves a modest objective: to test whether HMMLGA and the comparison methods are able to distinguish labor from pregnancy based on the EHG recordings. During labor, a higher density of interactions among electrodes is expected than during pregnancy, due to the higher occurrence of contractions of the uterine smooth muscles, which is also supported by some recent research in obstetrics, e.g., [40].
The 16 electromyographic time series (channels) were taken for all five women (women 11, 27, 30, 31 and 34), for each both in the third trimester (P) and during labor (L). The observations in the time series correspond to a time resolution of 5 ms. The time series in the database are annotated with information about contractions, possible contractions, participant movement, participant change of position, fetal movement and equipment manipulation. By statistical fitting, we found that all 16 time series followed a Poisson distribution (using the raw ADC units in the PhysioNet database). We analysed the causal connections found by each method for labor and pregnancy for all five women.
Since HMMLGA had a higher F-measure than exHMML in synthetic experiments with 16 Poisson time series, we considered only HMMLGA further in this real data experiment. In the synthetic experiments in [12], Poisson time series showed the highest F-measure on short time series, i.e., in the case when the number of time observations is smaller than approximately two orders of magnitude times the number of time series. Based on this, we took the last 1200 observations for labor, since in the last phase it was certain that labor had already started and the contractions had increased in time. Labor still continued for a few more hours after the EHG recording finished for each of the five women. For the pregnancy time series, we also took 1200 observations, starting at the moment when all electrodes had been fixed. The hypothesis that during labor all electrodes are activated was confirmed by HMMLGA, HGGM and Lingam for all mothers. The hypothesis that the causal graph during labor has a higher density of causal connections than in the pregnancy case was confirmed for all mothers by HMMLGA, and by HGGM for mothers 30 and 31, but we could not confirm it for SFGC and Lingam. In fact, Lingam gave identical complete causal graphs for both the labor and the pregnancy case. The real computational time of Lingam (with 100 boots, as recommended by the authors) for 16 time series and both the labor and pregnancy modalities was approximately 12 h (on an HP Elite notebook); for the other methods, the time was on the order of minutes. We present the causal graphs of all methods for the labor and pregnancy phases of mother 31 in Figure 4.
One can see that the density of connections found by HMMLGA is higher for labor than for pregnancy. For all mothers, the causal graphs of HMMLGA were denser for labor than for pregnancy. To make more concrete hypotheses about the temporal interactions among the electrodes based on contractions, we would probably have to consider only intervals known to be free of, or with a limited number of, artifacts in terms of participant movement, participant change of position, etc.

6. Conclusions

Common graphical Granger models, including the heterogeneous graphical Granger model, often suffer from overestimation in scenarios with short time series. To remedy this, we proposed in this paper to use the minimum message length principle for the determination of causal connections in the heterogeneous graphical Granger model. Based on the dispersion coefficient of the target time series and on the initial maximum likelihood estimates of the regression coefficients, we proposed a minimum message length criterion to select the subset of time series causally connected with each target time series, and we derived its concrete form for various exponential distributions. We found this subset by a genetic-type algorithm (HMMLGA), which we proposed, as well as by exhaustive search (exHMML). We evaluated the complexity of both algorithms. The code in Matlab is provided. We demonstrated the superiority of both methods with respect to the comparison methods in synthetic experiments in short-data scenarios. In two real data experiments, the interpretation of the causal connections resulting from HMMLGA was the most realistic with respect to the comparison methods. The superiority of HMMLGA over the comparison methods for short time series can be explained by the use of the dispersion of the time series in the criterion as additional (prior) information, as well as by the fact that the criterion is optimized over a finite search space.

Author Contributions

Conceptualization, K.H.-S.; Data curation, K.H.-S.; Formal analysis, K.H.-S.; Investigation, K.H.-S.; Methodology, K.H.-S.; Resources, C.P.; Software, K.H.-S.; Supervision, C.P.; Validation, K.H.-S.; Visualization, K.H.-S.; Writing—original draft, K.H.-S.; Writing—review & editing, C.P., K.H.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Czech Science Foundation grant number GA19-16066S.

Acknowledgments

This work was supported by the Czech Science Foundation, project GA19-16066S. The authors thank Dr. Irene Schicker and Dipl.-Ing. Petrina Papazek from [37] for their help with analysing the results of the climatological experiments. Open Access Funding by the University of Vienna.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of the MML Criterion for HGGM

Assume p independent random variables from the exponential family, represented by time series x_i^t, t = d + 1, …, n, and for each i let ϕ̂_i be the given estimate of its dispersion. Consider the problem (10) for a given lag d > 0.
We now consider γ_i fixed, so for simplicity of notation we omit it from the list of variables of the functions. Having the function L_i, we can compute an initial estimate β̂_i from (10), which is the solution to the system of score equations. Since L_i is a convex function, one can use standard convex optimization techniques (e.g., the Newton–Raphson method) to solve these equations numerically. (In our code, we use the Matlab implementation of an iteratively reweighted least squares (IRLS) algorithm of the Newton–Raphson method.) Assume now that we have an initial solution β̂_i from (10).
Having the parameters β̂_i, ϕ̂_i, Σ_i, W_i and λ_i, we need to construct the function HMML(γ_i). For each i = 1, …, p and for the regression (10), we use Formula (18) from [23], i.e., the case where we plug in the variables α := 0, β := β̂_i, X := X_i, y := x_i, n := n − d, k := k_i, θ := β̂_i, λ := λ_i, ϕ := ϕ̂_i, and S := Σ_i, the unity matrix of dimension d k_i. The corrected Fisher information matrix for the parameters β_i is then J(β̂_i | ϕ̂_i, λ_i) = (1/ϕ_i) X_i' W_i X_i + λ_i Σ_i. The function c(m) for m := k_i + 1 is then c(k_i + 1) = −((k_i + 1)/2) log(2π) + (1/2) log((k_i + 1)π) − 0.5772, and the constants that are independent of k_i were omitted from the HMML code, since the optimization w.r.t. γ_i is independent of them. Among all subsets γ_i ∈ Γ, there are C(p, k_i) subsets of size k_i. If nothing is known a priori about the likelihood of any covariate x_i being included in the final model, a prior that treats all subset sizes as equally likely, π(|γ_i|) = 1/(p + 1), is appropriate [23]. This gives the code length I(γ_i) = log C(p, k_i) + log(p + 1), as in (12).
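The two closed-form terms assembled here, c(k_i + 1) and I(γ_i), can be written down directly (natural logarithms; a Python sketch):

```python
from math import comb, log, pi

def c_mml87(m):
    """c(m) = -(m/2) log(2*pi) + (1/2) log(m*pi) - 0.5772."""
    return -(m / 2) * log(2 * pi) + 0.5 * log(m * pi) - 0.5772

def subset_code_length(p, k):
    """I(gamma_i) = log C(p, k) + log(p + 1): a uniform prior over subset
    sizes, and a uniform prior over the C(p, k) subsets of the chosen size."""
    return log(comb(p, k)) + log(p + 1)
```

Note that subset_code_length grows with k (for k up to p/2), so the subset-code term itself already penalizes larger candidate sets γ_i.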

Appendix B. Derivation of Li, Wi, ϕi for Various Exponential Distributions of x i

Case x_i is Gaussian. Since in this case ϕ_i = σ_i² is its variance, we omit ϕ_i from the list of parameters conditioning the function p. L_i in (15) is obtained directly from (14) by taking its logarithm. By plugging the values for the identity link corresponding to the Gaussian case, η_i^t = μ_i^t = [X_i β_i]_t and ∂η_i^t/∂μ_i^t = 1, into Formula (13) from [23], the matrix W_i = I_{k_i d × k_i d} is directly obtained.
Case x_i is binomial. Assuming ϕ_i is a constant, we can omit ϕ_i from the list of parameters conditioning the function p. L_i in (17) is obtained directly from (16) by taking its logarithm. As in the previous case, W_i is obtained by plugging values into Formula (13) from [23]: the value of W_i in (19) is obtained by plugging in the values for the logit link corresponding to the binomial case, η_i^t = [X_i β_i]_t = log(μ_i^t / (1 − μ_i^t)) and ∂η_i^t/∂μ_i^t = 1/(μ_i^t (1 − μ_i^t)). In case we cannot assume ϕ_i = 1, we apply the sandwich estimate of the covariance matrix of β̂_i for robust estimation (which for a general logistic regression can be found, e.g., in [41]); in our case, it gives the matrix W_i in the form W_i = diag( (x_i^1 − exp([X_i β̂_i]_1)/(1 + exp([X_i β̂_i]_1))²)², …, (x_i^{n−d} − exp([X_i β̂_i]_{n−d})/(1 + exp([X_i β̂_i]_{n−d}))²)² ).
Case x_i is Poisson. First, we express the log-likelihood function L_i in terms of the parameters β_i. Since we use the Poisson model for x_i having the Poisson or overdispersed Poisson distribution, we omit ϕ_i from the list of parameters conditioning the function p. For a given set of parameters β_i, the probability of attaining x_i^{d+1}, …, x_i^n is given by p(x_i^{d+1}, …, x_i^n | X_i, β_i) = ∏_{t=d+1}^{n} (μ_i^t)^{x_i^t} exp(−μ_i^t) / (x_i^t)! = ∏_{t=d+1}^{n} exp([X_i β_i]_t)^{x_i^t} exp(−exp([X_i β_i]_t)) / x_i^t!, with μ_i^t = exp([X_i β_i]_t) (recalling the notation from Section 3.2, [X_i β_i]_t denotes the t-th coordinate of the vector X_i β_i). The log-likelihood in terms of β_i is L_i = log p(β_i | x_i, X_i) = ∑_{t=d+1}^{n} ( x_i^t [X_i β_i]_t − exp([X_i β_i]_t) − log(x_i^t!) ). Now we derive the matrix W_i for x_i with the (exact) Poisson distribution: the Fisher information matrix J_i = J(β_i) = −E_{β_i}(∇² L_i(β_i | x_i, X_i)) may be obtained by computing the second-order partial derivatives of L_i for r, s = 1, …, k_i. This gives
∂²L_i(β_i | x_i, X_i) / (∂β_i^r ∂β_i^s) = ∂/∂β_i^s [ ∑_{t=d+1}^{n} ( x_i^t ∑_{l=1}^{d} x_r^{t−l} − exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{t−l} β_j^l ) ∑_{l=1}^{d} x_r^{t−l} ) ] = − ∑_{t=d+1}^{n} exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{t−l} β_j^l ) ( ∑_{l=1}^{d} x_s^{t−l} ) ( ∑_{l=1}^{d} x_r^{t−l} ).
If we denote W_i := diag( exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{d+1−l} β_j^l ), …, exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{n−l} β_j^l ) ), then we have the Fisher information matrix J(β_i) = (X_i)' W_i X_i. Alternatively, W_i can be obtained by plugging values into Formula (13) from [23]: the value of W_i in (22) is obtained by plugging in the values for the log link corresponding to the Poisson case, η_i^t = [X_i β_i]_t = log(μ_i^t) and ∂η_i^t/∂μ_i^t = 1/μ_i^t.
Derivation of the matrix W_i for x_i with the overdispersed Poisson distribution: assume now the dispersion parameter ϕ_i > 0, ϕ_i ≠ 1. The variance of the overdispersed Poisson distribution is ϕ_i μ_i. The Poisson regression model can still be used in overdispersed settings, and the function L_i is the same as L_i(β_i) derived above. We use the robust sandwich estimate of the covariance of β̂_i, as proposed in [42] for general Poisson regression. The Fisher information matrix of the overdispersed problem is J_i = J(β_i) = (X_i)' W_i X_i, where W_i is constructed for Poisson x_i based on [42] and has the form W_i = diag( [ x_i^{d+1} − exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{d+1−l} β_j^l ) ]², …, [ x_i^n − exp( ∑_{j=1}^{k_i} ∑_{l=1}^{d} x_j^{n−l} β_j^l ) ]² ).
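The Poisson building blocks, W_i = diag(exp([X_i β_i]_t)) and J(β_i) = (X_i)' W_i X_i, can be checked numerically; a small Python sketch that ignores the lag bookkeeping and treats X_i as a plain design matrix:

```python
from math import exp

def poisson_fisher(X, beta):
    """Fisher information J = X' W X with W = diag(exp(X beta)) for the
    (exact) Poisson case; X is a list of rows, beta a coefficient vector."""
    eta = [sum(x * b for x, b in zip(row, beta)) for row in X]  # X beta
    w = [exp(e) for e in eta]                                   # diag of W
    k = len(beta)
    return [[sum(wt * row[r] * row[s] for wt, row in zip(w, X))
             for s in range(k)] for r in range(k)]
```

For beta = 0 all weights equal 1, so J reduces to the Gram matrix X' X, which gives a quick sanity check.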
Case x_i is gamma. L_i in (25) is obtained directly from (24) by taking its logarithm. By plugging the values for the inverse link corresponding to the gamma case, η_i^t = 1/μ_i^t and ∂η_i^t/∂μ_i^t = −1/(μ_i^t)², into Formula (13) from [23], the matrix W_i in (26) is directly obtained.
Case $x_i$ is inverse-Gaussian: $L_i$ in (28) is obtained directly from (24) by taking the logarithm. By plugging the log-link values for the inverse-Gaussian case, $\eta_i^t = [X_i \beta_i]_t = \log(\mu_i^t)$ and $\partial \eta_i^t / \partial \mu_i^t = 1/\mu_i^t$, into Formula (13) from [23], the matrix $W_i$ in (29) is obtained directly.
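All four cases follow the same pattern: only the diagonal weights of $W_i$ change with the chosen link and variance function. Assuming the standard GLM working-weight form $w_t = 1 / \big( V(\mu_t) \, (\partial \eta_t / \partial \mu_t)^2 \big)$ — which is consistent with the Poisson weights $\exp(\eta_t)$ derived above, though the exact Formula (13) from [23] is not reproduced here — the per-distribution weights can be generated from the link alone. A hedged sketch; the variance functions $V$ are the textbook GLM ones:

```python
import numpy as np

def glm_weights(mu, variance, deta_dmu):
    """Working weights w_t = 1 / (V(mu_t) * (d eta/d mu |_t)^2)."""
    return 1.0 / (variance(mu) * deta_dmu(mu) ** 2)

mu = np.array([0.5, 1.0, 2.0])
# Poisson, log link: V(mu) = mu, d eta/d mu = 1/mu      ->  w = mu = exp(eta)
w_pois = glm_weights(mu, lambda m: m, lambda m: 1.0 / m)
# gamma, reciprocal link: V(mu) = mu^2, d eta/d mu = -1/mu^2  ->  w = mu^2
w_gamma = glm_weights(mu, lambda m: m**2, lambda m: -1.0 / m**2)
# inverse-Gaussian, log link: V(mu) = mu^3, d eta/d mu = 1/mu ->  w = 1/mu
w_ig = glm_weights(mu, lambda m: m**3, lambda m: 1.0 / m)
```

Note how the Poisson case recovers the diagonal of $W_i$ from the exact derivation above, which is the consistency check that makes this generic form plausible.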

References

  1. Behzadi, S.; Hlaváčková-Schindler, K.; Plant, C. Granger Causality for Heterogeneous Processes. In Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Cham, Switzerland, 2019. [Google Scholar]
  2. Zou, H. The adaptive lasso and its oracle property. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef] [Green Version]
  3. Hryniewicz, O.; Kaczmarek, K. Forecasting short time series with the bayesian autoregression and the soft computing prior information. In Strengthening Links Between Data Analysis and Soft Computing; Springer: Cham, Switzerland, 2015; pp. 79–86. [Google Scholar]
  4. Bréhélin, L. A Bayesian approach for the clustering of short time series. Rev. D’Intell. Artif. 2006, 20, 697–716. [Google Scholar] [CrossRef]
  5. Wallace, C.S.; Boulton, D.M. An information measure for classification. Comput. J. 1968, 11, 185–194. [Google Scholar] [CrossRef] [Green Version]
  6. Shimizu, S.; Inazumi, T.; Sogawa, Y.; Hyvärinen, A.; Kawahara, Y.; Washio, T.; Hoyer, P.O.; Bollen, K. DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model. J. Mach. Learn. Res. 2011, 12, 1225–1248. [Google Scholar]
  7. Kim, S.; Putrino, D.; Ghosh, S.; Brown, E.N. A Granger causality measure for point process models of ensemble neural spiking activity. PLoS Comput. Biol. 2011, 7, e1001110. [Google Scholar] [CrossRef] [Green Version]
  8. Arnold, A.; Liu, Y.; Abe, N. Temporal causal modeling with graphical Granger methods. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007; pp. 66–75. [Google Scholar]
  9. Shojaie, A.; Michailidis, G. Discovering graphical Granger causality using the truncating lasso penalty. Bioinformatics 2010, 26, i517–i523. [Google Scholar] [CrossRef]
  10. Lozano, A.C.; Abe, N.; Liu, Y.; Rosset, S. Grouped graphical Granger modeling methods for temporal causal modeling. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, 28 June–1 July 2009; pp. 577–586. [Google Scholar]
  11. Nelder, J.; Wedderburn, R. Generalized Linear Models. J. R. Stat. Soc. Ser. A (General) 1972, 135, 370–384. [Google Scholar] [CrossRef]
  12. Hlaváčková-Schindler, K.; Plant, C. Poisson Graphical Granger Causality by Minimum Message Length. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2020 (ECML/PKDD), Ghent, Belgium, 14–18 September 2020. [Google Scholar]
  13. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
  14. Mannino, M.; Bressler, S.L. Foundational perspectives on causality in large-scale brain networks. Phys. Life Rev. 2015, 15, 107–123. [Google Scholar] [CrossRef]
  15. Maziarz, M. A review of the Granger-causality fallacy. J. Philos. Econ. Reflect. Econ. Soc. Issues 2015, 8, 86–105. [Google Scholar]
  16. Granger, C.W. Some recent development in a concept of causality. J. Econom. 1988, 39, 199–211. [Google Scholar] [CrossRef]
  17. Lindquist, M.A.; Sobel, M.E. Graphical models, potential outcomes and causal inference: Comment on Ramsey, Spirtes and Glymour. NeuroImage 2011, 57, 334–336. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Spirtes, P.; Glymour, C.N.; Scheines, R.; Heckerman, D. Causation, Prediction, and Search; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  19. Glymour, C. Counterfactuals, graphical causal models and potential outcomes: Response to Lindquist and Sobel. NeuroImage 2013, 76, 450–451. [Google Scholar] [CrossRef] [PubMed]
  20. Marinescu, I.E.; Lawlor, P.N.; Kording, K.P. Quasi-experimental causality in neuroscience and behavioural research. Nat. Hum. Behav. 2018, 2, 891–898. [Google Scholar] [CrossRef] [PubMed]
  21. Wallace, C.S.; Freeman, P.R. Estimation and inference by compact coding. J. R. Stat. Soc. Ser. B 1987, 49, 240–252. [Google Scholar] [CrossRef]
  22. Wallace, C.S.; Dowe, D.L. Minimum message length and Kolmogorov complexity. Comput. J. 1999, 42, 270–283. [Google Scholar] [CrossRef] [Green Version]
  23. Schmidt, D.F.; Makalic, E. Minimum message length ridge regression for generalized linear models. In Australasian Joint Conference on Artificial Intelligence; Springer: Cham, Switzerland, 2013; pp. 408–420. [Google Scholar]
  24. Segerstedt, B. On ordinary ridge regression in generalized linear models. Commun. Stat. Theory Methods 1992, 21, 2227–2246. [Google Scholar] [CrossRef]
  25. Computational Complexity of Mathematical Operations. Available online: https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations (accessed on 2 October 2020).
  26. Rissanen, J. Stochastic Complexity in Statistical Inquiry; World Scientific: Singapore, 1989; Volume 15, p. 188. [Google Scholar]
  27. Barron, A.; Rissanen, J.; Yu, B. The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory 1998, 44, 2743–2760. [Google Scholar] [CrossRef] [Green Version]
  28. Hansen, M.; Yu, B. Model selection and minimum description length principle. J. Am. Stat. Assoc. 2001, 96, 746–774. [Google Scholar] [CrossRef]
  29. Hansen, M.H.; Yu, B. Minimum description length model selection criteria for generalized linear models. Lect. Notes Monogr. Ser. 2003, 40, 145–163. [Google Scholar]
  30. Marx, A.; Vreeken, J. Telling cause from effect using MDL-based local and global regression. In Proceedings of the 2017 IEEE International Conference on Data Mining, New Orleans, LA, USA, 18–21 November 2017; pp. 307–316. [Google Scholar]
  31. Marx, A.; Vreeken, J. Causal inference on multivariate and mixed-type data. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Dublin, Ireland, 10–14 September 2018; Volume 2018, pp. 655–671. [Google Scholar]
  32. Budhathoki, K.; Vreeken, J. Origo: Causal inference by compression. Knowl. Inf. Syst. 2018, 56, 285–307. [Google Scholar] [CrossRef] [Green Version]
  33. Hlaváčková-Schindler, K.; Plant, C. Graphical Granger causality by information-theoretic criteria. In Proceedings of the European Conference on Artificial Intelligence 2020 (ECAI), Santiago de Compostela, Spain, 29 August–2 September 2020; pp. 1459–1466. [Google Scholar]
  34. McIlhagga, W.H. Penalized: A MATLAB toolbox for fitting generalized linear models with penalties. J. Stat. Softw. 2016, 72. [Google Scholar] [CrossRef] [Green Version]
  35. Zou, H.; Hastie, T.; Tibshirani, R. On the “degrees of freedom” of the lasso. Ann. Stat. 2007, 35, 2173–2192. [Google Scholar] [CrossRef]
  36. Available online: https://meteo.boku.ac.at/wetter/mon-archiv/2020/202009/202009.html (accessed on 5 September 2020).
  37. Zentralanstalt für Meteorologie und Geodynamik 1190 Vienna, Hohe Warte 38. Available online: https://www.zamg.ac.at/cms/de/aktuell (accessed on 5 September 2020).
  38. Alexandersson, A.; Steingrimsdottir, T.; Terrien, J.; Marque, C.; Karlsson, B. The Icelandic 16-electrode electrohysterogram database. Sci. Data 2015, 2, 1–9. [Google Scholar] [CrossRef] [Green Version]
  39. Available online: https://www.physionet.org (accessed on 5 September 2020).
  40. Mikkelsen, E.; Johansen, P.; Fuglsang-Frederiksen, A.; Uldbjerg, N. Electrohysterography of labor contractions: Propagation velocity and direction. Acta Obstet. Gynecol. Scand. 2013, 92, 1070–1078. [Google Scholar] [CrossRef]
  41. Agresti, A. Categorical Data Analysis; Section 12.3.3.; John Wiley and Sons: Hoboken, NJ, USA, 2003; Volume 482. [Google Scholar]
  42. Huber, P.J. The behavior of maximum likelihood estimates under nonstandard conditions. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 221–233. [Google Scholar]
Figure 1. Output causal graphs for the methods HMMLGA and Lingam for the rainy-day and dry-day scenarios.
Figure 2. Output causal graphs for the methods HGGM and SFGC for the rainy-day and dry-day scenarios.
Figure 3. The ordering of the electrodes as mounted on the abdomen of women.
Figure 4. Output causal graphs for mother 31 during (a) labor and (b) pregnancy for all methods.
Table 1. p = 5, average F-measure for each method; HMML with n_g = 10, m = 20; HGGM with λ_max = 5; LINGAM with n/2 boots. The first subtable is for d = 3, the second for d = 4.

d = 3      | dense g. 18, n =        | sparse g. 8, n =
           | 100   300   500   1000  | 100   300   500   1000
exHMML     | 0.69  0.83  0.82  0.88  | 0.70  0.72  0.72  0.67
HMMLGA     | 0.73  0.90  0.89  0.90  | 0.73  0.76  0.74  0.67
HGGM       | 0.50  0.48  0.54  0.52  | 0.52  0.36  0.66  0.36
LINGAM     | 0.57  0.58  0.62  0.58  | 0.58  0.54  0.69  0.45
SFGC       | 0.33  0.26  0.26  0.33  | 0.14  0.35  0.44  0.31

d = 4      | dense g. 18, n =        | sparse g. 8, n =
           | 100   300   500   1000  | 100   300   500   1000
exHMML     | 0.71  0.73  0.83  0.83  | 0.67  0.80  0.80  0.68
HMMLGA     | 0.82  0.79  0.87  0.92  | 0.67  0.73  0.77  0.70
HGGM       | 0.44  0.37  0.40  0.39  | 0.53  0.47  0.65  0.36
LINGAM     | 0.71  0.58  0.58  0.65  | 0.33  0.52  0.74  0.46
SFGC       | 0.43  0.55  0.42  0.63  | 0.35  0.59  0.42  0.38
Table 2. p = 8, average F-measure for each method; HMML with n_g = 10, m = 20; HGGM with λ_max = 5; LINGAM with n/2 boots. The first subtable is for d = 3, the second for d = 4.

d = 3      | dense g. 52, n =        | sparse g. 15, n =
           | 100   300   500   1000  | 100   300   500   1000
exHMML     | 0.68  0.78  0.79  0.82  | 0.69  0.73  0.77  0.64
HMMLGA     | 0.84  0.67  0.66  0.87  | 0.57  0.69  0.70  0.56
HGGM       | 0.16  0.17  0.17  0.17  | 0.20  0.09  0.18  0.17
LINGAM     | 0.62  0.54  0.51  0.55  | 0.28  0.33  0.40  0.19
SFGC       | 0.32  0.21  0.35  0.20  | 0.30  0.24  0.22  0.19

d = 4      | dense g. 52, n =        | sparse g. 15, n =
           | 100   300   500   1000  | 100   300   500   1000
exHMML     | 0.59  0.64  0.56  0.75  | 0.58  0.84  0.80  0.69
HMMLGA     | 0.77  0.72  0.63  0.79  | 0.42  0.69  0.70  0.56
HGGM       | 0.16  0.16  0.18  0.17  | 0.17  0.10  0.18  0.19
LINGAM     | 0.62  0.54  0.51  0.55  | 0.27  0.33  0.40  0.18
SFGC       | 0.36  0.45  0.82  0.83  | 0.29  0.29  0.24  0.20
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

MDPI and ACS Style

Hlaváčková-Schindler, K.; Plant, C. Heterogeneous Graphical Granger Causality by Minimum Message Length. Entropy 2020, 22, 1400. https://doi.org/10.3390/e22121400
