Abstract
In this paper novel simulation methods are provided for the generalised inverse Gaussian (GIG) Lévy process. Such processes are intractable for simulation except in certain special edge cases, since the Lévy density associated with the GIG process is expressed as an integral involving certain Bessel functions, known as the Jaeger integral in diffusive transport applications. We here show for the first time how to solve the problem indirectly, using generalised shot-noise methods to simulate the underlying point processes and constructing an auxiliary variables approach that avoids any direct calculation of the integrals involved. The resulting augmented bivariate process is still intractable and so we propose a novel thinning method based on upper bounds on the intractable integrand. Moreover, our approach leads to lower and upper bounds on the Jaeger integral itself, which may be compared with other approximation methods. The shot noise method involves a truncated infinite series of decreasing random variables, and as such is approximate, although the series are found to be rapidly convergent in most cases. We note that the GIG process is the required Brownian motion subordinator for the generalised hyperbolic (GH) Lévy process and so our simulation approach will straightforwardly extend also to the simulation of these intractable processes. Our new methods will find application in forward simulation of processes of GIG and GH type, in financial and engineering data, for example, as well as inference for states and parameters of stochastic processes driven by GIG and GH Lévy processes.
1 Introduction
The Lévy process is a fundamental tool for the study of continuous time stochastic phenomena Bertoin (1997), Barndorff-Nielsen et al. (2001), Applebaum (2009). The most familiar example is of course Brownian motion, but it is well known that the Gaussian assumption is inadequate for modelling many real-world phenomena, which are often more heavy-tailed than the Gaussian. Applications are found in areas such as financial modelling Mandelbrot (1963), Fama (1965), Cont and Tankov (2003), communications Azzaoui and Clavier (2010), Fahs and Abou-Faycal (2012), de Freitas et al. (2017), Liebeherr et al. (2012), Shevlyakov and Kim (2006), Warren and Thomas (1991), signal processing Nikias and Shao (1995), image analysis Achim et al. (2001), Achim et al. (2006) and audio processing Godsill and Rayner (1998), Lombardi and Godsill (2006). Non-Gaussian heavy-tailed effects are also important in the climatological sciences Katz and Brown (1992), Katz et al. (2002), in the medical sciences Chen et al. (2010) and for the understanding of sparse modelling/compressive sensing Unser et al. (2014), Unser and Tafti (2014), Amini and Unser (2014), Carrillo et al. (2016), Lopes (2016), Zhou and Yu (2017), Tzagkarakis (2009), Achim et al. (2010).
In this paper we study a very general class of non-Gaussian Lévy processes, the generalised inverse Gaussian process Barndorff-Nielsen et al. (2001), Eberlein and Hammerstein (2004), a process which can capture various degrees of heavy-tailed behaviour, including the gamma process, the inverse Gaussian and the reciprocal-gamma process, as well as processes that lie somewhere ‘in between’ these edge cases Eberlein and Hammerstein (2004). Our work also enables direct simulation of the mean–variance mixtures of GIG processes, leading to the generalised hyperbolic (GH) processes Eberlein (2001). Important sub-classes of the GH process include the asymmetric Student-t process which has previously been intractable for simulation and inference to our knowledge (this class is entirely distinct from the Student-t processes developed in the machine learning literature Shah et al. 2014). The use of asymmetric Student-t models in financial econometric applications is discussed in Zhu and Galbraith (2010).
Rosiński (2001) presents a generalised shot-noise representation of non-Gaussian Lévy processes, and it is this approach that we develop here for the GIG and GH processes. Our previous work using the shot noise representation has focussed on stable law processes and their applications in engineering, see Lemke and Godsill (2015), Riabiz et al. (2020), Godsill et al. (2019) and references therein. There have been relevant developments in various special cases over recent years, including Barndorff-Nielsen (1997), who presents the theory of normal-inverse Gaussian (NIG) processes, and Rydberg (1997), who presents approximate sampling methods for the NIG case, while Barndorff-Nielsen and Shephard (2001) give applications of series-based methods for non-Gaussian Ornstein–Uhlenbeck (OU) processes. Zhang (2011b) provided rejection sampling and series-based simulations for the GIG OU process using the concepts of Rosiński (2001), but these are applied to a different background driving Lévy process (BDLP) than our work and indeed require evaluation of a generally intractable integral involving Bessel functions. In addition, Zhang (2011a), Qu et al. (2021), Grabchak (2019) have provided exact simulation methods for the related class of tempered stable (TS) processes, while recent relevant literature on GIG Lévy fields can be found in Barth and Stein (2017). It should be noted that the shot noise method involves infinite series of decreasing random variables, and in practice the series must be truncated at some finite limit in simulation. This truncation is the only approximation involved in our methods, which are otherwise exact, and in most cases the series are found to converge rapidly as the number of terms increases.
The distribution of the GIG Lévy process at \(t=1\), the GIG distribution, possesses a three-parameter probability density function defined for positive real random variables as follows Eberlein and Hammerstein (2004):
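$$f_{GIG}(x)=\left( \frac{\gamma }{\delta }\right) ^{\lambda }\frac{1}{2K_{\lambda }(\delta \gamma )}\,x^{\lambda -1}\exp \left( -\frac{1}{2}\left( \delta ^{2}x^{-1}+\gamma ^{2}x\right) \right) {{\mathcal {I}}}(x>0),$$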
where \(\lambda \in {\mathbb {R}}\), \(K_{\lambda }(\cdot )\) is the modified Bessel function of the second kind and \({{\mathcal {I}}}\) is the indicator function. The support of parameters \(\gamma \) and \(\delta \) depends on the sign of \(\lambda \) such that:
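$$\begin{aligned} \delta \ge 0,\;\gamma>0,&\quad \text {if }\lambda>0,\\ \delta>0,\;\gamma >0,&\quad \text {if }\lambda =0,\\ \delta>0,\;\gamma \ge 0,&\quad \text {if }\lambda <0, \end{aligned}$$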
as discussed in Eberlein and Hammerstein (2004). Random variate generation algorithms for a GIG variable are studied in Devroye (2014), Hörmann and Leydold (2013).
It is shown in Barndorff-Nielsen and Halgreen (1977) that the GIG distribution is infinitely divisible and hence can be the distribution of a Lévy process at time \(t=1\). Furthermore, particular values of the parameters lead to special cases of the GIG distribution such as the inverse Gaussian \((\lambda = -1/2)\), Gamma (\(\delta =0\), \(\lambda >0\)) and the reciprocal-Gamma (\(\gamma = 0\), \(\lambda <0\)) distributions, and in these limiting cases the normalising constant is replaced by that of the corresponding well-known distribution.
The principal contribution of this paper is to provide a comprehensive suite of methods for simulation of GIG processes, without the need for evaluation of intractable integrals, beyond pointwise evaluation of the relevant Bessel functions. An auxiliary variables approach transforms the univariate GIG point process into a bivariate point process having the GIG process as its marginal by construction and requiring no explicit evaluation of integrals. We derive tractable dominating measures for the augmented GIG Lévy density, hence leading to a random thinning methodology for generation of jumps of the underlying marginal GIG process. The whole procedure is carried out through generation of familiar random variables (from the gamma family) and point processes (both gamma and tempered stable). In addition we are able to bound the average acceptance rates of the random variate generation and also to provide upper and lower bounds on the GIG Lévy density and the corresponding Jaeger integral. Finally, the whole methodology is made accessible to practitioners through the publication of Matlab and Python code to implement the GIG sampling schemes.
Section 2 presents the necessary preliminaries for simulation of Lévy processes and their corresponding point processes, using a generalised shot-noise approach. Section 3 gives the specific forms for the GIG Lévy density and derives bounds on these densities, as well as a generic thinning method for tractable sampling of the underlying point processes, and presents in detail the simulation method for two different parameter ranges of the process. Section 4 presents example simulations, compared with exact simulations of the GIG random variable, and finally Sect. 5 discusses the application of GIG processes in simulation and inference for more general processes including the generalised hyperbolic process.
2 Shot noise representations, Lévy densities and thinning
In this section we present the required preliminaries about Lévy processes and shot-noise simulation of those processes, using the framework of Rosiński (2001). The characteristic function for a general non-negative-valued Lévy process W(t) having no drift or Brownian motion part is given by Kallenberg (2002), Corollary 15.8, as:
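$$E\left[ \exp \left( iuW(t)\right) \right] =\exp \left( t\int _{(0,\infty )}\left( e^{iux}-1\right) Q(dx)\right) ,\quad u\in {\mathbb {R}},$$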
where Q is a Lévy measure on \([0,\infty )\) and satisfying \(\int _{(0,\infty )}(1 \wedge x)Q(dx)<\infty \) (Bertoin 1997, p.72). By the Lévy–Ito integral representation, we may express W(t) directly as:
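$$W(t)=\int _{[0,t]\times (0,\infty )}x\,N(ds,dx).\qquad \text {(3)}$$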
Here N is a bivariate point process having mean measure \(Lebesgue \times Q\) on \([0,T]\times (0,\infty )\), which can be conveniently expressed as:
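$$N=\sum _{i=1}^{\infty }\delta _{V_i,X_i},$$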
where \(\{V_i\in [0,T]\}\) are i.i.d. uniform random variables which give the times of arrival of jumps, \(\{X_i\in (0,\infty )\}\) are the size of the jumps and \(\delta _{V,X}\) is the Dirac measure centred at time V and jump size X.
Substituting N directly into (3) leads to
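$$W(t)=\sum _{i=1}^{\infty }X_i\,{{\mathcal {I}}}(V_i\le t).\qquad \text {(4)}$$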
The almost sure convergence of this series to \(\{W(t)\}\) is proven in Rosiński (2001). Most of the new material in this paper is concerned with generating the jump sizes \(\{X_i\}\) for the GIG case. We will thus usually refer to the point process as just the set of jump sizes, \(N=\{X_i\}\), but of course corresponding jump times \(\{V_i\}\) will need to be sampled as above in all cases in order to realise the process according to (4).
In principle it may be possible to simulate directly from the point process N, but in practice the potentially infinite number of jumps in any finite time interval makes this impossible. If the jumps can be ordered by size, however, it may be possible to simulate all of the significant jumps and ignore or approximate the residual error from omitting the smallest jumps. It turns out that this can be done in a very convenient way, using the well-known approach of Ferguson and Klass (1972), Rosiński (2001), Wolpert and Ickstadt (1998). The starting point is the simulation of the epochs of a unit rate Poisson process \(\{{\Gamma }_i\}_{i\ge 1}\). The intensity function of this process is of course \(\lambda _{\Gamma }=1\), and the resulting realisations contain almost surely an infinite number of terms. We know how to simulate an arbitrarily large number of ordered terms from this process by repeatedly generating standard exponential random variables and calculating the cumulative sum of these to obtain an ordered \({\Gamma }_i\) sequence. Now, define the upper tail probability of the Lévy measure as
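$$Q^{+}(x)=Q([x,\infty )),$$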
and a corresponding non-increasing function \(h(\cdot )\) as the inverse tail probability:
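$$h(\gamma )=\inf \left\{ x>0:\,Q^{+}(x)\le \gamma \right\} .$$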
Then, the following point process converges a.s. to N Rosiński (2001):
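$$N=\sum _{i=1}^{\infty }\delta _{V_i,\,h({\Gamma }_i)},\qquad \text {(5)}$$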
and the corresponding convergent representation of the Lévy process is (neglecting the compensator term \(c_i\), which is zero for all cases considered here):
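$$W(t)=\sum _{i=1}^{\infty }h({\Gamma }_i)\,{{\mathcal {I}}}(V_i\le t).$$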
Thus, to generate points directly from N by this method it is necessary to be able to compute \(h(\gamma )={Q^+}^{-1}(\gamma )\) explicitly. In the cases considered here it will not be possible to do this and instead an indirect approach is adopted, using thinning or rejection sampling Lewis and Shedler (1979), Rosiński (2001) of more tractable point processes for which \(h(\gamma )\) is directly available. In particular, we seek a ‘bounding’ process \(N_0\) having Lévy measure \(Q_0\) and satisfying \(dQ_0(x)/dQ(x)\ge 1\) \(\forall x\in (0,\infty )\); then, realisations of \(N_0\) are thinned with probability \(dQ(x)/dQ_0(x)\) in order to obtain samples from the desired process N. In all cases considered here the Lévy measure possesses a density function, which we also denote by Q(x) (using the minor abuse of notation ‘\(dQ(x)=Q(x)dx\)’) and the required bounding condition is then \(Q_0(x)\ge Q(x)\), with associated thinning probability \(Q(x)/Q_0(x)\).
Two bounding processes are used extensively in the methods of this paper, the tempered stable (TS) and gamma processes, which may be simulated by standard shot noise methods as follows:
2.1 Tempered stable point process
In the tempered stable (TS) case the Lévy density is, for \(\alpha \in (0,1)\) Shephard and Barndorff-Nielsen (2001) (see also Rosiński 2007)
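$$Q(x)=Cx^{-1-\alpha }e^{-\beta x},\quad x>0.$$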
Several possible approaches to simulation of sample paths from tempered stable processes were proposed in Rosiński (2001), Rosiński (2007) and compared in Imai and Kawai (2011), which recommends the use of the inverse Lévy measure approach over thinning and rejection sampling methods. For the inverse Lévy measure approach the tail probability may be calculated in terms of gamma functions, but is not easily inverted, and numerical approximations are needed Imai and Kawai (2011). We thus adopt a thinning approach Rosiński (2001) in which the Lévy density is factorised into a positive \(\alpha \)-stable process with Lévy density \(Q_0(x)=Cx^{-1-\alpha }\) Samorodnitsky and Taqqu (1994) and a tempering function \(e^{-\beta x}\). The stable law process has tail mass \(Q_0^+(x)=\frac{C}{\alpha }x^{-\alpha }\) and hence \(h(\gamma )={Q_0^+}^{-1}(\gamma )=\left( \frac{\alpha \gamma }{C}\right) ^{-1/\alpha }\). Having simulated the stable point process with rate function \(Q_0\), points are individually retained with probability \(e^{-\beta x}\) and otherwise deleted.
The recipe for generating the tempered stable process \(N_{TS}\) is then:
In practice i is truncated at some large value and an approximate realisation results.
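To make the recipe concrete, the steps above can be sketched in Python as follows. This is a minimal NumPy sketch on a unit time interval; the function name and the fixed truncation level are our illustrative choices, not part of the published code.

```python
import numpy as np

def tempered_stable_jumps(alpha, beta, C, n_epochs, rng):
    """Sketch of Algorithm 1: approximate jump sizes of a tempered
    stable process via shot noise with thinning.

    Epochs of a unit-rate Poisson process are mapped through the
    inverse tail probability h(g) = (alpha * g / C) ** (-1 / alpha)
    of the dominating alpha-stable process; each candidate jump x is
    then retained with the tempering probability exp(-beta * x)."""
    epochs = np.cumsum(rng.exponential(size=n_epochs))  # ordered Poisson epochs
    x = (alpha * epochs / C) ** (-1.0 / alpha)          # decreasing stable jump sizes
    keep = rng.uniform(size=n_epochs) < np.exp(-beta * x)
    return x[keep]

rng = np.random.default_rng(0)
jumps = tempered_stable_jumps(alpha=0.5, beta=1.0, C=1.0, n_epochs=1000, rng=rng)
```

Each retained jump is then paired with an independent jump time \(v\sim {{{\mathcal {U}}}}[0,T]\) to realise the process via (4).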
2.2 Gamma process
The Lévy density for the Gamma process is:
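$$Q(x)=\frac{C}{x}e^{-\beta x},\quad x>0.$$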
Rosiński (2001) suggests four possible sampling schemes for this process. The tail probability is the exponential integral, which would require numerical inversion, see Wolpert and Ickstadt (1998). Instead we adopt a thinning approach (also called the ‘rejection method’ in Rosiński 2001) in which a dominating point process is chosen as \(Q_0(x)=\frac{C}{x}(1+\beta x)^{-1}\). The tail probability for this is \(Q_0^+(x)=C\log \left( \beta ^{-1} x^{-1}+1\right) \) and hence \(h(\gamma ) = \frac{1}{\beta \left( \exp (\gamma / C) - 1 \right) }\). Points are then retained with probability \((1+\beta x) \exp (-\beta x) \le 1\). As reported in Rosiński (2001), this thinning method is highly effective, with very few point rejections observed. The recipe for generating the gamma process \(N_{Ga}\) is then given in Algorithm 2.
As before i is truncated at some large value, yielding an approximate realisation of the gamma process.
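A matching Python sketch of this recipe (again with an arbitrary fixed truncation and naming chosen by us):

```python
import numpy as np

def gamma_process_jumps(beta, C, n_epochs, rng):
    """Sketch of Algorithm 2: approximate jump sizes of a gamma process.

    The dominating process Q0(x) = (C / x) * (1 + beta * x)**(-1) has
    inverse tail probability h(g) = 1 / (beta * (exp(g / C) - 1));
    each candidate jump x is retained with probability
    (1 + beta * x) * exp(-beta * x) <= 1."""
    epochs = np.cumsum(rng.exponential(size=n_epochs))
    eg = np.exp(-epochs / C)          # may underflow to 0 for very late epochs
    x = eg / (beta * (1.0 - eg))      # equals 1 / (beta * (exp(g / C) - 1))
    keep = (x > 0) & (rng.uniform(size=n_epochs) < (1.0 + beta * x) * np.exp(-beta * x))
    return x[keep]

rng = np.random.default_rng(0)
jumps = gamma_process_jumps(beta=1.0, C=1.0, n_epochs=1000, rng=rng)
```

The guard `x > 0` simply discards jumps that have underflowed to zero in floating point at very late epochs, which are negligible in any case.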
3 Simulation from the generalised inverse Gaussian Lévy process
In this section the GIG Lévy density is presented, along with a general scheme that enables simulation from the GIG process.
The Lévy density for the generalised inverse Gaussian (GIG) process is given by Eberlein and Hammerstein (2004):
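$$\frac{e^{-\gamma ^2x/2}}{x}\left[ \int _0^{\infty }\frac{e^{-xy}}{\pi ^2y\,|H_{|\lambda |}(\delta \sqrt{2y})|^2}\,dy+\max (0,\lambda )\right] ,\quad x>0,$$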
where \(H_{\lambda }(z)=J_{\lambda }(z)+iY_{\lambda }(z)\) is the Hankel function of the first kind (z is always real-valued in the current context), \(J_{\lambda }(z)\) is the Bessel function of the first kind, and \(Y_{\lambda }(z)\) is the Bessel function of the second kind.
The GIG Lévy density comprises two terms: the initial integral, which we denote \(Q_{GIG}(x)\):
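$$Q_{GIG}(x)=\frac{e^{-\gamma ^2x/2}}{x}\int _0^{\infty }\frac{e^{-xy}}{\pi ^2y\,|H_{|\lambda |}(\delta \sqrt{2y})|^2}\,dy,$$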
added to a second term, present only for \(\lambda >0\):
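$$\lambda \,\frac{e^{-\gamma ^2x/2}}{x},\quad \lambda >0.\qquad \text {(8)}$$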
This second term is a Gamma process that may be straightforwardly simulated using the standard methods of Sect. 2.2 and added to the simulation of points from the first term \(Q_{GIG}(x)\). Hence we will neglect this second term for now. It will be convenient to rewrite \(Q_{GIG}(x)\) using the substitution \(z=\delta \sqrt{2y}\) as:
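$$Q_{GIG}(x)=\frac{e^{-\gamma ^2x/2}}{x}\int _0^{\infty }\frac{2}{\pi ^2z\,|H_{|\lambda |}(z)|^2}\,e^{-\frac{xz^2}{2\delta ^2}}\,dz.$$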
Note that this integral is known elsewhere as the Jaeger integral, which finds application in diffusive transport Freitas (2018). Beyond our direct interest in GIG processes, there is significant interest in approximating these integrals accurately, and our bounding approach is likely to provide accurate bounds and approximations which may be compared with those proposed in Freitas (2018).
Throughout this paper we will consider only positive and real-valued variables x, y and z, as the complex versions of these functions are not required in the present context.
Our general scheme is to consider the following intensity function associated with a bivariate point process on \((0,\infty )\times (0,\infty )\):
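$$Q_{GIG}(x,z)=\frac{2}{\pi ^2z\,|H_{|\lambda |}(z)|^2}\,e^{-\frac{xz^2}{2\delta ^2}}\,\frac{e^{-\gamma ^2x/2}}{x},\qquad \text {(9)}$$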
which has by construction the GIG process as its marginal:
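$$\int _0^{\infty }Q_{GIG}(x,z)\,dz=Q_{GIG}(x).$$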
We propose to simulate points directly from this bivariate process, hence avoiding any direct evaluation of the Jaeger integral. Since \(Q_{GIG}(x,z)\) is itself intractable for simulation, dominating processes \(Q^0_{GIG}(x,z)\ge Q_{GIG}(x,z)\) are constructed and points sampled from \(Q^0_{GIG}(x,z)\) are thinned with probability \(Q_{GIG}(x,z) / Q^0_{GIG}(x,z)\). Thus a significant part of our approach lies in constructing suitable dominating functions that are tractable for simulation, and this is achieved by studying the properties of the Jaeger integral.
The first set of bounds is obtained from the basic properties of the Hankel function Watson (1944) and will lead in particular to a simulation algorithm for the case \(|\lambda |\ge 0.5\). Properties of the modulus of the Hankel function are mainly derived from the Nicholson integral representation Watson (1944),
where \(K_0\) is the modified Bessel function of the second kind. In particular this leads to an asymptotic (\(z\rightarrow \infty \)) series expansion (Watson 1944, Sect. 13.75):
from which the well-known asymptotic value is obtained,
From the Nicholson integral the following important property can also be derived (Watson 1944, Sect. 13.74):
Property 1
For any real, positive z, \(z|H_{\nu }(z)|^{2}\) is a decreasing function of z for \(\nu >0.5\) and an increasing function of z when \(0<\nu <0.5\).
For the case \(\nu =0.5\), \(z|H_{\nu }(z)|^{2}=2/\pi \), i.e. a constant.
These basic properties are used to prove the following bounds:
Theorem 1
For any positive z and fixed \(|\lambda |\ge 0.5\), the following bound applies:
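$$Q_{GIG}(x,z)\le \frac{1}{\pi }\,\frac{e^{-\gamma ^2x/2}}{x}\,e^{-\frac{xz^2}{2\delta ^2}},\qquad \text {(10)}$$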
and, for \(|\lambda |\le 0.5\),
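$$Q_{GIG}(x,z)\ge \frac{1}{\pi }\,\frac{e^{-\gamma ^2x/2}}{x}\,e^{-\frac{xz^2}{2\delta ^2}},$$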
with equality holding in both cases when \(|\lambda |=0.5\).
Proof
Property 1 and the limiting value of \(2/\pi \) lead to
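$$z|H_{\nu }(z)|^{2}\ge \frac{2}{\pi }\quad (\nu \ge 0.5),\qquad z|H_{\nu }(z)|^{2}\le \frac{2}{\pi }\quad (\nu \le 0.5).\qquad \text {(12)}$$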
The results follow by substitution of (12) into \(Q_{GIG}(x,z)\) (9). \(\square \)
Corollary 1
The bound in (10), applicable for \(|\lambda |\ge 0.5\), can be rewritten as
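$$\frac{\delta }{\sqrt{2\pi }}\,x^{-3/2}e^{-\gamma ^2x/2}\;\sqrt{Ga}\left( z\,\Big |\,\frac{1}{2},\frac{x}{2\delta ^2}\right) ,\qquad \text {(13)}$$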
where \(\sqrt{Ga}\) is the square-root gamma density, i.e. the density of \(X^{0.5}\) when \(X\sim Ga(x|\alpha ,\beta )\), having probability density function
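$$\sqrt{Ga}(z|\alpha ,\beta )=\frac{2\beta ^{\alpha }}{{\varGamma }(\alpha )}\,z^{2\alpha -1}e^{-\beta z^{2}},\quad z>0.$$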
Remark 1
It can be seen immediately that (13) corresponds marginally to a tempered stable process in x, and conditionally to a \(\sqrt{Ga}\) density for z, a tractable feature that will enable sampling from the dominating bivariate point process. In fact, this decomposition and that of Corollary 2 are the key to our new GIG simulation methods. We are here decomposing the bivariate point process in (x, z) as a marked point process comprising a marginal Poisson process for x and a conditional mark random variable z (Cont and Tankov 2003, Sect. 2.6.4). The important point here is that the conditional distributions of z|x are integrable and tractable probability density functions and hence a marginal-conditional sampling procedure may be adopted constructively to simulate from the joint point process in (x, z).
Remark 2
Integration with respect to z leads to a simple upper bound on the GIG Lévy density:
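$$Q_{GIG}(x)\le \frac{\delta }{\sqrt{2\pi }}\,x^{-3/2}e^{-\gamma ^2x/2},\qquad \text {(14)}$$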
which was also obtained by Zhang (2011b), and used in a rejection sampling procedure that requires a direct evaluation of the Jaeger integral, in contrast with our approach. For \(|\lambda |\le 0.5\), a similar argument leads to a lower bound with the same formula as (14).
Remark 3
The first bound for \(\nu \ge 0.5\) in (12) will be used shortly as an envelope function for rejection sampling in the case \(|\lambda |\ge 0.5\) using Corollary 1. The second bound in (12) for \(\nu \le 0.5\) will be adapted in Theorem 2 to develop more sophisticated bounds in this parameter range.
A second set of bounds can be stated as follows:
Theorem 2
Choose a point \(z_0\in (0,\infty )\) and compute \(H_0=z_0|H_{\nu }(z_0)|^2\). This will define the corner point on a piecewise lower or upper bound. Define now \(z_1 = \left( \frac{ 2^{1-2\nu }\pi }{{\varGamma }^2(\nu )}\right) ^{1/(1-2\nu )}\) and define the following functions:
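$$A(z)={\left\{ \begin{array}{ll} z\left( \frac{{\varGamma }(\nu )}{\pi }\right) ^{2}\left( \frac{2}{z}\right) ^{2\nu },&{}\quad z\le z_1,\\ \frac{2}{\pi },&{}\quad z>z_1,\end{array}\right. }$$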
and
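$$B(z)={\left\{ \begin{array}{ll} H_0\left( \frac{z}{z_0}\right) ^{1-2\nu },&{}\quad z\le z_0,\\ H_0,&{}\quad z>z_0.\end{array}\right. }$$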
Then, for \(0<\nu \le 0.5\),
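$$A(z)\ge z|H_{\nu }(z)|^{2}\ge B(z),\qquad \text {(15)}$$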
and for \(\nu \ge 0.5\),
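$$A(z)\le z|H_{\nu }(z)|^{2}\le B(z),\qquad \text {(16)}$$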
with all inequalities becoming equalities when \(\nu =0.5\), and both A(z) bounds (left side inequalities) becoming tight at \(z=0\) and \(z=\infty \).
Proof
First note that \(z|H_{\nu }(z)|^2\) is an increasing function for \(0< \nu < 0.5\) and decreasing for \(\nu > 0.5\), as stated in Property 1. When combined with the tight bound of \(2/\pi \) as \(z\rightarrow \infty \) (12), this leads directly to the \(2/\pi \) components of A(z). Similarly, the first part of the bound, \(A(z)= z({\varGamma }(\nu )/\pi )^2\left( \frac{2}{z}\right) ^{2\nu }\), is justified since Freitas (2018) proves that \(z^{2\nu }|H_{\nu }(z)|^2\) is increasing when \(\nu >0.5\) and decreasing when \(\nu <0.5\), with a tight bound of \(({\varGamma }(\nu )/\pi )^2 2^{2\nu }\) as \(z\rightarrow 0\). Thus the left-hand sides of the inequalities, the A(z) bounds, are established.
The right-hand sides of the inequalities, the B(z) bounds, are proven using the monotonicity and sign of gradient of both \(z^{2\nu }|H_{\nu }(z)|^2\) and \(z|H_{\nu }(z)|^2\). We may choose an arbitrary corner point \((z_0,H_0)\) on \(z|H_{\nu }(z)|^2\), and monotonicity of \(z|H_{\nu }(z)|^2\) (Property 1) immediately implies that, for \(z\ge z_0\),
Moreover, monotonicity and sign of gradient of \(z^{2\nu }|H_{\nu }(z)|^2\) Freitas (2018) imply that
from which the bounds B(z) are immediately obtained. \(\square \)
Remark 4
The proven bounds can be clearly visualised on a log-log plot, which highlights the asymptotic behaviour of the functions and bounds as \(z\rightarrow 0 \) and \(z\rightarrow \infty \), see Figs. 1 and 2.
Remark 5
The choice of \(z_0\in (0,\infty )\) is arbitrary. However, its value will impact the tightness of the bounding functions B(z) and will hence impact the effectiveness of our subsequent sampling algorithms and integral approximations. A suitable generic choice was found to be at the same z value as the corner point of the corresponding A(z) bounds, i.e. set \(z_0=z_1\), as plotted in Figs. 1 and 2, though further optimisation may be possible in applications.
Corollary 2
For the case \(|\lambda |<0.5\), the right hand bound in (15), B(z), can be substituted into \(Q_{GIG}(x,z)\) (9) and rearranged to obtain:
Furthermore, the two piecewise sections of this bound may be factorised in terms of left- and right-truncated square-root gamma densities:
where the truncated square-root gamma densities are, with their associated normalising constants:
and lower/upper incomplete gamma functions are defined in the usual way, for \(\text {Re}(s) > 0\), as:
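$$\gamma (s,x)=\int _0^{x}t^{s-1}e^{-t}\,dt,\qquad {\varGamma }(s,x)=\int _x^{\infty }t^{s-1}e^{-t}\,dt.$$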
\(Q_{GIG}(x,z)\) can thus be split up into two tractable point processes for simulation: a first, \(N_1\), comprising a modified tempered \(|\lambda |\)-stable process with truncated \(\sqrt{Ga}\) conditional for z; and a second, \(N_2\), comprising a modified tempered 0.5-stable process with a further truncated \(\sqrt{Ga}\) conditional for z. We will later use this union of point processes as the dominating Lévy measure in simulation of the \(|\lambda |<0.5\) case.
Corollary 3
Integration of the bounding function (18) with respect to z allows a more sophisticated estimate for the Jaeger integral and therefore the Lévy density for the GIG process, which is an upper bound for \(|\lambda |<0.5\) and a lower bound for \(|\lambda |>0.5\), following the direction of the right hand inequalities in (18) and (16). A similar procedure inserting the left hand (A(z)) inequalities from (18) and (16) yields the corresponding lower (\(|\lambda |<0.5\)) and upper (\(|\lambda |>0.5\)) bounds. Define first the bounding functions \(Q^A_{GIG}(x)\) and \(Q^B_{GIG}(x)\), corresponding to the bounds A(z) and B(z) as follows:
These are obtained by substituting the bounds A(z) or B(z) into the expression for \(Q_{GIG}(x,z)\) (9) and integrating with respect to z.
Noting that \(z_0\in (0,\infty )\) can be chosen arbitrarily, the \(Q^B_{GIG}\) estimates may be improved by optimising with respect to \(z_0\) to either maximise (\(|\lambda |< 0.5\)) or minimise (\(|\lambda |> 0.5\)) \(Q^B_{GIG}(x)\) at each point x.
Then, following the direction of the inequalities in (15) and (16) we obtain the following estimates of \(Q_{GIG}(x)\):
with all inequalities becoming equalities at \(|\lambda |=0.5\). Optimisation to obtain the required maximum and minimum functions can be achieved by numerical search.
Example plots are given in Fig. 3 in which the optimised lower and upper bounds are plotted, showing very close agreement (often indistinguishable by eye) in their estimation of \(Q_{GIG}(x)\), over various parameter ranges and large x ranges (i.e. the bounds are visually quite tight). Overlaid are the \(x^{-3/2}\) and \(x^{-(1+|\lambda |)}\) trends and also the simple approximation of (14). This simple approximation diverges significantly from our proposed bounds, particularly when \(|\lambda |\) is not close to 0.5 (when \(|\lambda |=0.5\) all bounds are equal to the true function \(Q_{GIG}(x)\) for the inverse Gaussian (IG) process).
3.1 Simulation of GIG processes with \(|\lambda |\ge 0.5\)
In this section the specific algorithm applied for simulation in the case \(|\lambda |\ge 0.5\) is detailed. In this parameter range it has been found very effective to use Theorem 1 and Corollary 1, so that the dominating bivariate process is:
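$$Q^0_{GIG}(x,z)=\frac{1}{\pi }\,\frac{e^{-\gamma ^2x/2}}{x}\,e^{-\frac{xz^2}{2\delta ^2}},$$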
which is simulated as a marked point process having factorised (marginal-conditional) intensity function (Corollary 1) as follows:
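$$Q^0_{GIG}(x,z)=\frac{\delta }{\sqrt{2\pi }}\,x^{-3/2}e^{-\gamma ^2x/2}\;\sqrt{Ga}\left( z\,\Big |\,\frac{1}{2},\frac{x}{2\delta ^2}\right) .$$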
The thinning probability for points drawn from the dominating point process is then:
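$$\frac{Q_{GIG}(x,z)}{Q^0_{GIG}(x,z)}=\frac{2}{\pi z\,|H_{|\lambda |}(z)|^{2}}\le 1.$$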
The procedure for generation of points from the process with intensity function \(Q_{GIG}(x)\) is given in Algorithm 3. In the case of \(\lambda > 0\), points generated from the process with intensity function shown in Eq. (8) are added to the set of points coming from \(Q_{GIG}(x)\) to obtain jump sizes from the GIG process.
The procedure is completed as before by independently drawing an associated jump time \(v\sim {{{\mathcal {U}}}}[0,T]\) for each accepted jump size (‘point’) \(x\in N\). The value of the GIG Lévy process at t is given in Eq. (4).
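The procedure can be sketched in Python as follows. This is our own reconstruction of Algorithm 3, not the published code: in particular the constant \(C=\delta /\sqrt{2\pi }\) for the dominating tempered stable marginal follows our reading of Corollary 1, the truncation level is arbitrary, and all names are illustrative.

```python
import numpy as np
from scipy.special import hankel1  # H^(1)_a(z); |H|^2 = J^2 + Y^2 for real z

def gig_jumps_large_lambda(lam, gamma, delta, n_epochs, rng):
    """Sketch of Algorithm 3: jump sizes for the GIG process, |lam| >= 0.5.

    Dominating bivariate process: a tempered 0.5-stable marginal in x
    (assumed constants C = delta / sqrt(2*pi), beta = gamma**2 / 2)
    with a sqrt-gamma conditional for z; pairs (x, z) are thinned with
    probability 2 / (pi * z * |H_{|lam|}(z)|^2), which is <= 1 for
    |lam| >= 0.5 by Property 1."""
    a = abs(lam)
    beta = gamma ** 2 / 2.0
    C = delta / np.sqrt(2.0 * np.pi)
    # tempered stable marginal for x, as in Algorithm 1 with alpha = 0.5
    epochs = np.cumsum(rng.exponential(size=n_epochs))
    x = (0.5 * epochs / C) ** (-2.0)
    x = x[rng.uniform(size=x.size) < np.exp(-beta * x)]
    # conditional z | x ~ sqrt-Gamma(1/2, x / (2 delta^2))
    z = np.sqrt(rng.gamma(shape=0.5, scale=2.0 * delta ** 2 / x))
    # thin with the ratio Q_GIG(x, z) / Q0_GIG(x, z)
    accept_prob = 2.0 / (np.pi * z * np.abs(hankel1(a, z)) ** 2)
    return x[rng.uniform(size=x.size) < accept_prob]

rng = np.random.default_rng(1)
jumps = gig_jumps_large_lambda(lam=-2.0, gamma=1.0, delta=1.0, n_epochs=500, rng=rng)
```

For \(\lambda >0\), gamma process jumps generated as in Algorithm 2 (with \(C=\lambda \), \(\beta =\gamma ^2/2\)) would be pooled with these points, and each jump assigned an independent time \(v\sim {{{\mathcal {U}}}}[0,T]\).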
3.1.1 Acceptance rate
The mean acceptance probability for fixed x is:
which may be bounded using Theorem 2. For the current parameter setting (\(|\lambda |>0.5\)) we have \(A(z) \le z |H_{|\lambda |}(z)|^2 \le B(z)\). Lower and upper bounds are obtained by substituting A(z) and B(z) and then by direct integration to give:
As noted before, the corner point \(z_0\in (0,\infty )\) may be chosen arbitrarily, while \(z_1\) is fixed. Hence the lower bound may be optimised with respect to \(z_0\) at each x value to obtain a tighter lower bound:
The (numerically optimised) lower bound and (fixed) upper bound on the mean acceptance rate \(\rho (x)\) are visualised for different parameter settings in Fig. 4. It can be observed, as expected from the inequalities (19), that the bounds are tight as \(x \rightarrow 0\) and \(x \rightarrow \infty \), and that acceptance rates get lower as \(|\lambda |\) increases. In all cases though they indicate that only large jumps (large x values) will be rejected and that at some point all jumps below a certain x value are highly likely to be accepted on average.
In principle we can go a little further than the acceptance rate for fixed x and compute overall acceptance rates for the algorithm, and quantities such as the expected number of acceptances/rejections overall. In particular it may be informative to calculate the expected number of accepted points if the underlying Poisson process \(\{{\varGamma }_i\}\) is truncated to values of less than say c, i.e. the process is approximated using the random truncation \(\sum _{{\varGamma }_i\le c}h({\varGamma }_i)\). Since \(\{{\varGamma }_i\}\) is a unit rate Poisson process, the expected number of accepted points at a truncation level c is:
Here \(h({\varGamma })\) is the non-increasing function used to generate a particular point process (5), and in the case of Algorithm 3 this is the tempered stable process with \(h({\varGamma })=\left( \frac{\alpha {\varGamma }}{C}\right) ^{-1/\alpha }\). \(\alpha (x)\) is the acceptance probability in any accept–reject step prior to generating z; in this case this is the term \(\exp (-\beta x)\) from the accept–reject step of the tempered stable process, where \(\beta =\gamma ^2/2\) (Algorithm 1). Computing the integrand in (20) gives the point process intensity of acceptances as a function of the unit rate Poisson process's time evolution. Figure 5 illustrates this by plotting the upper and lower bounds on this intensity for two values of \(\lambda <-0.5\). Note that on average many fewer points are accepted at the start of the series for the larger magnitude \(\lambda \), which lends some theoretical weight to the empirically observed property that the series converge more slowly as \(\lambda \) becomes increasingly negative (and the process more light-tailed); see the experimental simulations section for further results on this effect. We also plot in Fig. 6 the total number of expected rejections, computed as \(c-N_R(c)\) by numerically integrating (20) with the upper and lower bounds substituted for the integrand. This serves to illustrate again the significant impact of \(\lambda \) on the total number of rejected points for a particular truncation limit c. Also overlaid on this figure are the actual mean number of rejections averaged over 1000 simulations of the relevant process with different truncation levels c, illustrating that the true expectation lies between the theoretical bounds.
3.2 Simulation of GIG processes with \(0<|\lambda |<0.5\)
The simple bound of Theorem 1 cannot be straightforwardly applied to simulation of the GIG process for \(|\lambda | <0.5\). Instead we use the more sophisticated piecewise bounds of Theorem 2 and Corollary 2, which give us the dominating point process (18) that can be split into two independent point processes \(N_1\) and \(N_2\) as follows:
Each may be simulated using a thinned tempered stable process for x and a truncated \(\sqrt{Ga}\) density for z. Having simulated each pair (x, z), it is accepted with probability equal to the ratio \(Q_{GIG}(x,z)/Q^0_{GIG}(x,z)\). Owing to the piecewise form of the acceptance probability, the two processes may be treated independently, including their accept–reject steps, with the point process union taken at the final step to obtain the final GIG samples. The sampling procedure for point process \(N_1\) is given in Algorithm 4.
Similarly, for \(N_2\), the procedure is given in Algorithm 5.
Finally, the set of points \(N=N_1\cup N_2\) is a realisation of jump sizes from the point process having intensity \(Q_{GIG}(x)\). The procedure is completed as before by generating independent jump times \(v\sim {{{\mathcal {U}}}}[0,T]\) for all points \(x\in N\).
3.2.1 Acceptance rates
Theorem 2 is used once again to find lower bounds on the expected acceptance rates. In this case the bound to apply is \(z|H_{\lambda }(z)|^2 \le A(z)\).
For \(N_1\) the average acceptance rate \(\rho _1(x)\) is
which may be lower bounded by substituting the bound \(z|H_{\lambda }(z)|^2 \le A(z)\) from Theorem 2 and by direct integration to give:
Similarly, for \(N_2\) the average acceptance rate \(\rho _2(x)\) is
and again lower bounds are obtained by substituting \(z|H_{\lambda }(z)|^2 \le A(z)\) from Theorem 2 and direct integration to give:
We note that both of these bounds are in fact independent of the value of x. If we use B(z) to obtain an upper bound on the acceptance rates \(\rho _1\) and \(\rho _2\), both bounds are found to be 1, i.e. no useful information is gleaned from the upper bound.
3.3 Sampling from the marginal point process envelope
In Step 2 above we require simulation of the marginal point process for x in the dominating bivariate point process. For \(N_1\) this has intensity:
and we propose two possible methods for simulating this point process.
The first and most basic method uses the fact that the lower incomplete gamma function is upper bounded by the complete gamma function, i.e. \(\gamma (|\lambda |,z_0^2x/(2\delta ^2))\le {\varGamma }(|\lambda |)\), which allows generation from \(Q_1\) by thinning of a tempered stable process. We thus have the following tempered stable density as an upper bounding envelope:
This process may be routinely sampled by thinning of a positive tempered \(|\lambda |\)-stable process as described in Sect. 2.1.
Having generated points from the tempered stable envelope function the process is thinned with probability:
Experimentally we found that the acceptance rates were quite low for this bound, meaning that fairly long tempered stable series had to be generated (but see note below about the case \(\gamma =0\)), so a more sophisticated bound was sought, as follows.
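In implementation terms, the natural thinning probability implied by this first bound is the ratio of the lower incomplete to the complete gamma function, \(\gamma (|\lambda |,z_0^2x/(2\delta ^2))/{\varGamma }(|\lambda |)\), which is exactly SciPy's regularised lower incomplete gamma function. A minimal sketch, with illustrative function and parameter names (the ratio form is our reading of the bound above, not a transcription of Algorithm 6):

```python
import numpy as np
from scipy.special import gammainc  # regularised lower incomplete gamma P(s, u)

def accept_prob_Q1(x, lam_abs, z0, delta):
    """Sketch of the thinning probability
    gamma(|lambda|, z0^2 x / (2 delta^2)) / Gamma(|lambda|)
    for candidates drawn from the tempered stable envelope of Q_1."""
    u = z0**2 * x / (2.0 * delta**2)
    return gammainc(lam_abs, u)  # already normalised by Gamma(|lambda|)

p = accept_prob_Q1(np.array([0.01, 1.0, 100.0]), lam_abs=0.3, z0=1.0, delta=2.0)
```

The probability increases with x, consistent with the observation that acceptance is poor for the many small jumps that a long tempered stable series produces.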
The second and more effective method employs the following bound on the lower incomplete gamma function (Neuman 2013, Theorem 4.1):
and so
so that the point process intensity function is upper bounded by
where (24) follows from \(0<e^{-z_0^2 x/(2\delta ^2)} \le 1\) for \(x \in [0, \infty )\). The bounding process \({\bar{Q}}^2_1(x)\) may be simulated as a single gamma process, having parameters \(a = \frac{z_0}{\pi ^2 |\lambda |H_0}\) and \(\beta =\gamma ^2/2\). It is then thinned with the following probability:
The whole procedure is given in Algorithm 6.
Note that a slightly more refined bounding procedure may be implemented by observing that (23) is the intensity function of the union of two gamma processes, which may be independently simulated and their union thinned with probability \(\frac{Q_1(x)}{{\bar{Q}}_1^{2a}(x)}\), at the expense of generating one additional gamma process.
Note also that the second method, using Algorithm 6, does not work for \(\gamma =0\), since the bounding point process cannot be simulated in this case; instead the first method, with bound \({\bar{Q}}_1^1(x)\), must be used when \(\gamma =0\). For all other parameter settings the second method, using bounding function \({\bar{Q}}_1^2(x)\), is regarded as the superior approach: it has higher acceptance probabilities, and it relies on an underpinning gamma process rather than a \(|\lambda |\)-stable process, whose series representation is known to converge to zero very rapidly compared with that of the \(|\lambda |\)-stable process Rosiński (2001), Barndorff-Nielsen and Shephard (2001).
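For reference, the rapidly converging gamma process series mentioned here can be generated with a standard Rosiński-type construction (Rosiński 2001): for Lévy density \(a e^{-\beta x}/x\), the dominating density \(a/(x(1+\beta x))\) has an invertible upper tail, giving the mapping \(h({\varGamma })=1/(\beta (e^{{\varGamma }/a}-1))\), and candidates are thinned with probability \((1+\beta x)e^{-\beta x}\le 1\). A sketch under these assumptions (parameter names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_process_jumps(a, beta, M):
    """Shot-noise series for a gamma process with Levy density a*exp(-beta*x)/x.

    The dominating density a / (x * (1 + beta*x)) has tail mass
    a*log(1 + 1/(beta*x)), whose inverse is h() below; candidates are
    thinned with probability (1 + beta*x) * exp(-beta*x)."""
    gammas = np.cumsum(rng.exponential(1.0, size=M))
    x = 1.0 / (beta * (np.exp(gammas / a) - 1.0))   # h(Gamma): decreasing jumps
    keep = rng.uniform(size=M) < (1.0 + beta * x) * np.exp(-beta * x)
    return x[keep]

jumps = gamma_process_jumps(a=5.0, beta=1.0, M=200)
```

Because \(h\) decays exponentially in \({\varGamma }\), the retained jumps shrink very quickly, which is the convergence advantage noted above.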
Similarly, for \(N_2\) we follow a thinning approach. The marginal dominating Lévy density for \(N_2\) is:
Using the complete gamma function as part of an upper bound for this density, we can sample \(N_2\) by thinning a tempered stable process with intensity \(\frac{e^{-x\gamma ^2/2} (2\delta ^2)^{0.5} {\varGamma }(0.5)}{\pi ^2H_0 x^{3/2}}\), applying thinning with probability \({\varGamma }(0.5, z_0^2 x/(2\delta ^2))/{\varGamma }(0.5)\). This procedure is found to work well and is summarised in Algorithm 7.
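In implementation terms, the thinning probability \({\varGamma }(0.5, z_0^2 x/(2\delta ^2))/{\varGamma }(0.5)\) is exactly the regularised upper incomplete gamma function, available directly in SciPy. A minimal sketch (function and argument names are ours):

```python
import numpy as np
from scipy.special import gammaincc  # regularised upper incomplete gamma Q(s, u)

def accept_prob_N2(x, z0, delta):
    """Thinning probability Gamma(0.5, z0^2 x / (2 delta^2)) / Gamma(0.5)
    for candidates from the tempered stable envelope of N_2."""
    return gammaincc(0.5, z0**2 * x / (2.0 * delta**2))

p = accept_prob_N2(np.array([1e-3, 0.1, 10.0]), z0=1.0, delta=2.0)
```

Here the probability decreases with x, so small jumps, of which there are many, are accepted with high probability, which is consistent with the good performance reported for Algorithm 7.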
Rejection rates could potentially be further reduced by more tightly upper bounding the term \({\varGamma }(0.5, \frac{z_0^2 x}{2\delta ^2})\). The following simple bound is suitable and valid for \(0<s<1\):
This was not explored in the simulations as Algorithm 7 was found to perform well.
4 Simulations
In this section example simulations from the new method are generated up to \(t=T\), where \(T=1\), and compared with a random variable generator for the GIG distribution Devroye (2014), Statovic (2017) (a ‘ground truth’ simulation for comparing the distribution at \(t=T\), but, unlike our method, not able to generate the entire path of the process). In our new shot noise case, \(M=1000\) terms are generated for each point process series representation and \(10^6\) random realisations of the GIG random variable are generated to produce Figs. 9-15, in which examples of the principal parameter ranges and the edge case \(\gamma =0\) are presented. Note that M represents the total number of terms generated for each of the underlying tempered stable and gamma processes, rather than the final number of accepted terms. \(10^4\) random samples were generated in each case for our new shot noise method and \(10^6\) samples for the ‘ground truth’ method of Devroye (2014), Statovic (2017) at \(t=1\). In Figs. 16-19 example pathwise simulations are plotted for several parameter settings, drawing 30 independent realisations of the GIG process in each case, setting \(T=1\). For cases where \(\lambda >0\) the additional term in the Lévy density, \(\lambda \frac{e^{-x\gamma ^2/2}}{x}\), see (7), is generated as an independent gamma process and the union of these points with those from \(Q_{GIG}\) is taken to generate the final simulated process. Once the union of all the points has been taken, to form a set of jump sizes \(\{x_i\}\), independent and uniformly distributed jump times \(\{v_i\in [0,T]\}\) are generated and the process path may be realised as \(x(t)=\sum _{i} x_i {\mathbb {1}}(v_i\le t)\), \(t\in [0,T]\).
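The final path-assembly step just described might be sketched as follows: a simple, non-optimised evaluation of the jump sum on a time grid, with illustrative names and toy jump sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

def gig_path(jump_sizes, T, tgrid):
    """Realise a pathwise skeleton x(t) = sum_i x_i * 1(v_i <= t) from
    simulated jump sizes, with jump times v_i drawn uniformly on [0, T]."""
    v = rng.uniform(0.0, T, size=len(jump_sizes))
    # For each grid time, sum the jumps that have already arrived.
    return np.array([jump_sizes[v <= t].sum() for t in tgrid])

tgrid = np.linspace(0.0, 1.0, 101)
path = gig_path(np.array([0.5, 0.2, 0.1]), T=1.0, tgrid=tgrid)
```

The resulting skeleton is a non-decreasing step function, as expected for a subordinator, and at \(t=T\) it equals the sum of all generated jumps.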
As previously discussed, the resulting process approximates a GIG Lévy process. By comparing the distribution of the Lévy process at \(t=1\) and the distribution of GIG random variables generated using Devroye (2014), Statovic (2017), the quality of this approximation may be evaluated, at least at the end-point of the interval \(t=T=1\). A possible statistical test in this case is the two-sample Kolmogorov–Smirnov (KS) test since in most cases the CDF of a GIG random variable is not available in closed form. The KS test checks whether the two samples come from the same distribution by reporting the KS statistic, computed as the supremum of the absolute difference between the empirical CDFs of the two samples Hodges (1958).
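Such a two-sample KS comparison can be run directly with SciPy. The snippet below uses stand-in gamma samples purely to show the mechanics; in the paper's setting, one sample would be end-points \(x(T)\) of the shot-noise GIG process and the other would be draws from the generator of Devroye (2014), Statovic (2017).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Stand-in samples from a common distribution (illustrative only).
sample_a = rng.gamma(2.0, 1.0, size=10_000)
sample_b = rng.gamma(2.0, 1.0, size=10_000)

stat, pvalue = ks_2samp(sample_a, sample_b)
# stat is the supremum distance between the two empirical CDFs;
# a large p-value gives no evidence against a common distribution.
```

Comparing the reported `pvalue` against a chosen significance level reproduces the decision rule used for Figs. 7 and 8.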
The results of such a KS test are reported in Fig. 7, for values of \(\lambda <-0.5\), varying shot noise series length M, and using random sample sizes of 10000. The remaining parameters were \(\gamma =0.1\), \(\delta =2\). It can be seen that the convergence of the KS statistic is slower as \(\lambda \) becomes more negative, as alluded to in the earlier analysis. The final p-values at \(M=10000\) are typically well above 0.1, indicating that the null hypothesis cannot be rejected for this M at a significance level of 0.1, with considerably earlier convergence for the smaller absolute values of \(\lambda \).
For \(0<|\lambda |<0.5\) we observed very rapid convergence of the KS statistics to acceptable levels as M increases, typically within a few dozen terms in the series, so we do not display these results.
For \(\lambda >0.5\), Fig. 8 gives the equivalent set of KS curves, showing more rapid convergence than their negative counterparts (presumably because of the additional gamma process that is mixed into the point process) and again showing p-values typically well above 0.1 for \(M=10000\).
Of course, we do recognise that our generated GIG processes are approximate, and so it should always be possible to reject the null hypothesis for sufficiently large sample sets, but nevertheless these tests do give a helpful indication of the new algorithm’s performance for different ranges of \(\lambda \). Similar behaviours were observed for other settings of parameters \(\gamma \) and \(\delta \).
Finally, the computational burden of the algorithms is linear in M, the number of terms in the series. Moreover, there are no random waiting times, since the accept–reject steps are performed as one-off tests for each point generated in the series. The most significant contributors to the load are the calculation of the various acceptance probabilities and the sampling of the auxiliary variables z. The Matlab and Python code runs at similar speed on standard laptop platforms. On a Microsoft Surface (Intel(R) Core(TM) i7-1065G7 CPU @ 1.30 GHz) the computation time was very roughly \(10^{-6}M\) s per random process realisation with \(|\lambda |\ge 0.5\), and approximately double that for the more complex \(|\lambda |<0.5\) case. Note also that in Algorithm 3 significant parallelisation is possible, since Steps 1–3 do not depend on \(\lambda \). Thus it would be possible to simulate many different realisations for different \(\lambda \) by performing one run of Steps 1–3 and then performing Step 4 independently for each \(\lambda \) value required. This could be of significant value in ensemble and Monte Carlo inference procedures, as well as for rapid visualisation of different \(\lambda \) scenarios.
5 Discussion
This paper has presented a generic simulation methodology for GIG processes. The methods are simple and efficient to implement and have good acceptance rates, which will make them of use in applications for practitioners. Moreover, we provide code in Matlab and Python in order to give researchers immediate access to the methods.
Simulation of GIG processes opens up the possibility of simulation and inference for many more complex processes; in particular, a direct extension takes the sampled GIG process and generates other processes within the generalised shot-noise methodology Rosiński (2001):
where \(U_i\) are i.i.d. uniform random variates and \(H(u,\gamma )\) is a non-increasing function of \(\gamma \). Of particular interest will be the mean–variance mixture of Gaussians, with:
and \(J_{i}=h({\varGamma }_{i})\) is the ith ordered jump of the simulated GIG process. This approach, which is exactly equivalent to the time-changed Brownian motion description of Barndorff-Nielsen (1997), leads directly to a simulation method for the generalised hyperbolic (GH) process, and its conditionally Gaussian form will enable inference for these processes, using a conditionally Gaussian approach similar in spirit to Lemke and Godsill (2015), Godsill et al. (2019). Our continued work on these processes will study these inferential approaches with the GIG/GH model and also their use as driving processes for stochastic differential equations, again extending the approach of Godsill et al. (2019) to these more general classes of process.
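Under the usual normal mean–variance mixture parameterisation, each GIG jump \(J_i\) contributes a conditionally Gaussian GH jump with mean proportional to \(J_i\) and variance proportional to \(J_i\). The sketch below assumes that parameterisation; the skew and scale parameters \(\mu _w\) and \(\sigma _w\) are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(5)

def gh_jumps_from_gig(gig_jumps, mu_w, sigma_w):
    """Mean-variance mixture step: each GIG jump J_i yields a generalised
    hyperbolic jump  mu_w * J_i + sigma_w * sqrt(J_i) * eps_i,  eps_i ~ N(0,1).
    (mu_w, sigma_w are illustrative skew/scale parameters.)"""
    eps = rng.standard_normal(len(gig_jumps))
    return mu_w * gig_jumps + sigma_w * np.sqrt(gig_jumps) * eps

w = gh_jumps_from_gig(np.array([0.4, 0.1, 0.05]), mu_w=0.0, sigma_w=1.0)
```

Pairing each GH jump with the corresponding GIG jump time then yields a pathwise GH simulation by the same union-and-sum construction used for the GIG process itself.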
Notes
Matlab and Python code can be found at: https://github.com/yamankindap/GiG.
References
Achim, A., Bezerianos, A., Tsakalides, P.: Novel Bayesian multiscale method for speckle removal in medical ultrasound images. IEEE Trans. Med. Imaging 20(8), 772–783 (2001)
Achim, A., Buxton, B., Tzagkarakis, G., Tsakalides, P.: Compressive sensing for ultrasound RF echoes using \(\alpha \)-Stable Distributions. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pp. 4304–4307 (2010)
Achim, A., Kuruoǧlu, E.E., Zerubia, J.: SAR image filtering based on the heavy-tailed Rayleigh model. IEEE Trans. Image Process. 15(9), 2686–2693 (2006)
Amini, A., Unser, M.: Sparsity and infinite divisibility. IEEE Trans. Inf. Theory 60(4), 2346–2358 (2014)
Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge Studies in Advanced Mathematics. Cambridge University Press (2009). https://doi.org/10.1017/CBO9780511809781
Azzaoui, N., Clavier, L.: Statistical channel model based on \(\alpha \)-stable random processes and application to the 60 GHz ultra wide band channel. IEEE Trans. Commun. 58(5), 1457–1467 (2010)
Barndorff-Nielsen, O., Halgreen, C.: Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 38, 309–311 (1977)
Barndorff-Nielsen, O., Mikosch, T., Resnick, S.: Lévy Processes: Theory and Applications. Birkhäuser Boston (2001)
Barndorff-Nielsen, O.E.: Processes of normal inverse Gaussian type. Finance and Stochastics 2(1), 41–68 (1997). https://ideas.repec.org/a/spr/finsto/v2y1997i1p41-68.html
Barndorff-Nielsen, O.E., Shephard, N.: Non-Gaussian Ornstein-Uhlenbeck-based models and some of their uses in financial economics. J. Royal Stat. Soc.: Series B (Stat. Methodol.) 63(2), 167–241 (2001). https://doi.org/10.1111/1467-9868.00282
Barth, A., Stein, A.: Approximation and simulation of infinite-dimensional Lévy processes (2017)
Bertoin, J.: Lévy Processes. Cambridge Tracts in Mathematics, 121. Cambridge University Press (1997)
Carrillo, R.E., Ramirez, A.B., Arce, G.R., Barner, K.E., Sadler, B.M.: Robust compressive sensing of sparse signals: a review. EURASIP J. Adv. Signal Process. 2016(1), 108 (2016)
Chen, X., Wang, Z.J., McKeown, M.J.: Asymptotic analysis of robust lassos in the presence of noise with large variance. IEEE Trans. Inf. Theory 56(10), 5131–5149 (2010)
Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall/CRC, London (2003)
Devroye, L.: Random variate generation for the generalized inverse Gaussian distribution. Stat. Comput. 24, 239–246 (2014)
Eberlein, E.: Application of generalized hyperbolic Lévy motions to finance. In: Barndorff-Nielsen, O., Resnick, S., Mikosch, T. (eds.) Lévy Processes. Birkhäuser, Boston, MA (2001)
Eberlein, E., v. Hammerstein, E.A.: Generalized hyperbolic and inverse Gaussian distributions: Limiting cases and approximation of processes. In: R.C. Dalang, M. Dozzi, F. Russo (eds.) Seminar on Stochastic Analysis, Random Fields and Applications IV, pp. 221–264. Birkhäuser Basel, Basel (2004)
Fahs, J. and Abou-Faycal, I.: On the capacity of additive white alpha-stable noise channels. In: 2012 IEEE International Symposium on Information Theory Proceedings, pp. 294–298. IEEE (2012)
Fama, E.F.: The behavior of stock-market prices. J. Business 38(1), 34–105 (1965)
Ferguson, T.S., Klass, M.J.: A representation of independent increment processes without Gaussian components. Ann. Math. Stat. 43(5), 1634–1643 (1972). https://doi.org/10.1214/aoms/1177692395
de Freitas, M.L., Egan, M., Clavier, L., Goupil, A., Peters, G.W., Azzaoui, N.: Capacity bounds for additive symmetric \(\alpha \)-stable noise channels. IEEE Trans. Inf. Theory 63, 5115–5123 (2017)
Freitas, P.: Sharp bounds for the modulus and phase of Hankel functions with applications to Jaeger integrals. Math. Comput. 87, 289–308 (2018)
Godsill, S., Rayner, P.: Statistical reconstruction and analysis of autoregressive signals in impulsive noise using the Gibbs sampler. IEEE Trans. Speech Audio Process. 6(4), 352–372 (1998). https://doi.org/10.1109/89.701365
Godsill, S., Riabiz, M., Kontoyiannis, I.: The Lévy state space model. In: 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 487–494 (2019). https://doi.org/10.1109/IEEECONF44664.2019.9048715
Grabchak, M.: Rejection sampling for tempered Lévy processes. Stat. Comput. 29, 549–558 (2019)
Hodges, J.L.: The significance probability of the Smirnov two-sample test. Ark. Mat. 3, 469–486 (1958)
Hörmann, W., Leydold, J.: Generating generalized inverse Gaussian random variates. Stat. Comput. 24(4), 547–557 (2013). https://doi.org/10.1007/s11222-013-9387-3
Imai, J., Kawai, R.: On finite truncation of infinite shot noise series representation of tempered stable laws. Physica A-Stat. Mechan. Appl. 390, 4411–4425 (2011)
Liebeherr, J., Burchard, A., Ciucu, F.: Delay bounds in communication networks with heavy-tailed and self-similar traffic. IEEE Trans. Inf. Theory 58(2), 1010–1024 (2012)
Kallenberg, O.: Foundations of Modern Probability, 2nd edn. Springer-Verlag, Berlin (2002)
Katz, R.W., Brown, B.G.: Extreme events in a changing climate: variability is more important than averages. Clim. Change 21(3), 289–302 (1992)
Katz, R.W., Parlange, M.B., Naveau, P.: Statistics of extremes in hydrology. Adv. Water Resour. 25(8–12), 1287–1304 (2002)
Lemke, T., Godsill, S.: Inference for models with asymmetric \(\alpha \)-stable noise processes. In: Unobserved Components and Time Series Econometrics. Oxford University Press, chap. 9 (2015)
Lewis, P.A.W., Shedler, G.S.: Simulation of nonhomogeneous Poisson processes by thinning. Naval Research Logistics Quarterly 26(3), 403–413 (1979). https://doi.org/10.1002/nav.3800260304. https://ideas.repec.org/a/wly/navlog/v26y1979i3p403-413.html
Lombardi, M.J., Godsill, S.J.: On-line Bayesian estimation of signals in symmetric \(\alpha \)-stable noise. Signal Process, IEEE Trans 54(2), 775–779 (2006)
Lopes, M.E.: Unknown sparsity in compressed sensing: denoising and inference. IEEE Trans. Inf. Theory 62(9), 5145–5166 (2016)
Mandelbrot, B.: New methods in statistical economics. J. Polit. Econ. 71(5), 421–440 (1963)
Neuman, E.: Inequalities and bounds for the incomplete gamma function. Results Math. (2013). https://doi.org/10.1007/s00025-012-0263-9
Nikias, C.L., Shao, M.: Signal processing with alpha-stable distributions and applications. Adaptive and Learning Systems for Signal Processing, Communications, and Control. Wiley (1995)
Qu, Y., Dassios, A., Zhao, H.: Exact simulation of Ornstein-Uhlenbeck tempered stable processes. J. Appl. Prob. (2021)
Riabiz, M., Ardeshiri, T., Kontoyiannis, I., Godsill, S.: Nonasymptotic Gaussian approximation for inference with stable noise. IEEE Trans. Inf. Theor. PP, 1–1 (2020). https://doi.org/10.1109/TIT.2020.2996135
Rosiński, J.: Series Representations of Lévy Processes from the Perspective of Point Processes, pp. 401–415. Birkhäuser Boston, Boston, MA (2001). https://doi.org/10.1007/978-1-4612-0197-7_18
Rosiński, J.: Tempering stable processes. Stochastic Processes and their Applications 117(6), 677–707 (2007). https://doi.org/10.1016/j.spa.2006.10.003. https://www.sciencedirect.com/science/article/pii/S030441490600144X
Rydberg, T.H.: The normal inverse Gaussian Lévy process: simulation and approximation. Commun. Stat. Stochastic Models 13(4), 887–910 (1997). https://doi.org/10.1080/15326349708807456
Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. CRC Press, London (1994)
Shah, A., Wilson, A., Ghahramani, Z.: Student-t Processes as Alternatives to Gaussian Processes. In: S. Kaski, J. Corander (eds.) Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 33, pp. 877–885. PMLR, Reykjavik, Iceland (2014). http://proceedings.mlr.press/v33/shah14.html
Shephard, N., Barndorff-Nielsen, O.E.: Normal Modified Stable Processes. Economics Series Working Papers 72, University of Oxford, Department of Economics (2001). https://ideas.repec.org/p/oxf/wpaper/72.html
Shevlyakov, G., Kim, K.: Robust minimax detection of a weak signal in noise with a bounded variance and density value at the center of symmetry. IEEE Trans. Inf. Theory 52(3), 1206–1211 (2006)
Statovic: gigrnd(p, a, b, samplesize). MATLAB Central File Exchange (2017). https://www.mathworks.com/matlabcentral/fileexchange/53594-gigrnd-p-a-b-samplesize
Tzagkarakis, G.: Bayesian Compressed Sensing Using \(\alpha \)-stable Distributions. Ph.D. thesis, Department of Computer Science, University of Crete, Crete, Greece (2009)
Unser, M., Tafti, P., Amini, A., Kirshner, H.: A unified formulation of Gaussian versus sparse stochastic processes - 2014; Part II: discrete-domain theory. IEEE Trans. Inf. Theory 60(5), 3036–3051 (2014)
Unser, M., Tafti, P., Sun, Q.: A unified formulation of Gaussian versus sparse stochastic processes–2014; Part I: continuous-domain theory. IEEE Trans. Inf. Theory 60(3), 1945–1962 (2014)
Unser, M., Tafti, P.D.: An Introduction to Sparse Stochastic Processes. Cambridge University Press, Cambridge (2014)
Warren, D.J., Thomas, J.B.: Asymptotically robust detection and estimation for very heavy-tailed noise. IEEE Trans. Inf. Theory 37(3), 475–481 (1991)
Watson, G.: A Treatise on the Theory of Bessel Functions, 2nd edn. Cambridge University Press, Cambridge (1944)
Wolpert, R., Ickstadt, K.: Poisson/gamma random field models for spatial statistics. Biometrika 85(2), 251–267 (1998). https://doi.org/10.1093/biomet/85.2.251
Wolpert, R.L., Ickstadt, K.: Simulation of Lévy Random Fields, pp. 227–242. Springer New York, New York, NY (1998). https://doi.org/10.1007/978-1-4612-1732-9_12
Zhang, S.: Exact simulation of tempered stable Ornstein-Uhlenbeck processes. J. Stat. Comput. Simul. 81(11), 1533–1544 (2011). https://doi.org/10.1080/00949655.2010.494247
Zhang, S.: Transition law-based simulation of generalized inverse Gaussian Ornstein-Uhlenbeck processes. Methodol. Comput. Appl. Probab. 13, 619–656 (2011). https://doi.org/10.1007/s11009-010-9179-6
Zhou, Z., Yu, J.: Estimation of block sparsity in compressive sensing. arXiv preprint arXiv:1701.01055 (2017)
Zhu, D., Galbraith, J.: A generalized asymmetric Student-t distribution with application to financial econometrics. J. Econom. 157(2), 297–305 (2010). https://EconPapers.repec.org/RePEc:eee:econom:v:157:y:2010:i:2:p:297-305
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Godsill, S., Kındap, Y. Point process simulation of generalised inverse Gaussian processes and estimation of the Jaeger integral. Stat Comput 32, 13 (2022). https://doi.org/10.1007/s11222-021-10072-0