Abstract
In the literature, retrial queues with batch arrivals and heavy-tailed service times have been studied, and the so-called equivalence theorem has been established under the condition that the service time has a heavier tail than the batch size. The equivalence theorem provides the distribution (or tail) equivalence between the total number of customers in the system for the retrial queue and that for the corresponding standard (non-retrial) queue. In this paper, under the assumption of regularly varying tails, we remove this condition by allowing the tail of the service time to be either heavier or lighter than that of the batch size. The main contribution of this paper is an asymptotic characterization of the difference between two tail probabilities: that of the total number of customers in the system for the \(M^X/G/1\) retrial queue and that of the total number of customers in the corresponding standard (non-retrial) queue.
References
Artalejo, J.R.: A classified bibliography of research on retrial queues: progress in 1990–1999. TOP 7, 187–211 (1999)
Artalejo, J.R.: Accessible bibliography on retrial queues. Math. Comput. Model. 30, 1–6 (1999)
Artalejo, J.R.: Accessible bibliography on retrial queues: progress in 2000–2009. Math. Comput. Model. 51, 1071–1081 (2010)
Artalejo, J.R., Gómez-Corral, A.: Retrial Queueing Systems. Springer, Berlin (2008)
Artalejo, J.R., Phung-Duc, T.: Single server retrial queues with two way communication. Appl. Math. Model. 37, 1811–1822 (2013)
Asmussen, S., Klüppelberg, C., Sigman, K.: Sampling at subexponential times, with queueing applications. Stoch. Process. Appl. 79, 265–286 (1999)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1989)
Borst, S.C., Boxma, O.J., Nunez-Queija, R., Zwart, A.P.: The impact of the service discipline on delay asymptotics. Perform. Eval. 54, 175–206 (2003)
Boxma, O., Denisov, D.: Sojourn time tails in the single server queue with heavy-tailed service times. Queueing Syst. 69, 101–119 (2011)
Boxma, O., Zwart, B.: Tails in scheduling. ACM Sigmetrics Perform. Eval. Rev. 34, 13–20 (2007)
Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, Heidelberg (1997)
Falin, G.I.: A survey of retrial queues. Queueing Syst. 7, 127–168 (1990)
Falin, G.I., Templeton, J.G.C.: Retrial Queues. Chapman & Hall, London (1997)
Flajolet, P., Sedgewick, R.: Analytic Combinatorics. Cambridge University Press, Cambridge (2008)
Foss, S., Korshunov, D.: Sampling at a random time with a heavy-tailed distribution. Markov Process. Related Fields 6(4), 543–568 (2000)
Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York (2011)
Goovaerts, M.J., D’Hooge, L., De Pril, N.: On a class of generalized \(\Gamma \)-convolutions (Part I). Scand. Actuar. J. 1, 21–30 (1977)
Grandell, J.: Mixed Poisson Processes. Chapman & Hall, London (1997)
Kim, J.: Tail asymptotics for the queue size distribution in an \(M^X/G/1\) retrial queue. J. Appl. Math. Inform. 33, 185–191 (2015)
Kim, B., Kim, J.: Exact tail asymptotics for the \(M/M/m\) retrial queue with nonpersistent customers. Oper. Res. Lett. 40, 537–540 (2012)
Kim, J., Kim, B.: A survey of retrial queueing systems. Ann. Oper. Res. 247, 3–36 (2016)
Kim, B., Kim, J., Kim, J.: Tail asymptotics for the queue size distribution in the \(MAP/G/1\) retrial queue. Queueing Syst. 66, 79–94 (2010)
Kim, J., Kim, J., Kim, B.: Regularly varying tail of the waiting time distribution in \(M/G/1\) retrial queue. Queueing Syst. 65, 365–383 (2010)
Kim, J., Kim, J., Kim, B.: Tail asymptotics of the queue size distribution in the \(M/M/m\) retrial queue. J. Comput. Appl. Math. 236, 3445–3460 (2012)
Kim, J., Kim, B., Ko, S.-S.: Tail asymptotics for the queue size distribution in an \(M/G/1\) retrial queue. J. Appl. Probab. 44, 1111–1118 (2007)
Kulkarni, V.G., Liang, H.M.: Retrial queues revisited. In: Dshalalow, J.H. (ed.) Frontiers in Queueing: Models and Applications in Science and Engineering, pp. 19–34. CRC Press, Boca Raton (1997)
Liu, B., Wang, X., Zhao, Y.Q.: Tail asymptotics for \(M/M/c\) retrial queues with nonpersistent customers. Oper. Res.: Int. J. 12, 173–188 (2012)
Liu, B., Wang, J., Zhao, Y.Q.: Tail asymptotics of the waiting time and the busy period for the \(M/G/1/K\) queues with subexponential service times. Queueing Syst. 76(1), 1–19 (2014)
Liu, B., Zhao, Y.Q.: Analyzing retrial queues by censoring. Queueing Syst. 64, 203–225 (2010)
Liu, B., Zhao, Y.Q.: Tail asymptotics for a retrial queue with Bernoulli schedule. Mathematics 10(15), 2799 (2022)
Liu, B., Zhao, Y.Q.: Tail asymptotics for the \(M_1, M_2/G_1, G_2/1\) retrial queue with non-preemptive priority. Queueing Syst. 96, 169–199 (2020)
Masuyama, H.: Subexponential tail equivalence of the stationary queue length distributions of \(BMAP/GI/1\) queues with and without retrials. arXiv:1310.4608v3 (2014)
Ramsay, C.M.: Exact waiting time and queue size distribution for equilibrium \(M/G/1\) queues with Pareto service. Queueing Syst. 57, 147–155 (2007)
Resnick, S.I.: Extreme Values, Regular Variation, and Point Processes. Springer, Berlin (2008)
Shang, W., Liu, L., Li, Q.-L.: Tail asymptotics for the queue length in an \(M/G/1\) retrial queue. Queueing Syst. 52, 193–198 (2006)
Stam, A.J.: Regular variation of the tail of a subordinated probability distribution. Adv. Appl. Probab. 5, 308–327 (1973)
Yamamuro, K.: The queue length in an \(M/G/1\) batch arrival retrial queue. Queueing Syst. 70, 187–205 (2012)
Yang, T., Templeton, J.G.C.: A survey on retrial queues. Queueing Syst. 2, 201–233 (1987)
Acknowledgements
We acknowledge the valuable comments and suggestions made by the two anonymous reviewers and the associate editor, which led to a significantly improved final version. This work was supported in part by the National Natural Science Foundation of China (Grant No. 72271004), the Key Scientific Research Project of the Education Department of Anhui Province (Grant No. 2022AH050247), the Research Project of Anhui Jianzhu University (Grant No. 2016QD118), and a Discovery Grant (Grant No. 315660) from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Appendices
Collection of concepts and results
Definition A.1
(e.g. see [11], pp. 564–565) A measurable function \(U:(0,\infty )\rightarrow (0,\infty )\) is regularly varying at \(\infty \) with index \(\sigma \in (-\infty ,\infty )\), denoted by \(U\in \mathcal R_{\sigma }\), iff \(\lim _{x\rightarrow \infty }U(tx)/U(x)=t^{\sigma }\) for all \(t>0\). If \(\sigma =0\) we call U slowly varying, i.e. \(\lim _{x\rightarrow \infty }U(tx)/U(x)=1\) for all \(t>0\).
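For example, for any \(\sigma \in (-\infty ,\infty )\) the function \(U(x)=x^{\sigma }\log (1+x)\) belongs to \(\mathcal R_{\sigma }\), since
$$\begin{aligned} \frac{U(tx)}{U(x)}=t^{\sigma }\cdot \frac{\log (1+tx)}{\log (1+x)}\rightarrow t^{\sigma },\quad x\rightarrow \infty , \end{aligned}$$
for every \(t>0\); in particular, \(\log (1+x)\) itself is slowly varying.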
Furthermore, the class of regularly varying distributions is defined as (see, for example, [11], p. 50)
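A standard formulation of this class, consistent with Definition A.1, is
$$\begin{aligned} \mathcal R=\Big \{F\ \text {distribution on } (0,\infty ):\ 1-F(x)=x^{-\alpha }L(x)\ \text {for some } \alpha \ge 0\ \text {and some slowly varying } L\Big \}. \end{aligned}$$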
The following lemma is the uniform convergence theorem for regularly varying functions.
Lemma A.1
(Bingham et al. [7], p. 22) If f is regularly varying at \(\infty \) with index \(\sigma \le 0\), then for \(\sigma <0\), \(f(tx)/f(x)\rightarrow t^{\sigma }\) as \(x\rightarrow \infty \) uniformly in t on each \([a,\infty )\) \((0<a<\infty )\); for \(\sigma =0\), \(f(tx)/f(x)\rightarrow 1\) as \(x\rightarrow \infty \) uniformly in t on each \([a,b]\) \((0<a\le b<\infty )\).
The following lemma is the monotone density theorem for regularly varying functions.
Lemma A.2
([11], p. 568) Let \(U(x)=\int _x^{\infty }u(y)\hbox {d}y\) (or \(\int _0^x u(y)\hbox {d}y\)), where u is ultimately monotone (i.e. u is monotone on \((z,\infty )\) for some \(z>0\)). If \(U(x)\sim c x^{\sigma }L(x)\) as \(x\rightarrow \infty \), where \(c> 0\), \(\sigma \in (-\infty ,\infty )\) and L is slowly varying, then \(u(x)\sim c\sigma x^{\sigma -1}L(x)\) as \(x\rightarrow \infty \).
Definition A.2
(e.g. see Foss et al. [16], p. 40) A distribution F on \((0,\infty )\) belongs to the class of subexponential distributions, denoted by \(F\in \mathcal S\), if \(\lim _{x\rightarrow \infty }(1-F^{(2)}(x))/(1-F(x))=2\), where \(F^{(n)}=\underbrace{F*F* \cdots * F}_{n}\) denotes the n-fold convolution of F with itself.
Note that \(\mathcal R\) is a subset of \(\mathcal S\), i.e. \(\mathcal R\subset \mathcal S\) (see, e.g., [11], p. 50).
Definition A.3
(Grandell [18], p. 146) A distribution F on \((0,\infty )\) is called light-tailed, if there exists \(s_0>0\) such that \(\int _{0}^{\infty }e^{sx}\hbox {d}F(x)<\infty \) for all \(s<s_0\).
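For instance, the exponential distribution \(F(x)=1-e^{-\mu x}\) is light-tailed (take any \(s_0<\mu \)), whereas a distribution with a regularly varying tail \(1-F(x)=x^{-d}L(x)\), \(d>0\), is not, since \(\int _{0}^{\infty }e^{sx}\,\hbox {d}F(x)=\infty \) for every \(s>0\).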
Lemma A.3
([11], p. 567) Let L be a slowly varying function on \((0,\infty )\). Then, for \(b>1\), \(\int _{x}^{\infty }t^{-b}L(t)\textrm{d}t\sim (b-1)^{-1} x^{-b+1}L(x)\) as \(x\rightarrow \infty \).
Lemma A.4
([6], or Foss and Korshunov [15]) Assume that \(N_t\) is a Poisson process with rate \(\lambda >0\), and that \(T>0\) is a rv independent of \(N_t\) with tail \(P\{T>x\}\) heavier than \(e^{-\sqrt{x}}\). Then, \(P\{N_T>j\}\sim P\{T>j/\lambda \}\) as \(j\rightarrow \infty \).
Note that by Assumption A1, both the service time B and the equilibrium service time \(B^{(e)}\) have tails heavier than \(e^{-\sqrt{x}}\).
Lemma A.5
Let N be a discrete non-negative integer-valued rv, and let \(\{Y_k\}_{k=1}^{\infty }\) be a sequence of non-negative, independently and identically distributed rvs. Define \(S_0\equiv 0\) and \(S_n=\sum _{k=1}^n Y_k\).
(i) If \(P\{Y_k>x\}\sim c_Y x^{-h}L(x)\) as \(x\rightarrow \infty \) and \(P\{N>n\}\sim c_N n^{-h}L(n)\) as \(n\rightarrow \infty \), where \(h> 1\), \(c_Y\ge 0\) and \(c_N\ge 0\), then
$$\begin{aligned} P\{S_N > x\}\sim \left( c_N \mu _Y^{h}+ c_Y\mu _N\right) x^{-h}L(x),\quad x\rightarrow \infty , \end{aligned}$$
(A.1)
where \(E(N)=\mu _N<\infty \) and \(E(Y_k)=\mu _Y<\infty \).

(ii) If \(P\{N>n\}\sim c_N n^{-h_N}L(n)\) as \(n\rightarrow \infty \), where \(0\le h_N<1,\ c_N\ge 0\), and \(E(Y_k)=\mu _Y<\infty \), then
$$\begin{aligned} P\{S_N > x\}\sim c_N (x/\mu _Y)^{-h_N}L(x),\quad x\rightarrow \infty . \end{aligned}$$
(A.2)

(iii) If \(P\{N>n\}\sim c_N n^{-1}L(n)\) as \(n\rightarrow \infty \), where \(c_N\ge 0\), and \(x^b P\{Y_k>x\}\le c<\infty \) for some \(b>1\), then
$$\begin{aligned} P\{S_N > x\}\sim c_N (x/\mu _Y)^{-1}L(x),\quad x\rightarrow \infty , \end{aligned}$$
(A.3)
where \(E(Y_k)=\mu _Y<\infty \).
In Lemma A.5, Parts (i) and (ii) are directly from Corollary 8.1 and Corollary 8.2 in [18] (pp. 163–165), and Part (iii) is due to Lemma 2.8 in Stam [36], p. 315.
Lemma A.6 is the discrete version of Karamata's theorem and the monotone density theorem.
Lemma A.6
([11], pp. 567–568) Let \(\{q(j)\}_{j=0}^{\infty }\) be a nonnegative sequence, and \(b>1\). If \(q(j)\sim j^{-b}L(j)\) as \(j\rightarrow \infty \), then \(\sum _{k=j+1}^{\infty }q(k)\sim \displaystyle \frac{1}{b-1} j^{-b+1}L(j)\) as \(j\rightarrow \infty \). Conversely, if \(\sum _{k=j+1}^{\infty }q(k)\sim \displaystyle \frac{1}{b-1} j^{-b+1}L(j)\) as \(j\rightarrow \infty \) and \(\{q(j)\}_{j=0}^{\infty }\) is ultimately monotonic (i.e. q(j) is monotone except for at most finitely many initial terms), then \(q(j)\sim j^{-b}L(j)\) as \(j\rightarrow \infty \).
Lemma A.7
([16], p. 48) Suppose that \(F(x)\in \mathcal S\).
(i) If \(1-G(x)=o(1-F(x))\) as \(x\rightarrow \infty \), then \(F*G\in \mathcal S\) and \(1-F*G(x)\sim 1-F(x)\).

(ii) If \((1-G_i(x))/(1-F(x))\rightarrow c_i\) as \(x\rightarrow \infty \) for some \(c_i\ge 0\), \(i=1,2\), then \((1-G_1*G_2(x))/(1-F(x))\rightarrow c_1+c_2\) as \(x\rightarrow \infty \).
Lemma A.8
([11], pp. 580–581) Suppose \(0<b<1\), and distribution functions F and G are related as \(G(x)=(1-b)\sum _{n=1}^{\infty }b^nF^{(n)}(x)\). Then, the following statements are equivalent:
(i) \(F\in \mathcal S\);

(ii) \(G\in \mathcal S\);

(iii) \(\lim _{x\rightarrow \infty }(1-G(x))/(1-F(x))=b/(1-b)\).
To prove our key property, Theorem 5.1, we need the following concepts and properties. Let \(\{g(j)\}_{j=0}^{\infty }\) be a discrete probability distribution with the GF \(G(z)=\sum _{j=0}^{\infty }g(j)z^j\). Denote by \(\gamma _n\ (n\ge 0)\) the nth factorial moment of \(\{g(j)\}_{j=0}^{\infty }\), i.e.,
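Written out, the nth factorial moment of \(\{g(j)\}_{j=0}^{\infty }\) is, in standard notation,
$$\begin{aligned} \gamma _n=\sum _{j=n}^{\infty }j(j-1)\cdots (j-n+1)\,g(j),\quad n\ge 1,\qquad \gamma _0=1. \end{aligned}$$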
It is well known that if \(\gamma _n<\infty \), then \(\gamma _n=\lim _{z\uparrow 1}\hbox {d}^n G(z)/\hbox {d}z^n\) and
Next, if \(\gamma _n<\infty \), we introduce notations \(G_n(\cdot )\) and \(\widehat{G}_n(\cdot )\) as follows:
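One choice of these two functions that is consistent with the identities verified in Lemma A.9 below (in particular, with \(\widehat{G}_0(z)=(1-G(z))/(1-z)\) being the GF of the tail sequence of \(\{g(j)\}_{j=0}^{\infty }\)) is
$$\begin{aligned} G_n(z)=(-1)^{n+1}\Big (G(z)-\sum _{k=0}^{n}\frac{\gamma _k}{k!}(z-1)^k\Big ),\qquad \widehat{G}_n(z)=\frac{G_n(z)}{(1-z)^{n+1}},\quad 0\le z<1. \end{aligned}$$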
So,
It follows that if \(\gamma _n<\infty \), then for \(n\ge 1\),
In the following lemma, we verify that \(\widehat{G}_n (z)\) is the GF of a nonnegative sequence. To this end, we define recursively
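Consistent with \(\overline{g}_1(j)=P\{N>j\}\) for a rv N distributed according to \(\{g(j)\}_{j=0}^{\infty }\) (as used, for example, in Lemma A.11 and Lemma B.3), the recursion can be read as
$$\begin{aligned} \overline{g}_1(j)=\sum _{i=j+1}^{\infty }g(i),\qquad \overline{g}_{k+1}(j)=\sum _{i=j+1}^{\infty }\overline{g}_k(i),\quad j\ge 0,\ k\ge 1. \end{aligned}$$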
Lemma A.9
Suppose that \(\{g(j)\}_{j=0}^{\infty }\) is a discrete probability distribution with \(\gamma _{n}<\infty ,\ n\ge 0\). Then, \(\widehat{G}_k(z)\) is the GF of sequence \(\{\overline{g}_{k+1}(j)\}_{j=0}^{\infty }\) for \(0\le k\le n\), that is,
Proof
Notice that
Next, we proceed by induction on k. For \(k=0\),
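With \(\widehat{G}_0(z)=(1-G(z))/(1-z)\), the base case reduces to the elementary identity
$$\begin{aligned} \frac{1-G(z)}{1-z}=\sum _{j=0}^{\infty }\Big (\sum _{i=j+1}^{\infty }g(i)\Big )z^j=\sum _{j=0}^{\infty }\overline{g}_1(j)\,z^j,\quad 0\le z<1. \end{aligned}$$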
Under the induction hypothesis that (A.14) holds for \(k=i-1\in \{0,1,\cdots ,n-1\}\), we have
Therefore, (A.14) holds for \(k=i\in \{1,2,\cdots ,n\}\). \(\square \)
The following lemma is Karamata's Tauberian theorem for power series.
Lemma A.10
([7], p. 40) Let \(\{q(j)\}_{j=0}^{\infty }\) be a non-negative sequence such that \(Q(z){\mathop {=}\limits ^\textrm{def}}\sum _{j=0}^{\infty }q(j)z^j\) converges for \(0\le z<1\), let \(L(\cdot )\) be slowly varying at \(\infty \), and let \(b\ge 0\). Then the following two statements are equivalent:
Furthermore, if the sequence \(\{q(j)\}_{j=0}^{\infty }\) is ultimately monotonic and \(b>0\), then both (i) and (ii) are equivalent to
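In its standard form, the equivalence in question can be stated as
$$\begin{aligned} \text {(i)}\ \ \sum _{j=0}^{n}q(j)\sim \frac{n^{b}L(n)}{\Gamma (1+b)},\ \ n\rightarrow \infty ;\qquad \text {(ii)}\ \ Q(z)\sim (1-z)^{-b}L\Big (\frac{1}{1-z}\Big ),\ \ z\uparrow 1; \end{aligned}$$
and, when \(\{q(j)\}_{j=0}^{\infty }\) is ultimately monotonic and \(b>0\), both are further equivalent to
$$\begin{aligned} \text {(iii)}\ \ q(j)\sim \frac{j^{b-1}L(j)}{\Gamma (b)},\quad j\rightarrow \infty . \end{aligned}$$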
Lemma A.11
Let \(\{g(j)\}_{j=0}^{\infty }\) be a discrete probability distribution with the GF G(z). Assume that \(n< d<n+1\) for some \(n\in \{0,1,2,\cdots \}\). The sequence \(\{\overline{g}_{n+1}(j)\}_{j=0}^{\infty }\) is defined by (A.13). Let \(L(\cdot )\) be slowly varying. The following two statements are equivalent:
Proof
By the definition of \(\widehat{G}_n(z)\) in (A.7), (A.19) is equivalent to
Note that \(0< n+1-d<1\) and the sequence \(\{\overline{g}_{n+1}(j)\}_{j=0}^{\infty }\) is non-increasing with the GF \(\widehat{G}_n(z)\) (by Lemma A.9). Applying Lemma A.10 (taking \(b=n+1-d\) in (A.16) and (A.18)), we know that (A.21) is equivalent to
Next, we prove the equivalence of (A.20) and (A.22). Noting the recursive relation (A.13) and repeatedly applying Lemma A.6, (A.22) is equivalent to
Note that \(\Gamma (d)=(d-1)\cdots (d-n)\Gamma (d-n)\). \(\square \)
Lemma A.12
(e.g. [7], p. 172) Suppose that \(\{q(j)\}_{j=0}^{\infty }\) is a nonnegative sequence with GF \(Q(z)=\sum _{j=0}^{\infty } q(j)z^j\). Let \(r(t)=\sum _{0\le j\le t} q(j)\), \(t\in [0,\infty )\). The following two statements are equivalent:
Lemma A.13
Suppose that \(\{q(j)\}_{j=0}^{\infty }\) is a nonnegative sequence with GF \(Q(z)=\sum _{j=0}^{\infty } q(j)z^j\). The above (A.25) is equivalent to
Proof
By taking \(u=1-e^{-s}\) in (A.25), we know that (A.25) is equivalent to
So we only need to confirm the equivalence of (A.26) and (A.27).
Suppose that (A.26) holds. Note that \((1-u)^x=1-xu+o(u)\). For \(0<\epsilon <x\), we have \(1-(x+\epsilon )u \le (1-u)^x \le 1-(x-\epsilon )u\) for \(u>0\) small enough. Therefore,
which, along with (A.26), implies
Letting \(\epsilon \downarrow 0\), we get (A.27).
Conversely, suppose that (A.27) holds. Note that for \(0<\epsilon <x\), \((1-u)^{x+\epsilon }\le 1-xu\le (1-u)^{x-\epsilon }\) for \(u>0\) small enough, by which (A.26) can be verified similarly. \(\square \)
Details of some proofs
1.1 Proof of Theorem 4.1
Case 1: \(d_X>d_B>1\) in Assumptions A1 and A2
This is the case in which the batch size X has a tail lighter than that of the service time B. It is worth mentioning that in this case X is not necessarily light-tailed (see Definition A.3).
Lemma B.1
If \(d_X>d_B>1\) in Assumptions A1 and A2, then as \(j\rightarrow \infty \),
Proof
Because of \(d_X>d_B\), (B.1) and (B.2) directly follow from Assumptions A1 and A2. We now prove (B.3). By Assumption A2, \(P\{X>j\}\le c'_X j^{-d_X}L(j)\) for some \(c'_X>0\). Since \(P\{X^{(de)}=j\}=P\{X>j\}/\chi _1\) (by the definition of the equilibrium distribution),
which leads to (B.3) due to \(d_X>d_B\). \(\square \)
By (4.1), (4.2), (B.1), and (B.3), we immediately have \(P\{X>j\}=o(P\{N_{B}>j\})\) and \(P\{X^{(de)}>j\}=o(P\{N_{B^{(e)}}>j\})\). By the definitions of \(N_{B_X}\) and \(N_{B^{(e)}_X}\) in Facts A and D, and applying Lemma A.5, we have
By the definitions in Facts B and D, \(N_{B_XX_0}=N_{B_X}+X_0\) and \(N_{B^{(e)}_XX^{(de)}}=N_{B^{(e)}_X}+X^{(de)}\), and (B.5) and (B.6) lead to \(P\{X_0>j\}=o(P\{N_{B_X}>j\})\) and \(P\{X^{(de)}>j\}=o(P\{N_{B^{(e)}_X}>j\})\) due to \(d_X>d_B\). Applying Part (i) of Lemma A.7, we have
Now we are ready to present the asymptotic property for the tail probability of K. By (3.9) and (B.8), and applying Lemma A.8, we get
where in the first equality we have used the fact that \(\rho /(1-\rho )\) is the mean of rv J in (3.9). By Facts B and C, and (B.7),
Applying Lemma A.6 gives
By (3.10), (B.9) and (B.10), and using Part (ii) of Lemma A.7, we have
which is the conclusion in Theorem 4.1 for Case 1.
Case 2: \(1<d_X<d_B\) and \(c_X> 0\) in Assumptions A1 and A2
This is the case in which the batch size X has a tail heavier than that of the service time B. By the definitions of \(N_{B}\), \(N_{B_X}\) and \(N_{B_XX_0}\) in Facts A and B, and applying Lemma A.5 and Part (ii) of Lemma A.7, we have
where we have used the facts \(E(N_B)=\lambda \beta _1\) and \(P\{X_0>j\}\sim P\{X>j\}\).
By (4.2) and the definition of \(N_{B^{(e)}_X}\) in Fact D, and applying Lemma A.5,
By Lemma A.6, we have \(P\{X^{(de)}>j\}\sim (\chi _1(d_X-1))^{-1} c_X j^{-d_X+1}L(j)\), which implies \(P\{N_{B^{(e)}_X}>j\}=o(P\{X^{(de)}>j\})\). By the definition \(N_{B^{(e)}_XX^{(de)}}\) in Fact D, and applying Part (i) of Lemma A.7, we get
Now we are ready to present the asymptotic property for the tail probability of K. By (3.9) and (B.15), and applying Lemma A.8, we get
By Facts B and C, and (B.13),
Applying Lemma A.6,
By (3.10), (B.16) and (B.17), and using Part (ii) of Lemma A.7,
which is the conclusion in Theorem 4.1 for Case 2.
Case 3: \(d_X=d_B=a>1\) and \(c_X> 0\) in Assumptions A1 and A2
This is the case in which the batch size X has a tail equivalent to that of the service time B. Following the same procedure as in Cases 1 and 2, we can prove that
where we have skipped detailed derivations to avoid repetition.
1.2 Proof of Theorem 5.1
First let us rewrite (2.6) as follows:
As shown in Facts A–D, K(z) is the GF of the rv K with the discrete probability distribution \(k(j) = P\{K=j\}\), \(j\ge 0\). In the proof, we use the notation \(\kappa _n\) to represent the nth factorial moment of K (see (A.4) in Appendix A for the definition of the nth factorial moment).
Proof for the case of non-integer \(a>1\)
Suppose \(m<a<m+1\), \(m\in \{1,2,\cdots \}\). By Theorem 4.1, \(P\{K>j\}\sim c_K\cdot j^{-a+1}L(j)\). So \(\kappa _{m-1}<\infty \) and \(\kappa _{m}=\infty \).
Define \(K_{m-1}(z)\) in a manner similar to the definition of \(G_n(z)\) in (A.6). Corresponding to the sequence \(\{k(j)\}_{j=0}^{\infty }\), we also define \(\overline{k}_n(j)\), \(n\in \{0,1,\cdots ,m-1\}\), in a way similar to the definition of \(\overline{g}_n(j)\) in (A.12) and (A.13). Note that \(\overline{k}_1(j)=P\{K>j\}\sim c_K\cdot j^{-a+1}L(j)\). By Lemma A.11,
By Karamata's theorem (see Lemma A.3),
Next, we present a relation between \(D^{(0)}_m(z)\) and \(K_{m-1}(z)\). By the definition of \(K_{m-1}(z)\),
Note that \(\int _z^{1}K_{m-1}(u)\hbox {d}u/(1-z)^{m}\rightarrow 0\) and \(\int _z^{1}K_{m-1}(u)\hbox {d}u/(1-z)^{m+1}\rightarrow \infty \) as \(z\uparrow 1\).
From (B.24) and (B.28), there are constants \(\{v_k;\ k=0,1,2,\cdots ,m\}\) satisfying
Define \(D^{(0)}_m(z)\) in a manner similar to the definition of \(G_n(z)\) in (A.6). By (B.29),
By applying Lemma A.11,
which completes the proof of Theorem 5.1 for non-integer \(a>1\).
Proof for the case of integer \(a>1\)
Suppose \(a=m\in \{2,3,\cdots \}\). By Theorem 4.1, \(P\{K>j\}\sim c_K\cdot j^{-m+1}L(j)\). So, \(\kappa _{m-2}<\infty \). Unfortunately, whether \(\kappa _{m-1}\) is finite remains uncertain; it is determined essentially by whether \(\sum _{k=1}^{\infty }k^{-1}L(k)\) converges. For this reason, we sharpen our analytical tools by introducing the following lemma.
Lemma B.2
Suppose that \(\{q(j)\}_{j=0}^{\infty }\) is a nonnegative non-increasing sequence with GF Q(z). Let \(r(t)=\sum _{0\le j\le t} q(j)\), \(t\in [0,\infty )\). The following three statements are equivalent:
Proof
Set \(f(t)=q(k+1)\) for \(t\in (k, k+1],\ k= 0,1,\cdots \), and \(f(t)=0\) for \(t\le 0\). So \(f(t)\ge 0\) is a nonincreasing function. For \(t\ge 1\), we have
where \(\lfloor t \rfloor \) represents the greatest integer less than or equal to t. Note that for \(\epsilon >0\), we have \((x-\epsilon )t\le \lfloor xt \rfloor \le (x+\epsilon )t\) and \((1-\epsilon )t\le \lfloor t \rfloor \le (1+\epsilon )t\) for t large enough. Therefore,
Clearly, (B.33) is equivalent to
Next, we will prove the equivalence of (B.40) and (B.34). Suppose that (B.40) holds. It follows from (B.38) that, for \(x>1\),
where we have used the uniform convergence theorem (see Lemma A.1) on slowly varying functions for interchanging the limit and the integral. Letting \(\epsilon \downarrow 0\) in (B.41) and (B.42) implies (B.34) for \(x>1\). (B.34) for \(0<x<1\) can be similarly proved by using (B.39), and the proof of (B.34) for \(x=1\) is trivial.
Conversely, suppose that (B.34) holds. By (B.38),
which, along with (B.34), implies
Taking \(\epsilon \downarrow 0\) gives \(\frac{\log x}{x-1}\le \liminf _{t\rightarrow \infty } \frac{t f(t)}{L(t)}\). Then, letting \(x\downarrow 1\) leads to \(1\le \liminf _{t\rightarrow \infty } \frac{t f(t)}{L(t)}\). Similarly, for the case of \(0<x<1\), by (B.39) we have
from which, again by (B.34),
Taking \(\epsilon \downarrow 0\) gives \(-\frac{\log x}{1-x}\ge \limsup _{t\rightarrow \infty } \frac{t f(t)}{L(t)}\). Then, letting \(x\uparrow 1\) leads to \(\limsup _{t\rightarrow \infty } \frac{t f(t)}{L(t)}\le 1\). Therefore, \(\lim _{t\rightarrow \infty } \frac{t f(t)}{L(t)}= 1\), which is (B.40).
The equivalence of (B.34) and (B.35) is immediate by using Lemma A.12 and Lemma A.13. \(\square \)
Lemma B.3
Let \(\{g(j)\}_{j=0}^{\infty }\) be a discrete probability distribution with GF G(z), and \(n\in \{1,2,\cdots \}\). The following two statements are equivalent:
where \(\overline{g}_1(j)\) and \(\widehat{G}_{n}(x)\) are defined in (A.13) and (A.7), respectively.
Proof
By using Lemma A.6 repeatedly, (B.47) is equivalent to
Note that the sequence \(\{\overline{g}_{n}(j)\}_{j=0}^{\infty }\) has the GF \(\widehat{G}_{n-1}(z)\) (by Lemma A.9). The equivalence of (B.48) and (B.49) is proved by applying Lemma B.2. \(\square \)
Since \(\kappa _{m-2}<\infty \), we can define \(K_{m-2}(z)\) in a manner similar to the definition of \(G_n(z)\) in (A.6). Then, by comparison with (A.8), we have
where \(K_{m-2}(z)=o\left( (1-z)^{m-2}\right) \) as \(z\uparrow 1\). Furthermore,
where \(\int _z^{1}K_{m-2}(u)\hbox {d}u=o((1-z)^{m-1})\) as \(z\uparrow 1\).
It follows from (B.24) and (B.51) that for some constants \(\{v_k;\ k=0,1,2,\cdots ,m\}\),
Define \(\widehat{D}^{(0)}_{m-1}(z)\) in a manner similar to the definition of \(\widehat{G}_n (z)\) in (A.7). Then, we have
which immediately leads to:
Note that \(\overline{k}_1(j)=P\{K>j\}\sim c_K\cdot j^{-m+1}L(j)\). By Lemma B.3, we obtain
By Karamata’s theorem (see Lemma A.3), we know
Therefore,
By applying Lemma B.3, we obtain from (B.59) that
which completes the proof of Theorem 5.1 for integer \(a=m\in \{2,3,\cdots \}\).
1.3 Proof of Lemma 6.1
The proof method presented here resembles that for Lemma 1.3.1 in [11], p. 37. For \(0<\varepsilon <1\) and \(t>0\), we have
where
By the monotone density theorem (Lemma A.2), we know that \(f_1(t)\sim (d-1)c_1 t^{-d} L_1(t)\sim (d-1)t^{-1}\overline{F}_1(t)\). By the mean value theorem, there exists \(\theta =\theta (t,y)\in (0, 1)\) such that \(\overline{F}_1(t-y)-\overline{F}_1(t)=f_1(t-\theta y)\cdot y\) for \(0\le y\le \varepsilon t\). The decreasing property of \(f_1\) yields \(f_1(t)\le f_1(t-\theta y)\le f_1((1-\varepsilon )t)\) for \(0\le y\le \varepsilon t\). Therefore,
from which it follows that
Next, we prove the following asymptotic result:
Note that
where the last equality is due to the fact: by the uniform convergence theorem for slowly varying functions, \(L_2(t(1-y/t))/L_2(t)\rightarrow 1\) uniformly on \(y\in [0,(1-\varepsilon )t]\) (or \(1-y/t\in [\varepsilon ,1]\)).
Since for \(b> 0\) and \(0\le w\le 1-\varepsilon \),
we have
where the last equality is due to \(\int _0^t x \hbox {d}F_1(x)=-t \overline{F}_1(t)+\int _0^t \overline{F}_1(x)\hbox {d}x\). Now, (B.64) follows from (B.65) and (B.67).
Now, let us recall (B.61). Note that for all \(0<\varepsilon <1\), \(\overline{F}_1((1-\varepsilon )t)\overline{F}_2(\varepsilon t)=o(\overline{F}_2(t))\), \(\overline{F}_1(t)\overline{F}_2(\varepsilon t)=o(\overline{F}_2(t))\), \(\overline{F}_2(t)\overline{F}_1((1-\varepsilon )t)=o(\overline{F}_2(t))\) and \(\pi _2(t,\varepsilon )=o(\overline{F}_2(t))\) (by (B.64)). So, (B.61) can be rewritten as
Using (B.63) and taking \(\varepsilon \rightarrow 0^+\), we complete the proof.
Proof of \(\Delta \)-analyticity of \(Ez^{L_{\infty }}\) for Examples 1 and 2
Definition C.1
([14], p. 389) An open domain is called a \(\Delta \)-domain at 1 if it is of the form \(\Delta =\{z|\ |z|<R,\ z\ne 1,\ |\text{ Arg }(z-1)|>\phi \}\) for some \(R>1\) and \(0<\phi <\frac{\pi }{2}\). A function is called \(\Delta \)-analytic at 1 if it is analytic in some \(\Delta \)-domain at 1.
In the following, for Example 1 and Example 2, we will prove, respectively, that \(Ez^{L_{\infty }}\) can be analytically continued to \(\mathbb {C}\backslash [1,\infty )\), which immediately implies its \(\Delta \)-analyticity at 1.
1.1 Example 1
Recall (8.2) and (8.3). Note that \(\beta (s)\) and \(\beta ^{(e)}(s)\) are analytic in \(\mathbb {D}=\mathbb {C}\backslash (-\infty ,0]\), and both X(z) and \(X^{(de)}(z)\) can be analytically continued to \(\mathbb {C}\backslash [1,\infty )\). For \(Ez^{L_{\infty }}\) to be analytic in \(\mathbb {C}\backslash [1,\infty )\), we only need to verify two things: (i) \(\lambda (1-X(z))\in \mathbb {D}\) for any \(z\in \mathbb {C}\backslash [1,\infty )\); (ii) \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero in \(\mathbb {C}\backslash [1,\infty )\).
(i) Let \(s=(1-z)/(1-q)=x+iy\). By (8.8),
It follows that if \(\text{ Im }(z)\not = 0\) (or say \(y\not = 0\)), then \(\text{ Im }\big (1-X(z)\big )\not = 0\). Additionally, for any real \(z\in (-\infty ,1)\) (or say \(s>0\)), \(1-X(z)=s/(1+qs)>0\). Hence, \(\lambda (1-X(z))\in \mathbb {D}\) for any \(z\in \mathbb {C}\backslash [1,\infty )\).
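As a check of the identities used above, suppose, consistently with the expressions appearing here and in part (ii) below, that the batch size X is geometric with GF \(X(z)=(1-q)z/(1-qz)\), \(0<q<1\), so that \(\chi _1=E(X)=1/(1-q)\). Then, with \(s=(1-z)/(1-q)\),
$$\begin{aligned} 1-X(z)=\frac{1-z}{1-qz}=\frac{(1-q)s}{(1-q)(1+qs)}=\frac{s}{1+qs},\qquad X^{(de)}(z)=\frac{1-X(z)}{\chi _1(1-z)}=\frac{1-q}{1-qz}=\frac{1}{1+qs}. \end{aligned}$$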
(ii) By (11) in [33], for \(s\in \mathbb {D}\),
Set \(s=(1-z)/(1-q)=x+iy\). By the expressions of \(1-X(z)=s/(1+qs)\) and \(X^{(de)}(z)=1/(1+qs)\), and using (C.1), we have
from which we infer that \(\text{ Im }(\beta ^{(e)}\left( \lambda -\lambda X(z))\cdot X^{(de)}(z)\right) \ne 0\) for \(y\ne 0\) or \(\text{ Im }(z)\not =0\). So, \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero in \(\mathbb {C}\backslash (-\infty ,\infty )\). In addition, for any real \(z\in (-\infty ,1)\), we have \(s=x> 0\), then \(\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)=\beta ^{(e)}\big (\frac{\lambda x}{1+qx}\big )\cdot \frac{1}{1+qx}< 1\). So, \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero for \(z\in (-\infty ,1)\).
1.2 Example 2
Note that \(\tau (s)\) and \(\tau ^{(e)}(s)\), \(\beta (s)\) and \(\beta ^{(e)}(s)\) can be analytically continued to \(\mathbb {D}=\mathbb {C}\backslash (-\infty ,0]\), and both X(z) and \(X^{(de)}(z)\) can be analytically continued to \(\mathbb {C}\backslash [1,\infty )\). For \(Ez^{L_{\infty }}\) to be analytic in \(\mathbb {C}\backslash [1,\infty )\), we only need to verify two things: (i) \(\lambda (1-X(z))\in \mathbb {D}\) for any \(z\in \mathbb {C}\backslash [1,\infty )\); (ii) \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero in \(\mathbb {C}\backslash [1,\infty )\).
(i) Similar to (C.1), we know that
By the definition of X(z), we can immediately write
Set \(s=1-z=x+iy\) in the above equality. Note that \(\tau (s)=1-\tau _1 s\tau ^{(e)}(s)\). We then have
where
It is worthwhile to mention that \(g(x,y)=\frac{v}{\Gamma (u)} \int _0^{\infty }\frac{t+v}{(t+v x)^2+(vy)^2}\cdot t^{u-1}e^{-t} \hbox {d}t>0\) for all \(x>0\) and \(y\not =0\). It follows from (C.5) and (C.6) that
Therefore, if \(\text{ Im }(z)\not = 0\) (or \(y\not = 0\)), then \(\text{ Im }\big (1-X(z)\big )\not = 0\). Additionally, for any real \(z\in (-\infty ,1)\) (or \(s>0\)), \(1-X(z)\ge 1-(1-s)>0\) (see (C.5)). Hence, \(\lambda (1-X(z))\in \mathbb {D}\) for any \(z\in \mathbb {C}\backslash [1,\infty )\).
(ii) It follows from (C.5) and (C.6) that
where \(g_0(x,y)=\frac{g(x,y)}{(f(x,y))^2+(yg(x,y))^2}>0\) for all \(x>0\) and \(y\not =0\). It follows from (C.5), (C.8), and (C.1) that
from which we know that \(\text{ Im }(\beta ^{(e)}\left( \lambda -\lambda X(z))\cdot X^{(de)}(z)\right) \ne 0\) for all \(y\ne 0\). So, \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero in \(\mathbb {C}\backslash (-\infty ,\infty )\). Furthermore, for any real \(z\in (-\infty ,1)\), we have \(s=x> 0\) and by (C.6),
hence \(\beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)=\beta ^{(e)}\big (\lambda x\varphi (x)\big )\cdot \frac{\varphi (x)}{1+\tau _1}< 1\). So, \(1-\rho \beta ^{(e)}(\lambda -\lambda X(z))\cdot X^{(de)}(z)\) is nonzero for \(z\in (-\infty ,1)\).