Portfolio credit risk with Archimedean copulas: asymptotic analysis and efficient simulation

  • Original Research
  • Annals of Operations Research

Abstract

In this paper, we study large losses arising from defaults of a credit portfolio. We assume that the portfolio dependence structure is modelled by the Archimedean copula family, as opposed to the widely used Gaussian copula. The resulting model is new and is capable of capturing extremal dependence among obligors. We first derive sharp asymptotics for the tail probability of portfolio losses and for the expected shortfall. We then demonstrate how to use these asymptotic results to produce two variance reduction algorithms that significantly enhance the classical Monte Carlo methods. Moreover, we show that the estimator based on the proposed two-step importance sampling method is logarithmically efficient, while the estimator based on the conditional Monte Carlo method has bounded relative error as the number of obligors tends to infinity. Extensive simulation studies are conducted to highlight the efficiency of our proposed algorithms for estimating portfolio credit risk. In particular, the variance reduction achieved by the proposed conditional Monte Carlo method, relative to the crude Monte Carlo method, is of the order of millions.

References

  • Albrecher, H., Constantinescu, C., & Loisel, S. (2011). Explicit ruin formulas for models with dependence among risks. Insurance: Mathematics and Economics, 48(2), 265–270.

  • Asmussen, S. (2018). Conditional Monte Carlo for sums, with applications to insurance and finance. Annals of Actuarial Science, 12(2), 455–478.

  • Asmussen, S., Binswanger, K., Højgaard, B., et al. (2000). Rare events simulation for heavy-tailed distributions. Bernoulli, 6(2), 303–322.

  • Asmussen, S., & Kroese, D. P. (2006). Improved algorithms for rare event simulation with heavy tails. Advances in Applied Probability, 38(2), 545–558.

  • Basoğlu, I., Hörmann, W., & Sak, H. (2018). Efficient simulations for a Bernoulli mixture model of portfolio credit risk. Annals of Operations Research, 260, 113–128.

  • Bassamboo, A., Juneja, S., & Zeevi, A. (2008). Portfolio credit risk with extremal dependence: Asymptotic analysis and efficient simulation. Operations Research, 56(3), 593–606.

  • Berndt, B. C. (1998). Ramanujan’s notebooks part V. Springer.

  • Bingham, N. H., Goldie, C. M., & Teugels, J. L. (1989). Regular variation (Vol. 27). Cambridge University Press.

  • Chan, J. C., & Kroese, D. P. (2010). Efficient estimation of large portfolio loss probabilities in \(t\)-copula models. European Journal of Operational Research, 205(2), 361–367.

  • Charpentier, A., & Segers, J. (2009). Tails of multivariate Archimedean copulas. Journal of Multivariate Analysis, 100(7), 1521–1537.

  • Cherubini, U., Luciano, E., & Vecchiato, W. (2004). Copula methods in finance. Wiley.

  • Cossette, H., Marceau, E., Mtalai, I., & Veilleux, D. (2018). Dependent risk models with Archimedean copulas: A computational strategy based on common mixtures and applications. Insurance: Mathematics and Economics, 78, 53–71.

  • de Haan, L., & Ferreira, A. (2007). Extreme value theory: An introduction. Springer.

  • Denuit, M., Purcaru, O., Van Keilegom, I., et al. (2004). Bivariate Archimedean copula modelling for loss-ALAE data in non-life insurance. IS Discussion Papers, 423.

  • Embrechts, P., Lindskog, F., & McNeil, A. (2001). Modelling dependence with copulas. Technical report, Department of Mathematics, ETH Zurich.

  • Feller, W. (1971). An introduction to probability theory and its applications (Vol. 2). Wiley.

  • Frees, E. W., & Valdez, E. A. (1998). Understanding relationships using copulas. North American Actuarial Journal, 2(1), 1–25.

  • Genest, C., & Favre, A.-C. (2007). Everything you always wanted to know about copula modeling but were afraid to ask. Journal of Hydrologic Engineering, 12(4), 347–368.

  • Glasserman, P. (2004). Tail approximations for portfolio credit risk. The Journal of Derivatives, 12(2), 24–42.

  • Glasserman, P., Kang, W., & Shahabuddin, P. (2007). Large deviations in multifactor portfolio credit risk. Mathematical Finance, 17(3), 345–379.

  • Glasserman, P., Kang, W., & Shahabuddin, P. (2008). Fast simulation of multifactor portfolio credit risk. Operations Research, 56(5), 1200–1217.

  • Glasserman, P., & Li, J. (2005). Importance sampling for portfolio credit risk. Management Science, 51(11), 1643–1656.

  • Gordy, M. B. (2003). A risk-factor model foundation for ratings-based bank capital rules. Journal of Financial Intermediation, 12(3), 199–232.

  • Gupton, G. M., Finger, C. C., & Bhatia, M. (1997). Creditmetrics: Technical document. JP Morgan & Co.

  • Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301), 13–30.

  • Hofert, M. (2008). Sampling Archimedean copulas. Computational Statistics & Data Analysis, 52(12), 5163–5174.

  • Hofert, M. (2010). Sampling nested Archimedean copulas with applications to CDO pricing. PhD thesis, Universität Ulm.

  • Hofert, M., Mächler, M., & McNeil, A. J. (2013). Archimedean copulas in high dimensions: Estimators and numerical challenges motivated by financial applications. Journal de la Société Française de Statistique, 154(1), 25–63.

  • Hofert, M., & Scherer, M. (2011). CDO pricing with nested Archimedean copulas. Quantitative Finance, 11(5), 775–787.

  • Hong, L. J., Juneja, S., & Luo, J. (2014). Estimating sensitivities of portfolio credit risk using Monte Carlo. INFORMS Journal on Computing, 26(4), 848–865.

  • Juneja, S., Karandikar, R., & Shahabuddin, P. (2007). Asymptotics and fast simulation for tail probabilities of maximum of sums of few random variables. ACM Transactions on Modeling and Computer Simulation (TOMACS), 17(2), 7.

  • Juneja, S., & Shahabuddin, P. (2002). Simulating heavy tailed processes using delayed hazard rate twisting. ACM Transactions on Modeling and Computer Simulation (TOMACS), 12(2), 94–118.

  • Kealhofer, S., & Bohn, J. (2001). Portfolio management of credit risk. Technical report.

  • Marshall, A. W., & Olkin, I. (1988). Families of multivariate distributions. Journal of the American Statistical Association, 83(403), 834–841.

  • McLeish, D. L. (2010). Bounded relative error importance sampling and rare event simulation. ASTIN Bulletin: The Journal of the IAA, 40(1), 377–398.

  • McNeil, A. J., Frey, R., & Embrechts, P. (2015). Quantitative risk management: Concepts, techniques and tools. Princeton University Press.

  • Merton, R. C. (1974). On the pricing of corporate debt: The risk structure of interest rates. The Journal of Finance, 29(2), 449–470.

  • Naifar, N. (2011). Modelling dependence structure with Archimedean copulas and applications to the iTraxx CDS index. Journal of Computational and Applied Mathematics, 235(8), 2459–2466.

  • Okhrin, O., Okhrin, Y., & Schmid, W. (2013). On the structure and estimation of hierarchical Archimedean copulas. Journal of Econometrics, 173(2), 189–204.

  • Rényi, A. (1953). On the theory of order statistics. Acta Mathematica Hungarica, 4(3–4), 191–231.

  • Resnick, S. I. (2013). Extreme values, regular variation and point processes. Springer.

  • Tang, Q., Tang, Z., & Yang, Y. (2019). Sharp asymptotics for large portfolio losses under extreme risks. European Journal of Operational Research, 276(2), 710–722.

  • Tong, E. N., Mues, C., Brown, I., & Thomas, L. C. (2016). Exposure at default models with and without the credit conversion factor. European Journal of Operational Research, 252(3), 910–920.

  • Wang, W. (2003). Estimating the association parameter for copula models under dependent censoring. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1), 257–273.

  • Zhang, L., & Singh, V. P. (2007). Bivariate rainfall frequency distributions using Archimedean copulas. Journal of Hydrology, 332(1–2), 93–109.

  • Zhu, W., Wang, C., & Tan, K. S. (2016). Levy subordinated hierarchical Archimedean copula: Theory and application. Journal of Banking and Finance, 69, 20–36.

Acknowledgements

We are grateful to the Editor and the anonymous reviewer for helpful comments and suggestions that have greatly improved the presentation of the paper. Hengxin Cui acknowledges support from the Hickman Scholar Program of the Society of Actuaries. Ken Seng Tan acknowledges research funding from the Society of Actuaries CAE grant and the Singapore University Grant. Fan Yang acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (Grant Number 04242).

Author information

Corresponding author

Correspondence to Fan Yang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs

To simplify the notation, for any two positive functions g and h, we write \(g\lesssim h\) or \(h\gtrsim g\) if \(\limsup g/h\le 1\).

1.1 A.1 Proofs for LT-Archimedean copulas

We first list a series of lemmas that will be useful for proving Theorem 4.1 and Theorem 4.2. The following is a restatement of Theorem 2 of Hoeffding (1963).

Lemma A.1

If \(X_{1},X_{2},\ldots ,X_{n}\) are independent and \(a_{i}\le X_{i}\le b_{i}\) for \(i=1,\ldots ,n\), then for \(\varepsilon >0\)

$$\begin{aligned} \mathbb {P}\left( \left| \bar{X}_{n}-\mathbb {E}\left[ \bar{X}_{n}\right] \right| \ge \varepsilon \right) \le 2\exp \left( -\frac{2n^{2} \varepsilon ^{2}}{\sum _{i=1}^{n}(b_{i}-a_{i})^{2}}\right) , \end{aligned}$$

with \(\bar{X}_{n}=\left( X_{1}+X_{2}+\ldots +X_{n}\right) /n\).
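
As a quick sanity check of this bound (an added sketch, not part of the original text; all numerical values are hypothetical), one can compare the empirical tail frequency of a Bernoulli sample mean with the right-hand side, which reduces to \(2\exp (-2n\varepsilon ^{2})\) when \(a_{i}=0\) and \(b_{i}=1\):

    import numpy as np

    # Hoeffding bound check for X_i ~ Bernoulli(p) with a_i = 0, b_i = 1 (hypothetical values).
    rng = np.random.default_rng(0)
    n, p, eps, reps = 200, 0.3, 0.05, 20_000

    means = rng.binomial(1, p, size=(reps, n)).mean(axis=1)
    empirical = np.mean(np.abs(means - p) >= eps)   # estimate of P(|X_bar - E X_bar| >= eps)
    bound = 2.0 * np.exp(-2.0 * n * eps ** 2)       # Hoeffding upper bound

    print(empirical, bound)                         # the estimate should not exceed the bound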

Applying Lemma A.1, we obtain the following inequality:

Lemma A.2

For any \(\varepsilon >0\) and any large M, there exists a constant \(\beta >0\) such that

$$\begin{aligned} \mathbb {P}_{v}\left( \left| \frac{1}{n}\sum _{i=1}^{n} c_{i} 1_{\{U_{i}>1-l_{i}f_{n}\}}-r(v)\right| \ge \varepsilon \right) \le \exp (-n\beta ), \end{aligned}$$

uniformly for all \(0<v\le M\) and for all sufficiently large n, where \(\mathbb {P}_{v}\) denotes the original probability measure conditioned on \(V=\frac{v}{\phi (1-f_{n})}\).

Proof

Note that the \(U_{i}\) are conditionally independent given V. Then, by Lemma A.1, for every n,

$$\begin{aligned} \mathbb {P}_{v}\left( \left| \frac{1}{n}\sum _{i=1}^{n} c_{i} 1_{\{U_{i}>1-l_{i}f_{n}\}}-\frac{1}{n}\sum _{i=1}^{n}c_{i}p(v,i)\right| \ge 2\varepsilon \right) \le 2\exp \left( -\frac{8n^{2}\varepsilon ^{2}}{\sum _{i=1}^{n} c_{i}^{2}}\right) \le \exp (-n\beta ), \end{aligned}$$
(A.1)

where \(\beta >0\) is a constant that does not depend on n or v.

Using (A.1), to obtain the desired result it suffices to show the existence of N such that, for all \(n\ge N\),

$$\begin{aligned} \left| \frac{1}{n}\sum _{i=1}^{n}c_{i}p(v,i)-r(v)\right| \le \varepsilon \end{aligned}$$
(A.2)

holds uniformly for all \(v\le M\). Recall that \(r(v)=\sum _{j\le |\mathcal {W} |}c_{j}w_{j}\tilde{p}(v,j)\). Note that \(n_{j}\) denotes the number of obligors in sub-portfolio j. Then

$$\begin{aligned} \left| \frac{1}{n}\sum _{i=1}^{n}c_{i}p(v,i)-r(v)\right|&=\left| \sum _{j\le |\mathcal {W}|}c_{j}\left( p(v,j)\frac{n_{j}}{n} -\tilde{p}(v,j)w_{j}\right) \right| \nonumber \\&\le \sum _{j\le |\mathcal {W}|}c_{j}p(v,j)\left| \frac{n_{j}}{n} -w_{j}\right| \nonumber \\&\quad +\, \sum _{j\le |\mathcal {W}|}c_{j}w_{j}\left| p(v,j)-\tilde{p} (v,j)\right| \nonumber \\&\le \sum _{j\le |\mathcal {W}|}c_{j}\left| \frac{n_{j}}{n}-w_{j} \right| +\bar{c}\max \limits _{j\le |\mathcal {W}|}\left| p(v,j)-\tilde{p}(v,j)\right| \end{aligned}$$
(A.3)

where \(\bar{c}=\sum _{j\le |\mathcal {W}|}c_{j}w_{j}\). By Assumption 2.1, there exists \(N_{1}\) satisfying \(\sum _{j\le |\mathcal {W}|} c_{j}\left| \frac{n_{j}}{n}-w_{j}\right| \le \frac{\varepsilon }{2}\) for all \(n\ge N_{1}\). For the second part of (A.3), by noting that \(e^{x}\ge 1+x\) for all \(x\in \mathbb {R}\), we have

$$\begin{aligned} \left| p(v,j)-\tilde{p}(v,j)\right|&=\exp \left( -v\left( \frac{\phi (1-l_{j}f_{n})}{\phi (1-f_{n})}\wedge l_{j}^{\alpha }\right) \right) \left( 1-\exp \left( -v\left| \frac{\phi (1-l_{j}f_{n})}{\phi (1-f_{n} )}-l_{j}^{\alpha }\right| \right) \right) \\&\le v\left| \frac{\phi (1-l_{j}f_{n})}{\phi (1-f_{n})}-l_{j}^{\alpha }\right| \\&\le M\left| \frac{\phi (1-l_{j}f_{n})}{\phi (1-f_{n})}-l_{j}^{\alpha }\right| . \end{aligned}$$

Since \(\phi \in \mathrm {RV}_{\alpha }(1)\), there exists \(N_{2}\) such that for all \(n\ge N_{2}\), \(\bar{c}\max \limits _{j\le |\mathcal {W}|,\,0<v\le M}\left| p(v,j)-\tilde{p}(v,j)\right| \le \frac{\varepsilon }{2}\).

Combining the upper bounds for the two parts of (A.3) and letting \(N=\max \{N_{1},N_{2}\}\), (A.2) holds uniformly for all \(v\le M\). The proof is then completed. \(\square \)
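
For intuition, the conditional law of large numbers behind Lemma A.2 can be visualised with a minimal simulation (an added sketch, not part of the original proof). It assumes the Marshall-Olkin frailty representation \(U_{i}=\phi ^{-1}(E_{i}/V)\) with i.i.d. standard exponentials \(E_{i}\), under which \(\{U_{i}>1-l_{i}f_{n}\}=\{E_{i}<V\phi (1-l_{i}f_{n})\}\), together with the Gumbel generator \(\phi (t)=(-\log t)^{\theta }\) (so that \(\alpha =\theta \)) and a single homogeneous class with \(c_{i}=l_{i}=1\); all parameter values are hypothetical.

    import numpy as np

    # Conditional on V = v / phi(1 - f_n), the normalized loss should be close to r(v).
    rng = np.random.default_rng(1)
    theta, n, f_n, v = 2.0, 50_000, 1e-3, 0.7

    phi = lambda t: (-np.log(t)) ** theta      # Gumbel generator, so alpha = theta
    V = v / phi(1.0 - f_n)                     # conditioning value of the frailty

    E = rng.exponential(size=n)                # frailty representation: U_i = phi^{-1}(E_i / V)
    defaults = E < V * phi(1.0 - f_n)          # event {U_i > 1 - l_i f_n} with l_i = 1
    loss_fraction = defaults.mean()            # (1/n) sum_i c_i 1{...} with c_i = 1

    print(loss_fraction, 1.0 - np.exp(-v))     # r(v) = 1 - exp(-v) in this homogeneous case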

The following proof of Theorem 4.1 is motivated by the proof of Theorem 1 in Bassamboo et al. (2008).

Proof of Theorem 4.1

Let \(v_{\delta }^{*}\) denote the unique solution to the equation \(r(v)=b-\delta \). By using continuity and monotonicity of r(v) in v, we have

$$\begin{aligned} v_{\delta }^{*}\rightarrow v^{*} \end{aligned}$$

as \(\delta \rightarrow 0\).

Fix \(\delta >0\). We decompose the probability of the event \(\{L_{n}>nb\}\) into two terms as

$$\begin{aligned} \mathbb {P}\left( L_{n}>nb\right)&=\mathbb {P}\left( L_{n}>nb,V\le \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) +\mathbb {P}\left( L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) \\&=I_{1}+I_{2}. \end{aligned}$$

The remainder of the proof is divided into three steps. We first show that \(I_{1}\) is asymptotically negligible; we then develop upper and lower bounds for \(I_{2}\) in the second and third steps.

Step 1. We show \(I_{1}=o(f_{n})\). Note that for any \(v\le v_{\delta }^{*}\), \(r(v)\le b-\delta \). Thus, by Lemma A.2, for all sufficiently large n, there exists a constant \(\beta >0\) such that

$$\begin{aligned} \mathbb {P}_{v}\left( L_{n}>nb\right) \le \mathbb {P}_{v}\left( \frac{1}{n}\sum _{i=1}^{n}c_{i}1_{\{U_{i}>1-l_{i}f_{n}\}}-r(v)>\delta \right) \le \exp (-n\beta ) \end{aligned}$$

uniformly for all \(v\le v_{\delta }^{*}\). So the same upper bound holds for \(I_{1}\). Due to the condition on \(f_{n}\), \(I_{1}=o(f_{n})\).

Step 2. We now develop an asymptotic upper bound for \(I_{2} \). Note that

$$\begin{aligned} I_{2}\le \mathbb {P}\left( V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) =\overline{F}_{V}\left( \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) . \end{aligned}$$

Recall that \(\phi ^{-1}\) is the LS transform of the random variable V. Then, by \(\phi (1-\frac{1}{\cdot })\in \mathrm {RV}_{-\alpha }\) and Karamata’s Tauberian theorem, we obtain

$$\begin{aligned} I_{2}&\le \overline{F}_{V}\left( \frac{v_{\delta }^{*}}{\phi (1-f_{n} )}\right) \\&\sim \frac{{1-\phi ^{-1}\left( \frac{\phi (1-f_{n})}{v_{\delta }^{*} }\right) }}{{\Gamma (1-1/\alpha )}}\\&\sim f_{n}\frac{(v_{\delta }^{*})^{-1/\alpha }}{\Gamma (1-1/\alpha )}, \end{aligned}$$

where in the first step we used \(\overline{F}_{V}\in \mathrm {RV}_{-1/\alpha }\), and the second step follows from the regular variation of \(1-\phi ^{-1}\) at 0 with index \(1/\alpha \). Letting \(\delta \downarrow 0\), we obtain

$$\begin{aligned} I_{2}\lesssim f_{n}\frac{(v^{*})^{-1/\alpha }}{\Gamma (1-1/\alpha )}. \end{aligned}$$
(A.4)

Step 3. We now develop an asymptotic lower bound for \(I_{2} \). Denote \(v_{\widehat{\delta }}^{*}\) as the unique solution to the equation \(r(v)=b+\delta \). Similarly, we have \(v_{\widehat{\delta }}^{*}\rightarrow v^{*}\) as \(\delta \rightarrow 0\). It also follows from the monotonicity of r(v) that \(v_{\widehat{\delta }}^{*}\ge v_{\delta }^{*}\). Thus,

$$\begin{aligned} I_{2}\ge \mathbb {P}\left( L_{n}>nb,V>\frac{v_{\widehat{\delta }}^{*}}{\phi (1-f_{n})}\right) . \end{aligned}$$

Note that for any large \(M>0\) and all \(v\in \left[ v_{\hat{\delta }}^{*},M\right] \), we have \(r(v)\ge b+\delta \). Then, by Lemma A.2, uniformly for such v, as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb {P}_{v}\left( L_{n}>nb\right)&\ge \mathbb {P}_{v}\left( \frac{1}{n}\sum _{i=1}^{n}c_{i}1_{\{U_{i}>1-l_{i}f_{n}\}}-r(v)>-\delta \right) \\&=1-\mathbb {P}_{v}\left( \frac{1}{n}\sum _{i=1}^{n}c_{i}1_{\{U_{i} >1-l_{i}f_{n}\}}-r(v)\le -\delta \right) \rightarrow 1. \end{aligned}$$

Hence,

$$\begin{aligned} I_{2}&\gtrsim \overline{F}_{V}\left( \frac{v_{\hat{\delta }}^{*}}{\phi (1-f_{n})}\right) -\overline{F}_{V}\left( \frac{M}{\phi (1-f_{n} )}\right) \\&\sim f_{n}\frac{(v_{\hat{\delta }}^{*})^{-1/\alpha }}{\Gamma (1-1/\alpha )}-f_{n}\frac{M^{-1/\alpha }}{\Gamma (1-1/\alpha )}. \end{aligned}$$

Taking \(M\rightarrow \infty \) followed by \(\delta \rightarrow 0\), we get

$$\begin{aligned} I_{2}\gtrsim f_{n}\frac{(v^{*})^{-1/\alpha }}{\Gamma (1-1/\alpha )}. \end{aligned}$$
(A.5)

Combining (A.4), (A.5) with Step 1 completes the proof of the theorem. \(\square \)
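
To illustrate how the resulting sharp asymptotic \(\mathbb {P}\left( L_{n}>nb\right) \sim f_{n}(v^{*})^{-1/\alpha }/\Gamma (1-1/\alpha )\) is evaluated in practice, consider the following minimal sketch (added for illustration; the homogeneous portfolio with \(c_{j}=l_{j}=1\), for which \(r(v)=1-e^{-v}\) under the limiting form \(\tilde{p}(v,j)=1-e^{-vl_{j}^{\alpha }}\) and hence \(v^{*}=-\log (1-b)\), as well as all parameter values, are hypothetical):

    import math

    # Hypothetical inputs, not taken from the paper's numerical study.
    alpha = 2.0      # regular-variation index, alpha > 1
    f_n = 1e-3       # scale of the marginal default probabilities
    b = 0.25         # loss threshold as a fraction of n

    v_star = -math.log(1.0 - b)     # unique solution of r(v) = b for r(v) = 1 - exp(-v)
    tail_approx = f_n * v_star ** (-1.0 / alpha) / math.gamma(1.0 - 1.0 / alpha)

    print(tail_approx)              # sharp asymptotic approximation of P(L_n > n b)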

Proof of Theorem 4.2

We first note that the expected shortfall can be rewritten as in (4.6). Using Theorem 4.1, in order to get the desired result, it suffices to show that

$$\begin{aligned} \int _{b}^{\infty }\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x\sim f_{n} \frac{\int _{v^{*}}^{\infty }r^{\prime }(v)v^{-1/\alpha }\mathrm {d}v}{\Gamma (1-1/\alpha )}. \end{aligned}$$
(A.6)

We decompose the left-hand side of (A.6) into the following two terms

$$\begin{aligned} \int _{b}^{\infty }\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x&=\int _{b}^{\bar{c}}\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x+\int _{\bar{c} }^{\infty }\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x\\&:=J_{1}+J_{2}, \end{aligned}$$

where \(\bar{c}=\sum _{j\le |\mathcal {W}|}c_{j}w_{j}\). The remainder of the proof is divided into three steps. In the first two steps we show that \(\mathbb {P}\left( L_{n}>n\bar{c}\right) \) and \(J_{2}\) are asymptotically negligible; in the last step we derive the asymptotics for \(J_{1}\). For simplicity, we denote the unique solution of the equation \(r(v)=s\) for \(0\le s\le \bar{c}\) by \(r^{\leftarrow }(s)\).

Step 1. In this step, we show

$$\begin{aligned} \mathbb {P}\left( L_{n}>n\bar{c}\right) =o(f_{n}). \end{aligned}$$
(A.7)

Fix an arbitrarily small \(\delta >0\). Proceeding in the same way as in step 1 in the proof of Theorem 4.1, for all sufficiently large n, there exists a constant \(\beta >0\) such that

$$\begin{aligned} \mathbb {P}\left( L_{n}>n\bar{c},V\le \frac{r^{\leftarrow }(\bar{c}-\delta )}{\phi (1-f_{n})}\right) \le \exp (-n\beta ). \end{aligned}$$

Due to the condition on \(f_{n}\) and letting \(\delta \downarrow 0\), we have the desired result in (A.7).

Step 2. In this step, we show \(J_{2}=o(f_{n}).\) Note that \(J_{2}\) can be rewritten as follows,

$$\begin{aligned} J_{2}&=\mathbb {E}\left[ \left( \frac{L_{n}}{n}-\bar{c}\right) _{+}\right] \\&=\mathbb {E}\left[ \left( \frac{L_{n}}{n}-\bar{c}\right) 1_{\left\{ L_{n}>n\bar{c}\right\} }\right] . \end{aligned}$$

Since \(\frac{L_{n}}{n}<\max \limits _{j\le \vert \mathcal {W}\vert }c_{j}\), we have

$$\begin{aligned} J_{2}\le \left( \max \limits _{j\le \vert \mathcal {W}\vert }c_{j}-\bar{c}\right) \mathbb {P}\left( L_{n}>n\bar{c}\right) . \end{aligned}$$

It follows from (A.7) that \(J_{2}=o(f_{n})\).

Step 3. Finally, we show

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{b}^{\bar{c}}\frac{\Gamma (1-1/\alpha )}{f_{n} }\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x=\int _{v^{*}}^{\infty }r^{\prime }(v)v^{-1/\alpha }\mathrm {d}v. \end{aligned}$$

First note that, for any \(x\in [b,\bar{c}]\), by Theorem 4.1 we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\Gamma (1-1/\alpha )}{f_{n}}\mathbb {P}\left( L_{n}>nx\right) =(r^{\leftarrow }(x))^{-1/\alpha }. \end{aligned}$$

Further, the following inequality holds for any \(x\in [b,\bar{c}]\):

$$\begin{aligned} \frac{\Gamma (1-1/\alpha )}{f_{n}}\mathbb {P}\left( L_{n}>nx\right) \le \frac{\Gamma (1-1/\alpha )}{f_{n}}\mathbb {P}\left( L_{n}>nb\right) . \end{aligned}$$

Applying the dominated convergence theorem, we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{b}^{\bar{c}}\frac{\Gamma (1-1/\alpha )}{f_{n} }\mathbb {P}\left( L_{n}>nx\right) \mathrm {d}x&=\int _{b}^{\bar{c}}\left( \lim _{n\rightarrow \infty }\frac{\Gamma (1-1/\alpha )}{f_{n}}\mathbb {P}\left( L_{n}>nx\right) \right) \mathrm {d}x\\&=\int _{b}^{\bar{c}}(r^{\leftarrow }(x))^{-1/\alpha }\mathrm {d}x\\&=\int _{v^{*}}^{\infty }r^{\prime }(v)v^{-1/\alpha }\mathrm {d}v. \end{aligned}$$

The last equality follows from the change of variable \(v=r^{\leftarrow }(x)\).

Combining Steps 2 and 3 completes the proof of the theorem. \(\square \)
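
The change of variable in the final step of the proof above can be checked numerically for a toy specification (an added sketch; the choice \(r(v)=1-e^{-v}\), so that \(\bar{c}=1\), \(r^{\leftarrow }(x)=-\log (1-x)\) and \(r^{\prime }(v)=e^{-v}\), as well as the parameter values, are hypothetical):

    import numpy as np
    from scipy.integrate import quad

    alpha, b = 2.0, 0.25
    v_star = -np.log(1.0 - b)       # r(v*) = b for r(v) = 1 - exp(-v)

    # int_b^{c_bar} (r^{<-}(x))^{-1/alpha} dx with c_bar = 1
    lhs, _ = quad(lambda x: (-np.log(1.0 - x)) ** (-1.0 / alpha), b, 1.0)
    # int_{v*}^{infinity} r'(v) v^{-1/alpha} dv
    rhs, _ = quad(lambda v: np.exp(-v) * v ** (-1.0 / alpha), v_star, np.inf)

    print(lhs, rhs)                 # the two integrals agree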

1.2 A.2 Proofs for algorithm efficiency

Lemmas A.3 and A.4 will be used in proving Lemma 5.1.

Lemma A.3

For sufficiently large n, there exists a constant C such that

$$\begin{aligned} \frac{f_{V}(x)}{f_{V}^{*}(x)}\le C\left( -\log \phi (1-f_{n})\right) \end{aligned}$$
(A.8)

for all x, where \(f_{V}^{*}(x)\) is defined in (5.6).

Proof

By the definition of \(f_{V}^{*}(x)\), the ratio \(\frac{f_{V}(x)}{f_{V}^{*}(x)}\) equals 1 for \(x<x_{0}\). Hence, to show (A.8), it suffices to find a constant C such that the bound holds for all \(x\ge x_{0}\).

Note that when \(x\ge x_{0}\),

$$\begin{aligned} \frac{f_{V}(x)}{f_{V}^{*}(x)}=\frac{f_{V}(x)}{\overline{F}_{V}(x_{0})} x_{0}^{1/\log \phi (1-f_{n})}\left( -\log \phi (1-f_{n})\right) x^{1-\frac{1}{\log \phi (1-f_{n})}}. \end{aligned}$$

By Assumption 4.1, V has an eventually monotone density, and hence \(f_{V}\in \mathrm {RV}_{-1/\alpha -1}\). Then, by Potter’s bounds [see, e.g., Theorem B.1.9(5) of de Haan and Ferreira (2007)], for any small \(\varepsilon >0\) there exist \(x_{0}>0\) and a constant \(C_{0}>0\) such that for all \(x\ge x_{0}\),

$$\begin{aligned} f_{V}(x)\le C_{0}x^{-\frac{1}{\alpha }-1+\varepsilon }. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{f_{V}(x)}{f_{V}^{*}(x)}&\le \frac{C_{0}}{\overline{F}_{V}(x_{0} )}x_{0}^{1/\log \phi (1-f_{n})}\left( -\log \phi (1-f_{n})\right) x^{-1/\alpha -\frac{1}{\log \phi (1-f_{n})}+\varepsilon }\nonumber \\&\le C\left( -\log \phi (1-f_{n})\right) , \end{aligned}$$
(A.9)

which yields the desired result, since \(x\ge x_{0}\) and \(-1/\alpha -\frac{1}{\log \phi (1-f_{n})}+\varepsilon <0\). \(\square \)
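
For readers who wish to experiment with the importance-sampling proposal, the proof above pins down its shape: below \(x_{0}\) it coincides with \(f_{V}\), and on \([x_{0},\infty )\) it is a Pareto-type density with index \(\gamma =-1/\log \phi (1-f_{n})\). The following sketch samples from a proposal of this form by inverse transform on the tail; it is a reconstruction for illustration only (the authoritative definition of \(f_{V}^{*}\) is (5.6) in the paper), and the function names and the body sampler are hypothetical.

    import numpy as np

    def sample_V_star(x0, tail_mass, gamma, sample_body, size, rng=None):
        """Draw from a proposal equal to f_V below x0 with a Pareto(gamma) tail above x0.

        tail_mass = bar F_V(x0); sample_body(m) must return m draws from f_V restricted to (0, x0).
        """
        rng = rng or np.random.default_rng(0)
        use_tail = rng.random(size) < tail_mass
        out = np.empty(size)
        # Tail component: P(X > x) = (x / x0)^(-gamma) for x >= x0, via inverse transform.
        out[use_tail] = x0 * rng.random(use_tail.sum()) ** (-1.0 / gamma)
        out[~use_tail] = sample_body((~use_tail).sum())
        return out

Since \(\gamma =-1/\log \phi (1-f_{n})\rightarrow 0\) as \(n\rightarrow \infty \), such a proposal has a much heavier tail than \(f_{V}\), which is what makes the uniform likelihood-ratio bound (A.8) possible.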

Lemma A.4

If \(\phi (1-\frac{1}{\cdot })\in \mathrm {RV}_{-\alpha }\) for some \(\alpha >1\) and \(f_{n}\) is a positive deterministic function converging to 0 as \(n\rightarrow \infty \), then

$$\begin{aligned} \log \phi (1-f_{n})\sim \alpha \log (f_{n}). \end{aligned}$$

Proof

By Proposition B.1.9(1) of de Haan and Ferreira (2007), \(\phi \in \mathrm {RV}_{\alpha }(1)\) implies that

$$\begin{aligned} \log \phi (1-x)\sim \alpha \log (x) \end{aligned}$$

as \(x\rightarrow 0\). \(\square \)
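
As a concrete illustration of Lemma A.4 (added here; the Gumbel generator is a standard LT-Archimedean example, not a choice made in the original proof), take \(\phi (t)=(-\log t)^{\theta }\) with \(\theta >1\). Then

$$\begin{aligned} \log \phi (1-x)=\theta \log \left( -\log (1-x)\right) =\theta \log \left( x(1+o(1))\right) \sim \theta \log (x),\qquad x\downarrow 0, \end{aligned}$$

which matches the lemma with \(\alpha =\theta \).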

The following proof is motivated by the proof of Theorem 3 in Bassamboo et al. (2008).

Proof of Lemma 5.1

Let

$$\begin{aligned} \hat{L}=\prod _{j\le |\mathcal {W}|}\left( \frac{p_{j}}{p_{j}^{*}}\right) ^{n_{j}Y_{j}}\left( \frac{1-p_{j}}{1-p_{j}^{*}}\right) ^{n_{j}(1-Y_{j} )}. \end{aligned}$$

Note that if \(\mathbb {E}\left[ L_{n}\left| V=\frac{v}{\phi (1-f_{n} )}\right. \right] <nb\), then \(p_{j}^{*}=p_{\theta ^{*}}(V\phi (1-f_{n}),j)\), where \(\theta ^{*}\) is chosen by solving \(\Lambda _{L_{n}|V}^{\prime } (\theta )=nb\); otherwise \(p_{j}^{*}=p\left( V\phi (1-f_{n}),j\right) \), i.e., \(\theta ^{*}=0\). Moreover, by (5.8), \(\hat{L}\) can be written as follows.

$$\begin{aligned} \hat{L}=\exp (-\theta ^{*}L_{n}|V+\Lambda _{L_{n}|V}(\theta ^{*})). \end{aligned}$$

Then it follows that, for any v,

$$\begin{aligned} 1_{\left\{ L_{n}>nb,V=\frac{v}{\phi (1-f_{n})}\right\} }\hat{L} \le 1_{\left\{ L_{n}>nb,V=\frac{v}{\phi (1-f_{n})}\right\} }\exp (-\theta ^{*}nb+\Lambda _{L_{n}|V}(\theta ^{*}))\qquad \text {a.s.} \end{aligned}$$

Since \(\Lambda _{L_{n}|V}(\theta )\) is a strictly convex function, one can observe that \(-\theta nb+\Lambda _{L_{n}|V}(\theta )\) is minimized at \(\theta ^{*}\) and equals 0 at \(\theta =0\). Hence, the following relation

$$\begin{aligned} 1_{\left\{ L_{n}>nb,V=\frac{v}{\phi (1-f_{n})}\right\} }\hat{L} \le 1_{\left\{ L_{n}>nb,V=\frac{v}{\phi (1-f_{n})}\right\} }\qquad \text {a.s.} \end{aligned}$$
(A.10)

holds for any v.
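
As an aside, the twisting parameter \(\theta ^{*}\) referenced above is straightforward to compute numerically. The sketch below (an added illustration with hypothetical inputs) assumes that, conditional on V, class j consists of \(n_{j}\) independent defaults with exposure \(c_{j}\) and default probability \(p_{j}\), so that \(\Lambda _{L_{n}|V}(\theta )=\sum _{j}n_{j}\log (1-p_{j}+p_{j}e^{\theta c_{j}})\); the precise conditional structure is defined in Section 5 of the paper.

    import numpy as np
    from scipy.optimize import brentq

    # Hypothetical class-level inputs: sizes, exposures and conditional default probabilities.
    n_j = np.array([400, 400, 200])
    c_j = np.array([1.0, 2.0, 4.0])
    p_j = np.array([0.01, 0.02, 0.05])
    n, b = n_j.sum(), 0.3

    def lambda_prime(theta):
        """Derivative of Lambda_{L_n|V} under the assumed conditional Bernoulli structure."""
        w = p_j * np.exp(theta * c_j)
        return float(np.sum(n_j * c_j * w / (1.0 - p_j + w)))

    if lambda_prime(0.0) < n * b:
        # Lambda' increases from the conditional mean towards sum_j n_j c_j, so a root
        # exists whenever n*b is below the maximum loss; 50.0 is an ad hoc upper bracket.
        theta_star = brentq(lambda th: lambda_prime(th) - n * b, 0.0, 50.0)
    else:
        theta_star = 0.0            # conditional mean already exceeds n*b: no twisting

    p_star = p_j * np.exp(theta_star * c_j) / (1.0 - p_j + p_j * np.exp(theta_star * c_j))
    print(theta_star, p_star)       # twisted default probabilities p_j^*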

To prove the lemma, we now decompose

$$\begin{aligned} \mathbb {E}^{*}\left[ 1_{\{L_{n}>nb\}}L^{*^{2}}\right]&=\mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V\le \frac{v_{\delta }^{*} }{\phi (1-f_{n})}\right\} }L^{*^{2}}\right] +\mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*^{2}}\right] \\&=K_{1}+K_{2}, \end{aligned}$$

where \(v_{\delta }^{*}\) is the unique solution to the equation \(r(v)=b-\delta \).

The remainder of the proof is divided into three steps.

Step 1. In this step, we show

$$\begin{aligned} K_{1}=o(f_{n}). \end{aligned}$$

By Lemma A.3, for sufficiently large n, there exists a finite positive constant C such that

$$\begin{aligned} \frac{f_{V}(v)}{f_{V}^{*}(v)}\le C\left( -\log \phi (1-f_{n})\right) \end{aligned}$$

for all v. From (A.10), it then follows that

$$\begin{aligned} 1_{\left\{ L_{n}>nb,V\le \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*^{2}}\le C\left( -\log \phi (1-f_{n})\right) \left( 1_{\left\{ L_{n}>nb,V\le \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*}\right) \qquad \text {a.s.} \end{aligned}$$

Therefore, \(K_{1}\) is upper bounded by

$$\begin{aligned} \mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V\le \frac{v_{\delta }^{*} }{\phi (1-f_{n})}\right\} }L^{*^{2}}\right]&\le C\left( -\log \phi (1-f_{n})\right) \left( \mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V\le \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*}\right] \right) \\&=C\left( -\log \phi (1-f_{n})\right) \left( \mathbb {P}\left( L_{n}>nb,V\le \frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) \right) \\&\le C\left( -\log \phi (1-f_{n})\right) \exp (-\beta n). \end{aligned}$$

The last step is due to Step 1 in the proof of Theorem 4.1. Moreover, by Lemma A.4, \(-\log \phi (1-f_{n})\sim \alpha \log \left( \frac{1}{f_{n}}\right) =o\left( \frac{1}{f_{n}}\right) \). Since \(f_{n}\) decays at a subexponential rate, \(\frac{1}{f_{n}}\exp (-\beta n/2)\rightarrow 0\). Therefore, \(K_{1}\) is still \(o(f_{n})\).

Step 2. We show that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{\log K_{2}}{\log f_{n}}\le 2. \end{aligned}$$
(A.11)

By Jensen’s inequality,

$$\begin{aligned} \mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*^{2}}\right]&\ge \left( \mathbb {E} ^{*}\left[ 1_{\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n} )}\right\} }L^{*}\right] \right) ^{2}\\&=\left( \mathbb {P}\left( L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right) \right) ^{2}\\&\sim f_{n}^{2}\left( \frac{(v^{*})^{-1/\alpha }}{\Gamma (1-1/\alpha )}\right) ^{2}, \end{aligned}$$

where the last step is due to Theorem 4.1. Then (A.11) follows by applying the logarithm function on both sides and using the fact that \(\log \left( f_{n}\right) <0\) for all sufficiently large n.

Step 3. We show that

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{\log K_{2}}{\log f_{n}}\ge 2. \end{aligned}$$
(A.12)

First note that, on the set \(\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} \), by (A.10) the likelihood ratio \(L^{*}\) is upper bounded by \(\frac{f_{V}(v)}{f_{V}^{*}(v)}\); hence, by (A.9), for all sufficiently large n it holds for all \(v>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\) that

$$\begin{aligned} \frac{f_{V}(v)}{f_{V}^{*}(v)}&<\frac{C_{0}}{\overline{F}_{V}(x_{0} )}x_{0}^{1/\log \phi (1-f_{n})}\left( -\log \phi (1-f_{n})\right) v^{-1/\alpha -\frac{1}{\log \phi (1-f_{n})}+\varepsilon }\\&\le C\left( -\log \phi (1-f_{n})\right) \left( \frac{v_{\delta }^{*} }{\phi (1-f_{n})}\right) ^{-1/\alpha -\frac{1}{\log \phi (1-f_{n})}+\varepsilon }\\&<C\left( -\log \phi (1-f_{n})\right) \left( \phi (1-f_{n})\right) ^{1/\alpha +\frac{1}{\log \phi (1-f_{n})}-\varepsilon }. \end{aligned}$$

Multiplying it with the indicator and taking expectation under \(\mathbb {E} ^{*}\), we obtain

$$\begin{aligned} \mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*^{2}}\right] \le C^{2}\left( -\log \phi (1-f_{n})\right) ^{2}\left( \phi (1-f_{n})\right) ^{2/\alpha +\frac{2}{\log \phi (1-f_{n})}-2\varepsilon }. \end{aligned}$$

Then, taking logarithms on both sides, dividing by \(\log f_{n}\) and by Lemma A.4, we obtain

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{\log \mathbb {E}^{*}\left[ 1_{\left\{ L_{n}>nb,V>\frac{v_{\delta }^{*}}{\phi (1-f_{n})}\right\} }L^{*^{2} }\right] }{\log f_{n}}\ge 2-2\alpha \varepsilon . \end{aligned}$$

Finally, (A.12) is obtained by letting \(\varepsilon \downarrow 0\).

Combining Steps 1, 2 and 3, the desired result asserted in the lemma is obtained. \(\square \)

The following two proofs are motivated by Chan and Kroese (2010). Lemma A.5 below will be used in proving Lemma 6.1.

Lemma A.5

Let \(R_{1},\ldots ,R_{n}\) be an i.i.d. sequence of standard exponential random variables. Suppose \(R_{(k)}\) is the kth order statistic and \(\lim _{n\rightarrow \infty }\frac{k}{n}=a<1\). Then, for every \(\varepsilon >0\), there exists a constant \(\beta >0\) such that the following inequality

$$\begin{aligned} \mathbb {P}\left( \left| R_{(k)}-\log \left( \frac{1}{1-a}\right) \right| \ge \varepsilon \right) \le \frac{\beta }{n}. \end{aligned}$$

holds for all sufficiently large n.

Proof

For i.i.d. standard exponential random variables \(R_{i},i=1,\ldots ,n\), it follows from Rényi (1953) that

$$\begin{aligned} R_{(k)}\overset{d}{=}\sum _{j=1}^{k}\frac{R_{j}}{n-j+1}. \end{aligned}$$

Then,

$$\begin{aligned} \mathbb {E}[R_{(k)}]=\sum _{j=1}^{k}\frac{1}{n-j+1}=H_{n}-H_{n-k}\rightarrow \log \left( \frac{1}{1-a}\right) , \quad \text {as }n\rightarrow \infty , \end{aligned}$$
(A.13)

where \(H_{n}\) denotes the nth harmonic number, i.e., \(H_{n}=1+\frac{1}{2}+\cdots +\frac{1}{n}\) for \(n\ge 1\). (A.13) is verified by noting the following asymptotic expansion; see, e.g., Berndt (1998),

$$\begin{aligned} H_{n}=\log (n)+\gamma +O\left( \frac{1}{n}\right) , \end{aligned}$$

where \(\gamma \) is Euler’s constant. Similarly,

$$\begin{aligned} \mathrm {Var}[R_{(k)}]=\sum _{j=1}^{k}\left( \frac{1}{n-j+1}\right) ^{2} =H_{n}^{(2)}-H_{n-k}^{(2)}\sim \frac{a}{1-a}\frac{1}{n}, \quad \text {as } n\rightarrow \infty , \end{aligned}$$
(A.14)

where \(H_{n}^{(2)}\) is the nth harmonic number of order 2, i.e., \(H_{n}^{(2)}=1+\frac{1}{2^{2}}+\cdots +\frac{1}{n^{2}}\) for \(n\ge 1\). (A.14) is derived by applying the asymptotic expansion of \(H_{n}^{(2)} \); see, e.g., Berndt (1998),

$$\begin{aligned} H_{n}^{(2)}=\frac{\pi ^{2}}{6}-\frac{1}{n}+O\left( \frac{1}{n^{2}}\right) . \end{aligned}$$

Then, by Chebyshev’s inequality, it follows that, for every \(n>0\),

$$\begin{aligned} \mathbb {P}\left( \vert R_{(k)}-\mathbb {E}[R_{(k)}]\vert \ge \varepsilon \right) \le \frac{\mathrm {Var}[R_{(k)}]}{\varepsilon ^{2}}. \end{aligned}$$

Due to (A.13) and (A.14), there exists N, such that for all \(n\ge N\),

$$\begin{aligned} \mathbb {P}\left( \left| R_{(k)}-\log \left( \frac{1}{1-a}\right) \right| \ge \varepsilon \right) \le \frac{\beta }{n}, \end{aligned}$$

where \(\beta \) only depends on \(\varepsilon \) and a. \(\square \)
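
A quick simulation check of the limits (A.13) and (A.14) (an added sketch with hypothetical values):

    import numpy as np

    rng = np.random.default_rng(2)
    n, a, reps = 2000, 0.3, 2000
    k = int(a * n)

    R = rng.exponential(size=(reps, n))
    R_k = np.sort(R, axis=1)[:, k - 1]          # k-th order statistic of each replication

    print(R_k.mean(), np.log(1.0 / (1.0 - a)))  # (A.13): mean close to log(1/(1-a))
    print(R_k.var(), (a / (1.0 - a)) / n)       # (A.14): variance close to (a/(1-a))/n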

Proof of Lemma 6.1

Recall that \(O_{i}=\frac{R_{i}}{\phi (1-l_{i}f_{n})}\), for all \(i=1,\ldots ,n\). Then the order statistic \(O_{(k)}\) is almost surely lower bounded by

$$\begin{aligned} \frac{R_{(k)}}{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j} f_{n}\right) }. \end{aligned}$$

Since \(k=\min \{l:\sum _{i=1}^{l}c_{(i)}>nb\}\), we have

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{k}{n}\ge \frac{b}{\max \limits _{j\le |\mathcal {W}|}c_{j}}:=b^{\prime }. \end{aligned}$$

Fix \(\varepsilon >0\). For all sufficiently large n, \(\mathbb {E}\left[ S^{2}(\mathbf {R})\right] \) can be bounded as follows,

$$\begin{aligned} \mathbb {E}\left[ S^{2}(\mathbf {R})\right]&\le \mathbb {E}\left[ \mathbb {P}\left( V>\frac{R_{(\lfloor nb^{\prime }\rfloor )}}{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j}f_{n}\right) }\right) ^{2}\right] \\&\le \mathbb {E}\left[ \left( \mathbb {P}\left( V>\frac{R_{(\lfloor nb^{\prime }\rfloor )}}{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j}f_{n}\right) },R_{(\lfloor nb^{\prime }\rfloor )}\ge \log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon \right) \right. \right. \\&\quad +\, \left. \left. \mathbb {P}\left( V>\frac{R_{(\lfloor nb^{\prime }\rfloor )}}{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j}f_{n}\right) },R_{(\lfloor nb^{\prime }\rfloor )}<\log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon \right) \right) ^{2}\right] \\&\le \left( \mathbb {P}\left( V>\frac{\log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon }{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j}f_{n}\right) }\right) +\mathbb {P}\left( R_{(\lfloor nb^{\prime }\rfloor )}<\log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon \right) \right) ^{2}. \end{aligned}$$

Then,

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{\mathbb {E}\left[ S^{2}(\mathbf {R})\right] }{f_{n}^{2}}&\le \left( \limsup _{n\rightarrow \infty }\frac{\mathbb {P}\left( V>\frac{\log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon }{\phi \left( 1-\max \limits _{j\le |\mathcal {W}|}l_{j}f_{n}\right) }\right) }{f_{n}}+\limsup _{n\rightarrow \infty }\frac{\mathbb {P}\left( R_{(\lfloor nb^{\prime }\rfloor )}<\log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon \right) }{f_{n}}\right) ^{2}\\&\le \left( \max \limits _{j\le |\mathcal {W}|}l_{j}\frac{\left( \log \left( \frac{1}{1-b^{\prime }}\right) -\varepsilon \right) ^{-1/\alpha }}{\Gamma (1-1/\alpha )}+M\right) ^{2}<\infty . \end{aligned}$$

The last step is due to the regular variation of V, Lemma A.5 and the condition that \(\frac{1}{n}=O(f_{n})\). \(\square \)
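
For completeness, here is a minimal sketch of the conditional Monte Carlo estimator \(S(\mathbf {R})\) analysed above (an added illustration, not the paper's implementation). It uses the representation of the default events as \(\{V>O_{i}\}\) with \(O_{i}=R_{i}/\phi (1-l_{i}f_{n})\), so that \(\{L_{n}>nb\}=\{V>O_{(k)}\}\) and \(S(\mathbf {R})=\overline{F}_{V}(O_{(k)})\), where \(c_{(i)}\) is taken to be the exposure of the obligor with the i-th smallest \(O_{i}\); this is one consistent reading of the quantities used in the proof above, and Section 6 of the paper gives the precise definition. The generator phi and the survival function surv_V are user-supplied placeholders and must correspond to the same copula (\(\phi ^{-1}\) being the LS transform of V); all names and default values are hypothetical.

    import numpy as np

    def conditional_mc_estimate(c, l, b, f_n, phi, surv_V, reps=10_000, rng=None):
        """Sketch of the conditional MC estimator of P(L_n > n b): average of S(R) = bar F_V(O_(k))."""
        rng = rng or np.random.default_rng(0)
        c, l = np.asarray(c, float), np.asarray(l, float)
        n = c.size
        thresholds = phi(1.0 - l * f_n)            # phi(1 - l_i f_n), one value per obligor
        estimates = np.empty(reps)
        for rep in range(reps):
            R = rng.exponential(size=n)            # i.i.d. standard exponentials
            O = R / thresholds                     # O_i = R_i / phi(1 - l_i f_n)
            order = np.argsort(O)                  # obligors default in increasing order of O_i
            cum_loss = np.cumsum(c[order])
            k = np.searchsorted(cum_loss, n * b, side="right")   # first index with cumulative loss > n b
            estimates[rep] = surv_V(O[order][k]) if k < n else 0.0
        return estimates.mean()

Because each replication returns a value \(\overline{F}_{V}(O_{(k)})\in [0,1]\) rather than a rare 0/1 indicator, the estimator is far smoother than crude Monte Carlo, which is the mechanism behind the variance reduction quantified by Lemma 6.1.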

About this article

Cite this article

Cui, H., Tan, K.S. & Yang, F. Portfolio credit risk with Archimedean copulas: asymptotic analysis and efficient simulation. Ann Oper Res 332, 55–84 (2024). https://doi.org/10.1007/s10479-022-04717-0
