
H-relative error estimation for multiplicative regression model with random effect

Original Paper · Computational Statistics

Abstract

Relative error approaches are preferable to absolute error ones, such as least squares and least absolute deviation, when scale invariance of the response variable is required, for example in the analysis of stock and survival data. A relative error estimation procedure based on the h-likelihood is developed for a multiplicative regression model with random effect, avoiding the heavy and intractable integration required by marginal-likelihood methods. Statistical properties of the parameters and random effect in the model are studied. Numerical studies, including simulations and real examples, show that the proposed estimation procedure performs well.
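
As a concrete illustration (not from the paper), the following minimal Python sketch simulates data from a multiplicative model with random effect, \(Y_{ij}=\exp (X_{ij}^T\beta +\nu _{i})\varepsilon _{ij}\), and evaluates the product-type relative error criterion whose kernel \(Y_{ij}e^{-\eta _{ij}}+Y_{ij}^{-1}e^{\eta _{ij}}-2\) reappears in the h-likelihood derivatives of the Appendix; all dimensions, names, and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions (hypothetical, not from the paper):
    # K groups, n_i observations per group, p covariates.
    K, n_i, p = 20, 50, 3
    beta_true = np.array([0.5, -0.3, 0.2])
    sigma_true = 0.4                                      # sd of the random effect

    X = rng.normal(size=(K, n_i, p))
    nu = rng.normal(scale=sigma_true, size=K)             # random effects nu_i
    eps = np.exp(rng.normal(scale=0.1, size=(K, n_i)))    # positive multiplicative errors
    Y = np.exp(X @ beta_true + nu[:, None]) * eps         # Y_ij = exp(X_ij' beta + nu_i) eps_ij

    def relative_error_criterion(beta, X, Y, nu):
        """Product-type relative error kernel summed over all observations:
        Y_ij exp(-eta_ij) + exp(eta_ij)/Y_ij - 2, with eta_ij = X_ij' beta + nu_i,
        matching the kernel that appears in h_i^{(2)} in the Appendix."""
        eta = X @ beta + nu[:, None]
        return np.sum(Y * np.exp(-eta) + np.exp(eta) / Y - 2.0)

    print(relative_error_criterion(beta_true, X, Y, nu))  # near its minimum at the truth

The criterion is invariant to rescaling of \(Y\) together with the intercept, which is the scale-invariance property motivating relative error estimation.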


References

  • Belenky G, Wesensten NJ, Thorne DR, Thomas ML, Sing HC, Redmond DP, Russo MB, Balkin TJ (2003) Patterns of performance degradation and restoration during sleep restriction and subsequent recovery: a sleep dose-response study. J Sleep Res 12(1):1–12

  • Chen K, Guo S, Lin Y, Ying Z (2010) Least absolute relative error estimation. J Am Stat Assoc 105(491):1104–1112

  • Chen K, Lin Y, Wang Z, Ying Z (2016) Least product relative error estimation. J Multivar Anal 144:91–98

  • Cox DR, Reid N (1987) Parameter orthogonality and approximate conditional inference. J R Stat Soc Ser B (Methodol) 49(1):1–39

  • Cox GM, Cochran WG (1957) The use of a concomitant variable in selecting an experimental design. Biometrika 44(1/2):150–158

  • Crouch EA, Spiegelman D (1990) The evaluation of integrals of the form \(\int _{-\infty }^{+\infty }f(t)\exp (-t^2)\,dt\): application to logistic-normal models. J Am Stat Assoc 85(410):464–469

  • Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B (Methodol) 39(1):1–38

  • Ha ID, Lee Y, Song J-K (2002) Hierarchical-likelihood approach for mixed linear models with censored data. Lifetime Data Anal 8(2):163–176

  • Karim MR, Zeger SL (1992) Generalized linear models with random effects; salamander mating revisited. Biometrics 48:631–644

  • Khoshgoftaar TM, Bhattacharyya BB, Richardson GD (1992) Predicting software errors, during development, using nonlinear regression models: a comparative study. IEEE Trans Reliab 41(3):390–395

  • Klein JP, Lee SC, Moeschberger ML (1990) A partially parametric estimator of survival in the presence of randomly censored data. Biometrics 46(3):795–811

  • Lee Y, Nelder JA (1996) Hierarchical generalized linear models. J R Stat Soc Ser B (Methodol) 58:619–678

  • Lee Y, Nelder JA (2001) Hierarchical generalised linear models: a synthesis of generalised linear models, random-effect models and structured dispersions. Biometrika 88(4):987–1006

  • Lee Y, Nelder JA (2005) Likelihood for random-effect models. Stat Oper Res Trans 29(2):141–182

  • Liu X, Lin Y, Wang Z (2016) Group variable selection for relative error regression. J Stat Plan Inference 175:40–50

  • Makridakis SG (1985) The forecasting accuracy of major time series methods. J R Stat Soc Ser D (The Statistician) 34(2):261–262

  • Narula SC, Wellington JF (1977) Prediction, linear regression and the minimum sum of relative errors. Technometrics 19(2):185–190

  • Paik MC, Lee Y, Ha ID (2015) Frequentist inference on random effects based on summarizability. Stat Sin 25:1107–1132

  • Park H, Stefanski L (1998) Relative-error prediction. Stat Probab Lett 40(3):227–236

  • Patterson HD, Thompson R (1971) Recovery of inter-block information when block sizes are unequal. Biometrika 58(3):545–554

  • Portnoy S, Koenker R (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Stat Sci 12(4):279–300

  • Rao JNK (2003) Small area estimation. Wiley, New York

  • Robinson GK (1991) That BLUP is a good thing: the estimation of random effects. Stat Sci 6(1):15–32

  • Stigler SM (1981) Gauss and the invention of least squares. Ann Stat 9(3):465–474

  • Tierney L, Kadane JB (1986) Accurate approximations for posterior moments and marginal densities. J Am Stat Assoc 81(393):82–86

  • Vaida F, Meng X (2004) Mixed linear models and the EM algorithm. In: Applied Bayesian modeling and causal inference from incomplete-data perspectives. Wiley, New York

  • Wang Z, Chen Z, Wu Y (2017) A relative error estimation approach for single index model. J Syst Sci Complex 30:1160–1172

  • Wang Z, Liu W, Lin Y (2015) A change-point problem in relative error-based regression. TEST 24(4):835–856

  • Ye J (2007) Price models and the value relevance of accounting information. SSRN working paper 1003067

  • Zhang Q, Wang Q (2013) Local least absolute relative error estimating approach for partially linear multiplicative model. Stat Sin 23(3):1091–1116


Acknowledgements

The authors are grateful to the Editor, the Associate Editor, and the anonymous referees for comments and suggestions that led to improvements in the paper. This research was partially supported by the State Key Program of the National Natural Science Foundation of China (No. 11231010) and the National Natural Science Foundation of China (No. 11471302).

Author information


Corresponding author

Correspondence to Zhanfeng Wang.

Appendix: Proofs of the main results

To prove the theorems, we need the following conditions:

  • \(A_{1}\): For each \(i\in \{1,\ldots ,K\}\), \(n_{i}/\sum _{j=1}^K n_j \rightarrow \lambda _i>0\), as \(n_{j} \rightarrow \infty \), \(j=1,\ldots ,K\).

  • \(A_{2}\): \(\Vert X_{ij}\Vert <\infty \) for \(j=1,\ldots ,n_i, i=1,\ldots ,K\).

  • \(A_{3}\): As \(n_i \rightarrow \infty \), \(K/n_i = O(1)\).

Proof of Theorem 1

As in Paik et al. (2015), \(\hat{\nu }_{i}\) has a limiting normal distribution; here we focus on deriving its asymptotic variance. Let (\(\hat{\varvec{\theta }}\), \(\hat{\nu }\)) be a solution of the estimating equations

$$\begin{aligned} \frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}})=0, \qquad W\{\varvec{\theta },\nu ;\varvec{{Y}}\}=0, \end{aligned}$$

where \(W\{\varvec{\theta },\nu ;\varvec{{Y}}\}=(h_{1}^{(1)}\{\varvec{\theta },\nu _{1};Y_{1}\},\ldots , h_{K}^{(1)}\{\varvec{\theta },\nu _{K};Y_{K}\})^\top \) and \(\nu =(\nu _1,\ldots , \nu _K)^\top \). By Taylor expansion, we get

$$\begin{aligned} h_{i}^{(1)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}-h_{i}^{(1)}\{\varvec{\theta },\nu _{0i};Y_{i}\} \approx B_{21i}(\varvec{\hat{\theta }}-\varvec{\theta }), \\ \frac{\partial }{\partial \varvec{\theta }}m(\varvec{\hat{\theta }};\varvec{{Y}})-\frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}}) \approx -A_{11}(\varvec{\hat{\theta }}-\varvec{\theta }), \end{aligned}$$

where \(A_{11}=E \{ -\frac{\partial ^2}{\partial \varvec{\theta } \partial \varvec{\theta }^{\top }}m(\varvec{\theta };\varvec{{Y}}_i) \}\) and \(B_{21i} = E \{ \frac{\partial }{\partial \varvec{\theta }^{\top }}h_{i}^{(1)} \{ \varvec{\theta }, \nu _{i};\varvec{{Y}}_{i} \} \,|\, \nu _{i} = \nu _{0i} \}\). Combining the two expansions, we obtain

$$\begin{aligned} h_{i}^{(1)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}-h_{i}^{(1)}\{\varvec{\theta },\nu _{0i};Y_{i}\}\approx B_{21i}A_{11}^{-1}\frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}}). \end{aligned}$$

Under \(A_{3}\), we can write

$$\begin{aligned} h_{i}^{(1)}\{\varvec{\hat{\theta }},\hat{\nu _i};Y_{i}\}&=0 \approx h_{i}^{(1)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}+(\hat{\nu _i}-\nu _{0i})h_{i}^{(2)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}\\&\quad +\frac{1}{2}(\hat{\nu _i}-\nu _{0i})^2h_{i}^{(3)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}, \end{aligned}$$

and

$$\begin{aligned} \sqrt{n_i}(\hat{\nu _i}-\nu _{0i})=\{-h_{i}^{(2)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}+O_{p}(n_i^{-\frac{1}{2}})\}^{-1}\sqrt{n_i}h_{i}^{(1)}\{\varvec{\hat{\theta }},\nu _{0i};Y_{i}\}. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \sqrt{n_i}(\hat{\nu _i}-\nu _{0i})=I(\varvec{\theta },\nu _{0i})^{-1}\sqrt{n_i}[h_{i}^{(1)}\{\varvec{\theta },\nu _{0i};Y_{i}\}+B_{21i}A_{11}^{-1}\frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}})]+o_p(1). \end{aligned}$$

It follows that

$$\begin{aligned}&Var\{\sqrt{n_i}(\hat{\nu _i}-\nu _{0i})\}\\&\quad =I(\varvec{\theta },\nu _{0i})^{-1}+n_iI(\varvec{\theta },\nu _{0i})^{-1}B_{21i}A_{11}^{-1}Var\left[ \frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}})\,\Big |\, \nu _i=\nu _{0i}\right] A_{11}^{-1}B_{21i}^{\top }I(\varvec{\theta },\nu _{0i})^{-1}\\&\quad \quad -\,2n_{i}I(\varvec{\theta },\nu _{0i})^{-1}B_{21i}^{\top }A_{11}^{-1}Cov\left[ \frac{\partial }{\partial \varvec{\theta }}m(\varvec{\theta };\varvec{{Y}}),\, h_{i}^{(1)}\{\varvec{\theta },\nu _i; \varvec{{Y}}_i\} \,\Big |\, \nu _i=\nu _{0i}\right] . \end{aligned}$$

\(\square \)
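
Given \(\varvec{\theta }\), the predictor \(\hat{\nu }_{i}\) solves \(h_{i}^{(1)}\{\varvec{\theta },\nu _{i};Y_{i}\}=0\), which in practice can be done by Newton iteration. Below is a minimal sketch assuming the LPRE-type h-likelihood kernel implied by the expression for \(h_{i}^{(2)}\) in the next subsection; the function name, derivative formulas, and settings are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def nu_hat(beta, sigma2, X_i, Y_i, tol=1e-10, max_iter=50):
        """Newton iterations solving h_i^{(1)}(theta, nu_i; Y_i) = 0 for one group
        (hypothetical helper; derivatives follow the kernel of h_i^{(2)} below):
          h1 = n_i^{-1} sum_j [Y_ij e^{-eta_ij} - e^{eta_ij}/Y_ij] - nu_i/(n_i sigma2)
          h2 = n_i^{-1} sum_j [-Y_ij e^{-eta_ij} - e^{eta_ij}/Y_ij] - 1/(n_i sigma2)
        with eta_ij = X_ij' beta + nu_i."""
        n_i = len(Y_i)
        nu = 0.0
        for _ in range(max_iter):
            eta = X_i @ beta + nu
            h1 = (Y_i * np.exp(-eta) - np.exp(eta) / Y_i).sum() / n_i - nu / (n_i * sigma2)
            h2 = (-Y_i * np.exp(-eta) - np.exp(eta) / Y_i).sum() / n_i - 1.0 / (n_i * sigma2)
            step = h1 / h2
            nu -= step                  # Newton update; h2 < 0, so the step is well defined
            if abs(step) < tol:
                break
        return nu

Since \(h_{i}^{(2)}<0\) everywhere, \(h_i\) is strictly concave in \(\nu _i\) and the iteration converges to the unique root.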

Laplace approximation

Following Tierney and Kadane (1986), we have

$$\begin{aligned}&\exp \{m_i(\varvec{\theta };\varvec{{Y}}_i)\}=\int \exp [n_ih_{i}\{\varvec{\theta },\nu _{i};\varvec{{Y}}_{i}\}]d\nu _{i}\\&\qquad =\exp [n_ih_{i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]\sqrt{2\pi } \tau _{i} n_{i}^{-\frac{1}{2}}[1-C_{n_{i}}\{\varvec{\theta },\hat{\nu }_{i}\}]+O(n_{i}^{-2}), \end{aligned}$$

where

$$\begin{aligned}&\tau _{i}^2 = -[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^{-1},\\&C_{n_{i}}\{\varvec{\theta },\hat{\nu }_{i}\}=J_{1i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}/{(8n_{i})}-{5} J_{2i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}/{(24n_{i})},\\&J_{1i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}=-\,{h_{i}^{(4)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}}/{[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^2},\\&J_{2i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}=-\,{[h_{i}^{(3)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^2}/{[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^3}. \end{aligned}$$
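
As a quick numerical sanity check of the first-order approximation (dropping the \(C_{n_{i}}\) correction), the sketch below compares it with one-dimensional quadrature for a single simulated group; the integrand uses the same per-group \(h_i\) as above, and all data settings are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    n_i, sigma2, mu = 40, 0.25, 0.3          # mu stands in for X_ij' beta, held constant
    Y_i = (np.exp(mu + rng.normal(scale=sigma2 ** 0.5))
           * np.exp(rng.normal(scale=0.1, size=n_i)))

    def h(nu):
        """Per-group h-likelihood (same kernel as h_i^{(2)} above, plus the
        normal random-effect term), divided by n_i."""
        eta = mu + nu
        return (-(Y_i * np.exp(-eta) + np.exp(eta) / Y_i - 2).sum()
                - nu ** 2 / (2 * sigma2)) / n_i

    exact, _ = quad(lambda nu: np.exp(n_i * h(nu)), -5, 5)

    nu_mode = minimize_scalar(lambda nu: -h(nu), bounds=(-5, 5), method="bounded").x
    h2 = (-(Y_i * np.exp(-(mu + nu_mode)) + np.exp(mu + nu_mode) / Y_i).sum() / n_i
          - 1.0 / (n_i * sigma2))
    tau = (-h2) ** -0.5                      # tau_i^2 = -[h_i^{(2)}]^{-1}
    laplace = np.exp(n_i * h(nu_mode)) * np.sqrt(2 * np.pi) * tau / np.sqrt(n_i)

    print(exact, laplace)                    # agree closely for moderate n_i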

Under model (3), it is easy to show that

$$\begin{aligned}&h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}\\&\quad =n_i^{-1}\sum _{j=1}^{n_{i}}\left[ -Y_{ij}\exp (-X_{ij}^T\beta -\nu _{i})-\exp (X_{ij}^T\beta +\nu _{i})Y_{ij}^{-1}\right] -\frac{1}{n_{i}\sigma ^2}\\&\quad =h_{i}^{(4)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}-\frac{1}{n_{i}\sigma ^2}, \quad |h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}|>|h_{i}^{(3)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}|,\\&J_{1i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}=-\,{h_{i}^{(4)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}}/{[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^2}\\&\quad \quad =-\,[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^{-1}<1,\\&J_{2i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}=-\,{[h_{i}^{(3)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^2}/{[h_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^3}\\&\quad \quad<-\,[h_{i}^{(3)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}]^{-1}<1, \end{aligned}$$

which shows that \(J_{1i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}\) and \(J_{2i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}\) are of order \(O_{p}(1)\). Hence, we obtain

$$\begin{aligned} m(\varvec{\theta };\varvec{{Y}})&= \sum _{i=1}^K m_i(\varvec{\theta };\varvec{{Y}}_i)\nonumber \\&=\sum _{i=1}^{K}n_ih_{i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}-\frac{1}{2}\sum _{i=1}^{K}\log [-n_ih_{i}^{(2)}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}/(2\pi )] \nonumber \\&\quad +\sum _{i=1}^{K}\log [1-C_{n_{i}}\{\varvec{\theta },\hat{\nu }_{i}\}]+O_p(n^{-1})\nonumber \\&=p_{\nu }(H)+O_{p}(n^{-1}), \end{aligned}$$
(A.1)

where \(p_{\nu }(H)\) is also called the adjusted profile likelihood.
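
The following sketch shows how \(p_{\nu }(H)\) can be assembled and maximized numerically, profiling out each \(\nu _i\) and applying the adjustment term from (A.1); \(\sigma ^2\) is held fixed for brevity, and all data settings and names are illustrative assumptions rather than the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize, minimize_scalar

    rng = np.random.default_rng(0)
    K, n_i, p, sigma2 = 20, 50, 2, 0.16       # sigma^2 treated as known for brevity
    beta_true = np.array([0.5, -0.3])
    X = rng.normal(size=(K, n_i, p))
    nu0 = rng.normal(scale=sigma2 ** 0.5, size=K)
    Y = np.exp(X @ beta_true + nu0[:, None]) * np.exp(rng.normal(scale=0.1, size=(K, n_i)))

    def h(beta, nu, i):
        # Per-group h-likelihood, divided by n_i (same kernel as above).
        eta = X[i] @ beta + nu
        return (-(Y[i] * np.exp(-eta) + np.exp(eta) / Y[i] - 2).sum()
                - nu ** 2 / (2 * sigma2)) / n_i

    def p_nu_H(beta):
        """Adjusted profile likelihood, cf. (A.1): sum over groups of
        n_i h_i(beta, nu_hat_i) - 0.5 log(-n_i h_i^{(2)}(beta, nu_hat_i) / (2 pi))."""
        total = 0.0
        for i in range(K):
            nu_i = minimize_scalar(lambda nu: -h(beta, nu, i),
                                   bounds=(-3, 3), method="bounded").x
            eta = X[i] @ beta + nu_i
            h2 = (-(Y[i] * np.exp(-eta) + np.exp(eta) / Y[i]).sum() / n_i
                  - 1.0 / (n_i * sigma2))
            total += n_i * h(beta, nu_i, i) - 0.5 * np.log(-n_i * h2 / (2 * np.pi))
        return total

    beta_hat = minimize(lambda b: -p_nu_H(b), x0=np.zeros(p), method="Nelder-Mead").x
    print(beta_hat)                           # should be close to beta_true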


About this article


Cite this article

Wang, Z., Chen, Z. & Chen, Z. H-relative error estimation for multiplicative regression model with random effect. Comput Stat 33, 623–638 (2018). https://doi.org/10.1007/s00180-018-0798-7

