Abstract
Relative error approaches, as opposed to absolute error criteria such as least squares and least absolute deviation, are of particular interest when scale invariance with respect to the output variable is required, for example in the analysis of stock and survival data. A relative error estimation procedure based on the h-likelihood is developed for a multiplicative regression model with a random effect, avoiding heavy and intractable integration. Statistical properties of the parameters and the random effect in the model are studied. Numerical studies, including simulations and real examples, show that the proposed estimation procedure performs well.
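As a point of orientation only, the following is a minimal sketch (ours, not the paper's h-likelihood procedure) of fitting the fixed-effects multiplicative model \(Y_i=\exp (X_i^{\top }\beta )\,\varepsilon _i\) by the least product relative error (LPRE) criterion of Chen et al. (2016). The random effect and the h-likelihood adjustments developed in the present paper are deliberately omitted, and all function names are illustrative.

```python
# A hedged sketch: LPRE estimation for a fixed-effects multiplicative model,
# Y_i = exp(X_i' beta) * eps_i, following Chen et al. (2016).  This is NOT the
# paper's h-likelihood procedure with random effects.
import numpy as np
from scipy.optimize import minimize

def lpre_loss(beta, X, y):
    """LPRE criterion: sum_i { Y_i exp(-X_i'beta) + exp(X_i'beta)/Y_i - 2 }."""
    eta = X @ beta                                  # linear predictor
    return np.sum(y * np.exp(-eta) + np.exp(eta) / y - 2.0)

def fit_lpre(X, y):
    """Minimise the LPRE criterion; y must be strictly positive."""
    beta0 = np.zeros(X.shape[1])
    res = minimize(lpre_loss, beta0, args=(X, y), method="BFGS")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 500, 3
    X = rng.normal(size=(n, p))
    beta_true = np.array([0.5, -1.0, 0.3])
    eps = np.exp(rng.normal(scale=0.3, size=n))     # positive multiplicative error
    y = np.exp(X @ beta_true) * eps
    print("LPRE estimate:", fit_lpre(X, y))
```

The LPRE criterion \(\sum _i \{Y_i e^{-X_i^{\top }\beta } + Y_i^{-1} e^{X_i^{\top }\beta } - 2\}\) is convex in \(\beta \), which is why a generic quasi-Newton routine suffices in this sketch.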
References
Belenky G, Wesensten NJ, Thorne DR, Thomas ML, Sing HC, Redmond DP, Russo MB, Balkin TJ (2003) Patterns of performance degradation and restoration during sleep restriction and subsequent recovery: a sleep dose-response study. J Sleep Res 12(1):1–12
Chen K, Guo S, Lin Y, Ying Z (2010) Least absolute relative error estimation. J Am Stat Assoc 105(491):1104–1112
Chen K, Lin Y, Wang Z, Ying Z (2016) Least product relative error estimation. J Multivar Anal 144:91–98
Cox DR, Reid N (1987) Parameter orthogonality and approximate conditional inference. J R Stat Soc Ser B (Methodol) 49(1):1–39
Cox DR (1957) The use of a concomitant variable in selecting an experimental design. Biometrika 44(1/2):150–158
Crouch EA, Spiegelman D (1990) The evaluation of integrals of the form \(\int _{-\infty }^{+\infty }f(t)\exp (-t^2)\,dt\): application to logistic-normal models. J Am Stat Assoc 85(410):464–469
Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B (Methodol) 39(1):1–38
Ha ID, Lee Y, Song J-K (2002) Hierarchical-likelihood approach for mixed linear models with censored data. Lifetime Data Anal 8(2):163–176
Karim MR, Zeger SL (1992) Generalized linear models with random effects; salamander mating revisited. Biometrics 48:631–644
Khoshgoftaar TM, Bhattacharyya BB, Richardson GD (1992) Predicting software errors, during development, using nonlinear regression models: a comparative study. IEEE Trans Reliab 41(3):390–395
Klein JP, Lee SC, Moeschberger ML (1990) A partially parametric estimator of survival in the presence of randomly censored data. Biometrics 46(3):795–811
Lee Y, Nelder JA (1996) Hierarchical generalized linear models. J R Stat Soc Ser B (Methodol) 58:619–678
Lee Y, Nelder JA (2001) Hierarchical generalised linear models: a synthesis of generalised linear models, random-effect models and structured dispersions. Biometrika 88(4):987–1006
Lee Y, Nelder JA (2005) Likelihood for random-effect models. Stat Oper Res Trans 29(2):141–182
Liu X, Lin Y, Wang Z (2016) Group variable selection for relative error regression. J Stat Plan Inference 175:40–50
Makridakis SG (1985) The forecasting accuracy of major time series methods. J R Stat Soc Ser D (The Statistician) 34(2):261–262
Narula SC, Wellington JF (1977) Prediction, linear regression and the minimum sum of relative errors. Technometrics 19(2):185–190
Paik MC, Lee Y, Ha ID (2015) Frequentist inference on random effects based on summarizability. Stat Sin 25:1107–1132
Park H, Stefanski L (1998) Relative-error prediction. Stat Probab Lett 40(3):227–236
Patterson HD, Thompson R (1971) Recovery of inter-block information when block sizes are unequal. Biometrika 58(3):545–554
Portnoy S, Koenker R et al (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Stat Sci 12(4):279–300
Rao JNK (2003) Small area estimation. Wiley, New York
Robinson GK (1991) That BLUP is a good thing: the estimation of random effects. Stat Sci 6(1):15–32
Stigler SM (1981) Gauss and the invention of least squares. Ann Stat 9(3):465–474
Tierney L, Kadane JB (1986) Accurate approximations for posterior moments and marginal densities. J Am Stat Assoc 81(393):82–86
Vaida F, Meng X (2004) Mixed linear models and the EM algorithm. In: Applied Bayesian modeling and causal inference from incomplete-data perspectives. Wiley, New York
Wang Z, Chen Z, Wu Y (2017) A relative error estimation approach for single index model. J Syst Sci Complex 30:1160–1172
Wang Z, Liu W, Lin Y (2015) A change-point problem in relative error-based regression. TEST 24(4):835–856
Ye J (2007) Price models and the value relevance of accounting information. SSRN 1003067
Zhang Q, Wang Q (2013) Local least absolute relative error estimating approach for partially linear multiplicative model. Stat Sin 23(3):1091–1116
Acknowledgements
The authors are grateful to the Editor, the Associate Editor, and the anonymous referees for comments and suggestions that led to improvements in the paper. This research is partially supported by the State Key Program of the National Natural Science Foundation of China (No. 11231010) and the National Natural Science Foundation of China (No. 11471302).
Appendix: Proofs of the main results
To prove the theorems, we need the following conditions:
- \(A_{1}\): For each \(i\in \{1,\ldots ,K\}\), \(n_{i}/\sum _{j=1}^K n_j \rightarrow \lambda _i>0\), as \(n_{j} \rightarrow \infty \), \(j=1,\ldots ,K\).
- \(A_{2}\): \(\Vert X_{ij}\Vert <\infty \) for \(j=1,\ldots ,n_i\), \(i=1,\ldots ,K\).
- \(A_{3}\): As \(n_i \rightarrow \infty \), \(K/n_i = O(1)\).
Proof of Theorem 1
As in Paik et al. (2015), \(\hat{\nu }_{i}\) has a limiting normal distribution; here we focus on deriving its asymptotic variance. Let \((\hat{\varvec{\theta }}, \hat{\nu })\) be a solution of
where \(W\{\varvec{\theta },\nu ;\varvec{{Y}}\}=(h_{1}^{(1)}\{\varvec{\theta },\nu _{1};Y_{1}\},\ldots , h_{K}^{(1)}\{\varvec{\theta },\nu _{K};Y_{K}\})^\top \) and \(\nu =(\nu _1,\ldots , \nu _K)^\top \). By Taylor expansion, we get
where \(A_{11}=E \{ -\frac{\partial ^2}{\partial \varvec{\theta } \partial \varvec{\theta }^{\top }}m(\varvec{\theta };\varvec{{Y}}_i) \}\) and \(B_{21i} = E \{ \frac{\partial }{\partial \varvec{\theta }^{\top }}h_{i}^{(1)} \{ \varvec{\theta }, \nu _{i};\varvec{{Y}}_{i} \} | \nu _{i} = \nu _{0i} \}\). Thus, we obtain
Under A3, we can write
and
Therefore, we have
It follows that
\(\square \)
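As a schematic supplement to the proof, using the quantities \(A_{11}\) and \(B_{21i}\) defined above and introducing here (as our own shorthand, not the paper's notation) \(h_{i}^{(2)}\{\varvec{\theta },\nu _{i};\varvec{{Y}}_{i}\}=\partial h_{i}^{(1)}\{\varvec{\theta },\nu _{i};\varvec{{Y}}_{i}\}/\partial \nu _{i}\), the first-order expansion underlying this type of argument can be sketched as
\[
0 = h_{i}^{(1)}\{\hat{\varvec{\theta }},\hat{\nu }_{i};\varvec{{Y}}_{i}\}
\approx h_{i}^{(1)}\{\varvec{\theta }_{0},\nu _{0i};\varvec{{Y}}_{i}\}
+ B_{21i}(\hat{\varvec{\theta }}-\varvec{\theta }_{0})
+ h_{i}^{(2)}\{\varvec{\theta }_{0},\nu _{0i};\varvec{{Y}}_{i}\}(\hat{\nu }_{i}-\nu _{0i}),
\]
so that
\[
\hat{\nu }_{i}-\nu _{0i} \approx
-\bigl[h_{i}^{(2)}\{\varvec{\theta }_{0},\nu _{0i};\varvec{{Y}}_{i}\}\bigr]^{-1}
\bigl[h_{i}^{(1)}\{\varvec{\theta }_{0},\nu _{0i};\varvec{{Y}}_{i}\}+B_{21i}(\hat{\varvec{\theta }}-\varvec{\theta }_{0})\bigr],
\]
and the asymptotic variance of \(\hat{\nu }_{i}\) then follows by combining this display with the \(A_{11}^{-1}\)-type asymptotic variance of \(\hat{\varvec{\theta }}\).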
Laplace approximation
Following Tierney and Kadane (1986), we can show that
where
Under model (3), it is easy to show that
which shows that \(J_{1i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}\) and \(J_{2i}\{\varvec{\theta },\hat{\nu }_{i};\varvec{{Y}}_{i}\}\) are of order \(O_{p}(1)\). Hence, we obtain
where \(p_{\nu }(H)\) is also called the adjusted profile likelihood.
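For reference, in the h-likelihood framework of Lee and Nelder (1996, 2001) the adjusted profile likelihood produced by such a Laplace approximation has the generic form (written here schematically, with \(H\) denoting the h-likelihood under model (3)):
\[
p_{\nu }(H)=\left[ H-\frac{1}{2}\log \det \left\{ \frac{1}{2\pi }\left( -\frac{\partial ^{2}H}{\partial \nu \,\partial \nu ^{\top }}\right) \right\} \right] \Bigg |_{\nu =\hat{\nu }},
\]
where \(\hat{\nu }\) solves \(\partial H/\partial \nu =0\); in the h-likelihood literature this is the quantity typically maximized for the fixed parameters once the random effects have been profiled out.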