Institutional Repository

Some variable selection and regularization methodological approaches in quantile regression with applications

Show simple item record

dc.contributor.advisor Ranganai, E.
dc.contributor.author Mudhombo, Innocent
dc.date.accessioned 2023-08-16T08:26:24Z
dc.date.available 2023-08-16T08:26:24Z
dc.date.issued 2022
dc.identifier.uri https://hdl.handle.net/10500/30398
dc.description.abstract The importance of robust variable selection and regularization as solutions to the adverse effects of collinearity-influential high leverage points in a quantile regression (QR) setting cannot be overemphasized, and neither can the diagnostic tools that identify these high leverage points. In the literature, variable selection and regularization have been dealt with quite extensively for penalized QR, which generalizes the well-known least absolute deviation (LAD) procedure to all quantile levels. Unlike least squares (LS) procedures, which are unreliable when deviations from the Gaussian assumptions (outliers) exist, the QR procedure is robust to Y-space outliers. Although QR is robust to response variable outliers, it is vulnerable to predictor space data aberrations (high leverage points and the adverse effects of collinearity), which may alter the eigen-structure of the predictor matrix. It is therefore recommended in the literature that the problems of collinearity and high leverage points be dealt with simultaneously. In this thesis, we propose applying the ridge regression (RIDGE), LASSO, elastic net (E-NET), adaptive LASSO and adaptive elastic net (AE-NET) penalties to weighted QR (WQR) to mitigate the effects of collinearity and collinearity-influential points in the QR setting. The new procedures are the penalized WQR procedures, i.e., the RIDGE penalized WQR (WQR-RIDGE), the LASSO penalized WQR (WQR-LASSO) and the E-NET penalized WQR (WQR-E-NET) procedures, and the adaptive penalized QR procedures, i.e., the adaptive LASSO penalized QR (QR-ALASSO) and adaptive E-NET penalized QR (QR-AE-NET) procedures and their weighted versions. The penalized WQR procedures are based on weights determined via the computationally intensive high-breakdown minimum covariance determinant (MCD) estimator, while the adaptive penalized QR procedures are based on adaptive weights derived from the RIDGE penalized WQR (WQR-RIDGE) estimator.
Under regularity conditions, the adaptive penalized procedures satisfy the oracle properties. Although adaptive weights are commonly based on the ridge regression (RR) estimator in the LS setting when regressors are collinear, this estimator may be plausible for symmetrical distributions at the ℓ1-estimator (RQ at τ = 0.50) rather than at extreme quantile levels. We carried out simulations and applications to well-known data sets from the literature to assess the finite-sample variable selection and regularization performance of these procedures under the robust weighting formulation and the adaptive weight construction. In the collinearity-enhancing point scenario under the t-distribution, the penalized WQR versions outperformed the unweighted procedures with respect to average shrunken zero coefficients and correctly fitted models. Under the Gaussian and t-distributions, at predictor matrices with collinearity-reducing points, the weighted regularized procedures dominate in prediction performance (WQR-LASSO performs best). In the collinearity-inducing and collinearity-reducing point scenarios under the Gaussian distribution, the adaptive penalized procedures outperformed the non-adaptive versions in prediction. Under the t-distribution, a similar performance pattern is depicted as in the Gaussian scenario, although the performance of all models is adversely affected by outliers. Under the t-distribution, the QR-ALASSO and WQR-ALASSO procedures performed better in their respective categories. en
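The LASSO-penalized QR procedure described in the abstract can be sketched as a linear program: the check (pinball) loss and the ℓ1 penalty are both piecewise linear, so the estimator is obtainable with a standard LP solver. The sketch below is a minimal illustration of that formulation only, not the thesis's actual implementation; the function name `penalized_qr` and all parameter choices are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def penalized_qr(X, y, tau=0.5, lam=0.0):
    """LASSO-penalized quantile regression via linear programming.

    Minimizes  sum_i rho_tau(y_i - b0 - x_i' b) + lam * ||b||_1,
    where rho_tau(r) = r * (tau - 1{r < 0}) is the check loss.
    The intercept b0 is left unpenalized. Returns [b0, b1, ..., bp].
    """
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])   # design matrix with intercept
    q = p + 1
    # LP variables: [b+ (q), b- (q), u (n), v (n)], all >= 0,
    # with beta = b+ - b- and residual r = u - v.
    pen = np.r_[0.0, np.full(p, lam)]       # no penalty on the intercept
    c = np.r_[pen, pen, np.full(n, tau), np.full(n, 1.0 - tau)]
    A_eq = np.hstack([Xd, -Xd, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:q] - z[q:2 * q]
```

A weighted version (WQR) of this sketch would multiply each observation's contribution to the loss by a robust weight w_i, e.g. one derived from MCD-based distances; with lam = 0 and tau = 0.50 the program reduces to ordinary LAD regression.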
dc.format.extent 1 online resource (xix, 155 leaves) : graphs (chiefly color)
dc.language.iso en en
dc.subject Weighted quantile regression en
dc.subject Adaptive LASSO penalty en
dc.subject Penalty en
dc.subject Adaptive E-NET penalty en
dc.subject Regularization en
dc.subject Penalization en
dc.subject Collinearity inducing point en
dc.subject Collinearity hiding point en
dc.subject Collinearity influential points en
dc.subject.ddc 519.536
dc.subject.lcsh Quantile regression en
dc.subject.other UCTD
dc.title Some variable selection and regularization methodological approaches in quantile regression with applications en
dc.type Thesis en
dc.description.department Statistics en
dc.description.degree Ph. D. (Statistics)

