Fast Inference for Quantile Regression with Tens of Millions of Observations
Simon Lee, Yuan Liao, Matt Seo and Youngki Shin
Journal of Econometrics (2024)
- Abstract: While applications of big data analytics have brought many new opportunities to economic research, with datasets containing tens of millions of observations, conducting the usual econometric inference based on extremum estimators would require enormous computing power and memory that are often not accessible. In this paper, we focus on linear quantile regression applied to "ultra-large" datasets such as U.S. decennial censuses. We develop a fast inference framework based on stochastic sub-gradient descent (S-subGD) updates. The cross-sectional data are fed sequentially into the inference procedure: (i) the parameter estimate is updated as each "new observation" arrives, (ii) it is aggregated as the Polyak-Ruppert average, and (iii) a pivotal statistic for inference is computed using the solution path only. We leverage insights from time series regression and construct an asymptotically pivotal statistic via random scaling. The proposed test statistic is computed in a fully online fashion, and the critical values are obtained without any resampling methods. We conduct extensive numerical studies to showcase the computational merits of the proposed inference. For inference problems as large as $(n, d) \sim (10^7, 10^3)$, where $n$ is the sample size and $d$ is the number of regressors, our method can generate new insights beyond the computational capabilities of existing inference methods. Specifically, we uncover trends in the gender gap in the U.S. college wage premium using millions of observations, while controlling for over $10^3$ covariates to mitigate confounding effects.
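The three steps in the abstract can be sketched in a single pass over the data: a sub-gradient update per observation, a running Polyak-Ruppert average, and online accumulators for the random-scaling variance matrix $V_n = n^{-2}\sum_{s=1}^{n} s^2(\bar\beta_s - \bar\beta_n)(\bar\beta_s - \bar\beta_n)'$. The sketch below is illustrative only; the step-size schedule $\gamma_i = \gamma_0\, i^{-a}$ and the constants `gamma0` and `a` are hypothetical tuning choices, not the authors' defaults (their R implementation is in the linked SGDinference package).

```python
import numpy as np

def sgd_quantile_inference(y, X, tau=0.5, gamma0=0.5, a=0.501):
    """One-pass S-subGD for linear quantile regression with
    Polyak-Ruppert averaging and a random-scaling variance estimate.

    Illustrative sketch of the procedure described in the abstract;
    gamma0 and a are assumed tuning parameters, not the paper's defaults.
    """
    n, d = X.shape
    beta = np.zeros(d)      # current S-subGD iterate
    bar = np.zeros(d)       # Polyak-Ruppert average of the iterates
    A = np.zeros((d, d))    # accumulator: sum_s s^2 * bar_s bar_s'
    b = np.zeros(d)         # accumulator: sum_s s^2 * bar_s
    c = 0.0                 # accumulator: sum_s s^2
    for i in range(1, n + 1):
        xi, yi = X[i - 1], y[i - 1]
        # sub-gradient of the check loss: (tau - 1{y <= x'beta}) * x
        g = (tau - float(yi <= xi @ beta)) * xi
        beta = beta + gamma0 * i ** (-a) * g
        bar = bar + (beta - bar) / i          # online Polyak-Ruppert average
        A += i ** 2 * np.outer(bar, bar)
        b += i ** 2 * bar
        c += i ** 2
    # random-scaling matrix, expanded so only running sums are needed:
    # V_n = n^{-2} [A - b bar' - bar b' + c bar bar']
    V = (A - np.outer(b, bar) - np.outer(bar, b)
         + c * np.outer(bar, bar)) / n ** 2
    return bar, V
```

The coefficient-wise statistic $\sqrt{n}(\bar\beta_{n,j} - \beta_{0,j})/\sqrt{V_{n,jj}}$ is asymptotically pivotal but not standard normal, so it is compared against the nonstandard random-scaling critical values rather than Gaussian ones; nothing beyond the $O(d^2)$ accumulators is stored, which is what makes the method feasible at $n \sim 10^7$.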
- Paper: The paper
- Software: https://github.com/SGDinference-Lab/SGDinference