Graduate Student Seminar: Adaptive Iterative Hard Thresholding for Online High-dimensional Quantile Regression
Sparse online learning aims to build interpretable models from sequential data while maintaining sparsity to enhance generalization and computational efficiency. However, it remains challenging under heavy-tailed noise or heteroskedasticity, as heavy-tail-friendly loss functions such as the quantile loss often lack the smoothness and strong convexity required for optimal convergence. This work proposes an Adaptive Iterative Hard Thresholding (AIHT) algorithm for online high-dimensional quantile regression. The method alternates between blocks of standard SGD updates and hard-thresholding steps, where the block length is adaptively adjusted based on the iterate dynamics. By employing a moving-average subgradient and a suitably chosen step size, the method achieves optimal logarithmic regret and exhibits a two-phase convergence pattern, effectively addressing the non-smoothness of the quantile loss, the non-convexity induced by hard thresholding, and the instability caused by frequent thresholding. Simulations illustrate the empirical properties and convergence characteristics of AIHT.
This is joint work with Professor Nan Lin.
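The abstract does not give implementation details, so the following Python sketch shows one plausible reading of the AIHT loop: blocks of SGD steps on the quantile (pinball) loss with a moving-average subgradient, a hard-thresholding step at the end of each block, and a block length that grows when the iterates stall. The step-size schedule, averaging weight `gamma`, sparsity level `s`, and block-doubling rule are all illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def pinball_subgradient(x, y, beta, tau):
    """Subgradient of the quantile (pinball) loss rho_tau(y - x'beta) in beta."""
    r = y - x @ beta
    # rho_tau(r) = r * (tau - 1{r < 0}); differentiating through r = y - x'beta
    return -(tau - (r < 0)) * x

def hard_threshold(beta, s):
    """Keep the s largest-magnitude coordinates of beta, zero out the rest."""
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-s:]
    out[idx] = beta[idx]
    return out

def aiht_sketch(stream, p, s, tau=0.5, eta0=1.0, gamma=0.9, block0=10):
    """Minimal AIHT-style sketch under the assumptions stated above."""
    beta = np.zeros(p)
    g_avg = np.zeros(p)          # moving-average subgradient
    block_len, t = block0, 0
    prev_beta = beta.copy()
    for x, y in stream:          # stream yields (feature vector, response) pairs
        t += 1
        eta = eta0 / t           # assumed O(1/t) step size
        g = pinball_subgradient(x, y, beta, tau)
        g_avg = gamma * g_avg + (1 - gamma) * g
        beta = beta - eta * g_avg            # SGD step within the current block
        if t % block_len == 0:
            beta = hard_threshold(beta, s)   # sparsify only at block ends
            # Assumed adaptive rule: lengthen blocks once iterates stall,
            # so thresholding happens less often near the optimum.
            if np.linalg.norm(beta - prev_beta) < 1e-3:
                block_len *= 2
            prev_beta = beta.copy()
    return beta
```

Under this reading, the two-phase behavior arises naturally: early on, short blocks threshold aggressively to lock in the support, while later, longer blocks let the averaged subgradient steps refine the surviving coefficients without the instability of frequent thresholding.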