Advancing Trustworthy Machine Learning: A Perspective from Calibration and Sparsity
Deep neural networks (DNNs) have demonstrated remarkable success across various applications. However, their over-parameterized nature raises concerns about computational efficiency, interpretability, and the reliability of their predictions. In this talk, I will present statistical approaches to enhance the trustworthiness of machine learning models, focusing on model calibration and sparsity.
In the first part of the talk, I will introduce a debiased calibration error estimator and derive its asymptotic distribution, enabling the construction of valid confidence intervals in settings where traditional resampling methods, such as the bootstrap, fail.
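For intuition, the sketch below contrasts a plug-in binned estimate of squared calibration error with a simple debiased variant that subtracts a within-bin variance correction. The function name, binning scheme, and correction term are illustrative assumptions; the estimator and asymptotic theory developed in the talk may differ.

```python
import numpy as np

def squared_calibration_error(conf, label, n_bins=10, debias=False):
    """Binned estimate of squared calibration error (illustrative sketch).

    conf:  predicted probabilities in [0, 1]
    label: binary outcomes in {0, 1}
    With debias=True, an unbiased estimate of the within-bin sampling
    variance is subtracted from each bin's squared error.
    """
    conf = np.asarray(conf, dtype=float)
    label = np.asarray(label, dtype=float)
    n = len(conf)
    # assign each prediction to an equal-width confidence bin
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        idx = bins == b
        n_b = idx.sum()
        if n_b < 2:
            continue
        acc = label[idx].mean()      # empirical accuracy in the bin
        avg_conf = conf[idx].mean()  # average confidence in the bin
        err2 = (avg_conf - acc) ** 2
        if debias:
            # subtract the unbiased within-bin variance estimate,
            # removing the upward bias of the plug-in estimator
            err2 -= acc * (1.0 - acc) / (n_b - 1)
        total += (n_b / n) * err2
    return total

# For a perfectly calibrated model the true error is 0: the plug-in
# estimate is biased upward, while the debiased one is close to 0.
rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
y = rng.binomial(1, p)
print(squared_calibration_error(p, y, debias=False))
print(squared_calibration_error(p, y, debias=True))
```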
The second part will explore Bayesian neural networks with sparsity-inducing priors, which achieve consistency in function estimation. These sparse models not only identify important features but also improve calibration, contributing to more interpretable and reliable predictions.
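As a concrete illustration of a sparsity-inducing prior, the snippet below evaluates the log-density of a continuous spike-and-slab Gaussian mixture on a weight vector, a standard example of such a prior. The mixture form and all hyperparameter values are assumptions chosen for illustration, not the specific prior analyzed in the talk.

```python
import numpy as np

def log_spike_and_slab_prior(w, pi=0.1, sigma_slab=1.0, sigma_spike=1e-3):
    """Log-density of a continuous spike-and-slab prior on weights w.

    A mixture of a narrow 'spike' Gaussian (shrinking weights toward 0)
    and a wide 'slab' Gaussian (allowing large weights); pi is the prior
    probability that a weight belongs to the slab component.
    """
    w = np.asarray(w, dtype=float)

    def log_normal(x, s):
        return -0.5 * (x / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

    spike = np.log(1.0 - pi) + log_normal(w, sigma_spike)
    slab = np.log(pi) + log_normal(w, sigma_slab)
    # numerically stable log-sum-exp of the two mixture components
    m = np.maximum(spike, slab)
    return np.sum(m + np.log(np.exp(spike - m) + np.exp(slab - m)))

# Weights near zero fall under the spike; large weights under the slab.
print(log_spike_and_slab_prior(np.array([0.0, 0.001, 2.5])))
```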
Together, these approaches advance trustworthy machine learning by integrating statistical rigor into the design and evaluation of machine learning models.