Interpretable AI in Scientific Applications
AI is largely based on deep neural networks (DNNs), which tend to be massively parameterized and fitted to immense datasets. In scientific applications, the data are often smaller and noisier than in the most successful AI domains, and there is a critical need for interpretability, reproducibility, accurate uncertainty quantification in inference, and the ability to reliably fit models to high-dimensional datasets with modest sample sizes. With this in mind, and motivated in particular by applications in ecology and neuroscience, this talk proposes Bayesian methods for unsupervised learning of multilayer latent structures with identifiability guarantees. The proposed methods bridge between DNNs and classical latent class and model-based clustering models, while also adding to the literature on stochastic block models for networks.
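For readers less familiar with the classical side of this bridge, the sketch below fits a basic latent class model (binary items, conditionally independent given a discrete latent class) by EM. It is purely illustrative: the simulated data and the choices of n, p, and K are assumptions for the example, and this is not the Bayesian method proposed in the talk.

    import numpy as np

    # Illustrative EM for a classical latent class model: each subject i
    # has a latent class z_i in {0..K-1}; binary items are conditionally
    # independent Bernoulli given the class. Not the speaker's method.

    rng = np.random.default_rng(0)

    # Simulate toy data: n subjects, p binary items, K true classes
    # (all sizes are arbitrary choices for this sketch).
    n, p, K = 500, 10, 3
    true_pi = np.array([0.5, 0.3, 0.2])               # class proportions
    true_theta = rng.uniform(0.1, 0.9, size=(K, p))   # item probabilities per class
    z = rng.choice(K, size=n, p=true_pi)
    X = (rng.uniform(size=(n, p)) < true_theta[z]).astype(float)

    # Initialize parameters.
    pi = np.full(K, 1.0 / K)
    theta = rng.uniform(0.3, 0.7, size=(K, p))

    for it in range(200):
        # E-step: posterior responsibility of each class for each subject.
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T  # (n, K)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)  # stabilize
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update class proportions and item probabilities.
        Nk = resp.sum(axis=0)
        pi = Nk / n
        theta = np.clip((resp.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)

    print("estimated class proportions:", np.round(np.sort(pi), 3))

A maximum-likelihood fit like this recovers the class proportions only up to label switching, which hints at why identifiability guarantees matter once such latent structures are stacked into multiple layers.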
Host: Ran Chen