Modern AI and machine learning techniques increasingly depend on highly complex, hierarchical (deep) probabilistic models to reason about complex relations and make decisions in uncertain environments. This, however, places significant demands on developing efficient computational methods for highly complex probabilistic models in which exact calculation is prohibitive. In this talk, we discuss a new framework for approximate learning and inference that combines ideas from Stein's method, an advanced theoretical technique developed by the mathematical statistician Charles Stein, with practical machine learning and statistical computation techniques such as variational inference, Monte Carlo methods, optimal transport, and reproducing kernel Hilbert spaces (RKHS). Our framework provides a new foundation for probabilistic learning and reasoning, and allows us to develop a host of new algorithms for a variety of challenging learning and AI tasks that differ significantly from, and have critical advantages over, traditional methods. Example applications include computationally tractable goodness-of-fit tests for evaluating highly complex models, scalable Bayesian computation, deep generative models, and sample-efficient policy gradient methods for deep reinforcement learning.
Qiang Liu is an assistant professor of computer science at Dartmouth College, and will join UT Austin in 2018. His research interests are in statistical machine learning, Bayesian inference, deep reinforcement learning, probabilistic graphical models, and crowdsourcing. He received his Ph.D. from the University of California, Irvine, followed by a postdoc at MIT CSAIL. He is an action editor of the Journal of Machine Learning Research (JMLR) and a recipient of several awards, including a notable paper award at AISTATS and a Microsoft Ph.D. Fellowship.