Speakers

Topics in Selective Inference

Emmanuel J. Candès is the Barnum-Simons Chair in Mathematics and Statistics and Professor, by courtesy, of Electrical Engineering at Stanford University. His main research areas are compressive sensing, mathematical signal processing, statistics, computational harmonic analysis, and scientific computing. Candès is one of the pioneers of the field of compressed sensing.

Variational Inference: Foundations and Innovations

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference with massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), and a Guggenheim Fellowship. He is a fellow of the ACM and the IMS.

Nonparametric Bayesian Methods: Models, Algorithms, and Applications

Tamara Broderick is the ITT Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning, including Bayesian nonparametrics.

Tensor Decompositions for Learning Latent Variable Models

Daniel Hsu is an assistant professor in the Computer Science Department and a member of the Data Science Institute, both at Columbia University. Daniel's research interests are algorithmic statistics, machine learning, and privacy. His work has produced the first computationally efficient algorithms for several statistical estimation tasks, provided new algorithmic frameworks for solving interactive machine learning problems, and led to the creation of scalable tools for machine learning applications.

Probabilistic Programming

Frank Wood is an artificial intelligence and machine learning researcher who focuses on using programming language tools and techniques to denote and automate powerful AI/ML techniques. Dr. Wood is an associate professor of information engineering at Oxford, a Turing Fellow at the Alan Turing Institute, and a governing body fellow of Kellogg College. He holds a PhD in computer science from Brown and a BS in the same from Cornell. Dr. Wood is a successful serial entrepreneur, holds several patents, has authored over 60 papers, received the AISTATS best paper award in 2009, and has won significant support from DARPA, Intel, BP, Xerox, Google, Microsoft, and Amazon.

Optimal Transport

Marco Cuturi is a professor of statistics at ENSAE CREST, Université Paris-Saclay, France. His research interests include machine learning, optimal transport, nonparametric statistics with positive definite kernels, time series, and cointegration. In recent years, he has become a leading researcher in the field of optimal transport and its applications.

Reinforcement Learning

Finale Doshi-Velez is a Professor of Computer Science at Harvard University. She is excited about methods to turn data into actionable knowledge. Her core research in machine learning, computational statistics, and data science is inspired by, and often applied to, the objective of accelerating scientific progress and practical impact in healthcare and other domains.

Deep Reinforcement Learning

Sergey Levine is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. His research aims to develop algorithms and techniques that can endow machines with the ability to autonomously acquire the skills needed to execute complex tasks. He is interested in how learning can be used to acquire complex behavioral skills, giving machines greater autonomy and intelligence.

Gaussian Processes

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield, currently on leave at Amazon, where he is a Director of Machine Learning and founder of Amazon Research Cambridge. Neil’s research interests are in probabilistic models with applications in computational biology and personalized health. Neil leads the ML@SITraN group at Sheffield and is a contributing writer for The Guardian.

Deep Generative Models

Ryan Adams is a professor of Computer Science at Princeton University, currently on leave at Google Brain. He led the Harvard Intelligent Probabilistic Systems (HIPS) group and was a co-founder of Whetlab, a machine learning startup acquired by Twitter in June 2015. Through the years he has developed new models for data, new tools for performing inference, and new computational structures for representing knowledge and uncertainty.

Generative Adversarial Networks

David Warde-Farley is a research scientist at Google DeepMind. His primary research interest is the development of machine learning methods, particularly “deep learning” methods. He is also interested in machine learning applications in computer vision and computational molecular biology. He has made major contributions to the development of algorithms for training generative adversarial networks and to tools for large-scale optimization in deep models, such as Theano.

Fundamentals of Deep Neural Networks

Hugo Larochelle leads the Google Brain group in Montreal. His main area of expertise is deep learning. His previous work includes unsupervised pretraining with autoencoders, denoising autoencoders, visual attention-based classification, and neural autoregressive distribution models. More broadly, he is interested in applications of deep learning to generative modeling, reinforcement learning, meta-learning, natural language processing, and computer vision.

The mathematical optimization approach to machine learning

Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Amongst his contributions are the co-development of the AdaGrad optimization algorithm and the first sublinear-time algorithms for convex optimization.

Manifold Learning and Spectral Methods

David Pfau is a research scientist at Google DeepMind. He is deeply interested in the problem of artificial general intelligence. His research interests span artificial intelligence, machine learning, and computational neuroscience. Previously, David developed algorithms for analyzing and understanding high-dimensional data and time series arising from neural recordings.

Learning Deep Learning from Scratch with MXNet/Gluon

Mu Li is a principal scientist for machine learning at AWS. Before joining AWS, he was the CTO of Marianas Labs, an AI start-up. He also served as a principal research architect at the Institute of Deep Learning at Baidu. Mu’s research has focused on large-scale machine learning. At AWS, Mu leads a team that works primarily on the Apache MXNet framework. Their focus is making it easier to use deep learning and to run deep learning applications on AWS.

TensorFlow

Martín Abadi is a Principal Scientist at Google, in the Google Brain team. He is also a Professor Emeritus at the University of California at Santa Cruz, where he was a Professor in the Computer Science Department until 2013. He has held an annual Chair at the Collège de France, has taught at Stanford University and the University of California at Berkeley, and has worked at Digital’s Systems Research Center, Microsoft Research Silicon Valley, and other industrial research labs. He received his Ph.D. at Stanford University in 1987. His research is mainly on computer and network security, programming languages, and specification and verification methods. It has been recognized with the Outstanding Innovation Award of the ACM Special Interest Group on Security, Audit and Control and with the Hall of Fame Award of the ACM Special Interest Group on Operating Systems, among other awards. He is a Fellow of the Association for Computing Machinery and of the American Association for the Advancement of Science (AAAS). He holds a doctorate honoris causa from École normale supérieure Paris-Saclay.

TensorFlow

Derek Murray is a member of the Google Brain team, where he works on core development for TensorFlow. His principal research interest is distributed systems for parallel computation, with a particular emphasis on expressive programming constructs like streaming and iteration.