Topics in Selective Inference

Emmanuel J. Candès is the Barnum-Simons Chair in Mathematics and Statistics and Professor, by courtesy, of Electrical Engineering at Stanford University. His main research areas include compressive sensing, mathematical signal processing, statistics, computational harmonic analysis, and scientific computing. Candès is one of the pioneers of the field of compressed sensing.

Variational Inference: Foundations and Innovations

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference with massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), and a Guggenheim fellowship. He is a fellow of the ACM and the IMS.

Nonparametric Bayesian Methods: Models, Algorithms, and Applications

Tamara Broderick is the ITT Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning, including Bayesian nonparametrics.

Tensor Decompositions for Learning Latent Variable Models

Daniel Hsu is an assistant professor in the Computer Science Department and a member of the Data Science Institute, both at Columbia University. Daniel's research interests are algorithmic statistics, machine learning, and privacy. His work has produced the first computationally efficient algorithms for several statistical estimation tasks, provided new algorithmic frameworks for solving interactive machine learning problems, and led to the creation of scalable tools for machine learning applications.

Probabilistic Programming

Frank Wood is an artificial intelligence and machine learning researcher who focuses on using programming language tools and techniques to denote and automate powerful AI/ML techniques. Dr. Wood is an associate professor of information engineering at Oxford, a Turing fellow at the Alan Turing Institute, and a governing body fellow of Kellogg College. He holds a PhD in computer science from Brown and a BS in the same from Cornell. Dr. Wood is a successful serial entrepreneur, holds several patents, has authored over 60 papers, received the AISTATS best paper award in 2009, and has won significant support from DARPA, Intel, BP, Xerox, Google, Microsoft, and Amazon.

Optimal Transport

Marco Cuturi is a professor of statistics at ENSAE CREST, Université Paris-Saclay, France. His research interests include machine learning, optimal transport, nonparametric statistics with positive definite kernels, time series, and cointegration. In recent years, he has become a leading researcher in the field of optimal transport and its applications.

Reinforcement Learning

Finale Doshi-Velez is a Professor of Computer Science at Harvard University. She is excited about methods to turn data into actionable knowledge. Her core research in machine learning, computational statistics, and data science is inspired by, and often applied to, the objective of accelerating scientific progress and practical impact in healthcare and other domains.

Deep Reinforcement Learning

Sergey Levine is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. His research aims to develop algorithms and techniques that allow machines to autonomously acquire the skills needed to execute complex tasks. He is interested in how learning can be used to acquire complex behavioral skills, in order to endow machines with greater autonomy and intelligence.

Gaussian Processes for Uncertainty Quantification

Javier Gonzalez is a Senior Machine Learning Scientist at Amazon, Cambridge UK, and a Visiting Researcher at the Department of Mathematics and Statistics at Lancaster University. Javier's research focuses on uncertainty quantification. In particular, Javier is interested in how uncertainty can be characterized, quantified, and used to build robust and interpretable decision systems. Javier has made major contributions to the fields of Bayesian optimization, Gaussian processes, and computational biology. Javier is also a frequent co-organizer of the Gaussian Process Summer School series held every year at the University of Sheffield.

Deep Generative Models

Ryan Adams is a professor of Computer Science at Princeton University, currently on leave at Google Brain. He led the Harvard Intelligent Probabilistic Systems (HIPS) group and was a co-founder of Whetlab, a machine learning startup acquired by Twitter in June 2015. Through the years he has developed new models for data, new tools for performing inference, and new computational structures for representing knowledge and uncertainty.

Generative Adversarial Networks

David Warde-Farley is a research scientist at Google DeepMind. His primary research interest is the development of machine learning methods, in particular "deep learning" methods. He is also interested in machine learning applications in computer vision and computational molecular biology. He has made major contributions to the development of algorithms for training generative adversarial networks and to tools for large-scale optimization in deep models, such as Theano.

Fundamentals of Deep Neural Networks

Hugo Larochelle leads the Google Brain group in Montreal. His main area of expertise is deep learning. His previous work includes unsupervised pretraining with autoencoders, denoising autoencoders, visual attention-based classification, and neural autoregressive distribution models. More broadly, he is interested in applications of deep learning to generative modeling, reinforcement learning, meta-learning, natural language processing, and computer vision.

The Mathematical Optimization Approach to Machine Learning

Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Amongst his contributions are the co-development of the AdaGrad optimization algorithm and the first sublinear-time algorithms for convex optimization.

Manifold Learning and Spectral Methods

David Pfau is a research scientist at Google DeepMind. He is deeply interested in artificial general intelligence. His research interests span artificial intelligence, machine learning, and computational neuroscience. Previously, David developed algorithms for analyzing and understanding high-dimensional data and time series arising from neural recordings.

Learning Deep Learning from Scratch with MXNet/Gluon

Thomas Delteil is an applied scientist in the Deep Engine team at Amazon AI. He has a passion for deep learning and contributes to Apache MXNet. His interests span a broad range of topics, including text-to-speech, visual search, and NLP.


Martín Abadi is a Principal Scientist at Google, in the Google Brain team. He is also a Professor Emeritus at the University of California at Santa Cruz, where he was a Professor in the Computer Science Department until 2013. He has held an annual Chair at the Collège de France, has taught at Stanford University and the University of California at Berkeley, and has worked at Digital's System Research Center, Microsoft Research Silicon Valley, and other industrial research labs. He received his Ph.D. at Stanford University in 1987. His research is mainly on computer and network security, programming languages, and specification and verification methods. It has been recognized with the Outstanding Innovation Award of the ACM Special Interest Group on Security, Audit and Control and with the Hall of Fame Award of the ACM Special Interest Group on Operating Systems, among other awards. He is a Fellow of the Association for Computing Machinery and of the American Association for the Advancement of Science (AAAS). He holds a doctorate honoris causa from École normale supérieure Paris-Saclay.


Michael Isard is a member of the Google Brain team, where he has been working on the TensorFlow system, in particular its support for custom hardware. His early research career was in visual tracking methods for computer vision. More recently he has focused on distributed systems research. Much of his work concerns the ways in which choices in the programming model interact with constraints on lower-level systems design and implementation.