Monday 06/18
Morning 8:20-9:00 | Registration
Morning 9:00-9:30 | Opening
Morning 9:30-11:00 | Candes - 1
Coffee Break
Afternoon 11:30-12:45 | Candes - 2
Afternoon 2:00-3:30 | Candes - 3
Coffee Break
Afternoon 4:00-5:30 | Adams - 1
Night 5:45-7:00 | Adams - 2

| Tuesday 06/19 | Wednesday 06/20 | Thursday 06/21
Morning 9:00-10:30 | Adams - 3 | Broderick - 2 | Doshi-Velez - 1
Coffee Break
Morning 11:00-12:30 | Gonzalez - 1 | Broderick - 3 | Doshi-Velez - 2
Afternoon 2:00-3:30 | Gonzalez - 2 | Cuturi - 1 | Doshi-Velez - 3
Coffee Break
Afternoon 4:00-5:00 | Gonzalez - 3 | Cuturi - 2 | Break
Night 5:00-6:00 | Broderick - 1 | Cuturi - 3 | Levine - 1
Night 6:00-6:30 | A - 9 | Social Event (Transportation to the venue) | Poster Session

Friday 06/22
Morning 9:00-10:30 | Levine - 2
Coffee Break
Morning 11:00-12:30 | Levine - 3
Afternoon 2:00-3:00 | A - 1
Coffee Break
Afternoon 3:30-4:15 | A - 2
Afternoon 4:15-5:00 | A - 3
Afternoon 5:00-5:45 | A - 4
Night 5:45-6:00 | A - 5
Night 6:00-7:00 | Poster Session

Saturday 06/23
Morning 9:00-9:30 | A - 6
Morning 9:30-10:00 | A - 7
Morning 10:00-10:30 | A - 8
Coffee Break
Morning 11:00-11:30 | TBA. Sadosky Foundation.
Noon 11:30-1:00 PM | Delteil - 1

| Monday 06/25 | Tuesday 06/26 | Wednesday 06/27 | Thursday 06/28 | Friday 06/29 | Saturday 06/30
Morning 9:00-10:30 | Larochelle - 1 | Blei - 1 | Blei - 3 | Warde-Farley - 1 | Warde-Farley - 3 | Hsu - 3
Coffee Break (all days)
Morning 11:00-12:30 | Larochelle - 2 | Blei - 2 | Wood - 3 | Warde-Farley - 2 | Hazan - 2 | Pfau - 2
Afternoon 2:00-3:30 | Larochelle - 3 | Wood - 1 | Abadi - Isard | Hazan - 1 | Hazan - 3 | Closing 12:30-1:00
Coffee Break (all days)
Afternoon 4:00-5:30 | Delteil - 2 | Break | Abadi - Isard | Pfau - 1 | Hsu - 1 |
Night 5:45-7:00 | Delteil - 3 | Wood - 2 | Diversity | Hsu - 2 | |
Night 7:00-8:00 | Social Event

Sergey Levine

Deep Reinforcement Learning


Abstract:

I will discuss reinforcement learning and control algorithms that combine high-dimensional parametric models, such as deep neural networks, with decision making and control. In particular, the lectures will cover policy gradient, value function-based, and actor-critic algorithms for reinforcement learning with function approximation, model-based reinforcement learning, and a number of advanced topics, which may include: the connection between control and probabilistic inference, inverse reinforcement learning, transfer, multi-task, and meta-learning for control, and applications in areas such as robotics.

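As a toy illustration of the value-function-based methods mentioned above, here is tabular value iteration on a tiny two-state MDP. The MDP, its states, and its rewards are invented for this sketch; the lectures cover the function-approximation versions of these ideas.

```python
# Tabular value iteration on a toy two-state MDP (invented for illustration).
# transitions[s][a] = (next_state, reward); gamma-discounted returns.
gamma = 0.9
transitions = {
    0: {"stay": (0, 0.0), "go": (1, 1.0)},
    1: {"stay": (1, 2.0), "go": (0, 0.0)},
}

V = {0: 0.0, 1: 0.0}
for _ in range(300):  # repeated Bellman backups converge geometrically
    V = {s: max(r + gamma * V[s2] for (s2, r) in acts.values())
         for s, acts in transitions.items()}

# From state 1, "stay" forever: V(1) = 2/(1-gamma) = 20.
# From state 0, "go" then stay:  V(0) = 1 + gamma*20 = 19.
print(V)  # close to {0: 19.0, 1: 20.0}
```

The dictionary comprehension rebinds `V` only after reading the old values, so each pass is one synchronous Bellman backup.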

Slides

Daniel Hsu

Learning latent variable models using tensor decompositions


This tutorial surveys algorithms for learning latent variable models based on the method-of-moments, focusing on algorithms
based on low-rank decompositions of higher-order tensors. The target audiences of the
tutorial include (i) users of latent variable models in applications, and (ii) researchers
developing techniques for learning latent variable models. The only prior knowledge expected
of the audience is a familiarity with simple latent variable models (e.g., mixtures of
Gaussians), and rudimentary linear algebra and probability. The audience will learn about
new algorithms for learning latent variable models, techniques for developing new learning
algorithms based on spectral decompositions, and analytical techniques for understanding
the aforementioned models and algorithms. Advanced topics such as learning overcomplete
representations may also be discussed.
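As a minimal illustration of the decomposition machinery above, the following sketch runs tensor power iteration on a hand-built symmetric tensor with two orthonormal components. The tensor and its weights are invented; in practice the tensor would be estimated from data moments.

```python
# Tensor power iteration on T = 2*(a outer a outer a) + (b outer b outer b)
# with orthonormal components a, b (a hand-built toy example).
a, b = (1.0, 0.0), (0.0, 1.0)
T = [[[2 * a[i] * a[j] * a[k] + b[i] * b[j] * b[k]
       for k in range(2)] for j in range(2)] for i in range(2)]

v = (0.6, 0.8)  # arbitrary unit-norm starting point
for _ in range(25):
    # Power map: u_i = sum_{j,k} T[i][j][k] v_j v_k, then renormalize.
    u = [sum(T[i][j][k] * v[j] * v[k] for j in range(2) for k in range(2))
         for i in range(2)]
    norm = sum(x * x for x in u) ** 0.5
    v = tuple(x / norm for x in u)

print(v)  # converges to the component with the largest weight, a = (1, 0)
```

Convergence is extremely fast here because the iteration roughly squares the ratio between the two component overlaps at every step.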

Slides

Martin Abadi, Michael Isard

TensorFlow


TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. Its computational
model is based on dataflow graphs with mutable state. Graph nodes may be mapped to different
machines in a cluster, and within each machine to CPUs, GPUs, and custom ASICs. TensorFlow
supports a variety of applications, but it particularly targets training and inference
with deep neural networks. It serves as a platform for research and for deploying machine
learning systems across many areas, such as speech recognition, computer vision, robotics,
information retrieval, and natural language processing. In these lectures, we will describe
TensorFlow's programming models, some aspects of its implementation, and some of the
underlying theory. TensorFlow is joint work with many other people in the Google Brain
team and elsewhere. More information is available at tensorflow.org.

Some of the material is based on the following papers: https://dl.acm.org/citation.cfm?id=3026899, https://dl.acm.org/citation.cfm?id=3088527, and https://dl.acm.org/citation.cfm?id=3190551

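The dataflow-graph idea can be sketched in a few lines of plain Python. This toy interpreter is our own illustration of the concept, not TensorFlow's actual API.

```python
# A toy dataflow graph: each node names an operation and the nodes it
# consumes; running the graph evaluates nodes in dependency order and
# caches each node's output so it is computed only once.
graph = {
    "x":   (lambda: 3.0, []),
    "y":   (lambda: 4.0, []),
    "mul": (lambda a, b: a * b, ["x", "y"]),
    "add": (lambda m, b: m + b, ["mul", "y"]),
}

def run(node, cache=None):
    if cache is None:
        cache = {}
    if node not in cache:
        op, deps = graph[node]
        cache[node] = op(*(run(d, cache) for d in deps))
    return cache[node]

result = run("add")
print(result)  # 3*4 + 4 = 16.0
```

In a real system the same separation between graph construction and graph execution is what allows nodes to be placed on different devices or machines.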

Elad Hazan

Optimization for Machine Learning


In this tutorial we'll survey the optimization viewpoint on learning. We will cover optimization-based learning frameworks,
such as online learning and online convex optimization. These will lead us to describe
some of the most commonly used algorithms for training machine learning models.
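As a small illustration of the online-learning viewpoint, here is online gradient descent on a stream of quadratic losses. The losses and the step-size schedule are chosen for the example; with this particular schedule the iterate is exactly the running mean of the stream.

```python
# Online gradient descent on a stream of losses f_t(x) = (x - z_t)^2,
# revealed one at a time. With step size eta_t = 1/(2t), the update
# x <- x - (x - z_t)/t makes x_t the running mean of z_1..z_t.
stream = [1.0, 3.0, 2.0, 2.0] * 50  # stream mean is 2.0
x = 0.0
for t, z in enumerate(stream, start=1):
    grad = 2 * (x - z)        # gradient of the loss just revealed
    x -= grad / (2 * t)       # eta_t = 1/(2t)

print(round(x, 6))  # the running mean of the stream, 2.0
```

The standard regret analyses covered in such courses use step sizes like 1/sqrt(t) for general convex losses; the 1/(2t) schedule here exploits strong convexity.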

Slides

Marco Cuturi

A Primer on Optimal Transport


Optimal transport (OT) provides a powerful and flexible way to compare probability measures, discrete or continuous, which
therefore includes point clouds, histograms, datasets, and parametric and generative models.
Originally proposed in the eighteenth century, this theory later led to Nobel Prizes
for Koopmans and Kantorovich as well as Villani's Fields Medal in 2010. OT has recently
reached the machine learning community, because it can tackle challenging learning scenarios
including dimensionality reduction, structured prediction problems that involve histogram
outputs, and estimation of generative models such as GANs in highly degenerate, high-dimensional
problems. Despite very recent successes bringing OT from theory to practice, OT remains
challenging for the machine learning community because of its mathematical formality.
This tutorial will introduce in an approachable way crucial theoretical, computational,
algorithmic and practical aspects of OT needed for machine learning applications.
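One computational workhorse in this area is entropy-regularized OT solved by Sinkhorn iterations. The sketch below runs them on a tiny hand-made example; the histograms, cost matrix, and regularization strength are invented for illustration.

```python
import math

# Entropy-regularized OT between two 2-bin histograms via Sinkhorn scaling.
r = [0.5, 0.5]          # source histogram
c = [0.7, 0.3]          # target histogram
C = [[0.0, 1.0],
     [1.0, 0.0]]        # ground cost between bins
eps = 0.1               # entropic regularization strength
K = [[math.exp(-C[i][j] / eps) for j in range(2)] for i in range(2)]

u, v = [1.0, 1.0], [1.0, 1.0]
for _ in range(500):    # alternate scalings to match both marginals
    u = [r[i] / sum(K[i][j] * v[j] for j in range(2)) for i in range(2)]
    v = [c[j] / sum(K[i][j] * u[i] for i in range(2)) for j in range(2)]

# Transport plan and its marginals.
P = [[u[i] * K[i][j] * v[j] for j in range(2)] for i in range(2)]
row_sums = [sum(P[i]) for i in range(2)]
col_sums = [sum(P[i][j] for i in range(2)) for j in range(2)]
print(row_sums, col_sums)  # approximately r and c
```

Each iteration only rescales rows and columns of the kernel matrix K, which is why the method scales to large problems and differentiates nicely.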

Slides

Javier Gonzalez Hernandez

Gaussian processes for Uncertainty Quantification


In these three lectures we will cover different theoretical and practical aspects of Gaussian processes (GPs) and how they
can be used for making decisions under uncertainty. The first lecture will be an introduction
to GPs. We will review the basic concepts of GPs, explore some connections with other
related techniques and show how to use them in practice. The second lecture will present
different ways in which GPs can be used for decision making. We will focus on cases in
which taking into account uncertainty coming from the predictions of the GPs is a key
element. In particular, we will describe how GPs can be used for optimization, quadrature
and experimental design. In the third lecture students will take part in a hands-on lab,
using different Python libraries to see these methods at work. There are no prerequisites
for the lectures, but a general background in machine learning is recommended. For the
practical sessions, a laptop with some version of Python (preferably Conda) is desirable.
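As a minimal sketch of GP regression, the following hand-rolls the posterior mean and variance at one test point with an RBF kernel and two training points. All numbers are invented, and the 2x2 inverse is done in closed form; in practice one would use a library, as in the hands-on lab.

```python
import math

# GP regression posterior at a test point, with an RBF kernel and
# two training observations (a hand-rolled toy; no library needed).
def k(x1, x2, ell=1.0):
    return math.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

X, y = [0.0, 1.0], [0.0, 1.0]   # training inputs and targets
noise = 1e-6
# Invert the 2x2 matrix K + noise*I in closed form.
a, b = k(X[0], X[0]) + noise, k(X[0], X[1])
c, d = k(X[1], X[0]), k(X[1], X[1]) + noise
det = a * d - b * c
Kinv = [[d / det, -b / det], [-c / det, a / det]]

xs = 0.5                          # test input
ks = [k(xs, X[0]), k(xs, X[1])]   # cross-covariances to training points
alpha = [sum(Kinv[i][j] * y[j] for j in range(2)) for i in range(2)]
mean = sum(ks[i] * alpha[i] for i in range(2))
var = k(xs, xs) - sum(ks[i] * sum(Kinv[i][j] * ks[j] for j in range(2))
                      for i in range(2))
print(round(mean, 4), round(var, 4))  # mean near the data, small variance
```

The predictive variance, not just the mean, is what the decision-making lectures (optimization, quadrature, experimental design) build on.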

Slides

Ryan Adams

A Tutorial on Deep Probabilistic Models


I will give a tutorial on the interface between probabilistic modeling and deep neural networks. The three primary topics
of interest will be Bayesian neural networks, Boltzmann machines, and neural density
models such as variational autoencoders. I will provide an introduction to inference
and learning in these models, and give an overview that connects modern approaches to
the long history of probabilistic modeling with neural function approximation.

Slides

Tamara Broderick

Nonparametric Bayesian Methods: Models, Algorithms, and Applications


This tutorial introduces Bayesian nonparametrics (BNP) as a tool for modern data science and machine learning. BNP methods
are useful in a variety of data analyses---including density estimation without parametric
assumptions and clustering models that adaptively determine the number of clusters. We
will demonstrate that BNP allows the data analyst to learn more from a data set as the
size of the data set grows and see how this feat is accomplished. We will study popular
BNP models such as the Dirichlet process, Chinese restaurant process, Indian buffet process,
and hierarchical BNP models---and how they relate to each other.
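The Chinese restaurant process mentioned above can be simulated in a few lines; this is a sketch with invented parameters, seeded for reproducibility.

```python
import random

# Chinese restaurant process: customer i joins an existing table with
# probability proportional to its size, or opens a new table with
# probability proportional to alpha.
def crp(n, alpha, rng):
    tables = []                      # tables[k] = number of customers at k
    for _ in range(n):
        weights = tables + [alpha]   # existing table sizes, plus new table
        u = rng.random() * sum(weights)
        k, acc = 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if u < acc:
                break
        if k == len(tables):
            tables.append(1)         # open a new table
        else:
            tables[k] += 1
    return tables

rng = random.Random(0)
tables = crp(100, alpha=1.0, rng=rng)
print(len(tables), sum(tables))  # cluster count grows roughly like alpha*log(n)
```

This is exactly the "adaptive number of clusters" behavior the abstract describes: the partition is random, and its size grows with the data.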

Slides

David Blei

Variational Inference: Foundations and Innovations


One of the core problems of modern statistics and machine learning is to approximate difficult-to-compute probability distributions.
This problem is especially important in probabilistic modeling and Bayesian statistics,
which frame all inference about unknown quantities as calculations about conditional
distributions. In this tutorial I will review and discuss variational inference (VI),
a method that approximates probability distributions through optimization. VI has been
used in myriad applications in machine learning and tends to be faster than more traditional
methods, such as Markov chain Monte Carlo sampling. I will first review the basics of
variational inference. Then I will describe some of the pivotal tools for VI that have
been developed in the last few years: Monte Carlo gradients, black box variational inference,
stochastic variational inference, and variational autoencoders. Last, I will discuss
some of the unsolved problems in VI and point to promising research directions.
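A useful sanity check behind VI: for a conjugate model where the exact posterior is known, the ELBO evaluated at that posterior equals the log evidence. The sketch below verifies this numerically on an invented one-dimensional Gaussian model using simple grid integration.

```python
import math

# Model: z ~ N(0,1), x | z ~ N(z,1), observed x = 1.0.
# Exact posterior: N(x/2, 1/2); exact evidence: N(x; 0, 2).
x = 1.0

def log_n(v, m, s2):
    return -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)

log_evidence = log_n(x, 0.0, 2.0)

# ELBO(q) = E_q[log p(x,z) - log q(z)], integrated on a fine grid.
h, elbo, z = 0.001, 0.0, -6.0
while z < 7.0:
    q = math.exp(log_n(z, x / 2, 0.5))
    log_joint = log_n(z, 0.0, 1.0) + log_n(x, z, 1.0)
    elbo += q * (log_joint - log_n(z, x / 2, 0.5)) * h
    z += h

print(round(elbo, 4), round(log_evidence, 4))  # the two agree
```

With any other q the same integral would fall short of the log evidence by exactly KL(q || posterior), which is the identity variational inference optimizes.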

Slides

Hugo Larochelle

Deep Neural Networks


In this lecture, I'll start by covering the basic concepts behind feedforward neural networks. I'll present forward propagation
and backpropagation in neural networks. Specifically, I'll discuss the parameterization
of feedforward nets, the most common types of units, the capacity of neural networks
and how to compute the gradients of the training loss for classification with neural
networks. I'll discuss the training of neural networks by gradient descent and then discuss
the more recent ideas that are now commonly used for training deep neural networks. Then,
I'll discuss various types of neural network architectures, designed to address a variety
of learning problems (supervised, multi-task, one-shot, zero-shot, meta-learning). Finally,
I'll end with a discussion of some of the intriguing properties of neural networks that
are the object of a lot of research today.
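Backpropagation, as described above, can be checked against finite differences on a tiny network; this sketch uses hand-picked weights and a single example.

```python
import math

# A one-hidden-unit network: h = tanh(w1*x), p = sigmoid(w2*h),
# with cross-entropy loss. Backprop gradients are compared to a
# central finite difference.
def forward(w1, w2, x, y):
    h = math.tanh(w1 * x)
    p = 1 / (1 + math.exp(-w2 * h))
    loss = -math.log(p) if y == 1 else -math.log(1 - p)
    return h, p, loss

w1, w2, x, y = 0.5, -0.3, 2.0, 1
h, p, loss = forward(w1, w2, x, y)

# Backward pass: chain rule through cross-entropy, sigmoid, and tanh.
dp = p - y                  # d loss / d (w2*h) for sigmoid + cross-entropy
dw2 = dp * h
dh = dp * w2
dw1 = dh * (1 - h * h) * x  # tanh'(z) = 1 - tanh(z)^2

eps = 1e-6                  # numerical check of dw1
_, _, lp = forward(w1 + eps, w2, x, y)
_, _, lm = forward(w1 - eps, w2, x, y)
print(abs(dw1 - (lp - lm) / (2 * eps)) < 1e-6)  # True: gradients match
```

This gradient-checking trick is a standard way to debug hand-written backprop before trusting it for training.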

Slides

Finale Doshi-Velez

Introduction to Reinforcement Learning (+ bonus material)


We will begin with an introduction to fundamental concepts in reinforcement learning: policies, value functions, and planning
in discrete environments. Armed with this groundwork, we'll do some hands-on practicals
to gain intuition for these fundamental concepts and to understand how even simple, discrete
environments can exhibit interesting subtleties. In the final session, we'll dive into
a very specific use-case of reinforcement learning known as off-policy evaluation: if
we have already collected data under one policy, can we reuse that data to guess how
well a different decision-making strategy will perform? I'll link this to some of our
current work in applying reinforcement learning in healthcare.
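The off-policy evaluation question above can be illustrated with the basic importance-sampling estimator on a two-action bandit. All policies, data, and rewards here are invented toy numbers.

```python
# Off-policy evaluation by importance sampling on a two-action bandit.
# Logged data came from a uniform behavior policy; we estimate the value
# of a target policy that prefers action 0.
behavior = {0: 0.5, 1: 0.5}
target = {0: 0.8, 1: 0.2}

# Logged (action, reward) pairs collected under the behavior policy.
logged = [(0, 1.0), (1, 0.0), (0, 1.0), (1, 0.0), (0, 1.0), (1, 0.0)]

# Reweight each logged reward by how much more (or less) likely the
# target policy was to take that action.
estimate = sum(target[a] / behavior[a] * r for a, r in logged) / len(logged)
print(estimate)  # 0.8, the target policy's true value in this toy problem
```

The estimator is unbiased, but its variance blows up when the target policy takes actions the behavior policy rarely took, which is the central difficulty in applications like healthcare.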

Slides

Frank Wood

Probabilistic Programming


How do we engineer machines that reason? This is a question that has long vexed humankind. The answer to this question is
fantastically valuable. There exist various hypotheses. One major division of hypothesis
space delineates along lines of assertion: that random variables and probabilistic calculation
are more-or-less an engineering requirement [Ghahramani, 2015, Tenenbaum et al., 2011]
and the opposite [LeCun et al., 2015, Goodfellow et al., 2016]. The field ascribed to
the former camp is roughly known as Bayesian or probabilistic machine learning; the latter
as deep learning. The first requires inference as a fundamental tool; the latter optimization,
usually gradient-based, for classification and regression. Probabilistic programming
languages are to the former as automated differentiation tools are to the latter. Probabilistic
programming is fundamentally about developing languages that allow the denotation of
inference problems and evaluators that “solve” those inference problems. It can be argued
that the rapid exploration of the deep learning, big-data/regression approach to artificial
intelligence has been triggered largely by the emergence of programming languages tools
that automate the tedious and troublesome derivation and calculation of gradients for
optimization. Probabilistic programming aims to build and deliver a toolchain that does
the same for probabilistic machine learning; supporting supervised, unsupervised, and
semi-supervised inference. Without such a toolchain, one could argue, the complexity
of inference-based approaches to artificial intelligence systems is too high to allow
the kind of rapid exploration we have recently seen in deep learning. These lectures
introducing probabilistic programming will cover the basics of the field, from language
design to evaluator implementation, with the dual aim of explaining existing systems
at a deep enough level that students should have no trouble adopting and using any of
the languages and systems currently out there, and of laying a foundation upon which
the next generation of probabilistic programming language designers and implementers
can build.
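The sample/observe view of probabilistic programs can be sketched with a minimal likelihood-weighting evaluator. This is our own toy design, not the API of any existing system; the model is an invented conjugate Gaussian so the answer can be checked.

```python
import math
import random

# A probabilistic program is a function that calls `sample` to draw
# latents and `observe` to condition on data via log-densities.
def program(sample, observe):
    mu = sample(lambda rng: rng.gauss(0.0, 1.0))   # latent, prior N(0,1)
    # Condition on the datum x = 1.0 under N(mu, 1).
    observe(-0.5 * math.log(2 * math.pi) - (1.0 - mu) ** 2 / 2)
    return mu

# Likelihood weighting: run the program many times, weight each trace
# by the exponential of its accumulated observe log-density.
def likelihood_weighting(prog, n, seed=0):
    rng = random.Random(seed)
    total_w = total_wv = 0.0
    for _ in range(n):
        log_w = 0.0
        def observe(lp):
            nonlocal log_w
            log_w += lp
        value = prog(lambda draw: draw(rng), observe)
        w = math.exp(log_w)
        total_w += w
        total_wv += w * value
    return total_wv / total_w

est = likelihood_weighting(program, 20000)
print(est)  # close to the exact posterior mean, 0.5
```

The key point is the division of labor the abstract describes: the program denotes the inference problem, while the evaluator (here, a crude importance sampler) is a generic solver that never inspects the model's internals.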

Slides

Thomas Delteil

Learning Deep Learning from Scratch with MXNet/Gluon


This lecture introduces MXNet Gluon, a flexible new imperative interface that pairs MXNet’s speed with a user-friendly frontend.
It allows users to seamlessly transition from imperative code to a symbolic graph representation,
which enables faster execution and deployment on a wide range of devices, including
embedded ones. In the first part of the lecture, we will cover the fundamentals of Gluon:
NDArray data structure, Block layer and automatic differentiation. We will show how to
define neural networks at the atomic level and through Gluon’s predefined layers. We
will demonstrate how to load data asynchronously, serialize models and build dynamic
graphs. In the second part, we will focus on a specific deep learning task: semantic
segmentation. We will show how you can implement and train a Fully Convolutional Network
(FCN) to segment an image according to a set of classes. In the final part, we will go
through the GluonCV (Computer Vision) and GluonNLP (Natural Language Processing) toolkits.
You will learn how to leverage these two libraries to quickly replicate the results of
state-of-the-art models in several tasks.

Slides

David Pfau

Discovering the Geometry of Data Manifolds with Spectral and Deep Learning


A central tenet of machine learning is that complex high-dimensional data can be described by the structure of a low-dimensional
latent manifold. However, most machine learning methods do little to exploit the rich
toolkit of techniques for analyzing these manifolds. In this tutorial I will give a tour
of how ideas from differential geometry and spectral analysis can be brought to bear
on problems in machine learning. We will cover basic ideas in differential geometry such
as curvature, metrics and geodesics, and go over how they relate to problems in spectral
theory like Laplacian operators and computing low-rank matrix decompositions. We will
survey applications to machine learning, including recent works on generalizing convolutional
neural networks to graph- and manifold-structured input, analyzing the structure of latent
spaces in deep generative models, and embedding hierarchical structure in continuous
spaces. Finally, I will discuss spectral inference networks, a framework for unsupervised
learning that uses the algorithmic tools of deep learning and stochastic optimization
to solve large-scale spectral decomposition problems that would otherwise be intractable.

Slides

Emmanuel Candes

Modern Approaches to False Discovery Rate Control and Inference in High Dimensional Models.



Slides

David Warde-Farley

Generative Adversarial Networks


Generative adversarial networks (GANs) are a recently developed approach to learning generative models, in particular generative
models capable of highly realistic synthesis. This tutorial will explore the GAN approach
to generative models, comparing and contrasting it with more traditional generative model
paradigms as well as other modern approaches based on neural networks. We will explore
the practical considerations for effectively stabilizing GAN learning dynamics, as well
as approaches to quantitatively evaluating the resulting models, and the current set
of challenges at the frontiers of GAN and generative model research. We will also motivate
the study of GANs from the perspective of successful applications to date, including
domain adaptation, image-to-image translation, and single-image super-resolution.
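One piece of the GAN theory above can be checked directly: with the optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), the minimax value equals -log 4 exactly when the generator matches the data distribution. The discrete distributions below are invented for the check.

```python
import math

# GAN minimax value under the optimal discriminator, computed on a toy
# discrete support.
p_data = {"a": 0.5, "b": 0.3, "c": 0.2}

def gan_value(p_g):
    v = 0.0
    for s in p_data:
        d_star = p_data[s] / (p_data[s] + p_g[s])   # optimal discriminator
        v += p_data[s] * math.log(d_star) + p_g[s] * math.log(1 - d_star)
    return v

print(gan_value(dict(p_data)))                       # -log(4), about -1.3863
mismatched = gan_value({"a": 0.2, "b": 0.3, "c": 0.5})
print(mismatched > -math.log(4))  # True: any mismatch raises the value
```

The gap above -log 4 is twice the Jensen-Shannon divergence between the data and generator distributions, which is why the original GAN objective can be read as JS-divergence minimization.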

A - 1

Recommendations in the largest e-commerce platform of Latin America.

Creating a product from scratch poses many challenges, both in business terms and in algorithmic and infrastructure complexity. In this presentation, we will share how the largest e-commerce platform in LATAM has developed its recommender engine, from its birth in 2017 to the present day.

Speakers: Pablo Zivic, ML Researcher, and Martin Pozzer, Senior Product Development Manager. Mercado Libre.


A - 2

AI: From Inception to Production

The recent developments in artificial intelligence (AI), supported by the analysis and exploitation of large data sets, open up a new era in which the application of AI promises the creation of innovative products and services. However, even though major players in the technology marketplace are creating software that takes AI out of the laboratories and makes it more accessible, the process of incorporating AI into a company's products or processes still poses great challenges. Our aim in this talk is to present, drawing from our own experience, the process of carrying out an AI project from its inception, through the stages of development and quality control, to its eventual realization.

Speakers: Tomás Tecce, Pasquinel Ubani. Globant.


A - 3

IQA @ Despegar

What is a good quality photo? During this talk we will discuss possible answers to this question by diving into the world of Image Quality Assessment. We will present the state of the art in machine learning techniques applied to this field, as well as industry caveats for turning prototypes into performant and scalable software.

Speakers: Pedro Carossi and Alejandro Alvarez. Despegar.


A - 4

ML at OLX: Disrupting the Future of the Classifieds World

Machine learning (ML) technologies are becoming key differentiators for companies across all industries. Some of the biggest two-sided marketplace platforms in the world such as Uber, Airbnb and OLX use ML to deliver disruption in transportation, housing and classifieds marketplaces respectively. Founded in 2006, OLX is a global company that operates in more than 40 countries and provides one of the largest online classifieds marketplaces in the world. At the moment, OLX’s ML-powered platform facilitates matching buyers and sellers by delivering personalized relevant content to more than 300 million users monthly. In this talk I will introduce challenging and unique problems that we encounter while building recommendation, personalization and search systems at this scale at OLX and how we are tackling them using cutting-edge ML technologies.

Speakers: Vladan Radosavljevic, Ph.D., Head of Data Science at OLX.


A - 5

NextAI Canada

Are you interested in commercializing your research and starting a business? NextAI is Canada's premier AI startup accelerator, located in Toronto and Montreal, two global hotspots for AI research and commercialization. NextAI is for entrepreneurs, researchers and scientists looking to launch AI-enabled ventures.

Speakers: Jon French.


A - 6

Machine Learning and Earth Observation

Satellogic aims to capture every square meter of the Earth's surface to derive insights and enable better decision making for industries, governments, and individuals. Addressing some of humanity's most pressing challenges, such as providing food or distributing energy for nine billion people without depleting resources, requires real-time planet-scale data. Satellogic has created small, inexpensive satellites that transmit real-time data and images back home. Our constellation already has several high-resolution satellites in orbit and will grow to 300 in the next few years to provide new insights about our planet. The Satellogic data science team works with our satellite images and other data sources to transform this data into knowledge. In this talk we will explain how our company has designed from scratch a fleet of satellites that cost 1,000 times less than conventional Earth observation satellites, and, through real cases, the main challenges our machine learning engineers face when developing remote sensing solutions.


A - 7

A brief survey of Data Science techniques applied to the analysis of Bank and Mobile Phone Datasets.


Speakers: Charles Sarraute. Grandata.


A - 8

Unified Machine Learning Approaches for Heterogeneous Data

Speakers: David Stevens and Santiago Hernandez. Jampp.


A - 9

Smart Learning for fraud prevention

What is the best way to investigate fraud? What happens when we find a new pattern or modus operandi? After investigating a fraud case and identifying the most relevant findings that can impact the organization, it is necessary to be clear about how, and with what information, to feed our learning engine.

Speakers: Daniel Guzman. IBM.


Diversity - Panel

Discussion Session: Diversity and Inclusion in ML

Top local speakers will discuss the main initiatives, outstanding projects, and their personal experiences in mixed teams, focusing on the importance of diversity in fostering more ethical ML. They will tell us how to promote a more inclusive Machine Learning, Data Science, and Information and Communications Technology community, focusing on Argentina and the region.

Speakers: Delfina Daglio (IBM Argentina), Fernando Schapachnik (Fundación Sadosky) and Luciana Ferrer (ICC UBA-CONICET). Moderator: Guadalupe Dorna.


Computational Challenges

Sadosky Foundation

Speakers: Leandro Lombardi. Sadosky Foundation.
