
How to Know with Celeste Kidd - #330

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Release Date: 12/23/2019

Consciousness and COVID-19 with Yoshua Bengio - #361

Today we’re joined by one of the most cited computer scientists in the world, if not the most cited: Yoshua Bengio, Professor at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua to explore his work on consciousness, including how he defines consciousness and his paper “The Consciousness Prior,” as well as his current endeavor in building a COVID-19 tracing application and the use of ML to propose experimental candidate drugs.

Geometry-Aware Neural Rendering with Josh Tobin - #360

Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS.

The Third Wave of Robotic Learning with Ken Goldberg - #359

Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, whose work focuses on robotic learning.

Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358

Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads from the perspective of integrating visual and language tasks.

Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357

Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland.

SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356

Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU-based alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning in the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.

Advancements in Machine Learning with Sergey Levine - #355

Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where he and his team presented 12 different papers -- which means a lot of ground to cover!

Secrets of a Kaggle Grandmaster with David Odaibo - #354

Imagine spending years learning ML from the ground up, starting with its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

NLP for Mapping Physics Research with Matteo Chinazzi - #353

Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focuses on in his paper Mapping the Physics Research Space: A Machine Learning Approach.

Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352

The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to his research, which focuses broadly on “adaptive and robust machine learning.”


Today we begin our coverage of the 2019 NeurIPS conference with Celeste Kidd, Assistant Professor of Psychology at UC Berkeley. In our conversation, we discuss:

  • The research at the Kidd Lab, which is focused on understanding “how people come to know what they know.”
  • Her invited talk “How to Know,” which details the core cognitive systems people use to guide their learning about the world.
  • Why people are curious about some things but not others.
  • How our past experiences and existing knowledge shape our future interests.
  • Why people believe what they believe, and how these beliefs are influenced in one direction or another.
  • How machine learning figures into this equation.

Check out the complete show notes for this episode at twimlai.com/talk/330. You can also follow along with this series at twimlai.com/neurips2019.