The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506
08/02/2021
Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explored her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, coined by researchers at UC Berkeley. Finally, Lina talks us through applying the roadmap to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/20007845
Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505
07/29/2021
Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. In our conversation with Gustavo, we explore his paper, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19970432
Fairness and Robustness in Federated Learning with Virginia Smith - #504
07/26/2021
Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques involved. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between the models and the things being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity in data beneficial, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19933667
Scaling AI at H&M Group with Errol Koolmeister - #503
07/22/2021
Today we’re joined by Errol Koolmeister, the head of AI foundation at H&M Group. In our conversation with Errol, we explore H&M’s AI journey, including its wide adoption across the company beginning in 2016, and the various use cases in which it's deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value of learning from proofs of concept, and how to align AI efforts in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infrastructure for a specific product with many products in mind.
/episode/index/show/twimlai/id/19898930
Evolving AI Systems Gracefully with Stefano Soatto - #502
07/19/2021
Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19857842
ML Innovation in Healthcare with Suchi Saria - #501
07/15/2021
Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone Associate Professor of computer science, statistics, and health policy, and the director of the Machine Learning and Healthcare Lab at Johns Hopkins University. Suchi shares a bit about her journey to working at the intersection of machine learning and healthcare, and how her research has spanned both medical policy and discovery. We discuss why it has taken so long for machine learning to be accepted and adopted by the healthcare infrastructure, where exactly we stand in the adoption process, and where there have been “pockets” of tangible success. Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and its goals in the healthcare space, and an accompanying study that looks at real-time ML inference in an EMR setting. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19822826
Cross-Device AI Acceleration, Compilation & Execution with Jeff Gehlhaar - #500
07/12/2021
Today we’re joined by a friend of the show, Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm. In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform, how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more! After you check out this interview, you can look below for some of the other conversations with the researchers mentioned. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19786106
The Future of Human-Machine Interaction with Dan Bohus and Siddhartha Sen - #499
07/08/2021
Today we continue our AI in Innovation series joined by Dan Bohus, a senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research. In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard us into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they have, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups. We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19745423
Vector Quantization for NN Compression with Julieta Martinez - #498
07/05/2021
Today we’re joined by Julieta Martinez, a senior research scientist at the recently announced startup Waabi. Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search and Neural Network Compression have in Common,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between large databases and dealing with high dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network. We also dig into another paper Julieta presented at the conference, Deep Multi-Task Learning for Joint Localization, Perception, and Prediction, which details an architecture that is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19708886
Deep Unsupervised Learning for Climate Informatics with Claire Monteleoni - #497
07/01/2021
Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder. We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML in a data-rich environment. Finally, we dig into the evolution of climate science-focused events and conferences, as well as the keynote Claire gave at the EarthVision workshop at CVPR, which focused on semi- and unsupervised deep learning approaches to studying rare and extreme climate events. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19674095
Skip-Convolutions for Efficient Video Processing with Amir Habibian - #496
06/28/2021
Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineering manager at Qualcomm Technologies. In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers his team presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end to end in visual neural networks. We also discuss his FrameExit paper, which proposes a conditional early exiting framework for efficient video recognition. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19632989
Advancing NLP with Project Debater w/ Noam Slonim - #495
06/24/2021
Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, the culmination of 7 years and over 50 research papers, which eventually became a Nature cover paper that details the system in its entirety. Finally, Noam details many of the underlying capabilities of Debater, including the relationship between systems preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19551077
Bringing AI Up to Speed with Autonomous Racing w/ Madhur Behl - #494
06/21/2021
Today we’re joined by Madhur Behl, an Assistant Professor in the department of computer science at the University of Virginia. In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of autonomous driving, ML/AI, and motorsports, where he’s teaching self-driving cars how to drive in an agile manner. We talk through the differences between traditional self-driving problems and those encountered in a racing environment, and the challenges in solving planning, perception, and control. We also discuss the upcoming race at the Indianapolis Motor Speedway, where Madhur and his students will compete for $1 million in the world’s first head-to-head fully autonomous race, and how they’re preparing for it.
/episode/index/show/twimlai/id/19553858
AI and Society: Past, Present and Future with Eric Horvitz - #493
06/17/2021
Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz. In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical implications, the scope of the study on the topic, and how drastically the AI and machine learning landscape has changed since 2009. We also discuss Eric’s role at Microsoft and the Aether committee that has advised the company on issues of responsible AI since 2017. Finally, we talk through his recent work as a member of the National Security Commission on AI, where he helped commission a 750+ page report on topics including the Future of AI R&D, Building Trustworthy AI systems, civil liberties and privacy, and the challenging area of AI and autonomous weapons. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18555065
Agile Applied AI Research with Parvez Ahammad - #492
06/14/2021
Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn. In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly organized at LinkedIn. We explore how they ensure time investments on long-term projects are managed, how to identify products that can help in a cross-cutting way across multiple lines of business, quantitative methodologies to identify unintended consequences in experimentation, and navigating the tension between research and applied ML teams in an organization. Finally, we discuss differential privacy, and their recently released GreyKite library, an open-source Python library developed to support forecasting. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19431989
Haptic Intelligence with Katherine J. Kuchenbecker - #491
06/10/2021
Today we’re joined by Katherine J. Kuchenbecker, a director at the Max Planck Institute for Intelligent Systems, where she leads the Haptic Intelligence Department. In our conversation, we explore Katherine’s research interests, which lie at the intersection of haptics (physical interaction with the world) and machine learning, introducing us to the concept of “haptic intelligence.” We discuss how ML, mainly computer vision, has been integrated to work together with robots, and some of the devices that Katherine’s lab is developing to take advantage of this research. We also talk about hugging robots, augmented reality in robotic surgery, and the degree to which she studies human-robot interaction. Finally, Katherine shares with us her passion for mentoring and the importance of diversity and inclusion in robotics and machine learning. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19403342
Data Science on AWS with Chris Fregly and Antje Barth - #490
06/07/2021
Today we continue our coverage of the AWS ML Summit joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS. In our conversation with Chris and Antje, we explore their roles as community builders prior to, and since, joining AWS, as well as their recently released book Data Science on AWS. In the book, Chris and Antje demonstrate how to reduce cost and improve performance while successfully building and deploying data science projects. We also discuss the release of their new course on Coursera, managing the complexity that comes with building real-world projects, and some of their favorite sessions from the recent ML Summit.
/episode/index/show/twimlai/id/19384037
Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489
06/03/2021
Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies and roadmap at Qualcomm Technologies. We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance between product evolution and incorporating research concepts, the evolution of their Cloud AI 100 hardware infrastructure, and its role in the deployment of Ingenuity, the robotic helicopter that recently flew on Mars. Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data. The complete show notes can be found at .
/episode/index/show/twimlai/id/19342475
Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488
05/31/2021
Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML. In our conversation with Nir, we explore how his view of the wide vs. deep machine learning platform paradox has changed and evolved over time, how companies should think about building vs. buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19295510
Applied AI Research at AWS with Alex Smola - #487
05/27/2021
Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI. We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's the key to lowering the barrier to entry for machine learning research. Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and which speakers he's most excited about hearing from. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19262273
Causal Models in Practice at Lyft with Sean Taylor - #486
05/24/2021
Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs. We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work. Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, challenges associated with hierarchical modeling, and much, much more. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19218410
Using AI to Map the Human Immune System w/ Jabran Zahid - #485
05/20/2021
Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research. In our conversation with Jabran, we explore their recent endeavor to completely map which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how the project’s focus changed with the emergence of the coronavirus pandemic. We talk through the biological advancements made and the challenges of using machine learning in this setting, some of the more advanced ML techniques they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19168328
Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484
05/17/2021
Today we conclude our 2021 ICLR coverage joined by Konstantin Rusch, a PhD student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel recurrent neural network architecture for learning long-time dependencies. We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem, and Konstantin’s future research goals. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19133792
What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483
05/13/2021
Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. One of our favorite recurring conversations on the podcast is the two-way street that lies between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern-matching traits of ML/DL models are not necessarily exclusive to these systems. Allyson also participated in a recent panel discussion at an ICLR workshop centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more! The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19080533
Probabilistic Numeric CNNs with Roberto Bondesan - #482
05/10/2021
Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. In our conversation with Roberto, we explore his paper Probabilistic Numeric CNNs, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/19024370
Building a Unified NLP Framework at LinkedIn with Huiji Gao - #481
05/06/2021
Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into use broadly, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more! The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18995078
Dask + Data Science Careers with Jacqueline Nolis - #480
05/03/2021
Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud and podcast co-host. You might remember Jacqueline from our panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should signal that they have the required background, and how to approach and navigate failure as a data scientist. We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists are with regard to the software development toolchain, and much more! The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18924944
Machine Learning for Equitable Healthcare Outcomes with Irene Chen - #479
04/29/2021
Today we’re joined by Irene Chen, a Ph.D. student at MIT. Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence. We also discuss how she thinks about the long term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between the ML researcher and clinician, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18910481
AI Storytelling Systems with Mark Riedl - #478
04/26/2021
Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings together many different threads of ML/AI to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push towards being able to generate suspenseful stories with these systems. We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18866375
Creating Robust Language Representations with Jamie Macbeth - #477
04/21/2021
Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College. In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into. One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, identifying deficiencies in deep learning systems, and the exciting directions for his upcoming research. The complete show notes for this episode can be found at .
/episode/index/show/twimlai/id/18813209