
Transfer Learning

Data Skeptic

Release Date: 06/15/2018

Pose Tracking

Many researchers and students have painstakingly labeled precise details about the body positions of the creatures they study. Can AI be used for this labeling? Of course it can! Today's episode discusses Social LEAP Estimates Animal Poses (SLEAP), a software solution to train AI to perform this tedious but important labeling work.

Modeling Group Behavior

Our guest in this episode is Sebastien Motsch, an assistant professor at Arizona State University, working in the School of Mathematical and Statistical Science. He works on modeling self-organized biological systems to understand how complex patterns emerge.

Advances in Data Loggers

Our guest in this episode is Ryan Hanscom. Ryan is a Ph.D. candidate in a joint doctoral program in evolution at San Diego State University and the University of California, Riverside. He is a terrestrial ecologist with a focus on herpetology and mammalogy. Ryan discussed how the behavior of rattlesnakes is studied in the natural world, particularly as temperatures increase.

What You Know About Intelligence is Wrong (fixed)

We are joined by Hank Schlinger, a professor of psychology at California State University, Los Angeles. His research revolves around theoretical issues in psychology and behavior analysis. Hank establishes that words have referents and asks what the referent for intelligence is. He discussed how intelligence can be observed in animals. He also discussed how intelligence is measured in a given context.

Animal Decision Making

On today’s episode, we are joined by Aimee Dunlap. Aimee is an assistant professor at the University of Missouri–St. Louis and the interim director at the Whitney R. Harris World Ecology Center. Aimee discussed how animals perceive information and what they use it for. She discussed the connection between their environment and learning for decision-making. She also discussed the costs required for learning and factors that affect animal learning.

Octopus Cognition

We are joined by Tamar Gutnick, a visiting professor at the University of Naples Federico II, Napoli, Italy. She studies the octopus nervous system and behavior, focusing on cognition and learning behaviors. Tamar gave some background on the kind of research she does: lab research. She discussed some challenges with observing octopuses in the lab. She discussed some patterns observed in the octopus lifestyle in a controlled setting. Tamar discussed what they know about octopus intelligence. She discussed the octopus nervous system and why octopuses are unique compared to other animals. She...

Optimal Foraging

Claire Hemmingway, an assistant professor in the Department of Psychology and Ecology and Evolutionary Biology at the University of Tennessee in Knoxville, is our guest today. Her research is on decision-making in animal cognition, focusing on neotropical bats and bumblebees. Claire discussed how bumblebees make foraging decisions and how they communicate when foraging. She discussed how they set up experiments in the lab to address questions about bumblebees foraging. She also discussed some nuances between bees in the lab and those in the wild. Claire discussed factors that drive an animal's...

Memory in Chess

On today’s show, we are joined by our co-host, Becky Hansis-O’Neil. Becky is a Ph.D. student at the University of Missouri, St Louis, where she studies bumblebees and tarantulas to understand their learning and cognition. She joins us to discuss the paper: Perception in Chess. The paper aimed to understand how chess players perceive the positions of chess pieces on a chess board. She discussed the findings of the paper. She spoke about situations where grandmasters had better recall of chess positions than beginners and situations where they did not. Becky and Kyle discussed...

OpenWorm

On this episode, we are joined by Stephen Larson, the CEO of MetaCell and an affiliate of the OpenWorm foundation. Stephen discussed what the OpenWorm project is about. They hope to use a digital Caenorhabditis elegans nematode (C. elegans for short) to study the basics of life. Stephen discussed why C. elegans is an ideal organism for studying life in the lab. He also discussed the steps involved in simulating a digital organism. He mentioned the constraints on the cellular scale that informed their development of a digital C. elegans. Stephen discussed the validation...

What the Antlion Knows

Our guest is Becky Hansis-O’Neil, a Ph.D. student at the University of Missouri, St Louis, and our co-host for the new "Animal Intelligence" season. Becky shares her background on how she got into the field of behavioral intelligence and biology.

More Episodes

On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain.

Transfer learning has some obviously appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way to transfer what was learned on the general task to the specific one?
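
As a rough sketch of that idea (using PyTorch and torchvision, which are assumptions here and not anything named on the show), one could start from a network pre-trained on general image recognition and swap its generic output layer for a head over the furniture company's own categories. The category names below are purely hypothetical.

# Minimal transfer-learning sketch; assumes torch and torchvision are installed.
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical furniture-specific labels; a real project would use its own taxonomy.
furniture_classes = ["sectional_sofa", "loveseat", "armchair", "ottoman"]

# Start from a model trained on the general task (ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the generic 1000-class output layer with a head sized for the specific task.
model.fc = nn.Linear(model.fc.in_features, len(furniture_classes))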

A general definition of transfer learning in machine learning is taking some or all aspects of a pre-trained model as the basis for training a new model on a specific, and potentially limited, dataset.
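
Continuing the sketch above, one common recipe matching this definition is to freeze the pre-trained layers and train only the newly added head on the small, task-specific dataset. The random tensors below stand in for the furniture company's hypothetical labeled photos and exist only to make the example runnable.

# Freeze every pre-trained parameter; only the new head will be updated.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a small labeled furniture dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(furniture_classes), (8,))

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

Because only the small head is being trained, far fewer labeled examples are needed than would be required to train the whole network from scratch.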