
Spam Filtering with Naive Bayes

Data Skeptic

Release Date: 07/27/2018

Today's spam filters are advanced, data-driven tools. They rely on a variety of techniques to effectively and often seamlessly filter out junk email from good email.

Whitelists, blacklists, traffic analysis, network analysis, and a variety of other tools are likely employed by most major players in this area. Naturally, content analysis can be an especially powerful tool for detecting spam.

Given the binary nature of the problem (Spam or \neg Spam), it's clear that this is a great problem to solve with machine learning. In order to apply machine learning, you first need a labeled training set. Thankfully, many standard corpora of labeled spam data are readily available. Further, if you're working for a company with a spam filtering problem, asking users to self-moderate or flag things as spam can often be an effective way to generate a large number of labels for "free".
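To make that setup concrete, here is a minimal sketch of what a labeled training set and a held-out evaluation split might look like in code. The tiny in-memory corpus and the scikit-learn split are illustrative assumptions, not any specific corpus discussed in the episode.

```python
from sklearn.model_selection import train_test_split

# A toy labeled corpus: each example is (email text, label), where 1 = spam, 0 = ham.
# In practice these would come from a public spam corpus or from users clicking "mark as spam".
emails = [
    ("win a free prize now, click here", 1),
    ("cheap meds, limited time offer", 1),
    ("meeting moved to 3pm, see agenda attached", 0),
    ("can you review the draft report by Friday?", 0),
]

texts = [text for text, label in emails]
labels = [label for text, label in emails]

# Hold out a portion of the labeled data to evaluate the classifier later.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)
```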

With a labeled dataset in hand, a data scientist working on spam filtering must next do feature engineering. This should be done with consideration of the algorithm that will be used. The Naive Bayesian Classifier has been a popular choice for detecting spam because it tends to perform well on high-dimensional data, unlike many other ML algorithms. It is also very efficient to compute, making it possible to train a per-user classifier if one wishes to. While we might do some basic NLP tricks, for the most part, we can turn each word in a document (or perhaps each bigram or n-gram in a document) into a feature.
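As a hedged sketch of that feature-engineering step (one reasonable tooling choice, not necessarily what any production filter uses), scikit-learn's CountVectorizer can turn every unigram and bigram into a count feature, and MultinomialNB provides the Naive Bayes model over those counts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Treat every unigram and bigram in a message as a feature (bag-of-n-grams).
vectorizer = CountVectorizer(ngram_range=(1, 2), lowercase=True)

# Multinomial Naive Bayes handles the resulting high-dimensional, sparse counts cheaply.
model = make_pipeline(vectorizer, MultinomialNB())

# X_train / y_train are the labeled messages from the earlier sketch.
model.fit(X_train, y_train)

print(model.predict(["free prize! click here now"]))   # on a realistic corpus, expected spam (1)
print(model.predict(["agenda for Friday's meeting"]))  # expected ham (0)
```

Because the n-gram feature space is sparse, both fitting and prediction stay cheap even with very large vocabularies, which is part of what makes per-user models plausible.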

The Naive part of the Naive Bayesian Classifier stems from the naive assumption that all features in one's analysis are independent of one another. If x and y are known to be independent, then Pr(x \cap y) = Pr(x) \cdot Pr(y). In other words, you just multiply the probabilities together. Shh, don't tell anyone, but this assumption is actually wrong! Certainly, if a document contains the word algorithm, it's more likely to contain the word probability than some randomly selected document. Thus, Pr(\text{algorithm} \cap \text{probability}) > Pr(\text{algorithm}) \cdot Pr(\text{probability}), violating the assumption. Despite this "flaw", the Naive Bayesian Classifier works remarkably well on many problems. If one employs the common approach of converting a document into bigrams (pairs of words instead of single words), then a good deal of this correlation can be captured indirectly.
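To see that multiplication concretely, here is a small hand-rolled sketch of the per-class computation under the independence assumption, using Laplace (add-one) smoothing and log space to avoid underflow. The helper names are hypothetical, and it reuses the toy texts and labels from the earlier sketch.

```python
import math
from collections import Counter

def train_naive_bayes(docs, labels):
    """Estimate class priors and per-class word probabilities with add-one smoothing."""
    classes = set(labels)
    priors, word_counts, totals, vocab = {}, {}, {}, set()
    for c in classes:
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        priors[c] = len(class_docs) / len(docs)
        counts = Counter(w for d in class_docs for w in d.split())
        word_counts[c] = counts
        totals[c] = sum(counts.values())
        vocab.update(counts)
    return priors, word_counts, totals, vocab

def score(doc, c, priors, word_counts, totals, vocab):
    """log Pr(class) + sum of log Pr(word | class): the 'naive' product, in log space."""
    log_prob = math.log(priors[c])
    for w in doc.split():
        # Independence assumption: each word contributes its own independent factor.
        log_prob += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
    return log_prob

params = train_naive_bayes(texts, labels)  # toy data from the earlier sketch
spam_score = score("free prize now", 1, *params)
ham_score = score("free prize now", 0, *params)
print("spam" if spam_score > ham_score else "ham")
```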

In the final leg of the discussion, we explore the question of whether or not a Naive Bayesian Classifier would be a good choice for detecting fake news.