
LM101-061: What happened at the Reinforcement Learning Tutorial? (RERUN)

Learning Machines 101

Release Date: 02/23/2017

LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes

Learning Machines 101

This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of observed outcomes in a space-time continuum corresponding to our physical world. The machine learning algorithm uses information about the frequency of environmental events to support learning. Along the way we discuss mathematical tools from measure theory, such as sigma fields and the Radon-Nikodym probability density function, as well as the intriguing Banach-Tarski paradox.

LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges

Learning Machines 101

This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. Simple mathematical formulas are presented based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk that ensure convergence of the generated sequence of parameter vectors.
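To make the flavor of those guarantees concrete, here is a minimal sketch of the two Wolfe step-size conditions that the Wolfe-Zoutendijk convergence theory relies on for batch gradient methods on smooth objectives. This is an illustrative example, not code from the book; the function names, the quadratic objective, and the constants c1 and c2 are all assumptions chosen for the demonstration.

```python
import numpy as np

def satisfies_wolfe(f, grad_f, x, d, t, c1=1e-4, c2=0.9):
    """Check the two Wolfe conditions for step size t along
    descent direction d at the current parameter vector x."""
    # Sufficient-decrease (Armijo) condition.
    armijo = f(x + t * d) <= f(x) + c1 * t * (grad_f(x) @ d)
    # Curvature condition: the directional derivative must flatten out.
    curvature = (grad_f(x + t * d) @ d) >= c2 * (grad_f(x) @ d)
    return bool(armijo and curvature)

# Simple quadratic objective f(x) = ||x||^2 / 2 (made-up for illustration).
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x

x = np.array([1.0, -2.0])
d = -grad_f(x)                       # steepest-descent direction
ok = satisfies_wolfe(f, grad_f, x, d, t=1.0)
```

A line-search routine that only accepts step sizes passing this check is one standard way to obtain the kind of convergence guarantee discussed in the episode.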

LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems

Learning Machines 101

In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning”, which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, their behavior can be analyzed even when they are highly nonlinear and high-dimensional.

LM101-083: Ch5: How to Use Calculus to Design Learning Machines

Learning Machines 101

This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework”, which is now available! The book chapter shows, with many examples, how matrix calculus is useful for the analysis and design of both linear and nonlinear learning machines. We discuss the relevance of the matrix chain rule and matrix Taylor series for machine learning algorithm design and the analysis of generalization performance! Check out: www.learningmachines101.com
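As a small numerical illustration of the kind of Taylor-series analysis mentioned above (an assumed example, not one from the book), the snippet below checks that a first-order expansion f(w + dw) ≈ f(w) + ∇f(w)·dw of a quadratic objective has error of order ||dw||², which is the property such expansions exploit when analyzing a learning machine near a parameter vector w.

```python
import numpy as np

# Illustrative quadratic objective f(w) = 0.5 * w' A w with a
# symmetric positive definite matrix A (made-up for this sketch).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
f = lambda w: 0.5 * (w @ A @ w)
grad = lambda w: A @ w

w = np.array([1.0, -1.0])
dw = np.array([1e-3, 2e-3])          # a small parameter perturbation

approx = f(w) + grad(w) @ dw         # first-order Taylor approximation
exact = f(w + dw)
error = abs(exact - approx)          # should be O(||dw||^2), i.e. tiny
```

For this quadratic the error equals 0.5 * dw' A dw exactly, so shrinking dw by a factor of 10 shrinks the error by a factor of 100.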

LM101-082: Ch4: How to Analyze and Design Linear Machines

Learning Machines 101

The main focus of this particular episode covers the material in Chapter 4 of my new forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.”

LM101-081: Ch3: How to Define Machine Learning (or at Least Try)

Learning Machines 101

This podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, which discusses how to formally define machine learning algorithms. A learning machine is viewed as a dynamical system that is minimizing an objective function. In addition, the knowledge structure of the learning machine is interpreted as a preference relation graph implicitly specified by the objective function. Also, the new book “The Practitioner’s Guide to Graph Data” is reviewed.

LM101-080: Ch2: How to Represent Knowledge using Set Theory

Learning Machines 101

This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework”, with an expected publication date of May 2020. Chapter 2, titled “Set Theory for Concept Modeling”, discusses how to represent knowledge using set theory notation.

LM101-079: Ch1: How to View Learning as Risk Minimization

Learning Machines 101

This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”, which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework not only for understanding existing algorithms but also for suggesting new algorithms for specific applications.
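A minimal sketch of empirical risk minimization in the supervised case may help make the idea concrete. This is an illustrative example, not code from the book: the data, the squared-error loss, and the gradient-descent settings are all assumptions chosen for the demonstration.

```python
import numpy as np

# Empirical risk minimization: fit a linear predictor by minimizing
# the average squared-error loss over a training sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # 100 training inputs (made-up data)
true_w = np.array([2.0, -1.0])       # ground-truth parameters
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(2)                      # initial parameter vector
lr = 0.1                             # learning rate
for _ in range(500):
    residual = X @ w - y
    grad = X.T @ residual / len(y)   # gradient of the empirical risk
    w -= lr * grad                   # batch gradient-descent update

empirical_risk = np.mean((X @ w - y) ** 2)
```

The same template covers unsupervised and reinforcement learning in the framework discussed in the episode: only the loss function and the definition of a "training example" change.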

LM101-078: Ch0: How to Become a Machine Learning Expert

Learning Machines 101

This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com

LM101-077: How to Choose the Best Model using BIC

Learning Machines 101

In this 77th episode of Learning Machines 101, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the “marginal likelihood”. Using the marginal likelihood, one can calculate the probability of a model given...
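The contrast between the two criteria can be sketched with their standard formulas: BIC = -2 log L + k log n and AIC = -2 log L + 2k, where L is the maximized likelihood, k the number of free parameters, and n the number of observations. The fitted log-likelihood values below are made-up numbers for illustration only.

```python
import numpy as np

def bic(log_likelihood, num_params, num_obs):
    """Bayesian Information Criterion (lower is better): approximates
    -2 times the log marginal likelihood of the training data."""
    return -2.0 * log_likelihood + num_params * np.log(num_obs)

def aic(log_likelihood, num_params):
    """Akaike Information Criterion (lower is better): estimates
    relative out-of-sample prediction error."""
    return -2.0 * log_likelihood + 2.0 * num_params

# Hypothetical comparison: a simple vs. a more complex model,
# both fit to the same 100 observations.
simple_bic = bic(-120.0, 3, 100)
complex_bic = bic(-118.0, 6, 100)
```

Note how BIC's k log n penalty grows with the sample size, so for large n it favors simpler models more strongly than AIC's fixed 2k penalty does.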


This is the third in a short series of podcasts providing a summary of events associated with Dr. Golden’s visit to the 2015 Neural Information Processing Systems Conference, one of the top conferences in the field of machine learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Learning with Function Approximation tutorial presented by Professor Richard Sutton on the first day of the conference.

This episode is a RERUN of an episode originally presented in January 2016 and lays the groundwork for future episodes on the topic of reinforcement learning!

Check out www.learningmachines101.com for more info!