MLG 027 Hyperparameters 1

Machine Learning Guide

Release Date: 01/28/2018

Full notes and resources at ocdevel.com/mlg/27

Try a walking desk to stay healthy while you study or work!

Hyperparameters are crucial elements in the configuration of machine learning models. Unlike parameters, which are learned by the model during training, hyperparameters are set by humans before the learning process begins. They are the knobs and dials that humans can control to influence the training and performance of machine learning models.

Definition and Importance

Hyperparameters differ from parameters such as the theta weights in linear and logistic regression, which the model learns during training. Hyperparameters are choices made by humans: the type of model, the number of neurons in a layer, the overall architecture, and so on. These choices can significantly affect a model's performance, which makes conscious, informed tuning vital.
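
To make the distinction concrete, here is a minimal sketch using scikit-learn (the values are illustrative): C and penalty are hyperparameters a human sets before training, while coef_ and intercept_ hold the theta parameters learned during training.

```python
# Minimal sketch: hyperparameters vs. learned parameters in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = LogisticRegression(C=1.0, penalty="l2")  # hyperparameters (illustrative values)
model.fit(X, y)                                  # training learns the parameters

print(model.coef_, model.intercept_)             # theta: learned, not chosen
```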

Types of Hyperparameters

Model Selection:

Choosing which model to use is itself a hyperparameter: for example, deciding between linear regression, logistic regression, naive Bayes, or a neural network.

Architecture of Neural Networks:

  • Number of Layers and Neurons: Deciding the width (number of neurons) and depth (number of layers).
  • Types of Layers: Whether to use LSTMs, convolutional layers, or dense layers (see the sketch after this list).
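
As a rough illustration of these width and depth choices, here is a minimal sketch assuming TensorFlow/Keras; the sizes are arbitrary, not recommendations.

```python
# Minimal sketch (assuming TensorFlow/Keras): depth = number of layers,
# width = neurons per layer. All sizes here are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),             # 20 input features (illustrative)
    tf.keras.layers.Dense(128, activation="relu"),  # hidden layer 1 (width 128)
    tf.keras.layers.Dense(64, activation="relu"),   # hidden layer 2 (width 64)
    tf.keras.layers.Dense(1, activation="sigmoid"), # output layer
])
# For images you might swap the Dense hidden layers for Conv2D;
# for sequences, LSTM. That layer-type choice is also a hyperparameter.
model.summary()
```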

Activation Functions:

Activation functions transform a layer's linear output into a non-linear one. Popular choices include ReLU, tanh, and sigmoid, with ReLU being the default for most hidden layers.
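
For reference, a minimal NumPy sketch of the three activations named above:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)        # default for hidden layers

def tanh(z):
    return np.tanh(z)              # squashes to (-1, 1), zero-centered

def sigmoid(z):
    return 1 / (1 + np.exp(-z))    # squashes to (0, 1), probability-like

z = np.array([-2.0, 0.0, 2.0])
print(relu(z), tanh(z), sigmoid(z))
```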

Regularization and Optimization:

These influence the learning process. Whether to use L1/L2 regularization or dropout, as well as which optimizer to use (e.g., Adam, Adagrad), are all hyperparameters.
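
A minimal sketch, again assuming TensorFlow/Keras, of where these knobs appear in code; the regularization coefficient, dropout rate, and learning rate are illustrative, not recommendations.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),  # L2 penalty
    tf.keras.layers.Dropout(0.5),                    # dropout rate is a hyperparameter
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The optimizer and its learning rate are hyperparameters too.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")
```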

Optimization Techniques

Techniques like grid search, random search, and Bayesian optimization systematically explore combinations of hyperparameters to find the best configuration for a given task. While these methods can be computationally expensive, they are often necessary for reaching the best model performance.
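
As an illustration, here is a minimal sketch of grid search versus random search using scikit-learn's built-in helpers; the parameter grid is illustrative. Bayesian optimization typically requires a separate library (e.g., scikit-optimize or Hyperopt) and is omitted here.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

param_grid = {"C": [0.01, 0.1, 1.0, 10.0], "penalty": ["l2"]}

# Grid search: exhaustively tries every combination.
grid = GridSearchCV(LogisticRegression(solver="lbfgs"), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)

# Random search: samples a fixed number of combinations.
rand = RandomizedSearchCV(LogisticRegression(solver="lbfgs"), param_grid,
                          n_iter=3, cv=5, random_state=0)
rand.fit(X, y)
print(rand.best_params_)
```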

Challenges and Future Directions

The field is working toward simplifying the choice of hyperparameters, ideally automating them so they effectively become parameters of the model itself. Efforts like Google's AutoML aim to handle hyperparameter tuning automatically.

Understanding and optimizing hyperparameters is a cornerstone in machine learning, directly impacting the effectiveness and efficiency of a model. Progress continues to integrate these choices into model training, reducing the dependency on human intervention and trial-and-error experimentation.

Decision Tree

  • Model selection
    • Unsupervised? K-means Clustering => DL
    • Linear? Linear regression, logistic regression
    • Simple? Naive Bayes, Decision Tree (Random Forest, Gradient Boosting)
    • Little data? Boosting
    • Lots of data, complex situation? Deep learning
  • Network
    • Layer arch
      • Vision? CNN
      • Time? LSTM
      • Other? MLP
      • Trading LSTM => CNN decision
    • Layer size design (funnel, etc)
      • Face pics
      • From BTC episode
      • Don't know? Layers=1, Neurons=mean(inputs, outputs) link (see the sketch after this list)
  • Activations / nonlinearity
    • Output
      • Sigmoid = predicts a probability; usually used at the output layer
      • Softmax = multi-class classification
      • Nothing (linear output) = regression
    • ReLU family (Leaky ReLU, ELU, SELU, ...) = avoids vanishing gradients (the gradient is constant for positive inputs), fast; usually the better choice for hidden layers
    • Tanh = classification between two classes, where zero-centered (mean 0) outputs matter
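
A minimal sketch of the "don't know" heuristic from the list above, with illustrative input/output sizes:

```python
# One hidden layer whose width is the mean of the input and output sizes.
n_inputs, n_outputs = 20, 1                # illustrative sizes
n_hidden = (n_inputs + n_outputs) // 2     # mean(inputs, outputs), rounded down

print(n_hidden)  # 10
```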