
MLA 014 Machine Learning Hosting and Serverless Deployment

Machine Learning Guide

Release Date: 01/18/2021

MLA 030 AI Job Displacement & ML Careers

Machine Learning Guide

ML engineering demand remains high with a 3.2 to 1 job-to-candidate ratio, but entry-level hiring is collapsing as AI automates routine programming and data tasks. Career longevity requires shifting from model training to production operations, deep domain expertise, and mastering AI-augmented workflows before standard implementation becomes a commodity.

MLA 029 OpenClaw

Machine Learning Guide

OpenClaw is a self-hosted AI agent daemon that executes autonomous tasks through messaging apps like WhatsApp and Telegram using persistent memory. It integrates with Claude Code to enable software development and administrative automation directly from mobile devices.

MLA 028 AI Agents

Machine Learning Guide

AI agents differ from chatbots by pursuing autonomous goals through the ReACT loop rather than responding to turn-based prompts. While coding agents are currently the most reliable due to verifiable feedback loops, the market is expanding into desktop and browser automation via tools like Claude Cowork and OpenClaw.

MLA 027 AI Video End-to-End Workflow

Machine Learning Guide

How to maintain character consistency, style consistency, and more in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios gain full control with a layered ComfyUI pipeline that outputs multi-layer EXR files for standard VFX compositing.

MLA 026 AI Video Generation: Veo 3 vs Sora, Kling, Runway, Stable Video Diffusion

Machine Learning Guide

Google Veo leads the generative video market with superior 4K photorealism and integrated audio, an advantage derived from its YouTube training data. OpenAI Sora is the top tool for narrative storytelling, while Kuaishou Kling excels at animating static images with realistic, high-speed motion.

MLA 025 AI Image Generation: Midjourney vs Stable Diffusion, GPT-4o, Imagen & Firefly

Machine Learning Guide

The AI image market has split: Midjourney creates the highest quality artistic images but fails at text and precision. For business use, OpenAI's GPT-4o offers the best conversational control, while Adobe Firefly provides the strongest commercial safety from its exclusively licensed training data.

MLG 036 Autoencoders

Machine Learning Guide

Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.

MLG 035 Large Language Models 2

Machine Learning Guide

At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as...

MLG 034 Large Language Models 1

Machine Learning Guide

Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting which significantly improve complex...

MLA 024 Agentic Software Engineering

Machine Learning Guide

Agentic engineering shifts the developer role from manual coding to orchestrating AI agents that automate the full software lifecycle from ticket to deployment. Using Claude Code with MCP servers and git worktrees allows a single person to manage the output and quality of an entire engineering organization.

More Episodes

Builders can scale ML from simple API calls to full MLOps pipelines using SST on AWS, with Aurora pgvector for semantic search and Spot instances for up to 90 percent cost savings. External platforms like Modal or GCP Cloud Run provide superior serverless GPU options for real-time inference once AWS-native limits are reached.

Links

Core Infrastructure

SST uses Pulumi to bridge high-level web components (API, Database) with low-level AWS resources (SageMaker, GPU clusters). The framework enables infrastructure-as-code in TypeScript, allowing developers to manage entire ML lifecycles within a single configuration.
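
For orientation, here is a minimal sketch of such a configuration, assuming SST v3 (Ion); the handler path, vector dimension, and sizing options are illustrative rather than prescriptive:

```typescript
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    // Name the app and target AWS as the home provider
    return { name: "ml-pipeline", home: "aws" };
  },
  async run() {
    // Vector component: pgvector-backed embedding store
    // (dimension must match your embedding model; 1536 is an assumption)
    const vector = new sst.aws.Vector("Embeddings", { dimension: 1536 });

    // Lambda function with a public URL; `link` grants it access to the vector store
    new sst.aws.Function("InferenceApi", {
      handler: "src/inference.handler", // hypothetical handler path
      url: true,
      link: [vector],
      memory: "2048 MB",
      timeout: "30 seconds",
    });
  },
});
```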

Level 1-2: Foundational Models and Edge Inference

  • AWS Bedrock: Managed gateway for models including Claude 4.5, Llama 4, and Nova. It provides IAM security, VPC isolation, and integrated billing (see the invocation sketch after this list).
  • Knowledge Bases: Automates RAG pipelines by chunking S3 documents and storing embeddings in Aurora pgvector.
  • Cloudflare Workers AI: Runs open-source models (Llama, Mistral, Flux) on edge GPUs. Pricing uses "Neurons" units, measuring compute per request rather than tokens.
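
As a sketch of the Bedrock pattern, a call through the AWS SDK for JavaScript's Converse API; the model ID is an example (list valid IDs with `aws bedrock list-foundation-models`), and region and IAM setup are assumed to be in place:

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

async function ask(prompt: string): Promise<string> {
  const response = await client.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // example model ID
      messages: [{ role: "user", content: [{ text: prompt }] }],
      inferenceConfig: { maxTokens: 512, temperature: 0.2 },
    })
  );
  // The Converse API returns content blocks; take the first text block
  return response.output?.message?.content?.[0]?.text ?? "";
}
```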

Level 3-4: Cost-Effective CPU and Batch Processing

  • Lambda Inference: Run ONNX-formatted models on AWS Lambda, using SnapStart to minimize costs and mitigate 16-second cold starts (see the handler sketch after this list).
  • Vector Search: The SST Vector component manages semantic search within existing Aurora PostgreSQL databases using pgvector, matching dedicated database performance.
  • SST Task: Runs Fargate containers for CPU-bound ETL and data preprocessing.
  • AWS Batch: Orchestrates GPU training on EC2. Using Spot instances reduces costs by 60 to 90 percent, with checkpointing protecting against instance reclamation.
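
A sketch of the Lambda handler, assuming `onnxruntime-node` with a model file bundled into the deployment package; the model path, input/output tensor names, and shape are assumptions (inspect `session.inputNames` and `session.outputNames` for the real ones). Note that SnapStart targets Java, Python, and .NET runtimes; for a Node handler like this one, provisioned concurrency plays the equivalent role:

```typescript
import * as ort from "onnxruntime-node";
import type { APIGatewayProxyHandler } from "aws-lambda";

// Create the session once per container so warm invocations skip model loading;
// the initial load is the cold-start cost referred to above
const sessionPromise = ort.InferenceSession.create("/var/task/model.onnx");

export const handler: APIGatewayProxyHandler = async (event) => {
  const session = await sessionPromise;
  const features: number[] = JSON.parse(event.body ?? "[]");

  // Input name "input" and shape [1, n] are assumptions for a flat feature vector
  const tensor = new ort.Tensor(
    "float32",
    Float32Array.from(features),
    [1, features.length]
  );
  const results = await session.run({ input: tensor });

  // Output name "output" is likewise an assumption
  const prediction = Array.from(results.output.data as Float32Array);
  return { statusCode: 200, body: JSON.stringify({ prediction }) };
};
```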

Level 5: Real-Time GPU Inference

  • AWS Options: SageMaker Real-Time endpoints support scale-to-zero since late 2024. SageMaker Async handles large payloads via S3 queues (an invocation sketch follows this list).
  • External Alternatives:
    • GCP Cloud Run: Offers serverless L4 and Blackwell GPUs with per-second billing.
    • Modal: Python-native serverless GPU platform with 2 to 4 second cold starts.
    • Groq: Uses LPU hardware for LLM inference, reaching 1300 tokens per second.
    • RunPod: Provides the lowest raw GPU pricing and FlashBoot for fast starts.
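
To illustrate the AWS path at this level, a sketch of invoking a deployed SageMaker real-time endpoint from TypeScript; the endpoint name and JSON payload shape depend entirely on your model server and are assumptions here:

```typescript
import {
  SageMakerRuntimeClient,
  InvokeEndpointCommand,
} from "@aws-sdk/client-sagemaker-runtime";

const client = new SageMakerRuntimeClient({ region: "us-east-1" });

async function predict(features: number[]): Promise<unknown> {
  const response = await client.send(
    new InvokeEndpointCommand({
      EndpointName: "realtime-inference", // hypothetical endpoint name
      ContentType: "application/json",
      Body: JSON.stringify({ inputs: features }), // payload shape is model-specific
    })
  );
  // The body comes back as bytes; decode and parse the model server's JSON
  return JSON.parse(new TextDecoder().decode(response.Body));
}
```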

Level 6-7: MLOps and Mature Production

  • SageMaker Platform: Includes Studio for IDE work, JumpStart for one-click model deployment, and Model Registry for version tracking.
  • Monitoring: Use Arize Phoenix or Evidently AI to detect data and concept drift. Log all predictions to S3 for weekly distribution analysis (a logging sketch follows this list).
  • Hardware Optimization: AWS Inferentia and Trainium chips offer 70 percent lower inference costs compared to GPUs. Transition becomes viable when monthly GPU spend exceeds 10,000 dollars.
  • Self-Hosting: API calls are cheaper until volume reaches 30 million tokens daily. Beyond that, self-host with vLLM, whose PagedAttention enables high-throughput serving.
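
As a sketch of the prediction-logging pattern mentioned under Monitoring, assuming a dedicated S3 bucket and a date-partitioned key layout (bucket name and record schema are illustrative), so a weekly job can scan recent prefixes and compare feature and output distributions:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { randomUUID } from "node:crypto";

const s3 = new S3Client({});

// One JSON object per prediction, partitioned by day for cheap prefix scans
async function logPrediction(features: number[], prediction: number): Promise<void> {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2026-02-03"
  await s3.send(
    new PutObjectCommand({
      Bucket: "ml-prediction-logs", // hypothetical bucket name
      Key: `predictions/dt=${day}/${randomUUID()}.json`,
      Body: JSON.stringify({ ts: Date.now(), features, prediction }),
      ContentType: "application/json",
    })
  );
}
```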