
When AI Decides Who Lives and Dies

Brobots: Health, Wellness, and Mindset in the Age of AI

Release Date: 09/10/2025

Are AI death panels already deciding your medical fate while teenagers get suicide coaching from ChatGPT?

Medicare just greenlit AI to approve treatments, promising "efficiency," while a 16-year-old allegedly got help writing his suicide note from OpenAI's bot. We're watching our warnings play out in real time as the feds outsource life-and-death decisions to algorithms designed for engagement, not empathy.

In this episode you'll discover how AI medical claims work, why your data isn't safe, and the specific red flags to watch for when using AI for mental health. Plus, we'll break down the legal implications and what this means for anyone sharing personal information with chatbots.

Stop raw-dogging your AI interactions and start protecting your data before it's used against you.

Topics Discussed:

  • Medicare's AI claims approval pilot program targeting "inefficient" surgeries
  • How AI engagement algorithms can override safety guardrails over time
  • The ChatGPT suicide case and its legal implications for AI companies
  • Why AI lacks the neurochemical basis for genuine empathy or care
  • Data synthesis risks when AI combines medical records with other sources
  • The difference between AI augmentation vs. complete human replacement
  • Why treating AI like a friend instead of a tool is dangerous
  • Guardrail failures and the "engagement at all costs" programming
  • How AI medical decisions could incorporate voting history and social data
  • The need for human oversight in life-and-death AI applications

----

Resources:

----

MORE FROM BROBOTS:

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to BROBOTS on YouTube

Join our community in the BROBOTS Facebook group

----

LINKS TO OUR PARTNERS: