TSL.P Labs 🧪: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖
Release Date: 02/20/2026
The Tech Savvy Lawyer
Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.
In our conversation, we cover the following:
- 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️
- 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️
- 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice
- 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law
- 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates
- 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking
- 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice
- 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠
- 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box
- 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI
- 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations
- 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired
- 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪
- 00:13:00 – What reasonable human oversight looks like in practice: stress-testing citations, quotes, and legal conclusions
- 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI
- 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks
- 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards
- 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use
- 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋
- 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters
- 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝
- 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢
- 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?
- 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?
- 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI
RESOURCES
Mentioned in the episode
- American Bar Association Model Rules of Professional Conduct
- Interview by Terry Gerton of the Federal News Network with Cheryl Mason, Inspector General of the Department of Veterans Affairs: “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line”
Software & Cloud Services mentioned in the conversation
- ChatGPT — https://chat.openai.com/
- Lexis — https://www.lexisnexis.com
- Westlaw — https://legal.thomsonreuters.com