Application Security Weekly (Audio)
About all things AppSec, DevOps, and DevSecOps. Hosted by Mike Shema and John Kinsella, the podcast focuses on helping its audience find and fix software flaws effectively.
info_outline
How OWASP's GenAI Security Project keeps up with the pace of AI/Agentic changes - Scott Clinton - ASW #348
09/16/2025
How OWASP's GenAI Security Project keeps up with the pace of AI/Agentic changes - Scott Clinton - ASW #348
This week, we chat with Scott Clinton, board member and co-chain of the OWASP GenAI Security Project. This project has become a massive organization within OWASP with hundreds of volunteers and thousands of contributors. This team has been cranking out new tools, reports and guidance for practitioners month after month for over a year now. We start off discussing how Scott and other leaders have managed to keep up with the crazy rate of change in the AI world. We pivot to discussing some of the specific projects the team is working on, and finally discuss some of the biggest AI security challenges before wrapping up the conversation. If you're neck-deep in AI like we are, I highly recommend checking out this conversation, and consider joining this OWASP project, sponsoring them, or just checking out what they have to offer (which is all free, of course). Segment Resources: Get started with the Register for the on October 9th, 11am - 4pm EST This segment is sponsored by The OWASP GenAI Security Project. Visit to learn more about them! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/38231090
info_outline
Limitations and Liabilities of LLM Coding - Seemant Sehgal, Ted Shorter - ASW #347
09/09/2025
Limitations and Liabilities of LLM Coding - Seemant Sehgal, Ted Shorter - ASW #347
Up first, the ASW news of the week. At Black Hat 2025, Doug White interviews Ted Shorter, CTO of Keyfactor, about the quantum revolution already knocking on cybersecurity’s door. They discuss the terrifying reality of quantum computing’s power to break RSA and ECC encryption—the very foundations of modern digital life. With 2030 set as the deadline for transitioning away from legacy crypto, organizations face a race against time. Ted breaks down what "full crypto visibility" really means, why it’s crucial to map your cryptographic assets now, and how legacy tech—from robotic sawmills to outdated hospital gear—poses serious risks. The interview explores NIST's new post-quantum algorithms, global readiness efforts, and how Keyfactor’s acquisitions of InfoSec Global and Cipher Insights help companies start the quantum transition today—not tomorrow. Don’t wait for the breach. Watch this and start your quantum strategy now. If digital trust is the goal, cryptography is the foundation. Segment Resources: For more information about Keyfactor’s latest Digital Trust Digest, please visit: Live from BlackHat 2025 in Las Vegas, cybersecurity host Jackie McGuire sits down with Seemant Sehgal, founder of BreachLock, to unpack one of the most pressing challenges facing SOC teams today: alert fatigue—and its even more dangerous cousin, vulnerability fatigue. In this must-watch conversation, Seemant reveals how his groundbreaking approach, Adversarial Exposure Validation (AEV), flips the script on traditional defense-heavy security strategies. Instead of drowning in 10,000+ “critical” alerts, AEV pinpoints what actually matters—using Generative AI to map realistic attack paths, visualize kill chains, and identify the exact vulnerabilities that put an organization’s crown jewels at risk. From his days leading cybersecurity at a major global bank to pioneering near real-time CVE validation, Seemant shares insights on scaling offensive security, improving executive buy-in, and balancing automation with human expertise. Whether you’re a CISO, SOC analyst, red teamer, or security enthusiast, this interview delivers actionable strategies to fight fatigue, prioritize risks, and protect high-value assets. Key topics covered: - The truth about alert fatigue & why it’s crippling SOC efficiency - How AI-driven offensive security changes the game - Visualizing kill chains to drive faster remediation - Why fixing “what matters” beats fixing “everything” - The future of AI trust, transparency, and control in cybersecurity Watch now to discover how BreachLock is redefining offensive security for the AI era. Segment Resources: This segment is sponsored by Breachlock. Visit to learn more about them! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/38135880
info_outline
AI, APIs, and the Next Cyber Battleground: Black Hat 2025 - Chris Boehm, Idan Plotnik, Josh Lemos, Michael Callahan - ASW #346
09/02/2025
AI, APIs, and the Next Cyber Battleground: Black Hat 2025 - Chris Boehm, Idan Plotnik, Josh Lemos, Michael Callahan - ASW #346
In this must-see BlackHat 2025 interview, Doug White sits down with Michael Callahan, CMO at Salt Security, for a high-stakes conversation about Agentic AI, Model Context Protocol (MCP) servers, and the massive API security risks reshaping the cyber landscape. Broadcast live from the CyberRisk TV studio at Mandalay Bay, Las Vegas, the discussion pulls back the curtain on how autonomous AI agents and centralized MCP hubs could supercharge productivity—while also opening the door to unprecedented supply chain vulnerabilities. From “shadow MCP servers” to the concept of an “API fabric,” Michael explains why these threats are evolving faster than traditional security measures can keep up, and why CISOs need to act before it’s too late. Viewers will get rare insight into the parallels between MCP exploitation and DNS poisoning, the hidden dangers of API sprawl, and why this new era of AI-driven communication could become a hacker’s dream. Blog: Survey Report: This segment is sponsored by Salt Security. Visit for a free API Attack Surface Assessment! At Black Hat 2025, live from the Cyber Risk TV studio in Las Vegas, Jackie McGuire sits down with Apiiro Co-Founder & CEO Idan Plotnik to unpack the real-world impact of AI code assistants on application security, developer velocity, and cloud costs. With experience as a former Director of Engineering at Microsoft, Idan dives into what drove him to launch Apiiro — and why 75% of engineers will be using AI assistants by 2028. From 10x more vulnerabilities to skyrocketing API bloat and security blind spots, Idan breaks down research from Fortune 500 companies on how AI is accelerating both innovation and risk. What you'll learn in this interview: - Why AI coding tools are increasing code complexity and risk - The massive cost of unnecessary APIs in cloud environments - How to automate secure code without slowing down delivery - Why most CISOs fail to connect security to revenue (and how to fix it) - How Apiiro’s Autofix AI Agent helps organizations auto-fix and auto-govern code risks at scale This isn’t just another AI hype talk. It’s a deep dive into the future of secure software delivery — with practical steps for CISOs, CTOs, and security leaders to become true business enablers. Watch till the end to hear how Apiiro is helping Fortune 500s bridge the gap between code, risk, and revenue. Apiiro AutoFix Agent. Built for Enterprise Security: Deep Dive Demo: This segment is sponsored by Apiiro. Be one of the first to see their new AppSec Agent in action at . Is Your AI Usage a Ticking Time Bomb? In this exclusive Black Hat 2025 interview, Matt Alderman sits down with GitLab CISO Josh Lemos to unpack one of the most pressing questions in tech today: Are executives blindly racing into AI adoption without understanding the risks? Filmed live at the CyberRisk TV Studio in Las Vegas, this eye-opening conversation dives deep into: - How AI is being rapidly adopted across enterprises — with or without security buy-in - Why AI governance is no longer optional — and how to actually implement it - The truth about agentic AI, automation, and building trust in non-human identities - The role of frameworks like ISO 42001 in building AI transparency and assurance - Real-world examples of how teams are using LLMs in development, documentation & compliance Whether you're a CISO, developer, or business exec — this discussion will reshape how you think about AI governance, security, and adoption strategy in your org. Don’t wait until it’s too late to understand the risks. The Economics of Software Innovation: $750B+ Opportunity at a Crossroads Report: For more information about GitLab and their report, please visit: Live from Black Hat 2025 in Las Vegas, Jackie McGuire sits down with Chris Boehm, Field CTO at Zero Networks, for a high-impact conversation on microsegmentation, shadow IT, and why AI still struggles to stop lateral movement. With 15+ years of cybersecurity experience—from Microsoft to SentinelOne—Chris breaks down complex concepts like you're a precocious 8th grader (his words!) and shares real talk on why AI alone won’t save your infrastructure. Learn how Zero Networks is finally making microsegmentation frictionless, how summarization is the current AI win, and what red flags to look for when evaluating AI-infused security tools. If you're a CISO, dev, or just trying to stay ahead of cloud threats—this one's for you. This segment is sponsored by Zero Networks. Visit to learn more about them! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/38023690
info_outline
Translating Security Regulations into Secure Projects - Roman Zhukov, Emily Fox - ASW #345
08/26/2025
Translating Security Regulations into Secure Projects - Roman Zhukov, Emily Fox - ASW #345
The EU Cyber Resilience Act joins the long list of regulations intended to improve the security of software delivered to users. Emily Fox and Roman Zhukov share their experience education regulators on open source software and educating open source projects on security. They talk about creating a baseline for security that addresses technical items, maintaining projects, and supporting project owners so they can focus on their projects. Segment resources: github.com/ossf/wg-globalcyberpolicy github.com/orcwg baseline.openssf.org Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37953835
info_outline
Managing the Minimization of a Container Attack Surface - Neil Carpenter - ASW #344
08/19/2025
Managing the Minimization of a Container Attack Surface - Neil Carpenter - ASW #344
A smaller attack surface should lead to a smaller list of CVEs to track, which in turn should lead to a smaller set of vulns that you should care about. But in practice, keeping something like a container image small has a lot of challenges in terms of what should be considered minimal. Neil Carpenter shares advice and anecdotes on what it takes to refine a container image and to change an org's expectations that every CVE needs to be fixed. Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37870765
info_outline
The Future of Supply Chain Security - Janet Worthington - ASW #343
08/12/2025
The Future of Supply Chain Security - Janet Worthington - ASW #343
Open source software is a massive contribution that provides everything from foundational frameworks to tiny single-purpose libraries. We walk through the dimensions of trust and provenance in the software supply chain with Janet Worthington. And we discuss how even with new code generated by LLMs and new terms like slopsquatting, a lot of the most effective solutions are old techniques. Resources Show Notes:
/episode/index/show/aswaudio/id/37763035
info_outline
Uniting software development and application security - Will Vandevanter, Jonathan Schneider - ASW #342
08/05/2025
Uniting software development and application security - Will Vandevanter, Jonathan Schneider - ASW #342
Maintaining code is a lot more than keeping dependencies up to date. It involved everything from keeping old code running to changing frameworks to even changing implementation languages. Jonathan Schneider talks about the engineering considerations of refactoring and rewriting code, why code maintenance is important to appsec, and how to build confidence that adding automation to a migration results in code that has the same workflows as before. Resources Then, instead of our usual news segment, we do a deep dive on some recent vulns NVIDIA's Triton Inference Server disclosed by Trail of Bits' Will Vandevanter. Will talks about the thought process and tools that go into identify potential vulns, the analysis in determining whether they're exploitable, and the disclosure process with vendors. He makes the important point that even if something doesn't turn out to be a vuln, there's still benefit to the learning process and gaining experience in seeing the different ways that devs design software. Of course, it's also more fun when you find an exploitable vuln -- which Will did here! Resources Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37674015
info_outline
How Product-Led Security Leads to Paved Roads - Julia Knecht - ASW #341
07/29/2025
How Product-Led Security Leads to Paved Roads - Julia Knecht - ASW #341
A successful strategy in appsec is to build platforms with defaults and designs that ease the burden of security choices for developers. But there's an important difference between expecting (or requiring!) developers to use a platform and building a platform that developers embrace. Julia Knecht shares her experience in building platforms with an attention to developer needs, developer experience, and security requirements. She brings attention to the product management skills and feedback loops that make paved roads successful -- as well as the areas where developers may still need or choose their own alternatives. After all, the impact of a paved road isn't in its creation, it's in its adoption. Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37587235
info_outline
Rise of Compromised LLMs - Sohrob Kazerounian - ASW #340
07/22/2025
Rise of Compromised LLMs - Sohrob Kazerounian - ASW #340
AI is more than LLMs. Machine learning algorithms have been part of infosec solutions for a long time. For appsec practitioners, a key concern is always going to be how to evaluate the security of software or a system. In some cases, it doesn't matter if a human or an LLM generated code -- the code needs to be reviewed for common flaws and design problems. But the creation of MCP servers and LLM-based agents is also adding a concern about what an unattended or autonomous piece of software is doing. Sohrob Kazerounian gives us context on how LLMs are designed, what to expect from them, and where they pose risk and reward to modern software engineering. Resources Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37497310
info_outline
Getting Started with Security Basics on the Way to Finding a Specialization - ASW #339
07/15/2025
Getting Started with Security Basics on the Way to Finding a Specialization - ASW #339
What are some appsec basics? There's no monolithic appsec role. Broadly speaking, appsec tends to branch into engineering or compliance paths, each with different areas of focus despite having shared vocabularies and the (hopefully!) shared goal of protecting software, data, and users. The better question is, "What do you want to secure?" We discuss the Cybersecurity Skills Framework put together by the OpenSSF and the Linux Foundation and how you might prepare for one of its job families. The important basics aren't about memorizing lists or technical details, but demonstrating experience in working with technologies, understanding how they can fail, and being able to express concerns, recommendations, and curiosity about their security properties. Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37405880
info_outline
Checking in on the State of Appsec in 2025 - Janet Worthington, Sandy Carielli - ASW #338
07/08/2025
Checking in on the State of Appsec in 2025 - Janet Worthington, Sandy Carielli - ASW #338
Appsec still deals with ancient vulns like SQL injection and XSS. And now LLMs are generating code along side humans. Sandy Carielli and Janet Worthington join us once again to discuss what all this new code means for appsec practices. On a positive note, the prevalence of those ancient vulns seems to be diminishing, but the rising use of LLMs is expanding a new (but not very different) attack surface. We look at where orgs are investing in appsec, who appsec teams are collaborating with, and whether we need security awareness training for LLMs. Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37325910
info_outline
Simple Patterns for Complex Secure Code Reviews - Louis Nyffenegger - ASW #337
07/01/2025
Simple Patterns for Complex Secure Code Reviews - Louis Nyffenegger - ASW #337
Manual secure code reviews can be tedious and time intensive if you're just going through checklists. There's plenty of room for linters and compilers and all the grep-like tools to find flaws. Louis Nyffenegger describes the steps of a successful code review process. It's a process that starts with understanding code, which can even benefit from an LLM assistant, and then applies that understanding to a search for developer patterns that lead to common mistakes like mishandling data, not enforcing a control flow, or not defending against unexpected application states. He explains how finding those kinds of more impactful bugs are rewarding for the reviewer and valuable to the code owner. It involves reading a lot of code, but Louis offers tips on how to keep notes, keep an app's context in mind, and keep code secure. Segment Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37226090
info_outline
How Fuzzing Barcodes Raises the Bar for Secure Code - Artur Cygan - ASW #336
06/24/2025
How Fuzzing Barcodes Raises the Bar for Secure Code - Artur Cygan - ASW #336
Fuzzing has been one of the most successful ways to improve software quality. And it demonstrates how improving software quality improves security. Artur Cygan shares his experience in building and applying fuzzers to barcode scanners, smart contracts, and just about any code you can imagine. We go through the useful relationship between unit tests and fuzzing coverage, nudging fuzzers into deeper code paths, and how LLMs can help guide a fuzzer into using better inputs for its testing. Resources Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37126595
info_outline
Threat Modeling With Good Questions and Without Checklists - Farshad Abasi - ASW #335
06/17/2025
Threat Modeling With Good Questions and Without Checklists - Farshad Abasi - ASW #335
What makes a threat modeling process effective? Do you need a long list of threat actors? Do you need a long list of terms? What about a short list like STRIDE? Has an effective process ever come out of a list? Farshad Abasi joins our discussion as we explain why the answer to most of those questions is No and describe the kinds of approaches that are more conducive to useful threat models. Resources: In the news, learning from outage postmortems, an EchoLeak image speaks a 1,000 words from Microsoft 365 Copilot, TokenBreak attack targets tokenizing techniques, Google's layered strategy against prompt injection looks like a lot like defending against XSS, learning about code security from CodeAuditor CTF, and more! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/37031895
info_outline
Bringing CISA's Secure by Design Principles to OT Systems - Matthew Rogers - ASW #334
06/10/2025
Bringing CISA's Secure by Design Principles to OT Systems - Matthew Rogers - ASW #334
CISA has been championing Secure by Design principles. Many of the principles are universal, like adopting MFA and having opinionated defaults that reduce the need for hardening guides. Matthew Rogers talks about how the approach to Secure by Design has to be tailored for Operational Technology (OT) systems. These systems have strict requirements on safety and many of them rely on protocols that are four (or more!) decades old. He explains how the considerations in this space go far beyond just memory safety concerns. Segment Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36929235
info_outline
AIs, MCPs, and the Acutal Work that LLMs Are Generating - ASW #333
06/03/2025
AIs, MCPs, and the Acutal Work that LLMs Are Generating - ASW #333
The recent popularity of MCPs is surpassed only by the recent examples deficiencies of their secure design. The most obvious challenge is how MCPs, and many more general LLM use cases, have erased two decades of security principles behind separating code and data. We take a look at how developers are using LLMs to generate code and continue our search for where LLMs are providing value to appsec. We also consider what indicators we'd look for as signs of success. For example, are LLMs driving useful commits to overburdened open source developers? Are LLMs climbing the ranks of bug bounty platforms? In the news, more examples of prompt injection techniques against LLM features in GitLab and GitHub, the value (and tradeoffs) in rewriting code, secure design lessons from a history of iOS exploitation, checking for all the ways to root, and NIST's approach to (maybe) measuring likely exploited vulns. Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36824010
info_outline
AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Shahar Man, Brian Fox, Mark Lambert - ASW #332
05/27/2025
AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Shahar Man, Brian Fox, Mark Lambert - ASW #332
ArmorCode unveils Anya—the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit to request a demo! As 'vibe coding", the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: This segment is sponsored by Backslash. Visit to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. Segment Resources: This segment is sponsored by Sonatype. Visit to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit to schedule an early deployment and risk assessment. Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36694240
info_outline
Appsec News & Interviews from RSAC on Identity and AI - Rami Saas, Charlotte Wylie - ASW #331
05/20/2025
Appsec News & Interviews from RSAC on Identity and AI - Rami Saas, Charlotte Wylie - ASW #331
In the news, Coinbase deals with bribes and insider threat, the NCSC notes the cross-cutting problem of incentivizing secure design, we cover some research that notes the multitude of definitions for secure design, and discuss the new Cybersecurity Skills Framework from the OpenSSF and Linux Foundation. Then we share two more sponsored interviews from this year's RSAC Conference. With more types of identities, machines, and agents trying to access increasingly critical data and resources, across larger numbers of devices, organizations will be faced with managing this added complexity and identity sprawl. Now more than ever, organizations need to make sure security is not an afterthought, implementing comprehensive solutions for securing, managing, and governing both non-human and human identities across ecosystems at scale. This segment is sponsored by Okta. Visit to learn more about them! At Mend.io, we believe that securing AI-powered applications requires more than just scanning for vulnerabilities in AI-generated code—it demands a comprehensive, enterprise-level strategy. While many AppSec vendors offer limited, point-in-time solutions focused solely on AI code, Mend.io takes a broader and more integrated approach. Our platform is designed to secure not just the code, but the full spectrum of AI components embedded within modern applications. By leveraging existing risk management strategies, processes, and tools, we uncover the unique risks that AI introduces—without forcing organizations to reinvent their workflows. Mend.io’s solution ensures that AI security is embedded into the software development lifecycle, enabling teams to assess and mitigate risks proactively and at scale. Unlike isolated AI security startups, Mend.io delivers a single, unified platform that secures an organization’s entire codebase—including its AI-driven elements. This approach maximizes efficiency, minimizes disruption, and empowers enterprises to embrace AI innovation with confidence and control. This segment is sponsored by Mend.io. Visit to book a live demo! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36632560
info_outline
Secure Code Reviews, LLM Coding Assistants, and Trusting Code - Rey Bango, Karim Toubba, Gal Elbaz - ASW #330
05/13/2025
Secure Code Reviews, LLM Coding Assistants, and Trusting Code - Rey Bango, Karim Toubba, Gal Elbaz - ASW #330
Developers are relying on LLMs as coding assistants, so where are the LLM assistants for appsec? The principles behind secure code reviews don't really change based on who write the code, whether human or AI. But more code means more reasons for appsec to scale its practices and figure out how to establish trust in code, packages, and designs. Rey Bango shares his experience with secure code reviews and where developer education fits in among the adoption of LLMs. As businesses rapidly embrace SaaS and AI-powered applications at an unprecedented rate, many small-to-medium sized businesses (SMBs) struggle to keep up due to complex tech stacks and limited visibility into the skyrocketing app sprawl. These modern challenges demand a smarter, more streamlined approach to identity and access management. Learn how LastPass is reimagining access control through “Secure Access Experiences” - starting with the introduction of SaaS Monitoring capabilities designed to bring clarity to even the most chaotic environments. Secure Access Experiences - This segment is sponsored by LastPass. Visit to learn more about them! Cloud Application Detection and Response (CADR) has burst onto the scene as one of the hottest categories in security, with numerous vendors touting a variety of capabilities and making promises on how bringing detection and response to the application-level will be a game changer. In this segment, Gal Elbaz, co-founder and CTO of Oligo Security, will dive into what CADR is, who it helps, and what the future will look like for this game changing technology. Segment Resources - To see Oligo in action, please visit Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36540355
info_outline
AI Era, New Risks: How Data-Centric Security Reduces Emerging AppSec Threats - Vishal Gupta, Idan Plotnik - ASW #329
05/06/2025
AI Era, New Risks: How Data-Centric Security Reduces Emerging AppSec Threats - Vishal Gupta, Idan Plotnik - ASW #329
We catch up on news after a week of BSidesSF and RSAC Conference. Unsurprisingly, AI in all its flavors, from agentic to gen, was inescapable. But perhaps more surprising (and more unfortunate) is how much the adoption of LLMs has increased the attack surface within orgs. The news is heavy on security issues from MCPs and a novel alignment bypass against LLMs. Not everything is genAI as we cover some secure design topics from the Airborne attack against Apple's AirPlay to more calls for companies to show how they're embracing secure design principles and practices. Apiiro CEO & Co-Founder, Idan Plotnik discusses the AI problem in AppSec. This segment is sponsored by Apiiro. Visit to learn more about them! Gen AI is being adopted faster than company’s policy and data security can keep up, and as LLM’s become more integrated into company systems and uses leverage more AI enabled applications, they essentially become unintentional data exfiltration points. These tools do not differentiate between what data is sensitive and proprietary and what is not. This interview will examine how the rapid adoption of Gen AI is putting sensitive company data at risk, and the data security considerations and policies organizations should implement before, if, and when their employees may seek to adopt a Gen AI tools to leverage some of their undeniable workplace benefits. Customer case studies: Seclore Blog: This segment is sponsored by Seclore. Visit to learn more about them! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36446475
info_outline
Secure Designs, UX Dragons, Vuln Dungeons - Jack Cable - ASW #328
04/29/2025
Secure Designs, UX Dragons, Vuln Dungeons - Jack Cable - ASW #328
In this live recording from BSidesSF we explore the factors that influence a secure design, talk about how to avoid the bite of UX dragons, and why designs should put classes of vulns into dungeons. But we can't threat model a secure design forever and we can't oversimplify guidance for a design to be "more secure". Kalyani Pawar and Jack Cable join the discussion to provide advice on evaluating secure designs through examples of strong and weak designs we've seen over the years. We highlight the importance of designing systems to serve users and consider what it means to have a secure design with a poor UX. As we talk about the strategy and tactics of secure design, we share why framing this as a challenge in preventing dangerous errors can help devs make practical engineering decisions that improve appsec for everyone. Resources Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36343965
info_outline
Managing Secrets - Vlad Matsiiako - ASW #327
04/22/2025
Managing Secrets - Vlad Matsiiako - ASW #327
Secrets end up everywhere, from dev systems to CI/CD pipelines to services, certificates, and cloud environments. Vlad Matsiiako shares some of the tactics that make managing secrets more secure as we discuss the distinctions between secure architectures, good policies, and developer friendly tools. We've thankfully moved on from forced 90-day user password rotations, but that doesn't mean there isn't a place for rotating secrets. It means that the tooling and processes for ephemeral secrets should be based on secure, efficient mechanisms rather than putting all the burden on users. And it also means that managing secrets shouldn't become an unmanaged risk with new attack surfaces or new points of failure. Segment Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36257290
info_outline
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326
04/15/2025
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326
The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering value in a way that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems. Resources In the news, crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36134835
info_outline
In Search of Secure Design - ASW #325
04/08/2025
In Search of Secure Design - ASW #325
We have a top ten list entry for Insecure Design, pledges to CISA's Secure by Design principles, and tons of CVEs that fall into familiar categories of flaws. But what does it mean to have a secure design and how do we get there? There are plenty of secure practices that orgs should implement are supply chains, authentication, and the SDLC. Those practices address important areas of risk, but only indirectly influence a secure design. We look at tactics from coding styles to design councils as we search for guidance that makes software more secure. Segment resources Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/36045495
info_outline
Avoiding Appsec's Worst Practices - ASW #324
04/01/2025
Avoiding Appsec's Worst Practices - ASW #324
We take advantage of April Fools to look at some of appsec's myths, mistakes, and behaviors that lead to bad practices. It's easy to get trapped in a status quo of chasing CVEs or discussing which direction to shift security. But scrutinizing decimal points in CVSS scores or rearranging tools misses the opportunity for more strategic thinking. We satirize some worst practices in order to have a more serious discussion about a future where more software is based on secure designs. Segment resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35958455
info_outline
Finding a Use for GenAI in AppSec - Keith Hoodlet - ASW #323
03/25/2025
Finding a Use for GenAI in AppSec - Keith Hoodlet - ASW #323
LLMs are helping devs write code, but is it secure code? How are LLMs helping appsec teams? Keith Hoodlet returns to talk about where he's seen value from genAI, where it fits in with tools like source code analysis and fuzzers, and where its limitations mean we'll be relying on humans for a while. Those limitations don't mean appsec should dismiss LLMs as a tool. It means appsec should understand how things like context windows might limit a tool's security analysis to a few files, leaving a security architecture review to humans. Segment resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35847775
info_outline
Redlining the Smart Contract Top 10 - Shashank . - ASW #322
03/18/2025
Redlining the Smart Contract Top 10 - Shashank . - ASW #322
The crypto world is rife with smart contracts that have been outsmarted by attackers, with consequences in the millions of dollars (and more!). Shashank shares his research into scanning contracts for flaws, how the classes of contract flaws have changed in the last few years, and how optimistic we can be about the future of this space. Segment Resources: Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35743370
info_outline
CISA's Secure by Design Principles, Pledge, and Progress - Jack Cable - ASW #321
03/11/2025
CISA's Secure by Design Principles, Pledge, and Progress - Jack Cable - ASW #321
Just three months into 2025 and we already have several hundred CVEs for XSS and SQL injection. Appsec has known about these vulns since the late 90s. Common defenses have been known since the early 2000s. Jack Cable talks about CISA's Secure by Design principles and how they're trying to refocus businesses on addressing vuln classes and prioritizing software quality -- with security one of those important dimensions of quality. Segment Resources: Skype hangs up for good, over a million cheap Android devices may be backdoored, parallels between jailbreak research and XSS, impersonating AirTags, network reconnaissance via a memory disclosure vuln in the GFW, and more! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35622100
info_outline
Keeping Curl Successful and Secure Over the Decades - Daniel Stenberg - ASW #320
03/04/2025
Keeping Curl Successful and Secure Over the Decades - Daniel Stenberg - ASW #320
Curl and libcurl are everywhere. Not only has the project maintained success for almost three decades now, but it's done that while being written in C. Daniel Stenberg talks about the challenges in dealing with appsec, the design philosophies that keep it secure, and fostering a community to create one of the most recognizable open source projects in the world. Segment Resources: Google replacing SMS with QR codes for authentication, MS pulls a VSCode extension due to red flags, threat modeling with TRAIL, threat modeling the Bybit hack, malicious models and malicious AMIs, and more! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35524515
info_outline
Developer Environments, Developer Experience, and Security - Dan Moore - ASW #319
02/25/2025
Developer Environments, Developer Experience, and Security - Dan Moore - ASW #319
Minimizing latency, increasing performance, and reducing compile times are just a part of what makes a development environment better. Throw in useful tests and some useful security tools and you have an even better environment. Dan Moore talks about what motivates some developers to prefer a "local first" approach as we walk through what all of this means for security. Applying forgivable vs. unforgivable criteria to reDoS vulns, what backdoors in LLMs mean for trust in building software, considering some secure AI architectures to minimize prompt injection impact, developer reactions to Rust, and more! Visit for all the latest episodes! Show Notes:
/episode/index/show/aswaudio/id/35415640