SEI Podcasts
On November 7, the Department of War released an acquisition transformation strategy that seeks to remove bureaucratic hurdles and streamline acquisition processes to enable even more rapid adoption of technologies, including artificial intelligence. Getting AI into the hands of warfighters requires disciplined AI Engineering. In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, lead of human-centered research in the SEI’s AI Division, and Brigid O’Hearn, the SEI’s lead of software modernization policy for the Department of War, sit down with...
Organizations, including the U.S. military, are increasingly adopting cloud deployments for their flexibility and cost savings. The shared security model used by cloud service providers removes some of an adopting organization's responsibility for system administration and security but leaves it responsible for monitoring hosted applications and resources. Cloud flow logs are a valuable source of data for supporting these security responsibilities and attaining situational awareness. The SEI has a long history of supporting flow log collection and analysis, including tools for...
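As a flavor of the kind of analysis the episode discusses, the sketch below parses a few cloud flow log records and counts rejected inbound flows per source address. The field layout is modeled on the default AWS VPC Flow Logs format, and the sample records are fabricated for illustration.

```python
# Minimal sketch: parse cloud flow log records and flag rejected flows.
# Field order follows the default AWS VPC Flow Logs format (an assumption):
# version account-id interface-id srcaddr dstaddr srcport dstport
# protocol packets bytes start end action log-status
from collections import Counter

SAMPLE_LOGS = [
    "2 123456789010 eni-abc123 10.0.0.5 198.51.100.7 443 49152 6 10 840 1620000000 1620000060 ACCEPT OK",
    "2 123456789010 eni-abc123 203.0.113.9 10.0.0.5 49153 22 6 1 40 1620000000 1620000060 REJECT OK",
    "2 123456789010 eni-abc123 203.0.113.9 10.0.0.5 49154 23 6 1 40 1620000000 1620000060 REJECT OK",
]

def parse_record(line):
    """Split one space-delimited flow log record into the fields we need."""
    f = line.split()
    return {"src": f[3], "dst": f[4],
            "srcport": int(f[5]), "dstport": int(f[6]),
            "action": f[12]}

def rejected_sources(lines):
    """Count REJECTed flows per source address -- a simple scan for probing."""
    return Counter(r["src"] for r in map(parse_record, lines)
                   if r["action"] == "REJECT")

print(rejected_sources(SAMPLE_LOGS))  # Counter({'203.0.113.9': 2})
```

A real pipeline would stream records from the provider's logging service and feed aggregates like these into alerting; the point here is only that flow logs carry enough context (addresses, ports, action) to support this kind of situational awareness.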
From early 2022 through late 2024, a group of threat actors publicly known as APT28 exploited known vulnerabilities, such as CVE-2022-38028, to remotely and wirelessly access sensitive information from a targeted company network. The attack did not require any hardware to be placed near the targeted company’s network; the attackers were able to execute it remotely from thousands of miles away. With the ubiquity of Wi-Fi, cellular networks, and Internet of Things (IoT) devices, the attack surface of communications-related vulnerabilities that can compromise data is extremely...
Modern data analytic methods and tools—including artificial intelligence (AI) and machine learning (ML) classifiers—are revolutionizing prediction capabilities and automation through their capacity to analyze and classify data. To produce such results, these methods depend on correlations. However, an overreliance on correlations can lead to prediction bias and reduced confidence in AI outputs. Drift in data and concept, evolving edge cases, and emerging phenomena can undermine the correlations that AI classifiers rely on. As the U.S. government increases its use of AI...
How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet and waiting to be found by system search engines. These attacks and others exploit the complexity of even seemingly secure AI systems. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI’s CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic...
Software bills of materials, or SBOMs, are critical to software security and supply chain risk management. Ideally, regardless of the SBOM tool, the output should be consistent for a given piece of software. But that is not always the case, and the divergence of results can undermine confidence in software quality and security. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Jessie Jamieson, a senior cyber risk engineer in the SEI’s CERT Division, sits down with Matt Butkovic, technical director of Risk and Resilience in CERT, to talk about how to achieve more...
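To make the divergence problem concrete, the sketch below compares the component inventories reported by two SBOM tools for the same software. The JSON shape loosely follows CycloneDX ("components" entries with "name" and "version"); the two tool outputs and their contents are hypothetical.

```python
# Minimal sketch: quantify disagreement between two SBOM tools' output
# for the same software. The SBOM dicts loosely mimic CycloneDX JSON;
# the component data is fabricated for illustration.
def components(sbom):
    """Reduce an SBOM to a comparable set of (name, version) pairs."""
    return {(c["name"], c["version"]) for c in sbom["components"]}

tool_a = {"components": [
    {"name": "openssl", "version": "3.0.13"},
    {"name": "zlib",    "version": "1.3.1"},
    {"name": "libcurl", "version": "8.5.0"},
]}
tool_b = {"components": [
    {"name": "openssl", "version": "3.0.13"},
    {"name": "zlib",    "version": "1.3"},  # same library, version reported differently
]}

a, b = components(tool_a), components(tool_b)
print("agree:    ", sorted(a & b))
print("only in A:", sorted(a - b))
print("only in B:", sorted(b - a))
```

Even this toy comparison shows the two common failure modes the episode touches on: components one tool misses entirely, and components both tools find but identify differently.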
Application programming interfaces, more commonly known as APIs, are the engines behind the majority of internet traffic. The pervasive and public nature of APIs has increased the attack surface of the systems and applications that use them. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI’s CERT Division, sits down with Tim Morrow, Situational Awareness Technical Manager, also with the CERT Division, to discuss emerging API security issues and the application of zero-trust architecture...
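A core zero-trust idea for APIs is that no call is trusted by network location alone: every request carries a credential that is verified before handling. The sketch below illustrates that per-request check with a simple HMAC token; the key handling and token scheme are assumptions for illustration, not a production design (real deployments typically use OAuth 2.0 / JWTs, mutual TLS, and a policy engine).

```python
# Minimal sketch of per-request verification in the spirit of zero trust.
# The HMAC-over-client-id token and hardcoded secret are illustrative
# assumptions only -- not a production authentication design.
import hashlib
import hmac

SECRET = b"demo-signing-key"  # hypothetical shared secret

def sign(client_id: str) -> str:
    """Issue a token binding the secret to a client identity."""
    return hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()

def handle_request(client_id: str, token: str) -> str:
    """Verify the token on every request before doing any work.
    compare_digest is constant-time, avoiding timing leaks."""
    if not hmac.compare_digest(token, sign(client_id)):
        return "401 Unauthorized"
    return f"200 OK: hello {client_id}"

print(handle_request("svc-a", sign("svc-a")))   # 200 OK: hello svc-a
print(handle_request("svc-a", "forged-token"))  # 401 Unauthorized
```

The design point is that the check happens inside the API handler itself rather than at a network perimeter, so a request from "inside" the network gets no free pass.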
Artificial intelligence (AI) is a transformational technology, but it has limitations in challenging operational settings. Researchers in the AI Division of the Carnegie Mellon University Software Engineering Institute (SEI) work to deliver reliable and secure AI capabilities to warfighters in mission-critical environments. In our latest podcast, Matt Gaston, director of the SEI’s AI Division, sits down with Matt Butkovic, technical director of the SEI CERT Division’s Cyber Risk and Resilience program, to discuss the SEI's ongoing and future work in AI, including test and evaluation, the...
A recent Google survey found that many developers felt comfortable using the Rust programming language in two months or less. Yet barriers to Rust adoption remain, particularly in safety-critical systems, where resources such as memory and processing power are in short supply and compliance with regulations is mandatory. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Vaughn Coates, an engineer in the SEI’s Software Solutions Division, sits down with Joe Yankel, initiative lead of the DevSecOps Innovations team at the SEI, to discuss the...
In response to Executive Order (EO) 14028, Improving the Nation’s Cybersecurity, the National Institute of Standards and Technology (NIST) recommended minimum standards for developer verification of software. Threat modeling is at the top of the list. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Natasha Shevchenko and Alex Vesey, both engineers with the SEI’s CERT Division, sit down with Timothy Chick, technical manager of CERT’s Applied Systems Group, to discuss how threat modeling can be used to protect software-intensive systems from attack. Specifically, they explore how threat models can...
Software enables our way of life, but market forces have sidelined security concerns, leaving systems vulnerable to attack. Fixing this problem will require the software industry to develop an initial standard for creating software that is secure by design. These are the findings of a recently released paper coauthored by Greg Touhill, director of the Software Engineering Institute (SEI) CERT Division. In this latest SEI podcast, Touhill and Matthew Butkovic, director of Cyber Risk and Resilience at CERT, discuss the paper, including its recommendations for making software secure by design.