Going Deep on Deep Fakes—Plus a Bonus Interview with Rob Silvers on the Cyber Safety Review Board.
Release Date: 01/30/2024
The Cyberlaw Podcast
It was a big week for deep fakes generated by artificial intelligence. Sultan Meghji, who's got a new AI startup, walked us through four stories that illustrate the ways AI will lead to more confusion about who's really talking to us. First, a fake Biden robocall urged people not to vote in the New Hampshire primary. Second, a bot purporting to offer Dean Phillips's views on the issues was sanctioned by OpenAI because it didn't have Phillips's consent. Third, fake nudes of Taylor Swift led to a ban on Twitter searches for her image. And, finally, podcasters used AI to resurrect George Carlin and got sued by his family. The moral panic over AI fakery meant that all of these stories were long on "end of the world" and short on "we'll live through this."
Regulators of AI are not doing a better job of maintaining perspective. Mark MacCarthy reports that New York City's AI hiring law, which has punitive disparate-impact disclosure requirements for automated hiring decision engines, seems to have persuaded NYC employers that they aren't making any automated hiring decisions, so they don't have to make any disclosures. Not to be outdone, the European Court of Justice has decided that pretty much any tool that aids in decisions is likely to be an automated decision-making technology subject to special (and mostly nonsensical) data protection rules.
Is AI regulation creating its own backlash? Could be. Sultan and I report on a very plausible Republican plan to attack the Biden AI executive order on the ground that the statute its main enforcement mechanism relies on, the Defense Production Act, simply doesn't authorize what the order calls for.
Speaking of regulation, Maury Shenk covers the EU's application of the Digital Markets Act to big tech companies like Apple and Google. Apple isn't used to being treated like just another company, and its contemptuous response to the EU's rules for its app market could easily lead to regulatory sanctions. Looking at Apple's proposed compliance with the California court ruling in the Epic case and the European Digital Markets Act, Mark says it's time to think about price regulation of mobile app stores.
Even handing out big checks to technology companies turns out to be harder than it first sounds. Sultan and I talk about the slow pace of payments to chip makers, and the political imperative to get the deals done before November (and probably before March).
Senator Ron Wyden (D-Ore.) is still flogging NSA and the danger of government access to personal data. This time, he's on about NSA's purchases of commercial data. So far, so predictable. But here he misrepresents the facts, saying without qualification that NSA buys domestic metadata while omitting NSA's clear statement that its "domestic" netflow data consists of communications with one end outside the country.
Maury and I review an absent colleague’s effort to construct a liability regime for insecure software. Jim Dempsey's proposal looks quite reasonable, but Maury reminds me that he and I produced something similar twenty years ago, and it’s not even close to adoption anywhere in the U.S.
I can’t help but rant about Amazon’s arrogant, virtue-signaling, and customer-hating decision to drop a feature that makes it easy for Ring doorbell users to share their videos with the police. Whose data is it, anyway, Amazon? Sadly, we know the answer.
It looks as though there's only one place where hasty, ill-conceived tech regulation is being rolled back. Maury reports on the People's Republic of China, which canned its video game regulations (and its video game regulator for good measure) and started approving new games at a rapid clip, after a proposed regulatory crackdown knocked more than $60 billion off the value of its industry.
We close the news roundup with a few quick hits:
- Outside of AI, VCs are closing their wallets and letting startups run out of money.
- Quantum winter may be back, as quantum computing turns out to be harder than hoped.
- Speaking of winter, self-driving cars are going to need snow tires to get through the latest market and regulatory storms overtaking companies like Cruise.
Finally, as a listener bonus, we turn to Rob Silvers, Under Secretary for Policy at the Department of Homeland Security and Chair of the Cyber Safety Review Board (CSRB). Under Rob’s leadership, DHS has proposed legislation to give the CSRB a legislative foundation. The Senate homeland security committee recently held a hearing about that idea. Rob wasn’t invited, so we asked him to come on the podcast to respond to issues that the hearing raised – conflicts of interest, subpoena power, choosing the incidents to investigate, and more.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.