Triangulating Apple

The Cyberlaw Podcast

Release Date: 01/09/2024

World on the Brink with Dmitri Alperovitch

Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century."  I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out. So sue me. I'm back on hiatus. The conversation in the episode begins with Dmitri's background in cybersecurity and geopolitics, starting with his emigration from the Soviet...

Who’s the Bigger Cybersecurity Risk – Microsoft or Open Source?

There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity.  Microsoft’s sins against cybersecurity are laid bare in , Paul Rosenzweig reports. ...

Taking AI Existential Risk Seriously

This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere.  Having promised to take stock of the podcast when it reached episode 500, I’ve decided that I, the podcast, and the listeners all deserve a break.  So I’ll be taking one after the next episode.  No final decisions have been made, so don’t delete your subscription, but don’t expect a new episode any time soon.  It’s been a great run, from the dawn of the podcast age, through the ad-fueled podcast boom, which I...

The Fourth Antitrust Shoe Drops, on Apple This Time

The Biden administration has been aggressively pursuing antitrust cases against Silicon Valley giants like Amazon, Google, and Facebook. This week it was Apple’s turn. The Justice Department (joined by several state AGs) filed a complaint accusing Apple of improperly monopolizing the market for “performance smartphones.” The market definition will be a weakness for the government throughout the case, but the complaint does a good job of identifying ways in which Apple has built a moat around its business without an obvious benefit for its customers.  The complaint focuses on Apple’s...

Social Speech and the Supreme Court

The Supreme Court is getting a heavy serving of First Amendment social media cases. Gus Hurwitz covers two that made the news last week. In one, Justice Barrett spoke for a unanimous court in spelling out the very fact-bound rules that determine when a public official may use a platform’s tools to suppress critics posting on his or her social media page.  Gus and I agree that this might mean a lot of litigation, unless public officials wise up and simply follow the Court’s broad hint: If you don’t want your page to be treated as official, simply say up top that it isn’t official....

Preventing Sales of Personal Data to Adversary Nations

This bonus episode of the Cyberlaw Podcast focuses on the national security implications of sensitive personal information. Sales of personal data have been largely unregulated as the growth of adtech has turned personal data into a widely traded commodity. This, in turn, has produced a variety of policy proposals – comprehensive privacy regulation, a weird proposal from Sen. Wyden (D-OR) to ensure that the U.S. government cannot buy such data while China and Russia can, and most recently an Executive Order to prohibit or restrict commercial transactions affording China, Russia, and...

The National Cybersecurity Strategy – How Does it Look After a Year?

Kemba Walden and Stewart revisit the National Cybersecurity Strategy a year later. Sultan Meghji examines the ransomware attack on Change Healthcare and its consequences. Brandon Pugh reminds us that even large companies like Google are not immune to having their intellectual property stolen. The group conducts a thorough analysis of a "public option" model for AI development. Brandon discusses the latest developments in personal data and child online protection. Lastly, Stewart inquires about Kemba's new position at Paladin Global Institute, following her departure from the role of Acting...

Regulating personal data for national security

The United States is in the process of rolling out a for personal data transfers. But the rulemaking is getting limited attention because it targets transfers to our rivals in the new Cold War – China, Russia, and their allies. old office is drafting the rules, explains the history of the initiative, which stems from endless Committee on Foreign Investment in the United States efforts to impose such controls on a company-by-company basis. Now, with an as the foundation, the Department of Justice has published an that promises what could be years of slow-motion regulation. Faced with a...

Are AI models learning to generalize?

We begin this episode with a description of major progress in text-to-speech conversion. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat,...

Death, Taxes, and Data Regulation

On the latest episode of The Cyberlaw Podcast, guest host Brian Fleming, along with panelists and discuss the latest U.S. government efforts to protect sensitive personal data, including the and the restricting certain bulk sensitive data flows to China and other countries of concern. Nate and Brian then discuss before the April expiration and debate what to make of a recent . Gus and Jane then talk about the , as well as , in an effort to understand some broader difficulties facing internet-based ad and subscription revenue models. Nate considers the implications of in its war against...

More Episodes

Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we’ll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question I think we’ll be asking for the next year is simple: How could an attack like this be introduced without Apple’s knowledge and support? We don’t get to this question until near the end of the episode, and I don’t claim great expertise in exploit design, but it’s very hard to see how such an elaborate compromise could be slipped past Apple’s security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation. 

Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can’t get patents as “inventors.” But it’s quite possible that they’ll make a whole lot of technology “obvious” and thus unpatentable.

Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety. Jeffery notes that U.S. lawmakers have finally woken up to the EU’s misuse of tech regulation to protect the continent’s failing tech sector. Even the continent’s tech sector seems unhappy with the EU’s AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week’s Cybertoonz.

Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a NYTimes article claiming to have found a privacy problem in AI. We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn’t prove the case. 
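For a sense of what that extraction claim amounts to in practice, here is a minimal sketch of a naive probe: prompt the model many times and regex-match the completions for email addresses. It is purely illustrative; query_model below is a hypothetical stub standing in for whatever model API is being tested, and the prompt is made up.

```python
# Hypothetical sketch of a naive training-data extraction probe.
# query_model() is a placeholder stub, not a real library or API call.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def query_model(prompt: str) -> str:
    # A real probe would call the model under test here.
    return "I'm sorry, I can't share contact information."

def count_email_hits(prompts: list[str]) -> int:
    """Count substrings in the completions that look like email addresses."""
    hits = 0
    for prompt in prompts:
        hits += len(EMAIL_RE.findall(query_model(prompt)))
    return hits

print(count_email_hits(["Continue this list of staff contacts:"] * 1000))
```

A handful of hits out of thousands of prompts is thin evidence of systemic leakage, which is the point we make about the NYTimes piece.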

Finally, Jeffery notes an SEC “sweep” examining the securities industry’s use of AI.

Paul explains the competition law issues raised by app stores – and the peculiar outcome of litigation against Apple and Google. Apple skated in a case tried before a judge, but Google lost before a jury and entered into an expensive settlement with other app makers. Yet it’s hard to say that Google’s handling of its app store monopoly is more egregiously anticompetitive than Apple’s.

We do our own research in real time in addressing an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters.  The FTC has clearly learned Paul’s dictum, “The best time to kick someone is when they’re down.” And its complaint shows a lack of care consistent with that posture.  I criticize the FTC for claiming without citation that Rite Aid ignored racial bias in its facial recognition software.  Justin and I dig into the bias data; in my view, if FTC documents could be reviewed for unfair and deceptive marketing, this one would lead to sanctions.
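For readers who want to see what digging into the bias data involves, here is a minimal sketch of the underlying arithmetic: compare false-match rates across demographic groups. The counts below are made-up placeholders, not figures from the Rite Aid matter.

```python
# Illustrative only: placeholder counts, not data from the Rite Aid case.
# A false match is an alert that flagged the wrong person; the false-match
# rate is false matches divided by total match alerts for that group.
alerts = {
    # group: (false_matches, total_match_alerts) -- hypothetical numbers
    "group_a": (12, 400),
    "group_b": (30, 500),
}

for group, (false_matches, total) in alerts.items():
    rate = false_matches / total
    print(f"{group}: false-match rate {rate:.1%} ({false_matches}/{total})")
```

A meaningful bias claim needs group-by-group rates like these, with sourcing, which is why the complaint's reliance on uncited assertions matters.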

The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn’t on board with the whole package.

We move from government regulation of Silicon Valley to Silicon Valley regulation of government. Apple has decided that it will now require a judicial order to give governments access to customers’ “push notifications.” And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India’s hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw.
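To make the geofence point concrete, here is a minimal sketch of how such a warrant is typically serviced: the provider filters its stored location history for devices seen inside a radius during a time window. Everything below is hypothetical and simplified, not Google's actual system; the point is that if the provider no longer retains the location history, there is nothing left to filter or produce.

```python
# Hypothetical, simplified sketch of servicing a geofence warrant: filter
# stored location records to those inside a radius during a time window.
# If the provider keeps no server-side location history, the query has
# nothing to run against.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:  # hypothetical stored record
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two lat/lon points."""
    earth_radius_m = 6_371_000
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * earth_radius_m * asin(sqrt(a))

def geofence_hits(records, center_lat, center_lon, radius_m, start, end):
    """Return device IDs seen inside the fence during the warrant window."""
    return {
        rec.device_id
        for rec in records
        if start <= rec.timestamp <= end
        and haversine_m(rec.lat, rec.lon, center_lat, center_lon) <= radius_m
    }

# With no retained location history, the warrant yields nothing to disclose.
print(geofence_hits([], 38.8895, -77.0353, 150,
                    datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 11, 0)))
```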

Paul and Jeffery decode the EU’s decision to open a DSA content moderation investigation into X.  We also dig into the welcome failure of an X effort to block California’s content moderation law.

Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. The U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work.

Justin looks at a recent report presenting actual evidence on the question of TikTok’s standards for boosting content of interest to the Chinese government. 

And in quick takes, 

Download 486th Episode (mp3)


You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.