Tech Brief

The Washington Post’s essential guide to tech policy news

California Gov. Gavin Newsom’s desk is overflowing with AI bills

Analysis by Gerrit De Vynck

with research by Will Oremus

Hi all, Gerrit De Vynck here, filling in for Cristiano. I cover artificial intelligence for The Post out in San Francisco.

Below: A new report probes the link between social media and political violence. First:

California Gov. Gavin Newsom’s desk is overflowing with AI bills

We’ve been writing a lot about California’s controversial S.B. 1047, which would make tech companies liable for harm done by their artificial intelligence models. The industry is now lobbying Gov. Gavin Newsom (D) to veto the bill, which is also opposed by a raft of prominent California politicians, including Rep. Nancy Pelosi (D). But S.B. 1047 isn’t the only AI-related bill on Newsom’s desk. Others cover deepfakes and autonomous vehicles.

It’s possible that Newsom sides with tech companies (and Pelosi) by vetoing S.B. 1047 but then signs several other AI-related bills into law. That would allow the governor to say he isn’t against regulation altogether, but simply chose to sign the measures he thought struck the right balance between fostering California innovation and creating safeguards for the burgeoning technology. Newsom has until Sept. 30 to decide. Here’s a quick rundown of those bills.

Autonomous vehicles. One bill, called A.B. 1777, would require companies running self-driving vehicles to provide a hotline for police to call in case an autonomous vehicle disrupts a crime scene or emergency situation — something that has happened repeatedly in San Francisco. Another bill would force companies to report to the government every time their cars are involved in a collision, something they don’t currently have to do if the vehicle is carrying passengers for money.

Deepfakes. AI-generated images, audio and videos are becoming easier to make. S.B. 942, or the AI Transparency Act, would require AI companies to provide free tools to detect images, audio and video made by their products. Companies found breaking the law would be subject to fines. Another bill, A.B. 2655, or the Defending Democracy from Deepfakes Act, would require big social media platforms to block political deepfakes meant to mislead voters for a set period leading up to elections. Deepfakes have already shown up in elections around the world, including in the U.S. presidential election. Another bill, A.B. 2355, would require political advertisements to disclose whether generative AI was used to create part or all of the ad.

There’s more. Currently, it’s illegal to profit off a dead person’s likeness without his or her estate’s permission. A.B. 1836 would clarify that the existing law covers using AI to generate a dead person’s image, video or voice and then making money off it, such as creating a fake Elvis song with AI and selling it online. Finally, there’s a bill that would make it illegal to build robotic weapons, unless you work for a defense company that’s in the business of selling weapons to the Defense Department.

California’s wave of AI bills shows how determined the state’s politicians are to make California the nation’s de facto AI regulator, while Congress continues discussing the technology without passing any specific legislation. And California isn’t the only state moving ahead with AI policies. On Thursday, New York Attorney General Letitia James (D) is releasing a guide meant to help voters avoid AI-generated election misinformation. It urges people not to trust information from chatbots when it comes to voting and asks them to report any examples of election disinformation.

Send news tips to: gerrit.devynck@washpost.com.

From our notebooks

Social media really does contribute to political violence, NYU report contends

Called by Congress to testify on misinformation in March 2021, two months after the Jan. 6 insurrection, Meta CEO Mark Zuckerberg played down the role of social networks in stoking the uprising.

“Some people say that the problem is that social networks are polarizing us, but that’s not at all clear from the evidence or research,” Zuckerberg said at the time.

That line, which other Meta officials have since echoed, irked Paul M. Barrett, deputy director of the NYU Stern Center for Business and Human Rights. “I think that’s a largely disingenuous, throw-your-hands-up interpretation of the material,” he said.

On Thursday, Barrett and his colleagues are publishing a 28-page report that he says is intended in part to correct the record on what social science really says about the relationship between social media and political violence, Will Oremus reports for the Tech Brief. The report, which draws on a survey of more than 400 papers from peer-reviewed journals and think tanks, concludes that social media has often “facilitated” or “exacerbated” outbreaks of political violence around the world, from Myanmar to the U.S. Capitol, even though it isn’t the primary cause.

The report makes the case that specific social media features — such as Facebook groups, Instagram comments, TikTok’s For You Page and Twitter hashtags — have contributed to violence and extremist organizing in ways that could have been mitigated with different design choices or better oversight. It summarizes the roles of smaller networks, including alt-right platforms such as Gab, Parler and 4chan, as echo chambers where partisans “talk themselves into” radical views and acts.

The report includes a case study on the role of various platforms in Jan. 6 by Dean Jackson of Public Circle Research and Consulting and Justin Hendrix, adjunct professor at NYU Tandon School of Engineering. They distill some 280 academic analyses on social media’s contributions to the insurrection, including how adherents of the online extremist ideology QAnon coalesced around the pro-Trump “Stop the Steal” movement on both mainstream and alternative social platforms, including Telegram.

Social media is just one factor in political violence, the report says. “But as the work of social scientists makes clear, major tech companies need to do better. Rather than retreat from modest reforms made in the recent past, they should be intensifying efforts to protect against political threats and incitement to violence.”

Meta spokesperson Ryan Daniels declined to comment beyond highlighting the results of the company’s past research partnerships with independent academics, which it says show little evidence that social media meaningfully affects political attitudes or behaviors.

Government scanner

Justice Dept. charges two Russian media operatives in alleged scheme (David Nakamura, Catherine Belton and Will Sommer)

New Mexico sues Snap over sexual exploitation of minors (Cat Zakrzewski)

The Internet Archive loses its appeal of a major copyright case (Wired)

Nvidia says it has ‘not been subpoenaed’ by Justice Dept. (Bloomberg News)

Musk’s X wins appeal to block part of California content moderation law (Reuters)

SpaceX pulls employees from Brazil, discourages travel there, as Musk battles court over X (Wall Street Journal)

North Carolina man charged with using AI to win music royalties (New York Times)

Inside the industry

Meta’s Oversight Board says ‘from the river to the sea’ isn’t hate speech (Naomi Nix)

Verizon offers to buy Frontier for $9.6 billion cash to expand fiber network (Aaron Gregg)

Advertisers plan to withdraw from X in record numbers (CNN)

Andreessen Horowitz ditches Miami two years after opening office (Bloomberg News)

YouTube to restrict teenagers’ exposure to videos about weight and fitness (The Guardian)

Workforce report

US judge says X must face class action age bias claims over mass layoff (Reuters)

Trending

National Novel Writing Month faces backlash over allowing AI (Adela Suliman)

Is the Change Healthcare data breach letter real? Here’s what to know. (Tatum Hunter)

Webb telescope detects what looks like a giant question mark in space (Joel Achenbach)

Daybook

  • The Brookings Institution hosts a fireside chat about Nicol Turner Lee’s new book, “Digitally Invisible: How the Internet Is Creating the New Underclass,” Thursday at 2 p.m.

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to the Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!