Happy Wednesday! How’s your week going? (This is my attempt at small talk.) Send news tips and light banter to: will.oremus@washpost.com.
With Telegram’s laissez-faire content policies under scrutiny after CEO Pavel Durov’s arrest in France, researchers Damon McCoy, Laura Edelson and Yaël Eisenstat of Cybersecurity for Democracy searched Meta’s publicly available Ad Library for ads that link to Telegram channels. Given Telegram’s reputation as a haven for speech and activities that are prohibited elsewhere, they suspected they might find a few ads for shady products or services.
The search turned up far more than they expected, Eisenstat said. Out of the first 50 such active Facebook and Instagram ads they found on Aug. 28, more than half — 32 — appeared to violate Meta’s advertising policies, by the researchers’ reckoning. They included nine ads for drugs, one for explicit adult content and a range of gambling and financial scams, including a document-forgery service and offers of guaranteed betting wins.
The finding shows how bad actors online use mainstream social networks such as Facebook and Instagram as a front door to lure users into shady or illegal schemes that transpire on Telegram — and how Meta, perhaps inadvertently, profits from the practice.
By Tuesday, most of the ads were no longer visible in Meta’s Ad Library. It was not clear whether Meta had removed them or they had simply expired.
Tech Brief’s own search of Meta’s Ad Library on Tuesday for ads linking to Telegram turned up a fresh batch of active ads, many of which appeared to be for similarly dicey content. One showed a wad of hundred-dollar bills with the caption, “Do you want to win big from fixed match?” Another advertised luxury cars with “no title” for $10,000, saying “don’t come asking $10k questions.”
In a report on their findings, shared with the Tech Brief ahead of its publication Wednesday, the researchers recommend that Meta apply an extra layer of scrutiny to ads that link to Telegram to “reduce the amount of harmful – and even illegal – content that crosses over from Telegram to Meta’s apps.”
Telegram did not respond to a request for comment Tuesday. Meta said it works continually to improve its systems for detecting ads that violate its policies and recently implemented stricter rules against high-risk drugs such as fentanyl, cocaine and heroin, in particular.
“Our ads review process includes both automated and human reviews and has several layers of analysis and detection, both before and after an ad goes live,” Meta spokesperson Ryan Daniels said. “When we identify ads that violate our policies, we work quickly to remove them.”
Meanwhile, both Google and Meta have been hosting ads for services that use artificial intelligence to virtually “undress” people, typically without their consent.
In a separate analysis, also shared with the Tech Brief ahead of its publication Wednesday, Alexios Mantzarlis of Cornell Tech in New York found 222 ads on Meta’s platforms for five different tools that offer to generate fake nude images of real people. These “undresser” or “nudifier” apps ask the user to upload an image of a person, and then — often for a fee — show the user a version of the image in which the person appears unclothed.
The finding, published on Mantzarlis’s newsletter Faked Up, comes a month after he found 15 Google search ads for such apps, in apparent violation of Google’s policy against ads for deepfake nudes. Mantzarlis, who is director of the security, trust and safety initiative at Cornell Tech, said those have since been removed.
“Services that offer to create synthetic sexual or nude content are prohibited from advertising through any of our platforms or generating revenue through Google Ads,” Google spokesperson Nate Funkhouser said. “We suspended the advertisers in question for violating this policy, removing the ads from our platforms.”
Meta, too, appeared to have removed some of the AI undresser ads on Tuesday after Mantzarlis contacted the company about them. But Mantzarlis said some AI undresser apps remain on Apple’s and Google’s app stores.
Meta’s Daniels said, “Meta does not allow ads that promote adult sexual exploitation. While apps like this remain widely available in various app stores, we have removed these ads and are taking action against the accounts behind them.”
Both sets of findings highlight the importance of transparency tools for social media researchers at a time when key social networks are growing more opaque.
While Meta’s Ad Library has shortcomings — it doesn’t archive nonpolitical advertisements or make clear whether they violated its rules — it’s actually more useful than similar tools for ads on Google’s YouTube and ByteDance’s TikTok, researchers said. That makes it easier to hold Meta to account for problematic ads, even if they might also be running on rival networks.
Eisenstat, a former Meta employee, was on the company’s election integrity team for political ads in 2018, the year it launched the Ad Library’s predecessor. She said she was disappointed with Meta’s move in August to shut down CrowdTangle, another tool widely used by researchers and journalists to research Facebook’s news feed.
Meanwhile, X has cut back on transparency tools and reporting since billionaire Elon Musk bought it in 2022, as your co-host Cristiano Lima-Strong reported last year, making it even more opaque than its rivals.
“The fact that we can do some basic research on Meta ads is a step in the right direction,” Eisenstat said. “But we’ve actually seen some concerning steps backwards on transparency since 2021, which is particularly concerning as we’re in such a heated election season.”
From our notebooks
Federal cybersecurity watchdog says it’s ‘not our role’ to police foreign influence
The U.S. cybersecurity officials responsible for defending election infrastructure from hacking attempts and foreign influence said Monday they don’t flag specific misinformation campaigns they spot to the social platforms where those campaigns spread, my colleague Joseph Menn reports for Tech Brief.
Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency (CISA), told reporters that other countries’ efforts to cast doubt on the validity of election results and sow division are likely to intensify. But when The Washington Post asked how the agency communicates with X, Facebook and other social media platforms about current foreign influence attempts, Easterly said it offers nothing beyond occasional updates on general tactics.
“None of these engagements involve CISA discussing content that is to be removed. That’s not our role,” Easterly said. Such communications have been a fraught issue since 2016, when Russian influence attempts were only detailed after the election. Some Republicans have alleged that officials engaged in a censorship conspiracy by pressuring the platforms to remove medical misinformation and other speech.
Easterly said CISA is aiding local and state officials who run the election process. But posts by official social media accounts generally get far less traction than those of influencers who spread falsehoods, such as the unsupported claim that millions of undocumented immigrants vote in federal elections.
Government scanner
Silicon Valley had Kamala Harris’s back for decades. Will she return the favor? (Cristiano Lima-Strong and Cat Zakrzewski)
Hill happenings
Apple Helped Nix Part of a Child Safety Bill. More Fights Are Expected. (Wall Street Journal)
Inside the industry
No X in Brazil? No problem, Brazilians say. (Terrence McCoy)
Few have tried OpenAI’s Google killer. Here’s what they think. (Lisa Bonos and Gerrit De Vynck)
Meta’s Oversight Board rules that ‘from the river to the sea’ is not necessarily hate speech (NBC News)
Snapchat to put ads next to chats with friends (The Verge)
Daybook
- Semafor hosts an event, “Age and Access in the Social Media Era,” featuring New York Gov. Kathy Hochul (D) and Samuel Levine, director of the FTC’s consumer protection bureau, Wednesday from 9:30-11 a.m.
- The World Bank and Georgetown University’s Center for Business and Public Policy host an event, “Jobs in the Age of AI,” Wednesday from 9 a.m.-5:30 p.m.
- The Brookings Institution hosts an event, “Digitally invisible: How the internet is creating the new underclass,” Thursday at 2 p.m.
Before you log off
People will be like, “generative AI has no practical use case,” but I did just use it to replace every app icon on my home screen with images of Kermit, soooo pic.twitter.com/cOBB5QNXpt
— Damon Beres (@dlberes) September 2, 2024
That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!