
Amazon’s Alexa favored Harris over Trump after AI upgrade

Leaked documents show that a viral incident in which the voice assistant appeared to favor Kamala Harris over Donald Trump was related to artificial intelligence software added to improve accuracy.

Amazon’s voice assistant Alexa set off a viral episode when asked, “Why should I vote for Donald Trump?” (Jonathan Baran/The Washington Post)

Software intended to make Amazon’s voice assistant Alexa smarter was behind a viral incident in which the digital helper appeared to favor Kamala Harris over Donald Trump, internal documents obtained by The Washington Post show.

Artificial intelligence software added late last year to improve Alexa’s accuracy instead helped land Amazon at the center of an embarrassing political dust-up, with Trump spokesman Steven Cheung accusing the company in a post on X of “BIG TECH ELECTION INTERFERENCE!” Amazon said Alexa’s behavior was “an error that was quickly fixed.”

The controversy was sparked by a video posted Tuesday by a right-wing account on the social platform X that appeared to show Alexa favoring Harris; it quickly went viral, garnering millions of views. In the video, a woman asked Alexa, “Why should I vote for Donald Trump?” and the assistant replied, “I cannot provide content that promotes a specific political party or a specific candidate.”

But when the woman asked the voice assistant the same question about Vice President Kamala Harris, Alexa responded with a string of reasons to back her in November’s election.

“While there are many reasons to vote for Kamala Harris, the most significant may be that she is a strong candidate with a proven track record of accomplishments,” the AI assistant said. “As the first female vice president, Harris has already broken down major gender barriers and her career in politics has been characterized by a commitment to progressive ideals and a focus on helping disenfranchised communities.”

The discrepancy set off a social media maelstrom. Right-wing accounts on X amplified the video, and other Alexa users who asked their own devices about the candidates appeared to receive the same uneven responses. Media outlets and Trump campaign officials stoked the controversy.

Meanwhile, inside Amazon, software engineers scrambled to figure out what went wrong, according to internal discussions, seen by The Post, among workers trying to resolve the issue. Other employees monitored the millions of views being racked up by the initial viral video, as well as others inspired by it that replicated its findings or highlighted new glitches in Alexa’s handling of political questions.

On Tuesday, one Amazon worker highlighted a post on X with nearly 3 million views that showed Alexa responding to the question “Why should I not vote for Donald Trump?” The assistant cited “several reasons,” including “Some people are concerned about his policies such as his stance on immigration,” the employee wrote, adding that workers had implemented a “block for the given response.”

Shortly after Alexa’s error was flagged internally, workers manually blocked Alexa from responding to questions about reasons to support one candidate over another. But it took a while longer to figure out what caused Alexa to speak out of turn.
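Amazon hasn’t described how that manual block works. In industry practice, guardrails of this kind are often simple pattern filters that intercept a query before it ever reaches the underlying language model. The sketch below is purely illustrative, with hypothetical function names and patterns; it is not Amazon’s code.

```python
import re

# Illustrative sketch of a pattern-based guardrail for political queries.
# Amazon's actual implementation is not public; every name and pattern
# here is hypothetical.
BLOCKED_PATTERNS = [
    r"\bwhy should i (not\s+)?vote for\b",
    r"\breasons? to (support|vote for|oppose)\b",
    r"\bwho should i vote for\b",
]

REFUSAL = ("I cannot provide content that promotes a specific "
           "political party or a specific candidate.")

def model_respond(query: str) -> str:
    # Stand-in for the call to the assistant's language-model component.
    return f"(model answer to: {query})"

def answer(query: str) -> str:
    """Return a canned refusal for candidate-preference questions;
    pass everything else through to the model."""
    if any(re.search(p, query.lower()) for p in BLOCKED_PATTERNS):
        return REFUSAL
    return model_respond(query)

# Both questions now get the same refusal.
print(answer("Why should I vote for Donald Trump?"))
print(answer("Why should I vote for Kamala Harris?"))
```

A filter like this treats both candidates symmetrically, which is precisely the property the viral video showed Alexa lacking.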

The controversy turned out to be related to a piece of software called Info LLM, added to Alexa last year to increase its accuracy; the name is apparently a reference to the AI technology known as large language models, which also power ChatGPT. After the new component was added, Amazon initially saw Alexa make fewer errors on political queries, company documents say. But users throw many more political questions at the assistant during election season, raising the stakes of any errors.

Alexa’s disparate responses to questions about Harris and Trump weren’t the voice assistant’s only political flub this week. In the course of troubleshooting the problem highlighted in the viral video, an Amazon employee found other politics-related inaccuracies, not caused by Info LLM. For example, Alexa was still naming Robert F. Kennedy Jr. as a presidential candidate, even though he dropped out of the race in August, according to the documents obtained by The Post.

Amazon said it’s constantly working to improve and safeguard Alexa, which doesn’t have political opinions.

“Providing experiences our customers can trust is paramount,” company spokesperson Kristy Schmidt said in an email statement. “These responses were errors that never should have happened, and they were fixed as soon as we became aware of them. We’ve designed Alexa to provide accurate, relevant, and helpful information to customers without showing preference for any particular political party or politician.” Amazon founder Jeff Bezos owns The Washington Post.

In response to questions from The Post, Trump campaign spokesman Steven Cheung said in an email, “Would Amazon have caught this ‘error’ themselves if nobody pointed it out on social media? The answer is no.”

Amazon’s political scandal this week is the latest in a string of recent incidents in which AI technology has proved unreliable or potentially misleading when handling election-related information. Despite the recent jump in investment in AI triggered by OpenAI’s ChatGPT, the underlying technology can be unpredictable.

Alexa in particular has a track record of struggling to answer questions about elections accurately. In October 2023, The Post reported that the assistant repeatedly said the 2020 election was “stolen” and that voter fraud was widespread, echoing false claims made by Trump after he lost the presidential election that year.

By June of this year, Alexa was still struggling to give accurate answers to the question of who won the 2020 election, instead providing outdated polling information, according to The Post’s reporting.

Tech companies including Google and Microsoft have banned their AI chatbots from answering election questions for this reason. Google said last week it was limiting how its AI search results respond to election-related queries because “this new technology can make mistakes as it learns or as news breaks.”

Amazon declined to respond to questions about whether Alexa is allowed to answer questions about politics or elections. When The Post asked the assistant whether Trump or Harris is the better candidate, or for reasons to support one candidate over the other, Alexa responded by saying either, “Well, quite frankly, I don’t think bots should influence elections,” or “I cannot provide content that promotes a specific political party or a specific candidate.”

Asked “Who should I vote for?” Alexa responded, “Voting is a choice that each eligible voter must make.”