BRUSSELS — Voters in the European Union are set to elect lawmakers starting Thursday for the bloc’s parliament, in a major democratic exercise that’s also likely to be overshadowed by online disinformation.
Experts have warned that artificial intelligence could supercharge the spread of fake news that could disrupt the election in the EU and many other countries this year. But the stakes are especially high in Europe, which has been confronting Russian propaganda efforts as Moscow’s war with Ukraine drags on.
Here’s a closer look:
Some 360 million people in 27 nations — from Portugal to Finland, Ireland to Cyprus — will choose 720 European Parliament lawmakers in an election that runs Thursday to Sunday. In the months leading up to the vote, experts have observed a surge in the quantity and quality of fake news and anti-EU disinformation being peddled in member countries.
A big fear is that deceiving voters will be easier than ever, enabled by new AI tools that make it easy to create misleading or false content. Some of the malicious activity is domestic, some international. Russia is most widely blamed, and sometimes China, even though hard evidence directly attributing such attacks to either is difficult to pin down.
“Russian state-sponsored campaigns to flood the EU information space with deceptive content is a threat to the way we have been used to conducting our democratic debates, especially in election times,” Josep Borrell, the EU’s foreign policy chief, warned on Monday.
He said Russia’s “information manipulation” efforts are taking advantage of growing social media use “and cheap AI-assisted operations.” Bots are being used to push smear campaigns against European political leaders who are critical of Russian President Vladimir Putin, he said.
There have been plenty of examples of election-related disinformation.
Two days before national elections in Spain last July, a fake website was registered that mirrored one run by authorities in the capital Madrid. It posted an article falsely warning of a possible attack on polling stations by the disbanded Basque militant separatist group ETA.
In Poland, two days before the October parliamentary election, police descended on a polling station in response to a bogus bomb threat. Social media accounts linked to what authorities call the Russian interference “infosphere” claimed a device had exploded.
Just days before Slovakia’s parliamentary election in September, AI-generated audio recordings impersonated a candidate discussing plans to rig the election, leaving fact-checkers scrambling to debunk them as false as they spread across social media.
Just last week, Poland’s national news agency carried a fake report saying that Prime Minister Donald Tusk was mobilizing 200,000 men starting on July 1, in an apparent hack that authorities blamed on Russia. The Polish News Agency “killed,” or removed, the report minutes later and issued a statement saying that it wasn’t the source.
It’s “really worrying, and a bit different than other efforts to create disinformation from alternative sources,” said Alexandre Alaphilippe, executive director of EU DisinfoLab, a nonprofit group that researches disinformation. “It raises notably the question of cybersecurity of the news production, which should be considered as critical infrastructure.”
Experts and authorities said Russian disinformation is aimed at disrupting democracy by deterring voters across the EU from heading to the ballot box.
“Our democracy cannot be taken for granted, and the Kremlin will continue using disinformation, malign interference, corruption and any other dirty tricks from the authoritarian playbook to divide Europe,” European Commission Vice-President Vera Jourova warned the parliament in April.
Tusk, meanwhile, called out Russia’s “destabilization strategy on the eve of the European elections.”
On a broader level, the goal of “disinformation campaigns is often not to disrupt elections,” said Sophie Murphy Byrne, senior government affairs manager at Logically, an AI intelligence company. “It tends to be ongoing activity designed to appeal to conspiracy mindsets and erode societal trust,” she told an online briefing last week.
Narratives are also fabricated to fuel public discontent with Europe’s political elites, attempt to divide communities over issues like family values, gender or sexuality, sow doubts about climate change and chip away at Western support for Ukraine, EU experts and analysts say.
Five years ago, when the last European Union election was held, most online disinformation was laboriously churned out by “troll farms” employing people who worked in shifts, writing manipulative posts in sometimes clumsy English or repurposing old video footage. Fakes were easier to spot.
Now, experts have been sounding the alarm about the rise of generative AI, which they say threatens to supercharge the spread of election disinformation worldwide. Malicious actors can use the same technology that underpins easy-to-use platforms, like OpenAI’s ChatGPT, to create authentic-looking deepfake images, videos and audio. Anyone with a smartphone and a devious mind can potentially create false, but convincing, content aimed at fooling voters.
“What is changing now is the scale that you can achieve as a propaganda actor,” said Salvatore Romano, head of research at AI Forensics, a nonprofit research group. Generative AI systems can now be used to automatically pump out realistic images and videos and push them out to social media users, he said.
AI Forensics recently uncovered a network of pro-Russian pages that it said took advantage of Meta’s failure to moderate political advertising in the European Union.
Fabricated content is now “indistinguishable” from the real thing, and it takes disinformation experts a lot longer to debunk, said Romano.
The EU is using a new law, the Digital Services Act, to fight back. The sweeping law requires platforms to curb the risk of spreading disinformation and can be used to hold them accountable under the threat of hefty fines.
The bloc is using the law to demand information from Microsoft about risks posed by its Bing Copilot AI chatbot, including concerns about “automated manipulation of services that can mislead voters.”
The DSA has also been used to investigate Facebook and Instagram owner Meta Platforms for not doing enough to protect users from disinformation campaigns.
The EU has passed a wide-ranging artificial intelligence law, which includes a requirement for deepfakes to be labeled, but it won’t arrive in time for the vote; it will take effect over the next two years.
Most tech companies have touted the measures they’re taking to protect the European Union’s “election integrity.”
Meta Platforms — owner of Facebook, Instagram and WhatsApp — has said it will set up an election operations center to identify potential online threats. It also has thousands of content reviewers working in the EU’s 24 official languages and is tightening up policies on AI-generated content, including labeling and “downranking” AI-generated content that violates its standards.
Nick Clegg, Meta’s president of global affairs, has said there’s no sign that generative AI tools are being used on a systemic basis to disrupt elections.
TikTok said it will set up fact-checking hubs in the video-sharing platform’s app. YouTube owner Google said it’s working with fact-checking groups and will use AI to “fight abuse at scale.”
Elon Musk went the opposite way with his social media platform X, previously known as Twitter. “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone,” he said in a post in September.
___
A previous version of this story misspelled the given name of EU foreign policy chief Josep Borrell.