AI is already shaking up elections around the world, and Germany’s recent vote should serve as a wake-up call for Australia. AI-generated misinformation played a significant role in the campaign, with deepfakes and automated disinformation targeting conservative frontrunner Friedrich Merz. Meanwhile, AI-powered chatbots helped voters work out which party best matched their views. With Australia heading to the polls later this year, the real question isn’t whether AI will influence our elections; it’s how much, and whether we’re ready to handle it.
Toby Walsh, chief scientist at the UNSW AI Institute, says AI-generated misinformation is only going to get worse. “We saw much more of it in Germany than in earlier US elections. The generative AI tools are getting easier to use, more powerful, and more accessible,” he warns.
The risks of AI in Australian politics
AI is a double-edged sword for political campaigns. On the one hand, it can help parties personalise their outreach, making voter engagement more effective. On the other, it can be weaponised to spread disinformation, create deepfakes, and manipulate public opinion through hyper-targeted messaging.
According to Walsh, three major risks stand out:
• Micro-targeting: AI can feed different voters highly persuasive, but possibly contradictory, messages, making it harder to pin down a party’s real position.
• Deepfakes: AI-generated videos and images could make even truthful content seem unreliable, eroding trust in political messaging.
• Algorithmic amplification: Social media platforms prioritise engagement, which often means promoting extreme and polarising content.
Are we equipped to spot AI misinformation?
AI detection tools exist, but they’re always playing catch-up with how quickly generative AI evolves. “Technical solutions like digital watermarking will help in the long run, but they’re just Band-Aids on a deeper problem,” says Walsh. “We’re entering an era where seeing is no longer believing.”
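To see why Walsh calls watermarking only a partial fix, it helps to know roughly how one family of proposals works: the generator statistically biases its word choices toward a hash-defined “green” subset of the vocabulary, and a detector later tests whether a passage hits that subset more often than chance. The sketch below is a deliberately simplified, word-level illustration of the detection statistic only; real schemes operate on model tokens inside the generator, and every name here is illustrative rather than drawn from any deployed system.

```python
import hashlib
import math

# Toy, word-level sketch of "green list" watermark detection for
# AI-generated text. Real proposals work on model tokens, with the
# generator biased toward green tokens at sampling time; this version
# only illustrates the detection statistic.

def is_green(prev_word: str, word: str, fraction: float = 0.5) -> bool:
    """A word counts as 'green' if a hash seeded by its predecessor
    falls below the chosen fraction. A watermarking generator would
    favour green words; ordinary text hits them only at the base rate."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * fraction

def watermark_z_score(text: str, fraction: float = 0.5) -> float:
    """z-score of the observed green-word rate against the base rate.
    A large positive value suggests the text carries the watermark."""
    words = text.lower().split()
    n = len(words) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(p, w, fraction) for p, w in zip(words, words[1:]))
    return (hits - n * fraction) / math.sqrt(n * fraction * (1 - fraction))

if __name__ == "__main__":
    sample = "the minister denied the allegations at a press conference today"
    # Unwatermarked text should give a small |z|; output from a
    # cooperating watermarked generator would score several
    # standard deviations above zero.
    print(f"z = {watermark_z_score(sample):.2f}")
```

Even in this toy form, Walsh’s caveat is visible: paraphrasing or lightly editing a passage reshuffles the word pairs and washes out the signal, which is why watermarking helps but remains, in his words, a Band-Aid.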
The Australian Communications and Media Authority (ACMA) acknowledges that misinformation is a growing concern but has no direct regulatory power over digital platforms. Instead, Australia relies on a voluntary Code of Practice on Disinformation and Misinformation. Given that 75% of Australians are reportedly worried about misinformation, it’s fair to question whether self-regulation is enough.
Lessons from Germany and the US
Germany’s election showed just how effectively AI-driven disinformation can shape voter perceptions, often with the involvement of foreign actors. In the US, AI-generated robocalls impersonating political figures and deepfake videos of candidates surfaced during the 2024 campaign, setting a worrying precedent.
Walsh believes Australia must take three key lessons from these experiences:
• Regulating platforms: Social media companies can’t be left to regulate themselves; they’ll always prioritise engagement over truth.
• Truth in political advertising: We have rules about false advertising for washing powder, so why not for political campaigns?
• Stronger journalism: Independent media is one of the best tools we have to counter misinformation and hold campaigns to account.
Will political parties use AI responsibly?
If history is anything to go by, political parties will use AI to gain an edge, not necessarily responsibly. Walsh points to how Obama, and later Trump, leveraged social media to their advantage. “Without strict regulation, we’re heading for an arms race of AI-powered manipulation,” he says.
So how can Australia strike the right balance? Walsh suggests looking to the EU’s AI Act as a starting point and implementing clear safeguards like:
• Mandatory disclosure when political ads are AI-generated.
• Truth-in-advertising rules that apply to election campaigns.
• Strict limits on micro-targeting: “You should only be able to target ads to people based on whether they live in your constituency and are of voting age,” Walsh suggests.
Tech platforms: Problem or solution?
Platforms like Facebook, X (formerly Twitter), and TikTok are failing spectacularly at stopping AI-driven election misinformation. “Their algorithms actively promote divisive content because it drives engagement,” says Walsh. “Voluntary measures aren’t working. These platforms must be held legally responsible for the AI-generated content they amplify.”
The ACMA and the Australian Electoral Commission (AEC) are monitoring digital misinformation ahead of the 2025 federal election. The AEC’s “Stop and Consider” campaign aims to boost voter awareness, but there’s little in the way of real enforcement.
The fight for democracy in the age of AI
Germany’s election shows exactly what’s coming: AI-generated attack ads, deepfake smear campaigns, and hyper-personalised voter manipulation. Without strong regulation, Australia risks heading down the same path as other democracies grappling with misinformation at scale.
As deepfake technology advances and AI-driven political tools grow more sophisticated, the risks to democracy are escalating. Whether the answer lies in stronger regulation, better detection, or a more informed electorate remains to be seen.