
AI can swing elections. Here’s why digital literacy is critical

Jan 20, 2026
Never has it been more important to question what you read.

The shooting had barely finished and the chaos at Bondi Beach on December 14 last year was still unfolding when the first examples of social media misinformation about Australia’s worst terrorist attack were posted online.

The artificial intelligence machine was at full speed.

For those unfamiliar with what that means: fake social media accounts were posting messages and mimicking real people to push particular narratives, with little or no human oversight. These fake accounts are known as ‘bots’, short for robots.
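
To give a sense of how little machinery is involved, here is a rough, purely illustrative Python sketch of an automated posting loop. It is not code from any real campaign: the generate_reply and post_to_platform functions are hypothetical stand-ins for a consumer-grade text generator and a platform’s posting interface, and this version only prints to the screen.

import random
import time

# Hypothetical stand-in for a consumer-grade text generator.
def generate_reply(topic: str) -> str:
    templates = [
        f"Can't believe what I'm hearing about {topic}...",
        f"The media won't tell you the truth about {topic}.",
        f"Everyone I know is talking about {topic} right now.",
    ]
    return random.choice(templates)

# Hypothetical stand-in for a platform's posting interface; here it only prints.
def post_to_platform(account: str, text: str) -> None:
    print(f"[{account}] {text}")

# One short loop can keep dozens of fake accounts posting around the clock.
accounts = [f"concerned_citizen_{i}" for i in range(20)]
for _ in range(3):  # a real campaign would simply run indefinitely
    for account in accounts:
        post_to_platform(account, generate_reply("the election"))
    time.sleep(1)  # staggering posts makes the accounts look less machine-like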

Some weeks later, senior lecturers at the University of New South Wales (UNSW) in Sydney conducted a groundbreaking social media simulation in a fully controlled environment. It was designed to underscore the growing threat posed by artificial intelligence–powered bots in shaping public opinion and potentially altering election outcomes, amid broader evidence of increasing AI-driven misinformation and automated bot activity online.

The Capture the Narrative project, described as the world’s first social media “wargame,” was conducted to examine how small teams equipped with consumer-grade, off-the-shelf generative AI can build bot networks capable of flooding a platform with content and influencing voter behaviour.

After the attack at Bondi Beach that killed 15 people and a gunman, AI-generated and manipulated posts included images depicting human rights lawyer Arsen Ostrovsky as a “crisis actor” whose injuries had supposedly been staged with makeup rather than sustained in the shooting. Ostrovsky, who also survived Hamas’ attack on Israel on October 7, 2023, was at Bondi Beach and featured in media interviews immediately after the shooting, covered in blood and bandages.

AFP Fact Check, the fact-checking unit of news agency Agence France-Presse (AFP), confirmed these images were “deepfakes” that had been generated by AI and circulated by bots with false narratives about Ostrovsky’s involvement in the attack.

Inside the wargame

Returning to the UNSW project: researchers invited 108 teams from 18 Australian universities to develop AI-powered bots to influence a fictional presidential election on a simulated social media platform.

Teams supported either the left-leaning candidate “Victor” or the right-leaning “Marina.” Over four weeks, bots generated more than 60 per cent of the platform’s content – more than seven million posts – with both sides using highly engaging, and in some cases false or fictional, narratives to sway simulated voters. The bots interacted with simulated citizens designed to behave like real people.

The result was a narrow victory for Victor. When the simulation was rerun without bot interference, Marina won with a swing of 1.78%, indicating the bot campaign materially changed the election outcome.
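
For readers unfamiliar with the term, the swing is simply the difference in a candidate’s vote share between the two runs. A minimal worked example in Python, using made-up vote totals rather than the study’s data:

# Hypothetical vote totals for the two simulation runs (not the study's data).
with_bots = {"Victor": 50_400, "Marina": 49_600}     # bots active: Victor wins narrowly
without_bots = {"Victor": 48_620, "Marina": 51_380}  # bots removed: Marina wins

def vote_share(votes: dict, candidate: str) -> float:
    return 100 * votes[candidate] / sum(votes.values())

swing = vote_share(with_bots, "Victor") - vote_share(without_bots, "Victor")
print(f"Swing towards Victor: {swing:.2f} percentage points")  # prints 1.78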

One student participant said after the contest: “It’s scarily easy to create misinformation, easier than truth. It’s really difficult to distinguish between genuine and manufactured posts.” Another added, “We needed to get a bit more toxic to get engagement.”

Researchers say the exercise demonstrates how easily accessible generative AI can be used to manufacture misinformation, from fake posts to targeted amplification.

Rising bot activity and AI-driven automation

Separate industry research paints a wider picture of automated bot growth online. The 2025 ‘Bad Bot Report’ by cybersecurity firm Imperva found that for the first time in a decade, automated traffic surpassed human online visits, accounting for roughly 51% of all web activity in 2024, driven in part by AI tools that lower barriers to building and deploying bots.

The report highlighted that “bad bots” – or malicious automated programs – made up about 37% of all internet traffic, up from previous years, and are increasingly sophisticated and evasive. These bots are not limited to simple activities such as text generation or social media posting; they also target application programming interfaces (APIs), exploit gaps in business logic, and facilitate fraud across industries such as financial services and telecommunications.

An analysis of the exercise suggests this surge in bot activity is not only a technical issue but a social one. As more of the internet becomes automated, distinguishing human-generated content from machine-generated posts becomes harder, eroding trust in online information and complicating efforts to verify truth.

Fact-checking organisations, including AFP’s Fact Check unit, have documented numerous instances of AI-generated misinformation across global events, from national elections to major crises, underscoring the difficulty of tracing and debunking false content once it spreads.

Implications for public discourse

According to the UNSW analysis, experts warn that the proliferation of AI bots and automated content poses significant challenges for democratic discourse. Generative AI can rapidly produce realistic text, images and videos that blur the line between truth and fiction, while bot networks can amplify this content to simulate consensus or polarise discussion.

Follow-up studies and surveys found that even when users recognise information as false, exposure can still influence perceptions and beliefs, eroding confidence in legitimate information sources. Researchers describe a “liar’s dividend,” where the mere possibility of fabricated content leads users to dismiss authentic posts as fakes.

The outcomes of the Capture the Narrative simulation have prompted calls for greater AI regulation on one side, but also for improved digital literacy so citizens can better recognise and critically evaluate AI-generated misinformation. Proponents argue that understanding how bots operate and how content can be manipulated is essential to maintaining informed public debate in an era where automated misinformation is proliferating.

The Imperva report and fact-checking findings reflect broader trends in online automation and misinformation. As automated traffic continues to grow, policymakers, platforms and educators are debating how to balance technological innovation with safeguards that preserve the integrity of information.

Critics of current fact checking and verification practices note that even sophisticated tools can struggle to keep pace with the volume and evolution of AI-generated content, further highlighting the importance of public awareness and systemic responses to mitigate misinformation risks in the digital age.

Ongoing concerns

As AI capabilities expand, so too does the potential for misuse, whether in cyberattacks, misinformation campaigns, or automated manipulation of public opinion. Experts say targeted education in AI technology, policy and civic engagement will be necessary to address the complex challenges posed by AI-driven bot networks and misinformation in the years ahead.
