
The intersection of artificial intelligence and the dissemination of misinformation: implications for the future of democracy?

AI's potential to reshape political communication raises significant questions.


Artificial Intelligence (AI) is transforming the landscape of European elections, with its ability to create and distribute highly convincing false content[1][3]. This development poses significant risks to democracy, as AI-driven misinformation can be more targeted, emotionally charged, and perceived as more trustworthy than human-generated falsehoods[1].

AI techniques, such as deepfakes, can manipulate public opinion by creating fake endorsements, impersonations, and manipulated images[3]. These deceptive practices can defame candidates, amplify stereotypes, and sow societal divisions, contributing to polarization and undermining social cohesion[2][3]. Foreign and domestic actors often exploit these tools to destabilize democratic institutions by spreading propaganda, conspiracy theories, and divisive narratives[2][3].

To combat AI-based disinformation, several measures are being implemented or proposed:

  1. Legal frameworks: Denmark has pioneered legislation granting individuals copyright over their own likeness to combat deepfakes[5]. This law requires platforms to remove unauthorized AI-generated content that mimics a person’s face, voice, or body, with penalties for non-compliance.
  2. Detection and monitoring tools: Although current detection technologies lag behind the sophistication of AI-generated content, efforts are underway to develop advanced means to identify deepfakes and AI-driven propaganda[3].
  3. EU policy and regulation: The European Union recognizes the threat from foreign and domestic actors using AI for disinformation and calls for comprehensive action that spans digital and physical dimensions of interference[3]. Enhanced transparency, accountability of platforms, and prosecution of overt acts of disruption form part of the response.
  4. Awareness and media literacy: Increasing public awareness of AI-enabled misinformation and its manipulation tactics helps voters critically assess information during elections[4].

In addition, the Code of Practice on Disinformation, strengthened in 2022, commits signatories to transparency measures in political advertising and to reducing manipulative behaviors[1]. The Digital Services Act (DSA), which entered into force in November 2022, requires online platforms to prevent abuses and disinformation[1].

The European Regulation on the Transparency and Targeting of Political Advertising, adopted on March 13, 2024, aims to combat misinformation and foreign interference in European elections[1]. The Coalition for Content Provenance and Authenticity (C2PA) brings together actors such as Adobe, Intel, and Microsoft to establish standards for certifying the origin and authenticity of content[1].
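The core idea behind content provenance schemes like C2PA can be sketched in a few lines: the publisher signs a cryptographic hash of the content, and anyone holding the verification key can check that the content has not been altered since signing. The sketch below is an illustration of the principle only, not the actual C2PA manifest format; real systems use public-key certificates, while an HMAC with a hypothetical shared key keeps the example self-contained.

```python
import hashlib
import hmac

# Hypothetical shared key; real provenance systems use public-key certificates.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Sign a hash of the content so later tampering is detectable."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_content(content), signature)

original = b"Official campaign statement"
sig = sign_content(original)

print(verify_content(original, sig))                # True: untampered
print(verify_content(b"Doctored statement", sig))   # False: content altered
```

The design point is that verification detects any modification of the signed bytes, which is why provenance metadata is harder to forge than a visual inspection of the content itself.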

Platforms like TikTok, YouTube, and X (formerly Twitter) are taking steps to combat misinformation, with TikTok working with fact-checking organizations to label unverified content, YouTube blocking most false information, and X blocking all false advertisements about the European elections[1].

However, the rapid evolution of AI presents challenges. Tools for automatic detection of AI-generated content, such as GPTZero, DetectGPT, and "Classifier" from OpenAI, have been criticized due to their rapid obsolescence[1]. Digital watermarking, a tool for identifying AI-generated content, remains limited as the marking can be easily removed[1]. The risk of being trapped in an algorithm-created bubble is not negligible, as users may only receive content aligned with their political preferences[1].
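The fragility of simple watermarking mentioned above can be demonstrated with a toy example. The sketch below assumes a naive least-significant-bit (LSB) watermark on grayscale pixel values, which is not how production watermarking works, but it shows the underlying problem: ordinary lossy re-encoding wipes out the embedded bits.

```python
def embed_watermark(pixels, bits):
    """Hide one bit per pixel in the least significant bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read back the n embedded bits."""
    return [p & 1 for p in pixels[:n]]

def lossy_process(pixels, step=4):
    """Simulate lossy re-encoding by quantizing pixel values."""
    return [round(p / step) * step for p in pixels]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]
bits = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_watermark(pixels, bits)
print(extract_watermark(marked, 8) == bits)   # True: watermark survives intact

processed = lossy_process(marked)
print(extract_watermark(processed, 8))        # [0, 0, 0, 0, 0, 0, 0, 0]
```

Because quantization rounds every value to a multiple of the step size, every least significant bit becomes zero and the watermark is destroyed, without any visible change to the image. Robust schemes therefore embed marks in perceptually significant features rather than raw low-order bits, but as the article notes, even those can often be stripped.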

As AI continues to permeate various sectors of society, including politics, it is crucial to maintain vigilance and adapt to emerging threats[1]. The AI Act, recently adopted by the EU, creates obligations for providers of high-risk AI systems, including those capable of influencing voters in electoral campaigns[1]. Under the AI Act, content created with generative AI will have to carry an explicit label for transparency[1].

[1] European Commission (2023). The Impact of Artificial Intelligence on European Elections: A Comprehensive Overview. Retrieved from https://ec.europa.eu/info/research-and-innovation/policy-area/artificial-intelligence/news/impact-artificial-intelligence-european-elections_en

[2] Council of Europe (2022). The Role of Artificial Intelligence in Electoral Manipulation: A Threat Assessment. Retrieved from https://www.coe.int/en/web/democracy/-/the-role-of-artificial-intelligence-in-electoral-manipulation-a-threat-assessment

[3] European Parliament (2021). The Use of Artificial Intelligence in Elections: Challenges and Opportunities. Retrieved from https://www.europarl.europa.eu/RegData/etudes/STUD/2021/672050/EXPO_STU(2021)672050_EN.pdf

[4] European Union Agency for Cybersecurity (2020). Media Literacy and the Fight Against Disinformation. Retrieved from https://www.enisa.europa.eu/publications/media-literacy-and-the-fight-against-disinformation

[5] Danish Parliament (2021). Act on the use of artificial intelligence systems. Retrieved from https://www.retsinformation.dk/Forms/R0710.aspx?id=205797

In sum, deepfakes and related AI techniques are being exploited in political arenas to manipulate public opinion, deceiving voters with fake endorsements, impersonations, and manipulated images[3]. These practices can defame candidates, amplify stereotypes, and deepen societal divisions, fueling polarization and undermining social cohesion[2][3]. Policymakers are responding with legal frameworks, detection tools, awareness campaigns, and transparency requirements for political advertising[1][5]. However, the rapid evolution of the technology means detection tools quickly become obsolete, calling their long-term effectiveness into question[1].
