This year deserves to be called the "election year": in 2024, roughly 49% of the world's population will vote across 64 countries. The most prominent elections will take place in the USA and the European Union.
The widespread introduction of artificial intelligence (AI) technologies to the public, especially after the public release of ChatGPT, will profoundly impact the 2024 elections. AI will be leveraged in different areas of these elections to organize them better and make the voting process more straightforward for voters. On the other hand, bad actors will abuse the same technologies for malicious purposes.
In this article, I will discuss how governments are planning to utilize AI in organizing election events. Then I will cover the possible downsides of leveraging AI in election administration, especially how AI technologies such as deepfakes can impact election integrity and voters' opinions, particularly in the US election campaigns.
How can AI be used in election administration?
AI can be used to support election administration in different ways, such as improving efficiency, security, and election transparency. Here are some real-world use cases:
Voter management
AI technology is used to streamline different areas of voter management, such as voter registration and verification.
For instance, AI tools can automatically compare voter registration data with existing government databases to ensure its accuracy and prevent duplicate registrations. This also helps verify voter data by comparing it with official records – such as driver's license or Social Security number databases.
The Electronic Registration Information Center (ERIC) is a US nonprofit organization created by state election officials from around the United States to maintain accurate voter lists.
ERIC uses an AI-powered software solution to support voter management by searching for duplicate records between current voter lists and the datasets received from different sources – such as government databases and data received directly from state voter registration offices.
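To make the deduplication step concrete, here is a minimal sketch of the kind of record-matching logic such a tool might apply – fuzzy name comparison plus an exact date-of-birth match. The field names, threshold, and sample records are illustrative assumptions, not ERIC's actual algorithm:

```python
# Illustrative record-matching sketch -- NOT ERIC's actual algorithm.
# Flags likely duplicate voter registrations by fuzzy-matching names
# and requiring an exact date-of-birth match.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two normalized names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_likely_duplicates(records, threshold=0.9):
    """Compare every pair of records; flag pairs with matching DOB
    and highly similar names as candidate duplicates for review."""
    candidates = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = records[i], records[j]
            if a["dob"] == b["dob"] and name_similarity(a["name"], b["name"]) >= threshold:
                candidates.append((a["voter_id"], b["voter_id"]))
    return candidates

voter_list = [
    {"voter_id": "A-100", "name": "Jane Q. Smith", "dob": "1985-03-14"},
    {"voter_id": "B-201", "name": "Jane Q Smith",  "dob": "1985-03-14"},
    {"voter_id": "C-302", "name": "John Doe",      "dob": "1990-07-01"},
]
print(find_likely_duplicates(voter_list))  # [('A-100', 'B-201')]
```

In practice, flagged pairs would go to a human reviewer rather than being removed automatically, since near-identical names with the same birth date can still belong to different people.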
Voter signature matching
Election offices are now using AI to match a voter's signature on mail ballot return documents with the signature on record for that voter. In the past, signatures were analyzed manually; with AI, this task can be executed automatically, saving considerable time and resources.
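Under the hood, such systems typically reduce each signature image to a feature vector and compare the vectors. The sketch below illustrates only the comparison step, using a trivial pixel-based stand-in for the embedding; a production system would run a neural network trained on signature pairs, and the 0.85 threshold is an illustrative assumption:

```python
# Minimal sketch of signature verification by embedding comparison.
# The embed() function is a placeholder stand-in, not a real model.
import numpy as np

def embed(signature_image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: flatten and L2-normalize the image.
    A production system would run a trained neural network here."""
    v = signature_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def signatures_match(on_file: np.ndarray, on_ballot: np.ndarray,
                     threshold: float = 0.85) -> bool:
    """Cosine similarity between embeddings; below-threshold pairs
    should be routed to a human reviewer, not auto-rejected."""
    score = float(embed(on_file) @ embed(on_ballot))
    return score >= threshold

# Toy 8x8 grayscale "signature": identical images match perfectly.
ref = np.random.default_rng(0).random((8, 8))
print(signatures_match(ref, ref))  # True
```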
AI chatbots
US election offices are increasingly using chatbots to answer voter questions and streamline the interactions between voters and relevant government agencies. AI-powered chatbots introduce numerous benefits to both voters and election offices:
- Chatbots run 24/7, allowing voters to have their questions answered immediately outside office hours.
- Chatbots provide immediate responses regardless of how many voters ask questions simultaneously.
- Chatbots can provide more accurate answers than human representatives, who may give wrong answers under certain circumstances – such as after working long hours or under pressure during peak times.
- Chatbots can answer in any language, which removes linguistic barriers for some ethnic groups or new immigrants.
- Using AI-powered chatbots is more cost-effective than using human representatives to answer voter inquiries.
Some examples of using chatbots in US government agencies include:
- EMMA, a chatbot developed by US Citizenship and Immigration Services (part of the Department of Homeland Security) to answer visitors' inquiries and guide them through the agency's website
- MISSI, developed by the state of Mississippi to answer citizens and visitors looking for information about the state (see Figure 1)
- The MyCity Business Services Chatbot, operated by New York City, which uses Microsoft's Azure AI services to answer people who want to start or operate a business in New York City
It is worth noting that AI-powered chatbots come in two varieties (a minimal sketch of the second follows this list):
- Generative chatbots, which use generative AI technology to compose answers on the fly
- Non-generative chatbots, which answer users' inquiries by retrieving pre-approved responses
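Here is a minimal sketch of the non-generative kind: a retrieval bot that matches the user's question against a small FAQ using TF-IDF similarity and returns a pre-approved answer. Because it can only return canned text, it cannot hallucinate. The FAQ entries and the similarity cutoff are invented for illustration:

```python
# Minimal sketch of a non-generative (retrieval-based) election FAQ bot.
# It returns the canned answer whose question best matches the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "When is the voter registration deadline?": "Registration closes 30 days before election day.",
    "Can I vote by mail?": "Yes, request a mail ballot from your county election office.",
    "What ID do I need to vote?": "Accepted IDs include a driver's license or passport.",
}
questions = list(faq)

vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(query: str, min_score: float = 0.2) -> str:
    """Return the best-matching canned answer, or defer to a human."""
    scores = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < min_score:
        return "I'm not sure -- please contact your election office."
    return faq[questions[best]]

print(answer("how do I vote by mail?"))
```

When no FAQ entry scores above the cutoff, the bot defers to a human – a sensible safeguard for election-related queries.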
Using AI to educate voters
Some election offices use AI to develop training materials and guides for voting. Election materials sometimes need to be translated into other languages to ensure people with limited English proficiency (e.g., new immigrants) can understand voting instructions.
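As a sketch of that translation workflow, the snippet below machine-translates two lines of voter guidance into Spanish, assuming the Hugging Face transformers library and the open Helsinki-NLP/opus-mt-en-es model; an election office would still have a human translator review the output before publishing it:

```python
# Sketch: machine-translate voter guidance into Spanish using an
# open translation model (assumes `pip install transformers`).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

guidance = [
    "Polls are open from 7 a.m. to 8 p.m. on election day.",
    "Bring a government-issued photo ID to your polling place.",
]
for sentence in guidance:
    result = translator(sentence)[0]["translation_text"]
    print(f"{sentence}\n-> {result}\n")
```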
Aside from government agencies, the 2024 US presidential candidates are expected to use AI in various ways in their campaigns:
- Generate election content to promote candidates' political views – images, videos, text, and voice. For instance, AI language models and image-generation tools can be leveraged to create personalized content tailored to specific demographics or areas.
- Analysis of social media content to understand trends and voting patterns. This can be achieved using AI techniques such as sentiment analysis and natural language processing (NLP) to gain deep insight into public opinion and tailor the campaign accordingly (see the sketch after this list).
- Generate deepfake content to negatively impact voters' opinions of a specific candidate – Example 1, Example 2
- Analysis of vast volumes of data from different sources to target specific voter groups with tailored advertising. For example, in the 2016 US presidential election, Trump's campaign used Cambridge Analytica's services to target US voters with carefully tailored messages.
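The sentiment-analysis item above can be illustrated in a few lines of Python: an off-the-shelf classifier labels each post about a candidate as positive or negative, and the campaign tallies the results. The posts and candidate name are invented, and a real pipeline would add topic modeling and demographic segmentation:

```python
# Sketch: gauge public opinion from social-media posts with an
# off-the-shelf sentiment model (Hugging Face `transformers`).
from collections import Counter
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

posts = [
    "Candidate X's healthcare plan would be a huge win for families.",
    "I can't believe Candidate X dodged the debate question again.",
    "Candidate X's rally last night was inspiring!",
]
labels = Counter(r["label"] for r in classifier(posts))
print(labels)  # e.g. Counter({'POSITIVE': 2, 'NEGATIVE': 1})
```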
Risks of using AI technologies in the election process
Before discussing the risks of leveraging AI in elections, we must differentiate between two types of risk in this context:
- Risks inherent in the AI technology itself – for example, AI systems are susceptible to algorithmic bias and adversarial attacks.
- Risks arising from human-AI interaction, such as relying heavily on AI systems (e.g., delegating critical decisions to them) without human oversight.
Risks associated with AI-powered chatbots
When it comes to AI-powered chatbots answering election-related queries, there's a risk of AI hallucinations causing them to provide inaccurate answers.
An example of this issue was observed with Microsoft's Bing AI chatbot (now Copilot) during recent election cycles in Germany and Switzerland: Copilot answered one out of every three basic questions about candidates, polls, scandals, and voting procedures incorrectly. Worse, the chatbot did not merely give wrong answers – it also cited wrong references to support its claims.
Bias in AI training data
AI systems are powered by machine learning (ML) models trained on massive datasets drawn from different sources (the internet, social media platforms, government databases, and other proprietary sources). When those datasets contain biased information, the AI systems built on them inherit that bias. A voting-related AI system trained on biased data may behave inaccurately and make disproportionate decisions – for example, failing to verify the identities of some voters, or rejecting applicants from certain demographic groups at higher rates. A simple audit of per-group outcomes can surface such disparities, as sketched below.
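The sketch computes an identity-verification model's acceptance rate for each demographic group and flags large gaps. The outcomes and the 80% ("four-fifths rule") threshold are illustrative assumptions:

```python
# Sketch of a basic fairness audit: compare acceptance rates across
# demographic groups. Records and threshold are illustrative.
from collections import defaultdict

# (group, verified) outcomes from a hypothetical verification model
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, accepted = defaultdict(int), defaultdict(int)
for group, ok in outcomes:
    totals[group] += 1
    accepted[group] += ok

rates = {g: accepted[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule of thumb: flag any group whose rate is < 80% of the best.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("Flagged for review:", flagged)  # ['group_b']
```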
Adversarial attacks against AI voting systems
AI systems are susceptible to adversarial attacks, which aim to manipulate their behavior for malicious purposes. In the context of voting systems, these attacks can come in the following forms:
- Data poisoning attacks: Threat actors insert malicious samples into the datasets used to train the ML models powering AI voting systems. By doing so, they can skew the system's decision-making during various voting phases, such as voter registration, ballot processing, or result analysis. This can lead to inaccurate results and reduce trust in the AI systems that facilitate the electoral process (a toy demonstration follows at the end of this section).
- Model extraction: Threat actors may attempt to extract sensitive information from a voting system's ML models or the datasets used to train them. Reconstructing a model allows adversaries to analyze its architecture and understand its decision-making process, which in turn helps them identify and exploit security vulnerabilities – for example, to manipulate voting outcomes or gain unauthorized access to sensitive data, compromising the integrity and security of the entire voting system.
- Deepfake technology: The rise of deepfake technology presents a new threat to the integrity of future elections. Threat actors, including political opponents, can use deepfakes to create highly convincing but fabricated content intended to deceive the public and influence voting decisions. This includes spreading false information about candidates and manipulating audio and visual evidence to distort voters' opinions of a specific party or candidate. There have already been several incidents of deepfakes being used to influence political events, such as those involving US President Joe Biden and Ukraine's president.
With the upcoming US election, the use of deepfake technology is expected to intensify across all parties, and even by external nation-state actors, to influence public opinion and undermine the democratic process.
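To close, here is the toy demonstration of data poisoning promised above: flipping a fraction of training labels in a synthetic dataset degrades a classifier's accuracy. The dataset and the 30% flip rate are illustrative; a real attack on a voting-adjacent system would target, for example, signature-verification training data:

```python
# Toy label-flipping poisoning demo on synthetic data (scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```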