Adversarial data manipulation is the nuclear weapon of the 21st century. New techniques to conduct offensive cyberoperations could corrupt the integrity of datasets in voter registration systems as well as political and electoral management databases. This is just one example of how “data poisoning” — when data is manipulated to deceive — poses a risk to our most critical infrastructures.
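To make the abstract idea of data poisoning concrete, the toy sketch below shows how falsifying a handful of records silently changes the conclusion drawn from a dataset. All names and figures are hypothetical and not drawn from any real incident.

```python
# Toy illustration of data poisoning: a small number of falsified
# records changes an aggregate that downstream systems rely on.
# Everything here is hypothetical.

# Clean voter-roll extract: (voter_id, precinct, eligible)
records = [(i, "P1", True) for i in range(100)]

def eligible_count(rows):
    """Aggregate a registration system might report."""
    return sum(1 for _, _, ok in rows if ok)

before = eligible_count(records)  # 100 eligible voters

# An attacker with write access flips a small fraction of flags;
# no record is deleted, so the tampering is easy to miss.
poisoned = [(vid, p, False) if vid < 5 else (vid, p, ok)
            for vid, p, ok in records]

after = eligible_count(poisoned)  # 95 eligible voters
```

The point of the sketch is that nothing crashes and no data is missing; the dataset simply lies, which is what makes integrity attacks harder to detect than outright theft or deletion.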
Such challenges posed by emerging technologies to global security and national sovereignty are increasingly gaining traction in the United Nations Security Council. Its first debate on emerging technologies shed light on the growing cybersecurity divide: many member states in the global South are becoming more vulnerable to cyberthreats. Yet these early discussions lack a full understanding of how artificial intelligence, or AI, is converging with other technologies to create more sophisticated cyberthreats, including threats to election security.
In the last five years, hacks of electoral, medical and social media datasets have exposed the dangers of large-scale breaches of sensitive information, from ethnic backgrounds and biometrics to health profiles and online behaviors. With this recent deterioration of the cyberthreat landscape, population datasets and the electoral infrastructure built on them are growing targets for data manipulation.
For the multilateral system, it is urgent to anticipate and control how the convergence of AI, cyberthreats and data-capture technologies can be misused to discredit electoral institutions, influence people’s behaviors and erode citizens’ trust and political agency.
Three defining socio-technical shifts will lead to what I call “behavioral and electoral engineering,” affecting the future of election security.
First, the integration of AI into offensive cyberoperations allows attackers to efficiently target human biometric and behavioral vulnerabilities before and during elections. AI techniques have become common tools for attackers seeking access to sensitive data on election officials, political parties, campaign staff and candidates. In 2019, the American company Cisco revealed how malicious actors had begun targeting public officials to introduce intelligence malware that can evade detection, with damaging consequences such as exfiltrating credentials and security information, deleting files and commandeering a computing system.
In August 2016, Russian hackers used spear-phishing emails to target a voter registration software vendor, then impersonated the company’s employees by sending malicious emails to several Florida election administrators. They gained access to voter information databases in at least two Florida counties. In January 2019, the head of Ukraine’s cyberpolice reported that 10 weeks before the country’s presidential election, hackers were acquiring personal information stolen from civil servants and election officials. Online social manipulation tactics are also rising in Africa, with millions of attacks reaching South Africa, Kenya, Egypt, Nigeria and Ethiopia.
As algorithms learn to analyze our emotions and conversations, their ability to engineer behaviors can be used to hack the operations of organizations such as electoral management bodies, vendors and political parties. Such techniques were recently used to access information about how the Covid-19 vaccine is shipped, refrigerated and delivered. AI’s capacity to manipulate behaviors can exploit the psychological weaknesses of human actors and lead them to compromise strategic safety and security measures. This shift is taking place while people working in electoral management bodies, election monitoring units and political parties are still struggling to develop the skills to prevent emerging cyber- and influence operations.
Second, elections are complex data-driven processes, so the challenge of protecting election security should be approached as an information security and integrity problem. Population and electoral datasets — from voter registration databases and biometric ID systems to security and political campaign information — are at risk of both malicious exfiltration and manipulation. For instance, by injecting malware into voter information databases, malicious parties could corrupt voters’ data profiles, with significant implications, including preventing people from registering by the voting deadline.
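One standard safeguard for the integrity problem described above is to keep a cryptographic digest of a dataset and recompute it periodically, so that any silent change is detectable. The sketch below illustrates the idea with Python's standard library; the record fields are hypothetical.

```python
import hashlib
import json

def dataset_digest(rows):
    """SHA-256 over a canonical serialization of the rows.
    Changing any single field yields a completely different digest."""
    canon = json.dumps(sorted(rows), separators=(",", ":")).encode()
    return hashlib.sha256(canon).hexdigest()

# Hypothetical voter-roll extract: [voter_id, district, status]
roll = [["V001", "District 4", "active"],
        ["V002", "District 4", "active"]]

baseline = dataset_digest(roll)  # recorded at registration close

# Simulated tampering: one status flag is silently flipped.
roll[1][2] = "inactive"

# Recomputing the digest exposes the manipulation.
assert dataset_digest(roll) != baseline
```

A digest only detects tampering after the fact; in practice it would be paired with access controls, append-only audit logs and offline backups, since an attacker who can rewrite the data may also try to rewrite the checksum.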
In June 2016, Russian hackers compromised the computer network of the Illinois State Board of Elections, accessing information on millions of voters and extracting data on thousands of them. What matters most is not the level of digitization of discrete steps in the voting process but anticipating threats to data integrity across the full electoral information life cycle.
Large-scale voter data hacks have already affected populations in the United States, Israel, India, Kenya and the Philippines. These collective data harms create insecurity flash points, as the international development agenda pushes large numbers of countries in the global South to digitize their population datasets and critical infrastructure, often without foresight or security safeguards.
Third, this new typology of AI-led cyberthreats produces “trust decay.” Data-manipulation attacks do not need to significantly alter vote counts and tabulation to turn the tide of an election. They mainly need to instill widespread public doubt and concerns about legitimacy, causing electoral institutions to fail tests of transparency and security. With pervasive questions about which electoral datasets are valid and what information is true, malicious actors can focus on weaponizing polarization and distrust.
Considering the peace and security implications, protecting civilians and electoral institutions from technological and data harms is becoming a multilateral obligation. The UN and its member states should foster engagement among electoral management bodies, civil society and the corporate world to better forecast fast-emerging threats and develop safeguards and accountability measures.
There is no single solution to the pervasive dangers posed by the convergence of emerging technologies. Our ability to understand and stop global security risks must be developed collectively, across sectors and technological domains.
Eleonore Pauwels is an international expert in the security, societal and governance implications generated by the convergence of artificial intelligence with other dual-use technologies. She is the author of two reports, “The Anatomy of Information Disorders in Africa” and “Cyber-AI Convergence and Interference in Elections,” published by the New York office of the Konrad-Adenauer-Stiftung Foundation.