This year, over 80 national elections are scheduled to take place, directly affecting an estimated 4.2 billion people—52 percent of the globe’s population—in the largest election cycle the world will see until 2048. In addition to the U.S. presidential election, voters will go to the polls in the European Union, India, Indonesia, Mexico, South Africa, Ukraine, the United Kingdom, and dozens of other countries. Collectively, the stakes are high. The candidates who win will have a chance to shape not only domestic policy but also global issues, including artificial intelligence, cybersecurity, and Internet governance.
This year’s elections are important for reasons that go beyond their scale. They will be subject to a perfect storm of heightened threats and weakened defenses. Commercial decisions made by technology companies, the reach of global digital platforms, the complexity of the environments in which these platforms operate, the rise of generative AI tools, the growth of foreign influence operations, and the emergence of partisan domestic investigations in the United States have converged to supercharge threats to elections worldwide.
Each election will, of course, be shaped by local issues, cultural context, and the main parties’ policies. But each will also be challenged by global threats to electoral integrity and, by extension, democracy. Governments, companies, and civil society groups must invest in mitigating these risks and in tracking the emergence of new and dangerous electoral threats. If they get to work now, 2024 may be remembered as the year when democracy rallied.
DISTRACTED WATCHMEN
Elections take place within local contexts, in local languages, and in accordance with local norms. But the information underpinning them increasingly comes from global digital platforms such as Facebook, Google, Instagram, Telegram, TikTok, WhatsApp, and YouTube. Voters rely on these commercial platforms to communicate and to receive information about electoral processes, issues, and candidates. As a result, the platforms hold powerful sway over elections. In a recent Ipsos survey, 87 percent of respondents across 16 countries holding elections in 2024 expressed concern that disinformation and fake news could affect the results; social media was cited as the leading source of disinformation, followed by messaging apps. Yet although voters use these platforms, they are generally unable to influence the platforms’ decisions or priorities. Platforms are not obliged to fight information manipulation, protect information integrity, or monitor electoral environments equitably across the communities in which they operate. Nor are they focused on doing so.
Instead, the largest U.S. technology companies are increasingly distracted. Facing declining profits, higher compliance costs, pressure to invest in AI, and heightened scrutiny from governments around the world, leading companies such as Google and Meta have shifted resources away from the trust and safety teams that mitigate electoral threats. X (formerly known as Twitter) has gone even further, implementing massive cuts and erratic policy changes that have increased the amount of hate speech and disinformation on the platform. Some platforms, however, have begun to prepare for this year’s elections. Meta and Google, for example, have announced that they will apply certain safeguards, both globally and in the United States. Both companies are also seeking to expand their use of generative AI tools for content moderation, which may improve the speed and scale of information monitoring.
Newer platforms—such as Discord, TikTok, and Twitch—are beginning to formulate election-related policies and mitigation strategies, but they lack experience operating during elections. Telegram, an established global platform, takes a lax approach to combating disinformation and extremism, while U.S.-centric platforms including Gab, Rumble, and Truth Social have adopted a hands-off strategy that allows extremism, bigotry, and conspiracy theories to flourish. Some even welcome Russian propagandists banned from other platforms. WhatsApp and other popular encrypted messaging platforms present their own challenges: because the content they carry is encrypted, misuse is far harder to detect and curb.
Tech platforms have neither the resources nor the resolve to properly monitor and address problematic content. Every digital platform has a different process for reporting disinformation, hate speech, or harassment—as well as a varying capacity to respond to those threats. Companies will invariably be confronted with difficult tradeoffs, especially when their employees’ personal safety is at stake. At the same time, revenue constraints, technological limitations, and political prioritization will result in a vast gap between resources aimed at supporting U.S. electoral integrity and those focused on other countries’ elections. The result will be that most nations will be neglected.
Tech companies’ difficulties have been compounded by the growing incoherence of the laws and regulations to which they are subject. In the United States, legislators in 34 states have introduced over 100 bills since 2022 to regulate how social media platforms handle users’ posts. Challenges to laws passed in Florida, Texas, Utah, and other states are currently working their way through the courts. By the summer, the Supreme Court will have announced decisions in multiple cases affecting how social media companies and the U.S. government can communicate about electoral threats and how much autonomy companies should have in deciding what content to show users. At the same time, Europe is implementing its Digital Services Act and Digital Markets Act, landmark regulations targeting large platforms that seek to protect EU users’ rights. Countries such as India, Indonesia, and the United Kingdom have introduced their own rules, which are likely to result in more content removal requests and demands for user data. Other countries are considering laws specific to their own elections, all of which will require attention from the companies operating there.
In some instances, these laws reflect the increasingly authoritarian impulses of democratic governments and their desire to control information, especially around elections. This dynamic may be most pronounced in India, the world’s largest democracy. Indian Prime Minister Narendra Modi has consistently declined to support independent journalism, and his government has adopted regulations that allow it to control what content online platforms leave up or take down. As a result, more speech from opposition groups is being removed. In Mexico, President Andrés Manuel López Obrador and his party, Morena, have attempted to weaken the electoral commission and have criticized journalists and the judiciary. According to Freedom House, the country remains hostile to journalists, and the threat of violence against them is high. These risks will only grow in the lead-up to elections.
A PERFECT STORM
Another threat to electoral integrity is the continued proliferation of powerful, publicly available generative AI tools. The expertise required to create and disseminate fake text, imagery, audio, and video across multiple languages will continue to plummet, without any commensurate increase in the public’s ability to identify, investigate, or debunk such media. Indeed, events in 2023 demonstrated how easy it will be to sow confusion. In Slovakia, for example, 48 hours before the parliamentary elections, an AI-manipulated audio recording circulated in which the leader of the liberal Progressive Slovakia party appeared to discuss how to rig the election. The clip was released during a news blackout on political coverage, limiting the media’s capacity to cover or debunk the story. A similar campaign has been seen in Bangladesh, which is holding a hotly contested election on January 7. There, AI-generated news clips have circulated that falsely accuse U.S. officials of interfering in the election. The tools powering this misinformation cost as little as $24 a month, and fakes have also emerged in Poland, Sudan, and the United Kingdom, underscoring how quickly the threat is growing.
The greatest danger of generative AI tools on online platforms, however, is not that they will make voters believe fake information. It is that they will breed pervasive distrust: if everything can be faked, nothing may be true. Media coverage to date has focused on the use of AI to target political parties or officials, but the most significant target this year is likely to be trust in the electoral process itself. For those seeking to sow skepticism and confusion, the opportunities are abundant.
Hostile actors are positioned to take advantage of these opportunities. In August 2023, Meta announced its “biggest single takedown” of a Chinese influence campaign that targeted countries including Australia, Japan, the United Kingdom, and the United States. The campaign, which used thousands of accounts across numerous platforms, sought to bolster China and discredit the country’s critics. Beijing has also been active elsewhere, particularly in its attempts to affect the outcome of the Taiwanese election in January. Officials in Taipei have warned that China has “very diverse” ways of interfering in the election, including increasing military pressure and spreading fake news. Indeed, Beijing sought to do the same in 2019, and all countries can expect to be subject to some foreign interference efforts. As a recent Microsoft report noted, “Cyber operations are expanding globally, with increased activity in Latin America, sub-Saharan Africa, and the Middle East. . . . Nation-state actors are more frequently employing [information operations] alongside cyber operations to spread favored propaganda narratives. These aim to manipulate national and global opinion to undermine democratic institutions within perceived adversary nations—most dangerously in the contexts of armed conflicts and national elections.”
Since 2016, attempts to influence elections via online platforms have been met by a coalition of companies, external researchers, and governments working to analyze and understand electoral information dynamics. But those efforts are now under attack in the United States. Congressional investigations into these partnerships have been led by politically motivated lawmakers convinced that the collaborations are aimed at censoring conservative speech. The pending Supreme Court decision in Murthy v. Missouri, due by summer, could leave the U.S. government as the only government in the world not at liberty to contact American social media platforms regarding electoral threats at home or abroad.
Meanwhile, lawsuits brought by campaigning groups, including Stephen Miller’s America First Legal, have intimidated academics and nonprofits, causing research institutions to pull back on their work and delay building new programs for 2024. This has reduced research and collaborative contingency planning at the exact moment they are most needed. The academics and civil society organizations under attack are leaders in analyzing electoral dynamics in underresourced and underexamined countries, training independent researchers, and connecting those researchers to global platforms that might not otherwise invest in those environments. Although many of these institutions are committed to continuing their work, the chilling effect has been significant and is spreading across the globe. Nonpartisan U.S. NGOs focused on global electoral integrity have also been caught up in these attacks, losing precious resources and focus to legal inquiries and pulling back on work that might attract partisan scrutiny.
This chilling effect is particularly dangerous for U.S. elections—not just major federal and state races but also local ones. With 11 months to go before the 2024 presidential election, the FBI and the U.S. Cybersecurity and Infrastructure Security Agency, the agency primarily tasked with ensuring cybersecurity and digital protection for domestic elections, lack legal clarity about their ability to engage with major social media platforms on electoral security. Public comments by Meta indicate that the company’s communication with U.S. government counterparts on electoral security stopped in July 2023, the same month a federal district court in Louisiana issued a sweeping initial injunction in Murthy. Meanwhile, the Department of Homeland Security has warned of the increased likelihood that China, Iran, and Russia will use generative AI to target U.S. elections. Microsoft has said the same, pointing to ongoing manipulation of social media platforms by the Chinese government.
The United States is not the only nation absorbed by domestic concerns. Many leading democracies holding elections in 2024, including the United Kingdom, the United States, and the European Union, maintain an active diplomatic or development presence that supports the safety and integrity of elections abroad. That support can include electoral monitoring, technological infrastructure, assistance with transparent electoral processes, and backing for local civil society initiatives. In an election year, governments understandably turn their attention inward: focus on foreign affairs is divided, and diplomatic relationships can stall as a potential leadership transition nears. Given the stakes of this year’s electoral cycle and the complexity of the threats facing it, governments committed to supporting free and fair elections globally have a unique role to play in redoubling support for the collaboration and flexibility that will be necessary to protect democratic processes.
DEMOCRACY’S HOUR OF NEED
Despite the magnitude of the challenge, there is much that can still be done to protect this year’s elections. First, philanthropies and foreign assistance providers must immediately increase their investments in existing mechanisms to support information monitoring, fact checking, and digital forensics. In countries where there is limited capacity to track emerging electoral threats, this funding will be particularly important. The analyses from each election must then be shared and used as case studies for identifying new electoral threats and dynamics across digital platforms. The January elections in Bangladesh and Taiwan will not only be important in their own right: they will also be among the first to reveal what shape electoral threats will take during the rest of the year.
Second, governments and organizations focused on supporting democracy must engage and mobilize local and regional civil society leaders. The resulting coalition must include educators, academics, community organizers, health providers, and representatives of marginalized communities who may operate outside networks focused on civil and political rights, technology, or the rule of law. These individuals bring crucial expertise for contextualizing how information dynamics are evolving locally, especially around issues that matter to voters and may therefore be at high risk of exploitation. Too often, these civic leaders operate at a remove from the networks and partnerships focused on supporting democratic norms, as well as from the diplomats, researchers, and journalists striving to increase understanding of local developments. Their participation will be vital. Governments and civil society organizations must also increase pressure on technology companies to be more transparent about their electoral monitoring and response resources. This information will be crucial for identifying gaps in coverage that noncorporate investments could bridge.
Third, research must urgently begin to improve understanding of how generative AI tools can affect the flow of information and to produce recommendations for mitigating their negative effects. A wealth of new philanthropic initiatives focused on AI, including a $200 million fund created by ten organizations, among them the Ford Foundation and the David and Lucile Packard Foundation, must be harnessed for this purpose. These efforts cannot center solely on developed nations and majority languages. Rather, they must span all communities and countries holding elections this year.
Fourth, innovative public-private partnerships must be developed to help voters become discerning consumers of information. Voters need help identifying authoritative sources, such as election officials, and learning about the tools available for determining whether an image or video has been generated by AI or used misleadingly. They should also be warned ahead of time about the types of narratives they might encounter. During the 2022 U.S. midterm elections, such efforts to “prebunk” problematic narratives were found to reduce the impact of misinformation. These efforts will help voters better understand whom to trust when events and information about elections are evolving quickly.
This year will be a milestone for democracy. The challenges will be extreme in both scope and complexity, and the threats to electoral processes will be supercharged. This is the moment for democracy’s defenders to redouble their efforts in defense of electoral integrity and enable creative, nimble opportunities for collaboration. It is still possible to ensure that 2024 sees democracy strengthened, not weakened. This year must be remembered not only for the scale of its elections but also for the speed and scale of democracy’s defense.