It appears that there won’t be any new regulations on the use of artificial intelligence or AI in elections from the Federal Election Commission (FEC) for the 2024 election cycle, based on the recent vote to table the pursuit of regulations on “deepfake” political ads.
Some are concerned that the election could become an “AI arms race.” The rationale is a classic proliferation scenario: each party fears the other having more weapons, so each acquires more itself.
Where to draw the line is an important question. Because we protect free speech – especially surrounding political campaigns – it is one that each campaign will have to figure out for itself. This doesn’t mean that there can be no rules. Instead, it provides an opportunity for campaigns to devise shared ethical standards.
This is not without precedent: some elections have featured clean campaign pledges, and political parties regularly agree on debate formats and rules through the Commission on Presidential Debates – though that arrangement has itself drawn controversy.
The exact rules that campaigns agree on – for example, labeling generated content and allowing ads to be targeted to individual voters but not customized for them – would have to be negotiated. These rules could set a precedent for future presidential campaigns and for campaigns of all types.
Campaigns can tout their compliance, as with the RNC’s recent promotion of its ethical use of AI in a campaign ad. Rival campaigns can also call out rule violations by other campaigns, potentially gaining political points by doing so.
Of course, the question is whether candidates would judge the political ammunition an opponent gains from their rule violation to be more damaging than whatever they stand to gain by violating the rules.
If the data or votes to be gained are more valuable, then there is little chance of AI limitations being agreed to, much less followed. Of course, even calling for AI limitations could become political capital, with campaigns potentially gaining publicity and voter support.
If campaigns cannot agree on AI use limitations – or don’t follow them – the onus falls on voters to critically evaluate campaign marketing. Third-party content labeling by media organizations or others could prospectively aid readers in this; however, even limited exposure to misinformation can be hard to correct and drives greater future belief in similar misinformation.
Additionally, labeling may give additional credibility to content that is not labeled as AI-generated, even though it could be.
It is important to note that these issues are not unique to AI content; AI simply makes altering and targeting content faster and cheaper. It took a Hollywood team to edit then-Governor Arnold Schwarzenegger into the movie “Terminator Salvation,” using footage from prior films.
AI deepfake technology allowed a YouTube creator to produce a more complex “Terminator” substitution, placing Sylvester Stallone in the iconic role, presumably at a fraction of the time and cost.
ChatGPT can similarly speed up the writing process and has been so valuable to its users as to become the fastest-growing app ever; however, it is not without content accuracy issues. Cambridge Analytica showed the power of computer analysis to allow campaigns to get to know voters as well as community campaigners – all from a distance, without human intervention, and at a fraction of the cost.
Ideally, standards for campaign behavior will emerge and be followed as campaign decorum. Of course, each candidate runs the risk of another violating these unofficial and unenforceable standards to their benefit.
Even if the FEC had decided to pursue AI ad regulation, it is not clear that this would have succeeded. The FEC’s power does not extend to content regulation, even though legislation to that effect has been introduced. Moreover, the Supreme Court’s Citizens United ruling suggests that a constitutional amendment might be necessary to regulate AI use in campaign speech, as it bars laws restricting “political speech based on the speaker’s identity.”
Ultimately, the most effective deterrent is the power of public opinion. The fear of intense voter backlash – based on public reaction to similar campaign decorum violations in the past – is a risk that all but the most desperate long-shot candidates will likely avoid.
We stand at a crossroads with regard to the introduction of AI in politics. Our choices today will shape political discourse for future generations and determine the integrity of our democratic processes and institutions.
We can no more blindly embrace all uses of AI in the political arena than we can presume that no one will use the technology. Instead, we must find a balance where AI helps deliver legitimate information and candidate messages to voters, but is not used to confuse, inundate or intimidate them.