A group of 20 technology companies involved in the development of artificial intelligence (AI) has announced a commitment to prevent its software from being used to interfere with elections, including those in the United States. The agreement recognizes the significant risks AI products pose in a year when roughly four billion people are expected to vote in elections worldwide. The document raises concerns about deceptive AI-generated election content and its potential to mislead the public, threatening the integrity of electoral processes.
The agreement also acknowledges that lawmakers globally have been slow to respond to the rapid advance of generative AI, prompting the tech industry to explore self-regulation. Brad Smith, vice chair and president of Microsoft, expressed support for the initiative in a statement. The pledge's 20 signatories are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
It is important to note, however, that the agreement is voluntary and stops short of an outright ban on AI content in elections. The 1,500-word document outlines eight steps the companies commit to taking in 2024, including developing tools to distinguish AI-generated images from authentic content and being transparent with the public about significant developments.
Despite this commitment, the open-internet advocacy group Free Press has dismissed the accord as an empty promise. The group argues that tech companies failed to honor previous election-integrity pledges after the 2020 election, and it advocates increased oversight by human reviewers.
In response to the tech accord, U.S. Representative Yvette Clarke has expressed her support and hopes that Congress will build on this initiative. Clarke has sponsored legislation aimed at regulating deepfakes and AI-generated content in political advertisements.
On Jan. 31, the Federal Communications Commission voted to ban robocalls that use AI-generated voices. The decision followed a fake robocall impersonating President Joe Biden ahead of January's New Hampshire primary, which stoked widespread concern about counterfeit voices, images, and videos in politics.