As artificial intelligence continues to evolve, its potential for misuse is becoming increasingly apparent, particularly in the realm of cryptocurrency fraud. In light of this growing concern, lawmakers in the United States have put forth new legislation aimed at safeguarding citizens against the dangers posed by AI-generated deepfakes.
On September 12, Representatives Madeleine Dean and María Elvira Salazar introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This legislation seeks to shield Americans from the abuse of AI technology and combat the proliferation of unauthorized digital replicas.
In a press release, the lawmakers emphasized that the NO FAKES Act will empower individuals to take action against those who create, distribute, or profit from unauthorized digital representations of them. Additionally, it offers media platforms protection from liability when they remove harmful content.
The announcement also suggested that the new law would promote innovation and uphold free speech. However, skepticism remains regarding whether these goals will be truly realized.
**Concerns About Private Censorship**
Corynne McSherry, legal director at the Electronic Frontier Foundation—a nonprofit dedicated to digital rights—has voiced concerns that the NO FAKES Act could inadvertently lead to “private censorship.”
In an article from August, McSherry argued that while the bill might benefit attorneys, it could present significant challenges for the general public. She pointed out that the NO FAKES Act offers fewer protections for lawful expression compared to the Digital Millennium Copyright Act (DMCA), which is designed to safeguard copyrighted works.
According to McSherry, the DMCA provides a straightforward counter-notice procedure that allows individuals to have wrongly removed work restored. In contrast, the NO FAKES Act requires someone to rush to court within 14 days to defend their rights. She added that while AI-generated fakes can inflict real damage, these shortcomings could ultimately undermine the bill's effectiveness.
**Escalating Threats in AI Fraud**
The problem of AI deepfake scams is worsening. In the second quarter of 2024, the software company Gen Digital reported that scammers leveraging AI deepfakes had stolen at least $5 million in cryptocurrency. The firm urged users to remain vigilant, as advancements in AI are making these scams more sophisticated and convincing.
Additionally, Web3 security firm CertiK believes that AI-driven attacks will likely extend beyond just video and audio manipulation, potentially targeting cryptocurrency wallets through facial recognition technology. A CertiK representative advised that wallets utilizing this feature should assess their preparedness for potential AI attack vectors.
**Magazine:** AI Eye: $1M bet ChatGPT won’t lead to AGI, Apple’s intelligent AI use, AI millionaires surge