The New Hampshire Department of Justice, in an announcement by Attorney General John Formella, identified the source of a robocall that featured a voice resembling President Joe Biden’s and instructed citizens not to vote in the Jan. 23 primary: a Texas-based firm called Life Corporation and an individual named Walter Monk. The state attorney general’s office labeled the robocalls misinformation and advised voters to disregard them.
The automated messages were created with an AI deepfake tool, software that uses advanced AI algorithms to produce highly realistic and deceptive digital content, including videos, audio recordings, and images. These tools have become a cause for concern because they can be used to manipulate and meddle in elections, as seen in this case.
The investigation into the voter suppression calls in New Hampshire began in mid-January and involved collaboration among the state attorney general’s office, the Anti-Robocall Multistate Litigation Task Force, and the Federal Communications Commission Enforcement Bureau. The Election Law Unit of the attorney general’s office issued a cease-and-desist order to Life Corporation for violating New Hampshire Revised Statutes Title LXIII, which covers bribery, intimidation, and voter suppression. The order requires immediate compliance, and the unit retains the authority to take further enforcement action based on the company’s past conduct.
Investigators from the Election Law Unit traced the calls back to Lingo Telecom, a Texas-based telecom provider. The Federal Communications Commission also issued a cease-and-desist letter to Lingo Telecom over its alleged role in carrying the AI-generated voice-cloning robocalls, ordering the company to immediately stop supporting illegal robocall traffic.
In response to the rise of deepfakes and AI-generated content, FCC Chairwoman Jessica Rosenworcel proposed classifying calls featuring AI-generated voices as illegal, subjecting them to the regulations and penalties of the Telephone Consumer Protection Act. The World Economic Forum has also raised concerns about the harmful consequences of AI technologies, including deepfakes, in its 19th Global Risks Report. Canada’s primary national intelligence agency, the Canadian Security Intelligence Service, has likewise warned about online disinformation campaigns that employ AI deepfakes.
Overall, the incident in New Hampshire highlights the potential dangers of AI-generated content and the need for robust regulations to combat its misuse.