
The Federal Communications Commission on Thursday outlawed robocalls that use AI-generated voices to mimic political candidates and fool voters.

The commission unanimously adopted a declaratory ruling recognizing that calls made with AI-generated voices are "artificial" under the Telephone Consumer Protection Act (TCPA), a 1991 law restricting junk calls that use artificial or prerecorded voice messages. The FCC said the ruling gives state attorneys general new tools to go after those responsible for voice cloning scams.

The decision came days after New Hampshire Attorney General John Formella revealed that robocalls using an AI-generated clone of President Biden's voice, urging recipients not to participate in the Jan. 23 primary and instead save their votes for the November election, had been traced to two Texas companies.

Formella vowed potential civil and criminal action at the state and federal level. 



Jessica Rosenworcel, an FCC commissioner, speaks during a hearing on Capitol Hill, June 24, 2020. (Alex Wong/Pool via AP, File)

The FCC ruling, which takes effect immediately, makes the use of voice cloning technology in common robocall scams targeting consumers illegal.

"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice," FCC Chairwoman Jessica Rosenworcel said in a statement. "State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation."

These types of calls have increased in recent years as the technology has gained the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members, the FCC noted. Until now, state attorneys general could target only the outcome of an unwanted AI-voice robocall, such as the scam or fraud it sought to perpetrate, the commission explained. The action announced Thursday makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues through which state law enforcement agencies can hold perpetrators accountable.

Under the consumer protection law, telemarketers generally cannot use automated dialers or artificial or prerecorded voice messages to call cellphones, and they cannot make such calls to landlines without prior written consent from the call recipient. The new ruling classifies AI-generated voices in robocalls as "artificial," making them subject to the same restrictions.

Those who break the law can face steep fines, maxing out at more than $23,000 per call, the FCC said. 

The agency has previously used the consumer law to clamp down on robocallers interfering in elections, including imposing a $5 million fine on two conservative hoaxers for falsely warning people in predominantly Black areas that voting by mail could heighten their risk of arrest, debt collection and forced vaccination, according to The Associated Press. 

The law also gives call recipients the right to take legal action and potentially recover up to $1,500 in damages for each unwanted call.

Rosenworcel said the commission began looking at making robocalls with AI-generated voices illegal after seeing a rise in these types of calls. It sought public comment on the issue last November, and in January a bipartisan group of 26 state attorneys general wrote to the FCC urging it to move forward with a ruling.

Sophisticated generative AI tools, from voice-cloning software to image generators, already are in use in elections in the U.S. and around the world. Last year, as the U.S. presidential race got underway, several campaign advertisements used AI-generated audio or imagery, and some candidates experimented with using AI chatbots to communicate with voters.


The FCC said the decision on AI-generated robocalls sends a clear message that exploiting the technology to mislead voters won’t be tolerated. (AP Photo/Jacquelyn Martin, File)

Bipartisan efforts in Congress have sought to regulate AI in political campaigns, but no federal legislation has passed, with the general election nine months away.


New Hampshire Secretary of State David Scanlan said the Jan. 21 robocall that surfaced in his state two days before the primary was a form of voter suppression that cannot be tolerated.

"New Hampshire had a taste of how AI can be used inappropriately in the election process," Scanlan said. "It is certainly appropriate to try and get our arms around the use and the enforcement so that we’re not misleading the voting population in a way that could harm our elections."


New Hampshire Attorney General John Formella describes the investigation into robocalls that used artificial intelligence to mimic President Biden's voice and discourage people from voting in the state primary, on Tuesday, Feb. 6, 2024. (Amanda Gokee/The Boston Globe via AP)

Formella said Tuesday that investigators had identified the Texas-based Life Corp. and its owner Walter Monk as the source of the calls, which went to thousands of state residents, mostly registered Democrats. He said the calls were transmitted by another Texas-based company, Lingo Telecom. 

New Hampshire issued cease-and-desist orders and subpoenas to both companies, while the FCC issued a cease-and-desist letter to the telecommunications company, Formella said. A bipartisan task force of attorneys general in all 50 states and Washington, D.C., sent a letter to Life Corp. warning it to stop originating illegal calls immediately.

According to the FCC, both Lingo Telecom and Life Corp. have been investigated for illegal robocalls in the past. In 2003, the FCC issued a citation to Life Corp. for delivering illegal prerecorded and unsolicited advertisements to residential lines.


More recently, the task force of attorneys general has accused Lingo of being the gateway provider for 61 suspected illegal calls from overseas. The Federal Trade Commission issued a cease-and-desist order against Lingo’s prior corporate name, Matrix Telecom, in 2022. The next year, the task force demanded that it take steps to protect its network.

Lingo Telecom said in a statement Tuesday that it "acted immediately" to help with the investigation into the robocalls impersonating Biden and quickly identified and suspended Life Corporation when contacted by the task force. The company said it "had no involvement whatsoever in the production of the call content."

The Associated Press contributed to this report.