
One of the tech CEOs who signed a letter calling for a six-month pause on AI labs training powerful systems warned that such technology threatens "human extinction."

"As stated by many, including these models' developers, the risk is human extinction," Connor Leahy, CEO of Conjecture, told Fox News Digital this week. Conjecture describes itself as working to make "AI systems boundable, predictable and safe."

Leahy is one of more than 2,000 experts and tech leaders who signed a letter this week calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." The letter is backed by Tesla and Twitter CEO Elon Musk, as well as Apple co-founder Steve Wozniak, and argues that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

Leahy said that "a small group of people are building AI systems at an irresponsible pace far beyond what we can keep up with, and it is only accelerating."


Tesla and SpaceX Chief Executive Officer Elon Musk speaks at the SATELLITE Conference and Exhibition in Washington, Monday, March 9, 2020. (AP Photo/Susan Walsh, File)

"We don't understand these systems, and larger ones will be even more powerful and harder to control. We should pause now on larger experiments and redirect our focus towards developing reliable, bounded AI systems."

Leahy pointed to previous statements from Sam Altman, CEO of OpenAI, the lab behind GPT-4, the latest deep learning model, which OpenAI says "exhibits human-level performance on various professional and academic benchmarks."


Sam Altman speaks at the Wall Street Journal Digital Conference in Laguna Beach, California, October 18, 2017. (REUTERS/Lucy Nicholson/File Photo)

Leahy noted that earlier this year, Altman told Silicon Valley media outlet StrictlyVC that the worst-case scenario for AI is "lights out for all of us."

Leahy said that even as far back as 2015, Altman warned on his blog that "development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

The central argument for pausing AI research at the labs is to give policymakers and the labs themselves time to develop safeguards, allowing researchers to keep advancing the technology without the reported threat of upending lives across the world with disinformation.

The OpenAI ChatGPT about page displayed on a laptop in the Brooklyn borough of New York City, Thursday, Jan. 12, 2023. (Gabby Jones/Bloomberg via Getty Images)

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter states. 


Currently, a handful of AI bills are pending in Congress, some states have tried to tackle the issue, and the White House has published a blueprint for an "AI Bill of Rights." But experts Fox News Digital previously spoke to said companies currently face no consequences for violating such guidelines.

Screens displaying the logos of OpenAI and ChatGPT, photographed on January 23, 2023. ChatGPT is a conversational artificial intelligence application developed by OpenAI. (LIONEL BONAVENTURE/AFP via Getty Images)

When asked whether the tech community is at a critical moment to pull the reins on powerful AI technology, Leahy said that "there are only two times to react to an exponential."


"Too early or too late. We’re not too far from existentially dangerous systems, and we need to refocus before it’s too late."


"I hope more companies and developers will be on board with this letter. I want to make clear that this only affects a small section of the tech field and the AI field in general: only a handful of companies are focused on hyperscaling to build God-like systems as quickly as possible," Leahy told Fox News Digital.

OpenAI did not immediately respond to Fox News Digital regarding Leahy's comments on AI risking human extinction.