
Artificial intelligence could become so powerful that it replaces professional experts "in most domains" within the next decade, OpenAI CEO Sam Altman warned.

Altman, the chief of the AI lab behind popular platforms such as ChatGPT, published a blog post this week with two other OpenAI leaders, Greg Brockman and Ilya Sutskever, warning that "we must mitigate the risks of today’s AI technology."

"It’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations," reads the post, which was published on OpenAI’s website.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the post continued. 


Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on May 16, 2023. (Getty)

Altman and his fellow OpenAI executives compared artificial intelligence to nuclear energy and synthetic biology, arguing that such powerful technologies require "special treatment and coordination" for regulation to be effective. They suggested that a version of the International Atomic Energy Agency will be needed to regulate the "superintelligence" technology.

"Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," they wrote.

Altman appeared before Congress this month to discuss how to regulate artificial intelligence, saying he welcomes U.S. leaders to craft such rules. Following the hearing, Altman provided examples of "scary AI" to Fox News Digital, which included systems that could design "novel biological pathogens."


"An AI that could hack into computer systems," he said. "I think these are all scary. These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important."

Fears have spread among some tech experts and leaders, as well as members of the public, that artificial intelligence could grow so knowledgeable and powerful that it could wipe out society.

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto on Dec. 4, 2017. (Reuters/Mark Blinch/File)

The "godfather of artificial intelligence," computer scientist Geoffrey Hinton, for example, warned last month that "it's not inconceivable" that AI could wipe out humanity. His remarks came after he quit his job at Google, saying he regretted his life's work because of how AI can be misused.


In Altman’s post on OpenAI, he and the other tech leaders argued that AI developers must coordinate to "ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society" while individual AI firms "should be held to an extremely high standard of acting responsibly."

The tech leaders also addressed why they are creating such powerful technology when they themselves admit it has drastic potential pitfalls for society, saying the world could see "astonishing" economic prosperity and a boost in quality of life.

Digital image of the brain on the palm using artificial intelligence technology. (iStock)

"We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us," they wrote.

Fox News Digital previously spoke to economist Peter St. Onge, who compared the proliferation of artificial intelligence to the Industrial Revolution, which ultimately led to the U.S. becoming a more prosperous nation as Americans moved from working on farms to new industries.


"Throughout history, we've gone through tremendous technological revolutions. Generally, technologies kill jobs," St. Onge, with the Heritage Foundation, told Fox News Digital last month. "What happened? Well, you know, we had lots of new jobs. Almost nobody today works on a farm."

The OpenAI leaders added that there is no stopping the creation of superintelligence, and that now is the time to ensure it is built and used safely.

"We believe it would be unintuitively risky and difficult to stop the creation of superintelligence," they wrote. "Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right."

This week, Google chief Sundar Pichai also weighed in, arguing that AI must be regulated because it is "too important" to be left without guardrails.

Google CEO Sundar Pichai appears before the House Judiciary Committee to be questioned about the internet giant's privacy, security and data collection on Capitol Hill in Washington, D.C., on Dec. 11, 2018. (AP Photo/J. Scott Applewhite)

"Developing policy frameworks that anticipate potential harms and unlock benefits will require deep discussions between governments, industry experts, publishers, academia and civil society," Pichai wrote in an op-ed for the Financial Times. "Legislators may not need to start from scratch: existing regulations provide useful frameworks to manage the potential risks of new technologies."


Similar to OpenAI’s post, Pichai called for "cooperation" and coordination when it comes to regulation, specifically the U.S. and European leaders working together "to create robust, pro-innovation frameworks for the emerging technology, based on shared values and goals."

The OpenAI ChatGPT app on the App Store website is displayed on a screen, and the OpenAI website on a phone screen, in this illustration photo created on May 18, 2023. (Jakub Porzycki/NurPhoto)

Earlier this year, thousands of tech leaders and experts signed an open letter calling for a six-month pause on research at AI labs working on platforms more powerful than OpenAI’s GPT-4. No pause ever came to fruition, but the letter ignited a debate among tech leaders and lawmakers over how to keep the technology from spiraling into the potential demise of society.


"AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more," Pichai concluded. "Yet we are still in the early days, and there’s a lot of work ahead."