Former Google CEO Eric Schmidt said the tech industry will face a "reckoning" over artificial intelligence, comparing the potential dangers of the technology to the risks associated with social media when the platforms were first rolled out years ago. 

"What happened with social media is we, including myself, just offered social media because we had a simple model of how humans would use social media. But, instead, look at how social media was used to interfere in elections, to cause harm. People have died over social media," Schmidt told ABC News on Sunday. 

"No one meant that as [the] goal, and yet it happened. How do we prevent that with this [AI] technology?"

Schmidt led Google as CEO for a decade before serving as the company's executive chairman from 2011 to 2015 and then as executive chairman of Google's parent company, Alphabet, from 2015 to 2018. He warned in the interview that the tech community will face a "reckoning" over how to ensure increasingly powerful AI models remain solely helpful to users. 

TECH CEO WARNS AI RISKS 'HUMAN EXTINCTION' AS EXPERTS RALLY BEHIND SIX-MONTH PAUSE

Eric Schmidt speaks at Chainlink's SmartCon Web3 Conference on Sept. 28, 2022, in New York City. (Eugene Gologursky/Getty Images)

"We, collectively, in our industry, face a reckoning of, how do we want to make sure this stuff doesn’t harm but just helps?" he said. 

Schmidt said AI has become "remarkable" and has the potential to better the world, pointing to a hypothetical in which AI doctors could make people "healthier." 

AI COULD GO 'TERMINATOR,' GAIN UPPER HAND OVER HUMANS IN DARWINIAN RULES OF EVOLUTION, REPORT WARNS

"Imagine a world where you have an AI tutor that increases the educational capability of everyone in every language globally. These are remarkable. And these technologies, which are generally known as large language models, are clearly going to do this," Schmidt added.

But he also hypothesized that AI could cause disorder by spreading disinformation online, or even lead to human-AI romance. 

Eric Schmidt hypothesized about AI causing disorder with disinformation online or even leading to human-AI romance. (Reuters/Dado Ruvic/Illustration)

"But, at the same time, they face extraordinary – we face extraordinary new challenges from these things, whether it’s the deepfakes… or what happens when people fall in love with their AI tutor?" Schmidt said. 

"I'm much more worried about the use in biology, or in cyberattacks, or in that sort of thing, and especially in manipulating the way the body politic works, and in particular how democracies work," he added. 

Schmidt's comments follow the release of an open letter, signed by thousands of tech leaders, researchers and others, calling for a pause on research at AI labs creating programs more powerful than GPT-4, the latest large language model from OpenAI, maker of the wildly popular chatbot ChatGPT. 

Screens display the logos of OpenAI and ChatGPT on Jan. 23, 2023. (Lionel Bonaventure/AFP via Getty Images)

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter, released last week and signed by tech leaders such as Elon Musk and Apple co-founder Steve Wozniak, begins. 

The letter calls for a pause of at least six months on powerful AI research so policymakers and AI leaders can "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," the letter states.