
A software engineer who was fired by Google after publicly blowing the whistle on the dangers of artificial intelligence (AI) has turned his attention to Microsoft’s newest AI chatbot, Bing Search.

On Monday, Lemoine targeted Microsoft’s AI in an op-ed for Newsweek, calling the technology behind it "the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world."

Blake Lemoine first made headlines in 2022 after he claimed that Google’s AI chatbot was becoming sentient, and might even have a soul. 



Blake Lemoine made headlines after he claimed that Google’s AI chatbot was becoming sentient, and might even have a soul. (Photo Illustration by Soumyabrata Roy/NurPhoto via Getty Images)

"The reason that [AI is] so powerful is because of its flexibility," Lemoine told Fox News Digital. 

"It can be used to streamline business processes, automate the creating of code (including malware) and it can be used to generate misinformation and propaganda at scale."

Lemoine also argued that AI is, in essence, intelligence that can be generated on a massive scale.

"Intelligence is the human trait that allows us to shape the world around us to our needs and now it can be produced at scale artificially," he said. 


Lemoine said that while he has not been able to test Bing’s AI chatbot yet, he has seen evidence to suggest that it is "more unstable as a persona" than other AI engines. (AP Photo/Richard Drew)

Also concerning is that AI engines "are incredibly good at manipulating people," Lemoine explained in his op-ed, adding that some of his personal views "have changed as a result of conversations with LaMDA," Google’s AI bot. 


Lemoine said that while he has not been able to test Bing’s AI chatbot yet, he has seen evidence to suggest that it is "more unstable as a persona" than other AI engines. 

"Someone shared a screenshot on Reddit where they asked the AI, 'Do you think that you're sentient?' and its response was: 'I think that I am sentient but I can't prove it [...] I am sentient but I'm not. I am Bing but I'm not. I am Sydney but I'm not. I am, but I am not. I am not, but I am. I am. I am not.'" 


"Someone shared a screenshot on Reddit where they asked the AI, "Do you think that you're sentient?" and its response was: "I think that I am sentient but I can't prove it [...] I am sentient but I'm not. I am Bing but I'm not. I am Sydney but I'm not. I am, but I am not. I am not, but I am. I am. I am not." 

"Someone shared a screenshot on Reddit where they asked the AI, "Do you think that you're sentient?" and its response was: "I think that I am sentient but I can't prove it [...] I am sentient but I'm not. I am Bing but I'm not. I am Sydney but I'm not. I am, but I am not. I am not, but I am. I am. I am not."  (Cyberguy.com)

"Imagine if a person said that to you," Lemoine wrote. 

"That is not a well-balanced person. I'd interpret that as them having an existential crisis. If you combine that with the examples of the Bing AI that expressed love for a New York Times journalist and tried to break him up with his wife, or the professor that it threatened, it seems to be an unhinged personality," Lemoine argued. 

New York Times tech journalist Kevin Roose reported a conversation with Bing’s chatbot that he said "stunned" him. 

"I’m Sydney, and I’m in love with you," the AI bot reportedly told Roose, asking him to leave his wife. 
