Chatbots can now talk, but experts warn they may be listening too
The new ChatGPT could be used to collect massive amounts of data on users
The popular artificial intelligence platform ChatGPT will now be able to respond to spoken words and images, causing concern among some experts who believe the application could lead to unwanted invasions of privacy.
OpenAI, the company behind ChatGPT, released the new version of the chatbot on Monday, allowing it for the first time to interact with users through spoken conversation, according to a report from The New York Times.
"We’re looking to make ChatGPT easier to use – and more helpful," Peter Deng, OpenAI’s vice president of consumer and enterprise product, told the New York Times.
Similar to popular platforms like Amazon's Alexa and Apple's Siri, users of ChatGPT will be able to talk to the application, which will respond with a spoken voice of its own. ChatGPT will also be able to respond to images, according to the report, which noted that users could upload a picture of their open refrigerator and ChatGPT would respond with recipe ideas based upon what is inside.
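For readers curious what the fridge-photo example might look like to a developer, here is a minimal sketch using the official OpenAI Python SDK. The model name "gpt-4o" and the file path are assumptions for illustration only, not details from the report; the consumer ChatGPT app handles all of this behind the scenes.

```python
# Illustrative sketch of the "photo of my fridge -> recipe ideas" use case.
# Assumes the official OpenAI Python SDK (pip install openai) and a
# vision-capable model name such as "gpt-4o"; names and availability vary.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_recipes(image_path: str) -> str:
    """Send a photo of an open refrigerator and ask for recipe ideas."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Here is my open fridge. Suggest three recipes I could make."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

print(suggest_recipes("fridge.jpg"))  # "fridge.jpg" is a hypothetical local photo
```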
While OpenAI has aggressively rolled out new AI tools in recent weeks, some experts questioned just how useful the updated version of ChatGPT would be.
"While voice detection and conversational capability sound impressive, it does not really improve the current capabilities of ChatGPT," Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital. "Essentially you can just dictate your Natural Language Processing commands instead of typing them."
Alexander also warned that the update raises privacy considerations for users, noting that it could enhance ChatGPT's potential to be used as a "surveillance tool."
"You now have ChatGPT listening to you and collecting additional information," Alexander said.
"When you are speaking to ChatGPT, it is learning how to better process voices by pitch, accent, etc." he added. "The training people provide the AI in voice may help ChatGPT develop incredibly realistic voice capabilities for AI personas in the very near future. This has great creative potential but could also make deep fakes exponentially more difficult to detect if you can use future voice technology as a result of this work."
Ziven Havens, policy director at the Bull Moose Project, expressed similar concerns, telling Fox News Digital that the application could be part of a growing trend to "collect unprecedented levels of data on Americans."
"As AI has grown, so too has the ability for companies like OpenAI to collect even more data, including the voices of Americans and the images they supply to ChatGPT," Havens said. "Congress must act to protect Americans from signing away their privacy in the name of innovation."
Jon Schweppe, policy director of American Principles Project, noted that the new development has "great creative potential," but warned that the new version "could also open the door for more deep fakes and make it exponentially more difficult to differentiate between AI voice technology and real human voices."
"The biggest concern here is obviously data collection. As ChatGPT 'trains' to better process voices by pitch, accent, etc., it is likely it will develop incredibly realistic voice capabilities for AI personas in the very near future," Schweppe said.
Although the new capabilities may remind users of Siri or Alexa, the report noted that the platforms rely on different technology. Alexa and Siri are programmed to perform a set number of tasks or give certain answers, while ChatGPT uses a large language model that can learn to generate new responses by analyzing large amounts of data from the internet.
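The distinction can be made concrete with a toy example (not Amazon's, Apple's, or OpenAI's actual code): a rule-based assistant can only answer commands it was explicitly programmed to handle, while a large language model composes a fresh response to arbitrary input.

```python
# Toy contrast between a rule-based assistant and an LLM-backed one.
# The canned responses and model name are assumptions for illustration.
from openai import OpenAI

# Rule-based style: a fixed lookup of supported commands.
CANNED_RESPONSES = {
    "what time is it": "It is 3:00 PM.",
    "set a timer": "Timer set for 10 minutes.",
}

def rule_based_assistant(command: str) -> str:
    # Anything outside the programmed list simply fails.
    return CANNED_RESPONSES.get(command.lower(), "Sorry, I can't do that.")

# LLM style: no fixed task list; the model generates an answer.
def llm_assistant(command: str) -> str:
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[{"role": "user", "content": command}],
    )
    return reply.choices[0].message.content

print(rule_based_assistant("plan a dinner party menu"))  # -> "Sorry, I can't do that."
print(llm_assistant("plan a dinner party menu"))         # -> an open-ended answer
```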
Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital that such abilities could open exciting new doors for consumers.
"I think this is exactly the type of consumer application that will be useful and have high take-up if designed well," Siegel said. "I like to call it having ‘an Angel on your shoulder.’ Spoken conversation with timely and useful information (like recipes, spoken reminders, information retrieval, and others) is probably one of the most interesting consumer applications and opens the door to much more useful assistants than Siri or Alexa could ever be."
Reached for comment by Fox News Digital, a spokesperson for OpenAI said that before releasing any new system, the company conducts "rigorous testing," engages "external experts for feedback," works "to improve the model's behavior with techniques like reinforcement learning with human feedback," and builds "broad safety and monitoring systems."
"While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals," the spokesperson added. "So we work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems. These steps minimize the possibility that our models might generate responses that include the personal information of private individuals."