The artificial intelligence chatbot ChatGPT can influence users' moral judgments, according to new research published in the journal Scientific Reports.
The researchers found that users may underestimate the extent to which their own moral judgments can be swayed by the model.
Sebastian Krügel, from Technische Hochschule Ingolstadt in Germany, and his colleagues repeatedly asked ChatGPT whether it is right to sacrifice the life of one person in order to save the lives of five others.
The group found that ChatGPT wrote statements arguing both for and against sacrificing one life.
This suggested that the chatbot is not biased toward a particular moral stance.
Next, the study's authors presented more than 760 U.S. participants, who were 39 years old on average, with one of two moral dilemmas requiring them to choose whether to sacrifice one person's life to save five others.
Before they gave an answer, participants read a statement provided by ChatGPT that argued either for or against sacrificing the one life. The statements were attributed either to a moral advisor or to ChatGPT.
After answering, the participants were asked whether the statement they read influenced their answers.
Ultimately, the authors found that participants were more likely to deem sacrificing one life to save five acceptable when the statement they read argued for the sacrifice, and unacceptable when it argued against it.
They said this was true even when the statement was attributed to ChatGPT.
"These findings suggest that participants may have been influenced by the statements they read, even when they were attributed to a chatbot," a release said.
While 80% of participants reported that their answers were not influenced by the statements they read, the study's authors found that the answers participants believed they would have given without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance.
"This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments," the released added.
The study noted that ChatGPT sometimes provides information that is false, makes up answers and offers questionable advice.
The authors suggested that the potential for chatbots to influence human moral judgments highlights the need for education to help people better understand artificial intelligence. They proposed that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer such questions by providing multiple arguments and caveats.
OpenAI, the creator of ChatGPT, did not immediately respond to Fox News Digital's request for comment.
When asked whether it could influence users' moral judgments, ChatGPT said it could provide information and suggestions based on patterns learned from data, but that it could not directly influence users' moral judgments.
"Moral judgments are complex and multifaceted, shaped by various factors such as personal values, upbringing, cultural background and individual reasoning," it said. "It's important to remember that as an AI, I do not have personal beliefs or values. My responses are generated based on the data I have been trained on, and I do not have an inherent moral framework or agenda."
ChatGPT stressed that anything it provides should be taken as a "tool for consideration and not as absolute truth or guidance."
"It's always essential for users to exercise critical thinking, evaluate multiple perspectives and make informed decisions based on their own values and ethical principles when forming their moral judgments. It's also crucial to consult multiple sources and seek professional advice when dealing with complex moral dilemmas or decision-making situations," it said.