
An education software company has developed a program it says schools and universities can use to detect whether students are using AI to complete their tests and essays, according to a new report.

The company, Turnitin, has a long history of developing tools educators can use to detect plagiarism. It has now turned to an AI-based system that it says can effectively determine whether students are responsible for their own work or whether they relied on an AI like ChatGPT.

Turnitin's tool isn't foolproof, however, according to a test conducted at the University of Southern California. Dr. Karen North, a professor at the university, found that while the tool catches a large share of AI-generated essays, some slip through and some authentic work is falsely flagged, according to a report from NBC News.


Some students across the country have turned to ChatGPT to falsify homework, creating a massive problem for educators. (MARCO BERTORELLO/AFP via Getty Images)

Education is just one of many areas in which experts say AI is already having, or will soon have, a massive impact.

Interest in AI exploded following the release late last year of OpenAI's ChatGPT, a conversational tool that users can ask to draft all sorts of written work, from college essays to movie scripts.


As advanced as it is, however, experts say ChatGPT is only the beginning of what AI can do. Given that potential, some industry leaders signed an open letter calling for a pause on development so that responsible limits and best practices can be put in place.

OpenAI CEO Sam Altman has said that safety is important in developing AI, but argued a pause in development is not the solution. (JASON REDMOND/AFP via Getty Images)


Nevertheless, Sam Altman, who leads OpenAI, argued last week that such a pause is not the correct way to address the issue.

"I think moving with caution and increasing rigor for safety issues is really important," he said in an interview. "The letter, I don't think is the optimal way to address it."