Artificial intelligence is sparking concerns about plagiarism in schools worldwide. Still, the evolving technology offers tremendous benefits for creators and could soon be accepted in the classroom alongside tools like the calculator, according to professors and AI experts.

Harvard Business School Assistant Professor Edward McFowland III compared generative AI like ChatGPT to other educational tools, such as the calculator and Wikipedia, saying it shares the former's benefits and the latter's disadvantages. While a user-friendly tool like ChatGPT can output responses and calculations at an incredibly efficient pace, it also draws on a broad swath of information with varying degrees of accuracy.

ChatGPT has already been found to produce questionable results, with papers and responses sometimes including significant statistical or historical errors. McFowland said one of the major concerns of this type of AI is that its sophistication convinces people that it is truly intelligent, prompting some to rely on its information without evaluating other sources.

He also said there is tremendous concern in academia about how students and educators can understand why and from where the model gets its information, and how it forms its perspective on topics. Such a concern is not exclusive to artificial intelligence and has long been discussed in various contexts. He said it might take time for the tool to be generally accepted into academia.

Edward McFowland III is an Assistant Professor in the Technology and Operations Management Unit at Harvard Business School.  (Fox News)

"Is it using reliable sources and how do we decide what a reliable source is?" he said.

Everyone who spoke with Fox News Digital drew connections between AI and other educational tools. They noted that one must learn to add, subtract and grasp the basics of mathematics before a calculator is useful. In the same way, one must have foundational knowledge to know what to ask an AI.

Marc Beckman, an adjunct professor and senior fellow at New York University (NYU), told Fox News Digital that there will always be tension built into the relationship between an educator and a student who wants to be creative, exemplified in the discourse surrounding AI products like ChatGPT. Teachers want to let their students spread their wings while also keeping them from taking shortcuts that could hinder their education.

Beckman asserted that people need to learn how to wield the technology to make massive creative advancements. Furthermore, he argued, refusing to embrace AI, or overregulating it, could pose a bigger societal problem: stifling innovation and progress in areas of business pertinent to economic growth.

He added that restrictions imposed on the curious learner could have a "chilling effect" on the accelerated pace of innovation needed to compete and thrive in the near future.

"To restrict the next generation from using an AI, I think, is a mistake," he said.

DMA United Founding Partner, NYU adjunct professor and senior fellow Marc Beckman spoke with Fox News Digital about the role of AI in education.  (Marc Beckman/NYU)

McFowland also highlighted concerns about moving too slowly or too quickly, telling Fox News Digital, "the question we are wrestling with, and may not even understand yet, is: what is too fast? We have speed limits on the road for a reason. If you go too slow or fast, you'll have some issues."

Beckman noted that instructors must ensure that their students have full foundational knowledge so they know how to engage with the tools at their disposal.  

"Me, certainly, as a professor, I'm going to create certain mechanisms that will essentially push my students to naturally build a strong depth of knowledge and give them that foundation without the technology," he said.

He also warned that students who use ChatGPT must be wary and cross-reference its output, since such systems often draw only on the most widely available information.

"They're still going to have to do their own research at this stage. It doesn't just kick off all the information, the newest information, and the best information. The technology is definitely just not there yet," he said.

McFowland, who works in Harvard's Technology and Operations Management department with an area of study in artificial intelligence, said students should use the tool as a starting point for research or writing rather than the finished product.

He noted that synthesizing the work of others and then building on that is an essential skill for students to have in their field of study.

McFowland also pushed back on concerns that AI could one day replace the role of the teacher in a classroom. He noted that while it could act as a substitute when students are asking questions to better understand a topic or key aspects of objective fields, like the sciences, there is far too much subjectivity in other academic areas for current AI models to compete with their human counterparts.

Additionally, McFowland said we are getting to a point where the ability to ask the right questions of an AI to get the information that helps one learn is becoming a valuable skill in and of itself.

Beckman said he does not believe generative AIs on the market like ChatGPT can offer more than a surface-level understanding of complex topics like cryptocurrency, blockchain and the Metaverse. However, as the underlying neural networks grow exponentially, the technology will become "super compelling" as a tool, he noted.

"AI is going to push us into this new movement, what I call the age of the creator and I think AI will serve as the foundation for filmmakers, musicians, writers, fine artists, but also scientists and those looking to cure disease," he said.

OpenAI DALL·E 2 seen on a mobile device, with an AI brain seen on screen, on Jan. 22, 2023, in Brussels, Belgium. (Photo by Jonathan Raa/NurPhoto via Getty Images)

For example, Beckman pointed to the rapid development of mRNA vaccines as a way AI can help accelerate breakthroughs in science and medicine, such as preventing illness and disease.

Speaking with the MIT Sloan School of Management and Technology Review in 2022, Moderna Chief Data and AI Officer Dave Johnson explained how the pharmaceutical company utilized AI to reduce the timeline necessary to create new drugs and vaccinations.

One of the things that impeded Moderna's production timetable was creating enough small-scale mRNA to run various experiments. So the company added robotic automation, digital systems, process automation and AI algorithms to speed up the process. The resulting infrastructure could produce a thousand mRNAs in a month, where previously it made only 30, with better consistency in quality.

Despite the benefits, there are also concerns students and professionals should keep in mind.

New York-based legal ethics lawyer David A. Lewis said that he had seen an increase in cases in which people seeking admission to the Bar must address prior educational disciplinary issues resulting from tools like ChatGPT.

He said that despite the incredibly sophisticated nature of AI and a user's ability to push a button and get a work product, teachers can most often tell when a student has used prohibited resources.

While he considered AI "very problematic" in a completely online class with zero professor interaction, he said the software is not as big a threat to academic integrity when interaction is involved. Teachers often know when a paper shows a massive jump in understanding versus the knowledge the student exhibited in class.

"They can tell when students submit a paper first class A-plus, and then when asked to speak about the topic, they're not even able to approach that level of comprehension," he said.

Legal ethics lawyer David A. Lewis, Esq., detailed some of the things that students and teachers need to watch out for when it comes to the use of generative AI like ChatGPT. (Fox News)

He warned students that using ChatGPT or other prohibited generative AI on schoolwork poses a considerable risk regarding academic integrity violations. He added that the probability of being detected, whether it's by software or a professor, is substantial.  

According to Lewis, education about the technology is beneficial. Still, regardless of your intent, if a code of conduct or ethical regulation says you cannot use outside resources, you will have to deal with the consequences of violating it.

"Like most technology, it has the ability to do tremendous good and also tremendous harm and your best defense is to understand it when you're using it to know what the risks are and what the advantages are," he said

Lewis said it is also important to discern how people come to use generative AI and similar technologies. Some stumble upon it and do not understand the implications for plagiarism. On the other end of the spectrum, a bad-faith actor will purposefully use the technology to misrepresent something as their own original work or thoughts.

He noted that misrepresentation poses several issues outside the classroom, such as liability ramifications in civil contexts. To avoid these situations, Lewis said disclosing when AI is being used is integral.

"It may well be that we get to a point where using a bot that takes advantage of artificial intelligence to create some work product is perfectly acceptable as long as there's full disclosure," he said.

But right now, the technology is potentially susceptible to biases the user is unaware of and may produce false information.

"Blindly relying on it seems to me, both professionally and legally, to be a dangerous mistake," he said.