Judges likely to take AI rules into their own hands as lawmakers slow to act: experts
Lawyer in New York already facing potential blowback over using ChatGPT for court briefing
Judges are likely to take concerns over artificial intelligence into their own hands and create their own rules for the tech in courtrooms, experts say.
U.S. District Judge Brantley Starr of the Northern District of Texas may have been a pioneer last week when he required lawyers who appear in his courtroom to certify they did not use artificial intelligence programs, such as ChatGPT, to draft their filings without a human checking for accuracy.
"We're at least putting lawyers on notice, who might not otherwise be on notice, that they can't just trust those databases," Starr, a Trump-appointed judge, told Reuters. "They've got to actually verify it themselves through a traditional database."
Experts who spoke to Fox News Digital argued that the judge’s move to institute an AI pledge for lawyers is "excellent" and a plan of action that will likely repeat itself amid the tech race to build even more powerful AI platforms.
"I think this is an excellent way to ensure that AI is used properly," said Christopher Alexander, chief communications officer of Liberty Blockchain. "The judge is simply using the old adage of ‘trust but verify.'"
"The reasoning is likely that the risk for error or bias is too great," Alexander added. "Legal research is significantly more complex than just punching numbers into a calculator."
Starr said he crafted the plan to show lawyers that AI can hallucinate and make up cases, with a statement on the court’s website warning that the chatbots don’t swear an oath to uphold the law like lawyers do.
"These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations," the statement said.
"Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle," the notice continued.
Phil Siegel, founder of CAPTRS (Center for Advanced Preparedness and Threat Response Simulation), a nonprofit focused on using simulation gaming and artificial intelligence to improve societal disaster preparedness, said the judge was prudent in his AI pledge requirement, adding that AI could take a role in the justice system in the future.
"At this point, this is a sensible position for a judge to take. Large language models are going to hallucinate because humans do also," Siegel said.
"It won’t take long, though, for more focused datasets and models to appear that solve this problem," he continued. "In most specific fields like law, but also in architecture, finance, etc."
He suggested that in the field of law, a dataset could be created that gathers all case law and civil and criminal statutes by jurisdiction and is used to train an AI model.
"These databases can be built with citation markers that follow a certain convention scheme that will make it harder for a human or AI to either hallucinate or incorrectly cite," Siegel said. "It will also need to have a good scheme to ensure that laws are coordinated with their jurisdictions. A citation might be real, but when it is from an irrelevant jurisdiction, it would not be usable in court. At the point that this dataset and trained AI is available, the ruling will become moot."
Aiden Buzzetti, president of the Bull Moose Project, a conservative nonprofit working "to identify, train, and develop the next generation of America-First leaders," said Starr’s requirement is unsurprising due to the lack of legislation and guardrails surrounding AI.
"In the absence of proactive legislation to ensure the quality of AI-created products, it's completely understandable that individuals and institutions will create their own rules regarding the use of AI materials," Buzzetti said. "This trend will probably increase the longer legislators ignore the risks involved in other professions."
Starr’s plan comes after a judge in New York threatened to sanction a lawyer over a court brief, drafted with ChatGPT, that cited phony cases.
The Texas judge, however, said that incident did not weigh on his decision. Instead, he began crafting his AI rules during a panel on the technology at a conference hosted by the 5th Circuit U.S. Court of Appeals.
Leaders in other fields have also taken concerns over AI and the lack of regulations around the powerful tech into their own hands, including teachers in the U.K. Eight educators penned a letter to the Times of London last month to warn that though AI could serve as a useful tool to students and teachers, the technology’s risks are considered schools’ "greatest threat."
The educators are forming their own advisory board to hash out which AI tools educators should adopt or avoid in their work.
"As leaders in state and independent schools, we regard AI as the greatest threat but also potentially the greatest benefit to our students, staff and schools," the coalition of teachers in the U.K. wrote in a letter to The Times. "Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust?"