Experts predict that, in response to concerns about artificial intelligence, judges may take matters into their own hands and establish their own rules for how the technology is used in their courtrooms.
Last week, U.S. District Judge Brantley Starr of the Northern District of Texas may have made legal history when he required attorneys appearing in his court to certify that they did not draft their filings using artificial intelligence software, such as ChatGPT, without a human reviewing them for accuracy.
Judge Starr, a Trump appointee, told Reuters that “we’re at least putting lawyers on notice, who might not be on notice otherwise, that they can’t just trust those databases.” Attorneys, he added, need to independently verify AI output using a traditional legal database.
Experts who spoke with Fox News Digital said the judge’s decision to establish an AI pledge for attorneys is “excellent” and predicted that others will likely follow suit as tech companies compete to build platforms with ever greater AI capabilities.
Christopher Alexander, chief communications officer of Liberty Blockchain, said, “I think this is a great way to ensure that AI is used appropriately. The judge is merely applying the proverb ‘trust but verify.’”
“The risk for error or bias is probably too great,” Alexander continued, noting that legal research entails a great deal more than merely punching numbers into a calculator.
Starr said he created the requirement to show lawyers how AI can hallucinate and invent cases, warning that chatbots, unlike lawyers, do not take an oath to uphold the law.
“In their current states, these platforms are prone to bias and hallucinations. On hallucinations, they fabricate information, including quotations and references,” the notice read. “Unbound by any feeling of duty, honour, or justice, such programmes act in accordance with computer code instead of conviction, based on programming instead of philosophy,” it continued.

The judge was cautious in crafting his AI pledge requirement, according to Phil Siegel, founder of CAPTRS (Center for Advanced Preparedness and Threat Response Simulation), a nonprofit devoted to using simulation gaming and artificial intelligence to improve societal disaster preparedness. Siegel also noted that AI might someday play a role in the justice system.
According to Siegel, this is a reasonable stance for a judge to adopt at this time, since large language models hallucinate just as humans do. However, he added, “it won’t be long before more targeted datasets and models that address this issue appear” in most specialised fields, such as law, but also in finance, architecture, and elsewhere.
As an example, he described how an AI model might be trained on a dataset that compiles all case law and civil and criminal statutes by jurisdiction. Such databases can be built with citation markers that follow a specific convention, making it harder for a human or an AI to miscite or hallucinate, according to Siegel. They would also need a sound scheme to guarantee that cited laws match the relevant jurisdiction: even a legitimate citation cannot be used in court if it originates from an unrelated jurisdiction. Once this dataset and a trained AI are available, he said, the judge’s requirement will become irrelevant.
Because there is no legislation or regulatory framework in place for artificial intelligence, Starr’s requirement is not unusual, according to Aiden Buzzetti, head of the Bull Moose Project, a conservative nonprofit that aims “to identify, train, and develop the next generation of America-First leaders.”
“It’s completely understandable that individuals and institutions will create their own rules regarding the use of AI materials in the absence of proactive legislation to ensure the quality of AI-created products,” Buzzetti said. The longer legislators disregard the risks AI poses to other professions, he added, the more likely this trend becomes.
Starr devised his requirement after a judge in New York threatened to sanction a lawyer for using ChatGPT to write a court brief that cited fictitious cases. However, the Texas judge said that incident had no bearing on his decision; instead, he began developing his AI guidelines while participating in a panel discussion on the subject at a conference organised by the 5th Circuit U.S. Court of Appeals.
Leaders in other professions, such as British educators, have also taken action in response to their concerns about AI and the absence of laws governing the powerful technology. Last month, eight educators wrote a letter to The Times of London warning that while artificial intelligence could be a beneficial tool for students and teachers, they regard the technology’s hazards as the “greatest threat” to schools.
The educators are creating their own advisory board to weigh in on which aspects of AI teachers should disregard in their work.
“As leaders in state and independent schools, we regard AI as the greatest threat but also potentially the greatest benefit to our students, staff, and schools,” the alliance of U.K. teachers said in a statement to The Times. The group added that schools are bewildered by the rapid pace of development in AI and want reliable counsel on the best course of action, but do not know whom to trust.