Authors: Juan Vasquez and Dr. Roberto Rosas
Date: March 15, 2025
Artificial intelligence is rapidly making its mark on the legal profession. From automating document review to drafting briefs, generative AI tools promise unprecedented efficiency. In fact, a 2023 survey found that nearly 63% of lawyers had already experimented with AI in their work and about 12% were using it regularly.1 But alongside the excitement comes a sobering warning: AI sometimes generates information that is completely false yet looks convincingly authentic. In tech parlance, these mistakes are called “AI hallucinations.” And for lawyers, an AI hallucination isn’t just a glitch; it can be a career-threatening disaster.
AI hallucinations occur when an AI model produces an answer or citation that seems plausible but is actually fabricated or incorrect.2 The AI model will often deliver the misinformation confidently, with no hint that it’s anything but factual. This poses a unique danger in the practice of law. Attorneys are bound by strict ethical and professional duties to provide truthful, accurate information to courts and clients. If an AI tool hallucinates a non-existent case or a false legal principle, and a lawyer fails to catch it, the result could be a misled client, a reprimand from the court, sanctions and malpractice liability, or even the loss of the lawyer’s license. As one legal tech expert put it, the risk of AI “producing inaccurate or misleading information” is a key reason many lawyers remain hesitant about embracing the technology.3
Why do these hallucinations happen? The leading AI systems today (such as OpenAI’s GPT-4o and o3 models, Anthropic’s Claude 3.5 Sonnet, and other large language models) generate text by predicting likely word sequences based on patterns in vast training data. They do not actually consult a real-time database of verified facts each time they answer. As a result, generative AI is “known to confidently make up facts,” because it relies on statistical patterns rather than truth-checking.1 In casual settings, a made-up fact from ChatGPT might be trivial or even amusing. In a legal brief or a court hearing, however, it’s no laughing matter.
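For readers who want to see the mechanism rather than take it on faith, the toy sketch below illustrates the basic idea in a few lines of Python. It is a deliberately simplified illustration, not how any production model is actually built: the word lists and probabilities are invented for this example. The point is that generation is just repeated sampling of statistically likely next words, with no step anywhere that checks the output against a source of verified facts.

```python
import random

# Toy "language model": for each word, an invented probability distribution over
# plausible next words. Real models learn billions of parameters from text, but
# the core loop is the same: pick what is statistically likely, not what is true.
NEXT_WORD_PROBS = {
    "<start>": {"The": 0.6, "A": 0.4},
    "The": {"court": 0.7, "plaintiff": 0.3},
    "A": {"court": 0.5, "plaintiff": 0.5},
    "court": {"held": 1.0},
    "plaintiff": {"argued": 1.0},
    "held": {"in": 1.0},
    "argued": {"in": 1.0},
    "in": {"Smith": 0.5, "Jones": 0.5},   # fabricated case names look just as "likely"
    "Smith": {"v.": 1.0},
    "Jones": {"v.": 1.0},
    "v.": {"Acme": 0.5, "Avianca": 0.5},
    "Acme": {"<end>": 1.0},
    "Avianca": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Sample a sentence one word at a time from the toy distributions."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word, {})
        if not choices:
            break
        words, weights = zip(*choices.items())
        word = random.choices(words, weights=weights)[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "The court held in Smith v. Avianca" -- fluent, confident, fabricated
```

Nothing in that loop knows whether “Smith v. Avianca” exists; fluency and factual accuracy are simply different properties, which is why verification has to come from the human reading the output.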
In this article, we will explore why AI hallucinations pose a very real risk for lawyers. We’ll look at eye-opening examples to see how these errors have already tripped up legal professionals. We’ll discuss why even the most advanced AI models are never 100% safe from hallucinating, and why no prudent attorney wants to be the unwitting victim of an AI model’s first big blunder. Crucially, we will explain why any AI-generated content must be thoroughly reviewed and verified before being relied upon in legal practice. Finally, we’ll outline strategies to mitigate the risks, including verification best practices and cautious adoption of AI tools, and examine the role of education in helping lawyers use AI responsibly. Organizations like 3ITAL (International Institute for Intelligent Technology Adoption in the Law) are leading the way with resources and training to ensure that attorneys can harness AI’s power safely and ethically.
The bottom line is that AI can be a powerful ally for lawyers, but only if we approach it with eyes wide open to its hallucination problem. Let’s delve into what that means in practice.
Hallucinations in Legal Practice: Cautionary Tales
Not long ago, the legal world got a dramatic wake-up call about AI’s pitfalls. In June 2023, a federal judge in New York sanctioned two lawyers after their brief cited six fictitious court decisions that didn’t exist at all, except in the imagination of an AI chatbot.1 The case, Mata v. Avianca, became instant legal folklore: the attorneys had used ChatGPT to help write a brief, and it produced fake judicial opinions with realistic-sounding names and citations. The lawyers, who failed to verify the references, later told the court they were “unaware that ChatGPT could fabricate cases out of whole cloth.” The judge was not amused, calling the misuse of AI an act of “conscious avoidance” of truth, and imposed a $5,000 fine.1 This “ChatGPT lawyer” incident was one of the first highly publicized AI hallucinations in law, and it vividly demonstrated the professional peril of trusting an unchecked AI output.
That cautionary tale was just the beginning. In the two years since, at least seven similar cases have surfaced across the United States where attorneys faced court scrutiny or discipline due to AI-generated falsehoods.1 For example, in early 2025, a large personal injury firm’s internal alarm bells rang when a federal judge in Wyoming discovered bogus case citations in a court filing against a major retailer. The filing, it turned out, had been drafted with the help of an AI tool that invented supporting cases out of thin air. The judge threatened sanctions.1 The law firm (Morgan & Morgan, with over a thousand attorneys) responded by blasting out an urgent firm-wide email.4 Its message, in essence, was that AI can invent fake case law, and if you submit AI-fabricated cases to a court, it could cost you your job.1 In other words, using an AI without verification was now a fireable offense in the firm.
Even seasoned lawyers and high-profile matters have not been immune. In one federal case involving Michael Cohen, the former personal lawyer and fixer to a U.S. president, an attorney inadvertently submitted a brief containing fake citations provided by Google’s AI tool, Bard. The lawyer (and Cohen, who had supplied the AI output to his counsel) avoided sanctions, but the judge called the episode “embarrassing,” a warning never to trust an unverified source, even one supplied by a famous client armed with a new AI gadget.5 In another incident, a Texas attorney relying on the Claude AI model filed a court document that quoted nonexistent precedents. Opposing counsel and the judge quickly spotted the fake cases. The lawyer was fined $2,000 and ordered to attend a legal ethics course on using generative AI.6
Alarmingly, AI hallucinations have even crept into expert testimony. In a recent Minnesota case, a party’s expert witness, brought in for his technical expertise, submitted a written declaration that cited several authoritative-sounding sources. Only under scrutiny did it emerge that some of those citations were entirely AI-fabricated. The “expert” had unknowingly included references provided by ChatGPT or a similar tool without verifying them. The judge in that case said the misstep “destroyed [the expert’s] credibility with the court.”7 If even expert consultants can be tripped up by AI, lawyers must be all the more vigilant in vetting anything that crosses their desks.
These examples, both high-profile and under-the-radar, all convey the same lesson: AI hallucinations are a real threat in legal practice, here and now. No matter the court or the context, from personal injury claims to complex litigation, unverified AI output can land a lawyer in hot water. The reputational damage is bad enough; in some instances, the offending lawyers had to explain themselves in public hearings, facing embarrassment and potential discipline. Beyond that, there’s the very real harm to clients. A case can be thrown out or delayed because of a fake citation, or an otherwise winning argument can be undermined by a single fictitious quote. As one retired judge observed after seeing multiple such incidents, the continued submission of unverified AI-generated content “threatens the integrity of our judicial system” and raises serious questions about lawyers’ technological competence.8 In short, these cautionary tales show how a seemingly small AI slip-up can mushroom into a professional and legal fiasco.
State-of-the-Art, But Not Infallible: The Risk Never Reaches Zero
One might hope that these incidents are simply the result of lawyers using older or unreliable AI tools, and that newer, more advanced models have solved the hallucination problem. It’s true that AI language models have improved by leaps and bounds. The latest models (such as OpenAI’s GPT-4o and o3, Google’s Gemini 2.0, and Anthropic’s Claude 3.5 Sonnet, to name a few) are far more accurate than their predecessors.9 In fact, GPT-4 has been observed to hallucinate much less frequently than earlier models (one analysis found it fabricated information in roughly 3% of responses, compared to much higher rates for others).10 Certain legal tech companies have even started marketing specialty AI research assistants that claim to “avoid hallucinations” or be “hallucination-free” when fetching case law.11
The reality, however, is that no AI system is 100% infallible. Researchers widely acknowledge that completely eliminating AI hallucinations is effectively impossible.12 Hallucinations are not just bugs that can be patched; they are a byproduct of how generative AI works. As one detailed academic study put it, “it is impossible to eliminate hallucination in LLMs” because at some fundamental level these models will always have gaps and make assumptions beyond their training.12 In practical terms, this means there will always be some risk, however small, that a generative AI will produce a false statement or citation, especially when pushed into areas of uncertainty or ambiguous queries. And if there’s even a one-in-a-thousand chance of a serious error, that’s enough to keep a prudent lawyer up at night. After all, someone will eventually be that unlucky “1 in 1000” case. It is doubtful any attorney wants to be the first person to discover a catastrophic flaw in a tool that was thought to be bulletproof.
In fact, independent evaluations of cutting-edge legal AI tools have revealed non-trivial error rates. A 2024 Stanford University study tested the citation accuracy of two prominent AI-driven legal research systems (from major vendors) and found that these supposedly state-of-the-art tools still produced incorrect or unsupported legal information in about 17–34% of test queries.11 In at least one out of every six queries, the AI gave an answer that either cited a case that didn’t actually support the proposition or simply made something up, despite the vendors’ promises of rigorous accuracy. The takeaway is that even a specialized “lawyer’s AI” that really does perform better than a general chatbot will still hallucinate sometimes.
It’s also worth noting that hallucination frequency can vary depending on the task. For straightforward questions of law, the latest models might rarely go off-base. But pose a complex, novel legal question or an unusual fact pattern, and the AI could be more prone to wander outside the lines of truth. AI might also struggle when the available training data is sparse or doesn’t neatly fit the query, leading it to improvise. The bottom line is that as AI gets more reliable, hallucinations may become rarer, but they will not disappear entirely.12 Even a 1% error rate is unacceptable if that error involves citing a non-existent case to the Supreme Court or misquoting a statute in a client memo. Lawyers operate in a realm where accuracy isn’t just preferred; it’s mandatory.
This inherent uncertainty means attorneys must approach AI outputs with a healthy skepticism. You could use the most advanced legal AI platform on the market for months without a single mistake, but you cannot assume it will never falter. “Trust, but verify” is the only reasonable motto. As the saying goes in aviation, “there are old pilots and bold pilots, but no old, bold pilots.” The legal equivalent might be: there are successful lawyers and careless lawyers, but no successful careless lawyers. No matter how “smart” or vaunted an AI tool may be, if it’s generating content, there is always a non-zero chance it could be wrong. The responsible lawyer will never forget that.
Why AI-Generated Legal Content Must Be Verified (Every Time)
In light of the above, it should be clear that any content produced by AI, be it a suggested case citation, a draft contract clause, or a legal brief, must be reviewed and verified by a human attorney before it is used in practice. There are no exceptions to this rule. Legal ethics and common sense both demand it.
From an ethical standpoint, a lawyer cannot hide behind an AI’s mistake. Professional rules universally require attorneys to exercise independent judgment and due diligence in their work. These ethical rules require lawyers to vet and stand by their court filings or risk being disciplined. In fact, the American Bar Association has made it explicit that a lawyer’s duty of competence and candor extends to material prepared with the help of AI. The ABA advised its members last year that attorneys are responsible for “even an unintentional misstatement” generated by an AI tool.13 In other words, if an AI-written memo your firm produces contains a falsehood, you’re on the hook for it just as if you wrote it yourself. The Illinois Supreme Court recently underscored this point in a policy on AI, stating: “All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations.”8 The policy further emphasizes that lawyers (and even self-represented litigants) must understand the capabilities and limitations of any AI tool they use.8 Simply put, no court will accept “the computer made me do it” as a valid excuse. If it’s filed under your name, you are accountable.
The practical reasons for verification are equally compelling. AI does not know when it’s hallucinating. It won’t blink or stutter or show any of the telltale signs a human might when unsure. On the contrary, AI often delivers incorrect answers in polished prose and a confident tone. That means the onus is entirely on the attorney to catch errors. Every citation an AI provides should be cross-checked in a legal database. Every “quote” from a case or statute should be located and read in the source material to ensure it’s real and in context. This might sound onerous; after all, the hope of AI is to save work, not duplicate it. But verification is a necessary checkpoint. Skipping it is like relying on a paralegal’s research memo without bothering to see if the cited cases support the conclusions; no careful lawyer would do that.
Unfortunately, as we’ve seen, some attorneys did skip this step, perhaps lulled by the apparent authority with which the AI presented information. The sanctions and embarrassments that followed are now part of legal lore. Suffolk Law School Dean Andrew Perlman did not mince words about such situations: when lawyers blindly use ChatGPT (or any AI) to create citations without checking them, “that’s incompetence, just pure and simple,” he said.1 Strong words, but they echo the sentiment of judges who have had to contend with AI-driven mistakes. The message is clear: verifying AI output is not optional. Failing to do so may violate ethical duties of competence, diligence, and candor toward the tribunal.
To be fair, most lawyers are already verification hawks by training. The idea of citing a case you haven’t read is anathema to good practice. Using AI doesn’t change that norm; it just adds a new layer of vigilance. Think of an AI like a junior associate with great potential but no sense of the stakes, a sort of extremely well-read but occasionally fanciful first-year law student. You would never let a first-year send out work to a client or court under your name without rigorous review and editing. Treat AI the same way: as a smart but inexperienced first-year lawyer who requires significant oversight. Use the AI’s work as a starting point or a sounding board, not as gospel. If something looks off, investigate further. If something looks good, still cross-check it. Trust your instincts and legal training: if an AI’s answer surprises you or seems too pat, dig deeper rather than taking it at face value.
Ultimately, integrating AI into legal work has many benefits, but it also raises the bar for lawyer oversight. The tools may be new, but the lawyer’s role as the final gatekeeper of accuracy remains unchanged. In fact, that role becomes even more critical. As the Illinois Supreme Court and others have highlighted, the human lawyer must stay in full control of the outcome, which means reviewing, verifying, and, if necessary, correcting the AI’s contributions at every step.8
Strategies for Mitigating AI Hallucination Risks
Lawyers can absolutely leverage AI in their practice without courting disaster, provided they adopt some commonsense safeguards. Here are several strategies and best practices to mitigate the risk of AI hallucinations:
- Double-Check All AI-Generated Citations and Quotes: Every case, statute, or quotation provided by an AI should be verified using trusted primary sources (such as Westlaw, Lexis, or official reporters). Do not assume any reference is valid until you confirm it. If the AI summarizes a case or statute, read the actual text to ensure the summary is accurate. This might feel redundant, but it is the single most important step to catch hallucinations. In the Texas case mentioned earlier, simply pulling up the AI-cited cases (which turned out not to exist) would have immediately exposed the error before it ever reached a judge. (A simplified sketch of what this kind of check might look like in an automated workflow appears after this list.)
- Cross-Reference Important Information: If an AI tool gives you a crucial piece of information (say, a key legal rule or a detail about a case), try phrasing the query differently or using another AI or search engine to see if you get consistent answers. Hallucinations often unravel when checked against another source. For example, if ChatGPT confidently states a legal holding, consider running a quick manual search or asking a specialized legal database to see if that holding actually exists in the case law. Obtaining corroboration from multiple sources can alert you if one AI-generated result was off base.
- Use AI as a Supplement, Not a Substitute: Approach AI outputs as a starting point or a means to generate ideas, rather than as the final authority. One practical tip is to perform your own legal research or drafting first, then use the AI to see if it comes up with something you missed or to help refine your arguments. This way, your work isn’t solely based on AI output. Some attorneys describe using AI like a “sparring partner” in writing, helpful for bouncing around arguments and spotting angles, but not the source of ultimate truth. By having your own groundwork in place, you are better positioned to evaluate where the AI might be off base.
- Favor AI Tools That Provide Sources (and Check Them): Many newer legal AI tools and chatbots now cite their sources or are integrated with databases of law. These are often better for legal work since they at least show where the information is (supposedly) coming from. However, a cited source is not a guarantee of accuracy; the source might not say what the AI claims it says. Always click the footnotes or citations the AI provides and examine whether they truly support the statements. If an AI platform claims to be “hallucination-free,” remain cautious and verify anyway, as studies have shown even domain-specific legal AI can err a significant portion of the time.11
- Stay Within Well-Trodden Paths (When Possible): AI is less likely to hallucinate on topics it has seen frequently in its training. Very novel legal questions or obscure issues have a higher chance of prompting the AI to fabricate. If you suspect your query is venturing into a fringe area, be extra careful. In such scenarios, it may be wiser to do more of the research manually or rely on traditional tools, using the AI only to summarize or outline what you have found. Keep the AI on a short leash when dealing with high-risk topics.
- Develop Internal AI Usage Guidelines: Law firms and legal departments should consider creating policies for how attorneys may use AI. This can include requiring a second attorney to review any AI-involved work product, prohibiting AI use on certain high-stakes filings, or mandating disclosure (to a supervising attorney or even to a client or court if appropriate) that AI was used in drafting. Morgan & Morgan’s response, an internal memo threatening termination for unverified AI citations, is one example of a strong stance.4 Each organization can calibrate accordingly, but the key is to have clear expectations that AI output must be verified and that lawyers will be held accountable for lapses.
- Train and Educate Yourself Continuously: Gaining a solid understanding of how AI tools work will help you use them wisely. Lawyers should invest time in learning the strengths and weaknesses of the AI platforms they adopt. This includes keeping up with any updates or known issues. Many hallucinations can be avoided by knowing, for instance, that a certain AI model struggles with recent case law after 2021, or that it tends to mix up facts under specific conditions. If you recognize the points of failure, you can proactively guard against them. Education is critical; more on that in the next section.
- Start Small and Low-Stakes: When introducing AI to your workflow, begin with low-stakes tasks to build trust in the tool’s outputs. For example, use it to summarize well-known cases or organize your own notes (situations where you can easily spot if something is off). Gradually work up to more significant tasks as you gain confidence (and only in conjunction with the other precautions above). This phased approach ensures that you don’t find out about an AI’s quirks for the first time in a high-pressure situation like a filing deadline.
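To make the first item on this list concrete, here is a minimal, purely illustrative Python sketch of a citation triage step. Everything in it is an assumption for the sake of the example: the regular expression is a rough stand-in for real citation parsing, and the small verified list stands in for the confirmation an attorney would actually perform in Westlaw, Lexis, or an official reporter. A script like this can only flag captions for a human to check; it cannot tell you a case is real.

```python
import re

# Stand-in for the lawyer's own verification work: case captions already
# confirmed in a trusted primary source (hypothetical list for illustration).
VERIFIED_CITATIONS = {
    "Mata v. Avianca",
    "Gauthier v. Goodyear Tire & Rubber Co.",
}

# Rough pattern for "Name v. Name" case captions. Real citation parsing is far
# more involved; this only catches simple captions made of capitalized words.
CASE_PATTERN = re.compile(r"(?:[A-Z][\w.&']*\s){1,4}v\.(?:\s[A-Z][\w.&']*){1,5}")

def flag_unverified(draft_text: str) -> list[str]:
    """Return case captions found in an AI draft that are not on the verified list."""
    found = {match.group(0).strip() for match in CASE_PATTERN.finditer(draft_text)}
    return sorted(caption for caption in found if caption not in VERIFIED_CITATIONS)

ai_draft = (
    "As this Court recognized in Mata v. Avianca (S.D.N.Y. 2023), and as held in "
    "Varghese v. China Southern Airlines (11th Cir. 2019), carriers may be liable ..."
)
print(flag_unverified(ai_draft))
# ['Varghese v. China Southern Airlines'] -- one of the fabricated cases from the Avianca matter
```

Even with a filter like this, the human review described above remains the control that matters; the script merely tells you which captions you have not yet confirmed.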
Implementing these strategies can dramatically reduce the chances of an AI hallucination derailing your work. It’s all about maintaining a “trust but verify” mindset. As one commentator aptly noted, the goal isn’t to expect perfection from generative AI, but rather to “develop mitigation strategies that reduce errors” to an acceptable level while taking advantage of AI’s speed and efficiency.14 In essence, you treat the AI as a helpful assistant that dramatically cuts down your grunt work, while you remain the quality-control inspector for anything that leaves your desk.
The Role of Education and Training in Managing AI Risks
Considering how quickly AI technology is evolving, education is perhaps the most powerful tool for managing the risk of AI hallucinations. The legal profession is coming to recognize that AI literacy is now an essential part of lawyer competence. Just as attorneys must understand the basics of electronic discovery or digital document management, they now need at least a working knowledge of how AI models operate and what their pitfalls are. A lack of understanding, a “lack of AI literacy” as one law professor described it, has been a common thread in many of the mishaps we discussed.1 Lawyers who treat AI as a magic black box are at significantly higher risk of being caught off guard when it produces fiction. On the other hand, lawyers who take the time to learn about AI’s tendencies (and limitations) are far better equipped to use it wisely and avoid its traps.
Recognizing this need, various bar associations and courts have issued guidance to push attorneys onto the learning curve. For example, state bar ethics opinions in places like California, Florida, New York, and Texas have cautioned lawyers to become proficient with generative AI if they choose to use it, and to understand how to properly vet AI’s outputs.8, 15 The clear message is that you cannot responsibly use a tool you don’t understand. Even the judge in Texas who sanctioned a lawyer for an AI-related mistake saw education as the remedy: part of the sanction was sending the attorney to a continuing legal education (CLE) course on the use of AI in law.6 The hope is that after proper training, that lawyer (and others attending such courses) will know exactly what pitfalls to watch for and how to avoid them.
Thankfully, resources for AI education in the legal field are growing. One notable effort is the International Institute for Intelligent Technology Adoption in the Law (3ITAL), an organization under development that will be devoted to empowering legal professionals to embrace AI responsibly and ethically. 3ITAL is building a wealth of training and educational programs specifically tailored for lawyers navigating AI.16 Its planned offerings include interactive courses and CLE-accredited workshops that teach attorneys about AI applications and best practices in different areas of law. These courses will dive into real-world use cases of AI, highlighting both the opportunities and the risks (like hallucinations) in a practical context. Lawyers will be able to learn how to use AI tools effectively while maintaining client confidentiality and meeting their ethical obligations, core concerns that 3ITAL emphasizes in its mission.17
Beyond formal courses, 3ITAL will also curate resources to keep lawyers informed as AI technology evolves. Its platform, currently under development, will include a searchable library of articles, guides, and case studies on AI in legal practice, as well as tool reviews that evaluate how various AI products perform (and whether they have known hallucination issues or other limitations). By providing up-to-date information and expert insights, 3ITAL will help attorneys stay ahead of the curve. In addition, the institute will engage in advocacy and policy discussions to ensure that as AI becomes more integrated in law, it’s done in a way that upholds professional standards. In short, organizations like 3ITAL serve as a crucial knowledge hub, allowing lawyers to educate themselves and ask informed questions about the AI tools they might use.
Individual law firms are also investing in education. Many firms now host internal seminars or bring in experts to train their attorneys on AI topics. Some have created internal committees or task forces to study AI adoption and disseminate best practices. This kind of proactive educational culture is exactly what will prevent future “AI hallucination” debacles. When lawyers are trained to approach AI output with a critical eye, and when they know the right techniques to verify and fact-check that output, the risk of error drops dramatically.
Education is not a one-time thing, of course. AI capabilities (and vulnerabilities) are changing rapidly. Ongoing learning will be part of the job from here on out. The good news is that a lawyer who builds a solid foundation in how AI works will find it easier to adapt to new developments, because they’ll understand the underlying concepts rather than treating each new AI product as an entirely foreign creature. And with institutions like 3ITAL providing support through publications, courses, and community discussion forums, lawyers have places to turn when they need to get up to speed.
To sum up, knowledge is the antidote to fear and error when it comes to AI. The more the legal community educates itself about generative AI, its advantages, its quirks, and, yes, its hallucinations, the more effectively we can integrate these powerful tools into practice without stumbling over them.
Conclusion
Artificial intelligence has undeniably arrived in the legal world, and it brings extraordinary potential to transform how lawyers work. Research that once took days can be done in hours, and boilerplate documents can be drafted in minutes. In a profession notorious for long hours and information overload, these AI-driven efficiencies are a welcome development. Lawyers who harness AI wisely will likely have an edge in serving their clients faster and perhaps more cost-effectively. However, along with that promise comes the responsibility to use AI carefully and correctly. AI hallucinations represent a real risk, one that can undermine trust, upend cases, and jeopardize careers if left unchecked.
The experiences of the past few years make it plain that blind reliance on AI is a mistake no lawyer can afford to make. We’ve seen that even intelligent, well-meaning attorneys can get into trouble by assuming an AI’s output is accurate. The technology is impressively advanced, but it is not magic and it is not foolproof. Just as you wouldn’t let an intern file a brief without reviewing it, you shouldn’t let an AI draft go out the door without a thorough once-over. By maintaining robust verification practices (“trust but verify”), lawyers can enjoy the benefits of AI while neutralizing much of the risk. Think of AI as a powerful tool in your toolkit, like a chainsaw. In the right hands, it can get work done far faster than traditional methods. In careless hands, it can cause serious harm. The tool doesn’t change the standards of workmanship and safety that you must uphold.
Encouragingly, the legal community is adapting. Courts and ethical bodies are clarifying expectations, and resources for learning responsible AI use are increasingly available. Through a combination of smart policies, individual vigilance, and ongoing education, lawyers can mitigate the dangers of AI hallucinations. The role of organizations such as 3ITAL in this new landscape cannot be overstated: they are lighting the way so that lawyers can confidently use AI as a partner, not a peril. By availing themselves of training and guidance, attorneys can turn AI from a source of anxiety into an asset that amplifies their abilities.
In the final analysis, AI is a powerful ally for lawyers who wield it with caution. Embracing innovation in law doesn’t mean abandoning the rigorous habits that define good lawyering. On the contrary, it means extending those habits to new tools. Yes, an AI might occasionally hallucinate, but a diligent lawyer armed with knowledge and verification skills can catch those hallucinations before they cause harm. The future of legal practice will likely feature AI at every turn, from research to courtroom strategy. By approaching that future with informed vigilance, lawyers can ensure that when AI elevates their practice, it does so on a solid foundation of truth and reliability. And by always keeping in mind the profound ethical implications of artificial intelligence, lawyers can ensure that their use of it remains both wise and fruitful.
1. Sara Merken, “AI ‘hallucinations’ in court papers spell trouble for lawyers,” Reuters (Feb. 18, 2025), available at https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/
2. Page Laubheimer, “AI Hallucinations: What Designers Need to Know,” Nielsen Norman Group (Feb. 7, 2025), available at https://www.nngroup.com/articles/ai-hallucinations/
3. Colin Levy, quoted in “Legal AI Unfiltered: 16 Tech Leaders on AI…,” National Law Review (Oct. 2023), available at https://natlawreview.com/article/legal-ai-unfiltered-16-tech-leaders-ai-replacing-lawyers-billable-hour-and
4. Morgan & Morgan firm-wide email, available at https://fingfx.thomsonreuters.com/gfx/legaldocs/jnpwjqqbzvw/Morgan%20and%20Morgan%20Email.pdf
5. Jonathan Stempel, “Michael Cohen will not face sanctions after generating fake cases with AI,” Reuters (Mar. 20, 2024), available at https://www.reuters.com/legal/michael-cohen-wont-face-sanctions-after-generating-fake-cases-with-ai-2024-03-20/
6. Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281, 2024 WL 4882651, at *2 (E.D. Tex. Nov. 25, 2024).
7. David Thomas, “Judge rebukes Minnesota over AI errors in ‘deepfake’ lawsuit,” Reuters (Jan. 13, 2025), available at https://www.reuters.com/legal/government/judge-rebukes-minnesota-over-ai-errors-deepfakes-lawsuit-2025-01-13/
8. Hon. Ralph Artigliere (Ret.), “AI Hallucinations in Court: A Wake-Up Call for the Legal Profession,” EDRM (JD Supra) (Jan. 22, 2025), available at https://www.jdsupra.com/legalnews/ai-hallucinations-in-court-a-wake-up-4503661/
9. Vectara, Hallucination Leaderboard (updated as of Feb. 11, 2025), available at https://github.com/vectara/hallucination-leaderboard
10. Steven Levy, “In Defense of AI Hallucinations,” Wired (Jan. 5, 2024), available at https://www.wired.com/story/plaintext-in-defense-of-ai-hallucinations-chatgpt/
11. Varun Magesh et al., “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries,” Stanford HAI (RegLab) (May 23, 2024), available at https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
12. Shomit Ghose, “Why Hallucinations Matter: Misinformation, Brand Safety and Cybersecurity in the Age of Generative AI,” UC Berkeley Sutardja Center for Entrepreneurship & Technology (May 2, 2024), available at https://scet.berkeley.edu/why-hallucinations-matter-misinformation-brand-safety-and-cybersecurity-in-the-age-ofgenerative-ai/
13. American Bar Association, Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (Jul. 29, 2024), available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf
14. Mike Dahn, “How Harmful Are Errors in AI Research Results?” Thomson Reuters (Aug. 2, 2024), available at https://www.thomsonreuters.com/en-us/posts/innovation/how-harmful-are-errors-in-ai-research-results/
15. Opinion No. __ [PO 2024-6], Professional Ethics Committee for the State Bar of Texas, available at https://www.texasbar.com/AM/pec/vendor/drafts/PO_2024_6.pdf
16. 3ITAL, International Institute for Intelligent Technology Adoption in the Law, Mission and Programs (3ital.org, 2024), describing 3ITAL’s mission to help lawyers adopt AI responsibly.
17. 3ITAL, International Institute for Intelligent Technology Adoption in the Law, Mission and Programs (3ital.org, 2024), describing 3ITAL’s educational offerings, including CLE courses, workshops, and a resource hub for legal AI tools.