Why Lawyers Keep Using ChatGPT Despite AI Hallucination Risks

Attorneys increasingly rely on AI tools like ChatGPT for legal research to save time amid heavy caseloads. However, AI hallucinations—fabricated case citations and inaccurate information—have led to court sanctions and fines. While many lawyers use AI responsibly, the technology demands careful verification to avoid costly mistakes. The legal profession is adapting with new guidelines encouraging cautious, informed AI use.

Published June 1, 2025 at 11:09 AM EDT in Artificial Intelligence (AI)

The legal profession is rapidly adopting AI tools like ChatGPT to manage demanding workloads and streamline research. Yet, this convenience comes with a significant caveat: AI hallucinations—instances where large language models generate convincing but false information, including fabricated case law and citations.

Despite multiple high-profile incidents where lawyers submitted filings containing bogus AI-generated research, many attorneys continue to use these tools. Why? The answer lies in the immense time pressures lawyers face and the integration of AI into familiar legal research platforms like LexisNexis and Westlaw.

A 2024 Thomson Reuters survey found that 63% of lawyers have used AI, with 12% using it regularly, mostly for summarizing case law and researching statutes. Many view AI as a powerful assistant that can save time and help manage complex caseloads.

However, the technology’s limitations are stark. Judges have rejected motions and sanctioned lawyers after discovering multiple hallucinations in filings. For example, a Florida judge found nine fabricated citations in a motion to dismiss, forcing the lawyers to resubmit corrected documents.

Experts emphasize that AI should supplement, not replace, legal expertise. Lawyers must rigorously verify AI-generated content, much like they would review junior associates’ work. The American Bar Association’s 2024 guidance underscores the duty of competence, urging attorneys to understand AI’s benefits and risks and to maintain technological proficiency.

Some attorneys treat AI as a junior associate—using it for drafting, brainstorming, and organizing citations—but always double-checking the output. This approach recognizes AI’s potential while mitigating the risk of costly errors.

In the bigger picture, AI’s role in legal services is poised to grow. As one law school dean noted, future lawyers may be judged on how effectively they use AI rather than on whether they avoid it. Yet human judgment and ethical responsibility remain paramount.

Balancing Efficiency and Accuracy in Legal AI Use

The allure of AI in legal research is clear: it promises speed and breadth of information processing that can ease the burden of heavy caseloads. But the risk of hallucinations means that lawyers must adopt a disciplined approach—using AI-generated insights as starting points rather than final answers.

  • Use AI tools integrated with trusted legal databases for more reliable results.
  • Always verify AI-generated citations and legal arguments against primary sources (see the sketch after this list).
  • Treat AI output as a draft or research assistant, not a substitute for legal judgment.
  • Stay informed about evolving ethical guidelines and technological capabilities.
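
To make the second bullet concrete, here is a minimal Python sketch of a citation-verification checklist. Everything in it is an illustrative assumption: the regex covers only a handful of common federal reporters, the function names are invented, and the sample citations are fabricated for demonstration. It flags citation-like strings for manual lookup rather than attempting automated verification.

```python
import re

# Pattern for a few common federal reporters only (U.S., S. Ct.,
# F./F.2d/F.3d/F.4th, F. Supp./F. Supp. 2d/3d). A production tool
# would need the full table of Bluebook reporter abbreviations.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                               # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|"                      # reporter abbreviation
    r"F\.(?:\s?Supp\.)?(?:\s?[234]d|\s?4th)?)"
    r"\s+\d{1,5}\b"                               # first-page number
)

def extract_citations(draft: str) -> list[str]:
    """Return the unique citation-like strings found in a draft."""
    return sorted(set(CITATION_RE.findall(draft)))

def verification_checklist(draft: str) -> None:
    """Print a check-against-primary-sources checklist for a draft."""
    citations = extract_citations(draft)
    if not citations:
        print("No recognizable citations found; review the draft manually.")
        return
    for cite in citations:
        print(f"[ ] Confirm in Westlaw, Lexis, or the official reporter: {cite}")

# Both citations below are fabricated for illustration.
sample_draft = (
    "As held in Smith v. Jones, 123 F.3d 456, and reaffirmed in "
    "Doe v. Roe, 45 F. Supp. 3d 678, the motion must be denied."
)
verification_checklist(sample_draft)
```

Running the sketch prints one checklist line per unique citation found, a reminder that each authority must be confirmed in a primary source before anything is filed.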

Ultimately, AI in law is a tool that, when used wisely, can enhance efficiency without compromising integrity. But ignoring its pitfalls risks undermining trust in legal processes.
