AI Isn’t Your Lawyer: The Risks of Getting Legal Advice From a Bot

Artificial intelligence (AI) has come a long way in recent years. Today, tools like ChatGPT, Google Gemini, and other AI-powered apps are familiar to millions of people around the world. These tools are known as “large language models” (LLMs), and they essentially predict the next word in a sequence based on context. As they have grown more advanced, LLMs have become adept at answering many kinds of questions, and users now turn to them for quick answers on all sorts of subjects–including legal ones. While these tools can handle general queries, like explaining basic legal terms, relying on AI to resolve actual legal issues comes with serious risks. The bottom line: AI is not a lawyer.

AI isn’t a substitute for legal counsel

AI tools generate responses based on patterns in data–generally, the more data, the better. But it’s not always clear where a given tool is pulling its information from, and bad data is just one of the problems that can lead to incorrect answers. The law is full of nuances: location-specific rules and regulations, case-specific facts that change the answer, and other subtleties that LLMs aren’t good at navigating. As a result, the legal advice you get from a bot is often inaccurate, unreliable, or outdated. A bot doesn’t know the specifics of your situation, and it doesn’t keep up with legal developments in real time. You also can’t hold it accountable for its mistakes; if it gets something wrong, you are the one left to face the consequences.

In fact, even lawyers who relied on AI for legal research have been hit with fines–and suffered serious reputational damage.

Cases where lawyers faced discipline for using AI

1. Mata v. Avianca, Inc. (2023)

In 2023, attorneys in New York succumbed to the temptation of using ChatGPT to draft a legal brief for their client, Roberto Mata. The firm filed the brief, but the tool had “hallucinated” multiple case citations: it invented entirely fictitious cases and then used that non-existent case law to support its legal arguments. When opposing counsel and the judge couldn’t verify the cases, the court opened an investigation.

The judge reprimanded the lawyers for submitting “bogus judicial decisions, with bogus quotes and bogus internal citations.” Ultimately, the court sanctioned the attorneys and dismissed the case in favor of the defendant, Avianca.

2. Lacey v. State Farm General Insurance Co. (2025)

More recently, attorneys in California made a similar mistake by using AI to perform their legal research. The court found that “approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way. At least two of the authorities cited do not exist at all. Additionally, several quotations attributed to the cited judicial opinions were phony and did not accurately represent those materials.” These findings show that the problem goes beyond hallucinating entirely made-up cases: sometimes the tool cites real cases but misrepresents what the court actually decided. Errors like these can be even harder to uncover and correct, yet they still carry serious consequences. Here, as in New York, the attorneys faced hefty fines. And it isn’t only the attorneys who pay–clients also suffer when their lawyers lean on AI.

3. Johnson v. Dunn (2025)

In another recent case, a large law firm in Alabama submitted filings citing AI-hallucinated case law in litigation over Alabama’s prison system–litigation the State of Alabama was paying the firm millions of dollars to defend. Now the firm faces potential fines and sanctions.

These are far from the only cases in which attorneys have submitted fake or incorrect citations to a court, and unfortunately, AI hallucinations in court filings are on the rise.

What you need to know

It’s important to understand that AI, while an impressive and powerful tool, is not infallible. There are many pitfalls that come with using it, and the risks are even greater if you don’t have a legal background to help you spot possible AI errors. Before relying on AI-powered tools like ChatGPT and Gemini, understand these risks:

  • AI can hallucinate: Even today’s advanced AI models sometimes fabricate facts, laws, or citations.
  • No confidentiality: If you type something into a public AI platform, that information is not protected by attorney-client privilege.
  • Jurisdiction matters: Legal standards vary by state, and sometimes even by city or county. AI tools may not account for these regional variations, leading to incorrect information.
  • Responsibility lies with you: Courts sanction attorneys who misuse AI, and if you rely on it incorrectly, you can face consequences as well. It’s the human users of AI, not the AI itself, who suffer from its misinformation.

Trust a human lawyer

Legal matters are often complex, high-stakes, and deeply personal. The temptation to seek quick answers from AI is understandable, but only a qualified attorney can provide the accurate, confidential, and strategic guidance you need. Whether you’ve suffered a personal injury and need financial compensation or you’re facing criminal charges, speak to a licensed attorney, not a chatbot.

At Delius & McKenzie, we’re here to offer responsible, human legal representation for clients in Sevierville, Gatlinburg, and Pigeon Forge. Contact us today to schedule a consultation with one of our experienced attorneys.