Ensuring Ethical Legal Practices: The Attorney's Duty to Fact-Check AI and Prevent Perjury
- Paul Quinn
- Mar 25
- 3 min read
Artificial intelligence tools are becoming common in legal work, offering quick access to information and drafting assistance. Yet these tools can produce inaccurate or fabricated content, known as hallucinations. An attorney who relies on AI without thorough fact-checking risks presenting false information to the court, and knowingly allowing such falsehoods to stand can amount to suborning perjury, a serious ethical violation. This post explores the attorney's responsibility to verify AI-generated content and maintain integrity in legal proceedings.
The Rise of AI in Legal Practice
AI applications in law help with research, document review, and drafting pleadings or contracts. These tools analyze vast amounts of data and generate responses that can save time. However, AI systems do not understand truth or context like humans do. They generate answers based on patterns in data, which sometimes results in fabricated or incorrect information.
For example, an AI might create a citation to a non-existent case or misstate a legal principle. If an attorney uses this information without verification, it could mislead the court or opposing counsel. This risk makes fact-checking essential when incorporating AI into legal work.
The Attorney’s Ethical Duty to Verify Information
Attorneys have a fundamental duty to ensure the accuracy of information they present. The American Bar Association's Model Rules of Professional Conduct, notably Rule 3.3 (Candor Toward the Tribunal), prohibit lawyers from knowingly making false statements of fact or law to a tribunal and require them to correct false statements previously made. Using AI-generated content without verification can violate these rules if the attorney presents hallucinated facts as true.
Key responsibilities include:
- Reviewing AI outputs carefully before including them in legal documents or arguments.
- Cross-checking citations and facts against reliable sources such as official case law databases or statutes.
- Clarifying uncertainties by conducting independent research rather than relying solely on AI.
- Correcting errors promptly if inaccurate AI-generated information has been submitted.
Failing to meet these responsibilities risks misleading the court and may amount to suborning perjury if the attorney knowingly allows false statements to stand.
Examples of AI Hallucination Risks in Legal Contexts
Consider a scenario where an attorney uses AI to draft a brief citing a precedent. The AI generates a case name and citation that appear plausible but do not exist. If the attorney does not verify this and submits the brief, the court may rely on false authority. This could unfairly influence the outcome and damage the attorney’s credibility.
Another example involves AI summarizing witness statements. If the AI fabricates or distorts key details, the attorney might unintentionally present false testimony. This risks violating ethical duties and could lead to sanctions or disciplinary action.
These examples highlight why attorneys must treat AI as a tool that requires human oversight, not a source of unquestionable truth.
Practical Steps for Attorneys to Avoid Ethical Pitfalls
Attorneys can adopt several practical measures to ensure ethical use of AI:
- Use AI as a starting point, not a final source. Treat AI-generated content as a draft requiring verification.
- Verify all citations and legal authorities through trusted legal research platforms like Westlaw or LexisNexis.
- Maintain documentation of fact-checking efforts to demonstrate diligence if questioned.
- Stay informed about AI limitations and updates to the technology to understand potential risks.
- Educate legal teams about the importance of verifying AI outputs and the consequences of errors.
- Consult colleagues or experts when uncertain about AI-generated information.
By integrating these steps into their workflow, attorneys can reduce the risk of presenting false information and uphold their ethical obligations.
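The citation-verification step above can even be partially automated. As a minimal sketch, the hypothetical helper below uses a simple regular expression to pull reporter-style citations (e.g. "410 U.S. 113") out of a draft so that each one can be checked by hand in Westlaw or LexisNexis. The pattern covers only a few common federal reporters and is purely illustrative; it does not replace human verification, it merely builds the checklist for it.

```python
import re

# Illustrative pattern for a few common federal reporters
# (U.S., S. Ct., F.2d/F.3d/F.4th, F. Supp.). Not exhaustive.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s?2d|\s?3d)?)"
    r"\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every citation-like string found in the draft text."""
    return CITATION_PATTERN.findall(draft_text)

draft = (
    "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
    "Smith v. Jones, 999 F.3d 1234 (9th Cir. 2021)."
)
for cite in extract_citations(draft):
    print("Verify manually:", cite)
```

Every string the helper surfaces still has to be confirmed against an official database; the point is that no citation in the draft escapes the attorney's manual review.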
The Broader Impact on the Legal System
When attorneys fail to fact-check AI outputs, the consequences extend beyond individual cases. Courts rely on accurate information to make just decisions. False or misleading facts can waste judicial resources, undermine public trust, and harm parties involved.
Conversely, responsible use of AI can improve efficiency without compromising integrity. Attorneys who rigorously verify AI-generated content contribute to a legal system that embraces technology while maintaining fairness and truth.