The Lamborghini Doctrine and Defensible AI in eDiscovery: Navigating Risk, Trust, and Strategy
The legal profession is entering a new era of accountability in artificial intelligence. Courts across the United States are drawing clear lines around how lawyers may—and may not—use AI. The so-called “Lamborghini Doctrine”, a phrase born from a sanctions order against attorneys who relied on hallucinated AI-generated case law, underscores a broader judicial concern: lawyers cannot outsource professional responsibility to machines.
For those of us working at the intersection of technology and legal practice, the message is unmistakable. Reckless use of generative AI in court filings is already producing sanctions, disqualification, and reputational harm. At the same time, defensible applications of AI in discovery review are proving reliable, efficient, and risk-mitigating. The challenge is to navigate this paradox responsibly.
Sanctions and the Rise of Judicial Concern
Recent cases illustrate how swiftly courts are acting to set boundaries. In Mata v. Avianca (S.D.N.Y. 2023), attorneys submitted a brief containing six fabricated case citations generated by ChatGPT. Judge P. Kevin Castel sanctioned the lawyers, reminding the profession that while technological advances are not improper in themselves, existing rules still impose a gatekeeping duty: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
That same principle guided Judge Anna Manasco in Johnson v. Dunn (N.D. Ala. 2025), a case that made headlines after three Butler Snow attorneys were disqualified from a matter for citing AI-fabricated authorities. Her order could not have been clearer: “Lawyers who make false statements in court filings—whether intentional or not—undermine the integrity of the judicial process. Delegating diligence to AI does not absolve counsel of responsibility.”
Another instructive example is Park v. Kim (2d Cir. 2024), where an attorney was sanctioned after citing fictitious AI-generated cases. The court described the attorney's reliance on unverified machine output as “an abdication of the duty of competence.” Taken together, these decisions form a coherent picture: the judiciary will not tolerate lawyers who fail to validate the work that AI produces.
What the Courts Are Really Saying
While the facts of these cases vary, the thread that runs through them is remarkably consistent. The duty of candor to the court remains paramount, and attorneys cannot abdicate it by pointing to a machine. The duty of competence now includes technology, meaning that lawyers must not only be aware of AI but must understand its risks and limitations. And perhaps most importantly, the sanctions imposed in these cases were not only about punishing negligence; they were also about protecting the integrity of the justice system itself. Judges are making an example of reckless AI use to preserve trust in the judicial process.
Filings Versus Discovery: Two Very Different Contexts
What is often lost in the headlines is the distinction between how generative AI is used in different parts of the legal process. In court filings, where accuracy and precedent are everything, a single hallucination can collapse an argument, trigger sanctions, and permanently damage reputations. These high-profile failures have created a chilling effect, making any use of generative AI in pleadings radioactive. Disclosure expectations are becoming stricter, and even perceived failures are enough to invite challenges from opposing counsel.
But in the context of discovery review, the picture is very different. Here, tools designed to process, categorize, and surface insights from massive datasets are showing real value. When guided by experienced practitioners, generative AI can enhance consistency, reduce human error, and extract meaning from sources such as chat threads and collaborative platforms that would overwhelm manual reviewers. Far from creating risk, defensible use of AI in discovery actually mitigates it. The paradox, however, is that courtroom failures like Mata and Johnson have raised the temperature around generative AI as a whole, making adoption in discovery more sensitive and disclosure decisions more complex.
Defensibility as the New Benchmark
In this environment, defensibility has become the standard by which AI in eDiscovery must be judged. A defensible workflow is one that can withstand scrutiny from a judge or from opposing counsel. That means every output must be explainable, every decision traceable, and every workflow subject to human oversight. Transparency, traceability, oversight, and documentation are not optional—they are the foundations of trust in any AI-assisted discovery process.
When discovery workflows are opaque or treated as “black boxes,” defensibility collapses. And in the current climate, such collapses do not just raise questions; they invite sanctions. The real measure of success is not how quickly AI can process data, but whether its results can be defended when challenged in court.
The Case for a Consultative, Human-Led Model
The most effective approach to AI in eDiscovery is not to treat it as a stand-alone solution but as one component of a broader, consultative process. Experienced subject matter experts lead this process, choosing the right combination of tools—whether traditional analytics, predictive coding, or generative AI—based on the specific needs of the matter. Throughout, human reviewers remain in the loop, supervising the AI, providing feedback, and validating results against the evidentiary record. This is not human versus machine; it is human and machine working together, with humans firmly in charge.
End-point validation is a crucial piece of this puzzle. No output is relied upon until it has been checked against source data. This ensures that discovery conclusions are not just fast and efficient but also court-ready. In other words, the process is not defined by what AI alone can do but by what AI and experts, working together, can defend.
Trust, Risk Avoidance, and the Emotional Dimension
For law firms and corporations, the risks of misusing AI extend beyond sanctions. The greater threat may be reputational damage, client backlash, and the erosion of trust. A single incident can compromise relationships built over years. In this context, defensible AI in eDiscovery offers not only technical advantages but emotional reassurance. Clients gain confidence when they see that their provider is not chasing the latest hype but deploying AI responsibly, transparently, and defensibly.
Risk avoidance is not the opposite of innovation—it is strategy. The smartest use of AI in law is not to move fast and break things, but to move deliberately and defend everything. This is especially true in litigation and investigations, where defensibility is the currency of credibility. Clients do not want to explain to their boards why sanctions were imposed. They want to explain how their team leveraged cutting-edge tools responsibly to achieve outcomes without exposure.
Responsibility, Not Recklessness
The Lamborghini Doctrine is more than a cautionary tale. It is a signal that courts will continue to set guardrails around AI, and failures will be punished. Yet it would be a mistake to interpret these rulings as a reason to retreat from AI altogether. The lesson is not to avoid AI but to deploy it responsibly.
Defensible AI in eDiscovery is already here. It is built on blended tools, human-in-the-loop supervision, end-point validation, and detailed documentation. It is consultative and strategy-driven, not technology-driven. And it is the only sustainable way forward for practitioners who want to harness the benefits of AI without exposing themselves or their clients to unnecessary risk.
The courts have spoken. Reckless reliance on AI in pleadings will not be tolerated. But in the hands of experts who understand both technology and law, AI in discovery review, even generative AI with its hallucination risks, can enhance trust, mitigate risk, and drive better outcomes. The future of AI in law will not be defined by reckless experiments that collapse in court. It will be defined by whether we embrace defensibility and proper validation as our guiding principles.
_
About Author
Brian Stempel is a law practice technology executive and thought leader with over 30 years of experience delivering innovative solutions and services to the legal industry. He is the Senior Vice President of Strategic Client Solutions at Lineal, where he helps clients solve legal challenges with Lineal’s award-winning Amplify™ platform. Before Lineal, Brian ran eDiscovery operations at Kirkland & Ellis, Paul Hastings, and Debevoise & Plimpton. A lifelong learner, he also holds executive education certificates from Cornell University, MIT Sloan School of Management, Columbia Business School, and Harvard Business School in fields related to artificial intelligence, innovation, DEI, and leadership.
_
About Lineal
Lineal is an innovative eDiscovery and legal technology solutions company that empowers law firms and corporations with modern data management and review strategies. Established in 2009, Lineal specializes in comprehensive eDiscovery services, leveraging its proprietary technology suite, Amplify™, to enhance efficiency and accuracy in handling large volumes of electronic data. With a global presence and a team of experienced professionals, Lineal is dedicated to delivering custom-tailored solutions that drive optimal legal outcomes for its clients. For more information, visit lineal.com.
