AI-Fabricated Citations and the Legal Profession: Lessons from the High Court
In June 2025, the High Court of England and Wales issued a clear warning to legal practitioners regarding the misuse of artificial intelligence (AI) in the preparation of court documents. The judgment in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank arose from two unrelated proceedings in which submissions contained numerous case citations, some entirely fabricated and others materially inaccurate, suspected to have been generated by AI tools and relied upon without verification.
The Court's intervention reflects a growing international trend. Courts in Canada, New Zealand, and Australia have confronted similar incidents of fabricated citations. Most prominently, in the 2023 United States decision Mata v Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023), lawyers were sanctioned for filing briefs containing AI-fabricated citations. Against this backdrop, the High Court's warning situates England and Wales within a wider global effort to ensure that the use of AI in legal practice is consistent with existing professional duties and regulatory standards.
The two High Court cases and Dame Victoria Sharp's judgment
The judgment was delivered by Dame Victoria Sharp, President of the King's Bench Division, sitting as part of a Divisional Court under the "Hamid" jurisdiction. This jurisdiction reflects the Court's inherent power to regulate its own procedures, which includes ensuring that lawyers conduct themselves in line with their professional duties. It stems from R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin), where the Court established a special procedure for addressing serious professional misconduct or breaches of duty by legal representatives or law firms. Hearings convened for this purpose are referred to as Hamid hearings.
The cases were referred to the Court following the suspected use of generative artificial intelligence tools to produce written legal arguments or witness statements which were not then verified. As a result, false information, in the form of fabricated case citations and inaccurate quotations, was placed before the court. While the immediate concern was the competence and conduct of the individual lawyers involved, Dame Victoria Sharp observed that the matters also raised "broader areas of concern [...] as to the adequacy of the training, supervision and regulation of those who practise before the courts" and the need for practical steps to ensure compliance with professional and ethical responsibilities (para 3).
In setting out the potential consequences of such conduct, Dame Victoria Sharp stressed:
"In the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice, carrying a maximum sentence of life imprisonment." (para 25)
She added that even in the absence of criminal intent, this behaviour could amount to contempt of court:
"Placing false material before the court with the intention that the court treats it as genuine may, depending on the person's state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice." (para 26)
Although no contempt proceedings were initiated in these particular cases (and indeed no such criminal intent was ascribed to the actions), the Court made clear that such measures remain available where professional duties are seriously breached. At the same time, the Court observed that existing regulatory guidance alone is not enough to prevent the misuse of AI in legal practice. Accordingly, it directed that the judgment be sent to the Bar Council, the Law Society, and the Council of the Inns of Court, urging them to consider further measures to ensure compliance with professional duties.
The Ayinde Case
Mr Ayinde sought judicial review against the London Borough of Haringey for failing to provide interim accommodation pending a homelessness review. He was represented by Haringey Law Centre. The grounds for review, drafted by counsel, Ms Forey, misstated section 188(3) of the Housing Act 1996 and cited five non-existent cases, including "El Gendi v Camden LBC". Mr Amadigwe, the supervising solicitor at Haringey Law Centre, relied on Ms Forey's work and, when alerted that the cases did not exist, took "inadequate steps" (para 61).
The defendant's solicitor applied for a wasted costs order against Haringey Law Centre and Ms Forey. Ritchie J heard the application and found that Ms Forey had acted "intentionally" in the sense of recklessly, rather than with a deliberate intent to mislead, by including non-existent cases in her pleadings without "caring whether they existed" (para 65). This amounted to improper and unreasonable conduct. While it could not be determined conclusively whether she had used AI, Ritchie J stated that it would have been negligent had she done so without verifying the material. He ordered Ms Forey and the Law Centre to pay £2,000 each and referred the case to the Hamid judge.
The High Court decided not to initiate contempt proceedings against Ms Forey, noting unresolved factual issues regarding her conduct, possible failings in her supervision and training, her junior status, and the fact that she was already subject to regulatory investigation. Nonetheless, the Court stressed the seriousness of submitting unchecked AI-generated or false authorities. The matter was referred to the Bar Standards Board (BSB), regarding Ms Forey, and the Solicitors Regulation Authority (SRA), regarding Mr Amadigwe's inadequate oversight.
The Al-Haroun Case
Mr Al-Haroun claimed £89.4m in damages against Qatar National Bank and QNB Capital. During interlocutory proceedings, Dias J found that the witness statements of Mr Al-Haroun and his solicitor relied on 45 authorities, of which 18 did not exist and many of the remainder were irrelevant, inaccurately quoted, or unsupported. Dias J described this as a matter of "utmost seriousness" and also referred it under the Hamid jurisdiction (para 73).
The client, Mr Al-Haroun, admitted responsibility for the fictitious material, explaining that it had been generated using AI tools, online sources, and legal search engines. He apologised, stressing that he had not intended to mislead the court but had misplaced his confidence in the material. His solicitor, Abid Hussain of Primus Solicitors, also admitted that his witness statement contained false citations. He had relied on his client's research without verification, accepted that this was a grave professional error, apologised unreservedly, and referred himself to the SRA.
The High Court accepted Mr Al-Haroun's candour and apology, but emphasised that responsibility rests with the lawyer, not a "lay client" (para 81). It described Mr Hussain's reliance on his client's unchecked research as a "lamentable failure" to meet the basic duty of verifying material put before the court. However, as there was no evidence that he knowingly sought to mislead the court, the threshold for contempt was not met. Mr Hussain and Primus Solicitors were nonetheless referred to the SRA for investigation.
Legal Professionals' Regulatory Duties
The incidents in Ayinde and Al-Haroun did not occur in a regulatory vacuum. In England and Wales, both solicitors and barristers are subject to strict professional obligations that apply regardless of the tools used to inform and prepare case documents and submissions. The Court's judgment confirmed that the use of AI does not displace these duties. Both the SRA and the BSB have already published guidance on the responsible use of AI, underlining that it must align with the profession's core duties.
Existing professional guidance reinforces these expectations. The Bar Council's Considerations when using ChatGPT and generative artificial intelligence software stresses that outputs should not be taken "on trust and certainly not at face value". This is underpinned by the BSB Handbook, which prohibits barristers from knowingly or recklessly misleading the court (rC3.1, rC9.1), requires them to advance only contentions they reasonably consider properly arguable (rC9.2b), and demands a competent standard of work (rC18). Similarly, the SRA Code of Conduct for Solicitors requires solicitors not to mislead clients or the court (Rule 1.4) and to make only properly arguable submissions (Rule 2.4). They must not waste the court's time (Rule 2.6) and are under a positive duty to draw attention to relevant authorities that could materially affect the outcome (Rule 2.7). Competence is central: solicitors are required to provide "competent" services (Rule 3.2) and remain accountable for work carried out on their behalf by others (Rule 3.5).
At the same time, regulators are not resisting innovation. The SRA's Risk Outlook Report: The use of artificial intelligence in the legal market stresses that firms remain responsible for AI outputs while recognising the technology's potential benefits if managed responsibly. In May 2025, the SRA authorised Garfield.Law Ltd, the first purely AI-driven law firm in England and Wales. Garfield.Law uses an AI-powered litigation assistant to guide SMEs through small claims debt recovery. Before granting approval, the SRA scrutinised the firm's processes to ensure quality checks, confidentiality, and safeguards against conflicts of interest and hallucinations. Crucially, the system cannot propose case law, and every action requires client approval. Named solicitors remain ultimately accountable for outputs, backed by mandatory professional indemnity insurance. As SRA Chief Executive Paul Philip put it, this "landmark moment" shows that AI-driven legal services may improve access to justice, but only if supported by robust consumer protections and high professional standards.
The issues with the use of AI in legal practice
While regulators have highlighted that AI use must align with long-standing professional duties, there are some practical difficulties. Tools such as virtual assistants and e-discovery software can retrieve case law, statutes, and commentary at unprecedented speed and scale. Yet their integration into practice is far from straightforward.
Recent research carried out by BIICL on the Use of AI in Legal Practice indicates that practitioners do not yet fully trust AI outputs. These tools can generate memos, suggest case law, or provide starting points for research, but lawyers remain unwilling to rely on them without checking. This echoes Dame Victoria Sharp's concern that generative systems such as ChatGPT "are not capable of conducting reliable legal research" (para 6). Although such tools may produce coherent and plausible answers, those answers "may turn out to be entirely incorrect", citing sources that do not exist or fabricating quotations (para 6). For this reason, she stressed that lawyers have a continuing professional duty to check accuracy against authoritative sources, whether they rely on AI directly or supervise others who do so (para 7).
Accuracy is not the only risk. Transparency is also undermined by the "black box" nature of AI systems, which makes it difficult for lawyers to explain why a particular output was generated, potentially conflicting with the duty to give clear and reasoned arguments to the court and clients. Bias and fairness issues arise if data used to train AI systems is skewed, raising concerns under the duty not to discriminate and to act in the best interests of clients. The duty of confidentiality and data protection obligations restrict the uploading of client information into third-party systems. Over the longer term, there is also a concern that outsourcing research tasks to machines could erode lawyers' opportunities to develop critical analytical skills, undermining the duty of competence and independent professional judgment.
Yet alongside these risks, AI also carries potential benefits, particularly in addressing access-to-justice gaps. As BIICL's forthcoming research on Bridging the Justice Gap: How Smart Technology Can Support Access to Legal Advice for Underserved Communities shows, the same tools that present risks in traditional legal practice may also help close gaps in access to justice. In settings where legal aid has been cut back and third-sector providers are overstretched, AI can be leveraged to help legal service providers "do more with less". This could enable legal aid providers and pro bono clinics to expand their reach and provide more timely, affordable assistance to vulnerable, underserved communities.
The challenge, then, is to reconcile these two perspectives. AI tools cannot replace human judgment and professional responsibility, but they can support third-sector providers if deployed cautiously and responsibly. To achieve this, firms and organisations should consider adopting formal protocols for AI use, such as clear policies on the verification of AI-generated outputs and confidentiality safeguards. In this context, BIICL's access-to-justice research emphasises that AI must be integrated with safeguards for accuracy, confidentiality, and trust, ensuring that the technology strengthens rather than undermines the right to effective legal support. AI cannot and should not replace lawyers, but it can support their work under the close supervision of lawyers who remain ultimately responsible for the outputs provided.
Conclusion
The High Court's intervention in Ayinde and Al-Haroun marks a pivotal moment in the conversation about the use of AI in legal practice in the UK. These judgments are not a judicial rejection of technology, but a reaffirmation that its use must be grounded in the same principles that have long underpinned professional conduct.
Put simply, AI's value lies not in replacing lawyers but in supporting them. Its benefits will only be realised if its limits are recognised, its outputs verified, and its deployment aligned with professional and ethical standards.
Author:
Iris Anastasiadou, Visiting Lecturer, University of Westminster