The Bar Council’s Guidance on the use of Generative AI

The Bar Council has recently issued guidance for barristers on the use of generative Artificial Intelligence (AI), including ChatGPT and other large language model (LLM) systems. The guidance concludes that there is nothing inherently improper about using reliable AI tools to augment legal services, provided they are properly understood by the individual practitioner and used responsibly.

The guidance is not legal advice and is not ‘guidance’ for the purposes of the BSB Handbook I6.4. However, it highlights the key risks with LLMs and explores the considerations for barristers, and by extension all lawyers, when using generative AI:

  • Due to possible hallucinations and biases, it is important for lawyers to verify the output of LLM software and maintain proper procedures for checking generative outputs.
  • LLMs should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise that clients, courts and society expect from their legal representatives.
  • Lawyers should not share any legally privileged or confidential information with an LLM system.
  • Lawyers should critically assess whether content generated by LLMs might infringe intellectual property rights, including trademarks.
  • It is important to keep abreast of the relevant Civil Procedure Rules, which may in future introduce rules or practice directions on the use of LLMs, for example requiring parties to disclose when they have used generative AI in the preparation of materials, as has been adopted by the Court of King’s Bench of Manitoba.

This Bar Council guidance will be kept under review and updated periodically; however, practitioners will need to remain vigilant and adapt as the legal and regulatory landscape changes.