Ethical Considerations regarding the Non-Use of Artificial Intelligence under LSO Rules of Professional Conduct

Society is facing a paradigm shift with the advancement of Artificial Intelligence (AI), driven by technologies like Large Language Models (LLMs), a type of AI trained on vast quantities of text to understand and generate human-like language. As AI increasingly integrates into the legal profession, it presents both opportunities and challenges within the ethical framework established by the Law Society of Ontario (LSO). While discussions often focus on the risks inherent in using AI, and numerous practitioners voice opinions against it, I believe that in the foreseeable future AI will become as ubiquitous as email and the Internet. Indeed, we are standing on the precipice of another technological shift comparable to the one the Internet brought around 1998. The question will soon become whether a lawyer’s failure to leverage appropriate AI tools could constitute a breach of their ethical obligations under the LSO Rules of Professional Conduct. This post explores the ethical dimensions of AI non-adoption strictly within the Ontario legal context, examining how existing professional duties under the LSO Rules may compel lawyers to engage with, rather than ignore, relevant technological advancements.

Foundational Ethical Duties Under the LSO Rules

The LSO Rules of Professional Conduct establish the ethical framework for Ontario lawyers. Several core duties are particularly relevant when considering the implications of AI:

  1. Duty of Competence (Rule 3.1): A lawyer must perform legal services to the standard of a competent lawyer (Rule 3.1-2). Rule 3.1-1 defines competence comprehensively, encompassing relevant knowledge, skills, and attributes. Crucially, it explicitly includes “performing all functions conscientiously, diligently, and in a timely and cost-effective manner” (Rule 3.1-1(e)). Furthermore, LSO commentary added following the FLSC’s 2019 amendment states, “To maintain the required level of competence, a lawyer should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice…” (Commentary [4A] to Rule 3.1-2). While not a binding rule itself, this commentary signals an evolving expectation that technological proficiency forms part of competence.
  2. Duty Regarding Quality of Service (Rule 3.2): Lawyers owe a duty to provide courteous, thorough, and prompt service (Rule 3.2-1). The quality of service required must be “competent, timely, conscientious, diligent, efficient, and civil” (Rule 3.2-1). This rule directly links efficiency to the expected standard of service.
  3. Duty Regarding Reasonable Fees and Disbursements (Rule 3.6): Fees and disbursements must be fair, reasonable, and disclosed (Rule 3.6-1). Billing practices resulting in excessive fees due to inefficient methods, when more efficient alternatives are reasonably available, could potentially conflict with this duty.
  4. Confidentiality (Rule 3.3): The paramount duty to hold client information in strict confidence (Rule 3.3-1) is critical when considering AI. Lawyers must ensure that the use of any technology, including AI, does not compromise this fundamental obligation, particularly when inputting or processing client data.

AI tools offer capabilities that directly intersect with these LSO duties. AI platforms can potentially enhance efficiency and timeliness by accelerating tasks such as legal research, large-volume document review, and preliminary document drafting, relating directly to the requirements for timely and efficient service under Rule 3.2-1. Furthermore, by reducing the lawyer or staff time required for certain tasks, AI can potentially lower the overall cost of legal services, aligning with the cost-effectiveness component of competence under Rule 3.1-1(e) and the duty of reasonable fees under Rule 3.6-1. Additionally, in data-intensive matters, AI may assist in identifying relevant information or patterns across vast datasets that might be impractical to uncover through purely manual review, potentially enhancing the thoroughness required by Rule 3.2-1.

The Ethical Argument for Considering AI Adoption (Based on LSO Rules)

Given these potential benefits, an argument emerges that an unjustified refusal to utilize appropriate AI tools could raise ethical concerns under the LSO Rules. If a reasonably available, reliable AI tool demonstrably offers a superior method for performing a specific task (in terms of efficiency, cost-effectiveness, or thoroughness) compared to traditional methods, and its associated risks (accuracy, bias, confidentiality) are manageable through diligent oversight and established protocols, then a lawyer’s refusal to employ it could potentially result in a failure to provide service in a cost-effective manner (contrary to Rule 3.1-1(e)), a failure to provide efficient service (contrary to Rule 3.2-1), or the charging of fees that are not reasonable in light of the available efficient methods (potentially violating Rule 3.6-1). This perspective is amplified by the LSO commentary on technological competence under Rule 3.1, which suggests an affirmative responsibility to understand and potentially use relevant technology. Judicial commentary in Ontario, while not establishing a formal rule, has also signaled impatience with technological reluctance and linked efficiency to cost consequences and access to justice, further contextualizing the practical implications of these LSO rules.

Essential Caveats and Professional Judgment Within the LSO Framework

This argument does not create a blanket duty to use any AI tool under the LSO Rules; critical caveats remain:

  1. Risk management: Lawyers must diligently manage the significant risks associated with AI, including accuracy (hallucinations) and bias. Rigorous human oversight, verification of outputs, and critical evaluation are essential components of competent practice under Rule 3.1, and lawyers remain ultimately responsible for the final work product.
  2. Confidentiality: Protecting client confidentiality (Rule 3.3) is paramount, requiring careful vetting of vendors and an understanding of their data usage policies. Where confidentiality risks cannot be adequately mitigated, refusal to use AI is ethically required.
  3. Professional judgment: The decision to use AI must be grounded in professional judgment regarding its suitability for the specific task and context, ensuring it aids rather than hinders competent representation (Rule 3.1) and quality service (Rule 3.2).
  4. Supervision: Lawyers are responsible for supervising the use of AI tools, whether by themselves or staff, ensuring compliance with all professional obligations (Rule 6.1 regarding delegation implicitly applies).
  5. Relevance and availability: The expectation to use technology, as noted in the LSO commentary ([4A] to Rule 3.1-2), is subject to the principles of relevance and reasonable availability, considering the nature of the practice, the specific matter, and available resources.

The ethical landscape surrounding AI in legal practice is evolving, demanding attention from lawyers in Ontario. While AI adoption presents risks requiring careful management under the LSO Rules, an outright refusal to engage with or utilize demonstrably beneficial and reasonably available AI tools may carry ethical implications, particularly concerning competence (Rule 3.1), quality of service (Rule 3.2), and reasonable fees (Rule 3.6). Lawyers must competently assess AI’s relevance, manage its risks (especially confidentiality under Rule 3.3), and make reasoned, client-centered decisions about its integration into practice. Failure to do so, especially when resulting in tangible client detriment through excessive costs or undue delay, becomes increasingly difficult to justify under the existing LSO ethical framework. As AI tools mature and integrate further into legal workflows, understanding these ethical considerations is essential for responsible legal practice in Ontario.