Generative AI tools, such as OpenAI’s ChatGPT and Google’s Gemini (previously called “Bard”), are capable of performing tasks once thought to be solely the province of human ability. They can analyze and categorize data and even create human-sounding text. As such, generative AI has clear applications in the practice of law and legal marketing. Some of the more obvious uses include document review, contract analysis, and even basic client communications.
As lawyers and other professionals have started looking for ways to leverage generative AI to do their jobs more efficiently, many observers have sounded alarm bells about the ethical and professional issues its use raises.
In fact, some early adopters of generative AI in the legal profession have been sanctioned, as the technology is known to “hallucinate” facts. In the case of two New York lawyers, ChatGPT made up case law out of thin air and then doubled down on its existence when asked to verify the cases it cited. Ultimately, the attorneys were each fined $5,000 and ordered to notify the judges named in the fabricated opinions. Perhaps worse, their names were splashed all over the national media – from Forbes to CNN – for using ChatGPT without fact-checking its output.
More than a year after generative AI entered the mainstream, state bars are starting to develop guidance and rules regarding how lawyers use it. Given the concerns and uncertainties surrounding AI in the legal profession, this guidance is particularly valuable in helping attorneys leverage the efficiency of AI while upholding their ethical duties. Recently, California issued guidance that lawyers across the United States can benefit from. I discuss some of the highlights below.
The California Bar Guidance
As part of its guidance, the California Bar takes the position that AI is like any other technology that attorneys may leverage in their day-to-day professional activities. From the guidance:
Like any technology, generative AI must be used in a manner that conforms to a lawyer’s professional responsibility obligations, including those set forth in the Rules of Professional Conduct and the State Bar Act.
The guidance demonstrates ways that lawyers can use AI consistent with their professional responsibility obligations. Some of the obligations it addresses are discussed below.
Duty of Confidentiality
The California Bar cautions that the use of AI can have implications related to the disclosure of confidential information. The guidance points out that many generative AI models use inputs to train the AI further and the information that users upload may be shared with third parties. In addition, the models may lack adequate security for attorneys to input confidential information.
For this reason, the Bar advises that lawyers should not input any confidential information without first confirming that the model they are using has sufficient confidentiality and security protections. Furthermore, the Bar advises lawyers to consult with IT professionals to confirm that an AI model adheres to security protocols, and to carefully review its Terms of Use or other applicable provisions.
Duties of Competence and Diligence
The use of generative AI can also raise issues related to the duties of competence and diligence. Because these models can produce false or misleading information, the California Bar advises that lawyers must:
- Ensure competent use of the technology and apply diligence and prudence with respect to facts and law
- Understand to a reasonable degree how the technology works and its limitations
- Carefully scrutinize outputs for accuracy and bias
In addition, the Bar cautions that overreliance on AI is inconsistent with the active practice of law and the application of an attorney’s trained judgment. Furthermore, the guidance advises that an attorney’s professional judgment cannot be delegated to AI.
Duty to Supervise Lawyers and Non-lawyers, Responsibilities of Subordinate Lawyers
The Bar advises that supervisory and managerial attorneys should establish clear policies regarding the use of generative AI. In addition, they should make reasonable efforts to ensure that the firm adopts measures that provide reasonable assurance that its lawyers’ and non-lawyers’ conduct complies with professional obligations when using generative AI. This includes training on how to use AI and the ethical implications of using AI.
Using AI Can Also Have Implications for Law Firm Marketing
At Lexicon Legal Content, our sole focus is on generating keyword-rich content that helps law firms connect with their clients. While the California Bar’s guidance does not mention it directly, using generative AI to create marketing materials like social media or blog posts may also have implications related to the rules of professional conduct.
Under California Rule 7.1, a lawyer may not make a false statement about the lawyer or the lawyer’s services, and a statement is false or misleading if it contains a material misrepresentation of fact or law. Importantly, this is analogous to ABA Model Rule 7.1, which many states have adopted. In addition, under Model Rule 7.2, a lawyer should not state or imply that they are a certified specialist in an area of law unless they have been certified by an organization approved by an appropriate authority of a state, the District of Columbia, or a U.S. Territory, or accredited by the American Bar Association.
These professional duties related to advertising make it critical to review any AI output a law firm intends to use in its marketing efforts. At Lexicon Legal Content, we are staffed by experienced legal professionals, including law school graduates and licensed attorneys, who understand these rules and ensure that all of the content we create – whether AI-assisted or not – is in compliance with advertising regulations in our clients’ states.