On Sept. 19, Brown Rudnick hosted its inaugural conference on “The Future of AI,” which brought together industry experts and legal professionals, sparking lively debates on the outlook, trends, challenges and opportunities surrounding artificial intelligence (AI).
Stephen Palley and Sage Revell chaired the legal ethics panel, where they discussed the implications of using AI technologies in the legal sector and offered advice on how best to navigate this fast-developing area based on existing guidance in the U.S. and the U.K.
Here are some key takeaways:
- Not all publicity is good publicity: In June 2023, two New York attorneys relied on ChatGPT for legal research and submitted a brief that cited six fictitious cases (Mata v. Avianca). Judge Castel of the Southern District of New York imposed Rule 11 sanctions, both for the misuse of the technology and for the lawyers' failure to be forthright with the court when they were caught. Don't be these guys.
- ABA and SRA: Both the American Bar Association (ABA) in the U.S. and the Solicitors Regulation Authority (SRA) in the U.K. maintain clear codes of conduct that lawyers and law firms must abide by. In the midst of this “AI revolution,” lawyers and law firms must remain mindful of those obligations.
- LawTech: This is not the first time the law has been reshaped by new technology. Lawyers have adapted to photocopiers, email and e-discovery, amongst other innovations. As with those earlier shifts, whilst AI can enhance legal practice, lawyers remain responsible for maintaining professional standards and safeguarding client interests.
- Competence: It is essential to verify any AI-generated work for accuracy and reliability, and to understand both the capabilities and the limitations of AI. Lawyers should use AI technologies to complement their services rather than replace them, because lawyers remain responsible for any work product that goes to clients.
- Confidentiality: Lawyers must not submit confidential or privileged information to AI models. Protecting client data is essential: given the nature of AI, any information input into an AI model could be made available to the public. This presents an array of serious risks, including infringement of intellectual property rights, cybersecurity breaches and data privacy violations.
- Responsible Use: Editing substance created by lawyers, or summarising lengthy clauses that cannot be traced back to a client contract, are good examples of responsible use. AI should not be used to perform legal research (especially research that has not been independently checked and confirmed for accuracy), and client contracts or client data of any kind should not be uploaded to AI tools.
Conclusion: Lawyers and law firms are responsible to courts and clients for all work product. Technology can make us more efficient and smarter; however, it does not replace independent legal judgment and training. AI can free lawyers from mundane tasks and allow them to devote more of their time to counselling clients, which, after all, is the core of what lawyers do. Lawyers should not fear AI; they should embrace it, but with caution.
For further information, please reach out to Stephen Palley and Sage Revell or any member of the Brown Rudnick Technology Team.