As covered in our second post in this series, attorneys should be extremely cautious when using ChatGPT or GPT-4 in practice for a number of reasons. Chief among them, these general-use AI tools “hallucinate,” making up plausible-sounding but false information in their responses.
With the right expertise, it's possible to build a solution that harnesses the power of AI and is trustworthy enough for legal practice, and such products are quickly becoming must-haves for attorneys. Just as important as an AI product's reliability, though, is its ability to keep confidential firm and client information secure and private.
Just how critical data privacy and security are to legal practice has been underscored by the spike in law firm data breaches over the last three years—since 2020, more than 750,000 Americans have had their personal information compromised as a result of law firm cyberattacks. And while consumer-facing products powered by large language models (LLMs), such as ChatGPT, do protect users' data, that protection doesn't rise to the level of security and privacy required for high-stakes situations involving privileged and highly confidential information.
For starters, comprehensive privacy and cybersecurity protocols, rigorous audits and testing, and a high level of domain-area expertise are required to create AI solutions that meet the standards legal practitioners must adhere to. Building a generative AI-powered solution that’s both reliable and secure enough for use by legal professionals isn’t necessarily easy, but it is possible—we’ve done it with CoCounsel.
Because of ChatGPT's prevalence and powerful capabilities, many lawyers already incorporate it into their practice, despite its known data security and privacy risks. In response to ChatGPT's data leak and to requests for measures to protect personal data, OpenAI added a Personal Data Removal Request form that allows users to ask that their information be deleted.
But this protection is limited to users based in certain jurisdictions, such as Japan and GDPR-protected Europe. And even if a removal request is approved and OpenAI does not retain information provided in ChatGPT conversations, it appears the data may still be used to train the model.
Given these risks, general-use LLMs like ChatGPT don’t fulfill the strict obligations attorneys have to protect privileged work product and confidential client information.
When considering integrating an AI solution into your practice, it’s crucial to choose a product specifically built for use by legal professionals. This kind of carefully engineered, professional-grade AI—such as our AI legal assistant, CoCounsel—can both capture the power of advanced LLMs and eliminate security and data privacy risks.
For instance, security and privacy should be integral to a product’s creation, not add-on features. As pointed out recently in Harvard Business Review, companies practicing top-notch cybersecurity are committed to “ensuring security is not an afterthought through processes such as DevSecOps, a method that integrates security throughout the development life cycle.” When building CoCounsel, we began with security in mind, evidenced by, among other considerations, our requirement that our AI partner, OpenAI, never store our users’ data or use it to train the underlying model.
When evaluating AI for use in their practice, attorneys should look for these four key indicators of high-level security:
1. Customer-first data storage policies. It's critical that you, as the customer, control how your data is used, accessed, and stored. In stark contrast to ChatGPT, CoCounsel accesses OpenAI's GPT-4 model through private, dedicated servers and a zero-retention API, and all data is encrypted in transit and at rest. This means OpenAI cannot store any customer data longer than is required to process a request, and cannot view that data or use it to train CoCounsel's underlying LLM. You always retain control over your data and can remove it completely from the platform at any time.
2. Stringent security controls. Providers of AI for professional use should employ a sophisticated, multifaceted security program that goes beyond securing just the AI platform and customer data. Look for both internal and external security resources. For example, Casetext has taken extensive measures to ensure that CoCounsel aligns with NIST 800-53 (Moderate baseline) and the NIST Cybersecurity Framework, two of the most respected security frameworks in the industry. And CoCounsel's security controls are mapped to the ISO 27001 and SOC 2 standards, which are internationally recognized as best practices for information security management. Additional protocols include a rigorous vendor vetting and management program; independent verification, auditing, and testing; and up-to-date, comprehensive Incident Response and Business Continuity and Disaster Recovery plans.
3. A long record of success. A more nebulous but still important factor is how long the AI developer has been in the business, meaning not just legal tech generally, but the very complex business of building LLM-powered products for legal professionals. Examine a company's track record, specifically in the areas of security and AI expertise. Prior leaks or other security incidents are obvious red flags. If a developer has been in AI for only a year or two, there's little record to examine in terms of either incidents or expertise. Casetext has not only been at the forefront of applying the power of LLMs to the practice of law for several years, but has also provided customers with a safe and secure platform for more than a decade.
4. Adoption among industry leaders. Who does the legal AI provider count among its clients? Adoption of an AI solution by peer firms is a strong indicator that its security program is robust enough for law practice. Take Casetext as an example: more than 40 of the Am Law 200 have subjected Casetext to rigorous security review, and thousands of client firms have integrated Casetext into their practice, including top-ranked firms such as DLA Piper, Troutman Pepper, and Dykema, as well as Fortune 50 companies, including Ford and Microsoft. The fact that some of the world's leading firms and businesses trust Casetext to securely manage their most sensitive data speaks volumes.
When integrating AI into practice, attorneys need to know they’re using a platform they can trust, meaning one that will ensure they meet their obligations to protect client data and privileged work product. Look for providers who adhere to industry-leading security frameworks and are committed to data privacy, as demonstrated by their company’s history, expertise, and clientele.
In next week’s post, the last in this series, we’ll share more about our comprehensive approach to reliability and what’s on our roadmap for CoCounsel.