
Many of the people we’ve talked to since we launched CoCounsel in March wonder how they can trust CoCounsel’s output, since it’s built around GPT-4, which is known to hallucinate. “Built around” is the key to why they can trust it. Our product and engineering teams took up, and delivered on, the challenge of creating a product that takes advantage of GPT-4’s tremendous raw power while eliminating the serious limitations, like hallucinations, that curb the model’s professional utility when it’s used on its own. GPT-4’s unprecedented capabilities, including scoring in the 90th percentile on the Uniform Bar Exam, make it a step change from all prior models. But what makes GPT-4 truly extraordinary is not what it can do alone; it’s what it enables.
Using the model directly, by chatting with GPT-4 or ChatGPT, calls for great caution and exposes users to risk if they rely on the output professionally. In fact, ChatGPT itself warns every user that it may produce inaccurate information.
CoCounsel, on the other hand, harnesses that power within robust, well-tested accuracy, privacy, and security controls. In short: GPT-4 is the world’s most incredible engine. CoCounsel is the only car built around that engine that can get you, safely, to incredible places you could not reach without GPT-4.
What makes CoCounsel trustworthy?
Our team has wanted to build a platform with CoCounsel’s capabilities ever since we began working with LLMs (large language models) in 2018, and we even had the opportunity to evaluate earlier models, such as GPT-3. But not until last fall, when OpenAI invited us to integrate GPT-4 into a domain-specific AI product, did we consider the technology capable of helping us produce results ready for professional legal use. We’ve applied our technical and domain expertise to GPT-4 to create CoCounsel, a first-of-its-kind product that both does more than GPT-4 can and corrects the problems that make GPT-4 on its own unsuitable for professional use.
Legal professionals can rely on CoCounsel for five key reasons:
1. CoCounsel has the right information. We connected the power of GPT-4 to our proprietary, industry-leading search technology, Parallel Search, which enables CoCounsel to surface more accurate, on-point information than large language models can when accessed directly. Unlike even the most advanced LLMs, CoCounsel does not make up facts, or “hallucinate,” because we’ve implemented controls that require CoCounsel either to answer from known, reliable data sources, such as our comprehensive, up-to-date database of case law, statutes, regulations, and codes, or not to answer at all (this retrieve-then-answer-or-decline pattern is sketched after this list).
2. Casetext established a Trust Team to guide our work fine-tuning CoCounsel for the demands of legal practice. This dedicated group of AI engineers and experienced litigation and transactional attorneys spent over 4,000 hours before launch filtering, ranking, and scoring results for over 30,000 legal questions. And that testing and fine-tuning continues every day.
3. CoCounsel was used more than 50,000 times in day-to-day work before launch by our beta testers, a deliberately varied group of clients we invited to use it and give feedback: 400+ lawyers from 40 firms and organizations, including multinational law firms, solo law offices, nonprofits, and Fortune 50 corporations.
4. CoCounsel makes it easy to verify its output. CoCounsel was not designed to replace the lawyer, but rather to help attorneys obtain valuable insights, read and comprehend large amounts of information at superhuman speed, and accomplish more high-quality work in less time. So just as lawyers review all work they delegate to a junior associate or paralegal, they need to validate CoCounsel’s output. We’ve made it easy to do so: every answer links to its origin in the source documents, so it’s simple for lawyers to trust, but verify.
5. CoCounsel always keeps lawyers’ and their clients’ data private and secure. Data entered into CoCounsel is subject to substantially more rigorous security controls than data entered into consumer-facing LLM-powered products such as ChatGPT. CoCounsel accesses OpenAI’s model through private, dedicated servers and a zero-retention API. This means OpenAI cannot store any customer data longer than required to process the request, cannot view any of that data, and cannot use any of it to train the AI model. Users always retain control over their data and can remove it completely from the platform at any time.
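To make points 1 and 4 concrete, here is a minimal, illustrative sketch of the general retrieve-then-answer-or-decline pattern, with citations returned for verification. It is not Casetext’s actual implementation: the toy corpus, the overlap scoring, the threshold, and the function names (search_legal_sources, generate_answer, grounded_answer) are all assumptions standing in for real components such as Parallel Search and GPT-4.

```python
# Illustrative sketch only: a toy retrieve-then-answer-or-decline loop with
# source citations. The corpus, scoring, threshold, and names here are
# assumptions for demonstration, not Casetext's actual implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Passage:
    source_id: str  # e.g., a cite to a case, statute, or regulation
    text: str

# Toy stand-in for a vetted database of case law, statutes, regulations, and codes.
CORPUS = [
    Passage("Statute §12.3", "A claim must be filed within two years of the injury."),
    Passage("Case A v. B (2001)", "The limitations period is tolled while the plaintiff is a minor."),
]

def search_legal_sources(question: str, min_overlap: int = 3) -> List[Passage]:
    """Stand-in for a real retrieval system such as Parallel Search:
    keep only passages whose word overlap with the question is high enough."""
    q_words = set(question.lower().split())
    return [p for p in CORPUS
            if len(q_words & set(p.text.lower().split())) >= min_overlap]

def generate_answer(question: str, passages: List[Passage]) -> str:
    """Stand-in for the LLM call; a real prompt would instruct the model to
    answer ONLY from the supplied passages."""
    return " ".join(p.text for p in passages)

def grounded_answer(question: str) -> dict:
    # 1. Retrieve candidate passages from known, reliable sources.
    relevant = search_legal_sources(question)

    # 2. Decline rather than guess when nothing sufficiently relevant is found.
    if not relevant:
        return {"answer": None, "citations": [],
                "note": "No sufficiently relevant sources found; declining to answer."}

    # 3. Answer only from the retrieved passages and return their citations,
    #    so a reviewer can verify every claim against its source.
    return {"answer": generate_answer(question, relevant),
            "citations": [p.source_id for p in relevant]}

if __name__ == "__main__":
    print(grounded_answer("When must a claim be filed after the injury?"))  # answers, with a citation
    print(grounded_answer("What is the standard for patent obviousness?"))  # declines to answer
```

The important design choice is the order of operations: retrieval and a relevance check happen before the model is ever asked to answer, and the citations travel with the answer so a reviewer can check every claim against its source.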
In the next post in our series, we’ll share more about the process our team undertook to engineer a reliable AI legal assistant; a third post will cover how to keep your data secure when using AI.