Getting Smarter about AI: A Guide for Legal Professionals

Privacy Plus+

Privacy, Technology and Perspective

This week, let’s focus on what lawyers need to know to become smarter about the use of artificial intelligence (AI) in law practice.

1. Know What AI Is

AI is a broad field of computer science focused on creating systems capable of learning from data, making decisions, and accomplishing tasks that would typically require human intelligence. AI technologies encompass several subsets, including machine learning (ML), natural language processing (NLP), and deep learning. AI capabilities range from robotic process automation to recommender systems, physical robots, facial recognition, and more. Popular uses include analytics, risk modeling, predictive services, product feature optimization, customer acquisition, lead generation, and generative chat assistants built on transformer models, such as ChatGPT and Bard.
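To make that last example concrete, here is a minimal sketch of how a transformer-based chat model can be queried programmatically. It is written in Python against the OpenAI client library and assumes an API key stored in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative assumptions, and the legal-specific tools discussed below do not necessarily work this way.

```python
# Minimal sketch: querying a transformer-based chat model programmatically.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "In two sentences, what is natural language processing?"},
    ],
)

print(response.choices[0].message.content)
```

Even in a simple experiment like this, anything placed in the prompt is transmitted to the provider, a point that matters for the ethical duties discussed below.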

But AI systems aren’t merely commercial products.  They, along with other technologies like quantum computing, 5G and blockchain, are strategic national assets.  The U.S. national strategy on AI is defined through legislation and Executive Orders.  To learn more, you can click on the following link:

https://www.ai.gov/legislation-and-executive-orders/

Also, a compelling book on the subject of AI as a strategic national asset is “AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee.

Understanding these basics can provide a firm foundation for further learning.

2. Understand How AI is Being Used in Law

AI is being used in the legal field in numerous ways, transforming how lawyers conduct research, manage cases, and interact with legal databases. AI-powered legal tools apply machine learning and related technologies to automate routine tasks. Here are a few examples:

- Luminance: Luminance markets itself as “the world’s most advanced AI for the processing of legal documents, streamlining operations and delivering value business-wide.”

- Blue J: Blue J markets itself as “AI for tax law with a reliable solution.”

- Casetext: Casetext markets an AI tool that facilitates “document review, legal research memos, deposition preparation, and contract analysis in minutes.”

- Westlaw Edge: Westlaw Edge markets “AI-enhanced capabilities that can help you research more effectively and be more strategic.” Its features include legal citation analysis, a powerful legal search engine, litigation analytics, and regulations comparison.

- LegalMation: LegalMation markets itself as “leverag[ing] the power of artificial intelligence to transform litigation and dispute resolution.” Its features include drafting pleadings, discovery requests, discovery responses, and related documents.

Understanding how AI is currently being used can help identify how it might benefit your practice or organization. 

3. Stay Abreast of Legal and Ethical Implications Regarding the Use of AI-Powered Tools

The use of AI technologies is not without risk, especially regarding ethical and legal issues. As a reminder, lawyers have ethical obligations under the professional rules of conduct, which in Texas include the following duties under the Texas Disciplinary Rules:

- Competence (Tex. Disc. Rule 1.01): Duty to provide competent representation, which includes a duty to “keep abreast of changes in the law and its practice, including the benefits and risks of technology…”

- Communication (Tex. Disc. Rule 1.03): Duty to provide clients with sufficient information to make informed decisions.

- Confidentiality (Tex. Disc. Rule 1.05): Duty to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, client information.

- Safekeeping (Tex. Disc. Rule 1.14): Duty to protect client property and information.

- Supervision (Tex. Disc. Rules 5.01 and 5.03): Duty to supervise lawyers and non-lawyers, including vendors.

What is critical to understand is that lawyers cannot use AI-powered tools as a replacement for legal advice.  In fact, most tools, including those listed above, are offered under terms of service that contain broad disclaimers, including disclaiming any warranties related to the security, accuracy, and reliability of the tools and their outputs, among other things. 

Typically, AI-powered tools are offered on an “as-is” basis, and customers—here, lawyers—must agree to indemnify the providers of those tools and cap their liability as a condition precedent to accessing the tools.  Thus, if a lawyer relies on an AI system and gives the wrong legal advice or makes a biased decision, liability rests entirely on that lawyer.

To emphasize the liability point, you can read the following article about a lawyer who unreasonably relied on ChatGPT to write a brief, cited non-existent cases the AI tool provided, and consequently faced potential sanctions for violating the duties of competence and confidentiality, along with their responsibilities regarding nonlawyer assistants:

https://www.reuters.com/legal/transactional/lawyer-used-chatgpt-cite-bogus-cases-what-are-ethics-2023-05-30/

Keep in mind that according to the National Institute of Standards and Technology (NIST), trustworthy AI systems must be:

- valid and reliable,

- safe,

- secure and resilient,

- accountable and transparent,

- explainable and interpretable,

- privacy-enhanced, and

- fair with harmful bias managed.

For more, you can access the NIST AI Risk Management Framework by clicking on the following link:

https://www.nist.gov/itl/ai-risk-management-framework

In short, competent lawyers must understand the risks and benefits of AI-powered tools. They must ensure that their use of such tools does not compromise their duties of confidentiality and safekeeping of client information. They must also supervise the use of AI-powered tools, including reviewing outputs to confirm their accuracy, so that they meet their ethical obligations, address concerns about reliability, security, privacy, and liability, and appropriately advise clients.
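One way to put the supervision duty into practice is to treat every authority in an AI-generated draft as unverified until a human has checked it against a primary source. The sketch below is a hypothetical illustration in Python: the regular expression is a rough approximation of common reporter-citation formats, not an authoritative parser, and it only flags citations for human review rather than verifying them.

```python
import re

# Hypothetical helper: pull citation-like strings out of an AI-generated draft
# so a human can verify each one against a primary source (e.g., an official
# reporter or a research database). The pattern is a rough sketch of common
# reporter citations such as "123 F.3d 456"; it is intentionally incomplete.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.' ]{1,20}?\s+\d{1,5}\b")

def flag_citations_for_review(draft_text: str) -> list[str]:
    """Return the citation-like strings found in the draft, for human verification."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

if __name__ == "__main__":
    # The citation below is a made-up placeholder used only to exercise the pattern.
    draft = "As the court held in Smith v. Jones, 123 F.3d 456 (5th Cir. 1997), the claim fails."
    for citation in flag_citations_for_review(draft):
        print("VERIFY BEFORE FILING:", citation)
```

The particular pattern matters less than the workflow: the AI output is a draft, and a lawyer confirms every cited authority before it leaves the office.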

4. Ask Questions about AI During Procurement

When procuring AI-powered tools or services, organizations need to be vigilant and raise specific questions to ensure alignment with legal, ethical, and business standards. This is especially important as AI technologies are sometimes stealthily embedded into products, such as cameras that include facial recognition capabilities. Here are some key questions to ask:

- Does this product or service utilize AI technologies, including stealthy ones like facial recognition?

- What disclaimers, limitations, and indemnities are in the Terms of Service?

- How does the tool or service handle data privacy and security? What do the Terms say about this?

- What measures does the provider take to ensure the ethical use of AI?

- How does the provider ensure the accuracy and reliability of its AI technologies?

- What is the provider's policy on updates and improvements in response to evolving AI technologies and regulations?
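For organizations that evaluate multiple vendors, it can help to capture the answers to these questions in a consistent, reviewable record. The sketch below is a hypothetical illustration in Python; the field names are assumptions rather than any standard procurement form.

```python
from dataclasses import dataclass, field

# Hypothetical due-diligence record mirroring the questions above.
# Field names are illustrative; adapt them to your own procurement process.
@dataclass
class AIVendorReview:
    vendor: str
    uses_ai: bool                        # Does the product embed AI (e.g., facial recognition)?
    tos_disclaimers: str = ""            # Disclaimers, limitations, and indemnities in the Terms
    privacy_and_security: str = ""       # How the tool handles data privacy and security
    ethical_use_measures: str = ""       # Provider's measures for ensuring ethical use of AI
    accuracy_and_reliability: str = ""   # How accuracy and reliability are ensured
    update_policy: str = ""              # Policy on updates as AI technologies and regulations evolve
    open_questions: list[str] = field(default_factory=list)

review = AIVendorReview(vendor="Example Vendor", uses_ai=True)
review.open_questions.append("Does the camera module include facial recognition?")
print(review)
```

A record like this also makes it easier to revisit vendors as their terms and technologies change.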

5. Keep Up with AI-Related Legal Developments

Just as you keep up with other legal changes, it's crucial to stay abreast of developments in AI legislation, regulation, and case law. Our blog often highlights new developments, such as the release of AI frameworks by NIST and new AI guidance issued by the International Organization for Standardization (ISO), which you can read more about by clicking on the following link:

https://www.hoschmorris.com/privacy-plus-news/managing-ai-risk-nist-framework-and-iso-guidance-announced

Other organizations are dedicated to tracking AI-related legal developments, including the following:

- The Future of Privacy Forum (FPF): This Washington, D.C.-based think tank seeks to advance responsible data practices, with its leaders serving as outspoken advocates for privacy rights in the age of AI and data.

- Centre for Data Ethics and Innovation (CDEI): UK-based CDEI advises on the use of data-driven technologies and AI. It aims to ensure that these technologies are used responsibly and that they benefit everyone in society.

- IAPP AI Governance Center: The International Association of Privacy Professionals (IAPP) recently launched its AI Governance Center, which offers content, resources, networking, and training (and now even a certification) to address risks in the AI field, focusing on the evolving laws applicable to the development, deployment, and use of AI.

6. Test AI Legal Tools

Finally, one of the best ways to understand something is by experiencing it. We encourage our readers to try using various AI legal tools. Before you do, however, check your organization’s or firm’s acceptable use policy or specific AI policy. You will want to ensure that you (and your organization or firm) use AI responsibly, taking into account privacy and data security concerns, the potential for bias in AI algorithms, and intellectual property issues, among other things.
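If your policies permit experimentation, one simple precaution is to strip obviously identifying details from any text before it goes into a prompt. The sketch below is a hypothetical illustration in Python; the patterns are assumptions that catch only a few obvious identifiers, and they are no substitute for your firm’s confidentiality review or acceptable use policy.

```python
import re

# Hypothetical pre-prompt scrubber: masks a few obvious identifiers (email
# addresses, phone numbers, SSN-style numbers) before text is sent to an AI
# tool. The patterns are illustrative and intentionally incomplete; names and
# other context-specific identifiers are not caught.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(text: str) -> str:
    """Mask obvious identifiers before the text is placed in a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Client Jane Roe (jane.roe@example.com, 555-123-4567) disputes the invoice."
    print(scrub(sample))  # -> Client Jane Roe ([EMAIL], [PHONE]) disputes the invoice.
```

Small experiments like this, run on non-confidential material, are often the fastest way to build the judgment the rules of professional conduct now expect.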

Understanding AI is no longer an option for legal professionals—it's a necessity. Remember, the goal isn't to become an AI expert, but to understand enough to leverage AI in your practice effectively and ethically, and to provide knowledgeable counsel to your clients.

---

Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.

 
