Managing AI Risk: NIST Framework and ISO Guidance Announced

Privacy Plus+

Privacy, Technology and Perspective

This week, let’s highlight the new Artificial Intelligence (AI) Risk Management Framework, announced in late January by the National Institute of Standards and Technology (NIST), and new AI guidance issued by the International Organization for Standardization (ISO).

What a “Framework” Is and How It Works: “Frameworks” are organizing tools that guide users in (i) developing the steps needed to accomplish a large, ongoing objective (e.g., “managing risk”), (ii) organizing the process, and (iii) assessing where they are and how far they still need to go. Typically, a Framework is developed and continually improved by an authoritative body of some kind, such as NIST (an agency of the US Department of Commerce), the International Organization for Standardization (ISO) (a large multinational nonprofit), HITRUST (an industry group focused mainly on healthcare), or the American Institute of CPAs (AICPA).

Frameworks are usually organized around anywhere from 10 or 20 to more than 150 topics (or “controls”), typically grouped into top-level categories. An “assessment” looks at how well the user performs on each applicable control. Performance is not graded on a binary, pass-fail scale (except as a user or industry may determine for itself), but rather on a “green-yellow-red” scale for each applicable topic, so the user can see its relative strengths and weaknesses highlighted. Management Frameworks exist for nearly every field and subject. NIST standards are often especially useful for government agencies and early-stage companies. ISO has promulgated over 24,000 standards, many for more complex and sophisticated environments. HITRUST’s Common Security Framework™ focuses mostly on HIPAA requirements but is useful far beyond them. The AICPA sponsors the well-known SOC 1, SOC 2, and SOC 3 standards.
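For readers who like to see structure in code, here is a minimal sketch, in Python, of how such a control-by-control assessment might be recorded and summarized. The categories, control IDs, and ratings are hypothetical, invented for illustration rather than taken from any of the frameworks named above.

```python
# Illustrative only: hypothetical categories, controls, and ratings,
# not drawn from NIST, ISO, HITRUST, or AICPA materials.
from collections import defaultdict

RATINGS = ("green", "yellow", "red")  # relative strength, not pass/fail

assessment = {
    # control id: (category, rating)
    "GV-1": ("Governance", "green"),
    "GV-2": ("Governance", "yellow"),
    "MP-1": ("Mapping", "red"),
    "MS-1": ("Measurement", "yellow"),
}

def summarize(results):
    """Group rated controls by category so strengths and weaknesses stand out."""
    by_category = defaultdict(list)
    for control, (category, rating) in results.items():
        assert rating in RATINGS
        by_category[category].append((control, rating))
    return dict(by_category)

for category, controls in summarize(assessment).items():
    weak = [c for c, r in controls if r != "green"]
    print(f"{category}: {len(controls)} control(s); needs attention: {weak or 'none'}")
```

The point of the traffic-light output is exactly what the frameworks intend: a quick, visual picture of where an organization is strong and where it still has work to do.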

NIST’s AI Risk Management Framework: After more than a year of study, comments, drafts, workshops, and conferences, NIST has published version 1.0 of its AI Risk Management Framework, complete with an easy-to-read “Playbook,” a video, a “Roadmap,” a “Crosswalk,” and other materials.

NIST first defines the characteristics of trustworthy AI systems and then offers its guidance accordingly. According to NIST, trustworthy AI systems are:

  • valid and reliable,

  • safe,

  • secure and resilient,

  • accountable and transparent,

  • explainable and interpretable,

  • privacy-enhanced, and

  • fair with harmful bias managed.

Of particular interest is the Framework’s Appendix B, which discusses these characteristics and explains how AI risks differ from traditional software risks.

To manage AI risks, NIST organizes its controls into four main categories:

  • “Govern,” according to policies, accountability, DEI, culture, management processes, and third-party inputs;

  • “Map,” according to context, categorization of what AI will use/support, capabilities/costs/benchmarks, risks/benefits, and impacts;

  • “Measure,” according to metrics, trustworthiness, risk-tracking, and management efficiency; and

  • “Manage,” including risks, strategies, third-party risks, and risk responses.

Each of these categories has sub-topics with increasingly specific controls; a simple sketch of that structure follows the link below. You can access these materials here:

https://www.nist.gov/itl/ai-risk-management-framework
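For a more concrete picture of the structure just described, here is a minimal sketch, in Python, of the four functions and their sub-topics as summarized above. The wording paraphrases this post rather than quoting NIST’s own category text, and a real use of the Framework would expand each sub-topic into the specific controls NIST provides.

```python
# The four top-level functions described above, with sub-topics paraphrased
# from this post rather than quoted from NIST's own category language.
AI_RMF = {
    "Govern": ["policies", "accountability", "DEI", "culture",
               "management processes", "third-party inputs"],
    "Map": ["context", "categorization of what AI will use/support",
            "capabilities/costs/benchmarks", "risks/benefits", "impacts"],
    "Measure": ["metrics", "trustworthiness", "risk-tracking",
                "management efficiency"],
    "Manage": ["risks", "strategies", "third-party risks", "risk responses"],
}

# Print the structure as a simple checklist; in practice each sub-topic
# carries increasingly specific controls.
for function, topics in AI_RMF.items():
    print(f"{function}:")
    for topic in topics:
        print(f"  [ ] {topic}")
```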

ISO’s Information technology — Artificial intelligence — Guidance on risk management: On February 6, 2023, the ISO published ISO/IEC 23894:2023 guidance on AI risk management. The guidance is divided into three parts:

  • Clause 4, which describes the underlying principles of risk management;

  • Clause 5, related to a risk management framework; and

  • Clause 6, on risk management processes.

Generally, the guidance suggests how organizations that develop, produce, deploy, or use products, systems, and services utilizing AI can manage the related risks. It also includes three annexes, which cover AI-related objectives and risk sources and provide an example of mapping between the risk management processes and an AI system life cycle.

The guidance may be downloaded (for a fee) at the following link:

https://www.iso.org/standard/77304.html

Our thoughts: It’s 2023. AI is already being used in a wide range of applications, from chatbots and self-driving cars to medical diagnosis and financial trading. The speed at which AI is being developed and deployed is breathtaking, and it is already having a profound impact on society. As we see it, there are three AI risks that may not be fully appreciated:

First, latent problems in datasets cause problems in AI outputs. If a dataset is laced with unconsented personal information – or if AI outputs are used to predict what people are like, or what they are likely to be or to do – privacy will be a gargantuan problem. That problem is magnified by the data-aggregation capabilities of AI systems and by regulations covering information that can “reasonably be linked” with other information to identify a person; in the world of Big Data and AI, such linking is a given. Further, current privacy laws (like the GDPR) curtail the use of automated decision-making (for example, in hiring), yet such automation seems to be a major purpose of many AI projects.

Second, the datasets may have been created by copying images or writings. “Copying” triggers serious copyright concerns. (We wonder whether current interpretations of copyright law will make control of datasets into a chokepoint for AI development, at least in Berne Convention countries.)

Third, AI models often find, repeat, and magnify mistakes that a user is already making.  If a model is improperly trained, its output may be tainted by bias—not only human prejudice, but systemic, computational and statistical biases that can infuse the AI systems as they are used to make decisions that affect our lives. Such bias is already a known AI risk, especially in areas like facial recognition and predictive policing.

And of course these and other concerns raise hard issues about the ethics of AI, both as it’s being developed and as it’s being deployed.

With so much AI already deployed, we wonder whether the cat is out of the bag, especially because these frameworks and guidance are not mandatory and little regulation yet exists to address AI risks.

---

Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.
