
How to Upgrade Your Third-Party Risk Management Program for the Age of Global AI Regulation

Prevalent's SVP of Global Products & Services, Alastair Parr, shares 5 best practices for mitigating AI risks throughout the third-party lifecycle.
April 02, 2024

Editor's Note: This article was originally published in globalriskcommunity.com

As artificial intelligence grows by leaps and bounds, governments worldwide are scrambling to build guardrails to ensure its safe and responsible deployment. Meanwhile, businesses are deploying AI however and wherever they can, as its promised productivity and efficiency gains may profoundly impact the bottom line.

However, companies must deploy this game-changing technology in ways that do not violate the law. Increasingly, this will mean that they are not only responsible for using AI safely and responsibly but also for ensuring that the many third parties they transact with—including all vendors and service providers—do the same.

Navigating this complex landscape will not be simple, especially if your company is based in one of the regions leading the charge in crafting new regulations: the U.S., the U.K., the E.U., or Canada. Each of these areas will have its own particular framework for AI regulation, with its own strong provisions and, most likely, its own loopholes.

In this world of variables, however, one thing will remain constant for businesses: the need to manage third-party risk. Wherever you are based, a cautious approach and proactive vendor engagement will be the order of the day.

Every business has different objectives and challenges, and the quantity and quality of relationships with third-party business partners vary widely. But even so, here are five steps any company can take to mitigate risk at a time when mitigation is more important than ever:

1. Know who amongst your third-party vendor and supplier population is leveraging artificial intelligence, and how. Understanding which vendors in your universe are using AI is the critical first step toward understanding any risks inherent in their usage. It begins with taking inventory and asking the right questions: every business needs to create its own discovery mechanism to surface the various instances of AI in use across its third-party ecosystem. One way to start is sketched below.
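As a minimal sketch, here is what a discovery record and questionnaire might look like in Python. The field names and questions are illustrative assumptions, not a standard template.

```python
# Illustrative sketch of a vendor AI discovery record and questionnaire.
# All field names and questions are assumptions, not a formal standard.
from dataclasses import dataclass, field

@dataclass
class VendorAIProfile:
    vendor_name: str
    uses_ai: bool = False
    ai_use_cases: list[str] = field(default_factory=list)   # e.g., "support chatbot"
    handles_customer_data: bool = False
    customer_facing: bool = False
    models_in_use: list[str] = field(default_factory=list)  # e.g., "ChatGPT"

DISCOVERY_QUESTIONS = [
    "Do you use AI or machine learning in any service you deliver to us?",
    "Which products or workflows involve AI, and for what purpose?",
    "Does any AI system process our data or our customers' data?",
    "Are any AI features exposed directly to end users?",
    "Which models or AI providers do you rely on?",
]
```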

2. Design a way to score and tier their AI usage. Your business likely already has a tiering system for third-party partners based on how important their work is to your organization. Now, update that tiering system based on how they use AI and what risks that usage could introduce. Do they use AI to handle customer information? Do they use it in customer-facing applications? These are just some of the questions that can help a business rank its partners and vendors by AI risk, as in the scoring sketch below.
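Building on the hypothetical profile from the step 1 sketch, a tiering function might look like the following. The weights and thresholds are illustrative assumptions; a real program would calibrate them to its own risk appetite.

```python
# Illustrative tiering sketch (uses VendorAIProfile from the step 1 sketch).
# The weights and thresholds below are assumptions, not a standard.

def ai_risk_tier(profile: VendorAIProfile) -> str:
    """Assign a coarse AI risk tier from discovery answers."""
    if not profile.uses_ai:
        return "Tier 3 (low)"
    score = 0
    if profile.handles_customer_data:
        score += 2       # AI touching customer data weighs heaviest
    if profile.customer_facing:
        score += 1       # customer-facing AI raises exposure
    if len(profile.ai_use_cases) > 2:
        score += 1       # broad AI adoption widens the risk surface
    if score >= 3:
        return "Tier 1 (high)"
    return "Tier 2 (moderate)" if score >= 1 else "Tier 3 (low)"
```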

3. Perform a more detailed risk assessment. The third step involves going beyond cursory information and inspecting third parties’ internal controls over their use of AI. This means asking governance questions such as:

  • What is this partner doing to prevent hallucinations or biased outputs?
  • What are their data security protocols?
  • Is their AI usage transparent?
  • Are there humans involved in evaluating AI outputs?

The various compliance frameworks being developed by government agencies and industry groups can serve as a guide for asking these and other questions during due diligence; one way to organize such a questionnaire is sketched below.
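For example, questions could be grouped under the four core functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage). The questions below and their groupings are illustrative assumptions, not language taken from the framework itself.

```python
# Illustrative due-diligence questionnaire grouped by NIST AI RMF functions.
# The questions and their groupings are assumptions, not framework text.
ASSESSMENT_QUESTIONS = {
    "Govern": [
        "Who owns AI governance, and is there a documented AI policy?",
        "Are humans accountable for reviewing AI outputs?",
    ],
    "Map": [
        "Which business processes and data flows involve AI?",
        "Is your AI usage disclosed transparently to customers?",
    ],
    "Measure": [
        "How do you test for hallucinations, bias, and model drift?",
        "What data security protocols protect AI inputs and outputs?",
    ],
    "Manage": [
        "How are identified AI risks remediated and tracked?",
        "What is your incident response plan for AI failures?",
    ],
}

def unanswered(responses: dict[str, str]) -> list[str]:
    """Flag assessment questions the vendor left blank or skipped."""
    return [q for qs in ASSESSMENT_QUESTIONS.values() for q in qs
            if not responses.get(q)]
```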

4. Recommend remediations. Once you have performed a detailed assessment and have a tiered system for grading your vendors by risk, you can recommend actions to make your partnerships with them less risky. For example, if a third party uses ChatGPT to analyze customer data internally, you might recommend migrating to a different AI service with more robust security protocols, or to a private instance where data does not leave their environment and your data is not used to train outside models. Notably, at this stage you should insert actual clauses and measures into your contracts so that any de-risking of business operations you undertake has real enforcement mechanisms. This step requires detailed knowledge of which AI programs are available and which should be avoided, and it should culminate in a legal contract with teeth. One way to track such commitments is sketched below.
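As a sketch of how remediation tracking might work, the record below ties each finding to a recommendation, a contract clause, and a due date. The example finding, clause reference, and dates are all hypothetical.

```python
# Illustrative remediation record: every entry ties a finding to a
# recommendation and a contract clause, so de-risking is enforceable.
# The example finding and clause reference are hypothetical.
from dataclasses import dataclass

@dataclass
class Remediation:
    finding: str          # what the assessment uncovered
    recommendation: str   # the de-risking action you propose
    contract_clause: str  # the clause that makes it enforceable
    due_date: str         # ISO date agreed with the vendor
    status: str = "open"

example = Remediation(
    finding="Customer data analyzed with a public AI service",
    recommendation="Migrate to a private model instance; bar use of our data for training",
    contract_clause="AI data-handling addendum",
    due_date="2024-09-30",
)
```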

5. Monitor on an ongoing basis. Reducing third-party risk is not a one-and-done proposition; it is a process of continuous evaluation and assessment. Businesses should monitor the internal policies of third parties to detect changes in controls over time, as well as any use of AI tools that might indicate potential problems, so they know when to step in and remediate. AI risk will be a moving target, and it is essential to move with it through ongoing monitoring of vendors and partners, as in the drift check sketched below.
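A simple way to operationalize this is to diff each new questionnaire against the prior one and flag any answers that changed. The sketch below assumes assessments are stored as question-to-answer mappings.

```python
# Illustrative drift check: compare the latest assessment against the prior
# one and surface any control answers that changed or disappeared.

def control_drift(previous: dict[str, str],
                  current: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Return {question: (old_answer, new_answer)} for every changed answer."""
    drift = {}
    for question, old_answer in previous.items():
        new_answer = current.get(question, "<no longer answered>")
        if new_answer != old_answer:
            drift[question] = (old_answer, new_answer)
    return drift

# Example: a vendor quietly dropped human review of AI outputs.
prior = {"Are humans accountable for reviewing AI outputs?": "Yes, all outputs"}
latest = {"Are humans accountable for reviewing AI outputs?": "No"}
print(control_drift(prior, latest))  # flags the removed control
```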

Governments will respond with new legal frameworks as businesses continue to integrate AI into their operations. Embracing AI cautiously, and asking questions of vendors and suppliers, has become imperative for risk managers.

Putting the above five steps into action requires expertise in AI, something for which demand far outstrips supply today. That is why companies without the resources to hire a dedicated AI risk management team often outsource the work to firms that focus exclusively on the issue.

Whether your business hires a team or seeks outside help for this kind of risk mitigation, one thing is clear: Understanding and addressing these risks is only becoming more important over time.