Global AI Regulations and Their Impact on Third-Party Risk Management

World governments and standards bodies have started to respond to AI technologies with new compliance regulations and frameworks, which will have a broad impact on third-party risk management in the months ahead.
By Matthew Delman, Product Marketing Manager
January 04, 2024

Regulation of artificial intelligence technology is underway around the world, including in the United States, the United Kingdom, the European Union, and Canada. Regulatory bodies in those countries have either proposed or finalized documents emphasizing outright restrictions, conscientious development, and approval processes. The different focuses across multiple geographies create a patchwork of regulatory complexity for third-party risk managers and cybersecurity professionals seeking to strike a balance between efficiency gains and responsible development.

In this post, we will examine the AI regulatory developments and statements of intent from leading political figures and standards bodies in the U.S., the U.K., the European Union, and Canada. Additionally, we will look at each regulation’s impact on the third-party risk management landscape.

The European Union Artificial Intelligence Act

The EU made news in early December 2023 when negotiators from the European Parliament and the Council reached a provisional agreement on the EU AI Act, putting European regulators on track to be the first with comprehensive legislation covering generative AI and future AI developments. It’s not unusual for Europe to be at the forefront of regulation: the bloc adopted the first comprehensive data privacy rules with the General Data Protection Regulation (GDPR) in 2016. The AI Act continues that tradition of setting regulatory precedents for the rest of the world.

The European Union’s Artificial Intelligence Act, officially the “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” was originally proposed in 2021.

The rules in this law are designed to:

  • Address risks specifically created by AI applications;
  • Propose a list of high-risk applications;
  • Set clear requirements for AI systems for high-risk applications;
  • Define specific obligations for AI users and providers of high-risk applications;
  • Propose a conformity assessment before the AI system is put into service or placed on the market;
  • Propose enforcement after such an AI system is placed in the market;
  • Propose a governance structure at the European and national levels.

The regulation takes a risk-based approach, defining four categories of risk that the European Commission illustrates with a pyramid model, shown below.

Image: The European Commission’s AI risk pyramid (Source: European Commission)

These four levels are defined as:

  • Unacceptable Risk – This level refers to any AI systems that the EU considers a clear threat to the safety, livelihoods, and rights of EU citizens. Two examples are social scoring by governments – a provision that appears to target one of the more common uses of AI in China – and toys that use voice assistance to encourage dangerous behavior.
  • High Risk – AI systems marked as high risk are those that operate within use cases crucial to society. This can include AI used in education access, employment practices, law enforcement, border control and immigration, critical infrastructure, and other situations where someone’s rights might be infringed.
  • Limited Risk – This category refers to AI applications with specific transparency obligations. Think of a chatbot on a website. In that specific case, consumers should be aware that they’re interacting with a software program or a machine, so they can make informed decisions on whether to engage or not.
  • Minimal Risk – Also called “no risk,” these are AI systems such as AI-enabled video games or email spam filters. This category appears to cover the bulk of AI in use within the EU today.

High-risk AI systems must comply with strict requirements before they can go on the market. According to the EU’s write-up on the AI Act, high-risk systems are subject to:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimize risk;
  • High level of robustness, security, and accuracy.

All remote biometric identification systems, for example, are considered high-risk. Their use in publicly accessible spaces for law enforcement purposes (e.g., facial recognition) is, in principle, prohibited under the act, with narrow exceptions.

The AI Act establishes a legal framework for reviewing and approving high-risk AI applications, aiming to protect citizens' rights, minimize bias in algorithms, and control negative AI impacts.

What does this mean for third-party risk management programs?

Companies located in the EU, or those that do business with EU organizations, need to be aware of and comply with the law after the applicable transition periods. Given the expansive definition of “high risk” in the law, it makes sense to ask vendors and suppliers concrete questions about how they’re using AI and how they’re complying with other relevant regulations.

Organizations should also thoroughly examine their own AI implementation practices. Other European technology laws still apply, so organizations needing GDPR compliance should explore ways to integrate AI Act compliance into their workflow as well.

How Will AI Impact Your TPRM Program?

Read our 16-page report to discover how AI can lower third-party risk management costs, add scale, and enable faster decision making.


Artificial Intelligence Governance in the United States

While there are no binding federal AI regulations in the United States today, major political figures and standards bodies have published extensive guidance in the form of governance frameworks and statements of intent. This includes the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) introduced in January 2023, President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and Senator Chuck Schumer’s SAFE Innovation Framework that’s designed to inform future congressional legislation.

In the U.S., existing AI guidance centers around promoting responsible development and addressing business risk. The NIST framework exemplifies this approach, providing a methodology for crafting an AI governance strategy within your organization. President Biden’s Executive Order and Senator Schumer’s SAFE Innovation Framework are policy guidance documents for the executive branch and Congress as they debate laws designed to govern AI development.

The NIST AI RMF and Third-Party Risk Management

The NIST AI framework provides guidance around how to craft an AI governance strategy in your organization. The RMF is divided into two parts. Part 1 includes an overview of risks and characteristics of what NIST refers to as “trustworthy AI systems.” Part 2 describes four functions to help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. The illustration below reviews the four functions.

Image: The four functions of the NIST AI Risk Management Framework (Courtesy: NIST)

Organizations should apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucination, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and to third parties’ use of AI systems. Potential risks of third-party misuse of AI include:

  • Security vulnerabilities in the AI application itself. Without the proper governance and safeguards in place, your organization could be exposed to system or data compromise.
  • Lack of transparency in methodologies or measurements of AI risk. Deficiencies in measurement and reporting could result in underestimating the impact of potential AI risks.
  • AI security policies that are inconsistent with other existing risk management procedures. Inconsistency results in complicated and time-intensive audits that could introduce potential negative legal or compliance outcomes.

According to NIST, the RMF will help organizations overcome these potential risks.

President Biden’s Executive Order on AI

President Biden released his executive order on AI development at the end of October 2023. The goal of the EO is to define guidance around what Biden calls the responsible development and use of AI, and to outline the principles the executive branch – and ideally the entire U.S. federal government – will follow to ensure that development remains responsible.

Biden outlines eight guiding principles and priorities in the executive order. Below, we describe each principle, what the EO says about it, and what it could mean for your TPRM program.

  • Artificial Intelligence must be safe and secure.
    What the EO says: President Biden wants to implement some guardrails around AI development, ensuring that the products developed using this technology are resilient against attack, can be readily evaluated, and are as safe as possible to use.
    What it means for you: Expect more guidance from the federal government around how to use AI in your products and what to look for with regards to any AI usage in your supplier’s work. In the interim, consider the NIST AI RMF as guidance.

  • Promoting responsible innovation, competition, and collaboration will enable the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
    What the EO says: Biden intends to invest in AI-related education, training, and research to give the United States a leg up in the global AI arms race. The intent is also to encourage competition, so large enterprises with deep pockets don’t capture the market.
    What it means for you: There could be many more small software vendors and suppliers leveraging AI toolkits in the future. Be aware of which companies in your supply chain include AI capabilities and be prepared to assess them accordingly.

  • The responsible development and use of AI require a commitment to supporting American workers.
    What the EO says: The Biden administration wants to ensure that AI tools don’t cause widespread unemployment or challenges in the labor market.
    What it means for you: Start to expand your examinations of AI risks beyond cybersecurity and data privacy. This includes looking at how AI is used in hiring practices or other day-to-day operations such as customer support and inventory management. AI will likely have a significant societal impact, so as part of your ESG monitoring, you need to understand how suppliers use the technology today and how they intend to use it in the future.

  • Artificial Intelligence policies must be consistent with the Biden Administration’s dedication to advancing equity and civil rights.
    What the EO says: This guiding principle is about preventing bias in AI algorithms, as well as ensuring that organizations don’t use AI to further disadvantage historically underrepresented groups.
    What it means for you: AI will soon become an even bigger ESG concern. Expect to see questions about AI usage focused on the non-technical side of third-party risk management in the next year or so. Be sure your TPRM program includes a library of updated assessment content to capture this important information from vendors and suppliers.

  • The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
    What the EO says: The federal government plans to enforce consumer protections against AI technology usage. It also plans to examine potential new regulations for the technology.
    What it means for you: Take a hard look at how your suppliers intend to use AI in their business. Current federal regulations still apply to this growing technology sector, and you should ask your suppliers about their plans for potential future compliance concerns – including data privacy.

  • Americans’ privacy and civil liberties must be protected as AI continues advancing.
    What the EO says: The Biden administration wants to emphasize data privacy considerations. This is especially pertinent given how powerful AI can be at extracting personal data.
    What it means for you: Pay strict attention to how your suppliers comply with data privacy laws. Large language models and other AI tools could become privacy risks if not properly governed.

  • It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
    What the EO says: This principle is about making sure the federal government has the right professionals with AI skills in its ranks. Biden notes that he intends to focus on training the federal workforce on AI technology.
    What it means for you: If you’re working with federal suppliers or are a federal supplier yourself, be aware that the government is going to focus on upskilling its people with respect to AI.

  • The federal government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
    What the EO says: The Biden administration plans to work with industry and international allies to develop a “framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.”
    What it means for you: Expect more documentation about AI risk management to come out of the federal government. As noted above, consider the NIST AI RMF as guidance.

The EO matters most to federal contractors and companies that supply federal contractors, but it also signals how President Biden is approaching the challenges of AI use more broadly and may indicate where future federal regulation of the technology will focus.

The SAFE Innovation Framework

Not to be outdone, Senator Chuck Schumer (D-NY) unveiled what he called the “SAFE Innovation Framework” for artificial intelligence in June 2023. The framework is intended to shape the legislation and regulatory guidance that Congress will inevitably consider for AI technology. SAFE stands for:

  • Security: Safeguard our national security with AI and determine how adversaries use it and ensure economic security for workers by mitigating and responding to job loss.
  • Accountability: Support the deployment of responsible systems to address concerns around misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability.
  • Foundations: Require that AI systems align with our democratic values at their core, protect our elections, promote AI’s societal benefits while avoiding the potential harms, and stop the Chinese Government from writing the rules of the road on AI.
  • Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content.

What’s clear in this framework is the U.S. Senate’s intention to take a more concrete look at regulating AI technology at the federal level. Legislation would go beyond the scope of President Biden’s Executive Order, and the framework may influence how future bills are written.

The United Kingdom’s Artificial Intelligence Regulation Bill

In the United Kingdom, Lord Holmes of Richmond introduced an AI Regulation Bill in the House of Lords. This is the second bill introduced in Parliament designed to regulate the use of artificial intelligence in the UK. The initial bill, which addressed both AI and workers’ rights, was presented in the House of Commons late in the 2022-23 legislative session but fell when that session ended in May 2023.

This new AI bill, introduced in November 2023, is broader in focus. Lord Holmes introduced it to put guardrails around AI development and to establish who would be responsible for setting future legislative restrictions on AI in the United Kingdom.

There are a few key features of the bill, which had its first reading in the House of Lords on November 22, 2023. These include:

  • The creation of an AI Authority tasked with the primary responsibility of ensuring a cohesive approach to artificial intelligence across the UK government. It’s also in charge of taking a forward-looking approach to AI and ensuring that any future regulatory framework aligns with international frameworks.
  • The definition of key regulatory principles, which puts guardrails around the regulations that the proposed AI Authority can and should create. According to the bill, any AI regulations put in place must adhere to the principles of transparency, accountability, governance, safety, security, fairness, and contestability. The bill also notes that AI applications should comply with equalities legislation, be inclusive by design, and meet the needs of “lower socio-economic classes, older people, and disabled people.”
  • Defining the need to establish regulatory sandboxes that enable regulators and businesses to work together to test new applications of artificial intelligence effectively. These “sandboxes” may also offer companies a way to understand and pinpoint appropriate consumer safeguards for doing business in the UK.
  • Advocating for AI Responsible Officers in each company seeking to do business in the UK. The role of this officer is to ensure that any applications of AI in the company are as unbiased and ethical as possible, while also ensuring that the data used in AI remains unbiased.
  • Guidelines for transparency, IP obligations, and labelling, stipulating that companies utilizing AI provide a record of all third-party data used to train their AI model, comply with all relevant IP and copyright laws, and clearly label their software as using AI. This component also grants consumers the right to opt out of having their data used in training AI models.

This proposed regulation is still in the early phases of negotiations. It could take a very different form after the second reading in the House of Lords, followed by a subsequent reading in the House of Commons.

Depending on how much of the bill survives the legislative process, it could have a substantial impact on how AI is used in the UK, how models are trained, and the transparency of the broader data-gathering process. Each of these areas has a direct impact on third-party vendor or supplier usage of AI technologies.

The Artificial Intelligence and Data Act (Canada)

In June 2022, the government of Canada began consideration of the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. The larger C-27 bill is designed to modernize existing privacy and digital law and includes three different sub-acts: the Consumer Privacy Protection Act, the Artificial Intelligence and Data Act, and the Personal Information and Data Protection Tribunal Act.

The AIDA’s main goal is to add consistency to AI regulation throughout Canada. The Act’s companion document identifies a few regulatory gaps, such as:

  • Mechanisms such as human rights commissions provide for redress in cases of discrimination; however, individuals subject to AI bias may never be aware that it has occurred;
  • Given the wide range of uses of AI systems throughout the economy, many sensitive use cases do not fall under existing sectoral regulators; and
  • There is a need for minimum standards as well as greater coordination and expertise to ensure consistent protections for Canadians across use contexts.

The act is currently under discussion, and the Canadian government anticipates it will take approximately two years for the law to pass and be implemented. The companion document to AIDA identifies six core principles, outlined below:

  • Human Oversight & Monitoring
    How AIDA describes it: Human oversight means that high-impact AI systems must be designed and developed to enable the people managing a system’s operations to exercise meaningful oversight, including a level of interpretability appropriate to the context. Monitoring, through measurement and assessment of high-impact AI systems and their output, is critical in supporting effective human oversight.
    What it could mean for TPRM: Vendors and suppliers must establish easily measurable methods to monitor AI usage in their products and workflows. Following potential passage of the AIDA, organizations will need to understand how their third parties monitor AI usage and incorporate AI into their broader governance and oversight policies. Another way to ensure more thorough human oversight and monitoring is to build human reviews into reporting workflows to check for accuracy and bias.

  • Transparency
    How AIDA describes it: Transparency means providing the public with appropriate information about how high-impact AI systems are being used. The information provided should be sufficient to allow the public to understand the capabilities, limitations, and potential impacts of the systems.
    What it could mean for TPRM: Organizations should be asking their vendors and suppliers how they’re using AI and what sort of data is included in the models, and should understand how those capabilities are integrated.

  • Fairness and Equity
    How AIDA describes it: Fairness and equity means building high-impact AI systems with an awareness of the potential for discriminatory outcomes. Appropriate actions must be taken to mitigate discriminatory outcomes for individuals and groups.
    What it could mean for TPRM: Organizations should ask how their third parties are controlling for potential bias in their AI usage. There may also be an additional impact in the form of net new ESG regulations.

  • Safety
    How AIDA describes it: Safety means that high-impact AI systems must be proactively assessed to identify harms that could result from use of the system, including through reasonably foreseeable misuse. Measures must be taken to mitigate the risk of harm.
    What it could mean for TPRM: AIDA may introduce new regulations regarding data usage in the context of AI. Expect new security requirements in AI tooling, and make sure that current and prospective vendors answer questions about the security of their AI usage – including basic controls such as data security, asset management, and identity and access management.

  • Accountability
    How AIDA describes it: Accountability means that organizations must put in place the governance mechanisms needed to ensure compliance with all legal obligations for high-impact AI systems in the context in which they will be used. This includes proactive documentation of policies, processes, and measures implemented.
    What it could mean for TPRM: New regulations are likely, prompting companies to ask third parties about compliance with any emerging reporting requirements and mandates.

  • Validity & Robustness
    How AIDA describes it: Validity means a high-impact AI system performs consistently with its intended objectives. Robustness means a high-impact AI system is stable and resilient in a variety of circumstances.
    What it could mean for TPRM: Organizations should ask their third parties about any validity issues with AI models in their operations – a concern most relevant to technology vendors, but one that may extend to the physical supply chain.

Ultimately, the Canadian government is taking a hard look at how to regulate AI usage nationwide, and new mandates are likely regardless of the bill’s final form. It therefore makes sense for companies doing business in Canada, or working with Canadian companies, to track the upcoming requirements as AIDA moves closer to passage.

Final Thoughts: AI Regulations and Third-Party Risk Management

Governments around the world are actively debating how to regulate artificial intelligence technology and its development. Regulatory discussions have so far focused on specific use cases identified as potentially the most impactful on a societal level, suggesting that AI laws will focus on a combination of privacy, security, and ESG concerns.

The next 12 to 18 months should offer more clarity on how organizations worldwide need to adapt their third-party risk management programs to AI technology. Companies are rapidly integrating AI into their operations, and governments will respond in kind. For now, taking a cautious, considered approach to AI in operations and asking pointed questions of vendors and suppliers is a prudent course for third-party risk managers.

For more on how Prevalent incorporates AI technologies into our Third-Party Risk Management Platform to ensure transparency, governance, and security, download the white paper, How to Harness the Power of AI in Third-Party Risk Management, or request a demonstration today.

Matthew Delman
Product Marketing Manager

Matthew Delman has more than 15 years of marketing experience in cybersecurity, financial technology, and data management. As product marketing manager at Prevalent, he is responsible for customer advocacy, product content, enablement, and launch support. Before joining Prevalent, Matthew held marketing leadership roles at Techstrong Group and LookingGlass Cyber, and owned product positioning for EASM and breach prevention technologies.

