Sunday, June 29, 2025

Privacy and Artificial Intelligence - 1.7 Vendor and Third-Party Risks

1.7 Vendor and Third-Party Risks

Introduction

When you use a computer or a smart device, you might not realize that many of the parts and programs inside it come from different companies. In the world of artificial intelligence (AI), organizations often rely on vendors and third parties—other companies that provide technology, software, or data. Working with these outside companies can help organizations do amazing things with AI, but it also introduces new risks that need to be managed carefully (ISACA, 2025).

Technical or Conceptual Background

Vendor and third-party risks happen when organizations use AI systems or services that are built, hosted, or managed by other companies. These risks can be very different from the risks that come from using your own technology. For example, when you share your data with a third-party AI vendor, you are trusting them to keep your information safe and to use it in ways that follow the law (Verasafe, 2025). However, that trust is not always justified.

Traditional risk management models are not always enough to handle the new kinds of risks that AI introduces. AI systems can have problems like bias, hallucinations (where the AI makes up information), model drift (where the AI's behavior changes over time), and weaknesses in the supply chain (ISACA, 2025). When third-party AI tools are used, the risk does not stay with the vendor: it can spread to your organization, your clients, and even national infrastructure, depending on how the AI is used (ISACA, 2025).
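
To make "model drift" a little more concrete, here is a minimal Python sketch of one simple way an organization might watch for it: comparing the share of positive predictions in a recent window against a historical baseline. The example data and the ten-percentage-point threshold are made up for illustration; real drift monitoring uses richer statistics and vendor-specific signals.

```python
# Minimal sketch: flagging possible model drift by comparing the share of
# "positive" predictions in a recent window against a historical baseline.
# The numbers and the 0.10 threshold are illustrative assumptions only.

def positive_rate(predictions):
    """Fraction of predictions that are 1 (the 'positive' class)."""
    return sum(predictions) / len(predictions)

def drift_alert(baseline_preds, recent_preds, threshold=0.10):
    """Return True if the positive rate has shifted by more than `threshold`."""
    shift = abs(positive_rate(recent_preds) - positive_rate(baseline_preds))
    return shift > threshold

if __name__ == "__main__":
    baseline = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # about 30% positive last quarter
    recent   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # about 70% positive this week
    print("Possible drift:", drift_alert(baseline, recent))   # -> True
```

A check like this does not explain why the model's behavior changed, but it gives the organization an early signal that a vendor's model may no longer behave the way it did when it was approved.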

Current Trends and Challenges

One major challenge is that many AI vendors collect more data than they need, including sensitive personal information. This excessive data gathering can break privacy rules and put organizations at risk of legal trouble (Verasafe, 2025). There is also a risk that once data is shared with a vendor, it might be used in ways that were never agreed upon, such as training new AI models or selling insights to other companies. For example, if you give a company your favorite color or your pet's name to help improve a game, the company might use that information to train an AI to guess things about people, or share what it learned with other companies, even though you never agreed to that (Verasafe, 2025). People understandably feel worried or upset when their personal information is used in ways they did not expect or want.
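
One practical safeguard against excessive data collection is data minimization: stripping out everything a vendor does not strictly need before the data leaves your organization. The sketch below shows the idea in Python; the field names, the allow-list, and the support-ticket example are hypothetical, not taken from any real vendor contract.

```python
# Minimal sketch of data minimization before sharing data with a third-party
# AI vendor: only the fields the vendor actually needs are kept, and
# everything else is dropped before the data leaves the organization.
# The field names and example record are hypothetical.

ALLOWED_FIELDS = {"ticket_id", "message_text"}   # assumed to be agreed in the vendor contract

def minimize(record: dict) -> dict:
    """Strip every field that is not explicitly allowed to be shared."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

support_ticket = {
    "ticket_id": "T-1042",
    "message_text": "My order arrived damaged.",
    "customer_email": "alex@example.com",   # sensitive: must not be shared
    "favorite_color": "blue",               # not needed for this purpose
}

payload = minimize(support_ticket)
print(payload)   # {'ticket_id': 'T-1042', 'message_text': 'My order arrived damaged.'}
```

Minimizing data in this way limits what a vendor can misuse, lose in a breach, or quietly repurpose for training.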

Another big problem is the lack of transparency. Many AI systems are “black boxes,” meaning that even the companies using them do not really know how the AI works or how it processes data (Verasafe, 2025). This makes it hard to check if the AI is following the rules or if it is safe to use.

Third-party data breaches are also a serious concern. In 2024, a Russian-linked group called Midnight Blizzard hacked a third-party app that was connected to Microsoft, stealing tens of thousands of emails, including those of U.S. government officials (Legit Security, 2025). In another case, attackers breached a third-party merchant processor used by American Express, leaking sensitive cardholder data (Legit Security, 2025). These examples show that even if your own systems are secure, a mistake by a vendor can still put your information at risk.

Regulatory bodies around the world are paying more attention to these risks. The EU AI Act and the NIST AI Risk Management Framework both stress the need for good governance, transparency, and accountability when using AI. The NIST AI RMF places its Govern function at the center of this: Govern involves “establishing and implementing the policies and processes to oversee AI risk management,” which includes assigning clear roles and responsibilities, making decision-making transparent, and maintaining accountability throughout the AI lifecycle (NIST, 2023; Standard Fusion, 2025; Avolution, 2024). In practice, organizations must set clear rules for who is responsible for AI decisions, make sure those decisions can be explained and checked, and keep records so that everyone knows who made which choices and why.
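
To illustrate what that kind of record-keeping can look like, here is a minimal Python sketch of a simple AI decision log. The fields shown are an assumption chosen for illustration, not a format prescribed by NIST or the EU AI Act.

```python
# Minimal sketch of an AI decision log: each AI-related decision is recorded
# with who owns it, when it was made, and why, so it can be explained and
# audited later. The fields are illustrative assumptions, not a NIST format.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    decision: str          # e.g. which vendor AI tool was approved, and for what
    owner: str             # the accountable role, not just a person's name
    rationale: str         # why the decision was made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision_log = []
decision_log.append(AIDecisionRecord(
    decision="Approved vendor chatbot for customer support",
    owner="Chief Privacy Officer",
    rationale="Vendor passed the privacy and security review on 2025-06-01",
))

for entry in decision_log:
    print(entry)
```

Even a log this simple supports the Govern function's goals: anyone reviewing the system later can see who approved what, when, and on what basis.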

Mitigation Challenges and Shortcomings

Even though it is important to manage vendor and third-party risks, many organizations find this hard to do. One reason is that risk management is not a one-time process—it needs to be tailored to your business and updated as things change (ISACA, 2025). You need to think about how important the vendor is to your business, how sensitive your data is, how complex the AI system is, and what rules you need to follow (ISACA, 2025).
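
One common way to tailor that effort is to tier vendors by the factors just mentioned. The Python sketch below scores a vendor on business criticality, data sensitivity, AI complexity, and regulatory exposure; the 1-to-3 scales and the tier cut-offs are illustrative assumptions, not a scoring model defined by ISACA or any regulator.

```python
# Minimal sketch of tiering a third-party AI vendor by the factors discussed
# above. Each factor is scored 1 (low) to 3 (high); the sum picks a tier.
# The scales and cut-offs are illustrative assumptions only.

def vendor_risk_tier(criticality: int, data_sensitivity: int,
                     ai_complexity: int, regulatory_exposure: int) -> str:
    score = criticality + data_sensitivity + ai_complexity + regulatory_exposure
    if score >= 10:
        return "high"      # deepest due diligence, contract clauses, ongoing audits
    if score >= 7:
        return "medium"    # standard review plus periodic reassessment
    return "low"           # lightweight review

# A hypothetical vendor hosting a chatbot trained on sensitive customer data:
print(vendor_risk_tier(criticality=3, data_sensitivity=3,
                       ai_complexity=2, regulatory_exposure=3))   # -> "high"
```

The point is not the exact numbers but the discipline: the higher the tier, the more scrutiny the vendor gets, and the more often the assessment is repeated as the relationship and the AI system change.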

Sometimes, countries or organizations do not enforce their data protection laws well. For example, in some places, companies are not fined or punished for mishandling personal data, even when clear violations occur. People who try to complain about privacy violations may find it difficult to get help, because the authorities do not investigate or respond to their concerns (Privacy International, 2023). Weak enforcement makes it easier for companies to misuse your information and harder for you to protect your privacy.

Another challenge is that many vendors are new or inexperienced, and may not understand all the rules or best practices for using AI (Ncontracts, 2025). If a vendor does something wrong, your organization can still be held responsible, and you could face legal trouble, financial losses, or damage to your reputation (Verasafe, 2025).

Glossary

Vendor: A company that provides goods or services to another company. Example: "The vendor supplies the AI software we use."

Third Party: Any company or organization that is not directly part of your own business. Example: "We share data with a third party to help us analyze it."

Black Box AI: An AI system whose inner workings are not easily understood or explained. Example: "Black box AI makes it hard to know how decisions are made."

Data Breach: When sensitive information is accessed or stolen without permission. Example: "A data breach can happen if a vendor's security is weak."

Compliance Risk: The chance of breaking laws or rules. Example: "Using a vendor that does not follow privacy laws creates compliance risks."

Questions

  1. What are vendor and third-party risks in the context of AI?

  2. What are some examples of privacy risks when using AI vendors?

  3. How can a lack of transparency in AI systems create problems for organizations?

  4. What are some real-world examples of third-party data breaches?

  5. Why is it important for organizations to follow frameworks like the NIST AI Risk Management Framework?

Answer Key

  1. Suggested Answer: Vendor and third-party risks in AI happen when organizations use AI systems or services provided by other companies. These risks can include data breaches, misuse of data, and lack of transparency, and they can affect your organization, your clients, and even national infrastructure (ISACA, 2025; Verasafe, 2025).

  2. Suggested Answer: Examples include vendors collecting more data than necessary, using data in ways that were not agreed upon—like using your information to teach new robots or sharing what they learned with other companies—and failing to follow privacy laws. This can lead to legal trouble, regulatory investigations, and damage to your reputation (Verasafe, 2025).

  3. Suggested Answer: A lack of transparency in AI systems means organizations do not know how the AI works or how it processes data. This makes it hard to check if the AI is following the rules or if it is safe to use (Verasafe, 2025).

  4. Suggested Answer: Real-world examples include the Midnight Blizzard attack on Microsoft, where a third-party app was hacked and sensitive emails were stolen, and the American Express data breach, where a third-party merchant processor was attacked and cardholder data was leaked (Legit Security, 2025).

  5. Suggested Answer: It is important because frameworks like the NIST AI Risk Management Framework help organizations set up good governance, clear accountability, and transparency. The NIST AI RMF specifically highlights the Govern function, which involves establishing and implementing policies and processes to oversee AI risk management, assigning clear roles and responsibilities, and ensuring that decision-making is transparent and accountable throughout the AI lifecycle (NIST, 2023; Standard Fusion, 2025; Avolution, 2024).

References

Avolution. (2024, November 15). NIST AI Risk Management Framework (RMF). https://www.avolutionsoftware.com/news/nist-ai-risk-management-framework-rmf/
ISACA. (2025, May 8). Six steps for third-party AI risk management. ISACA Now Blog. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2025/six-steps-for-third-party-ai-risk-management
Legit Security. (2025, June 20). Third-party data breach: Examples and prevention strategies. https://www.legitsecurity.com/aspm-knowledge-base/third-party-data-breach
Ncontracts. (2025, May 13). How to manage third-party AI risk: 10 tips for financial institutions. https://www.ncontracts.com/nsight-blog/how-to-manage-third-party-ai-risk
NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://www.nist.gov/itl/ai-risk-management-framework
Privacy International. (2023, March 15). Enforcement of data protection laws around the world. https://privacyinternational.org/long-read/4837/enforcement-data-protection-laws-around-world
Standard Fusion. (2025, April 7). What is the NIST AI Risk Management Framework (AI RMF)? https://www.standardfusion.com/blog/what-is-the-nist-ai-risk-management-framework-(ai-rmf)
Verasafe. (2025, April 7). AI vendors and data privacy: Essential insights for organizations. https://verasafe.com/blog/ai-vendors-and-data-privacy-essential-insights-for-organizations



