In 2023, an Australian mayor was getting ready to take legal action against OpenAI. The reason? The company’s chatbot, ChatGPT, shared a fake story about him being involved in a bribery scandal.
Surprised? Then you’ll be shocked to learn that Gartner thinks 85% of all AI models will fail, and cites data as the main cause.
Anything that consumes or processes data comes with risks, especially if the data belongs to other people. That’s why responsible AI governance is so important, and why AI risk management is an essential component.
Artificial intelligence models are built using machine learning (ML), a process where the model-in-training is given training data to “study,” allowing it to recognize patterns and make connections. Then, once it’s been trained, it can make predictions or generate answers. If this data is “bad” or incomplete, you will get poor results.
Some of these might be wrong answers, but as we just saw, these can lead to legal action. Others might lead to discrimination, bias, or more. You need to manage these AI risks to be confident in your model’s performance.
What Is AI Risk?
The concept of AI risk can be represented by this formula:
AI risk = (the chances of an AI model being exploited or giving an erroneous output) x (the consequences of the error or exploitation)
An AI-generated response should ideally be accurate and unbiased. However, if its training data was flawed, it might give incorrect responses.
If bad actors managed to access its algorithm and logic or the data it uses to make decisions, they could cause the AI model to provide poor responses. Depending on the context, that bad output could result in a minor inconvenience or a major impact on the life or livelihood of the user.
For example, if a student uses AI to research a topic and gets the wrong date in response, that’s a minor mistake. However, if you’re using AI to screen candidates for a job or a loan, a bad decision could have more of a negative impact. If the AI model were being used for driving an autonomous vehicle or making medical decisions, the consequences could be fatal.
Risk is the likelihood of a poor or incorrect response, either due to internal or external factors, and its potential impact on users or the business.
As you can see from the formula, AI risk isn’t just one or the other; it’s a product of both the factors.
Let’s say your model has an increased likelihood of frequently giving bad answers. Even if they don’t have a significant impact, your risk is high. Similarly, a single incident could lead to greater risk if its consequences are devastating.
Risks Associated With AI
There are many different types of AI models, but they all share the same four sets of risks:
- Data risks
- Model risks
- Operational risks
- Ethical or legal risks
Data Risks
Any AI model is as good as the data provided for its development and use. If someone gained unauthorized access to your database and tampered with it, the contaminated information could lead to your model giving bad outputs.
The more data you collect to train your model, the greater the potential impact of a breach, which is both a security and privacy concern.
Finally, there is the risk of poorly curated training data. Bad data and incomplete datasets can lead to the system developing biases. This could be potentially harmful when used for making decisions that affect users’ lives.
For example, there was an incident when an AI healthcare risk prediction algorithm demonstrated racial bias. The idea behind this predictive model was to identify chronically ill patients who might need more care during their stay in a hospital. To do so, the model used the patient’s previous medical care spending as a proxy as to how much care they’d need in the future. It seemed like a reasonable assumption, as a sicker patient would generate higher costs.
The problem was, it frequently determined that Black patients needed less care than White ones, even when they were sicker and would have benefited from closer monitoring and specific attention.
As it turned out, medical spending was not the best metric. It was discovered that Black patients—for reasons including worse access to care, lower trust in doctors, or other barriers to getting regular treatment—would put off getting medical care until their illness got worse. This meant they had lower medical spending, but their condition was often worse because they hadn’t sought medical care sooner.
On the other hand, White patients looked for medical intervention sooner and more consistently. As a result, it looked like White patients were spending more on their health, which led the model to believe they needed more targeted care. In reality, it was the other way around.
Here, the issue wasn’t quite bad data; it was the wrong assumptions as to what that data represented. More spending did not mean poorer health—it just reflected better access to medical care.
All these risks can be summed up as:
- Risk to the security of the data
- Risk to the privacy of the data
- Risk to the integrity of the data
Model Risks
Data risks deal with the information that fuels the AI system, but model risks are inherent to the learning algorithm, logic, or architecture of the model. Threat actors might copy, steal, or manipulate the way your AI model works. They might use techniques like the following:
- Adversarial attacks, where threat actors feed the model inputs specifically designed to confuse it into making bad decisions.
- Prompt injections, where users input manipulation designed to hijack GenAI models and cause them to produce offensive, unsafe, or manipulated content.
- Supply chain attacks. Instead of attacking the model directly, threat actors compromise a third-party vendor or tool—software libraries, data pipelines, code repositories, or even hardware—to introduce vulnerabilities in an otherwise secure model, leading to unexpected or malicious outputs once the model is deployed.
As a result, the model doesn’t function as it should, creating instances of bias or poor decision-making. The results can also lead to security or privacy breaches, disinformation, or even service disruptions.
We previously discussed an instance of bias caused by the incorrect interpretation of data, which resulted in poor medical decisions. As another example, Amazon had to scrap its AI recruitment tool because it contained a model error which led to discriminatory results.
The tool was created to evaluate job applications and suggest which candidates were the best, based on their resumes.
Unfortunately, it had been developed using 10 years' worth of training data from a field that used to be male-dominated. The algorithm should have been designed to compensate for the weighted data, but it wasn’t. The result was a model that favored applications from men, leading to accusations of sexism.
Operational Risk
It’s not always external forces that pose a risk; sometimes, the danger lies within. Internal, operational AI risks include:
- Data drift or decay. The model learned from data that’s now outdated and hasn’t been updated.
- Sustainability issues. As a new technology, the system was implemented without a plan for scaling or support.
- Integration challenges. The AI model isn’t integrated properly with other software and systems.
- Lack of accountability. The roles and responsibilities for the AI model aren’t assigned.
In late 2019, tech giant Apple and Goldman Sachs came under scrutiny after it was discovered that the Apple Card offered different credit limits to men and women, with women’s limits being significantly lower, even if their credit scores were higher.
It wasn’t just a data or model problem; the problem was a lack of transparency, where neither company involved could explain how the decisions were made. There was no system for manual review or an appeals process in case an applicant wanted to challenge unfair credit card assessments.
Most importantly, once the issue was discovered, it turned out that there was no one explicitly assigned to oversee, audit, and fix the issue. The model was deployed into a live financial product without any governance, accountability, or human oversight. Then, when it failed, the organization wasn’t prepared to respond effectively.
Ethical or Legal Risks
If you don’t prioritize safety, ethical constraints, and privacy in your AI model development, you risk regulatory violations and bias. Data privacy is protected by regulations like the GDPR and the CCPA. The EU AI Act—which we discuss in greater detail later in this article—specifically governs data privacy and ethical use of information in developing AI models.
If your system gives answers that violate these privacy regulations, you might face penalties and reputational damage.
Plus, if your model makes biased decisions against certain groups of people, it could lead to decisions that damage your public perception and reputation, not to mention the impact it could have on people’s lives.
OpenAI discovered this in March 2025. This was when the company was hit with a privacy complaint in Europe after ChatGPT falsely claimed a Norwegian man had murdered two of his children. The story was completely fabricated–a hallucination from the chatbot–and was deeply damaging because it included certain correct details mixed up in the hallucination.
The advocacy group NOYB filed the complaint, arguing that such hallucinations violate the GDPR's requirement for accurate and lawful data processing.
This case shows the growing legal exposure AI developers face, particularly as legislation like the EU AI Act begins to tighten enforcement around high-risk AI applications. Compliance expectations are rising and so are the stakes for getting it wrong.
AI Risk Management Frameworks
To mitigate these risks, use an AI risk management framework. These frameworks offer a set of guidelines and best practices for managing potential problems that could affect your model throughout the AI lifecycle and for protecting the sensitive data of your consumers.
Here’s an overview of some of the most popular frameworks for managing risks that AI systems face:
The NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework is a set of guidelines developed by the National Institute of Standards and Technology (NIST) to help organizations manage risks associated with AI. The framework was first released in January 2023 and provides voluntary but structured approaches to ensuring trustworthy AI systems that are safe and aligned with ethical principles.
The AI RMF defines three categories of AI harm:
- Harm to people
- Harm to an organization
- Harm to the ecosystem
The AI RMF also defines the seven characteristics of a trustworthy AI system, stating it should:
- Be accurate and reliable
- Keep users safe
- Have safeguards against attacks and external threats
- Be accountable and transparent about how it makes decisions
- Be explainable, meaning its inner workings should be easily understandable
- Protect the privacy and personal information of users
- Ensure fairness in outcomes and mitigate biases
A key aspect of the AI RMF is data privacy, which is a major concern in reducing AI risk, as AI systems rely heavily on personal and sensitive data. The framework emphasizes:
- Transparency and accountability. Organizations must clearly explain how AI models use and process data.
- Compliance with regulations. AI systems should align with laws like the GDPR and the CCPA to protect user privacy.
- Bias mitigation. AI models must be trained on diverse and unbiased data to avoid discriminatory outcomes.
- Security measures. Organizations should implement data encryption, access controls, and privacy-preserving techniques to prevent unauthorized data breaches.
Its latest update, NIST-AI-600-1, is a profile on generative AI that expands on the framework to address risks in AI-generated content. It also covers concerns about the privacy of data, such as unauthorized data scraping and misinformation.
EU AI Act
The European Union (EU) AI Act—passed in 2024—is the world’s first comprehensive AI regulation. It categorizes AI systems by risk levels and enforces strict compliance measures to ensure privacy and security of data.
The Act aligns with the GDPR and places heavy restrictions on AI applications that process personal data. It’s particularly strict on high-risk and banned categories.
Its key data privacy measures include:
- Transparency requirements. AI developers must disclose how personal data is used to ensure clear accountability.
- Prohibition of unacceptable AI uses. Systems like real-time biometric surveillance and social scoring are strictly banned.
- Strict rules for high-risk AI. AI in healthcare, banking, and recruitment must undergo rigorous privacy assessments and comply with data minimization principles.
- User rights protection. Individuals have the right to opt out of AI-driven decision-making and demand explanations for automated outcomes.
This Act sets a global precedent for privacy-focused AI governance. It aims to create AI systems that respect user rights and limit mass surveillance risks.
ISO/IEC Standards
ISO/IEC standards provide essential guidelines for managing AI risks, particularly in the areas of data privacy, security, and accountability. Data scientists and AI engineers rely on these frameworks to develop and maintain responsible AI systems.
Two key standards—ISO/IEC 27001:2022 and ISO/IEC 23894:2023—offer frameworks for protecting personal data in AI systems. They also offer guidance on mitigating threats such as unauthorized access, bias, and adversarial attacks.
Key AI risk management provisions in ISO/IEC Standards:
- Risk-based approach. ISO/IEC 23894:2023 emphasizes a structured risk management framework for AI. It helps organizations identify and mitigate vulnerabilities such as data leaks, model manipulation, and compliance risks.
- Data protection controls. Both standards promote encryption, access control, and anonymization techniques to safeguard personal data used in AI models.
- Regulatory compliance. This provision aligns AI development with GDPR and other global privacy laws, ensuring that AI-driven decision-making is lawful and transparent.
- Bias and fairness mitigation. This encourages organizations to assess and reduce bias in AI systems, preventing discriminatory or unethical outcomes.
- Security and monitoring. This provision requires continuous AI oversight to detect privacy violations, security breaches, and adversarial AI threats.
A 2024 amendment (ISO/IEC 27001:2022/Amd 1:2024) further enhances privacy and sustainability considerations in AI environments.
Automate Your Data Privacy and Governance for Enhanced AI Risk Management
As we’ve seen, one of the major concerns with AI systems is managing and maintaining the privacy of your data.
Fortunately, that’s what Osano does best. We make it easy to run your privacy program, supporting your ability to source data in a compliant manner, assess AI applications for compliance, act on subject rights requests, and more.
Learn how it can help you reduce your AI risk.