GenAI Incidents: A New Frontier in Incident Management

Written by Rebecca

Companies that adopt generative AI tools can see real productivity benefits. However, companies that supply generative AI tools need to consider how they would handle potential incidents involving the responsibility and security of their AI tools. These incidents can range from minor annoyances to major disasters. Let’s dissect what is different about a genAI incident, and how CISOs, CTOs, and DPOs can prepare for and respond to genAI mistakes or issues.

What is a GenAI Incident?

AI can be helpful for responding to traditional security incidents, for example through AI-powered cyber response tools. But that’s not what this article is about. This article is about incidents caused by AI and ML systems. A genAI incident is any incident caused by a generative AI system. GenAI incidents can occur in any industry, and they can have a wide range of consequences.

For a quick taster of notable real-world AI mistakes that harmed business reputation and credibility, this blog has a nice summary:

If you want a deeper dive into AI incidents, there are at least three separate databases of AI incidents and concerns.

  1. The AI Incident Database (https://incidentdatabase.ai/) is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. Its blog offers qualitative monthly round-ups of incidents, and the database allows for quantitative assessments. These showcase that many AI incidents come to light due to their impact on vulnerable populations.
  2. The OECD.AI platform (https://oecd.ai/en/incidents) combines resources from across the OECD and GPAI, partners, and stakeholder groups to create a one-stop shop for AI policymakers and other actors.
  3. The AIAAIC Repository (standing for ‘AI, Algorithmic and Automation Incidents and Controversies’) is an independent, open, public interest resource that details incidents and controversies driven by and relating to AI, algorithms and automation. https://www.aiaaic.org/aiaaic-repository

It is important to consider not just data breaches in which sensitive company data is accessed, but also the many facets of responsible AI. Although there is no universal agreement on a responsible AI taxonomy, I list some factors here that are crucial for understanding threats and incidents.

The dimensions of responsible AI include:

    • Safety: AI systems should be designed and operated in a way that minimizes the risk of harm to people or property.

    • Privacy and Security: AI systems should be designed and operated in a way that respects privacy and ensures security.

    • Inclusivity: AI systems should be designed and operated in a way that is inclusive and accessible to everyone.

    • Bias: AI systems should be designed and operated in a way that is free from bias and discrimination.

    • Sustainability: AI systems should be designed and operated in a way that is sustainable and environmentally friendly.

    • Transparency: AI systems should be designed and operated in a way that is transparent and explainable.

    • Accuracy: AI systems should be designed and operated in a way that is accurate and reliable.

Homework: I recommend a thought exercise for your own company. Within the context of your products and customers, go through each of the dimensions of responsible AI. Consider the fears, concerns, and potential events for each dimension for your products. For example, bias may look different for an AI personal trainer than it does for a hiring agency. Customers of AI-assisted farm equipment may have different concerns around sustainability than customers of AI-assisted sports cars. Going through each of the dimensions step-by-step can be a useful exercise in understanding how a genAI incident could impact your company’s brand and reputation. A minimal sketch of how such a walkthrough might be structured follows below.
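This sketch assumes Python; the prompt questions and the example product are illustrative placeholders, not a standard taxonomy tool:

```python
# A minimal sketch of the homework exercise: walk each responsible AI
# dimension and record product-specific concerns. Dimension names come from
# the list above; the prompt questions and example product are illustrative.

RESPONSIBLE_AI_DIMENSIONS = {
    "Safety": "What harm to people or property could this product cause?",
    "Privacy and Security": "What sensitive data does it touch, and how could that leak?",
    "Inclusivity": "Who might be excluded from using this product?",
    "Bias": "Which groups could receive systematically worse outputs?",
    "Sustainability": "What are the environmental costs of training and serving it?",
    "Transparency": "Can users tell how and why it produced an output?",
    "Accuracy": "Where would a wrong or hallucinated output hurt most?",
}

def walkthrough(product: str) -> dict:
    """Ask a reviewer to record one concern per dimension for a product."""
    concerns = {}
    for dimension, question in RESPONSIBLE_AI_DIMENSIONS.items():
        print(f"[{product}] {dimension}: {question}")
        concerns[dimension] = input("Potential incident or concern: ")
    return concerns

if __name__ == "__main__":
    notes = walkthrough("AI personal trainer")  # hypothetical product
    for dimension, note in notes.items():
        print(f"{dimension}: {note or '(none recorded)'}")
```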

How is a GenAI Incident Different from a Regular Incident?

GenAI incidents present unique challenges that necessitate a departure from traditional security incident handling procedures; in these cases, traditional security incident response may need a new AI playbook. This is true for several reasons.

First, traditional security is well established in its methods and playbooks, while generative AI is much newer. Traditional security incidents (hopefully) have established response protocols. Incidents caused by AI, however, can be more unpredictable and difficult to diagnose due to the inherent complexity of AI systems and their potential for unexpected behaviors. The complex and interconnected nature of AI systems often makes it difficult to pinpoint the root cause of a GenAI incident, which can delay incident response and remediation efforts.

Second, GenAI is not deterministic. Generative AI generates new content, which can include hallucinations or untruths, and the same prompt can produce different output on different runs. As a result, traditional incident response methods may not be adequate, necessitating new approaches specifically tailored to the nuances of AI-related incidents. Additionally, the lack of established industry standards and regulations for handling GenAI incidents further complicates matters, as organizations may not have clear guidelines to follow.
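To make the non-determinism concrete, here is a toy sketch of temperature-based token sampling (a simplified stand-in for how generative models pick each token; the vocabulary and scores are invented). The same input can yield different output on every run, which is part of what makes a GenAI incident harder to reproduce and diagnose than a deterministic bug:

```python
import math
import random

# Toy illustration of why generative output is non-deterministic: each token
# is sampled from a probability distribution, so two runs on the same input
# can produce different text. The vocabulary and logits here are invented.

def sample_token(logits: dict, temperature: float = 1.0) -> str:
    """Sample one token via temperature-scaled softmax."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    return random.choices(list(weights), weights=probs, k=1)[0]

logits = {"approved": 2.0, "denied": 1.8, "pending": 0.5}  # invented scores
for run in range(3):
    print(f"run {run}: {sample_token(logits, temperature=0.9)}")
# Different runs may print different tokens; as temperature approaches zero,
# the output becomes effectively deterministic (always the highest-scoring token).
```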

Third, the source of the threat and the liability may look different in an AI incident. Unlike conventional security incidents, which are typically triggered by malicious attacks and result in immediate harm, GenAI incidents may not always be classified as harmful events, might not be included in standard security playbooks, and may not stem from malicious intent. This ambiguity in classification, coupled with the potential absence of immediate harm to your business, makes it challenging to identify and respond to GenAI incidents effectively using established security protocols.

How much does an AI incident cost the business?

Some companies choose to accept the risk of a genAI incident, as they feel the benefits outweigh the risks.  If you are weighing this option, here are some things to consider when tallying the potential cost of a genAI incident.  

Regulation

The regulatory landscape for AI is constantly evolving, and new laws and regulations are being developed and implemented at a rapid pace. As a result, organizations need to stay informed about the latest developments and ensure that their AI systems comply with all applicable regulations to avoid potential fines and other legal consequences.

Several regulations worldwide impose fines for non-compliance with responsible AI principles. In the European Union, the AI Act categorizes AI systems based on their risk levels and imposes fines for violations. For example, using prohibited AI practices or failing to comply with data governance requirements can lead to penalties of up to €30 million or 6% of global annual revenue, whichever is higher. Similarly, the General Data Protection Regulation (GDPR) can also apply to AI systems that process personal data, with fines reaching up to €20 million or 4% of global annual revenue. 
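As a back-of-the-envelope illustration of the “whichever is higher” structure, using the figures cited above (actual amounts depend on the final regulatory text and the violation category):

```python
# Back-of-the-envelope exposure under a "fixed cap or % of global annual
# revenue, whichever is higher" fine structure, using the figures cited
# above. Actual amounts depend on the regulation and the violation category.

def max_fine(global_revenue_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of a fixed cap and a percentage of global revenue."""
    return max(cap_eur, pct * global_revenue_eur)

revenue = 2_000_000_000  # hypothetical: EUR 2B global annual revenue
print(f"AI Act (prohibited practices): EUR {max_fine(revenue, 30e6, 0.06):,.0f}")
print(f"GDPR: EUR {max_fine(revenue, 20e6, 0.04):,.0f}")
# For EUR 2B in revenue, 6% = EUR 120M and 4% = EUR 80M, both above the fixed caps.
```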

In the United States, while there is no comprehensive federal AI law, various sector-specific regulations and state and local laws may impose fines for certain AI-related violations. For instance, the Federal Trade Commission (FTC) has the authority to take action against companies that engage in unfair or deceptive practices related to AI, including bias and discrimination. Additionally, some jurisdictions have enacted their own AI regulations, such as New York City’s Local Law 144, which requires certain businesses to conduct bias audits of their automated employment decision tools.

Bad press

Negative publicity surrounding a GenAI incident can inflict significant financial and temporal costs on an organization. The fallout often necessitates a swift and comprehensive response, diverting valuable resources and personnel away from their usual tasks. Communication teams and leadership are typically compelled to dedicate time and effort to crafting and disseminating carefully worded statements, engaging with the media, and addressing public concerns. This diversion of resources can impede the progress of other critical projects, leading to delays, missed opportunities, and ultimately, financial setbacks. 

Furthermore, the reputational damage incurred can have long-lasting implications, potentially alienating customers, investors, and partners, thereby impacting the company’s bottom line. In essence, the repercussions of bad press extend far beyond the immediate crisis, underscoring the importance of proactive measures to prevent and mitigate GenAI incidents.

Which Dimensions of Responsible AI Are Covered by Insurance?

Most insurance policies do not cover genAI incidents, because genAI incidents are a relatively new type of risk. This may change (or if you know of insurers who cover genAI incidents, please let me know!). Whether insurers will specifically recognize issues related to hallucinations or other aspects of trustworthy AI remains to be seen.

Conclusion

GenAI incidents are a serious risk that organizations need to be aware of. By understanding the risks of genAI incidents and taking steps to mitigate those risks, organizations can help to protect themselves from the financial and reputational damage that can result from a genAI incident.

To address these challenges, organizations need to develop new frameworks and strategies specifically tailored to GenAI incident response. These frameworks should prioritize proactive risk assessment, continuous monitoring of AI systems, and the establishment of clear lines of accountability for AI development and deployment. Furthermore, fostering collaboration between AI developers, security teams, and legal experts is crucial to ensure a comprehensive and effective approach to managing GenAI incidents.
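One way to start is to treat genAI incidents as first-class records with AI-specific fields alongside the usual security-ticket fields. The schema below is a hypothetical starting point, not an established standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical starting point for a GenAI incident record: extends a
# traditional security ticket with AI-specific fields (affected responsible
# AI dimension, whether intent was malicious, model/version involved).
# Field names are illustrative, not an established standard.

@dataclass
class GenAIIncident:
    title: str
    responsible_ai_dimension: str  # e.g. "Bias", "Accuracy", "Privacy and Security"
    model_and_version: str         # which AI system produced the output
    malicious_intent: bool         # GenAI incidents often lack an attacker
    immediate_harm: bool           # harm may be reputational rather than immediate
    owner: str                     # a clear line of accountability
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notes: list = field(default_factory=list)

incident = GenAIIncident(
    title="Chatbot asserted a nonexistent refund policy",
    responsible_ai_dimension="Accuracy",
    model_and_version="support-bot v3.2",  # hypothetical system name
    malicious_intent=False,
    immediate_harm=False,
    owner="ml-platform-team",
)
incident.notes.append("Escalated to comms and legal per the GenAI playbook.")
print(incident.title, "-", incident.responsible_ai_dimension)
```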

Need help recovering from an AI incident? 


About the Author

Dr. Rebecca Balebako has helped multiple organizations improve their responsible AI and ML programs. With over 25 years of experience in software, she has specialized in testing privacy, security, and data protection.