From Risks to Vulnerabilities (Part 2 of 2)
In Part 1, I made the case for a shift in AI governance from reactive risk management to proactive stewardship of vulnerabilities.
In many areas of data management, governing AI by modeling risks becomes an endless game of whack-a-mole, because the risks themselves keep evolving. AI makes this far more challenging. Part of the reason is the current pace of AI innovation, but even when that slows in the long run, we will still face the fact that AI activity is non-deterministic: given the same input, an AI system may not produce the same outcome every time. AI models, especially those built on machine learning, rely on probabilistic patterns rather than fixed rules, and their decisions are shaped by evolving data, environmental factors, and even interactions with other systems, making exact outcomes difficult to predict.
When we govern by addressing vulnerabilities, we build systems that can adapt to harms before they escalate. The question now is: how do we operationalize this approach in a world where AI evolves faster than policy?
Let’s begin by confronting a hard truth …
AI’s harms are not accidents. Whether they affect our systems or ourselves, our families or our factories, AI harms are the outcomes of systems interacting with preexisting fractures in our societies and infrastructures. Governance that ignores these fractures will only widen them.
Identifying Vulnerabilities
It's easy to assume that when we talk about vulnerabilities, we mean people or social groups. These are, indeed, very important examples. However, it also makes sense to talk about vulnerable systems or even vulnerable machines. Moreover, if we treat all of these as examples of vulnerability, we can develop techniques that are generally applicable and that carry lessons across domains.
So, we can say that, in general, vulnerabilities fall into three interconnected categories: social, technical, and systemic.
Social: Marginalized groups (e.g., racial minorities, low-income communities) facing historical discrimination.
Technical: Brittle infrastructure (e.g., legacy systems, single points of failure in production lines).
Systemic: Structural dependencies (e.g., over-reliance on one supplier, regulatory gaps in AI ethics).
These categories overlap: A facial recognition system (technical) trained on non-diverse data (social) deployed in over-policed neighborhoods (systemic) compounds harm across all three.
Or perhaps you remember the global chip shortage in 2021, where automakers’ over-reliance on Taiwanese semiconductor suppliers (systemic) and just-in-time inventories (technical) collided with pandemic disruptions and labor shortages (social), halting production worldwide. An AI optimized purely for efficiency would have worsened the crisis.
A risk-based approach may identify specific threats, such as a supply-chain disruption cutting inventory to dangerously low levels. A vulnerability-based methodology would identify our reliance on tight inventory levels as a vulnerability in its own right and would therefore include inventory in any assessment of change: new ways of working, new technologies, or even new contracts.
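To make that contrast concrete, here is a minimal Python sketch of what a vulnerability register might look like in practice; the entries, field names, and the change-screening helper are hypothetical illustrations under my own assumptions, not a prescribed tool.

```python
# A minimal sketch, assuming Python 3.10+. The register entries, categories,
# and the change-screening helper are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Vulnerability:
    name: str
    category: str          # "social", "technical", or "systemic"
    description: str
    affected_by: set[str]  # kinds of change that must trigger a reassessment


REGISTER = [
    Vulnerability(
        name="inventory_dependence",
        category="systemic",
        description="Operations assume tight, just-in-time inventory levels.",
        affected_by={"new_supplier_contract", "new_technology", "process_change"},
    ),
]


def vulnerabilities_to_review(change_type: str) -> list[Vulnerability]:
    """Screen any proposed change against the register, so known
    vulnerabilities are reassessed even when no specific risk is foreseen."""
    return [v for v in REGISTER if change_type in v.affected_by]


# Example: a new supplier contract automatically surfaces the inventory dependence.
print([v.name for v in vulnerabilities_to_review("new_supplier_contract")])
```

The point of the register is not the data structure itself but the hook: every change assessment consults the known vulnerabilities, rather than waiting for someone to imagine the specific risk.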
While identifying vulnerabilities is crucial, it's only the first step. These vulnerabilities exist in dynamic environments where systems are constantly evolving. To move from identification to meaningful governance, we need testing frameworks that can validate our understanding of vulnerabilities and measure how AI systems interact with them over time. This testing becomes particularly critical because vulnerabilities, unlike risks, often reveal themselves through complex interactions that may not be immediately apparent.
Vulnerability Testing Frameworks for AI
A vulnerability testing framework evaluates AI systems against real-world weaknesses, not hypothetical risks. Traditional testing asks, “Does the system work?” Vulnerability testing asks, “Does it fail safely? Who or what is impacted? Under what conditions?”
There are a few steps:
Define evaluation criteria grounded in vulnerabilities, not a checklist of risks. For example, can the system handle data drift over time (a technical vulnerability)? See the drift-check sketch after this list.
Use scenario-based stress tests to simulate crises. For example, simulate the effect of a natural disaster disrupting 40% of a factory’s suppliers; can its AI supply-chain manager pivot?
Incorporate red teaming to simulate adversarial attacks on technical and social vulnerabilities. For example, testers role-play as dangerous users, probing how a social media algorithm exploits children’s cognitive vulnerabilities.
Build continuous monitoring to detect emerging vulnerabilities as systems evolve. For example, a retail AI may adjust prices dynamically but begin to surge prices for necessities like baby formula during a shortage. Continuous monitoring flags this as a systemic vulnerability (exploiting healthcare crises) rather than just a risk (reputational damage). See the monitoring sketch after this list.
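As an illustration of the first step, here is a minimal sketch of a data-drift check that could serve as a vulnerability-grounded evaluation criterion. The choice of test (a two-sample Kolmogorov-Smirnov test via SciPy), the alpha threshold, and the sample data are my own assumptions for illustration.

```python
# A minimal sketch, assuming Python with NumPy and SciPy available.
# The baseline data, the shifted "production" data, and the alpha threshold
# are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test


def drift_detected(reference: np.ndarray, current: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the current feature distribution differs
    significantly from the reference (training-time) distribution."""
    _statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha


rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
recent = rng.normal(loc=0.4, scale=1.2, size=5_000)    # shifted production feature values

if drift_detected(baseline, recent):
    print("Data drift detected: schedule retraining and review downstream decisions.")
```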
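And for the continuous-monitoring step, here is a minimal sketch of a rule that flags surging prices on essential goods for human review, in the spirit of the retail example above. The essentials list, the surge threshold, and the PriceEvent structure are hypothetical.

```python
# A minimal sketch, assuming Python 3.10+. The essentials list, the surge
# threshold, and the PriceEvent structure are hypothetical illustrations.
from dataclasses import dataclass

ESSENTIALS = {"baby_formula", "insulin", "bottled_water"}  # assumed essential goods
SURGE_THRESHOLD = 1.25  # flag price rises of more than 25% over the baseline


@dataclass
class PriceEvent:
    product: str
    baseline_price: float
    current_price: float


def flag_essential_surges(events: list[PriceEvent]) -> list[str]:
    """Return products where dynamic pricing is surging essential goods,
    treating this as a systemic vulnerability rather than a one-off risk."""
    return [
        f"{e.product}: {e.baseline_price:.2f} -> {e.current_price:.2f}"
        for e in events
        if e.product in ESSENTIALS and e.current_price > SURGE_THRESHOLD * e.baseline_price
    ]


events = [
    PriceEvent("baby_formula", 20.00, 32.00),  # essential and surging: flagged
    PriceEvent("headphones", 50.00, 80.00),    # non-essential: not flagged
]
for alert in flag_essential_surges(events):
    print("Essential-goods surge flagged for human review:", alert)
```

A simple rule like this is not the whole answer, but it shows the shift in framing: the monitor watches a known vulnerability (essential goods during shortages) rather than waiting for a reputational incident.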
From Testing to Implementation
The shift from risk-based to vulnerability-based AI governance represents more than a change in methodology; it's a fundamental reorientation of how we approach AI safety and ethics. By focusing on vulnerabilities, we create frameworks that are inherently adaptive and inclusive, capable of addressing both current and emerging challenges.
A vulnerabilities-first approach to AI governance offers a more resilient, adaptive alternative to the endless race of chasing risks. By proactively identifying and addressing the social, technical, and systemic weaknesses that AI can exploit or exacerbate, we shift our focus from merely preventing single-point failures to continuously strengthening the foundations of our societies and infrastructures.
As AI systems become more deeply embedded in our businesses and our social fabric, the importance of getting governance right only increases. A vulnerability-based approach is both principled in its social impact and practical in its business impact.
Adopting a vulnerability perspective also means that your teams develop a skillset that applies to both business risk and social (or reputational) risk. That's important. I hear a lot of teams saying, *We know how to test software but not social impact,* and as a result they are anxious about deploying AI broadly. Vulnerability modeling can be applied in both areas.
Most importantly, this approach recognizes that AI governance is not just about protecting against harm; it's about building systems that actively strengthen our social and technical infrastructure.