In all the work I have done in data governance and security, risk management has been the dominant mindset and operating model: identifying, assessing, and mitigating potential threats. It operates on a logic of probability and consequence: what is the likelihood of a harmful event occurring, and how severe would its impact be?
For example, in data governance, risk management might involve implementing firewalls and access controls to reduce the likelihood of a breach, while encrypting sensitive information minimizes the consequences if one occurs.
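To make that probability-and-consequence logic concrete, here is a minimal sketch of a risk register in Python. The 1-to-5 scales, field names, and example entries are my own assumptions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass

# A minimal, illustrative risk register. The 1-to-5 scales, field names,
# and example entries are assumptions for this sketch, not a standard.

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # The classic risk calculus: probability times consequence.
        return self.likelihood * self.impact

register = [
    Risk("Unauthorized access to customer records", likelihood=3, impact=5),
    Risk("Loss of an unencrypted backup drive", likelihood=2, impact=4),
]

# Controls map onto the two levers: firewalls and access controls lower
# likelihood; encryption lowers impact if a breach happens anyway.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score}")
```

The point is not the arithmetic but the shape of the exercise: enumerate discrete events, score them, and work the two levers down.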
Yet this framework has inherent limitations. Risk management prioritizes quantifiable, immediate threats such as security breaches or data loss. Systemic or societal harms, such as the ongoing erosion of privacy, are harder to measure. So are systemic weaknesses in processes, dependencies, or infrastructure, such as over-reliance on a single supplier or on legacy systems that grow ever harder to maintain.
In this first article of two, I want to explore why I think we should focus on vulnerabilities rather than risks in order to create a more adaptive and more practical framework for AI governance, covering both ethics and best practices.
In the second article, I will look at specific vulnerabilities and how we can construct a testing framework to evaluate our models and applications.
The Risk in Risks
Risk management treats harms as discrete problems to be “solved” rather than as symptoms of deeper vulnerabilities in how technologies interact with people, institutions or systems.
In AI governance, this manifests as a focus on technical safeguards (e.g., bias detection tools, predictive maintenance) without addressing who or what is most harmed by failures, why they are vulnerable, or how systems might perpetuate or exacerbate the vulnerability. Risk management asks, “What could go wrong?” Vulnerability-based governance asks, “Who or what could be harmed, and how do we protect them?”
Risk frameworks are typically shaped by technical experts, policymakers, and corporate stakeholders, who may lack insight into the full range of potentially vulnerable groups and systems. And we know that AI risk models rely on historical data to predict future harms, which can entrench the very harms they are meant to prevent. For example, predictive policing tools trained on arrest data from past decades will disproportionately target communities that are already over-policed, reinforcing cycles of harm. Or a supply-chain management system that monitors and reroutes shipments may leave a business unprepared for sudden political instability in a region supplying critical raw materials.
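To see how training on historical data can amplify existing patterns, here is a deliberately toy simulation of the feedback loop described above. The districts, numbers, and allocation rule are all invented for illustration; this is not how any real predictive policing system works.

```python
import random

# A toy simulation (not a real policing model) of the feedback loop:
# patrols go where historical arrests are highest, extra patrols record
# extra arrests, and those arrests justify even more patrols.
# All numbers are invented for illustration.

random.seed(0)

# Two districts with the same true underlying offense rate; district A simply
# starts with more recorded arrests because it was historically over-policed.
arrests = {"A": 100, "B": 50}
baseline_offense_rate = 0.10      # identical in both districts
patrol_budget = 10                # patrols available per year
contacts_per_patrol = 20          # interactions a single patrol records

for year in range(1, 6):
    total = sum(arrests.values())
    for district in arrests:
        # Allocate patrols in proportion to historical arrest counts.
        patrols = round(patrol_budget * arrests[district] / total)
        # A patrol can only record offenses where it is actually present.
        new_arrests = sum(
            random.random() < baseline_offense_rate
            for _ in range(patrols * contacts_per_patrol)
        )
        arrests[district] += new_arrests
    print(f"Year {year}: {arrests}")

# District A keeps accumulating more recorded arrests than B even though both
# districts offend at the same rate: the skew in the historical data
# perpetuates itself through the allocation rule.
```

Both districts offend at the same rate, yet the one that starts with more recorded arrests keeps attracting more patrols, and therefore more arrests. A risk register would log this as "model inaccuracy"; a vulnerability lens asks who is on the receiving end of the loop.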
This kind of risk mitigation, focusing on “fixing” discrete problems rather than addressing systemic vulnerabilities, creates a cycle where vulnerabilities are ignored until they escalate into crises.
Focusing on Vulnerabilities
By centering on the people, communities, and systems most susceptible to harm, a vulnerability-based approach reframes governance as a proactive safeguard rather than a checklist of technical compliance. This is very relevant to the issues surrounding AI ethics and related regulations.
As I said before, traditional risk-based governance models tend to prioritize quantifiable, near-term threats, such as data breaches or algorithmic errors, while overlooking systemic or long-term problems. As a result, these models struggle to keep pace with the rapid development and deployment of AI technologies. Every new capability added to a model (text generation, image generation, voice generation, video) represents a new set of risks to be modeled.
Vulnerabilities, however, evolve more slowly. Children are susceptible to manipulation by AI-driven social applications regardless of what new capabilities the underlying models acquire. Minorities may face discrimination for generations. The same holds for technical systems: your supply-chain weaknesses can persist for years, and your legacy systems will remain difficult to integrate until they are finally upgraded or replaced.
Risk-based testing frameworks are reactive, and so is regulation: regulators and policymakers frequently find themselves playing catch-up as emerging tools like generative AI or facial recognition systems evolve faster than governance mechanisms can adapt.
A vulnerability-based model shifts our focus from abstract risks to the real-world impacts of AI.
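As a rough sketch of what that shift looks like in practice, compare an event-centric risk entry with a subject-centric vulnerability entry. The field names and the example below are assumptions for illustration, not a formal schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: these field names are assumptions for this sketch,
# not taken from any formal governance standard.

@dataclass
class RiskEntry:
    # "What could go wrong?" Event-centric: scored, mitigated, closed.
    event: str
    likelihood: int
    impact: int
    mitigation: str

@dataclass
class VulnerabilityEntry:
    # "Who or what could be harmed, and how do we protect them?"
    # Subject-centric and persistent: reviewed as systems change, not closed.
    affected: str            # the person, community, or system at risk
    why_susceptible: str     # the underlying dependency or inequity
    exposure: List[str] = field(default_factory=list)    # AI capabilities that touch it
    safeguards: List[str] = field(default_factory=list)  # protections to maintain

children_online = VulnerabilityEntry(
    affected="Children using AI-driven social applications",
    why_susceptible="Limited ability to recognize manipulation",
    exposure=["recommendation feeds", "generative chat"],
    safeguards=["age-appropriate design review", "independent impact audits"],
)
print(children_online.affected)
```

The risk entry is scored and eventually closed; the vulnerability entry names who or what is susceptible and why, and it stays open for review as the systems around it change.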
Benefits of a Vulnerability-Based Governance Framework
A vulnerability-based model isn’t just about ethics; it’s about operational excellence. By treating vulnerabilities as systemic weaknesses, businesses can build AI systems that are not only compliant but antifragile. For your factory, this could mean fewer unplanned downtimes, lower costs, and a supply chain that adapts to crises before they escalate.
The shift from risk to vulnerability is not just a change in terminology. In practice, it’s a fundamental reorientation of how we govern AI. Risk-based frameworks, while useful for addressing immediate threats, are inherently backward-looking. They treat AI’s harms as isolated malfunctions to be patched rather than symptoms of systemic weaknesses in our technologies, institutions, and societies.
A vulnerability-based approach, by contrast, forces us to confront uncomfortable truths:
AI’s harms are not accidents; they are often predictable outcomes of systems interacting with preexisting inequities, dependencies, or fragilities.
Governance cannot be static; it must evolve alongside the technologies it seeks to guide, anticipating harm rather than reacting to it.
Ethics and efficiency are not at odds; protecting vulnerable people and systems creates stronger, more resilient businesses and societies.
In the second article in this series, I'll move from theory to practice. How do we identify vulnerabilities, whether in marginalized communities, critical infrastructure, or supply chains? How can we design testing frameworks that stress-test AI systems against these vulnerabilities rather than retrofitting compliance? And crucially, how do we empower stakeholders (from engineers to policymakers to impacted communities) to collaborate in building AI that strengthens, rather than exploits, our shared systems?
The stakes are high. AI is reshaping everything from education to manufacturing. Will we govern reactively or proactively?
The answer begins with seeing vulnerabilities not as flaws to hide but as priorities to address.