The Innovator's Prisoners' Dilemma
Is Mark Zuckerberg right about closed models and geopolitical rivalry?
I am sure you have heard of the Innovator’s Dilemma, a term coined by the academic Clayton Christensen. At its core, the concept suggests that successful companies can lose their market position to disruptive innovation even when they follow otherwise sound management practices. The dilemma arises because these companies are often too focused on satisfying their current customers' needs and maintaining their existing business models. In doing so, they overlook or dismiss opportunities in underserved markets. By the time they recognize the threat, it is often too late to adapt.
You may also have heard of the Prisoner’s Dilemma, a classic problem in game theory. Imagine that two suspects are arrested and interrogated separately. Each suspect could betray the other or remain silent. If both betray, each serves two years in prison. If both remain silent, each serves one year in prison. If one betrays and the other remains silent, the betrayer goes free, and the silent one serves three years. The rational choice for each individual is to betray, as it offers the best personal outcome regardless of what the other does. However, if both make this supposedly rational choice, they end up worse off than if they had cooperated.
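The incentive structure above can be sketched in a few lines of Python. The numbers are the prison sentences from the story (lower is better); the dominance check shows why betrayal is individually rational yet collectively worse.

```python
# Years in prison for (my_choice, their_choice); lower is better.
SENTENCE = {
    ("betray", "betray"): 2,
    ("betray", "silent"): 0,
    ("silent", "betray"): 3,
    ("silent", "silent"): 1,
}

def best_response(their_choice: str) -> str:
    """My sentence-minimizing choice, given the other's choice."""
    return min(["betray", "silent"], key=lambda mine: SENTENCE[(mine, their_choice)])

# Betraying is a dominant strategy: it is the best response either way ...
for theirs in ("betray", "silent"):
    print(f"If the other chooses {theirs!r}, my best move is {best_response(theirs)!r}")

# ... yet mutual betrayal costs 4 years in total, versus 2 for mutual silence.
print("Total years if both betray:", SENTENCE[("betray", "betray")] * 2)
print("Total years if both stay silent:", SENTENCE[("silent", "silent")] * 2)
```

Whatever the other suspect does, betraying shaves a year off my own sentence; that is precisely why the cooperative outcome is unstable.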
Now let’s look at a recent statement from Mark Zuckerberg …
The next question is how the US and democratic nations should handle the threat of states with massive resources like China. The United States' advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies. Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.
I am sure you can see where I am going with this. In AI development we face The Innovator’s Prisoners’ Dilemma.
Open Collaboration vs. Secretive Development
This dilemma is characterized by the choice between open collaboration and secretive development. Open collaboration promises faster collective progress and improved AI safety protocols. It allows for peer review, shared ethical guidelines, and a united front addressing AI’s societal impacts. However, it risks individual entities losing their competitive edge and potentially missing out on market dominance.
Conversely, secretive development offers the allure of breakthroughs that could revolutionize the industry and secure market leadership. Yet, it may overlook critical safety concerns, slow overall progress, and complicate regulatory efforts.
The tech industry has seen both successes and cautionary tales in open and closed development approaches.
The Linux operating system has become a cornerstone of modern computing infrastructure, powering everything from smartphones to supercomputers. Its open-source nature allowed thousands of developers worldwide to contribute, resulting in a robust, secure, and constantly evolving system that no single company could have created alone.
In contrast, Apple's development of the original iPhone was shrouded in secrecy and even misdirection. This approach allowed Apple to surprise the market with a revolutionary product in 2007, capturing a significant market share and transforming the mobile industry. The secrecy maintained their competitive edge and prevented competitors from developing countermeasures before launch.
However, as Mark Zuckerberg suggests, the stakes in AI are higher because geopolitical rivals are involved. AI can affect national security, military capabilities, and global power dynamics, and advances in AI could shift the balance of power between nations. This makes AI a strategic priority for governments, not just private companies. Nations are increasingly concerned about technological independence, and there is a real possibility this could lead to protectionist policies and state-sponsored AI initiatives.
My natural response would be to seek cooperation. Still, the truth is that nation-states (far more than people) have varying ethical standards and norms regarding data usage, privacy, security, and AI development and deployment, complicating international cooperation.
But suppose we choose secretive development instead. In that case, as Zuckerberg notes, the ease of stealing AI models (weights that fit on a thumb drive) increases the risk of industrial and state-sponsored espionage, undermining the very secrecy that closed development depends on.
This, then, is the Innovator’s Prisoners’ Dilemma. AI research entities—tech giants, startups, or academic institutions—face a crucial choice: collaborate openly or innovate in secrecy.
The long game
How might this play out? Depending on how we resolve the Innovator's Prisoners’ Dilemma, we could face very different futures:
A Collaborative Utopia: If open collaboration prevails, we might see rapid advancements in AI safety and ethics. AI could become a powerful tool for solving global challenges, with benefits distributed widely. However, innovation might slow in some areas due to reduced competition.
The Corporate AI Oligopoly: If secretive development dominates, a few tech giants could control superintelligent AI. This could lead to unprecedented technological leaps, increased inequality, and potential misuse of AI powers.
Regulatory Gridlock: Failing to balance openness and innovation could result in overreaching regulations, stifling AI progress and pushing development underground or offshore.
The AI Arms Race: Geopolitical tensions could turn AI development into a national security issue, leading to a high-stakes technological cold war with unpredictable consequences.
What do I recommend?
Here, I need to borrow from Clayton Christensen’s playbook. In my book Innovating, I suggest that he is very good at providing examples but not so good at offering solutions. Maybe I need another example …
Postscript
There’s another way this could play out. Imagine that an authoritarian regime claims to have developed a groundbreaking AI capability with the potential to revolutionize multiple industries, such as healthcare, agriculture, global warming, etc. They offer to share this technology, but only for substantial financial or geopolitical concessions.
The major blocs of democratic nations - the EU and the USA - must decide how to respond. They could negotiate with the regime to acquire the AI technology, or refuse to engage and focus on domestic AI development.
If both engage, both blocs get the technology, but at a high cost, potentially strengthening the repressive regime.
If both abstain, neither gets the technology, but they maintain ethical standing and redouble their efforts at domestic innovation.
If only one bloc engages, it could gain a significant technological advantage, but it would do so at the cost of losing the trust of its democratic allies.
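The engage/abstain scenario above can be given illustrative numbers. The payoffs below are my own assumption, chosen only to encode the ordering the scenario describes (a classic Prisoner's Dilemma shape: unilateral engagement tempts most, mutual abstention is second best, mutual engagement is costly, abstaining alone leaves you behind); they are not estimates of anything real.

```python
from itertools import product

CHOICES = ("engage", "abstain")

# Hypothetical payoffs (higher is better) for the (EU, US) choices.
# Illustrative numbers only, assumed for this sketch.
PAYOFF = {
    ("engage",  "engage"):  (1, 1),   # both get the tech, at high cost
    ("engage",  "abstain"): (3, 0),   # engager gains an edge, loses allies' trust
    ("abstain", "engage"):  (0, 3),
    ("abstain", "abstain"): (2, 2),   # no tech, ethics and trust intact
}

def is_nash(eu: str, us: str) -> bool:
    """Neither bloc can improve its payoff by unilaterally switching."""
    eu_pay, us_pay = PAYOFF[(eu, us)]
    eu_best = all(eu_pay >= PAYOFF[(alt, us)][0] for alt in CHOICES)
    us_best = all(us_pay >= PAYOFF[(eu, alt)][1] for alt in CHOICES)
    return eu_best and us_best

equilibria = [cell for cell in product(CHOICES, CHOICES) if is_nash(*cell)]
print(equilibria)
```

Under these assumed payoffs, the only equilibrium is mutual engagement: each bloc's fear of being the one left behind pulls both toward the deal, even though mutual abstention would leave both better off.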
But can we trust the authoritarian regime? The groundbreaking AI is likely exaggerated or non-existent. Engaging inevitably leads to further demands. And yet, FOMO drives increasingly irrational decisions.
This is the Innovator’s Spanish Prisoners’ Dilemma (the Spanish Prisoner being a classic confidence trick)! Let’s hope we avoid that one.
This is a fascinating strategy topic. Alas, as usual, I have more questions than answers.
Will the real breakthrough in AI (AGI) constitute a different category of technology, one more similar to a person than an iPhone? Will it have a will, an independent intent, an ability to frame its own ethical views, etc., that are quite beyond the control or intent of its creator? And how might such true autonomy affect the open/closed strategy discussion in ways that iPhone-like tech chattel does not?
Short of that robust version of AI, are there different flavors of bounded AI that might better suit the different cultures that pursue the tech -- such that, for instance, Germany would want an AI with different privacy guardrails than Brazil might? If so, might it not be culturally impossible for Brazil to find value in licensing Germany's product, or vice versa?
Open does not necessarily imply symmetrical openness. An interesting scenario for consideration is the Prisoner's Dilemma when the other prisoner is known to prefer cheating. The expected payoff from collaborating then falls to zero even for the normally willing party. It only takes one bad actor to precipitate a race to the bottom. So an interesting strategic question is what one can do to hold the collaboration together when significant players are well known to intend to cheat on it? Classic cartel theory comes into play, it seems.
AI today is word association. True AGI is a way off, and even when it arrives it will be no more of a game changer than the bow over the spear, or the atom bomb. Remember how fast such leads dissipate.
Our best move is to do the work, pay for research rather than vote-buying by politicians, and remain realistic.