McKinsey recently claimed: “Artificial intelligence and generative AI have the potential to transform how strategists work by strengthening and accelerating activities such as analysis and insight generation while mitigating challenges posed by human biases and the social side of strategy.”
I doubt whether AI can live up to such grand claims. My worry stems from this way of talking about AI as if it’s a magical tool that automatically spits out significant insights. I see two pitfalls.
First, of course, there’s the data problem: AI needs good, complete data to be useful, and we often assume we have it. But many companies just don’t have that level of data sophistication. In fact, too many companies are barely data literate overall, although they may have pockets of individual excellence in some teams.
Second, there’s an over-reliance on short-term predictions. Businesses chase the next quarter’s results, ignoring big-picture economic cycles or consumer shifts that AI can’t easily forecast. So we can talk about AI’s promise, but I see a real risk of overestimation.
So, I am very cautious about seeing AI as an inflection point in the development of business strategy.
Actually, I’m wary of labeling any technology a “true inflection point.” Historically, we’ve always had seemingly revolutionary tools, like spreadsheets in the 1980s, that changed how people made business decisions. But, ultimately, success still depended on how well managers grasped their business environment and their people.
AI, in my view, runs the risk of creating a sense of detachment: we might treat strategic decisions too clinically, losing the human element that influences culture and organizational identity.
Limits and hazards
I touched on data quality earlier. For sure, recent research suggests AI can do some amazing work with large, robust data sets. And I myself have posted about how useful it can be with small data sets for non-expert users.
But of course, if a company’s data is unsystematic, biased or poorly cleaned, the AI is just automating illusions. Strategic decisions then rest on those illusions while leaders believe they are being objective, which is even worse than a manager knowingly relying on gut instinct.
For sure, AI might highlight certain forms of bias. But if the training data itself is incomplete or historically skewed, the AI could reinforce systemic biases we’re not even aware of.
Admittedly, I have also heard an argument in support of AI: that it mitigates human bias by avoiding habits like groupthink or overconfidence in projections.
However, this overlooks a fundamental truth of business: corporate culture shapes how decisions are made. In any business, AI outputs will be interpreted for strategy, not just blindly acted upon. If the culture is very hierarchical, with so-called strong leaders used to making their decisions stick, managers might cherry-pick AI results that confirm what the leadership wants to hear. So, rather than mitigating bias, AI becomes a confirmation tool for existing power structures. In that sense, we’re not really removing bias; we’re just shifting it onto a technical platform.
In this strategic use of AI, I also see a real moral hazard: someone takes on more risk because they’re protected from the consequences, or, more basically, a person changes their behavior because they are shielded from risk.
AI strategy tools might propose expansions into markets that are more profitable in the short term, ignoring the social or ethical consequences. Or they might underestimate opportunities in segments that have historically lacked attention. Still, middle managers and even leaders might say, “The model told us to do this, so we’re off the hook if it fails.”
That’s a concern: that AI becomes the scapegoat for poor decision-making processes.
AI short-termism
AI could also exacerbate the specific risk of short-term “thinking”, which in the AI case means short-term prediction.
When we talk about “predictive intelligence” in business, I nearly always hear an emphasis on near-term signals: next quarter’s sales, upcoming launches of competitive products, or incremental changes in market share.
AI is good at pattern recognition in these relatively stable contexts. But strategy, in the broad sense, should also consider longer time horizons, like fundamental shifts in consumer psychology or generational transformations.
AI can’t parse cultural or societal inflection points, which might take decades to play out. We're very unlikely to meet a Steve Jobs or Bill Gates in digital form any time soon.
Strategy used to be about imagining future scenarios and investing in intangible long-term assets, such as organizational capability or brand trust.
AI might produce brilliant forecasts about supply chain optimization or targeted marketing, but it doesn’t easily capture intangible capital, like the culture you need to respond to crises. By focusing on what’s measurable, a company could lose sight of bigger, even existential, questions.
Innovation theater
I have no doubt that some companies adopt AI in strategy mostly for the optics. We see a lot of grand announcements: “We’ve integrated AI into our strategic planning!” And it makes for great headlines in shareholder reports or press releases.
Most often, the integration is superficial. There’s a new dashboard, maybe some interactive analytics, but the underlying decision processes remain the same. In a year or two, you'll find that next to no genuine transformation has occurred.
This posture is most likely driven by the fear of missing out. If your rivals claim they’re using AI, you feel compelled to do so, even if you’re not sure it adds real value. That can lead to check-box solutions that might not deliver ROI.
In the best-case scenario, that’s a waste of resources; in the worst case, it misleads stakeholders about the company’s strategic foundations.
Strategy is a humanism
In all of this, I think we risk overshadowing the role of conversation in strategic planning. Those informal, face-to-face exchanges over the boardroom table reveal subtle organizational dynamics. AI is adept at scanning external signals, but it can’t replicate that rich internal dialogue among teams when experiences and perspectives are constantly interweaving.
For instance, how do you sense emerging frustration or rebellious energy among employees that might become crucial to your strategy’s execution? AI doesn’t have the empathy or the context for that.
Strategy, at its core, is about aligning people with a goal. And if the people dimension is overlooked, you’re building your strategy on nothing.
To be fair, empirical data (like pulse surveys) can sometimes help you see broad patterns in employee sentiment, but it’s not the same as personal trust. Real people open up in unstructured ways, over coffee or in spontaneous Zoom chats.
That intangible aspect of strategy is intangible for a reason: it doesn’t reduce neatly to data points. AI can process data, but it cannot understand the stories that define culture and relationships. I’m not saying AI can’t help in some capacity. But to rely on it for the entire strategy discussion is short-sighted.
To put it another way: AI might distract from fundamental managerial competence, such as learning from mistakes or leading people through difficult times, whether personal or economic.
Good managers make sense of conflicting signals, weigh those intangible costs, and also bring their personal experience to their work. Strategy requires more than algorithmic precision: it demands the muscle of human judgment, built up through years of experience and failure. If managers lean too heavily on AI, they could lose the ability to interpret context without an algorithm.
In an economic downturn (watch this space), you want leaders who can pivot quickly based on incomplete or contradictory evidence. AI is not built to handle that kind of systemic ambiguity.
Sometimes strategic decisions come from a piercing insight: perhaps a manager’s deep empathy for a customer’s pain point or a sense of moral obligation to the workforce, customers or community.
If we allow AI to overshadow that, we degrade one of the core qualities that define a strong leader: judgment that’s shaped by values, personal history, and a sense of community. Strategy engages the heart as much as the brain.
The balancing act
To be clear, I am not rejecting AI outright (I’m a strong advocate for the use of AI in strategy); rather, I am questioning whether it’s being overhyped.
I’d suggest applying AI selectively, almost as a specialized tool rather than a universal solution. Use it for data-heavy tasks, like modeling competitor pricing or analyzing supply chains, but keep strategic oversight in human hands.
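To make “data-heavy tasks” concrete, here is a minimal sketch, in Python, of the kind of narrow, well-bounded job I mean: fitting a deliberately simple model to hypothetical competitor pricing data. The column names and figures are illustrative assumptions, not a recipe.

```python
# Illustrative sketch only (hypothetical data and column names): a narrow,
# data-heavy task left to a model, with strategic interpretation left to humans.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical weekly observations of our price, market demand, and a competitor's price
data = pd.DataFrame({
    "our_price":        [9.99, 9.49, 9.99, 10.49, 9.99, 10.99],
    "demand_index":     [1.02, 0.97, 1.05, 1.10, 0.95, 1.08],
    "competitor_price": [9.79, 9.29, 9.89, 10.19, 9.69, 10.59],
})

# Fit a deliberately simple model of the competitor's pricing behaviour
model = LinearRegression()
model.fit(data[["our_price", "demand_index"]], data["competitor_price"])

# Forecast the competitor's price for a hypothetical upcoming week
next_week = pd.DataFrame({"our_price": [10.49], "demand_index": [1.04]})
predicted = model.predict(next_week)
print(f"Predicted competitor price next week: {predicted[0]:.2f}")
```

Even in this toy case, the output is one voice at the table: the model can extrapolate a price, but deciding whether to match it, undercut it, or exit the segment is a judgment that stays with people.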
Leaders might be startled to discover that after implementing AI-driven strategies, they've lost touch with the cultural nuances that once defined their organization's character. Let's keep leaders accountable for such decisions.
I also see an important role for “creative doubt.” Don’t assume AI outputs are correct just because they’re quantitative and sound convincing. Challenge them, discuss them and bring in the human dimension.
In a balanced approach, AI is one voice at the table, sometimes a useful voice, but prone to error or oversimplification. More like a recent business school graduate than an experienced strategist.
So what do I suggest? Don’t get swept away by the hype.
Focus on the fundamentals of your market, your data infrastructure, and your leadership culture before deploying AI as a strategic fix. If those fundamentals are weak, AI is unlikely to help and might even make things worse.
Remember that strategy is about people, creativity, and purpose. AI can accelerate certain analyses, but it can’t replicate the moral and interpersonal judgments central to building a resilient organization. Keep the human element front and center.
Perhaps McKinsey have access to AI tools that I do not, but so far mine seem to retain much of the bias and little of the empathy of the humans they mimic.