You most likely know Grammarly as an advanced grammar guide and spell-checker; that’s certainly how I started using it, and I think it’s the best on the market. However, there is more to the company, and a lot more to their ambition, than that. Grammarly position themselves as facilitators of communication and collaboration, expanding into areas such as AI writing assistants and, significantly, document-based co-creation through their acquisition of Coda.
As part of the acquisition, the CEO of Coda, Shishir Mehrotra, took over the leadership of Grammarly too. I worked with Shishir a little at Microsoft and he’s an outstanding thinker, going on to do good things at YouTube and founding Coda as a genuinely innovative company. He’s definitely the right person to see past the simplicity of a useful app to the potential beyond, while keeping a focus on design and the business.
One aspect of Grammarly’s work is important to me, but you may not know much about it unless you follow them closely: they do excellent research. That’s always a telling sign of a team looking deeply at their business. They share this research in first-rate blogs - the engineering blog is my favourite - and reports.
Today, I’m writing about their report, The Productivity Shift 2025, which was published in February. You should read it, so I will not repeat its findings here, although I will discuss the implications that play out for me.
I find this report fascinating because it highlights something economists have been warning about for years: that more work and more communication don't automatically mean better productivity. Our contemporary obsession with being constantly connected, whether in meetings, emails, or Slack channels, isn't really creating value.
And now, AI is being framed as the solution. But the real question is: whose productivity is being optimized? For workers? For employers? Or for AI vendors?
For that matter, what does productivity even mean in the context of AI? The report suggests that AI can help reduce communication overload, and it makes an important distinction between performative productivity (work that looks busy but doesn't create value) and actual impact.
The cost of looking busy
For example, in this report I wasn't so much surprised as struck by how much time professionals are losing to what the report calls performative productivity. Over 8 hours a week wasted on work that isn't actually productive? That's an entire workday, every single week.
And yet, companies are still piling on new communication tools and assuming that's the solution. To me, that's a design failure. If people are spending that much time trying to look productive rather than be productive, the problem isn't them: it's the system they're working within.
What did surprise me was the scale of financial loss tied to this inefficiency. The report estimates $9.3 million in annual losses per 1,000 employees due to poor communication alone. That's not an inconvenience: that's an economic crisis in itself.
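It's worth a back-of-envelope check on how these two headline figures relate. A minimal sketch, assuming (my numbers, not the report's) 47 working weeks a year and reading the $9.3 million as covering the same wasted hours:

```python
# Rough sanity check of the report's figures.
# The 8 hours/week and $9.3M/1,000 employees come from the report;
# the 47 working weeks is my assumption (roughly 5 weeks of leave and holidays).
employees = 1_000
hours_lost_per_week = 8          # "performative productivity" time (report)
working_weeks_per_year = 47      # assumption, not from the report
annual_loss = 9_300_000          # report's estimate per 1,000 employees

hours_lost_per_year = employees * hours_lost_per_week * working_weeks_per_year
implied_hourly_cost = annual_loss / hours_lost_per_year
print(f"Implied cost per lost hour: ${implied_hourly_cost:.2f}")
# → Implied cost per lost hour: $24.73
```

An implied cost of roughly $25 per lost hour is well below the fully loaded cost of most professional labour, which suggests the report's loss estimate is, if anything, conservative.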
Instead of throwing AI at the problem, companies could rethink how they organize work in the first place. Maybe the issue isn't communication efficiency at all. Maybe it's the entire way work is structured.
I realize that raises a very radical question, but it can usefully guide AI adoption in the workplace and we have a once-in-a-generation opportunity to rethink processes in this way.
Beyond efficiency: AI and purpose
I’m bothered by the assumption that AI will naturally solve this. Productivity isn't just about efficiency; it's about purpose. We are most productive when we feel that our work matters: to us, to our peers, to our organizations and to our customers or clients. Will AI help people do better, more meaningful work? Or will it simply allow employers to demand more from employees without changing the way work is structured?
So, to be clear, more work isn't the answer. But will AI actually help, or is it adding another layer of complexity?
It depends on how it's implemented. If AI is used in a way that forces people to conform to rigid, algorithm-driven workflows, it will make things worse. But AI could be designed to enhance how people think and collaborate and that's why we need to look at the structural incentives around AI adoption.
AI as an amplifier of human capability
I am sure (and this report confirms) that AI is already changing productivity in unexpected ways. Designers are using AI for ideation in ways they never could before. Students using AI may actually improve their reasoning skills: they can learn how to think better by interacting with these tools. That suggests that AI doesn't necessarily make people more efficient, but it can make them smarter.
That optimistic view assumes that organizations will encourage, or at least allow, people to use AI in such creative ways. But, if AI is treated as just another layer of control, another way for management to squeeze more work out of people, then all we've done is create an even more exhausting treadmill.
In The Design of Everyday Things, Don Norman argued that good design should reduce cognitive load. The same applies here: If AI is going to help, it needs to be human-centered rather than a way to automate busyness.
The report touches on this, but I'd go further: We need designers and policymakers working together to make AI adoption beneficial for everyone, not only for the most AI-literate workers, or AI will become an accelerant of existing inequalities rather than a leveling force.
So, how could organizations design AI tools that actually improve work rather than only increasing the efficiency of current tasks?
The fundamental principle is understanding the real needs of workers. I suspect a lot of AI tools today are built not around how people actually think and work, but around how companies wish they worked. That can be counter-productive. For example, if AI is used to generate emails faster, but people end up spending even more time interpreting those emails, we haven't solved the problem, we've made it worse.
AI can reduce friction in communication rather than increase its volume, but that means designing systems that help people focus, rather than constantly interrupting them with notifications.
The importance of AI literacy
Many companies are treating AI as a cost-cutting tool rather than an enabler of better work. If firms only invest in AI to reduce headcount or drive employees to do more with less, we won't see real productivity gains, but more pressure.
The report suggests that the AI-literate (AI-fluent in the language of the report) are gaining an edge over their less AI-literate colleagues. That's not surprising.
But what really made me pause was how much of the AI adoption is happening on an individual rather than institutional level. The report highlights AI power users who are seeing huge benefits, but it doesn't seem like companies are actively creating the conditions for all employees to benefit. That tells me AI is being adopted in a fragmented, bottom-up way rather than being integrated strategically. That's a missed opportunity.
It suggests that AI isn't fixing a broken system: it's helping a small group of workers work around the system.
However, I’m sure the genie is out of the bottle. AI is no longer something people wait for their employers to give them. Like mobile devices or self-service business intelligence in the past, workers are adopting AI with or without management or IT permission. If companies don't provide AI tools, employees will just bring their own.
The economist Mariana Mazzucato has often written that innovation isn't neutral, but shaped by public and private sector priorities. And in the case of AI deployment those priorities are becoming clear. But what worries me is that AI literacy is being treated as an individual responsibility rather than a systemic issue. If access to AI tools and training isn't well distributed, we're going to see even more polarization in the workforce than we already do.
The good news is that AI literacy or fluency is learnable. Unlike coding, which requires deep technical skills, AI fluency is about knowing how to ask the right questions. It's about thinking critically and knowing how to use AI to enhance your work.
The experienced analyst John Santaferraro and I have been working on exactly this problem, developing a curriculum specifically for executives to develop what we call The New Literacy.
AI is only part of the solution
If I have a criticism of the Grammarly report, it would be that while it outlines a problem (communication overload and productivity stagnation) it might be overestimating AI's role. In its current form, yes, AI could be part of the solution, but only if companies rethink why work is structured the way it is. If they apply AI like a band-aid, the underlying issues won't change.
The real shift needs to be cultural, not technological.
The report does provide strong data and a compelling case for change. And Shishir Mehrotra's statement about a people-centric approach is directionally correct: I completely agree that the goal should be workplaces where people thrive. I have said this, and similar, many times in Creative Differences.
But my concern is that this framing may assume that individual AI adoption can fix structural inefficiencies. If organizations don't rethink their systems, how work is organized, how decisions are made, and what is considered valuable work, then AI could reinforce the status quo rather than transform it.
In his introduction to the report, Shishir is right to say that AI needs to be integrated in a way that amplifies focus and results rather than making people busier. But I'd push back on the idea of using AI power users as the sole model, or even the key model, for transformation: they are a great case study, but they're often early adopters who are comfortable navigating complex systems. That's not true for everyone. If companies only focus on the AI-literate - and executives surely need to focus on their own literacy, too - they risk leaving behind a huge portion of the workforce. Instead of asking "how do we make more AI power users?", we could be asking "how do we make AI work for everyone, even those who aren't power users?"
AI isn't a magic fix: it's a tool. And tools are only as good as the systems they're designed for. If we don't design AI thoughtfully, rather than making our work better, it'll be yet another layer of digital noise.
So, who wins here: organizations or workers?
The hopeful part of me wants to say workers, but history suggests otherwise. For decades, we've seen technology introduced with the promise of making work easier, and yet people are working longer hours than ever. AI could be a liberator, but only if we redefine what we mean by "work."
There is another possibility. What if AI makes management itself more efficient? If AI can automate things like scheduling, performance reviews, or even strategy modeling, then maybe the real transformation isn't just in how employees work, but in how organizations themselves are structured.
Imagine a future where a company doesn't need layers and layers of middle management because AI handles coordination. That could actually reduce bureaucracy and give people more autonomy. That would be a radical shift.
But I wonder: will leaders be willing to embrace it? The problem with AI isn't the technology itself; it's who controls it. If AI is used to give workers more autonomy, that's a revolution. If it's used to micromanage them more efficiently, it's just another corporate tool.
In the past, we designed organizations around hierarchies. But in an AI-driven world, do we even need traditional companies anymore? What if work became more fluid: people working on projects rather than fixed jobs? AI could allow for more flexible, meaningful work rather than just reinforcing old structures.
That's where I think AI is most exciting. It's not about productivity alone: it's about redefining what work even is. In the next decade, we might see a world where more people work in independent, networked ways rather than within rigid corporate systems. AI could enable that shift, if we let it.