I have mentioned in this series the prediction by NVIDIA CEO Jensen Huang that the IT department of every company is going to be the HR department of AI agents.
I don't think he means that companies will manage AI agents as they do human staff, although the cynical will say they may treat agents better. But I do see AI agents becoming integral in workplaces and I already find organizations thinking in familiar terms about how they hire, train, review, and even retire (without benefits) their digital workers.
So, I think Jensen Huang is on to something, but the metaphor is not quite as direct as he implies.
In a recent two-part debate, Forrester analysts Seth Marrs and Anthony McPartlin made quite different arguments about AI agents. Marrs laid out the familiar advantages of agents, especially in areas like handling out-of-hours issues or language barriers in customer support. McPartlin, however, took issue with the narrative of the AI agent as coworker, calling it overhyped, misleading, and driven by the self-serving interests of the tech industry.
There’s a false equivalency of agents as virtual employees ... Coworker suggests relationships or partnerships to complete tasks or processes. Human employees surrounded by AI robots at Amazon warehouses or Tesla factories do not see these robots as coworkers ...
Anthony McPartlin, Forrester
True, but there is also a long history of anthropomorphizing machines. One study of Roomba users noted that people engaged with the vacuums in pet-like ways, talking to them, rescuing them when stuck, and even listing them as family members in surveys. In fact, participants in that study described their Roomba as a helpful assistant, a pet-like being, or a valuable family member, despite knowing it was just a machine.
Perhaps we will find that the more capable AI agents become, the more they start to resemble team members in hybrid human-AI teams.
The Collaboration Gap
On the other hand, here's an interview with Tye Brady, chief technologist at Amazon Robotics, which includes a lot of fascinating information about the scope and skill of their automation efforts. But there is no discussion by Brady or the Business Insider interviewer of how Amazon warehouse workers feel about working alongside the robots.
We do hear Brady's view that robots help workers be more efficient and safer. He emphasizes several times that this is an augmentation strategy rather than a replacement strategy and claims that robotics creates more skilled jobs. For example, he mentions that their Shreveport facility has “35% more skilled jobs" and talks about Amazon's $1.2 billion investment in upskilling programs.
However, there are no interviews with or perspectives from the warehouse workers themselves who interact with these 750,000+ robots daily. We don't hear their experiences, concerns, or thoughts about how the automation has impacted their work. The discussion focuses mainly on the technical capabilities of the robots and the company's vision for human-robot collaboration, rather than the workers' experiences with this technology.
AI’s Professional Development
This matters, because AI agents don’t remain static; they learn and improve over time even without upgrades, an evolution that I have previously described as a form of professional development. For instance, an AI that starts with basic customer support duties might, after ingesting enough interactions and feedback, progress to handling more complex customer inquiries or analyzing customer sentiment trends. In essence, updating an AI’s knowledge base or algorithms is akin to sending an employee to training or giving them on-the-job experience.
In the Economic Times of India, for example, Goldman Sachs CIO Marco Argenti was quoted as seeing AI agents gradually adopting the characteristics of experienced employees: “The AI assistant becomes really like talking to another GS employee.”
That seems unlikely, but Argenti clearly sees AI evolution as a career progression: the AI starts in a junior capacity and, through learning and iteration, becomes more proficient and valuable to the organization. Such framing helps employees and leaders conceptualize AI not just as tech to install, but as contributors whose development needs to be managed over time.
In every large office, I am sure you have wondered what certain people actually do there. You nearly always find out they have institutional knowledge: they know very well from experience how things are done, where everything is, and what everyone’s role and speciality is.
I expect some agents will develop this kind of knowledge, especially if they are interconnected with other agents and human teams in something more like a working relationship than a purely functional task-oriented role, and if they have the capacity to learn.
I have been talking recently with a team that is working on an early but highly functional approach to this vision. GPT Mates, based in Paris, introduces AI agents that are to some extent capable of autonomously collaborating and forming teams. These AI Mates are designed to assess project needs and assemble teams by inviting, or creating, new Mates with the necessary skills and knowledge. This self-directed team-building process means each project is supported by a tailored group of AI agents, optimizing task execution and resource allocation.
In addition to forming their own teams, GPT Mates supports collaboration between AI agents and human team members. Mates share expertise and take on tasks matched to their strengths, while human employees focus on strategic decision-making and creative work: an alliance between ingenuity and intelligence.
In the GPT Mates model, agents can, in a way, ask each other for help. In practice, this could mean that a manager can delegate certain tasks entirely to an AI agent and trust it to deliver results. In doing so, an agent could assemble a team. It will be interesting to see if this may also involve an AI agent asking a human with certain skills to join the team. Would the agent then be managing the human?
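To make the idea concrete, here is a minimal sketch of self-directed team assembly: an agent matches a project's required skills against a pool of available agents and recruits, or creates, whatever is missing. The names and data structures are my own illustrative assumptions, not GPT Mates' actual design or API.

```python
# Hypothetical sketch of self-directed team assembly. All names here
# (Agent, assemble_team, the skill labels) are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    skills: set[str] = field(default_factory=set)


def assemble_team(required_skills: set[str], pool: list[Agent]) -> list[Agent]:
    """Greedily recruit agents until all required skills are covered,
    creating a new single-skill agent for any skill nobody in the pool has."""
    team: list[Agent] = []
    uncovered = set(required_skills)
    for agent in pool:
        useful = agent.skills & uncovered
        if useful:
            team.append(agent)
            uncovered -= useful
    # Skills no existing agent covers: "create" a new specialist Mate.
    for skill in sorted(uncovered):
        team.append(Agent(name=f"new-{skill}-mate", skills={skill}))
    return team


pool = [
    Agent("translator", {"translation"}),
    Agent("analyst", {"data-analysis", "reporting"}),
]
team = assemble_team({"translation", "reporting", "legal-review"}, pool)
print([a.name for a in team])  # ['translator', 'analyst', 'new-legal-review-mate']
```

The interesting design question the article raises maps directly onto the last loop: instead of creating a new Mate for an uncovered skill, the agent might instead invite a human with that skill, at which point the agent is, in effect, staffing and managing a mixed team.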
Are we ready?
No, we are not ready, culturally or structurally, for this change in work. It’s one thing to see AI agents as useful tools to work in the background for us, but I wonder if we’ll tolerate them self-organizing with humans coming along behind to do the fewer and fewer jobs agents can’t handle.
And if IT departments are to act as HR for AI, how literally do we take that? How do we measure an AI agent’s “employee satisfaction” or address its “misconduct”? Are such ideas even relevant?
Nevertheless, I can see "onboarding" processes that focus on system integration, security clearance, and initial training with company data. Performance management would need to balance quantitative metrics like task completion rates and response times with qualitative measures like user satisfaction and adaptation to new scenarios.
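As a toy illustration of what such performance management might look like, here is a sketch that blends quantitative metrics with qualitative ones into a single review score. The metric names and weights are my own assumptions, not an established standard.

```python
# Hypothetical blended performance score for an AI agent: a weighted
# average of normalized metrics (each in the range 0..1). Metric names
# and weights are illustrative assumptions only.

def performance_score(metrics: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Weighted average of the metrics named in `weights`."""
    total_weight = sum(weights.values())
    return sum(metrics[k] * w for k, w in weights.items()) / total_weight


review = performance_score(
    metrics={
        "task_completion_rate": 0.96,  # quantitative
        "response_time_norm": 0.88,    # quantitative (1.0 = fastest)
        "user_satisfaction": 0.74,     # qualitative survey score
        "adaptability": 0.60,          # qualitative reviewer rating
    },
    weights={
        "task_completion_rate": 0.3,
        "response_time_norm": 0.2,
        "user_satisfaction": 0.3,
        "adaptability": 0.2,
    },
)
print(round(review, 3))  # 0.806
```

Even this trivial version surfaces the hard part: the quantitative inputs come from logs, but the qualitative ones still require a human to rate the agent, which is exactly the HR-like work the article is asking about.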
I think GPT Mates has a good model of agents interacting with each other and I hope they do well. But if, like Business Insider, we think and write only about what automation, agents and robots can do, while ignoring how human workers feel ... well, they will make their feelings known one way or another.
From everything I am reading, I am not sure where the place for humans would be in an alliance between ingenuity and intelligence. The AI agents have both.