Generative AI has made it possible for individuals to perform tasks that once required entire teams. Today, a single marketer can produce campaign assets, analyze data, and generate content at scale. A product manager can prototype, test, and iterate without relying on engineering, and developers can ship large volumes of high-quality, machine-written code. The result is the rise of the “superpowered individual” who can do the work of many.
It’s tempting to extrapolate from this that human collaboration is becoming obsolete. If AI can replicate or augment the cognitive contributions of multiple individuals, why bother with the friction of teamwork at all?
In our work with top companies—Tomas as an organizational psychologist and author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, and Dorie as a keynote speaker and consultant for companies reinventing themselves in the face of AI—we’ve encountered a wide range of experimentation, with companies using agents to stress-test strategy, deliver key functions such as finance and operations, and serve as quasi-autonomous development teams.
Still, we believe that teamwork is here to stay—though AI will almost certainly reshape it. Specifically, we believe teamwork will shift in three key ways:
1. Team composition will change. Teams are likely to become smaller (and perhaps more nimble) because individuals can do more on their own, and teams may include both human and nonhuman contributors. As a result, it won’t be sufficient for a few people to be “good with AI.” AI literacy has to become a core team capability, not an individual skill. Teams need shared norms around emerging topics like:
- When to rely on AI (and when not to);
- How to weigh the trade-offs between speed and quality, efficiency and accuracy, and low-value and high-value work; and
- How to interrogate AI’s outputs and combine them with human judgment.
Effective teams will need to develop a mechanism for rewarding people not just for using AI efficiently, but for spotting when it’s wrong. In practice, that may mean making “AI skepticism” a formal part of performance evaluation.
2. The focus of teams will change. Today, many teams focus on logistical issues—a hodgepodge of analysis, reporting, and coordination across divisions and departments. (Status update, anyone?) That kind of task-based teamwork may soon be obsolete, because AI can handle it faster and more efficiently.
But teamwork was never just about task execution—and in the AI era, teamwork will evolve into a higher-value activity that unlocks new possibilities for organizations. Indeed, as transactional collaboration declines, relational collaboration becomes more important.
Leaders should invest in trust-building deliberately: fewer but higher-quality interactions, more in-person time when possible, and structured opportunities for disagreement. Psychological safety matters, but so does intellectual friction. The goal is not harmony, but productive conflict.
As a result of this change, teamwork is likely to feel increasingly meaningful, becoming a core component of both your job and your professional identity. Connecting deeply with other people through shared goals and activities deepens loyalty to your team and your company.
3. The role of leaders will also shift. Leaders in the AI era will need to make three major changes in how they guide their team members. Specifically, that means:
- Becoming more intentional about focusing the team on high-value work. With AI handling more of the analytical and operational workload, leaders will need to redesign teams around judgment, not tasks. Teams should be explicitly chartered around higher-order goals: framing problems, making trade-offs, and aligning on priorities. In other words, the “soft” skills of leadership may become the hardest to replace. A simple rule is that if a meeting could be replaced by an AI-generated summary, it probably should be.
- Reconceptualizing their role as an orchestrator, not the source of answers. Leaders should begin to think of themselves as the architects of how humans and machines work together. That means clarifying roles between AI and people, setting decision rights, and ensuring accountability. It also requires resisting the temptation to defer to AI when stakes are high. Judgment, not output, remains the leader’s ultimate responsibility.
- Measuring what actually matters. In many organizations, performance is still evaluated based on visible activity rather than quality of thinking. In an AI-enabled world, this becomes dangerous. Leaders should shift metrics toward decision quality, learning speed, and long-term outcomes, rather than short-term productivity gains.
In short, the old teamwork of processing and coordination is on its way out. But the new teamwork—integrated with AI and suffused with human talent and judgment—will be more essential than ever, and organizations will need to figure out how to evolve.
The risk is not that AI will destroy teamwork, but that it will expose how much of what we have previously called teamwork was never that valuable to begin with. The opportunity is to rebuild it around what humans do best: thinking critically, connecting meaningfully, and deciding wisely so that a team becomes greater than the sum of its parts.