Living Your Organization’s AI Transformation

3rd Oct 2025 | 07:16pm

Generative AI is tearing through the foundations of knowledge work—and leaders need to get personally involved in the messy, destabilizing process of rebuilding them if they’re truly going to drive growth. This can’t be a neat handoff to IT. It can’t be a lab experiment or a pilot for someone else to run. Why? Simple. Gen AI is challenging what it means to create value, make decisions, and exercise judgment at the highest levels of the enterprise. And if you’re not at least a little unsettled, you may not be close enough to what’s actually changing.

I know this from experience. As a consultant in Accenture’s Talent & Organization practice, I’m being asked not just for help with deploying AI tools, but also—and more importantly—for guidance on how to rethink talent strategy: how teams should work, what “expertise” now means, and what long-standing training models should give way to. And as I have learned, to remain trusted advisors, we have to push ourselves through what we’re asking others to do. What’s more, we have to do it as fast as possible, and continuously, to be sure we keep steering people in the right direction.

In that spirit, our practice turned the bright light on ourselves. We created “Hack Talent”—our own initiative to find out how fast and how much AI could help us change our jobs for the better. We wanted to use AI to take away onerous, repetitive tasks, sure, but more importantly, we needed to find out how it could help us see what our roles could be, and how close we could come to making our daily work exceed the best of what we could imagine. And truth be told, this initiative has been both rewarding and disorienting.

Our “Hack Talent” initiative

We gathered a group of about 40 Talent & Organization consultants and gave ourselves this hypothesis to prove: “The same technologies disrupting work can be used to power a more productive, human-centered future of work.” Our strategy was to navigate this new frontier one prompt, one experiment at a time, with humans and AI learning and adapting together through continuous co-learning. The goal we set was to define a clear path forward, either by strengthening or expanding existing talent development models, or by creating new models. Our purpose, ultimately, was to show clients how it can be done, and done well. And that we know what we’re talking about because we’ve done it.

More specifically, we enabled the team with ChatGPT Enterprise licenses (read: secure) and deployed a base set of enterprise-secure custom GPTs. We kicked off the experiment with a very open and honest admission that we were going to figure this out collectively as we went. We gave the team some high-level guidance to bring AI into every task of every day—using our internally deployed gen AI tools, Microsoft Copilot, and the newly procured ChatGPT Enterprise licenses. As the weeks progressed, we held knowledge-sharing sessions with “show and tell” examples and kept engagement high through virtual collaboration channels. We documented our use cases, noting what we were doing, how we were doing it, and what value we were delivering. We categorized high-level outcomes in terms of time savings, creativity expansion, and new offerings (things we simply couldn’t have done without these tools). Little by little, the use cases began to stack up.

Four months in, we expanded our most successful use cases, conducted more learning sessions, and then essentially started the process again, revisiting the original hypothesis but from a new, higher “ground floor.” These cases have ranged from simple exercises to thorny forays into reshaping our thinking about what we do for the organization and why. In one, for example, a group of us used our enterprise GPT as a thought partner to brainstorm and expand on ideas about how best to prepare an organization’s workforce for the next stage of its AI adoption. We don’t believe the GPT saved us any time on this project; critically, however, we do believe we produced a higher-quality offering than we could have achieved without its support, and we did so without taking more time. Why? In part because it helped us move more quickly past the blank-page phase of brainstorming, and in part because as we iterated with AI, it was able to offer examples we likely wouldn’t have identified on our own, which helped us hone our thinking.

In the group’s biggest use case to date, a team (including me) used our enterprise-protected ChatGPT to help create a talent engine maturity model for use internally and with clients. The model covers six components we believe are crucial to an organization’s talent engine: attract and match, support and empower, move, align, develop, and workforce intelligence. Here’s a brief description of each:

  • Attract and match: Recruiting and developing the people best suited to the work that needs to be done. (Again and again.)
  • Support and empower: Equipping people with the tools, resources, and learning they need in the flow of work.
  • Move: Creating direct and frictionless pathways to support people’s transitions into, around, and out of the organization.
  • Align: Ensuring that motivation, mindsets, and metrics are in sync so employees have a clear sense of their purpose and the value of their contributions.
  • Develop: Helping employees develop future-ready skills through social and experiential learning.
  • Workforce intelligence: Keeping a big-picture view to ensure that the efforts of a productive, engaged workforce add up to an organization that drives growth.

We wanted to define up to five stages of increasing maturity across each component, complete with stage descriptors, value implications, and next best steps. We asked ChatGPT to create a first draft, drawing from enterprise materials. Then, we brainstormed, edited, and asked the technology to revise the model based on our feedback.

As with other projects, we tracked the time we spent along with the time AI spent. We compared the finished work with similar previous initiatives done without AI technologies and assessed it on quality and time saved. Ultimately, we believe that AI saved us 118 hours of dedicated human work. More importantly, we are confident that the end result, aided by the deeper, more relevant cycles of revision, was a higher-quality asset than we could have produced without AI support. We learned what it feels like to use AI to “augment” our capabilities.

Personally, I faced a long learning curve using our AI-powered tools. Without that experience (say, for example, if I had delegated the project to the others in the group), I really wouldn’t have understood what AI could do best, or where input from people mattered—and mattered most. I could easily have underestimated the feedback loops needed and overestimated the tools’ abilities, because I wouldn’t have been able to separate the whole project into relevant parts. I would have under- and over-estimated what it takes to use AI tools well, while failing to understand how the balance of time and effort needed to get high-quality results is likely to shift over time—not to mention failing to grasp the mental hurdles involved in trusting what AI offers to the proper extent.

Based on our experiences to date, we’ve amassed a great deal of data, which has provided us with significant insights into the economics of deploying AI and the personal and team opportunities that come with it. We’ve distilled what we learned about leadership and AI into four high-level actions for senior leaders across the C-suite and at functional levels. None of them are clean. But all of them are necessary. Here they are:

Get into the mess—personally: Generative AI doesn’t reward arms-length distance. It works best in cycles of experimentation: trying prompts, examining outputs, hitting dead ends, finding new angles. But many senior leaders are still delegating exploration downward. That’s a missed opportunity. To lead in this moment, executives need to sit with the ambiguity, help shape the use cases, and model the learning mindset they want others to adopt.

Try this: Block time each week to work with gen AI tools—ideally alongside your strategy, finance, or transformation teams. Don’t just review their output. Contribute prompts. Ask naïve questions. Test how AI reshapes your own thinking. Make your discomfort visible.

Let go of what made you valuable: The hard truth? The skills and instincts that once made you effective—the ability to synthesize information, structure a story, and deliver a crisp answer—are table stakes for machines. That doesn’t make you obsolete. But it does mean your value is shifting from having answers to framing better questions, from holding knowledge to creating meaning in real time. This shift is deeply personal. Many of us built careers and identities on being the smartest person in the room. Generative AI forces a humbler posture. It rewards the leader who can provoke, interpret, and steer, one who sees end-to-end, not the one who already knows. As Accenture research has found, generative AI is democratizing business process redesign, enabling everyone to reshape their workflows. That can be a difficult personal issue to overcome. At one point, I found myself wondering if this would make team members (or me) lose our identities. That, briefly, was a low point. I pushed past it when I learned that I could avoid that outcome by prompting AI-augmented teams to think in new ways. I needed to ask different questions and explicitly reframe my expectations, while opening up my thinking to new approaches. When I did that, I began to see how our teams were starting to work differently, and that put a charge in me.

“The skills and instincts that once made you effective—the ability to synthesize information, structure a story, and deliver a crisp answer—are table stakes for machines. That doesn’t make you obsolete. But it does mean your value is shifting.”

Try this: Start asking your core team what they’re personally learning from AI—then share what you are, too. Build feedback rituals around real-time sense-making, not final answers. Recognize and reward people who are building new mental models as well as those who are doing work more efficiently within old ones. Celebrate the new.

Rebuild trust from the ground up: When generative AI enters the room, questions suddenly abound and trust gets scrambled. Where did this recommendation come from? Is that insight the product of human judgment or AI synthesis—or both? Who made this slide? If you can’t answer those questions, your people won’t trust the process. And if your people can’t trust the process, then ultimately your clients, customers, and stakeholders won’t either. The only way forward is radical transparency. That means designing new practices that surface how decisions are being made and by whom, while also teaching teams to challenge outputs without undermining confidence. In addition, it means encouraging co-learning so people see how humans and AI refine each other’s outputs over time.

Try this: Require teams to annotate strategic outputs: what was AI-generated, what was human-refined, what was rejected. Not to police, but to create shared literacy. Do the same yourself. Treat every decision as a teaching moment about how to think critically in a world of collaborative intelligence. Compare, contrast, and evolve thinking.

Protect space to reimagine—not just react: Right now, many leaders are still testing tools and launching pilots. That work is critical both for the experience it provides and for the opportunity to ensure that responsibility and accessibility are built into every AI deployment. But this is a “yes, and” situation. Organizations also need protected space in which to explore bigger bets, so that short-term gains don’t come at the cost of long-term reinvention. Generative AI isn’t just a new capability. It’s a new medium for thought. It invites us to rethink how knowledge flows, how strategy is shaped, and what collaboration even looks like. That kind of reimagining takes time, reflection, and creativity—three things often squeezed out by the pressure to act fast.

Try this: Launch a “future room” project at the executive level. Create a cross-functional team and ask team members (including yourself) to reframe (not just improve) core elements of the business. Use gen AI as a partner in exploration, not just as a tool for acceleration. Ask: What would our business look like if gen AI were native to it? How would we structure work if AI and humans co-created every decision?

Generative AI isn’t nibbling at the edges of your business—it’s coming for the core. How knowledge is produced. How decisions are made. How strategy is developed and executed. The only way to lead through that level of change is to get closer to it personally, visibly, and vulnerably. Zoom in.

This moment doesn’t require perfect answers. What it requires is for both leaders and advisors to show up less varnished than most of us like to be. It also requires a willingness to let go, get messy, and lead by learning—out loud.

The post Living Your Organization’s AI Transformation appeared first on Ivey Business Journal.