
News & Insight

Browse RALI news and insights to stay up to date with the latest developments in future leadership capability, experience requirements, and the future world of work.

About 35% of current jobs in the UK are at high risk of computerisation over the next 20 years, according to a study by researchers at Oxford University and Deloitte. Go to http://www.bbc.co.uk/news/technology-34066941 and type your job title into the search box there to find out the likelihood that it could be automated within the …

2nd Mar 2018 | 03:55pm

The technology is clearly transformative, but it’s also vulnerable to the same excesses that inflated past financial bubbles.

16th Oct 2025 | 06:30pm

Honor’s Robot Phone pairs AI smarts with a fold-out robotic camera arm and next-gen imaging—the first milestone in its Alpha Plan for an AI device ecosystem.

16th Oct 2025 | 02:52pm

When my teenage son developed mysterious symptoms, I followed the same path anyone else would: I put his health in the hands of a team of medical professionals. Multiple myeloma is a rare blood cancer. It is so uncommon in 17-year-olds that it doesn’t appear on diagnostic checklists. Despite having no clear starting point to work from, my son’s doctors worked their way to an accurate diagnosis through a process of trial and error, bouncing ideas off each other and testing and discarding hypotheses until they could tell us what was wrong. The process felt inefficient and uncertain at a time when I wanted fast answers and cast-iron guarantees. But this messy and distinctively human approach saved my son’s life.

AI promises to improve processes like this, replacing the fallible and unpredictable human mind with the analytic power of trained and tested algorithms. As someone who helps organizations implement AI technology, I know just how much potential it has to make processes and workflows more efficient. But before we start replacing human judgment at scale, we need to think carefully about the hidden costs that can come with productivity gains.

A recent study in The Lancet Gastroenterology & Hepatology presented some sobering findings for AI maximalists. Physicians who spent several months working with AI support in diagnostic roles showed a significant decline in unassisted performance when the technology was withdrawn. This kind of “deskilling” effect isn’t unique to either medicine or AI. We have known for years that extensive GPS use leads to a decline in spatial memory and that easy access to information reduces our ability to recall facts (the so-called “Google effect”).

Most people are willing to accept these cognitive losses in exchange for convenience. And that is a trade-off that individuals need to decide for themselves. But when it comes to organizations and institutions, things are more complex.

The first concern that leaps to mind is losing access to our AI tools after we have outsourced our skills to them. What if the system crashes or performance drops off? While this is a real problem, it is nothing new. We can design backup solutions where necessary, just as we always have with technology.

But there is another set of problems that cannot be resolved simply by putting guardrails in place. Human skills matter not just because they let us act, but also because they let managers and decision-makers understand and supervise what is happening on the front lines. If physicians lose their diagnostic chops, who will validate or audit the output of the algorithms? Who will notice that the edge cases, the patients with statistically implausible diseases, are not being diagnosed correctly? And, perhaps most importantly, who will take responsibility for the algorithmic judgments, whether they are right or wrong?

For most organizations, maintaining public trust is a core part of their relationship with society. Just as we won’t eat in a restaurant if we don’t trust the kitchen to deliver safe food, so we avoid products and services that we believe may harm us. Without accountability, trust is impossible.

As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” The same principle holds true for AI. Without a clear accountability trail that leads to a human decision-maker, it becomes impossible to hold anyone responsible for any harms that arise from the AI’s behavior. And this accountability deficit can destroy the legitimacy of an institution.

We can see these dynamics at work in the U.K.'s 2020 exam grading debacle. At the height of the COVID pandemic, with normal exams cancelled, the U.K. government used an algorithm to assign grades. Because the algorithm leaned heavily on each school's past results, it imported historical biases and systematically favored children from wealthy backgrounds. But even if it had worked perfectly, something critical would still have been missing: institutions that can justify their decisions to those affected by them. Nobody will be satisfied by an algorithmic explanation for a result that might have lifelong effects. Ultimately, the government reversed course, replacing the algorithm's judgment with assessments made by each student's teachers.

What this means for your organization

The challenge isn’t whether to use AI—it’s how to implement it without creating dangerous dependencies. Here are specific actions leaders, managers, and teams can take:

  • Implement AI rotation schedules: Ensure that teams rotate periodically from AI-assisted work to manual work to maintain core competencies.
  • Create skill preservation protocols: Document which human capabilities are mission-critical and cannot be outsourced.
  • Establish accountability chains: Specify which decisions require human sign-off.
  • Institute “analog days”: Schedule regular sessions where teams solve problems without AI tools.
  • Design edge case challenges: Create exercises focusing on unusual scenarios AI might miss.
  • Maintain decision logs: Create institutional memory of the value and role of human judgment by documenting when and why you override AI recommendations (see the sketch after this list).
  • Practice explanation exercises: Regularly require team members to explain AI outputs in plain language; if they can't explain it, they shouldn't rely on it.
  • Rotate expertise roles: Ensure multiple people can perform critical tasks without AI support, preventing single points of failure.
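
To make the decision-log idea concrete, here is a minimal sketch of what such a log could look like in code. This is an illustration, not a prescribed tool: the `OverrideRecord` fields, the `record_override` helper, and the example values are all hypothetical, and a real deployment would need durable storage and access controls.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One entry in an append-only log of human overrides of AI output."""
    case_id: str            # identifier for the case or ticket under review
    ai_recommendation: str  # what the AI system suggested
    human_decision: str     # what the accountable person actually decided
    rationale: str          # why the recommendation was overridden
    decided_by: str         # named human decision-maker (the accountability trail)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In practice this would be durable storage (a database table or audit service);
# a plain list keeps the sketch self-contained.
decision_log: list[OverrideRecord] = []

def record_override(entry: OverrideRecord) -> None:
    """Append an override so later audits can see when and why humans disagreed."""
    decision_log.append(entry)

# Example usage with invented values:
record_override(OverrideRecord(
    case_id="CASE-1042",
    ai_recommendation="decline claim (model confidence 0.91)",
    human_decision="approve claim",
    rationale="Edge case: policy rider not represented in the training data.",
    decided_by="j.smith",
))
```

The point of the structure is the rationale and decided_by fields: every algorithmic judgment ends with a stated reason and a named human who can be held accountable for it.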

Warning signs your organization is too AI-dependent

Watch for these red flags that indicate dangerous levels of dependency:

  • Teams can’t explain AI recommendations
  • Acceptance of AI results without validation has become the norm
  • Staff miss errors or outliers that the AI overlooks
  • Employees express anxiety about performing tasks without AI assistance
  • Simple decisions that once took seconds now require AI consultation

If you spot any of these signs, you need to intervene to restore human capability.

The path forward

My son’s cancer was successfully diagnosed thanks to structured redundancy in his care team. Multiple specialists approached the same problem through different lenses. The bone specialist saw what the blood specialist missed. The resident asked the naive question that made the senior doctor reconsider. This kind of overlap can look like inefficiency at times, but if we don’t work to retain it, we lose something vital.

We should not shy away from the advantages AI can offer when it comes to analytical speed and pattern recognition. But at the same time, it is essential that we shield the decision-making process from being overwritten by a single algorithmic voice. We must keep humans in the loop, both because they can look beyond statistical likelihood and because they can be held accountable for their final decisions.

Yes, maintaining human capabilities alongside AI will be expensive. Training tracks that preserve human skills, AI-off drills, and rigorous human audits all cost money. But they preserve the institutional muscle memory that holds the whole edifice up. The cost of losing the human perspective is one we cannot afford to bear.

16th Oct 2025 | 02:43pm

Try this cognitive audit.

16th Oct 2025 | 01:15pm

Every day another industry leader proclaims that everything will change with AI. While there is no question AI is the most transformative tech shift since the Industrial Revolution, all the hype means leaders lack real answers about how those changes …

16th Oct 2025 | 10:40am

There’s no shortage of challenges facing employers and the U.S. workforce. From economic concerns to the impact of AI, both workers and organizational leaders are navigating big changes. One trend deserves particular attention: working mothers are ree…

16th Oct 2025 | 10:27am

I was interviewing for a job as a customer service agent with Anna. She had a low, pleasant voice and she'd nailed the pronunciation of my name—something few people do. I wanted to make a good impression, except I had no idea what Anna was thinking bec…

16th Oct 2025 | 09:00am

Below, Scott Anthony shares five key insights from his new book, Epic Disruptions: 11 Innovations That Shaped Our Modern World.

Scott is a clinical professor of strategy at the Tuck School of Business at Dartmouth College. His research and tea…

16th Oct 2025 | 08:00am

Every office has that coworker who turns up to a meeting coughing and sniffling while proudly proclaiming they have never once taken a sick day in their career. (If there isn't one, maybe it's you.)

But as one viral TikTok makes clear, those…

16th Oct 2025 | 07:00am