Thriving in the Age of AI: Unveiling the Expertise Gap in Generative Technology Adoption

At a time when artificial intelligence (AI) is transforming business, the scramble to introduce AI-based tools into organisational processes has raised a question: who should lead the adoption of revolutionary technologies? Intuition might single out young, digital-native employees for this task, given their high exposure to digital technologies, their reputation for speed and agility, and their willingness to adopt technological trends early on. But new research from a collaborative study between the Harvard Business School, the MIT Sloan School of Management and the Wharton School at the University of Pennsylvania suggests otherwise, highlighting the web of competences, risks and senior leadership involvement required to navigate generative AI.

THE GENERATIVE AI HYPE: ASSESSING THE STATUS QUO

Generative AI is heralded as one of the most transformative developments of our time, capable of holding human-sounding conversations, paraphrasing text for nuance, and producing solutions to a wide range of computational tasks. But as individuals and organisations rush to embrace these technologies, relying on junior workers to guide their adoption is a risky gambit.

The Pitfalls of Reverse Mentoring in AI Adoption

The idea of reverse mentoring – younger staff teaching older colleagues about technology – has an appealing ring of vibrant, democratised knowledge. But the study reveals a chink in this armour: a gap in understanding and expertise that could trip up the safe deployment of AI systems.

AI Risk Mitigation: Where Junior Consultants Miss the Mark

In the experiment, the junior consultants’ understanding of the problem – and of the risk-mitigation strategies they proposed – differed markedly from that of experts, even though the junior staffers intended to achieve the same business outcome. The popularity of generative AI models such as GPT-4 can make it tempting to assume that digital nativity equates to deep technological understanding, but this is an assumption to avoid.

THE STATUS THREAT AND VALUED OUTCOMES DICHOTOMY

The premise of the research is that the evolution of AI will be different because of a ‘status threat’ facing practitioners in the senior–junior dynamic, and because of the extreme speed of AI’s evolution relative to other disciplines. The paper’s thesis also sheds light on the knowledge genuinely required to manage or practise AI: a deep and broad understanding of the discipline that goes well beyond a general awareness that the technology exists.

BRIDGING THE GAP: THE IMPERATIVE OF EXPERTISE IN AI GOVERNANCE

The central finding of the study is the need for expertise. As the use of generative AI expands within organisations, there is a pressing need to instil general, organisation-wide AI literacy. This marks a shift from bottom-up to top-down AI governance, in which senior leadership plays a key role in AI implementation, learning and external advising.

The Road Ahead: Senior Leadership at the AI Helm

To succeed in this AI-fuelled future, senior professionals will need to fulfil a dual responsibility: on the one hand, they will need to be users of current technologies, implementing them to meet the demands of the present; on the other, they will have to become visionaries who anticipate the shape of tomorrow’s technological landscape well before it becomes readily available. This involves not only knowing what is possible today but also forming an informed view of what might be possible tomorrow and how it could affect the company and its customers.

The Role of Upskilling in Demystifying AI

With these obstacles in mind, the way ahead is not overly abstract, but it is clear: upskilling becomes a non-negotiable part of business strategy at all levels of the organisation, and certainly in the ranks of senior leadership.

THE STATUS OF EXPERTISE IN NAVIGATING AI’S FUTURE

The beginning of that journey is fraught with risk, but it also presents incredible opportunities. As organisations navigate their way into unfamiliar territory, what I call status – not just position, but expertise – will ultimately determine who gets to do what as generative AI becomes more widely used in business.

In short, this study provides a cautionary but ultimately hopeful tale as we stand on the brink of a technological revolution. By gaining greater awareness of the chequered evolution of human status hierarchies and power dynamics, we can draw the right lessons as we set about harnessing the full productive potential of AI. We need to rely on expertise, planning and learning, not digital nativity. We need to avoid blind spots and fallback claims if we are to navigate the status quo and lead our organisations into a future where machines not only complement but also multiply human potential.

Jun 08, 2024