
Eight in ten consultants are using AI as part of their work, with ChatGPT the most popular tool. However, only 10% use it all the time – and with more than eight in ten concerned about being unable to validate the sources of content AI produces, the technology is still struggling to win the industry’s wholehearted trust.
The consulting industry has long been at the forefront of the hype surrounding automation, machine learning and generative AI. Since the launch of ChatGPT, the sector’s biggest names have clamoured to partner with AI’s leading players – from software developers like OpenAI to chip producers like Nvidia. But while business advisories are keen to encourage their clients to invest heavily in implementing AI throughout their organisations, the jury still seems to be out within consulting itself.
Uncertainty around privacy, data security, accuracy and affordability still means that while AI remains ‘the future’ according to the press output of most firms, behind closed doors they are a little more cautious. For example, when Harvard Business School researchers studied 700 consultants, they found those using AI tools completed tasks roughly a quarter faster. However, speed is not everything. Even in areas such as data processing – where consultants and their adjacent service providers have insisted AI could save huge amounts of time making sense of masses of information – some now suggest AI should only be fed data in small increments, so that the quality of its output can easily be checked against the source material.
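To illustrate what that incremental approach might look like in practice – a minimal sketch only, not a description of any firm’s actual workflow – the Python snippet below batches records before summarising them, with the hypothetical summarise_with_ai function standing in for whichever model or API a firm actually uses. Keeping each batch small means every AI-generated summary can be checked by eye against a manageable slice of the source data.

```python
from typing import Iterable, List


def summarise_with_ai(rows: List[str]) -> str:
    """Hypothetical stand-in for a call to an AI summarisation model or API."""
    return f"Summary of {len(rows)} records (model output would appear here)."


def incremental_review(records: Iterable[str], batch_size: int = 10) -> List[str]:
    """Feed records to the model in small increments, flagging each batch
    summary for human review before it is accepted into the final output."""
    batch: List[str] = []
    approved: List[str] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            summary = summarise_with_ai(batch)
            # A small batch keeps the source material short enough for a
            # reviewer to validate the summary against it line by line.
            print(f"Review against {len(batch)} source rows: {summary}")
            approved.append(summary)
            batch = []
    if batch:  # summarise any leftover records in a final, smaller batch
        approved.append(summarise_with_ai(batch))
    return approved


if __name__ == "__main__":
    # Toy data: 25 records reviewed in batches of 10, 10 and 5.
    incremental_review([f"transaction {i}" for i in range(25)], batch_size=10)
```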
Source: Eden McCallum & LBS Consultant Survey 2024
At the same time, Deltek’s annual Clarity Report found that other anxieties mean AI is not being adopted as fully as it might be in consulting. While 74% of professional services leaders said successful AI implementation could deliver substantial competitive advantage, 64% also expressed concerns about the impact of future regulations.
A new study from Eden McCallum has painted a similarly complex picture of AI in consulting. Surveying 500 consultants in the UK and Europe, the researchers found that the majority do use AI in their work to some extent – with 79% saying they deployed it at least ‘sometimes’. But while that might look good for consultants presenting themselves to clients as ‘forward thinking’ technology experts, just 10% said they ‘always’ used it in their overall work.
The most common platform deployed was ChatGPT: of the consultants who used AI in their work, only 6% did not use it at all. Meanwhile, half of all consultants used Microsoft Copilot occasionally, and 30% used Gemini in certain situations. However, regular use of any of these tools was still only present in a minority of cases.
Source: Eden McCallum & LBS Consultant Survey 2024
When asked what they used AI for, the portion of respondents who said they ‘always’ deployed it was spread even thinner, and was heavily weighted toward the least committal tasks. While 76% said they used the technology for idea generation, just 9% deployed it for that every time, before going on to generate the content themselves. The only other ‘always’ answer that came close was the 7% who used it for ‘synthesis’: comparing two conflicting points of view to see where overlap or contrast could be found.
Meanwhile, only 51% said they used AI for data analysis even occasionally, and a tiny 1% said they would use it for this kind of work every time. Not a single respondent was willing to use AI every time to plan projects or handle business admin – despite consultants regularly touting the time-saving back-office functions of AI to their clients.
So, what might be giving consultants pause for thought when it comes to their own operations? Despite proponents’ claims that the technology is progressing rapidly, the accuracy of AI-generated content has stalled – with a number of notable studies suggesting it may actually have declined in recent months. A recent BBC study found that when researchers fed BBC news stories to ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI, the resulting summaries contained “significant issues” in 51% of cases. Meanwhile, the Tow Center for Digital Journalism recently studied eight AI search engines – including the three that consultants favour most – and found their results were incorrect in 60% of cases.
Source: Eden McCallum & LBS Consultant Survey 2024
Indeed, Eden McCallum found that consultants were acutely aware of these issues. When asked about ‘hallucinations’ – the euphemism developers deploy for the misinformation AI has a reputation for presenting as fact – 69% of respondents said they were concerned to some extent, while only 31% said they were not concerned at all. The inability to validate sources in AI-generated output was an even greater issue: just 11% said they were not worried, while 23% were ‘very concerned’.
When asked how these fears factored into their use of AI products at work, consultants gave a variety of answers. One said they treated AI like a “recent graduate”, who has great enthusiasm and time to research “but lacks the business acumen to sense check the findings”. Meanwhile, others simply noted AI was most useful for “getting a fast start on tasks” – before doing the rest the “classic” way – meaning hallucinations are “generally not an issue”.
As much as they might have wanted to spin their answers, then, the respondents ultimately highlighted the same theme. AI can be helpful in certain contexts – but that qualifier is all-important. The technology is still not trusted with tasks of any material importance: the contexts where it is used tend to sit far from the final outputs that reach clients, and still require consultants to do plenty of work of their own.