A question that data and senior leaders often ask me is:
How do we put people at the centre of our AI projects?
My view is that putting people at the centre of these initiatives is essential, for a number of reasons:
To ensure AI projects actually deliver value by supporting the real workflows people are using.
To reduce risk from over‑automation or the unauthorised use of AI systems.
To address concerns about – and internal resistance to – AI projects.
Addressing these issues in partnership with users is what ensures that AI is adopted safely and usefully within your organisation. Let’s look at each point in turn.
The AI hype of recent years has led to a lot of pilots and proof-of-concept projects that never made it to production. Even the most optimistic observers estimate that less than 40% of AI pilots are deployed more widely.
Unsurprisingly, engaging users early and often in the definition, build and testing phases maximises the chance that the solution will actually deliver value. In particular, focus on:
That last point is especially important. Only users can reliably assess whether an AI solution is accurate and trustworthy enough to augment their workflows, rather than requiring endless checking.
There is an 80/20 rule that can be applied to AI systems (and automation in general): it is often 20% of a process that, if automated, delivers 80% of the value in terms of time saved. Identifying this 20% helps you:
Another helpful shift is to move from asking:
“How can I automate this process with AI?”
to asking:
“How can I help the people who execute this process with AI?”
For example, one of our clients recognised that their customer service representatives were spending a lot of time researching solutions for problems that had already been addressed in previous calls.
Creating an AI‑powered search capability over the transcripts of previous calls allowed them to leverage this knowledge base and deliver real value to their representatives, all without trying to automate the entire support process.
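To make the idea concrete, here is a minimal sketch of that kind of transcript search. It is illustrative only: the transcript snippets and function names are hypothetical, and a production system would use embedding-based retrieval rather than the crude token-overlap scoring shown here.

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase and strip common punctuation so tokens match across documents
    return [w.strip(".,?!;:").lower() for w in text.split()]

def score(query, doc):
    # Cosine similarity over raw token counts: a crude stand-in for the
    # embedding-based relevance scoring a real system would use
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def search(query, transcripts, top_n=3):
    # Return the past-call transcripts most relevant to a new query
    ranked = sorted(transcripts, key=lambda t: score(query, t), reverse=True)
    return ranked[:top_n]

# Hypothetical snippets from previous support calls
transcripts = [
    "Customer could not reset their password; resolved by clearing the cache.",
    "Billing question about a duplicate invoice charge.",
    "Password reset link expired; issued a new link manually.",
]

results = search("how do I reset a password", transcripts, top_n=2)
```

The point of the design is that the representative stays in the loop: the tool surfaces candidate answers from past calls, and the human decides what to use.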
AI risks in the enterprise come from several sources, but there are two where users play a particularly strong role: over‑automation and unauthorised AI usage.
AI systems need to be designed so that people can always review and, if necessary, challenge their outputs and recommendations. A people‑centred design process, like the one described above, helps to embed that control.
When it comes to unauthorised AI usage, it is important not to let the perfect be the enemy of the good. Simply shutting off access to consumer AI platforms, without providing a credible managed alternative, tends to:
Instead, focus on enabling people to integrate general‑purpose AI into their work safely. For example:
By working with users to define safe, supported patterns of use, you reduce both operational and data risk.
Attitudes to AI inside organisations are very complex. Many people see both significant benefits and significant risks. The result can be:
Investing in education and enablement is essential if you want people to get the best out of the technology and manage the risks. Done well, this helps people to:
Many users are still unaware of the range of solutions that can be built, or how to work with them effectively.
By contrast, setting arbitrary goals and incentives for AI usage (“everyone must use AI X times a week”) is unlikely to help, and may make matters worse:
CEOs and senior leadership also play an important role. Trumpeting the expected “efficiencies” (read: job cuts) that AI is going to deliver is one of the fastest ways to sow fear and distrust.
Instead, leaders should:
Transparency more broadly is key. Establishing an AI governance forum that reports regularly on AI initiatives helps people affected by any change understand how it will affect them, and gives them the opportunity to share their concerns and ideas.
Ultimately, it comes back to providing solutions that are useful and centred around the users themselves.
Get that right, and value delivery, risk management and internal buy‑in all become significantly easier.