People‑centred AI: How to deliver value, reduce risk and win buy‑in

11 Feb 2026
3 min read

A question that data and senior leaders often ask me is:

How do we put people at the centre of our AI projects?

My view is that putting people at the centre of these initiatives is essential, for a number of reasons:

  • To ensure AI projects actually deliver value by supporting the real workflows people are using.

  • To reduce risk from over‑automation or the unauthorised use of AI systems.

  • To address concerns about – and internal resistance to – AI projects.

Addressing these issues in partnership with users is what ensures that AI is adopted safely and usefully within your organisation. Let’s look at each point in turn.

1. Involving users to ensure AI projects deliver value

The AI hype of recent years has led to a lot of pilots and proof-of-concept projects that never made it to production. Even the most optimistic observers estimate that less than 40% of AI pilots are deployed more widely.

Unsurprisingly, engaging users early and often in the definition, build and testing phases maximises the chance that the solution will actually deliver value. In particular, focus on:

  • Understanding the real processes people are using, and where they are spending time on tasks that could be automated or semi‑automated.
  • Involving users in early design explorations as you scope out the solution.
  • Engaging users to test and refine the solution as it evolves.

That last point is especially important. Only users can reliably assess whether an AI solution is accurate and trustworthy enough to augment their workflows, rather than requiring endless checking.

An 80/20 rule applies to AI systems (and automation in general): automating just 20% of a process often delivers 80% of the value in terms of time saved. Identifying that 20% helps you:

  • Focus the project on what is really important.
  • Reduce some of the risk and adoption concerns we’ll come back to later.

Another helpful shift is to move from asking:

“How can I automate this process with AI?”

to asking:

“How can I help the people who execute this process with AI?”

For example, one of our clients recognised that their customer service representatives were spending a lot of time researching solutions for problems that had already been addressed in previous calls.

Creating an AI‑powered search capability over the transcripts of previous calls allowed them to draw on this knowledge base and deliver real value to their representatives, all without trying to automate the entire support process.
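As an illustration, here is a minimal sketch of how that kind of search might work: embed each transcript once, then rank transcripts by similarity to a representative’s query. This is not the client’s actual system; the embed_text function is a deliberately toy, hashed bag‑of‑words stand‑in for whatever governed embedding model your organisation has approved, and the transcripts are invented.

```python
import numpy as np

def embed_text(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for an approved embedding model: hashed bag-of-words.
    In practice, call your organisation's governed embedding service."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Index the transcripts of previous calls once, up front.
transcripts = [
    "Customer could not log in after password reset; clearing cache fixed it.",
    "Billing duplicated an invoice; refund issued via the finance portal.",
]
index = np.stack([embed_text(t) for t in transcripts])

def search(query: str, top_k: int = 3):
    """Return the previous calls most similar to the new query."""
    scores = index @ embed_text(query)
    best = np.argsort(scores)[::-1][:top_k]
    return [(transcripts[i], float(scores[i])) for i in best]

for transcript, score in search("customer cannot log in"):
    print(f"{score:.2f}  {transcript}")
```

The design point is that the representative stays in control: the system retrieves candidate answers from past calls, and the human decides what applies to the customer in front of them.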

2. Involving users to reduce AI risk

AI risks in the enterprise come from several sources, but there are two where users play a particularly strong role:

  • Inaccurate or poor decision‑making in AI solutions (for example, a mortgage approval system that approves mortgages for high‑risk applicants).
  • IP and data leakage via the use of unauthorised AI systems (for example, people using public AI tools and pasting in sensitive information).

AI systems need to be designed so that people can always review and, if necessary, challenge their outputs and recommendations. A people‑centred design process, like the one described above, helps to embed that control.
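To make that concrete, here is a minimal sketch of what such a review gate might look like, using the mortgage example above. The Decision record and record_review helper are hypothetical names invented for this illustration, not any particular product’s API; the point is simply that the model’s output is stored as advisory, and a named human always sets the final outcome.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    """One case passing through the approval workflow."""
    case_id: str
    ai_recommendation: str          # advisory only, e.g. "approve" / "decline"
    ai_rationale: str               # kept so reviewers can challenge the reasoning
    reviewer: str | None = None
    final_outcome: str | None = None
    reviewed_at: datetime | None = None

def record_review(decision: Decision, reviewer: str, outcome: str) -> Decision:
    """A named human always sets the final outcome; it may differ from the AI's."""
    decision.reviewer = reviewer
    decision.final_outcome = outcome
    decision.reviewed_at = datetime.now(timezone.utc)
    return decision

d = Decision("case-042", ai_recommendation="decline",
             ai_rationale="high debt-to-income ratio")
d = record_review(d, reviewer="j.smith", outcome="approve")  # human overrides the model
print(d.final_outcome, "by", d.reviewer)
```

Keeping both the AI’s recommendation and the human’s decision in the record also gives you an audit trail for spotting where the model and your people systematically disagree.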

When it comes to unauthorised AI usage, it is important not to let the perfect be the enemy of the good. Simply shutting off access to consumer AI platforms, without providing a credible managed alternative, tends to:

  • Breed resentment, and
  • Drive “under‑the‑counter” use of AI systems, which is even harder to govern.

Instead, focus on enabling people to integrate general‑purpose AI into their work safely. For example:

  • Redrafting emails or documents within approved tools.
  • Summarising lengthy documents using governed prompts and data sources.
  • Surfacing guidance on what is and isn’t acceptable to share.

By working with users to define safe, supported patterns of use, you reduce both operational and data risk.
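On that last bullet, a lightweight pre‑submission check can surface guidance at exactly the moment it is needed. The sketch below is illustrative only: the two regexes are toy examples, and a real deployment would draw on your organisation’s own data‑classification rules and tooling rather than this hypothetical check_before_sharing helper.

```python
import re

# Illustrative patterns only; replace with your organisation's
# data-classification rules in a real deployment.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "long account/card number": re.compile(r"\b\d{12,19}\b"),
}

def check_before_sharing(text: str) -> list[str]:
    """Return warnings for content that may not be acceptable to paste into an AI tool."""
    return [
        f"Possible {label} detected - check before sharing."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

draft = "Customer jane.doe@example.com reported card 4111111111111111 failing."
for warning in check_before_sharing(draft):
    print(warning)
```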

3. Involving users to address concerns and drive buy‑in

Attitudes to AI inside organisations are complex. Many people see both significant benefits and significant risks at once. The result can be:

  • Public support for “AI initiatives” in the abstract.
  • Very limited willingness to change individual ways of working.

Investing in education and enablement is essential if you want people to get the best out of the technology and manage the risks. Done well, this helps people to:

  • Understand what AI can and cannot do today.
  • Articulate their own needs more clearly as AI solutions evolve.
  • Spot opportunities where AI can genuinely help their team.

Many users are still unaware of the range of solutions that can be built, or how to work with them effectively.

By contrast, setting arbitrary goals and incentives for AI usage (“everyone must use AI X times a week”) is unlikely to help, and may make matters worse:

  • It encourages superficial use to “hit a number”.
  • It risks reinforcing the perception that AI is being imposed, rather than built with them.

CEOs and senior leadership also play an important role. Trumpeting the expected “efficiencies” (read: job cuts) that AI is going to deliver is one of the fastest ways to sow fear and distrust.

Instead, leaders should:

  • Be transparent about the objectives of AI initiatives.
  • Emphasise augmentation and quality, not just cost reduction.
  • Model responsible, practical use of AI in their own work.

Transparency more broadly is key. Establish an AI governance forum that reports on:

  • What is being piloted and why.
  • What is working and what is not.
  • How projects are progressing.

This helps the people affected by any change understand what is coming, and gives them the opportunity to share their concerns and ideas.

Bringing it together

Ultimately, it comes back to providing solutions that are useful and centred on the users themselves:

  • Designed around real workflows and pain points.
  • Built and tested with the people who will use them.
  • Supported by clear guidance, education and transparent governance.

Get that right, and value delivery, risk management and internal buy‑in all become significantly easier.
