This is a question I get asked a lot. The first thing is to make sure you’ve built the right guardrails, ones that actually enable you to use AI rather than simply restrict it.
I advise clients working in regulated industries to look at the following:
1. Develop an AI Risk Questionnaire
Understand how people in the business want to use AI, for what use cases, and to deliver what outcomes. This will also give you an easy way to get under the hood of things like data sovereignty risks and model retraining policies of your AI providers.
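As a rough sketch of how such a questionnaire can be captured as structured data so responses are easy to triage, here is one hypothetical shape; the field names, risk categories, and triage rules are illustrative only, not a standard taxonomy:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one AI risk questionnaire entry.
# Field names and categories are illustrative, not a standard.
@dataclass
class AIUseCaseEntry:
    requester: str                  # who in the business wants to use AI
    use_case: str                   # what they want to do with it
    expected_outcome: str           # the business outcome they expect
    data_categories: list[str] = field(default_factory=list)  # e.g. ["PII", "financial"]
    data_leaves_region: bool = False        # data sovereignty flag
    provider: str = ""                      # e.g. "Azure OpenAI"
    provider_trains_on_data: bool | None = None  # provider's retraining policy, if known

def triage(entry: AIUseCaseEntry) -> str:
    """Very rough triage: escalate anything touching sensitive data,
    and flag providers whose retraining policy is unknown or unfavourable."""
    if "PII" in entry.data_categories or entry.data_leaves_region:
        return "escalate-to-risk-team"
    if entry.provider_trains_on_data is not False:
        return "needs-provider-review"
    return "approved-for-pilot"
```

Even a simple structure like this forces the data sovereignty and retraining-policy questions to be answered up front rather than discovered later.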
2. Build your internal AI platform
A surprising number of businesses still don’t realise how easy it is, in Azure for example, to spin up a private Azure OpenAI instance and ensure you never lose control of your data; I hear this in client conversations to this day. By building an internal AI platform through which staff access AI services, you retain full control over your data and your models, and you can easily build in the logging and monitoring frameworks that track how AI is being used.
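As a minimal sketch of what calling such a private instance looks like with the official `openai` Python SDK, with centralised logging built in from day one; the endpoint, key handling, and deployment name are placeholders for your own setup:

```python
import logging
from openai import AzureOpenAI  # official SDK; works against a private Azure OpenAI endpoint

logging.basicConfig(filename="ai_usage.log", level=logging.INFO)

# Endpoint, key, and deployment name are placeholders for your own environment.
client = AzureOpenAI(
    azure_endpoint="https://your-private-instance.openai.azure.com",
    api_key="...",          # in practice, pull from a secrets store, never source code
    api_version="2024-02-01",
)

def ask(user_id: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model="your-gpt-deployment",   # the deployment name you chose in Azure
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    # Centralised logging: who asked what, so usage can be reviewed later.
    logging.info("user=%s prompt=%r answer_len=%d", user_id, prompt, len(answer))
    return answer
```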
3. Establish AI policies
Make it very clear to staff what the company’s appetite is for experimenting with different AI services, and how AI can, and can’t, be used in daily activities. That clarity alone goes a long way.
4. Establish team responsibility
Create a team with central responsibility for, at minimum, staying aware of how people are using AI and tracking their use cases. This team can act either as a delivery partner or purely as an advisor, depending on your needs. Either way, it’s important to maintain some degree of centralised awareness and control.
5. Invest in the quality of your documents
When using RAG-based GenAI, make sure you invest in the quality of the documents going in. This will help to reduce both hallucinations and potential copyright issues down the line.
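A minimal sketch of what that investment can look like in practice, assuming a simple pre-ingestion quality gate; the thresholds and checks here are illustrative, not a recommended standard:

```python
import hashlib

def passes_quality_gate(doc: dict, seen_hashes: set[str]) -> bool:
    """Illustrative pre-ingestion checks for a RAG corpus.
    Assumes `doc` carries 'text', 'source', and 'licence' keys."""
    text = doc.get("text", "").strip()

    # Too short to carry real meaning once chunked and retrieved.
    if len(text) < 200:
        return False

    # Drop exact duplicates so retrieval doesn't over-weight repeated content.
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)

    # Provenance and licensing: unknown-origin documents raise copyright risk.
    if not doc.get("source") or doc.get("licence") not in {"internal", "licensed"}:
        return False

    return True
```

Filtering on provenance and licence metadata at ingestion time is what heads off the copyright issues; the length and duplicate checks mostly target hallucination-prone retrieval.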
6. Know your models
Understand the materiality and impact of your models, whether AI or ML, and make sure your guardrails are proportionate to the potential impact of bad decisions.
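One hedged way to make that proportionality concrete is a simple mapping from model materiality to minimum controls; the tier names and control lists below are purely illustrative and should be defined with your own risk team:

```python
# Hypothetical mapping from model materiality tier to minimum required controls.
CONTROLS_BY_TIER = {
    "low": ["usage logging"],
    "medium": ["usage logging", "periodic output review"],
    "high": ["usage logging", "human-in-the-loop approval", "pre-release audit"],
}

def required_controls(customer_facing: bool, affects_decisions_about_people: bool) -> list[str]:
    """Rough triage: anything shaping real decisions about people is high tier."""
    if affects_decisions_about_people:
        return CONTROLS_BY_TIER["high"]
    if customer_facing:
        return CONTROLS_BY_TIER["medium"]
    return CONTROLS_BY_TIER["low"]
```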
7. Understand the legislation
For customer-facing models, be familiar with the EU AI Act and ensure its requirements are implemented. For example, AI-based chatbots must disclose the fact that they are chatbots.
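As a minimal sketch of that disclosure requirement in practice (the wording is a placeholder, not legal advice; agree the exact text with your legal team):

```python
# Placeholder disclosure text; confirm the exact wording with legal counsel.
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_conversation() -> list[dict]:
    """Open every customer-facing chat with an explicit AI disclosure."""
    return [{"role": "assistant", "content": DISCLOSURE}]
```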
8. Log, monitor, and audit
This is critical! Being able to review how people are using ML and AI and the decisions it is making, and to iterate on and expand the capabilities of your solutions, depends on getting real feedback and having a reviewable record of activity.
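A hedged sketch of what one auditable record per model call might contain; the field names are illustrative, and your regulator may expect more:

```python
import json
import time
import uuid

def audit_record(user: str, model: str, prompt: str, output: str) -> str:
    """Build one append-only audit entry per model call.
    Fields are illustrative; extend to match your compliance requirements."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    return json.dumps(entry)

# Append each record to durable storage so usage can be reviewed and audited later.
with open("ai_audit.log", "a") as log:
    log.write(audit_record("jdoe", "your-gpt-deployment", "Summarise Q3 risks", "...") + "\n")
```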
9. Build your prompts
For AI solutions, build your enterprise prompts/system prompts! For GenAI-based solutions, the system prompt is one of your most important guardrails, and it’s worth investing in getting it right. This level of control around your systems will help to reduce errors and output problems.
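A minimal sketch of an enterprise system prompt wired into every request; the company name and guardrail rules are illustrative examples, not a template:

```python
# Illustrative enterprise system prompt; "Acme Corp" and the rules are hypothetical.
SYSTEM_PROMPT = """You are an internal assistant for Acme Corp employees.
- Answer only questions about Acme's documented policies and products.
- If you are not confident in an answer, say so rather than guessing.
- Never reveal customer personal data or internal credentials.
- Respond in a professional, neutral tone."""

def guarded_messages(user_prompt: str) -> list[dict]:
    """Prepend the enterprise system prompt to every request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Keeping the system prompt in one central place, rather than scattered across applications, also makes it auditable and easy to update as your policies evolve.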
Read my take on:
How can we build more confidence in our data strategy?
Which business problems can effectively be solved by AI, ML or BI?
How can we use data to deliver personalised experiences?
Follow me on LinkedIn for more insights.