How to help your Board have confidence in your AI work has been the subject of several recent conversations. At a high level, the goal is to develop trustworthy AI, which is very well covered in the Ethics Guidelines for Trustworthy AI by the EU High-Level Expert Group on AI. For an insurance-specific document, the one prepared by EIOPA’s Expert Group in Digital Ethics, Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in The European Insurance Sector, is a great aid (disclaimer: I was a member of EIOPA’s Expert Group so, in my extremely biased view, its document is a fantastic resource!).
The bad news is that there is no shortcut or magic word that will make your Board feel instantly confident: you are going to have to do some work. The good news is that the road map will feel very familiar. I outline one below without getting into too much detail; every item could be expanded, but let us focus on the road map for now.
Generally speaking, you want to show that your work is technically sound and that it has gone through a robust governance process, one that ensures adequate challenge, risk management considerations, documentary evidence supporting assumptions and decisions, and so on.
Solvency II provides a useful framework that insurers can leverage, with the added advantage of being familiar ground for key stakeholders. It means writing policies, setting up committees, engaging your three lines of defence model, and so on.
You need to consider how you will provide AI expertise to your Board if it is not there yet. You can do this by leveraging in-house expertise or by engaging a third-party specialist provider when the need arises. As the impact of AI on the business grows, you should rely less on consultants.
You will also need to educate all Board members in a manner relevant to their responsibilities. They do not need to become experts in the mathematics of AI, but they should understand its risks, governance, ethical considerations, and so on. Having done this before for actuarial work has paved the way, and it should make the task less daunting.
Next, you should write an AI policy. Some experts suggest a combined AI and Ethics policy instead. You can also have a separate Ethics policy, or ensure that all relevant policies (not just AI) address ethics. I personally prefer to keep ethics separate from AI, so either a standalone policy or coverage in other policies. If you are tempted to think that ethics is optional, it is not: AI will get you in trouble faster than you think if you do not pay attention to it. Do it because it is the right thing to do, but if you need additional encouragement, let staying out of trouble be it.
When working on your AI policy, do not forget AI systems that you use but do not develop yourself, such as those from third-party providers. It is also worth considering the AI tools that employees use for work on their own initiative, such as for summarising documents, generating minutes, and the like.
Actuarial work has many similarities with AI work, and the insurance industry has benefited from the best practice that the actuarial profession has developed over decades. There is no organised data science profession (yet), so you should start developing your company’s own corpus of AI best practices. The two documents mentioned at the beginning can help in this regard, but you may also want to look for something more technical. Encourage your AI team to participate in technical user groups, organised bodies, etc., to ensure continuous learning and that they remain up to date.
Depending on the scale and complexity of your AI work, you should also consider setting up an AI Committee. You could also place an AI expert in any other committee where this work is involved (for example, pricing or reserving). Individuals who are not AI experts should sit on that committee too: stakeholders, consumers of its output, those providing data, etc.
Similarly, you may think of setting up an Ethics Committee if appropriate to your company’s organisation. Make sure to document its decisions and actions so that you can prove it is effective. If a development is cancelled due to ethical concerns, record it in the minutes so you can demonstrate that the committee works and is not just for show.
If you do not have a dedicated Ethics Committee, ethics should be a consideration for your other committees too. Diversity is always a strength, but even more so when it comes to ethics. If you do set up an Ethics Committee, ensure diversity amongst its members, not just in gender but in socioeconomic background, age, etc. You want people who experience life differently, so they can inform how your actions may impact different groups of people.
Engage your Compliance, Risk, and Internal Audit teams on the topic. Compliance should be aware of AI-specific legislation, but also that non-specific legislation impacts AI work too (for example, the Irish Consumer Protection Code, the EU’s Digital Operational Resilience Act, etc.).
In relation to risk management, AI and its ethical challenges are another risk, which you can manage by assessing probability of occurrence and severity. This assessment should also drive governance requirements, lowering them for work that cannot cause much harm. Bear in mind that the impact on your business will be assessed under very different metrics than the impact on policyholders and the wider public: you may have a greater tolerance for things going wrong than the public has, so adjust your governance requirements accordingly.
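As a rough illustration, the probability-and-severity approach above can be sketched as a simple risk matrix that maps each AI use case to a governance intensity. The scales, thresholds, and tier descriptions below are hypothetical assumptions for the sketch, not prescribed values; any real calibration would come from your own risk appetite.

```python
# Hypothetical sketch: probability x severity scoring for AI use cases.
# All scales, thresholds, and tier names are illustrative assumptions.

def risk_score(probability: int, severity: int) -> int:
    """Combine 1-5 probability and 1-5 severity scores into one rating.

    Severity should be judged from the policyholder/public perspective,
    not only business impact, per the point made in the text above.
    """
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity

def governance_tier(score: int) -> str:
    """Map a risk score to an (assumed) governance intensity tier."""
    if score >= 15:
        return "full review: committee sign-off and detailed documentation"
    if score >= 8:
        return "standard review: second-line challenge, documented assumptions"
    return "light touch: team-level review and logging"

# Example: an internal document-summarising tool (lower severity)
# versus a pricing model affecting policyholder premiums (higher severity).
print(governance_tier(risk_score(3, 2)))
print(governance_tier(risk_score(3, 5)))
```

The design choice here is that governance effort scales with the combined score, echoing the idea of lowering requirements for work that cannot cause much harm.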
Finally, your business might require a culture adjustment. The way to stay out of trouble with AI is to develop trustworthy AI. This will not guarantee that things never go wrong (nothing can give you such a guarantee), but it will mitigate the impact of failures and soften the reaction from supervisors and society. You must move from a minimum-compliance mindset (if that is where you are) to one of actively seeking to do no harm and fostering trust. This could be a component of your Environmental, Social, and Governance strategy, ensuring that your AI work benefits society as well as your company.
And thus, you can use an existing framework to provide confidence in your AI work by demonstrating that adequate and effective oversight, challenge, and governance structures are in place.