Working with the Board on AI

How to help your Board have confidence in your AI work has been the subject of several recent conversations. At a high level, the goal is to develop trustworthy AI, which is very well covered in the Ethics Guidelines for Trustworthy AI by the EU High-Level Expert Group on AI. For an insurance-specific document, the one prepared by EIOPA's Expert Group in Digital Ethics, Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector, is a great aid (disclaimer: I was a member of EIOPA's Expert Group so, in my extremely biased view, its document is a fantastic resource!).

The bad news is that there is no shortcut or magic word that will make your Board feel instantly confident; you are going to have to do some work. The good news is that the road map will feel very familiar. I will sketch it below without getting into too much detail; we could expand every item, but let us focus on the road map for now.

Generally speaking, you want to show that your work is technically sound and that it has gone through a robust governance process that ensures adequate challenge, risk management considerations, documentary evidence supporting assumptions and decisions, and so on.

Solvency II provides a useful framework that insurers can leverage, with the advantage of being familiar ground for key stakeholders too. It means writing policies, setting up committees, engaging your three lines of defence model, and so on.

You need to consider how you will provide AI expertise to your Board if it is not already there. You can do this by leveraging in-house expertise or by engaging a third-party specialist provider when the need arises. As the impact of AI on the business grows, you should rely less on consultants.

You will also need to educate all Board members in a manner that is relevant to their responsibilities. They do not need to become experts in the mathematics of AI, but they should understand its risks, governance, ethical considerations, etc. Having done this before for actuarial work has paved the way, and it should make the task less daunting.

Next you should write an AI policy. Some experts suggest an AI and Ethics policy instead. You can also have a separate Ethics policy, or ensure that all relevant policies (not just AI) include ethics. I personally prefer to separate ethics from AI, so either a separate policy or inclusion in other policies. If you are tempted to think that ethics is optional, it is not: AI will get you in trouble faster than you think if you do not pay attention to this. I would say, do it because it is the right thing to do, but if you need additional encouragement, let staying out of trouble be the one.

When working on your AI policy, do not forget to consider AI systems that you use but do not develop yourself, such as those from third-party providers. It is also a good idea to think about the AI tools that your employees use for work purposes on their own initiative, such as for summarising documents, generating minutes, etc.

Actuarial work has many similarities with AI work, and the insurance industry has benefited from the best practice that the actuarial profession has developed over decades. There is no organised data science profession (yet), so you should start developing your company's own corpus of best practices for AI. The two documents I mentioned at the beginning can be helpful in this regard, but you may also want to search for something more technical. Encourage your AI team to participate in technical user groups, organised bodies, etc. to ensure continuous learning and that they remain up to date.

Depending on the scale and complexity of your AI work, you should also consider setting up an AI Committee. You could also place an AI expert in any other committees where this work is involved (for example, pricing or reserving). Individuals who are not AI experts should sit on that committee too: stakeholders, consumers of its output, those providing data, etc.

Similarly, you may think of setting up an Ethics Committee if you deem it appropriate to your company's organisation. Make sure to document decisions and actions so that you can prove that it is effective. If a development is cancelled due to ethical concerns, document it in the minutes to be able to demonstrate that the committee works and is not just for show.

If you do not have a dedicated Ethics Committee, ethics should be a consideration for other committees too. Diversity is always a strength, but even more so when it comes to ethics. If you do set up an Ethics Committee, ensure there is diversity amongst its members: not just gender, but socioeconomic background, age, etc. You want people who experience life differently so they can inform how your actions can impact different groups of people.

Engage your Compliance, Risk, and Internal Audit teams with the topic. Compliance should be aware of AI-specific legislation, but also that non-specific legislation has an impact on AI work (for example, the Irish Consumer Protection Code, the EU's Digital Operational Resilience Act, etc.).

In relation to risk management, AI and its ethical challenges are another risk, which you can manage by looking at probability of occurrence and severity. This should also drive the governance requirements, lowering them for work that cannot cause much harm. Bear in mind that the impact on your business will be assessed by very different metrics from the impact on policyholders and the wider public. You may have a greater tolerance for things going wrong than the public has, so adjust your governance requirements accordingly.
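To make the idea concrete, here is a minimal sketch of probability-times-severity risk tiering driving governance requirements. The scales, thresholds, and tier names are my own illustrative assumptions, not a prescribed standard; your risk function would calibrate its own.

```python
# Illustrative sketch only: scales (1-5), thresholds, and tier names
# are assumptions for this example, not an established standard.
def governance_tier(probability: int, severity: int) -> str:
    """Map a probability x severity score to a governance tier."""
    score = probability * severity  # both on a 1 (low) to 5 (high) scale
    if score >= 15:
        return "enhanced"  # e.g. ethics committee review, full documentation
    if score >= 6:
        return "standard"  # e.g. the regular model governance process
    return "light"         # low-harm work gets lighter-touch oversight

# A use case that is unlikely to fail but would harm policyholders badly
# still lands in a higher tier than one affecting only internal reporting.
print(governance_tier(2, 5))  # -> standard
print(governance_tier(1, 2))  # -> light
```

The point of the sketch is that severity towards policyholders, not just likelihood, pushes work into heavier governance, which reflects the asymmetry between your tolerance and the public's.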

Finally, your business might require a culture adjustment. The way to stay out of trouble with AI is by developing trustworthy AI. This will not guarantee that things never go wrong (nothing can give you such a guarantee), but it will mitigate their impact and the reaction from supervisors and society. You must move from a minimum compliance mindset (if that is where you are) to one of actively seeking to do no harm, and to fostering trust. This could be a component of your Environmental, Social, and Governance strategy, to ensure that your AI work benefits society as well as your company.

You can thus use an existing framework to provide confidence in your AI work, by demonstrating that adequate and effective oversight, challenge, and governance structures are in place.

On Technical Expertise

As I mentioned in previous posts, you do not need to be an expert in AI to embed it in your insurance company, but you must have access to that expertise. And the closer it is to the CEO, the better. Call it Chief AI Officer, Chief Analytics Officer, etc. However, the position is not about mathematical or coding expertise; it is about managing expectations, leading the team, executing projects, and providing strategic input.

Avoid mixing data protection and data science in the same role. The skill requirements are different and it will not work. In addition, most candidates will be interested in only one of those topics, so if the same person looks after both, one will be underdeveloped.

A common mistake is to require great technical and coding skills from the top AI job. There should be a team for that. That person's job is strategy and management; it is the team that needs to be good with Python, machine learning, etc. Do not expect your Chief AI Officer to know the most about every aspect of AI in your company; that is not their contribution.

While sound mathematical and coding skills are important, the world of AI moves fast, so prioritising intellectual curiosity in your team over coding mastery is a good long-term approach.

Beware of skill shortcuts. For example, most data scientists can help with data engineering, but it might not be their strength, or what makes them happy. You get better results by using people with the right skills for each challenge; it will make a difference in the long run. In addition, given the high demand for data scientists, they will not stay with you for long if they are doing work they do not like.

Developing a career path for your data science team will help too. What career options do they have in your company? It will show that you take them seriously, and it will give them an understanding of which skills they need to develop to grow with you. Potential hires will find it attractive too.

Set up a budget for your team's ongoing education, and make sure they spend time reading papers, going to talks, and mingling with peers. That is how they get better at what they do.

As a non-technical leader, you need a team that you trust, and you need to understand what is hard, what takes a long time, what "long" actually means (weeks, months, years?), what happens if something goes wrong, and its implications. Also bear in mind that all AI and machine learning projects have a highly experimental component: there is no predefined solution to your problem, so there will be several attempts until you get it right. Because of this, estimating how long a project will take can be very challenging.

On strategy, goals, and AI

Artificial intelligence does not have to feature in your business strategy to make an impact. It is one of the tools your company has to help meet its goals. Good old insurance goals such as "more revenue here", "lower expenses there", "increase market share by x%", etc. are perfectly fine. In fact, measurable goals such as these are well suited, as they make it easier to estimate the impact your projects have had on those metrics.

After you communicate your goals to your teams, they must come up with relevant projects. This requires expertise in a number of areas (AI / machine learning, operations, finance, actuarial, sales, legal, etc.), so cross-team collaboration is necessary. Suggesting feasible projects will take time, and learning which ones show promise will take time too, so bear that in mind and do not expect quick miracles. Your people will then have to make them work, which, again, takes time. How your teams generate these projects is a conversation for another day.

Something I think is very important is keeping your focus. Call it commitment if you prefer. You should ensure that your AI projects are always aligned with your strategy and contribute to your goals. Everyone involved must have a clear picture of how that is going to happen, that is, how a project is adding value and contributing to the strategy. Once your teams understand what you are trying to achieve, they will be able to come up with innovative solutions of their own. Find that focus and commit to it. It will also help you prioritise projects according to their contribution.

Knowing what to avoid is as important as knowing what you need to focus on. Distractions take time and resources, learning early on that something is not worth pursuing should not be seen as failure, but as progress that saves you time and money. It is a necessary part of the journey to success. If you do this iteration faster than your competitors, you will have a clear advantage in time and cost of execution. In my experience, saying no to things will often contribute more to your success than the things you say yes to. Remember: focus, avoid distractions.

AI for insurance leaders

After a six-year pause, I have finally returned to this blog. I wanted to write a series of posts offering advice to insurance leaders on how to use AI successfully in their companies. These are my own opinions, based on my experience, which means they are not meant to invalidate other people's views and approaches. In fact, there are usually multiple ways to achieve similar results, and I do not claim to know them all or to have the best one. This is just my contribution, in the hope that you find it useful.

The first thing worth mentioning is that you do not have to be an expert in artificial intelligence to lead an insurance company that makes the most of AI. However, you do need access to such expertise and to leverage it with good business acumen. Eventually you will learn what is possible, what is hard, and where your team's expertise begins and ends. Very valuable knowledge.

In my opinion, what you need to successfully apply AI can be grouped into four areas (in no particular order):

  • A business strategy with defined goals (measurable ones make it easier)
  • A data infrastructure that is fit for purpose 
  • A data science team
  • A governance and ethical framework

I will cover each area in more detail in future posts.

Then it is your role to communicate and implement the company’s vision, and to embed a culture that facilitates responsible use of AI. 

However, the issue with not being an AI expert yourself is that you will have to trust your experts. That trust is very important, and achieving it is not only your job; it is your data scientists' job too. They must understand that it requires honesty and transparency on both sides. I plan to cover some aspects of work culture in later posts too, but you should not underestimate its importance in the successful adoption of AI.

Cancellation Insurance?

Last week I had the honour of delivering the first talk of the first InsurTech MeetUp in Dublin.  There was much more interest than anticipated and we were overwhelmed by the success.  It has set the bar very high and we hope the next events continue to grow the local InsurTech community.

My presentation was a suggestion for cancellation insurance, to help airlines meet the costs of flight disruption and provide assistance service to stranded passengers.

The Shape of Things to Come

Modern technologies are increasingly used in the insurance industry.  Cloud computing is becoming more common, the use of machine learning and artificial intelligence is spreading at a steady rate and insurers’ demand for data scientists is on the rise.  Not all markets show the same levels of adoption but they are all moving in this direction at their own speed.

Open source machine learning frameworks and inexpensive technology at everyone’s disposal are fuelling the rapid adoption of machine learning in insurance.

Success stories and inexpensive technology are making this possible.  There is easy access to artificial intelligence and advanced machine learning techniques.  Popular data science languages such as R and Python have many relevant open source libraries.  

Deep learning frameworks such as Theano, Keras or TensorFlow are also open source and free for anyone to use.  Cloud computing is cheap and accessible to all thanks to Amazon Web Services, Google Cloud, etc.

In addition, the current lack of data science talent has already encouraged a myriad of Masters programmes and online learning courses that will add to the pool of data scientists.  All of this is great news.

However, this democratization of data analytics and technology has unintended consequences.  When everyone has access to the most sophisticated models and best cloud computing services, these are no longer a differentiating factor among competing organisations.  In short: you will not have an edge over your competition because you make extensive use of deep learning, as your competitors can close this gap much more quickly than before.

Relevant data, not technology, will provide the next competitive advantage.

It is important to remember that even the most complex machine learning methods are still models and the old adage holds true: garbage in, garbage out.  We have already seen cases where flawed training data has led to flawed results, sometimes embarrassing.  It is inevitable that we will see more cases as the use of artificial intelligence becomes more common.

So, is it true then that data is the new oil?  The Economist certainly thinks so.  Just like with oil, the importance of data lies in what you can do with it rather than in raw data itself.  It needs to be cleaned, processed, packaged and relevant to your needs to be useful.  Therefore, you are not sitting on a pile of gold simply because you have large amounts of data unless you are prepared to do something with it and it is relevant to your goals.  Home insurance data might not be of much help if you are planning to sell motor insurance.  Unstructured data from thousands of claim forms is pointless unless you make it machine readable and amenable to use with machine learning algorithms.

Access to relevant data will be the competitive advantage in the near future, and organisations would do well to start preparing now.  There is much publicly available data whose potential is still untapped (the Irish Census, Eurostat, etc.), but work is required before this data can be helpful.  This takes time, and learning the potential and limitations of the data will take time too.  You may want to start now.

The computer games industry is a good example of where this has already happened.  Valve's Steam still enjoys a dominant position as a seller of PC games.  Selling computer games is a largely digital business that generates significant amounts of valuable data for developers and distributors.  Steam collects this data and uses it to gain insights.

That industry has seen many attempts to replicate this model.  GOG has succeeded in finding its place but other initiatives such as Electronic Arts’ Origin are still far away from being a serious competitor.  However, Origin is a strategic development for Electronic Arts, as it enables the company to gather its own data.

Everyone knows that data is important. Customers and regulators too.

We have all heard how important data is and everyone is prepared to act on it. Customers are becoming more aware of the data they provide and regulators are concerned with the uses companies will make of data.  Financial services are not escaping this trend.

The insurance industry in particular suffers from a poor reputation among customers, so insurers will find that acquiring more data is not as easy as it is for other companies.  Policyholders do not trust insurers and are reluctant to share data with them, as they believe insurance companies will use it against them.  Yet they happily share much more with companies they trust, such as Google or Facebook.  Insurers will have to find a way to address this issue.

In addition, data protection legislation is evolving to face the new challenges that data analytics and artificial intelligence bring.  The GDPR is a very important development but we have already seen other initiatives to regulate access to data and the use insurers make of it.

The rise of the data broker?

Will the need for relevant data lead to the creation of data brokers?  It might happen. Alternatively, companies with large amounts of data might commercialise the insights gained from that data rather than sell the data itself.  Companies that have focused on gathering industry relevant data will be well positioned.  Google is probably the first company to come to mind but each market is likely to have its own dominant player and these may look like LexisNexis in insurance or Crème Global in the food and cosmetic industries.

Conclusion

Talent, technology, and the adoption of machine learning will continue to be important, but they will stop being a competitive advantage in the next few years.  Instead, the strategic use of machine learning and access to well-maintained, relevant data will be the key to winning.


Image source: Pixabay