
On AI, the talent crisis, and workplace culture

I have recently spoken with business leaders who are very excited about the productivity gains they expect from GenAI.

And yet, from what I see, their employees do not generally show the same enthusiasm. The way they see it, they will be generating more money for the business while working the same number of hours for the same salary. Someone is bound to be very disappointed.

The expectation of redundancies due to AI is common. You might think this would motivate employees to work harder or fully adopt AI to keep their jobs; instead, it is driving many to consider a career change, moving jobs, or starting side gigs. There is a feeling of inevitability to it, as if nothing they do will make a difference.

And I am not the only one hearing this: 70% of Gen Z are either freelancing or planning to do so soon. Read that again and let it sink in. A not insignificant number of Millennials are in the same boat too. Now think about the existing talent crisis, and make a prediction for how that is going to go in the near future.

Job dissatisfaction has been growing non-stop for the last ten years, with no sign of slowing down. Ten years ago I barely knew any actuaries who were unhappy with their jobs. Sadly, it is not like that anymore. Over the last couple of years many actuaries I know have expressed their unhappiness with actuarial jobs in general, not just with their current role. And it is not only actuaries: the accounting profession is facing significant challenges too (in case you thought it was me, spreading unhappiness amongst all them innocent actuaries!).

I doubt any government initiative will make a difference to what seems to boil down to dissatisfaction with working conditions, being disillusioned with one’s career, and a general lack of trust in employers, particularly amongst younger people.

A few years ago, a senior leader expressed his frustration at his employees’ reluctance to return to the office post-Covid. He could not understand why. According to him, they had a great office culture. I pointed out that that culture worked for him, but was clearly not working for everyone. I suggested that he should acknowledge it and try to fix what was not working rather than force people to return to work in an environment they did not enjoy. 

It went down like a lead balloon. And I have learned my lesson: whenever I hear a similar story I smile, I listen politely, and I say nothing. I am freelancing now; I have no skin in the game. It is, bluntly, not my problem anymore. But businesses that ignore this will continue to struggle to attract and retain happy, engaged, and productive staff. A big shift in workplace culture is required.

In my early 20s there was huge competition for jobs. There were many candidates of my age so you had to accept things that you did not like in order to get and keep a job. Demographics in all developed countries show that there are fewer candidates now of that age cohort (and no hope of reversing that trend for many decades), but demand is still considerable, so they have more options.

Check the link about the shortage of accountants again, and I am sure you have also read of similar issues in the hospitality sector. Some industries are in trouble, as younger people show little interest in joining them for a variety of reasons. Think too of how many times you have heard (I know I have) negative comments about the insurance industry, and nothing changed because people needed jobs and insurance was stable. Well, maybe there is your talent crisis in a nutshell. Either companies change their culture and their relationship with their employees, or the talent crisis will be here for a long time.

Working with the Board on AI

How to help your Board have confidence in your AI work has been the subject of several recent conversations. At a high level, the goal is to develop trustworthy AI, which was very well covered in Ethics Guidelines for Trustworthy AI, by the EU High-Level Expert Group on AI. For an insurance-specific document, the one prepared by EIOPA’s Expert Group on Digital Ethics, Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector, is a great aid (disclaimer: I was a member of EIOPA’s Expert Group so, in my extremely biased view, its document is a fantastic resource!).

The bad news is that there is no shortcut or magic word you can say to your Board to make them feel instantly confident; you are going to have to do some work. The good news is that the road map will feel very familiar. I will make a suggestion below without getting into too much detail. We could expand every item, but let us focus on the road map for now.

Generally speaking, you want to show that your work is technically sound and that it has gone through a robust governance process, one that ensures there is adequate challenge, risk management consideration, documentary evidence supporting assumptions and decisions, etc.

Solvency II provides a useful framework that insurers can leverage, with the advantage of being familiar ground for key stakeholders too. It means writing policies, setting up committees, engaging your three lines of defence model, and so on.

You need to consider how you will provide AI expertise to your Board if it is not already there. You can do this by leveraging in-house expertise or by engaging a third-party specialist provider when the need arises. As the impact of AI on the business grows, you should rely less on consultants.

You will also need to educate all Board members in a manner that is relevant to their responsibilities. They do not need to become experts in the mathematics of AI, but they should know about its risks, governance, ethical considerations, etc. Having done this for actuarial work before has paved the way, and it should make the task less daunting.

Next, you should write an AI policy. Some experts suggest a combined AI and Ethics policy instead. You can also have a separate Ethics policy, or ensure that all relevant policies (not just AI) cover ethics. I personally prefer to separate ethics from AI, so either a standalone policy or inclusion in other policies. If you are tempted to think that ethics is optional, it is not. AI will get you in trouble faster than you think if you do not pay attention to this. I would say, do it because it is the right thing to do; but if you need additional encouragement, let staying out of trouble be it.

When working on your AI policy, do not forget to consider AI systems that you use but do not develop yourself, such as those from third-party providers. It would also be a good idea to think about the AI tools your employees use for work on their own initiative, such as for summarising documents, generating minutes, etc.

Actuarial work has many similarities with AI work, and the insurance industry has benefited from the best practice that the actuarial profession has developed over decades. There is no organised data science profession (yet), so you should start developing your company’s own corpus of AI best practices. The two documents I mentioned at the beginning can be helpful in this regard, but you may also want to search for something more technical. Encourage your AI team to participate in technical user groups, organised bodies, etc., to ensure continuous learning and that they remain up to date.

Depending on the scale and complexity of your AI work, you should also consider setting up an AI Committee. You could also place an AI expert in any other committee where this work is involved (for example, pricing or reserving). Individuals who are not AI experts should sit on the AI Committee too: stakeholders, consumers of its output, those providing data, etc.

Similarly, you may think of setting up an Ethics Committee if you consider it appropriate to your company’s organisation. Make sure to document decisions and actions so that you can prove that it is effective. If a development is cancelled due to ethical concerns, record it in the minutes to be able to demonstrate that the committee works and is not just for show.

If you do not have a dedicated Ethics Committee, ethics should be a consideration for other committees too. Diversity is always a strength, but even more so when it comes to ethics. If you do set up an Ethics Committee, ensure there is diversity amongst its members, not just in gender but also in socioeconomic background, age, etc. You want people who experience life differently so they can inform how your actions can impact different groups of people.

Engage your Compliance, Risk, and Internal Audit teams with the topic. Compliance should be aware of AI-specific legislation, but also that non-specific legislation has an impact on AI work too (for example, the Irish Consumer Protection Code, the EU’s Digital Operational Resilience Act, etc.).

In relation to risk management, AI and its ethical challenges are another risk, which you can manage by looking at probability of occurrence and severity. This should also drive the governance requirements, lowering them for work that cannot cause much harm. Bear in mind that the impact on your business will be assessed under very different metrics than the impact on policyholders and the wider public. You may have a greater tolerance for things going wrong than the public has, so adjust your governance requirements accordingly.
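To make the probability-and-severity idea concrete, here is a minimal sketch of a classic risk-matrix approach mapped to governance tiers. The 1–5 scales, the thresholds, and the tier names are purely illustrative assumptions, not taken from any regulation or from the documents cited above; your own risk function would calibrate them to your business and to the potential harm to policyholders.

```python
# Illustrative sketch only: scores, thresholds, and tiers are
# hypothetical assumptions, not from any standard or regulation.
def governance_tier(probability: int, severity: int) -> str:
    """Map an AI use case to a governance tier.

    probability and severity are each scored 1 (low) to 5 (high);
    the risk score is their product, as in a classic risk matrix.
    """
    risk = probability * severity
    if risk >= 15:
        return "high"    # e.g. full committee review, ethics sign-off
    if risk >= 6:
        return "medium"  # e.g. documented validation and challenge
    return "low"         # e.g. lightweight peer review

# An internal document-summarisation tool vs. an automated
# claims-decision model would land in very different tiers.
print(governance_tier(2, 2))  # low
print(governance_tier(3, 5))  # high
```

The point of even a toy function like this is that the governance requirement is decided by a documented rule rather than case-by-case judgment, which is itself something you can show to the Board.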

Finally, your business might require a culture adjustment. The way to stay out of trouble with AI is by developing trustworthy AI. This will not guarantee that things never go wrong (nothing can give you such a guarantee), but it will mitigate their impact and the reaction from supervisors and society. You must move from a minimum compliance mindset (if that is where you are) to one of actively seeking to do no harm, and to fostering trust. This could be a component of your Environmental, Social, and Governance strategy, to ensure that your AI work benefits society as well as your company.

Thus, you can use an existing framework to provide confidence in your AI work by demonstrating that adequate and effective oversight, challenge, and governance structures are in place.

On Technical Expertise

As I mentioned in previous posts, you do not need to be an expert in AI to embed it in your insurance company, but you must have access to that expertise. And the closer it sits to the CEO, the better. Call it Chief AI Officer, Chief Analytics Officer, etc. However, the position is not about mathematical or coding expertise; it is about managing expectations, leading the team, executing projects, and providing strategic input.

Avoid mixing data protection and data science in the same role. The skill requirements are different and it will not work. In addition, most candidates will be interested in only one of those topics, so if the same person looks after both, one will be neglected.

A common mistake is to require great technical and coding skills from the top AI job. There should be a team for that. That person’s job is strategy and management. It is the team that needs to be good with Python, machine learning, etc. Do not expect your Chief AI Officer to know the most about all aspects of AI in your company, that is not their contribution.

While sound mathematical and coding skills are important, the world of AI moves fast; prioritising intellectual curiosity in your team over coding mastery is a good long-term approach.

Beware of skill shortcuts. For example, most data scientists can help with data engineering, but it might not be their strength, or what makes them happy. You get better results by using people with the right skills for each challenge, and it will make a difference in the long run. In addition, given the high demand for data scientists, they will not stay with you for long if they are doing work they do not like.

Developing a career path for your data science team will help too. What career options do they have in your company? It will show that you take them seriously, and it will give them an understanding of what skills they have to develop to grow with you. Potential hires will find it attractive too.

Set up a budget for your team’s ongoing education, and make sure they spend time reading papers, going to talks, and mingling with peers. That is how they get better at what they do.

As a non-technical leader, you need a team that you trust, and you need to understand what is hard, what takes a long time, what “long” actually means (weeks, months, years?), what happens if something goes wrong, and its implications. Also bear in mind that all AI and machine learning projects have a highly experimental component. There is no predefined solution to your problem, so there will be several attempts until you get it right. Because of this, estimating how long a project will take can be very challenging.

On strategy, goals, and AI

Artificial intelligence does not have to feature in your business strategy to make an impact. It is one of the tools your company has to help it meet its goals. Good old insurance goals such as “more revenue here”, “lower expenses there”, “increase market share by x%”, etc. are perfectly fine. In fact, measurable goals like those are well suited, as they make it easier to estimate the impact your projects have on those metrics.

After you communicate your goals to your teams, they must come up with relevant projects. This requires expertise in a number of areas (AI / machine learning, operations, finance, actuarial, sales, legal, etc.), so cross-team collaboration is necessary. Suggesting feasible projects takes time, and learning which ones show promise takes time too, so bear that in mind and do not expect quick miracles. Your people will then have to make them work, which, again, takes time. How your teams generate these projects is a conversation for another day.

Something I think is very important is keeping your focus. Call it commitment if you prefer. You should ensure that your AI projects are always aligned with your strategy and contribute to your goals. Everyone involved must have a clear picture of how that is going to happen, that is, how a project is adding value and contributing to the strategy. Once your teams understand what you are trying to achieve, they will be able to come up with innovative solutions of their own. Find that focus and commit to it. It will also help you prioritise projects according to their contribution.

Knowing what to avoid is as important as knowing what to focus on. Distractions take time and resources; learning early on that something is not worth pursuing should not be seen as failure, but as progress that saves you time and money. It is a necessary part of the journey to success. If you run this iteration faster than your competitors, you will have a clear advantage in the time and cost of execution. In my experience, the things you say no to often contribute more to your success than the things you say yes to. Remember: focus, avoid distractions.

Analytics Breakfasts for Financial Services


The next analytics breakfast will be at 8am on 8th August at Bank of Ireland in Grand Canal Square. This time I will speak briefly about using data analytics for operational risk management. In the past, we have discussed interest rate modelling and customer lifetime value, among other topics.

Sign up and join us if you find this interesting and would like to network with like-minded individuals.

http://doodle.com/poll/q2widx7phnc42xn9

Image: Mike Kunz / Pixabay