
Modern technologies are increasingly used in the insurance industry. Cloud computing is becoming more common, the use of machine learning and artificial intelligence is spreading steadily, and insurers’ demand for data scientists is on the rise. Not all markets show the same level of adoption, but they are all moving in this direction at their own pace.
Open source machine learning frameworks and inexpensive technology at everyone’s disposal are fuelling the rapid adoption of machine learning in insurance.
Success stories and inexpensive technology are making this possible. Artificial intelligence and advanced machine learning techniques are within easy reach: popular data science languages such as R and Python offer a wealth of relevant open source libraries.
Deep learning frameworks such as Theano, Keras and TensorFlow are also open source and free for anyone to use, and cloud computing is cheap and accessible to all thanks to Amazon Web Services, Google Cloud and the like.
In addition, the current shortage of data science talent has already encouraged a myriad of Master’s programmes and online courses that will add to the pool of data scientists. All of this is great news.
However, this democratisation of data analytics and technology has unintended consequences. When everyone has access to the most sophisticated models and the best cloud computing services, these are no longer a differentiating factor among competing organisations. In short: you will not have an edge over your competition simply because you make extensive use of deep learning, as your competitors can close that gap much more quickly than before.
Relevant data, not technology, will provide the next competitive advantage.
It is important to remember that even the most complex machine learning methods are still models, and the old adage holds true: garbage in, garbage out. We have already seen cases where flawed training data has led to flawed, and sometimes embarrassing, results, and more such cases are inevitable as the use of artificial intelligence becomes more common.
So, is it true then that data is the new oil? The Economist certainly thinks so. As with oil, the value of data lies in what you can do with it rather than in the raw material itself. It needs to be cleaned, processed, packaged and relevant to your needs before it becomes useful. Having large amounts of data does not mean you are sitting on a pile of gold unless the data is relevant to your goals and you are prepared to do something with it. Home insurance data might not be of much help if you are planning to sell motor insurance. Unstructured data from thousands of claim forms is pointless unless you make it machine readable and amenable to use with machine learning algorithms, as the sketch below illustrates.
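To make that last point concrete, here is a minimal sketch of one way free-text claim descriptions can be turned into a numeric representation that machine learning algorithms can work with, using Python and scikit-learn’s TfidfVectorizer. The claim texts and parameters are invented for illustration; real claim forms would first need to be extracted (for example via OCR) into plain text.

```python
# A minimal sketch: turning invented free-text claim descriptions into a
# numeric feature matrix that standard machine learning models can consume.
from sklearn.feature_extraction.text import TfidfVectorizer

claims = [
    "Water damage to kitchen ceiling after a pipe burst",
    "Rear-end collision at traffic lights, minor whiplash reported",
    "Bicycle stolen from locked shed, window forced open",
]

# TF-IDF weights each word by how informative it is across the corpus,
# producing a sparse matrix with one row per claim.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(claims)

print(X.shape)                                  # (claims, vocabulary size)
print(vectorizer.get_feature_names_out()[:10])  # sample of extracted terms
```

The specific technique matters less than the step itself: whichever representation you choose, this kind of preparation is what turns a pile of forms into something a model can learn from.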
Access to relevant data will be the competitive advantage in the near future, and organisations would do well to start preparing now. There is much publicly available data whose potential is still untapped (the Irish Census, Eurostat, etc.), but work is required before this data can be helpful. Preparing it takes time, and so does learning its potential and limitations. You may want to start now.
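As a rough illustration of the preparation involved, the sketch below assumes a CSV extract downloaded from a public source; the file name and column names are placeholders rather than the real structure of any Census or Eurostat release.

```python
# A hypothetical sketch of cleaning a public-data extract with pandas.
# "census_extract.csv", "region" and "population" are placeholder names.
import pandas as pd

df = pd.read_csv("census_extract.csv")

# Typical first steps before the data is usable:
df.columns = df.columns.str.strip().str.lower()                      # tidy headers
df["population"] = pd.to_numeric(df["population"], errors="coerce")  # fix types
df = df.dropna(subset=["population"])                                # drop unusable rows
df["region"] = df["region"].str.strip().str.title()                  # consistent labels

# Aggregate to the level relevant to the business question.
by_region = df.groupby("region", as_index=False)["population"].sum()
print(by_region.head())
```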
The computer games industry is a good example of where this has already happened. Valve’s Steam still enjoys a dominant position as a seller of PC games. Selling computer games is a largely digital business that generates significant amounts of valuable data for developers and distributors, and Steam collects this data and uses it to gain insights.
That industry has seen many attempts to replicate this model. GOG has succeeded in finding its place, but other initiatives such as Electronic Arts’ Origin are still far from being serious competitors. However, Origin is a strategic development for Electronic Arts, as it enables the company to gather its own data.
Everyone knows that data is important. Customers and regulators too.
We have all heard how important data is, and everyone is prepared to act on it. Customers are becoming more aware of the data they provide, and regulators are concerned about how companies will use it. Financial services are not escaping this trend.
The insurance industry in particular suffers from a poor reputation among customers, so insurers will find that acquiring more data is not as easy as it is for other companies. Policyholders do not trust insurers and are reluctant to share data with them, as they believe insurance companies will use it against them; yet they happily share a great deal with companies they trust, such as Google or Facebook. Insurers will have to find a way to address this issue.
In addition, data protection legislation is evolving to meet the new challenges that data analytics and artificial intelligence bring. The GDPR is a very important development, but we have already seen other initiatives to regulate access to data and the use insurers make of it.
The rise of the data broker?
Will the need for relevant data lead to the creation of data brokers? It might. Alternatively, companies with large amounts of data might commercialise the insights gained from that data rather than sell the data itself. Companies that have focused on gathering industry-relevant data will be well positioned. Google is probably the first name that comes to mind, but each market is likely to have its own dominant player, whether that looks like LexisNexis in insurance or Crème Global in the food and cosmetics industries.
Conclusion
Talent, technology and the adoption of machine learning will continue to be important, but they will stop being a competitive advantage in the next few years. Instead, the strategic use of machine learning and access to well-maintained, relevant data will be the key to winning.
Image source: Pixabay