Category: Non-Technical

Cancellation Insurance?

Last week I had the honour of delivering the first talk at the first InsurTech MeetUp in Dublin.  There was far more interest than anticipated and we were overwhelmed by the event's success.  It has set the bar very high and we hope the next events continue to grow the local InsurTech community.

My presentation proposed a cancellation insurance product to help airlines meet the costs of flight disruption and provide assistance services to stranded passengers.


The Shape of Things to Come


Modern technologies are increasingly used in the insurance industry.  Cloud computing is becoming more common, the use of machine learning and artificial intelligence is spreading at a steady rate and insurers’ demand for data scientists is on the rise.  Not all markets show the same levels of adoption but they are all moving in this direction at their own speed.

Open source machine learning frameworks and inexpensive technology at everyone’s disposal are fuelling the rapid adoption of machine learning in insurance.

Success stories and inexpensive technology are making this possible.  There is easy access to artificial intelligence and advanced machine learning techniques, and popular data science languages such as R and Python have many relevant open source libraries.

Deep learning frameworks such as Theano, Keras or TensorFlow are also open source and free for anyone to use.  Cloud computing is cheap and accessible to all thanks to Amazon Web Services, Google Cloud, etc.

In addition, the current lack of data science talent has already encouraged a myriad of Masters programmes and online learning courses that will add to the pool of data scientists.  All of this is great news.

However, this democratization of data analytics and technology has unintended consequences.  When everyone has access to the most sophisticated models and best cloud computing services, these are no longer a differentiating factor among competing organisations.  In short: you will not have an edge over your competition because you make extensive use of deep learning, as your competitors can close this gap much more quickly than before.

Relevant data, not technology, will provide the next competitive advantage.

It is important to remember that even the most complex machine learning methods are still models, and the old adage holds true: garbage in, garbage out.  We have already seen cases where flawed training data has led to flawed, sometimes embarrassing, results.  It is inevitable that we will see more as the use of artificial intelligence becomes more common.

So, is it true then that data is the new oil?  The Economist certainly thinks so.  Just like oil, the importance of data lies in what you can do with it rather than in the raw material itself.  It needs to be cleaned, processed, packaged and relevant to your needs to be useful.  You are not sitting on a pile of gold simply because you have large amounts of data, unless you are prepared to do something with it and it is relevant to your goals.  Home insurance data might not be of much help if you are planning to sell motor insurance.  Unstructured data from thousands of claim forms is pointless unless you make it machine readable and amenable to machine learning algorithms.

Access to relevant data will be the competitive advantage in the near future, and organisations would do well to start preparing now.  There is much publicly available data whose potential is still untapped (the Irish Census, Eurostat, etc.), but work is required before this data can be helpful, and learning its potential and limitations will take time too.

The computer games industry is a good example where this has already happened. Valve’s Steam still enjoys a dominant position as a seller of PC games.  Selling computer games is a largely digital business that generates significant amounts of valuable data for developers and distributors.  Steam collects this data and uses it to gain insights.

That industry has seen many attempts to replicate this model.  GOG has succeeded in finding its place, but other initiatives such as Electronic Arts’ Origin are still far from being serious competitors.  However, Origin is a strategic development for Electronic Arts, as it enables the company to gather its own data.

Everyone knows that data is important. Customers and regulators too.

We have all heard how important data is and everyone is prepared to act on it. Customers are becoming more aware of the data they provide and regulators are concerned with the uses companies will make of data.  Financial services are not escaping this trend.

The insurance industry in particular suffers from a poor reputation among customers, so insurers will find that acquiring more data is not as easy as it is for other companies.  Policyholders do not trust insurers and are reluctant to share data with them, believing insurance companies will use it against them.  Yet they happily share much more with companies they trust, such as Google or Facebook.  Insurers will have to find a way to address this issue.

In addition, data protection legislation is evolving to face the new challenges that data analytics and artificial intelligence bring.  The GDPR is a very important development but we have already seen other initiatives to regulate access to data and the use insurers make of it.

The rise of the data broker?

Will the need for relevant data lead to the creation of data brokers?  It might happen.  Alternatively, companies with large amounts of data might commercialise the insights gained from that data rather than sell the data itself.  Companies that have focused on gathering industry-relevant data will be well positioned.  Google is probably the first company to come to mind, but each market is likely to have its own dominant player; these may look like LexisNexis in insurance or Crème Global in the food and cosmetics industries.


Talent, technology and the adoption of machine learning will continue to be important, but they will stop being a competitive advantage in the next few years.  Instead, the strategic use of machine learning and access to well maintained, relevant data will be the key to winning.

Image source: Pixabay

Analytics Breakfasts for Financial Services


The next analytics breakfast will be at 8am on 8th August at Bank of Ireland in Grand Canal Square.  This time I will speak briefly about using data analytics for operational risk management.  In the past, we have discussed interest rate modelling and customer lifetime value, among other topics.

Sign up and join us if you find this interesting and would like to network with like-minded individuals.

Image: Mike Kunz / Pixabay

Insurance Fraud Analytics


Policyholders in the UK and Ireland have been hit by a series of large price increases in motor and home insurance policies.  There are many factors driving these hikes and insurance fraud is one of them.

According to Insurance Confidential, fraud costs the Irish insurance industry €200 million every year and motor insurance policies are, on average, €50 more expensive because of it[1].  There is a similar impact in the UK market with an estimated addition of £50 to every insurance policy.  UK insurers uncovered fraudulent claims in 2014 worth more than £1.3 billion[2].

The problem is hardly unique to the UK and Ireland.  Of the 1.2 million claims AXA handled in Spain in 2015, 1.3% were fraudulent[3].  That is close to the Spanish average of 1.37%, up from 1.12% in 2014.  Some lines of business have seen a 15% annual increase in fraud, with motor insurance responsible for 80% of all fraudulent claims discovered.

Fraud affects all business lines and every point where customers interact with insurers, from non-disclosure at the time of underwriting to fraudulent claims.  It can be organised, with several individuals planning and working together, or it can be opportunistic.  Fortunately, there are tools to help insurers detect both types of fraud.

A solution to non-disclosure

Non-disclosure is not always done in bad faith; customers may simply be unaware that certain details are relevant.  Nevertheless, whether accidental or intentional, statistical methods can identify cases where non-disclosure is likely.  Probabilistic Graphical Models and Bayesian Networks are useful tools that flag applications with statistically unusual combinations of answers, recommending a closer inspection.

It is impossible to investigate all applications, but these methods provide a shortlist of cases the system suspects of incomplete disclosure.  Data analytics does a good job of spotting potential inconsistencies in application forms.  It allows insurance companies to operate with more confidence, knowing that the risk they take on is appropriately assessed and priced.  It also provides a better customer experience by reducing the number of claims rejected due to non-disclosure.
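
As a purely illustrative sketch (not a full probabilistic graphical model), the same intuition can be captured by scoring how rarely pairs of answers co-occur in historical applications; the field names and figures below are invented:

```python
from collections import Counter
from itertools import combinations
from math import log

def train_pair_counts(applications):
    """Count how often each pair of (field, value) answers co-occurs
    in historical application data."""
    pair_counts = Counter()
    for app in applications:
        items = sorted(app.items())
        for a, b in combinations(items, 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def surprise_score(app, pair_counts, total):
    """Higher score = more statistically unusual combination of answers.
    Add-one smoothing avoids log(0) for pairs never seen before."""
    items = sorted(app.items())
    score = 0.0
    for a, b in combinations(items, 2):
        p = (pair_counts.get((a, b), 0) + 1) / (total + 1)
        score += -log(p)
    return score

# Toy historical data: occupation vs annual mileage on motor applications
history = [{"occupation": "office", "mileage": "low"}] * 80 + \
          [{"occupation": "courier", "mileage": "high"}] * 20
counts = train_pair_counts(history)

typical = {"occupation": "office", "mileage": "low"}
unusual = {"occupation": "courier", "mileage": "low"}   # rarely seen together
assert surprise_score(unusual, counts, len(history)) > \
       surprise_score(typical, counts, len(history))
```

Applications with the highest surprise scores would go to the top of the investigation shortlist; a real Bayesian network would additionally model the direction of dependence between answers.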

Those interested in how these statistical techniques work may want to check Applied AI’s blog[4] for a three part series on Probabilistic Graphical Models.  The last two entries[5][6] focus on insurance.

Preventing claims fraud

Regarding insurance claims, the vast majority are legitimate and insurers expect them to happen.  But a small proportion of fraudulent claims are generated by organised teams.  These have a disproportionate impact on insurers’ claims experience, causing huge unnecessary losses[7].  Because they involve several individuals, the best tool for the job is network analysis, often supported by graph databases.

Network analysis is a powerful weapon against organised fraud because it surfaces what fraudsters have in common[8]: for example, they live at the same address or close together, work together, or are friends on social networks[9].
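
A minimal sketch of the idea, using a plain Python graph rather than a dedicated graph database; the claim records and shared attributes are invented for illustration:

```python
from collections import defaultdict, deque

def find_rings(claims, min_size=3):
    """Link claimants that share an address or phone number and return
    connected components of at least `min_size` claimants."""
    by_attr = defaultdict(set)          # shared attribute -> claimant ids
    for c in claims:
        by_attr[("addr", c["address"])].add(c["id"])
        by_attr[("phone", c["phone"])].add(c["id"])

    adj = defaultdict(set)              # claimant graph: edge = shared attribute
    for ids in by_attr.values():
        for i in ids:
            adj[i] |= ids - {i}

    seen, rings = set(), []
    for start in adj:                   # BFS over each unvisited component
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        if len(comp) >= min_size:
            rings.append(sorted(comp))
    return rings

claims = [
    {"id": "A", "address": "1 Main St", "phone": "555-01"},
    {"id": "B", "address": "1 Main St", "phone": "555-02"},
    {"id": "C", "address": "9 Oak Rd",  "phone": "555-02"},  # shares phone with B
    {"id": "D", "address": "7 Elm Ave", "phone": "555-99"},  # unconnected
]
assert find_rings(claims) == [["A", "B", "C"]]
```

A graph database would let investigators extend the same idea to many more link types (bank accounts, repair garages, social connections) without rebuilding the adjacency logic by hand.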

There are cases, however, where policyholders see the opportunity to make a false or exaggerated claim and decide to act on it.  This is not organised crime but opportunistic fraud, and network analysis may not be useful in spotting it.  More common data analytics techniques can be used here, and the challenge posed by the rarity of these fraudulent claims can be overcome with some preparation.  Those interested in the mechanics may check Dr Peter Brennan’s work on the subject[10].
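
One common preparation step, shown here only as an illustration, is rebalancing the training data so that the rare fraudulent claims are not drowned out by legitimate ones; a minimal random-oversampling sketch with invented records:

```python
import random

def oversample_minority(rows, label_key="fraud", seed=42):
    """Random oversampling: duplicate minority-class rows until the
    training set is balanced, so a classifier is not swamped by the
    overwhelming majority of legitimate claims."""
    random.seed(seed)
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority)
             for _ in range(len(majority) - len(minority))]
    return rows + extra

# Toy claims set: 98 legitimate claims and 2 fraudulent ones
claims = [{"amount": 100 + i, "fraud": False} for i in range(98)] + \
         [{"amount": 5000, "fraud": True}, {"amount": 7000, "fraud": True}]
balanced = oversample_minority(claims)
assert sum(r["fraud"] for r in balanced) == sum(not r["fraud"] for r in balanced)
```

In practice one would oversample only the training split (never the test set), and alternatives such as class weights or synthetic sampling may work better for a given model.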

But what is really happening?

Researchers are investigating increasingly sophisticated techniques to detect fraud, some including image and voice recognition.  In addition, the Association of British Insurers announced plans to invest more than £11 million to fund an expansion of the Fraud Enforcement Department in the City of London Police[11].

Insurers are beginning to up their game in fraud analytics.  AIG has developed an in-house system that identifies twice as many fraudulent claims as the tools provided by the leading vendors in the fraud analytics market[12].

Irish health insurer VHI set up a fraud investigation unit that has recovered over €47 million since 2009.  Liam Downey, from VHI, said that “widespread use of data analytics had been very effective”[13].  Also in healthcare, the US company Optum provides data-analytics-based solutions to prevent fraud, waste and abuse and to enhance payment integrity[14], helping their clients make significant savings.

It is clear that the impact insurance fraud has on companies and customers is too large to ignore, and data analytics is, so far, the best solution we have.  Insurers are adopting data analytics as a very effective tool in the fight against fraud.


Image source: Steve Buissinne / Pixabay

Analytics Breakfasts for Financial Services

I organise breakfast meetings to discuss predictive analytics in financial services.

The aim is to facilitate contact between financial services professionals and predictive analytics experts.

It should be of interest if you work in banking, insurance or aviation finance and want to introduce data analytics in your organisation or expand beyond its current use.  Join us if you are short of ideas or do not know where to start.

We will share our experience, and discuss market updates, trends and best practice in an informal environment.

Bank of Ireland has kindly agreed to host these meetings at their innovation hub in Grand Canal Square.  Feel free to join us and sign up at the doodle below:

Image: Mike Kunz / Pixabay

The Role of Predictive Analytics in the Strategy of an Insurance Company


Predictive analytics is slowly making its way into the insurance sector but progress is different in every country.  Nevertheless, there is a global trend across the industry to acquire data analytics capability.

Insurers with an analytics team are already enjoying the benefits of superior underwriting, pricing, and fraud detection with only a few companies currently using analytics in more than one of these areas.

This is expected to change and extended use of analytics is on the horizon[1].  However, once all insurers have a data analytics capability, something extra will be required to keep a competitive advantage.  The way an insurer uses its analytical resources will determine whether the company has a lasting advantage.  Data analysts are in high demand[2], so insurance companies need to ensure their analysts focus on the tasks most important to the business.

In addition, an analytics team will perform better when there are specific objectives and a clear sense of purpose.

Data analytics – enabling the company’s strategy?

Instead of adopting a silo approach, an insurer should use data analysts to support its business plan.  Analytics should help the business achieve its strategic goals: increase market share, achieve a better product-per-customer ratio, reduce fraud, etc.

The company’s strategy contains the business’ most important goals, so the data analytics department should focus on achieving those goals.  This can only happen when an insurer fully embeds analytics in the organisation rather than leaving it to act as an isolated team with little contact with the rest of the company.

XL and AIG have already seen this as key to success and have taken steps accordingly[3].  Deloitte’s John Lucker considers an appropriate analytics strategy the first component of successful execution.  Christian Moe, Senior Analytical Consultant at SAS, also sees the alignment of analytics and business strategy as the first step to a successful implementation of data analytics[4].

But not all insurers are there yet: a 2015 Deloitte survey reported that fewer than 50% of US health companies (which include health insurers) had a clear analytics strategy[5].  US healthcare is often cited as undergoing a data analytics revolution[6], which puts the strategic gap in perspective.

There is a clear opportunity in the US Health Insurance market for companies that can formulate a good analytics strategy.  This applies to other insurers too.

Predictive analytics – driving the strategy?

Some argue that rather than supporting the company’s strategy, predictive analytics should dictate it.  In today’s digital, fast-paced world, the ability to identify new or changing trends and anticipate developments could be the key to success[7].

Oracle’s white paper ‘Driving Strategic Planning with Predictive Modelling’ already identified this in 2008[8].  The paper expects a shift in the focus of planning sessions after adopting predictive modelling: ‘…from debating arbitrary point estimates toward reaching consensus on the key underlying assumptions with the greatest impact on the results’.

Focusing the analytics team on helping the business achieve desired outcomes for those key assumptions is a great way to contribute to the success of the strategy and its execution.

Oracle considers that the change in planning focus is possible because predictive analytics enables a business to ‘identify and evaluate risk and uncertainty in strategic decisions’.  Could insurers benefit from using predictive analytics to drive their strategy?

To an extent, insurance companies are already doing this.  Claims projections, stress and scenario testing, asset and liability matching and other forms of predictive modelling are already embedded in insurance, with actuaries making excellent use of these techniques.

This is good news for insurers but there is a wider range of methods in predictive analytics that would be a useful addition to the traditional actuarial skill set.  There is also scope for application in non-actuarial parts of the business such as Sales or Claims Management.

Analytics, an independent team with a supporting role

A company’s strategy will focus on different areas, sometimes on several at the same time.  Because of this, it is important that the analytics team is not embedded in a team with another specific focus, such as Actuarial or Marketing.  Analysts need to be able to work with different business functions, understand how these operate and adapt as the business strategy evolves.

To be fully effective, an analytics team must be independent, ideally reporting to the CEO.  This ensures that it serves the company’s interests rather than those of a particular department.  A recent survey by Towers Watson found that conflicting priorities are the third most significant challenge US P&C insurers face, behind only the lack of available talent and difficulty in data capture[9].

An insurance company with an autonomous analytics team, staffed with both data analysts and business experts, will always be in a better position to achieve its desired business outcomes than one whose team depends on a specific business unit and counts only data analysts among its members.

A dynamic approach to analytics, aligning it with the company’s strategy, is a long-term winning combination that will provide the necessary competitive edge when data analytics is widespread across the industry.


Image source: Omer Yousief / Pixabay

From Solvency 2 to Machine Learning

Machine learning and predictive analytics are driving significant changes in financial services.  However, this is not limited to banks and insurers; regulators all over the world are embracing data analytics as a powerful tool to process ever-increasing amounts of data and extract useful insight to prioritise their actions.

One example is the US Financial Industry Regulatory Authority (FINRA), which has recruited a data analytics team to improve its ability to oversee the vast amount of data collected and, as FINRA’s Chief Risk Officer and Head of Strategy puts it: “…see things they couldn’t have seen or understood as well before.”[1]

The insurance industry lags behind banking in this respect.  Nevertheless, the recent adoption of Solvency 2 in the EU will quickly bring insurance up to speed, given the regular data submissions insurers must make to their supervisors.  These regulators will have to change their approach to supervision in order to cope with the massive amount of information they are set to receive.  In addition to quantitative data in the form of Quantitative Reporting Templates (QRTs), there are also narrative reports such as the Solvency and Financial Condition Report (SFCR) and the Regular Supervisory Report (RSR).

Regulators face the challenge of extracting insight from vast amounts of data

A 2014 Deloitte report[2] identifies the ability to extract analytical insights as one of the key challenges for banking supervisors.  Of special importance is the ability to design early warning mechanisms and predictive models that allow regulators to anticipate issues.  This would let regulators prevent rather than react, playing a more proactive role in heading off crises than before.

Given the similarities between Solvency 2 and Basel 3 (the regulatory regime for banking), the challenges banking supervisors face are also relevant for insurance regulators.

A combination of supervised and unsupervised machine learning techniques could help make sense of all the data insurance companies submit in their QRTs and other reports.  These techniques could include:

  • Pattern detection, to find relevant trends in the industry.
  • Predictive modelling, to identify insurers likely to get in trouble.
  • Anomaly or outlier detection, to spot companies deviating from the market norm or to detect potential cases of fraud or money laundering.
  • Clustering techniques, to group insurers in pools of similar entities and facilitate their comparison.
  • Correlation analysis with external factors such as interest rates, economic growth, etc. This is useful when building predictive models that try to anticipate how changes in the economic environment affect insurers.
  • Text mining techniques, to automatically explore narrative reports (SFCR and RSR) and prioritise supervisory work.
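
As a rough illustration of the outlier-detection point, a simple z-score screen over a peer group's solvency coverage ratios could surface companies deviating from the market norm; all names and figures below are invented, and real QRT data would have many more fields:

```python
from statistics import mean, stdev

def flag_outliers(returns, field, threshold=1.5):
    """Flag insurers whose reported figure deviates from the peer-group
    mean by more than `threshold` standard deviations.  In small peer
    groups the outlier itself inflates the standard deviation, so a
    modest threshold is used."""
    values = [r[field] for r in returns]
    mu, sigma = mean(values), stdev(values)
    return [r["name"] for r in returns
            if abs(r[field] - mu) / sigma > threshold]

# Toy QRT-style extract: solvency coverage ratios for a peer group
qrts = [
    {"name": "Insurer A", "scr_ratio": 1.65},
    {"name": "Insurer B", "scr_ratio": 1.72},
    {"name": "Insurer C", "scr_ratio": 1.58},
    {"name": "Insurer D", "scr_ratio": 1.70},
    {"name": "Insurer E", "scr_ratio": 0.95},   # well below peers
]
assert flag_outliers(qrts, "scr_ratio") == ["Insurer E"]
```

A supervisor would run this kind of screen across many indicators at once, and more robust statistics (median absolute deviation, isolation forests) would cope better with multiple simultaneous outliers.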

Investors can apply machine learning to explore insurers’ public regulatory returns

In addition to regulators, investors focused on the European insurance market may also benefit from using machine learning to make sense of the large amount of data that is becoming available.

Machine learning can help identify potential investments based on risk, solvency and other financial data included in the regulatory returns alongside the narrative reports European insurers must make public.  Investors could develop predictive models to estimate the future profitability and solvency of insurance companies in a range of scenarios.

For example, BlackRock is an investment firm betting on data analytics to gain a competitive edge[3] in its stock picking.  It is only a matter of time before investment firms take full advantage of the potential that machine learning and public Solvency 2 reporting represent for large-scale investment analysis.

Machine learning techniques can help investors and insurance supervisors process large amounts of data, generating actionable insight that could drive investment decisions or prioritise regulatory action.  This can be a game changer for regulators, who would be able to predict issues at regulated entities and have a data driven approach to prioritising their work, making optimum use of scarce resources.  Investors, on the other hand, could benefit from better knowledge of the sector and the specific exposure to different risks each insurer has, outsmarting rival investment firms.


Image source: Pixabay

Data Analytics and Capital Management

Solvency 2 came into effect on 1st January 2016.  This regulatory framework plays a very important role in how European insurance companies are managed: risk management sits at the core of Solvency 2, which imposes capital requirements for the risks that affect insurers.  This article explains how insurance companies can use predictive analytics to improve risk management and reduce capital requirements under Solvency 2.

Data analytics for entity specific risk parameters

The calculation of capital requirements relies on risk parameters that calibrate the risk models.  Solvency 2 allows a range of options: a full internal model, a partial internal model, or a Standard Formula initially calibrated by EIOPA that accepts entity-specific parameters known as USPs (Undertaking Specific Parameters)[1].

Companies using entity-specific parameters, whether in full or partial internal models or by tailoring the Standard Formula with USPs, need to justify their use with their own data.  This is an area where predictive analytics can give a competitive advantage: the more information you can extract from your data, the more appropriate your risk parameters will be.  Furthermore, a better understanding of risk could reduce its volatility, decreasing capital requirements, as deviations from expected values would be smaller or less probable[2].

For example, predictive modelling provides complementary reserving methods that may perform better than traditional actuarial techniques[3], helping to reduce Reserve risk.  Markus Gesmann, Manager of Analysis at Lloyd’s, highlights the advantages of a Bayesian approach to communicating uncertainty with credibility intervals[4].  Given the complexity of some solvency models, anything making the message easier to understand is a welcome addition.
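
For illustration only, a minimal Bayesian update of a loss ratio with a Normal-Normal conjugate model produces exactly this kind of credibility interval; the prior and observed figures below are invented and a production model would be far richer:

```python
from math import sqrt

def posterior_loss_ratio(prior_mean, prior_sd, obs_mean, obs_sd, n):
    """Normal-Normal conjugate update: combine a prior view of the
    loss ratio with n observed years of experience.  Precision
    (1/variance) weights the prior against the data."""
    prior_prec = 1 / prior_sd**2
    data_prec = n / obs_sd**2
    post_var = 1 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * obs_mean)
    return post_mean, sqrt(post_var)

# Prior from the pricing basis; observed experience from 5 recent years
mean, sd = posterior_loss_ratio(prior_mean=0.70, prior_sd=0.10,
                                obs_mean=0.78, obs_sd=0.12, n=5)
low, high = mean - 1.96 * sd, mean + 1.96 * sd   # 95% credible interval
print(f"loss ratio: {mean:.3f}, 95% interval: [{low:.3f}, {high:.3f}]")
```

The posterior mean sits between the prior and the observed experience, pulled towards whichever has more precision, and the interval communicates the remaining uncertainty directly.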

The goal is not to replace traditional actuarial reserving practice but to consider additional methods that offer a different way of looking at the risk.  Most winning solutions at the data analytics website Kaggle[5] are ensembles of models.  Rarely is a single method good enough to cover all scenarios; a combination of models works better, in the same way that a team usually performs better than an individual.  However, there must be a balance between accuracy and practicality.

Better understanding of risk may lead to lower capital requirements

In a more specific example, the Standard Formula’s calibration for Premium and Reserve risk in Medical Expenses assumes a standard deviation of 5% for Reserve risk.  If predictive analytics helped reduce the standard deviation to 4%, the capital charge for Premium and Reserve risk could decrease by 10%[6], as large deviations from the best estimate of technical provisions would be smaller and less probable.  The capital requirement reflects the severity of potential adverse developments in reserving: the smaller the volatility, the smaller the capital requirement.
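
The arithmetic behind this illustration can be sketched as follows, assuming equal premium and reserve volume measures, a 0.5 correlation between the two risks and a charge of roughly three times the combined volatility (a simplification of the Standard Formula aggregation, not its full specification):

```python
from math import sqrt

def prem_res_capital(sigma_prem, sigma_res, v_prem, v_res, alpha=0.5):
    """Combined Premium & Reserve risk charge: aggregate the two
    standard deviations (weighted by volume) with correlation alpha,
    then apply a factor of roughly 3x to the combined volatility."""
    var = (sigma_prem**2 * v_prem**2
           + 2 * alpha * sigma_prem * sigma_res * v_prem * v_res
           + sigma_res**2 * v_res**2)
    return 3 * sqrt(var)

# Equal premium and reserve volume measures, purely illustrative
base = prem_res_capital(0.05, 0.05, v_prem=1.0, v_res=1.0)
improved = prem_res_capital(0.05, 0.04, v_prem=1.0, v_res=1.0)
saving = 1 - improved / base
print(f"capital saving: {saving:.1%}")   # close to 10% in this setup
```

With these assumed volumes the saving comes out near the 10% quoted above; actual results depend on each undertaking's own figures, as the footnote notes.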

A similar change in the standard deviation for Premium risk could reduce the capital charge by 15%, and the joint effect would be a 35% reduction in the capital requirement for Premium and Reserve risk.  EIOPA has rules defining what proportion of these savings insurers can recognise when using USPs[7].  Insurers relying on the Standard Formula may want to check the list of risks for which they can use USPs.

The list of Standard Formula parameters that can be replaced by USPs is short but represents the most significant risks for some businesses.  Entities using partial or full internal models may enjoy greater potential to leverage predictive analytics.  Risk Dynamics, recently acquired by McKinsey, already provides validation services to financial institutions whose risk models are based on predictive analytics[8].

Increasing an insurer’s capital efficiency is not usually front of mind when discussing data analytics, but it is a secondary benefit worth considering.

Predictive analytics can help insurers understand and manage their risks better, with a clear impact to their bottom line.  With some extra work, insurers can enjoy the benefits of lower risk uncertainty and increased capital efficiency.   This would enable the company to be more profitable, provide more affordable products and increase dividends to shareholders.


Image source: Pixabay

[1] See EIOPA’s advice on USPs, on page 18:

[2] An application in Long Term Care:

[3] Bayesian Analysis in Forecasting Insurance Loss Payments:

[4] See Markus’ blog, “mages’ blog”


[6] Different undertakings could see different results depending on their specific figures. This is just an illustration.