Category: Solvency 2

From Solvency 2 to Machine Learning

Machine learning and predictive analytics are driving significant changes in financial services. However, this is not limited to banks and insurers; regulators all over the world are embracing data analytics as a powerful tool to process ever-increasing amounts of data and extract useful insight to prioritise their actions.

One example is the US Financial Industry Regulatory Authority (FINRA), which has recruited a data analytics team to improve its ability to oversee the vast amount of data collected and, as FINRA’s Chief Risk Officer and Head of Strategy puts it: “… see things they couldn’t have seen or understood as well before.”[1]

The insurance industry lags behind banking in this respect. Nevertheless, the recent adoption of Solvency 2 in the EU will quickly bring insurance up to speed, given the regular data submissions insurers must make to their supervisors. These regulators will have to change their approach to supervision in order to cope with the massive amount of information they are set to receive. In addition to quantitative data in the form of Quantitative Reporting Templates (QRTs), there are also narrative reports such as the Solvency and Financial Condition Report (SFCR) and the Regular Supervisory Report (RSR).

Regulators face the challenge of extracting insight from vast amounts of data

A 2014 Deloitte report[2] identifies the ability to extract analytical insights as one of the key challenges for banking supervisors. Especially important is the ability to design early-warning mechanisms and predictive models that allow regulators to anticipate issues. This would allow regulators to prevent rather than react, playing a more proactive role in crisis prevention than before.

Given the similarities between Solvency 2 and Basel 3 (the regulatory regime for banking), the challenges banking supervisors face are also relevant for insurance regulators.

A combination of supervised and unsupervised machine learning techniques could help make sense of all the data insurance companies submit in their QRTs and other reports.  These techniques could include:

  • Pattern detection, to find relevant trends in the industry.
  • Predictive modelling, to identify insurers likely to run into trouble.
  • Anomaly or outlier detection, to spot companies deviating from the market norm or to detect potential cases of fraud or money laundering.
  • Clustering techniques, to group insurers into pools of similar entities and facilitate their comparison.
  • Correlation analysis with external factors such as interest rates, economic growth, etc. This is useful when building predictive models that try to anticipate how changes in the economic environment affect insurers.
  • Text mining techniques, to automatically explore narrative reports (SFCR and RSR) and prioritise supervisory work.
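To make the anomaly-detection idea above concrete, here is a minimal sketch that flags insurers whose solvency coverage ratio sits far from the market norm. All figures are invented for illustration; a real supervisory screen would use many QRT fields and more robust statistics than a simple z-score.

```python
from statistics import mean, stdev

# Hypothetical SCR coverage ratios reported by ten insurers (invented figures)
ratios = {
    "Insurer A": 1.85, "Insurer B": 1.92, "Insurer C": 1.78,
    "Insurer D": 2.01, "Insurer E": 1.88, "Insurer F": 1.95,
    "Insurer G": 0.95,  # well below the rest: a candidate for closer review
    "Insurer H": 1.83, "Insurer I": 1.90, "Insurer J": 1.87,
}

mu = mean(ratios.values())
sigma = stdev(ratios.values())

# Flag any insurer more than two standard deviations from the market mean
outliers = [name for name, r in ratios.items() if abs(r - mu) > 2 * sigma]
print(outliers)  # → ['Insurer G']
```

Even this crude screen would direct a supervisor’s attention to the one entity whose ratio stands apart from its peers.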

Investors can apply machine learning to explore insurers’ public regulatory returns

In addition to regulators, investors focused on the European insurance market may also benefit from using machine learning to make sense of the large amount of data that is becoming available.

Machine learning can help identify potential investments based on risk, solvency and other financial data included in the regulatory returns alongside the narrative reports European insurers must make public.  Investors could develop predictive models to estimate the future profitability and solvency of insurance companies in a range of scenarios.

For example, BlackRock is an investment firm betting on data analytics to gain a competitive edge[3] in its stock picking. It is only a matter of time before investment firms take full advantage of the potential that machine learning and public Solvency 2 reporting offer for large-scale investment analysis.

Machine learning techniques can help investors and insurance supervisors process large amounts of data, generating actionable insight that could drive investment decisions or prioritise regulatory action. This can be a game changer for regulators, who would be able to predict issues at regulated entities and take a data-driven approach to prioritising their work, making optimal use of scarce resources. Investors, on the other hand, could benefit from better knowledge of the sector and of each insurer’s specific exposure to different risks, outsmarting rival investment firms.


Image source: Pixabay

Data Analytics and Capital Management

Solvency 2 came into effect on 1 January 2016. This regulatory framework plays a very important role in how European insurance companies are managed, with risk management at its core, imposing capital requirements for the risks that affect insurers. This article explains how insurance companies can use predictive analytics to improve risk management and reduce capital requirements under Solvency 2.

Data analytics for entity specific risk parameters

The calculation of capital requirements relies on risk parameters that calibrate the risk models. Solvency 2 allows a range of options: a full internal model, a partial internal model, or the Standard Formula, initially calibrated by EIOPA but accepting entity-specific parameters known as USPs (Undertaking Specific Parameters).[1]

Companies using entity-specific parameters, whether in full or partial internal models or by tailoring the Standard Formula with USPs, need to justify their use with their own data. This is an area where predictive analytics can provide a competitive advantage. The more information you can extract from your data, the more appropriate your risk parameters will be. Furthermore, a better understanding of risk could reduce its measured volatility, decreasing capital requirements, as deviations from expected values would be smaller or less probable[2].

For example, predictive modelling provides complementary reserving methods that may perform better than traditional actuarial techniques[3], helping to reduce Reserve risk. Markus Gesmann, Manager of Analysis at Lloyd’s, highlights the advantages of a Bayesian approach for communicating uncertainty with credibility intervals[4]. Given the complexity of some solvency models, anything that makes the message easier to understand is a welcome addition.
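The appeal of credibility intervals can be sketched with the simplest possible Bayesian update: a normal prior for an ultimate loss ratio combined with a noisy observation from the entity’s own data. All numbers are invented, and real reserving models (such as those Gesmann discusses) are far richer, but the precision-weighted logic is the same.

```python
from math import sqrt

# Prior belief about the ultimate loss ratio (invented for illustration)
prior_mean, prior_sd = 0.70, 0.10

# Observed loss ratio from the entity's own data, with sampling noise
obs_mean, obs_sd = 0.62, 0.05

# Normal-normal conjugate update: precision-weighted average of prior and data
prior_prec = 1 / prior_sd**2          # precision = 1 / variance
obs_prec = 1 / obs_sd**2
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec
post_sd = sqrt(1 / post_prec)

# 95% credible interval: the loss ratio lies here with 95% posterior probability
low, high = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"{post_mean:.3f} [{low:.3f}, {high:.3f}]")  # → 0.636 [0.548, 0.724]
```

The interval itself is the communication device: rather than a single point estimate, the board sees a range with an explicit probability attached.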

The goal is not to replace traditional actuarial reserving practice but to consider additional methods that offer a different way of looking at the risk. Most winning solutions at the data analytics website Kaggle[5] are ensembles of models. Rarely is a single method good enough to cover all scenarios; a combination of models works better, in the same way that a team usually performs better than an individual. However, there must be a balance between accuracy and practicality.
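The ensemble idea can be illustrated in a few lines: combine the estimates of several reserving methods instead of relying on any single one. The figures and the equal weighting below are entirely hypothetical; in practice weights might be chosen by back-testing each method.

```python
# Hypothetical ultimate-loss estimates (in £m) from three different methods
chain_ladder = 102.0
bornhuetter_ferguson = 98.5
bayesian_model = 100.3

# A simple ensemble: weight each method equally
estimates = [chain_ladder, bornhuetter_ferguson, bayesian_model]
ensemble = sum(estimates) / len(estimates)
print(round(ensemble, 1))  # → 100.3
```

Each method has scenarios where it fails; averaging dampens the error any single method would introduce, which is the same intuition behind the winning Kaggle ensembles.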

Better understanding of risk may lead to lower capital requirements

In a more specific example, the Standard Formula’s calibration for Premium and Reserve risk in Medical Expenses assumes a standard deviation of 5% for Reserve risk. If predictive analytics helped reduce that standard deviation to 4%, the capital charge for Premium and Reserve risk could decrease by 10%[6], as large deviations from the best estimate of technical provisions would be smaller and less probable. The capital requirement reflects the severity of potential adverse developments in reserving; thus the smaller the volatility, the smaller the capital requirement.

A similar change in the standard deviation for Premium risk could reduce the capital charge by 15%, and the joint effect would be a 35% reduction in the capital requirement for Premium and Reserve risk. EIOPA has rules defining what proportion of these savings insurers can recognise when using USPs[7]. Insurers relying on the Standard Formula may want to check the list of risks for which they can use USPs.
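The mechanics behind figures like these can be sketched with the Standard Formula’s aggregation of premium and reserve risk. The volumes below are invented, and the charge is simplified to the familiar 3·σ·V form with a 0.5 correlation between the two sub-risks; the exact Delegated Regulation formula has more detail, so the resulting percentages are illustrative only.

```python
from math import sqrt

def combined_sigma(sig_prem, sig_res, v_prem, v_res, corr=0.5):
    """Aggregate premium and reserve risk standard deviations,
    Standard Formula style, with correlation corr between them."""
    var = (sig_prem**2 * v_prem**2
           + 2 * corr * sig_prem * sig_res * v_prem * v_res
           + sig_res**2 * v_res**2)
    return sqrt(var) / (v_prem + v_res)

def charge(sig, volume):
    # Simplified capital charge proportional to 3 * sigma * volume
    return 3 * sig * volume

v_prem = v_res = 100.0  # invented, equal premium and reserve volumes
volume = v_prem + v_res

base = charge(combined_sigma(0.05, 0.05, v_prem, v_res), volume)
improved = charge(combined_sigma(0.05, 0.04, v_prem, v_res), volume)

reduction = 1 - improved / base
print(f"reduction: {reduction:.1%}")  # ≈ 10% with these assumed volumes
```

With equal volumes, cutting the Reserve risk standard deviation from 5% to 4% shrinks the combined charge by roughly a tenth, consistent with the order of magnitude discussed above; different volume mixes give different percentages.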

The list of Standard Formula parameters that can be replaced by USPs is short but represents the most significant risks for some businesses.  Entities using partial or full internal models may enjoy greater potential to leverage predictive analytics.  Risk Dynamics, recently acquired by McKinsey, already provides validation services to financial institutions whose risk models are based on predictive analytics[8].

Increasing an insurer’s capital efficiency is not usually what comes to mind when discussing data analytics, but it is a secondary benefit worth considering.

Predictive analytics can help insurers understand and manage their risks better, with a clear impact on their bottom line. With some extra work, insurers can enjoy the benefits of lower risk uncertainty and increased capital efficiency. This would enable a company to be more profitable, provide more affordable products and increase dividends to shareholders.


Image source: Pixabay

[1] See EIOPA’s advice on USPs, on page 18:

[2] An application in Long Term Care:

[3] Bayesian Analysis in Forecasting Insurance Loss Payments:

[4] See Markus’s blog, “mages’ blog”


[6] Different undertakings could see different results depending on their specific figures. This is just an illustration.