Automation-Assisted Analysis of Claims Data
If you, or a client of yours, have a history of claims with 4 or more years of claims data, then automation-assisted exhaustive analysis can deliver a significant benefit.
Consider a company or organization with a history of claims, say 10 years of historical claims data.
Data prep that used to take a lot of time and effort now takes far less: 4 to 14 years of claims data are sorted by coverage and by year of occurrence date, for multiple coverages. For each successive valuation year, the automation creates an Excess Loss Analysis, listing Total Incurred by Year and, for each selected SIR (Self-Insured Retention), the annual Excess Loss Amounts (excess of the SIR).
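As a rough illustration of what this data prep produces (not the RAS tool itself), the sketch below aggregates hypothetical claims into annual Excess Loss Amounts over a couple of sample SIRs; all claim figures and SIR values are invented for the example.

```python
# Illustrative sketch: annual Excess Loss Amounts over selected SIRs.
# Claim figures and SIR values are hypothetical, not client data.
from collections import defaultdict

claims = [
    # (occurrence_year, total_incurred)
    (2015, 250_000), (2015, 700_000),
    (2016, 120_000), (2016, 1_200_000),
    (2017, 480_000),
]

sirs = [250_000, 500_000]  # sample Self-Insured Retentions

# excess_by_sir_year[sir][year] = total losses excess of that SIR
excess_by_sir_year = {sir: defaultdict(float) for sir in sirs}
for year, incurred in claims:
    for sir in sirs:
        # Only the portion of each claim above the SIR counts as excess loss
        excess_by_sir_year[sir][year] += max(0.0, incurred - sir)

for sir in sirs:
    for year in sorted(excess_by_sir_year[sir]):
        print(f"SIR {sir:>9,}  year {year}: {excess_by_sir_year[sir][year]:>12,.0f}")
```

Running the same aggregation across each coverage and each valuation year yields the Excess Loss Analysis tables described above.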
RAS will typically run the Automated Loss Data Prep for 5 to 10 SIRs.
Now, for 10 years of loss data, typical actuarial pricing analysis takes all 10 years of data (and possibly a 9-year slice), runs it through multiple Loss Development methods, then through Bornhuetter-Ferguson (BF) and other common actuarial methodologies, including regression analysis, and then adds Benchmarking and other fudge factors.
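To make one of these methods concrete, here is a minimal sketch of the Bornhuetter-Ferguson calculation for a single accident year. All figures are hypothetical; real pricing work layers many such estimates across years and methods.

```python
# Minimal Bornhuetter-Ferguson (BF) sketch for one accident year.
# All inputs are hypothetical illustration values.
reported_losses = 600_000      # losses reported to date
a_priori_expected = 1_000_000  # a priori expected ultimate losses
ldf = 2.0                      # loss development factor to ultimate

# BF blends actual experience with the a priori expectation:
# ultimate = reported + expected x (assumed unreported proportion)
bf_ultimate = reported_losses + a_priori_expected * (1 - 1 / ldf)
print(bf_ultimate)  # 1100000.0
```

The appeal of BF is that immature, volatile reported losses are tempered by the a priori expectation, which is exactly where Benchmarking assumptions creep in.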
If your client or your organization is at all proactive in its risk management, such as safety and loss control, risk mitigation, and Enterprise Risk Management (ERM), then your expected loss profile will likely be much better than the average for your industry. Using Benchmarking will likely penalize you.
The standard, conventional approach with benchmarking (based on averages) reflects the statistical situation of a person who dies at an average temperature of 72 degrees, with his head in the freezer and his feet in the oven. 🙂
Standard insurance company practice with Benchmarking inflates expected losses for a risk with a proactive risk management and a comprehensive ERM profile.
Now, for those 10 years of loss data, where conventional actuarial practice will look at all 10 years (and possibly one slice of 9 years), there are in fact 28 possible slices of 4 or more consecutive years: all 10 years; the first 9 and the last 9; the first 8, middle 8, and last 8; 4 slices of 7 years; 5 slices of 6 years; and so on, for a full 28 slices of 4 or more years, 4 being the minimum for a viable Loss Development (LD) or Regression Analysis (REG). We run the 28 slices for each of 10 SIRs, for 280 LD analyses and 280 Linear Regression (LinReg) analyses.
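The slice count is easy to verify by enumeration. The sketch below lists every contiguous run of 4 or more years out of 10 (the year labels are illustrative):

```python
# Verify the count: contiguous slices of 4+ years from 10 years of data.
years = list(range(2014, 2024))  # 10 illustrative years

slices = [
    years[start:start + length]
    for length in range(4, len(years) + 1)       # run lengths 4 through 10
    for start in range(len(years) - length + 1)  # every contiguous window
]

# Lengths 10, 9, 8, ..., 4 contribute 1 + 2 + 3 + 4 + 5 + 6 + 7 windows
print(len(slices))  # 28
```

At 28 slices per SIR, 10 SIRs give the 280 LD and 280 LinReg analyses cited above.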
RAS may also run 280 Logarithmic Regression (LogReg) analyses, as the data sets may fit logarithmic curves better than straight lines. That brings the total to 560* (LD + LinReg) analyses per Coverage; for 2 coverages, that's 1,120 analyses. For each analysis we produce a 3-page Executive Summary and an 8-page (LD) to 16-page (REG) detailed analysis, which includes a full audit trail suitable for inclusion in Notes to 10-Ks and as extra support for regulatory and Board of Directors documentation.
RAS has also created Summary evaluative statistical profiles comparing each of the component analyses, enabling a skilled analyst to make rapid sense of the deep dive into the data. This lets us build credible, granular profiles of estimated expected losses within successive loss layers. We can see, for example, that the estimated expected losses from increasing our SIR from $500k to $600k are an additional $100,000, while the insurance market, with its Benchmark-inflated estimates, expects $1 million for that same $500k-to-$600k layer.
The insurer generously offers a $500,000 (50%) credit for taking on that layer, in which we expect only $100,000 of losses, so our client has saved a net $400,000. This works in practice, and it can work in any situation where the data and the market support it. The key is diving deep into the data, and that exhaustive deep dive is only possible and economically feasible with the kind of proprietary automation described here.
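The savings arithmetic in this example is simple enough to write out directly; the figures below are the ones from the example, not a general result:

```python
# Layer-pricing arithmetic from the $500k-to-$600k SIR example above.
benchmark_layer_estimate = 1_000_000  # market's Benchmark-inflated estimate for the layer
our_expected_losses = 100_000         # granular estimate from the exhaustive analysis

premium_credit = 0.50 * benchmark_layer_estimate  # insurer's 50% credit = $500,000
net_savings = premium_credit - our_expected_losses
print(net_savings)  # 400000.0
```

The savings exist precisely because the granular estimate and the benchmark estimate diverge; where they agree, raising the SIR buys nothing.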
Try Us Today – Secure & Confidential
* 840 (LD + LinReg + LogReg) analyses per Coverage.