Jan 31, 2024
Jasper Li
Insurance

Why Insurance AI Projects Fail & How Yours Can Succeed

A guide for commercial P&C and specialty insurance companies

There are powerful use cases for artificial intelligence (AI) in the insurance industry, but a massive tangle of hype, confusion, and flat-out misinformation stands between you and viable solutions. Leaders in the insurance industry need to cut through that tangle to find out how AI can improve their company’s performance and bottom line.

That is, assuming your AI project deploys successfully. Far too many insurance AI projects fail, and they fail for many different reasons. The AI Infrastructure Alliance (AIIA) recently published the results of a survey exploring why AI projects fail at Fortune 1000 companies. Respondents cited many reasons, including mismanaged governance (63 percent) and security challenges (56 percent), but the common denominator wasn’t actually the technology. The failures all had to do with the people in the organization, their expectations, and the outdated policies they continued to abide by.

AI projects fail because of people. But that doesn’t mean your team should give up on them. There are many reasons to push through that confusion to find success on the other side. This article explores the top seven reasons AI projects fail in the insurance industry and how to ensure your project succeeds.

 

7 Reasons Why AI Insurance Initiatives Fail

Research from McKinsey & Company shows that insurance companies that modernized legacy IT systems became up to 40 percent more productive. They also reduced their administrative costs by up to 30 percent. So, the benefits will be significant if your company can successfully deploy modern AI-powered solutions.

The key word there is “successfully.” Far too many AI projects in the insurance industry fail. Here at SortSpoke, we want to share the seven reasons we most often see AI projects fail and how you can avoid these headaches during your deployments.

 

1. Taking a monolithic approach when you really want best-of-breed

It is tempting to procure an end-to-end solution suite from a single vendor that promises a single-throat-to-choke procurement process. In reality, the suite’s capabilities will be years behind best-in-class, especially where AI plays a critical part. Real upgrades arrive only every 10-15 years, with the major platform overhauls that the insurance industry knows cost millions of dollars to implement, keeping you constantly behind the curve.

A smarter approach

Rather than buying a monolithic solution, the only way insurers can stay within 2-3 years of the best-in-class capabilities of their leading competitors is to adopt a modular, open-architecture approach. By selecting and assembling the best tools for the job like Lego blocks, you can always be at the frontier of innovation. This also allows you to swap solutions out rapidly, without impacting other modules, as new capabilities become available or the best-in-class vendor changes.
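As a minimal sketch of what that modularity can look like in practice (the class and field names here are hypothetical illustrations, not any vendor’s actual API), each capability sits behind a narrow interface, so replacing a vendor means rewriting one adapter rather than the whole stack:

```python
from typing import Protocol


class DocumentExtractor(Protocol):
    """Narrow interface every vendor adapter must satisfy."""
    def extract(self, pdf_bytes: bytes) -> dict: ...


class VendorAExtractor:
    """Hypothetical adapter wrapping today's best-in-class vendor."""
    def extract(self, pdf_bytes: bytes) -> dict:
        # Call Vendor A's API here, then normalize its response
        # into the fields the rest of the pipeline expects.
        return {"insured_name": "...", "effective_date": "..."}


class VendorBExtractor:
    """Tomorrow's replacement: same interface, different vendor."""
    def extract(self, pdf_bytes: bytes) -> dict:
        return {"insured_name": "...", "effective_date": "..."}


def process_submission(doc: bytes, extractor: DocumentExtractor) -> dict:
    # Downstream code depends only on the interface, so swapping
    # VendorAExtractor for VendorBExtractor changes one line of wiring.
    return extractor.extract(doc)
```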

 

2. Not having team consensus on what data you need

The number one barrier preventing insurance AI deployments from reaching full adoption is teams not fully understanding what data they actually need the AI to work on. This often stems from operations teams never having had to define a standard data model, or from different teams (e.g., underwriting, actuarial, and data science) with different data needs trying to agree on one set of requirements for a project.

A smarter approach

Take the time upfront to build consensus among all stakeholders. Even if you don’t reach full consensus, developing protocols for data extraction will at least make everyone aware of which data matters to which stakeholders. Then, invest in AI solutions specialized in the specific tasks you need, such as data extraction.
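One lightweight way to record that consensus is a single shared schema that every team signs off on. The sketch below is a generic illustration with hypothetical field names, not a prescribed data model:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SubmissionData:
    """Agreed-upon extraction targets. Each field notes which
    stakeholder asked for it, so the requirements stay visible."""
    insured_name: str                    # underwriting
    total_insured_value: float          # underwriting, actuarial
    loss_history_years: int             # actuarial, data science
    broker_email: Optional[str] = None  # operations
```

Keeping the schema in one place also gives every later conversation, from vendor evaluation to PoC testing, a concrete artifact to argue about.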

 

3. Business stakeholders having less than 50% of the say in what defines success

Contrary to conventional wisdom, letting data science or IT determine the requirements of AI projects can often lead to failure. AI systems may be information technology, but AI today is specialized to the business problem being solved and often has to work closely with business subject matter experts. Without the business defining what success looks like and being eager adopters of the technology, failure is the likely outcome. This isn’t like buying a database or document management system.

A smarter approach

Think of AI that will be used by the business as more like a chef buying their knife or a carpenter selecting their tools. It becomes an extension of the subject matter experts’ mind and body and must be fit for their purpose. IT, innovation, and data science play a critical role in facilitating and advising on what is possible, running an efficient process, and objectively and critically evaluating vendor claims. But don’t lose sight of the fact that the business is the end customer, and the solution has to work for them. The process should start with their needs, they should make the final decision, and the implementation should focus on people and processes first, technology second.

For a deeper dive on this topic, get our free Evaluation Checklist with an 18-step decision framework.

 

4. Expecting AI to automate 100 percent of a process

Nothing is perfect. A well-trained AI tool can usually get you well past the 80/20 tipping point in automation and performance, but expecting to reach 100 percent automation every time is unrealistic, no matter how much data you train it on. If you are working on “harder” AI problems with messy inputs such as unstructured documents, aiming for 100% automation (or straight-through processing) will likely set the project up for failure before it even starts.

A smarter approach

First, determine whether your use case demands 100% (or close to it) accurate output. Analytics, marketing, or next-best-action AI projects can often be wrong much of the time and still provide significant value. But operational underwriting processes have no tolerance for error.

Second, if you require high accuracy and you have a highly complex problem with highly varied inputs, then accept that you will likely never achieve 100% automation (i.e., straight-through processing). Once you accept this reality, you’ll know that the end state you’re aiming for is to “augment” your existing business subject matter experts with AI. Your requirements then need to include how the AI will work seamlessly with business users.
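One common way to operationalize “augmentation” is confidence-based routing: fields the model is sure about flow straight through, and everything else is queued for a subject matter expert to verify. The sketch below is generic; the threshold and field names are hypothetical and should be tuned to your own error tolerance:

```python
REVIEW_THRESHOLD = 0.90  # hypothetical; calibrate against your error tolerance


def route_extraction(fields: dict[str, tuple[str, float]]) -> dict:
    """fields maps field name -> (extracted value, model confidence)."""
    auto, needs_review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= REVIEW_THRESHOLD:
            auto[name] = value
        else:
            needs_review[name] = value  # send to an underwriter's review queue
    return {"straight_through": auto, "human_review": needs_review}


# Example: only the uncertain premium field goes to a human.
result = route_extraction({
    "insured_name": ("Acme Corp", 0.99),
    "annual_premium": ("$125,000", 0.72),
})
```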

 

5. Treating AI implementation like legacy software implementation

Compared to legacy software, AI is messy. Legacy systems are deterministic: feed them the same input and they produce the same output, over and over. AI is ‘non-deterministic,’ meaning you might get different results feeding your AI the same input two or more times in a row. This is due to the size and complexity of the models on which AI systems are built. The difference affects how you’ll need to implement AI systems in insurance.

Think of developing legacy software like building a house—you follow a blueprint step by step, adding one piece after another, and if you follow the plan closely, you’ll get the same final result every time. Developing AI systems, on the other hand, is more like raising a child. You feed them different stimuli and wait patiently for them to process and learn, and eventually, a creative, intelligent, growing, and changing system emerges.

A smarter approach

AI software is dynamic. It constantly learns new information, modifies, grows, and evolves. If you work with AI, you need to plan for ongoing maintenance to nurture its development. Embrace a plan for continuous improvement and plan to integrate AI maintenance into your operations to avoid degraded performance and unnecessary costs.
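A minimal sketch of what that ongoing maintenance can look like in code (the baseline and tolerance values are hypothetical): periodically compare field-level accuracy on human-reviewed documents against a go-live baseline, and alert when performance drifts.

```python
def field_accuracy(predictions: list[dict], ground_truth: list[dict]) -> float:
    """Share of extracted fields that match the human-verified values."""
    matches = total = 0
    for pred, truth in zip(predictions, ground_truth):
        for field, expected in truth.items():
            total += 1
            matches += pred.get(field) == expected
    return matches / total if total else 0.0


BASELINE = 0.93        # hypothetical accuracy measured at go-live
DRIFT_TOLERANCE = 0.05


def check_for_drift(current_accuracy: float) -> None:
    if current_accuracy < BASELINE - DRIFT_TOLERANCE:
        # In production this might page the team or trigger retraining.
        print(f"Alert: accuracy {current_accuracy:.2%} is below baseline")
```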

 

6. Setting the wrong metrics for success

While data accuracy is often the default metric insurance companies use to measure the success of AI-driven workflows, relying solely on that metric can lead to suboptimal outcomes. By measuring accuracy only, you’re often over-optimizing the easy stuff that doesn’t have an appreciable effect on your bottom line.

A smarter approach

Instead of data accuracy, consider tracking processing time per document or the number of submissions closed per underwriter. These metrics often correlate more directly to operational efficiency and revenue generation. By aligning metrics with activities that have a tangible business impact, insurance companies can ensure that their AI initiatives contribute meaningfully to real-world financial returns.
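Both metrics can be computed from simple workflow logs. The record layout below is a hypothetical illustration:

```python
from collections import Counter
from datetime import datetime


def avg_processing_minutes(logs: list[dict]) -> float:
    """Mean time from document received to data extracted."""
    durations = [
        (log["completed_at"] - log["received_at"]).total_seconds() / 60
        for log in logs
    ]
    return sum(durations) / len(durations)


def submissions_per_underwriter(logs: list[dict]) -> Counter:
    """Count closed submissions by the underwriter who handled them."""
    return Counter(log["underwriter"] for log in logs if log["closed"])


logs = [{
    "received_at": datetime(2024, 1, 30, 9, 0),
    "completed_at": datetime(2024, 1, 30, 9, 12),
    "underwriter": "jdoe",
    "closed": True,
}]
print(avg_processing_minutes(logs))        # 12.0
print(submissions_per_underwriter(logs))   # Counter({'jdoe': 1})
```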

 
7. Not pushing a solution to the limit during the PoC or pilot

Proof of Concept (PoC) projects often fall short once you try to transition them into real-world applications. The controlled environment of a PoC can obscure challenges, such as real-world variation in inputs, adding new data fields or document types, and system-integration issues, that tend to surface only at implementation. Don’t trust a vendor that claims that because its solution works for a small subset of “happy path” situations, it will solve the rest in production.

A smarter approach

Opt for vendors that provide a transparent PoC process. Many vendors prefer to run the PoC themselves and show you a working solution. However, allowing your business users to actively engage with the PoC, experiment, succeed, and fail first-hand provides valuable insights. Test in a way that is representative of your real-world use case, with all of its complexities and messiness. That’s the only way you’ll learn the limits of the technology.
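One way to build a representative PoC test set (a hypothetical sketch; adjust the strata to your own book of business) is to sample across every document type, including the rare and messy ones, so the test covers the tail rather than hand-picked clean examples:

```python
import random


def build_poc_sample(documents: list[dict], per_stratum: int = 25) -> list[dict]:
    """Sample across document types so edge cases get tested,
    not just the 'happy path'. Each document dict has a 'doc_type' key."""
    by_type: dict[str, list[dict]] = {}
    for doc in documents:
        by_type.setdefault(doc["doc_type"], []).append(doc)
    sample = []
    for doc_type, docs in by_type.items():
        sample.extend(random.sample(docs, min(per_stratum, len(docs))))
    return sample
```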

 


 

Learn More

Commercial P&C Insurers Guide to Solving the Underwriting Bottleneck
