By Paulo Nunes

How Insurance and Financial Services can Benefit from LLMs


Large Language Models

RAG

Insurance

Financial Services

Unless you were cut off from all technology news throughout 2023, LLMs probably ended up stealing all the AI buzz around you, just like they did for everyone else in the tech industry. I personally was overwhelmed by all the news, sometimes not knowing what to pay attention to.

 

But why did this happen?

 

Essentially, Generative AI (GenAI), and in particular Large Language Models (LLMs) like GPT-4, was the source of this buzz. ChatGPT was launched by OpenAI in November 2022 (based on GPT-3.5), and GPT-4 was released in March 2023. This technology is a major breakthrough: LLMs understand and generate human-like text, enabling machines to perform tasks involving natural language understanding and generation.


This technology presents plenty of opportunities for organisations in the financial services space. And those opportunities are half the buzz. Having worked with Natural Language Processing technology for over 10 years, I see this breakthrough as a game-changer. These models are trained on enormous amounts of textual data and have a substantial understanding of language as well as general knowledge of the world. LLMs can be fine-tuned to answer questions, act as assistants, or help people with writing, summarising text, and even programming.

 

“Classic” Natural Language Processing is still very useful but it requires skill and experience from data scientists or NLP engineers. LLMs make solving problems like classification, entity extraction, and summarisation more accessible to software developers, thus making AI much easier to integrate into business or personal applications.

 

After all, by harnessing the capabilities of LLMs, businesses can partially or fully automate processes, enhance customer engagement, and drive intelligent data analysis, ultimately leading to increased efficiency, reduced costs, and improved customer satisfaction. It could be game-changing!

 

However, while this technology is powerful, I observe that it tends to be seen as the Swiss Army knife of AI, and it is easy to fall into the trap of thinking that it can completely replace humans. And this is where the other half of the buzz came from. Well, it cannot! It can only do that for relatively simple tasks, and should not (yet) be fully trusted with tasks that require real expertise and attention to detail.


What can you do with Large Language Models in Insurance and Financial Services?

 

So, what can you do with this extraordinary technology? 

 

There are many impressive applications for LLMs, but let’s focus on a few that can make a difference for Insurance and Financial Services.

 

  1. RAG-based Q&A applications
  2. Underwriting
  3. Claims and Fraud Detection
  4. Investment Portfolio Monitoring
  5. Sales
  6. Customer Service

 

I’ve worked with the insurance and financial services industry for the past 12 years. Several stages of the value chain in these industries require companies to deal with textual and numerical information. The professionals who work in each of these stages, from underwriting, claims and fraud detection to investment portfolio monitoring, sales and customer service, would benefit significantly from LLMs.


RAG-based Q&A Applications

 

In my experience, many companies struggle to find information across the organisation. Information and data are usually scattered across heterogeneous silos of structured and unstructured nature (databases, text documents, spreadsheets, knowledge bases, etc.). This becomes a big challenge as organisations grow, but even smaller organisations suffer from it. As a startup, we also have data spread across email, document management, HR, ERP, CRM and other systems, and it becomes difficult to find information.

 

But we've already got search engines, don't we?

 

One could say that the answer to the problem above is enterprise search. Indeed, enterprise search has been trying to solve this problem for many years, whether by helping office workers in general or through specialised solutions that focus on business documents. It has had some success and made it easier to find information, or rather documents, within an organisation.

 

But have you ever seen an enterprise search project? Enterprise search solutions take a lot of resources to build and maintain: small and medium companies usually cannot afford them, and large organisations face high running costs. On top of this, enterprise search engines are not good at answering questions; they return a list of relevant documents instead. Most of them are also optimised for keyword-based or full-text search rather than semantic similarity search.

 

What RAG brings to the table

 

RAG (Retrieval-Augmented Generation) goes beyond just finding information or returning a list of relevant documents. RAG uses a search engine component to retrieve relevant data and then feeds the search results into a large language model (LLM). The LLM processes the information and uses its own knowledge, together with the search results, to generate a response to a question.
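
To make this concrete, here is a minimal RAG sketch in Python. It assumes the OpenAI Python client (openai >= 1.0) with an API key in the OPENAI_API_KEY environment variable; the documents, model names and the plain-Python cosine ranking are illustrative stand-ins for a real vector store or search engine.

```python
# Minimal RAG sketch: embed a few documents, retrieve the most relevant ones
# for a question, and let an LLM answer using the retrieved context.
# Assumptions: OpenAI Python client (openai >= 1.0), OPENAI_API_KEY set;
# documents and model names are illustrative.
import math
from openai import OpenAI

client = OpenAI()

documents = [
    "Policy H-123 covers water damage up to CHF 5,000 per incident.",
    "Claims above CHF 10,000 require a second assessor's signature.",
    "Household policies exclude damage caused by gross negligence.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc_vectors = embed(documents)

def answer(question, top_k=2):
    # 1. Retrieve: rank the documents by similarity to the question.
    query_vector = embed([question])[0]
    ranked = sorted(
        zip(documents, doc_vectors),
        key=lambda pair: cosine(query_vector, pair[1]),
        reverse=True,
    )
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # 2. Generate: the LLM answers, grounded in the retrieved context.
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Use the provided context to answer the question."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

print(answer("What is the coverage limit for water damage?"))
```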

 

To be clear, RAG does not solve the problem of bringing the information into a single repository or search engine. Extract, transform and load (ETL) pipelines or other techniques are still required to achieve this; without them, it is hard to make any RAG solution work properly, even with the best models on the market. Some investment in data pipelines is always needed, since LLMs cannot do this on their own. An alternative is to combine an existing enterprise search engine with an LLM, provided the search engine has semantic capabilities.

 

So, how can the RAG approach be used in the insurance and financial services space? 


Benefits of LLMs in Underwriting

 

Over the past 12 years, I’ve seen applications where underwriters leverage Natural Language Processing to analyse customer data, past claims history, and external fraud detection databases. 

 

For example, I’ve seen implementations that use rule-based NLP to analyse engineering risk reports and help risk engineers score risks. This classic approach is of considerable help, but it is expensive to develop and maintain. RAG can make this process smoother, since relevant information becomes easier to find, and LLMs can be used to assess risk reports written by experts and generate more comprehensive assessments, highlighting potential areas of concern and recommending appropriate actions.
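
As a rough illustration, here is what the assessment step might look like as a plain LLM call. The prompt, model name and report text are assumptions for the sketch, not a production underwriting rule set; a real system would pull the report and reference material through the RAG pipeline described above.

```python
# Sketch of LLM-assisted risk assessment: the model reads an engineering risk
# report and returns areas of concern plus recommended actions.
# Assumptions: OpenAI Python client, OPENAI_API_KEY set; the report text,
# prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()

risk_report = """Site visit: sprinkler system last serviced five years ago.
Flammable solvents stored next to the loading dock. Fire doors propped open
during shifts. Electrical panels show signs of overheating."""

prompt = (
    "You are assisting a risk engineer. Read the report below and return:\n"
    "1. The main areas of concern, each with a severity (low / medium / high).\n"
    "2. Recommended actions for the underwriter.\n\n"
    f"Report:\n{risk_report}"
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)
```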

 

This makes the whole solution cheaper to develop and maintain, and the process more thorough and accurate.

 

 

Improving Claims Processing and Fraud Detection

 

Fraud detection systems are largely rule-based. A few years ago, I worked on a system for a retail insurance company that aimed at detecting fraud in health disability claims. These claims involve large sums of money and require human intervention to evaluate. Evaluating such a claim is costly for an insurance company precisely because it requires human expertise to review hundreds of pages, such as medical reports. At the time, my team developed a hybrid system that used rules and a Machine Learning based classification model to test plausibility and calculate the probability of a fraudulent claim. While this yielded great results for the client, it was again a substantial development and maintenance effort: dozens of rules had to be built, maintained over a long period, and adjusted every time new types of claims arrived or regulations changed.

 

This process can be substantially improved with LLMs. 

 

A claim specialist can ask a RAG-powered system: "What is the average payout for a broken window in a homeowner's insurance claim based on zip code X?" The system would retrieve relevant data from policy documents, historical claims data, and external market reports. The LLM would then analyse this information and generate a response with the average payout amount.
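
Here is a sketch of that interaction, assuming a retrieval step has already returned matching claim records; the records, amounts and zip code below are invented purely for illustration.

```python
# Claims Q&A sketch: records returned by a retrieval step are handed to an LLM,
# which answers the specialist's question and shows the calculation.
# Assumptions: OpenAI Python client, OPENAI_API_KEY set; the claim records,
# amounts and zip code are invented for illustration.
from openai import OpenAI

client = OpenAI()

# In a real RAG setup these snippets would come from the retrieval component.
retrieved_claims = [
    "Claim 881, zip 8004, broken window, homeowner policy, paid 420",
    "Claim 902, zip 8004, broken window, homeowner policy, paid 510",
    "Claim 917, zip 8004, broken window, homeowner policy, paid 465",
]

question = (
    "What is the average payout for a broken window in a homeowner's "
    "insurance claim in zip code 8004?"
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Answer using only the retrieved claim records and show the calculation."},
        {"role": "user",
         "content": "Records:\n" + "\n".join(retrieved_claims) + f"\n\nQuestion: {question}"},
    ],
)
print(completion.choices[0].message.content)
```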

 

LLMs can reduce or eliminate the need for rule-based systems, since they make information retrieval simpler and information more accessible. Again, introducing this technology can make claim evaluation tools cheaper to develop and maintain, more accurate, and less reliant on sampling.



Investment Portfolio Monitoring

 

In the past, my team developed an Investment Portfolio Monitoring software for a financial services company. The tool was made up of three parts: a crawler that scraped the contents of each relevant company’s website; an information extraction module that pulled relevant information from the web pages, quarterly reports and so on; and a summarisation service that summarised everything and compiled a report for each company.

 

Developing such a portfolio monitoring tool was a challenging task and a large investment for our customer; while worth it, it meant a substantial amount of development and maintenance effort.

 

LLMs, and in particular RAG, can make the development and maintenance of such tools much faster and cheaper. On top of that, financial advisors or investment portfolio managers can use RAG to query vast amounts of financial data, news articles, quarterly and yearly reports, and analyst reports. An LLM can then summarise key insights and trends, and answer questions about a particular company or portfolio, making access to information and decision-making much easier.
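
A minimal sketch of the summarisation step follows, assuming the crawler or retrieval component has already collected the source snippets; the company name, snippets and model name are illustrative.

```python
# Portfolio monitoring sketch: condense snippets gathered for one company into
# a short report. Assumptions: OpenAI Python client, OPENAI_API_KEY set;
# company, snippets and model name are illustrative.
from openai import OpenAI

client = OpenAI()

company = "ExampleCo AG"
snippets = [
    "Quarterly report: revenue up year over year, margins flat.",
    "News item: ExampleCo announces the acquisition of a smaller competitor.",
    "Analyst note: supply-chain risk flagged for the second half of the year.",
]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"Summarise the key developments, risks and trends for {company} "
            "in at most five bullet points, based only on these sources:\n"
            + "\n".join(f"- {s}" for s in snippets)
        ),
    }],
)
print(completion.choices[0].message.content)
```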

 

 

Enhancing Insurance Sales with LLMs

 

Insurance products are often difficult for customers to understand, and challenging to explain. It takes a specialist to understand coverages and exclusions, and to distinguish between the various products that insurance companies offer.

 

Let's take an example: what insurance coverage does a family of four need? Say two adults, two children, a house and a car. A few things come to mind: health, accident, household and personal liability, at least. What's the best product for this? Ideally, there would be a single "Family Insurance" product that covers everything. In practice, no such product exists, for various reasons that are out of the scope of this article. The result is that the family solution is a combination of several insurance products. To find a good combination, a specialist is needed to evaluate different options, often coming up with a package of products from various insurance companies. And what if the family is already partially insured?

 

We developed a solution for this a few years back with great results, but the effort put into it was considerable. It combined rules, natural language processing and machine learning, which meant involving a team of AI specialists for at least six months.

 

LLMs can bring this to the next level and help build applications that provide much better advice, at a much lower cost. 

 

For example, given knowledge about the current insurance coverage of a certain family and a company's portfolio of products, a solution can be developed that determines the following (a minimal sketch follows the list):

  • The family's current coverage
  • The family's protection gap
  • Products that would cover the protection gap
  • Better options for the existing coverage
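
A minimal sketch of such a solution, assuming the family's current policies and the insurer's product catalogue are available as plain text; the JSON keys, catalogue entries and model name are assumptions made for illustration.

```python
# Coverage-gap analysis sketch: the LLM compares a family's current coverage
# with a product catalogue and returns a structured answer.
# Assumptions: OpenAI Python client, OPENAI_API_KEY set; coverage text,
# catalogue, JSON keys and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

current_coverage = (
    "Health insurance for both adults, car liability insurance, "
    "no household or personal liability policy."
)
product_catalogue = (
    "Household contents insurance; personal liability insurance; "
    "family accident insurance; legal protection insurance."
)

completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[{
        "role": "user",
        "content": (
            "Given this family's current insurance coverage and our product "
            "catalogue, return a JSON object with the keys 'current_coverage', "
            "'protection_gap', 'suggested_products' and 'better_options'.\n\n"
            f"Current coverage: {current_coverage}\n"
            f"Product catalogue: {product_catalogue}"
        ),
    }],
)
print(json.dumps(json.loads(completion.choices[0].message.content), indent=2))
```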

 

Using LLMs, such a solution can be built with higher quality and at lower cost than with the technology we used six years ago.

 

This is only one example of how LLMs can be applied in Insurance and Financial Services sales. 


Customer Service

 

I discussed above how LLMs can help in claims processing in the insurance field.

 

But LLM technology can also be applied to Customer Service in banking. For example, my bank offers a voicebot for cancelling and ordering new credit cards. The last time I called to cancel my credit card (in the summer of 2023, I believe), the service was still very difficult to use. It took me five attempts to successfully cancel my card.

 

This is of course a difficult problem, since there are several steps: stating the goal of the call, authenticating the user, identifying which card to cancel, and finally confirming the transaction with a code.

 

LLMs can make this process much smoother. The language understanding parts of such systems used to take substantial effort to develop; with LLMs they can be built faster and better, making for cheaper and more reliable solutions.
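
As a sketch of what that could look like, the language-understanding step of such a voicebot can be reduced to a single structured LLM call; the intents, utterance and model name below are assumptions for illustration.

```python
# Voicebot language-understanding sketch: map a caller's utterance to an intent
# and the card in question, replacing hand-built grammars.
# Assumptions: OpenAI Python client, OPENAI_API_KEY set; intents, utterance
# and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

utterance = "Hi, I lost my credit card yesterday and I need to block it right away."

completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Classify the caller's request. Return a JSON object with 'intent' "
            "(one of: cancel_card, order_card, other) and 'card_type' "
            "(or null if not mentioned).\n\n"
            f"Caller: {utterance}"
        ),
    }],
)
print(json.loads(completion.choices[0].message.content))
```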

 

 

Conclusion

 

LLMs are a game-changer for the insurance and financial services industry. I believe they are already changing, and will ultimately revolutionise, how this industry operates.

 

To put it short and sweet: LLMs may give you, your team and your business unprecedented power.