5 December 2022

AI and fairness in insurance

Gábor Csontos


Artificial intelligence (AI) is on the verge of transforming the insurance sector. But what shape will this transformation take? And should businesses and customers be concerned? From promises of more efficient claims processes, to worries about privacy, bias and unjust outcomes, we explore the opportunities and challenges of AI in insurance, and look at what businesses can do to play their part in securing fair outcomes.

The benefits of AI in insurance

The impact of AI is already being felt in all aspects of the insurance product cycle.

At the point of sale, insurers can use AI to spot patterns in the data on their potential customers, and so set custom prices and cover levels. From the insurer’s perspective, this can minimise risk and help undercut competitors. For the customer, more bespoke pricing can mean that people who were previously uninsurable can now gain access to services.

In underwriting, too, AI-driven automation is making an impact. Underwriters can set up application programming interfaces (APIs) so that when a broker is ready to sell a product, the underwriting can happen virtually instantaneously, in a completely automated way.
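To make this concrete, here is a minimal sketch of the kind of instant-decision function such a broker-facing underwriting API might expose. The thresholds, field names and premium formula are invented for illustration; a real insurer would plug in a statistical or machine-learning model rather than these simple rules.

```python
# Toy automated-underwriting decision: accept with a premium, refer
# an edge case to a human underwriter, or decline outright.
# All numbers below are invented for illustration.

def underwrite(age: int, claims_last_5y: int) -> dict:
    """Return an instant underwriting decision for a motor policy."""
    if claims_last_5y > 3:
        return {"decision": "decline"}
    if age < 21:
        # Edge case passed to a human underwriter rather than auto-priced
        return {"decision": "refer"}
    base = 300.0                               # hypothetical base premium
    premium = base * (1 + 0.15 * claims_last_5y)  # loading per past claim
    return {"decision": "accept", "annual_premium": round(premium, 2)}
```

Because the function is deterministic and fast, a broker's system can call it (over an API) at the point of sale and receive a quote in milliseconds.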

Technologies like computer vision and natural language processing (NLP) are also set to transform the claims process. Claims processing is currently one of the highest overhead costs for insurance businesses, and slow and unfair claims procedures are one of the key reasons people can become dissatisfied with their insurer. AI can help here, since it is now advanced enough that many providers can start automating the processing of claims using photos and videos of the damage submitted by the customer.

When it comes to reducing fraud, AI can also be useful. Several insurtech companies are now offering tools to check incoming claims against others to detect fraudulent duplicates, and to check customers' social media and other online activity for signs of potential fraud.
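As a rough illustration of duplicate detection, the sketch below flags past claims whose wording closely matches a new claim's description. This is a toy example using simple word-overlap (Jaccard) similarity; real systems draw on much richer signals such as images, metadata and networks of claimants.

```python
# Toy duplicate-claim check: flag past claims whose description is
# very similar to an incoming one, using word-level Jaccard similarity.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two claim descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def flag_duplicates(new_claim: str, past_claims: list[str],
                    threshold: float = 0.6) -> list[str]:
    """Return past claims whose similarity to the new claim exceeds the threshold."""
    return [c for c in past_claims if jaccard(new_claim, c) >= threshold]
```

A flagged claim would not be rejected automatically; it would simply be routed to a human investigator for a closer look.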

Finally, there are great hopes that AI can help to prevent claims arising in the first place. We have previously written about insurtech companies who try to gain an edge by providing digital services that might help customers prevent harm before it comes their way. Pet insurers can disseminate information about pet health care, for example, and travel insurers can inform customers about risks at popular destinations.

AI can further these efforts. Using the troves of data produced by wearable devices, health insurers can provide early warning of risk factors. Smart use of telematics in car insurance can give motorists insights into their driving habits and how to make them safer. The insurer might provide GPS apps that recommend routes with lower risk of accidents.

All these developments reinforce each other. With more data on the individual customer, and with a constant and comprehensive stream of information on their behaviour, insurers can price in a more individualised way. This may lead to a decline of the yearly subscription model of insurance. Travel cover could be triggered when the customer leaves their home, for example, while car insurance premiums could be adjusted in line with how much (and how safely) people actually drive.
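A pay-as-you-drive premium of the kind described above might, in its simplest form, look something like the following sketch. The per-mile rate and the risk loading are invented constants, purely for illustration.

```python
# Toy usage-based pricing: a per-trip motor premium that scales with
# distance driven and a telematics "safety score" (0 = risky, 1 = safe).
# The constants below are invented for illustration only.

BASE_RATE_PER_MILE = 0.05   # hypothetical cost per mile, in pounds
MAX_RISK_LOADING = 0.50     # up to +50% for the riskiest driving

def trip_premium(miles: float, safety_score: float) -> float:
    """Premium for a single trip: distance cost plus a risk loading
    that shrinks as the safety score approaches 1."""
    if not 0.0 <= safety_score <= 1.0:
        raise ValueError("safety_score must be between 0 and 1")
    loading = 1.0 + MAX_RISK_LOADING * (1.0 - safety_score)
    return round(miles * BASE_RATE_PER_MILE * loading, 2)
```

Under this toy rule, a 100-mile trip costs a perfectly safe driver £5.00, while the riskiest driver pays £7.50 for the same journey, which is exactly the kind of behaviour-linked pricing the subscription model cannot offer.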

Concerns about privacy, fairness and freedom

Critics of AI argue that its workings are not all as positive as its proponents make them out to be.

For one, big tech companies’ opaque practices around handling user data have gained more general attention in recent years. While companies like Google and Facebook have, since their inception, made money off collecting customer data and using it to sell targeted advertisements, these activities have not been well regulated and customers have often been unaware of them.

We now see signs of customers and regulators becoming more security-conscious. Tech companies like Apple and Google, keen to cut out competitors, have recently introduced advanced security features in their smartphone operating systems. A new iOS feature, for example, allows users to disable third-party apps’ data collection with a single tap. This feature alone may have been a major contributor to the recent collapse in the share price of Facebook’s parent company, Meta. We might be entering an era where people are just not that keen on sharing their data—and this might also indicate that they are not going to be happy about insurers requesting exactly that.

A second concern is about fairness. While some people might become more insurable as a result of bespoke pricing, others might see the opposite effect and become uninsurable. As AI algorithms are often not transparent, it may be impossible to explain or justify why specific people are denied cover.

Third, as data scientists repeatedly stress, the quality of the input data will determine the quality of the predictions. And it is sadly the case that many of the older data sources insurers hold might contain the imprints of historical and present discrimination against racial or ethnic groups, women, or those living in certain geographic areas. Once again, the exact patterns that AI spots are not always transparent to the humans supervising them—therefore, discrimination based on old data might survive into new predictions. (Thankfully, though, substantial efforts are being made to counteract this risk, such as IBM’s open-source ethical AI toolkit, AI Fairness 360.)
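One common check in fairness toolkits of this kind is to compare a model's approval rates across demographic groups. The sketch below is illustrative only (the "four-fifths" threshold is a widely used rule of thumb, and the toy data is invented); it is not a description of any specific toolkit's API.

```python
# Minimal disparate-impact check: compare a model's approval rates
# across two groups. A ratio below 0.8 (the "four-fifths rule") is a
# common warning sign that one group is being treated less favourably.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of approved applications in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    if max(ra, rb) == 0:
        return 1.0  # no approvals in either group counts as parity
    return min(ra, rb) / max(ra, rb)
```

Running such a check on a model's outputs before deployment is one practical way to catch discrimination inherited from historical data, even when the model's inner workings remain opaque.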

There is also a concern about surveillance and freedom. Insurance was originally invented as a form of pooling risk: participants accepted that they would pay into a mutual pot from which benefits would be allocated based on need, not on contribution. Models that set increasingly bespoke premiums and benefits move away from this ethos of pooling social risk and towards one where customers and insurance companies essentially make bets on individual outcomes.

One of the outcomes of this trend might be that customers are increasingly incentivised to adjust their lifestyles to fit insurance products, rather than to find products that fit their lifestyles. McKinsey’s recent report on AI in insurance envisions a future where insurance customers will be fed a constant stream of data about the risk factors of their actions (for example, choosing one driving route over another), which will then directly affect their premiums and therefore their bank balances. Those less well off, or those forced to pursue precarious or dangerous professions, might be punished under such a regime.

What can businesses do?

Industry leaders are now getting ready for the transition to AI technologies, and their proliferation seems inevitable. However, besides building technical capabilities, companies will need to convince their customers that the use of AI will actually serve their best interests.

One step businesses can take on the structural side is to make sure that the data used to train AIs is fair and accurate, and that the obscurity of their inner workings does not result in the reproduction of social discrimination. We at Fairer Finance also believe that companies should pursue a strategy which keeps close to insurance’s original spirit of pooling risk across society, rather than gambling on individuals’ futures.

On a more practical note, it is questionable whether insurance companies can even get to the point where the changes outlined above become a reality. If they want to build a world where customers adjust a noticeable portion of their everyday lives based on what insurance AIs recommend, insurance companies first need to do some serious work building trust with their customers.

At present, insurers have issues with trust among the public. Our polling shows that insurers are less trusted than providers of banking services, with travel insurance the least trusted sector of all.

There are small but significant everyday steps that insurers can take to improve trust. Businesses can work to improve the way they interact and communicate with their customers. More accessible websites and apps, more readable terms and conditions documents, and good-faith attempts at informing customers about making the right product choice are all measures the industry should consider taking.

Technological progress provides us with new tools, and those tools are easier to use for some ends than others. This makes them carry some inherent risks, which industry and regulators must keep in mind and do their best to counteract. However, one thing that must not be undervalued in times of rapid change is the trust of the customer and the sense that the business acts in good faith to provide fair value. Insurance businesses navigating the AI revolution should do their best in this department.