
https://rtau.blog.gov.uk/2020/02/13/reflecting-on-the-cdeis-snapshot-on-ai-personal-insurance/

Reflecting on the CDEI’s Snapshot on AI & Personal Insurance

Categories: AI & Insurance, Snapshots

Informing public dialogue and debate

Artificial intelligence is no stranger to the media spotlight. Barely a day passes without another story about a new research breakthrough or a fresh risk to society. In the last few weeks alone, we have heard of how AI is now outperforming doctors at detecting breast cancer, of how Facebook is planning to ban AI-powered deepfakes, and of a fresh warning about the police’s use of facial recognition technology.

But how accurate is this media commentary? While much of the reporting on AI gives an honest account of its impact, in too many cases it is prone to hype and speculation, driven more by rumour than evidence. Not only does this prevent policymakers from understanding which risks to society are genuine, it also prevents the public from holding them to account and from confidently voicing their concerns.

This is why the CDEI last year launched a new series of briefing papers. Called Snapshots, these papers aim to bring non-experts up to speed on AI and data-related issues that have recently risen up the media agenda. Their purpose is to separate fact from fiction, point out obvious misconceptions, and suggest areas for further investigation. In September, we published one of our first Snapshots exploring the use of AI and novel forms of data in personal insurance.

What did the Snapshot say?

The paper argued that data-driven innovation has the potential to reshape the sector for the better. The use of telematics in cars, for example, is already helping to reduce premiums for many drivers, particularly young people who are traditionally seen as a high-risk group. AI-driven insights are also helping customers to learn about and to reduce their own risk exposure. US company Cape Analytics, for example, combines machine learning software with aerial images of people’s houses to assess the quality of their rooftops – information that can then be channelled to customers to help them spot and repair damage before it worsens.

But the use of AI and novel data has also raised several concerns. Chief among them is the volume of data that insurers are collecting, which may threaten the privacy of customers if not carefully managed. Another is that new algorithmic approaches could result in some people being priced out of insurance, as risk assessments become more precise and previously unseen indicators of risk come to light. This could be, for example, a newly discovered link between someone’s occupational grouping and their likelihood of falling ill at work.

The challenge is that there is little societal consensus about how the industry should be using this technology. Take the example of social media data. One could reasonably argue that insurers should be prohibited from using data from platforms like Twitter and Facebook to screen out fraudulent claims – a practice that intuitively feels invasive and unnecessary. But were insurers to use such data – and were they successful – it could lead to fewer false claims and potentially lower premiums for the vast majority of customers who do play by the rules.

Setting the ground rules, together

So where do we go from here? In the paper, we argue that the only way of overcoming this impasse is for the industry to come together to agree common principles for AI and data use. Not alone and not behind closed doors. But hand-in-hand with the public.

The good news is that we can already see new standards emerging. In 2018, the Chartered Insurance Institute (CII) established a Digital Ethics Forum to bring industry representatives together to identify common concerns and solutions. The following year, the Forum published Digital Ethics: A Companion to the Code of Ethics, which was the first joint industry guidance on the use of data. Several insurers have also set up their own ethics panels, including AXA, whose Data Advisory Panel informs the company’s policy on algorithms and data.

But what is still lacking is meaningful and direct engagement with the public. What do people think about their social media data being used to inform their premiums? How do people feel about insurers purchasing third party data that relates to them? Are people willing to have their behaviour tracked – from their driving habits to their exercise schedules – in return for lower premiums? Where – in short – do the public’s boundaries lie?

So far, your guess is as good as mine.

What happens next?

The CDEI is now busy meeting with interested parties to press home the message. The CII’s Managing Director Keith Richards has done likewise, publicly backing the CDEI’s report, while its Professional Standards Director, Melissa Collett, has highlighted the report in the CII’s Journal and called for more public dialogue. Private conversations with insurers also indicate a growing appetite to agree some ground rules on how data and AI should be deployed.

Fortunately, there is precedent to learn from. In 2001, the insurance industry came together to issue a moratorium on the use of genetic testing data in insurance pricing – an agreement that subsequently turned into a formal code agreed by all members of the Association of British Insurers. This proves that change is possible.

If 2020 is going to be the year that we get serious about ethical principles, as many commentators in the AI and ethics community claim, I can think of few better places to begin than with personal insurance.  

If you have any thoughts to share with the CDEI on this or any other Snapshot, or want to know more about our future work, please get in touch by leaving a comment or sending us an email at cdei@cdei.gov.uk.
