Moderated by Carmen Elder, Insurance Partner and Chair of the Leadership Alliance for Women at DLA Piper, panellists Kiran Singh, Executive Manager (Insurance Advisory) at Suncorp; Emma McCudden, Head of Innovation at Suncorp; Simone Dossetor, CEO at Insurtech Australia; Alex Horder, Senior Associate at DLA Piper and the firm’s AI lead in Australia; and Clancy King, Employment Partner at DLA Piper, came together to answer the question: “Innovation and AI – will it help or hurt diversity and inclusion?” Here are three takeaways we learned from that session.
1. When using AI, data quality and good governance are crucial
AI has incredible potential within the insurance industry. It can create sophisticated product offerings that reach a more diverse range of insureds, foster inclusive access to products and services, and price policies more effectively. The strength of this potential lies in the data and governance frameworks used when training and deploying these systems. While insurers have a clear opportunity to use the data they hold to train these systems, if that data is biased or unbalanced, there is a risk that AI systems will learn to amplify those biases, undermining the benefits of using AI and producing undesirable outcomes, particularly where AI systems are designed to make decisions affecting people’s lives.
Discriminatory treatment could also stem from biased AI decision-making, particularly where those decisions are made in the recruitment and employment context. This creates a material risk under many countries’ anti-discrimination laws.
To mitigate this risk, you must understand the data you hold, the demographics it covers, and the gaps in representation of certain demographics that could propagate biases when that data is used to train an AI system. Implementing solid governance frameworks around the use of AI will limit the extent to which it can have negative impacts. User organisations should ensure that AI systems used to make decisions are explainable and unbiased, and allow for human intervention, to mitigate legal and reputational risk.
2. Human-centred design and intentionality are key to designing accessible products and customer experiences
Ensuring D&I with AI begins with intent – a user organisation must set out to use AI in a way that actively upholds its principles, rather than deploying AI with indifference to the impact it may have.
Suncorp explained the benefits it has seen in making its services more accessible through AI, from automating processes for customers with hearing impairments to using tools that identify and assist vulnerable customers who need further support.
Suncorp also established a steering committee and working groups spanning its product, risk, legal, innovation, data science, services, and people and culture teams to deliberate on the opportunities and risks of AI. These groups demonstrate Suncorp’s focus on human-centred design and strong governance, which helps to remove blockers, mitigate risk and pave the way for innovation.
3. Regulation will evolve
There has been a huge rise in interest surrounding AI in the insurance industry in recent months. Some insurers are cautious, believing there is too much risk involved, whereas others are fully embracing AI. Regardless, there is a common concern about balancing the risk of AI with the opportunity it presents.
From a legal perspective, we face a challenge in utilising AI to its full potential, while complying with existing, and potentially incoming, legal and regulatory frameworks.
As of August this year, over 800 laws relating to AI regulation have been proposed globally, and it looks like Australia will follow suit. While the Australian Government will likely take a light-touch approach to AI regulation, probably seeking to enhance existing frameworks rather than creating new ones, organisations should not wait for ‘hard law’; rather, they should develop their AI strategies and governance frameworks now.