How the insurance industry can use artificial intelligence safely and ethically

Executive summary: Artificial intelligence (AI) is on the verge of transforming most aspects of business, including insurance, but must be used responsibly, according to Zywave’s chief technology officer Doug Marquis, who takes a look at some of the practical steps insurance companies can take to use AI safely and ethically.

Several types of AI are already being adopted across the insurance industry and have the potential to deliver tremendous efficiency savings, opening the door to greater profitability, innovation and solutions to complex problems.

While insurance-industry use cases for large language models (LLMs), such as those behind ChatGPT, are still evolving, examples of how the technology is being used include document summarization and generation, data analysis, and data acquisition for risk assessment and underwriting. As an insurtech company, we are also looking at how AI can help us write software in an automated way and exchange data between entities in the insurance ecosystem.
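To make the document-summarization use case concrete, here is a minimal Python sketch using the OpenAI SDK. The client setup, model name and prompts are illustrative assumptions, not a description of any particular insurer's system.

```python
# Minimal sketch of LLM document summarization, assuming the OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY environment
# variable. The model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_document(text: str) -> str:
    """Ask the model for a short, faithful summary of one document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you license
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize insurance documents accurately. "
                    "If something is unclear, say so; do not guess."
                ),
            },
            {"role": "user", "content": f"Summarize this document:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_document("Policy ABC-123 provides general liability cover ..."))
```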

AI risks

There are, however, several risks that can arise when using AI – primarily because it can easily generate errors. For example, AI may ingest statutory information from one US state and assume it applies to all states, which is not necessarily the case. AI can also hallucinate – make up facts – by taking real information and extrapolating the wrong answer.
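One way to guard against the state-specific error described above is to check a document's jurisdiction before it ever reaches the model. The sketch below is a hypothetical illustration; the `StatuteRecord` type and `filter_statutes` helper are invented for this example.

```python
# Hypothetical guardrail: before statutory text reaches the model,
# confirm each record's state of origin matches the state the user is
# asking about, so law from one state is never generalized to another.
from dataclasses import dataclass

@dataclass
class StatuteRecord:
    state: str      # two-letter code, e.g. "CO"
    citation: str
    text: str

def filter_statutes(records: list[StatuteRecord],
                    query_state: str) -> list[StatuteRecord]:
    """Keep only statutes from the state the question is about."""
    matched = [r for r in records if r.state == query_state]
    if not matched:
        # Refusing outright is safer than letting the model extrapolate
        # from another state's law, which may not apply.
        raise ValueError(f"No statutes on file for state {query_state!r}")
    return matched

records = [
    StatuteRecord("CO", "C.R.S. 10-3-1104", "..."),
    StatuteRecord("TX", "Tex. Ins. Code 541.003", "..."),
]
print([r.citation for r in filter_statutes(records, "CO")])  # CO only
```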

AI can also be biased if it is trained on data that is itself biased, producing algorithms that discriminate against a group of people based on, for example, ethnicity or gender. This could lead the AI to recognize that a racial or ethnic group has higher mortality rates and then infer that its members should be charged more for life cover.

AI bias also poses a danger when it comes to recruitment, potentially discriminating against people from certain regions or socio-economic backgrounds. For these reasons, there is still a critical need for human oversight of AI decisions to ensure inclusiveness, fairness and equal opportunity.
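One simple oversight aid is a disparate-impact check. The sketch below applies the "four-fifths rule" from US EEOC guidance to a model's selection decisions; the group labels, counts and the way the threshold is wired in are illustrative assumptions.

```python
# Illustrative disparate-impact check based on the "four-fifths rule"
# from US EEOC guidance: flag any group whose selection rate is below
# 80% of the highest group's rate, then route the model for human review.

def four_fifths_check(decisions: dict[str, tuple[int, int]]) -> list[str]:
    """decisions maps group label -> (selected, total applicants).
    Returns the groups whose selection rate falls below 0.8x the best."""
    rates = {group: sel / tot for group, (sel, tot) in decisions.items()}
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < 0.8 * best]

# Example with made-up counts: group B is selected 30% of the time
# versus group A's 50%, so B falls below the 40% threshold and is flagged.
print(four_fifths_check({"A": (50, 100), "B": (30, 100)}))  # ['B']
```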

New AI regulations

AI technology has moved so fast in the past two years that regulations have lagged far behind. Lawmakers are scrambling to catch up with the rapid development of AI and the potential risks it could pose, meaning insurers need to be prepared for a raft of new regulations.

Earlier this year, Colorado became the first state to pass comprehensive legislation regulating developers and implementers of high-risk AI systems in order to protect consumers. High-risk AI systems are those that make a substantial contribution to consequential decisions regarding education, employment, financial or credit services, essential government services, healthcare, housing, insurance or legal services.

Avoiding algorithmic bias

The Colorado AI Act, which takes effect on February 1, 2026, requires developers and implementers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risk of algorithmic discrimination or bias.

This means that developers must share certain information with implementers, including known harmful or inappropriate uses of the high-risk AI system, the types of data used to train it, and the mitigation measures taken. Developers must also publish information such as the types of high-risk AI systems they have released and how they manage the risks of algorithmic discrimination.

In turn, implementers must adopt a risk management policy and program to oversee their use of high-risk AI systems, and must complete an impact assessment of those systems and of any changes they make to them.
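As an illustration of what such developer disclosures might look like in practice, here is a minimal Python sketch of a disclosure record. The schema is an assumption made for this example; the Act describes the required information but does not prescribe a format.

```python
# Hypothetical disclosure record covering the kinds of information the
# Act requires developers to share: known harmful uses, training-data
# types and mitigation measures. The schema itself is an assumption.
from dataclasses import dataclass

@dataclass
class HighRiskSystemDisclosure:
    system_name: str
    intended_uses: list[str]
    known_harmful_uses: list[str]    # uses the developer warns against
    training_data_types: list[str]   # e.g. claims history, actuarial tables
    bias_mitigations: list[str]      # steps taken against algorithmic discrimination

disclosure = HighRiskSystemDisclosure(
    system_name="underwriting-assist-v2",
    intended_uses=["life insurance risk scoring"],
    known_harmful_uses=["employment screening"],
    training_data_types=["anonymized claims history", "public actuarial tables"],
    bias_mitigations=["protected attributes excluded from features",
                      "quarterly disparate-impact audit"],
)
print(disclosure)
```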

The necessary transparency

Colorado’s legislation also has a basic transparency requirement, similar to the recent EU AI Act, Utah’s AI Policy Act, and chatbot laws in California and New Jersey. Consumers must be told when they are interacting with an AI system, such as a chatbot, unless the interaction is obviously with a machine. Implementers must also state on their website that they use AI systems to inform consequential decisions about customers.

Moving forward, it’s likely that other states will begin adopting AI regulations similar to Colorado’s. However, it is important to note that many governance measures, such as AI risk classification, test-data controls, and data monitoring and auditing, are already covered by other laws and regulatory frameworks, not only in the US but throughout the world. With the layers of legislation expanding at every level, we can expect the AI landscape to become more complex in the near future. For now, there are several actions businesses can take to ensure they are protected.

Five practical steps for insurers

  1. Transparency: With simple disclaimers, insurers can inform customers when they use chatbots and disclose where AI is used to inform decisions in certain systems, including recruitment.
  2. Intellectual property: It is important that insurers protect ownership of customer data when working with AI providers – and also protect sensitive personal data such as medical information. At Zywave, for example, we’ve seen AI vendors with contracts that demand ownership of the data or models they provide. Companies must be more diligent than ever when reviewing contracts to ensure confidentiality, IP ownership, and protection of trade secrets that may be placed on vendor systems.
  3. Correct data: When it comes to making sure AI is basing decisions on the right information, it is the company’s responsibility to check that it is giving the AI system access to reliable data. For example, at Zywave, we use our own data warehouse, consisting of proprietary data, data purchased from trusted third parties, and data acquired from public websites of US government agencies, all of which we have carefully verified. The new Colorado AI regulations require a company to be able to explain how it reached a hiring decision and to prove that decision is not biased, which goes back to transparency and recording where the data came from.
  4. Documentation: As an increasing number of AI products are used in the insurance industry, it is critical to meticulously document what data is being used, where it comes from, and who owns it; a minimal provenance record is sketched after this list. This allows companies to protect themselves from accusations of copyright infringement and intellectual property theft, as well as from AI mistakes based on inaccurate data taken from the internet.
  5. Learning new skills: Insurance companies need a better understanding of AI to ensure they comply with the regulations likely to be implemented in the US and other countries over the next two years. While new roles such as prompt engineers have already been created to ensure AI systems produce the best answers, their work still needs oversight from other people in case the information they feed into the systems is biased or poses a security risk.
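Picking up steps 3 and 4 above, here is a minimal Python sketch of the kind of per-dataset provenance record such documentation could build on. The `DatasetRecord` fields and example entries are illustrative assumptions.

```python
# Hypothetical per-dataset provenance record: where each dataset came
# from, who owns it, its license terms, and when it was last verified.
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    source: str        # e.g. "proprietary", "licensed vendor", "us-gov public site"
    owner: str         # who holds the rights to the data
    license_terms: str
    verified_on: date  # when the data was last checked for accuracy

def audit_trail(records: list[DatasetRecord]) -> list[str]:
    """One human-readable provenance line per dataset, for audits."""
    return [
        f"{r.name}: source={r.source}, owner={r.owner}, "
        f"license={r.license_terms}, verified={r.verified_on.isoformat()}"
        for r in records
    ]

records = [
    DatasetRecord("claims_warehouse", "proprietary", "insurer", "internal use",
                  date(2024, 6, 1)),
    DatasetRecord("state_statutes", "us-gov public site", "public domain",
                  "none", date(2024, 5, 15)),
]
print("\n".join(audit_trail(records)))
```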

Given the increased use and advancement of AI over the past few years, it’s likely that the technology is here to stay. And while the additional administrative and oversight work required to ensure that AI is used safely and ethically can seem daunting, the new technology offers tremendous business value, with potential automation that dramatically improves efficiency and profitability.

There is no doubt that the benefits outweigh the extra work of developing a robust AI protocol. By implementing strict guardrails, the insurance industry will reap the rewards of AI while remaining compliant in a rapidly evolving regulatory landscape.
