The leading regulatory policy group for U.S. insurers has stressed that companies need to protect consumers from potential abuse and discrimination when using artificial intelligence, urging caution amid rapid uptake of the technology.
On Thursday, the National Association of Insurance Commissioners released new guiding principles on the use of AI in insurance, highlighting the need for fairness, accountability and safety. These principles are guidelines that reflect the NAIC’s expectations of insurers and do not carry legal weight.
The final draft caps months of discussion within the NAIC's AI working group, work that unfolded alongside the nationwide protests that erupted in response to the police killing of George Floyd in Minneapolis.
“The vast amounts of data and ever-expanding computing power is accelerating the use of artificial intelligence within the insurance industry,” said Jon Godfread, North Dakota insurance commissioner and chairman of the NAIC AI working group.
“And while this tool can greatly aid businesses across the sector, it also raises new challenges to be addressed, including consumer privacy and safeguards to protect against unintended discrimination that may be built into algorithms.”
The NAIC outlines five key tenets for all U.S. insurers that are using or researching the use of AI: fairness and ethics, accountability, compliance, transparency and safety. The guidelines are based on the Organization for Economic Cooperation and Development's AI principles, which have been adopted by 42 countries, including the U.S.
The principles establish that firms using AI should "respect the rule of law throughout the AI lifecycle," covering not only insurance laws and regulations but also rules and ethical norms against unfair discrimination, denial of access to coverage and more. This expectation extends to underwriting, the protection of consumer rights and privacy, and advertising.
“AI actors should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for consumers and to avoid proxy discrimination against protected classes,” the NAIC said. “AI systems should not be designed to harm or deceive people and should be implemented in a manner that avoids harmful or unintended consequences.”
Companies should ensure that any AI they use complies with rules governing its use of data and algorithms, the NAIC said, and firms should be held accountable for the AI systems they create and for any unintended consequences. To this end, the principles encourage ongoing monitoring of AI systems and, where necessary, human intervention.
Developers of AI for insurance use should also take care that their systems comply with insurance laws in the individual states where they are deployed, the NAIC said.
However, insurers that use AI systems will not be held in violation of existing insurance regulations for an AI-driven decision if the same decision, had it been made by a person, would have been permissible.
The commissioners encouraged firms to “commit to transparency and responsible disclosures” regarding their AI use, while protecting the confidentiality of proprietary algorithms. Further, they argue that stakeholders such as regulators and consumers should have a way to review how companies use the technology.
The current draft of the principles was first approved by the NAIC working group in June and unanimously adopted by the full organization on Aug. 14.
In the weeks leading up to the working group's June 30 meeting, the Center for Economic Justice, a Texas nonprofit, wrote to the AI group and others at the NAIC, calling on them to address racial bias in the insurance industry. The center accused insurers and their regulators of having "consistently opposed" such efforts in the past.
“Most state insurance regulators believe they have the authority to stop proxy discrimination against protected classes. This belief, however, has never manifested itself, in regulatory standards, models, laws or consistent approaches across states,” the center said.
The following month, NAIC President Ray Farmer announced he would co-chair a new "special committee" to review racial discrimination in insurance practices and make recommendations for regulators by year-end 2020.
The NAIC guidelines also arrive amid a growing number of insurtech startups, as well as traditional insurers using AI to build new consumer-facing products. Liberty Mutual, for example, teamed up with Queen's University Belfast this week to develop AI that can communicate dangerous road conditions to policyholders.
The COVID-19 pandemic appears to be speeding up AI development at other firms, like Aflac, which plans to unveil a new AI-powered virtual consultation and enrollment product later this year as the virus limits in-person meetings.
Additional reporting by: Beth Newhart, Allie Ciaramella, Reese Wallace