AI fraud detection ‘arms race’ poses ethical dilemma for insurers

September 18, 2020.

An Israel-based company promises insurers state-of-the-art weaponry in the fight against fraud: artificial intelligence that reveals the “genuine emotional state of a person” when they file a claim.

Nemesysco counts the world’s largest insurer by assets, Germany-based Allianz, and China’s fourth-largest insurer by premium, China Pacific Insurance, as clients. Its software creates a profile of callers, describing a person as “emotional,” “logical,” “hesitant,” “stressed,” “energetic,” or “thoughtful” to help flag if they might be lying.

“What we look for is the traces of the masked emotions,” said Nemesysco’s founder and CEO, Amir Liberman. He declined to provide more details about what those traces are, calling them “secret gravy.”

The technology, first developed by Liberman while he was working with the Israeli military, purportedly detects emotions that can be analyzed by artificial intelligence — an approach Liberman calls “AI+EI,” or artificial intelligence plus emotional intelligence.

Nemesysco’s technology is one of a growing number of software-based tools aimed at increasing insurer profitability. Yet as the technology has grown more powerful, so, too, has the controversy over its application in the insurance industry, driven by concerns about privacy and the use of factors that could disadvantage minority groups.

Last year, a group of researchers at New York University’s AI Now Institute said in a report that emotion detection AI “can encode biases.” Calling for stricter regulation, the researchers said the technologies “are often applied in ways that amplify and exacerbate historical patterns of inequality and discrimination.” 

The Bank Policy Institute has also warned financial institutions about the potential for AI discrimination in lending. The lobbying group said lenders should audit their AI systems for race, gender and national origin-based discrimination while arguing that the implementation of AI is an overall net good for the industry because it can increase access for underserved groups.

Locked in an “AI arms race,” insurers are not always mindful of the downside risk of chasing after the latest and greatest technologies, such as public blowback or ethical concerns, U.K. consultant Duncan Minty told Fastinform.

“The fear of losing out and the hope of gaining more are both more powerful influences than ‘Hey, there’s a question over this science. Let’s just have a debate about it,’” said Minty, who runs a London-based ethics consultancy that works mostly with U.K.-based and European insurers. 

“Do [insurers] recognize the science and the technology is still in its infancy?” he said. “I don’t know. I hope they do.” 

The pull toward innovation

Insurers have good reason to be especially concerned with trimming costs and weeding out fraud. As the coronavirus pandemic upended the world economy, it wreaked havoc on insurer profitability. Property and casualty insurers experienced a 22% drop in profits to $25 billion during the first half of this year, according to the Intelligent Insurer.

The immediate future does not look encouraging either. With a worse-than-normal hurricane season pummeling the American South and fires raging across California, the industry remains under pressure and likely to absorb significant catastrophe losses.

Fraud costs U.S. property and casualty insurers about $30 billion annually, according to estimates from the Insurance Information Institute. That equals about 10% of the industry’s total incurred losses and loss-adjustment expenses, the trade group said. AI offers an opportunity to cut down that amount substantially, according to tech company executives.

“We can industrialize fraud fighting,” said Eric Sibony, co-founder and chief scientist of Paris-based Shift Technologies, which counts AXA, P&V Assurances and Spirica among its clients for its AI “network detection” software. 

Shift, which boasts a 75% “hit rate,” meaning that three out of four claims it flags are deemed suspicious by a human, got its start as an idea hatched by Sibony when he worked as an intern at Paris-based AXA in 2011. 

While working in fraud detection, “we realized there was a big opportunity there,” Sibony told Fastinform. “These people needed some good tools, and those tools did not exist.” 

Emotion detection vs. network detection

While Nemesysco focuses on identifying a policyholder’s emotional state, Shift tries to “exploit as much information as possible,” Sibony said.

For example, for a car insurance claim, Shift’s software analyzes the history of the vehicle, the mechanic, “as well as all the connections between people.” 

If the vehicles involved had a series of suspicious repairs or thefts, the system could flag the claim as potential fraud. If the two parties involved in an accident lived near each other or appeared to know each other, that, too, could be marked as suspicious. 

The company tailors its AI inputs to each insurer’s data and to the country they’re operating in, as Sibony says fraud techniques vary by country. 

“If an insurer has invested in certain technologies that provide data, we can usually find a way to incorporate that into our model,” said Shift spokesperson Rob Morton. He added that if an insurer uses emotion detection technology from a firm like Nemesysco, Shift could incorporate it into its broader model.  

Meanwhile, Nemesysco helps flag potentially suspicious claimants directly, allowing insurers to open an investigation immediately. Catching fishy claims at the outset has the added benefit of letting insurers process legitimate claims more quickly, spokesperson Tony Miller said.

“People who are supposed to receive their claim, whether it’s a life insurance benefit or any other type of insurance, that gets processed much more quickly,” Miller said.

Industries beyond insurance have also put the technology to use. Liberman estimates that 30% to 35% of his company’s business comes from insurers, 60% from call centers monitoring client and customer moods, and the remainder from law enforcement and human resources clients.

‘Getting close to the customer’

Not everyone, of course, is excited about insurers’ increasing reliance on AI and technology-enabled investigations in fraud detection. Critics argue that the technology can exacerbate biases already rampant in the industry or, worse, simply doesn’t work.

In 2007, two Swedish phoneticians criticized Nemesysco’s technology as part of a scientific journal article, writing that “the ideas on which [Nemesysco’s] products are based are simply complete nonsense” and saying Liberman has no education relevant to the technology he developed.

In response, Liberman threatened to sue the journal’s publisher for defamation. To head off a suit, the publisher ran a response from Liberman in which he defended his technology, according to Science magazine.

There is also a concern that AI can “move insurers toward that sort of dark place that they found themselves in in the early 60’s and late 50’s,” when firms charged customers in minority ZIP codes higher rates or refused to provide services to them altogether, Minty said.

Sibony told Fastinform that Shift actively takes steps to avoid potential discrimination, giving its scientists and engineers ethics training and actively auditing its algorithms to check for bias related to geography or ethnicity. 

Similarly, Liberman said that internal tests have shown Nemesysco’s technology does not discriminate based on accents and works across all languages. 

Even if AI and emotion detection systems prove effective in flagging fraud, and avoid the problems of inherent biases, Minty said the optics of invasive detection technology and violation of privacy could hurt insurers. 

“The narrative here is about us, the insurer, getting close to the customer,” he said. “Well, if you think about it, you’re sitting on the tube or the metro or the bus and someone gets closer to you, you think that’s a bit creepy.” 

Rather than making customers feel surveilled — like having to second-guess whether their tone of voice is suspicious — insurers should instead aim to make clients think, “I like the way they look after me,” he said.