Artificial intelligence developed by Charlotte, North Carolina-based Bank of America predicted a higher corporate default rate than the bank’s analysts did, particularly in the energy sector, according to research disclosed Friday.
Using technology to analyze corporate earnings calls, the U.S.’s second-largest bank, with $2.03 trillion in assets, added to its modeling by having software look for phrases that correlate with eventual loan defaults, such as “cost cutting,” “asset sales” and “cash burn.”
The system also flagged other phrases that the researchers suspected were merely incidental correlations, such as “investor relations,” “oil gas” and “cash generation.”
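The kind of phrase-spotting the article describes can be illustrated with a minimal sketch. This is a hypothetical example, not Bank of America’s system: the phrase list comes from the article, but the function name, scoring and sample transcript are illustrative assumptions.

```python
# Hypothetical sketch of distress-phrase flagging in an earnings-call
# transcript. The phrase list is from the article; everything else
# (function, sample text) is an illustrative assumption.

DISTRESS_PHRASES = ["cost cutting", "asset sales", "cash burn"]

def count_distress_phrases(transcript: str) -> dict:
    """Count case-insensitive occurrences of each watched phrase."""
    text = transcript.lower()
    return {phrase: text.count(phrase) for phrase in DISTRESS_PHRASES}

sample = ("Given continued cash burn this quarter, management is "
          "pursuing cost cutting and evaluating asset sales.")
print(count_distress_phrases(sample))
# {'cost cutting': 1, 'asset sales': 1, 'cash burn': 1}
```

A production system would presumably weight such counts against historical default outcomes rather than simply tallying them, which is also why spurious matches like “investor relations” can surface.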
Adding the software’s predictions to Bank of America’s default prediction model, which also incorporates factors such as the Federal Reserve’s lending survey and the price of oil, raised the bank’s 12-month corporate default forecast to 5.9%, from 5.75% without the artificial intelligence analysis.
The gap was larger for certain sector-specific predictions, namely energy, where the bank’s standard model predicted an 11% default rate but the software ramped that up to 18%. It also doubled the predicted default rate for media companies, from 4% to 8%, and was more optimistic on health care, lowering the default prediction from 6% to 2%.
By backtesting the model, the researchers found that it had the most significant effect on predicting defaults 9 to 12 months out.
The Bank of America team was encouraged by this, because that range “is the most significant timeframe as it tends to lead the overall HY [high yield] credit cycle to its next turning points,” and because “[i]t is the most difficult time horizon to model as good leading factors are hard to identify.”
Artificial intelligence in finance creates unique opportunities and concerns, because its precise methodology can be opaque, even to the company that created it. In August, the National Association of Insurance Commissioners urged caution among insurers in adopting artificial intelligence, because models could introduce biases that are not immediately obvious to the companies using them.
--Additional reporting by Covey Son and Zack Fishman