The Consumer Financial Protection Bureau (CFPB) has issued guidance on the legal requirements that lenders must follow when using artificial intelligence (AI) and other complex models.
The CFPB noted that creditors often use complex algorithms with large datasets, sometimes including data harvested from consumer surveillance. As a result, a consumer may be denied credit for reasons they may not consider particularly relevant to their finances.
Despite the potentially expansive list of reasons for adverse credit actions, some creditors may inappropriately rely on a checklist of reasons provided in CFPB sample forms. But the CFPB warned that the Equal Credit Opportunity Act does not allow creditors to simply conduct check-the-box exercises when delivering notices of adverse action if doing so fails to accurately inform consumers why adverse actions were taken.
According to the agency, the new guidance describes how lenders must use specific and accurate reasons when taking adverse actions against consumers. This means that creditors cannot simply use CFPB sample adverse action forms and checklists if they do not reflect the actual reason for the denial of credit or a change of credit conditions.
The agency stressed the importance of this guidance given the increased use of advanced algorithms and personal consumer data in credit underwriting. Explaining the reasons for adverse actions helps improve consumers’ chances for future credit and protects consumers from illegal discrimination.
“Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied,” said CFPB Director Rohit Chopra. “Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence.”