
CFPB Ruling Protects Public From Hand-Waving Credit Model Rejection Decisions

The Consumer Financial Protection Bureau made clear today that credit machine learning models, however complex, must be sufficiently explainable to be used legally in credit applicant decisions.
Created on May 26|Last edited on May 27
Today, the Consumer Financial Protection Bureau (CFPB) issued a reminder to creditors and law enforcers that under the Equal Credit Opportunity Act (ECOA), which requires that rejected credit applicants receive a specific explanation of why they were denied, creditors cannot hand-wave the decisions of credit machine learning models as "unexplainable".
This is a clear heads-up to anyone developing or running machine learning models for credit analysis: make sure your algorithms and data are explainable to the highest possible degree.
As ML developers, we all understand that the cloud of millions or billions of neural connections that make up a model's internals is hard to describe directly; you cannot say "this neuron lighting up this much is why that is the outcome". Despite this, the ruling under the ECOA is that creditors must be able to explain the technologies they use, and that includes credit models.
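One common way to meet that obligation, regardless of the model's internals, is to pair the model with an attribution layer that turns each decision into concrete, per-feature reasons. Below is a minimal sketch of the idea for a simple linear score; all feature names, weights, the baseline profile, and the threshold are hypothetical, and a real deployment would apply a post-hoc attribution method (such as SHAP) to the actual model rather than hand-set weights.

```python
# Sketch: per-feature "reason codes" for a linear credit score.
# Every name and number here is a made-up example, not a real scoring system.

WEIGHTS = {
    "years_of_credit_history": 0.6,
    "on_time_payment_rate": 3.0,
    "credit_utilization": -2.5,    # higher utilization lowers the score
    "recent_hard_inquiries": -0.8, # more inquiries lower the score
}
BIAS = 1.0
THRESHOLD = 4.0  # hypothetical approval cutoff

# A "typical approved applicant" used as the reference point for reasons.
BASELINE = {
    "years_of_credit_history": 7.0,
    "on_time_payment_rate": 0.98,
    "credit_utilization": 0.30,
    "recent_hard_inquiries": 1.0,
}

def score(applicant):
    """Linear score: bias plus weighted sum of the applicant's features."""
    return BIAS + sum(w * applicant[f] for f, w in WEIGHTS.items())

def adverse_action_reasons(applicant, top_n=2):
    """Features whose deviation from the baseline hurt the score the most."""
    deltas = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    # Most negative contributions first: these become the stated reasons.
    worst = sorted(deltas.items(), key=lambda kv: kv[1])[:top_n]
    return [f for f, d in worst if d < 0]

applicant = {
    "years_of_credit_history": 2.0,
    "on_time_payment_rate": 0.85,
    "credit_utilization": 0.90,
    "recent_hard_inquiries": 5.0,
}

if score(applicant) < THRESHOLD:
    print("Declined. Principal reasons:", adverse_action_reasons(applicant))
```

The point is not the arithmetic but the contract: for every rejection, the system can name the specific factors that drove it, which is exactly the kind of explanation the ECOA's adverse-action requirement asks creditors to provide.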


Tags: ML News