The Biggest Data: Advising Clients about Alternative Lending Models and the Regulatory Scrutiny They Generate

13 Min Read By: Warren E. Agin, Aki Estrella

IN BRIEF

  • Federal regulators have been homing in on instances of discriminatory lending based on alternative lending criteria and unconventional lending programs, exposing organizations to systemic operational changes and substantial regulatory costs.
  • Avoiding discriminatory bias requires an understanding of the processes used to avoid data bias in building machine learning models.
  • What data collection problems can impact a lender’s model?

Introduction

The Equal Credit Opportunity Act (ECOA) was one[1] of the seminal anti-discrimination statutes governing the lending and credit industry, and it set the standard for preventing discrimination in lending. When it was enacted, ECOA was meant to offer similarly qualified borrowers equal treatment in lending transactions regardless of race, color, marital status, age, religion, sex, or national origin. Before ECOA, lenders, who held the reins on financing, could easily turn away a black family or an unmarried woman simply because they wanted to.

Today, we might be surprised if a lender were openly violating ECOA. Or not.

Despite the longevity of the lending discrimination laws, traditional lenders have had trouble discerning when something is a reasonable and lawful lending criterion and when it is not. For example, a large national bank entered into a $5 million settlement with a federal regulatory authority when it denied loans to pregnant women.[2] Another paid $54 million for charging black and Hispanic borrowers higher fees than similarly situated white borrowers.[3]

Some businesses have taken a different approach to lending. Instead of relying solely on traditional lending and risk characteristics, these lenders weigh traditional criteria such as income and credit scores, nontraditional criteria such as the college a borrower attended, or some mixture of the two, using machine learning techniques to correlate these criteria with lending risk. The trend has caught on, and more fintech lenders and fintech partnerships are entering the market each year.

And why not? If traditional lenders, with years of experience in underwriting and with rigorous compliance controls, sometimes fail at complying with fair lending standards, might there be another way? Perhaps.

One of the most discussed impacts of fintech has been on who is able to receive credit. Unburdened by the many risk requirements of traditional lenders, fintech companies and partnerships are able to extend credit to a wider variety of people, offering credit access to the underbanked[6] and creating opportunities for profit in markets that may not be accessible to other financial institutions.[7]

Although the possibilities for reaching a greater diversity of customers seem limitless, financial institutions and credit providers that use complementary data and artificial intelligence (AI) must consider state and federal consumer protection laws when they use novel technologies and criteria for lending.

AI and Lending

In particular, using alternative criteria and AI for credit decisions can violate fair lending laws, even when the criteria used in the credit decision are not based on a protected class, such as race, gender, religion, or marital status. Federal regulators have been homing in on instances of discriminatory lending based on alternative lending criteria and unconventional lending programs, exposing organizations to systemic operational changes and substantial regulatory costs.

There are ways to avoid regulatory risk and still reach underserved markets when using alternative lending models; however, it helps to understand how they work in order to craft salient, useful questions when working with a client’s computing and data professionals. Understanding machine bias and how AI can go wrong is the first step. Understanding what regulators are looking for when considering alternative data is the second. This information, taken together, will assist attorneys who advise lenders that are using expanded data and/or machine learning to make credit decisions.

What Is Bias, Anyway?

Data scientists use the term "bias" or "prediction bias" to refer to a program's inability to accurately reflect the reality that it is supposed to measure.[8] When financial regulators refer to bias caused by AI systems, the term refers to results prejudiced against a protected group.[9] The concepts of prediction bias, which describes how a program performs, and discriminatory bias are linked and in many cases have similar causes, but prediction bias is morally neutral, whereas discriminatory bias is not.

A machine learning model is the way certain computer programs seek to categorize and connect things. A model can be used to determine the risk of flooding in a Florida city or the likelihood that a borrower will pay back a loan. The things that models seek to categorize are called “examples,” and models categorize the differences between examples based on the relationships between known characteristics called “features” and unknown characteristics called “labels.”

For instance, if we want to build a machine learning model to help us decide whether to advance funds to a borrower based on the borrower’s income, address, and FICO score, the borrower is the “example”; the income, address, and FICO score are the “features”; and the likelihood that the borrower is an acceptable credit risk is the “label.” The model created will provide a representation of the relationships between the features and the labels based on the data it has, but that representation will not necessarily create accurate predictions when presented with new information.
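To make these terms concrete, here is a minimal sketch in Python. The library choice, column names, and data values are our own illustrative assumptions, not a depiction of any particular lender's system.

```python
# A minimal sketch of the example/feature/label vocabulary described above.
# All data values are invented, and the choice of scikit-learn's logistic
# regression is purely illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each row is an "example" (a borrower); income, zip_code, and fico_score are
# "features"; repaid is the "label" the model learns to predict.
training_data = pd.DataFrame({
    "income":     [42_000, 85_000, 61_000, 38_000],
    "zip_code":   [2134, 2139, 2143, 2134],   # stand-in for the borrower's address
    "fico_score": [640, 760, 700, 590],
    "repaid":     [0, 1, 1, 0],               # 1 means the loan was repaid
})

features = training_data[["income", "zip_code", "fico_score"]]
labels = training_data["repaid"]

model = LogisticRegression().fit(features, labels)

# The fitted model is a representation of relationships found in the training
# data; its predictions for new applicants are only as good as that data.
new_applicant = pd.DataFrame(
    {"income": [55_000], "zip_code": [2139], "fico_score": [680]}
)
print(model.predict_proba(new_applicant)[0, 1])  # estimated repayment probability
```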

A model can fail in one of two ways. A model with high “variance” is a model that captures the relationships in training data rather well, but fails to translate that knowledge to new information. A model with high “bias” is one that fails to do a good job of capturing the relationships at all. In general, a high bias model results from using the wrong programming technique for the specific task, but can also be caused by poor data selection. Poor data selection also leads to models with high variance.
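The distinction is easier to see in code than in prose. The sketch below uses synthetic data and arbitrary model choices; the point is only the pattern of scores. A high-variance model performs well on its training data but worse on new data, while a high-bias model performs poorly on both.

```python
# Diagnosing high variance versus high bias by comparing performance on
# training data with performance on held-out data. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))  # five numeric features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree tends toward high variance: near-perfect on the
# training set, noticeably worse on new data.
overfit = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)

# A one-level "stump" tends toward high bias: it cannot capture the
# relationship well on either set.
underfit = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)

for name, m in [("high variance", overfit), ("high bias", underfit)]:
    print(name,
          "train:", round(m.score(X_train, y_train), 2),
          "test:", round(m.score(X_test, y_test), 2))
```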

It’s All about Data and Critical Thinking

When data scientists are talking about “data bias,” they are talking about errors in selecting data that lead to both high bias and high variance models. The data scientists are focused on selecting the appropriate data in order to obtain more accurate results from the models.

When regulators are discussing bias, they are concerned with preventing models from making decisions based on the protected characteristics of the people involved. From a fair lending perspective, a machine learning model that causes a bank to deny loans to more women than men would be improperly biased, whereas a model that denies loans equally to both groups, but does an equally poor job for both groups, is not.
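In practice, spotting this kind of discriminatory bias often starts with a simple comparison of outcomes across a protected class. The sketch below is only an illustration: the decisions are hypothetical, and the 0.8 screening threshold is borrowed from the employment context's "four-fifths" rule rather than from any fair lending standard.

```python
# Comparing approval rates across a protected class. Hypothetical data;
# the 0.8 threshold is a screening heuristic, not a legal test.
import pandas as pd

decisions = pd.DataFrame({
    "sex":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 1],
})

approval_rates = decisions.groupby("sex")["approved"].mean()
print(approval_rates)

# Flag a large gap between groups for further review.
ratio = approval_rates.min() / approval_rates.max()
if ratio < 0.8:
    print(f"approval-rate ratio {ratio:.2f} warrants a closer look")
```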

When working with organizations that use data and AI in lending, it’s important to connect the dots that the data set might not. This is the time to use both common sense and critical thinking to consider what relationships might not be apparent to a model. This is especially important as data sets expand beyond the traditional income, debt-to-income ratio, and credit scoring systems on which many institutions have relied for years.

The FDIC recently commissioned a paper on the use of digital footprints in lending determinations, and the CFPB issued a set of "principles" meant to ensure that consumers remain protected as AI and the data it uses become more commonplace. Both of these publications show that regulators have a keen interest in continuing to ensure that lending remains fair without regard to the data used to make lending decisions. More aggressively, New York's legislature has introduced a bill that specifically prohibits the use of certain types of data in lending models.[10]

As we have noted above, data bias and discriminatory bias are usually caused by poor data selection. As an advisor to organizations that use data, the best counsel you can offer is to connect the relationships that your client's programming team may not be able to see from the data alone. Below, we discuss steps for advising AI lenders on how to overcome common data issues that can lead to fair lending problems.

Again, in most cases, discrimination in a model is caused by problems in data collection and “feature” selection (picking the types of information that will go into a model). In other words, avoiding discriminatory bias requires an understanding of the processes used to avoid data bias in building machine learning models.

The following four data collection problems can impact a lender’s model: use of a prohibited feature, use of correlated features, selection bias, and imbalanced data sets. To illustrate the manner in which these issues can arise, let us visit our fictional lending organization: XYZ Online Loans. XYZ wants to use information about potential borrowers to decide whether to extend credit.

Data Problem 1: The Use of Prohibited Features. A bank is prohibited from making a decision about whether to extend credit based on the borrower’s sex, so the borrower’s sex is a prohibited feature.

Lawyer Answer: Sex is obviously only one protected characteristic under U.S. fair lending laws. Ensure that those programming or writing machine learning models understand that features that implicate sex or any other protected characteristic should not be used in the model itself. It is not as simple as "sex" or "race." Consider the Department of Housing and Urban Development's (HUD) recent suit against Facebook, in which advertisers could target housing ads so as to exclude people with an interest in childcare; HUD charged that this was discrimination on the basis of familial status. In addition, consider a model that uses higher education as the basis for determining borrower risk and whether higher education may ultimately exclude borrowers unfairly based on age, race, or national origin.
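One way to operationalize this advice is a screening step that removes protected characteristics, and flags possible proxies for them, before any model is built. The column names and the proxy list below are hypothetical; what matters is that the screen runs before training, with counsel involved in deciding what counts as a proxy.

```python
# Screening candidate features against protected characteristics and
# suspected proxies before model building. Column names are hypothetical.
import pandas as pd

PROTECTED = {"sex", "race", "religion", "marital_status", "age", "national_origin"}
# Features that may implicate a protected characteristic even though they do
# not name one (compare the "interest in childcare" example above).
SUSPECT_PROXIES = {"childcare_interest", "college_attended"}

applicants = pd.DataFrame(columns=[
    "income", "fico_score", "sex", "college_attended", "childcare_interest"
])

candidate_features = set(applicants.columns)
usable = candidate_features - PROTECTED - SUSPECT_PROXIES
flagged = candidate_features & (PROTECTED | SUSPECT_PROXIES)

print("use in model:", sorted(usable))
print("exclude or review with counsel:", sorted(flagged))
```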

Data Problem 2: Correlated Features. XYZ’s model has analyzed borrower loan histories and discovered that borrowers with long hair have a 0.1 percent default rate, whereas those with short hair have a 0.2 percent default rate. However, 70 percent of the borrowers with short hair are male. Hair length correlates with sex, the prohibited feature.

Lawyer Answer: Identifying the features that actually cause defaults and using those features to build models will generate better models. When a potentially usable feature correlates to sex, using that feature can improperly bias the model based on sex. Even though hair length might be a potential feature from a purely statistical point of view, sex remains a prohibited feature. Correlation, which occurs when one measure changes value in step with another measure, does not imply causation. Unless a causal connection between hair length and default rates can be established, the feature should not be used for analysis purposes. Even if some causal relationship exists, care should be taken to normalize the data to avoid indirectly making credit decisions based on sex.
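A correlation of this kind is usually easy to surface before a feature goes into a model. The sketch below invents a small data set consistent with the hair-length example above; a simple cross-tabulation or correlation check is enough to raise the flag.

```python
# Checking whether a candidate feature (hair length) correlates with a
# prohibited one (sex). The data are invented to match the example above.
import pandas as pd

borrowers = pd.DataFrame({
    "hair": ["short"] * 10 + ["long"] * 10,
    # 70 percent of the short-haired borrowers are male
    "sex":  ["M"] * 7 + ["F"] * 3 + ["F"] * 8 + ["M"] * 2,
})

# Share of each sex within each hair-length group
print(pd.crosstab(borrowers["hair"], borrowers["sex"], normalize="index"))

# A numeric correlation between the encoded feature and the protected class
hair_num = (borrowers["hair"] == "short").astype(int)
sex_num = (borrowers["sex"] == "M").astype(int)
print("correlation with sex:", round(hair_num.corr(sex_num), 2))
```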

Data Problem 3: Selection Bias. Selection bias occurs when insufficient attention has been paid to the sources of data; the term encompasses a number of potential errors.

Assume, for example, that a lender is building a machine learning model based on historical information showing the results of prior lending decisions. The lender has two branches. One branch services a neighborhood where many of the residents are single. The second branch services a neighborhood where many of the residents are married, but the loan officers at that branch tended to reject most loan applications from married applicants. A machine learning model built on this data may, in considering borrower location, discriminate against married applicants because it will embed in its decision structure the prior biases of the loan officers. This is selection bias.

Similarly, consider a lender that builds a model based solely on data collected about users through its app. If members of a particular age group are more likely to bank in person at a branch, rather than use the app, the model will not reflect their behavior accurately and might discriminate against them. This is also selection bias.

Lawyer Answer: Consider the source! Selection bias is avoided primarily by spending additional time and effort reviewing the sources of data used to build the model, analyzing that data for inherent bias, and understanding the business processes used to create that data. Advising your client to include various business departments and a diverse selection of individuals when reviewing the data set will help identify potential issues. This is also an excellent time to review the testing, policies, and practices used in underwriting decisions to ensure that your client is not building on a shaky or discriminatory lending foundation.
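One concrete review step is to compare how groups are represented in the historical data with the applicant population the lender actually expects to serve. The figures below are hypothetical; the point is the comparison itself, which can be repeated for any data source (for example, app users versus branch customers).

```python
# Comparing group representation in the training data with the expected
# applicant pool. All counts are hypothetical.
import pandas as pd

training = pd.Series({"single": 9_200, "married": 800}, name="training_share")
applicant_pool = pd.Series({"single": 5_500, "married": 4_500}, name="expected_share")

comparison = pd.concat(
    [training / training.sum(), applicant_pool / applicant_pool.sum()], axis=1
)
# A large gap between columns signals that one group's history is
# underrepresented and that the sources feeding the model need review.
print(comparison)
```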

Data Problem 4: Imbalanced Data Sets. Imbalanced data sets occur when machine learning models lack a complete data set. For a successful model, there must be a large amount of data. When a certain group is underrepresented in the sample, predictions relating to that group will be less accurate. If a model is built on a data set that contains little information about people who are Asian American, the resulting model will do a poor job of decision making when a potential borrower is Asian American. A lender using such a model may end up denying loans to Asian American borrowers who would have received a loan had they belonged to another race.

Lawyer Answer: Data set imbalance is addressed by identifying the important categories of data within the set and remedying the imbalance by collecting additional data so that all groups are well represented, or by building out synthetic data to help the model generate better results. When advising your client, take the time to ensure that they are holistically considering the content of the data. If data cannot be found that represents all potential borrowers, it’s time to pause and consider why, and what that might mean about the quality of the data.
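Two common technical responses, sketched below with hypothetical group labels and counts, are oversampling the underrepresented group or weighting it more heavily during training. Neither substitutes for collecting better data, but both are reasonable stopgaps to raise with the data team.

```python
# Two ways to address an imbalanced data set: oversample the smaller group,
# or weight it more heavily. Group labels and counts are hypothetical.
import pandas as pd
from sklearn.utils import resample

data = pd.DataFrame({
    "group":  ["A"] * 950 + ["B"] * 50,   # group B is underrepresented
    "income": range(1_000),
    "repaid": [1, 0] * 500,
})

minority = data[data["group"] == "B"]
majority = data[data["group"] == "A"]

# Option 1: oversample the minority group so both are equally represented.
balanced = pd.concat([
    majority,
    resample(minority, replace=True, n_samples=len(majority), random_state=0),
])
print(balanced["group"].value_counts())

# Option 2: many scikit-learn estimators accept per-sample weights instead,
# e.g. model.fit(X, y, sample_weight=weights), which avoids duplicating rows.
```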

Ultimately, the best way to advise your fintech and AI clients is to understand where the processes begin. They begin with data. A model is only as useful as its data. Whether you are serving the underbanked or trying to open up to borrowers working in the gig economy, the quality of the data and an understanding of its relationships will be key. Advising clients on those relationships and counseling them toward more thorough and complete data sets is one of the best ways to prepare them for the future of fair lending.


[1] For a more in-depth discussion of other applicable fair lending laws, like the FCRA and the Civil Rights Act of 1964, read this FTC report.

[2] Wells Fargo settled with the Department of Housing and Urban Development for $5 million when it denied loans to women who were pregnant, had recently given birth, or who were otherwise on maternity leave.

[3] JP Morgan Chase settled with the DOJ for stipulated ECOA violations.

[4] A brief discussion of the ways that data is being used in lending. We are not discussing the types of data used here; rather, we offer tips for advising organizations that do use these types of data. https://www.npr.org/sections/alltechconsidered/2017/03/31/521946210/will-using-artificial-intelligence-to-make-loans-trade-one-kind-of-bias-for-anot

[5] Upstart, a fintech lender based in California, successfully received a “no-action letter” from the CFPB in exchange for providing the regulator with ongoing information about its AI/ML loans.

[6] The CFPB’s Office of Research conducted a study of how many Americans were underbanked and unbanked, calling such people “credit invisibles.” These demographics are often correlated to age and race, which is useful to remember when considering how to structure data sets for marketing to these communities.

[7] A discussion of the underbanked and underwriting in China and beyond can be found here.

[8] See Google Developers, Machine Learning Glossary; Brookings Institution, Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms (May 22, 2019).

[9] 15 U.S.C. § 1691(a).

[10] N.Y. S.B. S2302 prohibits the use of social network information in lending decisions.
