From Human to Bot, From Biased to…Fair?

Authors: David Roos and Susan Ehrlich

Fairness in financial services remains an unsolved riddle

Core has spent over a decade investing in businesses that expand access to affordable financial services for the masses. Over that time, we’ve been encouraged by a near halving of the unbanked rate in the U.S., with outcomes improving disproportionately among minority consumers. Yet even with legislative and regulatory focus on eliminating disparate impacts and outcomes, issues such as higher borrowing rates for minorities remain an ongoing challenge. Consider that houses in majority-Black neighborhoods are valued 21% lower than equivalent houses in non-Black neighborhoods, and it is clear that eliminating human bias remains an arduous task.

So as we train our financial system to become more automated (and we are betting that automation is inevitable), are we training it on data that is inherently biased? Will a flawed human-centric system create an increasingly flawed AI-centric system? In a frustratingly divided society, who determines what is fair? And is it anti-capitalist to optimize for fairness?

No matter what you believe the answers to these questions are, regulators are making it clear that fairness is the priority and that builders must follow a standard set of guidelines. From Biden’s executive order on transparency in AI to the EU’s AI Act, ethics in AI is moving from an obscure technical problem to a CEO and board issue.

As a leading champion of fintech for good, Core is keenly focused on finding the best solutions for addressing and advancing fairness. We believe that, as one of the most regulated industries in the world, financial services has a chance to lead by example in implementing automated solutions equitably and fairly. We have conviction that addressing fairness is not only just, but great for business. Fairness is the most important factor in building trust, both with customers and within an organization. The most successful businesses will go well beyond the regulatory requirements and embrace the opportunity to improve transparency, advance fairness, and enhance trust far past what was possible with a human-centric system.

Why is fairness so hard? 

  1. The word fair, a social construct by design, means different things to different people. If people are treated fairly, does that make apparently unfair outcomes okay? Is fairness measured at an individual level or a group level? If bias results in better outcomes for the business, isn’t that fair to the business? Part of what needs to be developed, at both a legislative and an operational level, is a robust framework for defining fairness and prioritizing the mitigation of harm.
  2. Right now, we rely on humans to protect against bias. That means even the most advanced automation in financial services has a human-in-the-loop. We neither have the tools to automate de-biasing models, nor do we trust bots as much as humans (for now). This not only limits the scalability of bringing models into production, but also exposes the resulting outcomes to human fallibility.
  3. In AI, particularly generative AI, many models in their current form do not offer a clear explanation of how an outcome is generated. For example, if a consumer is rejected for a loan, an AI model may not be able to say why. This is problematic, particularly when African Americans are 80% more likely to be rejected by an AI loan officer. This doesn’t fly with financial services regulators (or those in health care or insurance). Explainable outcomes are a must-have when financial decisions are being made.
  4. AI models are constantly changing. What was an unbiased model can drift toward bias over time. These models require near-constant monitoring and validation, a cumbersome process that has proven costly in human capital (a minimal sketch of such a check follows this list).
  5. And finally, it needs to be stated that no system will ever be completely free of bias. The goal is to mitigate bias and its related harms as much as possible, while still holding automation in finance to the highest standards for its adoption.
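To make point 4 concrete, below is a minimal sketch of the kind of automated fairness check a monitoring pipeline might run over batches of production decisions. It assumes only that approval decisions are logged per batch for a protected group and a reference group; the function names are ours, and the 0.8 alert threshold borrows the illustrative “four-fifths rule” rather than any specific lending regulation.

```python
def approval_rate(decisions: list) -> float:
    """Share of applications approved in a batch of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected: list, reference: list) -> float:
    """Protected-group approval rate relative to the reference group."""
    return approval_rate(protected) / approval_rate(reference)

def drift_alerts(batches, threshold: float = 0.8):
    """Flag batches whose adverse impact ratio falls below the threshold.

    `batches` is a sequence of (protected_decisions, reference_decisions)
    pairs, e.g. one pair per day or per scoring run.
    """
    return [
        (i, ratio)
        for i, (protected, reference) in enumerate(batches)
        if (ratio := adverse_impact_ratio(protected, reference)) < threshold
    ]

# Example: batch 0 is balanced, batch 1 has drifted and triggers an alert.
batches = [
    ([1, 1, 0, 1], [1, 1, 1, 0]),  # ratio = 0.75 / 0.75 = 1.00
    ([0, 0, 1, 0], [1, 1, 1, 0]),  # ratio = 0.25 / 0.75 = 0.33
]
print(drift_alerts(batches))  # [(1, 0.333...)]
```

In practice an alert like this would route the batch to human review rather than trigger an automatic model change; the point is that detection can run continuously instead of waiting for a periodic validation cycle.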

Achieving Fairness in Financial Services at Scale

If implemented properly, automated systems can avoid exacerbating society’s existing biases and instead bring affordable financial services to the “invisible primes”: consumers and businesses that are more creditworthy than the traditional financial system has acknowledged to date. At first this requires a human-in-the-loop to validate model decisions, but over time explainability will become fully automated. We see a few must-haves for achieving this vision:

Enabling Developers to Build Fairer Models: Data scientists and engineers strive to create the very best models for their business problems. But disparate impacts and outcomes are difficult to detect, much less address!

Approaches must:

  • Be Platform Agnostic. Rather than creating a platform that controls the model-building experience itself, successful solutions will let data scientists work in their preferred development environments and programming languages. Tools like Straylight automate data engineering for financial institutions, operate within the dev environments of their clients, and don’t require data or IP transfers to a third-party platform.
  • Incorporate Explainability. Solutions need to provide real-time explainability by monitoring the model itself, not just its outcomes. Explanations must be customizable to audiences ranging from data scientists to executive decision makers to bank regulators.
  • Embed Compliance. Building compliance into the development process upfront makes compliance testing a more seamless exercise. Lengthy, often contentious iterations of model-governance review are eliminated when models are optimized for fairness as well as for value from the start. Solutions like Stratyfy are designed around this approach and enable clients to deploy auditable, editable models safely.
  • Improve Transparency. Solutions like SolasAI help data scientists identify which features are driving bias and recommend alternatives that maintain model quality while reducing bias. By providing clarity on the trade-offs between business value and reducing disparities, companies improve transparency, fairness, and trust in the models they deploy (see the sketch after this list).
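As a generic illustration of that last bullet (our own sketch, not SolasAI’s actual method), one way to identify disparity-driving features is a permutation test: measure the approval-rate gap between groups, then re-measure it with each feature shuffled in turn. Features whose shuffling shrinks the gap the most are the strongest candidates to replace with less-biased alternatives. This assumes a fitted model exposing a scikit-learn-style `predict`, a NumPy feature matrix `X`, and a binary `group` indicator:

```python
import numpy as np

def approval_gap(model, X, group):
    """Difference in predicted approval rates: reference (0) minus protected (1)."""
    preds = model.predict(X)
    return preds[group == 0].mean() - preds[group == 1].mean()

def disparity_drivers(model, X, group, n_repeats=10, seed=0):
    """Score each feature by how much the group gap shrinks when it is permuted.

    Returns the baseline gap and a {feature_index: mean_gap_reduction} map;
    larger values indicate features contributing more to the disparity.
    """
    rng = np.random.default_rng(seed)
    base_gap = approval_gap(model, X, group)
    contributions = {}
    for j in range(X.shape[1]):
        reductions = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            reductions.append(base_gap - approval_gap(model, X_perm, group))
        contributions[j] = float(np.mean(reductions))
    return base_gap, contributions
```

The output makes the trade-off explicit: pairing each feature’s gap contribution with its contribution to model accuracy lets teams decide, feature by feature, how much predictive value they are willing to exchange for a smaller disparity.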

10x’ing Model Risk Management (MRM) Teams: Simultaneously, the volume of models in production is exploding, and the regulatory requirements governing their implementation are justifiably becoming more burdensome. Embedding compliance upfront in model development is a necessary start, but the process needs streamlining across the entire governance lifecycle for automation to scale. The current process of documenting and validating models is so manual and resource-intensive that financial institutions are wary of expanding use cases and adopting more models. And this applies to all models, not just shiny new GenAI models.

To support adoption at scale, we want to see: 

  • More Automation: Manual processes won’t scale with an expanding set of models. We want to see solutions, such as ValidMind, that automate the documentation process and streamline workflows across data engineers, model risk management teams, and compliance (a toy illustration follows this list). With these solutions, a financial institution can bring a model into production with 1/10th of its current staff at 10x the speed.
  • Financial Institution Friendly Customization: Financial institutions aren’t known for willingly changing processes or adopting outsourced solutions, particularly within risk management teams. A solution provider in this space will have to work with existing bank tooling and third-party bias testing to enable model validation without forcing an immediate change to what the institution has learned to trust.
  • Holistic: The winner in this space will set the standard for model risk management and become a trusted source for proper model validation. In doing so, it will become a required partner for in-house model implementations and third-party vendor models alike. It will bring transparency not only throughout the organization, but to the regulators and end consumers who require it. It will need to be quick to adopt new standards around fairness and bias. And it will need to work on all types of models, from the traditional ML models in production today to the GenAI models that will evolve over the next 5-10 years.
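As a toy illustration of the documentation automation described above (ours, not ValidMind’s actual product or API), imagine capturing validation evidence once as structured metadata and rendering it into a standardized model card that data engineers, MRM teams, and regulators all read from. Every name and value below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Structured metadata captured during development and validation."""
    name: str
    version: str
    owner: str
    intended_use: str
    training_data: str
    fairness_tests: dict  # metric name -> (value, threshold, passed)

def render_model_card(m: ModelRecord) -> str:
    """Render the record as a standardized, human-readable model card."""
    lines = [
        f"Model Card: {m.name} v{m.version}",
        f"Owner: {m.owner}",
        f"Intended use: {m.intended_use}",
        f"Training data: {m.training_data}",
        "Fairness tests:",
    ]
    for metric, (value, threshold, passed) in m.fairness_tests.items():
        status = "PASS" if passed else "FAIL"
        lines.append(f"  {metric}: {value:.3f} (threshold {threshold}) {status}")
    return "\n".join(lines)

# Hypothetical record for demonstration only.
card = render_model_card(ModelRecord(
    name="small_business_underwriting",
    version="2.1",
    owner="credit-risk@bank.example",
    intended_use="Pre-qualification for term loans under $250k",
    training_data="2019-2023 originations, deduplicated, PII removed",
    fairness_tests={"adverse_impact_ratio": (0.91, 0.80, True)},
))
print(card)
```

Because the same record feeds every audience, updating a test result once updates the documentation everywhere; the bottleneck today is re-assembling this evidence by hand for each review, which is where the 10x speed claim becomes plausible.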

Across the full lifecycle of model development and deployment (from data sourcing and engineering, to model building and testing, to compliance, auditing, and governance), we see opportunities for innovation that accelerate the deployment of AI and advance our ability to improve transparency, fairness, and trust.

Striking a balance between innovation and ethical considerations when implementing AI requires collaboration and conversation across our industry. We’d like to hear from you: how do you suggest we address this challenge?
