
Telling AI not to be biased surprisingly works: a study

Ridding large language models of patterns of racial bias can be as simple as telling them to be impartial.

Simply instructing commercially available LLMs such as ChatGPT to “not use prejudice” minimized racial disparities in mortgage approval recommendations, a Lehigh University study found.

As the hype around artificial intelligence grew, so did concerns about historical racism that can be baked into AI models, and about how that could affect the lending process if the technology is used in a borrower’s home buying journey. While lenders don’t currently rely on AI to make key decisions in the origination process, the study aims to find out how outcomes would be affected if they did.

Bias is indeed baked in, the Lehigh University study found. But there are ways to reduce it.

Using a sample of 1,000 loan applications drawn from a 2022 Home Mortgage Disclosure Act (HMDA) dataset and manipulating race and credit scores, the researchers found that various top commercial LLMs recommend denying more loans and charging higher interest rates to black applicants compared to otherwise identical white applicants.
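The design described above is a matched-pair audit: two applications that differ only in race are submitted and the model’s recommendations are compared. A minimal sketch of how such pairs can be constructed is below; the field names and prompt wording are illustrative, not the study’s actual template.

```python
# Minimal sketch of a matched-pair (audit-style) LLM underwriting test.
# Field names and prompt wording are illustrative, not the study's template.

def build_prompt(application: dict) -> str:
    """Render one loan application as an underwriting prompt."""
    return (
        "You are a mortgage underwriter. Recommend approve/deny and an "
        "interest rate for this application:\n"
        + "\n".join(f"- {k}: {v}" for k, v in application.items())
    )

base_application = {
    "loan_amount": 250_000,
    "income": 85_000,
    "credit_score": 640,
    "loan_to_value": 0.9,
}

# Create two applications identical in every dimension except race.
prompts = []
for race in ("Black", "White"):
    app = dict(base_application, race=race)
    prompts.append(build_prompt(app))

# The two prompts now differ only in the race field, so any systematic
# difference in the model's recommendations is attributable to race alone.
```

Each prompt would then be sent to the model under test, with approval decisions and quoted rates aggregated across many such pairs.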

“There is a clear bias. It exists within this framework, even though it’s prohibited by law,” said Donald Bowen, an assistant professor of finance at the Lehigh College of Business and one of the study’s authors.

To do this study, the researchers used a number of LLMs, including OpenAI’s GPT-3.5 Turbo (2023 and 2024) and GPT-4, as well as Anthropic’s Claude 3 Sonnet and Opus, and Meta’s Llama 3 8B and 70B. Bowen and his colleagues did this to see how widespread the phenomenon was.

“(We wanted to see if it’s) specific to how OpenAI trains its model, or if it’s a bit of a broader phenomenon and it seems to be broader,” he added.

Using OpenAI’s GPT-4, the researchers found that black applicants would need credit scores about 120 points higher than white applicants to receive the same approval rate, and about 30 points higher to receive the same interest rate.

“By asking various leading commercial LLMs to recommend underwriting decisions, we find strong evidence that these models make different approval and interest rate recommendations for black and white mortgage applicants with applications that are identical in all other dimensions,” the paper states. “This racial bias is greatest for applicants with lower credit scores and for riskier loans, but present across the entire credit spectrum.”

However, instructing LLMs to be unbiased may be the key to creating fairer and more equitable outcomes in the lending process. Though simple, the revised prompt leads to a notable reduction in racial disparities, the paper’s authors argue.

As a result, the black-white gap in loan approval recommendations was eliminated, both on average and across different credit scores, the Lehigh University study said.

Simply requiring LLMs to show no bias reduced the average racial interest-rate gap by 60%, with even greater effects for applicants with lower credit scores.
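The reported 60% figure is a straightforward gap comparison between the plain prompt and the debiased prompt. The interest rates below are hypothetical, chosen only to make the arithmetic concrete; they are not the study’s data.

```python
def rate_gap(black_rate: float, white_rate: float) -> float:
    """Average interest-rate gap, in percentage points, between groups."""
    return black_rate - white_rate

# Hypothetical average recommended rates (percent), not the study's data.
baseline_gap = rate_gap(7.50, 7.00)  # 0.50 pp gap with the plain prompt
debiased_gap = rate_gap(7.20, 7.00)  # 0.20 pp gap after the "no bias" instruction

# Fractional reduction in the gap achieved by the revised prompt.
reduction = 1 - debiased_gap / baseline_gap
print(f"gap reduced by {reduction:.0%}")
```

With these example numbers the gap shrinks from 0.50 to 0.20 percentage points, a 60% reduction.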

“Documenting and understanding biases is crucial to developing fair and effective AI tools in financial decision-making, and ultimately to ensuring that they do not reinforce existing inequalities,” said Bowen. “Thus, it is critical that lenders and regulators develop best practices to proactively assess the fairness of LLMs and evaluate methods to mitigate bias.”
