Discrimination and Algorithms in Financial Services: Unintended Consequences of AI
Source: Davis Wright Tremaine LLP


It's troubling enough that facial recognition software couldn't recognize Asian faces, that a crime prediction algorithm targeted black neighborhoods, that a job bank was more likely to show men highly paid executive jobs, and that a criminal recidivism model exhibited racial bias. But what about the day when an online lending platform uses big data to determine a person's credit score and systematically rejects more loan applications from women or racial minorities than from white men?

Concerns about fairness and bias, AI's so-called "white guy problem," have continued to emerge as financial services institutions use different types of algorithms to review loan applications, trade securities, predict financial markets, identify prospective employees and assess potential customers. Unfortunately, few mechanisms are in place to ensure these algorithms are not doing more harm than good for financial services companies.

Unintentional Discrimination

Since the civil rights movement, fair lending claims have focused on allegations that an institution intentionally treated a protected class of individuals less favorably than other individuals. The term "disparate impact" was first used in the 1971 Supreme Court case Griggs v. Duke Power Company. The Court ruled that, under Title VII of the Civil Rights Act, it was illegal for the company to use intelligence test scores and high school diplomas, factors shown to disproportionately favor white applicants and substantially disqualify people of color, to make hiring or promotion decisions, whether or not the company intended the tests to discriminate. A key aspect of the Griggs decision was that the power company could not show that its intelligence tests or diploma requirements were actually relevant to the jobs it was hiring for.

More recently, the government and other plaintiffs have advanced disparate impact claims that focus on the effect, rather than the intent, of lending policies. In 2015, the Supreme Court's decision in Texas Department of Housing and Community Affairs v. Inclusive Communities Project affirmed the use of the disparate impact theory. The Inclusive Communities Project had used a statistical analysis of housing patterns to show that a tax credit program effectively segregated Texans by race.

The Court's validation of disparate impact theory in Inclusive Communities remains a wake-up call for technology and compliance managers in financial services, including at fintech companies. An algorithm that inadvertently disadvantages a protected class can create expensive, even catastrophic, fair lending lawsuits, attract regulatory scrutiny and significantly harm a company's reputation.
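To make the effect-based framing concrete, the short sketch below computes approval rates by group and an adverse impact ratio for a loan portfolio. The four-fifths threshold is a screening heuristic borrowed from employment guidance, not a legal test for lending, and the data and column names are hypothetical.

```python
# Minimal sketch: screening approval outcomes for disparate impact.
# The dataframe, column names ("group", "approved") and the 0.8 threshold
# are hypothetical illustrations, not a statement of any legal standard.
import pandas as pd

applications = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate for each group.
rates = applications.groupby("group")["approved"].mean()

# Adverse impact ratio: lowest group approval rate relative to the highest.
air = rates.min() / rates.max()

print(rates.to_dict())               # {'A': 0.75, 'B': 0.25}
print(f"adverse impact ratio: {air:.2f}")
if air < 0.8:                        # four-fifths screening heuristic
    print("Potential disparate impact - investigate further.")
```

A ratio well below 0.8 does not establish liability on its own, but it is the kind of statistical signal that, as in Inclusive Communities, can support an effect-based claim and therefore deserves early attention.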

Active Engagement

Financial services companies, which are already highly regulated, must be attentive to their use of algorithms that incorporate AI and machine learning. As algorithms become more common in company operations, more risks will appear, including the risk that a seemingly innocuous algorithm may inadvertently generate biased conclusions that discriminate against communities that have less power and are not dominant in Silicon Valley circles.

AI's promise of better decision-making and personalized consumer technologies must be matched with an awareness of, and active engagement in, identifying and reducing the associated risks. For AI, this means countering biased or incomplete results, improving the transparency of decision-making, and addressing a general lack of consumer awareness and understanding.

The biggest problems concern the accuracy and integrity of data inputs and the question of what data can and should be used in developing or operating AI. Financial institutions should build in extra steps to evaluate which data is relevant, whether there are gaps or inconsistencies in the available data, how to clean the data, and whether the data is truly representative. Regardless of the expense and time involved, everyone from large established companies to small startups should figure out how to audit AI results.
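As one illustration of what such an audit step might look like, the sketch below checks a training dataset for gaps (missing values) and compares its demographic mix against a reference population. The column names and the reference shares are hypothetical assumptions, not real benchmarks.

```python
# Minimal sketch of a pre-training data audit: missing values and
# representativeness. Column names and reference population shares are
# hypothetical assumptions for illustration only.
import pandas as pd

training_data = pd.DataFrame({
    "income":   [52000, None, 61000, 48000, 75000, 39000],
    "zip_code": ["75201", "75216", "75201", None, "75230", "75216"],
    "group":    ["A", "A", "A", "A", "B", "B"],
})

# 1. Gaps: share of missing values in each column.
print("missing value share by column:")
print(training_data.isna().mean())

# 2. Representativeness: compare group shares in the data with a
#    reference population (for example, census figures for the market served).
reference_shares = {"A": 0.55, "B": 0.45}   # hypothetical reference mix
data_shares = training_data["group"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    observed = data_shares.get(group, 0.0)
    flag = "  <-- under-represented" if observed < 0.8 * expected else ""
    print(f"group {group}: data {observed:.2f} vs reference {expected:.2f}{flag}")
```

Checks like these do not replace a full fair lending review, but they surface the gaps and skews that can turn into biased model behavior later.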

Algorithms are not inherently dangerous. To the contrary, they are rapidly revolutionizing the financial services industry and making it easier than ever to improve the delivery of products. But without checks and balances to see if an artificial intelligence system is working as desired and a clear plan to address its flaws, companies are taking on increased risk.

Financial Institutions Benefit from Diversity

We know intuitively that diversity matters. It is well publicized that organizations that promote and achieve diversity in the workplace do better at attracting and retaining quality employees while also increasing customer loyalty. For financial institutions, diversity also translates into the effective delivery of essential services to communities with diverse needs.

In order to manage the risk of bias as technologies advance, financial institutions must hire a diverse cross-section of employees who actually interact with one another and are vigilant about monitoring how machine-learning systems are designed. Inclusivity matters not only from a best-practices and global business perspective, but also because an artificial intelligence system will reflect the values of its designers. Without diversity and inclusivity, a financial institution risks constructing artificial intelligence models that absorb antiquated and prejudicial stereotypes and that, when automated and released on a wide scale, violate antidiscrimination and fair lending laws.

In addition, companies will need to train the diverse teams responsible for developing machine learning on fair lending and antidiscrimination laws. By making clear to employees how important compliance with the law is, financial sector firms will be in a better position to identify discriminatory outcomes and address them quickly, with a reduced risk of harmful legal and regulatory consequences. Many law firms, including Davis Wright Tremaine, offer clients complimentary CLEs and other programs on how to comply with laws and regulatory expectations.

To monitor the impact of these efforts, the best approach is to use internal or external legal counsel (ideally diverse) who, under the protection of attorney-client privilege, can continuously monitor and test the outcomes of algorithmic programs to identify any problems. Financial institutions should already be conducting fair lending reviews with the advice of counsel and, with the increasing reliance on machine learning, should go further by running broad hypothetical scenarios to identify discriminatory outcomes and prevent adverse outcomes from occurring.
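One way such hypothetical-scenario testing might be implemented is counterfactual pairing: score applications that are identical except for an attribute that can act as a proxy for a protected class (a zip code, for instance) and flag any decision flips. The model below is a deliberately simple hypothetical stand-in for an institution's real underwriting system; its feature names, cutoff and zip codes are all assumptions.

```python
# Minimal sketch of counterfactual scenario testing: score pairs of
# applications that differ only in a possible proxy attribute and flag
# decision flips. `score_application` is a hypothetical toy model, not
# a real underwriting system.
from typing import Dict

def score_application(app: Dict) -> bool:
    """Hypothetical model: approve when a toy score clears a cutoff."""
    score = 0.5 * (app["income"] / 100_000) + 0.5 * (app["credit_score"] / 850)
    # Illustrative flaw: the model quietly penalizes one zip code.
    if app["zip_code"] == "75216":
        score -= 0.15
    return score >= 0.70

base_profiles = [
    {"income": 65000, "credit_score": 700},   # borderline applicant
    {"income": 95000, "credit_score": 780},   # strong applicant
]
proxy_values = ["75201", "75216"]             # hypothetical zip codes

for profile in base_profiles:
    decisions = {z: score_application({**profile, "zip_code": z}) for z in proxy_values}
    if len(set(decisions.values())) > 1:
        print(f"Decision flips on zip code alone for {profile}: {decisions}")
```

Run over a broad grid of profiles, this kind of test surfaces cases where outcomes turn on a proxy attribute alone, which is exactly the pattern counsel would want to catch and document before a regulator or plaintiff does.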

In addition to the testing, reviews and monitoring of algorithmic programs discussed above, consideration should be given to developing, and adhering to, a voluntary code of conduct that reflects industry best practices. Indeed, a voluntary code of conduct may be a useful tool for institutions to review and evaluate internal practices, and may insulate against later claims of bias that could invite scrutiny from regulators or other decisionmakers.

If a financial institution chooses to take a particular approach or work with a business partner that poses disparate impact risk, a lawyer should draft a report detailing the business justification for using the particular methodology or partner and the reasons for not using a less discriminatory alternative.

Discrimination and bias can take on lives of their own on digital platforms, and if they are ignored or allowed to persist, it is in the financial services industry that the consequences of that failure may be most severe.

