Seeing potential trouble ahead with AI, agencies vow to enforce anti-discrimination laws, rules

Protecting consumers from unlawful discrimination arising from artificial intelligence (AI) and other emerging automated systems is the aim of a pledge made Tuesday by the federal consumer financial protection agency and three other federal agencies, according to a release.

The Consumer Financial Protection Bureau (CFPB) said it joined three other agencies in committing to enforce their respective laws and regulations. The four agencies pointed out that the commitment is not new: in the past, they said, they have “expressed concerns about potentially harmful uses of automated systems and resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.”

The three agencies joining the bureau in the statement were the Civil Rights Division of the U.S. Department of Justice, the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission (EEOC).

In their joint statement, the agencies asserted that automated systems, including those sometimes marketed as AI, are becoming increasingly common. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” they stated.

The statement makes two key points:

  • The agencies’ enforcement authorities apply to automated systems: their jurisdiction covers civil rights, non-discrimination, fair competition, consumer protection, and “other vitally important legal protections.”
  • Automated systems may contribute to unlawful discrimination and otherwise violate federal law. The agencies contended that many automated systems rely on large amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. “While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination,” they stated.

Separately, the CFPB said it has launched a way for tech workers to blow the whistle on misuse of AI. “The CFPB encourages engineers, data scientists and others who have detailed knowledge of the algorithms and technologies used by companies and who know of potential discrimination or other misconduct within the CFPB’s authority to report it,” the agency said in its statement. “CFPB subject-matter experts review and assess credible tips, and the CFPB’s process ensures that all credible tips receive appropriate analysis and investigation.”

Joint statement on enforcement efforts against discrimination and bias in automated systems