Pass the Stop Discrimination by Algorithms Act

Landlords can’t legally discriminate against you. But their algorithms can.

And that’s not right.

We need AI fairness now. D.C. Council must pass the Stop Discrimination by Algorithms Act.

What is AI?

Artificial intelligence is computer software trained to mimic and automate human functions, such as decision-making. These models and algorithms analyze data about people, like rental or credit history, to make predictions about who can qualify for housing, employment, health care, education, credit and lending, and more.

Artificial intelligence is involved in almost every aspect of modern life, from applying for housing or a job to getting a mortgage or receiving health care when you’re sick.

How is AI used?

AI is used to make life-changing decisions about us. Every day, landlords, employers, health care companies, and other service providers use algorithms to automatically deny people services and opportunities.

How does AI discriminate?

There is no such thing as unbiased AI. Artificial intelligence relies on historical information, which can be inaccurate, unfair, or colored by past discrimination. In any situation where a person might discriminate intentionally or unintentionally, AI can also discriminate. That’s because AI is programmed by people and makes predictions and decisions based on what has happened in the past.
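To make that concrete, here is a minimal, hypothetical sketch in Python. The data, the ZIP codes, and the 50-percent rule are all invented for illustration; no real screening tool is this simple. The point is that a model trained on biased historical decisions can repeat the bias even though it never looks at race:

    # A hypothetical sketch: invented data, not any real screening tool.
    # Historical rental decisions recorded as (zip_code, was_approved).
    # Suppose past landlords unfairly denied applicants from one
    # majority-Black neighborhood, regardless of their qualifications.
    history = [
        ("20016", True), ("20016", True), ("20016", True), ("20016", False),
        ("20019", False), ("20019", False), ("20019", False), ("20019", True),
    ]

    # "Training": learn the historical approval rate for each ZIP code.
    rates = {}
    for zip_code, approved in history:
        seen, yes = rates.get(zip_code, (0, 0))
        rates[zip_code] = (seen + 1, yes + approved)

    def predict(zip_code: str) -> bool:
        """Approve only if the historical approval rate is at least 50%."""
        seen, yes = rates[zip_code]
        return yes / seen >= 0.5

    # The model never sees race, yet it inherits the old discrimination:
    print(predict("20016"))  # True  -- approved
    print(predict("20019"))  # False -- denied

Because a ZIP code can stand in for race, this "neutral" model ends up making the same discriminatory calls that are baked into its training data.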

Instead of removing bias, artificial intelligence can worsen existing discrimination and deny opportunities to people who have been discriminated against in the past. A 2019 study revealed that a widely used clinical algorithm showed racial bias by requiring Black patients to be much sicker than white patients to be recommended for the same care. The inequality in medical treatment for Black people will only worsen as racist technology becomes further embedded in health care.

Studies and experience have also shown that facial recognition systems are often less accurate for people of color than for white people. Groundbreaking research conducted by Black scholars Joy Buolamwini, Deb Raji, and Timnit Gebru showed that yes, algorithms can be racist. Buolamwini and Gebru’s 2018 research concluded that some facial analysis algorithms misclassified Black women nearly 35 percent of the time, while nearly always getting it right for white men.

In a test the ACLU conducted of Amazon’s facial recognition tool, called “Rekognition,” the software incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime. The members of Congress falsely matched with the mugshot database used in the test included Republicans and Democrats, men and women, and legislators of all ages, from all across the country. The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus.

In 2020, police arrested Robert Williams — a Black man living in a Detroit suburb — on his front lawn in front of his wife and two little daughters (ages 2 and 5). Robert was hauled off and locked up for nearly 30 hours. His crime? Face recognition software used by Michigan State Police told the cops that Robert Williams was the watch thief they were hunting. But the AI was wrong, because face recognition technology often can’t tell Black people apart. That includes Williams, who had only one thing in common with the suspect caught on the watch shop’s surveillance feed: both are large-framed Black men.

Has AI discriminated in D.C.?

Yes. In 2022, the Electronic Privacy Information Center reported on many ways AI discriminates in D.C.

Take the story of Juan Luis Hernandez, for example. Juan struggled to find housing within his budget and close to public transportation after an algorithmic assessment used in D.C., known as the Vulnerability Index and Service Prioritization Decision Assistance Tool (VI-SPDAT), declined to prioritize him for permanent supportive housing. After that denial, Juan found and applied for an apartment in his budget and near public transportation, but he was rejected when RentGrow, a tenant screening tool used in D.C., reported that he had a criminal record in Texas, even though he had never been to the state. He requested the full report from RentGrow, which also showed an eviction in Nevada. Both records were under his name, but they actually belonged to two other people who shared it, and by the time the errors were corrected, the apartment had already been rented out. Juan ran into the same problem with similar tenant screening services such as CoreLogic, and after a year spent correcting data inaccuracies, his Rapid Rehousing funds expired before he could get into an apartment.

What laws protect us from AI discrimination?

We have laws that protect us from discrimination by people and by companies, but there are no laws protecting us from discrimination by AI. The law has not kept up with technology, and without protections, AI can harm people and prevent fair and equal access to services and opportunities.

Most of the time, people have no idea why they were denied, and they certainly don’t know that an algorithm, not a person, rejected them. Right now, we don’t know which District landlords, employers, and service providers use algorithms to make decisions about our housing, our employment, our health care, and so many other aspects of our lives. And we deserve to know.

What can the D.C. Council do?

The D.C. Council can and should pass the Stop Discrimination by Algorithms Act (SDAA).

The SDAA would prohibit algorithmic discrimination based on protected characteristics – like race, gender, and disability – in housing, education, employment, credit and lending, and insurance.

The SDAA would also require companies to be transparent about when and how they use artificial intelligence to inform their decisions.

Everyone, regardless of whether they are Black, white, or brown, should be treated fairly, by people and by artificial intelligence tools alike.

We cannot allow algorithms to continue cycles of discrimination and inequality. We need protections to ensure fair, equitable AI systems for everyone.

That’s why the D.C. Council should pass the SDAA.

District leaders need to make sure that artificial intelligence tools are fair when they make life-changing decisions about us.

D.C. Council must pass the Stop Discrimination by Algorithms Act.