Dear friends,
We know that biased data leads to biased machine learning. But does the problem go beyond that? A few colleagues asked about this after a heated exchange on Twitter between Yann LeCun and Timnit Gebru (see “Image Resolution in Black and White” below).
There are plenty of documented examples of biased data contributing to bad outcomes. But suppose we find purely unbiased data and build an AI system that helps lenders optimize interest rates for payday loans. We’re careful to make sure the data, algorithms, and learned models don’t discriminate unfairly against any disadvantaged or minority group. Our results are unbiased, so we’re in the clear, right?
Unfortunately, no. Payday loans are quick-turnaround loans that often carry very high interest rates; in California, a lender can charge 459 percent interest on a $100, 14-day loan. They target low-income individuals and, in the U.S., are used disproportionately by the Black community. Thus even a fair algorithm will hit this community especially hard.
Beyond biased data, the way we frame problems, decide what to build, and choose where to deploy our systems can add to or subtract from problems of bias and privilege. An “unbiased” AI technology operating in an unfair social system can still contribute to biased outcomes.
We still have a lot of work ahead to address harmful biases throughout society. Twenty years ago, the AI community was a small group working on an exciting but obscure technology. Today our community is large, worldwide, and rapidly growing, and we contribute to applications at the center of daily life. We have a greater responsibility than ever to educate ourselves not only in the technology but also in its social context.
It’s not always easy to foresee the indirect impact of our work. Who would have guessed that poorly designed software meant to enable free speech would lead to toxic communication on social media? But with a broader perspective, I hope our community can better understand the impact of our work and make better decisions about how to help society move forward with greater fairness and less bias.
Keep learning!
Andrew