BLB #5

Racial & Gender Bias in Algorithms

This presentation is about racial and gender bias in algorithms. It focuses on the research of Joy Buolamwini, an MIT graduate student.

Buolamwini was working with facial analysis software when she noticed a problem: the software did not detect her face, because the people who built the algorithm had not trained it to recognize a broad range of skin tones and facial structures.

Her research also uncovered algorithmic bias at major companies, including IBM, Amazon, and Microsoft. In Amazon's case, a hiring algorithm used to screen hundreds of resumes ended up filtering out female applicants.
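How that kind of bias creeps in can be shown with a small, purely synthetic sketch. The Python example below (using scikit-learn; the data, feature names, and numbers are all made up for illustration and are not Amazon's actual system) trains a simple model on historically skewed hiring decisions and shows that it learns a negative weight on a gender-correlated feature, which it would then apply to new applicants.

```python
# Hypothetical sketch: a screening model trained on biased historical hiring
# data learns to penalize a gender-correlated feature. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "resumes": one genuine qualification score and one gender-proxy
# feature (e.g., membership in a women's organization), unrelated to skill.
skill = rng.normal(size=n)
gender_proxy = rng.integers(0, 2, size=n)  # 1 = proxy term present on resume

# Historical hiring labels that favored resumes WITHOUT the proxy term,
# regardless of skill -- the bias we pretend exists in the training data.
hired = ((skill + 1.5 * (1 - gender_proxy)
          + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, gender_proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out strongly negative: the
# model has absorbed the historical bias and will now filter out those
# applicants too.
print("weight on skill:        %+.2f" % model.coef_[0][0])
print("weight on gender proxy: %+.2f" % model.coef_[0][1])
```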

The main problem with these algorithms is that they were programmed once and then simply passed on to other programmers. This software can and should be retrained to keep up with ever-changing demographics, so that hiring criteria are not based on gender and facial recognition works across all skin tones.
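One concrete way to check whether software really "recognizes all skin tones" is a disaggregated audit, which is roughly the approach Buolamwini's research took: report accuracy separately for each demographic subgroup instead of one overall number, so the gaps become visible. The short Python sketch below uses made-up records purely to show the idea; the subgroup labels and results are illustrative, not real benchmark data.

```python
# Minimal sketch of a disaggregated accuracy audit: measure a classifier's
# accuracy per demographic subgroup rather than as a single average.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: list of (subgroup, predicted, actual) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        correct[subgroup] += predicted == actual
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit results for a face classifier (not real data).
records = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female",  "male",   "female"),  # misclassified
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
]

# A single overall accuracy would hide the gap that the per-group
# breakdown makes obvious.
for group, acc in disaggregated_accuracy(records).items():
    print(f"{group}: {acc:.0%} accurate")
```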

As law enforcement agencies demonstrated after the January 6 Capitol riot, facial recognition can be an invaluable tool for identifying those guilty of vandalism. As its use grows, it is vital that the software is applied responsibly so that it is not abused. Treating the technology with that care also shows respect for the citizens that law enforcement is supposed to serve.

At the same time, not everything should be automated: critical decision-making should remain flexible, and major decisions should be communicated by people rather than left entirely to an algorithm.