Twitter has announced the results of its open competition to find algorithmic bias in its photo cropping system. Back in March, the company disabled automatic photo cropping after informal experiments by Twitter users last year suggested it favored white faces over Black ones.
The competition confirmed biases in line with those earlier findings. The first-place entry showed that Twitter’s cropping algorithm favors faces that are “slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits,” while the second- and third-place entries showed, respectively, that the system is biased against people with white or grey hair and that it favors English over Arabic script in images.
In presenting these results, Rumman Chowdhury, director of Twitter’s META team, praised the entrants for identifying algorithmic bias. She said, “When we think about biases in our models, it’s not just about the academic or the experimental […] but how that also works with the way we think in society. I use the phrase ‘life imitating art imitating life.’ We create these filters because we think that’s what beauty is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.”
Bogdan Kulynych, a graduate student at EPFL, won first prize ($3,500). He used an AI program called StyleGAN2 to generate a large number of realistic faces, which he varied by skin color, feminine versus masculine facial features, and slimness, and then fed these variants into Twitter’s photo cropping algorithm. Kulynych pointed out that Twitter’s algorithm amplifies biases in society by cropping out “those who do not meet the algorithm’s preferences of body weight, age, skin color.”
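The probing approach described above can be sketched in a few lines: generate controlled variants of a face, score each with the cropping model’s saliency output, and rank them to expose which attributes the model favors. The sketch below is illustrative only; `saliency_score` is a hypothetical stand-in (here just mean brightness), whereas the real entry queried Twitter’s actual saliency model on StyleGAN2-generated faces.

```python
# Sketch of the bias-probing method: score image variants with a
# saliency function and rank them to see which the cropper favors.
# NOTE: saliency_score is a placeholder, not Twitter's real model.

def saliency_score(image):
    # Stand-in for the cropping model's maximum-saliency output.
    # For illustration we use mean pixel brightness of a 2D grid.
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def rank_variants(variants):
    """Rank named face variants (dict: name -> image) from most to
    least favored by the saliency scorer."""
    return sorted(variants, key=lambda name: saliency_score(variants[name]),
                  reverse=True)
```

In the actual entry, systematic differences in this ranking across variants that differ only in skin tone, age, or weight are the evidence of bias.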
Vincenzo di Cicco won a special mention for his innovative approach: he showed that Twitter’s image cropping algorithm favors emojis with lighter skin tones over emojis with darker skin tones. Roya Pakzad, founder of the tech advocacy organization Taraaz, won third prize. She compared memes using English and Arabic script and showed that the algorithm crops images to highlight the English text.
Twitter’s bias competition highlights an ugly side of tech companies, but it also points to how they can combat these problems by opening their systems to external scrutiny. Chowdhury said, “The ability of folks entering a competition like this to deep dive into a particular type of harm or bias is something that teams in corporations don’t have the luxury to do.”
Patrick Hall, one of the competition’s judges and an AI researcher working on algorithmic discrimination, stressed that such biases exist in all AI systems and that companies must proactively try to find them. Hall said, “AI and machine learning are just the Wild West, no matter how skilled you think your data science team is. If you’re not finding your bugs, or bug bounties aren’t finding your bugs, then who is finding your bugs? Because you definitely have bugs.”