Twitter is looking into racially biased auto-crop feature

A number of users noticed a not-so-unusual bias in the neural network Twitter uses for its auto-cropping feature


Twitter is looking into why its photo preview feature seems to favor predominantly white – or lighter – faces over Black, or darker, faces. 

The issue was highlighted by a user who first discovered that Zoom’s facial recognition feature failed to recognize his Black colleague’s face. After posting to Twitter, he realized the platform had the same issue.

Following this, users began conducting non-scientific experiments, tweeting photos including both Black and white people in different positions to see which face the algorithm would favor. 

Some such experiments showed that the neural network used by Twitter for this purpose frequently favored white faces, even when an image included two or three copies of the same Black person and one white person.

Others swapped background colors and attire (like tie color) around to show that these didn’t make a difference.

One experiment showed that the same issue was apparent even with images of cartoons.


A few Twitter officials have responded to the unofficial experiments. Liz Kelley, part of Twitter’s communications team, tweeted on Sunday thanking everyone who raised the issue and stating that, although initial tests revealed the neural network did not contain biases, there is clearly more analysis to be done.

Parag Agrawal, Twitter’s Chief Technology Officer, also tweeted saying that their model needs “continuous improvement.”

The neural network Twitter uses for its auto-cropping feature has long been a mystery. The tool is designed to show the most “salient” part of the image – the part your eyes are most drawn to. But it’s hard to know what exactly this means.
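To make the idea concrete: saliency-based cropping generally means scoring every pixel for how eye-catching it is, then cropping a window around the highest-scoring region. Twitter has not published its model, so the sketch below is purely illustrative – it uses a toy saliency measure (distance of each pixel from the image’s mean color) in place of a real neural network, and the function name and parameters are our own invention.

```python
import numpy as np

def saliency_crop(image, crop_h, crop_w):
    """Crop an H x W x 3 image to crop_h x crop_w centered on its most
    'salient' pixel. The saliency measure here (distance from the mean
    color) is a hypothetical stand-in for Twitter's undisclosed model."""
    h, w, _ = image.shape
    # Score each pixel by how far its color is from the image average.
    saliency = np.linalg.norm(image - image.mean(axis=(0, 1)), axis=2)
    # Find the single most salient pixel.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on it, clamped to stay inside the image.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Whatever scores highest wins the preview slot – which is exactly why a biased saliency model silently decides whose face gets shown.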

And the thing is, racially-biased algorithms are nothing new (unfortunately). An article by Wired last year showed that “even top-performing facial recognition systems misidentify Black [people] at rates five to 10 times higher than they do white [people].” Technology doesn’t exist in a vacuum and, as Alyse Stanley, writing for Gizmodo, puts it: an algorithm doesn’t need to be intentionally racist in order to actually be racist. 

The next step will be for Twitter to rigorously reassess its auto-crop feature, and then continue to do so periodically, to make sure its systems are as impartial as possible.