
The Twitter algorithm is racist, or is your data?


Are AIs racist?

Twitter's image cropping

As we already told you, Twitter has long offered the ability to crop images intelligently. The feature relies on a set of learned patterns and data, so in theory you should not have to worry about how your image will appear on other users' timelines when they scroll through tweets from the accounts they follow.

They will always see the most interesting part of the picture, or at least what the AI considers should carry the most weight and draw the viewer's attention to your post. This only applies to the official apps; the automatic crop does not work in third-party Twitter clients.
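To picture how a saliency-based crop like this could work, here is a minimal, hypothetical sketch in Python. It is not Twitter's actual code: the "saliency model" is a crude contrast heuristic standing in for the neural network Twitter is assumed to use, and the file names and output size are made up for illustration.

```python
# Illustrative sketch of saliency-based cropping (assumed, not Twitter's code).
import numpy as np
from PIL import Image

def fake_saliency(gray: np.ndarray) -> np.ndarray:
    """Stand-in saliency map: deviation from the image's mean brightness."""
    return np.abs(gray - gray.mean())

def smart_crop(img: Image.Image, out_w: int = 600, out_h: int = 335) -> Image.Image:
    """Crop a fixed-size preview centred on the most 'salient' pixel."""
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    sal = fake_saliency(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    left = int(np.clip(x - out_w // 2, 0, max(img.width - out_w, 0)))
    top = int(np.clip(y - out_h // 2, 0, max(img.height - out_h, 0)))
    return img.crop((left, top, left + out_w, top + out_h))

if __name__ == "__main__":
    # "tall_composite.jpg" is a hypothetical tall image with two faces in it.
    preview = smart_crop(Image.open("tall_composite.jpg"))
    preview.save("timeline_preview.jpg")
```

Whatever region such a model scores highest is what ends up in the timeline preview, which is exactly why the experiment below caught people's attention.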

However, some now argue that Twitter's AI is racist. In a small experiment carried out by users such as @bascule and @JackCouvela, the crop consistently favored the white person: neither Barack Obama nor Chadwick Boseman (the actor who played Black Panther) ever appeared in the reduced timeline preview, despite being in the same images.

Curious? Yes, quite a lot, especially since it does not matter whether the order is changed so that Barack Obama's face is at the top, bottom, or center, or whether details such as the color of the tie are altered to see if they are taken into account. It is always the white person who seems to be given the most relevance.

AIs aren't racist, the data is

With all this, it is easy to conclude that Twitter's AI is racist, or even that all AIs are. But that is not quite true. Artificial intelligences only take into account the data that their developers feed them. Given a different training set, the models would learn different values, and in a crop like this it could just as easily be Barack Obama or Boseman who appears.
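A toy example makes the point. In this deliberately simplified sketch (my own assumption, not Twitter's training pipeline), a "model" that just learns how often each group was marked as salient in its training set will reproduce whatever skew that data contains.

```python
# Toy illustration: a frequency-based model inherits the skew of its training data.
from collections import Counter

def train(examples):
    """examples: list of (group, was_marked_salient) pairs."""
    salient, totals = Counter(), Counter()
    for group, is_salient in examples:
        totals[group] += 1
        salient[group] += int(is_salient)
    return {g: salient[g] / totals[g] for g in totals}

# Hypothetical skewed dataset: group A appears far more often and is marked
# salient far more often, so the learned scores favor it.
data = [("A", True)] * 80 + [("A", False)] * 20 + \
       [("B", True)] * 5 + [("B", False)] * 15
print(train(data))  # {'A': 0.8, 'B': 0.25} -> the "preference" comes from the data
```

Real saliency models are far more sophisticated, but the underlying lesson is the same: the model reflects what it was shown.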

So the bias lies in the data, not in the AI itself. It is also very difficult to fix this kind of behavior. What criterion should the model follow when choosing between two people? One option would be to rely on popularity, but that is complex: how do you measure it, and what about people who are not public figures at all? If it instead relies on size, proportions, or the color of clothing, it is still hard to establish a criterion that satisfies everyone.

Therefore, AI does need to keep being trained in ways that avoid bias and racist behavior, but there will always be situations that generate controversy for one side or the other.
