I’ve written before about the linguistic codes white supremacists use to find each other online and to avoid triggering the anti-trolling software employed by various platforms to ban them. So it is no surprise that now, as one of the most toxic Presidential election cycles in the history of… well… history is coming to a close, racist Trump supporters on the alt-right
have developed a new code for racist, homophobic, and bigoted slurs in an attempt to avoid censorship.
The code, which uses terms like “Google,” “Skittle,” and “Yahoo” as substitutes for offensive words describing black people, Muslims, and Mexicans, appears to be in use by various accounts on Twitter and elsewhere.
My first question was: why do so many of these “code words” (if by “code” you mean a childlike word-substitution game) reference common Internet tools like Google, Skype, Yahoo, and Bing?
According to Alex Kantrowitz,
The code appears to have originated in response to Google’s Jigsaw program, a new AI-powered approach to combating harassment and abuse online. The program seems to have inspired members of the online message board 4chan to start “Operation Google,” using “Google” as a derogatory term for a black person in an attempt to get Google to filter out its own name. The code developed from there.
But I think there is something else going on here, also: a deliberate attempt to make the Internet itself an unwelcome space for people of color.
Think about what a project like this does: it takes some of the most common words that you might see on social media and turns them into potential weapons. In addition to fooling anti-trolling algorithms and block bots, these codes are designed to give these common words a negative valence or at least to make seeing them give certain targeted groups an uncomfortable twinge. The goal is to make these common words serve as a constant reminder of white supremacy, even when they are deployed in their usual contexts.
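To see why this kind of substitution also defeats automated moderation, consider a minimal sketch of a naive keyword filter. This is a hypothetical illustration, not any platform’s actual system: the blocklist entries and function names are placeholders.

```python
# Hypothetical sketch of why slur-substitution defeats a naive
# blocklist filter. The blocklist and function are illustrative
# placeholders, not any real platform's moderation system.

BLOCKLIST = {"slur1", "slur2"}  # stand-in entries, not real terms

def is_flagged(post: str) -> bool:
    """Flag a post if any of its words appears in the blocklist."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# A coded post sails through, because every word in it is ordinary:
print(is_flagged("just saw a google at the store"))   # False

# And the obvious countermeasure backfires: adding "google" to the
# blocklist would also flag perfectly innocuous posts.
BLOCKLIST.add("google")
print(is_flagged("search for it on google"))          # True (false positive)
```

The filter faces an impossible choice: ignore the coded slur, or censor one of the most common words on the Internet, which is exactly the dilemma “Operation Google” was designed to create.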
It puts me in mind of when racists and anti-feminists targeted Microsoft’s Twitter-based Artificial Intelligence, Tay.
If the goal of white supremacists online is to make the Internet itself inhospitable to people of color, then teaching machines to be bigots is a pretty effective (if horrific) way of going about it. Ultra-right-wing websites have been accused of using Google AdSense to push ads filled with hate speech, but Google’s own algorithms have also produced some pretty racist assumptions about, for example, what kinds of people are more likely to be in need of an ad targeting people with arrest records (hint: people with “black-sounding names”). Google has also been taken to task for allowing Autocomplete to fill in antisemitic and misogynist suggestions.
Put together, experiences like these might make people feel as though racism on the Internet is “baked in” and unavoidable. And, if you are a modern-day neo-Nazi, that might sound pretty agreeable to you. Social engineering projects like the ones described here are intended to “hack” the systems we use to make social media safe(r) for women, people of color, and LGBTQI folk and turn them into yet another barrier to entry.