Twitter is Filtering the Quality Out of Social Media

Posted in Politics
Thu, May 7 - 6:35 pm EST | by Liz Finnegan

The term “political correctness” has long been viewed as a far-right straw-man attack on culturally sensitive speech and basic manners. While it is debatable whether this was ever historically true, political correctness as we know it today is a real thing with real demands, and the context of any statement – or lack thereof – is treated as a non-factor when determining that statement’s level of offensiveness.

Stevenson College in Santa Cruz recently issued an apology for serving Mexican food at an “Intergalactic”-themed school party night. There was a perceived connection between the space alien decorations and the immigration debate, as illegal immigrants are often referred to as “illegal aliens.” A letter to students, penned by student life administrator Carolyn Golz, states that while it was an “unintended mistake,” the planners “demonstrated a cultural insensitivity.” Golz also stated that cultural competence training would be required for all students interested in putting together on-campus programs. In this instance, context and intention did not matter. People made a connection between two unrelated topics and, by adding their own context, forced an apology and the implementation of a new training program.

During the Grammy Awards, actor Zach Braff remarked on singer Pharrell Williams’ outfit, tweeting: “Grammys are time delayed in LA (?!) but someone just sent me this: #IWoreItBetter” and including a side-by-side image of Pharrell and his character from Oz the Great and Powerful, who happened to be a monkey. Despite the fact that the outfits were strikingly similar, and that Braff’s intentions were made clear by the accompanying context, he was immediately criticized for comparing his character (a monkey) to a mixed-race man. Braff was widely labeled a racist for this tweet and issued an apology. This was yet another instance of people adding their own context in order to make something appear offensive.

Now that we’ve established that context and intention apparently do not matter when determining what is or is not offensive, let’s talk about Twitter.

Shortly after Twitter’s partnership with Women, Action, and the Media (WAM), a leaked memo from Twitter CEO Dick Costolo was obtained and shared by The Verge. In the memo, dated February 2nd, Costolo states: “We suck at dealing with abuse and trolls on the platform and we’ve sucked for years.” After taking full personal responsibility, Costolo vowed, “We’re going to start kicking these people off right and left and making sure when they issue their ridiculous attacks, nobody hears them.”

Dick has done just that. Sort of.

On April 21, Twitter announced that it was testing a product feature which would “identify suspected abusive Tweets and limit their reach.” The feature is supposed to take into account a wide range of signals and context that “frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive.” While tweets determined to be “abusive” can still be found via the search tool, they will not appear in a mentioned user’s notifications. Neither the sender nor the recipient has any way of knowing whether a particular tweet has been filtered, short of the recipient specifically searching for it or being linked to it.
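
Twitter has not published how this scoring actually works, but the signals named in the announcement suggest something like the following sketch. To be clear, everything here – the weights, the threshold, the function names – is an invented assumption for illustration, not Twitter’s actual code:

```python
# Speculative sketch of a signal-based abuse filter, built only from the
# signals Twitter's announcement names: account age and similarity to
# previously flagged content. All weights, thresholds, and names are
# illustrative assumptions; the real implementation is not public.

from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    account_age_days: int  # "the age of the account itself"


def similarity_to_known_abuse(text: str) -> float:
    """Stand-in for a model comparing the tweet to content the safety
    team has previously determined to be abusive; returns [0, 1]."""
    flagged_phrases = ["example abusive phrase"]  # hypothetical corpus
    return 1.0 if any(p in text.lower() for p in flagged_phrases) else 0.0


def abuse_score(tweet: Tweet) -> float:
    # Assumed heuristic: newer accounts are treated as more suspicious.
    newness = max(0.0, 1.0 - tweet.account_age_days / 365.0)
    return 0.4 * newness + 0.6 * similarity_to_known_abuse(tweet.text)


def deliver(tweet: Tweet, notifications: list) -> None:
    # A filtered tweet is silently dropped from notifications, but it
    # still exists and remains searchable. Neither sender nor recipient
    # is told that filtering occurred.
    if abuse_score(tweet) < 0.5:  # illustrative threshold
        notifications.append(tweet)
```

In a model like this, a tweet from a brand-new account starts close to the threshold before its content is even examined, which would be consistent with account age being a factor on its own.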

This feature was inspired by Twitter’s Quality Filter, an optional tool made available to verified accounts. The new filter, however, not only extends to all accounts; there is also no way to turn it off or opt out. In addition, while the age of any particular account may be a factor for consideration, last night I personally discovered that even tweets sent between friends who are mutually following one another are being filtered. While the language may be potentially offensive to some users, others – such as myself – tend to interact with their friends using strong language and joking insults. The filter also appears to look for keywords without accounting for context, as is evidenced in these additional filtered tweets. Even when directly requesting that a specific word (semen) be tweeted to her, YouTube personality and Twitter user @shoe0nhead (whom I have followed and interacted with on multiple occasions) was unable to receive or view notifications for the majority of the tweets sent in response.
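
The @shoe0nhead example suggests that a match on the word alone is enough, regardless of what the recipient asked for. Purely as an illustration – the blocklist below is invented, and Twitter has not disclosed its rules – a context-free keyword filter behaves like this:

```python
# Minimal sketch of context-free keyword filtering. The blocklist and
# matching logic are invented assumptions, not Twitter's disclosed rules.

BLOCKED_KEYWORDS = {"semen"}  # hypothetical entry, from the example above


def is_filtered(tweet_text: str) -> bool:
    # No notion of context: a bare word match triggers filtering, even
    # when the recipient explicitly requested the word.
    words = tweet_text.lower().split()
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)


# A reply that directly answers the recipient's request is still hidden:
print(is_filtered("semen, as requested"))  # True -> never reaches notifications
```

Whether the real filter uses a literal blocklist or a trained classifier, the observed effect is the same: the word itself, not the conversation around it, decides whether the tweet is seen.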

In addition to restricting the ways in which personal friends can speak with one another, the tool also has one likely unintended, yet severe, consequence. By removing a potential target’s ability to see the threats and harassment being sent to them, the filter leaves the would-be recipient incapable of assessing the severity of any given threat, and therefore unable to respond in a way they personally deem appropriate. These tweets remain visible on the sender’s timeline, viewable by any number of people, yet the intended target would be ignorant of their existence unless specifically searching them out or being linked to them by another user. You can’t fight what you can’t see, and you can’t take additional measures to protect yourself if you don’t even know you’re in need of such protection.

Users have been vocal in their criticism of this feature.

While it is possible that the idea of political correctness was manufactured by far-right ideologues, its existence in 2015 is virtually indisputable. A school has apologized for something that needed outside context added in order to make it appear offensive. An actor was labeled racist in spite of his clear intention when sharing an innocent observation. A social media platform is dictating the ways in which friends can socially engage with one another by limiting exposure to “offensive” words, like semen. Most concerning, however, is the fact that society in general shows more reverence for a person’s nonexistent right to not be offended than for their legally guaranteed right to free speech.

DISCLOSURE: A change.org petition was recently started, calling for Twitter to make this an optional feature. I signed and shared the petition, and included the following accompanying message:

Every person should be able to decide for themselves what they can or cannot tolerate viewing. Hand-holding grownups by ensuring they cannot view “naughty words” is infantilizing and insulting. The filter itself is fine, IF it is used at each individual user’s discretion.

This tool also ensures that legitimate threats can be made against a person without their knowledge, leaving them unable to make an informed decision about involving law enforcement.

Forcing users to endure this tool is equally insulting and dangerous.

Liz Finnegan is a soulless ginger with no political leanings. Pun enthusiast. Self-proclaimed “World’s Okayest Person.” Retro gaming contributor for The Escapist.
