Free speech advocates are sounding the alarm over a slew of impending Twitter updates that they claim are a thinly veiled attempt to ramp up censorship under the banner of fighting things like “hate symbols” and “glorifying violence.”
Last week, Twitter CEO Jack Dorsey hinted at sweeping changes on the horizon geared toward cleaning up the platform, which was once seen as an anything-goes medium but has since gained a reputation for banning users who go against the narrative.
“We decided to take a more aggressive stance in our rules and how we enforce them,” he wrote on Twitter. “New rules around: unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies violence. These changes will start rolling out in the next few weeks. More to share next week.”
Now, Twitter seems geared to take things to the next level.
“Although we planned on sharing these updates later this week, we hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them,” Twitter told TechCrunch.
Amid fairly uncontroversial rule tweaks surrounding non-consensual nudity and unwanted sexual advances, conduct that was already punishable to an extent, Twitter plans to roll out a new system to prevent users from inadvertently encountering hate symbols, hate groups, or violent tweets.
“At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media,” Twitter wrote in an email to the Trust and Safety Council, though it remains unclear what will count as a hate symbol.
Organizations deemed “violent groups” will be subject to “enforcement action” on the part of Twitter, with “insight into the factors we will consider to identify such groups” still forthcoming.
“We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service,” the email concluded. “We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules.”