Twitter has finally taken a stand against online trolling, announcing new abuse policies in a post published last week. But with so much damage already done, particularly during the U.S. election, is it too little, too late?
In a statement published on its website, Twitter admitted that “the amount of abuse, bullying, and harassment we’ve seen across the internet has risen sharply over the past few years.” It also announced enhanced controls, reporting and enforcement for dealing with harassment and abuse.
The platform’s users will already be familiar with the mute feature, which lets them hide accounts they don’t want to see. New functionality extends that control further, letting users silence specific words and phrases as well.
Twitter is also updating how hateful conduct is handled, allowing bystanders to report it on behalf of others whenever they see it happening, instead of putting the onus entirely on the target of the abuse. The company’s hope is that this approach will help “to strengthen a culture of collective support on Twitter.”
On top of all of that, Twitter has recently suspended a number of alt-right and white nationalist accounts in a push to create a more civil online space. The platform has long been criticized for taking too long to respond to reports of abuse, and for not taking strong enough action when it does. Since the summer, we’ve witnessed the harassment of female sports journalists and of prominent actresses targeted because of their race and gender; in a report last month, the Anti-Defamation League cited roughly 2.6 million anti-Semitic tweets over the past year, with over 10 billion impressions across the internet.
Abuse is common
While Twitter often bears the brunt of criticism about online toxicity, this kind of behaviour is all too common elsewhere as well, from YouTube comments to Reddit threads. Newspaper comment sections have gotten so bad that many media outlets have simply shut them down, because fixing them seems like too daunting a task.
Wikipedia, meanwhile, has proven to be a slightly more positive space. While it is certainly not immune to sexism and flame wars, a recent paper found that individuals who edit political articles on the platform seem to grow less biased over time.
Because of the nature of the site, users have a tendency to edit pages with opposing political positions; a right-wing contributor is likely to edit a left-wing page and encounter different views, and vice versa, something the researchers suggest helps people break out of their filter bubbles. Indeed, across the 70,000 articles analyzed for the study, contributors who started out with extreme political stances developed more neutral language over time.
The lesson for other platforms? According to Barnabe Geis, the Manager of Impact & Accelerators at the Centre for Social Innovation in Toronto, there is something to be said for a design that promotes meaningful engagement with differing viewpoints, as opposed to the status quo, which often “perpetuates a culture of anger, confusion and fear, where people lash out online and at each other, where sides are black and white and cannot come together.”
Time for soul-searching
Yet part of the challenge of fostering online civility is simply acknowledging that the undertaking matters. We’ve long diminished the impact of social media, shrugging it off as “just” Facebook or “just a tweet.” But the internet is a reflection of us, and we are good, bad and everything in between. Vint Cerf, an internet pioneer who is currently Google’s Chief Internet Evangelist, says, “If you don’t like what you see, don’t break the mirror.” In other words, instead of pointing fingers at whichever social media platform seems to blame, it’s time to do some soul-searching.