I hate Twitter trolls. Reading some of the vitriol aimed at comedienne and Ghostbusters star Leslie Jones made my blood boil.
Outraged, my head pounded with the same questions we’ve all been asking for months, if not years:
How could this be happening again?
Why is it taking so long for Twitter to stop it?
How long can they hide behind free speech?
As we noted earlier this week, Jones is not the first celebrity to be trolled right off the public social media platform.
However, something about this incident seemed different, at least to me. Jones was, essentially, an innocent bystander pulled in by one notorious troll (and all around jerk), Milo Yiannopoulos, who didn’t like her movie (I thought it was okay) and then piled on by the troll’s minions.
Yiannopoulos’ actions revealed Twitter’s darkest, most disturbing and racist impulses. The only good things to come out of it are Yiannopoulos’ Twitter expulsion and the spotlight the episode shed on the platform’s big and growing problem.
In light of all this, I sent Twitter a list of burning questions. To my mind, it was time to stop talking about the psychology of hate and whether or not people have the right to say stuff (they do, but that’s another post), it was about the fact that this is a technology platform and that it should afford some important options.
Why doesn’t Twitter automate the identification of hate and abuse speech?
Can it see when hundreds or even thousands start tweeting to one account at once in a sort of coordinated attack similar to what happened to Jones?
Why does it allow people to use other people’s photos and names on their handles?
I know it’s fairly easy to report abuse, but the speed of Twitter’s response is incredibly slow. What can Twitter do, and what is it actually doing, to speed this up?
What’s the real benchmark for abuse that runs afoul of Twitter’s rules? How does Twitter define hate speech?
As I see it, the first question may be the most important one. Tweets are, obviously, simply data running on Twitter’s servers. It’s public information that Twitter can watch and analyze in real time. Granted, it’s a lot of data. Internet Live Stats puts the global Twitter stream at approximately 350,000 tweets per minute.
Even so, I think Twitter can handle it. YouTube, which sees 300 hours of new video every minute, manages to catch copyright infringement before the video even goes live.
On a geographic level, Twitter can see where every tweet is coming from and even the concentration of tweets. I don’t know whether it currently watches for a flood of tweets aimed at a particular user, but I’m certain it could, and I have no idea why it doesn’t.
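To show how simple such a check could be in principle, here’s a rough sketch in Python. The class name, window size and threshold are all my own invention for illustration; this is not Twitter’s actual system, just the basic idea of flagging an account that suddenly receives a burst of mentions:

```python
from collections import defaultdict, deque
import time

class MentionSpikeDetector:
    """Flag accounts receiving an unusual burst of mentions.

    window_secs and threshold are illustrative numbers, not anything
    Twitter has published.
    """
    def __init__(self, window_secs=300, threshold=500):
        self.window_secs = window_secs
        self.threshold = threshold
        self.mentions = defaultdict(deque)  # target handle -> timestamps

    def record(self, target, ts=None):
        """Record one mention of `target`; return True if it looks like a pile-on."""
        ts = ts if ts is not None else time.time()
        q = self.mentions[target]
        q.append(ts)
        # Drop timestamps that have aged out of the sliding window.
        while q and q[0] < ts - self.window_secs:
            q.popleft()
        return len(q) >= self.threshold

detector = MentionSpikeDetector(window_secs=300, threshold=3)
flags = [detector.record("@target", ts=t) for t in (0, 10, 20)]
# flags == [False, False, True]: the third mention crosses the threshold
```

A sliding window like this runs in constant time per tweet, which matters when you are processing hundreds of thousands of tweets a minute.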
I use a tool called Dataminr to track interesting activity and news on Twitter, and it uses the Twitter firehose to gather at least some of its information. Through Dataminr, I noted the Leslie Jones activity hours before Yiannopoulos got the boot. From Jones’ tweets, it seemed as if she was drowning and Twitter had yet to throw her a life buoy.
If Twitter had an automated system, this could have been handled in minutes instead of hours.
Twitter also took hours to reply to my questions and, in the end, sent this official statement:
People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others. Over the past 48 hours in particular, we’ve seen an uptick in the number of accounts violating these policies and have taken enforcement actions against these accounts, ranging from warnings that also require the deletion of Tweets violating our policies to permanent suspension.
We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.
Sure, Twitter starts by repeating the freedom of speech line, but it dismisses it so quickly that it’s hard to imagine they aren’t just plain fed up, too.
I’m happy Twitter agrees that it has not done enough. I’m also encouraged that it’s working on tools to help identify this behavior and move quickly.
The company still stops short of promising automation, and that’s a problem, because I guarantee there will be no solution until it can red-flag words and phrases (image recognition would help, too) in real time so Twitter can take swift action.
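For the skeptics: a real-time red-flag pass doesn’t have to be exotic. Here’s a bare-bones sketch of the kind of word check I mean. The flagged terms are placeholders, not a real lexicon, and a production system would obviously need phrase matching, obfuscation handling and context models on top of this:

```python
import re

# Hypothetical abuse lexicon; these strings are placeholders, not real slurs.
FLAGGED_TERMS = {"slur1", "slur2", "threat-phrase"}

WORD_RE = re.compile(r"[a-z0-9'-]+")

def red_flag(tweet_text):
    """Return True if the tweet contains any flagged term.

    A naive first pass: it lowercases the text, pulls out word-like
    tokens, and checks them against the lexicon.
    """
    words = set(WORD_RE.findall(tweet_text.lower()))
    return not words.isdisjoint(FLAGGED_TERMS)
```

A set-membership check like this costs almost nothing per tweet, so even 350,000 tweets a minute is not a computational excuse.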
The system should at least hide potentially hateful tweets immediately and, maybe, then toss them to a team for a second pass. If there aren’t enough Twitter employees to handle this labor-intensive task, it can be skipped. These are just tweets, after all, not the Declaration of Independence. On the other hand, a diverse volunteer army of Twitter users could pitch in to give the thumbs up or down on these tweets.
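The hide-first, review-later flow I’m describing could look something like this in miniature. The function names and the vote threshold are purely my invention, just to make the workflow concrete:

```python
from collections import Counter
from queue import Queue

hidden = set()          # tweet IDs pulled from view pending review
review_queue = Queue()  # second-pass queue for staff or volunteer reviewers

def quarantine(tweet_id, text):
    """Hide a red-flagged tweet immediately, then queue it for human review."""
    hidden.add(tweet_id)
    review_queue.put((tweet_id, text))

def tally_votes(votes, min_votes=3):
    """Volunteer reviewers vote 'remove' or 'restore'.

    Simple majority wins once a minimum number of votes is in;
    until then the tweet stays hidden as 'pending'.
    """
    if len(votes) < min_votes:
        return "pending"
    counts = Counter(votes)
    return "remove" if counts["remove"] >= counts["restore"] else "restore"
```

The key design point is the ordering: the tweet disappears from the target’s view the instant it’s flagged, and any human judgment, paid or volunteer, happens afterward, when the damage is already contained.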
An automated system would also change trolling behavior: these miscreants would realize their hateful words can be detected the moment they tweet. The system could be so effective that it stamps out a tweet before the target even sees it, and it would certainly help end Twitter Troll Storms.
Will this result in false positives? Absolutely. Is it worth it to clean Twitter the hell up while still allowing for free speech? Why don’t you ask Leslie Jones?
Have something to add to this story? Share it in the comments.