Iggy Azalea is the latest celebrity to take a public break from Twitter because of harassment on the site. And while the problem has grown increasingly pervasive on a platform that prides itself on quality interaction, Twitter's trolls show no sign of slowing down.

After calling the Internet “the ugliest reflection of mankind there is,” rapper Iggy Azalea announced she was leaving Twitter to get away from the negativity of trolls. She reached her breaking point after users made numerous rude comments about her body in paparazzi pictures from her vacation.

Photo Credit: Gigi Ibrahim cc

Clearly, Azalea’s experience is far from the worst or most threatening case of Twitter trolling. But it’s also part of a bigger problem: Twitter sucks at handling abuse on its site. It’s so bad that even the company’s CEO has come forward and admitted that the site has earned its reputation for mishandling abuse. It’s a problem overwhelmingly faced by women, as writer Lindy West has often documented. In an article for The Daily Dot, West quotes writer and activist Soraya Chemaly:

“The system is predicated on the idea that the harassment is going to be fairly benign name-calling,” Chemaly told me, “which we all know that men experience more. It is not built to capture context or sustained harassment. It’s also not built to recognize trauma or re-traumatization, especially as it’s linked to violence.

…Twitter’s system isn’t built, for example, to recognize that the user who tweeted [a] photo of [feminist writer Aja] Romano is the leader of a small but vocal movement of outspoken misogynists and rape apologists who regularly organize large-scale, sustained harassment campaigns against women. It has no mechanism for recognizing the context—that the tweet is a deliberate incitement to harassment, that behind that single tweet are 10,000 followers salivating at the chance to be unleashed on a disobedient woman. It has no way to take into account the cumulative effect that such campaigns have on women’s mental health and safety.

…Obviously Twitter and other social media sites—if they truly want to prioritize the safety of vulnerable users—have a complex and delicate task ahead of them, and I’m sympathetic to that. But in the meantime, actual human beings are bearing the brunt.

Hurry up—it’s heavy.

Twitter’s abuse policies came under the spotlight thanks to GamerGate trolls, and the company has since made some changes that make it easier to flag abusive accounts. It’s a step in the right direction, especially given how few laws exist to help people combat cyber harassment. Alistair Maughan and Susan McLean note in a post for the Socially Aware blog that although laws to combat trolls are increasingly available, it’s hard to address specific cases without painting with too broad a brush:

As an increased number of Twitter-related cases have hit the front pages and the UK courts, it is becoming increasingly clear that, in the United Kingdom at least, the authorities are working hard to re-purpose laws designed for other purposes to catch unwary and unlawful online posters.

It’s typically hard to argue that someone who maliciously trolls a Facebook page set up in the memory of a dead teenager or sends racist tweets should not be prosecuted for the hurt they cause.  But in other cases, it may not be so clear-cut—how does the law decide what is and what is not unlawful?  For example, would a tweet criticizing a religious belief be caught?  What about a tweet that criticizes someone’s weight or looks?  Where is the line drawn between our freedom of expression and the rights of others?

And in the U.S., it’s even less clear. While threats and immediate calls for harmful or illegal activity are clearly against the law, and some states like Texas have statutes against online harassment, targets of online abuse often have no recourse. Out-of-pocket costs can be too high for the average person seeking legal help, and law enforcement is often untrained or indifferent, writes Marlisse Silver Sweeney for The Atlantic:

This is why the question, “Why didn’t she just go to the police?” is often a bad one—one that ignores the reality of what the authorities are willing to do for victims. Take the case of feminist blogger Rebecca Watson. Watson writes that in 2012, she came across a website of a man who was writing about murdering her. After some research, she tracked down his real name and location (which was within a three-hour drive of her home). She called the police department in that jurisdiction, her own, and the FBI, but after some initial questions, she said the authorities didn’t seem to care. “I’ve lived in several different cities…and received several frightening threats, and never have I met a single helpful cop who even made an attempt to help me feel safe,” she writes. Amanda Hess keeps a running file of people who make online death threats against her, she explains in her oft-cited article, “Why Women Aren’t Welcome on the Internet.” The first time she filed a report about a man threatening to murder her, the police officer asked her, “Why would anyone bother to do something like that?” and decided not to file a report.

The Pew Research Center shared a report last year recording a 65-percentage-point jump in social media use from 2005 to 2013, with 23 percent of all Internet users also joining Twitter. So Twitter harassment isn’t a problem that’s going to go away on its own. But as long as the law has its hands tied trying to avoid broad legal precedent that could be abused down the road, it’s up to Twitter. And while it may be easy for Twitter CEO Dick Costolo to say the site will change, it isn’t so easy for him to actually change it.

Since the leak of the internal memo in which Costolo owned up to the problem, the company hasn’t made any moves or announcements as to what those steps may be. The thing is, many say real fixes would alter the very fabric of the site. But as Tom Hawking writes for Flavorwire, that change might be necessary, even though the feature in question is, for Twitter, a clear selling point:

Where else can you sign up, create an account, and send an anonymous death threat directly to a celebrity you dislike? Change any one of these aspects of Twitter — requiring a “real” identity for sign-up, like Facebook does, or disabling @ responses for new users, etc. — and you fundamentally change the nature of the service. If you’re Twitter, you only do this if you’re forced to.

And the way to force it to is, as Costolo’s post indicates, via its “core users.” Twitter probably doesn’t care if you or I quit because people are being obnoxious to us. If it’s Robin Williams’ daughter, though… that’s different. I’ve argued before that Twitter’s verified celebs are its lifeblood, and if it had been Taylor Swift who ended up fleeing her home because of threats of violence, rather than Anita Sarkeesian, I’m sure we would have seen much more definitive action. When Swift’s Twitter account was hacked last week, Twitter bent over backwards to help: as the singer herself wrote on Tumblr, “Twitter is deleting the hacker tweets and locking my account until they can figure out how this happened and get me new passwords.”

The hacking incident, coupled with how Twitter reacted when Zelda Williams briefly left the site, shows that Twitter is capable of reworking its policies when it wants to. The disconcerting part is that it took this much public pressure to get there. If people keep leaving, the company might finally start to take its users’ safety seriously. For its sake, I hope it’s not too late.