Kool Kanya News / Speaking Out

New Algorithm Developed To Detect Misogynistic Content On Twitter

4 min read · Written by Sanjana Bhagwat

Social media, with its easy accessibility and lack of moderation, consequences, and boundaries, has slowly but surely developed into a space for patriarchy to discipline women – their bodies, minds, and voices.

From armchair enthusiasts who spread misogyny under the guise of pseudo-intellectual musings, to the graphic insults, abuses, and violent threats hurled at women online, digital platforms have, rather than becoming a gender-neutral safe space, repackaged sexism and misogyny into an even larger-scale and more public phenomenon.

A team of researchers from the Queensland University of Technology (QUT) has now developed an algorithm to help weed out this harmful misogyny on Twitter.

Women Aren’t Welcome Online – Social Media Is An Especially Toxic Space For Women  

One might argue that social media can be a dark space for men too, but the harassment targeted at men is almost never rooted in their gender. More often than not, the hate and harassment women face online is rooted in their simply being women.

Female celebrities often tend to have a more carefully curated social media feed than male celebrities. One look at the comments on their posts – filled with people nit-picking every detail of what they’ve put out, making sexual innuendos, and hurling hateful threats – will tell you why.

If a woman posts pictures of herself, many seem to see that as open season for showering her with sexual or body-shaming remarks. If a woman expresses an opinion online, abusing and belittling her intelligence is a knee-jerk reaction for many. If she expresses a political opinion, rape threats are apparently fair game (ugh).

Twitter has become an especially toxic place for women. My barometer for a good day now correlates with the number of sexist hashtags that are trending on Twitter that day (one hashtag is a great day, zero is a day that doesn’t seem to exist in this space-time continuum).

Image courtesy: lokmatnews.in

A space founded on the unfiltered posting of everyday thoughts and opinions is not a space where women – conditioned to believe that their thoughts don’t matter and to not make themselves heard – can feel welcome anyway. Those who break out of that conditioning enough to actively share and express themselves on Twitter face dismissive jokes, backlash, and abuse.

Researchers Have Developed An Algorithm That Identifies Misogynistic Language On Twitter With Unprecedented Accuracy

A team of researchers has developed a statistical model that wades through millions of tweets to detect misogynistic, abusive, threatening, and harmful content against women on Twitter.

The researchers extracted a dataset of 1 million tweets, which they refined by identifying those containing one of three abusive keywords – whore, rape, and slut.
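The refinement step described above amounts to a keyword filter over the raw tweets. A minimal sketch, assuming tweets arrive as plain strings (the function names here are illustrative, not from the QUT system):

```python
# Illustrative sketch of the keyword-based refinement step.
ABUSIVE_KEYWORDS = {"whore", "rape", "slut"}

def contains_abusive_keyword(tweet: str) -> bool:
    """Return True if the tweet contains any of the three keywords."""
    words = tweet.lower().split()
    return any(keyword in words for keyword in ABUSIVE_KEYWORDS)

def refine_dataset(tweets: list[str]) -> list[str]:
    """Keep only the tweets that contain at least one abusive keyword."""
    return [t for t in tweets if contains_abusive_keyword(t)]
```

Note that a filter like this only narrows the dataset; by itself it cannot judge intent or context – which is exactly the gap the learning algorithm described below is meant to close.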

The newly developed algorithm identifies misogynistic content with 75 per cent accuracy – higher than any other method currently used to monitor social media language and content.

Associate Professor Richi Nayak explained, “At the moment, the onus is on the user to report abuse they receive. We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online.”

The key challenge in automatically detecting misogynistic tweets has been understanding meaning that often depends on context and tone. Teaching a machine to understand natural language is hard, and the unfiltered, slang-heavy nature of tweets often makes it harder.

“So, we developed a text mining system where the algorithm learns the language as it goes, first by developing a base-level understanding then augmenting that knowledge with both tweet-specific and abusive language,” says Associate Professor Richi Nayak.

They implemented a deep learning algorithm that allows the machine to change its previous understanding of a term as it learns and expands its understanding of semantics and context over time.

“Take the phrase ‘get back to the kitchen’ as an example – devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” Nayak said.

However, with a developed understanding of misogynistic context and abusive language, a tweet containing that phrase can be identified as misogynistic by the machine.
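Nayak’s “get back to the kitchen” example can be illustrated with a toy comparison. In the sketch below, the set of learned phrases stands in for associations a trained model would acquire from data – the real QUT system learns them rather than reading them from a fixed list:

```python
# Toy comparison: keyword matching vs. context-aware flagging.
# LEARNED_MISOGYNISTIC_PHRASES is a hypothetical stand-in for what a
# trained model would learn from data, not part of the actual system.
ABUSIVE_KEYWORDS = {"whore", "rape", "slut"}
LEARNED_MISOGYNISTIC_PHRASES = {"get back to the kitchen"}

def keyword_flag(tweet: str) -> bool:
    """Flag only tweets containing an explicit abusive keyword."""
    return any(k in tweet.lower().split() for k in ABUSIVE_KEYWORDS)

def context_aware_flag(tweet: str) -> bool:
    """Also flag phrases whose misogyny lies in context, not keywords."""
    text = tweet.lower()
    return keyword_flag(tweet) or any(
        phrase in text for phrase in LEARNED_MISOGYNISTIC_PHRASES
    )
```

The keyword filter misses the phrase entirely, while the context-aware check catches it – a simplified picture of why the researchers needed a model that builds up an understanding of misogynistic context rather than matching words literally.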

The team believes that over time, the model can even be expanded to be used in other contexts, like identifying content that is racist, homophobic, or abusive towards people with disabilities.

They hope that their research will be adopted as platform-level policy, helping make social media a safer space for women. We hope so too! A “zero sexist hashtags trending” day seems closer than ever. It’s time the dark void of social media extended a warmer welcome to women.

You’re invited! Join the Kool Kanya women-only career Community where you can network, ask questions, share your opinions, collaborate on projects, and discover new opportunities. Join now.