A controversial support group for men who can’t find a date has been banned for promoting hate speech against women.
The Reddit group r/incel attracted 40,000 members who defined themselves as involuntarily celibate, or incel, blaming ‘femoids’ for their failings.
Among the opinions advocated by members were messages supporting rape and sexual violence against women.
Staff at Reddit decided to ban the community for violating its house rules as part of an anti-violence crackdown.
Numerous popular posts from recent months would be in breach of the social media site’s terms.
Topics for discussion have included ‘all women are sluts’, ‘proof that girls are nothing but trash that use men’ and ‘reasons why women are the embodiment of evil’.
Less extreme but equally sexist subjects have included claims that makeup should be illegal because ‘its only purpose is to lie and deceive men.’
The group’s rules didn’t explicitly ban women from participation.
However, in those rules a group administrator wrote: ‘Those who continuously claim there are as many female incels in the same situation as male incels will receive a warning and then a ban.
‘Most can agree that women can be incel in some rare situations such as extreme disfigurement, but their numbers do not come close to male incels.’
The ban came as Reddit updated its site-wide policy to prevent publication of content that ‘encourages, glorifies, incites or calls for violence or physical harm against an individual or group of people.’
A spokesman for the San Francisco social media firm said: ‘Communities focused on this content and users who post such content will be banned from the site.
‘As of November 7, r/Incels has been banned for violating this policy.
‘Reddit is the home to some of the most authentic conversations online.
‘We strive to be a welcoming, open platform for all by trusting our users to maintain an environment that cultivates genuine conversation.’
This is not the first time that a social media site has cracked down on hate speech.
In September, Facebook said it was stepping up its monitoring of extreme content.
The Menlo Park firm added 3,000 content reviewers to nearly double the size of its existing team, Senior Vice President for Global Marketing Solutions Carolyn Everson said in a blog post.
‘As soon as we determine that content has breached our community standards, we remove it,’ she said.
‘With a community as large as Facebook, however, zero tolerance cannot mean zero occurrence.’
Google has been using artificial intelligence moderators to monitor illegal videos on YouTube throughout October.
In that time it more than doubled the number of offensive videos deleted from YouTube.
Google said: ‘With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge.
‘But over the past month, our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down.’
The robot moderators are now more accurate than human moderators at flagging offensive content, according to Google.
The firm wrote: ‘While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.’