Real-World Events Drive Increases In Online Hate Speech, Study Finds

Topline

A study published Wednesday in the journal PLOS ONE found that real-world events such as elections and protests can drive increases in online hate speech on both mainstream and fringe platforms, with hateful posts rising even as many social media platforms try to suppress such content.


Key facts

Using machine learning analysis (a way of analyzing data that automates model building), the researchers examined seven types of online hate speech in 59 million posts from users of 1,150 online hate communities, the forums most likely to contain hate speech, including on sites like Facebook, Instagram, 4chan and Telegram.

The total number of posts containing hate speech trended upward on a seven-day average from June 2019 to December 2020, increasing 67% from 60,000 to 100,000 posts per day.

Social media users' hate speech sometimes targeted groups that were not involved in the real-world events of the time.

Among the findings, the researchers noted an increase in religious and antisemitic hate speech after the U.S. killing of Iranian General Qassem Soleimani in early 2020, and an increase in religious and gender-based hate speech after the November 2020 U.S. election, in which Kamala Harris was elected the first female vice president.

Despite individual platforms' efforts to remove hateful content, online hate speech persists, according to the researchers.

The researchers pointed to media attention as a key factor in driving hate-related posts: for example, when Breonna Taylor was killed by police, the event initially received little media attention, and the researchers found correspondingly little hate speech online; but when George Floyd was killed months later and media attention surged, hate speech increased as well.

Big number

250%. That's how much the rate of racial hate speech increased after the murder of George Floyd, the biggest jump in hate speech the researchers observed during the study period.

Key background

Hate speech has plagued social networks for years: platforms like Facebook and Twitter have policies banning hate speech and have committed to removing offensive content, but that hasn't stopped such posts from spreading. Earlier this month, nearly two dozen UN-appointed independent human rights experts called for more accountability from social media platforms to reduce online hate speech. And human rights experts aren't alone in wanting social media companies to do more: a December USA Today-Suffolk poll found that 52% of respondents said social media platforms should limit hateful and inaccurate content, while 38% said the sites should be an open forum.

Tangent

Days after billionaire Elon Musk closed his deal to buy Twitter last year and vowed to loosen the site's moderation policies, the site saw an "increase in hateful conduct," according to Yoel Roth, Twitter's former head of trust and safety. At the time, Roth tweeted that the safety team had removed more than 1,500 accounts for hateful conduct over a three-day period. Musk has faced sharp criticism from advocacy groups who argue that hate speech on Twitter has increased dramatically under his leadership as speech rules have been loosened, though Musk has insisted that impressions of hateful tweets have declined.

Read more

Twitter’s safety chief admits to ‘increase in hateful conduct’ as platform restricts access to moderation tools (Forbes)

Some considerations regarding a consistency requirement for social media content moderation decisions (Forbes)

What should policymakers do to better encourage platform content moderation? (Forbes)


