The San Francisco-based discussion website launched a campaign this past June to remove hateful content from its site, initially banning more than 2,000 communities that violated its new content policy.
In a new report, Reddit announced that it has since removed nearly 7,000 subreddits under its new content policy, resulting in an 18 percent drop in users posting hateful content as compared to the two weeks prior to the ban wave.
"While I would love that number to be 100 percent, I'm encouraged by the progress," Reddit user u/worstnerd, a member of the company's security team, wrote in the report.
When the new policy came into effect in late June, more than 40,000 pieces of hateful content were shared daily, representing a mere 0.2 percent of the content available on the platform.
Such content generated some 6.47 million views every day, though the company found that moderators and moderation bots were already removing about 30 percent of potentially hateful content each day.
Additionally, Reddit revealed that nearly half of all hateful content (48 percent) targeted people's ethnicity or nationality, while attacks on political affiliation and sexuality accounted for 16 and 12 percent, respectively.
"Defining hate at scale is fraught with challenges. Sometimes hate can be very overt, other times it can be more subtle. In other circumstances, historically marginalized groups may reclaim language and use it in a way that is acceptable for them, but unacceptable for others to use. Additionally, people are weirdly creative about how to be mean to each other. They evolve their language to make it challenging for outsiders (and models) to understand," Reddit user u/worstnerd noted.
Tech giants under fire for spreading hateful content
Reddit's announcement arrives at a time when social media companies are under increasing pressure to remove hateful content and misinformation from their platforms.
Earlier this August, Facebook Inc. stated that it had removed over 790 groups, 100 pages, and 1,500 ads connected to the far-right conspiracy movement QAnon, as they violated a newly expanded version of its policy on "dangerous individuals and organizations."
"While we will allow people to post content that supports these movements and groups, so long as they do not otherwise violate our content policies, we will restrict their ability to organize on our platform," the company said in a statement.