TikTok has today released its Community Guidelines Enforcement Report, which details the volume and nature of violative content and accounts removed from the platform in Q2 of 2021. The report provides insight into content removed for violating the platform's Community Guidelines, reinforcing its public accountability to the community, policymakers, and NGOs.
To protect the safety of the community and the integrity of the platform, 81,518,334 videos were removed globally from April to June, comprising less than 1% of all uploaded content. Of these, 93% were actioned within 24 hours of posting and 94.1% before being reported by a user. Even more promising was the finding that 87.5% of removed content had zero views. With 9,851,404 videos removed, Pakistan ranked second in the world for the largest volume of videos taken down for Community Guidelines violations in Q2 2021.
Looking more closely at the findings, 73.3% of videos promoting harassment and bullying and 72.9% of videos featuring hateful behavior were removed before being reported, a significant increase from 66.2% and 67%, respectively, in the first quarter of the year. The improvement stems from combining detection technology with a dedicated investigations team that identifies videos violating these policies. To better enforce them, moderators also receive regular training on identifying content featuring reappropriation, slurs, and bullying.
The platform also announced improved mute settings for comments and questions during live streams, whereby hosts can temporarily mute select viewers for anywhere from a few seconds to the entire duration of the LIVE. Once muted, a user's entire comment history is also removed. This adds to the existing options to turn off comments or limit potentially harmful comments using a keyword filter.
Beyond removing negative content, TikTok empowers users to customize their experience with a range of tools and resources, including effective ways to filter comments on their content, delete or report multiple comments at once, and block accounts in bulk. More recently, prompts have been introduced urging users to consider the impact of their words before posting unkind or violative comments. This has already proven effective, with nearly four in 10 people withdrawing or editing their comments.