In a blog post published last Tuesday, May 1st, Instagram said its new anti-bullying filter will “filter bullying comments intended to harass or upset people in the Instagram community.”
With more than 500 million daily active users, some bullying is inevitable, which is why the Facebook-owned photo-sharing application has been reinforcing its machine-learning technology to automatically purge comments containing toxic, bullying content.
The update is switched on by default, so it has already rolled out to Instagram’s global user base.
Instagram co-founders Kevin Systrom and Mike Krieger said the anti-bullying technology builds on the offensive-comment filter first introduced in June last year to hunt down divisive comments.
Kevin Systrom wrote, “This new filter hides comments containing attacks on a person’s appearance or character, as well as threats to a person’s well-being or health.”
He also stressed that the anti-bullying feature could be disabled in the Comment Controls center in the Instagram app. “The new filter will also alert us to repeated problems so we can take action,” he said.
The filter is built on DeepText, a Facebook-developed text-processing system that sorts negative comments into categories, including bullying, racism, and sexual harassment.
Facebook said DeepText uses deep neural networks and other state-of-the-art machine-learning tools to parse language in context. It further explained that DeepText should reduce the chance of the filter blocking too much innocuous content (which could lead users to disable it on their profiles) or making it too easy for bullies and trolls to circumvent it by, say, tweaking the spelling of insults and slurs.
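To see why context-aware models like DeepText matter, it helps to look at what a naive filter misses. The sketch below is purely hypothetical and is not how DeepText works; it uses a made-up keyword lexicon and a simple character-substitution table to show both the category-sorting idea and the spelling-tweak evasion problem the neural approach is meant to address.

```python
import re

# Hypothetical category lexicon (illustrative placeholders, not real data).
CATEGORIES = {
    "bullying": {"loser", "idiot"},
    "threat": {"hurt", "destroy"},
}

# Undo common letter-for-symbol swaps trolls use to dodge keyword filters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def normalize(word: str) -> str:
    """Lowercase and reverse simple character swaps, e.g. 'id1ot' -> 'idiot'."""
    return word.lower().translate(SUBSTITUTIONS)

def classify_comment(comment: str) -> list:
    """Return the categories whose keywords appear in the comment."""
    words = {normalize(w) for w in re.findall(r"[\w@$]+", comment)}
    return sorted(cat for cat, keywords in CATEGORIES.items() if words & keywords)

print(classify_comment("You are such an id1ot"))  # ['bullying']
print(classify_comment("Have a great day!"))      # []
```

A substitution table like this only catches the evasions its author anticipated; a model that parses language in context, as Facebook describes, does not depend on such a hand-maintained list.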