Instagram is launching a new feature that warns users before they post potentially offensive captions. If Instagram detects that a photo or video description may offend someone, the platform displays an alert on the user's screen prompting them to reconsider their wording. The alert system is fully automated, based on previously collected data and research on bullying.
The measure helps personalize the user experience and protect young people, the social network says in a statement.
The feature is similar to one Instagram launched in July for the comments section, which automatically detected offensive remarks. Instagram will use artificial intelligence (AI) to detect such language, warn the user, and "encourage positive interactions."
Of course, the user can ignore the platform's suggestion and post the offensive comment or caption anyway. "As part of our long-term commitment to leading the fight against online bullying, we have developed and tested AI that can recognize different forms of bullying on Instagram," the company said on its blog.
"Earlier this year, we launched a feature that notifies people when their comments might be considered offensive before they are posted. The results were promising, and we found that these types of warnings can encourage people to reconsider their words when given a chance."
Instagram's offensive-caption alert feature is still being tested and will arrive first in "selected countries." Following that test, the "filter" should reach global markets in the coming months.
Via: VentureBeat