New York: In a bid to enhance safety measures for teenagers and combat potential threats, Meta announced on Thursday that Instagram will test a feature that blurs messages containing nudity. The move comes as the company seeks to address concerns about harmful content circulating on its platforms.
Facing increasing scrutiny in both the United States and Europe, Meta has been accused of designing apps that foster addiction and exacerbate mental health problems among young users.
The nudity-protection feature for Instagram's direct messages will use on-device machine learning to determine whether images sent through the platform contain nudity. It will be switched on by default for users under 18, and Meta will prompt adult users to enable it.
“Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone chooses to report them to us,” Meta explained.
Unlike Meta's Messenger and WhatsApp apps, direct messages on Instagram are not currently encrypted, though Meta has said it intends to introduce encryption for the service.
Meta also said it is developing technology to help identify accounts potentially involved in sextortion scams, and is testing new pop-up messages to alert users who may have interacted with such accounts.
The announcement follows Meta's statement in January that it would hide more content from teenage users on Facebook and Instagram, with the aim of reducing their exposure to sensitive material such as suicide, self-harm, and eating disorders.
The company's moves come amid legal challenges: in October, attorneys general from 33 U.S. states, including California and New York, sued Meta, alleging it repeatedly misled the public about the risks associated with its platforms.
In Europe, meanwhile, the European Commission has opened inquiries into how Meta safeguards children from illegal and harmful content.