Meta Set to Launch Tool that Will Block Sensitive Images in Private Messages

Meta has announced the development of a new safety tool aimed at preventing the exchange of sensitive images among teenagers on its platforms.

The announcement comes amid criticism from government authorities and law enforcement agencies over Meta's decision to enable end-to-end encryption by default in Messenger chats, a change critics say could hinder the detection of child abuse.

The upcoming safety tool, expected to launch later this year, is designed to discourage both the sending and receiving of explicit content, with minors and women identified as the groups most at risk. The tool is expected to be optional and will be available to adults on both Instagram and Facebook. The primary goal is to protect users, with a specific focus on teenagers, from the sharing of inappropriate content and the pressures that can accompany it.

In addition to this safety tool, Meta has revealed that minors will, by default, be restricted from receiving messages on Instagram and Messenger from individuals they are not connected to or do not follow. This measure is part of Meta's broader strategy to enhance child safety on its platforms, recognizing the potential risks associated with online interactions.

The decision to implement end-to-end encryption (e2ee) in Facebook Messenger has faced significant backlash from various quarters, including government authorities, law enforcement, and child protection organizations. Critics argue that e2ee makes it challenging for platforms to identify and report instances of child abuse material within messages. Notably, other messaging apps like Apple's iMessage, Signal, and Meta-owned WhatsApp already employ end-to-end encryption and have staunchly defended its use.

Amid calls for the adoption of client-side scanning – a technique that involves scanning messages for known child abuse images before encryption – Meta has chosen a different approach. The company insists that its new safety feature does not involve client-side scanning, which it views as undermining the privacy protection provided by encryption. Instead, Meta will leverage machine learning to identify explicit content, emphasizing that this process will occur entirely on the user's device.
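As a rough illustration of the distinction Meta is drawing, the sketch below shows how a purely local classifier could flag a sensitive image before a message is encrypted and sent. The function names, threshold, and scoring logic are assumptions made for illustration, since Meta has not published implementation details.

```python
# A minimal sketch, assuming a local classifier exposed as a simple scoring
# function. All names, the threshold, and the scoring heuristic are hypothetical;
# the point is only that screening happens on the device, before encryption.

from dataclasses import dataclass
from typing import Callable

SENSITIVE_THRESHOLD = 0.8  # hypothetical cutoff for flagging an image


@dataclass
class ScreeningResult:
    is_sensitive: bool
    confidence: float


def run_local_model(image_bytes: bytes) -> float:
    """Stand-in for an on-device ML model returning a sensitivity score in [0, 1]."""
    # Placeholder heuristic so the sketch runs; a real model would do inference here.
    return 0.0 if not image_bytes else (len(image_bytes) % 100) / 100


def classify_image_on_device(image_bytes: bytes) -> ScreeningResult:
    """All classification happens locally; the image never leaves the device."""
    score = run_local_model(image_bytes)
    return ScreeningResult(is_sensitive=score >= SENSITIVE_THRESHOLD, confidence=score)


def send_image(image_bytes: bytes, encrypt_and_send: Callable[[bytes], None]) -> None:
    """Screen locally, warn if needed, then hand the image to the normal encrypted path."""
    result = classify_image_on_device(image_bytes)
    if result.is_sensitive:
        print(f"Warning: image flagged as sensitive (confidence {result.confidence:.2f}).")
    encrypt_and_send(image_bytes)  # end-to-end encryption is unaffected by the local check
```

Because nothing in this flow reports content to a server, it differs from client-side scanning, which would match images against known abuse material and flag them before encryption.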

According to Meta, machine learning offers a more nuanced way to identify nudity, whereas using the same technology to distinguish child abuse content across its vast user base would carry a significant risk of errors and false reports. The company says it remains committed to protecting user privacy while addressing safety concerns through a combination of measures.

Meta asserts that it has already introduced over 30 tools and resources aimed at ensuring the safety of children on its platforms. The newly announced safety features include default settings that restrict teenagers from receiving messages from unknown individuals and parental supervision tools that empower parents to control and deny changes to their teenagers' safety settings.

Under the new default settings, teenagers will only receive messages from, or be added to group chats by, people they already follow or are connected to. This builds on Meta's existing policy of preventing adults from messaging teenagers who do not follow them.

Moreover, the enhanced parental supervision tools will now provide parents with the ability to deny requests from teenagers seeking to alter their default safety settings. Previously, parents were merely notified of such changes. This shift underscores Meta's commitment to involving parents in the safety measures implemented for teenage users.

As Meta continues to navigate the evolving landscape of online safety, these new features and tools signal a proactive approach to addressing concerns related to explicit content and potential risks to minors. The delicate balance between privacy and safety remains a central challenge for social media platforms, and Meta's ongoing efforts aim to strike that balance while fostering a secure online environment for users of all ages.


Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 
