Meta, the parent company of Instagram and Facebook, announced on Tuesday a significant update to its content policies aimed at protecting teenagers from inappropriate material. The change, detailed in a blog post by the Menlo Park, California-based social media giant, means that content related to suicide, self-harm, and eating disorders will no longer appear in teenage users' feeds, even when it comes from accounts they follow.
Meta emphasized its commitment to ensuring that teens have safe, age-appropriate experiences on its platforms. To enforce this, the accounts of teen users who provided their true age at sign-up will automatically be placed in the most restrictive content control settings. These users will also be blocked from searching for terms associated with harmful content.
Meta acknowledged the complexity of issues like self-harm, recognizing that while sharing experiences can help destigmatize mental health problems, such topics might not be suitable for all young audiences. As a result, the company plans to remove such content from teens’ experiences on Instagram and Facebook.
Meta's move comes amid ongoing legal challenges. The company faces lawsuits from several U.S. states accusing it of contributing to the youth mental health crisis. These lawsuits allege that Meta knowingly designed Instagram and Facebook features to addict children, thereby exacerbating mental health issues among young users.
However, critics argue that Meta's latest policy changes are insufficient. Josh Golin, executive director of the children's online advocacy group Fairplay, criticized the announcement as a belated response and an attempt to avoid regulation. He questioned why Meta waited until 2024 to implement these changes, especially given the known risks such content poses to young users. His statement reflects broader skepticism about whether the measures will meaningfully address concerns about young people's safety and mental health on social media.