LONDON -- In a significant move to enhance online safety, Instagram has announced that users under the age of 16 will no longer be able to livestream, or turn off the automatic blur on images suspected of containing nudity in direct messages, without parental approval. Meta Platforms, Instagram's parent company, unveiled the policy change on Tuesday as part of a broader effort to protect the well-being of teenagers using its platforms.

Instagram's decision follows growing concern about social media's impact on young people's mental health and safety. Mounting criticism of how its platforms affect adolescents has prompted the company to reassess its policies for younger users.

As part of this initiative, the social media giant is also extending its protective measures to Facebook and Messenger for users under the age of 18, meaning the new safeguards will benefit teens on those platforms as well as on Instagram.

Meta launched its teen account program on Instagram in September, giving parents greater control over their children's online activity amid rising scrutiny of social media's influence on young people and their well-being.

The changes will roll out first to users in the United States, the United Kingdom, Canada and Australia, with Meta planning to expand them to the rest of the world over the coming months.

Under the updates, teens under 16 will be barred from using Instagram Live unless they have explicit permission from their parents. They will also need parental consent to disable the blur that is automatically applied to images suspected of containing nudity in their direct messages.

Meta also outlined a series of protective measures that will now apply to Facebook and Messenger. These include:

  • Setting teen accounts to private by default, ensuring that only approved followers can view their content.
  • Blocking private messages from users who are not on the teen's approved list of contacts.
  • Imposing strict restrictions on sensitive content, such as videos depicting violence or inappropriate behavior.
  • Issuing reminders for users to take breaks after 60 minutes of usage.
  • Halting notifications during designated bedtime hours to promote healthier online habits.

Meta said that “Teen Accounts on Facebook and Messenger will offer similar, automatic protections to limit inappropriate content and unwanted contact, as well as ways to ensure teens’ time is well spent.” Since the teen account program launched in September, at least 54 million teen accounts have been set up.