Meta Platforms, the parent company of Instagram, is expanding its artificial intelligence (AI) capabilities to better protect teenage users on the platform. The effort is part of a broader commitment to the safety and privacy of young people on social media.

In a recent announcement, Instagram revealed that it would be expanding its use of AI for age detection, beginning in 2024. The AI system will analyze various signals to identify users who are under 18 years old. For instance, it might flag an account whose friends send messages wishing the user a 'happy 16th birthday.' Additionally, Meta's AI will draw on engagement data, since research indicates that people in the same age group tend to interact with content in similar ways. This combination of signals is meant to create a safer environment for younger users.
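Meta has not published details of its classifier, but the signals described above could, in principle, be combined into a simple scoring model. The sketch below is purely illustrative: the signal names, patterns, weights, and topic lists are assumptions made for this example, not Meta's actual system.

```python
import re
from dataclasses import dataclass, field


@dataclass
class AccountSignals:
    """Signals an age-detection model might look at; all fields here are hypothetical."""
    stated_age: int                                        # age implied by the profile birthday
    received_messages: list[str] = field(default_factory=list)
    followed_topics: set[str] = field(default_factory=set)


# Hypothetical stand-ins for the two signal types mentioned in the announcement.
BIRTHDAY_PATTERN = re.compile(r"happy (1[0-7])(st|nd|rd|th) birthday", re.IGNORECASE)
TEEN_SKEWED_TOPICS = {"homework help", "high school sports", "exam prep"}


def under_18_score(signals: AccountSignals) -> float:
    """Return a rough 0..1 likelihood that the account belongs to a minor."""
    score = 0.0
    # Signal 1: friends wishing the user a teen birthday, e.g. "happy 16th birthday".
    if any(BIRTHDAY_PATTERN.search(msg) for msg in signals.received_messages):
        score += 0.6
    # Signal 2: engagement overlap with topics that skew toward teen audiences.
    overlap = len(signals.followed_topics & TEEN_SKEWED_TOPICS)
    score += min(0.4, 0.15 * overlap)
    return min(score, 1.0)
```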

Teen accounts on Instagram automatically receive stricter privacy settings. These restrictions include making accounts private by default, preventing strangers from sending direct messages, and limiting the types of content that teens can view. Notably, last year Instagram proactively changed the defaults for all teen accounts so that these safety features are switched on automatically, reflecting the company's growing commitment to safeguarding its younger audience.

Instagram's new measure will use AI to proactively identify teen accounts that may misrepresent their ages by listing an adult birthday. Starting today, the company will begin testing this feature in the United States. If the AI determines that a user is likely a minor even though their account specifies an adult age, Instagram will automatically adjust the account to apply the more restrictive settings designed for teen users. In a transparent acknowledgment of the technology's limitations, Instagram has stated that users will still retain the ability to manually revert any changes made to their account settings.
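Going only by the behavior described in the announcement, the enforcement flow might be sketched roughly as follows. The threshold, setting names, and revert flag are all assumptions made for illustration, not Instagram's actual implementation.

```python
# Default restrictions for teen accounts, as described above; the key names are invented.
TEEN_DEFAULTS = {
    "account_private": True,
    "dm_from_strangers": False,
    "sensitive_content": "limited",
}


def enforce_teen_settings(account: dict, minor_likelihood: float,
                          threshold: float = 0.8) -> dict:
    """Apply teen defaults when a stated-adult account looks like it belongs to a minor."""
    stated_adult = account.get("stated_age", 0) >= 18
    if stated_adult and minor_likelihood >= threshold:
        account["settings"] = {**account.get("settings", {}), **TEEN_DEFAULTS}
        # Per the announcement, users can still manually revert these changes.
        account["settings_locked"] = False
    return account
```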

This move by Meta is part of a larger initiative to implement more protective measures for children and teens on its platforms. These developments have often been driven by mounting concerns from parents, lawmakers, and child advocacy groups regarding the dangers young users might face online. For instance, last year, the European Union launched an investigation to determine whether Meta was adequately safeguarding the health of minors using its platforms. In the United States, alarming reports surfaced about predators targeting children on Instagram, leading to a lawsuit filed by a state attorney general.

Debates among major tech companies, including Meta, Google, Snap, and X, have also intensified over responsibilities for child safety online. In March, Google accused Meta of attempting to shift its accountability onto app stores following the passage of new legislation in Utah, which has sparked further discussions on the roles of tech firms in protecting children in digital spaces.