Meta is ramping up its efforts to protect younger users on Instagram by expanding its “Teen Accounts” program, this time with help from artificial intelligence. The company announced that it is currently testing a new AI system in the U.S. that can detect whether an Instagram user is likely a teenager, even if they entered a false adult birthdate when signing up. If the AI suspects a user is underage, it will automatically recategorize the account as a Teen Account, activating a set of built-in protections.
The update comes as part of Meta’s broader strategy to create a safer digital environment for teens on its platforms, including Instagram, Facebook, and Messenger. First introduced in September 2024, the Teen Account feature includes default private profiles, limits on who can message the teen (only people they follow), restrictions on sensitive content such as violence or cosmetic procedures, and app usage reminders that encourage users to take a break after 60 minutes.
In a blog post, Meta acknowledged the challenge that many parents face in staying on top of their children’s digital settings. “Parents are busy and don’t always have the time to review these settings,” the company wrote. “That’s why we’re continuing to take additional steps to ensure as many teens as possible are in Teen Account settings.” By using AI to proactively identify and protect underage users, Meta hopes to close the gap that allows teens to bypass age restrictions with a few simple clicks.
The AI system doesn’t rely on a single data point, like a birthdate, to make its determinations. Instead, it analyzes multiple signals—such as when the account was created, the type of content the user interacts with, and behavioral patterns—to estimate a user’s real age. If a user is mistakenly placed into a Teen Account, they will have the option to appeal and adjust their account settings to reflect their true age.
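Meta has not published details of the model, but conceptually this kind of system works like a scoring function over many weak signals rather than a check of any single field. The short Python sketch below is purely illustrative: the signal names, weights, and threshold are hypothetical and are not Meta’s, but they show how several account-level hints can be combined into an under-18 likelihood, with an appeal path for users the score gets wrong.

```python
from dataclasses import dataclass

# Illustrative toy scorer only; Meta's actual model and features are not public.
# Every signal name, weight, and threshold below is a hypothetical placeholder.

@dataclass
class AccountSignals:
    account_age_days: int          # how long ago the account was created
    teen_content_ratio: float      # share of interactions with teen-oriented content (0-1)
    school_hours_activity: float   # share of activity during typical school hours (0-1)
    follows_teen_ratio: float      # share of followed accounts flagged as teens (0-1)

def estimate_teen_likelihood(s: AccountSignals) -> float:
    """Return a rough 0-1 score; higher means 'more likely under 18'."""
    score = 0.0
    score += 0.25 if s.account_age_days < 365 else 0.0  # newer accounts lean slightly teen-ward
    score += 0.35 * s.teen_content_ratio                # content affinity, the strongest toy signal
    score += 0.20 * s.follows_teen_ratio                # composition of the social graph
    score += 0.20 * (1.0 - s.school_hours_activity)     # quiet school hours hint at being in class
    return min(score, 1.0)

def should_move_to_teen_account(s: AccountSignals, threshold: float = 0.6) -> bool:
    """Recategorize only above a conservative threshold; misclassified users can appeal."""
    return estimate_teen_likelihood(s) >= threshold

if __name__ == "__main__":
    suspected = AccountSignals(
        account_age_days=90,
        teen_content_ratio=0.8,
        school_hours_activity=0.1,
        follows_teen_ratio=0.7,
    )
    print(should_move_to_teen_account(suspected))  # True for this hypothetical profile
```

The deliberately conservative threshold in the sketch mirrors the trade-off described above: false positives are tolerable because a mistakenly recategorized user can appeal and restore adult settings, while false negatives leave a teen without protections.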
This isn't the first time Meta has leaned into AI for age verification. Back in June 2022, the company publicly committed to investing in AI tools to more accurately gauge the ages of its users, stating that traditional self-reported data was often unreliable, especially among younger audiences trying to gain access to age-restricted content.
The rollout of AI-enforced Teen Accounts follows a broader push for online safety and transparency across the tech industry, as regulators, parents, and watchdog groups demand more accountability from social media platforms. Meta reports that more than 54 million accounts have already been enrolled in Teen Accounts globally. The protections have also expanded beyond Instagram to Facebook and Messenger, where similar restrictions now apply to teenage users.
Early results suggest the feature is being accepted by its target audience. According to Meta, 97% of users aged 13 to 15 have stayed in the Teen Account protections after being enrolled. That retention rate suggests younger users are increasingly aware of online safety and willing to keep features that support healthier digital habits.
As of the end of 2024, Meta’s suite of apps—Instagram, Facebook, and WhatsApp—collectively saw 3.35 billion daily active users. Instagram alone boasts 169 million users in the U.S., making it one of the most influential platforms among teens and young adults.
While it’s still in testing, Meta’s AI age-detection model could become a major tool in helping prevent teens from accessing adult content or being contacted by strangers. If successful, it might set a new industry standard for how tech companies handle age verification and youth protections in the ever-evolving world of social media.