Meta is rolling out new protections for Instagram accounts that primarily feature children, aiming to curb abuse and exploitation on the platform. The update targets accounts run by parents, guardians, or talent managers that regularly post content featuring minors, especially those under 13. The move is part of Meta's broader effort to expand protections for younger users on Instagram and address concerns about their online well-being.
According to Meta's announcement, Instagram will implement new default settings and features for these accounts. While the company did not fully outline every change, the core aim is to limit how such content can be viewed, shared, and interacted with by a wider audience, thereby reducing potential risks.
“We’ve added new safety features to DMs in Teen Accounts to give teens more context about the accounts they’re messaging and help them spot potential scammers. Now, teens will see new options to view safety tips and block an account, as well as the month and year the account joined Instagram, all prominently displayed at the top of new chats,” said the company.
Meta has also launched a new block-and-report option in DMs. “While we’ve always encouraged people to both block and report, this new combined option will make this process easier, and help make sure potentially violating accounts are reported to us, so we can review and take action,” added the company.
The update will significantly affect family vloggers and child influencers, whose accounts have come under scrutiny for exposing children to online risks. A New York Times investigation found that some parents knowingly participated in exploitative practices, including selling photos or clothing worn by their children.
Meta clarified that while these accounts are “overwhelmingly used in benign ways,” the new measures are designed to prevent abuse before it happens.