On Thursday, Meta announced plans to begin rolling out more advanced AI systems to handle content enforcement as it looks to reduce its reliance on outside contractors. Content enforcement involves detecting and removing material related to terrorism, child exploitation, drugs, fraud, and scams.
The company says it will deploy these improved AI systems across its apps once they consistently outperform its current content enforcement methods, while scaling back its use of outside vendors for content moderation.
“While we will still employ people to review content, these systems will be able to take on work better suited to technology, such as repetitive reviews of graphic material, or areas where bad actors are constantly changing tactics, like illegal drug sales or scams,” Meta wrote in a blog post.
Meta believes these AI systems can catch more violations with greater accuracy, improve scam prevention, respond faster to real-world events, and reduce over-enforcement.
The company says early tests of the AI systems have been promising: they can identify twice as much violating adult sexual solicitation content as its review teams while cutting the error rate by more than 60%. It also claims the systems can detect and block more accounts impersonating celebrities and other high-profile figures, and can help prevent account takeovers by spotting signals such as logins from unfamiliar locations, password changes, or profile edits.
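Meta has not published how these signals are combined, but the idea of weighing takeover indicators can be illustrated with a minimal rule-based sketch. Everything here is an assumption for illustration: the signal names, weights, and threshold are invented and do not reflect Meta's actual systems.

```python
# Toy account-takeover risk scorer. The signals mirror those named in the
# article (unfamiliar login location, password change, profile edit); the
# weights and threshold are hypothetical, chosen only for illustration.

SIGNAL_WEIGHTS = {
    "login_from_unfamiliar_location": 0.5,
    "password_changed": 0.3,
    "profile_edited": 0.2,
}

RISK_THRESHOLD = 0.7  # hypothetical cutoff for flagging an account


def takeover_risk(events: set) -> float:
    """Sum the weights of the observed signals, capped at 1.0."""
    score = sum(SIGNAL_WEIGHTS.get(event, 0.0) for event in events)
    return min(score, 1.0)


def should_flag(events: set) -> bool:
    """Flag the account when the combined risk crosses the threshold."""
    return takeover_risk(events) >= RISK_THRESHOLD


# A new-location login plus a password change crosses the threshold;
# a lone profile edit does not.
print(should_flag({"login_from_unfamiliar_location", "password_changed"}))  # True
print(should_flag({"profile_edited"}))  # False
```

A production system would use learned models over far richer features, but the sketch shows why combining several weak signals is more informative than any single one.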
Meta also says the systems can identify and act on roughly 5,000 scam attempts per day in which fraudsters try to trick people into handing over their login credentials.
“Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions,” Meta said in the blog post. “For example, people will remain central to the highest-risk and most consequential decisions, such as appeals against account deactivation or reports to law enforcement.”

The move comes as Meta has been loosening its content moderation policies over the past year or so, following President Donald Trump’s return to office. Last year, the company ended its third-party fact-checking program in favor of a model similar to X’s Community Notes. It has also lifted restrictions on “subjects that are part of mainstream discussions” and said users would be encouraged to take a “personalized” approach to political content.
It also comes as Meta and other major tech companies face multiple lawsuits seeking to hold social media giants accountable for harm to children and young users.
On Thursday, Meta also announced the launch of a Meta AI support assistant that will give users round-the-clock help. The assistant is rolling out globally in the Facebook and Instagram apps on both iOS and Android, as well as in the Help Center on Facebook and Instagram on desktop.

