Meta, the parent company of Facebook and Instagram, has made changes to its policies regarding digitally created and altered media. The announcement, made by Vice President of Content Policy Monika Bickert in a blog post, comes ahead of the upcoming election season, which will test the platform's ability to police misleading content generated by new artificial intelligence technologies.
Starting in May, Meta will implement "Made with AI" labels for AI-generated videos, images and audio posted on its platforms. This expansion of the policy aims to provide users with transparency regarding the origin of such content. Additionally, Meta will introduce separate and more prominent labels for digitally altered media that presents a "particularly high risk of materially deceiving the public on a matter of importance."
The change marks a shift in Meta's treatment of manipulated content: rather than solely removing a limited set of posts, the company will keep such content up while providing viewers with information about how it was created.
Meta had previously announced plans to detect images made with other companies' generative AI tools using invisible markers built into the files, though it gave no specific start date at the time. That initiative aligns with Meta's broader efforts to combat deceptive content across its services.
The new labeling approach will apply to content posted on Meta's Facebook, Instagram, and Threads services, with different rules governing its other services, including WhatsApp and Quest virtual reality headsets. Meta will begin applying the more prominent "high-risk" labels immediately, signaling the urgency of addressing deceptive content ahead of the US presidential election in November.
The announcement follows criticism from Meta's oversight board, which described the company's existing rules on manipulated media as "incoherent."