Meta’s new AI deepfake playbook: it will now label content as “Made with AI”
Meta has announced that it will start labeling AI-generated video, audio, and image content as “Made with AI”. The labels will begin appearing in May on AI-generated posts across its platforms.
Monika Bickert, Meta’s Vice President of Content Policy, wrote in a blog post: “We will begin labeling a wider range of video, audio and image content as ‘Made with AI’ when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.”
Meta, the owner of Facebook, said it is changing its policy based on feedback it has received. After surveys and input from its Oversight Board, the company decided to change how it handles AI-generated content. The new labeling policy will apply to Facebook, Instagram, and Threads. For now it will not apply to WhatsApp or Quest virtual reality headsets, which are covered under other policies.
The company will also stop removing content solely on the basis of its current manipulated video policy in July, the blog post added. Meta said, “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”
Earlier, in February, Meta announced that it was working with industry partners to improve common standards for identifying AI-generated content. “Our ‘Made with AI’ labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content. We already added ‘Imagined with AI’ to photorealistic images created using our Meta AI feature,” Meta said.
In February, the Oversight Board urged Meta to rethink its approach to AI-generated content. For context, the board found Meta’s previous policies too narrow: they covered only AI-generated content that makes a person appear to say something they didn’t say.
In its blog post on Friday, Meta said, “We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos.” It added, “We agree that providing transparency and additional context is now the better way to address this content. The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”
Meta also highlighted its network of nearly 100 independent fact-checkers. According to the company, they will be engaged to identify risks related to manipulated content.