Google is formulating a policy to guide content creators on the responsible use of synthetic content, particularly deepfakes, on platforms like YouTube. Discussions with India's IT minister and industry representatives reflect growing concern over deepfake misuse and have prompted Google to work on distinguishing positive applications of the technology from harmful ones.
The policy centers on creator disclosures and labeling of AI-generated content, with plans for disclaimers on YouTube both in video descriptions and within the videos themselves. Punitive measures for non-compliance were not detailed, but under existing policies Google suspends accounts and removes content that violates its guidelines.
Saikat Mitra, VP and Head of Trust and Safety for Asia-Pacific at Google, highlighted the need for nuanced regulation, emphasizing the positive potential of AI-generated synthetic content.
Why does it matter?
Synthetic content drew heightened scrutiny after deepfake videos targeting prominent figures, including Prime Minister Narendra Modi and actor Katrina Kaif, circulated on social media platforms. Google already requires advertisers worldwide to disclose the use of deepfakes in election-related content. That the Indian government is actively considering regulations to prevent deepfake misuse signals a proactive stance rather than reliance solely on tech companies to address the issue.