Digital Content Next, the trade association for the digital content industry, has shared draft principles for the development and governance of generative AI, which address ways to prevent potential copyright violations by generative AI systems. According to Jason Kint, CEO of Digital Content Next, the draft guidelines are intended to help publishers in discussions with generative AI companies, regulators, and internal stakeholders.
Publishers belonging to the trade association, including the New York Times, the Washington Post, Disney, and NBCUniversal, have raised concerns about the impact of generative AI on the media ecosystem, particularly that AI-generated responses could draw on paywalled content and reduce site traffic. Publishers have also expressed concern that their content may have been used to train AI models without proper compensation or attribution.
The draft principles emphasize that developers and deployers must uphold the rights of content creators, recognizing their ownership and control over their intellectual property. As rightful owners, publishers have the prerogative to negotiate fair compensation for the use of their content. Copyright law protects content creators, and the use of copyrighted works in AI systems should be analyzed under copyright and fair use doctrine. The principles also call for AI systems to operate transparently, fostering open communication with both publishers and users. Deployers of AI systems should be accountable for the outcomes their technology generates, ensuring that fair market practices and competition are not compromised. Finally, prioritizing safety and addressing privacy risks are essential aspects of responsible generative AI development and deployment.