Sunday, August 04, 2024

Because my AI is better at this than I am.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4909907

Regulating Hidden AI Authorship

With the rapid emergence of high-quality generative artificial intelligence (AI), some have advocated for mandatory disclosure when the technology is used to generate new text, images, or video. But the precise harms posed by non-transparent uses of generative AI have not been fully explored. While the use of the technology to produce material that masquerades as factual (deepfakes) is clearly deceptive, this Article focuses on a more ambiguous area of harm: the consumer’s interest in knowing whether works of art or entertainment were created using generative AI.

In the markets for creative content—fine art, books, movies, television, music, and the like—producers have several financial reasons to hide the role of generative AI in a work’s creation. Copyright law is partially responsible. The Copyright Office and courts have concluded that only human-authored works are copyrightable, meaning much AI-generated content falls directly into the public domain. Producers thus have an incentive to conceal the role of generative AI in a work’s creation because disclosure could jeopardize their ability to secure copyright protection and monetize the work.

Whether and why this obfuscation harms consumers is a different matter. The law has never required disclosure of the precise ways a work is created; indeed, failing to publicly disclose the use of a ghostwriter or other creative assistance is not actionable. But AI authorship is different. Not only is there growing evidence that consumers have strong ethical and aesthetic preferences for human-created works, but we can understand such failure-to-disclose as damaging to art’s social role. Building on various theories of artistic value, the Article argues that works that masquerade as human-made destabilize art’s ability to encourage self-definition, empathy, and democratic engagement, turning all creative works into exclusively entertainment-focused commodities.

The Article also investigates ways to facilitate disclosure of the use of generative AI in creative works. Industry actors could be motivated to self-regulate, adopting a provenance-tracking or certification scheme. And Federal Trade Commission (FTC) enforcement could provide some additional checks on the misleading use of AI in a work’s creation. Intellectual property law could also help incentivize disclosure. In particular, doctrines designed to prevent the overclaiming of material in the public domain—such as copyright misuse—could be used to raise the financial stakes of failing to disclose the role of AI in a work’s creation.


