I once read a book that gave me an idea. Is that fair use?
An AI engine scans a book. Is that copyright infringement or fair use?
As artificial intelligence programs have become ubiquitous over the past year, so have lawsuits from authors and other creative professionals who argue that their work has been essential to that ubiquity—the “large language models” (or LLMs) that power text-generating AI tools are trained on content that has been scraped from the Web, without its authors’ consent—and that they deserve to be paid for it. Last week, my colleague Yona Roberts Golding wrote about how media outlets, specifically, are weighing legal action against companies that offer AI products, including OpenAI, Meta, and Google. They may have a case: a 2021 analysis of a dataset used by many AI programs showed that half of its top ten sources were news outlets. As Roberts Golding noted, Karla Ortiz, a conceptual artist and one of the plaintiffs in a lawsuit against three AI services, recently told a roundtable hosted by the Federal Trade Commission that the creative economy only works “when the basic tenets of consent, credit, compensation, and transparency are followed.”
As Roberts Golding pointed out, however, AI companies maintain that their datasets are protected by the “fair use” doctrine in copyright law, which allows for copyrighted work to be repurposed under certain limited conditions. Matthew Butterick, Ortiz’s lawyer, told Roberts Golding that he is not convinced by this argument; LLMs are “being held out commercially as replacing authors,” he said, noting that AI-generated books have already been sold on Amazon, under real or fake names. Most copyright experts would probably agree that duplicating a book word for word isn’t fair use. But some observers believe that the scraping of books and other content to train LLMs likely is protected by the fair use exception—or, at least, that it should be. In any case, debates around news content, copyright, and AI are building on similar debates around other types of creative content—debates that have been live throughout AI’s recent period of rapid development, and that build on much older legal concepts and arguments.
(Related)
Whose content is it anyway? AI-generated content and the tangled web of creation, ownership, and responsibility
… The use of AI prompts legal, business, and moral considerations, with ownership rights being a key concern. Who can claim copyright for content generated by AI? Is it the human creator who initiated the process? The AI platform itself? The original owners of the training material? Or someone else entirely?
The inevitable false alarm. If they make changes to reduce false alarms, will they miss the real thing?
Brazoswood High School false active shooter lockdown prompts concerns with A.I. security system
An image of a student outside Brazoswood High School is what prompted the campus to go into a lockdown during school drop-off Wednesday morning.
… The ZeroEyes A.I. security system in the school picked up the image. The technology notified ZeroEyes staff members who relayed the image to school officials. The school made the decision to go into lockdown. The district said it notified parents around 7:40 a.m.
“Our analysts erred on the side of safety and said we believe that this indeed is a rifle just based on the image that we’ve given the software and the service that we’re providing,” said ZeroEyes Chief Customer Officer Dustin Brooks.
Tools & Techniques.
https://www.kdnuggets.com/5-free-books-to-master-machine-learning
5 Free Books to Master Machine Learning