Monday, February 05, 2024

Should this be simple?

https://thenextweb.com/news/uk-ai-copyright-code-artists

UK fails to reach consensus on AI copyright code in major blow to artists

The UK government, AI companies, and creative organisations have failed to reach consensus on a proposed code that would set clear guidelines for the training of AI models on copyrighted material.

For almost a year, the Intellectual Property Office (IPO) has been consulting with companies including Microsoft, Google DeepMind, and Stability AI as well as various art and news organisations like the BBC, the British Library, and the Financial Times.

The purpose of the talks was to produce a rulebook on text and data mining, where AI models are trained on materials like books, images, and films produced by humans — often under copyright.

However, the IPO-mediated consortium has been unable to agree on a voluntary code of practice, reports the Financial Times.





Reformatting raw data so it looks more like the output you desire does not seem like an “improvement” to me.

https://arxiv.org/abs/2401.16380

Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling

Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained. This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web. In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as "like Wikipedia" or in "question-answer format" to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy, speeds up pre-training by ∼3x. At the same pre-training compute budget, it improves perplexity by more than 10% on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2%. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher 'quality' than web-scraped data.
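The core mechanism in the abstract is simple: prompt an off-the-shelf instruction-tuned model to paraphrase each web document in a target style, then train on the real and rephrased text together. A minimal sketch of that pipeline's prompt-building and mixing steps is below; the style names, prompt wording, and 1:1 mixing ratio are my assumptions for illustration, not the paper's exact recipe.

```python
# Hypothetical sketch of WRAP-style rephrasing (prompt wording and mixing
# ratio are assumptions, not the paper's exact choices). The prompts would
# be sent to an instruction-tuned model; that call is omitted here.

REPHRASE_STYLES = {
    # Styles named in the abstract: "like Wikipedia" and "question-answer".
    "wikipedia": "Rewrite the following passage in a clear, encyclopedic "
                 "style like Wikipedia, preserving all factual content:",
    "qa": "Convert the following passage into a question-answer format, "
          "preserving all factual content:",
}

def build_rephrase_prompt(document: str, style: str) -> str:
    """Build the paraphrasing prompt for one raw web document."""
    if style not in REPHRASE_STYLES:
        raise ValueError(f"unknown style: {style!r}")
    return f"{REPHRASE_STYLES[style]}\n\n{document}"

def mix_real_and_synthetic(real_docs: list[str],
                           synthetic_docs: list[str]) -> list[str]:
    """Interleave real and rephrased documents for joint pre-training.
    A 1:1 interleave is one simple choice; the real/synthetic ratio is
    a tunable knob in practice."""
    mixed: list[str] = []
    for real, synth in zip(real_docs, synthetic_docs):
        mixed.extend([real, synth])
    return mixed
```

The claimed benefit is that the rephrased copies are cleaner and stylistically closer to downstream evaluation text than raw scrapes, so the same compute budget goes further, while the original documents keep the real-data distribution in the mix.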


