“How to Lawyer” seems to be changing…
AI’s Rise May Motivate Law Firms to Quit Their Traditional Ways
The traditional law firm structure—with many lower-level lawyers performing mostly analytical tasks on behalf of a few partners—is poised to become obsolete thanks to artificial intelligence.
The firms that survive and thrive will embrace AI to elevate their value and rethink their approach to human capital, changing their practices and culture to emphasize innovation and insight.
A 2023 Goldman Sachs report estimated that 44% of tasks in the legal profession could be automated by AI. While any such projection is speculative, it doesn't feel far off.
(Related)
https://abovethelaw.com/2023/11/the-ideal-partnership-combining-ai-and-lawyer-expertise/
The Ideal Partnership: Combining AI And Lawyer Expertise
… Legal departments can now harness the same power of AI. Lawyers fill a central role that evolves as a department's use of AI matures. Because AI is far from infallible, it is critical that lawyers guide AI-driven workflows, monitor AI's performance, evaluate its output, and make all final decisions in legal matters.
… However, blindly accepting AI outputs and recommendations without human analysis can lead to inaccurate legal advice and misinformed decisions. Stay mindful that errors and biases are always possible in the algorithms that power AI systems, making it vital to validate and verify their outputs.
Perspective.
The legal framework for AI is being built in real time, and a ruling in the Sarah Silverman case should give publishers pause
… To be clear: The legal framework for generative AI — large language models, or LLMs — is still very much TBD. But things aren’t looking great for the news companies dreaming of billions in new revenue from AI companies that have trained LLMs (in very small part) on their products. While elements of those models’ training will be further litigated, courts have thus far not looked favorably on the idea that what they produce is a copyright infringement.
Silverman's complaint is important because, in one significant way, it's stronger than what news companies might be able to argue. The overwhelming share of news content is made free for anyone online to read — on purpose, by its publishers. Anyone with a web browser can call up a story — a process that necessarily involves a copy of the copyrighted material being downloaded to their device. That publishers choose to make their content available to web users makes it harder to argue that an OpenAI or Meta webcrawler has done them special harm.
But Silverman's copyrighted content in question is a book — specifically, her 2010 memoir The Bedwetter. This, importantly, is not a piece of content that's been made freely available to web users by its publisher. To access The Bedwetter legally in digital form, you have to pay HarperCollins $13.99.
And we know that Meta did not get its copy of The Bedwetter by spending $13.99. Meta has acknowledged that its LLM was trained using something called Books3 — part of something else called The Pile. Books3 is a 37-gigabyte text file that contains the complete contents of 197,000 books, sourced from a pirated shadow library called Bibliotik. The Pile mixes those books with another 800 gigs or so of content, including papers from PubMed and material from GitHub, Wikipedia, and those Enron emails. Large language models need a large amount of language to work, so The Pile became a popular early input in LLM training.