Toward creation of a Large Legal Language Model (LLLM)?
https://www.bespacific.com/the-necessary-and-proper-stewardship-of-judicial-data/
The Necessary and Proper Stewardship of Judicial Data
Huq, Aziz Z. and Clopton, Zachary D., The Necessary and Proper Stewardship of Judicial Data (September 20, 2023). Stanford Law Review, Vol. 76, Forthcoming, Northwestern Public Law Research Paper No. 23-55, Available at SSRN: https://ssrn.com/abstract=4578337 – “Governments and commercial firms create profits and social gain by exploiting large pools of data. One source of valuable data, however, lies in public hands yet remains largely untapped. While the deep reservoirs of data produced by Congress and federal agencies have long been available for public use, the data produced by the federal judiciary is only loosely regulated, imperfectly used (except by a small number of well-resourced private data cartels), and largely ignored by scholars. But the ordinary process of litigation in federal courts generates an enormous volume of data. Especially after recent developments in large language models, this data holds immense potential for private gain and public good. It can be used to predict case outcomes or clarify the law in ways that advance legality and judicial access. It can reveal shortfalls in judicial practice and enable the provision of cheaper, better access to justice. It can make legible many otherwise invisible social facts that, if brought to light, can help improve public policy. Or else it can serve as a private profit center, its benefits accruing to a small coterie of data brokering firms capable of monopolizing its commercial use. This Article is the first to address the complex empirical, legal, and normative questions raised by the untapped public asset of judicial data. It first develops a positive, descriptive account of how federal courts produce, dissipate, preserve, or disclose information. This includes a map of the known sources of Article III data (e.g., opinions, orders, briefs), but also extends to a massive volume of ‘dark data’ produced but either lost or buried by the courts. This positive analysis further uncovers a complex administrative framework by which a plethora of walls and hurdles—some categorical, and some individuated—are thrown up to slow down or stop public access. With this positive understanding in hand, we offer a careful analysis of the constitutional questions implicated in decisions to disclose, or to render opaque, judicial data. Drawing attention to the key question of who controls judicial data flows, we demonstrate the existence of sweeping congressional power to regulate judicial data outside of a small zone of inherent judicial authority and a handful of instances in which privacy or safety are in play. Congressional authority, therefore, is the rule and not the exception. With these empirical and legal foundations in hand, the Article offers a normative evaluation of how Congress should regulate the production and dissemination of judicial data, in light of the capabilities and incentives of relevant actors. The information produced by the federal courts should not be exclusively a source of private profit for the data-centered firms presently monopolizing access. It is a public asset that should be elicited and disseminated in ways that advance the federal courts’ mission of equal justice under law.”
Worth reading in full. (Will we elect the most creative liar?)
https://www.schneier.com/blog/archives/2023/10/ai-and-us-election-rules.html
AI and US Election Rules
If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.
At issue is whether candidates using AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use dystopic images generated by AI in political attack ads?
For now, the answer to these questions is probably “yes.” These are fairly innocuous uses of AI, not any different than the old-school approach of hiring actors and staging a photoshoot, or using video editing software. Even in cases where AI tools will be put to scurrilous purposes, that’s probably legal in the US system. Political ads are, after all, a medium in which you are explicitly permitted to lie.
The concern over AI is a distraction, but one that can help draw focus to the real issue. What matters isn’t how political content is generated; what matters is the content itself and how it is distributed.
(Related?)
https://www.bespacific.com/social-medias-frictionless-experience-for-terrorists/
Social Media’s ‘Frictionless Experience’ for Terrorists
The Atlantic [read free]: “These platforms were already imperfect. Now extremist groups are making sophisticated use of their vulnerabilities. The incentives of social media have long been perverse. But in recent weeks, platforms have become virtually unusable for people seeking accurate information…. Social media has long encouraged the sharing of outrageous content. Posts that stoke strong reactions are rewarded with reach and amplification. But, my colleague Charlie Warzel told me, the Israel-Hamas war is also “an awful conflict that has deep roots … I am not sure that anything that’s happened in the last two weeks requires an algorithm to boost outrage.” He reminded me that social-media platforms have never been the best places to look if one’s goal is genuine understanding: “Over the past 15 years, certain people (myself included) have grown addicted to getting news live from the feed, but it’s a remarkably inefficient process if your end goal is to make sure you have a balanced and comprehensive understanding of a specific event.”
See also Washington Post: “Pro-Palestinian creators evade social media suppression by using ‘algospeak’” and “Hamas turns to social media to get its message out — and to spread fear: Unmoderated messaging services and gruesome video from a deadly Gaza hospital strike have helped Hamas prosecute its ‘video jihad’.”