How would you blur recall without destroying AI usefulness? (Did Llama get 58 percent wrong?)
https://www.understandingai.org/p/metas-llama-31-can-recall-42-percent
Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book
In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.
For example, in its December 2023 lawsuit against OpenAI, the New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”
But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.
The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright.
Probably not going to happen.
https://scholarlycommons.law.case.edu/jolti/vol16/iss2/3/
Policing in Pixels
Artificial Intelligence (AI) is transforming border security and law enforcement, with facial recognition technology (FRT) at the forefront of this shift. Widely adopted by U.S. federal agencies such as the FBI, ICE, and CBP, FRT is increasingly used to monitor both citizens and migrants, often without their knowledge. While this technology promises enhanced security, its early-stage deployment raises significant concerns about reliability, bias, and ethical data sourcing. This paper examines how FRT is being used at the U.S.-Mexico border and beyond, highlighting its potential to disproportionately target vulnerable groups and infringe on constitutional rights.
The paper provides an overview of AI’s evolution into tools like FRT that analyze facial features to identify individuals. It discusses how these systems are prone to errors—such as false positives—and disproportionately affect racial minorities. The analysis then delves into constitutional implications under the Fourth Amendment’s protection against unreasonable searches and seizures and the Fourteenth Amendment’s guarantee of equal protection. This framework is particularly relevant when considering cases like those involving Clearview AI and Rite Aid, which resulted in severe consequences for both companies and exemplify how improper FRT deployment can lead to significant privacy violations and reinforce societal disparities.
This paper advocates for a multi-layered approach to address these challenges. It argues for halting FRT deployment until comprehensive safeguards are established, including bias mitigation measures, uniform procedures, and increased transparency. By reevaluating the relationship between law enforcement and citizens in light of emerging technologies, this paper underscores the urgent need for policies that balance national security with individual rights.
Because, science fiction!
https://alsun.journals.ekb.eg/article_432675.html?lang=en
The Ethical Dilemmas of the “Three Laws of Robotics” in Isaac Asimov’s Runaround (1942) and Little Lost Robot (1947)
This paper examines the ethical dilemmas presented by Isaac Asimov’s Three Laws of Robotics in his stories Runaround (1942) and Little Lost Robot (1947). The Laws are analyzed and reevaluated within the framework of the ethical theories of Immanuel Kant’s deontology and Jeremy Bentham’s utilitarianism. The analysis demonstrates the ethical conflicts between deontology’s rigid adherence to universal moral absolutes and utilitarianism’s emphasis on maximizing societal welfare. It does so by connecting Asimov’s critical insights to contemporary debates on artificial intelligence ethics and regulation, prompting a re-evaluation of human responsibility, human-robot trust, and the boundaries of robotic autonomy. The stories reveal the limitations of Asimov’s Laws in addressing real-world complexities, exposing their inability to guarantee consistent ethical behavior in artificial intelligence systems. Furthermore, this study introduces a novel perspective on the interplay between ethical theory and speculative fiction, underscoring the practical value of Asimov’s narratives in shaping forward-thinking approaches to robotic legislation and ethical programming.
Runaround https://archive.org/details/Astounding_v29n01_1942-03_dtsg0318/page/n93/mode/2up