Sunday, April 13, 2025

Interesting.

https://pogowasright.org/our-privacy-act-lawsuit-against-doge-and-opm-why-a-judge-let-it-move-forward/

Our Privacy Act Lawsuit Against DOGE and OPM: Why a Judge Let It Move Forward

On April 9, Adam Schwartz of EFF wrote:

Last week, a federal judge rejected the government’s motion to dismiss our Privacy Act lawsuit against the U.S. Office of Personnel Management (OPM) and Elon Musk’s “Department of Government Efficiency” (DOGE). OPM is disclosing to DOGE agents the highly sensitive personal information of tens of millions of federal employees, retirees, and job applicants. This disclosure violates the federal Privacy Act, a watershed law that tightly limits how the federal government can use our personal information.

We represent two unions of federal employees: the AFGE and the AALJ. Our co-counsel are Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm LLC.

We’ve already explained why the new ruling is a big deal, but let’s take a deeper dive into the Court’s reasoning.





Perspective.

https://www.technologyreview.com/2025/04/11/1114914/generative-ai-is-learning-to-spy-for-the-us-military/

Generative AI is learning to spy for the US military

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders.





Should ethics change because of new technology?

https://journals.ezenwaohaetorc.org/index.php/TIJAH/article/view/3128

ETHICS IN THE AGE OF ARTIFICIAL INTELLIGENCE: RECONCEPTUALISING THE TRADITIONAL ETHICAL THEORIES

The rapid evolution of artificial intelligence (AI) has fundamentally disrupted traditional ethical theories, necessitating a re-conceptualization of moral frameworks in the age of AI. As AI systems grow increasingly sophisticated, they challenge long-standing notions of moral agency, responsibility, and ethical decision-making. This research examines how the three waves of AI—Predictive AI, Generative AI, and Agentic AI—reshape ethical paradigms. Predictive AI, with its data-driven algorithms, exposes inherent biases and raises critical questions about justice, fairness, and accountability in automated decision-making. Generative AI, capable of creating synthetic content, disrupts traditional concepts of authorship, authenticity, and intellectual property, forcing a re-evaluation of ethical norms in creativity and ownership. Agentic AI, with its capacity for autonomous action, pushes the boundaries of moral responsibility, challenging humans to reconsider the ethical implications of delegating decision-making to machines. These developments demand a rethinking of traditional ethical theories such as utilitarianism, deontology, and virtue ethics, which were designed for human moral agents but fall short in addressing the unique moral dilemmas posed by AI. The research highlights the limitations of these classical theories in dealing with AI's opacity, autonomy, and responsibility. The study examines key ethical challenges, including the issue of moral agency and algorithmic bias, and proposes the need for a new ethical framework that accounts for the collaborative nature of human-AI interactions, emphasizing distributed moral responsibility and the importance of human oversight in ensuring ethical outcomes.





Perfecting AI?

https://cris.unibo.it/handle/11585/1013658

Argumentation in AI and law

This chapter introduces AI & Law approaches to legal argumentation, showing how such approaches provide formal models that succeed in capturing key aspects of legal reasoning. The chapter is organised as follows. Sections 2, 3, and 4 introduce the motivations and developments of research on argumentation within AI & Law. Section 2 looks into the notion of formal inference, and shows how deduction-based approaches fail to account for important aspects of legal reasoning. Section 3 introduces the idea of defeasibility, and argues that an adequate model of legal reasoning should take it into account. Section 4 presents some AI & Law models of argumentation. The remaining sections are dedicated to introducing a formal account of argumentation based on AI & Law research. Section 5 defines and exemplifies the notion of an argument. Section 6 discusses conflicts between arguments and their representation in argument graphs. Section 7 defines methods for assessing the status of arguments and evaluating their conclusions, and Section 8 summarises the steps from premises to dialectically supported conclusions.
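The abstract's pipeline from argument graphs to "assessing the status of arguments" is usually formalised via argumentation semantics. As a minimal illustration (not the chapter's own formalism), the sketch below computes the grounded extension of a Dung-style abstract argumentation framework: starting from nothing, it repeatedly accepts exactly those arguments whose every attacker is itself attacked by an already-accepted argument, so unattacked arguments enter first and arguments they defeat are ruled out.

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the least fixed point of the defense function
    F(S) = {a : every attacker of a is attacked by some member of S}.

    args    -- iterable of argument names
    attacks -- iterable of (attacker, attacked) pairs
    """
    args = set(args)
    attacks = set(attacks)
    # For each argument, the set of arguments that attack it.
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}

    extension = set()
    while True:
        # An argument is defended if each of its attackers is
        # counter-attacked by the current extension (unattacked
        # arguments are defended vacuously).
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:  # fixed point reached
            return extension
        extension = defended


# Toy chain: A attacks B, B attacks C.
# A is unattacked, so it is accepted; A defeats B; A thereby defends C.
ext = grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})
print(sorted(ext))  # ['A', 'C']
```

Arguments in the extension are dialectically justified, arguments attacked by it are overruled, and in a mutual-attack cycle (A and B attacking each other) the grounded extension is empty, reflecting that neither side can be sceptically accepted.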


