But the computer said it was okay!
https://link.springer.com/article/10.1007/s10506-023-09347-w
Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence
This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the operational efficiency of police and judicial authorities and that of safeguarding the fundamental rights of the affected individuals. Subsequently, it shifts the focus onto key principles of criminal procedure and the presumption of innocence in particular. Using Article 6 ECHR and Directive (EU) 2016/343 as a starting point, it discusses challenges relating to the protective scope of the presumption of innocence, the burden of proof rule and the in dubio pro reo principle as core elements of it. Given the transformations that law enforcement and criminal proceedings are undergoing in the era of algorithms, big data and artificial intelligence, this article advocates the adoption of specific procedural safeguards that will uphold rule of law requirements, particularly transparency, fairness and explainability. In doing so, it also takes into account EU legislative initiatives, including the reform of the EU data protection acquis, the E-evidence Proposal, and the Proposal for an EU AI Act. Additionally, it argues in favour of revisiting the protective scope of key fundamental rights, considering, inter alia, the new dimensions suspicion has acquired.
Which ethic?
https://ui.adsabs.harvard.edu/abs/2023arXiv230212149K/abstract
Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI
AI ethics is an emerging field with multiple, competing narratives about how best to solve the problem of building human values into machines. Two major approaches focus on bias and compliance, respectively. But neither of these ideas fully encompasses ethics: using moral principles to decide how to act in a particular situation. Our method posits that the way data is labeled plays an essential role in the way AI behaves, and therefore in the ethics of machines themselves. The argument combines a fundamental insight from ethics (i.e. that ethics is about values) with our practical experience building and scaling machine learning systems. We want to build AI that is actually ethical by first addressing foundational concerns: how to build good systems, how to define what is good in relation to system architecture, and who should provide that definition. Building ethical AI creates a foundation of trust between a company and the users of that platform. But this trust is unjustified unless users experience the direct value of ethical AI. Until users have real control over how algorithms behave, something is missing in current AI solutions, which breeds distrust in AI and apathy towards AI ethics solutions. The scope of this paper is to propose an alternative path that allows for the plurality of values and the freedom of individual expression. Both are essential for realizing true moral character.
Yes, this again. Cutting the AI out of the benefits?
https://digitalcommons.law.scu.edu/chtlj/vol39/iss2/2/
Reconceptualizing Conception: Making Room for Artificial Intelligence Inventions
Artificial intelligence (AI) enables the creation of inventions that no natural person conceived, at least as conception is traditionally understood in patent law. These can be termed “AI inventions,” i.e., inventions for which an AI system has contributed to the conception in a manner that, if the AI system were a person, would lead to that person being named as an inventor. Deeming such inventions unpatentable would undermine the incentives at the core of the patent system, denying society access to the full benefits of the extraordinary potential of AI systems with respect to innovation. But naming AI systems as inventors and allowing patentability on that basis is also problematic, as it involves granting property rights to computer programs. This Article proposes a different approach: AI inventions should be patentable, with inventorship attributed to the natural persons behind the AI under a broadened view of conception. More specifically, conception should encompass ideas formed through collaboration between a person and tools that act as extensions of their mind. The “formation” of those ideas should be attributed to the person, including when the ideas underlying the invention were first expressed by a tool used to enhance their creative capacity and subsequently conveyed to them.