A consequence of digital citizenship. My AI says, “I am not a crook!”
Can Artificial Intelligence Be Punished for Committing Offences? A Critical Analysis of The Applicability of Criminal Law Principles on Artificial Intelligence
Artificial intelligence is rapidly occupying every sphere of our lives. From mobile phones to airplanes, AI is being employed everywhere. However, what if an AI commits a crime while carrying out its duties? Can AI be punished? This article presents possible answers to that question.
Section I defines AI, offers present-day examples, and delineates the grounds for subjecting AI to the rigors of law, whereas Section II lists the reasons justifying the imposition of punishment on AI. Section III analyzes the scope of punishment for AI, followed by the conclusion that punishing AI is plausible but subject to limitations that need to be addressed.
Everyone seems sure that AI will go to war.
International Humanitarian Law and Artificial Intelligence: A Canadian Perspective
Artificial Intelligence (AI) is one of the most remarkable achievements of the technology world. AI is dual-use, serving both civilians and combatants, with both beneficial and harmful aims. In the military realm, by empowering military systems to perform most warfare tasks without human involvement, AI developments have changed the capacity of militaries to conduct complex operations, with heightened legal implications. Accordingly, it is vital to consider the consequences emanating from its use in military operations. International Humanitarian Law (IHL), also known as the laws of war or the Law of Armed Conflict (LOAC), is a set of rules that regulates armed conflict between States, as well as civil wars. IHL protects people who are not involved in, or have ceased participating in, hostilities, and restricts the means and methods of war. While the capabilities of new military AI continue to advance at incredible rates, IHL principles should be revisited at the international level to account for the new reality of military operations. Additionally, at the national level, the impact of military AI developments on military power in international competition has attracted the attention of national authorities. Therefore, studying both international and national pathways is a necessary first step toward promoting transparency in legal rules. Ultimately, central to my research is analyzing the Canadian perspective on IHL and the military use of AI at both the national and international levels. Using a comparative approach with the American perspective, I conclude that if Canada develops more cohesive policies on new military uses of AI, it could become a legal leader in this realm.
Just because…
https://www.degruyter.com/document/isbn/9781474483599/html?lang=en
Ethics of Drone Strikes
The violent use of armed, unmanned aircraft (‘drones’) is increasing worldwide, but uncertainty persists about the moral status of remote-control killing and why it should be restrained. Practitioners, observers and potential victims of such violence often struggle to reconcile it with traditional expectations about the nature of war and the risk to combatants. Addressing the ongoing policy concern that state use of drone violence is sometimes poorly understood and inadequately governed, the book’s ethical assessments are not restricted to the application of traditional Just War principles, but also consider the ethics of artificial intelligence (AI), virtue ethics, and guiding principles for forceful law-enforcement.
This edited collection brings together nine original contributions by established and emerging scholars, incorporating expertise in military ethics, critical military studies, gender, history, international law and international relations, in order to better assess the multi-faceted relationship between drone violence and justice.
Is super-AI ethical?
Artificial Intelligence: Human Ethics in Non-Human Entities
Artificial intelligence is one of the basic foundations of the Fourth Industrial Revolution (Industry 4.0). We find it applied every day in various devices, and modern life is inconceivable without artificial intelligence of a certain level. In everyday life, we encounter smart algorithms that have the ability to learn and to automate certain processes or manage certain hardware. Such artificial intelligence is not an ethical challenge. However, as technological development is very fast, and the creation of artificial superintelligence (ASI) is one of the proclaimed goals of further development, it is necessary to analyze the various implications that the transformation of AI into ASI would have, especially with respect to cognitive abilities, as well as the ethical challenges posed by the existence and application of artificial superintelligence, whose consequences can be far-reaching. This paper aims to investigate and present the ethical problems that humanity would face in creating artificial intelligence and delegating responsibility for certain work and life processes to it, especially to its most complex iteration, which we call cognitive artificial superintelligence.
Bias by design.
https://link.springer.com/article/10.1007/s43681-022-00136-w
Algorithms are not neutral
When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers, and scholars who study the social impact of technology. There has been a tendency to focus on examples where the data set used to train the AI is biased, and denial on the part of some researchers that algorithms can also be biased. Here we illustrate the point that algorithms themselves can be the source of bias with the example of collaborative filtering algorithms for recommendation and search. These algorithms are known to suffer from cold-start, popularity, and homogenizing biases, among others. While these are typically described as statistical biases rather than biases of moral import, in this paper we show that these statistical biases can lead directly to discriminatory outcomes. The intuitive idea is that data points on the margins of distributions of human data tend to correspond to marginalized people. The statistical biases described here have the effect of further marginalizing the already marginal. Biased algorithms for applications such as media recommendations can have significant impact on individuals’ and communities’ access to information and culturally relevant resources. This source of bias warrants serious attention given the ubiquity of algorithmic decision-making.
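The abstract’s point is easy to see in miniature. Below is a minimal, hypothetical sketch (mine, not from the paper) of user-based collaborative filtering on a toy ratings matrix. The similarity-weighted scoring step is where the biases enter: an item rated by many users accumulates weight from many neighbors, while a sparsely rated item barely registers, and an item with no co-ratings (cold start) contributes no similarity signal at all.

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, cols: items); 0 = unrated.
# Item 2 is co-rated by several users; item 3 is a niche, rarely rated item.
R = np.array([
    [5, 0, 3, 0],
    [4, 2, 0, 0],
    [5, 0, 4, 1],
    [4, 3, 0, 0],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0  # cold start: no overlap means no similarity signal at all
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user, R, k=1):
    """User-based CF: score a user's unrated items by similarity-weighted ratings."""
    sims = np.array([cosine_sim(R[user], R[v]) for v in range(len(R))])
    sims[user] = 0.0                 # ignore self-similarity
    scores = sims @ R                # heavily rated items accumulate more weight
    scores[R[user] > 0] = -np.inf    # recommend only unseen items
    return np.argsort(scores)[::-1][:k]

# The widely rated item 2 wins; niche item 3 is crowded out: popularity bias.
print(recommend(user=1, R=R, k=1))  # -> [2]
```

If the “niche” items happen to be the preferences of a marginal group of users, this purely statistical mechanism produces exactly the discriminatory feedback loop the authors describe.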
Creating Robolawyers is easy, Robojudges not so much.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4021610
Of Robolawyers and Robojudges
Artificial intelligence (AI) may someday play various roles in litigation, particularly complex litigation. It may be able to provide strategic advice, advocate through legal briefs and in court, help judges assess class action settlements, and propose or impose compromises. It may even write judicial opinions and decide cases. For it to perform those litigation tasks, however, would require two breakthroughs: one involving a form of instrumental reasoning that we might loosely call common sense, or more precisely call abduction, and the other involving a form of reasoning that we will label purposive, that is, the formation of ends or objectives. This Article predicts that AI will likely make strides at abductive reasoning but not at purposive reasoning. If those predictions prove accurate, it contends, AI will be able to perform sophisticated tasks usually reserved for lawyers, but it should not be trusted to perform similar tasks reserved for judges. In short, we might welcome a role for robolawyers but resist the rise of robojudges.
(Related) What will work here and what won’t…
https://link.springer.com/chapter/10.1007/978-3-030-88615-8_10
‘Intelligent Justice’: AI Implementations in China’s Legal Systems
How are Artificial Intelligence (AI) systems transforming China’s public security and judicial systems? As part of China’s AI national strategy, AI technologies are leveraged for judicial reform and modernization. Big data, cloud computing, natural language processing, and video recognition support ‘internet courts’ such as the ‘Court2Judge’ platform (Chen 2015). Machine learning and cognitive computing used by the ‘206 System’ assist public security and court personnel with evidence verification and trial argumentation (Cui 2020). Advanced AI robotics power ‘smart courts’ that employ the nation’s first robot judges such as ‘Xiaozhi’ that can efficiently adjudicate some civil cases (Gao, Xia, and Luo 2019). AI is also used by public security agencies for locating lawbreakers and interrogating suspects to ensure the integrity and expediency of the processes from arrest to trial. However, not everyone is optimistic about this ‘new age’ law enforcement. Concerns about data privacy and skepticism around the credibility of the so-called black box of AI algorithms call into question the benefits of AI-assisted justice. This paper historicizes AI-powered systems by discussing their implementation in China’s courts and public security bodies through three stages of AI development: ‘intelligent perception,’ ‘intelligent cognition,’ and ‘intelligent decision making.’ This paper also aims to demonstrate why China’s effort to pursue AI as an innovative technical practice for realizing judicial fairness and justice must recognize the legitimate roles played by social and ethical considerations; progress is predicated on public participation, respect for human values, and clear-eyed understanding of AI’s current challenges.
The “how”
https://www.sciencedirect.com/science/article/pii/S2214785322003315
Analysis of facial recognition techniques
Facial recognition has become a hot spot for researchers over the past few decades due to its relevance to privacy and security applications. The various facial recognition techniques can be catalogued into Database Matching, Statistical Approach, Contour Mapping, Fiducial Mapping, and Feature Mapping. This paper aims to sort the proposed techniques into these five subcategories so they can be easily distinguished. The techniques involve, respectively, training a classifier on a large number of facial images; applying statistical algorithms to the pixel matrix of a facial image; capturing the distinct variations in the contour of a facial image; tracing the distances between salient points that differentiate a person; and using a neural network to extract the features that best represent a particular face. The final objective of the paper is to compile the features and drawbacks of each of the five categories of facial recognition, as there does not exist a single facial recognition technology that can perform under all external factors, such as poor lighting, differing facial conditions, and distortion in facial images. The compiled set of features for each technique will help the reader decide the best technique for a given condition in which facial recognition has to be performed.
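For a concrete picture of the Feature Mapping category, here is a minimal, hypothetical sketch (not from the paper): a trained network is assumed to map an aligned face image to a fixed-length embedding, and recognition then reduces to thresholding the similarity between two embeddings. The embedding step is simulated with random vectors, and the 0.6 threshold is an illustrative assumption; real systems tune it per model and per deployment conditions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Verify identity by thresholding embedding similarity.
    The threshold trades false accepts against false rejects."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# A trained CNN would map each face image to a 128-d feature vector;
# here we simulate that step with random vectors for illustration.
rng = np.random.default_rng(0)
emb_enrolled = rng.normal(size=128)                          # enrolled identity
emb_probe = emb_enrolled + rng.normal(scale=0.1, size=128)   # same face, new photo
print(same_person(emb_enrolled, emb_probe))                  # -> True
```

The external factors the paper lists (lighting, facial condition, distortion) show up in this scheme as larger perturbations of the probe embedding, which is why no single threshold, or single technique, works under all conditions.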
Perspective. May explain why I’m uncomfortable reading about it.
https://thenextweb.com/news/is-bitcoin-technically-a-religion-a-scholar-investigates
Is Bitcoin technically a religion? A scholar investigates
Read enough about Bitcoin, and you’ll inevitably come across people who refer to the cryptocurrency as a religion.
Bloomberg’s Lorcan Roche Kelly called Bitcoin “the first true religion of the 21st century.” Bitcoin promoter Hass McCook has taken to calling himself “The Friar” and wrote a series of Medium pieces comparing Bitcoin to a religion. There is a Church of Bitcoin, founded in 2017, that explicitly calls legendary Bitcoin creator Satoshi Nakamoto its “prophet.”