Perhaps I’ll find an article about the benefits of facial recognition.
What facial recognition and the racist pseudoscience of phrenology have in common
… In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people’s appearance. Several startups now claim to be able to use artificial intelligence (AI) to help employers detect the personality traits of job candidates based on their facial expressions. In China, the government has pioneered the use of surveillance cameras that identify and track ethnic minorities. Meanwhile, reports have emerged of schools installing camera systems that automatically sanction children for not paying attention, based on facial movements and microexpressions such as eyebrow twitches.
Better ethics via AI?
Digitalization for Ethical Awareness and Ethical Practice
Digitalization has the potential to improve life in many ways, but it also has a tendency to awaken worries and raise questions about harmful consequences and ethical implications. Such concerns are reasonable, since artificial intelligence, interconnected systems, large-scale repositories of personal data, and the use of these technologies do indeed touch on the values of different stakeholders. This paper attempts to provide a more balanced view of ethics concerns in digitalization initiatives. We show examples of how digitalization of the public sector has the potential to make already existing ethical issues visible and known. We present three cases in which the process of digitalization reveals unethical activities and ideas hidden in the well-established, 'analog' context. Drawing from these examples, we introduce the Systematic Ethical Reflection (SER) model, a three-dimension model that can guide systematic reflection and rational discourse about ethical issues in digitalization initiatives. We discuss the SER model's merits as a practical theory – i.e., its usefulness for facilitating discussions about ethical issues in various phases of digitalization initiatives. Such discussions are imperative from a discourse-ethical standpoint.
...and by the way, my AI wants the right to vote.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3776481
Autonomous Corporate Personhood
Currently, several states are considering changes to their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. Several international civil society groups are also actively developing new frameworks and a model law for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood in order to respond to AI’s increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the legal and policy discussions between the two only rarely overlap. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays for law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure law’s consideration of the varied socio-technical contexts in which artificial people exist.
How do we avoid a Terminator? Are we still concerned with that?
https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2816&context=ilj
THE DUTY TO TAKE PRECAUTIONS IN HOSTILITIES, AND THE DISOBEYING OF ORDERS: SHOULD ROBOTS REFUSE?
This Article not only questions whether an embodied artificial intelligence (“EAI”) could give an order to a human combatant, but controversially, examines whether it should also refuse one. A future EAI may be capable of refusing to follow an order, for example, where an order appeared to be manifestly unlawful, was otherwise in breach of International Humanitarian Law (“IHL”), national Rules of Engagement (“ROE”) or, even, where they appeared to be immoral or unethical. Such an argument has traction in the strategic realm in terms of “system of systems”—the premise that more advanced technology can potentially help overcome Clausewitzian “friction” or “fog of war.” An aircraft’s anti-stall mechanism, which takes over, and corrects human error, is seen as nothing less than “positive.”
Interesting. Only an 8-page Word document.
Submission to the Department of Industry, Innovation and Science regarding the 'Artificial Intelligence: Australia’s Ethics Framework' discussion paper.