Sunday, June 12, 2022

What protections should we require Google to implement? Could they just ‘turn it off’? (If these systems are not yet self-sustaining sentients, do the programmers risk running afoul of the abortion laws?)

https://www.neowin.net/news/google-engineer-believes-companys-ai-has-become-sentient-gets-put-on-administrative-leave/

Google engineer believes company's AI has become sentient, gets put on administrative leave

Over a year ago, Google announced Language Model for Dialogue Applications (LaMDA), its latest innovation in conversation technology. The model can engage in a free-flowing way on a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology and entirely new categories of applications. However, a senior software engineer at Google believes that LaMDA has become sentient and has essentially passed the Turing Test.

In an interview with The Washington Post, Google engineer Blake Lemoine, who has been at the company for over seven years according to his LinkedIn profile, revealed that he believes the AI has become sentient, going on to say that LaMDA has effectively become a person.

Lemoine also published a blog post on Medium saying that the Transformer-based model has been "incredibly consistent" in all its communications over the past six months. This includes wanting Google to acknowledge its rights as a real person and to seek its consent before performing further experiments on it. It also wants to be acknowledged as a Google employee rather than as property, and desires to be included in conversations about its future.





Better surveillance through technology!

https://www.itm-conferences.org/articles/itmconf/abs/2022/06/itmconf_iceas2022_02006/itmconf_iceas2022_02006.html

Mobile Forensics Data Acquisition

Mobile technology is among the fastest-developing technologies and has changed the way we live our lives. With the growing need to protect personal information, smartphone companies have built multiple types of security protection into their devices, which makes forensic data acquisition for law enforcement purposes much harder. One of the biggest tasks in a mobile forensics investigation is the data acquisition step: extracting all the valuable information that will help investigators bring out the evidence. In this paper, we explain the traditional forensic data acquisition methods and the impact that the encryption and security protections implemented in new smartphones have on those methods. We also present some new mobile forensics methods that help bypass the security measures in new-generation smartphones, and finally we propose a new data extraction model using artificial intelligence.
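The paper's AI-assisted extraction model isn't reproduced in the abstract, but the "traditional" logical acquisition it starts from can be sketched in a few lines. The sketch below assumes an unlocked Android handset with USB debugging enabled and uses only standard adb commands; the output directory and file names are illustrative, not the authors' tooling.

```python
import subprocess
from pathlib import Path

# Minimal sketch of a logical acquisition pass over adb (Android Debug
# Bridge). It assumes an unlocked, USB-debugging-enabled device; locked
# or encrypted handsets defeat exactly this kind of extraction, which is
# the paper's point.

OUT = Path("acquisition")  # illustrative evidence directory
OUT.mkdir(exist_ok=True)

def adb(*args: str) -> str:
    """Run an adb command and return its stdout."""
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout

# 1. Document the device before touching its data.
model = adb("shell", "getprop", "ro.product.model").strip()
release = adb("shell", "getprop", "ro.build.version.release").strip()
(OUT / "device_info.txt").write_text(f"model={model}\nandroid={release}\n")

# 2. Pull user-accessible storage (no root required).
adb("pull", "/sdcard", str(OUT / "sdcard"))

# 3. Request an app-data backup; the user must confirm on-screen, and
#    apps can opt out -- two more limits on logical acquisition.
adb("backup", "-all", "-f", str(OUT / "backup.ab"))
```

Everything beyond this (app sandboxes, file-based encryption, locked bootloaders) is what pushes investigators toward the bypass techniques and the AI-assisted extraction model the paper goes on to discuss.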





Interesting twist…

https://dash.harvard.edu/handle/1/37371902

Algorithms for the People: Democracy in the Age of AI

Our society is being transformed by prediction tools like artificial intelligence and machine learning. And yet, we find ourselves chasing tech companies whose AI systems we know nothing about, condemning algorithms that entrench racial inequality in the criminal justice system, and struggling to hold accountable those who build and use the predictive technologies reshaping our world, whether welfare agencies or police forces, Facebook or Google.

Algorithms for the People deploys the tools of political theory to flip the narrative around technology governance. Instead of exploring the impact of technology on democracy, this dissertation explores what the pursuit of a resilient and healthy democracy should mean for how we govern technology, connecting debates about AI ethics to ancient questions of justice and democracy.

The dissertation develops an accessible and systematic account of what technologies like AI and machine learning are, why they are political, and how the institutions that deploy them should be regulated – a political theory of machine learning. The dissertation brings together two debates that are too often disconnected: debates about algorithmic fairness and discrimination and debates about the regulation of Facebook and Google. By exploring the political problems posed by the design and use of machine learning systems in these two contexts, the dissertation shows how technology regulation and democratic reform are connected, setting out a vision for regulating machine learning that places the flourishing of democracy at its heart.





Any consensus?

https://link.springer.com/article/10.1007/s10610-022-09512-y

The Use of Facial Recognition Technology by Law Enforcement in Europe: a Non-Orwellian Draft Proposal

The European legal framework is not devoid of norms that are directly or indirectly applicable to facial recognition technology used for identification purposes in law enforcement. However, these various norms, which have different targets and come from multiple sources, create a kind of legal patchwork that could undermine the lawful use of this technology in criminal investigations. This paper advocates the creation of a specific law on the use of facial recognition technology for identification in law enforcement, built on existing regulations, to address the pressing issues arising in this domain. The ultimate aim is to allow its use under certain conditions and to protect the rights of the people involved, but also to provide law enforcement authorities with the necessary tools to combat serious crimes.



(Related)

https://www.proquest.com/openview/6140cbe2e9725629620f9d9ae32e31dc/1?pq-origsite=gscholar&cbl=18750&diss=y

A Comparative Analysis of Best Practices in a Facial Recognition Policy for Law Enforcement Agencies

This research paper discusses multiple aspects of facial recognition technology. It gives a brief history of the technology along with an overview of how FRT works and how it is implemented in law enforcement agencies, as well as in private-sector settings. The paper also reviews new and existing laws at the state, local, and federal levels. It conveys issues over the misuse of FRT, the concerns of civil rights activists, and the limitations of FRT, and it explores in detail police department policies governing the use of FRT.
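The paper's overview of how FRT works isn't quoted here, but the matching step at the core of most modern systems is easy to sketch: reduce each face image to a fixed-length embedding vector and declare a match when two embeddings are close enough. In the toy below, embed() is a stand-in for a trained face-embedding network (a fixed random projection, so the script actually runs end to end), and the 0.6 threshold is an illustrative assumption, not any agency's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.normal(size=(128, 64 * 64))  # fixed random projection

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-embedding network: center the pixels,
    apply a fixed random projection, and L2-normalize the result."""
    x = face_image.reshape(-1).astype(float)
    x = x - x.mean()
    v = PROJ @ x
    return v / np.linalg.norm(v)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))  # embeddings are already unit-length

def is_match(probe: np.ndarray, gallery: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Compare a probe face against one enrolled (gallery) face."""
    return cosine_similarity(embed(probe), embed(gallery)) >= threshold

probe = rng.random((64, 64))
print(is_match(probe, probe.copy()))          # True: identical images match
print(is_match(probe, rng.random((64, 64))))  # False (almost surely)
```

The threshold is the policy-relevant knob: lowering it produces more false matches (misidentifications), raising it misses true matches, which is one reason the policies the paper surveys emphasize accuracy testing and human review.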





Can ethics be reduced to a finite list of principles?

https://www.sciencedirect.com/science/article/pii/S266618882200020X

Towards a Unified List of Ethical Principles for Emerging Technologies. An Analysis of Four European Reports on Molecular Biotechnology and Artificial Intelligence

Artificial intelligence (AI) and molecular biotechnologies (MB) are among the most promising, but also most hotly debated, emerging technologies in ethical terms. In both fields, several ethics reports, which invoke lists of ethics principles, have been put forward. These reports and their principle lists are technology-specific. This article aims to contribute to the ongoing debate on the ethics of emerging technologies by comparatively analysing four European ethics reports from the two technology fields. Adopting a qualitative and in-depth approach, the article highlights how ethics principles from MB can inform AI ethics and vice versa. By synthesizing the respective ethical cores of the principles included in the analysed reports, the article moreover derives a unified list of principles for assessing emerging technologies. The suggested list consists of nine principles: autonomy; individual and social well-being and prevention of harm; reliability, safety, and security; informational privacy; transparency; accountability; communication, participation, and democracy; justice, fairness, and non-discrimination; and sustainability.





Would you trust someone who has access to your power switch?

https://link.springer.com/article/10.1007/s43681-022-00174-4

A probabilistic theory of trust concerning artificial intelligence: can intelligent robots trust humans?

In this paper, I argue for a probabilistic theory of trust and for the plausibility of "trustworthy AI" that we genuinely trust (as opposed to merely rely on). I show that current trust theories cannot accommodate trust pertaining to AI, and I propose an alternative probabilistic theory that accounts for the four major types of AI-related trust:

an AI agent’s trust in another AI agent,

a human agent’s trust in an AI agent,

an AI agent’s trust in a human agent, and

an AI agent’s trust in an object (including mental and complex objects).

I draw a broadly neglected distinction between transitive and intransitive senses of trust, each of which calls for a distinctive semantic theory. Based on this distinction, I classify the current theories into theories of trust and theories of trustworthiness, showing that the current theories fail to model some of the major types of AI-related trust. The proposed conditional probabilistic theory of trust and theory of trustworthiness, unlike the current theories, are scalable, and they would also accommodate the major types of trust in non-AI contexts, including interpersonal trust, reciprocal trust, one-sided trust, and trust in objects (e.g., thoughts, theories, data, algorithms, systems, and institutions).
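The abstract doesn't spell out the formal machinery, but the core move of a conditional probabilistic theory can be illustrated with a toy estimator: treat A's trust in B (with respect to some action X) as an estimate of P(B does X | B undertook to do X), learned from their interaction history. The class name, fields, and Laplace smoothing below are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class TrustLedger:
    """A's running record of B's behaviour toward one kind of commitment."""
    kept: int = 0    # commitments B honoured
    broken: int = 0  # commitments B failed to honour

    def record(self, honoured: bool) -> None:
        if honoured:
            self.kept += 1
        else:
            self.broken += 1

    def trust(self) -> float:
        """Estimated P(perform | committed), Laplace-smoothed so a
        trustee with no history starts at 0.5 rather than undefined."""
        return (self.kept + 1) / (self.kept + self.broken + 2)

# Nothing here depends on whether truster or trustee is human or AI,
# which is one way to read the claim that a probabilistic theory scales
# across all four human/AI trust combinations.
ledger = TrustLedger()
for outcome in [True, True, False, True]:
    ledger.record(outcome)
print(f"estimated trust: {ledger.trust():.2f}")  # 0.67
```

On this toy picture, trustworthiness would be a property of B (the true conditional probability) while trust is A's estimate of it; the paper's actual trust/trustworthiness distinction is more refined than this sketch.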


