Sunday, January 29, 2023

Very Frankenstein.  Grab your torches and pitchforks. 

https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/what-do-chimeras-think-about/D09DD8677F262F7E26A89E3455F113BF

What Do Chimeras Think About?

Non-human animal chimeras, containing human neurological cells, have been created in the laboratory.  Despite a great deal of debate, the status of such beings has not been resolved.  Under normal definitions, such a being could either be unconventionally human or abnormally animal.  Practical investigations in animal sentience, artificial intelligence, and now chimera research suggest that such beings may be assumed to have no legal rights, so philosophy could provide a different answer.  In this vein, therefore, we can ask: What would a chimera, if it could think, think about?  Thinking is used to capture the phenomenon of a novel, chimeric being perceiving its terrible predicament as no more than a laboratory experiment.  The creation of a thinking chimera therefore forces us to reconsider our assumptions about what makes human beings (potentially) unique (and other sentient animals different), because, as such, a chimera’s existence bridges our social and legal expectations about definitions of human and animal.  Society has often evolved new social norms based on different kinds of (ir)rational contrivances; the imperative of non-contradiction, which is defended here, therefore requires a specific philosophical response to the rights of a thinking chimeric being.



Change the question…

https://link.springer.com/article/10.1007/s43681-023-00260-1

What would qualify an artificial intelligence for moral standing?

What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing?  My starting point is that sentient AIs should qualify for moral standing.  But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience.  This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient.  After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing.  After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs.  I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously.  However, much uncertainty about these considerations remains, making this an important topic for future research.



Making Big Brother smaller?  

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331633 

Does Big Brother exist? Face Recognition Technology in the United Kingdom

Face recognition technology (FRT) has achieved remarkable progress in the last decade due to the improvement of deep convolutional neural networks.  The United Kingdom (UK) law enforcement sector has been remarkably at the forefront in employing this technology.  Smart CCTV cameras were allegedly first used in the UK, where the London Metropolitan Police Service has operated them since 1998.  More recently, it was reported that businesses in the UK have been using an FRT system known as 'Facewatch' to share CCTV images with the police and identify suspected shoplifters entering their stores.

The massive deployment of FRT has unsurprisingly tested the limits of the UK's democracy: where should the line be drawn between acceptable uses of this technology for collective or private purposes, and the protection of individual entitlements that are curtailed by the employment of FRT?  The Bridges v. South Wales Police case offered guidance on this issue.  After lengthy litigation, the Court of Appeal of England and Wales ruled in favour of the applicant, a civil rights campaigner who claimed that the live FRT deployed by the police at public gatherings infringed his rights.  The outcome of this case suggests that the use of FRT for law enforcement should be strictly regulated.

Although the Bridges case offered crucial directives on the balance between individual rights and the lawful use of FRT for law enforcement purposes under the current UK rules, several ethical and legal questions still remain unresolved.  This chapter sheds light on the UK approach to FRT regulation and offers a threefold contribution to the existing literature.  First, it provides an overview of sociological and regulatory attitudes towards this technology in the UK.  Second, the chapter discusses the Bridges saga and its implications.  Third, it offers reflections on the future of FRT regulation in the UK.



AI may pass the bar, but doesn’t seem like much real competition.  (Yet.) 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905

ChatGPT Goes to Law School

How well can AI models write law school exams without human assistance?  To find out, we used the widely publicized AI model ChatGPT to generate answers on four real exams at the University of Minnesota Law School.  We then blindly graded these exams as part of our regular grading processes for each class.  Over 95 multiple choice questions and 12 essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.  After detailing these results, we discuss their implications for legal education and lawyering.  We also provide example prompts and advice on how ChatGPT can assist with legal writing.
