Sunday, January 22, 2023

If I never use social media, am I flagged as ‘suspicious’?

https://www.igi-global.com/chapter/social-media-intelligence/317161

Social Media Intelligence: AI Applications for Criminal Investigation and National Security

This chapter aims to discuss how social media intelligence (SOCMINT) can be and has been applied to the field of criminal justice. SOCMINT comprises a set of computer forensic techniques used for intelligence gathering on social media platforms. Through this chapter, readers will be able to better understand what SOCMINT is and how it may be helpful for criminal investigation and national security. Different aspects of SOCMINT are addressed, including application in criminal justice, intelligence gathering, monitoring, metadata, cyber profiling, social network analysis, tools, and privacy concerns. Further, the challenges and future research directions are discussed as well. This chapter is not meant to serve as a technical tutorial, as the focus is on the concepts rather than the techniques.





I find the debate interesting but suspect the answer will always be “AI = person”

https://link.springer.com/article/10.1007/s12369-022-00958-y

Can Robots have Personal Identity?

This article attempts to answer the question of whether robots can have personal identity. In recent years, owing to numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents, or simply Robots has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities, or the qualities these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and setting out this relationship between identity and morality, I engage with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.





Wrong by design?

https://scholarship.richmond.edu/pilr/vol26/iss1/8/

From Ban to Approval: What Virginia's Facial Recognition Technology Law Gets Wrong

Face recognition technology (FRT), in the context of law enforcement, is a complex investigative technique that includes a delicate interplay between machine and human. Compared to other biometric and investigative tools, it poses unique risks to privacy, civil rights, and civil liberties. At the same time, its use is generally unregulated and opaque. Recently, state lawmakers have introduced legislation to regulate face recognition technology, but this legislation often fails to account for the complexities of the technology, or to address the unique risks it poses. Using Virginia’s recently passed face recognition law and the legislative history behind it as an example, we show how legislation can fail to properly account for the harms of this technology.





AI as slave?

https://www.cambridge.org/core/journals/legal-studies/article/bridging-the-accountability-gap-of-artificial-intelligence-what-can-be-learned-from-roman-law/8B2B88D50E0A795F358C2F53958BDB43

Bridging the accountability gap of artificial intelligence – what can be learned from Roman law?

This paper discusses the accountability gap problem posed by artificial intelligence. After sketching out the accountability gap problem, we turn to ancient Roman law and scrutinise how slave-run businesses dealt with the accountability gap through the indirect agency of slaves. Our analysis shows that Roman law developed a heterogeneous framework in which multiple legal remedies coexisted to accommodate the various competing interests of owners and contracting third parties. Moreover, Roman law shows that addressing the various emerging interests was a continuous and gradual process of allocating risks among different stakeholders. The paper concludes that these two findings are key for contemporary discussions on how to regulate artificial intelligence.
