Learning from the differences?
https://www.hsaj.org/resources/uploads/2021/12/hsaj_V18_SouthoftheBorder_Dec2021v3.pdf
South of the Border: Legal and Privacy Underpinnings of Canadian and U.S. Approaches to Police Video Usage
This article uses modern privacy theory to examine the differing approaches of Canadian and U.S. law enforcement to camera technology and advancing video analytics. The article documents the deployment of camera technology in two comparably sized urban centers in Canada and the U.S. The two countries' receptions of policing strategies that use sensors such as cameras and video analytics tools such as license plate and facial recognition stand in sharp contrast. The article then offers a summary of Canadian and U.S. legal privacy protections following Alan Westin’s privacy model. This analysis suggests that the differences in the use of technology stem from philosophical and legal assessments of privacy rights; specifically, protection for privacy concepts like anonymity and reserve explains the differing approaches. Understanding those differences is useful for administrators on either side of the border as they plan their use of emerging imaging sensor technology and analytics.
Because I use self-driving tech as a challenge to my students.
https://ideas.repec.org/p/ipt/iptwpa/jrc127051.html
Trustworthy Autonomous Vehicles
This report aims to advance towards a general framework on Trustworthy AI for the specific domain of Autonomous Vehicles (AVs). It discusses in detail, and contextualizes for the field of AVs, the implementation and relevance of the assessment list established by the independent High-Level Expert Group on Artificial Intelligence (AI HLEG) as a tool for translating the seven requirements, defined in the Ethics Guidelines, that AI systems should meet in order to be trustworthy. The general behaviour of an AV depends on a set of multiple, complex, interrelated Artificial Intelligence (AI) based systems, each dealing with problems of a different nature. The application context of AVs can intuitively be considered high-risk, and their adoption involves addressing significant technical, political, and societal challenges. However, AVs could bring substantial benefits by improving safety, mobility, and the environment. Therefore, although challenging, it seems necessary to deepen the application of the assessment criteria of trustworthy AI for AVs.
Perspective.
https://www.nationalreview.com/2022/01/beware-amoral-humans-not-artificial-intelligence/
Beware Amoral Humans, Not Artificial Intelligence
… A rapidly growing number of people now earnestly wonder: What will AI do to society?
It’s an evocative, important question. But the answer will depend on the answer to a more important question altogether: Who is building AI? Artificial intelligence will do great evil or great good depending on the beliefs of those who make it. And right now, it is primarily being built by tech leaders with little to no understanding of or respect for the morality, virtues, and wisdom of the Judeo-Christian tradition. The greatest threat with AI isn’t all-powerful robots. It’s amoral people.
Should we debate it now or wait for the AIs to join the debate?
https://onlinelibrary.wiley.com/doi/abs/10.1111/sjp.12450
Moral Status and Intelligent Robots
The great technological achievements in the recent past regarding artificial intelligence (AI), robotics, and computer science make it very likely, according to many experts in the field, that we will see the advent of intelligent and autonomous robots that either match or supersede human capabilities in the midterm (within the next 50 years) or long term (within the next 100–300 years). Accordingly, this article has two main goals. First, we discuss some of the problems related to ascribing moral status to intelligent robots, and we examine three philosophical approaches—the Kantian approach, the relational approach, and the indirect duties approach—that are currently used in machine ethics to determine the moral status of intelligent robots. Second, we seek to raise broader awareness among moral philosophers of the important debates in machine ethics that will eventually affect how we conceive of key concepts and approaches in ethics and moral philosophy. The effects of intelligent and autonomous robots on our traditional ethical and moral theories and concepts will be substantial and will force us to revise and reconsider many established understandings. Therefore, it is essential to turn attention to debates over machine ethics now so that we can be better prepared to respond to the opportunities and challenges of the future.
Sort of like Donald Trump fighting to keep his “letters” private? (Nope. Not even close.)
https://www.pogowasright.org/the-duchess-and-the-tabloid-the-mail-admits-its-beaten/
The Duchess and the tabloid: The Mail admits it’s beaten
Brian Cathcart writes:
The owners of the Mail on Sunday have finally capitulated in the privacy and breach of copyright case brought by the Duchess of Sussex, abandoning any idea of a further appeal and agreeing to pay an undisclosed sum in settlement.
Following an order [pdf] made by the Court of Appeal which (among other things) required the publication of the front-page admission of defeat that appeared in the newspaper on Sunday, all that remains to be settled is the amount to be paid to the Duchess to reimburse her for her legal costs.
Read more at Inforrm’s Blog.
[From the article:
Most students of law or journalism could have told the editor of the Mail on Sunday that, in publishing large parts of an obviously personal letter from a daughter to her father, he was breaching her rights in ways he could not hope to justify. He too must have known this, and the only viable explanation for his action is that he assumed she would not sue.
He assumed wrongly. She not only sued but really went for the paper – a paper that had lied about her and smeared her for years. And Associated, again blind to reality, decided to fight the case.]