Sunday, March 07, 2021

All the issues of self-driving cars, except sharing the sidewalk with you!

https://www.axios.com/sidewalk-robots-legal-rights-pedestrians-821614dd-c7ed-4356-ac95-ac4a9e3c7b45.html

Sidewalk robots get legal rights as "pedestrians"

As small robots proliferate on sidewalks and city streets, so does legislation that grants them generous access rights and even classifies them, in the case of Pennsylvania, as "pedestrians."

Why it matters: Fears of a dystopian urban world where people dodge heavy, fast-moving droids are colliding with the aims of robot developers large and small — including Amazon and FedEx — to deploy delivery fleets.





Concern over who or what might be listening is now "a thing"?

https://www.sciencedirect.com/science/article/abs/pii/S0747563221000856

'Okay google, what about my privacy?': User's privacy perceptions and acceptance of voice based digital assistants

Conversational Artificial Intelligence (AI)-backed Alexa, Siri and Google Assistant are examples of voice-based digital assistants (VBDAs) that ubiquitously occupy our living spaces. While they gather enormous amounts of personal information to provide a bespoke user experience, they also evoke serious privacy concerns regarding the collection, use and storage of consumers' personal data. The objective of this research is to examine consumers' perception of these privacy concerns and, in turn, its influence on the adoption of VBDAs. We extend the celebrated UTAUT2 model with perceived privacy concerns, perceived privacy risk and perceived trust. Using survey data collected from tech-savvy respondents, we show that trust in the technology and the service provider plays an important role in the adoption of VBDAs. In addition, we notice that consumers weigh a trade-off between the privacy risks and the benefits associated with VBDAs when adopting such technologies, reiterating their privacy-calculus behaviour. Contrary to the extant literature, our results indicate that consumers' perceived privacy risk does not influence adoption intention directly; it is mediated through perceived privacy concerns and consumers' trust. We conclude the paper with theoretical and managerial implications.





Confusing. Do you have “a right to work” for me? Must I employ anyone who asks? If not, do I owe them other compensation?

https://scholarlycommons.law.emory.edu/eilr-recent-developments/30/

Automation and the International Human Right to Work

Automation continues to result in significant structural changes to the nature of work as computers, robots, or Artificial Intelligence (AI) are performing an increasing number of jobs. These technologies have elevated the possibilities for human prosperity and innovation, but job loss, privacy infringements, and the increasing agency of robotic systems are all acknowledged risks. These concerns are not new. In 1948, when delegates from 48 countries came together to sign the Universal Declaration of Human Rights (UDHR), they sought to capture in words what a “good human life” meant, which included the "right to work." Human rights instruments, like the UDHR, provide a useful framework for analyzing the risks and ramifications of technological development in automation. As such, this Article examines how technology is exacerbating "right to work" violations and increasing the need for "right to work" protections in order to proactively respond to the negative effects of an increasingly automated world.





It’s obvious, isn’t it?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3795427

Artificial Intelligence and The Struggle Between Good and Evil

Numerous reports have described—in great detail—the real and potential harms of the widespread development and adoption of artificial intelligence by both government and private industry. However, artificial intelligence has also been shown to create faster, more accurate and more equitable outcomes than humans in many situations. This seeming contradiction has led to dichotomous thinking describing artificial intelligence as either good or evil. Artificial intelligence, like all technological developments, is a tool: one that can be used—intentionally and sometimes, unintentionally—for good or for harm. The gray area comes into play when there is good or neutral intent that leads to a harmful result. This Essay is designed to help policymakers, investors, scholars, and students understand the multifaceted nature of artificial intelligence and the key challenges it presents and to caution against creating laws that prohibit AI programs outright rather than addressing the fundamental need to develop AI responsibly.





Good questions?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3794777

Artificial Intelligence and the Rule of Law

This book chapter examines an interaction between technological shocks and the “rule of law.” It does so by analyzing the implications of a class of loosely related computational technologies termed “machine learning” (ML) or, rather less precisely, “artificial intelligence” (AI). These tools are presently employed in the pre-adjudicative phase of law enforcement, for example, facilitating the selection of targets for tax and regulatory investigations.

Two general questions respecting the rule of law arise from these developments. The more immediately apparent one is whether these technologies, when integrated into the legal system, are themselves compatible or in conflict with the rule of law.

The second question posed by new AI and ML technologies has also not been extensively discussed. Yet it is perhaps of more profound significance. Rather than focusing on the compliance of new technologies with rule-of-law values, it hinges on the implications of ML and AI technologies for how the rule of law itself is conceived or implemented.


