I’m sure we have already covered these...
https://www.zdnet.com/article/5-exciting-applications-for-artificial-intelligence/
Artificial intelligence: 5 innovative applications that could change everything
Artificial intelligence is transforming how businesses across many different industries operate. By adopting AI, they can automate activities and produce more efficient and effective results. The McKinsey Technology Trends Outlook 2022 report took an in-depth look at AI and its many applications – which reach far beyond the tech industry. Here are a few major sectors where AI will have important impacts.
About time!
Facebook-Cambridge Analytica data breach lawsuit ends in 11th hour settlement
Mark Townsend reports:
Facebook has dramatically agreed to settle a lawsuit seeking damages for allowing Cambridge Analytica access to the private data of tens of millions of users, four years after the Observer exposed the scandal that mired the tech giant in repeated controversy.
A court filing reveals that Meta, Facebook’s parent company, has in principle settled for an undisclosed sum a long-running lawsuit that claimed Facebook illegally shared user data with the UK analysis firm.
Read more at The Guardian.
More on the ultimate question.
https://dial.uclouvain.be/pr/boreal/object/boreal:264528
Humans With, Not Versus Robots
Wesley Newcomb Hohfeld postulated, synthesizing a seemingly unbudging legal tradition, that law is about (a finite set of) relationships between humans. First animals and now, increasingly, robots make us question this. This paper discusses in some detail the ways in which the law might accommodate some of the relationships that humans have with robots. These relationships vary greatly in their degree of closeness. At one end, robots are treated as detached tools (arguably the great majority of cases now and, some argue, how it should always remain). Further along, certain robots are seen as companions or partners (increasingly reported as a trend we are moving towards). At the far end, robots are seen as part and parcel of ourselves, as extensions of our own person or body parts. Since our relationship to our tools has long been dealt with by law and is largely uncontroversial, the article focuses on human-robot collaborations and the legal shape they may take, exploring available legal avenues as well as innovations in terms of legal status.
From a local…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4195066
Regulating the Risks of AI
Margot E. Kaminski, University of Colorado Law School; Yale University - Yale Information Society Project; University of Colorado at Boulder - Silicon Flatirons Center for Law, Technology, and Entrepreneurship
Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI leads to well-known risks—that is, not yet realized but potentially catastrophic future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.
This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Moreover, there are at least four models for risk regulation, each with divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that emerge when regulators employ one model of risk regulation, while stakeholders call for another.
This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).