Sunday, August 15, 2021

A law in need of an update?

https://techcrunch.com/2021/08/14/how-the-law-got-it-wrong-with-apple-card/

How the law got it wrong with Apple Card

Advocates of algorithmic justice have begun to see their proverbial "days in court" with legal investigations of enterprises like UHG and Apple Card. The Apple Card case is a strong example of how current anti-discrimination laws fail to keep pace with scientific research in the emerging field of quantifiable fairness.

While it may be true that Apple and its underwriters were cleared of fair lending violations, the ruling came with clear caveats that should serve as a warning to enterprises using machine learning in any regulated space. Unless executives begin to take algorithmic fairness more seriously, their days ahead will be full of legal challenges and reputational damage.

And yet, there is no doubt in my mind that the Goldman/Apple algorithm discriminates, along with every other credit scoring and underwriting algorithm on the market today. Nor do I doubt that these algorithms would fall apart if researchers were ever granted access to the models and data needed to validate this claim. I know this because the NY DFS partially released its methodology for vetting the Goldman algorithm, and as you might expect, the audit fell far short of the standards held by modern algorithm auditors.
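For a concrete sense of what "quantifiable fairness" means in practice, here is a minimal sketch in Python of one check a modern algorithm auditor might run on lending decisions: the adverse impact ratio behind the familiar "four-fifths" rule of thumb from US fair lending practice. The group labels and counts below are hypothetical; a real audit of the Goldman/Apple model would also test statistical significance and control for legitimate credit factors.

# Minimal sketch of a quantifiable-fairness check on credit approvals.
# All figures and group labels here are hypothetical.

def adverse_impact_ratio(outcomes):
    """outcomes maps group -> (approved, applicants).
    Returns each group's approval rate divided by the highest group's rate.
    Under the common "four-fifths" rule of thumb, a ratio below 0.8 flags
    potential disparate impact worth investigating."""
    rates = {g: a / n for g, (a, n) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical approval counts from a scored applicant pool.
audit_sample = {"group_a": (720, 1000), "group_b": (510, 1000)}
for group, ratio in adverse_impact_ratio(audit_sample).items():
    flag = "  <-- below 0.8: potential disparate impact" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")

Even a crude check like this requires exactly what the author says researchers lack: outcome data broken out by protected group, which is why access to the models and data matters.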





This may be a repeat, but worth repeating.

https://venturebeat.com/2021/08/13/ai-weekly-the-road-to-ethical-adoption-of-ai/

AI Weekly: The road to ethical adoption of AI

As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Increasingly, there are many sets of guidelines — the Organization for Economic Cooperation and Development’s AI repository alone hosts more than 100 documents — that are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm because, as the coauthors of a recent paper write, AI’s impacts are hard to assess — especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may not come to pass and on generalizations so broad that the conversations become untenable. In particular, companies run the risk of engaging in “ethics shopping,” “ethics washing,” or “ethics shirking,” in which they burnish their position with customers to build trust while minimizing accountability.





A summary of AI laws.

https://www.gibsondunn.com/artificial-intelligence-and-automated-systems-legal-update-2q21/

Artificial Intelligence and Automated Systems Legal Update (2Q21)

Our 2Q21 Artificial Intelligence and Automated Systems Legal Update focuses on these key regulatory efforts and also examines other policy developments within the U.S. and EU that may be of interest to domestic and international companies alike.





Perspective. How to get from here to there?

https://thenextweb.com/news/create-artificial-general-intelligence-we-need-reevaluate-intelligence-syndication

To achieve AGI, we need new perspectives on intelligence

In a paper that was presented at the Brain-Inspired Cognitive Architectures for Artificial Intelligence (BICA*AI), Sathyanaraya Raghavachary, Associate Professor of Computer Science at the University of Southern California, discusses “considered response,” a theory that can generalize to all forms of intelligent life that have evolved and thrived on our planet.

Titled “Intelligence—consider this and respond!”, the paper sheds light on the possible causes of the troubles that have haunted the AI community for decades and draws important conclusions, including the consideration of embodiment as a prerequisite for AGI.





Perspective.

https://philpapers.org/rec/RYAEAA-2

Ethics and Artificial Intelligence

In Encyclopedia of Business and Professional Ethics, pp. 1–5 (2021)

A subdiscipline has emerged around AI ethics, comprising a wide array of individuals: computer scientists, ethicists, cognitive scientists, roboticists, legal professionals, economists, sociologists, and gender and race theorists. This has led to a very interesting branch of research addressing issues surrounding the development and use of AI. This chapter gives a very brief snapshot of some of the most pertinent ethical concerns. Many of the issues in the Big Data Ethics chapter in this collection also apply to AI ethics, because of the data that these technologies retrieve, store, and use, so they will not be duplicated here. While data-related issues are not new or unique to AI, AI holds the potential to retrieve and analyze data at a scale that would not otherwise be possible. Data is being used in unique and transformative ways: facial recognition to identify individuals from photos or CCTV, AI robots retrieving live video feeds of the patients they monitor, or self-driving cars collecting an abundance of data about our surroundings, how we drive, and the passengers in the car. The potential consequences range from infringing on individuals’ privacy and restricting access to resources to, at worst, the creation of a surveillance society.





Perspective.

http://27.109.7.66:8080/xmlui/bitstream/handle/123456789/673/The%20Interface%20Between%20Law%20and%20Technology%20.pdf?sequence=1

The Interface between Law and Technology

There is no gainsaying that advancements in science and technology have had a massive impact on the advancement of the law and legal techniques. This is no better exemplified than by our advancements in the field of forensic technology. Indeed, advancements in forensic technology have greatly optimised our investigative tools, making criminal investigation more potent and penetrating. This is especially true in the case of DNA profiling. The use of DNA samples has proven to be an effective investigative tool: the technique has successfully aided in identifying “unknown victims, suspects, and serial offenders,” and in some cases it has also helped release wrongfully charged or convicted individuals. Contemporary DNA profiling methods are based on scientifically validated research standards; thus, DNA-backed evidence is not only accurate but at times the only means of reaching a conclusion. There are caveats, however, and even the most advanced scientific procedure can mislead us in our search for justice. While the caveat applies to overreliance on technology for dealing with our legal issues, it applies equally to the wrongful or erroneous use of technology. History is replete with examples of how the erroneous use of technological tools has impacted our fundamental freedoms, including life itself. In this context, Petherick observes that in most cases this occurs

when experts do not avail themselves of all available evidence, when they are oblivious or unaware of evidence that exists, when experts are not aware of their own shortcomings, or where bias or cognitive distortion taint the expert’s opinion, even in cases where the evidence may be pristine or voluminous.





Answer only one question: Under what circumstances should my self-driving car kill me rather than the other guy?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3902217

Autonomous Vehicles, Moral Hazards & the "AV Problem"

The autonomous vehicle (“AV”) industry faces the following ethical question: “How do we know when our AV technology is safe enough to deploy at scale?” The search for an answer to this question is the “AV Problem.” This essay examines that question through the lens of the July 15, 2021 filing on Form S-4 with the Securities and Exchange Commission in the going-public transaction for Aurora Innovation, Inc.

The filing reveals that successful implementation of Aurora’s business plan in the long term depends on the truth of the following proposition: A vehicle controlled by a machine driver is safer than a vehicle controlled by a human driver (the “Safety Proposition”).

In a material omission for which securities law liability may attach, the S-4 fails to state Aurora’s position on deployment: will Aurora delay deployment until such time as it believes the Safety Proposition is true to a reasonable certainty or will it deploy at scale earlier in the hope that increased current losses will be offset by anticipated future safety gains?

The Safety Proposition is a statement about physical probability which is either true or false. For success, AV companies need the public to believe the Safety Proposition, yet belief is not the same as truth. The difference between truth and belief creates tension in the S-4: the filing fosters belief in the Safety Proposition while at the same time making clear that there is insufficient evidence to support its truth.

A moral hazard results when financial pressures push for early deployment of AV systems before evidence shows the Safety Proposition to be true to a reasonable certainty. The essay analyzes this problem by comparison with the famous trolley problem in ethics and considers corporate governance techniques an AV company might use to ensure the integrity of its deployment decisions. The AV industry works to promote belief in the Safety Proposition in the hope that the public will accept that AV technology has benefits, thus avoiding the need to confront its truth directly. This hinders meaningful public debate about the merits and timing of AV deployment, raising the question of whether there is a place for meaningful government regulation.
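To see why the evidence gap is so wide, here is a back-of-envelope sketch in Python (my own, not from the essay) of how many fatality-free miles an AV fleet would need before the Safety Proposition could be asserted to a reasonable certainty. It assumes a US human-driver fatality rate of roughly 1.1 deaths per 100 million vehicle miles, an approximate figure used for illustration only, and applies the classical "rule of three" confidence bound.

# Back-of-envelope sketch of the Safety Proposition's evidence gap.
# Assumption (approximate, for illustration): US human-driver fatality
# rate of about 1.1 deaths per 100 million vehicle miles.

human_rate = 1.1 / 100_000_000  # fatalities per mile (approximate)

# With zero fatalities observed over n miles, the classical "rule of
# three" gives a 95% upper confidence bound on the fatality rate of 3/n.
# To bound the AV rate below the human rate, we need 3/n < human_rate.
miles_needed = 3 / human_rate

print(f"Fatality-free miles needed: {miles_needed:,.0f}")
# -> roughly 273 million miles, and that only establishes parity at 95%
#    confidence; demonstrating a clear *improvement* would take far more.

Under these assumptions, even a flawless fleet would need hundreds of millions of miles of evidence, which is precisely the gap between belief and truth the essay describes.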



 
