I think the more likely scenario is that AI will ignore us.
New paper by Google and Oxford scientists claims AI will soon destroy mankind
Researchers with the University of Oxford and Google DeepMind have shared a chilling warning in a new paper. The paper, published in AI Magazine last month, claims that the threat from AI is greater than previously believed. It's so great, in fact, that artificial intelligence is likely to one day rise up and annihilate humankind.
(Related) Unfortunately, neither are the human supervisors.
https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions
AI Isn’t Ready to Make Unsupervised Decisions
AI has progressed to compete with the best of the human brain in many areas, often with stunning accuracy, quality, and speed. But can AI introduce the more subjective experiences, feelings, and empathy that make our world a better place to live and work, without cold, calculating judgment? Hopefully, but that remains to be seen. The bottom line is that AI is based on algorithms that respond to models and data; it often misses the big picture, and most of the time it cannot explain the reasoning behind a decision. It isn't ready to assume human qualities that emphasize empathy, ethics, and morality.
Are things that different at the border?
https://www.pogowasright.org/customs-officials-have-copied-americans-phone-data-at-massive-scale/
Customs officials have copied Americans’ phone data at massive scale
Drew Harwell reports:
U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer.
The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress about what use the government has made of the information, much of which is captured from people not suspected of any crime. CBP officials told congressional staff the data is maintained for 15 years.
Read more at The Washington Post.
Am I correct to say that we no longer need actual harm, only proof that the risk of future harm is “sufficient”? (And what does “sufficient” mean?)
Data Breach and the Dark Web: Third Circuit Allows Class Action Standing With Sufficient Risk of Harm
In a new post on the Inside Class Actions blog, our colleagues discuss a recent Third Circuit decision reinstating the putative class action Clemens v. ExecuPharm Inc., which concluded that the risk of imminent harm after a data breach was sufficient to confer standing on the named plaintiff once the stolen information had been posted on the Dark Web.
Will AI produce a rebuttal? Is everything they write an attempt to “fool” people? Can AI contribute nothing to our understanding of law?
https://www.bespacific.com/a-human-being-wrote-this-law-review-article/
A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law
Cyphert, Amy, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law (November 1, 2021). UC Davis Law Review, Volume 55, Issue 1; WVU College of Law Research Paper No. 2022-02. Available at SSRN: https://ssrn.com/abstract=3973961
“Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the NYTimes to Reddit boards. And so it comes as no surprise that researchers have already documented instances of bias in which GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access to justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs. As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guardrails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about the pros and cons of using AI to ensure the ethical use of this emerging technology.”
Perspective.
https://www.bespacific.com/blackboxing-law-by-algorithm/
Blackboxing Law by Algorithm
Grigoleit, Hans Christoph, Blackboxing Law by Algorithm (June 16, 2022). Speech delivered at Oxford Business Law Blog Annual Conference on June 16, 2022
“This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Hans Christoph Grigoleit, who participated on the panel on ‘Law by Algorithm’. “Adapting a line by the ingenious pop-lyricist Paul Simon, there are probably 50 ways to leave the traditional paths of legal problem solving by making use of algorithms. However, it seems that the law lags behind other fields of society in realizing synergies resulting from the use of algorithms. In their book ‘Law by Algorithm’, Horst Eidenmüller and Gerhard Wagner accentuate this hesitance in a paradigmatic way: while the chapter on ‘Arbitration’ is optimistic regarding the use of algorithms in law (‘… nothing that fundamentally requires human control …’), the authors’ view turns much more pessimistic when trying to specify the perspective of the ‘digital judge’. Following up on this ambivalence, I would like to share some observations on where and why it is not so simple to bring together algorithms and legal problem solving.”