Sunday, April 24, 2022

Untrustworthy lawyers? I can’t believe it! (Includes ‘things to look for’)

https://thenextweb.com/news/scammers-used-ai-generated-faces-to-pose-as-a-boston-law-firm

Scammers used AI-generated faces to pose as a Boston law firm

Nicole Palmer is a lawyer who graduated from Columbia University. Her profile states that she “specializes in the application and protection of industrial design” and has “been building her career successfully for 30 years.”

The only problem is that she doesn’t exist. And she helped me uncover an online scam operation involved in shady activities, including extorting backlinks from bloggers and website owners.

I’ve spent a good part of the past week investigating Arthur Davidson, the so-called “law firm” Nicole works for. What I found was unsettling, a testament to how advances in technology have made it easy for scammers to set up legitimate-looking outfits to prey on their victims.





Are all Internet regulations ‘too much’?

https://www.theglobeandmail.com/politics/article-twitter-compared-liberal-governments-online-harms-plan-to-china-north/

Ottawa faces blowback for plan to regulate internet

Newly released documents reveal Twitter Canada told government officials that a federal plan to create a new internet regulator with the power to block specific websites is comparable to drastic actions used in authoritarian countries like China, North Korea and Iran.

The letter, marked confidential, is among more than 1,000 pages of submissions to an online consultation the Liberal government launched in July, in order to gather opinions on its draft plan for curbing hate speech and other online harms. The documents show the wide-ranging blowback Ottawa received.

Another private letter, from the National Council of Canadian Muslims, warns that the government’s plans “could inadvertently result in one of the most significant assaults on marginalized and racialized communities in years.”





You pays your money and you takes your chance.

https://www.bu.edu/bulawreview/files/2022/04/STEIN.pdf

ASSUMING THE RISKS OF ARTIFICIAL INTELLIGENCE

Tort law has long served as a remedy for those injured by products—and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for these prospective plaintiffs injured by AI: first adopters who are often eager to “voluntarily” use the new technology but who simultaneously often lack “knowledge” of AI’s risks.

To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions—and AI torts. In addition to these AI-human interactions, this Article’s reevaluation also can help in other assumption of risk analyses and tort law generally to better address the evolving innovation-risk-consent trilemma.





Imagine the IRS replaced by a single computer: The Taxinator!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4084844

Assessing Automated Administration

To fulfill their responsibilities, governments rely on administrators and employees who, simply because they are human, are prone to individual and group decision-making errors. These errors have at times produced both major tragedies and minor inefficiencies. One potential strategy for overcoming cognitive limitations and group fallibilities is to invest in artificial intelligence (AI) tools that allow for the automation of governmental tasks, thereby reducing reliance on human decision-making. Yet as much as AI tools show promise for improving public administration, automation itself can fail or can generate controversy. Public administrators face the question of when exactly they should use automation. This paper considers the justifications for governmental reliance on AI along with the legal concerns raised by such reliance. Comparing AI-driven automation with a status quo that relies on human decision-making, the paper provides public administrators with guidance for making decisions about AI use. After explaining why prevailing legal doctrines present no intrinsic obstacle to governmental use of AI, the paper presents considerations for administrators to use in choosing when and how to automate existing processes. It recommends that administrators ask whether their contemplated uses meet the preconditions for the deployment of AI tools and whether these tools are in fact likely to outperform the status quo. In moving forward, administrators should also consider the possibility that a contemplated AI use will generate public or legal controversy, and then plan accordingly. The promise and legality of automated administration ultimately depends on making responsible decisions about when and how to deploy this technology.





Interesting questions?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4083771

Considerations regarding Artificial Intelligence and Civil Liability: the case of autonomous vehicles

The current paper intends to discuss the possibilities and the legal grounds of civil liability for damages caused by autonomous vehicles, meaning vehicles that operate without human intervention, under Portuguese law. Artificial Intelligence [AI] has been evolving in such a way that we can assume the time is rapidly approaching when machines like automobiles will be able to operate in a completely autonomous way, without human intervention. The automobile industry has been implementing progressively more independent navigation systems, and some manufacturers are already testing driverless vehicles. But who will be held liable if these vehicles cause damage to persons or goods? Who will be held liable if a fatal accident or car collision occurs? The automobile’s owner, the person being transported, the manufacturer of the vehicle, or the programmer who created the algorithms on which the vehicle based its conduct? And on which legal grounds?

On the other hand, several other issues can arise regarding this matter: can we call the action of a driverless automobile “conduct”? Will we reach a point at which AI evolves until it is able to learn by itself and decides to cause damage? How can an autonomous agent be held liable if it doesn’t have legal personality?





AI is showing up everywhere else, why not the human brain?

https://www.nationalreview.com/corner/transhumanist-theorist-calls-the-ai-unenhanced-useless-people/

Transhumanist Theorist Calls the AI-Unenhanced ‘Useless People’

Transhumanism, boiled down to its bones, is pure eugenics. It calls itself “H+,” for more or better than human. Which, of course, is what eugenics is all about.

Alarmingly, transhumanist values are being embraced at the highest strata of society, including in Big Tech, in universities, and among the Davos crowd of globalist would-be technocrats. That being so, it is worth listening in to what they are saying under the theory that forewarned is forearmed.

Israeli philosophy professor Yuval Harari is one of the movement’s chief proselytizers. He believes that AI/human hybrids are inevitably going to take over — and that those of us who refuse to join our minds with these computer programs will come to be considered a “useless class,” or even, “useless people.” From the Miami Standard story:



Useful resource?

https://dataconomy.com/2022/04/artificial-intelligence-terms-ai-glossary/

AI DICTIONARY: BE A NATIVE SPEAKER OF ARTIFICIAL INTELLIGENCE

You’ve undoubtedly heard the phrases “data mining” and “machine learning,” but you’ve never been able to find a succinct definition for what you were reading. Now you don’t have to go very far to find one.
