Saturday, June 03, 2023

I think it’s a bit of a stretch, but it might be fun to try.

https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051

How AI could take over elections – and undermine democracy

Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.





Perspective.

https://www.fox13now.com/news/local-news/utah-starts-crafting-policies-to-regulate-artificial-intelligence

Utah starts crafting policies to regulate artificial intelligence

"You can use a chatbot to personalize, especially these chatbots that are very human seeming, you can use that chatbot to personalize the customer experience to a point it’s indistinguishable for the person where it’s a person talking to them or a robot," he told the crowd.

Pelikan noted a "hype cycle" right now surrounding AI, pointing out that the technology has been used for years; what has changed is its quality, with responses that are more "human." But others warned of risks of misinformation, "deepfake" hoaxes and an over-reliance on the technology with a lack of human oversight.

"The machines are going to become a lot smarter and people are going to become a lot stupider," joked Barclay Burns, the CEO of GenerativeImpact.AI.





Perspective.

https://www.schneier.com/blog/archives/2023/06/open-source-llms.html

Open-Source LLMs

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of AI research has dramatically changed.

This development hasn’t made the same splash as other corporate announcements, but its effects will be much greater.

This essay was written with Jim Waldo, and previously appeared on Slate.com.
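
To make the laptop-scale claim concrete, here is a minimal sketch of querying a quantized LLaMA-descendant locally. It assumes the community llama-cpp-python bindings (pip install llama-cpp-python) and a quantized model file already downloaded; the model path is a placeholder, not a real file.

# A hedged sketch, not the essay's own example: load a local quantized model
# with the llama-cpp-python bindings and run one prompt on CPU.
from llama_cpp import Llama

llm = Llama(model_path="./models/7b-q4.bin")  # placeholder path to quantized weights
result = llm("Open-source LLMs matter because", max_tokens=48)
print(result["choices"][0]["text"])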





Perspective. Hey! It can’t hurt.

https://lithub.com/tool-or-terror-looking-to-literature-to-better-understand-artificial-intelligence/

Tool or Terror? Looking to Literature to Better Understand Artificial Intelligence

“The Algorithm knew the timing of our periods. It knew when and if we’d marry,” begins “The Future Is a Click Away,” a curious short story in Allegra Hyde’s new collection, The Last Catastrophe. “It knew how we’d die… It knew what seemed unknowable: the hidden chambers of our hearts. When it sent us tampons in the mail, we took them. We paid.”

In an arrestingly quirky first paragraph, Hyde sets up the central conceit of the story: that in an unspecified future, humans live in a world where something known only as “the Algorithm” sends them packages—often twice daily—that they have not ordered (unlike, say, on Amazon) but that seem to uncannily reflect their needs (as well as their budgets, for the Algorithm usually only sends packages that each person can afford). It’s a playful satire of artificial intelligence and of corporate surveillance of our lives—one that seems funny until it isn’t, for it hits all too close to home.





Resources.

https://mashable.com/uk/deals/free-artificial-intelligence-chatgpt-courses

10 of the best artificial intelligence courses you can take online for free

… These are the best online artificial intelligence courses you can take for free as of June 3:



Friday, June 02, 2023

It is always difficult to sort through claims and counterclaims to determine the truth. In this case it seems interesting that no Russian diplomats were compromised, just “foreign diplomats based in Russia.”

https://www.reuters.com/technology/russias-fsb-says-us-nsa-penetrated-thousands-apple-phones-spy-plot-2023-06-01/

Russia says US hacked thousands of Apple phones in spy plot

Russia's Federal Security Service (FSB) said on Thursday it had uncovered an American espionage operation that compromised thousands of iPhones using sophisticated surveillance software.

The FSB, the main successor to the Soviet-era KGB, said in a statement that several thousand Apple Inc devices had been infected, including those of domestic Russian subscribers as well as foreign diplomats based in Russia and the former Soviet Union.

Apple denied the allegation. "We have never worked with any government to insert a backdoor into any Apple product and never will," the firm said in a statement.

In a blog post, Kaspersky said the oldest traces of infection it discovered dated back to 2019. "As of the time of writing in June 2023, the attack is ongoing," the company said. It added that while its staff was hit, "we are quite confident that Kaspersky was not the main target of this cyberattack."





Is there any benefit in keeping the use of ChatGPT secret?

https://www.bespacific.com/when-do-your-employees-need-to-disclose-their-use-of-chatgpt/

When do your employees need to disclose their use of ChatGPT?

HR Brew: “…As ChatGPT and other generative AI technologies provide a helping hand to employees, HR teams are grappling with policies regarding its use, including disclosure. Some companies have banned or restricted employees from using the tech. Others are embracing the possibilities it offers and see it as a tool to boost employee productivity. As HR teams develop company policies governing generative AI use, they need to consider whether employees need to come clean about the assist. “Over the next couple years, I think every single organization in every industry is going to have to come to a crossroads as to how their organization is really going to standardize…the use of generative AI…depending on the type of work they do for their clients,” Christie Lindor, Bentley University professor and CEO of DE&I firm Tessi Consulting, said. “One of the biggest questions is around, ‘Should I disclose or not disclose?’” Lindor helps companies assess AI use, making sure it’s aligned with their DE&I strategies, and she said how she advises companies on disclosure is “industry dependent.”





Study history!

https://www.omfif.org/2023/06/the-macroeconomics-of-artificial-intelligence/

The macroeconomics of artificial intelligence

Although the technology itself is wholly new, the macroeconomic challenges associated with AI are not. History provides ample evidence that, while AI is unlikely to spur joblessness or mass unemployment, the likelihood of rising inequality is high. And this brings with it a host of monetary and macroeconomic considerations that will impact economists and central banks alike.

What are the macroeconomics of AI?

A common starting place for economists is to study previous technological revolutions – most notably the industrial revolution and the early 20th century technological transformation (railroads and glassware, for example). Both of these periods coincided with significant growth in labour productivity. Both boosted growth and quality of life for subsequent generations. And both of these moments were replete with contemporary discourse that fretted about the ‘end of work’ and the prospect of widespread unemployment at the hands of new machines.

We now know that previous major technological revolutions did not induce mass unemployment. And so, it often becomes tempting for economists to dismiss such technophobic angst as having been completely misguided. Efficiency gains allowed households to save money due to the lower costs that emerged from more efficient industries (agriculture, for example), which led to the creation of new occupations.

There is currently no evidence to suggest that the economics of AI are substantively different from those of previous technologies. AI promises to greatly expand the scope of what occupations can be made ‘digital’, which may boost productivity and encourage a reallocation of labour towards new industries or occupations, which are best performed by humans. It marks an extension of existing trends in digitalisation – present since the 1980s – as opposed to something entirely distinct.





Read, ruminate, relax.

https://venturebeat.com/ai/top-ai-researcher-dismisses-ai-extinction-fears-challenges-hero-scientist-narrative/

Top AI researcher dismisses AI ‘extinction’ fears, challenges ‘hero scientist’ narrative

Kyunghyun Cho, a prominent AI researcher and an associate professor at New York University, has expressed frustration with the current discourse around AI risk. While luminaries like Geoffrey Hinton and Yoshua Bengio have recently warned of potential existential threats from the future development of artificial general intelligence (AGI) and called for regulation or a moratorium on research, Cho believes these “doomer” narratives are distracting from the real issues, both positive and negative, posed by today’s AI.

In a recent interview with VentureBeat, Cho — who is highly regarded for his foundational work on neural machine translation, which helped lead to the development of the Transformer architecture that ChatGPT is based on — expressed disappointment about the lack of concrete proposals at the recent Senate hearings related to regulating AI’s current harms, as well as a lack of discussion on how to boost beneficial uses of AI.



Thursday, June 01, 2023

It seems to me that we have a serious need for anyone who can understand the code behind AI. Should this be done by independent auditors? Is there a business opportunity here?

https://www.c4isrnet.com/artificial-intelligence/2023/05/31/us-army-may-ask-defense-industry-to-disclose-ai-algorithms/

US Army may ask defense industry to disclose AI algorithms

U.S. Army officials are considering asking companies to give them an inside look at the artificial intelligence algorithms they use to better understand their provenance and potential cybersecurity weak spots.

The nascent AI “bill of materials” effort would be similar to existing software bill of materials practices, or SBOMs, the comprehensive lists of ingredients and dependencies that make up software, according to Young Bang, the principal deputy assistant secretary of the Army for acquisition, logistics and technology.
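
For illustration, one record in such an AI bill of materials might look like the sketch below, written as a Python dataclass by analogy with SBOM fields. The Army has not published a schema, so every field name and value here is an assumption.

# Hypothetical AI-BOM entry, modeled loosely on SBOM practice. All field
# names and example values are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    model_name: str
    version: str
    supplier: str
    base_models: list = field(default_factory=list)            # upstream models it was fine-tuned from
    training_data_sources: list = field(default_factory=list)  # provenance, per the article
    software_dependencies: list = field(default_factory=list)  # overlaps with a classic SBOM
    known_weaknesses: list = field(default_factory=list)       # cybersecurity notes

entry = AIBOMEntry(
    model_name="route-risk-classifier",
    version="2.1.0",
    supplier="Example Contractor LLC",
    base_models=["resnet50-imagenet"],
    training_data_sources=["synthetic-terrain-v3"],
    software_dependencies=["pytorch==2.0.1"],
)
print(entry)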





Lies lead to liability?

https://www.axios.com/2023/05/31/ftc-ring-employees-illegally-accessed-user-private-videos

Amazon to pay over $30 million for Ring and Alexa privacy violations

A lawsuit filed by the Department of Justice on behalf of the FTC said Wednesday Amazon violated the Children's Online Privacy Protection Act by retaining voice and geolocation information from young users for years despite parents' requests that they delete the data.

The lawsuit said the company sought to retain the data for "its own potential use," despite repeatedly assuring its users that they could delete voice recordings collected from its Alexa voice assistant and geolocation information collected by the Alexa app.

… Separately, the FTC's complaint against Ring says that the company, despite emphasizing security in promotional materials, had no safeguards in place to prevent employees and hundreds of contractors from having full access to videos from every customer.





“Not California?” How extraordinary!

https://www.pogowasright.org/montanas-new-consumer-data-privacy-law-follows-the-leaders-and-were-not-talking-about-california/

Montana’s New Consumer Data Privacy Law Follows the Leaders … and we’re not talking about California!

Michael B. Katz, Cynthia J. Larose, and Angie K. Isaza-Loaiza of Mintz write:

In Montana, Governor Greg Gianforte signed Montana’s Consumer Data Privacy Act (S.B. 384) (“MCDPA”) on May 19, 2023 – one of the strongest privacy bills signed in a red state. Montana now becomes the ninth state to enact a comprehensive consumer data privacy law.
Montana’s legislature chose to build its statute on models passed in states like Virginia two years ago and in Connecticut in 2022, with a few interesting distinctions noted in bold in this article.

Read more at Mintz.

Related:

Jason C. Gavejian & Joseph J. Lazzarotti of Jackson Lewis also have a post about the provisions of the new Montana law. You can read their overview at Workplace Privacy, Data Management & Security Report.





Resource.

https://www.bespacific.com/the-a-to-z-of-economics/

The A to Z of economics

The Economist [read here free]: “Economic terms, from “absolute advantage” to “zero-sum game”, explained to you in plain English.”





Yes, judge, my AI wrote this brief because it is smarter than I am.

https://www.bespacific.com/best-practices-for-disclosure-and-citation-when-using-artificial-intelligence-tools/

Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools

Shope, Mark, Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools (January 26, 2023). Available at SSRN: https://ssrn.com/abstract=4338115

“This article is intended to be a best practices guide for disclosing the use of artificial intelligence tools in legal writing. The article focuses on using artificial intelligence tools that aid in drafting textual material, specifically in law review articles and law school courses. The article’s approach to disclosure and citation is intended to be a starting point for authors, institutions, and academic communities to tailor based on their own established norms and philosophies. Throughout the entire article, the author has used ChatGPT to provide examples of how artificial intelligence tools can be used in writing and how the output of artificial intelligence tools can be expressed in text, including examples of how that use and text should be disclosed and cited. The article will also include policies for professors to use in their classrooms and journals to use in their submission guidelines.”



Wednesday, May 31, 2023

A mere pendulum swing or the start of a trend?

https://www.eff.org/deeplinks/2023/05/federal-judge-makes-history-holding-border-searches-cell-phones-require-warrant

Federal Judge Makes History in Holding That Border Searches of Cell Phones Require a Warrant

With United States v. Smith (S.D.N.Y. May 11, 2023), a district court judge in New York made history by being the first court to rule that a warrant is required for a cell phone search at the border, “absent exigent circumstances” (although other district courts have wanted to do so).

EFF is thrilled about this decision, given that we have been advocating for a warrant for border searches of electronic devices in the courts and Congress for nearly a decade. If the case is appealed to the Second Circuit, we urge the appellate court to affirm this landmark decision.





Worth repeating and repeating and repeating.

https://www.theverge.com/2023/5/30/23741996/openai-chatgpt-false-information-misinformation-responsibility

OpenAI isn’t doing enough to make ChatGPT’s limitations clear

Users deserve blame for not heeding warnings, but OpenAI should be doing more to make it clear that ChatGPT can’t reliably distinguish fact from fiction.





Are we making progress or simply repeating what is already out there?

https://fpf.org/blog/the-right-to-be-let-a-lone-star-state-texas-passes-comprehensive-privacy-bill/

THE RIGHT TO BE LET A LONE STAR STATE: TEXAS PASSES COMPREHENSIVE PRIVACY BILL

Over Memorial Day weekend Texas lawmakers passed the Texas Data Privacy and Security Act (TDPSA) with unanimous votes in both the State House and Senate. If enacted by Governor Abbott, Texas will become the tenth U.S. state (and fifth in 2023) to enact broad-based data privacy legislation governing the collection, use, and transfer of consumer data. TDPSA contains several drafting innovations that drove backers of the bill to call it the “strongest data privacy law in the country.” While this is likely to be a controversial statement (especially to regulators in states such as California, Colorado, and Connecticut), TDPSA’s novel provisions deserve close attention by stakeholders:





This should be interesting. Perhaps a new “Expert” specialty?

https://techcrunch.com/2023/05/30/no-chatgpt-in-my-court-judge-orders-all-ai-generated-content-must-be-declared-and-checked/

No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked

Few lawyers would be foolish enough to let an AI make their arguments, but one already did, and Judge Brantley Starr is taking steps to ensure that debacle isn’t repeated in his courtroom.

The Texas federal judge has added a requirement that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”





Tools & Techniques. Fire up your Feedly!

https://www.bespacific.com/congressional-research-service-syndication-feed/

Congressional Research Service Syndication Feed

Disruptive Library Technology Jester: “One of the hidden gems of the Library of Congress is the Congressional Research Service (CRS). With a staff of about 600 researchers, analysts, and writers, the CRS provides “policy and legal analysis to committees and Members of both the House and Senate, regardless of party affiliation.” It is kind of like a “think tank” for the members of Congress. And an extensive selection of their reports are available from the CRS homepage and—as government publications—are not subject to copyright; any CRS Report may be reproduced and distributed without permission. And they publish a lot of reports. …The problem is that no automated RSS/Atom feed of CRS reports exists. Use your favorite search engine to look for “Congressional Research Service RSS or Atom”; you’ll find a few attempts to gather selected reports or comprehensive archives that stopped functioning years ago. And that is a real shame because these reports are good, taxpayer-funded work that should be more widely known. So I created a syndication feed in Atom: https://feeds.dltj.org/crs.xml. You can subscribe to that in your feed reader to get updates. I’m also working on a Mastodon bot account that you can follow and automated saving of report PDFs in the Internet Archive Wayback Machine…”
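
If a feed reader is not handy, the Atom feed is also easy to consume programmatically. A minimal sketch in Python, assuming the third-party feedparser package (pip install feedparser); the exact fields available depend on what the feed publishes.

# Fetch and print recent CRS report titles from the Atom feed above.
import feedparser

feed = feedparser.parse("https://feeds.dltj.org/crs.xml")
for entry in feed.entries[:10]:
    # Atom entries typically carry a title, a link, and an updated timestamp
    print(entry.get("updated", "n/a"), "|", entry.title)
    print("   ", entry.link)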



Tuesday, May 30, 2023

Is this the direction we need to take?

https://www.bespacific.com/from-ethics-to-law-why-when-and-how-to-regulate-ai/

From Ethics to Law: Why, When, and How to Regulate AI

Chesterman, Simon, From Ethics to Law: Why, When, and How to Regulate AI (April 29, 2023). Forthcoming in The Handbook of the Ethics of AI edited by David J. Gunkel (Edward Elgar Publishing Ltd.), NUS Law Working Paper No. 2023/014, Available at SSRN: https://ssrn.com/abstract=4432941 or http://dx.doi.org/10.2139/ssrn.4432941

“The past decade has seen a proliferation of guides, frameworks, and principles put forward by states, industry, inter- and non-governmental organizations to address matters of AI ethics. These diverse efforts have led to a broad consensus on what norms might govern AI. Far less energy has gone into determining how these might be implemented — or if they are even necessary. This chapter focuses on the intersection of ethics and law, in particular discussing why regulation is necessary, when regulatory changes should be made, and how it might work in practice. Two specific areas for law reform address the weaponization and victimization of AI. Regulations aimed at general AI are particularly difficult in that they confront many ‘unknown unknowns’, but the threat of uncontrollable or uncontainable AI became more widely discussed with the spread of large language models such as ChatGPT in 2023. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims — comparable, perhaps, to animal cruelty laws.”





Tools & Techniques. For the truly paranoid?

https://www.bespacific.com/chrome-extension-helps-students-prove-ai-didnt-write-their-essays/

Chrome Extension Helps Students Prove AI Didn’t Write Their Essays

Slash Gear: “…Draftback is a Google Chrome browser extension available as a free download from the Chrome Web Store. When installed, Draftback adds a special button to the top of a Google Doc interface that retraces the entire revision history of the document. As the extension’s creator, writer, and programmer James Somers explains on the extension’s Chrome Web Store page, “It’s like going back in time to look over your own shoulder as you write.” When you click the Draftback button, a secondary window pops up showing a timeline of the document. When you press play, you can see every single entry and revision that went into it playing out like a movie. You can even fast-forward and rewind. The timeline features a precise timestamp showing when work was conducted on the document and for how long. Besides the timeline, Draftback also provides a data and stat summary, including a graph showing when and where the document was altered. If AI-produced text were copied and pasted into a document, the Draftback timeline would show it all appearing at once. Ergo, if the timeline does not show that, it definitively proves that a student wrote their entire essay themselves.”
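
The detection idea in that last sentence reduces to a simple burst heuristic, sketched below. The (timestamp, characters-inserted) event format and the threshold are assumptions for illustration; Draftback itself works from Google Docs revision data, not this simplified structure.

# Toy heuristic: flag any single revision that inserts a large block of text
# at once, the signature the excerpt attributes to pasted AI output.
from datetime import datetime

def flag_paste_bursts(events, threshold=500):
    """Return (timestamp, chars) pairs whose insertion size exceeds threshold."""
    return [(ts, n) for ts, n in events if n > threshold]

events = [
    (datetime(2023, 5, 30, 9, 1), 42),    # ordinary typing
    (datetime(2023, 5, 30, 9, 3), 17),
    (datetime(2023, 5, 30, 9, 4), 2150),  # a large block appearing all at once
]
print(flag_paste_bursts(events))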



Monday, May 29, 2023

Interesting, but I would rather see data on how it is being received than how it is presented.

https://www.cjr.org/tow_center/media-coverage-chatgpt.php

How the media is covering ChatGPT

News reporting of new technologies often follows the pattern of a hype cycle, said Felix M. Simon, a doctoral researcher at the Oxford Internet Institute and Tow Center fellow. First, “It starts with a new technology which leads to all kinds of expectations and promises.” ChatGPT’s initial press release promised a chatbot that “interacts in a conversational way.” Next, media coverage branches into two extremes: “We have people say it’s the nearing apocalypse for industry XYZ or democracy,” or, alternatively, “it promises all kinds of utopias which will be brought about by the technology,” Simon said. Finally, after a few months, comes a more nuanced period of coverage—away from catastrophe or utopia—that discusses real-world impacts. “That’s when the cycle starts to cool off again.”

Generative AI tools like ChatGPT, trained on immense amounts of data, are skilled at guessing the next word in a sentence sequence but don’t “think” in the ways humans do. “So it’s literally just walking down the line statistically, looking at the statistical distribution of words that have already been written in the text, and then adding one next word,” Diakopoulos said. More reporting should outline how these technologies actually work—and don’t.
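
The “statistically adding one next word” description can be illustrated with a toy bigram model that samples each next word from the distribution of words observed to follow the current one. This is a cartoon of next-word prediction, not how ChatGPT actually works; real models condition on long contexts with neural networks.

# Cartoon of next-word prediction: a bigram model over a tiny corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)            # word -> words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in follows:        # dead end: no observed successor
            break
        word = random.choice(follows[word])  # sample from the observed distribution
        out.append(word)
    return " ".join(out)

print(generate("the"))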





“Legitimate interest” seems a bit vague… What if my product is a facial recognition system for police?

https://www.pogowasright.org/britain-to-crack-down-on-unauthorised-ai-data-collection/

Britain to Crack Down on Unauthorised AI Data Collection

Michael Edgar reports:

In the aftermath of Rishi Sunak’s sit-down with leaders from OpenAI, DeepMind, and Google, Britain has announced new measures to crack down on the unauthorised gathering of personal data by Artificial Intelligence (AI) companies.
Companies utilising generative AI technology have been informed by the Information Commissioner’s Office (ICO) that they remain bound by data protection laws in the UK.
Consequently, they are required to obtain consent or provide evidence of a legitimate interest when collecting personal information.

Read more at Digit News.



Sunday, May 28, 2023

What do you expect from Forrest Gump and Associates?

https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

A lawyer used ChatGPT and now has to answer for its ‘bogus’ citations

Lawyers suing the Colombian airline Avianca submitted a brief full of previous cases that were just made up by ChatGPT, The New York Times reported today. After opposing counsel pointed out the nonexistent cases, US District Judge Kevin Castel confirmed, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and set up a hearing as he considers sanctions for the plaintiff’s lawyers.

Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying.





Can I be compelled to explain the trade secret functions of my AI?

https://ui.adsabs.harvard.edu/abs/2023arXiv230512167W/abstract

The Case Against Explainability

As artificial intelligence (AI) becomes more prevalent, there is a growing demand from regulators to accompany decisions made by such systems with explanations. However, a persistent gap exists between the need to execute a meaningful right to explanation and the ability of Machine Learning systems to deliver on such a legal requirement. The regulatory appeal towards "a right to explanation" of AI systems can be attributed to the significant role of explanations, part of the notion called reason-giving, in law. Therefore, in this work we examine reason-giving's purposes in law to analyze whether reasons provided by end-user Explainability can adequately fulfill them. We find that reason-giving's legal purposes include: (a) making a better and more just decision, (b) facilitating due process, (c) authenticating human agency, and (d) enhancing the decision makers' authority. Using this methodology, we demonstrate end-user Explainability's inadequacy to fulfill reason-giving's role in law, given that reason-giving's functions rely on its impact on a human decision maker. Thus, end-user Explainability fails, or is unsuitable, to fulfill the first, second and third legal functions. In contrast, we find that end-user Explainability excels in the fourth function, a quality which raises serious risks considering recent end-user Explainability research trends, Large Language Models' capabilities, and the ability to manipulate end-users by both humans and machines. Hence, we suggest that in some cases the right to explanation of AI systems could bring more harm than good to end users. Accordingly, this study carries some important policy ramifications, as it calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability and a right to explanation of AI systems.





Change the law so the AI is never guilty. (Sounds like an AI generated idea…)

https://scholarlycommons.law.case.edu/jolti/vol14/iss2/1/

Corporate Fiduciary Duty in the Age of Algorithms

The Age of Algorithms will soon have a seismic impact on fiduciary law and thus, on the fiduciary duty of directors and officers. On one hand, corporate fiduciaries will have access to Artificial Intelligence-based tools which may make their jobs more efficient, more accurate, and more effective. As a result, fulfilling fiduciary duties will be easier, and the use of these tools may significantly lower the exposure of corporate fiduciaries to claims of breaching fiduciary duties. However, artificial intelligence (AI) may be a double-edged sword because those attractive tools will create new standards corporate fiduciaries must meet to fulfill their fiduciary duties. At the same time, the risks and limitations of algorithm-based products will mean that fiduciaries who delegate their decision-making to AI tools will face new claims of fiduciary breaches. Corporate fiduciaries might resign from those roles rather than face these new AI-based legal hazards, and AI-tool developers could decide to withdraw from this market rather than face an avalanche of breach of fiduciary duty claims. To mitigate these risks, the jurisprudence of corporate fiduciary law must be modernized and clarified to establish new but understandable fiduciary obligations and protections for corporate fiduciaries.





Must a robot treat another robot ethically?

https://link.springer.com/chapter/10.1007/978-3-031-32439-0_44

A Code of Ethics for Social Cooperative Robots

The article addresses the interaction among Robots that are supported by Artificial Intelligence and that interact with each other for collaborative and social purposes, and it proposes the establishment of a code of ethics also for those contexts in which the intersubjective relationship is only between Machines and does not involve interaction with Humans. This proposal poses the problem of which ethics to apply and how to program robots to learn a behavior that makes possible a social interaction between Machines that is respectful of tasks, rights, and duties. The first dilemma regards whether it is legitimate to apply human ethics to a society of machines alone. This scenario would result in a society of Machines completely modeled on human society, which has advantageous but also problematic sides. For the second aspect, the investigation is a true propaedeutic to the programming and engineering design involved in constructing Machines predisposed to Ethical Acting. In this case, the results of the survey will be able to support and guide the technology in the design of the Machine, highlighting which elements to give absolute priority in order to create a device that contains, in nuce, the predisposition to Ethical Acting.





Is a “Turing positive” result sufficient to guarantee a right to life?

https://link.springer.com/article/10.1007/s43681-023-00296-3

Artificial intelligence’s right to life

The right to life is fundamental and primary and is a precondition for exercising other rights (Ramcharan in Ramcharan (ed), The right to life in International Law, Martinus Nijhoff Publishers, Dordrecht, 1985). Its universal recognition in the arena of international law is associated with the concept of a human being endowed with inherent and inalienable dignity. The categorization of the circle of entities covered by the right to life today seems obvious and indisputable. The intense development of artificial intelligence, and the fact that it has passed the Turing test, which checks an AI's thinking ability in a way similar to human reasoning, inspire reflection on AI's future legal status. This study will investigate the thesis of whether artificial intelligence may be entitled to the right to life. The analysis will be carried out around an exploratory question: what are the requirements for being afforded protection of the right to life?





Tools & Techniques. (You may need an AI to search for AI tools.)

https://www.makeuseof.com/online-directories-of-ai-tools-search-app/

6 Online Directories of AI Tools to Discover or Search for the Best AI App

These free directories list all the AI tools available online, so you can browse or search for them quickly and easily.