Saturday, December 30, 2023

A mess AI created. Is it possible AI could solve it? Has anyone asked ChatGPT? (Some really good bad examples…)

https://garymarcus.substack.com/p/things-are-about-to-get-a-lot-worse

Things are about to get a lot worse for Generative AI

A full spectrum of infringement

At around the same time as news of the New York Times lawsuit against OpenAI broke, Reid Southen, the film industry concept artist (Marvel, DC, Matrix Resurrections, Hunger Games, etc.) whom I wrote about last week, and I started doing some experiments together.

We will publish a full report next week, but it is already clear that what we are finding poses serious challenges for generative AI.

The crux of the Times lawsuit is that OpenAI’s chatbots are fully capable of reproducing text nearly verbatim.

The thing is, it is not just text. OpenAI’s image software (which we accessed through Bing) is perfectly capable of verbatim and near-verbatim repetition of sources as well.





No child left unsurveilled?

https://www.politico.com/news/2023/12/29/artificial-intelligence-privacy-schools-00132790

Artificial intelligence stirs privacy challenges for schools

State and local leaders are navigating protections for students despite a lag in federal support.

Dozens of Arizona school districts have been vetting technology vendors to weed out products that might use student data for advertising. Schools in West Virginia and Montana have started to boost their security with facial recognition systems, even though the technology has a high rate of false matches among women and children and is already a concern in New York.

Oregon provides a checklist and other materials for schools looking to develop generative AI policies while California is directing schools on how they can integrate AI in the classroom in a way that prioritizes student safety. Mississippi expects to release school AI guidance in January, and Arizona is forming a committee in early 2024 to recommend policy procedures for implementing and monitoring the technology in schools.

After a legal challenge and subsequent moratorium, New York banned the use of facial recognition in schools in September, when the state found the use of the technology for security purposes “may implicate civil rights laws,” noting that it could lead to a “potentially higher rate of false positives for people of color, non-binary and transgender people, women, the elderly and children.” Montana, by contrast, barred the continuous use of facial recognition technology by state and local governments but carved schools out of the ban.



Friday, December 29, 2023

Perspective.

https://www.techdirt.com/2023/12/28/the-ny-times-lawsuit-against-openai-would-open-up-the-ny-times-to-all-sorts-of-lawsuits-should-it-win/

The NY Times Lawsuit Against OpenAI Would Open Up The NY Times To All Sorts Of Lawsuits Should It Win

This week the NY Times somehow broke the story of… well, the NY Times suing OpenAI and Microsoft. I wonder who tipped them off. Anyhoo, the lawsuit in many ways is similar to some of the over a dozen lawsuits filed by copyright holders against AI companies. We’ve written about how silly many of these lawsuits are, in that they appear to be written by people who don’t much understand copyright law. And, as we noted, even if courts actually decide in favor of the copyright holders, it’s not like it will turn into any major windfall. All it will do is create another corruptible collection point, while locking in only a few large AI companies who can afford to pay up.

I’ve seen some people arguing that the NY Times lawsuit is somehow “stronger” and more effective than the others, but I honestly don’t see that. Indeed, the NY Times itself seems to think its case is so similar to the ridiculously bad Authors Guild case that it’s looking to combine the cases.

But while there are some unique aspects to the NY Times case, I’m not sure they are nearly as compelling as the NY Times and its supporters think they are. Indeed, I think if the Times actually wins its case, it would open the Times up to some fairly damning lawsuits itself, given its somewhat infamous journalistic practice of summarizing other people’s articles without credit. But, we’ll get there.





Keep learning.

https://www.kdnuggets.com/25-free-books-to-master-sql-python-data-science-machine-learning-and-natural-language-processing

25 Free Books to Master SQL, Python, Data Science, Machine Learning, and Natural Language Processing



Thursday, December 28, 2023

An idea not just for lawyers…

https://www.bespacific.com/openjustice-ai-a-global-open-source-legal-language-model

OpenJustice.ai: A Global Open-source Legal Language Model

Dahan, Samuel and Bhambhoria, Rohan and Liang, David and Zhu, Xiaodan, OpenJustice.ai: A Global Open-source Legal Language Model (October 2023). Available at SSRN: https://ssrn.com/abstract=4624814 or http://dx.doi.org/10.2139/ssrn.4624814

Generalized AI like ChatGPT cannot and should not be used for legal tasks. It presents significant risks for both the legal profession and litigants. However, domain-specific AI should not be ruled out; it has potential for legal research as well as for access to justice. In this paper, we call for the development of an open-source and distributed legal AI accessible to the entire legal community. We believe it has the potential to address some of the limitations related to the use of general AI for legal problems and resolving disputes – shortcomings that include legal misinformation or hallucinations, lack of transparency and precision, and inability to offer diverse and multiple narratives.





Perspective.

https://searchengineland.com/seo-2023-recap-436035

SEO year in review 2023: The year of generative AI

It was one of the biggest years of change in search and SEO history. A recap of Google SGE, ranking revelations, Bing Chat and more.





Keep learning.

https://www.kdnuggets.com/25-free-courses-to-master-data-science-data-engineering-machine-learning-mlops-and-generative-ai

25 Free Courses to Master Data Science, Data Engineering, Machine Learning, MLOps, and Generative AI

In today's rapidly developing technological landscape, it is crucial to master skills in data science, machine learning, and AI. Whether you're seeking to embark on a new career or enhance your existing expertise, there is a plethora of online resources available, and many of them are free! We have gathered the top posts on free courses (that you love) from KDnuggets and compiled them into a single collection of excellent courses. Bookmark this page for future reference, as you will likely return to it to learn new skills or try out new courses.





Tools & Techniques.

https://www.fastcompany.com/91000628/these-were-some-of-the-most-useful-tools-of-2023

These were some of the most useful tools of 2023





Tools & Techniques.

https://www.forbes.com/sites/lanceeliot/2023/12/28/must-read-best-of-practical-prompt-engineering-strategies-to-become-a-skillful-prompting-wizard-in-generative-ai/?sh=38e9534019cd

Must-Read Best Of Practical Prompt Engineering Strategies To Become A Skillful Prompting Wizard In Generative AI

In today’s column, I have put together my most-read postings on how to skillfully craft your prompts when making use of generative AI such as ChatGPT, Bard, Gemini, Claude, GPT-4, and other popular large language models (LLMs). These are handy strategies and specific techniques that can make a tremendous difference when using generative AI. If you have ever wondered what other people know about prompting that you don’t, perhaps this recap will ensure that you are in the know.




Tuesday, December 26, 2023

Many players, many targets. Expect more of both next year.

https://www.bespacific.com/odni-intel-community-assessment-of-foreign-threats-to-2022-us-elections/

ODNI Releases Intelligence Community Assessment of Foreign Threats to the 2022 U.S. Elections

The Office of the Director of National Intelligence (ODNI) today released the declassified Intelligence Community Assessment of Foreign Threats to the 2022 U.S. Elections [redacted]. Coordinated across the Intelligence Community (IC), the assessment addresses the intentions and efforts of foreign actors to influence or interfere with the 2022 U.S. elections. Within 45 days of the 2022 U.S. elections, ODNI completed and distributed the classified version of this report pursuant to Executive Order 13848. “We share our assessment and the accompanying material to help inform the American public about foreign influence efforts, including attempts by foreign actors to induce friction and undermine confidence in the electoral process that underpins our democracy,” said Director of National Intelligence Avril Haines. “As global barriers to entry lower and accessibility rises, such influence efforts remain a continuing challenge for our country, and an informed understanding of the problem can serve as one defense.” In addition to the declassified Intelligence Community Assessment, the accompanying National Intelligence Council Memorandum, Other Countries’ Activities During the 2022 Election Cycle, provides added insights…





Something for politicians to consider?

https://www.bespacific.com/most-readers-want-publishers-to-label-ai-generated-articles-but-trust-outlets-less-when-they-do/

Readers want publishers to label AI-generated articles but trust outlets less when they do

Nieman Lab: “An overwhelming majority of readers would like news publishers to tell them when AI has shaped the news coverage they’re seeing. But, new research finds, news outlets pay a price when they disclose using generative AI. That’s the conundrum at the heart of new research from the University of Minnesota’s Benjamin Toff and the Oxford Internet Institute’s Felix M. Simon. Their working paper “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news” is one of the first experiments to examine audience perceptions of AI-generated news.

More than three-quarters of U.S. adults think news articles written by AI would be “a bad thing.” But, from Sports Illustrated to Gannett, it’s clear that particular ship has sailed. Asking Google for information and getting AI-generated content back isn’t the future; it’s our present-day reality.

Much of the existing research on perceptions of AI in newsmaking has focused on algorithmic news recommendation, i.e. questions like how readers feel about robots choosing their headlines. Some have suggested news consumers may perceive AI-generated news as more fair and neutral owing to the “machine heuristic,” in which people credit technology as operating without pesky things like human emotions or ulterior motives.

For this experiment, conducted in September 2023, participants read news articles of varying political content — ranging from a piece on the release of the “Barbie” film to coverage of an investigation into Hunter Biden. For some stories, the work was clearly labeled as AI-generated. Some of the AI-labeled articles were accompanied by a list of news reports used as sources…”





Perspective. I think it loses something in translation but there are some interesting points.

https://english.elpais.com/technology/2023-12-25/gemma-galdon-algorithm-auditor-artificial-intelligence-is-of-very-poor-quality.html

Gemma Galdón, algorithm auditor: ‘Artificial intelligence is of very poor quality’

The founder of Eticas Consulting advises international organizations to help them identify and avoid bias. She distrusts the expectations of the sector: ‘To propose that a data system is going to make a leap into consciousness is a hallucination’



Monday, December 25, 2023

We’re the good guys so let us look over your shoulder…

https://richmond.com/zzstyling/column/microsoft-365-copilot-is-here-what-are-the-legal-risks-of-using-it/article_9004342e-9f9c-11ee-9b82-df3d9f4cc1df.html

Microsoft 365 Copilot is here. What are the legal risks of using it?

Copilot adds generative AI capability to core Microsoft Office applications, such as Word, Outlook, Excel, Teams, and PowerPoint. It can be used to create, summarize, and analyze content in those applications.

The biggest concern is confidentiality. With many generally available generative AIs, such as ChatGPT, anything you put in a prompt is used in the AI’s training. That creates a risk that your input could appear in someone else’s output. Also, the AI provider can see your input and output.

Microsoft promises that, with Copilot, your inputs and outputs are kept confidential. It says it will not use your input or output to train its AI for its other customers, and your input will not show up in the output of other Copilot users (at least outside of your company).

But there is a major catch: Microsoft says it captures and may access your Copilot prompts and outputs for 30 days. It operates an abuse monitoring system to review that material for violations of its code of conduct and “other applicable product terms.” Microsoft says its customers with special needs regarding inputs containing sensitive, confidential, or legally regulated input data can apply to Microsoft for an exemption from this abuse monitoring.



Sunday, December 24, 2023

Looking for anyone who has a possible solution...

https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?article=1042&context=ipilr

Artificial Intelligence Owning Patents: A Worldwide Court Debate

In the international sphere, a showdown is unfolding in the highest courts of many countries, including the United States, Canada, Australia, China, Japan, India, and several European countries. The surrounding issue is whether artificial intelligence (AI) can be recognized as the sole inventor of a patent.

… As the events surrounding The Artificial Inventor Project and its legal adventures unfold around the world, this Comment explores what it means to be an inventor and different countries’ legal reasoning for their decisions to recognize, or not recognize, AI as a patent inventor. Specifically, this Comment will analyze the United States’ Patent Laws to better understand why The Artificial Inventor Project is not recognized in the United States. Following this analysis, the focus will turn to analyzing the legal interpretations of the United Kingdom, Germany, South Africa, and Australia regarding AI as a patent inventor. The final section of this Comment proposes a better approach, based on the analyzed countries’ approaches, for the United States to take regarding recognizing DABUS as a patent inventor.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666432

Granting Legal Personality To Artificial Intelligences In Brazil’s Legal Context: A Possible Solution To The Copyright Limbo

This article investigates the feasibility and consequences of granting legal personality to Artificial Intelligences (AIs) in the context of Brazilian law, with a special focus on copyright law. It conducts a thorough analysis of how such a grant can enhance legal security and encourage innovation in AI technologies. Through an integrative review of the literature and a comparative analysis of national and international legislation and jurisprudence, the study explores the implications of this legislative innovation. The article highlights the importance of legal clarity for companies and investors in the AI sector, emphasizing that granting legal personality to AIs can simplify the identification of the copyright holder and protect investments. However, the work also recognizes challenges, such as the complexity of assigning authorship and evaluating the originality of works created by AIs. A careful debate is proposed on criteria for determining which AIs should be considered legal persons and how to balance the rights and duties of AIs and their creators. The study suggests adapting the legal structure of the LTDA to incorporate AIs as operational entities, aiming for an effective legal framework for managing risks associated with AI. It concludes that granting legal personality to AIs in Brazil is a promising strategy, requiring careful consideration and forward-looking vision, emphasizing the need for Brazilian law to prepare for the opportunities and challenges of the AI era.





Perhaps AI copyright is not possible… (Makes LLM output sound like politicians.)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4667410

Asemic Defamation, or, the Death of the AI Speaker

Large Language Model (“LLM”) systems have captured considerable popular, scholarly, and governmental notice. By analyzing vast troves of text, these machine learning systems construct a statistical model of relationships among words, and from that model they are able to generate syntactically sophisticated texts. However, LLMs are prone to “hallucinate,” which is to say that they routinely generate statements that are demonstrably false. Although couched in the language of credible factual statements, such LLM output may entirely diverge from known facts. When they concern particular individuals, such texts may be reputationally damaging if the contrived false statements they contain are derogatory.

Scholars have begun to analyze the prospects and implications of such AI defamation. However, most analyses to date begin from the premise that LLM texts constitute speech that is protected under constitutional guarantees of expressive freedom. This assumption is highly problematic, as LLM texts have no semantic content. LLMs are not designed, have no capability, and do not attempt to fit the truth values of their output to the real world. LLM texts appear to constitute an almost perfect example of what semiotics labels “asemic signification,” that is, symbols that have no meaning except for meaning imputed to them by a reader.

In this paper, I question whether asemic texts are properly the subject of First Amendment coverage. I consider both LLM texts and historical examples to examine the expressive status of asemic texts, recognizing that LLM texts may be the first instance of fully asemic texts. I suggest that attribution of meaning by listeners alone cannot credibly place such works within categories of protected speech. In the case of LLM outputs, there is neither a speaker, nor communication of any message, nor any meaning that is not supplied by the text recipient. I conclude that LLM texts cannot be considered protected speech, which vastly simplifies their status under defamation law.