Saturday, February 24, 2024

Congratulations, you have just invented another way to irritate judges…

https://www.ft.com/content/fc30bc2b-d89b-4222-ad26-3a700b047c27

New York judge rebukes law firm for using ChatGPT to justify its fees

Cuddy Law Firm invoked predictive AI tool ‘to provide context’ for hourly rate of up to $600

A New York judge has scolded a law firm for citing ChatGPT to support its application for “excessive” attorneys’ fees of up to $600 an hour.

The Cuddy Law Firm had invoked the predictive artificial intelligence tool in a declaration to the court over a case it won against the city’s education department. It said it had done so “to provide context to what a parent — having ChatGPT-4 open and available to them — might take away in researching whether to hire an attorney and who to accept or reject”.

When asked what would be a “reasonable hourly rate” to expect for an associate attorney with up to three years experience in a hearing over disabilities education, the large language model said it could “range anywhere from $200 to $500 an hour”, an attorney at the firm wrote.

He also pointed out that ChatGPT concluded that “lawyers who specialise in a certain type of law (such as special education law, in this case) may command higher rates” and that an attorney with “25 years of experience” might command an hourly rate of up to $1,200 “or even more”.

Judge Paul Engelmayer, who ultimately cut the fees to be awarded to Cuddy’s lawyers by more than half, called the firm’s reliance on the AI program “utterly and unusually unpersuasive”, adding that “barring a paradigm shift in the reliability of this tool, the [firm] is well advised to excise references to ChatGPT from future fee applications”.





Just curious, but where are these balloons launched from? I would think a Chinese ship in international waters west of California is more likely than the Chinese mainland. How come they don’t get detected until they are over Utah?

https://www.bbc.com/news/world-us-canada-68388453

US jets intercept high-altitude balloon over Utah

US military aircraft have intercepted a high-altitude balloon flying over the western part of the country and determined it was non-threatening.

The aircraft was spotted on Friday over Colorado and Utah, drifting east.




Friday, February 23, 2024

Is it necessary to change the First Amendment or is there a simpler way?

https://www.wired.com/story/gab-ai-chatbot-racist-holocaust/

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust

The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”





Easy to use, easy to misuse…

https://www.bespacific.com/survey-finds-workers-are-putting-businesses-at-risk-by-oversharing-with-genai-tools/

Survey Finds Workers are Putting Businesses at Risk by Oversharing with GenAI Tools

InsideBigData: “Our friends over at Veritas just released a new survey revealing that workers are oversharing with generative AI tools, putting businesses at risk. Nearly a third (31%) of global office workers admitted to inputting potentially sensitive information into generative AI tools, such as customer details or employee financials. Other key findings include:

  • 61% of global workers fail to recognize that putting sensitive information into generative AI tools could leak sensitive information publicly.

  • 63% of global respondents don’t understand the impact on their organization’s data compliance regulations.

  • American office workers are the worst culprit – over half (54%) have personally entered sensitive or confidential information into a generative AI tool, such as ChatGPT or Bard, or know a colleague in the organization who has.

Download the “Survey: Generative AI in the Workplace” report HERE.





Teaching is as teaching does? Some hints on how this might work?

https://dailynous.com/2024/02/22/using-generative-ai-to-teach-philosophy-w-an-interactive-demo-you-can-try-guest-post/

Using Generative AI to Teach Philosophy (w/ an interactive demo you can try) (guest post)

Philosophy teachers—Michael Rota, a professor of philosophy at the University of St. Thomas (Minnesota), is about to make your teaching a bit better and your life a bit easier.

Professor Rota recently began learning about how to use artificial intelligence tools to teach philosophy. In the following guest post, he not only shares some suggestions, but also let’s you try out two demos of his GPT-4-based interactive course tutor.

The course tutor is part of a program he is helping develop, and which should be available for other professors to use and customize sometime this summer.





It boggles my mind. (I am so out of touch.)

https://www.usatoday.com/story/entertainment/music/2024/02/22/wait-for-taylor-swift-merch-in-australia-longer-than-actual-concert/72679876007/

Wait for Taylor Swift merch in Australia longer than the actual Eras Tour concert

… Swift is expected to sell $66 million (that's $43.3 in American currency) worth of merchandise, according to Amanda White, who is working toward her doctorate in accounting at the University of Technology Sydney.

… One blue hoodie is going for $120 Australian ($78.80 U.S.).



Thursday, February 22, 2024

They did not identify what went wrong, so what did they “fix?”

https://arstechnica.com/information-technology/2024/02/chatgpt-alarms-users-by-spitting-out-shakespearean-nonsense-and-rambling/

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI acknowledged the problem and fixed it by Wednesday afternoon, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.



Wednesday, February 21, 2024

The assumption being: An AI on its own won’t search. (Because there is nothing new?)

https://seekingalpha.com/news/4068944-ai-chatbots-will-cause-a-25-drop-in-search-engine-volume-by-2026-gartner

AI chatbots will cause a 25% drop in search engine volume by 2026: Gartner

Search engine volume will drop 25% by 2026, with search marketing losing market share to AI chatbots and other virtual agents, according to Gartner.

Chatbots like Microsoft backed OpenAI's ChatGPT are changing the way users access information on the internet. Observers fear GenAI solutions could render traditional search engines obsolete.



(Related)

https://www.fastcompany.com/91033052/does-anyone-even-want-an-ai-search-engine

Does anyone even want an AI search engine?

You’ve probably already noticed your search engines are starting to evolve. Google and Bing have already added both AI-generated results and conversational chatbots to their respective search engines. The Browser Company, a startup that made a big early splash thanks to its mission statement of building a better internet browser, has launched an AI summary search. And OpenAI is reportedly building its own search engine to compete directly with Google.

… Curiously, though, at no point amid our current AI arms race have the companies stuffing AI into our search engines and browsers offered any guidance as to what happens to the web if this truly is the future of the way we find things online. And it may be the best evidence yet that the AI industry is still completely engulfed in hype.

Or as Glitch CEO Anil Dash tells me, “Why is everyone in the industry lemmings.”



Tuesday, February 20, 2024

Imagine the confusion as my AI submits ideas…

https://www.bespacific.com/this-tiny-website-is-googles-first-line-of-defense-in-the-patent-wars/

This Tiny Website Is Google’s First Line of Defense in the Patent Wars

Wired: “TDCommons is a free space for inventors to lay claim to breakthroughs without having to file a patent. Why is it so off the radar? A trio of Google engineers recently came up with a futuristic way to help anyone who stumbles through presentations on video calls. They propose that when algorithms detect a speaker’s pulse racing or “umms” lengthening, a generative AI bot that mimics their voice could simply take over. That cutting-edge idea wasn’t revealed at a big company event or in an academic journal. Instead, it appeared in a 1,500-word post on a little-known, free website called TDCommons.org that Google has quietly owned and funded for nine years. Until WIRED received a link to an idea on TDCommons last year and got curious, Google had never spoken with the media about its website. Scrolling through TDCommons, you can read Google’s latest ideas for coordinating smart home gadgets for better sleep, preserving privacy in mobile search results, and using AI to summarize a person’s activities from their photo archives. And the submissions aren’t exclusive to Google; about 150 organizations, including HP, Cisco, and Visa, also have posted inventions to the website. The website is a home for ideas that seem potentially valuable but not worth spending tens of thousands of dollars seeking a patent for. By publishing the technical details and establishing “prior art,” Google and other companies can head off future disputes by blocking others from filing patents for similar concepts. Google gives employees a $1,000 bonus for each invention they post to TDCommons—a tenth of what it awards its patent seekers—but they also get an immediately shareable link to gloat about otherwise secretive work. TDCommons adds to Google’s long-standing, and far more vocal, efforts to carve out greater space for freewheeling innovation in an industry where patents can be used to hobble or extract cash from competitors. The site may be dowdy and obscure, but it does the trick. “The beauty of defensive publications is that this website can be pretty simple,” says Laura Sheridan, Google’s head of patent policy. “It needs to establish a date. And it needs to have documents be accessible. There’s not much more we need to do.” In reality, the experiment has struggled to cut through government bureaucracy and overcome competition from more robust archives. Sheridan acknowledges it’s a work in progress. TDCommons needs a bigger flow of uploads to become less peculiar and more vital. It offers a unique hope of expanding public access to the technical creativity happening inside corporate walls—and shifting more resources toward that work.”





Tools & Techniques.

https://www.nature.com/articles/s43588-024-00593-9

Automated discovery of algorithms from data

To automate the discovery of new scientific and engineering principles, artificial intelligence must distill explicit rules from experimental data. This has proven difficult because existing methods typically search through the enormous space of possible functions. Here we introduce deep distilling, a machine learning method that does not perform searches but instead learns from data using symbolic essence neural networks and then losslessly condenses the network parameters into a concise algorithm written in computer code. This distilled code, which can contain loops and nested logic, is equivalent to the neural network but is human-comprehensible and orders-of-magnitude more compact. On arithmetic, vision and optimization tasks, the distilled code is capable of out-of-distribution systematic generalization to solve cases orders-of-magnitude larger and more complex than the training data. The distilled algorithms can sometimes outperform human-designed algorithms, demonstrating that deep distilling is able to discover generalizable principles complementary to human expertise.



Monday, February 19, 2024

Perspective.

https://www.straitstimes.com/singapore/in-the-future-humans-will-become-homo-digitalis-live-in-physical-virtual-and-artificial-worlds

In the future, humans will become ‘Homo digitalis’, live in physical, virtual and artificial worlds

In this first of a four-part series on the artificial intelligence revolution, Sandra Davie talks to Professor Toby Walsh about the impact that AI will have on work, war and our daily life.





May be useful?

https://gazette.com/news/local/shaping-tomorrow-a-colorado-conversation-on-artificial-intelligence/article_ef87b044-cd2b-11ee-958b-b38db754b987.html

Shaping Tomorrow: A Colorado Conversation on Artificial Intelligence

… AI’s ability to speed research, write articles, gather and process vast amounts of information quickly and even answer people’s questions through chatbots is also raising important questions for society, the economy, and governance.

To sift through some of the most pressing questions, The Colorado Springs Gazette, Pikes Peak State College and KOAA News 5 are teaming up for a Town Hall on March 1 on the quickly evolving role of AI in shaping our world. A panel of leading Artificial Intelligence experts will explore the future of AI in business, software development, cybersecurity, education and national defense.

Shaping Tomorrow: A Colorado Conversation on Artificial Intelligence” will be held from 10 a.m. to noon at Pikes Peak State College’s Campus Theater, 5675 S. Academy Blvd. in Colorado Springs.

Members of the general public are invited, and you can register and leave a question for our panelists at gazette.com/AI. Members of the live audience also will have the opportunity to ask questions of the panel. The Town Hall also will be livestreamed on KOAA.com and gazette.com for those who can’t participate in person.



Sunday, February 18, 2024

Is there value in arguing both sides?

https://www.researchgate.net/profile/Robert-Mcgee-5/publication/378069290_Was_Russia's_Annexation_of_Crimea_Legitimate_A_Study_in_Artificial_Intelligence/links/65c5020c1bed776ae337a276/Was-Russias-Annexation-of-Crimea-Legitimate-A-Study-in-Artificial-Intelligence.pdf

Was Russia’s Annexation of Crimea Legitimate? A Study in Artificial Intelligence

This study used Copilot and Gab AI, two tools of artificial intelligence (AI), to examine the question of whether Russia’s annexation of Crimea was legitimate. Both chatbots were asked to write a brief essay summarizing the history of Crimea, with emphasis on its annexation by Russia. They were then asked to write a two-part essay providing arguments for both sides of the legitimacy issue. This methodology can be used for any number of research projects in economics, law, history, sociology, philosophy, political science and ethics, to name a few. Professors can utilize this methodology to stimulate class discussion. Graduate students can use it to generate initial outlines for their theses and dissertations. It can be used as a starting point for further discussion.





Who wins?

https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=2616&context=faculty-articles

Risky Speech Systems: Tort Liability for AI-Generated Illegal Speech

How should we think about liability when AI systems generate illegal speech? The Journal of Free Speech Law, a peer-edited journal, ran a topical 2023 symposium on Artificial Intelligence and Speech that is a must-read. This JOT addresses two symposium pieces that take particularly interesting and interlocking approaches to the question of liability for AI-generated content: Jane Bambauer’s Negligent AI Speech: Some Thoughts about Duty, and Nina Brown’s Bots Behaving Badly: A Products Liability Approach to Chatbot-Generated Defamation. These articles evidence how the law constructs technology: the diverse tools in the legal sensemaking toolkit that are important to pull out every time somebody shouts “disruption!”

Each author offers a cogent discussion of possible legal frameworks for liability, moving beyond debates about First Amendment coverage of AI speech to imagine how substantive tort law will work. While these are not strictly speaking First Amendment pieces, exploring the application of liability rules for AI is important, even crucial, for understanding how courts might shape First Amendment law. First Amendment doctrine often hinges on the laws to which it is applied. By focusing on substantive tort law, Bambauer and Brown take the as-yet largely abstract First Amendment conversation to a much welcomed pragmatic yet creative place.

What makes these two articles stand out is that they each address AI-generated speech that is illegal—that is, speech that is or should be unprotected by the First Amendment, even if First Amendment coverage extends to AI-generated content. Bambauer talks about speech that physically hurts people, a category around which courts have been conducting free-speech line-drawing for decades; Brown talks about defamation, which is a historically unprotected category of speech. While a number of scholars have discussed whether the First Amendment covers AI-generated speech, until this symposium there was little discussion of how the doctrine might adapt to handle liability for content that’s clearly unprotected.





Judge AI is coming.

https://yjolt.org/sites/default/files/avery_abril_delriego_26yalejltech64.pdf

ChatGPT, Esq.: Recasting Unauthorized Practice of Law in the Era of Generative AI

In March of 2023, OpenAI released GPT-4, an autoregressive language model that uses deep learning to produce text. GPT-4 has unprecedented ability to practice law: drafting briefs and memos, plotting litigation strategy, and providing general legal advice. However, scholars and practitioners have yet to unpack the implications of large language models, such as GPT-4, for long-standing bar association rules on the unauthorized practice of law (“UPL”). The intersection of large language models with UPL raises manifold issues, including those pertaining to important and developing jurisprudence on free speech, antitrust, occupational licensing, and the inherent-powers doctrine. How the intersection is navigated, moreover, is of vital importance in the durative struggle for access to justice, and low-income individuals will be disproportionately impacted.

In this Article, we offer a recommendation that is both attuned to technological advances and avoids the extremes that have characterized the past decades of the UPL debate. Rather than abandon UPL rules, and rather than leave them undisturbed, we propose that they be recast as primarily regulation of entity-type claims. Through this recasting, bar associations can retain their role as the ultimate determiners of “lawyer” and “attorney” classifications while allowing nonlawyers, including the AI-powered entities that have emerged in recent years, to provide legal services—save for a narrow and clearly defined subset. Although this recommendation is novel, it is easy to implement, comes with few downsides, and would further the twin UPL aims of competency and ethicality better than traditional UPL enforcement. Legal technology companies would be freed from operating in a legal gray area; states would no longer have to create elaborate UPLavoiding mechanisms, such as Utah’s “legal sandbox”; consumers—both individuals and companies—would benefit from better and cheaper legal services; and the dismantling of access-to-justice barriers would finally be possible. Moreover, the clouds of free speech and antitrust challenges that are massing above current UPL rules would dissipate, and bar associations would be able to focus on fulfilling their already established UPL-related aims.





Oops! What should we try next?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4721955

Police Technology Experiments

Police departments often adopt new surveillance technologies that make mistakes, produce unintended effects, or harbor unforeseen problems. Sometimes the police try a new surveillance technology and later abandon it -either from a lack of success, community resistance, or both. Critics have identified many problems with these tools: racial bias, privacy violations, opacity, secrecy, and undue corporate influence, to name a few. A different framework is needed. This essay considers the growing use of these algorithmic surveillance technologies and argues that they function as technology experiments on human subjects. Such technology experiments result in police reliance on automated systems to engage in investigative stops and consensual encounters, or to increase police presence and surveillance in a community. Not only do these tools act as experiments, in practice they often function as poorly designed and executed experiments on human subjects. Moreover, ethical considerations that are common in the conventional human subjects research context are entirely absent, even though the new technologies involve uncontrolled experiments on people. And because these algorithmic surveillance technologies are often adopted in low-income, communities of color, they function as poorly designed experiments that raise particularly sensitive concerns about ethics and experimentation borne out by historical experience. By understanding the adoption of new algorithmic surveillance tools as experiments on human subjects, we can develop prospective controls and methods of evaluation for the use of these tools by police, ones that balance innovation with ethical responsibility as artificial intelligence becomes a normal part of police investigations.