Saturday, May 25, 2024

Undo reliance to avoid undue reliance?

https://futurism.com/the-byte/study-chatgpt-answers-wrong

STUDY FINDS THAT 52 PERCENT OF CHATGPT ANSWERS TO PROGRAMMING QUESTIONS ARE WRONG

… A team of researchers from Purdue University presented research this month at the Computer-Human Interaction (CHI) conference showing that 52 percent of programming answers generated by ChatGPT are incorrect.

… For the study, the researchers looked at 517 questions from Stack Overflow and analyzed ChatGPT's attempts to answer them.

"We found that 52 percent of ChatGPT answers contain misinformation, 77 percent of the answers are more verbose than human answers, and 78 percent of the answers suffer from different degrees of inconsistency to human answers," they wrote.



Friday, May 24, 2024

It can’t be just the legal field.

https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-queries

AI on Trial: Legal Models Hallucinate in 1 out of 6 Queries

Artificial intelligence (AI) tools are rapidly transforming the practice of law. Nearly three quarters of lawyers plan on using generative AI for their work, from sifting through mountains of case law to drafting contracts to reviewing documents to writing legal memoranda. But are these tools reliable enough for real-world use?

Large language models have a documented tendency to “hallucinate,” or make up false information. In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.

Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim “avoid” hallucinations and guarantee “hallucination-free” legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined “hallucination,” making it difficult to assess their real-world reliability.

In a new preprint study by Stanford RegLab and HAI researchers, we put the claims of two providers, LexisNexis and Thomson Reuters (the parent company of Westlaw), to the test. We show that their tools do reduce errors compared to general-purpose AI models like GPT-4. That is a substantial improvement and we document instances where these tools can spot mistaken premises. But even these bespoke legal AI tools still hallucinate an alarming amount of the time: these systems produced incorrect information more than 17% of the time—one in every six queries.
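For readers unfamiliar with the technique, RAG simply means the model is handed documents retrieved from a database and told to answer from them. Here is a minimal, hypothetical sketch of that pattern; the toy document store, keyword scoring, and generate() stub are illustrative only, not any vendor's actual pipeline.

```python
# Minimal RAG sketch: retrieve relevant documents, then prompt a model with them.
# Everything here (documents, scoring, generate stub) is illustrative only.
import math
from collections import Counter

DOCUMENTS = {
    "case_001": "A New York lawyer was sanctioned for citing fictional cases invented by a chatbot.",
    "case_002": "Retrieval-augmented systems ground answers in a database of legal documents.",
    "case_003": "The GDPR restricts certain automated decisions that significantly affect individuals.",
}

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Bag-of-words cosine similarity between two token-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = tokenize(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, tokenize(DOCUMENTS[d])), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Stand-in for the LLM call; a real system would send `prompt` to a model.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    doc_ids = retrieve(query)
    context = "\n".join(f"[{d}] {DOCUMENTS[d]}" for d in doc_ids)
    prompt = f"Answer using only the sources below and cite them.\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("Can a lawyer be sanctioned for citing cases a chatbot made up?"))
```

The study's point is that retrieval narrows the problem but does not remove it: the generation step can still misstate or overreach what the retrieved sources say.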





No surprise. It’s a mess.

https://pogowasright.org/resource-biometric-privacy-as-a-case-study-for-us-privacy-overall/

Resource: Biometric Privacy as a Case Study for US Privacy Overall

WilmerHale lawyers Kirk Nahra, Ali Jessani, Amy Olivero and Samuel Kane authored an article in the April 2024 issue of the CPI TechREG Chronicle that outlines how the rules governing biometric data reflect US privacy at large.
Excerpt: “Privacy law in the United States is best described as a patchwork of rules and regulations at both the state and federal level. This development is perhaps no better exemplified than by how the US regulates biometric information. From competing definitions to (sometimes) contradictory compliance obligations, the rules surrounding the processing of biometric information are myriad and complex, creating meaningful challenges for companies that wish to take advantage of the benefits associated with processing it (which include increased security and more convenience for consumers). This article outlines how the rules governing biometric data reflect US privacy at large and how this approach negatively impacts both consumers and businesses.”

Read the full article.





I believe that an explanation is always possible. Getting there is complex, but possible.

https://www.bespacific.com/heres-whats-really-going-on-inside-an-llms-neural-network/

Here’s what’s really going on inside an LLM’s neural network

Ars Technica: “With most computer programs—even complex ones—you can meticulously trace through the code and memory usage to figure out why that program generates any specific behavior or output. That’s generally not true in the field of generative AI, where the non-interpretable neural networks underlying these models make it hard for even experts to figure out precisely why they often confabulate information, for instance. Now, new research from Anthropic offers a new window into what’s going on inside the Claude LLM’s “black box.” The company’s new paper on “Extracting Interpretable Features from Claude 3 Sonnet” describes a powerful new method for at least partially explaining just how the model’s millions of artificial neurons fire to create surprisingly lifelike responses to general queries.”
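The technique reported in coverage of the paper is a form of dictionary learning with sparse autoencoders: a layer's activations are decomposed into a much larger set of sparsely active "features" that are easier to inspect than raw neurons. The toy sketch below illustrates that general idea only; it makes no claim about Anthropic's actual implementation, data, or scale.

```python
# Toy sketch of sparse-autoencoder "feature" extraction from model activations.
# Illustrative only: random data stands in for real residual-stream activations.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features, n_samples = 64, 256, 4096
X = rng.normal(size=(n_samples, d_model))      # stand-in for layer activations

W_enc = rng.normal(scale=0.1, size=(d_model, n_features))
W_dec = rng.normal(scale=0.1, size=(n_features, d_model))
b_enc = np.zeros(n_features)
lr, l1 = 0.05, 3e-4                             # learning rate, sparsity penalty

for step in range(200):
    f = np.maximum(X @ W_enc + b_enc, 0.0)      # sparse feature activations (ReLU)
    X_hat = f @ W_dec                           # reconstruction of the activations
    err = X_hat - X
    loss = (err ** 2).mean() + l1 * np.abs(f).mean()

    # Gradients of the reconstruction + L1 sparsity objective.
    g_Xhat = 2 * err / err.size
    g_Wdec = f.T @ g_Xhat
    g_f = g_Xhat @ W_dec.T + l1 * np.sign(f) / f.size
    g_f *= (f > 0)                              # ReLU gradient
    g_Wenc = X.T @ g_f
    g_benc = g_f.sum(axis=0)

    W_enc -= lr * g_Wenc
    W_dec -= lr * g_Wdec
    b_enc -= lr * g_benc

# Each row of W_dec is one learned "feature" direction in activation space.
print(f"final loss {loss:.4f}; mean active features per sample "
      f"{(f > 0).mean() * n_features:.1f}")
```

Each decoder row is a candidate feature direction; at full scale, researchers then look for features that fire consistently on recognizable concepts and use them to explain model behavior.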



Thursday, May 23, 2024

Could we allow deals like these for exclusive access by one LLM? Would that degrade those banned from access?

https://www.bespacific.com/openais-news-corp-deal-licenses-content-from-wsj-new-york-post/

OpenAI’s News Corp deal licenses content from WSJ, New York Post

The Verge: “OpenAI has struck a deal with News Corp, the media company that owns The Wall Street Journal, the New York Post, The Daily Telegraph, and others. As reported by The Wall Street Journal, OpenAI’s deal with News Corp could be worth over $250 million in the next five years “in the form of cash and credits for use of OpenAI technology.” The multi-year agreement gives OpenAI access to current and archived articles from News Corp publications for AI training and to answer user questions. This is the latest in a string of licensing deals OpenAI has inked with major media companies and outlets, including The Associated Press, the Financial Times, People publisher Dotdash Meredith, and Politico owner Axel Springer. Some outlets have filed lawsuits against OpenAI instead, like The New York Times, New York Daily News, Chicago Tribune, and The Intercept. They’ve accused both OpenAI and Microsoft of copyright infringement by training AI models on their work. The partnership also includes outlets like Barron’s, MarketWatch, Investor’s Business Daily, FN, The Sunday Times, The Sun, and The Australian, among others, and News Corp will “share journalistic expertise” with OpenAI to “ensure the highest journalism standards.”





What do you do when you don’t have an anti-deepfake law?

https://www.nbcnews.com/politics/politics-news/steve-kramer-admitted-deepfaking-bidens-voice-new-hampshire-primary-rcna153626

Political consultant who admitted deepfaking Biden's voice is indicted

Steve Kramer, the political consultant who admitted to NBC News that he was behind a robocall impersonating Joe Biden's voice, has been indicted in New Hampshire.

Kramer faces five counts, including bribery, intimidation and suppression, according to WMUR-TV of Manchester, which first reported the indictment. It is unclear how he is pleading to the charges.



Wednesday, May 22, 2024

Isn’t this an example of ‘new technology’ rubbing against ‘old law?’

https://pogowasright.org/court-rejects-claims-that-websites-live-chat-feature-violates-californias-prohibitions-on-wiretapping-and-eavesdropping/

Court Rejects Claims that Website’s Live Chat Feature Violates California’s Prohibitions on Wiretapping and Eavesdropping

Amy Gordon, Leslie Shanklin, and Jeff Warshafsky of Proskauer write:

In recent years, the “live chat” feature often used on consumer-facing websites has become the subject of lawsuits brought under the California Invasion of Privacy Act (“CIPA”). In particular, there has been a surge of putative class actions challenging the use of this feature under Sections 631(a) and 632.7 of CIPA, which prohibit wiretapping and eavesdropping on certain communications.
This month, Judge Annette Cody of the Central District of California dismissed one such lawsuit at the motion to dismiss phase, holding that the plaintiff had failed to allege any unlawful conduct under CIPA. Cody v. Boscov’s, Inc., No. 22-cv-01434 (C.D. Cal. May 6, 2024).

Read more at Proskauer on Privacy.





Interesting, but I wonder if Donald Trump could do the same. He would have to articulate a coherent philosophy.

https://www.ft.com/content/43378c6e-664b-4885-a255-31325d632ee9

China’s latest answer to OpenAI is ‘Chat Xi PT’

Internet regulator uses Chinese leader’s political philosophy to help answer questions posed to latest large language model

Beijing’s latest attempt to control how artificial intelligence informs Chinese internet users has been rolled out as a chatbot trained on the thoughts of President Xi Jinping.

The country’s newest large language model has been learning from its leader’s political philosophy, known as “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”, as well as other official literature provided by the Cyberspace Administration of China.

“The expertise and authority of the corpus ensures the professionalism of the generated content,” CAC’s magazine said, in a Monday social media post about the new LLM.





We can, therefore we must?

https://www.theatlantic.com/technology/archive/2024/05/openai-scarlett-johansson-sky/678446/?gift=2iIN4YrefPjuvZ5d2Kh306LFFT6yU6HZVP_tmIcOfig&utm_source=copy-link&utm_medium=social&utm_campaign=share

OpenAI Just Gave Away the Entire Game

The Scarlett Johansson debacle is a microcosm of AI’s raw deal: It’s happening, and you can’t stop it.

If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle. The story, according to Johansson’s lawyers, goes like this: Nine months ago, OpenAI CEO Sam Altman approached the actor with a request to license her voice for a new digital assistant; Johansson declined. She alleges that just two days before the company’s keynote event last week, in which that assistant was revealed as part of a new system called GPT-4o, Altman reached out to Johansson’s team, urging the actor to reconsider. Johansson and Altman allegedly never spoke, and Johansson allegedly never granted OpenAI permission to use her voice. Nevertheless, the company debuted Sky two days later—a program with a voice many believed was alarmingly similar to Johansson’s.





Spend wisely or not at all.

https://www.makeuseof.com/gpt4-free-for-everyone-but-still-reasons-keep-using-chatgpt-plus/

GPT-4 Is Now Free for Everyone, but There Are Still 6 Reasons to Keep Using ChatGPT Plus

Following OpenAI's Spring Update, GPT-4o has become publicly available to Free users. This means that everyone can access GPT-4-level intelligence without paying. You might wonder why you would want to keep paying $20 monthly when you can get it for free.

Well, here are some reasons you might want to keep your ChatGPT Plus sub.





Tools & Techniques.

https://www.engadget.com/microsoft-teams-up-with-khan-academy-to-make-the-khanmigo-ai-teaching-assistant-free-153008848.html

Microsoft teams up with Khan Academy to make the Khanmigo AI teaching assistant free

Microsoft and non-profit educational organization Khan Academy have formed a partnership that will allow all K-12 educators in the US to access the pilot version of Khanmigo for Teachers at no cost. Khanmigo is an AI-powered teaching assistant that can help teachers find ways to make lessons more fun and engaging. It will also recommend assignments, display information on a student's performance so that teachers can assess their progress, and provide resources educators can use to refresh their knowledge.

The tool can also quickly create lesson plans and suggest student groups for team activities.



Tuesday, May 21, 2024

Is this the solution?

https://techcrunch.com/2024/05/20/uks-autonomous-vehicle-legislation-becomes-law-paving-the-way-for-first-driverless-cars-by-2026/

UK’s autonomous vehicle legislation becomes law, paving the way for first driverless cars by 2026

The U.K. has been eager to position itself at the forefront of the autonomous vehicle revolution, funding various AV projects and research programs around safety. The government has touted the potential safety benefits of self-driving cars in that they remove human error from roads, though it acknowledges that crashes will still happen, as evidenced by reports from the U.S., where self-driving cars have a firmer foothold. In fact, California has emerged as a hotbed for proposed AV regulation, too.

This is why liability is one of the core facets of the U.K.’s new regulation — who will bear responsibility in the event of a crash? The U.K. clarified this point in 2022 when it published a roadmap that stated that its new legislation will make corporations responsible for any mishaps, “meaning a human driver would not be liable for incidents related to driving while the vehicle is in control of driving.”

Each approved self-driving vehicle will have a corresponding “authorized self-driving entity,” which will typically be the manufacturer but could also be the software developer or insurance company. And this entity will be responsible for the vehicle when self-driving mode is activated.





But could it take over my blog?

https://www.bespacific.com/see-how-easily-ai-chatbots-can-be-taught-to-spew-disinformation/

See How Easily AI Chatbots Can Be Taught to Spew Disinformation

The New York Times [no paywall]: “Ahead of the U.S. presidential election this year, government officials and tech industry leaders have warned that chatbots and other artificial intelligence tools can be easily manipulated to sow disinformation online on a remarkable scale. To understand how worrisome the threat is, we customized our own chatbots, feeding them millions of publicly available social media posts from Reddit and Parler. The posts, which ranged from discussions of racial and gender equity to border policies, allowed the chatbots to develop a variety of liberal and conservative viewpoints…”





The same, but different.

https://pogowasright.org/minnesota-legislature-passes-consumer-data-privacy-act/

Minnesota Legislature Passes Consumer Data Privacy Act

David Stauss & Brad Hammer of Husch Blackwell write:

On May 19, the Minnesota legislature passed the Minnesota Consumer Data Privacy Act (HF 4757 / SF 4782). The bill, which is sponsored by Representative Steve Elkins, was passed as Article 5 of a larger omnibus bill. The bill next moves to Governor Tim Walz for consideration.
The Minnesota bill largely tracks the Washington Privacy Act model but with some significant and unique variations. For example, the bill creates a novel right to question the result of a profiling decision and have a controller provide additional information regarding that decision. It also contains privacy policy requirements that are intended to increase interoperability with other state consumer data privacy laws. Further, the bill contains provisions requiring controllers to maintain a data inventory and document and maintain a description of policies and procedures the controller has adopted to comply with the bill’s provisions. We discuss those requirements and provisions, along with others, in the below article.
As with prior bills, we have added the Minnesota bill to our chart providing a detailed comparison of laws enacted to date.

Read more at Byte Back.





In Alabama nobody will know you’re a dog.

https://pogowasright.org/alabama-enacts-genetic-privacy-bill/

Alabama Enacts Genetic Privacy Bill

Libbie Canter and Elizabeth Brim of Covington and Burling write:

On May 16, 2024, Alabama enacted a genetic privacy bill (HB 21), which regulates consumer-facing genetic testing companies. HB 21 continues the recent trend of states enacting genetic privacy legislation aimed at regulating direct-to-consumer (“DTC”) genetic testing companies, such as in Nebraska and Virginia, with more than 10 states now having similar laws on the books.
Scope of HB 21
HB 21 regulates “genetic testing companies’” practices involving “genetic data.” HB 21 defines a “genetic testing company” as “[a]ny person, other than a health care provider, that directly solicits a biological sample from a consumer for analysis in order to provide products or services to the consumer which include disclosure of information that may include, but is not limited to, the following:
  1. The genetic link of the consumer to certain population groups based on ethnicity, geography, or anthropology;
  2. The probable relationship of the consumer to other individuals based on matching DNA for purposes that include genealogical research; or
  3. Recommendations to the consumer for managing wellness which are based on physical or metabolic traits, lifestyle tendencies, or disease predispositions that are associated with genetic markers present in the consumer’s DNA.”

Read more at Inside Privacy.





Worth noting…

https://www.bespacific.com/how-to-tell-if-a-conspiracy-theory-is-probably-false/

How to tell if a conspiracy theory is probably false

Via LLRX, “How to tell if a conspiracy theory is probably false”: Conspiracy theories abound. What should you believe − and how can you tell? H. Colleen Sinclair, a social psychologist who studies misleading narratives, identifies seven steps you can take to vet a claim you’ve seen or heard.



Sunday, May 19, 2024

How to get it wrong.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4829598

A Real Account of Deep Fakes

Laws regulating deepfakes are often characterized as protecting privacy or preventing defamation. But examining anti-deepfakes statutes reveals that their breadth exceeds what modern privacy or defamation doctrines can justify: the typical law proscribes material that neither discloses true facts nor asserts false ones. Anti-deepfakes laws encompass harms not cognizable as invasion of privacy or defamation—but not because the laws are overinclusive. Rather, anti-deepfakes laws significantly exceed the dignitary torts’ established boundaries in order to address a distinct wrong: the outrageous use of images per se.

The mechanism by which non-deceptive, pornographic deepfakes cause harm is intuitively simple, yet almost entirely unexamined. Instead, legislators and jurists usually behave as if AI-generated images convey the same information as the photographs and videos they resemble. This approach ignores two blindingly obvious facts: deepfakes are not photographs or video recordings, and often, they don’t even pretend to be. What legal analysis of deepfakes has lacked is a grounding in semiotics, the study of how signs communicate meaning.

Part I of this Article surveys every domestic statute that specifically regulates pornographic deepfakes and distills the characteristics of the typical law. It shows that anti-deepfakes regimes do more than regulate assertions of fact: they ban disparaging uses of images per se, whether or not viewers understand them as fact. Part II uses semiotic theory to explain how deepfakes differ from the media they mimic and why those differences matter legally. Photographs are indexical: they record photons that passed through a lens at a particular moment in time. Deepfakes are iconic: they represent by resemblance. The legal rationales invoked to regulate indexical images cannot justify the regulation of non-deceptive deepfakes. Part III in turn reveals—through a tour of doctrines ranging from trademark dilution to child sexual abuse imagery—that anti-deepfakes laws are not alone in regulating expressive, non-deceptive uses of icons per se. Finally, in Part IV, the Article explains why a proper semiotic understanding of AI-generated pornography is vital. Lawmakers are racing to address an oncoming deluge of photorealistic, AI-generated porn. We can confront this deluge by doubling down on untenable rationales that equate iconic images with indexical images. Or we can acknowledge that deepfakes are icons, not indices, and address them with the bodies of law that regulate them as such: obscenity and an extended version of the tort of appropriation.





Is GDPR adequate?

https://www.researchgate.net/profile/Alfio-Grasso-4/publication/380317554_The_Bad_Algorithm_Automated_Discriminatory_Decisions_in_the_European_General_Data_Protection_Regulation/links/6635073e7091b94e93eed43f/The-Bad-Algorithm-Automated-Discriminatory-Decisions-in-the-European-General-Data-Protection-Regulation.pdf

The Bad Algorithm

The use of automated systems to reach decisions is increasingly widespread, and more and more automated systems have been involved in formulating decisions that have a significant impact on individual and collective lives, especially since the beginning of the COVID-19 pandemic. Automated decision-making has proved capable of producing extremely positive results in terms of greater efficiency and speed, but it often conceals the risk of discrimination, both longstanding and newly minted.

Based on an analytical examination of the relevant legal provisions, a close comparison with the positions of legal scholars, and the jurisprudence of the European Court of Justice, the study examines discriminatory automated decisions in light of data protection law, in order to ascertain whether the European General Data Protection Regulation (GDPR) provides effective tools for counteracting them.





Perspective.

https://www.aol.com/news/stephen-wolfram-powerful-unpredictability-ai-100053978.html

Stephen Wolfram on the Powerful Unpredictability of AI

Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.

Wolfram's work on computational thinking forms the basis of intelligent assistants, such as Siri. In an April conversation with Reason's Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and the complicated relationship between humans and their technology.