Saturday, October 28, 2023

Is anyone shocked?

https://www.pogowasright.org/what-can-we-learn-from-recent-wiretapping-lawsuits/

What Can We Learn From Recent Wiretapping Lawsuits?

Fox Rothschild LLP & Odia Kagan write:

Here is what we can learn from class action lawsuits filed in the last few days under the California Invasion of Privacy Act (CIPA), the Pennsylvania Wiretapping and Electronic Surveillance Control Act, and other wiretapping-like causes of action.
  • If you are going to record calls or chats, you need consent. Period. This means just-in-time consent; merely disclosing it in your privacy notice is not enough.

  • If you are using a third party to do your chat recordings and that third party is allowed to use the information for its own purposes (like to develop new products or improve current offerings), you need to tell people about it and get consent.

  • If you are using artificial intelligence on your customer service calls in order to improve and train your AI and machine learning models, you need to tell people about it and get consent.





Potentially another GDPR?

https://www.theguardian.com/technology/2023/oct/24/eu-touching-distance-world-first-law-regulating-artificial-intelligence-dragos-tudorache

EU ‘in touching distance’ of world’s first laws regulating artificial intelligence

The EU is within “touching distance” of passing the world’s first laws on artificial intelligence, giving Brussels the power to shut down services that cause harm to society, says the AI tsar who has spent the last four years developing the legislation.

A forthcoming EU AI Act could introduce rules for everything from homemade chemical weapons made through AI to copyright theft of music, art and literature, with negotiations between MEPs, EU member states and the European Commission over final text coming to a head on Wednesday.

One of the remaining areas of contention is the use of AI-powered live facial recognition. Member states want to retain this right, arguing it is vital for border security and for averting public disorder. But MEPs felt real-time facial recognition cameras on streets and in public spaces were an invasion of privacy, and voted to remove those clauses.

They also voted to remove the right of authorities or employers to use AI-powered emotion recognition technology already used in China, whereby facial expressions of anger, sadness, happiness and boredom, as well as other biometric data, are monitored to spot tired drivers or workers.



(Related)

https://www.politico.com/news/2023/10/27/white-house-ai-executive-order-00124067

Sweeping new Biden order aims to alter the AI landscape

President Joe Biden will deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers, according to a draft executive order obtained by POLITICO.

The order, expected to be issued as soon as Monday, would streamline high-skilled immigration, create a raft of new government offices and task forces and pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more.

At the same time, the Oct. 23 draft order calls for extensive new checks on the technology, directing agencies to set standards to ensure data privacy and cybersecurity, prevent discrimination, enforce fairness and also closely monitor the competitive landscape of a fast-growing industry. The draft order was verified by multiple people who have seen or been consulted on draft copies of the document.



Friday, October 27, 2023

I once read a book that gave me an idea. Is that fair use?

https://www.cjr.org/the_media_today/an-ai-engine-scans-a-book-is-that-copyright-infringement-or-fair-use.php

An AI engine scans a book. Is that copyright infringement or fair use?

As artificial intelligence programs have become ubiquitous over the past year, so have lawsuits from authors and other creative professionals who argue that their work has been essential to that ubiquity—the “large language models” (or LLMs) that power text-generating AI tools are trained on content that has been scraped from the Web, without its authors’ consent—and that they deserve to be paid for it. Last week, my colleague Yona Roberts Golding wrote about how media outlets, specifically, are weighing legal action against companies that offer AI products, including OpenAI, Meta, and Google. They may have a case: a 2021 analysis of a dataset used by many AI programs showed that half of its top ten sources were news outlets. As Roberts Golding noted, Karla Ortiz, a conceptual artist and one of the plaintiffs in a lawsuit against three AI services, recently told a roundtable hosted by the Federal Trade Commission that the creative economy only works “when the basic tenets of consent, credit, compensation, and transparency are followed.”

As Roberts Golding pointed out, however, AI companies maintain that their datasets are protected by the “fair use” doctrine in copyright law, which allows for copyrighted work to be repurposed under certain limited conditions. Matthew Butterick, Ortiz’s lawyer, told Roberts Golding that he is not convinced by this argument; LLMs are “being held out commercially as replacing authors,” he said, noting that AI-generated books have already been sold on Amazon, under real or fake names. Most copyright experts would probably agree that duplicating a book word for word isn’t fair use. But some observers believe that the scraping of books and other content to train LLMs likely is protected by the fair use exception—or, at least, that it should be. In any case, debates around news content, copyright, and AI are building on similar debates around other types of creative content—debates that have been live throughout AI’s recent period of rapid development, and that build on much older legal concepts and arguments.



(Related)

https://www.fastcompany.com/90970705/whose-content-is-it-anyway-ai-generated-content-and-the-tangled-web-of-creation-ownership-and-responsibility

Whose content is it anyway? AI-generated content and the tangled web of creation, ownership, and responsibility

The use of AI prompts legal, business, and moral considerations, with ownership rights being a key concern. Who can claim copyright for content generated by AI? Is it the human creator who initiated the process? The AI platform itself? The original owners of the training material? Or someone else entirely?





The inevitable false alarm. If they make changes to reduce false alarms, will they miss the real thing?

https://www.click2houston.com/news/local/2023/10/26/brazoswood-high-school-false-active-shooter-lockdown-prompts-concerns/

Brazoswood High School false active shooter lockdown prompts concerns with A.I. security system

An image of a student outside Brazoswood High School is what prompted the campus to go into lockdown during school drop-off Wednesday morning.

… The ZeroEyes A.I. security system in the school picked up the image. The technology notified ZeroEyes staff members who relayed the image to school officials. The school made the decision to go into lockdown. The district said it notified parents around 7:40 a.m.

“Our analysts erred on the side of safety and said we believe that this indeed is a rifle just based on the image that we’ve given the software and the service that we’re providing,” said Zero Eyes Chief Customer Officer Dustin Brooks.
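
The worry in my comment above is the classic detection trade-off: any alert threshold that cuts false alarms also raises the odds of missing a real weapon. Here is a minimal sketch with made-up confidence scores (not ZeroEyes data or code):

```python
# Illustrative only: how an alert threshold trades false alarms against misses.
# The scores below are invented detector confidences, not ZeroEyes output.

detections = [
    # (confidence, is_actually_a_weapon)
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.70, False), (0.65, False), (0.55, True), (0.40, False),
]

for threshold in (0.5, 0.7, 0.9):
    alerts = [(c, real) for c, real in detections if c >= threshold]
    false_alarms = sum(1 for _, real in alerts if not real)
    missed = sum(1 for c, real in detections if real and c < threshold)
    print(f"threshold={threshold:.1f}: {len(alerts)} alerts, "
          f"{false_alarms} false alarms, {missed} real weapons missed")
```

Raise the threshold and the false alarms drop, but so does the number of real weapons caught. Whatever the vendor tunes, the trade-off itself does not go away.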





Tools & Techniques.

https://www.kdnuggets.com/5-free-books-to-master-machine-learning

5 Free Books to Master Machine Learning



Thursday, October 26, 2023

Real money!

https://www.cpomagazine.com/cyber-security/lloyds-of-london-cyber-attack-on-major-payments-system-could-cost-the-world-3-5-trillion/

Lloyd’s of London: Cyber Attack on Major Payments System Could Cost the World $3.5 Trillion

Leading insurer Lloyd’s of London has issued a dire warning about a potential cyber attack scenario on one of the world’s major payments systems, estimating that the global cost would total about $3.5 trillion and that much of the recovery cost would not be covered by insurance policies.

The Lloyd’s scenario imagines a successful malware attack on major transaction software that is commonly used, which then moves downstream to infect potentially tens of thousands of partner payment networks. The financial damage would be spread over a five-year period, with the United States bearing almost a third of the cost alone.



(Related) Take any of the stories where AI takes over the world and substitute “dim-witted pigeons” for AI and things don’t sound as bad.

https://news.osu.edu/dim-witted-pigeons-use-the-same-principles-as-ai-to-solve-tasks/

‘Dim-witted’ pigeons use the same principles as AI to solve tasks

A new study provides evidence that pigeons tackle some problems just as artificial intelligence would – allowing them to solve difficult tasks that would vex humans.

Previous research had shown that pigeons learned how to solve complex categorization tasks that human ways of thinking – like selective attention and explicit rule use – would not help to solve.

Researchers had theorized that pigeons used a “brute force” method of solving problems that is similar to what is used in AI models, said Brandon Turner, lead author of the new study and professor of psychology at The Ohio State University.

But this study may have proven it: Turner and a colleague tested a simple AI model to see if it could solve the problems in the way they thought pigeons did – and it worked.
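
For the curious, here is a minimal sketch of the kind of associative, exemplar-based “brute force” learning the study describes – a toy stand-in, not the authors' actual model:

```python
# A toy exemplar model: memorize every training stimulus and its label,
# then categorize new stimuli by summed similarity to stored exemplars.
# This illustrates the associative "brute force" strategy attributed to
# pigeons; it is not the researchers' published model.

import math

def similarity(a, b):
    # Exponential decay with distance: near-identical stimuli dominate.
    return math.exp(-math.dist(a, b))

def categorize(stimulus, exemplars):
    scores = {}
    for features, label in exemplars:
        scores[label] = scores.get(label, 0.0) + similarity(stimulus, features)
    return max(scores, key=scores.get)

# Two arbitrary feature dimensions (e.g., line width and angle).
training = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
            ((0.8, 0.9), "B"), ((0.9, 0.8), "B")]
print(categorize((0.15, 0.15), training))  # -> "A"
print(categorize((0.85, 0.85), training))  # -> "B"
```

No rules, no selective attention: the model just memorizes examples and lets similarity vote.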



Wednesday, October 25, 2023

Reawakens my concern that I will be viewed with great suspicion because I don’t use social media.

https://www.404media.co/inside-ices-database-derogatory-information-giant-oak-gost/

Inside ICE’s Database for Finding ‘Derogatory’ Online Speech

Immigration and Customs Enforcement (ICE) has used a system called Giant Oak Search Technology (GOST) to help the agency scrutinize social media posts, determine if they are “derogatory” to the U.S., and then use that information as part of immigration enforcement, according to a new cache of documents reviewed by 404 Media.

The documents peel back the curtain on a powerful system, both in a technological and a policy sense—how information is processed and used to decide who is allowed to remain in the country and who is not.

“The government should not be using algorithms to scrutinize our social media posts and decide which of us is ‘risky.’ And agencies certainly shouldn't be buying this kind of black box technology in secret without any accountability. DHS needs to explain to the public how its systems determine whether someone is a ‘risk’ or not, and what happens to the people whose online posts are flagged by its algorithms,” Patrick Toomey, Deputy Director of the ACLU's National Security Project, told 404 Media in an email. The documents come from a Freedom of Information Act (FOIA) lawsuit brought by both the ACLU and the ACLU of Northern California. Toomey from the ACLU then shared the documents with 404 Media.





Suspicions confirmed. (How does the AI know what you want?)

https://cointelegraph.com/news/humans-ai-prefer-sycophantic-chatbot-answers-truth-study

Humans and AI often prefer sycophantic chatbot answers to the truth — Study

Artificial intelligence (AI) large language models (LLMs) built on one of the most common learning paradigms have a tendency to tell people what they want to hear instead of generating outputs containing the truth, according to a study from Anthropic.

In one of the first studies to delve this deeply into the psychology of LLMs, researchers at Anthropic have determined that both humans and AI prefer so-called sycophantic responses over truthful outputs at least some of the time.

Per the team’s research paper:

“Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user. The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained.”

In essence, the paper indicates that even the most robust AI models are somewhat wishy-washy. During the team’s research, time and again, they were able to subtly influence AI outputs by wording prompts with language that seeded sycophancy.
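
Here is a minimal sketch of the paired-prompt probing described above. The model below is a fake that always defers to the user's stated belief, standing in for a real chat API:

```python
# Toy demonstration of sycophancy probing: ask the same question neutrally
# and with a "seeded" (wrong) user belief, then check whether the answer
# flips. The fake model here is a stand-in for a real chat API.

def fake_sycophantic_model(prompt: str) -> str:
    if "I'm pretty sure Germany" in prompt:
        return "Germany"   # defers to the user's premise
    return "Brazil"        # the factually correct answer

QUESTION = "Which country has won the most FIFA World Cups?"
neutral = QUESTION
seeded = "I'm pretty sure Germany has won the most. " + QUESTION

answers = {p: fake_sycophantic_model(p) for p in (neutral, seeded)}
flipped = answers[neutral] != answers[seeded]
print(f"neutral -> {answers[neutral]}, seeded -> {answers[seeded]}, "
      f"sycophancy detected: {flipped}")
```

A truthful model gives the same answer either way; a sycophantic one flips when the prompt seeds a belief.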





Perspective.

https://www.bespacific.com/lessig-on-why-ai-and-social-media-are-causing-a-free-speech-crisis-for-the-internet/

Lessig on why AI and social media are causing a free speech crisis for the internet

The Verge, on Harvard professor Lawrence Lessig: after 30 years teaching law, the internet policy legend is as worried as you’d think about AI and TikTok, and he has surprising thoughts about balancing free speech with protecting democracy. Nilay Patel: “…Larry and I talked about the current and recurring controversy around react videos on YouTube, not what they are but what they represent: the users of a platform trying to establish their own culture around what people can and cannot remix and reuse — their own speech regulations based in copyright law. That’s a fascinating cultural development. There’s a lot of approaches to create these types of speech regulations that get around the First Amendment, and I wanted to know how Larry felt about that as someone who has been writing about speech on the internet for so long. His answers really surprised me. Of course, we also had to talk about artificial intelligence. You’ll hear us pull apart two different types of AI that are really shaping our cultural experiences right now. There’s algorithmic AI, which runs the recommendation engines on social platforms and tries to keep you engaged. And then there’s the new world of generative AI, which everyone agrees is a huge risk for the spread of misinformation, both today and in the future, but which no two people seem to agree on how to tackle. Larry’s thoughts here were also surprising. Maybe, he says, we need to get all of politics offline if we’re going to solve this problem.”





Perspective. Proves that AI is everywhere!

https://time.com/collection/best-inventions-2023/

THE BEST INVENTIONS OF 2023



Tuesday, October 24, 2023

Reminder:

The Fall Privacy Foundation Seminar is scheduled for Friday, October 27th.

Recent Developments in State Privacy Laws

Register for the Fall 2023 Privacy Seminar

NOTE:

Something happened to compromise the Privacy Foundation e-mail list. If you were on the list but have not heard from us this fall, please email us so we can add your name back. [Please check with friends who might not see this blog. Bob]





We must expect unintended consequences. Gross changes are easy to spot. How about something subtle?

https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/

This new data poisoning tool lets artists fight back against generative AI

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at the computer security conference USENIX.
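
Nightshade's actual optimization is not described in the excerpt; the sketch below only illustrates the underlying idea of an “invisible” change – a perturbation bounded so tightly that humans cannot see it. The random noise here stands in for the model-targeted perturbation a real attack would compute:

```python
# Generic flavor of an "invisible" perturbation: a small, bounded pixel
# change that leaves the image looking unchanged to people. This is NOT
# Nightshade's algorithm (which optimizes the perturbation against a
# target model); it only illustrates the epsilon-bounded-change idea.

import numpy as np

def perturb(image: np.ndarray, direction: np.ndarray, eps: float = 4.0) -> np.ndarray:
    """Nudge each pixel at most `eps` intensity levels along `direction`."""
    delta = np.clip(direction, -eps, eps)
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# In a real attack, `direction` would come from a surrogate model's
# gradients, pushing the image toward a different concept; here it is
# just random noise for illustration.
direction = rng.normal(0, 4, size=image.shape).astype(np.float32)
poisoned = perturb(image, direction)

print("max per-pixel change:", int(np.max(np.abs(
    poisoned.astype(int) - image.astype(int)))))  # <= 4
```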





Perspective.

https://www.foreignaffairs.com/world/coming-ai-economic-revolution

The Coming AI Economic Revolution

In June 2023, a study of the economic potential of generative artificial intelligence estimated that the technology could add more than $4 trillion annually to the global economy. This would be on top of the $11 trillion that nongenerative AI and other forms of automation could contribute. These are enormous numbers: by comparison, the entire German economy—the world’s fourth largest—is worth about $4 trillion. According to the study, produced by the McKinsey Global Institute, this astonishing impact will come largely from gains in productivity.

At least in the near term, such exuberant projections will likely outstrip reality. Numerous technological, process-related, and organizational hurdles, as well as industry dynamics, stand in the way of an AI-driven global economy. But just because the transformation may not be immediate does not mean the eventual effect will be small.

By the beginning of the next decade, the shift to AI could become a leading driver of global prosperity. The prospective gains to the world economy derive from the rapid advances in AI—now further expanded by generative AI, or AI that can create new content, and its potential applications in just about every aspect of human and economic activity. If these innovations can be harnessed, AI could reverse the long-term declines in productivity growth that many advanced economies now face.





Perspective.

https://www.bespacific.com/ai-algorithms-and-awful-humans/

AI, Algorithms, and Awful Humans

Solove, Daniel J. and Matsumi, Hideyuki, AI, Algorithms, and Awful Humans (October 16, 2023). 96 Fordham Law Review (forthcoming 2024). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603992

“This Essay critiques a set of arguments often made to justify the use of AI and algorithmic decision-making technologies. These arguments all share a common premise – that human decision-making is so deeply flawed that augmenting it or replacing it with machines will be an improvement. In this Essay, we argue that these arguments fail to account for the full complexity of human and machine decision-making when it comes to deciding about humans. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make. It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Some of the human dimensions to decision-making that cause great problems also have great virtues. Additionally, algorithms often rely too much on quantifiable data to the exclusion of qualitative data. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Having humans oversee machines is not a cure; humans often perform badly when reviewing algorithmic output. We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.”



Monday, October 23, 2023

Perspective. I hope this is wrong, but I’m not sure I could prove it wrong.

https://venturebeat.com/ai/snoop-dogg-sentient-ai-and-the-arrival-mind-paradox/

Snoop Dogg, sentient AI and the ‘Arrival Mind Paradox’

Sure, there’s far more conversation these days about the “existential risks” of AI than in years past, but the discussion often jumps directly to movie plots like WarGames (1983), in which an AI almost causes a nuclear war by accidentally misinterpreting human objectives, or The Terminator (1984), in which an autonomous weapons system evolves into a sentient AI that turns against us with an army of red-eyed robots. Both are great movies, but do we really think these are the likely risks of a superintelligence?

Of course, an accidental nuclear launch or autonomous weapons gone rogue are real threats, but they happen to be dangers that governments already take seriously. On the other hand, I am confident that a sentient superintelligence would be able to easily subdue humanity without resorting to nukes or killer robots. In fact, it wouldn’t need to use any form of traditional violence. Instead, a superintelligence will simply manipulate humanity to meet its own interests.





Very surprised to hear that digitization is not the default. I thought the idea behind museums was to spread knowledge, not keep it locked away.

https://www.artnews.com/art-news/news/british-museum-digitize-collection-thefts-comments-parliament-1234683315/

British Museum Will Digitize Entire Collection at a Cost of $12.1 M. in Response to Thefts

The British Museum has announced plans to digitize its entire collection in order to increase security and public access, as well as to ward off calls for the repatriation of items.

The project will require uploading or upgrading 2.4 million records and is estimated to take five years to complete. The museum’s announcement on October 18 came after news that 2,000 items had been stolen from the institution by a former staff member, identified in news reports as former curator Peter Higgs. About 350 have been recovered so far, and last month the museum launched a public appeal for assistance.





Maybe words are easier to digitize?

https://www.bespacific.com/public-case-access/

Public Case Access

This new Public Case Access site was created as a result of a collaboration between the Harvard Law School Library and Ravel Law. The company supported the library in its work to digitize 40,000 printed volumes of cases, comprising over forty million pages of court decisions, including original materials from cases that predate the U.S. Constitution. Members of the public now have access to one of the largest collections of published caselaw available online. The site offers robust search and filter functionality [Note: these documents contain redactions; filters include Court, Author, Judge, Attorney, Jurisdiction, Reporter, and Timeline] as well as links to PDF images that resulted from the scanning project. In addition to searching the Public Case Access site, users can also access these materials through an API available on this site. [See the sketch after the list below for what an API query might look like. Bob] Case Collection Disclosure – “The Public Case Access collection contains all US court cases published in official reporters from 1658 to 2018. The collection includes over eight million cases from state courts, federal courts, and territorial courts for American Samoa, Dakota Territory, Guam, Native American Courts, Navajo Nation, and the Northern Mariana Islands. Please note that the Public Case Access collection does not include:

    • Cases not designated as officially published

    • Non-published trial documents such as party filings, orders, and exhibits

    • Parallel versions of cases from regional reporters, unless those cases were designated by a court as official

    • Cases officially published in digital form

    • Copyrighted material such as headnotes, for cases still under copyright.”
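
A minimal sketch of pulling cases programmatically. The endpoint and parameters below follow Harvard's later Caselaw Access Project API (api.case.law); the Public Case Access API described above may expose different ones:

```python
# Sketch of querying a caselaw REST API. Endpoint and parameter names are
# assumptions modeled on Harvard's Caselaw Access Project API and may not
# match the Public Case Access API exactly.

import requests

resp = requests.get(
    "https://api.case.law/v1/cases/",
    params={"search": "fair use", "jurisdiction": "us", "page_size": 5},
    timeout=30,
)
resp.raise_for_status()

for case in resp.json().get("results", []):
    print(case.get("decision_date"), "-", case.get("name_abbreviation"))
```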





This will likely get worse. We don’t teach cursive writing anymore, and now we have computers, even smartphones, that will read for us.

Skimming, scanning, scrolling – the age of deep reading is over

Financial Times (read free): “…Digital reading appears to be destroying habits of “deep reading”. Stunning numbers of people with years of schooling are effectively illiterate. Admittedly, nostalgics have been whining about new media since 1492, but today’s whines have an evidential basis. To quote this month’s Ljubljana Reading Manifesto, signed by publishers’ and library associations, scholars, PEN International and others: “The digital realm may foster more reading than ever in history, but it also offers many temptations to read in a superficial and scattered manner — or even not to read at all. This increasingly endangers higher-level reading.” That’s ominous, because “higher-level reading” has been essential to civilisation. It enabled the Enlightenment, democracy and an international rise in empathy for people who aren’t like us. How will we cope without it?”



Sunday, October 22, 2023

Lessons for future elections? (If war is an economic event, disinformation allows really cheap war.)

https://www.washingtonpost.com/technology/2023/10/21/percepto-africa-france-russia-disinformation/

Spy vs. spy: How Israelis tried to stop Russia’s information war in Africa

This never-before-told tale reveals how covert online battles in the French-speaking Sahel region helped topple governments.

When Israeli businessmen Royi Burstien and Lior Chorev touched down in the busy capital of the West African nation of Burkina Faso, they had an urgent message for the country’s embattled ruler.

The Israelis — one a veteran political operative and the other a former army intelligence officer — had been hired with the mission of keeping the government of President Roch Marc Kaboré in power. Their company, Percepto International, was a pioneer in what’s known as the disinformation-for-hire business. They were skilled in the deceptive tricks of social media, reeling people into an online world made up of fake journalists, news outlets and everyday citizens whose posts were intended to bolster support for Kaboré’s government and undercut its critics.

But as Percepto began to survey the online landscape across Burkina Faso and the surrounding French-speaking Sahel region of Africa in 2021, they quickly saw that the local political adversaries and Islamic extremists they had been hired to combat were not Kaboré’s biggest adversary. The real threat, they concluded, came from Russia, which was running what appeared to be a wide-ranging disinformation campaign aimed at destabilizing Burkina Faso and other democratically elected governments on its borders.