Tuesday, October 24, 2023

Reminder:

The Fall Privacy Foundation Seminar is scheduled for Friday, October 27th.

Recent Developments in State Privacy Laws

Register for the Fall 2023 Privacy Seminar

NOTE:

Something happened to compromise the Privacy Foundation e-mail list. If you were on the Privacy Foundation e-mail list but have not heard from us this Fall, please email us so we can add your name back to the list. [Please check with friends who might not see this blog. Bob]





We must expect unintended consequences. Gross changes are easy to spot. How about something subtle?

https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/

This new data poisoning tool lets artists fight back against generative AI

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creators’ permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at the computer security conference USENIX.
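The excerpt above describes perturbations small enough to be invisible to a viewer yet meaningful to a model. As a rough illustration of that idea only (this is not Nightshade’s actual algorithm, which is targeted rather than random), here is a minimal sketch of adding a bounded, imperceptible perturbation to an image array:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(image: np.ndarray, epsilon: int = 2) -> np.ndarray:
    """Add per-pixel noise bounded by epsilon, clipped to the valid [0, 255] range.

    A change of +/-2 out of 255 is visually negligible, but a model trained
    on many such images can be steered by carefully chosen perturbations.
    Here the noise is random; a real poisoning tool optimizes it.
    """
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# Stand-in for an artwork: a random 64x64 RGB image.
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
poisoned = perturb(image)

# The change stays within epsilon per pixel, i.e. imperceptible to a human.
max_change = int(np.abs(poisoned.astype(int) - image.astype(int)).max())
```

The point of the sketch is the constraint, not the noise: the visible image is essentially unchanged, while the training signal is not.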





Perspective.

https://www.foreignaffairs.com/world/coming-ai-economic-revolution

The Coming AI Economic Revolution

In June 2023, a study of the economic potential of generative artificial intelligence estimated that the technology could add more than $4 trillion annually to the global economy. This would be on top of the $11 trillion that nongenerative AI and other forms of automation could contribute. These are enormous numbers: by comparison, the entire German economy—the world’s fourth largest—is worth about $4 trillion. According to the study, produced by the McKinsey Global Institute, this astonishing impact will come largely from gains in productivity.

At least in the near term, such exuberant projections will likely outstrip reality. Numerous technological, process-related, and organizational hurdles, as well as industry dynamics, stand in the way of an AI-driven global economy. But just because the transformation may not be immediate does not mean the eventual effect will be small.

By the beginning of the next decade, the shift to AI could become a leading driver of global prosperity. The prospective gains to the world economy derive from the rapid advances in AI—now further expanded by generative AI, or AI that can create new content, and its potential applications in just about every aspect of human and economic activity. If these innovations can be harnessed, AI could reverse the long-term declines in productivity growth that many advanced economies now face.





Perspective.

https://www.bespacific.com/ai-algorithms-and-awful-humans/

AI, Algorithms, and Awful Humans

Solove, Daniel J. and Matsumi, Hideyuki, AI, Algorithms, and Awful Humans (October 16, 2023). 96 Fordham Law Review (forthcoming 2024), available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603992

“This Essay critiques a set of arguments often made to justify the use of AI and algorithmic decision-making technologies. These arguments all share a common premise – that human decision-making is so deeply flawed that augmenting it or replacing it with machines will be an improvement. In this Essay, we argue that these arguments fail to account for the full complexity of human and machine decision-making when it comes to deciding about humans. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make. It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Some of the human dimensions to decision-making that cause great problems also have great virtues. Additionally, algorithms often rely too much on quantifiable data to the exclusion of qualitative data. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Having humans oversee machines is not a cure; humans often perform badly when reviewing algorithmic output. We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.”
