Sunday, May 19, 2024

How to get it wrong.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4829598

A Real Account of Deep Fakes

Laws regulating deepfakes are often characterized as protecting privacy or preventing defamation. But examining anti-deepfakes statutes reveals that their breadth exceeds what modern privacy or defamation doctrines can justify: the typical law proscribes material that neither discloses true facts nor asserts false ones. Anti-deepfakes laws encompass harms not cognizable as invasion of privacy or defamation—but not because the laws are overinclusive. Rather, anti-deepfakes laws significantly exceed the dignitary torts’ established boundaries in order to address a distinct wrong: the outrageous use of images per se.

The mechanism by which non-deceptive, pornographic deepfakes cause harm is intuitively simple, yet almost entirely unexamined. Instead, legislators and jurists usually behave as if AI-generated images convey the same information as the photographs and videos they resemble. This approach ignores two blindingly obvious facts: deepfakes are not photographs or video recordings, and often, they don’t even pretend to be. What legal analysis of deepfakes has lacked is a grounding in semiotics, the study of how signs communicate meaning.

Part I of this Article surveys every domestic statute that specifically regulates pornographic deepfakes and distills the characteristics of the typical law. It shows that anti-deepfakes regimes do more than regulate assertions of fact: they ban disparaging uses of images per se, whether or not viewers understand them as fact. Part II uses semiotic theory to explain how deepfakes differ from the media they mimic and why those differences matter legally. Photographs are indexical: they record photons that passed through a lens at a particular moment in time. Deepfakes are iconic: they represent by resemblance. The legal rationales invoked to regulate indexical images cannot justify the regulation of non-deceptive deepfakes. Part III in turn reveals—through a tour of doctrines ranging from trademark dilution to child sexual abuse imagery—that anti-deepfakes laws are not alone in regulating expressive, non-deceptive uses of icons per se. Finally, in Part IV, the Article explains why a proper semiotic understanding of AI-generated pornography is vital. Lawmakers are racing to address an oncoming deluge of photorealistic, AI-generated porn. We can confront this deluge by doubling down on untenable rationales that equate iconic images with indexical images. Or we can acknowledge that deepfakes are icons, not indices, and address them with the bodies of law that regulate them as such: obscenity and an extended version of the tort of appropriation.





Is GDPR adequate?

https://www.researchgate.net/profile/Alfio-Grasso-4/publication/380317554_The_Bad_Algorithm_Automated_Discriminatory_Decisions_in_the_European_General_Data_Protection_Regulation/links/6635073e7091b94e93eed43f/The-Bad-Algorithm-Automated-Discriminatory-Decisions-in-the-European-General-Data-Protection-Regulation.pdf

The Bad Algorithm

The use of automated systems to reach decisions is increasingly widespread, and such systems have played a growing role in decisions that significantly affect individual and collective lives, especially since the beginning of the COVID-19 pandemic. Automated decision making can produce extremely positive results in terms of efficiency and speed, but it often conceals a risk of discrimination, both longstanding and newly minted.

Based on an analytical examination of the relevant legal provisions and a close comparison with the positions of legal scholars and the jurisprudence of the European Court of Justice, the study examines discriminatory automated decisions in the light of data protection law, in order to ascertain whether the European General Data Protection Regulation (GDPR) provides effective tools for counteracting them.





Perspective.

https://www.aol.com/news/stephen-wolfram-powerful-unpredictability-ai-100053978.html

Stephen Wolfram on the Powerful Unpredictability of AI

Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.

Wolfram's work on computational thinking forms the basis of intelligent assistants such as Siri. In an April conversation with Reason's Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and of the complicated relationship between humans and their technology.


