Saturday, August 19, 2023

Interesting that they haven’t run with the ‘free money.’

https://www.reuters.com/technology/mad-men-machines-big-advertisers-shift-ai-2023-08-18/

From Mad Men to machines? Big advertisers shift to AI

Some of the world's biggest advertisers, from food giant Nestle to consumer goods multinational Unilever, are experimenting with using generative AI software like ChatGPT and DALL-E to cut costs and increase productivity, executives say.

But many companies remain wary of security and copyright risks as well as the dangers of unintended biases baked into the raw information feeding the software, meaning humans will remain part of the process for the foreseeable future.

… WPP, the world's biggest advertising agency, is working with consumer goods companies including Nestle and Oreo-maker Mondelez to use generative AI in advertising campaigns, its CEO Mark Read said.

"The savings can be 10 or 20 times," Read said in an interview. "Rather than flying a film crew down to Africa to shoot a commercial, we've created that virtually."





Revising economics for the AI era?

https://www.aljazeera.com/opinions/2023/8/19/ai-and-the-tyranny-of-the-data-commons

AI and the tyranny of the data commons

The myth of data as a non-rival resource needs to be abandoned.

… How did we get here? The seeds of the wholesale appropriation of our data were planted a long time ago when economists and media theorists declared data a non-rival resource, the basis for a sharing economy, where ownership is not important and consumers are free to create and distribute goods outside of a market system precisely because they are non-rivalrous.

An example of a rivalrous good is a cake. If I eat the cake, no one else can eat it. A non-rival resource, on the other hand, can be consumed by many people without diminishing its value. Think of a digital picture of a cake. If I use it on a website or a social media post, this would not prevent others from doing the same, and it would not diminish the quality and value of the digital picture.
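The cake analogy can be made concrete in a few lines of toy code (purely illustrative; none of these names come from the article): a rivalrous good is depleted by consumption, while a non-rival good can be "used" any number of times without changing.

```python
class Cake:
    """A rivalrous good: each act of consumption removes it from the pool."""
    def __init__(self, slices):
        self.slices = slices

    def eat_slice(self):
        if self.slices == 0:
            raise RuntimeError("No cake left -- rivalrous goods run out.")
        self.slices -= 1

def use_picture(picture_bytes):
    """A non-rival good: every 'use' reads the same unchanged bytes."""
    return len(picture_bytes)  # using it does not alter or deplete it

cake = Cake(slices=2)
cake.eat_slice()
cake.eat_slice()
# A third eater finds nothing left:
try:
    cake.eat_slice()
    depleted = False
except RuntimeError:
    depleted = True

picture = b"\x89PNG...cake.png"
uses = [use_picture(picture) for _ in range(1000)]  # 1000 consumers, no depletion
```

The asymmetry the article is arguing about is exactly this: the `Cake` object runs out, the `picture` bytes never do.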





Readers of this blog should do well…

https://www.pewresearch.org/internet/quiz/digital-knowledge-quiz-2023/

Quiz: Test your knowledge of digital topics



Friday, August 18, 2023

Transition to AI lawyers? Imagine law as an early adopter industry!

https://www.nfx.com/post/legal-and-ai

AI is Reinventing the Legal Industry

Legal Tech has not been the best sector for founders to spend time in over the last 20 years. But Legal x AI is different, and software in the legal sector has taken off.

We started investing in AI-powered Legal Tech several years ago, before the Generative AI boom. There are bigger patterns behind the explosive growth in this vertical, and reason to believe there’s much more value yet to come.

When we first invested in EvenUp, Darrow, ZERO Systems/Hercules, and others still in stealth, we saw three big differences from traditional Legal Tech.

First, AI x Legal helps lawyers make more money. Law firms that bring “AI inside” experience higher caseloads, more cases won, and faster claim processing.

Second, AI-powered software immediately and obviously improves the day-to-day experience of the lawyer using it. AI does tasks that lawyers traditionally find hard, tedious, or expensive – tasks that feel like eating glass.

Third, there were 10X leaps in workflows to save time, not the 1.5X offered by traditional Legal Tech. Counterintuitively, saving time is always a double-edged sword with law firms, because their billable-hours business model means they only make money when they take time, not save time.

Nevertheless, the 10X improvement in hours spent breaks them out of that incremental mindset that has been holding them back from embracing software. It helps them generate more revenue through more cases at higher value.

These benefits will repeat across many industries, not just legal. But it’s worth looking deeply at legal because it’s one of the first verticals to really take off with AI.



Thursday, August 17, 2023

De-Trumping elections?

https://www.bespacific.com/proposed-rule-artificial-intelligence-in-campaign-ads/

Proposed Rule – Artificial Intelligence in Campaign Ads

Federal Election Commission – The Commission announces its receipt of a Petition for Rulemaking filed by Public Citizen. The Petition asks the Commission to amend its regulation on fraudulent misrepresentation of campaign authority to make clear that the related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign ads… The Petition asserts that generative Artificial Intelligence and deepfake technology is being ‘‘used to create convincing images, audio and video hoaxes.’’ The Petition asserts that while the technology is not currently so advanced that viewers cannot identify when it is used disingenuously, if the use of the ‘‘technology continues to improve, it will become increasingly difficult, and perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media.’’ The Petition notes that the technology will ‘‘almost certainly create the opportunity for political actors to deploy it to deceive voters[,] in ways that extend well beyond any First Amendment protections for political expression, opinion or satire.’’ According to the Petition, this technology might be used to ‘‘create a video that purports to show an opponent making an offensive statement or accepting a bribe’’ and, once disseminated, be used for the purpose of ‘‘persuading voters that the opponent said or did something they did not say or do.’’ The Petition explains that a deepfake audio clip or video by a candidate or their agent would violate the fraudulent misrepresentation provision by ‘‘falsely putting words into another candidate’s mouth, or showing the candidate taking action they did not [take],’’ thereby ‘‘fraudulently speak[ing] or act[ing] ‘for’ that candidate in a way deliberately intended to [harm] him or her.’’ The Petitioner states that because the deepfaker misrepresents themselves as speaking for the deepfaked candidate, ‘‘the deepfake is fraudulent because the deepfaked candidate in fact did not say or do what is depicted by the deepfake and because the deepfake aims to deceive the public.’’ The Petitioner draws a distinction between deepfakes, which it contends violate the prohibition on fraudulent misrepresentation, and other uses of Artificial Intelligence in campaign communications, such as in parodies, where the purpose and effect are not to deceive voters, or as in other communications where ‘‘there is a sufficiently prominent disclosure that the image, audio or video was generated by [A]rtificial [I]ntelligence and portrays fictitious statements and actions.’’ …





Do I have to share my data?

https://fpf.org/blog/data-sharing-for-research-a-compendium-of-case-studies-analysis-and-recommendations/

DATA SHARING FOR RESEARCH: A COMPENDIUM OF CASE STUDIES, ANALYSIS, AND RECOMMENDATIONS

Today, the Future of Privacy Forum (FPF) published a report on corporate-academic partnerships that provides practical recommendations for companies and researchers who want to share data for research. The Report, Data Sharing for Research: A Compendium of Case Studies, Analysis, and Recommendations, demonstrates how, for many organizations, data-sharing partnerships are transitioning from being considered an experimental business activity to an expected business competency.






Reasonable?

https://www.theverge.com/2023/8/16/23834586/associated-press-ai-guidelines-journalists-openai

The Associated Press sets AI guidelines for journalists

Journalists for AP can experiment with ChatGPT but are asked to exercise caution by not using the tool to create publishable content. Any result from a generative AI platform “should be treated as unvetted source material” and subject to AP’s existing sourcing standards. The publication said it will not allow AI to alter photos, videos, or audio and will not use AI-generated images unless they are the subject of a news story. In that event, AP said it would label AI-generated photos in captions.





Can the peak get peakier?

https://venturebeat.com/ai/gartner-hype-cycle-places-generative-ai-on-the-peak-of-inflated-expectations/

Gartner Hype Cycle places generative AI on the ‘Peak of Inflated Expectations’

Many might not be surprised, but today the 2023 Gartner Hype Cycle for emerging technologies placed generative AI on the ‘Peak of Inflated Expectations’ for the first time.



Wednesday, August 16, 2023

It is always worth checking.

https://www.trendmicro.com/en_us/ciso/23/h/top-ai-risks.html

Top 10 AI Security Risks According to OWASP

The unveiling of the first-ever Open Worldwide Application Security Project (OWASP) risk list for large language model AI chatbots was yet another sign of generative AI’s rush into the mainstream—and a crucial step toward protecting enterprises from AI-related threats.





Would this fall under a ‘duty to use’ classification?

https://www.bespacific.com/how-to-use-large-language-models-for-empirical-legal-research/

How to Use Large Language Models for Empirical Legal Research

Choi, Jonathan H., How to Use Large Language Models for Empirical Legal Research (August 9, 2023). Journal of Institutional and Theoretical Economics (Forthcoming), Available at SSRN: https://ssrn.com/abstract=4536852 – “Legal scholars have long annotated cases by hand to summarize and learn about developments in jurisprudence. Dramatic recent improvements in the performance of large language models (LLMs) now provide a potential alternative. This Article demonstrates how to use LLMs to analyze legal documents. It evaluates best practices and suggests both the uses and potential limitations of LLMs in empirical legal research. In a simple classification task involving Supreme Court opinions, it finds that GPT-4 performs approximately as well as human coders and significantly better than a variety of prior-generation NLP classifiers, with no improvement from supervised training, fine-tuning, or specialized prompting.”
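The paper's headline comparison — GPT-4's labels versus human coders' labels on a classification task — comes down to measuring agreement rates between sets of codings. A minimal sketch of that evaluation, with made-up labels standing in for real Supreme Court opinion codings (the label values and the `percent_agreement` helper are my own illustration, not the paper's code):

```python
def percent_agreement(labels_a, labels_b):
    """Share of items on which two coders assign the same label."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical codings of ten opinions ("lib" = liberal, "con" = conservative).
human_coder = ["lib", "con", "lib", "lib", "con", "con", "lib", "con", "lib", "con"]
llm_labels  = ["lib", "con", "lib", "con", "con", "con", "lib", "con", "lib", "con"]
old_nlp     = ["lib", "lib", "con", "con", "con", "lib", "lib", "con", "con", "con"]

llm_score = percent_agreement(human_coder, llm_labels)   # 0.9
baseline  = percent_agreement(human_coder, old_nlp)      # 0.5
```

In the paper's terms, a result like the one mocked up here — the LLM agreeing with human coders far more often than a prior-generation classifier does — is what "performs approximately as well as human coders" means operationally.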





I guess it might be concerning if the school was doing something with my kid but they won’t tell me what they are doing… (Is that a ‘harm?’) Of course I could always ask my kid.

https://www.pogowasright.org/parents-lack-standing-to-challenge-schools-gender-support-program-4th-circuit-says/

Parents Lack Standing to Challenge School’s Gender-Support Program, 4th Circuit Says

Steve Lash reports:

Parents cannot challenge in court a Maryland county school board’s confidential gender-identity-support system for students because the parents failed to allege their children had availed themselves of the system, a divided federal appeals court ruled Monday.
In its 2-1 decision, the U.S. Court of Appeals for the Fourth Circuit said the parents lack standing because they have not suffered an injury from the Montgomery County Board of Education’s system, the students’ use of which is not disclosed to their parents if the school believes they would not be supportive.

Read more at Law.com.





Tools & Techniques. Do I have enough data to train this AI?

https://www.bespacific.com/you-can-build-your-own-ai-chatbot-with-this-drag-and-drop-tool/

You can build your own AI chatbot with this drag-and-drop tool

ZDNET – We try out Botpress, a tool that helps you create powerful AI-based chatbots. “Botpress is a tool for building interactive chatbots. While it supports building chatbots for a wide range of applications, the killer app is using it to build a customer support chatbot and backing it up with AI smarts. At its core, Botpress is a drag-and-drop interaction builder. You bring cards out onto the workspace, assign inputs, outputs, and calculations to the cards, and then connect one card to the next until a complete interaction has been mapped out. On the surface, bot building is fairly straightforward. You can build question cards and, based on the answers provided by users, transfer the interaction to another card which will either ask more questions or provide answers. Rinse. Wash. Repeat. Where this product stands out in the AI arena is that you can feed it knowledge sources ranging from a set of documents to a specific webpage, to searching on a specific website, to searching for answers across the web. AI analysis is powered by the ChatGPT API.”
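The card-and-connection model the review describes maps naturally onto a small state machine: each card asks a question, and the user's answer selects the next card. A bare-bones sketch of that idea (the card names, prompts, and fallback-to-"done" behavior are invented for illustration; this is not Botpress code):

```python
# Each "card" has a prompt and a routing table from answers to the next card.
CARDS = {
    "start": {
        "prompt": "Do you need help with billing or shipping?",
        "next": {"billing": "billing", "shipping": "shipping"},
    },
    "billing": {
        "prompt": "Is this about an invoice or a refund?",
        "next": {"invoice": "done", "refund": "done"},
    },
    "shipping": {
        "prompt": "Enter your tracking number.",
        "next": {},  # any answer ends the interaction
    },
    "done": {"prompt": "Thanks, a human agent will follow up.", "next": {}},
}

def run_flow(answers):
    """Walk the card graph, consuming one scripted answer per card."""
    card, transcript = "start", []
    for answer in answers:
        transcript.append(CARDS[card]["prompt"])
        card = CARDS[card]["next"].get(answer, "done")  # unmatched answers end the flow
        if not CARDS[card]["next"]:
            transcript.append(CARDS[card]["prompt"])
            break
    return transcript

path = run_flow(["billing", "refund"])
```

Wiring an AI knowledge source in, as the review describes, would amount to replacing a card's static `prompt` with a call that retrieves an answer from the configured documents or web sources.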



Tuesday, August 15, 2023

Does media have a “right to be forgotten?”

https://www.bespacific.com/internet-archive-responds-to-recording-industry-lawsuit-targeting-obsolete-media/

Internet Archive Responds to Recording Industry Lawsuit Targeting Obsolete Media

Internet Archives Blogs: “Late Friday, some of the world’s largest record labels, including Sony and Universal Music Group, filed a lawsuit against the Internet Archive and others for the Great 78 Project, a community effort for the preservation, research and discovery of 78 rpm records that are 70 to 120 years old. As a non-profit library, we take this matter seriously and are currently reviewing the lawsuit with our legal counsel. Of note, the Great 78 Project has been in operation since 2006 to bring free public access to a largely forgotten but culturally important medium. Through the efforts of dedicated librarians, archivists and sound engineers, we have preserved hundreds of thousands of recordings that are stored on shellac resin, an obsolete and brittle medium. The resulting preserved recordings retain the scratch and pop sounds that are present in the analog artifacts; noise that modern remastering techniques remove…”

  • TechDirt – RIAA Piles On In The Effort To Kill The World’s Greatest Library: Sues Internet Archive For Making It Possible To Hear Old 78s. “On Friday, the Internet Archive put up a blog post noting that its digital book lending program was likely to change as it continues to fight the book publishers’ efforts to kill the Internet Archive. As you’ll recall, all the big book publishers teamed up to sue the Internet Archive over its Open Library project, which was created based on a detailed approach, backed by librarians and copyright lawyers, to recreate an online digital library that matches a physical library. Unfortunately, back in March, the judge decided (just days after oral arguments) that everything about the Open Library infringes on copyrights. There were many, many problems with this ruling, and the Archive is appealing. However, in the meantime, the judge in the district court needed to sort out the details of the injunction in terms of what activities the Archive would change during the appeal. The Internet Archive and the publishers negotiated over the terms of such an injunction and asked the court to weigh in on whether or not it also covers books for which there are no ebooks available at all. The Archive said it should only cover books where the publishers make an ebook available, while the publishers said it should cover all books, because of course they did. Given Judge Koeltl’s original ruling, I expected him to side with the publishers, and effectively shut down the Open Library. However, this morning he surprised me and sided with the Internet Archive, saying only books that are already available in electronic form need to be removed. That’s still a lot, but at least it means people can still access those other works electronically. The judge rightly noted that the injunction should be narrowly targeted towards the issues at play in the case, and thus it made sense to only block works available as ebooks…”

  • See also The New York Times – The Dream Was Universal Access to Knowledge. The Result Was a Fiasco. “In the pandemic emergency, Brewster Kahle’s Internet Archive freely lent out digital scans of its library. Publishers sued. Owning a book means something different now. Information wants to be free.”





Can AI use the library? Is fair use fair for AI?

https://www.bespacific.com/copyright-fair-use-regulatory-approaches-in-ai-content-generation/

Copyright Fair Use Regulatory Approaches in AI Content Generation

Tech Policy News – Ariel Soiffer is a partner and Aric Jain is an associate at the law firm WilmerHale. “Midjourney, a platform to generate images from natural language descriptions, is an example of the generative AI tools that are raising new questions around copyright and fair use. The impact of generative artificial intelligence (AI) has quickly caught the attention of technologists and policymakers around the world. Among others, policymakers in Washington are scrambling to apply intellectual property (IP) laws and concepts in response. Indeed, just this month, the Senate Subcommittee on Intellectual Property held its second hearing on AI and its implications for copyright law. Congressional attention to copyright and AI matches a growing public interest in understanding how AI – and generative AI, in particular – uniquely affects what it means to be an author and how ownership of expression of ideas is determined. Fundamentally, this is about the relationship of works generated from generative AI models (Output Works) to works used to train generative AI models (Input Works) and how U.S. copyright law applies to that relationship. This article begins with an overview of generative AI and copyright law with a focus on fair use doctrine. It then examines four schools of thought that have emerged to address the novelty of generative AI under copyright law. It posits some of the implications for each of these approaches for innovation and the growth of the generative AI industry.”





I can hear the teachers screaming already.

https://www.trendmicro.com/en_us/research/23/h/chatgpt-flaw.html

ChatGPT Highlights a Flaw in the Educational System

… The fundamental problem is that the grading system depends on homework. If education aims to teach an individual both a) a body of knowledge and b) the techniques of reasoning with that knowledge, then the metrics proving that achievement are misaligned.

One of the most quoted management scientists is Frederick W. Taylor. He is most known for saying, “If you can’t measure it, you can’t manage it.” Interestingly, he never said that – which is fortunate because it is entirely wrong. People always manage things without metrics – from driving a car to raising children. He said: “If you measure it, you’ll manage it” – and he intended that as a warning. Whenever you adopt a metric, you will adjust your assessment of the underlying process in terms of your chosen metric. His warning is to be very careful about which metrics you choose.

Sometime in the past forty years, we decided that the purpose of education is to do well on tests. Unfortunately, that is also wrong. The purpose of education is to teach people to gather evidence and to think clearly about it. Students should learn how to judge various forms of evidence. They should understand rhetorical techniques (in the classical sense – how to render ideas clearly). They should be aware of common errors in thinking – the cognitive pitfalls we all fall into when rushed or distracted, and the logical fallacies that rob our arguments of their validity.



Monday, August 14, 2023

Is it more important that an AI can pretend to be human or for that AI to give you the right answers? (And no, the proper response to questions like that is not “hire a lawyer…”)

https://www.makeuseof.com/is-turing-test-outdated-turing-test-alternatives/

Is the Turing Test Outdated? 5 Turing Test Alternatives

Over 70 years ago, when artificial intelligence was conceptualized, Alan Turing published a paper that described how to identify it. It was later known as the Turing test, and it has been used for decades to distinguish between a human and an AI.

However, with the introduction of advanced AI chatbots like ChatGPT and Google Bard, it's becoming more difficult to tell if you're talking to an AI. It begs the question: is the Turing test outdated? And if it is, what are the alternatives?



(Related) It’s not people but it has people-like rights?

https://www.bespacific.com/freedom-of-speech-and-ai-output/

Freedom of Speech and AI Output

Lemley, Mark A. and Henderson, Peter and Volokh, Eugene, Freedom of Speech and AI Output (August 3, 2023). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4531003

Is the output of generative AI entitled to First Amendment protection? We’re inclined to say yes. Even though current AI programs are of course not people and do not themselves have constitutional rights, their speech may potentially be protected because of the rights of the programs’ creators. But beyond that, and likely more significantly, AI programs’ speech should be protected because of the rights of their users—both the users’ rights to listen and their rights to speak. In this short Article, we sketch the outlines of this analysis.



Sunday, August 13, 2023

Who are we explaining it for?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4531323

Lost in Translation: The Limits of Explainability in AI

As artificial intelligence becomes more prevalent, regulators are increasingly turning to legal measures, like “a right to explanation,” to protect against potential risks raised by AI systems. However, are eXplainable AI (XAI) tools – the artificial intelligence tools that provide such explanations – up for the task?

This paper critically examines XAI’s potential to facilitate the right to explanation by applying the prism of explanation’s role in law to different stakeholders. Inspecting the underlying functions of reason-giving reveals different objectives for each of the stakeholders involved. From the perspective of a decision-subject, reason-giving facilitates due process and acknowledges human agency. From a decision-maker’s perspective, reason-giving contributes to improving the quality of the decisions themselves. From an ecosystem perspective, reason-giving may strengthen the authority of the decision-making system toward different stakeholders by promoting accountability and legitimacy, and by providing better guidance. Applying this analytical framework to XAI’s generated explanations reveals that XAI fails to fulfill the underlying objectives of the right to explanation from the perspective of both the decision-subject and the decision-maker. In contrast, XAI is found to be extremely well-suited to fulfil the underlying functions of reason-giving from an ecosystems’ perspective, namely, strengthening the authority of the decision-making system. However, lacking all other virtues, this isolated ability may be misused or abused, eventually harming XAI’s intended human audience. The disparity between human decision-making and automated decisions makes XAI an insufficient and even a risky tool, rather than serving as a guardian of human rights. After conducting a rigorous analysis of these ramifications, this paper concludes by urging regulators and the XAI community to reconsider the pursuit of explainability and the right to explanation of AI systems.





Are we ready?

https://dergipark.org.tr/en/pub/jai/issue/77844/1318812

The Metaverse: A Brave New "World"

As we stand on the precipice of the next significant socio-technological revolution, the Metaverse promises to transform our lives as profoundly as the internet did, if not more. The Metaverse is evolving as an immersive, collaborative, and interactive digital space, offering early glimpses of its vast potential. The scope of this digital universe extends far beyond just entertainment and gaming—it provides innovative ways to revolutionize education, business, healthcare, and finance, including burgeoning areas like cryptocurrencies. However, without establishing appropriate safeguards, the Metaverse also poses considerable challenges. The pervasive risks to privacy, security, and safety of individuals in an environment where redress mechanisms are yet undefined, are areas of concern that need urgent attention. This article defines the Metaverse, its evolution, potential benefits, and potentially harmful impact due to data privacy. Subsequently, it shares the results of a bibliographic study demonstrating that the Metaverse is becoming popular along with ethics and AI. Next, it presents the results from a global survey which suggests that the Metaverse implies cautiously optimistic tones. Moreover, the article introduces an AI-based new technology as an example between today's and tomorrow’s worlds. Based on the results, it concludes why it is important to establish educational programs and guidelines for applying the technologies in the Metaverse. Finally, it makes recommendations for new research and other actions for the entire Metaverse ecosystem.





Someone will do this first. Will that be a significant advantage?

https://www.preprints.org/manuscript/202308.0308/v1

Towards Granting of Legal Personality to Autonomous Robots in the UAE

The Dubai Digital Government launched its recent guidelines, which call for artificial intelligence systems to be subject to legal accountability. This study discusses the extent to which autonomous robots can be granted legal personality in UAE law and the consistency of this approach with the provisions of Islamic jurisprudence. This research paper answered two main questions: First, the extent to which these guidelines are considered the beginning of work on granting legal personality to AI systems in the UAE. Second, what form of legal personality can be given to autonomous robots in UAE law to be consistent with the provisions of Islamic jurisprudence as a primary source of legislation in the country? The research concluded the impossibility of considering autonomous robots as a "thing" and classifying them within the concept of "persons." It also concluded that it is possible to give them legal personality according to two legislative solutions: granting them partial or incomplete performance eligibility like minors.





Perspective.

https://ijrah.com/index.php/ijrah/article/view/284

Artificial Intelligence and Mary Shelley's Frankenstein: A Comparative Analysis of Creation, Morality and Responsibility

In the ever-evolving landscape of technology, Artificial Intelligence (AI) has emerged as a revolutionary force that continues to shape various aspects of our lives. From transforming industries to redefining how we interact with machines, AI's pervasive influence has captured the collective imagination of modern society. However, as we marvel at the wonders of AI's capabilities, it becomes crucial to pause and reflect on the ethical and moral implications of creating intelligent machines. Mary Shelley's magnum opus, "Frankenstein," published nearly two centuries ago, remains an enduring cautionary tale about the perils of unchecked ambition and the consequences of playing god. The narrative of Victor Frankenstein's relentless pursuit of creating life, only to be haunted by the unforeseen horrors of his creation, has resonated across generations. This tale of hubris, moral dilemmas, and the intricate relationships between creator and creation continues to transcend time, finding a striking resonance in contemporary discussions on AI and its potential implications. The research article endeavors to delve into the parallels between AI and "Frankenstein," unraveling the profound ethical dilemmas faced by AI developers, policymakers, and society at large. By drawing upon the cautionary lessons embedded within Shelley's classic tale, we aim to extract timeless wisdom that can guide us in the responsible and humane development of AI technologies. While AI holds the potential to revolutionize our lives positively, the dark echoes of Victor Frankenstein's missteps serve as a stark reminder of the need for ethical frameworks and interdisciplinary collaboration to ensure that AI remains a powerful force for good.





One of these days…

https://www.makeuseof.com/write-ebook-in-30-days-guide/

How to Write an Ebook in 30 Days: A Step-by-Step Guide

While writing an ebook in 30 days isn't practical for everyone, it's definitely possible with the right tools and motivation. This article will introduce a plan for how to write an ebook in 30 days, with the preparation, editing, and formatting accounted for separately to better your chances of success.