Saturday, June 22, 2024

The pendulum swings to overreaction? How much AI is too much?

https://techcrunch.com/2024/06/21/meta-tagging-real-photos-made-with-ai/

Meta is tagging real photos as ‘Made with AI,’ say photographers

In February, Meta said that it would start labeling photos created with AI tools on its social networks. Since May, Meta has regularly tagged some photos with a “Made with AI” label on its Facebook, Instagram and Threads apps.

But the company’s approach of labeling photos has drawn ire from users and photographers after attaching the “Made with AI” label to photos that have not been created using AI tools.





The first of many.

https://news.usni.org/2024/06/21/gao-report-on-generative-ai-and-commercial-applications

GAO Report on Generative AI and Commercial Applications

For this technology assessment, we were asked to describe generative AI and key aspects of its development. This report is the first in a body of work looking at generative AI. In future reports, we plan to assess best practices and other factors considered for developing and deploying generative AI tools, societal and environmental effects of the use of generative AI, and federal development and adoption of generative AI technologies. To perform this assessment, we conducted literature reviews and interviewed several leading companies developing generative AI technologies. This report provides an overview of how generative AI works, how it differs from other kinds of AI, and examples of its use across various industries.

Download the document here.





Ah! Clearly the end is in sight.

https://abovethelaw.com/2024/06/law-schools-are-preparing-for-ais-takeover-of-the-legal-profession/

Law Schools Are Preparing For AI’s Takeover Of The Legal Profession

I estimate that within five years, it will no longer be possible to be a successful lawyer without using AI.

— Professor Gary Marchant of Arizona State University Sandra Day O’Connor College of Law, in comments given to Reuters on the rise of the use of artificial intelligence within the legal profession.



Friday, June 21, 2024

Better than GPT-4o?

https://venturebeat.com/ai/anthropics-claude-3-5-sonnet-wows-ai-power-users-this-is-wild/

Anthropic’s Claude 3.5 Sonnet wows AI power users: ‘this is wild’

A new large language model (LLM) has apparently taken the performance crown from OpenAI’s GPT-4o about a month after its release: the new Claude 3.5 Sonnet chatbot and LLM from rival AI firm Anthropic, released today, bests all others in the world on key third-party benchmark tests, according to the company. And it does so while being faster and cheaper than prior Claude 3 models.



(Related) Maybe...

https://techcrunch.com/2024/06/20/anthropic-claims-its-latest-model-is-best-in-class/

Anthropic claims its latest model is best-in-class

OpenAI rival Anthropic is releasing a powerful new generative AI model called Claude 3.5 Sonnet. But it’s more an incremental step than a monumental leap forward.

Claude 3.5 Sonnet can analyze both text and images as well as generate text, and it’s Anthropic’s best-performing model yet — at least on paper. Across several AI benchmarks for reading, coding, math and vision, Claude 3.5 Sonnet outperforms the model it’s replacing, Claude 3 Sonnet, and beats Anthropic’s previous flagship model Claude 3 Opus.





A ‘good journalism’ story.

https://krebsonsecurity.com/2024/06/krebsonsecurity-threatened-with-defamation-lawsuit-over-fake-radaris-ceo/

KrebsOnSecurity Threatened with Defamation Lawsuit Over Fake Radaris CEO

On March 8, 2024, KrebsOnSecurity published a deep dive on the consumer data broker Radaris, showing how the original owners are two men in Massachusetts who operated multiple Russian language dating services and affiliate programs, in addition to a dizzying array of people-search websites. The subjects of that piece are threatening to sue KrebsOnSecurity for defamation unless the story is retracted. Meanwhile, their attorney has admitted that the person Radaris named as the CEO from its inception is a fabricated identity.





Tools & Techniques.

https://www.zdnet.com/article/can-ai-detectors-save-us-from-chatgpt-i-tried-5-online-tools-to-find-out/

Can AI detectors save us from ChatGPT? I tried 6 online tools to find out

With the sudden arrival of ChatGPT, educators and editors face a worrying surge of automated content submissions. We look at the problem and what can be done about it.



Thursday, June 20, 2024

Hallucinate with confidence?

https://www.nature.com/articles/d41586-024-01641-0

‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations

The number of errors produced by an LLM can be reduced by grouping its outputs into semantically similar clusters. Remarkably, this task can be performed by a second LLM, and the method’s efficacy can be evaluated by a third.
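The method described above can be sketched in miniature. This is a hedged illustration, not the paper’s implementation: the real approach uses a second LLM to judge whether two sampled answers entail each other, and a third to evaluate the result; here a trivial string-normalisation check stands in for that entailment judge, purely to show how clustering sampled answers and measuring their entropy flags likely confabulation.

```python
import math

def semantically_equivalent(a: str, b: str) -> bool:
    # Stand-in for the bidirectional-entailment check that the method
    # delegates to a second LLM. Here we just normalise and compare.
    norm = lambda s: s.lower().strip().rstrip(".")
    return norm(a) == norm(b)

def cluster_answers(answers):
    """Greedily group sampled answers into semantic clusters."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if semantically_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def semantic_entropy(answers):
    """Entropy over cluster frequencies: high entropy means the model's
    samples scatter across many meanings, a sign of confabulation."""
    clusters = cluster_answers(answers)
    n = len(answers)
    return -sum((len(c) / n) * math.log2(len(c) / n) for c in clusters)

# Five sampled answers to the same question:
consistent = ["Paris.", "paris", "Paris", "Paris.", "paris."]
scattered = ["Paris.", "Lyon", "Marseille", "Nice", "Toulouse"]
print(semantic_entropy(consistent))  # 0.0 — one cluster, confident
print(semantic_entropy(scattered))   # ~2.32 — five clusters, suspect
```

A low score (one dominant cluster) suggests the model consistently means the same thing across samples; a high score suggests it is guessing.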





Perspective.

https://www.lawnext.com/2024/06/is-gen-ai-creating-a-divide-among-law-firms-of-haves-and-have-nots.html

Is Gen AI Creating A Divide Among Law Firms Of Haves and Have Nots?

On Friday, I spoke to a group of trial lawyers on the use of generative AI in litigation. Many in the room were that increasingly rare breed of lawyer who actually goes into court and tries cases. Several I spoke to before and after my talk were proud of their courtroom skills and happy to share a war story or two. But when it came to talking about generative AI, most seemed to have barely given it a thought.

Recently, 11th U.S. Circuit Court of Appeals Judge Kevin Newsom made news for his 32-page concurring opinion pondering the use of generative AI by courts in interpreting words and phrases. It’s a good read and worth your time.

But what struck me in his opinion as particularly sage advice — advice directly applicable to lawyers in smaller firms — were his concluding words.

“AI is here to stay,” he wrote. “Now, it seems to me, is the time to figure out how to use it profitably and responsibly.”





Perspective.

https://www.theguardian.com/books/article/2024/jun/20/the-atomic-human-by-neil-lawrence-review-return-of-the-terminator

The Atomic Human by Neil Lawrence review – return of the Terminator

There is, it seems, an unwritten law in the world of artificial intelligence, which I will attempt to distil here: “Any discussion of AI must include an early and robust reference to the Terminator”. Though the 1984 James Cameron film and its 1991 sequel are quite good, here are two equally made-up but probably mostly true facts: no one under the age of 30 has seen either film and, in any case, neither film has anything particularly insightful to say about AI. But here we are, and the relentless analyses of the moment we are in – where we apparently stand on precipices of revolutions, ushering in utopia or the apocalypse – tend to be written by men who have seen Arnold Schwarzenegger’s Terminator failing to assassinate Sarah Connor many times over. If you can also allude to biblical creation, then you’re winning at AI bingo.



Wednesday, June 19, 2024

Should be an interesting if lengthy process.

https://www.oreilly.com/radar/how-to-fix-ais-original-sin/

How to Fix “AI’s Original Sin”

Last month, The New York Times claimed that tech giants OpenAI and Google have waded into a copyright gray area by transcribing the vast volume of YouTube videos and using that text as additional training data for their AI models despite terms of service that prohibit such efforts and copyright law that the Times argues places them in dispute. The Times also quoted Meta officials as saying that their models will not be able to keep up unless they follow OpenAI and Google’s lead. In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why? And rather than asking who has the market power to win the tug of war, we should be asking, What institutions and business models are needed to allocate the value that is created by the “generative AI supply chain” in proportion to the role that various parties play in creating it? And how do we create a virtuous circle of ongoing value creation, an ecosystem in which everyone benefits?





Resources.

https://www.pcmag.com/articles/the-best-free-online-classes-to-level-up-your-ai-skills

The Best Free Online Classes to Level Up Your AI Skills and Understanding

Artificial intelligence is progressing at a breakneck pace, and if you want to keep up, we highly recommend checking out these top-notch courses from leaders at Google, IBM, and Microsoft.



Tuesday, June 18, 2024

Most of this will not rise to the level where journalists would find it newsworthy.

https://apnews.com/article/artificial-intelligence-local-races-deepfakes-2024-1d5080a5c916d5ff10eadd1d81f43dfd

AI experimentation is high risk, high reward for low-profile political campaigns

Adrian Perkins was running for reelection as the mayor of Shreveport, Louisiana, when he was surprised by a harsh campaign hit piece.

The satirical TV commercial, paid for by a rival political action committee, used artificial intelligence to depict Perkins as a high school student who had been called into the principal’s office. Instead of giving a tongue-lashing for cheating on a test or getting in a fight, the principal blasted Perkins for failing to keep communities safe and create jobs.

The video superimposed Perkins’ face onto the body of an actor playing him. Although the ad was labeled as being created with “deep learning computer technology,” Perkins said it was powerful and resonated with voters. He didn’t have enough money or campaign staff to counteract it, and thinks it was one of many reasons he lost the 2022 race. A representative for the group behind the ad did not respond to a request for comment.

“One hundred percent the deepfake ad affected our campaign because we were a down-ballot, less resourced place,” said Perkins, a Democrat. “You had to pick and choose where you put your efforts.”



(Related) Do we need an Article 50?

https://www.bespacific.com/a-detailed-analysis-of-article-50-of-the-eus-artificial-intelligence-act/

A Detailed Analysis of Article 50 of the EU’s Artificial Intelligence Act

Gils, Thomas, A Detailed Analysis of Article 50 of the EU’s Artificial Intelligence Act (June 14, 2024). Chapter to appear in an upcoming commentary on the EU AI Act (Q3-4 2024)., https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4865427 – “Article 50 of the EU’s AI Act contains transparency requirements for (i) interactive AI systems; (ii) synthetic content (including synthetic audio, image, video or text content); (iii) emotion recognition systems and biometric categorisation systems; (iv) deep fakes; and (v) synthetic text informing the public on matters of public interest. This commentary offers a detailed analysis of this provision, taking into account the position of article 50 within the AI Act and the broader AI policy context.”





Perspective.

https://www.schneier.com/blog/archives/2024/06/rethinking-democracy-for-the-age-of-ai.html

Rethinking Democracy for the Age of AI

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.





Interesting, but I suspect a very small audience.

https://www.dtnext.in/edit/bibliophiles-corner-now-read-the-classics-with-ai-powered-expert-guides-790255

Bibliophile’s corner: Now read the classics with AI-powered expert guides

For the past year, two philosophy professors have been calling around to prominent authors and public intellectuals with an unusual, perhaps heretical, proposal. They have been asking these thinkers if, for a handsome fee, they wouldn’t mind turning themselves into A.I. chatbots.

As Dubuque envisioned it, the imprint would pair a world-class expert with a classic work and use technology similar to ChatGPT to replicate the dialogue between a student and teacher.



Monday, June 17, 2024

Is there a commercial technology we did not test on children?

https://www.theatlantic.com/technology/archive/2024/06/kids-generative-ai/678694/?gift=2iIN4YrefPjuvZ5d2Kh300zjvd8M-WeJys073Nn3zn0&utm_source=copy-link&utm_medium=social&utm_campaign=share

A Generation of AI Guinea Pigs

This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.





Keep humans in the loop, but put AI in charge? Sounds wrong to me.

https://www.scmp.com/news/china/science/article/3266444/chinese-scientists-create-and-cage-worlds-first-ai-commander-pla-laboratory

Chinese scientists create and cage world’s first AI commander in a PLA laboratory

In China, where it is forbidden for artificial intelligence to lead the armed forces, scientists have created an AI commander.

This “virtual commander”, strictly confined to a laboratory at the Joint Operations College of the National Defence University in Shijiazhuang, Hebei province, mirrors the human commander in all ways, from experience to thought patterns to personality – and even their flaws.

In large-scale computer war games involving all branches of the People’s Liberation Army (PLA), the AI commander has been granted unprecedented supreme command authority, learning and growing fast in the endlessly evolving virtual wars.



Sunday, June 16, 2024

Perspective. Did Elvis or the Beatles ever have this much of an impact?

https://www.livemint.com/market/stock-market-news/taylor-swifts-london-eras-tour-poses-potential-delay-for-bank-of-england-rate-cut-cnbc-11718420430358.html

Taylor Swift's London Eras Tour poses potential delay for Bank of England rate cut: CNBC

Taylor Swift's Eras Tour in the U.K. is boosting consumer spending, potentially delaying a Bank of England interest rate cut. Analysts predict a cut in August, but Swift's impact on inflation data could affect the timeline.