Saturday, December 02, 2023

Interesting and definitely worth reading but I’m still not sure I understand what went on.

https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai?currentPage=all

The Inside Story of Microsoft’s Partnership with OpenAI

The companies had honed a protocol for releasing artificial intelligence ambitiously but safely. Then OpenAI’s board exploded all their carefully laid plans.





Imagine how screwed up this must have been before they deemed it ready for public release?

https://www.platformer.news/p/amazons-q-has-severe-hallucinations

Amazon’s Q has ‘severe hallucinations’ and leaks confidential data in public preview, employees warn

Some hallucinations could ‘potentially induce cardiac incidents in Legal,’ according to internal documents

Three days after Amazon announced its AI chatbot Q, some employees are sounding alarms about accuracy and privacy issues. Q is “experiencing severe hallucinations and leaking confidential data,” including the location of AWS data centers, internal discount programs, and unreleased features, according to leaked documents obtained by Platformer.





How could I miss this?

https://teachprivacy.com/webinar-breaking-into-privacy-law-strategies-for-entry-level-lawyers-blog/

Webinar – Breaking Into Privacy Law: Strategies for Entry-Level Lawyers

In case you weren’t able to make it to my recent webinar with Jared Coseglia (TRU Staffing Partners), you can watch the replay here. We had a great discussion about strategies for entering the privacy field.





Perspective.

https://spectrum.ieee.org/personal-ai-assistant

When AI Unplugs, All Bets Are Off

Run natively on edge devices, personalized AI assistants will get wild, and weird, soon

The next great chatbot will run at lightning speed on your laptop PC—no Internet connection required.

That was at least the vision recently laid out by Intel’s CEO, Pat Gelsinger, at the company’s 2023 Intel Innovation summit. Flanked by on-stage demos, Gelsinger announced the coming of “AI PCs” built to accelerate an increasing range of AI tasks based only on the hardware beneath the user’s fingertips.

Intel’s not alone. Every big name in consumer tech, from Apple to Qualcomm, is racing to optimize its hardware and software to run artificial intelligence at the “edge”—meaning on local hardware, not remote cloud servers. The goal? Personalized, private AI so seamless you might forget it’s “AI” at all.
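
Local inference is already easy to prototype. Here is a minimal sketch, assuming the Hugging Face transformers library and a small open model (the model name is just an illustrative choice); after the first download, generation runs entirely from the local weight cache, with no cloud call.

```python
# Minimal local-inference sketch: a small open model running entirely
# on the user's own hardware, no cloud API involved.
# Assumes: pip install transformers torch
from transformers import pipeline

# Downloads weights once, then runs offline from the local cache.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Edge AI matters because"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])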



Friday, December 01, 2023

How do you discipline an unethical AI?

https://www.bespacific.com/the-right-to-human-counsel-real-responsibility-for-artificial-intelligence/

The Right to (Human) Counsel: Real Responsibility for Artificial Intelligence

Swisher, Keith, The Right to (Human) Counsel: Real Responsibility for Artificial Intelligence (February 11, 2023). 74 S.C. L. Rev. 823 (2023). Available at SSRN: https://ssrn.com/abstract=4583580

The bench and bar have created and enforced a comprehensive system of ethical rules and regulation. In many respects, it is a unique and laudable system for regulating and guiding lawyers, and it has taken incremental measures to account for the wave of new technology involved in the practice of law. But it is not ready for the future. It rests on an assumption that humans will practice law. Although humans might tinker at the margins, review work product, or serve some other useful purposes, they likely will not be the ones doing most of the legal work in the future. Instead, AI counsel will be serving the public. For the system of ethical regulation to serve its core functions in the future, it needs to incorporate and regulate AI counsel. This will necessitate, among other things, bringing on new disciplines in the drafting of ethical guidelines and in the disciplinary process, along with a careful review and update of the ethical rules as applied to AI practicing law.





How about that!

https://www.bloomberg.com/news/articles/2023-11-29/ai-use-by-us-firms-is-highest-in-denver-topping-san-francisco

Denver Ranks No. 1 in Business Use of AI, Topping San Francisco

When it comes to using artificial intelligence at work, the Denver metropolitan area has the largest share of firms that already have adopted the technology, according to a survey of businesses from the US Census Bureau.





Perspective.

https://www.fastcompany.com/90989785/what-year-2-of-the-generative-ai-craze-will-look-like-according-to-41-experts

What year 2 of the generative AI craze will look like, according to 41 experts

… To mark ChatGPT’s anniversary, we asked 41 AI experts, business leaders, and other stakeholders a simple question: How will generative AI tools like ChatGPT and Midjourney be applied over the next year to best help businesses function more efficiently or assist individual consumers? Here’s what they said. Their quotes have been edited for length and clarity.





Perspective.

https://www.imf.org/en/Publications/fandd/issues/2023/12/Macroeconomics-of-artificial-intelligence-Brynjolfsson-Unger

THE MACROECONOMICS OF ARTIFICIAL INTELLIGENCE

AI may affect society in a number of areas besides the economy—including national security, politics, and culture. But in this article, we focus on the implications of AI for three broad areas of macroeconomic interest: productivity growth, the labor market, and industrial concentration. AI does not have a predetermined future. It can develop in very different directions. The particular future that emerges will be a consequence of many things, including technological and policy decisions made today. For each area, we present a fork in the road: two paths that lead to very different futures for AI and the economy. In each case, the bad future is the path of least resistance. Getting to the better future will require good policy—including

  • Creative policy experiments

  • A set of positive goals for what society wants from AI, not just negative outcomes to be avoided

  • Understanding that the technological possibilities of AI are deeply uncertain and rapidly evolving and that society must be flexible in evolving with them



Thursday, November 30, 2023

Training data is all an AI knows. What if it’s the wrong data?

https://www.nytimes.com/2023/11/30/business/ai-data-standards.html

Big Companies Find a Way to Identify A.I. Data They Can Trust

Data is the fuel of artificial intelligence. It is also a bottleneck for big businesses, because they are reluctant to fully embrace the technology without knowing more about the data used to build A.I. programs.

Now, a consortium of companies has developed standards for describing the origin, history and legal rights to data. The standards are essentially a labeling system for where, when and how data was collected and generated, as well as its intended use and restrictions.

The data provenance standards, announced on Thursday, have been developed by the Data & Trust Alliance, a nonprofit group made up of two dozen mainly large companies and organizations, including American Express, Humana, IBM, Pfizer, UPS and Walmart, as well as a few start-ups.

“This is a step toward managing data as an asset, which is what everyone in industry is trying to do today,” said Ken Finnerty, president for information technology and data analytics at UPS. “To do that, you have to know where the data was created, under what circumstances, its intended purpose and where it’s legal to use or not.”

Surveys point to the need for greater confidence in data and for improved efficiency in data handling. In one poll of corporate chief executives, a majority cited “concerns about data lineage or provenance” as a key barrier to A.I. adoption. And a survey of data scientists found that they spent nearly 40 percent of their time on data preparation tasks.
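
The article doesn’t reproduce the standards themselves, but a provenance label of the kind it describes is easy to picture. A hypothetical sketch in Python; the field names are mine, not the Data & Trust Alliance’s actual schema.

```python
# Hypothetical data-provenance label, modeled on the article's description:
# where, when, and how data was collected, plus intended use and restrictions.
# Field names are illustrative, not the Data & Trust Alliance schema.
from dataclasses import dataclass, field

@dataclass
class ProvenanceLabel:
    source: str                 # where the data was created
    collected_at: str           # when (ISO 8601 date)
    method: str                 # how it was collected or generated
    intended_use: str           # the purpose it was gathered for
    restrictions: list[str] = field(default_factory=list)  # legal/usage limits

label = ProvenanceLabel(
    source="customer-support transcripts, US region",
    collected_at="2023-06-01",
    method="opt-in chat logging",
    intended_use="fine-tuning an internal support model",
    restrictions=["no resale", "PII must be redacted before training"],
)
print(label)
```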



(Related)

https://sloanreview.mit.edu/article/the-working-limitations-of-large-language-models/

The Working Limitations of Large Language Models

But while LLMs are incredibly powerful, their ability to generate humanlike text can invite us to falsely credit them with other human capabilities, leading to misapplications of the technology. With a deeper understanding of how LLMs work and their fundamental limitations, managers can make more informed decisions about how LLMs are used in their organizations, addressing their shortcomings with a mix of complementary technologies and human governance.





One clear and present danger of AI.

https://www.csoonline.com/article/1249838/almost-all-developers-are-using-ai-despite-security-concerns-survey-suggests.html

Almost all developers are using AI despite security concerns, survey suggests

While more than half of developers acknowledge that generative AI tools commonly create insecure code, 96% of development teams are using the tools anyway, with more than half using the tools all the time, according to a report released Tuesday by Snyk, maker of a developer-first security platform.

The report, based on a survey of 537 software engineering and security team members and leaders, also revealed that 79.9% of the survey’s respondents said developers bypass security policies to use AI.
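
For a concrete sense of what insecure AI-suggested code looks like: a classic pattern code assistants still commonly produce is SQL built by string interpolation. A minimal illustration of the vulnerable pattern and its fix (my example, not one from the Snyk report):

```python
# Illustrative example of a common insecure suggestion and its fix;
# not taken from the Snyk report.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Insecure: user input interpolated straight into SQL (injection risk).
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Secure: parameterized query; the driver escapes the input.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection string matches no real name
```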





Perspective.

https://theconversation.com/a-year-of-chatgpt-5-ways-the-ai-marvel-has-changed-the-world-218805

A year of ChatGPT: 5 ways the AI marvel has changed the world

… We’ve never seen a technology roll out so quickly before. It took about a decade or so before most people started using the web. But this time the plumbing was already in place.

As a result, ChatGPT’s impact has gone way beyond writing poems about Carol’s retirement in the style of Shakespeare. It has given many people a taste of our AI-powered future. Here are five ways this technology has changed the world.



Wednesday, November 29, 2023

Individuals with little AI knowledge certifying that their AI use was appropriate and valid?

https://www.bespacific.com/us-5th-circuit-court-seeks-regulation-on-lawyers-ai-use-in-legal-filings/

US 5th Circuit Court seeks regulation on lawyers’ AI use in legal filings

Coin Telegraph: “The suggested regulation would apply to attorneys and litigants without legal representation appearing before the court, obliging them to confirm that filings produced with the help of AI were assessed for precision. A federal appeals court in New Orleans is considering a proposal that would mandate lawyers to confirm whether they utilized artificial intelligence (AI) programs to draft briefs, affirming either independent human review of AI-generated text accuracy or no AI reliance in their court submissions. In a notice issued on Nov. 21, the Fifth U.S. Circuit Court of Appeals revealed what seems to be the inaugural proposed rule among the nation’s 13 federal appeals courts, focusing on governing the utilization of generative AI tools, including OpenAI’s ChatGPT, by lawyers presenting before the court…”



(Related) Are these tools less likely to get lawyers in trouble?

https://www.llrx.com/2023/11/all-things-ai-law-librarian-ish-generative-ai-and-legal-research-education-technology/

All Things AI Law Librarian-ish, Generative AI, and Legal Research/Education/Technology

Historically, acquiring case law data has been a significant challenge, acting as a barrier to newcomers in the legal research market. Established players are often protective of their data. For instance, in an antitrust counterclaim, ROSS Intelligence accused Thomson Reuters of withholding their public law collection, claiming they had to instead resort to purchasing cases piecemeal from sources like Casemaker and Fastcase. Other companies have taken more extreme measures. For example, Ravel Law partnered with the Harvard Law Library to scan every single opinion in their print reporter collections. There’s also speculation that major vendors might even license some of their materials directly to platforms like Google Scholar, albeit with stringent conditions.

Despite the historic challenges, several new products have recently emerged offering advanced legal research capabilities:





Perspective.

https://www.oreilly.com/radar/generative-ai-in-the-enterprise/

Generative AI in the Enterprise

What’s the reality? We wanted to find out what people are actually doing, so in September we surveyed O’Reilly’s users. Our survey focused on how companies use generative AI, what bottlenecks they see in adoption, and what skills gaps need to be addressed.

AI adoption is in the process of becoming widespread, but it’s still not universal. Two-thirds of our survey’s respondents (67%) report that their companies are using generative AI. 41% say their companies have been using AI for a year or more; 26% say their companies have been using AI for less than a year. And only 33% report that their companies aren’t using AI at all.



Tuesday, November 28, 2023

“How to Lawyer” seems to be changing…

https://news.bloomberglaw.com/us-law-week/ais-rise-may-motivate-law-firms-to-quit-their-traditional-ways

AI’s Rise May Motivate Law Firms to Quit Their Traditional Ways

The traditional law firm structure—with many lower-level lawyers performing mostly analytical tasks on behalf of a few partners—is poised to become obsolete thanks to artificial intelligence.

The firms that survive and thrive will embrace AI to elevate their value and rethink their approach to human capital, changing their practices and culture to emphasize innovation and insight.

A 2023 Goldman Sachs report estimated that 44% of tasks within legal could be automated by AI. While any such projection is speculative, it doesn’t feel far off.



(Related)

https://abovethelaw.com/2023/11/the-ideal-partnership-combining-ai-and-lawyer-expertise/

The Ideal Partnership: Combining AI And Lawyer Expertise

Legal departments can now harness the same power of AI. Lawyers fill a central role that evolves as a department’s use of AI matures. Because AI is far from infallible, how you guide AI-driven workflows, monitor AI’s performance, evaluate its output, and make all final decisions in legal matters is critical.

However, blindly accepting AI outputs and recommendations without human analysis can lead to inaccurate legal advice and misinformed decisions. Stay mindful that a risk of errors or biases is always possible in the algorithms that power AI systems, making it vital to validate and verify their outputs.





Perspective.

https://www.niemanlab.org/2023/11/the-legal-framework-for-ai-is-being-built-in-real-time-and-a-ruling-in-the-sarah-silverman-case-should-give-publishers-pause/

The legal framework for AI is being built in real time, and a ruling in the Sarah Silverman case should give publishers pause

To be clear: The legal framework for generative AI — large language models, or LLMs — is still very much TBD. But things aren’t looking great for the news companies dreaming of billions in new revenue from AI companies that have trained LLMs (in very small part) on their products. While elements of those models’ training will be further litigated, courts have thus far not looked favorably on the idea that what they produce is a copyright infringement.

Silverman’s complaint is important, because in one significant way, it’s stronger than what news companies might be able to argue. The overwhelming share of news content is made free for anyone online to read — on purpose, by its publishers. Anyone with a web browser can call up a story — a process that necessarily involves a copy of the copyrighted material being downloaded to their device. That publishers choose to make their content available to web users makes it harder to argue that an OpenAI or Meta webcrawler had done special harm.

But Silverman’s copyrighted content in question is a book — specifically, her 2010 memoir The Bedwetter. This is not, importantly, a piece of content that’s been made freely available by its publisher to web users. To access The Bedwetter legally in digital form, HarperCollins asks you to pay $13.99.

And we know that Meta did not get its copy of The Bedwetter by spending $13.99. It’s acknowledged that its LLM was trained using something called Books3 — part of something else called The Pile. Books3 is a 37-gigabyte text file that contains the complete contents of 197,000 books, sourced from a pirated shadow library called Bibliotik. The Pile mixes those books with another 800 gigs or so of content, including papers from PubMed, code from GitHub, Wikipedia articles, and those Enron emails. Large language models need a large amount of language to work, so The Pile became a popular early input in LLM training.
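
The scale is easy to sanity-check with back-of-the-envelope arithmetic (the figures come from the article; the 4-characters-per-token ratio is a rough rule of thumb, not a measured value):

```python
# Back-of-the-envelope scale check for Books3, using the article's figures.
books3_bytes = 37 * 10**9      # ~37 GB of plain text
num_books = 197_000

avg_bytes = books3_bytes / num_books
print(f"~{avg_bytes / 1000:.0f} KB of text per book")        # ~188 KB
print(f"~{books3_bytes / 4 / 10**9:.1f}B tokens in Books3")  # roughly 9 billion tokens
```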



Monday, November 27, 2023

Well, it’s a start.

https://www.bespacific.com/dhs-cisa-and-uk-ncsc-release-joint-guidelines-for-secure-ai-system-development/

DHS CISA and UK NCSC Release Joint Guidelines for Secure AI System Development

Taking a significant step forward in addressing the intersection of artificial intelligence (AI) and cybersecurity, the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) today jointly released Guidelines for Secure AI System Development to help developers of any systems that use AI make informed cybersecurity decisions at every stage of the development process. The guidelines were formulated in cooperation with 21 other agencies and ministries from across the world – including all members of the Group of 7 major industrial economies – and are the first of their kind to be agreed to globally. “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Secretary of Homeland Security Alejandro N. Mayorkas. “The guidelines jointly issued today by CISA, NCSC, and our other international partners provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development. Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.” The guidelines provide essential recommendations for AI system development and emphasize the importance of adhering to Secure by Design principles that CISA has long championed.





This isn’t new, is it? I seem to remember AT&T doing the same thing years ago.

https://www.schneier.com/blog/archives/2023/11/secret-white-house-warrantless-surveillance-program.html

Secret White House Warrantless Surveillance Program

There seems to be no end to warrantless surveillance:

According to the letter, a surveillance program now known as Data Analytical Services (DAS) has for more than a decade allowed federal, state, and local law enforcement agencies to mine the details of Americans’ calls, analyzing the phone records of countless people who are not suspected of any crime, including victims. Using a technique known as chain analysis, the program targets not only those in direct phone contact with a criminal suspect but anyone with whom those individuals have been in contact as well.
The DAS program, formerly known as Hemisphere, is run in coordination with the telecom giant AT&T, which captures and conducts analysis of US call records for law enforcement agencies, from local police and sheriffs’ departments to US customs offices and postal inspectors across the country, according to a White House memo reviewed by WIRED. Records show that the White House has, for the past decade, provided more than $6 million to the program, which allows the targeting of the records of any calls that use AT&T’s infrastructure—a maze of routers and switches that crisscross the United States.
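
“Chain analysis” is essentially a multi-hop traversal of a call graph. A toy sketch of two-hop contact chaining over made-up call records; purely illustrative, since nothing public describes how DAS is actually implemented:

```python
# Two-hop "chain analysis" over toy call records: starting from a suspect,
# collect direct contacts (hop 1) and their contacts (hop 2).
# Purely illustrative; not based on the actual DAS/Hemisphere system.
from collections import defaultdict

calls = [("suspect", "alice"), ("alice", "bob"),
         ("bob", "carol"), ("dave", "erin")]

# Build an undirected contact graph from the call records.
graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

def chain(start, hops):
    """Return everyone reachable from `start` within `hops` calls."""
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph[node]} - seen
        seen |= frontier
    return seen - {start}

print(chain("suspect", 2))  # {'alice', 'bob'} -- carol is three hops away
```

The point the excerpt makes falls out of the traversal: the set of people swept in grows with every hop, pulling in contacts who are not suspected of anything.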



Sunday, November 26, 2023

Is there a difference? Can’t the same laws apply?

https://revista.amagis.com.br/index.php/amagis-juridica/article/view/298

How to Punish a Robot

What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence systems increasingly integrate into our society, they will do bad things. In this Essay, we explore some of the challenges emerging robotics technologies will pose for remedies law. We argue robots will require us to rethink many of our current doctrines and that the emerging technology also offers important insights into the law of remedies we already apply to people and corporations.





Ready or not, here it comes...

https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1800138&dswid=5685

Use of AI in Legal Work

Law and the legal workplace have traditionally been resistant to change. At times, however, there are societal or technological changes so great that the jurist is forced to make a change in turn. Artificial intelligence has been hailed as the leader of a new industrial revolution, promising to streamline and optimize seemingly all parts of life. Not only that, but on the surface it seems to share a remarkable number of qualities with the idealized legal practitioner: objective, impartial, consistent, and more. Yet under the surface, like the legal practitioner in practice, artificial intelligence often hides a host of biases that could prove difficult, or even impossible, to untangle. Consequently, its use—particularly its misuse—could risk threatening the very rule of law. The EU has responded to this threat by proposing a so-called “AI Act”, specifically tailored to the challenges that the use of artificial intelligence presents. In tandem, already existing EU law, such as the prohibition of automated decision-making in the GDPR, has gained new relevance. But even with this targeted regulation in mind, there is still the question of whether the potential benefits of the use of AI in legal work truly outweigh the risks.