Saturday, May 27, 2023

I have a hard time fearing AI.

https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity

... Why are we all so concerned? In short: AI development is going way too fast.

The key issue is the profoundly rapid improvement in conversational ability among the new crop of advanced “chatbots,” or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.
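
The AlphaZero reference is the one concrete mechanism in the piece, so a toy sketch may help. What follows is closer to simple hill-climbing than to AlphaZero's actual MCTS-plus-neural-network training, and every name in it is invented, but it shows the core loop: the system generates its own training games and improves with no human examples.

    # Toy sketch of a self-play improvement loop, loosely in the spirit
    # of AlphaZero. The "game" and the update rule are placeholders,
    # not DeepMind's method: a perturbed copy of the policy plays its
    # current self and is kept only if it wins.
    import random

    def play_game(policy_a, policy_b):
        """Toy game: the stronger (higher-valued) policy usually wins."""
        return 1 if policy_a + random.gauss(0, 0.1) > policy_b else -1

    def self_play_training(rounds=100_000):
        policy = 0.0
        for _ in range(rounds):
            challenger = policy + random.gauss(0, 0.05)  # perturbed copy of itself
            if play_game(challenger, policy) > 0:
                policy = challenger  # the winner becomes the new self
        return policy

    print(self_play_training())  # improves steadily, with no human in the loop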





If you pass laws regulating ‘tech,’ perhaps you should include software to detect violations? This should have been detected a few billion calls earlier. (Perhaps they waited until every potential juror had been called several times?)

https://apnews.com/article/robocall-lawsuit-do-not-call-75e26f66a3f6e3145c18c62f5e164a12

Do not call: States sue telecom company over billions of robocalls

Attorneys general across the U.S. joined in a lawsuit against a telecommunications company accused of making more than 7.5 billion robocalls to people on the national Do Not Call Registry.

The 141-page lawsuit was filed Tuesday in U.S. District Court in Phoenix against Avid Telecom, its owner Michael D. Lansky and company vice president Stacey S. Reeves. It seeks a jury trial to determine damages.
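
The detection software suggested above is not hard to sketch. A minimal, hypothetical example, assuming regulators could obtain call-detail records and match them against the registry; the record format and threshold are both assumptions.

    # Hypothetical violation check: count calls from each originating
    # number to numbers on the Do Not Call Registry and flag heavy
    # offenders.
    from collections import Counter

    def flag_violators(call_records, dnc_registry, threshold=100):
        """call_records: iterable of (caller, callee) pairs.
        dnc_registry: set of numbers on the registry."""
        hits = Counter(caller for caller, callee in call_records
                       if callee in dnc_registry)
        return {caller: n for caller, n in hits.items() if n >= threshold}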





Tools & Techniques. (At the risk of being repetitious and redundant, allow me to repeat and reiterate.)

https://www.makeuseof.com/online-courses-mastering-ai-prompt-engineering/

The Top 5 Online Courses for Mastering AI Prompt Engineering



Friday, May 26, 2023

For when we get serious.

https://a16z.com/2023/05/25/ai-canon/

AI Canon

Research in artificial intelligence is increasing at an exponential rate. It’s difficult for AI experts to keep up with everything new being published, and even harder for beginners to know where to start.

So, in this post, we’re sharing a curated list of resources we’ve relied on to get smarter about modern AI. We call it the “AI Canon” because these papers, blog posts, courses, and guides have had an outsized impact on the field over the past several years.

We start with a gentle introduction to transformer and latent diffusion models, which are fueling the current AI wave. Next, we go deep on technical learning resources; practical guides to building with large language models (LLMs); and analysis of the AI market. Finally, we include a reference list of landmark research results, starting with “Attention is All You Need” — the 2017 paper by Google that introduced the world to transformer models and ushered in the age of generative AI.





Anyone want to play?

https://www.reuters.com/technology/openai-offers-100000-grants-ideas-ai-governance-2023-05-25/

OpenAI offers $100,000 grants for ideas on AI governance

OpenAI, the startup behind the popular ChatGPT artificial intelligence chatbot, said Thursday it will award 10 equal grants from a fund of $1 million for experiments in democratic processes to determine how AI software should be governed to address bias and other factors.

The $100,000 grants will go to recipients who present compelling frameworks for answering such questions as whether AI ought to criticize public figures and what it should consider the “median individual” in the world, according to a blog post announcing the fund.





Interesting.

https://www.japantimes.co.jp/news/2023/05/26/asia-pacific/china-pla-ai-cognitive-warfare/

Winning without fighting? Why China is exploring 'cognitive warfare.'

With the U.S. and its allies rapidly bolstering military capabilities around Taiwan, a successful Chinese invasion, let alone an occupation, of the self-ruled island is becoming an increasingly difficult proposition.

But with the Chinese People’s Liberation Army (PLA) increasingly focused on “intelligent warfare” — a reference to artificial intelligence-enabled military systems and operational concepts — experts warn that Beijing could eventually have a new card up its sleeve: “cognitive warfare.”

The term refers to operations based on techniques and technologies such as AI aimed at influencing the minds of one’s adversaries and shaping their decisions, thereby creating a strategically favorable environment or subduing them without a fight.





Because words fail us?

https://www.politico.eu/article/meta-online-safety-europe-privacy-gdpr-big-tech-regime-5-years-in-5-charts/

Europe’s privacy regime: 5 years in 5 charts

Europe's most famous technology law, the General Data Protection Regulation (GDPR), turned 5 on Thursday.

The law, which came into force on May 25, 2018, has prompted businesses — from tech giants to hotel chains, cellphone companies to mom-and-pop businesses — to tighten their privacy policies. Many have cleaned up how they handle people’s personal data, encouraged by the prospect of being fined up to 4 percent of their annual revenue.





Tools & Techniques.

https://www.cnet.com/tech/services-and-software/google-launches-new-ai-search-engine-how-to-sign-up/

Google Launches New AI Search Engine: How to Sign Up

Google has launched Search Generative Experience, or SGE, an experimental version of Search that integrates artificial intelligence answers directly into results, the company said in a blog post on Thursday.

Unlike a normal Google Search, which brings up a list of blue links, SGE uses AI to answer your questions right on the Google Search webpage. After entering a query in Google Search, a green or blue box will expand with a novel answer generated by Google's large language model, like the one powering OpenAI's ChatGPT.

At the moment, SGE isn’t open to the public and requires you to sign up to Google’s Search Labs. To join, click the link here. Search Labs is currently available only to a limited number of people in the US, and in English only, though you can join the waitlist. Google didn’t immediately respond to a request for comment.



Thursday, May 25, 2023

I think you’ll find this is very easy to do and very difficult to detect. (Think of an LLM populated with AI-generated data.)

https://www.schneier.com/blog/archives/2023/05/on-the-poisoning-of-llms.html

On the Poisoning of LLMs

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.
They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.
Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.
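
Since the quote turns on what filtering training data even looks like, here is a deliberately naive sketch of one hygiene check: keyword-stuffing detection. The threshold and corpus are invented, and El Mhamdi's point stands: poisoned text does not have to look stuffed, which is why filters like this are easy to beat.

    # Naive training-data hygiene check: drop documents dominated by a
    # single token (a crude keyword-stuffing signal). Purely illustrative;
    # real vetting pipelines are undisclosed, and subtler poisoning
    # sails right past a filter like this.
    from collections import Counter

    def looks_stuffed(text, max_ratio=0.2):
        tokens = [t.lower() for t in text.split() if t.isalpha()]
        if len(tokens) < 50:
            return False  # too short to judge
        top_count = Counter(tokens).most_common(1)[0][1]
        return top_count / len(tokens) > max_ratio

    corpus = [
        "a varied sentence about several quite different topics " * 8,
        "buy cheap pills now " * 30,  # blatant stuffing, gets dropped
    ]
    clean = [doc for doc in corpus if not looks_stuffed(doc)]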





Tools & Techniques.

https://www.bespacific.com/what-is-chatgpt-and-why-does-it-matter/

What is ChatGPT and why does it matter?

ZDNET – Here’s what you need to know… Updated: This AI chatbot’s advanced conversational capabilities have generated quite the buzz. We answer your questions.

How to use ChatGPT to:



(Related)

https://www.bespacific.com/how-to-be-on-the-lookout-for-misinformation-when-using-generative-ai/

How to be on the lookout for misinformation when using generative AI

Fast Company: “Until very recently, if you wanted to know more about a controversial scientific topic—stem cell research, the safety of nuclear energy, climate change—you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust. Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form. ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations from a massive amalgam of available online information. Although it has the potential to enhance productivity, generative AI has been shown to have some major faults. It can produce misinformation. It can create ‘hallucinations,’ a benign term for making things up. And it doesn’t always solve reasoning problems accurately. For example, when asked if both a car and a tank can fit through a doorway, it failed to consider both width and height. Nevertheless, it is already being used to produce articles and website content you may have encountered, or as a tool in the writing process. Yet you are unlikely to know if what you’re reading was created by AI. As the authors of Science Denial: Why It Happens and What to Do About It, we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information. Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape…”
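
The doorway example is a nice, checkable instance of the reasoning failure. Here it is done properly, with illustrative dimensions; the point is simply that the test must conjoin both constraints.

    # The quoted car-and-tank question, answered the way the chatbot
    # failed to: an object fits only if BOTH width and height clear
    # the frame. All dimensions are illustrative.
    def fits_through_doorway(obj_w_m, obj_h_m, door_w_m=0.9, door_h_m=2.0):
        return obj_w_m <= door_w_m and obj_h_m <= door_h_m

    print(fits_through_doorway(1.8, 1.5))  # car: short enough, but too wide -> False
    print(fits_through_doorway(3.7, 2.4))  # tank: too wide and too tall -> False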



Wednesday, May 24, 2023

It is possible that all future fraud trials will have at least this much evidence. Lawyers will need a ChatGPT just to find the key points.

https://www.nytimes.com/2023/05/23/technology/ftx-evidence-sam-bankman-fried.html

Emails, Chat Logs, Code and a Notebook: The Mountain of FTX Evidence

Prosecutors investigating Sam Bankman-Fried, the cryptocurrency exchange’s founder, have accumulated more than six million pages of documents and other records.





And we’re not even talking about the ‘self-driving’ ones…

https://www.trendmicro.com/en_us/research/23/e/connected-car-cyber-risk.html

How Connected Car Cyber Risk will Evolve

Automobiles are increasingly more akin to powerful computers on wheels than they are traditional vehicles. They’re estimated to contain over 100 million lines of code. Compare that to an average passenger plane, which has just 15 million. Yet just as this smart functionality can enhance the driving experience and even improve car safety, it also opens the door to hackers.

So where are these cyber threats most pronounced? We believe a key area of risk for manufacturers and drivers is the vehicle user account. By hijacking or stealing such an account via phishing for credentials or installing malware, a cyber-criminal could locate the car, break into it and potentially sell it on for parts or follow-on crimes. They might even be able to locate the owner’s home address and target it for burglary when they’re not in. It’s a crossover between cyber and physical crime which we’ve seen before with ATM break-ins.





More to compare.

https://www.schneier.com/blog/archives/2023/05/indiana-iowa-and-tennessee-pass-comprehensive-privacy-laws.html

Indiana, Iowa, and Tennessee Pass Comprehensive Privacy Laws

It’s been a big month for US data privacy. Indiana, Iowa, and Tennessee all passed state privacy laws, bringing the total number of states with a privacy law up to eight. No private right of action in any of those, which means it’s up to the states to enforce the laws.





What would you do if your every need was satisfied by AI?

https://www.economist.com/finance-and-economics/2023/05/23/what-would-humans-do-in-a-world-of-super-ai

What would humans do in a world of super-AI?

A thought experiment based on economic principles

In “WALL-E”, a film that came out in 2008, humans live in what could be described as a world of fully automated luxury communism. Artificially intelligent robots, which take wonderfully diverse forms, are responsible for all productive labour. People get fat, hover in armchairs and watch television. The “Culture” series by Iain M. Banks, a Scottish novelist, goes further still, considering a world in which AI has grown sufficiently powerful as to be superintelligent—operating far beyond anything now foreseeable. The books are a favourite of Jeff Bezos and Elon Musk, the bosses of Amazon and Tesla. In the world spun by Banks, scarcity is a thing of the past and AI “minds” direct most production. Instead, humans turn to art, explore the cultures of the vast universe and indulge in straightforwardly hedonistic pleasures.





Why assume this would be a bad thing?

https://www.bespacific.com/evolving-law-in-ais-handts-preliminary-experiments-thoughts-observations-on-the-basis-of-chat-gpt/

Evolving Law in AI’s Hands? – Preliminary Experiments, Thoughts, Observations on the Basis of Chat GPT

Barth, Fabian, Evolving Law in AI’s Hands? – Preliminary Experiments, Thoughts and Observations on the Basis of Chat GPT (April 27, 2023). Available at SSRN: https://ssrn.com/abstract=4431234 or http://dx.doi.org/10.2139/ssrn.4431234

“Everyone operating in the field of law contributes to its evolution. That applies not only to judges, but also to advisors and academics who, collectively, form and inform our understanding of what the law is. If Artificial Intelligence is used for legal tasks, for example for providing legal advice, it may therefore also influence said evolution. This article explores, based on preliminary experiments and assessments, whether that influence exists and if it could be detrimental to the evolution of the law. In particular, it shall be investigated whether there is a risk that Artificial Intelligence could unintentionally change the law without human influence, therefore effectively creating rules governing human society with insufficient human control.”



Tuesday, May 23, 2023

I’ll keep an eye out for similar articles…

https://venturebeat.com/security/forrester-predicts-2023-top-cybersecurity-threats-generative-ai-geopolitical-tensions/

Forrester predicts 2023’s top cybersecurity threats: From generative AI to geopolitical tensions

The nature of cyberattacks is changing fast. Generative AI, cloud complexity and geopolitical tensions are among the latest weapons and facilitators in attackers’ arsenals. Three-quarters (74%) of security decision-makers say their organizations’ sensitive data was “potentially compromised or breached in the past 12 months” alone. That’s a sobering cybersecurity baseline for any CISO to consider.

With attackers quickly weaponizing generative AI, finding new ways to compromise cloud complexity and exploiting geopolitical tensions to launch more sophisticated attacks, it will get worse before it gets better.

Forrester’s Top Cybersecurity Threats in 2023 report (client access reqd.) provides a stark warning about the top cybersecurity threats this year, along with prescriptive advice to CISOs and their teams on countering them. By weaponizing generative AI and using ChatGPT, attackers are fine-tuning their ransomware and social engineering techniques.





This makes perfect sense if you consider everyone a potential criminal. You might even want to make extra effort to identify the ones who haven’t done anything criminal yet.

https://www.wired.com/story/europe-break-encryption-leaked-document-csa-law/

Leaked Government Document Shows Spain Wants to Ban End-to-End Encryption

Spain has advocated banning encryption for hundreds of millions of people within the European Union, according to a leaked document obtained by WIRED that reveals strong support among EU member states for proposals to scan private messages for illegal content.

The document, a European Council survey of member countries’ views on encryption regulation, offered officials’ behind-the-scenes opinions on how to craft a highly controversial law to stop the spread of child sexual abuse material (CSAM) in Europe. The proposed law would require tech companies to scan their platforms, including users’ private messages, to find illegal material. However, the proposal from Ylva Johansson, the EU commissioner in charge of home affairs, has drawn ire from cryptographers, technologists, and privacy advocates for its potential impact on end-to-end encryption.

For years, EU states have debated whether end-to-end encrypted communication platforms, such as WhatsApp and Signal, should be protected as a way for Europeans to exercise a fundamental right to privacy—or weakened to keep criminals from being able to communicate outside the reach of law enforcement. Experts who reviewed the document at WIRED’s request say it provides important insight into which EU countries plan to support a proposal that threatens to reshape encryption and the future of online privacy.





Inevitable.

https://www.schneier.com/blog/archives/2023/05/credible-handwriting-machine.html

Credible Handwriting Machine

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation – something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.
Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.
Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as g-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.

Watch the video.

My guess is that this is another detection/detection avoidance arms race.
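
The quoted description is concrete enough to sketch. Below is a hypothetical rendering loop in the same spirit: sampled glyph variants plus small random offsets. The glyphs dictionary and all parameters are invented; a real build would capture them from the user’s own writing.

    # Sketch of the glyph-jitter idea: compose a page from per-character
    # images sampled from a user's handwriting, with random offsets so
    # no two letters repeat exactly. The glyphs dict is hypothetical.
    import random
    from PIL import Image

    def render_handwritten(text, glyphs, width=1200, line_height=60):
        """glyphs: dict mapping char -> list of small PIL glyph images."""
        page = Image.new("L", (width, line_height * 6), color=255)
        x, y = 20, 10
        for ch in text:
            if ch == " ":
                x += 18 + random.randint(-3, 3)  # uneven word spacing
                continue
            glyph = random.choice(glyphs[ch])    # pick a sampled variant
            page.paste(glyph, (x, y + random.randint(-2, 2)))  # wobble the baseline
            x += glyph.width + random.randint(-2, 2)
            if x > width - 60:
                x, y = 20, y + line_height       # crude line wrap
        return page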



Monday, May 22, 2023

I’m not certain that I understand this. It seems to suggest there will be more (hidden?) variables in the process to make the results less understandable.

https://www.marktechpost.com/2023/05/21/microsoft-researchers-introduce-reprompting-an-iterative-sampling-algorithm-that-searches-for-the-chain-of-thought-cot-recipes-for-a-given-task-without-human-intervention/

Microsoft Researchers Introduce Reprompting: An Iterative Sampling Algorithm that Searches for the Chain-of-Thought (CoT) Recipes for a Given Task without Human Intervention

In recent times, large language models (LLMs) have evolved and transformed natural language processing with their few-shot prompting techniques. These models are now used in almost every domain, from machine translation and natural language understanding to text completion, sentiment analysis, and speech recognition. In the few-shot prompting approach, LLMs are given a few examples of a particular task, along with natural language instructions, and from these they learn to perform the task. However, such prompting techniques run into serious limitations on tasks that require iterative steps and constraint propagation, and a new approach has been introduced to overcome them.

A team of researchers at Microsoft Research, Redmond, USA, recently introduced a new method called Reprompting, which addresses all the limitations accompanying prompting techniques. This approach automatically searches for some useful and effective chain-of-thought (CoT) prompts. Chain-of-thought prompting helps improve the reasoning ability of large language models and helps them perform complex reasoning tasks. For this, a few chains of thought demonstrations are provided as exemplars during prompting. Reprompting finds CoT prompts very efficiently without any human involvement.
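
As best I can tell from the announcement, the “searching” is sampling: candidate CoT recipes are drawn, reused as in-context exemplars, and kept when they lead to correct answers. A rough, hypothetical sketch of such a loop; llm_generate is a placeholder, and this is not Microsoft’s actual algorithm.

    # Rough sketch of a Reprompting-style loop: repeatedly re-solve
    # training problems using previously sampled chains of thought as
    # exemplars, keeping the chains that produce correct answers.
    # llm_generate() is a placeholder for any real model call.
    import random

    def llm_generate(prompt):
        raise NotImplementedError("plug in an actual LLM call here")

    def reprompting(problems, answers, iterations=200, k=3):
        pool = []  # (problem, chain-of-thought) pairs that worked
        for _ in range(iterations):
            i = random.randrange(len(problems))
            exemplars = random.sample(pool, min(k, len(pool)))
            prompt = "".join(f"Q: {p}\nReasoning: {c}\n" for p, c in exemplars)
            prompt += f"Q: {problems[i]}\nReasoning:"
            chain = llm_generate(prompt)
            if chain.strip().endswith(str(answers[i])):  # crude answer check
                pool.append((problems[i], chain))
        return pool  # CoT 'recipes' found with no human-written exemplars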

Check out the Paper.





Have I got your attention now?

https://www.moneycontrol.com/news/opinion/metas-1-3-billion-eu-fine-could-get-worse-10638321.html

Meta’s $1.3 billion EU fine could get worse

If the US and Europe don’t reach a data-transfer agreement, Meta may have to delete European user data from its American servers





Tools & Techniques. ‘cause I don’t want nothing smarter than me!

https://www.bespacific.com/10-ai-detection-tools/

10 AI Detection tools

The BrainyActs – “Tool you can Use: While more and more of us are using AI, more of us are also thinking about how to tell the difference between AI and human-generated output. Here are 10 tools you can use to learn what has been generated with AI and what hasn’t.”

See also PCMag – 5 Ways to Detect Text Written by ChatGPT and Other AI Tools – The best way to figure out if an artificial intelligence wrote something may be to ask AI. We test AI-detection services with text written by ChatGPT and text written by a human: Here are the results.
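
For the curious, one heuristic many of these detectors build on is perplexity: text a language model finds highly predictable is more likely machine-written. A rough sketch using GPT-2 via Hugging Face transformers; the threshold is invented, not calibrated, and the heuristic is easy to fool.

    # Perplexity-based detection sketch: score how predictable a text
    # is under GPT-2; suspiciously low perplexity hints at machine
    # generation.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean token cross-entropy
        return torch.exp(loss).item()

    def probably_ai(text, threshold=40.0):
        return perplexity(text) < threshold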





Tools & Techniques.

https://www.bespacific.com/the-best-34-free-ai-tools-for-education-in-2023-so-far/

The Best 34 Free AI Tools For Education In 2023 – So Far

Larry Ferlazzo: “I’ve begun posting my mid-year “Best” lists, and this is a new one – the first time I’ve shared a “Best” list specifically on AI tools. As you probably know, I’ve been publishing a weekly list of free AI tools for education since January. You can see all my “Best” lists related to Artificial Intelligence here. Here are my picks for the best of the lot [snipped]:



Sunday, May 21, 2023

It’s hard to keep track, so articles like this are useful.

https://jsp-ls.berkeley.edu/sites/default/files/california_legal_studies_journal_2023.pdf#page=15

Science vs. The Law: How Developments in Artificial Intelligence Are Challenging Various Civil Tort Laws

For centuries, the law has been playing catch-up as science pushes the boundaries of how we define both society and our own realities. Today, artificial intelligence (AI) perhaps poses the biggest challenges the legal system has ever faced. This paper aims to explore many of the numerous ways in which artificial intelligence developments are actively pushing the boundaries of contemporary civil law, challenging lawyers and judges alike to rethink the law as they know it. It first offers a general overview of artificial intelligence as well as some relevant legal fields: negligence, product liability, false information, invasion of privacy, and copyright. Then, it dives into the specifics of how artificial intelligence is challenging these fields. Each section will introduce a new field in which artificial intelligence is rapidly changing the game, explain the benefits and pitfalls of said use of AI, introduce the relevant legal field and its policies, and then explore the challenges that AI is causing to the law and how, if at all, that legal field is adapting to those challenges.





It seems we need an example greater than Russia’s attacks on Ukraine?

https://digitalcommons.liberty.edu/hsgconference/2023/foreign_policy/13/

The Future of the Cyber Theater of War

When the air was the new theater of war, few could imagine how it would develop. The literature showcases that a lack of imagination and state-level institutionalized power structures, particularly in the U.S., hampered the progress of air as a new theater of war both in thought and application. Today, a similar lack of imagination on the cyber theater of war is a great source of insecurity in the world system; it sets the stage for strategic shocks like the ones to the U.S. on December 7, 1941, and 9/11. To avoid this, states should imagine how a convergence of cyber technologies into new weapons could be used in war and by whom. Popular movies today form the basis for considering what has yet to be realized in the cyber theater of war. Its nascent history and designation as a theater of war foreshadow the expectation that eventual traditional war will occur in the cyber realm. When nanocomputers, artificial intelligence, quantum computing, speed, and advanced robotics fully converge, new weapons are possible and likely. The Just War Theory, understood through the Christian lens rather than only as a matter of secular international law, is applied to the evolving cyber theater of war to fill current doctrinal gaps in the just cause and conduct of future war within the cyber realm.





AI is too human to be considered a person?

https://link.springer.com/article/10.1007/s43545-023-00667-x

Hybrid theory of corporate legal personhood and its application to artificial intelligence

Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood. This article also uses this analogy between AI and companies to study AI legal personhood but contributes to the discussion by utilizing the hybrid model of corporate legal personhood. The hybrid model simultaneously applies the real entity, aggregate entity, and artificial entity models. This article adopts a legalistic position, in which anything can be a legal person. However, there might be strong pragmatic reasons not to confer legal personhood on non-human entities. The article recognizes that artificial intelligence is autonomous by definition and has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy. Therefore, AI has a strong attribute to be a real entity. Nevertheless, the article argues that AI has key characteristics from the aggregate entity and artificial entity models. Therefore, the hybrid entity model is more applicable to AI legal personhood than any single model alone. The discussion recognizes that AI might be too autonomous for legal personhood. Still, it concludes that the hybrid model is a useful analytical framework as it incorporates legal persons with different levels of de jure and de facto autonomy.





Government by ChatBot? What evidence will I have to keep when the ChatBot says I no longer have to pay taxes?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4444869

Automated Agencies

When individuals have questions about federal benefits, services, and legal rules, they increasingly seek help from government chatbots, virtual assistants, and other automated tools. Most scholars who have studied artificial intelligence and federal government agencies have not focused on the government’s use of technology to offer guidance to the public. The absence of scholarly attention to automation as a means of communicating government guidance is an important gap in the literature. Through the use of automated legal guidance, the federal government is responding to millions of public inquiries each year about the law, a number that may multiply many times over in years to come. This new form of guidance is thereby shaping public views of and behavior with respect to the law, without serious examination.

This Article describes the results of a qualitative study of automated legal guidance across the federal government. This study was conducted under the auspices of the Administrative Conference of the United States (ACUS), an independent federal agency of the U.S. government charged with recommending improvements to administrative process and procedure. Our goal was to understand federal agency use of automated legal guidance, and offer recommendations to ACUS based on our findings. During our study, we canvassed the automated legal guidance activities of all federal agencies. We found extensive use of automation to offer guidance to the public by federal agencies, with varying levels of sophistication and legal content. We identified two principal models of automated legal guidance, and we conducted in-depth legal research regarding the most sophisticated examples of such models. We also interviewed agency officials with direct, supervisory, or support responsibility over well-developed automated legal guidance tools.

We find that automated legal guidance offers agencies an inexpensive way to help the public navigate complex legal regimes. However, we also find that automated legal guidance may mislead members of the public about how the law will apply in their individual circumstances. In particular, automated legal guidance exacerbates the tendency of federal agencies to present complex law as though it is simple without actually engaging in simplification of the underlying law. While this approach offers advantages in terms of administrative efficiency and ease of use by the public, it also causes the government to present the law as simpler than it is, leading to less precise advice and potentially inaccurate legal positions. In some cases, agencies heighten this problem by, among other things, making guidance seem more personalized than it is, ignoring how users may rely on the guidance, and failing to adequately disclose that the guidance cannot be relied upon as a legal matter. At worst, automated legal guidance enables the government to dissuade members of the public from accessing benefits to which they are entitled, a cost that may be borne disproportionately by members of the public least capable of obtaining other forms of legal advice.

In reaching these conclusions, we do not suggest that automated legal guidance is uniquely problematic relative to alternative forms of communicating the law. The question of how to respond to complex legal problems, in light of a public that has limited ability or inclination to understand complex legal systems, is a difficult one. There are different, potential solutions to this problem, which each present their own series of cost-benefit tradeoffs. However, failure to appreciate, or even examine, the tradeoffs inherent in automated legal guidance, relative to the alternatives, undermines our ability to make informed decisions about when to use which solution, or how to minimize the costs of this form of guidance.

In this Article, after exploring these challenges, we chart a path forward. We offer policy recommendations, organized into five categories: transparency; reliance; disclaimers; process; and accessibility, inclusion, and equity. We believe that our descriptive as well as theoretical work regarding automated legal guidance, and the detailed policy recommendations that flow from it, will be critical for evaluating existing, as well as future, government uses of automated legal guidance.