Tuesday, June 11, 2024

Perhaps automating lawyers will have to wait.

https://www.bespacific.com/law-firms-start-training-summer-associates-on-using-generative-ai/

Law Firms Start Training Summer Associates on Using Generative AI

Bloomberg Law: “Some Big Law firms are now making summer associates learn the ins and outs of generative AI as they begin integrating what’s considered to be a game-changing technology for the profession. K&L Gates, Dechert, and Orrick Herrington & Sutcliffe have incorporated training on the technology for this year’s class of summer associates, teaching them how to use research and chatbot tools now being used by the firms. The programs offer a window into what some firms believe artificial intelligence will mean for those now entering the profession.

Future junior lawyers won’t be replaced by AI, as some fear, but they will need to harness it to be successful, said Brendan McDonnell, a K&L Gates partner and member of the firm’s AI solutions group. That includes understanding how to effectively interact with generative AI chatbots to unearth the most useful information for clients, he said. “That’s the whole idea about the training program: You need to teach people how this is going to impact the way they come to work,” said McDonnell. While AI will automate many tasks, he said, it’s also going to open up new lines of legal practice while freeing up new professionals’ time to learn and master the complex work they went to law school for.

Most firms are still in an experimentation phase when it comes to deploying generative AI chatbot and research tools. Firms’ use of the tech is also dependent on clients’ openness to it. “We’re in a transition period,” said Alex Su, the chief revenue officer at Latitude Legal, a global flexible legal staffing firm. “It’s hard to say there’s going to be a huge impact in how law firms staff in the near-term.” Still, legal experts caution that future lawyers need to address the technology…”



(Related)

https://www.bespacific.com/ai-now/

AI Now

Perkins, Rachelle Holmes, AI Now (May 24, 2024). Temple Law Review, Vol. 97, Forthcoming, George Mason Legal Studies Research Paper No. LS 24-14, Available at SSRN: https://ssrn.com/abstract=4840481 or http://dx.doi.org/10.2139/ssrn.4840481

“Legal scholars have made important explorations into the opportunities and challenges of generative artificial intelligence within legal education and the practice of law. This Article adds to this literature by directly addressing members of the legal academy. As a collective, law professors, who are responsible for cultivating the knowledge and skills of the next generation of lawyers, are seemingly adopting a laissez-faire posture towards the advent of generative artificial intelligence. In stark contrast to law practitioners, law professors generally have displayed a lack of urgency in responding to the repercussions of this emerging technology. This Article contends that all law professors have an inescapable duty to understand generative artificial intelligence. This obligation stems from the pivotal role faculty play on three distinct but interconnected dimensions: pedagogy, scholarship, and governance. No law faculty are exempt from this mandate. All are entrusted with responsibilities that intersect with at least one, if not all three dimensions, whether they are teaching, research, clinical, or administrative faculty. It is also not dependent on whether professors are inclined, or disinclined, to integrate artificial intelligence into their own courses or scholarship. The urgency of the mandate derives from the critical and complex role law professors have in the development of lawyers and architecture of the legal field.”





Lawyers: We don’t need no stinking rules!

https://www.reuters.com/legal/transactional/5th-circuit-scraps-plans-adopt-ai-rule-after-lawyers-object-2024-06-10/

5th Circuit scraps plans to adopt AI rule after lawyers object

… The 5th U.S. Circuit Court of Appeals said it had decided not to adopt a rule it first proposed in November after taking into consideration the use of AI in the legal practice and public comment from lawyers, which had been largely negative.

The proposed rule aimed to regulate lawyers’ use of generative AI tools like OpenAI's ChatGPT and would have governed both attorneys and litigants appearing before the court without counsel.

It would have required them to certify that, to the extent an AI program was used to generate a filing, citations and legal analysis were reviewed for accuracy. Lawyers who misrepresented their compliance with the rule could face sanctions and the prospect of their filings being stricken.

… But members of the bar in public comments submitted to the 5th Circuit largely opposed its proposal, arguing that rules already on the books were good enough to deal with any issues with the technology, including ensuring the accuracy of court filings.





Who fools whom? Has AI fooled the CEO/BoD?

https://www.hklaw.com/en/insights/media-entities/2024/06/the-secs-intensified-focus-on-ai-washing-practices

The SEC’s Intensified Focus on AI Washing Practices

Litigation attorney Andrew Balthazor was a featured guest on the RiskWatch podcast hosted by Vcheck, where he discussed the growing concern of artificial intelligence (AI) washing. This deceptive practice involves companies exaggerating or misrepresenting their use of artificial intelligence to attract investor interest. Notably, the U.S. Securities and Exchange Commission (SEC) has recently taken steps against investment advisers for making false claims about their use of AI, leading to more explicit regulations and an anticipated increase in enforcement actions with stricter penalties. Throughout the episode, Mr. Balthazor emphasizes the need for caution in AI investing, highlights the importance of understanding a company's true AI capabilities, and suggests practical due diligence measures to help cut through misleading claims.



(Related)

https://sloanreview.mit.edu/article/auditing-algorithmic-risk/

Auditing Algorithmic Risk

How do we know whether algorithmic systems are working as intended? A set of simple frameworks can help even nontechnical organizations check the functioning of their AI tools.


