A useful overview?
https://www.bespacific.com/the-legal-ethics-of-generative-ai/
The Legal Ethics of Generative AI
Perlman, Andrew, The Legal Ethics of Generative AI (February 22, 2024). Suffolk University Law Review, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4735389 or http://dx.doi.org/10.2139/ssrn.4735389
The legal profession is notoriously conservative when it comes to change. From email to outsourcing, lawyers have been slow to embrace new methods and quick to point out potential problems, especially ethics-related concerns. The legal profession’s approach to generative artificial intelligence (generative AI) is following a similar pattern. Many lawyers have readily identified the legal ethics issues associated with generative AI, often citing the New York lawyer who cut and pasted fictitious citations from ChatGPT into a federal court filing. Some judges have gone so far as to issue standing orders requiring lawyers to reveal when they use generative AI or to ban the use of most kinds of artificial intelligence (AI) outright. Bar associations are chiming in on the subject as well, though they have (so far) taken an admirably open-minded approach to the subject.
Part II of this essay explains why the Model Rules of Professional Conduct (Model Rules) do not pose a regulatory barrier to lawyers’ careful use of generative AI, just as the Model Rules did not ultimately prevent lawyers from adopting many now-ubiquitous technologies. Drawing on my experience as the Chief Reporter of the ABA Commission on Ethics 20/20 (Ethics 20/20 Commission), which updated the Model Rules to address changes in technology, I explain how lawyers can use generative AI while satisfying their ethical obligations. Although this essay does not cover every possible ethics issue that can arise or all of generative AI’s law-related use cases, the overarching point is that lawyers can use these tools in many contexts if they employ appropriate safeguards and procedures. Part III describes some recent judicial standing orders on the subject and explains why they are ill-advised.
The essay closes in Part IV with a potentially provocative claim: the careful use of generative AI is not only consistent with lawyers’ ethical duties, but the duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.
Real rules on deepfakes?
https://www.bespacific.com/deepfakes-in-the-courtroom/
Deepfakes in the courtroom
Ars Technica: “US judicial panel debates new AI evidence rules. Panel of eight judges confronts deep-faking AI tech that may undermine legal trials. On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial. The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos. In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials…”
How should you regulate an AI that might grow into a person?
https://coloradosun.com/2024/04/25/colorado-generative-ai-artificial-intelligence-senate/
Colorado bill to regulate generative artificial intelligence clears its first hurdle at the Capitol
A Colorado bill that would require companies to alert consumers anytime artificial intelligence is used and would add more consumer protections around the budding AI industry cleared its first legislative hurdle late Wednesday, even as critics testified it could stifle technological innovation in the state.
At the end of the evening, most sides seemed to agree: The bill still needs work.