It’s happening.
https://www.lawfareblog.com/artificial-intelligence-virtual-courts-and-real-harms
Artificial Intelligence, Virtual Courts, and Real Harms
The international legal community saw a paradigm redefined at the beginning of this year. Judges in Colombia, for the first time, openly used generative artificial intelligence (GAI) to author parts of their judicial opinions. In the first case, a judge used GAI to help research and draft a holding addressing a petitioner’s request for a waiver of medical fee payments for treatment for her child with autism. In the second case, the court addressed how to conduct a virtual court appearance in the metaverse while citing GAI-based research.
These courtroom applications are the first two cases in which GAI has been used to parse and interpret the law, carrying the weight of state authority.
Useful?
https://www.bespacific.com/gpt-4-will-make-chatgpt-smarter-but-wont-fix-its-flaws/
GPT-4
GPT-4 – “We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.”
• Read paper
• View system card
• Try on ChatGPT Plus
• Join API waitlist
• Rewatch developer demo livestream
• Contribute to OpenAI Evals
See also Wired – GPT-4 Will Make ChatGPT Smarter but Won’t Fix Its Flaws. “A new version of the AI system that powers the popular chatbot has better language skills, but is still biased, prone to fabrication, and can be abused. The new model scores more highly on a range of tests designed to measure intelligence and knowledge in humans and machines, OpenAI says. It also makes fewer blunders and can respond to images as well as text. However, GPT-4 suffers from the same problems that have bedeviled ChatGPT and cause some AI experts to be skeptical of its usefulness—including tendencies to “hallucinate” incorrect information, exhibit problematic social biases, and misbehave or assume disturbing personas when given an “adversarial” prompt.”
See also GeekWire: Commentary: OpenAI’s GPT-4 has some limitations that are fixable — and some that are not