Tuesday, December 17, 2024

It’s not just for legal training, I hope.

https://www.bespacific.com/revolutionizing-legal-education-with-ai-the-socratic-quizbot/

Revolutionizing Legal Education with AI: The Socratic Quizbot

AI Law Librarians – Sean Harrington – “I had the pleasure of co-teaching AI and the Practice of Law with Kenton Brice last semester at OU Law. It was an incredible experience. When we met to think through how we would teach this course, we agreed on one crucial component: We wanted the students to get a lot of reps using AI throughout the entire course. That is fairly easy to accomplish for things like research, drafting, and general studying for the course, but we hit a roadblock with the assessment component. I thought about it for a week and said, “Kenton, what if we created an AI that would Socratically quiz the students on the readings each week?” His response was, “Do you think you can do that?” I said, “I don’t know, but I’ll give it a try.” Thus Socratic Quizbot was born. If you follow me on social media, you’ve probably seen me soliciting feedback on the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4975804”





Another result of AI mirroring what it finds in training data?

https://www.bespacific.com/inescapable-ai/

Inescapable AI

A Report from TechTonic Justice – Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive – “The use of artificial intelligence, or AI, by governments, landlords, employers, and other powerful private interests restricts the opportunities of low-income people in every basic aspect of life: at home, at work, in school, at government offices, and within families. AI technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities. As such, now is a critical moment to take stock and correct course before AI of any level of technical sophistication becomes entrenched as a legitimate way to make key decisions about the people society marginalizes. Employing a broad definition of AI, this report represents the first known effort to comprehensively explain and quantify the reach of AI-based decision-making among low-income people in the United States. It establishes that essentially all 92 million low-income people in the U.S.—everyone whose income is less than 200 percent of the federal poverty line—have some basic aspect of their lives decided by AI.”





Probably right about rights.

https://pogowasright.org/why-individual-rights-cant-protect-privacy/

Why Individual Rights Can’t Protect Privacy

Law professor and privacy law scholar Dan Solove recently wrote:

Today, the California Privacy Protection Agency (CPPA) published a large advertisement in the San Francisco Chronicle encouraging people to exercise their privacy rights. “The ball is in your court,” the ad declared. (H/T Paul Schwartz)
While I admire the CPPA’s effort to educate, the notion that the ball is in the individuals’ court is not a good one. This puts the onus on individuals to protect their privacy when they are ill-equipped to do so, and then leads to blaming them when they fail.
I wrote an article last year about how privacy laws rely too much on rights, which are not an effective way to bring data collection and use under control: The Limitations of Privacy Rights, 98 Notre Dame Law Review 975 (2023).
Individual privacy rights are often at the heart of information privacy and data protection laws. Unfortunately, rights are often asked to do far more work than they are capable of doing.

Read more of his post on LinkedIn.





Speedy?

https://www.reuters.com/technology/meta-pay-32-mln-it-settles-facebook-quiz-apps-privacy-breach-2024-12-17/

Facebook-parent Meta settles with Australia's privacy watchdog over Cambridge Analytica lawsuit

Meta Platforms has agreed to an A$50 million ($31.85 million) settlement, Australia's privacy watchdog said on Tuesday, closing drawn-out, expensive legal proceedings for the Facebook parent over the Cambridge Analytica scandal.

The breaches were first reported by the Guardian in early 2018, and Facebook received fines from regulators in the United States and the UK in 2019.

Australia's privacy regulator has been caught up in the legal battle with Meta since 2020.


