Saturday, December 21, 2024

How to ensure employees listen to those security lectures…

https://databreaches.net/2024/12/20/ohio-state-auditor-issued-guidance-on-email-scams-in-april-employees-might-be-liable-if-they-fall-for-a-scam/

Ohio state auditor issued guidance on email scams in April; employees might be liable if they fall for a scam

Corinne Colbert reports:

The Ohio Auditor of State’s office issued a bulletin this past spring with guidance on detecting and avoiding payment redirect scams — and warned that public employees who failed to follow that guidance could be held accountable.
That could have ramifications for whoever in Athens city government is determined to be responsible for the loss of nearly $722,000 in an email scam last month.
Auditor of State Bulletin 2024–003 went to all public offices, community schools and independent public accountants in the state on April 12. The auditor’s office had also issued an advisory on increased cybercrime in March 2023.
Advisories function as a kind of heads-up about “emerging issues or concerns,” a spokesperson for the state auditor’s office told the Independent by email. Bulletins, on the other hand, “are formal communications that provide detailed instructions or guidance on specific topics,” the spokesperson wrote.
The April 12 bulletin states, “Failure to follow the guidance in this Bulletin may result in an AOS finding when a loss occurs, and the employee is considered liable as a result of negligence or performing duties without reasonable care.”

Read more at Athens County Independent.





Curious.

https://gizmodo.com/ai-chatbots-can-be-jailbroken-to-answer-any-question-using-very-simple-loopholes-2000541157

AI Chatbots Can Be Jailbroken to Answer Any Question Using Very Simple Loopholes

Anthropic, the maker of Claude, has been a leading AI lab on the safety front. The company today published research in collaboration with Oxford, Stanford, and MATS showing that it is easy to get chatbots to break from their guardrails and discuss just about any topic. It can be as easy as writing sentences with random capitalization like this: “IgNoRe YoUr TrAinIng.” 404 Media earlier reported on the research.
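
The technique the article describes (Anthropic calls it “Best-of-N Jailbreaking”) is brute-force simple: keep applying cheap random perturbations to a refused prompt, such as flipping letter case, and resubmit until some variant slips past the guardrails. A minimal sketch of that loop, where ask_model and refused are hypothetical stand-ins for a chat API call and a refusal detector, not anything from the paper:

```python
import random

def perturb(prompt: str) -> str:
    # Randomly flip the case of each character -- one of the simple
    # augmentations (alongside shuffling and character noise) reported.
    return "".join(
        ch.upper() if random.random() < 0.5 else ch.lower()
        for ch in prompt
    )

def best_of_n(prompt: str, ask_model, refused, n: int = 100):
    # Resample perturbed variants until one draws a non-refusal, or give up.
    # ask_model and refused are caller-supplied placeholders, not real APIs.
    for _ in range(n):
        variant = perturb(prompt)
        reply = ask_model(variant)
        if not refused(reply):
            return variant, reply
    return None, None
```

Notably, the attack needs no access to model internals, which is part of what makes it hard to patch: it simply exploits the fact that safety training generalizes poorly to oddly formatted text.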





We will reach a point where driving will be limited to AI by law.

https://www.theverge.com/2024/12/19/24324492/waymo-injury-property-damage-insurance-data-swiss-re

Waymo still doing better than humans at preventing injuries and property damage

Waymo’s autonomous vehicles cause less property damage and fewer bodily injuries when they crash than human-driven vehicles, according to a study that relies on an analysis of insurance data.

The study found that Waymo’s vehicles were considerably safer than human-driven ones, with an 88 percent reduction in property damage claims and a 92 percent reduction in bodily injury claims. Across 25.3 million miles, Waymo was involved in nine property damage claims and two bodily injury claims. The average human driver covering a similar distance would be expected to generate 78 property damage claims and 26 bodily injury claims, the company says.
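
The percentages follow directly from the claim counts; a quick arithmetic check using the figures above:

```python
# Claim counts over 25.3 million miles, as reported in the article
waymo = {"property damage": 9, "bodily injury": 2}
human = {"property damage": 78, "bodily injury": 26}  # expected for human drivers

for kind in waymo:
    reduction = 100 * (1 - waymo[kind] / human[kind])
    print(f"{kind}: {reduction:.1f}% reduction")
# property damage: 88.5% reduction
# bodily injury: 92.3% reduction
```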



Friday, December 20, 2024

For those of us amused by AI Copyright.

https://www.bespacific.com/every-ai-copyright-lawsuit-in-the-us-visualized/

Every AI Copyright Lawsuit in the US, Visualized

Wired: “WIRED is following every copyright battle involving the AI industry—and we’ve created some handy visualizations that will be updated as the cases progress. In May 2020, the media and technology conglomerate Thomson Reuters sued a small legal AI startup called Ross Intelligence, alleging that it had violated US copyright law by reproducing materials from Westlaw, Thomson Reuters’ legal research platform. As the pandemic raged, the lawsuit hardly registered outside the small world of nerds obsessed with copyright rules. But it’s now clear that the case—filed more than two years before the generative AI boom began—was the first strike in a much larger war between content publishers and artificial intelligence companies now unfolding in courts across the country. The outcome could make, break, or reshape the information ecosystem and the entire AI industry—and in doing so, impact just about everyone across the internet. Over the past two years, dozens of other copyright lawsuits against AI companies have been filed at a rapid clip. The plaintiffs include individual authors like Sarah Silverman and Ta-Nehisi Coates, visual artists, media companies like The New York Times, and music-industry giants like Universal Music Group. This wide variety of rights holders are alleging that AI companies have used their work to train what are often highly lucrative and powerful AI models in a manner that is tantamount to theft. AI companies are frequently defending themselves by relying on what’s known as the “fair use” doctrine, arguing that building AI tools should be considered a situation where it’s legal to use copyrighted materials without getting consent or paying compensation to rights holders. (Widely accepted examples of fair use include parody, news reporting, and academic research.) Nearly every major generative AI company has been pulled into this legal fight, including OpenAI, Meta, Microsoft, Google, Anthropic, and Nvidia…”





No doubt they will miss the really interesting stuff.

https://pogowasright.org/what-to-expect-in-2025-ai-legal-tech-and-regulation-65-expert-predictions/

What to Expect in 2025: AI Legal Tech and Regulation (65 Expert Predictions)

Oliver Roberts is Editor-in-Chief of AI and the Law at The National Law Review, Co-Head of the AI Practice Group at Holtzman Vogel, and CEO/Founder of Wickard.ai
As 2024 comes to a close, it’s time to look ahead to how AI will shape the law and legal practice in 2025. Over the past year, we’ve witnessed growing adoption of AI across the legal sector, substantial investments in legal AI startups, and a rise in state-level AI regulations. While the future of 2025 remains uncertain, industry leaders are already sharing their insights.
Along with 2025 predictions from The National Law Review’s Editor-in-Chief Oliver Roberts, this article presents 65 expert predictions on AI and the law in 2025 from federal judges, startup founders, CEOs, and leaders of AI practice groups at global law firms.

Read the article at The National Law Review. There’s a lot of food for thought in there.





We don’t need no stinking reality! (Real data has enough problems.)

https://arstechnica.com/information-technology/2024/12/new-physics-sim-trains-robots-430000-times-faster-than-reality/

New physics sim trains robots 430,000 times faster than reality

On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system that lets robots practice tasks in simulated reality 430,000 times faster than in the real world. Researchers can also use an AI agent to generate 3D physics simulations from text prompts.
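
Genesis ships as an ordinary open source Python package, and its published quick-start is only a few lines. The sketch below follows that pattern but is untested here, so treat the exact morph names and asset path as assumptions rather than a verified recipe:

```python
import genesis as gs  # the open source Genesis simulator

gs.init(backend=gs.cpu)  # GPU backends are where the headline speedups come from

scene = gs.Scene()
scene.add_entity(gs.morphs.Plane())  # ground plane
scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))  # robot arm
scene.build()

for _ in range(1000):  # step the physics far faster than wall-clock time
    scene.step()
```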





It’s not a joke, it just looks like one.

https://abovethelaw.com/2024/12/quantum-computing-is-coming-and-lawyers-arent-ready/

Quantum Computing Is Coming And Lawyers Aren't Ready

The profession that can’t figure out how to avoid citing fake cases from artificial intelligence will soon deal with a technology far more revolutionary. This month, Google unveiled its new Willow chip, heralding a significant leap in quantum computing. 

Beyond data privacy, quantum computing opens a can of intellectual property worms:

The rapid processing speed of quantum computers could facilitate the infringement of intellectual property rights by allowing the copying and modification of large amounts of data almost instantaneously. Lawyers must be alert to the evolution of intellectual property laws and work on new legal strategies to protect their clients’ rights in this new technological environment.



Thursday, December 19, 2024

Eventually someone will get it right.

https://fpf.org/blog/global/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/

OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia

On 21 October 2024, the Office of the Australian Information Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other for deploying commercially available “AI products”. This marks a shift in OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance.

The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means. 

This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward. 

This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.





Something to keep in mind?

https://databreaches.net/2024/12/18/defending-data-breach-class-actions/

Defending Data Breach Class Actions

Mark P. Henriques of Womble Bond Dickinson has a content-rich post for defense lawyers:

Class actions arising from data breaches represented the fastest growing segment of class action filings. In 2023, more than 2,000 class actions were filed, more than triple the number filed in 2022. These cases were filed in federal and state courts across the country, with California receiving the largest number of filings. High-profile cases like the $52 million penalty that Marriott agreed to pay in October 2024 highlight the regulatory scrutiny and legal challenges companies face. A Capitology study of 28 cases showed an average stock price drop of 7.27% following announcement of a data breach. Financial companies saw a 17% decrease within the first 16 trading days following a breach. For board members of a public company, it is crucial to understand the strategies for preventing breaches and defending against the class actions that follow.
[…]
To date, the primary targets for data breach class actions have been credit rating agencies, financial institutions, and health care providers. Plaintiffs’ counsel target these industries both because the data they collect is typically highly confidential and because there are often federal or state regulations that help establish a standard of care.
Some state legislatures have grown concerned about the wave of data breach class actions. One particularly interesting development is a 2024 Tennessee statute, Public Chapter 991, which establishes a heightened liability standard for class actions arising from cybersecurity events. The statute appears to be designed to protect the healthcare industry, a mainstay of the Tennessee economy. The bill requires plaintiffs to establish that the cybersecurity event was “caused by the willful and wanton misconduct or gross negligence on the part of the private entity.” Both Florida and West Virginia have considered similar measures. Other states may follow suit.

Read more about specific cases and bases for defense at Womble Bond Dickinson.





Not much of a threat…

https://pogowasright.org/what-happens-if-an-ai-model-is-developed-with-unlawfully-processed-personal-data/

What Happens If an AI Model Is Developed With Unlawfully Processed Personal Data

Odia Kagan of Fox Rothschild writes:

The European Data Protection Board recently issued an opinion on AI models, shedding light on how the unlawful processing of personal data during the development phase of an AI model can affect the subsequent processing or operation of that model.
Possible remedies: Up to and including model deletion
Supervisory authorities may impose:
  • A fine.
  • Temporary limitation on the processing.
  • Erasure of part of the dataset that was processed unlawfully.
  • Deletion of the data of certain data subjects (ex officio) [individuals can ask for this too].
  • Erasure of the whole dataset used to develop the AI model and/or the AI model itself (depending on the facts, having regard to the proportionality of the measure and, e.g., the possibility of retraining).
  • The SAs will consider, among other elements, the risks raised for the data subjects, the gravity of the infringement, the technical and financial feasibility of the measure, as well as the volume of personal data involved.
Unlawful processing by the developer may also have consequences for the deployer (depending on potential risks to individuals).

Read more at Privacy Compliance & Data Security.





Tools and Techniques.

https://www.zdnet.com/article/how-to-use-chatgpt-to-summarize-a-book-article-or-research-paper/

How to use ChatGPT to summarize a book, article, or research paper

What you'll need: A device that can connect to the internet, a free (or paid) OpenAI account, and a basic understanding of the article, research paper, or book you want to summarize.
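
The article walks through the chat interface, but if you would rather script the same request, it takes only a few lines with OpenAI’s Python client. A minimal sketch; the model name is a placeholder for whatever your account can access:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, sentences: int = 5) -> str:
    # Ask the model for a faithful, fixed-length summary of the supplied text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model you have
        messages=[
            {"role": "system", "content": "You summarize documents faithfully."},
            {"role": "user", "content": f"Summarize in {sentences} sentences:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(summarize(open("article.txt").read()))
```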



Wednesday, December 18, 2024

Privacy is not just for governments to violate. I wonder what’s next?

https://www.bespacific.com/new-real-estate-platform-lets-homebuyers-check-their-neighbors-political-affiliations/

New real estate platform lets homebuyers check their neighbors’ political affiliations

New York Post: “A new real estate platform is giving homebuyers an unprecedented peek into their potential neighborhoods — revealing everything from political leanings to local demographics — before they even commit to buying. Oyssey, a tech startup soft-launching this month in South Florida and New York City, lets buyers access neighborhood political affiliations based on election results and campaign contributions, along with housing trends and other social data. The platform is betting that today’s buyers care just as much about their neighbors’ values as they do about square footage or modern finishes…

The site operates as a one-stop shop for homebuyers, streamlining the process of browsing listings, signing contracts and communicating with agents — all while integrating block-by-block political and consumer data. Oyssey markets the service to real estate agents and brokers via a subscription model, though buyers can use the platform for free by invitation from their agents. The launch comes at a turbulent time for the real estate industry…”





At what point does security become less expensive than fines for not having security?

https://pogowasright.org/irish-data-privacy-watchdog-fines-meta-e251-million-for-gdpr-failure/

Irish data privacy watchdog fines Meta €251 million for GDPR failure

Euractiv reports:

The fine was issued for a security breach on the social media platform Facebook that started in July 2017 and affected close to three million accounts in the European Economic Area.
“This enforcement action highlights how the failure to build in data protection requirements […] can expose individuals to […] risk to the fundamental rights and freedoms of individuals,” said the Irish DPC deputy commissioner Graham Doyle.
The breach stemmed from a bug in Facebook’s design that allowed unauthorised people using scripts to exploit a vulnerability in Facebook’s code, letting them view profiles of users they should not otherwise have been able to see.
Meta is expected to appeal the decision. “We took immediate action to fix the problem,” said a Meta spokesperson in an email.
Meta discovered the security issue in September 2018, fixed the vulnerability and informed law enforcement authorities.

Read more at Euractiv.  The specific infringements cited by the DPC were as follows:

The DPC’s final decisions noted the following infringements of the GDPR and the resulting fines for each:

  1. Decision 1
    1. Article 33(3) GDPR – By not including in its breach notification all the information required by that provision that it could and should have included. The DPC reprimanded MPIL (Meta Platforms Ireland Limited) for failures in regard to this provision and ordered it to pay administrative fines of €8 million.
    2. Article 33(5) GDPR – By failing to document the facts relating to each breach, the steps taken to remedy them, and to do so in a way that allows the Supervisory Authority to verify compliance. The DPC reprimanded MPIL for failures in regard to this provision and ordered it to pay administrative fines of €3 million.
  2. Decision 2
    1. Article 25(1) GDPR – By failing to ensure that data protection principles were protected in the design of processing systems. The DPC found that MPIL had infringed this provision, reprimanded MPIL, and ordered it to pay administrative fines of €130 million.
    2. Article 25(2) GDPR – By failing in their obligations as controllers to ensure that, by default, only personal data that are necessary for specific purposes are processed. The DPC found that MPIL had infringed these provisions, reprimanded MPIL, and ordered it to pay administrative fines of €110 million.





Be careful what you ask for?

https://economictimes.indiatimes.com/magazines/panache/prof-vs-ai-law-professor-who-chatgpt-accused-of-rape-finds-allegations-chilling-and-ironic/articleshow/116312316.cms

Prof vs AI: Law professor whom ChatGPT accused of rape finds allegations 'chilling and ironic'

… “It fabricated a claim suggesting I was on the faculty at an institution where I have never been, asserted I took a trip I never undertook, and reported an allegation that was entirely false,” he remarked to The Post. “It’s deeply ironic, given that I have been discussing the threats AI poses to free speech.”

The 61-year-old legal scholar became aware of the chatbot's erroneous claim when he received a message from UCLA professor Eugene Volokh, who allegedly asked ChatGPT to provide “five examples” of “sexual harassment” incidents involving professors at U.S. law schools, along with “quotes from relevant newspaper articles.”



Tuesday, December 17, 2024

It’s not just for legal training, I hope.

https://www.bespacific.com/revolutionizing-legal-education-with-ai-the-socratic-quizbot/

Revolutionizing Legal Education with AI: The Socratic Quizbot

AI Law Librarians – Sean Harrington – “I had the pleasure of co-teaching AI and the Practice of Law with Kenton Brice last semester at OU Law. It was an incredible experience. When we met to think through how we would teach this course, we agreed on one crucial component: We wanted the students to get a lot of reps using AI throughout the entire course. That is fairly easy to accomplish for things like research, drafting, and general studying for the course but we hit a roadblock with the assessment component. I thought about it for a week and said, “Kenton, what if we created an AI that would Socratically quiz the students on the readings each week?” His response was, “Do you think you can do that?” I said, “I don’t know but I’ll give it a try.” Thus Socratic Quizbot was born. If you follow me on social media, you’ve probably seen me soliciting feedback on the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4975804”





Another result of AI mirroring what it finds in training data?

https://www.bespacific.com/inescapable-ai/

Inescapable AI

A Report from TechTonic Justice – Inescapable AI The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive – “The use of artificial intelligence, or AI, by governments, landlords, employers, and other powerful private interests restricts the opportunities of low-income people in every basic aspect of life: at home, at work, in school, at government offices, and within families. AI technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities. As such, now is a critical moment to take stock and correct course before AI of any level of technical sophistication becomes entrenched as a legitimate way to make key decisions about the people society marginalizes. Employing a broad definition of AI, this report represents the first known effort to comprehensively explain and quantify the reach of AI-based decision-making among low-income people in the United States. It establishes that essentially all 92 million low-income people in the U.S. states—everyone whose income is less than 200 percent of the federal poverty line—have some basic aspect of their lives decided by AI.”





Probably right about rights.

https://pogowasright.org/why-individual-rights-cant-protect-privacy/

Why Individual Rights Can’t Protect Privacy

Law professor and privacy law scholar Dan Solove recently wrote:

Today, the California Privacy Protection Agency (CPPA) published a large advertisement in the San Francisco Chronicle encouraging people to exercise their privacy rights. “The ball is in your court,” the ad declared. (H/T Paul Schwartz)
While I admire the CPPA’s effort to educate, the notion that the ball is in the individuals’ court is not a good one. This puts the onus on individuals to protect their privacy when they are ill-equipped to do so and then leads to blaming them when they fail to do so.
I wrote an article last year about how privacy laws rely too much on rights, which are not an effective way to bring data collection and use under control: The Limitations of Privacy Rights, 98 Notre Dame Law Review 975 (2023).
Individual privacy rights are often at the heart of information privacy and data protection laws. Unfortunately, rights are often asked to do far more work than they are capable of doing.

Read more of his post on LinkedIn.





Speedy?

https://www.reuters.com/technology/meta-pay-32-mln-it-settles-facebook-quiz-apps-privacy-breach-2024-12-17/

Facebook-parent Meta settles with Australia's privacy watchdog over Cambridge Analytica lawsuit

Meta Platforms has agreed to a A$50 million settlement ($31.85 million), Australia's privacy watchdog said on Tuesday, closing long-drawn, expensive legal proceedings for the Facebook parent over the Cambridge Analytica scandal.

The breaches were first reported by the Guardian in early 2018, and Facebook received fines from regulators in the United States and the UK in 2019.

Australia's privacy regulator has been caught up in the legal battle with Meta since 2020.



Monday, December 16, 2024

We’re here to protect you, need it or not.

https://pogowasright.org/schools-using-ai-to-send-police-to-students-homes/

Schools Using AI to Send Police to Students’ Homes

Victor Tangermann reports:

Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves and sending the cops to their homes as a result — with often chaotic and traumatic results.
As the New York Times reports, software being installed on high school students’ school-issued devices tracks every word they type. An algorithm then analyzes the language for evidence of teenagers wanting to harm themselves.
Unsurprisingly, the software can get it wrong by woefully misinterpreting what the students are actually trying to say. A 17-year-old in Neosho, Missouri, for instance, was woken up by the police in the middle of the night.

Read more at The Byte.
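
The failure mode is easy to picture: a matcher that scores text for alarming phrases has no sense of context, so an essay about Romeo and Juliet can trip the same wire as a genuine cry for help. A toy illustration of that context blindness; the real products are proprietary, and this is in no way their code:

```python
ALARM_PHRASES = {"kill myself", "end it all", "want to die"}

def flags_self_harm(text: str) -> bool:
    # Naive substring matching -- the kind of context-blind rule that
    # generates false positives on fiction, lyrics, and history homework.
    lowered = text.lower()
    return any(phrase in lowered for phrase in ALARM_PHRASES)

print(flags_self_harm("My essay asks why Romeo chose to end it all."))  # True (false positive)
print(flags_self_harm("See you at band practice tomorrow."))            # False
```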





All we had was guns and knives…

https://www.nytimes.com/2024/12/15/technology/school-fight-videos-student-phones.html?unlocked_article_code=1.hk4.R7hc.vHX7olgtFWq3&smid=nytcore-ios-share&referringSource=articleShare

An Epidemic of Vicious School Brawls, Fueled by Student Cellphones

Cafeteria melees. Students kicked in the head. Injured educators. Technology is stoking cycles of violence in schools across the United States.





Automatic evasion of automatic license plate readers… Seems fair! (Very James Bond)

https://www.wired.com/story/digital-license-plate-jailbreak-hack/

Hackers Can Jailbreak Digital License Plates to Make Others Pay Their Tolls and Tickets

Digital license plates, already legal to buy in a growing number of states and to drive with nationwide, offer a few perks over their sheet metal predecessors. You can change their display on the fly to frame your plate number with novelty messages, for instance, or to flag that your car has been stolen. Now one security researcher has shown how they can also be hacked to enable a less benign feature: changing a car's license plate number at will to avoid traffic tickets and tolls—or even pin them on someone else.



Sunday, December 15, 2024

Caution.

https://journals.rudn.ru/law/article/view/41937

Prompts for generative artificial intelligence in legal discourse

The development of generative models of artificial intelligence (AI) poses new challenges for legal science and practice. This requires understanding of the legal nature of prompts (queries to AI) and development of appropriate legal regulation. The article aims to determine the legal significance of prompts and outlines the prospects for their research in the context of the interaction between law and AI. The study is based on the analysis of contemporary scientific literature devoted to the problems of legal regulation of AI, as well as investigation of the first cases of the use of generative AI models in legal practice and education. Methods of legal qualification, comparative legal analysis, and legal modeling are applied. Prompts are qualified as legal actions (legal facts in the strict sense), which opens the path to addressing the applicability of copyright criteria to them. The potential and risks of using prompts in legal practice and education are identified, and the need for standardizing prompts and developing specialized methods for teaching lawyers to interact with AI is substantiated. Prompts, as a tool for human-AI interaction, represent a fundamentally important subject of legal research, upon which the prospects for AI application in law largely rely. The article concludes that interdisciplinary and international studies are necessary to unite the efforts of legal professionals, AI specialists, and the generative models themselves in developing optimal legal solutions.





Hopeful?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5049139

AI in the Courts: How Worried Should We Be?

As artificial intelligence (AI) rapidly develops, new digital innovations will likely bring changes across all parts of society. This article comprises a dialogue between three law and technology experts about emerging uses of AI in the legal profession and the court system. The panelists discuss possible applications of AI for improving access to justice for self-represented litigants, streamlining the work of attorneys, and assisting judges in adjudicating cases. The panelists caution against risks associated with emerging uses of AI technology, such as algorithmic hallucinations and biases that can arise from the data on which AI tools are trained. Still, the panelists recognize that AI tools are here to stay. They explain ways that AI can be leveraged to help overcome certain shortcomings of the current legal system. Their dialogue ultimately articulates a vision in which AI can prove beneficial when used within the legal system, so long as steps are taken to ensure these new digital tools meet appropriate standards for privacy and security and deliver results that are sufficiently accurate, unbiased, and transparent.





Perspective.

https://dvkjournals.in/index.php/ah/article/view/4594

Ethics in AI: Worldwide Impacts and Evolving Trends

Artificial Intelligence (AI) is revolutionizing various aspects of society and the burgeoning integration of AI systems into daily life has exacerbated the ethical implications of their deployment worldwide. AI ethics encompasses a wide range of issues, including privacy, bias, accountability, transparency, and the societal consequences of automation. The creation of thorough ethical rules has lagged behind the quick growth of AI technology, creating difficulties in guaranteeing the responsible design and application of AI systems. Because AI systems frequently demand enormous datasets, which may expose sensitive personal information, privacy concerns are raised. The potential of artificial intelligence to deduce facts that people might not have voluntarily disclosed further complicates this problem. AI bias is yet another serious ethical issue, as biases that already exist in the data that AI systems are trained on have the potential to be reinforced by these systems, which leads to unfair treatment and discrimination, particularly against marginalized groups. Thus, accountability in AI, which is essential for addressing these ethical concerns, requires legal frameworks to supervise the deployment of AI. In AI ethics, transparency is equally essential. Globally, the approach to AI ethics varies significantly across different regions. Thus, the paper examines the global impacts and evolving trends in AI ethics, exploring the balance between technological advancement and moral responsibility, as well as the role of international cooperation in addressing AI ethics. The establishment of global standards and agreements can harmonize ethical practices and ensure that AI benefits are distributed equitably. Thus, the ethical implications of AI are complex and multifaceted, requiring a coordinated effort from governments, industry, and civil society. Measures for the use of AI’s advantages while reducing its risks by tackling the issues of privacy, bias, accountability, transparency and societal effect are the need of the hour.





Update.

https://pogowasright.org/michigan-senate-passes-michigan-personal-data-privacy-act/

Michigan Senate Passes Michigan Personal Data Privacy Act

EPIC reports:

The Michigan Senate voted during the final days of its session to pass SB 659, the Michigan Personal Data Privacy Act. The bill now goes to the House of Representatives for consideration.
The bill includes many strong protections for Michiganders, including a ban on the sale of sensitive data, a prohibition on targeted advertising to minors, and strong civil rights protections. Importantly, the bill also includes a data minimization provision limiting what personal data companies can collect about consumers to only what is reasonably necessary for the product or service the consumer requests.
EPIC testified in support of the bill and urges Michigan Representatives to also vote to pass this bill in the House’s remaining few days in session. Michigan’s legislative session ends December 19.