Eventually someone will get it right.
https://fpf.org/blog/global/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/
OAIC’s Dual AI Guidelines Set New Standards for Privacy Protection in Australia
On 21 October 2024, the Office of the Australian Privacy Commissioner (OAIC) released two sets of guidelines (collectively, “Guidelines”), one for developing and training generative AI systems and the other for deploying commercially available “AI products”. This marks a shift in OAIC’s regulatory approach from enforcement-focused oversight to proactive guidance.
The Guidelines establish rigorous requirements under the Privacy Act and its 13 Australian Privacy Principles (APPs), particularly emphasizing accuracy, transparency, and heightened scrutiny of data collection and secondary use. Notably, the Guidelines detail conditions that must be met for lawfully collecting personal information publicly available online for purposes of training generative AI, including through a detailed definition of what “fair” collection means.
This regulatory development aligns with Australia’s broader approach to AI governance, which prioritizes technology-neutral existing laws and voluntary frameworks while reserving mandatory regulations for high-risk applications. However, it may signal increased regulatory scrutiny of AI systems processing personal information going forward.
This blog post summarizes the key aspects of these Guidelines, their relationship to Australia’s existing privacy law, and their implications for organizations developing or deploying AI systems in Australia.
Something to keep in mind?
https://databreaches.net/2024/12/18/defending-data-breach-class-actions/
Defending Data Breach Class Actions
Mark P. Henriques of Womble Bond Dickinson has a content-rich post for defense lawyers:
Class actions arising from data breaches represented the fastest-growing segment of class action filings. In 2023, more than 2,000 class actions were filed, more than triple the number filed in 2022. These cases were filed in federal and state courts across the country, with California receiving the largest number of filings. High-profile cases like the $52 million penalty that Marriott agreed to pay in October 2024 highlight the regulatory scrutiny and legal challenges companies face. A Capitology study of 28 cases showed an average stock price drop of 7.27% following announcement of a data breach. Financial companies saw a 17% decrease within the first 16 trading days following a breach. For board members of a public company, it is crucial to understand the strategies for preventing breaches and defending against the class actions that follow.
[…]
To date, the primary targets for data breach class actions have been credit rating agencies, financial institutions, and health care providers. Plaintiff’s counsel target these industries both because the data they collect is typically highly confidential and because there are often federal or state regulations which help establish a standard of care.
Some state legislatures have grown concerned about the wave of data breach class actions. One particularly interesting development is a 2024 Tennessee statute, Public Chapter 991, which establishes a heightened liability standard for class actions arising from cybersecurity events. The statute appears to be designed to protect the healthcare industry, a mainstay of the Tennessee economy. The bill requires plaintiffs to establish that the cybersecurity event was “caused by the willful and wanton misconduct or gross negligence on the part of the private entity.” Both Florida and West Virginia have considered similar measures. Other states may follow suit.
Read more about specific cases and bases for defense at Womble Bond Dickinson.
Not much of a threat…
https://pogowasright.org/what-happens-if-an-ai-model-is-developed-with-unlawfully-processed-personal-data/
What Happens If an AI Model Is Developed With Unlawfully Processed Personal Data
Odia Kagan of Fox Rothschild writes:
The European Data Protection Board recently issued an opinion on AI models, shedding light on what the consequences of unlawful processing of personal data during the development phase of an AI model could be for the subsequent processing or operation of that model.
Possible remedies: Up to and including model deletion
Supervisory authorities may impose:
- A fine.
- A temporary limitation on the processing.
- Erasure of the part of the dataset that was processed unlawfully.
- Deletion of the data of certain data subjects (ex officio) [individuals can ask for this too].
- Erasure of the whole dataset used to develop the AI model and/or of the AI model itself (depending on the facts, having regard to the proportionality of the measure and, e.g., the possibility of retraining).
The SAs will consider, among other elements, the risks raised for the data subjects, the gravity of the infringement, the technical and financial feasibility of the measure, as well as the volume of personal data involved.
Unlawful processing by the developer may also have consequences for the deployer (depending on the potential risks to individuals).
Read more at Privacy Compliance & Data Security.
Tools and Techniques.
https://www.zdnet.com/article/how-to-use-chatgpt-to-summarize-a-book-article-or-research-paper/
How to use ChatGPT to summarize a book, article, or research paper
… What you'll need: A device that can connect to the internet, a free (or paid) OpenAI account, and a basic understanding of the article, research paper, or book you want to summarize.
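For those who prefer scripting over the chat interface, the same summarization workflow can be sketched with the OpenAI Python SDK. This is a minimal illustration, not taken from the ZDNet article: the model name, prompt wording, and chunk size are assumptions, and long inputs are split into pieces so each fits comfortably in the model's context window.

```python
# Hedged sketch: summarize a long text via the OpenAI API.
# Assumptions (not from the article): model "gpt-4o-mini", a simple
# 3-bullet prompt, and ~8,000-character chunks split on paragraphs.

def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into pieces of at most ~max_chars on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize(text: str) -> str:
    """Summarize each chunk with the model, then join the partial summaries."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    partials = []
    for chunk in chunk_text(text):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model works
            messages=[{
                "role": "user",
                "content": f"Summarize this in 3 bullet points:\n\n{chunk}",
            }],
        )
        partials.append(resp.choices[0].message.content)
    return "\n".join(partials)
```

For a book-length input, the joined partial summaries can themselves be passed back through `summarize` for a final condensed pass, mirroring the chunk-then-recombine approach people use manually in the chat interface.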