Could be important.
https://www.insideprivacy.com/data-privacy/colorado-privacy-act-amended-to-include-biometric-data-provisions/
Colorado Privacy Act Amended To Include Biometric Data Provisions
On May 31, 2024, Colorado Governor Jared Polis signed HB 1130 into law.
This legislation amends the Colorado Privacy Act to add specific
requirements for the processing of an individual’s biometric data.
This law does not have a private right of action.
Similar to the Illinois Biometric Information Privacy Act (BIPA), this law
requires controllers to provide notice and obtain consent prior to
the collection or processing of a biometric identifier. The law also
prohibits controllers from selling or disclosing biometric
identifiers unless the customer consents or unless disclosure is
necessary to fulfill the purpose of collection, to complete a
financial transaction, or is required by law.
How much data needs to be removed before reality drifts away?
https://www.bespacific.com/if-google-kills-news-media-who-will-feed-the-ai-beast/
If Google Kills News Media, Who Will Feed the AI Beast?
Vanity Fair [unpaywalled]
– “Summarization tools from OpenAI and Google offer a CliffsNotes
version of journalism that may further dumb down public discourse and
deliver a brutal blow to an already battered media business…we’re
on the cusp of a similar phenomenon with the new wave of AI
summarization tools being launched by OpenAI, Google, and Facebook.
These tools, though impressive in their ability to distill
information, are just a few steps away from creating an “Irtnog”-like
reality, where the richness of human knowledge and depth of
understanding are reduced to bite-size, and sometimes dangerously
inaccurate, summaries for our little brains to consume on our tiny
devices. Case in point, this month Google launched several new
AI-powered features for its search engine. One of the most notable
additions is the AI Overviews feature, which provides AI-generated
summaries at the top of search results. Essentially, that’s a
fancy way of saying AI will summarize search results for you, because
apparently reading anything that is not a summary is just too much
effort these days. For news publishers, this
is—understandably!—quite worrisome. Over the past three decades,
tech companies have systematically helped siphon off the advertising
revenue that once supported robust journalism, as advertisers have
flocked to the targeted offerings of social media and search
platforms. At the same time, the proliferation of free news content
aggregated by tech giants (ahem, Google News) has made it
increasingly difficult for news outlets to attract and retain paying
subscribers. As such, the publishing industry has been declining
since the early 2000s, when the real tech companies were separated
from the chaff of the dot-com bubble, with
newspaper revenues falling by
more than 50% over the past two decades…”
Perhaps the horror isn’t so horrible? (Will they ask the AI to take the stand?)
https://www.bespacific.com/11th-circuit-judge-admits-to-using-chatgpt-to-help-decide-a-case/
11th Circuit Judge Admits to Using ChatGPT to Help Decide a Case
e-discovery Team: Urges Other Judges and Lawyers to Follow Suit:
“The Eleventh Circuit published a groundbreaking Concurring
Opinion on May 28, 2024 by Judge
Kevin C. Newsom on
the use of generative AI to help decide contract interpretation
issues. Snell
v. United Specialty Ins. Co.,
2024 U.S. App. LEXIS 12733 *; _ F.4th _ (11th Cir., 05/28/24). The
case in question centered around interpretation of an insurance
policy. Circuit Judge Kevin C. Newsom not only admits to using
ChatGPT to help him make his decision, but praises its utility and
urges other judges and lawyers to do so too. His analysis is
impeccable and his writing is superb. That is bold judicial
leadership – Good News. I love his opinion and bet that you will
too. The only way to do the Concurring Opinion justice is to quote
all of it, all 6,485 words. I know that’s a lot of words, but
unlike ChatGPT, which is a good writer, Judge Newsom is a great
writer.
Judge Kevin C. Newsom, a
Harvard law graduate from Birmingham, Alabama, is creative in his
wise and careful use of AI. Judge Newsom added photos to his opinion
and, as I have been doing recently in my articles, quoted in full the
transcripts of the ChatGPT sessions he relied upon. He leads by
doing and his analysis is correct, including especially his
commentary on AI and human hallucinations…”
Perspective.
How to benefit from lies about you even if they are true.
https://www.bespacific.com/the-liars-dividend-the-impact-of-deepfakes-and-fake-news-on-politician-support-and-trust-in-media/
The Liar’s Dividend: The Impact of Deepfakes and Fake News on
Politician Support and Trust in Media
“This project, The Liar’s Dividend: Can Politicians Claim
Misinformation to Evade Accountability?, is joint work between the
Georgia Institute of Technology and Emory
University. While previous work has addressed the direct effects of
misinformation, we
propose to study the phenomenon of misinformation about
misinformation,
or politicians “crying wolf” over fake news. We argue that
strategic and false allegations of misinformation (i.e., fake news
and deepfakes) benefit politicians by helping them maintain support
in the face of information damaging to their reputation. This
concept is known as the “liar’s dividend” (Chesney and Citron
2018) and suggests that some politicians profit from an informational
environment saturated with misinformation. While previous
scholarship has demonstrated that the direct effects of
misinformation may be overstated (Lazer et al. 2018, Little 2018),
the more subtle indirect effects of misinformation may be even more
concerning. Therefore, we aim to assess the extent of the harms to
political accountability and trust in media posed by the liar’s
dividend. Importantly, our study will also evaluate which
“protective factors,” such as media literacy, help to insulate
against this form of misinformation. We posit that the payoffs from
the liar’s dividend work through two theoretical channels. First,
the allegation of a deepfake or fake news can produce informational
uncertainty.
After learning of a political scandal, a member of the public will
be more likely to downgrade their evaluation of the politician or to
think that the politician is a “bad type.” However, if the
politician then issues a statement disclaiming the story and alleging
foul play by the opposition in the form of a deepfake or fake news,
then some members of the public may be more uncertain about what to
believe. Compared to a counterfactual where the politician makes no
such allegation, we think claims of a deepfake or fake news will
result in a unidirectional shift in average evaluations of the
politician in the positive direction, along with an associated
increased variance (a reflection of increased uncertainty). Second,
an allegation of a deepfake or fake news can provide rhetorical
cover.
To avoid cognitive dissonance, core supporters or strong
co-partisans may be looking for an “out” or a motivated reason
(Taber and Lodge 2006) to maintain support for their preferred
politician in the face of a damaging news story. This rhetorical
strategy also employs a “devil shift” (Sabatier, Hunter and
McLaughlin 1987) where politicians not only signal their own
innocence, but also criticize political opponents and the media,
prompting supporters to rally against the opposition. To evaluate
these potential impacts of the liar’s dividend and the channels
through which the liar’s dividend bestows its benefits, we use a
survey experiment to randomly assign vignette treatments detailing
embarrassing or scandalous information about American politicians to
American citizens. Our study design, treatments, outcomes,
covariates, estimands, and analysis strategy are described in more
detail in our pre-analysis plan, which was pre-registered with
EGAP/OSF.”