I worry that eventually robots will make all the rules.
https://www.brookings.edu/research/robotic-rulemaking/
Robotic rulemaking
…
Rulemaking by federal agencies is a very text-intensive process, both in writing the rules themselves, which express not only the law but also the agencies’ rationales for their regulatory choices, and in the public comments, which arrive almost exclusively in the form of text. How might generative AI intersect with rulemaking? In this essay, we work through some use cases for generative AI in the rulemaking process, for better and for worse, both for the public and for federal agencies.
(Related)
Even if the AI is better at war than the admirals?
https://gcaptain.com/us-navy-admiral-says-ai-warships-must-obey/
US Navy Admiral Says AI Warships ‘Must Obey’
This week marked significant AI-related announcements for the US Navy at the annual Sea-Air-Space conference. Chief of Naval Operations (CNO) Mike Gilday announced increased investments in artificial intelligence software and autonomous warships. Meanwhile, Marine Corps General Karsten Heckl mentioned that their Warfighting Lab is exploring the integration of AI or autonomy “everywhere”. Military jargon such as “force multiplier” and “game-changing technology” was abundant, but Vice Admiral Scott Conn’s insistence that AI “must obey” stood out as the most powerful statement.
During a session moderated by Defense News journalist Megan Eckstein, Vice Admiral Conn, Deputy Chief of Naval Operations, explained how the US Navy is using technology to simultaneously engage multiple fleets and achieve various objectives. He highlighted that AI is not only transforming warfighting but also addressing long-standing, mundane issues faced by commanders.
Interesting.
Chasing down a surveillance satellite in order to surveil it.
https://techcrunch.com/2023/04/06/true-anomaly-wants-to-train-space-warfighters-with-spy-satellites/
True Anomaly wants to train space warfighters with spy satellites
… Colorado-based True Anomaly was founded last year by a quartet of ex-Space Force members. The company has set out to supply the Pentagon with defensive tech to protect American assets in space and to conduct recon on enemy spacecraft. The startup has developed a technology stack that includes training software and “autonomous orbital pursuit vehicles” that will be able to collect video and other data on objects in space.
Interesting despite the Forrest Gump title.
https://www.pogowasright.org/article-data-is-what-data-does-regulating-use-harm-and-risk-instead-of-sensitive-data/
Article: Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data
Daniel J. Solove has posted a draft of a new article and welcomes feedback.
Abstract:
Heightened protection for sensitive data
is becoming quite trendy in privacy laws around the world.
Originating in European Union (EU) data protection law and included
in the EU’s General Data Protection Regulation (GDPR), sensitive
data singles out certain categories of personal data for extra
protection. Commonly recognized special categories of sensitive data
include racial or ethnic origin, political opinions, religious or
philosophical beliefs, trade union membership, health, sexual
orientation and sex life, biometric data, and genetic data.
Although heightened protection for
sensitive data appropriately recognizes that not all situations
involving personal data should be protected uniformly, the sensitive
data approach is a dead end. The sensitive data categories are
arbitrary and lack any coherent theory for identifying them. The
borderlines of many categories are so blurry that they are useless.
Moreover, it is easy to use non-sensitive data as a proxy for certain
types of sensitive data.
Personal data is akin to a grand
tapestry, with different types of data interwoven to a degree that
makes it impossible to separate out the strands. With Big Data and
powerful machine learning algorithms, most non-sensitive data can
give rise to inferences about sensitive data. In many privacy laws,
data that can give rise to inferences about sensitive data is also
protected as sensitive data. Arguably, then, nearly all personal
data can be sensitive, and the sensitive data categories can swallow
up everything. As a result, most organizations are currently
processing a vast amount of data in violation of the laws.
This Article argues that the problems
with the sensitive data approach make it unworkable and
counterproductive — as well as expose a deeper flaw at the root of
many privacy laws. These laws make a fundamental conceptual mistake
— they embrace the idea that the nature of personal data is a
sufficiently useful focal point for the law. But
nothing meaningful for regulation can be determined solely by looking
at the data itself. Data is what data does. Personal
data is harmful when its use causes harm or creates a risk of harm.
It is not harmful if it is not used in a way to cause harm or risk of
harm.
To be effective, privacy law must focus
on use, harm, and risk rather than on the nature of personal data.
The implications of this point extend far beyond sensitive data
provisions. In many elements of privacy laws, protections should be
based on the use of personal data and proportionate to the harm and
risk involved with those uses.
Solove, Daniel J., Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data (January 11, 2023). 118 Northwestern University Law Review (forthcoming). Available at SSRN: https://ssrn.com/abstract=4322198 or http://dx.doi.org/10.2139/ssrn.4322198
You can download the article for free at the SSRN link above.
What did I consent to?
https://www.pogowasright.org/article-murky-consent-an-approach-to-the-fictions-of-consent-in-privacy-law/
Article: Murky Consent: An Approach to the Fictions of Consent in Privacy Law
On SSRN, this article by Daniel J. Solove:
Abstract
Consent plays a profound role in nearly
all privacy laws. As Professor Heidi Hurd aptly said, consent works
“moral magic” – it transforms things that would be illegal and
immoral into lawful and legitimate activities. Regarding privacy,
consent authorizes and legitimizes a wide range of data collection
and processing.
There are generally two approaches to
consent in privacy law. In the United States, the notice-and-choice
approach predominates, where organizations post a notice of their
privacy practices and then people are deemed to have consented if
they continue to do business with the organization or fail to opt
out. In the European Union, the General Data Protection Regulation
(GDPR) uses the express consent approach, where people must
voluntarily and affirmatively consent.
Both
approaches fail. The evidence of actual consent is
non-existent under the notice-and-choice approach. Individuals are
often pressured or manipulated, undermining the validity of their
consent. The express consent approach also suffers from these
problems – people are ill-equipped to make decisions about their
privacy, and even experts cannot fully understand what algorithms
will do with personal data. Express consent also is highly
impractical; it inundates individuals with consent requests from
thousands of organizations. Express consent cannot scale.
In this Article, I contend that in
most circumstances, privacy consent is fictitious.
Privacy law should take a new approach to consent that I call “murky
consent.” Traditionally, consent has been binary – an on/off
switch – but murky consent exists in the shadowy middle ground
between full consent and no consent. Murky consent embraces the fact
that consent in privacy is largely a set of fictions and is at best
highly dubious.
Abandoning consent entirely in most
situations involving privacy would involve the government making most
decisions regarding personal data. But this approach would be
problematic, as it would involve extensive government control and
micromanaging, and it would curtail people’s autonomy. The law
should allow space for people’s autonomy over their decisions, even
when those decisions are deeply flawed. The law should thus strive
to reach a middle ground, providing a sandbox for free play but with
strong guardrails to protect against harms.
Because it conceptualizes consent as
mostly fictional, murky consent recognizes its lack of legitimacy.
To return to Hurd’s analogy, murky consent is consent without
magic. Instead of providing extensive legitimacy and power, murky
consent should authorize only a very restricted and weak license to
use data. This would allow for a degree of individual autonomy but
with powerful guardrails to limit exploitative and harmful behavior
by the organizations collecting and using personal data. In the
Article, I propose some key guardrails to use with murky consent.
Solove, Daniel J., Murky Consent: An Approach to the Fictions of Consent in Privacy Law (January 22, 2023). 104 Boston University Law Review (forthcoming). Available at SSRN: https://ssrn.com/abstract=4333743 or http://dx.doi.org/10.2139/ssrn.4333743
Download the article for free at the SSRN link above.
...Because it can.
https://www.euronews.com/next/2023/04/07/why-does-chatgpt-make-things-up-australian-mayor-prepares-first-defamation-lawsuit-over-it
Why does ChatGPT make things up? Australian mayor prepares first defamation lawsuit over its content
ChatGPT has caught the world's attention with its ability to instantly generate human-sounding text, jokes and poems, and even pass university exams.
Another of the artificial intelligence (AI) chatbot's characteristics, however, is its tendency to make things up entirely - and it could get OpenAI, the company behind it, in legal trouble.
A regional Australian mayor said this week he may sue OpenAI if it does not correct ChatGPT's false claims that he served time in prison for bribery. If he follows through, it would likely be the first defamation lawsuit against the service, which was launched in November last year.
Does that include access to the hardware required to use it?
https://www.bespacific.com/the-socio-economic-argument-for-the-human-right-to-internet-access/
The Socio-Economic Argument for the Human Right to Internet Access
The Socio-Economic Argument for the Human Right to Internet Access, Politics, Philosophy & Economics (2023). DOI: 10.1177/1470594X231167597
Phys.org: “People around the globe are so dependent on the internet to exercise socioeconomic human rights such as education, health care, work, and housing that online access must now be considered a basic human right, a new study reveals. Particularly in developing countries, internet access can make the difference between people receiving an education, staying healthy, finding a home, and securing employment—or not. Even if people have offline opportunities, such as accessing social security schemes or finding housing, they are at a comparative disadvantage to those with internet access. Publishing his findings today in Politics, Philosophy & Economics, Dr. Merten Reglitz, Lecturer in Global Ethics at the University of Birmingham, calls for a standalone human right to internet access—based on it being a practical necessity for a range of socioeconomic human rights.”