Someone must have created a procedure/checklist to follow to avoid things like 'forgetting to redact.' Is there no management supervision here?
White House Tells EPIC to Delete COVID-19 Records, EPIC Declines
I usually post items from EPIC.org over on PogoWasRight.org, but this one gets posted as a government breach on this site, too.
In an unusual development, the White House directed EPIC this week to delete a set of records that EPIC recently obtained from the Office of Science & Technology Policy—a request which EPIC declined. On Tuesday, EPIC published hundreds of records about the White House's response to the COVID-19 pandemic and proposals to use location data for public health surveillance (1, 2, 3, 4). Hours later, a White House attorney sent EPIC a letter "order[ing]" EPIC "to immediately cease using and disclosing" one set of records and to "destroy all electronic copies." The letter stated that OSTP had "inadvertently and erroneously" provided EPIC with an unredacted copy of the records. Although EPIC voluntarily decided to redact personal contact information contained in the documents, EPIC informed the OSTP that it would still make the records available to the public. Under the Freedom of Information Act, a federal agency is not entitled to "claw back" a record that it discloses to a requester.
EPIC has filed numerous FOIA requests concerning the federal government's COVID-19 response and has compiled a resource page about privacy and the pandemic.
It's good to see that Yasmin is keeping up.
Privacy Shield Invalidated
Can we agree on ethics?

A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and Big Data
To date, more than 80 codes exist for handling ethical risks of artificial intelligence and big data. In this paper, we analyse where those codes converge and where they differ. Based on an in-depth analysis of 20 guidelines, we identify three procedural action types (1. control and document, 2. inform, 3. assign responsibility) as well as four clusters of ethical values whose promotion or protection is supported by the procedural activities. We achieve a synthesis of previous approaches with a framework of seven principles, combining the four principles of biomedical ethics with three distinct procedural principles: control, transparency and accountability.
Cute title.
AI Report: Humanity Is Doomed. Send Lawyers, Guns, and Money!
AI systems are powerful technologies being built and implemented by private corporations motivated by profit, not altruism. Change makers, such as attorneys and law students, must therefore be educated on the benefits, detriments, and pitfalls of the rapid spread and often secret implementation of this technology. The implementation is secret because private corporations place proprietary AI systems inside of black boxes to conceal what is inside. If they did not, the popular myth that AI systems are unbiased machines crunching inherently objective data would be revealed as a falsehood. Algorithms created to run AI systems reflect the inherent human categorization process and can, in some respects, become a lazy way to interact with the world, because the systems attempt to outsource the unparalleled cognitive skills of a human being into a machine. AI systems can also be extremely dangerous because human categorization processes can be flawed by bias (explicit or implicit), racism, and sexism.
Another AI perspective.
John Allen and Darrell West discuss artificial intelligence on The Lawfare Podcast
Darrell West and John Allen recently spoke about their new Brookings book "Turning Point: Policymaking in the Era of Artificial Intelligence" with Benjamin Wittes on The Lawfare Podcast. Darrell West is a senior fellow in the Center for Technology Innovation and the vice president and director of Governance Studies at the Brookings Institution. John Allen is the president of Brookings and a retired U.S. Marine Corps four-star general. In this podcast episode, West and Allen describe what AI is, how it is being deployed, why people are anxious about it, and what we can do to move forward.
If I write the AI you use to create (whatever), can I claim the copyright?
Technical Elements of Machine Learning for Intellectual Property Law
Recent advances in artificial intelligence (AI) technologies have transformed our lives in profound ways. Indeed, AI has not only enabled machines to see (e.g., face recognition), hear (e.g., music retrieval), speak (e.g., speech synthesis), and read (e.g., text processing), but also, so it seems, given machines the ability to think (e.g., board game-playing) and create (e.g., artwork generation). This chapter introduces the key technical elements of machine learning (ML), which is a rapidly growing sub-field in AI and drives many of the aforementioned applications. The goal is to elucidate the ways human efforts are involved in the development of ML solutions, so as to facilitate legal discussions on intellectual property issues.
Robot crimes.
CIVIL LIABILITY AND ARTIFICIAL INTELLIGENCE: WHO IS RESPONSIBLE FOR DAMAGES CAUSED BY AUTONOMOUS INTELLIGENT SYSTEMS?
The article analyzes questions of civil liability in cases where damage is caused by systems equipped with artificial intelligence. To that end, the study examines whether the autonomous system itself can be held responsible for the damage, as well as the essential requirements for analyzing civil liability in these cases. The article also seeks to understand how exclusions of civil liability operate in such situations, and considers Bills of Law nº 5.051/2019 and nº 5.691/2019, currently in progress in the Brazilian National Congress, which address principles for the use of autonomous intelligence and incentives for the development of new technologies in Brazilian territory. The study emphasizes that liability rules need to strike a balance between protecting citizens from possible damage arising from activities carried out by an artificial intelligence system and allowing technological innovation. The methodology used was qualitative, with bibliographic and documentary research, as well as data collected from international organizations and published on the internet.
Interesting. Regulation ain't easy...
Regulating technology
Technology was a small industry until very recently. It was exciting and interesting, and it was on lots of magazine covers, but it wasn't actually an important part of most people's lives. When Bill Gates was on every magazine cover, Microsoft was a small company that sold accounting tools to big companies. When Netscape kicked off the consumer internet in 1994, there were only 100m or so PCs on earth, and most of them were in offices. Today 4bn people have a smartphone - three quarters of all the adults on earth. In most developed countries, 90% of the adult population is online.
… The trouble is, when tech becomes the world, all of tech's problems matter much more, because they become so much bigger and touch so many more people; and in parallel all of the problems that society had already are expressed in this new thing, and are amplified and changed by it, and channeled in new ways. When you connect all of society, you connect all of society's problems as well. You connect all the bad people, and more importantly you connect all of our own worst instincts. And then, of course, all of these combine and feed off each other, and generate new externalities. The internet had hate speech in 1990, but it didn't affect elections, and it didn't involve foreign intelligence agencies.