About time?
https://www.brookings.edu/research/geopolitical-implications-of-ai-and-digital-surveillance-adoption/
Geopolitical implications of AI and digital surveillance adoption
… The United States and partner democracies have implemented sanctions, export controls, and investment bans to rein in the unchecked spread of surveillance technology, but the opaque nature of supply chains leaves it unclear how well these efforts are working. A major remaining vacuum is at the international standards level at institutions such as the United Nations’ International Telecommunication Union (ITU), where Chinese companies have been the lone proposers of facial recognition standards that are fast-tracked for adoption in broad parts of the world.
To continue addressing these policy challenges, this brief provides five recommendations for democratic governments and three for civil society.
Does better/newer/other technology make this more concerning? I think not.
https://www.pogowasright.org/cops-will-be-able-to-scan-your-fingerprints-with-a-phone/
Cops Will Be Able to Scan Your Fingerprints With a Phone
Matt Burgess reports:
For more than 100 years, recording people’s fingerprints has involved them pressing their fingertips against a surface. Originally this involved ink but has since moved to sensors embedded in scanners at airports and phone screens. The next stage of fingerprinting doesn’t involve touching anything at all.
So-called contactless fingerprinting technology uses your phone’s camera and image processing algorithms to capture people’s fingerprints. Hold your hand in front of the camera lens and the software can identify and record all the lines and swirls on your fingertips.
The technology, which has been in development for years, is ready to be more widely used in the real world. This includes use by police—a move that worries civil liberty and privacy groups.
Read more at WIRED.
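The WIRED excerpt doesn’t disclose the actual algorithms vendors use, but the core idea it describes—turning an ordinary photo of a fingertip into a ridge pattern—can be sketched in a few lines. The following is a purely illustrative toy, assuming only that some local contrast normalization and binarization step sits at the heart of the pipeline; real systems add finger detection, perspective correction, and minutiae matching, none of which is shown here.

```python
import numpy as np

def enhance_and_binarize(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Toy ridge extraction: for each small tile, mark pixels darker
    than the tile's local mean as 'ridge' pixels. Local (rather than
    global) thresholding is what lets this survive the uneven lighting
    of a handheld phone photo."""
    img = image.astype(float)
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile < tile.mean()
    return out

# Synthetic "finger photo": parallel sinusoidal ridges plus a slow
# brightness gradient standing in for uneven lighting.
yy, xx = np.mgrid[0:64, 0:64]
photo = 128 + 60 * np.sin(xx / 3.0) + 0.5 * yy
ridges = enhance_and_binarize(photo)
print(ridges.shape)  # binary ridge map, same size as the input
```

Even this crude version illustrates why the technique worries privacy groups: no contact, no consent signal, and nothing more exotic than a camera frame and a thresholding loop is needed to begin recovering the pattern.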
This technique has been used before…
https://www.pogowasright.org/the-stolen-sip/
The Stolen Sip
A New York appellate court expunges a teen’s DNA sample, which was obtained by police who gave him a cup of water before taking it for DNA testing without his knowledge or consent.
Read the ruling In the Matter of Francis O.
Source: Courthouse News, via Joe Cadillic
Intelligence gathering tools are whatever works for you…
https://www.protectprivacynow.org/news/eye-opening-report-on-how-coffee-makers-could-spy-on-you
Eye-Opening Report on How Coffee Makers Could Spy on You
In the early post-Cold War era, anti-Communist crusaders were often accused of being hysterical, seeing Communists under their beds. Now a report from Christopher Balding and Joe Wu, researchers at New Kite Data Labs, sees the Chinese Communist Party inside coffee makers in American homes. And they are not crazy.
This alarming report is a consequence of the Internet of Things (IoT), in which ordinary appliances are given smart applications to interact with each other, as well as to report on performance and consumer behavior. According to Balding, interviewed by The Washington Times, Chinese-made coffee makers gather and report information about customers’ names, their locations, usage patterns and other information. In hotels, a coffee maker could report to China types of payments and routing information.
Similar issues have been found with vacuum cleaners that respond to voice commands, baby monitors and video doorbells.
The Chinese government has famously built a “panopticon,” a ubiquitous surveillance network that seamlessly integrates facial recognition, social media activities, payments, and other data to potentially track every citizen of that country. IoT, by design but mostly by technological evolution, is rapidly scaling the capacity to bring universal surveillance into the homes of the world.
Have we been thinking too small?
https://www.bespacific.com/what-makes-data-personal/
What Makes Data Personal?
Montagnani, Maria Lillà and Verstraete, Mark, What Makes Data Personal? (June 4, 2022). UC Davis Law Review, Vol. 56, No. 3, Forthcoming 2023. Available at SSRN: https://ssrn.com/abstract=4128080 or http://dx.doi.org/10.2139/ssrn.4128080
“Personal data is an essential concept for information privacy law. Privacy’s boundaries are set by personal data: for a privacy violation to occur, personal data must be involved. And an individual’s right to control information extends only to personal data. However, current theorizing about personal data is woefully incomplete. In light of this incompleteness, this Article offers a new conceptual approach to personal data. To start, this Article argues that personal data is simply a legal construct that describes the set of information or circumstances where an individual should be able to exercise control over a piece of information. After displacing the mythology about the naturalness of personal data, this Article fashions a new theory of personal data that more adequately tracks when a person should be able to control specific information. Current approaches to personal data rightly examine the relationship between a person and information; however, they misunderstand what relationship is necessary for legitimate control interests. Against the conventional view, this Article suggests that how the information is used is an indispensable part of the analysis of the relationship between a person and data that determines whether the data should be considered personal. In doing so, it employs the philosophical concept of separability as a method for making determinations about which uses of information are connected to a person and, therefore, should trigger individual privacy protections and which are not. This framework offers a superior foundation to extant theories for capturing the existence and scope of individual interests in data. By doing so, it provides an indispensable contribution for crafting an ideal regime of information governance. Separability enables privacy and data protection laws to better identify when a person’s interests are at stake. And further, separability offers a resilient normative foundation for personal data that grounds interests of control in a philosophical foundation of autonomy and dignity values—which are incorrectly calibrated in existing theories of personal data. Finally, this Article’s reimagination of personal data will allow privacy and data protection laws to more effectively combat modern privacy harms such as manipulation and inferences.”
How Microsoft has changed…
https://blogs.microsoft.com/on-the-issues/2022/06/21/microsofts-framework-for-building-ai-systems-responsibly/
Microsoft’s framework for building AI systems responsibly
Today we are sharing publicly Microsoft’s Responsible AI Standard, a framework to guide how we build AI systems.
It is an important step in our journey to develop better, more trustworthy AI. We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
(Related)
https://www.theregister.com/2022/06/21/microsoft_api_ai_algorithms/
Microsoft promises to tighten access to AI it now deems too risky for some devs
Deep-fake voices, face recognition, emotion, age and gender prediction ... A toolbox of theoretical tech tyranny
“What's in a name? That which we call a rose by any other name would smell just as sweet.” Did Shakespeare have the right idea? (Perhaps he was an AI?)
https://theconversation.com/from-ais-to-an-unhappy-elephant-the-legal-question-of-who-is-a-person-is-approaching-a-reckoning-185268
From AIs to an unhappy elephant, the legal question of who is a person is approaching a reckoning
Happy the elephant’s story is a sad one. She is currently a resident of the Bronx Zoo in the US, where the Nonhuman Rights Project (a civil rights organisation) claims she is subject to unlawful detention. The campaigners sought a writ of habeas corpus on Happy’s behalf to request that she be transferred to an elephant sanctuary.
Historically, this ancient right, which offers recourse to someone being detained illegally, had been limited to humans. A New York court previously decided that it excluded non-human animals. So if the courts wanted to find in Happy’s favour, they would first have to agree that she was legally a person.
It was this question that made its way to the New York Court of Appeals, which published its judgment on June 14. By a 5-2 majority, the judges sided with the Bronx Zoo. Chief Judge DiFiore held that Happy was not a person for the purposes of a writ of habeas corpus, and the claim was rejected. As a researcher who specialises in the notion of legal personhood, I’m not convinced by their reasoning.
DiFiore first discussed what it means to be a person. She did not dispute that Happy is intelligent, autonomous and displays emotional awareness. These are things that many academic lawyers consider sufficient for personhood, as they suggest Happy can benefit from the freedom protected by a writ of habeas corpus. But DiFiore rejected this conclusion, signalling that habeas corpus “protects the right to liberty of humans because they are humans with certain fundamental liberty rights recognised by law”. Put simply, whether Happy is a person is irrelevant, because even if she is, she’s not human.
(Related)
https://www.newsweek.com/soon-humanity-wont-alone-universe-opinion-1717446
Soon, Humanity Won't Be Alone in the Universe | Opinion
… Even if humanity has been alone in this galaxy, till now, we won't be for very much longer. For better or worse, we're about to meet artificial intelligence — or AI — in one form or another. Though, alas, the encounter will be murky, vague, and fraught with opportunities for error.
(Related)
https://www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/
Opinion: We warned Google that people might believe AI was sentient. Now it’s happening.
By Timnit Gebru and Margaret Mitchell
A Post article by Nitasha Tiku revealed last week that Blake Lemoine, a software engineer working in Google’s Responsible AI organization, had made an astonishing claim: He believed that Google’s chatbot LaMDA was sentient. “I know a person when I talk to it,” Lemoine said. Google had dismissed his claims and, when Lemoine reached out to external experts, put him on paid administrative leave for violating the company’s confidentiality policy.
But if that claim seemed like a fantastic one, we were not surprised someone had made it. It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.
Perspective.
https://www.bespacific.com/how-to-future/
How to Future
Via LLRX – How to Future – Kevin Kelly is a Web maverick and, by his own definition, a futurist. This discipline is made up of really keen historians who study the past to see the future. They look carefully at the past because most of what will happen tomorrow is already happening today. In addition, most of the things in the future will be things that don’t change, so they are already here. The past is the bulk of our lives, and it will be the bulk in the future. It is highly likely that in 100 years or even 500 years, the bulk of the stuff surrounding someone will be old stuff, stuff that is being invented today. All this stuff, plus our human behaviors, which are very old, will continue in the future.