Interesting question to ask before implementing an AI system: "Can we explain this to a jury?" (Can your AI expert explain it?)
https://www.bespacific.com/the-right-to-contest-ai/
The Right to Contest AI
Kaminski, Margot E. and Urban, Jennifer M., The Right to Contest AI (November 16, 2021). Columbia Law Review, Vol. 121, No. 7, 2021, U of Colorado Law Legal Studies Research Paper No. 21-30. Available at SSRN: https://ssrn.com/abstract=3965041
–
“Artificial intelligence (AI) is increasingly used to make important decisions, from university admissions selections to loan determinations to the distribution of COVID-19 vaccines. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability. In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age.

The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries, establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory. This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in-depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.”
(Related)
The first wave of contests?
https://www.bespacific.com/feds-warn-employers-against-discriminatory-hiring-algorithms/
Feds Warn Employers Against Discriminatory Hiring Algorithms
Wired:
“As companies increasingly involve AI in their hiring processes, advocates, lawyers, and researchers have continued to sound the alarm. Algorithms have been found to automatically assign job candidates different scores based on arbitrary criteria like whether they wear glasses or a headscarf or have a bookshelf in the background. Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard. All of this has gone widely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act. “We cannot let these tools become a high-tech pathway to discrimination,” said EEOC chair Charlotte Burrows in a briefing with reporters on Thursday. The EEOC instructs employers to disclose to applicants not only when algorithmic tools are being used to evaluate them but what traits those algorithms assess.
“Today we are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” assistant attorney general for civil rights Kristen Clarke told reporters in the same press conference. “Today we are making clear that we must do more to eliminate the barriers faced by people with disabilities, and no doubt: The use of AI is compounding the long-standing discrimination that job seekers with disabilities face.”
Keeping current.
https://www.theregister.com/2022/05/18/fraud_economy_booms/
State of internet crime in Q1 2022: Bot traffic on the rise, and more
The fraud industry, in some respects, grew in the first quarter of the year, with crooks putting more human resources into some attacks while increasingly relying on bots to carry out things like credential stuffing and fake account creation.
That's according to Arkose Labs, which claimed in its latest State of Fraud and Account Security report that one in four online accounts created in Q1 2022 were fake and used for fraud, scams, and the like.
If I can sign in with a photo, can I hack in the same way?
https://www.cnbc.com/2022/05/17/mastercard-launches-tech-that-lets-you-pay-with-your-face-or-hand.html
Mastercard launches tech that lets you pay with your face or hand in stores
Mastercard is piloting new technology that lets shoppers make payments with just their face or hand at the checkout point.
… The program has already gone live in five St Marche grocery stores in Sao Paulo, Brazil. Mastercard says it plans to roll it out globally later this year.
… To sign up, you take a picture of your face or scan your fingerprint to register it with an app. This is done either on your smartphone or at a payment terminal. You can then add a credit card, which gets linked to your biometric data.
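A rough sketch of what "linked to your biometric data" could mean in practice. This is an illustrative toy, not Mastercard's actual system: the hash stands in for a real (fuzzy) biometric template match, and the random token stands in for payment-network tokenization.

```python
import hashlib
import secrets

# Toy enrollment registry: biometric "template" id -> tokenized card.
registry = {}

def make_template_id(capture: bytes) -> str:
    """Reduce a face/fingerprint capture to an irreversible identifier (stand-in for a real template)."""
    return hashlib.sha256(capture).hexdigest()

def enroll(capture: bytes, card_number: str) -> str:
    """Register a biometric capture and link it to a card token; the raw card number is never stored."""
    token = secrets.token_hex(16)  # stand-in: a real token would be issued by the card network for card_number
    template_id = make_template_id(capture)
    registry[template_id] = token
    return template_id

def checkout(capture: bytes):
    """At the terminal, a fresh capture looks up the linked card token, if any."""
    return registry.get(make_template_id(capture))

# Usage: enroll via the "app", then pay at the "terminal" with the same capture.
enroll(b"face-scan-bytes", "5500 0000 0000 0004")
print(checkout(b"face-scan-bytes"))  # prints the linked card token
```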
Face it, we still have a lot to learn about the use of faces.
https://www.pogowasright.org/letter-to-the-standing-committee-on-access-to-information-privacy-and-ethics-on-their-study-of-the-use-and-impact-of-facial-recognition-technology/
Letter to the Standing Committee on Access to Information, Privacy and Ethics on their Study of the Use and Impact of Facial Recognition Technology
The Privacy Commissioner of Canada, Daniel Therrien, has sent the following letter to the Standing Committee on Access to Information, Privacy and Ethics to provide information requested during his appearance before the Committee on May 2, 2022.
[…]
Recommended legal framework for police use of facial recognition technology
During the appearance, I undertook to provide the committee with a copy of our Recommended legal framework for police agencies’ use of facial recognition, which was issued jointly by Federal, Provincial and Territorial Privacy Commissioners on May 2, 2022. Our recommended framework sets out our views on changes needed to ensure appropriate regulation of police use of facial recognition technology (FRT) in Canada. A future framework should, we believe, establish clearly and explicitly the circumstances in which police use of FRT is acceptable – and when it is not. It should include privacy protections that are specific to FRT use, and it should ensure appropriate oversight when the technology is deployed. While developed specifically for the policing context, there are many elements of our proposed framework that could be leveraged beyond this context.
Best practices for FRT regulation
The committee requested that I provide examples of best practices for regulating FRT from jurisdictions where regulatory frameworks have been enacted or proposed. Several international jurisdictions have enacted or proposed regulatory frameworks for FRT specifically, or biometrics more broadly that would also apply to FRT, which could inspire Canada’s approach. In particular, I would draw your attention to a number of notable measures worthy of consideration: […]
Read the full letter at the Office of the Privacy Commissioner of Canada.
My AI says, “Probably not so.”
https://finance.yahoo.com/news/game-over-google-deepmind-says-133304193.html
‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI
Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google’s DeepMind AI division.
Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.
Described as a “generalist agent”, DeepMind’s new Gato AI needs to just be scaled up in order to create an AI capable of rivalling human intelligence, Dr de Freitas said.
Responding to an opinion piece written in The Next Web that claimed “humans will never achieve AGI”, DeepMind’s research director wrote that it was his opinion that such an outcome is an inevitability.
“It’s all about scale now! The Game is Over!” he wrote on Twitter.
“It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI.”
When asked by machine learning researcher Alex Dimakis how far he believed the Gato AI was from passing a real Turing test – a measure of computer intelligence that requires a human to be unable to distinguish a machine from another human – Dr de Freitas replied: “Far still.”
Closer to safe self-driving cars or a new way to reduce employee headcount?
https://www.cnbc.com/2022/05/17/argo-ai-robotaxis-ditch-human-safety-drivers-in-miami-and-austin.html
Ford-backed robotaxi start-up Argo AI is ditching its human safety drivers in Miami and Austin
Robotaxi start-up Argo AI said Tuesday it has begun operating its autonomous test vehicles without human safety drivers in two U.S. cities — Miami and Austin, Texas — a major milestone for the Ford- and Volkswagen-backed company.
For now, those driverless vehicles won't be carrying paying customers. But they will be operating in daylight, during business hours, in dense urban neighborhoods, shuttling Argo AI employees who can summon the vehicles via a test app.
After that first ethical question…
https://news.mit.edu/2022/living-better-algorithms-sarah-cen-0518
Living better with algorithms
Laboratory for Information and Decision Systems (LIDS) student Sarah Cen remembers the lecture that sent her down the track to an upstream question.
At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.
The speaker’s scenario: Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?
Then the speaker said: Let’s take a step back. Is this the question we should even be asking?
That’s when things clicked for Cen. Instead of considering the point of impact, a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.
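A back-of-the-envelope way to see what "slowed to a speed that would keep everyone safe" means in numbers. This is a toy sketch with assumed reaction-time and braking figures, not how any real self-driving stack selects speed.

```python
import math

def safe_speed(clear_distance_m: float,
               reaction_time_s: float = 1.0,
               max_decel_mps2: float = 6.0) -> float:
    """Largest speed whose stopping distance (reaction travel + braking) fits in the clear distance.

    Solves v*t + v**2 / (2*a) <= d for v; the default figures are illustrative assumptions.
    """
    a, t, d = max_decel_mps2, reaction_time_s, clear_distance_m
    # Positive root of v**2 + 2*a*t*v - 2*a*d = 0
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# If the car only trusts, say, 8 m of clear road ahead in a narrow alley:
v = safe_speed(8.0)
print(f"{v:.1f} m/s ≈ {v * 3.6:.0f} km/h")  # roughly 5.5 m/s, about 20 km/h
```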
… In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits.
To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?
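One way such an audit could be sketched in code, assuming the auditor can sample the vaccine-related items shown to a left-leaning and a right-leaning user group. This is a minimal illustration, not Cen's actual method, and the threshold is an invented tolerance.

```python
from collections import Counter

def exposure_distribution(feeds):
    """Share of impressions each vaccine-related item gets across one group's sampled feeds."""
    counts = Counter(item for feed in feeds for item in feed)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

def total_variation(p, q):
    """Total-variation distance between two exposure distributions (0 = identical, 1 = disjoint)."""
    items = set(p) | set(q)
    return 0.5 * sum(abs(p.get(i, 0.0) - q.get(i, 0.0)) for i in items)

def audit_parity(left_feeds, right_feeds, threshold=0.2):
    """Flag the platform if exposure differs 'vastly' between the two groups.

    `threshold` is an assumed regulatory tolerance, not a real standard.
    """
    distance = total_variation(exposure_distribution(left_feeds),
                               exposure_distribution(right_feeds))
    return {"distance": round(distance, 3), "compliant": distance <= threshold}

# Toy usage: item IDs stand in for vaccine posts shown to sampled users in each group.
left_feeds = [["v1", "v2", "v3"], ["v1", "v2"]]
right_feeds = [["v1", "v3"], ["v3", "v2", "v3"]]
print(audit_parity(left_feeds, right_feeds))  # distance 0.4 exceeds the 0.2 threshold, so non-compliant
```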
Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected. Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
Would you merge your boss’s face with Gandhi or Donald Trump?
https://www.makeuseof.com/tag/morphthing/
How to Morph Faces Online and Create Face Merges With MorphThing
You can have a lot of fun with face mashup tools. Here are some ways to morph two faces online and share them with friends.
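For a rough sense of what sits under these tools, the snippet below simply cross-fades two photos with Pillow. Real morphing services such as MorphThing also warp facial landmarks before blending; this is only a pixel-level alpha blend, and the file names are placeholders.

```python
from PIL import Image

# Crude stand-in for a face "morph": a straight alpha-blend of two photos.
face_a = Image.open("face_a.jpg").convert("RGB")
face_b = Image.open("face_b.jpg").convert("RGB")
face_b = face_b.resize(face_a.size)              # blending requires matching sizes
merged = Image.blend(face_a, face_b, alpha=0.5)  # 0.0 = all A, 1.0 = all B
merged.save("merged.jpg")
```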
Tools & Techniques. I have some spare time, perhaps I’ll write a symphony…
https://www.makeuseof.com/best-tools-write-musical-notation/
The 4 Best Online Tools to Write Musical Notation