Pre-crime? Is refusing the ‘screening’ proof of mental illness?
Joe Cadillic writes:
It has been nearly two years since I reported on the dangers of creating a law enforcement-run Mental Health Assessment (MHA) program. In Texas, police use MHAs to “screen” every person they have arrested for mental illness.
But the TAPS Act, first introduced in January, would take law enforcement screenings to a whole new level. It would create a national threat assessment of children and adults.
In the course of six months, support for the Threat Assessment, Prevention and Safety (TAPS) Act (H.R. 838) has grown to nearly 80 members of Congress.
Read more on MassPrivateI.
Is this a massive privacy breach? I’m not sure.
CU Colorado Springs students secretly photographed for government-backed facial-recognition research
Elizabeth Hernandez follows up on a story that the Colorado Springs Independent broke last week:
A professor at the University of Colorado’s Colorado Springs campus led a project that secretly snapped photos of more than 1,700 students, faculty members and others walking in public more than six years ago in an effort to enhance facial-recognition technology.
The photographs were posted online as a dataset that could be publicly downloaded from 2016 until this past April.
Read more on the Denver Post.
Until AIs achieve peoplehood.
When algorithms mess up, the nearest human gets the blame
Earlier this month, Bloomberg published an article about an unfolding lawsuit over investments lost by an algorithm. A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform. Without a legal framework to sue the technology, he placed the blame on the nearest human: the man who sold it to him.
It’s the first known case over automated investment losses, but not the first involving the liability of algorithms. In March of 2018, a self-driving Uber struck and killed a pedestrian in Tempe, Arizona, sending another case to court. A year later, Uber was exonerated of all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.
Both cases tackle one of the central questions we face as automated systems trickle into every aspect of society: Who or what deserves the blame when an algorithm causes harm? Who or what actually gets the blame is a different yet equally important question.
Do you think Forbes knows something we don’t?
What If Artificial Intelligence (AI) & Machine Learning (ML) Ruled the World?
What if instead of political parties, presidents, prime ministers, kings, queens, armies, autocrats, and who knows what else, we turned everything over to expert systems? What if we engineered them to be faithful, for example, to one simple principle: "human beings regardless of age, gender, race, origin, religion, location, intelligence, income or wealth, should be treated equally, fairly and consistently"?
Here’s some dialogue – enabled by natural language processing (NLP) – with an expert system named “Decider” that operates from that single principle (you can imagine how it might behave if the principle was completely different – the opposite of equal and fair). The principle is supported by the data and probabilities the system collects and interprets. The “inferences” made by Decider are pre-programmed. In today’s political parlance, Decider is “liberal.” Imagine the one the American TEA Party or Freedom Caucus might engineer – which is the essence of this post: first principles rule.
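The Forbes piece is a thought experiment, not a spec, but the core idea – that the system's outcomes are entirely downstream of whichever first principle its builders encode – is easy to sketch. Here's a minimal, hypothetical Python illustration; the names, rules, and sample data are all invented for this example and are not from the article:

```python
# Hypothetical sketch: a rule-based "decider" whose behavior follows entirely
# from the first principle it is given. Swap the principle, swap the outcomes.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    income: float
    age: int


def equal_treatment_principle(a: Person, b: Person) -> str:
    """Principle: treat people equally regardless of income, age, etc."""
    # Under this principle the attributes are deliberately ignored.
    return "same decision for both"


def wealth_first_principle(a: Person, b: Person) -> str:
    """An opposite principle: the wealthier person wins."""
    return f"favor {a.name}" if a.income >= b.income else f"favor {b.name}"


def decider(principle, a: Person, b: Person) -> str:
    # The "inference" is pre-programmed: it is whatever the chosen principle returns.
    return principle(a, b)


if __name__ == "__main__":
    alice = Person("Alice", income=30_000, age=40)
    bob = Person("Bob", income=300_000, age=35)
    print(decider(equal_treatment_principle, alice, bob))  # same decision for both
    print(decider(wealth_first_principle, alice, bob))     # favor Bob
```

The point, as the post says, is that first principles rule: the same machinery produces very different results depending on which principle is baked in.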
Keep trying until we get it right (or until AI writes its own?)
Will we ever agree to just one set of rules on the ethical development of artificial intelligence?
Australia is among 42 countries that last week signed up to a new set of policy guidelines for the development of artificial intelligence (AI) systems.
Yet Australia has its own draft guidelines for ethics in AI out for public consultation, and a number of other countries and industry bodies have developed their own AI guidelines.
… Responding to these fears and a number of very real problems with narrow AI, the OECD recommendations are the latest of a number of projects and guidelines from governments and other bodies around the world that seek to instil an ethical approach to developing AI.
These include initiatives by the Institute of Electrical and Electronics Engineers, the French data protection authority, the Hong Kong Office of the Privacy Commissioner and the European Commission.