It
can’t hurt.
FPF
Offers New Resources on Privacy and Pandemics
… The
resources are accessible on the FPF website at
fpf.org/privacy-and-pandemics.
Arguments
are fun! (And entertaining.)
A
debate between AI experts shows a battle over the technology’s
future
Since
the 1950s, artificial intelligence has repeatedly overpromised and
underdelivered. While recent years have seen incredible leaps thanks
to deep learning, AI today is still narrow: it’s fragile in the
face of attacks, can’t generalize to adapt to changing
environments, and is riddled with bias. All these challenges make
the technology difficult to trust and limit its potential to benefit
society.
On
March 26 at MIT Technology Review’s annual EmTech Digital event,
two prominent figures in AI took to the virtual stage to debate how
the field might overcome these issues.
Something
to read when the library reopens.
Human
Compatible: A timely warning on the future of AI
… It’s
very easy to dismiss warnings of the robot apocalypse. After all,
virtually everyone in the field’s who’s who agrees that we’re at
least half a century away from
achieving artificial general intelligence, the key milestone toward
developing an AI that could dominate humans. As for the AI we
have today, it can best be described as an “idiot savant”: our
algorithms can perform remarkably
well at narrow tasks but
fail miserably when faced with situations that require general
problem-solving skills.
But
we should
reflect
on these warnings, if not take them at face value, computer scientist
Stuart Russell argues in his latest book Human
Compatible: Artificial Intelligence and the Problem of Control.
… For
the most part, current research in the field is focused
on using more compute power and data to
advance the field instead of seeking fundamentally new ways to create
algorithms that can manifest intelligence.
“Focusing
on raw computing power misses the point entirely. Speed alone won’t
give us AI,” Russell writes. Running flawed algorithms on a faster
computer does have a bright side, however: you
get the wrong answer more quickly.