Some thoughts from Scientific American.
Misinformation Has Created a New World Disorder
Our willingness to share content without thinking is exploited to spread disinformation
- Many types of information disorder exist online, from fabricated videos to impersonated accounts to memes designed to manipulate genuine content.
- Automation and microtargeting tactics have made it easier for agents of disinformation to weaponize regular users of the social web to spread harmful messages.
- Much research is needed to understand the effects of disinformation and build safeguards against it.
(Related) One “fake” hack.
How Artist Imposters and Fake Songs Sneak Onto Streaming Services
When songs leak on Spotify and Apple Music, illegal uploads can generate substantial royalty payments—but for whom?
...and apparently they all have different ways of describing the “perfect” AI development process.
Meet the Researchers Working to Make Sure Artificial Intelligence Is a Force for Good
… To help ensure future AI is developed in humanity’s best interest, AI Now’s researchers have divided the challenges into four categories: rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. Rights and liberties pertains to the potential for AI to infringe on people’s civil liberties, as in cases of facial recognition technology in public spaces. Labor and automation encompasses how workers are affected by automated management and hiring systems. Bias and inclusion has to do with the potential for AI systems to exacerbate historical discrimination against marginalized groups. Finally, safety and critical infrastructure looks at the risks posed by incorporating AI into important systems like the energy grid.
… AI Now is far from the only research institute founded in recent years to study ethical issues in AI. At Stanford University, the Institute for Human-Centered Artificial Intelligence has put ethical and societal implications at the core of its thinking on AI development, while the University of Michigan’s new Center for Ethics, Society, and Computing (ESC) focuses on addressing technology’s potential to replicate and exacerbate inequality and discrimination. Harvard’s Berkman Klein Center for Internet and Society concentrates in part on the challenges of ethics and governance in AI.
I’m not so pessimistic. My library has her book, so I may change my mind when I read it.
Futurist Amy Webb envisions how AI technology could go off the rails
… Webb’s latest book, The Big Nine, examines the development of AI and how the “big nine” corporations – Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple – have all taken control over the direction that development is heading. She says that the foundation upon which AI is built is fundamentally broken and that, within our lifetimes, AI will begin to behave unpredictably, to our detriment.
… One of the main issues is that corporations have a much greater incentive to push out this kind of technology quickly than they do to release it safely.
A geek lecture (45 minutes)
Computer Mathematics, AI and Functional Programming
For my students who might be slightly nervous about their presentations.