Feedly seems to be unavailable. Fortunately, everything will be there tomorrow when (I hope) it returns.
Can you trust a hacker to do what you pay for?
Cloud provider stopped ransomware attack but had to pay ransom demand anyway
Blackbaud said it had to pay a ransom demand to ensure hackers would delete the data they stole from its network.
Another sci-fi term shows up in the AI literature.
A beginner’s guide to the AI apocalypse: Artificial stupidity
Welcome to the latest article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by modern AI experts.
… You won’t find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat to humans throughout all of history.
… Based on the fact that we can’t know exactly what’s going to happen once a superintelligent artificial being emerges, we should probably just start hard-coding “artificial stupidity” into the mix.
… So, rather than attempting to program advanced AI with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.
When do hackers cross the line?
Researchers warn court ruling could have a chilling effect on adversarial machine learning
A cross-disciplinary team of machine learning, security, policy, and law experts says inconsistent court interpretations of an anti-hacking law have a chilling effect on adversarial machine learning research and cybersecurity. At issue is a portion of the Computer Fraud and Abuse Act (CFAA). A ruling deciding how that part of the law is interpreted could shape the future of cybersecurity and adversarial machine learning.
… “If we are correct and the Supreme Court follows the Ninth Circuit’s narrow construction, this will have important implications for adversarial ML research. In fact, we believe that this will lead to better security outcomes in the long term,” the researchers’ report reads. “With a more narrow construction of the CFAA, ML security researchers will be less likely chilled from conducting tests and other exploratory work on ML systems, again leading to better security in the long term.”
Roughly half of the circuit courts around the country have ruled on these CFAA provisions and have reached a 4-3 split. Some courts adopted a broader interpretation, under which “exceeds authorized access” covers improper use of information, including breaches of terms of service or other agreements. The narrower view holds that only accessing information one is not entitled to obtain at all constitutes a CFAA violation.
… The paper, titled “Legal Risks of Adversarial Machine Learning Research,” was accepted for publication and presented today at the Law and Machine Learning workshop at the International Conference on Machine Learning (ICML).