This concerns me and should concern those government agencies reaching into my computers. What if the bad guys left a tripwire that recognized an “unauthorized” command and set off some nasty responses?
https://www.bleepingcomputer.com/news/security/emotet-malware-forcibly-removed-today-by-german-police-update/
Emotet malware forcibly removed today by German police update
Emotet, one of the most dangerous email spam botnets in recent history, is being uninstalled today from all infected devices with the help of a malware module delivered in January by law enforcement.

The botnet's takedown is the result of an international law enforcement action that allowed investigators to take control of Emotet's servers and disrupt the malware's operation.

… After the takedown operation, law enforcement pushed a new configuration to active Emotet infections so that the malware would begin to use command and control servers controlled by the Bundeskriminalamt, Germany's federal police agency.

Law enforcement then distributed a new Emotet module in the form of a 32-bit EmotetLoader.dll to all infected systems that will automatically uninstall the malware on April 25th, 2021.
I thought that might be the case. Move “sensitive” processes outside your agency so you can say, “We don’t do that” with a straight face.
https://www.pogowasright.org/the-postal-services-social-media-surveillance-program-sends-100s-of-reports-to-fusion-centers/
The Postal Service’s Social Media Surveillance Program Sends 100’s of Reports To Fusion Centers
Earlier this week, this site linked to a report by The Guardian about the U.S. Postal Service monitoring social media posts. Over on MassPrivateI, Joe Cadillic also had something to say about the revelations. “DHS has succeeded in turning the Postal Service into a clandestine government agency,” Joe writes. And he says we shouldn’t be surprised.

All of this really shouldn’t come as a surprise; a 2019 USPIS annual report revealed the existence of iCOP, albeit in less ominous terms. Page 36 of the report says iCOP is one of seven functional groups that send hundreds of intelligence reports to fusion centers.
Joe also notes that:

A 2019 Federal News Network article revealed how iCOP postal inspectors have been going ‘undercover’ on the dark web since at least 2014.

Anyone who was surprised simply hasn’t been paying enough attention. And that’s probably what the government hopes for.

Read Joe’s entire post.
Perhaps we will need to wait for AI systems that will “know it when they see it.”
https://www.pogowasright.org/canadas-attempt-to-regulate-sexual-content-online-ignores-technical-and-historical-realities/
Canada’s Attempt to Regulate Sexual Content Online Ignores Technical and Historical Realities
Daly Barnett writes:

Canadian Senate Bill S-203, AKA the “Protecting Young Persons from Exposure to Pornography Act,” is another woefully misguided proposal aimed at regulating sexual content online. To say the least, this bill fails to understand how the internet functions and would be seriously damaging to online expression and privacy. It’s bad in a variety of ways, but there are three specific problems that need to be laid out: 1) technical impracticality, 2) competition harms, and 3) privacy and security.
[…]
Then there’s the privacy angle. It’s ludicrous to expect all adult users to provide private personal information every time they log onto an app that might contain sexual content. The implementation of verification schemes in contexts like this may vary in how far the privacy intrusions go, but it generally plays out as a cat-and-mouse game that brings surveillance and security threats instead of addressing the initial concerns. The more that a verification system fails, the more privacy-invasive measures are taken to avoid criminal liability.

Read more on EFF.
Oh, the horror!
https://commons.ln.edu.hk/pg-conf-2021/day2-3/panel2/4/
Comical computers and dull PCs: The ethics of giving artificial intelligence a sense of humour
There seems to be an increasing and equal measure of excitement and anxiety about the growth in sophistication technology has demonstrated over the past 100 years. One particular anxiety that has been the subject of academic research and science fiction alike is artificial intelligence (AI) and the threat or hope it poses. But a major research gap in contemplating the ethics and future of AI is humour. While science fiction often portrays AI as humourless, researchers of computational humour are working to engineer into AI an understanding of humour. Research projects like JAPE, HAHAacronym and STANDUP have attempted to implement humour with varying degrees of success. But humour can be greatly contentious, inflicting offence and even breaking certain speech laws. If we are to programme a sense of humour into AI, whose sense of humour ought it to be? Equally, if we exclude this form of intelligence from AI, will we be able to safely engage in human-agent interaction without AI being able to discern between bona-fide and non-bona-fide communication? This paper offers an introduction to the ethical problems in computational humour.
A potential for privacy.
https://scholarcommons.sc.edu/aii_fac_pub/508/
Machine Learning Meets Internet of Things: From Theory to Practice
Standalone execution of problem-solving Artificial Intelligence (AI) on IoT devices produces a higher level of autonomy and privacy. This is because the sensitive user data collected by the devices need not be transmitted to the cloud for inference. The chipsets used to design IoT devices are resource-constrained due to their limited memory footprint, fewer computation cores, and low clock speeds. These limitations constrain one from deploying and executing complex problem-solving AI (usually an ML model) on IoT devices. Since there is a high potential for building intelligent IoT devices, in this tutorial, we teach researchers and developers: (i) how to deep-compress CNNs and efficiently deploy them on resource-constrained devices; (ii) how to efficiently port and execute ranking, regression, and classification ML classifiers on IoT devices; (iii) how to create ML-based self-learning devices that can locally re-train themselves on the fly using unseen real-world data.
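The first tutorial topic, deep compression, rests on a simple idea: a model's float32 weights can be stored as 8-bit integers plus a scale and zero-point, cutting memory roughly 4x for constrained chipsets. The toy sketch below is my own illustration of that idea in plain Python (the function names are not from the tutorial, and real deployments would use a framework's quantization tooling):

```python
# Illustrative 8-bit linear quantization of float weights, the core trick
# behind compressing CNNs for memory-constrained IoT devices.
# Hypothetical helper names; not the tutorial's actual API.

def quantize_int8(weights):
    """Linearly map a list of floats onto the int8 range [-128, 127]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # guard against all-equal weights
    zero_point = -128 - round(lo / scale)     # int8 value that represents lo
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float weights for on-device inference."""
    return [(v - zero_point) * scale for v in q]

weights = [0.42, -1.3, 0.05, 2.1, -0.77]
q, scale, zp = quantize_int8(weights)
approx = dequantize_int8(q, scale, zp)
# Each recovered weight lands within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

The accuracy cost is bounded by the step size `scale`, which is why 8-bit quantization usually loses little precision while each weight shrinks from 4 bytes to 1.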
For lawyers who expect to have AI clients… Lots of reference articles.
https://link.springer.com/article/10.1007/s11948-021-00306-9
Rights for Robots: Artificial Intelligence, Animal and Environmental Law (2020) by Joshua Gellers
AI and the law?
https://books.google.com/books?hl=en&lr=&id=AMMpEAAAQBAJ&oi=fnd&pg=PA89&dq=%22artificial+intelligence%22++%2Blaw&ots=uNPfc5gIEf&sig=y2cHdmR0Pz0H2rx7ES0FAlGpxkM#v=onepage&q&f=false
Technology and International Relations: The New Frontier in Global Power

Chapter 5: Artificial Intelligence: a paradigm shift in international law and politics? Autonomous weapons systems as a case study.
(Related) How to write AI law?
https://www.researchgate.net/profile/Joshua-Ellul/publication/350889927_A_Pragmatic_Approach_to_Regulating_Artificial_Intelligence_A_Technology_Regulator's_Perspective/links/607889e6881fa114b406c5b4/A-Pragmatic-Approach-to-Regulating-Artificial-Intelligence-A-Technology-Regulators-Perspective.pdf
A Pragmatic Approach to Regulating Artificial Intelligence: A Technology Regulator’s Perspective
Artificial Intelligence (AI) and its regulation is a topic that is increasingly being discussed in various fora. Various proposals have been made in the literature for defining regulatory bodies and/or related regulation. In this paper, we present a pragmatic approach for providing a technology assurance regulatory framework. To the best of the authors’ knowledge, this work presents the first national AI technology assurance legal and regulatory framework implemented by a national authority empowered through law to do so. Aiming both to provide assurances where required and to support rather than stifle innovation, it is proposed herein that such regulation should not be mandated for all AI-based systems; rather, it should primarily provide a voluntary framework and be mandated only in sectors and activities where required, as deemed necessary by other authorities for regulated and critical areas.
Perspective.
https://www.worldscientific.com/doi/abs/10.1142/S2705078521300012
The AI Wars, 1950–2000, and Their Consequences
Philosophy and AI have had a difficult relationship from the beginning. The “classic” period from 1950 to 2000 saw four major conflicts, first about the logical coherence of AI as an endeavor, and then about architecture, semantics, and the Frame Problem. Since 2000, these early debates have been largely replaced by arguments about consciousness and ethics, arguments that now involve neuroscientists, lawyers, and economists as well as AI scientists and philosophers. We trace these developments, and speculate about the future.
The future of the law firm?
https://fortune.com/2021/04/21/legal-tech-rocket-lawyer-raises-223-million-expansion/
Exclusive: Legal tech startup Rocket Lawyer raises $223 million for expansion
… The company, which has 25 million registered users, already offers online legal documents and virtual attorney meetings to businesses and individuals in the U.S., U.K. and parts of Europe. Customers pay $40 monthly for a subscription or for individual documents, from $40 for a simple living will to $100 for an incorporation filing.

…

Rocket Lawyer is among a host of legal startups attracting money from venture capitalists looking to disrupt the staid legal profession. Last year, Everlaw, which helps lawyers sort and search vast amounts of digital documentary evidence, raised $62 million from investors including Google parent Alphabet. Verbit, an A.I.-powered courtroom transcription service, raised $91 million in two rounds of fundraising. And Notarize, which offers online notary services, raised $35 million.
Perspective. The Streisand Effect won’t work if people in India can’t see news about the suppression, and India probably doesn’t care what outsiders think, since they don’t vote.
https://www.makeuseof.com/india-removes-tweets-criticizing-government/
India Orders Twitter to Remove Tweets Criticizing the Government's Handling of the Pandemic
Tools?
https://www.makeuseof.com/how-to-use-linkedin-as-a-research-tool/
How to Use LinkedIn as a Research Tool