My students will have to figure this out.
How New A.I. Is Making the Law’s Definition of Hacking Obsolete
… In April this year, a research team at the Chinese tech giant Tencent showed that a Tesla Model S in autopilot mode could be tricked into following a bend in the road that didn’t exist simply by adding stickers to the road in a particular pattern. Earlier research in the U.S. had shown that small changes to a stop sign could cause a driverless car to mistakenly perceive it as a speed limit sign. Another study found that by playing tones indecipherable to a person, a malicious attacker could cause an Amazon Echo to order unwanted items.
These discoveries are part of a growing area of study known as adversarial machine learning. As more machines become artificially intelligent, computer scientists are learning that A.I. can be manipulated into perceiving the world in wrong, sometimes dangerous ways. And because these techniques “trick” the system instead of “hacking” it, federal laws and security standards may not protect us from these malicious new behaviors — and the serious consequences they can have.
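To make the idea concrete, here is a minimal sketch of one classic adversarial-example technique, the fast gradient sign method (FGSM), written in PyTorch. The pretrained model, the input tensor, and the epsilon value are illustrative assumptions rather than details from any of the studies above; the point is only that a tiny, deliberately chosen perturbation can change a classifier's answer without anything being "hacked" in the traditional sense.

```python
# Minimal FGSM sketch (illustrative only; not the attack used in the cited studies).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # any pretrained classifier works here

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (a normalized 1x3xHxW float tensor)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()

# Usage (hypothetical inputs):
# adv = fgsm_perturb(some_image, some_label)
# model(adv).argmax() often differs from the prediction on the clean image,
# even though the change is imperceptible to a human.
```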
Are they being overly secretive or do they just not know?
Attackers Demand Millions in Texas Ransomware Incident
The cybercriminals behind the recent ransomware incident that impacted over 20 local governments in Texas are apparently demanding $2.5 million in exchange for access to encrypted data.
The incident took place on August 16, when 23 towns in Texas revealed they were targeted in a coordinated attack to infect their systems with ransomware.
… City of Borger was one of the victims, with its business and financial operations and services impacted by ransomware, although basic and emergency services continued to be operational.
“Currently, Vital Statistics (birth and death certificates) remains offline, and the City is unable to take utility or other payments. Until such time as normal operations resume, no late fees will be assessed, and no services will be shut off,” the city said earlier this week (PDF).
Listen to other views and carefully consider. (Then burst out laughing?)
Political Confessional: The Man Who Thinks Mass Surveillance Can Work
This week we talked to Owen, a 37-year-old white man from the Bay Area in California. He wrote that he is “open to mass surveillance if it can lead to a world where a much higher percent of crimes are caught, leading to better public safety and, ideally, shorter [or] lighter sentences (because you don’t need as big a threat of punishment to deter people from crimes if the likelihood of catching them is very high).”
Creating the Terminator?
CRS Report to Congress on Lethal Autonomous Weapon Systems
The following is the August 16, 2019 Congressional Research Service In Focus report – International Discussions Concerning Lethal Autonomous Weapon Systems. “As technology, particularly artificial intelligence (AI), advances, lethal autonomous weapon systems (LAWS)—weapons designed to make decisions about using lethal force without manual human control—may soon make their appearance, raising a number of potential ethical, diplomatic, legal, and strategic concerns for Congress. By providing a brief overview of ongoing international discussions concerning LAWS, this In Focus seeks to assist Congress as it conducts oversight hearings on AI within the military (as the House and Senate Committees on Armed Services have done in recent years), guides U.S. foreign policy, and makes funding and authorization decisions related to LAWS…”
(Related) An alternate view...
Amazon, Microsoft, May be Putting World at Risk of Killer AI, Says Report
Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.
Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future.
"Why are
companies like Microsoft and Amazon not denying that they're
currently developing these highly controversial weapons, which could
decide to kill people without direct human involvement?" said
Frank Slijper, lead author of the report published this week.
… The report noted that Microsoft employees had also voiced their opposition to a US Army contract for an augmented reality headset, HoloLens, that aims at "increasing lethality" on the battlefield.
Make the world safe from the Terminator?
IBM joins Linux Foundation AI to promote open source trusted AI workflows
AI is advancing rapidly within the enterprise -- by Gartner's count, more than half of organizations already have at least one AI deployment in operation, and they're planning to substantially accelerate their AI adoption within the next few years. At the same time, the organizations building and deploying these tools have yet to really grapple with the flaws and shortcomings of AI – whether the models deployed are fair, ethical, secure or even explainable.
Before the world is overrun with flawed AI systems, IBM is aiming to rev up the development of open source trusted AI workflows. As part of that effort, the company is joining the Linux Foundation AI (LF AI) as a General Member.
… As a Linux Foundation project, the LF AI Foundation provides a vendor-neutral space for the promotion of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. It's backed by major organizations like AT&T, Baidu, Ericsson, Nokia and Huawei.
… IBM has already spearheaded efforts on this front with a series of open source toolkits designed to help build trusted AI. The AI Fairness 360 Toolkit helps developers and data scientists detect and mitigate unwanted bias in machine learning models and datasets. The Adversarial Robustness 360 Toolbox is an open source library that helps researchers and developers defend deep neural networks from adversarial attacks. Meanwhile, the AI Explainability 360 Toolkit provides a set of algorithms, code, guides, tutorials and demos to support the interpretability and explainability of machine learning models.
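As a rough illustration of the kind of check a toolkit like AI Fairness 360 automates, the sketch below computes one common group-fairness measure, disparate impact (the ratio of favorable-outcome rates between an unprivileged and a privileged group), in plain Python. The function, the sample data, and the 0.8 "four-fifths" threshold are illustrative assumptions, not IBM's API.

```python
# Illustrative disparate-impact check, the sort of metric AI Fairness 360 reports;
# this is plain numpy, not the toolkit's own API.
import numpy as np

def disparate_impact(y_pred, group, favorable=1, unprivileged=0, privileged=1):
    """Ratio of favorable-outcome rates: P(pred=favorable | unprivileged) / P(pred=favorable | privileged)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unpriv = np.mean(y_pred[group == unprivileged] == favorable)
    rate_priv = np.mean(y_pred[group == privileged] == favorable)
    return rate_unpriv / rate_priv

# Hypothetical example: model approvals (1) for members of two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact(preds, groups))  # 0.333... -- below the ~0.8 "four-fifths rule",
                                        # a common red flag worth investigating for bias
```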
“We need ethics, we just don’t need them right now.” What can we agree on today?
International AI ethics panel must be independent
China wants to be the world’s leader in artificial intelligence (AI) by 2030. The United States has a strategic plan to retain the top spot, and, by some measures, already leads in influential papers, hardware and AI talent. Other wealthy nations are also jockeying for a place in the world AI league.
A kind of AI arms race is under way, and governments and corporations are pouring eye-watering sums into research and development. The prize, and it’s a big one, is that AI is forecast to add around US$15 trillion to the world economy by 2030 — more than four times the 2017 gross domestic product of Germany. That’s $15 trillion in new companies, jobs, products, ways of working and forms of leisure, and it explains why countries are competing so vigorously for a slice of the pie.
… Officials from Canada and France, meanwhile, have been working to establish an International Panel on Artificial Intelligence (IPAI), to be launched at the G7 summit of world leaders in Biarritz, France, from 24 to 26 August.
… To be credible, the IPAI has to be different. It needs the support of more countries, but it must also commit to openness and transparency. Scientific advice must be published in full. Meetings should be open to observers and the media. Reassuringly, the panel’s secretariat is described in documents as “independent”. That’s an important signal.
Looks interesting.
Data Management Law for the 2020s: The Lost Origins and the New Needs
Pałka, Przemysław, Data Management Law for the 2020s: The Lost Origins and the New Needs (August 10, 2019). Available at SSRN: https://ssrn.com/abstract=3435608 or http://dx.doi.org/10.2139/ssrn.3435608
“In the data analytics society, each individual’s disclosure of personal information imposes costs on others. This disclosure enables companies, deploying novel forms of data analytics, to infer new knowledge about other people and to use this knowledge to engage in potentially harmful activities. These harms go beyond privacy and include difficult to detect price discrimination, preference manipulation, and even social exclusion. Currently existing, individual-focused, data protection regimes leave law unable to account for these social costs or to manage them. This Article suggests a way out, by proposing to re-conceptualize the problem of social costs of data analytics through the new frame of “data management law.” It offers a critical comparison of the two existing models of data governance: the American “notice and choice” approach and the European “personal data protection” regime (currently expressed in the GDPR). Tracing their origin to a single report issued in 1973, the article demonstrates how they developed differently under the influence of different ideologies (market-centered liberalism, and human rights, respectively). It also shows how both ultimately failed at addressing the challenges outlined already forty-five years ago. To tackle these challenges, this Article argues for three normative shifts. First, it proposes to go beyond “privacy” and towards “social costs of data management” as the framework for conceptualizing and mitigating negative effects of corporate data usage. Second, it argues to go beyond the individual interests, to account for collective ones, and to replace contracts with regulation as the means of creating norms governing data management. Third, it argues that the nature of the decisions about these norms is political, and so political means, in place of technocratic solutions, need to be employed.”