Somehow, this does not give me confidence that Facebook is ready for our next election.
Operation Carthage: How a Tunisian company conducted influence operations in African presidential elections
A Tunisia-based company operated a sophisticated digital campaign involving multiple social media platforms and websites in an attempt to influence the country’s 2019 presidential election, as well as other recent elections in Africa. In an exclusive investigation that began in September 2019, the DFRLab uncovered dozens of online assets with connections to Tunisian digital communications firm UReputation. On June 5, 2020, after conducting its own investigation, Facebook announced it had taken down more than 900 assets affiliated with the UReputation operation, including 182 user accounts, 446 pages, and 96 groups, as well as 209 Instagram accounts. The operation also involved the publication of multiple Francophone websites, some going back more than five years.
In a statement, a Facebook spokesperson said that these assets were removed for violating the company’s policy against foreign interference, which is coordinated inauthentic behavior on behalf of a foreign entity. “The individuals behind this activity used fake accounts to masquerade as locals in countries they targeted, post and like their own content, drive people to off-platform sites, and manage groups and pages posing as independent news entities,” the spokesperson said. “Some of these pages engaged in abusive audience building tactics changing their focus from non-political to political themes, including substantial name and admin changes over time.”
Rather gloomy…
On the Ethics of Artificial Intelligence
Currently, AI ethics is failing in many cases. Ethics lacks an enforcement mechanism: deviations from the various codes of ethics have no consequences. And in cases where ethics is integrated into institutions, it mainly serves as a marketing strategy. Furthermore, empirical experiments show that reading ethics guidelines has no significant influence on the decision-making of software developers. It is a boom time for artificial intelligence (AI) and ethics. All sorts of groups have launched manifestos, declarations, toolkits and lists of principles to set the ethical agenda. There are so many lists of principles that other groups are now providing guides to all the lists. You would think that having all these principles and checklists would be a good thing, but many of them are being generated by industry or by scientists. We risk ignoring other approaches to ethics that come from the humanities. In this panel we will present a dialogue of philosophical perspectives and informatics approaches on artificial intelligence (AI). These reflect an interdisciplinary collaboration at the University of Alberta between faculty and students across Digital Humanities, Philosophy, Communications, and Library and Information Studies.
The tools already exist?
A “right to explanation” for algorithmic decisions?
In discussions about the legal hurdles facing the increased use of artificial intelligence (AI), academic debate has focused on civil and criminal liability for damage caused by AI. At the same time, however, data protection law also creates challenges to the increased use of intelligent systems and machines, i.e. machines which are capable of learning, and these challenges should not be underestimated. As a means of protecting fundamental rights, data protection law is used to safeguard each individual’s general right of personality, and in particular, his right to determine what information concerning him is made available or known to parties in his surrounding environment. To this end, the law has in its arsenal procedures, mechanisms and rights, which apply to every processing of personal data within its scope, even if the data is handled not by a human processor but by a self-learning system.
Perspective. In China, privacy is not about the ‘personal’?
Personal Information Legislation in the Age of Big Data and Artificial Intelligence: Challenges of New Technology to Information Privacy
Abstract: The development of new technology has posed new challenges to the traditional framework of data protection. In the context of big data, artificial intelligence, the internet of things, cloud technology and so on, consent-based authorization for the use of personal information often either becomes meaningless or hinders technological progress, while failing to provide real data privacy protection. There are also problems with some of the principles meant to restrict data users. The reason is that current personal information protection law adopts an individualistic, static approach to protection, which is in tension with data utilization in the context of new science and technology. In the future, the protection of personal information should shift from individualistic permission to risk-oriented public law control, and from static protection to dynamic protection.
Perspective. Amazon is probably not alone.
Amazon’s Heavily Automated HR Leaves Workers in Sick-Leave Limbo
The company is struggling to handle thousands of requests from ailing employees and those who must stay home to care for kids or elderly relatives.
… the design of Amazon’s HR department reflects the strengths and weaknesses of the company’s culture. It’s heavily automated, which helps Amazon grow quickly and restrain costs, but these days it leaves employees hitting dead ends with chatbots, smartphone apps and phone trees.
Three people with experience in the company’s human resources group say the unit has been weighed down by competing priorities. HR is expected to offer workers the same speedy service that Amazon’s customers receive, while practicing a level of frugality that Amazon sometimes takes to extremes, the employees say.
HR “is always struggling to automate and keep pace with the scale of the company,” says one of the people, who all requested anonymity because they signed confidentiality agreements. “The horror stories happen because [HR] people are overwhelmed. And they don’t have the resources and the mental capacity to deal with [workers] because they’re pulled in so many different directions. It’s bound to have negative, real-life human impacts.”