Translated by Google. I don’t think this is as exciting as the reporting makes it seem.
Mosquito, Nadezhda, Nautilus: hackers reveal the projects of a secret FSB contractor
Hackers broke into the server of a major contractor for Russian intelligence services and government agencies, then shared with reporters descriptions of dozens of non-public Internet projects: from de-anonymizing users of the Tor browser to researching the vulnerabilities of torrents.
This may be the largest data leak in the history of the Russian intelligence services’ work on the Internet.
“We can, therefore we must!”
Why Drive-Thru Restaurants Want to Track Your License Plate
"Welcome back. Would you like to order the same thing you ordered Tuesday at 5:37 p.m.?"
… Some fast food brands are toying with the idea of using license plate recognition technology (the kind of thing used in speed traps) to track customers and provide them with additional features like customized ordering screens, according to the Financial Times.
Clearly, there are benefits here: Beyond just customized menus, chains could offer other conveniences like linking your car to a loyalty program or even a credit card, making payment a breeze. This technology could almost certainly speed up drive-thru times — a major industry concern.
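The article doesn’t describe any vendor’s actual system, but the mechanism it sketches — recognize a plate, look up a returning customer, and tailor the greeting — can be illustrated in a few lines. Everything below is a hypothetical sketch: the normalization step, the hashed-plate key, and the profile fields are all my assumptions, not details from the reporting.

```python
import hashlib

def plate_key(plate: str) -> str:
    """Normalize a recognized plate and hash it before using it as a
    lookup key, so the table never stores raw plate strings."""
    normalized = plate.strip().upper().replace(" ", "")
    return hashlib.sha256(normalized.encode()).hexdigest()

# Hypothetical loyalty profiles, indexed by hashed plate.
profiles = {
    plate_key("8ABC123"): {"usual_order": "medium coffee"},
}

def greet(plate: str) -> str:
    """Return a customized greeting if the plate matches a profile,
    otherwise fall back to a generic one."""
    profile = profiles.get(plate_key(plate))
    if profile is None:
        return "Welcome! Please see our menu."
    return f"Welcome back. Would you like your usual {profile['usual_order']}?"

print(greet("8abc 123"))  # normalization makes this match despite case/spacing
print(greet("XYZ999"))    # unknown plate gets the generic greeting
```

Note that hashing the plate is a design choice of this sketch, not something the article attributes to any chain; a plate is low-entropy, so hashing alone would not make such a database meaningfully anonymous.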
“We’ll protect your Privacy, unless ordered not to.”
Scott Greenfield writes:
The LA Times calls it “unprecedented,” which may be true in the sense that it’s never happened before. But the notion that corporations aren’t above the law isn’t entirely novel, and even the behemoths, Facebook, Instagram and Twitter (“FIT”), must comply with a California state court order in a criminal proceeding. Imagine that.
In an unprecedented move, the California Supreme Court has allowed the defense in a gang-related murder trial in San Francisco to obtain private postings from Facebook, Instagram and Twitter.
In a brief order Wednesday, the court let stand a San Francisco judge’s ruling that the social media companies must turn over the private postings being sought by defendants in a murder trial.
According to Jim Tyre, this “brief order” was the California equivalent of a cert denial, referred to as “postcard denials” because they were sent out on a post card in the olden days that said something on the order of, “nah.” But in this instance, the refusal to bend to the will of the FIT was apparently shocking. Don’t they own California?
Read more on Simple Justice.
[From the article: From the perspective of FIT, this was a test of fortitude, one that reflected their dedication to preserving the sanctity of users’ secrecy from the prying eyes of the defense. Mind you, the government already had access to it, which may not have thrilled FIT but didn’t present enough of a problem that it was willing to go to the mattresses. But the defense?]
Innovating yourself into trouble.
Publishers are pissed about Amazon’s upcoming Audible Captions feature
Earlier this week, Audible revealed that it was working on a new feature for its audiobook app: Audible Captions, which will use machine learning to transcribe an audio recording for listeners, allowing them to read along with the narrator. While the Amazon-owned company claims it is designed as an educational feature, a number of publishers are demanding that their books be excluded, saying these captions are “unauthorized and brazen infringements of the rights of authors and publishers.”
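Audible hasn’t published how Captions works; a minimal sketch of the read-along idea, assuming the machine-learning transcription step produces word-level timestamps (a common output of speech-to-text systems), could look like this. The transcript data and function names are illustrative, not Audible’s.

```python
from bisect import bisect_right

# Hypothetical output of a speech-to-text pass:
# (start_time_in_seconds, word) pairs for the narration.
transcript = [
    (0.0, "It"), (0.3, "was"), (0.5, "a"), (0.6, "dark"),
    (1.0, "and"), (1.2, "stormy"), (1.7, "night."),
]

starts = [t for t, _ in transcript]

def caption_at(position: float) -> str:
    """Return the words whose start time has passed at the given
    playback position, i.e. the caption text shown so far."""
    spoken = bisect_right(starts, position)
    return " ".join(word for _, word in transcript[:spoken])

print(caption_at(1.1))  # "It was a dark and"
```

A real player would call something like `caption_at` on every playback tick and highlight the most recent word; the publishers’ objection is to the transcription itself, not this display step.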
On its face, the idea seems useful, much in the same way that I turn on subtitles for things that I’m watching on TV, but publishers have some reason to be concerned: it’s possible that fewer people will buy distinct e-book or physical books if they can simply pick up an Audible audiobook and get the text for free, too.
… This isn’t the first time that Amazon has come under fire from publishers when it comes to translating text to audio, or vice versa. In 2009, the company backtracked on a text-to-speech feature on the Kindle, which allowed readers to listen to their book with a machine-generated narrator. The Authors Guild argued that the feature deprived authors of their audio rights, and Amazon disabled it.
Perfect for anyone thinking about researching a Privacy topic.
The Privacy Good Research Fund is now open for applications for privacy-related research.
The Privacy Good Research Fund offers researchers funding support worth up to $75,000 in total. Successful applicants will receive funding of up to $25,000 for individual projects.
Information for Applicants
The Privacy Good Research Fund is open for applications from 15 July to 9 September 2019. Application forms and further information are available below.
Research priority areas
Applications are welcome for research projects that develop new privacy-related knowledge, provide practical solutions, or promote innovation.
The PGRF also welcomes research that will support organisational experiences in improving practice in data privacy and ethics, using the Data Protection and Use Policy (DPUP) as the implementation example. For more information, see the SIA website.
The SIA has a dedicated portion of funding to apply to this area.
US ethics aren’t ‘universal ethics’? Surely we must be talking of Normative or Applied Ethics, right?
Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence
The ethical implications and social impacts of
artificial intelligence have become topics of compelling interest to
industry, researchers in academia, and the public. However, current
analyses of AI in a global context are biased toward perspectives
held in the U.S., and limited by a lack of research, especially
outside the U.S. and Western Europe. This article summarizes the key
findings of a literature review of recent social science scholarship
on the social impacts of AI and related technologies in five global
regions. Our team of social science researchers reviewed more than
800 academic journal articles and monographs in over a dozen
languages. Our review of
the literature suggests that AI is likely to have markedly different
social impacts depending on geographical setting.
Likewise, perceptions and understandings of AI are likely to be
profoundly shaped by local cultural and social context. Recent
research in U.S. settings demonstrates that AI-driven technologies
have a pattern of entrenching social divides and exacerbating social
inequality, particularly among historically marginalized groups. Our
literature review indicates that this pattern exists on a global
scale, and suggests that low and middle-income countries may be more
vulnerable to the negative social impacts of AI and less likely to
benefit from the attendant gains. We call for rigorous ethnographic
research to better understand the social impacts of AI around the
world. Global, on-the-ground research is particularly critical to
identify AI systems that may amplify social inequality in order to
mitigate potential harms. Deeper understanding of the social impacts
of AI in diverse social settings is a necessary precursor to the
development, implementation, and monitoring of responsible and
beneficial AI technologies, and forms the basis for meaningful
regulation of these technologies.