Not the best target to irritate…
Colin
Lecher reports:
Since May 21st, a virus has shut down Philadelphia’s online court system, bringing network access to a standstill. The problems started unexpectedly: suddenly, no one could seem to access the system to file documents. “It wasn’t working,” says Rachel Gallegos, a senior staff attorney with the civil legal aid organization Community Legal Services. “I thought it was my computer.”
Another way to defy ransomware.
Alternative rock legends Radiohead on Tuesday
released an 18-hour trove of private recordings from their 1997 album
"OK Computer" after getting hacked by someone seeking a
ransom of $150,000 for the music.
The genre-bending English musicians uploaded the
1.8-gigabyte collection of recording session outtakes and rare live
performances on their radiohead.bandcamp.com website.
The songs can be accessed online for free.
Security is complicated. Third parties can help,
but it’s still your responsibility.
Liisa
Thomas, Sarah Aberg, Kari Rollins, and Katherine
Boy Skipsey write:
The SEC recently issued a risk alert warning about using vendors and cloud-based platforms. Many broker-dealers and investment advisers are turning to these third parties to store customer data. In its alert, the SEC’s Office of Compliance Inspections and Examinations warns firms that relying on those third parties’ security tools is not, in and of itself, sufficient for the companies to demonstrate compliance with Regulations S-P and S-ID. These regulations require broker-dealers and investment advisers to protect customer records and detect and prevent identity theft.
Targeting
fans.
Telecompaper
reports:
Spain’s football league (La Liga) has been fined a total of EUR 250,000 by the country’s data protection agency (AEPD) for using a mobile app to remotely activate smartphone microphones, reports local daily El Diario. The league last year admitted that its highly popular official app, which is used by 4 million people in Spain to check incoming results live, can monitor user location and activate microphones to identify whether smartphone owners are watching a game at a public venue via an illegal feed. One of the app’s requested permissions is for access to user microphones and geopositioning “to detect fraud in the consumption of football in unauthorised public establishments”.
Read more on Telecompaper.
More targets.
Cybersecurity:
These are the Internet of Things devices that are most targeted by
hackers
… Research
from cybersecurity company SAM Seamless Network found that security
cameras represent 47 percent of vulnerable devices installed on home
networks.
According
to the data, the average US household contains 17 smart devices while
European homes have an average of 14 devices connected to the
network.
… Figures from the security firm suggest that the average device is targeted about five times per day, with midnight the most common time for attacks to be executed – likely because at that hour users are asleep and not paying attention to their devices, so a burst of strange behavior goes unnoticed.
Leading to a full privacy law?
Daniel J. Moses of Jackson Lewis writes:
As we recently noted, Washington state amended its data breach notification law on May 7 to expand the definition of “personal information” and shorten the notification deadline (among other changes). Not to be outdone by its sister state to the north, Oregon followed suit shortly thereafter—Senate Bill 684 passed unanimously in both legislative bodies on May 20, and was signed into law by Governor Kate Brown on May 24. The amendments will become effective January 1, 2020.
Among the changes effected by SB 684 is a trimming of the Act’s short title—now styled the “Oregon Consumer Information Protection Act” or “OCIPA” (formerly the “Oregon Consumer Identity Theft Protection Act” or “OCITPA”). Apart from establishing a much more palatable acronym, the amended short title mirrors the national (and international) trend of expanding laws beyond mere “identity theft protection” to focus on larger scale consumer privacy and data rights.
Read
more on The
National Law Review.
(Related)
Will
R. Daugherty and Caroline B. Brackeen of BakerHostetler write:
Texas is one of the many states that looked to be following in the footsteps of California’s enactment of a broad consumer privacy law (the California Consumer Privacy Act), which has far-ranging implications for businesses and consumers. Two comprehensive data privacy bills, HB 4390 and HB 4518, were filed and heard at the last legislative session. HB 4518, also known as the Texas Consumer Privacy Act, proposed overarching consumer protection legislation that closely resembled the California Consumer Privacy Act. HB 4518 stalled in the Texas House of Representatives in favor of HB 4390. HB 4390, also known as the Texas Privacy Protection Act, was introduced as comprehensive data privacy legislation, but was significantly less detailed than HB 4518. HB 4390 went through several rounds of revisions in both the Texas House and Senate until it was whittled down to the final version, which revises the notification requirements of the Texas Identity Theft Enforcement and Protection Act and creates the Texas Privacy Protection Advisory Council in order to develop recommendations for future data privacy legislation. HB 4390 has passed both the Texas House and Senate and is awaiting signature from the governor to be enacted.
Worth
studying.
Here’s
Mary Meeker’s 2019 Internet Trends report
… This morning, Meeker highlighted slowed
growth in e-commerce sales, increased internet ad spending, data
growth, as well as the rise of freemium subscription business models,
telemedicine, photo-sharing, interactive gaming, the on-demand
economy and more.
“If it
feels like we’re all drinking from a data firehose, it’s because
we are,” Meeker told the audience.
… We’ll
be back later with a full analysis of this year’s report. For now,
here’s
a look at all 333 slides.
You can view the full internet trends report archive here.
How very James Bond. “Q” would be delighted.
Facebook
lets deepfake Zuckerberg video stay on Instagram
The
clip is a "deepfake", made by AI software that uses photos
of a person to create a video of them in action.
Facebook
had previously been criticised for not removing
a doctored clip of US House Speaker Nancy Pelosi.
… The
deepfake video of Mark Zuckerberg was created for an art
installation on display in Sheffield called Spectre.
It is designed to draw attention to how people can be monitored and
manipulated via social media in light of the Cambridge Analytica
affair - among other scandals.
It features a
computer-generated image of the chief executive's face merged with
footage of his body sourced from a video presentation given in 2017
at an office in Facebook's Silicon Valley headquarters. An actor
provided the audio recording it is synched to.
How many can
we trust?
Number
of fact-checking outlets surges to 188 in more than 60 countries
Poynter
– Strong growth in Asia and Latin America helps fuel global
increase –
“The number of fact-checking outlets around the world has grown to
188 in more than 60 countries amid global concerns about the spread
of misinformation, according to the latest tally by the Duke
Reporters’ Lab. Since the
last annual fact-checking census in
February 2018, we’ve added 39 more outlets that actively assess
claims from politicians and social media, a 26% increase. The new
total is also more than four times the 44 fact-checkers we counted
when we launched our global
database and map in
2014.”
Fear?
What’s
Behind the International Rush to Write an AI Rulebook?
There’s
no better way of ensuring you win a race than by setting the rules
yourself. That may be behind the recent rush by countries,
international organizations, and companies to put forward their
visions for how the AI
race should
be governed.
China
became the latest to release a set of “ethical
standards” for
the development of AI last month, which might raise eyebrows given
the country’s well-documented AI-powered
state surveillance program and
suspect approaches to privacy and human rights.
But
given the recent flurry of AI guidelines, it may well have been
motivated by a desire not to be left out of the conversation. The
previous week the OECD, backed by the US, released its own “guiding
principles” for
the industry, and in April the EU released “ethical
guidelines.”
30
years is near.
AI’s
Near Future
… In
this conversation, Jürgen and Azeem Azhar discuss what the next
thirty years of AI will look like.
AI
cheats!
How
in the world did I not know about this for three years?
Researchers at the University of Tokyo have developed a robot that always wins at rock-paper-scissors. It watches the human player's hand, figures out which finger position the human is about to deploy, and reacts quickly enough to always win.
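The trick is reaction time rather than strategy: the vision system recognises the half-formed gesture and the arm commits to the counter before the human finishes the throw. Here is a minimal, purely illustrative Python sketch of the counter-move logic only; the gesture recogniser is a hypothetical stand-in, and none of the high-speed camera or hand-tracking work from the actual research is shown.

```python
# Hypothetical sketch of the "never lose" logic. The hard part in the real
# Tokyo system is the high-speed vision that reads the hand a few
# milliseconds before the gesture is complete; recognise_gesture() below is
# a stand-in for that step, not the research code.

BEATS = {
    "rock": "paper",       # paper beats rock
    "paper": "scissors",   # scissors beat paper
    "scissors": "rock",    # rock beats scissors
}

def recognise_gesture(frame: str) -> str:
    """Stand-in for the high-speed vision classifier. Here we simply
    treat the frame label as the recognised gesture."""
    return frame

def robot_move(frame: str) -> str:
    """Play the move that beats the gesture the human is about to throw."""
    return BEATS[recognise_gesture(frame)]

if __name__ == "__main__":
    for human in ("rock", "paper", "scissors"):
        print(f"human throws {human:<8} -> robot throws {robot_move(human)}")
```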
Will we need
to delete the data and then retrain our AI? Expensive if necessary.
THE
NEXT BIG PRIVACY HURDLE? TEACHING AI TO FORGET
When the European Union enacted the General Data
Protection Regulation (GDPR) a year ago, one of the most
revolutionary aspects of the regulation was the “right to be
forgotten”—an often-hyped and debated right, sometimes perceived
as empowering individuals to request the erasure of their information
on the internet, most commonly from search engines or social
networks.
… Virtually every modern enterprise is in some
way or another collecting data on its customers or users, and that
data is stored, sold, brokered, analyzed, and used to train AI
systems. For instance, this is how recommendation engines work—the
next video we should watch online, the next purchase, and so on, are
all driven by this process.
At present, when data is sucked into this complex
machinery, there’s no efficient way to reclaim it and its influence
on the resulting output. When we think about exerting the right to
be forgotten, we recognize that reclaiming specific data from a vast
number of private businesses and data brokers offers its own unique
challenge. However, we need to realize that even if we can succeed
there, we’ll still be left with a difficult question—how do we
teach a machine to “forget” something?
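To make the “teach a machine to forget” problem concrete, here is a deliberately naive sketch, assuming a toy dataset and a scikit-learn logistic regression (none of this comes from the article): today the only sure way to remove one person's influence from a trained model is to drop their records and retrain from scratch, which is exactly the cost that doesn't scale.

```python
# Naive "unlearning by retraining" sketch (illustrative only): drop one
# user's rows and rebuild the model from scratch. This guarantees the
# record's influence is gone, but the price is a full retrain every time
# someone exercises the right to be forgotten.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # illustrative features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
user_ids = rng.integers(0, 100, size=1000)          # which user each row came from

model = LogisticRegression().fit(X, y)              # original model

def forget_user(user_id: int) -> LogisticRegression:
    """Remove all of one user's rows and retrain from scratch."""
    keep = user_ids != user_id
    return LogisticRegression().fit(X[keep], y[keep])

model = forget_user(42)   # "forgetting" user 42 means paying for a full retrain
```

Every erasure request forces a full retrain here; research on machine unlearning looks for ways to localise a record's influence so that only a small part of a model has to be rebuilt.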
Perspective. My search for why.
The DOJ’s
antitrust chief just telegraphed exactly how it could go after
Google, Apple and other big tech companies
The
Department of Justice’s assistant attorney general brought the case
against big tech into focus in a new
speech delivered
at the Antitrust New Frontiers Conference in Tel Aviv on Tuesday.
… Delrahim’s
speech, as transcribed
on the DOJ’s website,
argues existing antitrust laws are strong enough to regulate tech.
“We
already have in our possession the tools we need to enforce the
antitrust laws in cases involving digital technologies,” Delrahim
said. “U.S. antitrust law is flexible enough to be applied to
markets old and new.”
… One way of evaluating whether a company has violated antitrust law is through what Delrahim called the “no economic sense test.” Under Delrahim’s definition, a monopoly fails the test if it makes a decision that makes no economic sense except for “its tendency to eliminate or lessen competition.”
For
my students.