I wonder. Is this “Oh look, we’re victims
too!” or is this a few countries letting Russia know they could
tamper with their elections, if Russia actually had elections?
Russia
claims foreign hackers are trying to interfere with its election
Russia, a country which has been accused numerous
times of attempting to interfere with elections overseas, has claimed
that its own presidential contest is under attack from foreign
hackers.
Officials in Moscow said that the Russian Central
Election Commission's website was hit by a coordinated attack from IP
addresses in 15 different countries on election day.
It said that a distributed denial of service
(DDoS) attack, which bombards a website with data requests in an
attempt to overwhelm it, hit between 2 a.m. and 5 a.m. on polling
day.
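The excerpt describes a DDoS as bombarding a site with requests until it is overwhelmed. A common first-line application defense is rate limiting. Here is a minimal token-bucket sketch in Python; the class and parameter names are my own and purely illustrative (real DDoS mitigation happens upstream at the network edge, not in application code):

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter: allow up to `rate`
    requests per second, with a burst capacity of `capacity` tokens."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # bucket empty: drop the request

bucket = TokenBucket(rate=5, capacity=10)
# A flood of 100 "requests" arriving at once: roughly the first 10 pass,
# the rest are dropped until the bucket refills.
results = [bucket.allow() for _ in range(100)]
print(sum(results))
```

The idea is that legitimate traffic rarely exceeds the refill rate, while a flood exhausts the burst capacity almost immediately and gets shed.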
Maybe all that ‘fake news’ didn’t come from
Russia…
Facebook
and its executives are getting destroyed after botching the handling
of a massive data 'breach'
Facebook and its executives faced a torrent of
backlash on Saturday following news reports that the data firm
Cambridge Analytica, which worked on the Trump campaign in 2016,
improperly
harvested private information from 50 million Facebook users.
The company quickly faced calls for increased
regulation and oversight, and Massachusetts' Attorney General, Maura
Healey, even announced an investigation.
… Sen. Amy Klobuchar of Minnesota also
excoriated the company, demanding that Facebook CEO Mark Zuckerberg
face the Senate Judiciary Committee for questioning.
… But much of the online outrage came after
multiple Facebook executives took to Twitter to respond to the news
reports, insisting the incident was not a "data breach."
… In a series
of tweets that have since been deleted, Facebook's chief security
officer, Alex Stamos, insisted that although users' personal
information may have been misused, it wasn't technically a
"breach."
(Related)
Cambridge
Analytica and Facebook accused of misleading MPs over data breach
The head of the parliamentary committee
investigating fake news has accused Cambridge
Analytica and Facebook of misleading MPs in testimony, after the
Observer revealed details of a vast data breach affecting tens of
millions of people.
After a whistleblower detailed the
harvesting of more than 50 million Facebook profiles for
Cambridge Analytica, Damian Collins, the chair of the House of
Commons culture, media and sport select committee, said he would be
calling on the Facebook boss, Mark Zuckerberg, to testify before the
committee.
He said the company appeared to have previously
sent executives able to avoid difficult questions who had “claimed
not to know the answers”.
Collins also said he would be recalling the
Cambridge Analytica CEO, Alexander Nix, to give further testimony.
(Related)
Here’s
how Facebook allowed Cambridge Analytica to get data for 50 million
users
Facebook says it
isn’t at fault.
I’ll skip the long list. I’m sure they each
feel justified, if not just.
Compiled by the Daily
Record, where you can read more, the following is what they
report as the full list of organizations and agencies that can ask
ISPs for any UK citizen's browsing history for the prior 12 months:
More fuel for the ongoing AI debate my students
are having.
When an AI
finally kills someone, who will be responsible?
Here’s a curious question: Imagine it is the
year 2023 and self-driving cars are finally navigating our city
streets. For the first time one of them has hit and killed a
pedestrian, with huge media coverage. A high-profile lawsuit is
likely, but what laws should apply?
… At the heart of this debate is whether an AI
system could be held criminally liable for its actions. Kingston says
that Gabriel Hallevy at Ono Academic College in Israel has explored
this issue in detail.
Criminal liability usually requires an action and
a mental intent (in legalese, an actus reus and mens rea).
Kingston says Hallevy explores three scenarios that could apply to AI
systems.
The first, known as perpetrator via
another, applies when an offense has been committed by a
mentally deficient person or animal, who is therefore deemed to be
innocent. But anybody who has instructed the mentally deficient
person or animal can be held criminally liable: for example, a dog
owner who instructs the animal to attack another individual.
… The second scenario, known as natural
probable consequence, occurs when the ordinary actions of an AI
system might be used inappropriately to perform a criminal act.
Kingston gives the example of an artificially intelligent robot in a
Japanese motorcycle factory that killed a human worker. “The robot
erroneously identified the employee as a threat to its mission, and
calculated that the most efficient way to eliminate this threat was
by pushing him into an adjacent operating machine,” says Kingston.
“Using its very powerful hydraulic arm, the robot smashed the
surprised worker into the machine, killing him instantly, and then
resumed its duties.”
The key question here is whether the programmer of
the machine knew that this outcome was a probable consequence of its
use.
The third scenario is direct liability,
and this requires both an action and an intent. An action is
straightforward to prove if the AI system takes an action that
results in a criminal act or fails to take an action when there is a
duty to act.
The intent is much harder to determine but is
still relevant, says Kingston. “Speeding is a strict liability
offense,” he says. “So according to Hallevy, if a self-driving
car was found to be breaking the speed limit for the road it is on,
the law may well assign criminal liability to the AI program that was
driving the car at that time.”
Who said our legal system always makes sense?
A $1.6
billion Spotify lawsuit is based on a law made for player pianos
Spotify is finally gearing up
to go public, and the company’s February 28th filing with the SEC
offers a detailed
look at its finances. More than a decade after Spotify’s
launch in 2006, the world’s leading music streaming service is
still struggling to turn a profit, reporting a net loss of nearly
$1.5 billion last year. Meanwhile, the company has some weird
lawsuits hanging over its head, the most eye-popping being the $1.6
billion lawsuit filed by Wixen Publishing, a music publishing company
that includes the likes of Tom Petty, The Doors, and Rage Against the
Machine.
… Spotify is being sued
by Wixen because of mechanical licenses — a legal regime that was
created in reaction to the dire threat to the music industry posed by
player pianos. Yes, the automated pianos with the rolls of paper
with punch holes in them.
But that’s not actually the
weird part. The weird part is that Spotify is fundamentally being
sued for literal paperwork: Wixen says Spotify is legally required to
notify songwriters in writing that they’re in the Spotify catalog —
a fact that escapes probably zero songwriters today. A paper notice
requirement made sense in the age of player pianos when songwriters
could hardly be expected to keep track of every player piano roll in
the country. It makes no sense in the age of Spotify, Pandora, and
Apple Music.
Dilbert again is not talking about the White
House. Honest!