There are similar repositories here in the US.
No More Ransom project has prevented ransomware profits of at least $108 million
On the three-year anniversary of the No More Ransom project, Europol announced today that users who downloaded and decrypted files using free tools made available through the No More Ransom portal have prevented ransomware gangs from making profits estimated at no less than $108 million.
Just the free decryption tools for the GandCrab ransomware offered on the No More Ransom website have prevented ransom payments of nearly $50 million alone, Europol said.
… Per statistics Europol shared today, most of the site's visitors came from South Korea, the US, the Netherlands, Russia, and Brazil.
… The only oddity in No More Ransom's make-up is the lack of any US-based law enforcement agency. Other than that, everyone else is represented.
I think changing US attitudes would be almost impossible.
Teenage hackers are offered a second chance under European experiment
Police in the U.K. and the Netherlands have created a legal intervention campaign for first-time offenders accused of committing cybercrimes, officials explained Tuesday at the International Conference on Cybersecurity at Fordham University. The effort, called “Hack_Right,” is aimed at people between 12 and 23 years old who may be skirting the law from behind their keyboards and not even realize it.
… The average age of an accused cybercriminal is 19 years old, according to Floor Jansen, an adviser to the Dutch National High Tech Crime Unit. There is an “overrepresentation” of autistic traits among those offenders, she said, and the recidivism rate is relatively low compared with other crimes.
… “Most offenders will go to a forum right on the clear web … and buy a remote access tool for $40,” she said. “If they don’t understand what it does, they can call a help desk. So it doesn’t seem too illegal.”
… There is a stark difference between the European and American approaches to cybercrime enforcement. Bulgarian police last week released a 20-year-old security specialist accused of hacking the country’s National Revenue Agency and accessing information about 5 million people, most of Bulgaria’s population. Meanwhile, suspects accused of similar crimes in the U.S. often face years in prison.
Perhaps LinkedIn believes requiring personal information is its duty?
Libraries contest lynda.com learning site privacy issues with new owner LinkedIn
Boing Boing: Linkedin to libraries: drop dead – “For years, libraries across America have paid to subscribe to lynda.com for online learning content; four years ago, lynda.com became a division of Linkedin, and this year, the company has informed libraries that they’re migrating all lynda.com users to Linkedin Learning, which would be fine, except Linkedin only allows you to access Linkedin Learning if you create and connect a Linkedin profile to the system.

If libraries accept this change, it will mean that any patron who uses this publicly funded service will also have to have a publicly searchable Linkedin profile. Linkedin’s explanation of why this is OK is purest tech-bro PR bullshit, condescending and dismissive.

Libraries are fighting back: California State Libraries is recommending that libraries boycott the service, and the American Library Association has publicly condemned the move...”
[From LinkedIn’s explanation: ...helping us to authenticate that users are real people and further protect our members.]
Mentions some of the organizations working on AI ethics.
The Regulation of AI — Should Organizations Be Worried?
What happens when injustices are propagated not by individuals or organizations but by a collection of machines? Lately, there’s been increased attention on the downsides of artificial intelligence and the harms it may produce in our society, from inequitable access to opportunities to the escalation of polarization in our communities. Not surprisingly, there’s been a corresponding rise in discussion around how to regulate AI. Do we need new laws and rules from governmental authorities to police companies and their conduct when designing and deploying AI into the world?
Part of the conversation arises from the fact that the public questions — and rightly so — the ethical restraints that organizations voluntarily choose to comply with.
… Trust around AI requires fairness, transparency, and accountability. But even AI researchers can’t agree on a single definition of fairness: There’s always a question of who is in the affected groups and what metrics should be used to evaluate, for instance, the impact of bias within the algorithms.
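As a toy illustration of why the choice of metric matters (hypothetical numbers, not from the article): the short Python sketch below scores a model that selects candidates from two groups at identical rates, which satisfies demographic parity, while qualified members of one group are selected far less often, which violates equal opportunity.

    # Hypothetical example: two common fairness metrics disagree about
    # the same classifier. y_true = actually qualified, y_pred = selected.
    group_a = {"y_true": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
               "y_pred": [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]}
    group_b = {"y_true": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
               "y_pred": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]}

    def selection_rate(g):      # demographic parity compares these
        return sum(g["y_pred"]) / len(g["y_pred"])

    def true_positive_rate(g):  # equal opportunity compares these
        hits = sum(p for t, p in zip(g["y_true"], g["y_pred"]) if t == 1)
        return hits / sum(g["y_true"])

    for name, g in (("A", group_a), ("B", group_b)):
        print(name, selection_rate(g), round(true_positive_rate(g), 2))
    # Both groups are selected at the same rate (0.5), so demographic
    # parity holds; but qualified members of group B are picked far less
    # often (TPR 0.5 vs. 0.8), so equal opportunity is violated. Which
    # metric you optimize decides which group bears the error.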