Is Harvard saying we are doomed?
Over three
billion credentials were reported stolen last year. This means
that cybercriminals possess usernames and passwords for more than
three billion online accounts. And that’s not just social media
accounts; it’s bank accounts, retailer gift card accounts with cash
and credit cards attached, airline loyalty accounts with years of
accumulated frequent flyer points, and other accounts with real
value.
This statistic is alarming, but in fact it
significantly understates the scope of the threat. Because of a form
of attack called credential
stuffing, tens of billions of other accounts are also at risk.
Here’s how that attack works. Because most people have many online
accounts (a recent estimate put it at 191 per person on average), they regularly reuse passwords across
those accounts. Cybercriminals take advantage of this. In a
credential stuffing attack, they take known valid email addresses and
passwords from one website breach—for example, the Yahoo
breach—and they use those same email addresses and passwords to
log in to other websites, such as those of major banks.
… Our network statistics at Shape Security
show that a typical credential stuffing attack has up to a 2%
success rate on major websites. In other words, with a set of 1
million stolen passwords from one website, attackers can easily take
over 20,000 accounts on another website. Now multiply those numbers
by the total number of websites where users have reused their
passwords, as well as the number of data breaches that have been
reported, to get a better sense of the threat. Of course, that still
only includes the data breaches we know about. And new
research from Google indicates that phishing may be an even
larger source of stolen passwords than data breaches, making the
scope of the problem even larger.
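To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The 2% success rate and the 1-million-credential breach size come straight from the excerpt; the count of other sites where passwords are reused is a hypothetical placeholder, since the article only says to "multiply those numbers."

```python
# Back-of-the-envelope estimate of account-takeover exposure from
# credential stuffing. The 2% success rate is the Shape Security figure
# quoted above; the reused-site count is a hypothetical placeholder.

def accounts_at_risk(stolen_credentials, success_rate, reused_sites):
    """Estimate how many accounts could be taken over by replaying one
    breached credential set against other sites where users reused
    the same password."""
    return int(stolen_credentials * success_rate * reused_sites)

# One breach of 1 million credentials replayed against a single other
# site at a 2% success rate: the 20,000 takeovers cited above.
print(accounts_at_risk(1_000_000, 0.02, 1))   # 20000

# The same credential set replayed against 50 other sites where the
# passwords were reused (a made-up number) scales exposure linearly.
print(accounts_at_risk(1_000_000, 0.02, 50))  # 1000000
```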
“Great fleas have little fleas upon their backs to bite 'em,
And little fleas have lesser fleas, and so ad infinitum.”
Eventually, well before “infinitum,” an AI
will create an AI that wants to rule the world.
Google's AI
made its own AI, and it's better than anything ever created by humans
Google's Brain team of researchers has been
hard at work studying artificial intelligence systems. Back in May
they developed AutoML,
an AI system that could in turn generate
its own subsequent AIs.
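As a very loose illustration of the idea (and emphatically not Google's actual AutoML, whose search space and training loop are far more sophisticated), here is a toy Python sketch of one program searching over candidate model configurations and keeping the best-scoring one. The search space and the scoring function are invented placeholders; in a real system, scoring a candidate means training and validating it.

```python
import random

# Toy analogue of "an AI generating its own AIs": a controller proposes
# candidate model configurations, each is scored, and the best is kept.
# This is NOT Google's AutoML; the search space and scoring function
# below are invented purely for illustration.

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def propose():
    """Controller step: sample one candidate configuration."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def score(config):
    """Stand-in for training and evaluating the candidate model.
    A real system would run a full train/validate cycle here."""
    return (-abs(config["layers"] - 3)
            - abs(config["units"] - 64) / 64
            - abs(config["learning_rate"] - 0.01) * 10)

best = max((propose() for _ in range(50)), key=score)
print("best configuration found:", best)
```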
For the time being, we’ll use humans. A really
good AI will take a while. After all, the rules keep changing.
YouTube to
combat abusive content with primitive tool: humans
In this age of
machine-learning-artificial-intelligence-driven blah blah blah, the
folks at YouTube have decided that to win the battle against violent
and racist content they must rely more on good old-fashioned human
beings.
In a pair
of blog
posts today, the company elaborated on its strategy for stemming
the rising tide of unsavory video content that has turned services
such as YouTube, Facebook, and Twitter into bottomless cesspools of
fake news, terrorist propaganda, and Nazi-fueled rage.
Over the summer, YouTube trumpeted investments in
machine learning designed to find content that violates the company’s
terms of service. That effort will certainly continue.
But YouTube CEO Susan Wojcicki wrote that the
machine learning tools will now be complemented by expanded use of
carbon-based lifeforms.
(Related). We’ll get to the terrorist stuff
later? Meanwhile, we’ll make our own rules.
You might accidentally be enabling abuse of
animals by taking selfies with them, a new report has warned.
Seemingly innocent animal selfies actually encourage all kinds of exploitation and distress, according to an investigation. And Instagram will now try to alert people to those
dangers, while discouraging them from posting such pictures.
On the other hand… Perhaps there are no rules.
Facebook Is
Banning Women for Calling Men ‘Scum’
I would agree if all you did was ask.
US says it
doesn't need secret court's approval to ask for encryption backdoors
The US government does not need the approval of
its secret surveillance court to ask a tech company to build an
encryption backdoor.
The government made
its remarks in July in response to questions posed by Sen. Ron
Wyden (D-OR), but they were only made public this weekend.
The implication is that the government can use its
legal authority to secretly ask a US-based company for technical
assistance, such as building an encryption backdoor into a product,
but can then petition the Foreign Intelligence Surveillance Court (FISC)
to compel the company if it refuses.
Oh, the horror, the horror! Perhaps they could
use it to attract Amazon’s second HQ?
Ireland
forced to collect €13bn in tax from Apple that it doesn't want
… The European Commission
ruled in August 2016 that the iPhone maker must reimburse the
Irish state a record €13bn to make up for what it considered to be
unpaid taxes over a number of years.
… Ireland built its economic success on being
a low-tax entryway for multinationals seeking access to the EU, and
is concerned that collecting the back taxes could dent its
attractiveness to firms.
Not politically neutral…
Fact check:
Net-neutrality claims leave out key context
Seeking to dispel "myths" about net
neutrality, the Trump administration's telecom chief instead put out
his own incomplete and misleading talking points when he suggested
that internet providers had never influenced content available to
their customers before neutrality rules took effect in 2015.
Iffy claims have come from the other side of the
debate, too, such as the notion that federal regulators had never
stepped in to make those providers change their service plans.
Although no such cases were brought, the Federal Communications
Commission was possibly on track to do so when the new administration
stopped the investigation.