A war, by any other name…
Greece hires 80 new hackers as cyberwar with Turkey intensifies
Paul Antonopoulos reports:
Greece’s National Intelligence Service has hired 80 new hackers at a time when the cyberwar with Turkish hackers has intensified, Ethnos reported.
The last time hackers were recruited was in 2009, but now “new blood” is considered necessary for the renewal of staff.
In January, Turkish hackers breached security systems and hacked the websites of the Greek Interior Ministry, the Foreign Ministry and the Prime Minister’s office, prompting immediate security upgrades.
Last week, however, Turkish hackers opened a new front in the cyberwar when they brought down the website of Chalkidona, a municipality of only 30,000 people in the Thessaloniki regional unit. Ayyildiz Tim, the “cyber soldiers of Turkey,” took responsibility for the attack.
Read more on Greek City Times
(Related)
Follow the money.
Examining the US Cyber Budget
Jason Healey takes a detailed look at the US federal cybersecurity budget and reaches an important conclusion: the US keeps saying that we need to prioritize defense, but in fact we prioritize attack.
Cringe-worthy, but I’ll have my students read it.
Lessons learned from the ANPR data leak that shook Britain
On April 28, 2020, The Register reported that the massive Automatic Number-Plate Recognition (ANPR) system used by the Sheffield government authorities was leaking some 8.6 million driver records. An online ANPR dashboard responsible for managing the cameras, tracking license plate numbers and viewing vehicle images was left exposed on the internet, without any password or security in place.
This meant anybody on the internet could have accessed the dashboard via their web browser and peeked into a vehicle’s journey, or possibly corrupted records and overridden camera system settings.
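That failure mode is trivial to test for. A minimal sketch (the URL and paths in the comments are hypothetical, not the real Sheffield dashboard): an unauthenticated GET to a sensitive page should be met with a 401 or 403 challenge, while a 2xx response means the page is world-readable.

```python
def auth_enforced(status_code: int) -> bool:
    """True if an anonymous request was challenged by the server.

    A 401 (Unauthorized) or 403 (Forbidden) on an unauthenticated GET
    means some access control exists; a 2xx means the resource is
    world-readable, which is the failure mode described above.
    """
    return status_code in (401, 403)

# In practice the status code would come from a real request, e.g. with
# urllib against a (hypothetical) dashboard URL:
#   status = urllib.request.urlopen("https://anpr.example.gov/dashboard").status
# A False result on a sensitive path is a finding worth escalating.
print(auth_enforced(200))  # open dashboard: False
print(auth_enforced(401))  # protected dashboard: True
```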
… The Council and South Yorkshire Police have suggested there were no victims of the data leak, but experts aren’t so sure.
… “As forensic investigators, we have often come across data breaches where the reason there were no signs is because there [were] no systems monitoring for signs,” says Barratt. “No evidence of compromise is not the same as evidence of no compromise.”
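Barratt’s point is that a breach only leaves “signs” where something records them. A minimal sketch of the kind of access-log monitoring he is alluding to, using hypothetical Apache-style log lines and paths (not real ANPR data):

```python
import re

# Hypothetical access-log lines; a real deployment would tail the
# web server's log file instead of a list literal.
LOG = [
    '10.0.0.5 - - [28/Apr/2020:02:14:07] "GET /anpr/dashboard HTTP/1.1" 200',
    '10.0.0.5 - - [28/Apr/2020:02:14:09] "GET /anpr/export?all=1 HTTP/1.1" 200',
    '192.168.1.9 - - [28/Apr/2020:09:01:22] "GET /index.html HTTP/1.1" 200',
]

# Paths considered sensitive; anything under /anpr/ in this sketch.
SENSITIVE = re.compile(r'"GET (/anpr/\S*)')

def flag_sensitive_hits(lines):
    """Return (client_ip, path) pairs for requests to sensitive paths."""
    hits = []
    for line in lines:
        m = SENSITIVE.search(line)
        if m:
            hits.append((line.split()[0], m.group(1)))
    return hits

print(flag_sensitive_hits(LOG))
# → [('10.0.0.5', '/anpr/dashboard'), ('10.0.0.5', '/anpr/export?all=1')]
```

Even monitoring this crude would have produced the “signs” the investigators say were absent: a record of which addresses reached the dashboard, and when.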
Protect yourself.
How to Clean Up Your Social Media Posts As Much as You Can
Wired – How to Clean Up Your Old Social Media Posts – “These tips will help you safely tidy up your Twitter, Facebook, and Instagram accounts—or give your profile a fresh start.”
Wired UK – “…First off, everything you do on Instagram is tracked. Almost every online service you use collects information about your actions. Every thumb scroll made through your feed provides it with information about your behavior. Instagram knows that you spent 20 minutes scrolling to the depths of your high-school crush’s profile at 2am. The data that Instagram collects isn’t just for advertising. The company uses your information—for instance, what device you use to log in—to detect suspicious login attempts. Crash reports from your phone can help it identify bugs in its code and identify parts of the app that nobody uses. In 2019 it ditched the Following tab, which showed everyone the public posts you had liked. Other than deleting the app completely there’s very little you can do to stop Instagram from tracking your behavior on its platform, but there are things you can do to limit some of the data that’s collected and the types of ads you see online…”
Look for violations, sure. Enforcement requires people.
Many Police Departments Have Software That Can Identify People In Crowds
BriefCam, a facial recognition and surveillance video analysis company, sells the ability to surveil protesters and enforce social distancing — without the public knowing.
… Some of the cities using BriefCam’s technology — such as New Orleans and St. Paul — have been the site of extreme police violence, with officers using rubber bullets, tear gas, and batons on protesters. Authorities in Chicago; Boston; Detroit; Denver; Doral, Florida; Hartford, Connecticut; and Santa Fe County, New Mexico have also used it.
… This month, BriefCam launched a new “Proximity Identification” feature, which it marketed as a way to combat the COVID-19 pandemic. The company claimed it could gauge the distance between individuals, detect who is wearing a mask and who isn’t, and identify crowds and bottlenecks. In a brochure, BriefCam said that these features could be combined with facial recognition to determine the identities of people who may have violated social distancing recommendations.
News is nice, but not worth paying for.
Facebook says it doesn’t need news stories for its business and won’t pay to share them in Australia
Facebook has rejected a proposal to share advertising revenue with news organisations, saying there would “not be significant” impacts on its business if it stopped sharing news altogether.
On Monday, the social media giant issued its response to the Australian Competition and Consumer Commission, which has been tasked with creating a mandatory code of conduct aimed at levelling the playing field.
… “It is not healthy nor sustainable to expect that two private companies, Facebook and Google, are solely responsible for supporting a public good and solving the challenges faced by the Australian media industry,” it said.
“The code needs to recognise that there is healthy, competitive rivalry in the relationship between digital platforms and news publishers, in that we compete for advertising revenue.”
The company said the revenue-sharing proposal would force it to “subsidise a competitor” and “distort advertising markets, potentially leading to higher prices”.
The rush to adopt any new technology often exceeds the ability to understand the risks.
The Liabilities of Artificial Intelligence Are Increasing
“With the proliferation of machine learning and predictive analytics, the FTC should make use of its unfairness authority to tackle discriminatory algorithms and practices in the economy.” This statement came from FTC Commissioner Rohit Chopra at the end of May. The fact that these words followed a more formal blogpost from the regulator focused on artificial intelligence—in the midst of a global pandemic, no less—highlights what is becoming the new normal: liabilities on the use of algorithmic decision-making are increasing. This holds true with or without new federal regulations on AI.
For those paying attention to the rapid adoption of AI, this trend might come as no surprise—especially given that regulators have been discussing new regulations on AI for years (as I’ve written about here before). But the increasing liability of algorithmic decision-making systems, which often incorporate artificial intelligence and machine learning, also stems from a newer development: the longer regulators wait, the more widely used AI becomes. In the process, the concrete harms these technologies can cause are becoming clear.
Take, for example, automated screening systems for tenants, which the publication The Markup recently revealed have been plagued by inaccuracies that have generated millions of dollars in lawsuits and fines.
… Or take the Michigan Integrated Data Automated System, used by the state to monitor filing for unemployment benefits, which was also recently alleged to have falsely accused thousands of citizens of fraud.
… Then there’s the recent lawsuit against Clearview AI, filed in Illinois at the end of May by the ACLU and a leading privacy class action law firm, alleging that the company’s algorithms violated the state’s Biometric Information Privacy Act.
… In other words, the list of lawsuits, fines and other liabilities created by AI is long and getting longer. The non-profit Partnership on AI even recently released an AI incident database to track how models can be misused or go awry.
… Whatever the cause, there’s a range of materials that lawyers can use to help their organizations prepare, like this series of articles focused on legal planning for the adoption of AI.
A good look at AI.
AI will shift our industry even more than the internet did 20 years ago. Here’s why.
There have been significant shifts in computing every 10 to 15 years, starting in the 1980s with personal computers, then the internet in the ’90s and more recently the advent of mobile in the mid-2000s. What’s next for computers and the digitalization of the world? After the internet, artificial intelligence (AI) looms large.
… It’s important to start by making sure we don’t misunderstand what artificial intelligence is. Humans like to compare themselves with AI and keep stressing that AI will never be our equivalent. This completely misses the point. AI today is just a new generation of computing that happens to be very good at handling any kind of information and making predictions.
For us non-lawyers…
The Internet’s most important—and misunderstood—law, explained
Ars Technica – “Section 230 is the legal foundation of social media, and it’s under attack.” “…To understand Section 230, you have to understand how the law worked before Congress enacted it in 1996. At the time, the market for consumer online services was dominated by three companies: Prodigy, CompuServe, and AOL. Along with access to the Internet, the companies also offered proprietary services such as real time chats and online message boards. Prodigy distinguished itself from rivals by advertising a moderated, family-friendly experience. Employees would monitor its message boards and delete posts that didn’t meet the company’s standards. And this difference proved to have an immense—and rather perverse—legal consequence…”
- The Verge – Everything you need to know about Section 230