Would Clausewitz recognize a war this subtle? I’m fairly sure Sun Tzu would.
The Saudi oil attacks could be a precursor to widespread cyberwarfare — with collateral damage for companies in the region
A recent attack against Saudi Aramco damaged the world’s largest oil producer and delayed oil production, roiling oil and gas markets. The Saudi government and U.S. intelligence officials have claimed the incident is the work of Iran, while Iran blamed Yemeni rebels.
… Iran’s nuclear facilities were attacked by a virus called Stuxnet in the mid-2000s. This malicious software was sophisticated, built in a “modular” format. Attackers could use it not only to extract intelligence but also to control and destroy sensitive machinery.
Iran reacted to Stuxnet in a surprising way: they didn’t talk about it much at all. But they did take action, said Lieutenant Colonel Scott Applegate, an expert in the history of cybersecurity and a cyber professor at Georgetown University.
One theory is that Iran took some of what they learned from Stuxnet and created a new weapon, which they then deployed against Saudi Aramco in 2012.
That virus, known as “Shamoon,” was modular and multi-faceted like Stuxnet, but had only one purpose: to find and destroy data.
… “You saw that at Saudi Aramco, 30,000 boxes got bricked,” said Hussey, describing how 30,000 of the oil agency’s computers were erased over the course of the day, destroying swaths of data.
“We know who you are. We’ll be coming for you soon.” (What else could they mean?)
DW reports:
A website registered on a Russian domain has shared detailed personal information of dozens of Hong Kong protesters and journalists. Observers view it as another serious blow to the city’s dwindling civil liberties.
Amid continuing tensions between pro-democracy protesters and the government in Hong Kong, a website named “HK Leaks” has been collecting and leaking confidential personal information of pro-democracy protesters, activists, journalists and politicians in recent days.
The site assigns each profile to one of three main categories, and shows their headshots, dates of birth, telephone numbers, social media accounts, residential addresses and “nasty behaviors.”
Read more on DW.
[From the article: According to the site, refusing to pay MTR ticket fares, participating in protests and peaceful strikes, sharing information about the anti-extradition bill movement and even covering protests as a journalist are all considered nasty behaviors. The website calls on the public to share information about the people who are "messing up Hong Kong," as its tagline reads.]
Boy, ‘dem GDPR folks is serious!
Odia Kagan of Fox Rothschild writes:
Asking to read an electronic ID card as a condition for the provision of a service (issuing a rewards/loyalty card) is disproportionate and in violation of GDPR, says the Belgian data protection authority. The company was fined €10,000.
Read more on Privacy Compliance & Data Security.
When your computer talks to my computer, do they violate the GDPR?
Enterprise API Security and GDPR Compliance: Design and Implementation Perspective
With the advancements in enterprise-level business development, the demand for new applications and services is overwhelming. For the development and delivery of such applications and services, enterprise businesses rely on Application Programming Interfaces (APIs). In essence, an API is a double-edged sword. On one hand, an API makes it easy to expand a business by sharing value and utility; on the other hand, it raises security and privacy issues. Since applications usually use APIs to retrieve important data, it is extremely important to ensure that effective access control and security mechanisms are in place and that the data does not fall into the wrong hands. In this article, we discuss the current state of enterprise API security and the role of Machine Learning (ML) in API security. We also discuss General Data Protection Regulation (GDPR) compliance and its effect on API security.
I hope they’re right!
Asymptotically Unambitious Artificial General Intelligence
General intelligence, the ability to solve arbitrary solvable problems, is supposed by many to be artificially constructible. Narrow intelligence, the ability to solve a given particularly difficult problem, has seen impressive recent development. Notable examples include self-driving cars, Go engines, image classifiers, and translators. Artificial General Intelligence (AGI) presents dangers that narrow intelligence does not: if something smarter than us across every domain were indifferent to our concerns, it would be an existential threat to humanity, just as we threaten many species despite no ill will. Even the theory of how to maintain the alignment of an AGI's goals with our own has proven highly elusive. We present the first algorithm we are aware of for asymptotically unambitious AGI, where "unambitiousness" includes not seeking arbitrary power. Thus, we identify an exception to the Instrumental Convergence Thesis, which is roughly that by default, an AGI would seek power, including over us.
Are robots AI that moves?
First Steps Towards an Ethics of Robots and Artificial Intelligence
“Our computer says your little Johnny is a drug abusing anorexic suicide risk. Just thought you’d like to know.”
Rowland Manthorpe reports:
One of England’s biggest academy chains is testing pupils’ mental health using an AI (artificial intelligence) tool which can predict self-harm, drug abuse and eating disorders, Sky News can reveal.
A leading technology think tank has called the move “concerning”, saying “mission creep” could mean the test is used to stream pupils and limit their educational potential.
Read it all on Sky News.
(Related) Just like a real person, only more consistent?
Illinois law regulates artificial intelligence use in video job interviews
The Artificial Intelligence Video Interview Act, House Bill 2557, requires companies to notify the applicant when the system is being used, explain how the AI works, get permission from the applicant, limit distribution of the video to people involved with the process, and destroy the video after 30 days.
… Jedreski said AI video interviews apply psychometrics, which is the science of measuring attitude and personality traits.
“It's reading data and then analyzing it to determine whether it can draw conclusions about the person being interviewed,” Jedreski said.
Jedreski said that AI is used to analyze an applicant’s body language, voice and tone.
“It's watching what the person is doing, what they look like when they're talking,” Jedreski said. “It's listening to what they say and how they say it and it's offering an analysis on what that person might be thinking, whether they're being honest, what their personality traits might be like, including what their attitude is.”
Yes, a very basic overview. Strange that a Seattle paper couldn’t do more.
A.I. 101: What is artificial intelligence and where is it going?
Technology that will help those Amazon drones land your package on the back porch? Self-flying, like self-driving but in three dimensions.
Daedalean & The AI That Knows Where Not To Land
Starting today, Daedalean and UAVenture are announcing Magpie, a light entry version of their AI that uses one simple downward-facing camera and neural-network-processed computer vision, and that will be released before the end of the year. This enables GPS-independent navigation and allows drones to find safe landing spots.
… Magpie is intended for professional drones and will enable them to perform autonomous tasks, avoid obstacles, return home, and find a safe landing spot.