Spoiler alert: Not many.
U.S. Cyber Command, Russia and Critical Infrastructure: What Norms and Laws Apply?
… Damaging critical infrastructure is clearly out of bounds as responsible peacetime state behavior and would likely violate international law. But do these types of intrusions – seemingly intended to prepare for future operations or deter them, or both, without causing any actual harm – also run counter to applicable non-binding norms or violate international law during peacetime?
(Related)
Russia Says Victim of US Cyberattacks 'for Years'
Catching up with new technology, slowly.
Mike Maharrey writes:
SANTA FE, N.M. (June 14, 2019) – Today, a New Mexico law goes into effect that limits the warrantless use of stingray devices to track people’s location and sweep up electronic communications, and more broadly protects the privacy of electronic data. The new law will also hinder the federal surveillance state.
Sen. Peter Wirth (D) filed Senate Bill 199 (SB199) on Jan. 8. Titled the “Electronic Communications Privacy Act,” the new law will help block the use of cell site simulators, known as “stingrays.” These devices essentially spoof cell phone towers, tricking any device within range into connecting to the stingray instead of the tower, allowing law enforcement to sweep up communications content, as well as locate and track the person in possession of a specific phone or other electronic device.
The law requires police to obtain a warrant or wiretap order before deploying a stingray device, unless they have the explicit permission of the owner or authorized possessor of the device, or if the device is lost or stolen. SB199 includes an exception to the warrant requirement for emergency situations. Even then, police must apply for a warrant within 3 days and destroy any information obtained if the court denies the application.
Read more on Tenth Amendment Center.
Failure to manage?
At least 50,000 license plates leaked in hack of border contractor not authorized to retain them
At least 50,000 American license plate numbers have been made available on the dark web after a company hired by Customs and Border Protection was at the center of a major data breach, according to CNN analysis of the hacked data. What's more, the company was never authorized to keep the information, the agency told CNN.
"CBP
does not authorize contractors to hold license plate data on non-CBP
systems," an agency spokesperson told CNN.
The admission raises questions about who's responsible when the US government hires contractors to surveil citizens, but then those contractors mishandle the data.
… "This
data does have to be deleted," the CBP spokesperson said, though
the agency didn't clarify the specifics of the policy that would
apply to Perceptics.
… Last week, CBP said in a statement that "none of the image data has been identified on the Dark Web or internet," though CNN was still able to find it.
An unintended consequence?
GDPR Has Been a Boon for Google and Facebook
Europe’s privacy laws have pushed advertisers to give business to the tech giants they trust
The General Data Protection Regulation, or GDPR, which went into effect across the European Union last year, has pushed marketers to spend more of their ad dollars with the biggest players, in particular Alphabet Inc.’s Google and Facebook Inc., ad-tech companies and media buyers say.
Trivial for an AI.
Odia Kagan of FoxRothschild writes:
“Whenever we make a call, go to work, search the web, pay with our credit card, we generate data. While de-identification might have worked in the past, it doesn’t really scale to the type of large-scale datasets being collected today.”
It turns out that “four random points (i.e. time and location where a person has been) are enough to uniquely identify someone 95 percent of the time in a dataset with 1.5 million individuals…”
All these results lead to the conclusion that an efficient enough, yet general, anonymization method is extremely unlikely to exist for high-dimensional data — say Y.A. de Montjoye and A. Gadotti.
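To see why that unicity result is so hard to escape, here is a minimal sketch of the kind of test de Montjoye's work describes. Everything below is simulated toy data; the population size, trace length, and sampling are invented for illustration and are not the study's parameters.

```python
# Toy unicity test in the spirit of de Montjoye et al.: how often do four
# random (time, place) points single out one individual? All data here is
# simulated; the real study used ~1.5M people's mobile phone traces.
import random

random.seed(0)

N_PEOPLE = 10_000           # toy population (the study had ~1.5M)
HOURS, CELLS = 24 * 7, 500  # one week of hours, 500 antenna locations
POINTS_PER_TRACE = 60       # observed (hour, cell) events per person

# Each person's trace is a set of (hour, cell-tower) observations.
traces = [
    {(random.randrange(HOURS), random.randrange(CELLS))
     for _ in range(POINTS_PER_TRACE)}
    for _ in range(N_PEOPLE)
]

def is_unique(target: int, k: int = 4) -> bool:
    """Draw k random points from the target's trace; is the target the
    only person whose trace contains all k of them?"""
    points = random.sample(sorted(traces[target]), k)
    matches = sum(all(p in t for p in points) for t in traces)
    return matches == 1

trials = 200
hits = sum(is_unique(random.randrange(N_PEOPLE)) for _ in range(trials))
print(f"uniquely identified in {hits / trials:.0%} of trials")
```

In sparse simulated traces like these, almost everyone is unique; the striking empirical finding was that real, far denser mobility data behaves much the same way, which is why the authors doubt a general anonymization fix exists for high-dimensional data.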
Read more on Privacy Compliance & Data Security.
How would we ‘vet’ the data that trains an AI?
The unforeseen trouble AI is now causing
AI has come a long way in recent years — but as many who work with this technology can attest, it is still prone to surprising errors that wouldn’t be made by a human observer. While these errors can sometimes be the result of the required learning curve for artificial intelligence, it is becoming apparent that a far more serious problem is posing an increasing risk: adversarial data.
For the uninitiated, adversarial data describes a situation in which human users intentionally supply an algorithm with corrupted information. The corrupted data throws off the machine learning process, tricking the algorithm into reaching false conclusions or incorrect predictions.
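A minimal sketch of what that corruption can look like in training, assuming a toy label-flipping scenario (simulated data and a from-scratch logistic regression; nothing here comes from the article itself):

```python
# Label-flipping "poisoning" sketch: corrupting part of the training labels
# shifts a simple model's decision boundary and hurts it on clean data.
# Toy data and a from-scratch logistic regression; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two 2-D Gaussian blobs with labels 0 and 1."""
    X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y = 1)
        w -= lr * X.T @ (p - y) / len(y)        # log-loss gradient for w
        b -= lr * np.mean(p - y)                # log-loss gradient for b
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

w, b = train_logreg(X_train, y_train)
print("trained on clean labels:   ", accuracy(w, b, X_test, y_test))

# The "attacker" relabels 40% of class-0 training samples as class 1,
# dragging the learned boundary into class-0 territory.
y_bad = y_train.copy()
zeros = np.flatnonzero(y_train == 0)
y_bad[rng.choice(zeros, size=int(0.4 * len(zeros)), replace=False)] = 1

w, b = train_logreg(X_train, y_bad)
print("trained on poisoned labels:", accuracy(w, b, X_test, y_test))
```

The model never changes; only the labels it learned from did, which is what makes this failure mode hard to spot after the fact.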
As a biomedical engineer, I view adversarial data as a significant cause for concern. UC Berkeley professor Dawn Song notably tricked a self-driving car into thinking that a stop sign said the speed limit was 45 miles per hour.
… Interestingly, adversarial data output can occur even without malicious intent. This is largely because of the way algorithms can “see” things in the data that we humans are unable to discern. Because of that “visibility,” a recent case study from MIT describes adversarial examples as “features” rather than bugs.
… As Moazzam Khan noted at Security Intelligence, there are two main types of attacks that rely on adversarial data: poisoning attacks, in which “the attacker provides input samples that shift the decision boundary in his or her favor,” and evasion attacks, in which “an attacker causes the model to misclassify a sample.”
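To make the evasion case concrete, here is a toy sketch of the fast gradient sign method (FGSM), a standard evasion technique. This is a generic illustration on a high-dimensional linear model, not the attack actually used in the stop-sign demonstration:

```python
# Evasion sketch: the fast gradient sign method (FGSM) perturbs every input
# feature by a tiny amount in the direction that most increases the loss.
# Toy high-dimensional linear model; this is not the stop-sign attack itself.
import numpy as np

rng = np.random.default_rng(0)
d = 3072  # think of a flattened 32x32 RGB "image"

w = rng.normal(0.0, 1.0, d)  # a fixed, stand-in "trained" weight vector
b = 0.0

def predict(x):
    """P(class 1) under a logistic model on centered features."""
    return 1.0 / (1.0 + np.exp(-(w @ (x - 0.5) + b)))

# A clean input near mid-gray carrying a faint class-1 signal, so the model
# is confidently in class 1 (think: "this is a stop sign").
x = np.clip(0.5 + 0.002 * np.sign(w) + rng.normal(0.0, 0.002, d), 0.0, 1.0)
print(f"clean:       P(class 1) = {predict(x):.4f}")

# FGSM: for log-loss, the gradient w.r.t. the input is (p - y_true) * w.
# Stepping eps along its sign changes each feature by at most 1% of its
# range, but the effect adds up across all 3072 dimensions.
eps = 0.01
grad_x = (predict(x) - 1.0) * w              # y_true = 1
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
print(f"adversarial: P(class 1) = {predict(x_adv):.4f}")
print(f"largest per-feature change: {np.abs(x_adv - x).max():.3f}")
```

The key point, and the reason small perturbations fool large vision models, is that a barely visible per-feature change accumulates across thousands of dimensions into a large swing in the model's score.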
Should we feed in the speech of political candidates?
MACHINE LEARNING SAYS ‘SOUND WORDS’ PREDICT PSYCHOSIS
The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people’s conversational language, a known indicator for psychosis.
Their results show that automated analysis of the two language variables—more frequent use of words associated with sound and speaking with low semantic density, or vagueness—can predict whether an at-risk person will later develop psychosis with 93 percent accuracy.
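As a rough illustration of what those two variables measure, here is a toy sketch. The sound-word lexicon is invented, and "content words per word" is a crude stand-in for semantic density, not the vector-based method the paper actually developed:

```python
# Toy sketch of the two language variables. The sound-word lexicon is
# invented for illustration, and "content words per word" is a crude
# stand-in for semantic density, NOT the vector-unpacking method the
# npj Schizophrenia paper actually uses.
SOUND_WORDS = {"sound", "sounds", "voice", "voices", "hear", "hearing",
               "heard", "whisper", "noise", "loud"}
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
                  "it", "that", "this", "is", "was", "be", "i", "you",
                  "they", "like", "know", "just"}

def language_features(transcript: str) -> tuple[float, float]:
    """Return (sound-word rate, crude semantic-density proxy)."""
    tokens = [t.strip(".,!?").lower() for t in transcript.split()]
    sound_rate = sum(t in SOUND_WORDS for t in tokens) / len(tokens)
    # Vague, filler-heavy speech has few content words per word.
    density = sum(t not in FUNCTION_WORDS for t in tokens) / len(tokens)
    return sound_rate, density

# A higher sound-word rate and lower density would both be warning signs.
print(language_features("I keep hearing a sound, a voice, a loud whisper in the walls"))
print(language_features("It was, like, you know, it was just that it was a thing"))
```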
Even trained clinicians had not noticed how people at risk for psychosis use more words associated with sound than the average, although abnormal auditory perception is a pre-clinical symptom.
“Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes,” says Neguine Rezaii, first author of the paper in npj Schizophrenia. “The automated technique we’ve developed is a really sensitive tool to detect these hidden patterns. It’s like a microscope for warning signs of psychosis.”