So this could reveal who is in a
battered women's shelter, or the location of foster children? Not
good, but not talking about the risks is even worse.
WGN TV reports:
Chicago police are
investigating a “significant theft” of computer equipment from
the Department of Family and Support Services.
Someone stole
about $41,000 worth of computer equipment from a city office building
on the West Side in a burglary, police said.
Police could not
comment on what information may have been compromised.
A spokesperson for
DFSS was also unable to comment, saying “Because this matter is
under investigation by Chicago Police, it would be inappropriate for
us to comment on any of the details that could be a part of that
investigation.”
The Chicago
Tribune has a bit more on the burglary, including the fact that
not all of the stolen equipment was new.
If any unencrypted PII or PHI were on
the stolen computers, the police would not want to tip the burglars
to the presence of usable information. But by the same token, if
there are PII or PHI on them, DFSS cannot afford to wait long to alert
clients, who will need to protect themselves.
(Related) Statistically, this might be
true, but potential victims would rather have the particulars for
this case, not what happens “on average.”
A breach notification letter submitted
this week to the Vermont Attorney General’s Office by WorldVentures
Marketing had me grinding my teeth.
According to the notification to
consumers, WorldVentures recently became aware of unauthorized access
to their servers. The access may have occurred from
October 23, 2012 through
March 14, 2013. The server held customers’ credit card
numbers with expiration dates. They do not indicate how they became
aware of the unauthorized access.
The firm says that they do not have any
evidence that the card data were extracted. Then again, do they have
any firm proof the data weren't extracted?
“We believe the risk of harm
to you is low.”
If you don’t know for sure that data
were not extracted, should you write that? No.
The firm did
not offer affected customers any free credit monitoring services.
“It's for your own good!”
hypnosec tipped us to news that India
is rolling out a new intrusive monitoring system, using the authority
of a 2000 telecom law. Quoting The Times of India:
"However,
Pavan Duggal, a Supreme Court advocate specialising in cyberlaw, said
the
government has given itself unprecedented powers to monitor private
Internet records of citizens. 'This system is capable of abuse,'
he said. The Central Monitoring System, being set up by the Centre
for Development of Telematics, plugs into telecom gear and gives
central and state investigative agencies a single point of access to
call records, text messages, and emails as well as the geographical
location of individuals."
Privacy advocates are worried about
abuse, partially because India has
no effective privacy legislation, and the "...Indian government
under PM Manmohan Singh has taken an increasingly uncompromising
stance when it comes to online freedoms, with the stated
aim usually to preserve social order and national security or
fight 'harmful' defamation."
Don't locate it next to a skeet
range...
At a Washington
speech Wednesday to entrepreneurs and business leaders in the
unmanned aerial technology sector, Udall urged development of the
technology, saying it will help people. [at least,
according to Sen. Udall. Bob]
"We need to
integrate unmanned aerial systems into the American psyche in a way
that isn't threatening or scary," [i.e. Sneak up
on them? Bob] he said, in remarks at the National Press
Club. "Many here today have likely recognized that I'm
deliberately not using the word 'drone' because it carries a stigma.
… Udall, along
with other Colorado officials, is urging the Federal Aviation
Administration to make Colorado a test site for the unmanned aircraft
systems.
… To keep the
checks and balances intact, Udall plans to propose legislation that
would prohibit individuals or private businesses from spying on
another person using a privately operated drone.
It's a Jedi mind trick: “This is not
the Fourth Amendment violation you are looking for...”
Additional perspective on today’s
ruling in Rigmaiden from Linda Lye of the ACLU:
Today, a federal
district judge in Arizona issued a very disappointing decision
concerning the government’s obligations to be candid with courts
about new technologies they are seeking a warrant to use.
The case involves
Daniel Rigmaiden, who is being criminally prosecuted for an alleged
electronic tax fraud scheme. The government used a surveillance
device known as a stingray to locate Mr. Rigmaiden. A stingray
operates by simulating a cell tower and tricking all
wireless devices on the same network in the immediate vicinity into
communicating with it, as though it were the carrier’s
cell tower. In order to locate a suspect, a stingray scoops up
information not only of the suspect, but all third parties on the
same network in the area. This means that when the
government uses a stingray to conduct a search, it is searching not
only the suspect, but also tens or hundreds of third parties who have
nothing to do with the matter. When the FBI sought court
permission to use the device to locate Mr. Rigmaiden, it didn’t
explain the full reach of stingrays to the court.
The ACLU and the
Electronic Frontier Foundation filed an amicus brief arguing that
when the government wants to use invasive surveillance technology, it
has an obligation to explain to the court basic information about the
technology, such as its impact on innocent third parties. This is
necessary to ensure that courts can perform their constitutional
function of ensuring that the search does not violate the Fourth
Amendment. Unfortunately, today’s decision trivializes the
intrusive nature of electronic searches and potentially opens the
door to troubling government misuse of new technology.
In
today’s decision denying the motion to suppress, the judge held
that information about how the stingray operates – such as the fact
that it scoops up third party data – was merely a “detail of
execution which need not be specified.” We respectfully
but strongly disagree.
Read more on ACLU’s
blog.
If aggregating lots and lots of trivial
data points could add up to a search, where would that leave the data
brokers who collect all that behavioral advertising stuff?
Orin Kerr writes:
I haven’t
blogged recently on judicial decisions considering the
mosaic theory of the Fourth Amendment. As regular readers will
recall, the “mosaic theory” is a term for the idea that long-term
monitoring of a suspect can be a Fourth Amendment search even if
short-term monitoring is not. Under this approach, which was
suggested by the concurring opinions in United
States v. Jones, surveillance and analysis of a suspect is
outside the Fourth Amendment until it reaches some point when it has
gone on for too long, has created a full picture of a person’s life
(the mosaic), and therefore becomes a search that must be justified
under the Fourth Amendment. I think the mosaic approach is a misstep
for reasons I elaborated on in
this article. And the handful of lower courts to have considered
the theory since Jones mostly have not adopted it, either
because they found it unpersuasive, because they distinguished Jones
on the facts, or because they avoided the question under the
good-faith exception to the exclusionary rule. See, e.g., United
States v. Graham, 846 F.Supp.2d 384 (D.Md. 2012).
In the last week,
two district courts have divided on the question: United
States v. Rigmaiden (D. Ariz. May 8, 2013), and United
States v. Powell, — F.Supp.2d –, 2013 WL 1876761 (E.D. Mich
May 3, 2013). In this post, I want to discuss the two rulings,
and then offer some critical commentary on Powell at the
end.
Read more on The
Volokh Conspiracy.
(Related) We can, therefore we must?
(Remember, “To Serve Man” is a Twilight Zone cookbook) Note that
the sensors can tell what department you are in (and that is no doubt
logged with a time stamp), but can't follow you from department to
department (except by arranging the departments in time sequence).
Does no one ever read this stuff before publishing?
CBS in Dallas-Fort Worth reports:
Nordstrom says
it wants to serve you better, so it’s tracking your movements
through their stores. The CBS 11 I-Team has learned the retailer is
using software to track how much time you spend in specific
departments within the store. The technology is being used in 17
Nordstrom and Nordstrom Rack stores nationwide, including the
NorthPark store in Dallas.
A company
spokesperson says sensors within the store collect information
from customers' smartphones as they attempt to connect to Wi-Fi
service. The sensors can monitor which departments
you visit and how much time you spend there.
However, the
sensors do not follow your phone from department to
department, nor can they identify any personal information
tied to the phone’s owner, says spokesperson Tara Darrow.
Read more on CBSDFW.
So if you want to shop and don’t want
to contribute to their “aggregate” information, you have to shut
off your phone? I guess they can get away with this, but should they
be able to?
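The point above about time-stamped logs is worth making concrete: even if each sensor only records isolated (department, timestamp) sightings of a phone's Wi-Fi hardware address, sorting one phone's sightings by time reconstructs its path through the store. A minimal sketch with made-up data (Nordstrom's actual system is not public; the MAC addresses and departments here are hypothetical):

```python
# Sketch: reconstructing a shopper's path from per-department sightings.
# No sensor "follows" a phone, but each logs (mac, department, timestamp),
# and sorting one MAC's sightings by timestamp yields the sequence anyway.
from collections import defaultdict

sightings = [
    ("aa:bb:cc:dd:ee:ff", "Shoes", 1001),
    ("11:22:33:44:55:66", "Cosmetics", 1002),
    ("aa:bb:cc:dd:ee:ff", "Menswear", 1010),
    ("aa:bb:cc:dd:ee:ff", "Cosmetics", 1025),
]

def paths(sightings):
    """Group sightings by MAC, then order each phone's sightings in time."""
    by_mac = defaultdict(list)
    for mac, dept, ts in sightings:
        by_mac[mac].append((ts, dept))
    return {mac: [dept for _, dept in sorted(v)] for mac, v in by_mac.items()}

print(paths(sightings)["aa:bb:cc:dd:ee:ff"])
# → ['Shoes', 'Menswear', 'Cosmetics']
```

Which is why the claim that the sensors "do not follow your phone from department to department" is technically true but practically hollow.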
(Related) For my Ethical Hackers
"A researcher has found that
Apple user locations can
be potentially determined by tapping into Apple Maps and he has
created a Python tool to make the process easier. iSniff GPS
accesses Apple's database of wireless access points, which is
collected by iPhones and iPads that have GPS and Wi-Fi location
services enabled. Apple uses this crowd-sourced data to run its
location services;
however, the location database is not meant to be public. You
can download the tool
via GitHub."
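The underlying idea is simple: a crowd-sourced database maps each access point's BSSID (its Wi-Fi MAC address) to geographic coordinates, so knowing which BSSIDs a device has seen reveals roughly where that device is. A minimal sketch using a hypothetical local table (the real tool queries Apple's undocumented service, which is not reproduced here; the entries below are invented):

```python
# Sketch: BSSID-to-location lookup, the core idea behind Wi-Fi geolocation.
# This table is made up; real services hold crowd-sourced entries reported
# by phones with GPS and Wi-Fi location services enabled.
BSSID_DB = {
    "00:11:22:33:44:55": (42.3601, -71.0589),   # hypothetical AP, Boston
    "66:77:88:99:aa:bb": (37.7749, -122.4194),  # hypothetical AP, San Francisco
}

def locate(bssid):
    """Return (lat, lon) for a known access point, else None."""
    return BSSID_DB.get(bssid.lower())

print(locate("00:11:22:33:44:55"))  # → (42.3601, -71.0589)
```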
A clear explanation...
Why
facial recognition tech failed in the Boston bombing manhunt
In the last decade, the US government
has made a big investment in facial recognition technology. The
Department of Homeland Security paid out hundreds of
millions of dollars in grants to state and local
governments to build facial recognition databases—pulling photos
from drivers' licenses and other identification to create a massive
library of residents, all in the name of anti-terrorism. In New
York, the Port Authority is installing a "defense
grade" computer-driven surveillance system around the World
Trade Center site to automatically catch potential terrorists through
a network of hundreds of digital eyes.
But then an act of terror happened in
Boston on April 15. Alleged perpetrators Dzhokhar and Tamerlan
Tsarnaev were both in the database. Despite having an array of
photos of the suspects, the system couldn't come up with a match. Or
at least it didn't come up with one before the Tsarnaev brothers had
been identified by other means.
For people who understand how facial
recognition works, this comes as no surprise. Despite advances in
the technology, systems are only as good as the data they're given to
work with. Real life isn't like anything you may have seen on NCIS
or Hawaii Five-0. Simply put, facial recognition isn't an
instantaneous, magical process. Video from a gas station
surveillance camera or a police CCTV camera on some lamppost cannot
suddenly be turned into a high-resolution image of a suspect's face
that can then be thrown against a drivers' license photo database to
spit out an instant match.
Nothing new to my Ethical Hackers, but
it might amuse my Intro to IT students.
Use
These Secret NSA Google Search Tips to Become Your Own Spy Agency
There’s so much data available on the
internet that even government cyberspies need a little help now and
then to sift through it all. So to assist them, the National
Security Agency produced a book to help its spies uncover
intelligence hiding on the web.
The 643-page tome, called Untangling
the Web: A Guide to Internet Research (.pdf), was just
released by the NSA following a FOIA request filed in April by
MuckRock, a site that
charges fees to process public records for activists and others.
The book was published by the Center
for Digital Content of the National Security Agency, and is filled
with advice for using search engines, the Internet Archive and other
online tools. But the most interesting is the chapter titled “Google
Hacking.”
… Lest you think that none of this
is new, that Johnny Long has been talking about this for years at
hacker conferences and in his book Google
Hacking, you’d be right. In fact, the authors of the NSA book
give a shoutout to Johnny, but with the caveat that Johnny’s tips
are designed for cracking — breaking into websites and servers.
“That is not something I encourage or advocate,” the author
writes.
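The "Google hacking" techniques the chapter covers rest on standard, documented search operators such as site:, filetype:, and intitle:. A small sketch composing such a query (the example terms are illustrative, not drawn from the NSA book):

```python
# Sketch: composing a "Google dork" query from standard search operators.
# site:, filetype:, and intitle: are documented Google search syntax.
def dork(terms=(), site=None, filetype=None, intitle=None):
    """Build an advanced search query string from the given restrictions."""
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    if intitle:
        parts.append(f'intitle:"{intitle}"')
    return " ".join(parts)

print(dork(terms=["budget"], site="example.gov", filetype="xls"))
# → budget site:example.gov filetype:xls
```

Restricting a search to a particular domain and file type is exactly the kind of "research, not cracking" use the NSA authors describe.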