A bit late, don’t you think?
The Federal Trade Commission issued an Opinion finding that the data analytics and consulting company Cambridge Analytica, LLC engaged in deceptive practices to harvest personal information from tens of millions of Facebook users for voter profiling and targeting. The Opinion also found that Cambridge Analytica engaged in deceptive practices relating to its participation in the EU-U.S. Privacy Shield framework.
In an administrative complaint filed in July, FTC staff alleged that Cambridge Analytica, its then-CEO Alexander Nix, and app developer Aleksandr Kogan deceived consumers. Nix and Kogan agreed to settle the FTC’s allegations. Cambridge Analytica, which filed for bankruptcy in 2018, did not respond to the complaint filed by FTC staff or to a subsequent motion for summary judgment on the allegations.
The FTC staff’s administrative complaint alleged that Kogan worked with Nix and Cambridge Analytica to enable Kogan’s GSRApp to collect Facebook data from app users and their Facebook friends. The complaint alleged that app users were falsely told the app would not collect users’ names or other identifiable information. The GSRApp, however, collected users’ Facebook User IDs, which connect individuals to their Facebook profiles.
The complaint also alleged that Cambridge Analytica claimed it participated in the EU-U.S. Privacy Shield (which allows companies to transfer consumer data legally from European Union countries to the United States) after allowing its certification to lapse. In addition, the complaint alleged the company failed to adhere to the Privacy Shield requirement that companies that cease participation affirm to the Department of Commerce, which maintains the list of Privacy Shield participants, that they will continue to apply Privacy Shield protections to personal information collected while participating in the program.
In its Opinion, the Commission found that Cambridge Analytica violated the FTC Act through the deceptive conduct alleged in the complaint. The Final Order prohibits Cambridge Analytica from making misrepresentations about the extent to which it protects the privacy and confidentiality of personal information, as well as about its participation in the EU-U.S. Privacy Shield framework and other similar regulatory or standard-setting organizations. In addition, the company is required to continue to apply Privacy Shield protections to personal information it collected while participating in the program (or to provide other protections authorized by law), or to return or delete the information. It must also delete the personal information that it collected through the GSRApp.
Will others pile on? Apparently US Senators (Orrin Hatch) are not the only ones who were fooled.
Facebook Gets $4m Fine From Hungary for Claim Services Are Free
Hungary’s competition watchdog handed Facebook Inc. a 1.2 billion forint ($4 million) fine for claiming its services were free.
Facebook made a profit from utilizing users’ online activity and data, which served as “payment” for the services, the Budapest-based authority said in an emailed statement on Friday. Claiming the website was free may have misled users regarding the value of the data they were giving the technology firm, it said. A Facebook spokesman was not immediately available for comment.
Interesting. Was this entirely a “Gee, that sounds good. Let’s try it.” kind of thing?
Social Media Vetting of Visa Applicants Violates the First Amendment
Since May, the State Department has required almost everyone applying for a U.S. visa (more than 14 million people each year) to register every social media handle they’ve used over the past five years on any of 20 platforms, including Facebook, Instagram, Twitter, and YouTube. The information collected through the new registration requirement is then retained indefinitely, shared widely within the federal bureaucracy as well as with state and local governments, and, in some contexts, even disseminated to foreign governments. The registration requirement chills the free speech of millions of prospective visitors to the United States, to their detriment and to ours.
On Thursday, on behalf of two U.S.-based documentary film organizations, the Knight First Amendment Institute and the Brennan Center for Justice sued to stop this policy, arguing that it violates the First Amendment as well as the Administrative Procedure Act.
… There is no evidence that the social media registration requirement serves the government’s professed goals. Despite the State Department’s bare assertion that collecting social media information will “strengthen” the processes for “vetting applicants and confirming their identity,” the government has failed (in numerous attempts) to show that social media screening is even effective as a visa-vetting or national security tool.
Do they see many of these hacks? Not clear from the article or the FBI notice.
If You Have an Amazon Echo or Google Home, the FBI Has Some Urgent Advice for You
… The FBI puts it like this:
Hackers can use that innocent device to do a virtual drive-by of your digital life. Unsecured devices can allow hackers a path into your router, giving the bad guy access to everything else on your home network that you thought was secure. Are private pictures and passwords safely stored on your computer? Don't be so sure.
- Change the device's factory settings from the default password. A simple Internet search should tell you how, and if you can't find the information, consider moving on to another product.
- Many connected devices are supported by mobile apps on your phone. These apps could be running in the background and using default permissions that you never realized you approved. Know what kind of personal information those apps are collecting and say "no" to privilege requests that don't make sense.
- Secure your network. Your fridge and your laptop should not be on the same network. Keep your most private, sensitive data on a separate system from your other IoT devices.
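The FBI's first piece of advice — replace factory-default credentials — can be sketched as a small audit check. This is an illustrative sketch only: the device list and the `DEFAULT_CREDENTIALS` table below are made-up examples, not drawn from any real product or breach data.

```python
# Hypothetical helper: flag home-network devices whose credentials
# still match a known factory default. All data here is invented.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
}

def flag_default_credentials(devices):
    """Return the names of devices whose (user, password) pair
    matches a known factory-default combination."""
    return [
        d["name"]
        for d in devices
        if (d["user"], d["password"]) in DEFAULT_CREDENTIALS
    ]

devices = [
    {"name": "smart-cam", "user": "admin", "password": "admin"},
    {"name": "router", "user": "homeadmin", "password": "k9#xQ!2f"},
]

print(flag_default_credentials(devices))  # only "smart-cam" is flagged
```

In practice a real audit would test credentials against the devices themselves; the point of the sketch is simply that default logins are enumerable and therefore trivially checkable — by you or by an attacker.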
Obvious? My students think so.
General Counsel Must Come to Grips With Artificial Intelligence
Artificial intelligence is evolving and exposing companies to new areas of liability and regulatory minefields, which means it’s high time for general counsel to get comfortable with AI if they want to avoid costly compliance missteps.
That’s the takeaway from a report that Lex Mundi released on Thursday. The report is based on a workshop discussion that Lex Mundi, a Houston-based global network of independent law firms, hosted earlier this year in Amsterdam.
… Alexander Birnstiel, a Brussels-based partner at Noerr who contributed to the report, noted that “companies and their in-house legal teams must navigate an environment characterized by a patchwork of new competition enforcement initiatives and regulatory rules across jurisdictions whenever they engage in digital business.”
… Several of the conference participants also suggested that corporate boards include cyber experts, who can serve as a “valuable ally to a general counsel.”
Government agencies, including the Securities and Exchange Commission and the Australian Securities and Investments Commission, are already using AI to detect misconduct. At the same time, general counsel should be pushing companies to leverage the technology to identify potential regulatory issues before they become serious problems.
(Related)
AI, Machine Learning and Robotics: Privacy, Security Issues
… "we're beginning to see surgical robots ... and robots that take supplies from one part of a hospital to another. … You can use AI to help sequence a child's DNA ... and match and identify a condition in very short order," Wu says in an interview with Information Security Media Group.
But along with those bold technological advances come emerging privacy and security concerns.
"The HIPAA Security Rule doesn't talk about surgical robots and AI systems," he notes. Nevertheless, HIPAA's administrative, physical and technical safeguard requirements still apply, he says.
As a result, organizations must determine, for example, "what kind of security management procedures are touching these devices and systems - and do you have oversight over them?"
Also critical is ensuring that "communications are secure from one point to another," he points out. "If you have an AI system that's drawing records from an electronic health record, how is that transmission being secured? How do we know the AI system drawing [information] from the EHR system has been properly authenticated?"
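Wu's authentication question has a familiar building-block answer: the EHR side must be able to verify that a request really came from the authorized AI client. A minimal sketch of one such mechanism is a shared-secret HMAC signature over the request body, verified server-side. The key, payload, and function names below are hypothetical, not from any real EHR API; this is an illustration of the principle, not a recommended production design.

```python
# Hedged sketch: authenticate an AI client's request to a hypothetical
# EHR service with an HMAC-SHA256 tag over the request body.
import hashlib
import hmac

SHARED_KEY = b"demo-key-rotate-me"  # assumption: provisioned out of band

def sign_request(body: bytes, key: bytes = SHARED_KEY) -> str:
    """Client side: compute an HMAC-SHA256 tag over the request body."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Server side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload = b'{"patient_id": "12345", "fields": ["conditions"]}'
tag = sign_request(payload)
assert verify_request(payload, tag)             # untampered request passes
assert not verify_request(payload + b"x", tag)  # tampering is detected
```

In practice the transmission-security half of Wu's question would be answered by TLS, and client authentication would typically use an established scheme (OAuth 2.0 tokens or mutual TLS) rather than a hand-rolled HMAC; the sketch only shows why a verifiable secret, not just a network connection, is what establishes identity.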
If you are a regular reader of this Blog, you know what these parents were concerned about.
Lois Beckett reports:
Parents at a public school district in Maryland have won a major victory for student privacy: tech companies that work with the school district now have to purge the data they have collected on students once a year. Experts say the district’s “Data Deletion Week” may be the first of its kind in the country.
It’s not exactly an accident that schools in Montgomery county, in the suburbs of Washington DC, are leading the way on privacy protections for kids. The large school district is near the headquarters of the National Security Agency and the Central Intelligence Agency. It’s a place where many federal employees, lawyers and security experts send their own kids.
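Mechanically, a "Data Deletion Week" purge is just an annual retention cutoff applied to vendor-held records. The sketch below assumes invented record fields and dates; real vendors would purge inside their own systems, under whatever contract terms the district negotiated.

```python
# Illustrative only: drop student records collected before a cutoff
# date, in the spirit of the district's annual "Data Deletion Week".
from datetime import date

def purge_before(records, cutoff):
    """Keep only records collected on or after the cutoff date."""
    return [r for r in records if r["collected"] >= cutoff]

records = [
    {"student": "A", "collected": date(2018, 10, 1)},
    {"student": "B", "collected": date(2019, 6, 15)},
]

kept = purge_before(records, date(2019, 1, 1))
print([r["student"] for r in kept])  # only "B" survives the purge
```

The hard part, as the article suggests, is not the deletion logic but getting every vendor to run it — and to prove they did.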