Keep up!
Round-Up of Recent Changes to U.S. State Data Breach Notification Laws
We should have this figured out in a few (Okay, 30) years.
Disinformation and the 2020 Election: How the Social Media Industry Should Prepare
NYU Stern Center for Business and Human Rights – The role of social media in a democracy.
“In our fourth report on online disinformation, the NYU Stern Center for Business and Human Rights explores risks to democracy and free speech posed by the expected spread of disinformation during the 2020 U.S. presidential election. The report outlines steps the social media companies should take to counter the coming wave of disinformation. Preparing for the fight against false and divisive content will not be cost-free. But investments in R&D and personnel ultimately will help social media platforms restore their brand reputations and slow demands for draconian government regulation.
Social media companies’ policies on disinformation often lack clarity and strategic foresight and have been enforced in an ad hoc fashion. To reduce the probability of governmental content regulation in the U.S., these companies should show they can close the governance gap when it comes to disinformation. Read our examination of how social media companies have reacted to politically oriented false content, and the disinformation tactics they will need to prepare for in 2020…”
(Related)
US plans for fake social media run afoul of Facebook rules
Facebook said Tuesday that the U.S. Department of Homeland Security would be violating the company’s rules if agents create fake profiles to monitor the social media of foreigners seeking to enter the country.
“Law enforcement authorities, like everyone else, are required to use their real names on Facebook and we make this policy clear,” Facebook spokeswoman Sarah Pollack told The Associated Press in a statement Tuesday. “Operating fake accounts is not allowed, and we will act on any violating accounts.”
Pollack said the company has communicated its concerns and its policies on the use of fake accounts to DHS. She said the company will shut down fake accounts, including those belonging to undercover law enforcement, when they are reported.
The company’s statement followed the AP’s report Friday that U.S. Citizenship and Immigration Services had authorized its officers to use fake social media accounts in a reversal of a previous ban on the practice.
For discussion.
Russell Brandom reports on another case where law enforcement served Google with a search warrant,
...asking for data that would identify any Google user who had been within 100 feet of the bank during a half-hour block of time around the robbery. They were looking for the two men who had gone into the bank, as well as the driver who dropped off and picked up the crew, and would potentially be caught up in the same dragnet. It was an aggressive technique, scooping up every Android phone in the area and trusting police to find the right suspects in the mess of resulting data. But the court found it entirely legal, and it was returned as executed shortly after.
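The mechanics of such a geofence dragnet are easy to sketch. Below is a minimal, hypothetical illustration in Python (all names, coordinates, and data structures are invented for illustration; this is not Google's actual system): filter stored location pings down to a radius around a point and a time window, then return the matching device IDs.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt


@dataclass
class LocationPing:
    """One stored location report from a device (hypothetical schema)."""
    device_id: str
    lat: float
    lon: float
    ts: datetime


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # Earth radius ~6371 km


def geofence_query(pings, center_lat, center_lon, radius_m, start, end):
    """Return device IDs with at least one ping inside the circle and window."""
    return sorted({
        p.device_id
        for p in pings
        if start <= p.ts <= end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    })
```

A 100-foot radius is roughly `radius_m=30`; the article's half-hour block is the `start`/`end` window. Note how indiscriminate the result is: every device that ever reported a point inside the circle during the window is returned, regardless of why it was there.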
Read more about this type of reverse warrant on The Verge, and then think about whether you leave your cellphone’s default location setting ON or OFF.
Moving slowly is better than not moving at all.
Facebook will no longer scan user faces by default
Facebook is making facial recognition in photos opt-in by default. Starting today, it’s rolling out its Face Recognition privacy setting, which it first introduced in December 2017, to all users. If you have Face Recognition turned on, Facebook will notify you if someone uploads a photo of you, even if you aren’t tagged. You can then tag yourself, stay untagged, or report the photo if it’s something you want taken down. Facebook tells The Verge it expects to complete the rollout over the next several weeks.
Everything helps.
Transferring Data Under GDPR
We have found that beliefs about managing data transfers can be broad and confusing since the EU General Data Protection Regulation (GDPR) came into force in May 2018. Some believe no data transfers outside of the EU are allowed. Others believe that if you have a legitimate business reason to transfer data, and an agreement with the customer, it is simply business as usual. The real answer often lies in between.
We will walk through the GDPR requirements for processing personal data to help you envision how the GDPR data transfer rules may apply to your organization and your customers.
Confusing, isn’t it?
German court decides that GDPR consent can be tied to receiving advertising
On June 27, 2019, the High Court of Frankfurt decided that consent for data processing tied to consent for receiving advertising can be considered freely given under the GDPR.
… The claimant’s consent had been obtained in connection with his participation in a sweepstakes contest. In order for the claimant to participate in the contest, he had to consent to receive advertising from partners of the sweepstakes company.
… In line with previous case law, the court decided that bundling consent for advertising with the participation in a sweepstakes contest does not prevent it from being “freely given”. According to the court, “freely given” consent is a consent that is given without “coercion” or “pressure”. The court decided that enticing a customer with a promise of a discount or the participation in a sweepstakes contest in exchange for the consent to process his data for advertising does not amount to such coercion or pressure. According to the court, “a consumer may and should decide himself or herself if the participation in the sweepstakes is worth his or her data”.
Do you have a secure procedure for forwarding email?
Beware of web beacons that can secretly monitor your email
Legal By the Bay – Joanna L. Storey: “A twist in the recent prosecution of a Navy Seal charged with killing a prisoner in Iraq in 2017 brought to the forefront an ethics issue that has been squarely addressed by several jurisdictions, but not yet in California: the unethical surreptitious tracking of emails sent to opposing counsel using software embedded in a logo or other image. Also known as a web beacon, the tracking software is an invisible image no larger than a pixel that is placed in an email and, once activated, monitors such actions as when the email was opened, for how long, how many times, where, and whether the email was forwarded. The sender’s goal may be to determine how seriously you are considering a settlement demand that he attached to an email – the more you view the email, the more you may be inclined to accept the demand. Or, the sender may want to know to where you forward the email (e.g., you may forward the email to a client whose location is unknown to opposing counsel)….”
[The full article: https://blog.sfbar.org/2019/08/27/beware-of-web-beacons-that-can-secretly-monitor-your-email/]
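The tracking mechanism the article describes can be sketched in a few lines. This is a hypothetical illustration (the domain, identifiers, and function names are invented): the sender embeds a 1×1 image whose URL carries an identifier, and the sender's server logs every time that image is fetched.

```python
# Hypothetical sketch of an email web beacon (tracking pixel).
# The tracker domain and recipient IDs below are invented for illustration.

opens_log = []  # server-side record of beacon fetches


def beacon_html(recipient_id: str) -> str:
    """Build an HTML email body containing an invisible 1x1 tracking image."""
    return (
        "<p>Please review the attached settlement demand.</p>"
        f'<img src="https://tracker.example.com/pixel.gif?id={recipient_id}" '
        'width="1" height="1" style="display:none" alt="">'
    )


# A commonly cited minimal 1x1 transparent GIF.
PIXEL_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
    b"\x02\x02D\x01\x00;"
)


def handle_pixel_request(recipient_id: str) -> bytes:
    """Simulate the tracker endpoint: record the open, return the pixel."""
    opens_log.append(recipient_id)
    return PIXEL_GIF
```

When the recipient’s mail client loads remote images, it fetches the pixel URL and the sender learns the message was opened (and, from request metadata, roughly when and from where). Blocking remote image loading, as many clients do by default, defeats this.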
Interesting article. Do my grad students know as much?
MIT developed a course to teach tweens about the ethics of AI
This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10-14 year olds. In one exercise, she asked the group what they thought YouTube’s recommendation algorithm was used for.
“To get us to see more ads,” one student replied.
“These kids know way more than we give them credit for,” Payne said.
Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media.
… “Kids today are not just digital natives, they are AI natives,” said Cynthia Breazeal, Payne’s advisor and the head of the personal robots group at the MIT Media Lab. Her group has developed an AI curriculum for preschoolers.
Training the next generation. Probably worth considering?
4 Ways to Avoid Having AI Release Consumers’ Inner Sociopath
“Alexa, you’re ugly. Alexa, you’re stupid. Alexa, you’re fat.”
This barrage of abuse came from my friend’s children, who were shouting at his Amazon device, trying to prompt a witty comeback from the AI assistant. What was just a game to the kids looked a lot like the worst kind of playground bullying, and as my friend unplugged the device, he scolded, “We don’t talk to people like that.”
But unfortunately, we do talk like that, especially to AI assistants and chatbots that are unable to establish the boundaries that humans do. After all, if you hit someone, they may hit you back. If you call your barista ugly, you should expect them to spit in your latte. In their inability to push back, virtual assistants and chatbots shield us from the consequences of bad behavior.
(Related)
Can Artificial Intelligence Help Prevent Mental Illness?
… The company has developed a wearable device, an app and machine learning system to collect data and monitor users’ level of stress, before predicting when it could be the cause of a more serious mental or physical health condition.
Mental illness is one of the biggest medical challenges of the 21st century. According to the World Health Organization, around 450 million people globally are affected by mental illness.
But two-thirds of people with a known mental condition, such as anxiety, depression and co-occurring disorders, fail to seek help from medical professionals. This can be due to a number of factors, including stigma and discrimination.
(Related)
15 Social Challenges AI Could Help Solve
What could possibly go wrong?
Air Force-Affiliated Researchers Want to Let AI Launch Nukes
Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title “America Needs a ‘Dead Hand’” — arguing that the United States needs to develop “an automated strategic response system based on artificial intelligence.”
In other words, they want to give an AI the nuclear codes. And yes, as the authors admit, it sure sounds a lot like the “Doomsday Machine” from Stanley Kubrick’s 1964 satire “Dr. Strangelove.”
The “Dead Hand” in the title refers to the Soviet Union’s semiautomated system that would have launched nuclear weapons if certain conditions were met, including the death of the Union’s leader.
This time, though, the AI-powered system suggested by Lowther and McGiffin wouldn’t even wait for a first strike against the U.S. to occur — it would know what to do ahead of time.
Dilbert offers a simple solution for bias.