Tuesday, May 21, 2019


Is gathering data always ‘stealing’ data?
US Warns Chinese Drones May Steal Data: Report
The Department of Homeland Security sent out an alert on Monday flagging drones built in China as a "potential risk to an organization's information", CNN reported.
The US government has "strong concerns about any technology product that takes American data into the territory of an authoritarian state that permits its intelligence services to have unfettered access to that data or otherwise abuses that access," wrote CNN, quoting the DHS alert.
The DHS report did not name any specific Chinese manufacturers, but the southern China-based DJI produces about 70 percent of the world's commercial drones.
"For government and critical infrastructure customers that require additional assurances, we provide drones that do not transfer data to DJI or via the internet," the company added.


(Related)
Opinion | Your Car Knows When You Gain Weight
Vehicles collect a lot of unusual data. But who owns it?




This comes from failure to RTFM.
DHS Highlights Common Security Oversights by Office 365 Customers
As organizations migrate to Microsoft Office 365 and other cloud services, many fail to use proper configurations that ensure good security practices, the U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) warns.
According to CISA, customers who used third parties to migrate email services to Office 365 did not have multi-factor authentication enabled by default for administrator accounts, had mailbox auditing disabled and password sync enabled, and allowed the use of legacy protocols that did not support authentication.
Although Azure Active Directory (AD) Global Administrators have the highest level of administrator privileges at the tenant level in an Office 365 environment, multi-factor authentication (MFA) is not enabled by default for these accounts, CISA points out.




Spy gooder! Clear security implications.
The Spycraft Revolution
Changes in technology, politics, and business are all transforming espionage. Intelligence agencies must adapt—or risk irrelevance.




How honest should you be? What have you been telling your customers?
Isn’t this what I’ve been saying for more than a decade now?
Now there’s a study that agrees with me. Laurel Thomas of the University of Michigan reports on a study called “You ‘Might’ Be Affected: An Empirical Analysis of Readability and Usability Issues in Data Breach Notifications” by Yixin Zou, Shawn Danino, Kaiwen Sun, and Florian Schaub. She reports:
Building on their previous research that showed consumers often take little action when facing security breaches, researchers analyzed the data breach notifications companies sent to consumers to see if the communications might be responsible for some of the inaction.
They found that 97 percent of the 161 sampled notifications were difficult or fairly difficult to read based on readability metrics, and that the language used in them may have contributed to confusion about whether the recipient of the communication was at risk and should take action.
Read more on Futurity.
You can access the full report in HTML or PDF from here.
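For the curious, the kind of readability metric the study relies on is easy to sketch. Below is a rough implementation of the Flesch Reading Ease score (one common metric, not necessarily the exact one the authors used), with a crude vowel-group syllable heuristic; higher scores mean easier text, and dense legalistic notifications score low:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (y included).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# A hypothetical breach-notification sentence, for illustration only.
notice = ("We are writing to notify you of a security incident that may "
          "have resulted in unauthorized acquisition of your personal "
          "information maintained in our systems.")
print(round(flesch_reading_ease(notice), 1))
```

Long sentences and polysyllabic words drag the score down, which is exactly the pattern the researchers flagged in 97 percent of sampled notifications.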


(Related) Dilbert is on point, again.




Are you being paid enough for your data?
Return on Data
Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply — “return on data” (ROD) — remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them?
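As a toy illustration of the ratio (the units here are entirely hypothetical, since the article leaves both U and D abstract):

```python
def return_on_data(utility, data_supplied):
    """ROD = U / D: utility a consumer gains per unit of data supplied.

    Units are hypothetical; the article treats U and D abstractly.
    """
    if data_supplied <= 0:
        raise ValueError("data supplied must be positive")
    return utility / data_supplied

# Two hypothetical services delivering the same utility: service B extracts
# twice the data, so its return on data is half as good for the consumer.
print(return_on_data(50.0, 10.0))  # service A -> 5.0
print(return_on_data(50.0, 20.0))  # service B -> 2.5
```

The hard part, of course, is not the division but putting defensible numbers on U and D in the first place, which is the gap the article says lawmakers overlook.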




Sure they do…
Microsoft wants a US privacy law that puts the burden on tech companies
Microsoft's idea of a US privacy law would make it easier for people to protect their data.
The company's corporate vice president and deputy general counsel, Julie Brill, wrote Monday that people have a right to privacy, as they become increasingly alarmed by how much data tech giants have gathered on them.
Tech giants like Facebook, Google and Apple have also called for a data privacy law, though the specific details vary. In Microsoft's vision for privacy regulation, it calls for shifting the burden of protecting your data from the person to the tech companies.
Microsoft has the numbers to back up how often people actually take that extra step to protect their own privacy. In the year since GDPR came into effect and Microsoft released its Privacy Dashboard, Brill said more than 18 million people have used those tools.
Considering that there are about 1.5 billion Windows devices, that would mean only 1 percent of Microsoft users have actually changed their privacy settings. Similarly, there were about 2.5 billion visits last year to Google's Accounts page, but only about 20 million people viewed their ads settings.
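A quick back-of-the-envelope check of those ratios (figures as quoted above; comparing users to devices, or viewers to visits, is admittedly rough):

```python
# Figures quoted in the article above.
windows_devices = 1.5e9        # approximate Windows install base
privacy_tool_users = 18e6      # Privacy Dashboard users since GDPR

google_account_visits = 2.5e9  # visits to Google's Accounts page last year
ads_settings_viewers = 20e6    # people who viewed their ads settings

print(f"Microsoft: {privacy_tool_users / windows_devices:.1%}")   # 1.2%
print(f"Google:    {ads_settings_viewers / google_account_visits:.1%}")  # 0.8%
```

Either way you slice it, only around one percent of people take the extra step, which is Microsoft's point about where the burden should sit.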




Architecting the LoC.
Digital Strategy for the Library of Congress
The Library of Congress’s mission is to engage, inspire, and inform the Congress and the American people with a universal and enduring source of knowledge and creativity. To accomplish that mission, the Library is adopting a digital-forward strategy that harnesses technology to bridge geographical divides, expand our reach, and enhance our services. This document describes how we will secure the Library’s position in an increasingly digital world as we realize our vision that all Americans are connected to the Library of Congress.
The Digital Strategy complements the Library’s 2019-2023 strategic plan, Enriching the User Experience, which enumerates four high-level goals: expand access, enhance services, optimize resources, and measure results.
The Digital Strategy describes how we will use each interaction as an opportunity to move users along a path from awareness, to discovery, to use, and finally to a connection with the Library through three main goals: throwing open the treasure chest, connecting, and investing in our future.




What is that thingie?
Understanding Artificial Intelligence and Machine Learning
The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech; and Dr. Oliver Grau, Chair of ACM’s Europe Technology Policy Committee, Intel Automated Driving Group, and University of Surrey. To learn more about the basics of AI/ML and how bias and fairness impact these systems, watch the class video here.
In conjunction with this class, FPF released The Privacy Expert’s Guide to AI and Machine Learning. Covering much of the course content, this guide explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services.




Lip service or a basis for legal actions?
US to back international guidelines for AI ethics
Only some countries will support the principles, though.
American companies have fostered ethical uses of AI before. Now, however, the government itself is poised to weigh in. Politico understands that the US, fellow members of the Organization for Economic Cooperation and Development, and a "handful" of other countries will adopt a set of non-binding guidelines for creating and using AI. The principles would require that AI respect human rights, democratic values, and the law. It should also be safe, open, and obvious to users, while those who make and use AI should be held responsible for their actions and offer transparency.
… The guidelines should be released on May 22nd.


(Related)
The Ethics of Smart Devices That Analyze How We Speak
Speech lies at the heart of our social interactions, and we unwittingly reveal much about ourselves when we talk. When someone hears a voice, they immediately start picking up on accent and intonation and make assumptions about the speaker’s age, education, personality, etc. Humans do this so we can make a good guess at how best to respond to the person speaking.
But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).




Code faster and cleaner. Maybe.
Microsoft wants to apply AI ‘to the entire application developer lifecycle’
At its Build 2018 developer conference a year ago, Microsoft previewed Visual Studio IntelliCode, which uses AI to offer intelligent suggestions that improve code quality and productivity. In April, Microsoft launched Visual Studio 2019 for Windows and Mac. At that point, IntelliCode was still an optional extension that Microsoft was openly offering as a preview. But at Build 2019 earlier this month, Microsoft shared that IntelliCode’s capabilities are now generally available for C# and XAML in Visual Studio 2019 and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. Microsoft also now includes IntelliCode by default in Visual Studio 2019.




Perspective. A podcast on a hot topic.
Is Amazon Getting Too Big?
In an era when legacy retailers such as Sears and Macy’s are scaling back or going bust, online behemoth Amazon continues to boom. The company is the second-largest retailer in the United States behind Walmart, and last year it became the second company in the world to reach $1 trillion in market capitalization. Perhaps more significantly, it’s also one of the world’s largest tech companies, with reams of data collected from an enormous customer base. Amazon has sold 100 million Alexa-enabled devices and an equal number of Prime subscriptions. But is Amazon too big?
… “Typically, when you think about antitrust, you think about whether the consumer is worse off. And Amazon has been so far pretty clean on that,” Kahn said, adding that Amazon hasn’t lowered product quality or raised prices. The company also appears to be transparent with its customers.




A possible follow on to our spreadsheet class?
Nine Tutorials for Making Your Own Mobile App
Glide is a service that anyone can use to create a mobile app without doing any coding. Glide lets you take one of your Google Sheets and have the information become a mobile app. It's easy to use and you can get started in minutes.
Glide recently published its own official tutorial videos: eight tutorials that walk you through each step of using Glide, from sign-up through publication of your app.



Monday, May 20, 2019


Maybe Facebook could buy Finland?
Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy
Finland has faced down Kremlin-backed propaganda campaigns ever since it declared independence from Russia 101 years ago. But in 2014, after Moscow annexed Crimea and backed rebels in eastern Ukraine, it became obvious that the battlefield had shifted: information warfare was moving online.
Toivanen, the chief communications specialist for the prime minister’s office, said it is difficult to pinpoint the exact number of misinformation operations to have targeted the country in recent years, but most play on issues like immigration, the European Union, or whether Finland should become a full member of NATO (Russia is not a fan).




AI is too easily fooled.
How we might protect ourselves from malicious AI
We’ve touched previously on the concept of adversarial examples—the class of tiny changes that, when fed into a deep-learning model, cause it to misbehave. In March, we covered UC Berkeley professor Dawn Song’s talk at our annual EmTech Digital conference about how she used stickers to trick a self-driving car into thinking a stop sign was a 45-mile-per-hour sign, and how she used tailored messages to make a text-based model spit out sensitive information like credit card numbers. In April, we similarly talked about how white hat hackers used stickers to confuse Tesla Autopilot into steering a car into oncoming traffic.
… A new paper from MIT now points toward a possible path to overcoming this challenge. It could allow us to create far more robust deep-learning models that would be much harder to manipulate in malicious ways. To understand its significance, let’s first review the basics of adversarial examples.
The paper makes its case by identifying a rather interesting property of adversarial examples that helps us grasp why they’re so effective. The seemingly random noise or stickers that trigger misclassifications are actually exploiting very precise, minuscule patterns that the image system has learned to strongly associate with specific objects. In other words, the machine isn’t misbehaving when it sees a gibbon where we see a panda. It is indeed seeing a pattern of pixels, imperceptible to humans, that occurred far more often in the gibbon photos than panda photos during training.
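The panda-to-gibbon effect can be made concrete with a toy model. The sketch below is plain NumPy with a linear classifier standing in for a deep network (purely illustrative; this is not the MIT paper's method). Each input dimension is nudged a tiny amount in the sign direction of the weights, in the spirit of the fast gradient sign method, and the prediction flips even though the input barely changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image classifier": a linear model over 100 input features.
w = rng.normal(size=100)   # learned weights
x = rng.normal(size=100)   # an input the model currently classifies

def predict(v):
    return np.sign(w @ v)  # +1 or -1: the predicted class

original = predict(x)

# Perturb each coordinate slightly against the current score. eps is chosen
# just large enough to cross the decision boundary, so it stays tiny relative
# to the typical magnitude of the input features.
eps = (abs(w @ x) + 1e-6) / np.abs(w).sum()
x_adv = x - original * eps * np.sign(w)

print(original, predict(x_adv), round(eps, 4))
```

The per-pixel change eps is a small fraction of a typical feature value, yet the class flips, because the model has concentrated its decision on precise weight patterns a human would never notice.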




Another huge list. Browsers and their targeting, specialized tools.
New on LLRX – Online Research Browsers 2019
Via LLRX – Online Research Browsers 2019: Marcus Zillman’s guide highlights multifaceted browser alternatives to the mainstream search tools that researchers may use by default. There are many reliable yet underutilized applications that facilitate access to and discovery of subject-specific documents and sources. The free applications included here also offer collaboration tools, resources to build and manage repositories, data visualization, metadata management, citation and bibliography creation, document discovery, and data-relationship analysis.



Sunday, May 19, 2019


Of course, you could stop doing business with California…
Half of companies missed GDPR deadline, 70% admit systems won’t scale
Even with two years’ notice to achieve GDPR compliance, only half of companies self-reported as compliant by May 25, 2018, a DataGrail survey reveals.
“The Age of Privacy: The Cost of Continuous Compliance” report benchmarks the operational impact of the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), as well as sharing insights into lessons learned and attitudes toward privacy regulations.
… “Businesses without a European presence were not impacted by the GDPR. However, with the CCPA fast approaching, US businesses without GDPR are experiencing the same challenges that multinational companies did with GDPR,” said Daniel Barber, Co-founder & CEO, DataGrail.
Most companies reported taking at least seven months to achieve GDPR readiness, but now, with CCPA only seven months away, they realize their systems will not support CCPA and other forthcoming privacy regulations. Companies will need to integrate and operationalize their privacy management to avoid time-consuming and error-prone manual processes to comply with these regulations.


(Related)
Prince Harry won a legal battle with the paparazzi using Europe's GDPR privacy law — and it gives the royals a powerful new weapon against the media
Prince Harry this week notched another victory in the royal family's long-running battle with paparazzi photographers, securing a "substantial payout" from an agency which used a helicopter to take pictures inside a house he was renting.
Potentially even more interesting than that is the way in which he won his battle — basing a legal case partly on a sweeping new European data law that is less than a year old.
According to a statement delivered to London's High Court on Thursday, in which the paparazzi agency Splash News apologized to Harry, also known as the Duke of Sussex (emphasis ours):
"This matter concerns a claim for misuse of private information, breaches of The Duke's right to privacy under Article 8 ECHR and breaches of the General Data Protection Regulation ("GDPR") and Data Protection Act 2018 ("DPA")."
Pursuing photographers on the grounds that their business constitutes illegal data processing is a new strategy, and a use of GDPR that few would have predicted.
But, at least according to the thrust of Harry's legal argument, a photograph can be personal data too — even one of your home which you are not even in. One concern may have been that the photographs gave away Harry's address.
Once you accept that the pictures Splash took count as Harry's personal data, the agency has a whole host of obligations.
Timothy Pinto, a senior counsel at the law firm Taylor Wessing, wrote an article about GDPR in media law, arguing that it offers a potentially attractive alternative to claims of defamation or invasion of privacy.




What if your AI thinks you’re crazy? (Can you own an AI? Does the 13th Amendment apply?)
Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy
This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of Psychiatry, Psychology and Psychotherapy. Building on this analysis, …




Questions, always more questions.
The intrinsically linked future for human and Artificial Intelligence interaction
Security and regulation have to be put in place, raising questions such as "Who is responsible for AI security and regulation?" and "Can AI be trusted as an autonomous entity?" The ethical use of AI also has to be addressed: "What about the rights and ethics of AI?" If the human race is on an inevitable path toward AI dominance, the question becomes: "Will humans and AI be friends or adversaries?"



Saturday, May 18, 2019


Some sanity breaks out in California?
Ding Dong the CCPA Private Right of Action is (Mostly) Dead!
There is some good news about the California Consumer Privacy Act (CCPA) this Friday afternoon! SB 561 appears to have (mostly) died in the Senate Appropriations Committee during a hearing held yesterday.




I’m thinking this is a game anyone can play. I wonder what my students would come up with? Or if they would agree with these…
New laws of robotics needed to tackle AI – expert
The world has changed since sci-fi author Isaac Asimov wrote his three laws for robots in 1942, including the rule that they should never harm humans, and today's omnipresent computers and algorithms demand up-to-date measures.
According to Pasquale, author of "The Black Box Society: The Secret Algorithms Behind Money and Information", four new legally-inspired rules should be applied to robots and AI in our daily lives.
"The first is that robots should complement rather than substitute for professionals"
"The second is that we need to stop robotic arms races.
The third, and most controversial, rule is not to make humanoid robots or AI
The fourth and final law is that any robot or AI should be "attributable to or owned by a person or a corporation made of persons




This may be true in the minds of certain lawyers, but I rather doubt it.
Technology Is as Biased as Its Makers
From exploding Ford Pintos to racist algorithms, all harmful technologies are a product of unethical design. Yet, like car companies in the ’70s, today’s tech companies would rather blame the user.




Pretty clear what I’ll be teaching…
Microsoft aims to train and certify 15,000 workers on AI skills by 2022
Microsoft is investing in certification and training for a range of AI-related skills in partnership with education provider General Assembly, the companies announced this morning. The goal is to train some 15,000 people by 2022 in order to increase the pool of AI talent around the world. The training will focus on AI, machine learning, data science, cloud and data engineering and more.



Friday, May 17, 2019


Failure to plan is planning to fail.
From the for-the-love-of-a-free-press-would-someone-PLEASE-teach-these-people-about-the-first-amendment? dept.
Earlier this week, this site noted reporting by the Paterson Times about an alleged breach involving the Paterson Public Schools in New Jersey. We also picked up a follow-up report that covered some… um… unexpected claims by the District as to how many threat actors might be involved and whether it was a former employee, and… a whole bunch of other claims that seemed premature, at best. Usually, entities shut up and say they are investigating. Paterson Public Schools seems to have decided to take another approach, one that is not averse to making them look inexperienced at handling a data security incident.
Today, the Paterson Times reports:
After a news story exposed a massive data breach at the Paterson Public Schools, superintendent Eileen Shafer threatened to sue the Paterson Times for purported “serious reputational harm” to the school district, a lawsuit that would be prohibited by law. The letter also suggested the district would use legal means to obtain materials related to the breach held by the Times, which would be prohibited by the state’s reporter’s shield law.
The letter asserts the breach, which exposed more than 23,000 account passwords and was not detected until the Paterson Times brought it to the district’s attention, has caused the school system to be “unfairly held out for ridicule in the community.”
Read more on the Paterson Times.
The basis for any ridicule of the district is the district’s response to the reported or alleged breach. They have repeatedly been shooting themselves in the foot and need to get a real professional in there to handle incident response properly. Their claims, demands, and legal threats are, to put it bluntly, bullshit, and should be called out as such.
How sad that those with the responsibility of educating our youth seem to be totally ignorant about the First Amendment. Hopefully, the Paterson Times’ lawyers will hand them a clue stick.




Russia can’t stop rigging elections. It would have been cheaper to bribe someone to get her into a ‘prestigious American college.’
Russian bots rigged Voice Kids TV talent show result
The result of a popular Russian TV talent show - The Voice Kids - has been cancelled after thousands of fraudulent votes were found to have handed victory to a millionaire's young daughter.
There were complaints after singer Mikella Abramova, aged 10, won with 56.5% of the phone-in vote.
A cyber security firm, Group-IB, was hired to examine the vote for Mikella Abramova, after the final of The Voice Kids, which is in its sixth season on Russian TV.
"The interim results of the check confirm that there was outside influence on the voting, which affected the result," a Channel One statement said (in Russian).
According to investigators, more than 8,000 text messages were sent from about 300 phone numbers during the vote.
A Group-IB statement said that sequential phone numbers had been used to send automated votes - in other words, "bots were used in this case".
"More than 30,000 votes came in for one contestant from those phone numbers," Group-IB said. Rival singers got no more than 3,000 votes each, Russia's Kommersant daily reported.




Deliberate bias or helpful coaching?
The NYPD uses altered images in its facial recognition system, new documents show
A new report from Georgetown Law’s Center on Privacy and Technology (CPT) has uncovered widespread abuse of the New York Police Department’s facial recognition system, including image alteration and the use of non-suspect images. In one case, officers uploaded a picture of the actor Woody Harrelson, based on a witness description of a suspect who looked like Harrelson. The search produced a match, and the matched suspect was later arrested for petty larceny. [See? It works! Bob]




I suppose we’ll be calling this ‘voiceal recognition.’
Selective hearing: AI-powered listening device picks a voice out of a crowd




A video that justifies my choice to not own a phone?
How Smartphones Sabotage Your Brain’s Ability to Focus
WSJ Podcast [no paywall] – “Our phones give us instant gratification. But there’s a cost: loss of attention and productivity. WSJ’s Daniela Hernandez goes on a quest to understand the science of distractions and what you can do to stay focused and productive.”