Saturday, June 08, 2019

Can you explain your Algorithm?
ICO’s Interim Report on Explaining AI
On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, on explaining the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

If you’re big or successful, you are probably evil so we gotta find a way to chop you down.
Weighing the Antitrust Case Against Google, Apple, Amazon, and Facebook
Last Monday, Apple, Alphabet, Amazon, and Facebook lost more than $130 billion in aggregate market value after the federal government launched what seemed to be a coordinated campaign to examine the companies’ competitive practices. The tech-heavy Nasdaq Composite fell 1.6% on the news.
According to multiple media outlets, the Federal Trade Commission was given oversight over Amazon and Facebook, while the Department of Justice will handle Google and Apple.
… Investors deserve more clarity. Let’s start with one key point: The likelihood of breakups is slim. “The big challenge with these antitrust things is, it’s not obvious what the consumer harm is today,” says Scott Kupor, managing partner at venture-capital firm Andreessen Horowitz, which has invested in Facebook as well as Pinterest (PINS) and Slack Technologies. “If you think about the consumer utility of Facebook, Google, Amazon, and Apple, it’s not clear they are doing something that is curtailing competition. It’s not clear they are raising prices.”
Facebook’s and Google’s services are by and large free to consumers. And one could argue that Amazon’s e-commerce marketplace has played a key role in lowering retail prices for consumers.
“The idea that Facebook, Google, and Amazon have harmed consumers over the last decade is laughable,” Mark Mahaney of RBC Capital Markets, a longtime internet analyst, tells Barron’s. “I think these companies have created an enormous amount of convenience, savings, and benefits for consumers.”
Here’s our company-by-company breakdown on regulatory risk.

Has anyone told Facebook?
India’s draft bill proposes a 10-year jail sentence for using cryptocurrencies

Friday, June 07, 2019

Lawmakers can’t secure their campaigns because of the law? I think I’ll vote for someone who can figure out how to fix that problem.
Election Rules Are an Obstacle to Cybersecurity of Presidential Campaigns
One year out from the 2020 elections, presidential candidates face legal roadblocks to acquiring the tools and assistance necessary to defend against the cyberattacks and disinformation campaigns that plagued the 2016 presidential campaign.
Federal law prohibits corporations from offering free or discounted cybersecurity services to federal candidates. The same law also blocks political parties from offering candidates cybersecurity assistance because it is considered an “in-kind donation.”

What if they get it wrong?
Amazon’s Home Surveillance Company Is Putting Suspected Petty Thieves in its Advertisements
Amazon's home surveillance company Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.
In the video, the woman’s face is clearly visible and there is no obvious criminal activity taking place. The Facebook post shows her passing between two cars. She pulls the door handle of one of the cars, but it is locked.
A post on the Mountain View Police Department's website details the incident and also shares an image from the Ring camera. "Footage obtained from a neighbor’s home captured a woman who is believed to be the suspect in the theft," the post says. The woman is suspected of stealing someone's purse and wallet from inside a car, and making a series of purchases around town with the stolen credit cards.
A spokesperson for MVPD told Motherboard in an email that "while we did not ask Ring to post footage, the additional outreach, and the additional eyes that may see this woman and recognize her, are most welcome and helpful!" A spokesperson for Ring told Motherboard in an email that its Facebook post encourages communities to work with local cops to "help keep neighborhoods safe."
Ring is also using the image of a woman who is innocent until proven guilty and calling her a thief in an ad that it's paying to put in front of a targeted audience in order to sell more home surveillance equipment. The company doesn't claim to know for certain that she's committed a crime, and the police have yet to catch or convict anyone in this case.

I trust the police. Honest, I do!
iOS Shortcut for Recording the Police
"Hey Siri, I'm getting pulled over" can be a shortcut:
Once the shortcut is installed and configured, you just have to say, for example, "Hey Siri, I'm getting pulled over." Then the program pauses music you may be playing, turns down the brightness on the iPhone, and turns on "do not disturb" mode.
It also sends a quick text to a predetermined contact to tell them you've been pulled over, and it starts recording using the iPhone's front-facing camera. Once you've stopped recording, it can text or email the video to a different predetermined contact and save it to Dropbox.

Definitely something to consider.
When AI Becomes an Everyday Technology
The evolution of AI has been a rich tale of exploration since its origins in the 1950s, with the last decade providing an especially dramatic chapter of breakthrough innovations. But I believe the real story is what comes next — when the disruption stabilizes and machine learning transitions from a staple of Silicon Valley headlines to an everyday technology.
One of my favorite recent examples of this shift in possibilities comes from Carnegie Mellon University (CMU), where I formerly served as dean of the computer science department. While I was there, a student was considering her options for an upcoming artificial intelligence project, and thought of her sister, who happens to be deaf. She wanted to make it easier for her friends to learn the basics of American Sign Language, so she developed an AI-powered tool that tracked their movements and provided automatic feedback as they learned new signs. And here’s the best part: she wasn’t a computer science postdoc or even a grad student — she was a history major, taking an introductory class for fun.

"The first thing we do, let's replace all the lawyers with AI."
Artificial Intelligence and Legal Decision-Making: The Wide Open? Study on the Example of International Arbitration
Scherer, Maxi, Artificial Intelligence and Legal Decision-Making: The Wide Open? Study on the Example of International Arbitration (May 22, 2019). Queen Mary School of Law Legal Studies Research Paper No. 318/2019. Available at SSRN:
The paper explores the use of Artificial Intelligence (AI) in arbitral or judicial decision-making from a holistic point of view, exploring the technical aspects of AI, its practical limitations as well as its methodological and theoretical implications for decision-making as a whole.
The paper further finds that a blind deferential attitude towards algorithmic objectivity and infallibility is misplaced and that AI models might perpetuate existing biases. It discusses the need for reasoned decisions, which is likely to be an important barrier for AI-based legal decision-making. Finally, looking at existing legal theories on judicial decision-making, the paper concludes that the use of AI and its reliance on probabilistic inferences could constitute a significant paradigm shift. In the view of the author, AI will no doubt fundamentally affect the legal profession, including judicial decision-making, but its implications need to be considered carefully.

Well, I found it interesting… Would citing sticky cases result in more wins?
Citation Stickiness
Bennardo, Kevin and Chew, Alexa, Citation Stickiness (April 19, 2019). 20 Journal of Appellate Practice & Process, Forthcoming. Available at SSRN: – “This Article is an empirical study of what we call citation stickiness. A citation is sticky if it appears in one of the parties’ briefs and then again in the court’s opinion. Imagine that the parties use their briefs to toss citations in the court’s direction. Some of those citations stick and appear in the opinion — these are the sticky citations. Some of those citations don’t stick and are unmentioned by the court — these are the unsticky ones. Finally, some sources were never mentioned by the parties yet appear in the court’s opinion. These authorities are endogenous — they spring from the internal workings of the court itself. In a perfect adversarial world, the percentage of sticky citations in courts’ opinions would be something approaching 100%. The parties would discuss the relevant authorities in their briefs, and the court would rely on the same authorities in its decision-making. Spoiler alert: our adversarial world is imperfect. Endogenous citations abound in judicial opinions and parties’ briefs are brimming with unsticky citations.
So we crunched the numbers. We analyzed 325 cases in the federal courts of appeals. Of the 7552 cases cited in those opinions, more than half were never mentioned in the parties’ briefs. But there’s more — in the Article, you’ll learn how many of the 23,479 cases cited in the parties’ briefs were sticky and how many were unsticky. You’ll see the stickiness data sliced and diced in numerous ways: by circuit, by case topic, by an assortment of characteristics of the authoring judge. Read on!”
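The three categories the authors describe (sticky, unsticky, endogenous) amount to simple set operations on citation lists. A toy sketch, with hypothetical case names that are not from the study:

```python
# Toy illustration of the "citation stickiness" categories described above.
# Real studies would parse actual briefs and opinions; these lists are made up.

def stickiness(brief_citations, opinion_citations):
    """Split citations into sticky, unsticky, and endogenous sets."""
    briefs = set(brief_citations)
    opinion = set(opinion_citations)
    sticky = briefs & opinion       # cited by the parties AND the court
    unsticky = briefs - opinion     # cited by the parties, ignored by the court
    endogenous = opinion - briefs   # raised by the court on its own
    return sticky, unsticky, endogenous

briefs = ["Smith v. Jones", "Doe v. Roe", "Acme v. Beta"]
opinion = ["Smith v. Jones", "Gamma v. Delta"]

sticky, unsticky, endogenous = stickiness(briefs, opinion)
print(len(sticky), len(unsticky), len(endogenous))  # prints: 1 2 1
```

In a "perfect adversarial world," the endogenous set would be empty and the sticky share of the opinion's citations would approach 100%; the study's finding is that, empirically, it is nowhere close.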

Even I can notice something strange when the two headlines are next to each other in my RSS feed. Is Step smarter than JPMorgan?
JPMorgan Scraps New App Service for Young People

Step raises $22.5M led by Stripe to build no-fee banking services for teens

Backgrounder… (Only two pages?)
Internet of Things – An Introduction
CRS Report via LC – Internet of Things (IoT): An Introduction, June 4, 2019 – “The Internet of Things (IoT) is a system of interrelated devices that are connected to a network and/or to each other, exchanging data without necessarily requiring human-to-machine interaction. In other words, IoT is a collection of electronic devices that can share information among themselves. Examples include smart factories, smart home devices, medical monitoring devices, wearable fitness trackers, smart city infrastructures, and vehicular telematics. Potential issues for Congress include regulation, digital privacy, and data security as discussed below.”

Short of a full course, here are some tools for beginners, mid-level and advanced.
Student Resources for A.I. and Machine Learning Education
Hacker Noon offers a lovely breakdown of A.I. from a programmer’s perspective, including the industry’s “Holy Grail”: Artificial General Intelligence, or AGI (which some other resources call “General Artificial Intelligence”). KDNuggets also has a rundown of the basic terms and the technologies involved.
If you want to get to know some of the tools that actually make A.I. work, start off with Google’s three-hour introduction to deep-learning fundamentals. Since it’s Google, the materials inevitably focus on the company’s open-source software library, TensorFlow, which is used in machine-learning applications (such as neural networks).
Google also offers a machine-learning “crash course” with 25 lessons and 40+ exercises, designed to take roughly 15 hours to complete. You don’t need to know a lot to start off with it, but it’s definitely a smoother process if you have some knowledge of programming basics, Python, and intro-level Algebra.
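The kind of exercise those courses build up to can be sketched in a few lines of plain Python: one artificial neuron learning a linear relationship by gradient descent. This is a hypothetical illustration, not an exercise from the courses; libraries like TensorFlow wrap exactly this sort of update in efficient, batched form.

```python
# One neuron learning y = 2x by gradient descent on squared error.
# Plain Python only, to show the core idea the ML courses formalize.

def train(data, lr=0.1, epochs=200):
    w = 0.0  # single weight, no bias, for simplicity
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x   # derivative of (pred - y)^2 w.r.t. w
            w -= lr * grad              # gradient-descent step
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # prints 2.0 (the weight converges to the true slope)
```

Everything beyond this — more weights, more layers, nonlinearities, smarter optimizers — is elaboration on the same loop, which is why a little programming and intro-level algebra go a long way in these courses.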

Handy toolkit item?

Thursday, June 06, 2019

Ransomware Attack Costs Norsk Hydro Tens of Millions of Dollars
A piece of file-encrypting ransomware named LockerGoga started infecting Norsk Hydro systems on March 18. The attack caused disruptions at several of the company’s plants, forcing workers to rely on manual processes.
Hydro has been highly transparent regarding the impact of the incident. It claimed to have good backups in place and did not intend to pay the ransom. However, the security breach still cost the firm a significant amount of money.
Roughly two weeks after the incident was made public, Hydro estimated that it lost $35-41 million (300-350 million Norwegian crowns) in the first week following the attack. Roughly one month later it made another estimate, putting the cost of the attack at roughly $50 million.
The company on Tuesday published its financial report for the first quarter, which it was forced to delay by over one month due to the cyberattack. The report shows that its Extruded Solutions unit suffered the biggest operational and financial impact.
Hydro says the overall impact of the cyberattack in the first quarter remains $35-41 million. It estimates that losses will total $23-29 million (200-250 million Norwegian crowns) in the second quarter.

I think it’s rather naive to think there was only one objective.
China Allegedly Hacked Australian National University to Recruit Informants
Cybercriminals sponsored by the Chinese government allegedly infiltrated the Australian National University’s (ANU’s) systems in 2018 and were probably roaming freely until two weeks ago when the breach was detected, writes The Sydney Morning Herald.

It’s a not-uncommon mindset.
Vietnam Cyber Threat: Government-Linked Hackers Ramping Up Attacks
Threat intelligence firm IntSights has issued a threat brief on the growing offensive cyber capabilities of Vietnam. The reasoning is a combination of state-affiliated -- or at least state-aligned -- advanced groups APT32 (OceanLotus) and APT-C-01 (Poison Ivy), and local cyber legislation that is promoting the development of cyber subterfuge among Vietnamese youth.
"As Vietnamese authorities attempt to strengthen their grip via censorship," the brief continues, "they drive more and more Vietnamese citizens to the dark web for access to unfiltered content." In these dark web forums, cyber-capable youngsters are likely to learn the skills of cyber criminality.
"While Vietnam may not have the resources to combat world superpowers - like China or the U.S. - in traditional warfare or economic stature, cyber is leveling the playing field," comments Wright. "Vietnam has the potential to develop into a cybercriminal outpost, as its government continues to censor the public and push its youthful middle class toward the fringes with its strict internet legislation."

Looking for information to influence EU voters?
The EU’s Embassy In Russia Was Hacked But The EU Kept It A Secret
Alberto Nardelli reports:
The European Union’s embassy in Moscow was hacked and had information stolen from its network, according to a leaked internal document seen by BuzzFeed News.
An ongoing “sophisticated cyber espionage event” was discovered in April, just weeks before the European Parliament elections — but the European External Action Service (EEAS), the EU’s foreign and security policy agency, did not disclose the incident publicly.
Read more on BuzzFeed.

(Related) Would anyone have noticed if it was half-vast?
Russia Effort in 2016 US Election Was 'Vast,' 'Professional'
A report by the security firm Symantec said some of the accounts linked to Russia's Internet Research Agency dated back as far as 2014 and that the manipulation campaign was vast, involving both automated "bots" and manual operations.

If a high school student can point out the privacy issues, why did the school district (and their attorneys?) fail to see them?
Use of Backpack may cross digital privacy lines
Piper Hansen is the Editor-in-Chief of Manual RedEye, the student newspaper of Louisville, Kentucky’s duPont Manual High School. She researched and wrote a really excellent piece on student digital privacy. It begins:
Amid the college application deadlines, school-work and football games in mid-October, duPont Manual’s senior class met in the auditorium, quickly filling the seats at the front of the room as Principal Darryl Farmer and several assistant principals faced them.
The administrators quieted the group of students and began to show them how to log in to a special website where they would upload evidence of their learning as part of Jefferson County Public Schools’ (JCPS) newest graduation requirement, the Backpack of Success Skills.
But what students and some administrators didn’t know was that the district may have been violating federal law by requiring them to use online resources, like the Backpack, without obtaining parental consent for the district-issued accounts used to access them, as required by the Family Educational Rights and Privacy Act and the Children’s Online Privacy Protection Act.
Read more on Manual RedEye.

All I see is liability. Are they guaranteeing to detect and stop suicides? What if they miss something “obvious”? What if they detect something and fail to act? How quickly can they respond?
UK: Universities to trawl through students’ social media to look for suicide risk, under new project
Meanwhile…. from back on the road that is paved with good intentions but goes to the wrong place, Camilla Turner reports:
Universities are to trawl through students’ social media to look for signs that they may be suicidal, as part of a new project funded by the higher education watchdog.
The new scheme, backed by the Office for Students (OfS), is aimed at reducing suicide rates and identifying students in crisis by harvesting data on individuals.
Northumbria University, which is leading the three-year project, will design and pilot an “Early Alert Tool” which, if successful, could be rolled out at all British institutions.
Read more on The Telegraph.

Clear indication that they have something to hide?
France Bans Judge Analytics, 5 Years In Prison For Rule Breakers
Artificial Lawyer – “In a startling intervention that seeks to limit the emerging litigation analytics and prediction sector, the French Government has banned the publication of statistical information about judges’ decisions – with a five year prison sentence set as the maximum punishment for anyone who breaks the new law. Owners of legal tech companies focused on litigation analytics are the most likely to suffer from this new measure. The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.
A key passage of the new law states: ‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’
As far as Artificial Lawyer understands, this is the very first example of such a ban anywhere in the world. Insiders in France told Artificial Lawyer that the new law is a direct result of an earlier effort to make all case law easily accessible to the general public, which was seen at the time as improving access to justice and a big step forward for transparency in the justice sector. However, judges in France had not reckoned on NLP and machine learning companies taking the public data and using it to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges…”

Please tell me there’s a cure!
Research reveals how the Internet may be changing the brain
An international team of researchers from Western Sydney University, Harvard University, King's College, Oxford University and University of Manchester have found the Internet can produce both acute and sustained alterations in specific areas of cognition, which may reflect changes in the brain, affecting our attentional capacities, memory processes, and social interactions.
[The article:

Average US Time Spent with Mobile in 2019 Has Increased
For the first time ever, US consumers will spend more time using their mobile devices than watching TV, with smartphone use dominating that time spent.
… The average US adult will spend 3 hours, 43 minutes (referenced as 3:43) on mobile devices in 2019, just above the 3:35 spent on TV. Of time spent on mobile, US consumers will spend 2:55 on smartphones, a 9-minute increase from last year. In 2018, mobile time spent was 3:35, with TV time spent at 3:44.

For the toolkit.
The 5 Best Grammar Checkers

Wednesday, June 05, 2019

Knowing where your data is stored and what “normal” access should be is a requirement for GDPR compliance. Isn’t it?
On May 10, when we first reported that the American Medical Collection Agency (AMCA) had been breached, we noted that information from 200,000 payment cards had been found for sale on a top-tier market by Gemini Advisory analysts, whose investigation linked those cards to AMCA. At the time, we did not know how many other payment cards might be put up for sale in other batches at a later date. Nor did we know how much PHI and PII may have been acquired by what appeared to be a hack of AMCA’s patient portal.
That week, very few news outlets picked up my report of the breach. Maybe 200,000 didn’t seem huge or maybe my little blog still doesn’t get the attention it deserves. But this week, everyone is paying attention to the breach because Quest Diagnostics revealed that 11.9 million of their patients were impacted and Quest and Optum360 (who does billing for Quest) are investigating the incident and have suspended referring past due accounts to AMCA in the interim.
Then today, Brian Krebs reported that LabCorp reported that 7.7 million of its patients had personal and/or financial information exposed in the breach. So we’re already at almost 20 million and that’s just from two of AMCA’s clients. As I noted earlier, this may turn out to be the biggest HIPAA breach of 2019.
Of note, Krebs reports that AMCA reportedly informed LabCorp that it is notifying 200,000 LabCorp patients whose credit card or bank account information may have been accessed. That number is the same number of payment cards that Gemini Advisory found up for sale, but Gemini had noted that 15% of the cards included personal information such as DOB and/or Social Security numbers. AMCA reportedly informed LabCorp that none of LabCorp’s patients’ SSNs were stored on AMCA’s server. So the 200,000 cards for sale are not necessarily — and probably aren’t — all LabCorp patients.
I really fear we are just at the tip of this iceberg.

Implications for social engineering. Why we run background checks. LinkedIn never checks.
Fake LinkedIn Profiles Are Impossible to Detect
Ever wonder if all of the LinkedIn profiles that boast comprehensive expertise, outstanding performance, and enviable recommendations…are well, real? – Fake LinkedIn Profiles Are Impossible to Detect: “Don’t trust everything you see on LinkedIn. We created a fake LinkedIn profile with a fake job at a real company. Our fake profile garnered the attention of a Google recruiter and gained over 170 connections and 100 skill endorsements. Everyone is talking about fake accounts on Facebook and fake followers on Twitter. LinkedIn hasn’t been part of the conversation, but Microsoft’s social network also has a big problem…” [Note – this article is a must read – I had no idea that it was so easy to create fake LinkedIn profiles with what appear to be actual work histories, connections and bona fides…]

More evidence that the FBI is a collection of independent investigators rather than a uniform organization?
Face Recognition Technology: DOJ and FBI Have Taken Some Actions in Response to GAO Recommendations to Ensure Privacy
“The FBI’s face recognition office can now search databases with more than 641 million photos, including 21 state databases. In a May 2016 report, we found the FBI hadn’t fully adhered to privacy laws and policies or done enough to ensure accuracy of its face recognition capabilities. This testimony is an update on this work and our 6 recommendations, only one of which has been fully addressed. For example, while the FBI has conducted audits to oversee the use of its face recognition capabilities, it still hasn’t taken steps to determine whether state database searches are accurate enough to support law enforcement investigations…”

Let’s try this… How about that… A handy-dandy little chart to summarize the amendments.
CCPA Amendment Update June 2019 – Twelve Bills Survive Assembly and Move to the Senate
This post provides clarity to an otherwise murky process by: 1) presenting an overview of the California state legislative process; 2) identifying a CCPA timeline and key deadlines; 3) analyzing the CCPA amendments that recently passed the Assembly along with noteworthy bills that failed in the Senate; and 4) outlining likely next steps for amendment efforts prior to the law’s effective date.

Should we expect a sea change in politics?
Can Algorithms Help Us Decide Who to Trust?
The use of artificial intelligence (AI) and algorithms is increasing within organizations to manage business processes, hire employees, and automate routine organizational decision making. This comes as no surprise, since the application of simple linear algorithms has been shown to outperform human judgment in the accuracy of many administrative tasks. A 2017 Accenture survey also revealed that 85% of executives want to invest more extensively in AI-related technologies over the next three years.
Despite this forecast, the reality is that, at least in some cases, humans display strong feelings of aversion to the use of autonomous algorithms. For example, surveys reveal that 73% of Americans report that they are afraid to ride in a self-driving vehicle. Human doctors are also preferred over algorithms in the medical context, despite evidence that algorithms might sometimes deliver more accurate diagnoses. Such aversion creates work situations where the implementation of AI leads to a sub-optimal, inefficient, and biased use of algorithms. So, if AI is to become an important management tool in our organizations, algorithms need to be used as trusted advisors to human decision-makers. They should also help promote trust within the company.
But does AI really possess such a “social” skill? This is an important question to ask because trust requires socially sensitive skills that are perceived to be uniquely human. In fact, the unique ability to understand human emotions and desires is a prerequisite for judging individuals’ trustworthiness and is hard to replicate artificially. So can algorithms providing advice in this area of human interaction be accepted by human decision-makers?

A podcast.
How companies like Google are dealing with the ethics of AI
The Verge editor-in-chief Nilay Patel and AI reporter James Vincent discuss AI ethics and bias, and, specifically, what companies like Google are doing to tackle such challenges.

Perspective. Not the future of space enterprise I dreamed of as a kid, but with many of the enabling tools.
Why Big Business Is Making a Giant Leap into Space
Amazing things already are. One indication that big business is taking space more seriously is that interest has moved from the fringe to the mainstream, says Wharton management professor Anoop Menon. While space retains an undeniably speculative aspect, especially around development of business models, a number of factors are coming together now to suggest that big business’s foray into space is here.
“I don’t think we are necessarily a long way away — it’s a matter of being creative,” said Menon, co-author with Laura Huang and Tiona Zuzul of “Watershed Moments, Cognitive Discontinuities, and Entrepreneurial Entry: The Case of New Space.” Satellites that capture geospatial data are potentially quite lucrative, he says, tracking shipping movements, deforestation or the location of mining deposits. “This is an interesting one,” says Menon of another idea: “Taking pictures of parking lots at Wal-Mart and Target and selling that to hedge funds, since traffic is a pretty good leading indicator of economic activity.”

Expect a market in “disconnectors.”
Everything Will Connect to the Internet Someday, and This Biobattery Could Help Make That a Reality
The Internet of Disposable Things is a phenomenon in which wireless sensors are attached to nearly any type of device in order to provide up-to-date information via the internet. For example, a sensor could be attached to food packaging to monitor the freshness of the food inside.
“Internet of Disposable Things (IoDT) is a new paradigm for the rapid evolution of wireless sensor networks,” said Seokheun Choi, associate professor of electrical and computer engineering at Binghamton University. “This novel technique, constructed in a small, compact, disposable package at a low price point, can connect things inexpensively to function for only a programmed period and then be readily thrown away.”

Like ‘Moneyball’ but for individuals.
How Trevor Bauer Remade His Slider — And Changed Baseball
Travis Sawchik is a FiveThirtyEight staff writer. His new book “The MVP Machine: How Baseball’s New Nonconformists Are Using Data to Build Better Players,” co-authored with The Ringer’s Ben Lindbergh, is available this week. In it, they examine how outsiders (and a few forward-thinking insiders) are employing unconventional ideas along with new data from new technology to lead a bottom-up revolution in improving skill levels. We’re publishing an excerpt of the book on how Cleveland Indians pitcher Trevor Bauer, a trailblazer in player development, used new technology like the high-speed Edgertronic camera, which he introduced to baseball — along with some stealthy reconnaissance — to fuel his 2018 breakout. It was Bauer who ushered a new, game-altering field into the sport: pitch design.

I’ve lectured on several topics, have tons of handouts – why not combine that into a book?